Please use this identifier to cite or link to this item: http://dspace.mediu.edu.my:8181/xmlui/handle/10261/3001
Full metadata record
DC Field: Value

dc.contributor: Ministerio de Ciencia y Tecnología (España)
dc.creator: López de Mantaras, Ramón
dc.creator: Arcos, Josep Ll.
dc.date: 2008-02-20T09:16:23Z
dc.date: 2002
dc.date.accessioned: 2017-01-31T01:00:16Z
dc.date.available: 2017-01-31T01:00:16Z
dc.identifier: AI magazine, 2002, 23 (3): 43-57
dc.identifier: 0738-4602
dc.identifier: http://hdl.handle.net/10261/3001
dc.identifier.uri: http://dspace.mediu.edu.my:8181/xmlui/handle/10261/3001
dc.description: In this paper we first survey the three major types of computer music systems based on AI techniques: compositional, improvisational, and performance systems. Representative examples of each type are briefly described. Then, we look in more detail at the problem of endowing the resulting performances with the expressiveness that characterizes human-generated music. This is one of the most challenging aspects of computer music and has only recently been addressed. The main problem in modeling expressiveness is to grasp the performer's "touch"; that is, the knowledge applied when performing a score. Humans acquire this knowledge through a long process of observation and imitation. For this reason, previous approaches, based on musical rules that try to capture interpretation knowledge, had serious limitations. An alternative approach, much closer to the observation-imitation process observed in humans, is to directly use the interpretation knowledge implicit in examples extracted from recordings of human performers, instead of trying to make such knowledge explicit. In the last part of the paper we report on a performance system, SaxEx, based on this alternative approach, capable of generating high-quality expressive solo performances of jazz ballads from examples of human performers within a case-based reasoning system.
dc.description: "AI and Music..." is partially supported by the Spanish Ministry of Science and Technology under project TIC 2000-1094-C02, "TABASCO: Content-Based Audio Transformation Using CBR".
dc.description: Peer reviewed
dc.format: 761695 bytes
dc.format: application/pdf
dc.language: eng
dc.publisher: AAAI Press
dc.rights: openAccess
dc.subject: Artificial Intelligence
dc.subject: Case-Based Reasoning
dc.title: AI and Music: From Composition to Expressive Performance
dc.type: Article
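The abstract describes reusing expressive deviations observed in human recordings via case-based reasoning rather than explicit rules. As a minimal, hypothetical sketch of that idea (the feature names, weights, and structure below are illustrative assumptions, not the actual SaxEx system), a case base can store per-note score contexts together with the expressive deviations a human performer applied, and a new score note can retrieve and reuse the deviations of its nearest stored example:

```python
# Hypothetical sketch of case-based retrieval for expressive performance.
# Features and weights are illustrative only; this is NOT the SaxEx system.

from dataclasses import dataclass


@dataclass
class Case:
    """A stored example: a note's score context plus how a human played it."""
    pitch: int                # MIDI pitch of the note in the score
    duration: float           # nominal duration in beats
    metrical_strength: float  # 0..1, position of the note within the bar
    # Expressive deviations observed in the human recording:
    timing_offset: float      # onset shift in beats
    dynamics: float           # relative loudness multiplier


def distance(case: Case, pitch: int, duration: float, strength: float) -> float:
    """Weighted distance between a stored case and a new score note."""
    return (abs(case.pitch - pitch) / 12.0
            + abs(case.duration - duration)
            + abs(case.metrical_strength - strength))


def retrieve(case_base: list[Case], pitch: int, duration: float,
             strength: float) -> Case:
    """Retrieve the most similar stored example (1-nearest neighbour)."""
    return min(case_base, key=lambda c: distance(c, pitch, duration, strength))


def perform(case_base: list[Case], pitch: int, duration: float,
            strength: float) -> dict:
    """Reuse the retrieved case's expressive deviations for the new note."""
    c = retrieve(case_base, pitch, duration, strength)
    return {"onset_shift": c.timing_offset, "loudness": c.dynamics}
```

For example, given a case base with a long, metrically strong C (played slightly late and loud) and a short, weak G (played early and soft), a new C# on a strong beat retrieves the first case and inherits its deviations. Real systems additionally adapt the retrieved solution to the new context; this sketch only covers the retrieval-and-reuse step.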
Appears in Collections: Digital CSIC

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.