Human-Computer Music Performances Use System That Links Music & Musical Gestures
Every musical sound comes from a specific way that an instrument is played. With modern tools such as sensors, signal processing, and machine learning algorithms, researchers can determine the precise musical gesture used to produce a particular sound on an instrument. The ability to recover musical gestures from sounds can be used for interactive human-computer music performances, music transcription, and other innovative applications.
In a new study, researchers have developed a method for capturing musical gestures and mapping them to sounds that overcomes some of the disadvantages of previous approaches. Adam Tindale, Ajay Kapur, and George Tzanetakis – all trained musicians and computer scientists who worked together at the University of Victoria in Victoria, Canada – describe the new method in a study to be published in IEEE Transactions on Multimedia. The method grew out of the authors' experiences developing instruments for interactive human-computer music performances while Tindale and Kapur were completing their PhDs at the University of Victoria. Tindale now works at the Alberta College of Art and Design, and Kapur works at the California Institute of the Arts and the New Zealand School of Music.
As the researchers explain in their study, there are two main approaches for capturing musical gestures. One approach is direct acquisition, which involves attaching permanent sensors to instruments to create “hyper-instruments.” However, this approach is often invasive for performers and requires modifying expensive instruments. The second approach is indirect acquisition, in which a microphone captures the sound and sophisticated signal processing and machine learning algorithms extract the gestures from it; this approach requires large amounts of training data.
The researchers' new method is a hybrid of these two approaches. They temporarily attach sensors to an instrument to capture musical gestures, along with a microphone to capture sound. This data is analyzed, and the gesture-sound mappings are used to train machine learning models to extract gestures from the sounds alone. The trained models then form what the researchers call a “surrogate sensor,” which behaves like the original invasive sensor but is not attached to the instrument. The surrogate sensor can determine the musical gestures based only on the analyzed sound captured by the microphone.
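The surrogate-sensor idea follows a standard supervised-learning recipe: while the temporary sensor is still attached, synchronized audio features and sensor readings form training pairs, and once a model has learned that mapping, it can predict the sensor's signal from the microphone audio alone. Below is a minimal sketch of this idea in Python with scikit-learn; the random placeholder arrays, feature choices, and random-forest model are illustrative assumptions, not the authors' actual implementation.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Placeholder data: in practice, each row would hold audio features
# (e.g., spectral descriptors) computed from one analysis frame of the
# microphone signal, and each target would be the synchronized reading
# from the temporarily attached gesture sensor.
rng = np.random.default_rng(0)
audio_features = rng.normal(size=(2000, 20))   # hypothetical frame-level features
sensor_readings = audio_features[:, :3].sum(axis=1) + 0.1 * rng.normal(size=2000)

# Training phase: learn the sound-to-gesture mapping while the real
# sensor is still mounted on the instrument.
X_train, X_test, y_train, y_test = train_test_split(
    audio_features, sensor_readings, test_size=0.25, random_state=0)
surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(X_train, y_train)

# Surrogate phase: the physical sensor is removed, and the trained model
# estimates the gesture signal from microphone audio alone.
predicted_gesture = surrogate.predict(X_test)
print("R^2 of the surrogate sensor on held-out frames:",
      round(surrogate.score(X_test, y_test), 3))

In this toy version, the quality of the surrogate is judged only by how closely its predictions match the held-out sensor readings; in a performance setting, the predicted gesture stream would instead be fed to whatever interactive music system previously consumed the real sensor's output.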
(For the complete article, plus videos of the Machine Orchestra in action, see PhysOrg.com)
-Maureen Lang