We are very happy to introduce our three keynote speakers for SMC15, all leaders in their respective fields:
Barry Truax
School of Communication, Simon Fraser University, Vancouver, Canada
Interacting with Inner and Outer Sonic Complexity: from Microsound to Soundscape Composition
It is possible to think of the two extremes of the world of sound as the inner domain of microsound (less than 50 ms), where frequency and time are interdependent, and the outer world of sonic complexity, namely the soundscape. In terms of sonic design, the computer increasingly provides tools for dealing with each of these domains, such as granular synthesis, convolution and the creation of virtual acoustic spaces through multi-channel soundscape composition using computer-controlled spatial diffusion. The models of interaction involved with the complexity of each of these domains are instructive, and are characterized by a blurring of the distinction between timbre and space. The presentation will include examples drawn from the composer’s practice, such as the octophonic soundscape works Temple, Chalice Well, Aeolian Voices, and Earth And Steel.
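Granular synthesis works directly in the microsound domain: an output texture is built by scattering many short, windowed grains of a source sound across time. The sketch below is a minimal illustration of that idea in Python with NumPy; the function name, grain size and density are illustrative choices, not drawn from the PODX system.

```python
import numpy as np

def granulate(source, grain_ms=30, density=200, duration_s=2.0, sr=44100, seed=0):
    """Scatter short, Hann-windowed grains taken from `source` across an output buffer."""
    rng = np.random.default_rng(seed)
    grain_len = int(sr * grain_ms / 1000)   # grains in the microsound range (< 50 ms)
    window = np.hanning(grain_len)          # smooth envelope avoids clicks at grain edges
    out = np.zeros(int(sr * duration_s))
    n_grains = int(density * duration_s)    # average number of grains per second
    for _ in range(n_grains):
        src_pos = rng.integers(0, len(source) - grain_len)  # random read position
        out_pos = rng.integers(0, len(out) - grain_len)     # random write position
        out[out_pos:out_pos + grain_len] += window * source[src_pos:src_pos + grain_len]
    return out

# Example: granulate one second of a 220 Hz sine tone into a two-second cloud
sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220 * t)
cloud = granulate(tone, sr=sr)
```

Varying the grain length, density and read positions over time is what turns this simple overlap-add loop into an expressive synthesis technique.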
Barry Truax is a Professor Emeritus in the School of Communication, and formerly the School for the Contemporary Arts, at Simon Fraser University, where he taught courses in acoustic communication and electroacoustic music. He has worked with the World Soundscape Project, editing its Handbook for Acoustic Ecology, and has published the book Acoustic Communication, dealing with all aspects of sound and technology. As a composer, Truax is best known for his work with the PODX computer music system, which he has used for solo tape works and for works combining tape with live performers or computer graphics. In 1991 his work Riverrun was awarded the Magisterium at the International Competition of Electroacoustic Music in Bourges, France, a category open only to electroacoustic composers with 20 or more years’ experience. Truax’s multi-channel soundscape compositions are frequently featured in concerts and festivals around the world.
Derry FitzGerald
Nimbus Centre, Cork Institute of Technology, Ireland
Musical Sound Source Separation
This talk presents an overview of techniques for sound source separation, with a particular focus on music recordings. Sound source separation attempts to extract the individual sound sources or instruments from a recording containing multiple sources. In recorded music there are typically more sources than mixture channels, so the musical sound source separation problem is usually underdetermined. This has led to the development of a number of model-based approaches, such as matrix factorisation techniques and Bayesian methods. These are introduced through a real-world case study: using sound source separation to upmix mono recordings to stereo. Following on from this, a number of recent developments in source separation algorithms will be presented, including Kernel Additive Modelling and spatial projection-based methods. The talk will also highlight potential future directions for sound source separation research.
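As a flavour of the matrix factorisation approach mentioned above, the sketch below factorises a toy nonnegative "spectrogram" V into spectral templates W and time-varying activations H using the standard Lee-Seung multiplicative updates; an estimate of each source is then the product of one column of W with the corresponding row of H. All names, sizes and iteration counts here are illustrative assumptions, not details from the talk.

```python
import numpy as np

def nmf(V, rank, n_iter=300, seed=0):
    """Factorise a nonnegative matrix V (freq x time) as V ~= W @ H,
    using multiplicative updates for the Euclidean cost (Lee & Seung)."""
    rng = np.random.default_rng(seed)
    F, T = V.shape
    W = rng.random((F, rank)) + 1e-9   # spectral basis vectors, one per component
    H = rng.random((rank, T)) + 1e-9   # activations of each component over time
    for _ in range(n_iter):
        # updates keep W and H nonnegative and monotonically reduce the cost
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Toy mixture: two "sources" with fixed spectra and independent activations
rng = np.random.default_rng(1)
V = np.abs(rng.random((64, 2))) @ np.abs(rng.random((2, 100)))
W, H = nmf(V, rank=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)  # relative reconstruction error
source0 = W[:, [0]] @ H[[0], :]                      # magnitude estimate of one source
```

On real recordings V would be a magnitude spectrogram, and the source estimates would typically be refined with soft (Wiener-style) masking before resynthesis.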
Dr Derry FitzGerald is a senior Post-Doctoral Researcher at the Nimbus Centre. He was a Stokes Lecturer in Sound Source Separation algorithms at the Audio Research Group in DIT from 2008 to 2013. Prior to this he worked as a post-doctoral researcher in the Dept. of Electronic Engineering at Cork Institute of Technology, having previously completed a Ph.D. and an M.A. at Dublin Institute of Technology. He has also worked as a Chemical Engineer in the pharmaceutical industry for some years. In the field of music and audio, he has worked as a sound engineer and has written scores for theatre. He has recently utilised his sound source separation technologies to create the first ever officially released stereo mixes of several songs by the Beach Boys, including 'Good Vibrations', 'Help Me, Rhonda' and 'I Get Around'. His research interests are in the areas of automatic music transcription, sound source separation, tensor factorisations, and music information retrieval systems.
Stefan Bilbao
Acoustics and Audio Group, University of Edinburgh
Perspectives on Physical Modelling Synthesis
Physical modelling synthesis has now been around for quite a while---and in the mainstream for more than 20 years. And yet, only recently has it become possible to perform simulations of relatively complex musical instrument designs in a reasonable amount of time. There are various approaches to physical modelling---some can be viewed as descending from standard abstract methods, such as additive or table-lookup synthesis of waveforms, while others have their roots in simulation techniques for the dynamics of vibrating systems. The first part of this talk examines the different approaches to physical modelling in this light, in order to highlight both their differences and unifying features---particularly with regard to computational cost, which is the main downside of working with physical models relative to other synthesis techniques. The remainder of the talk is devoted to an exploration of the possibilities of physical modelling synthesis for some more elaborate instrument constructions, including brass instruments, percussion, guitar models, modular instrument construction environments, and, finally, the computationally “big” problem of embedding physical models in a surrounding 3D space. Sound and video demonstrations will be presented.
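As a taste of the simulation-based approach, the sketch below runs the classic explicit finite-difference scheme for the 1D wave equation: an ideal lossless string with fixed ends, plucked and read at a "pickup" position, operating just below the Courant stability limit. It is a minimal illustration only; the function name and parameter values are assumptions, not material from the talk.

```python
import numpy as np

def pluck_string(f0=110.0, dur_s=0.2, sr=44100):
    """Finite-difference simulation of an ideal string (1D wave equation)
    on a unit-length domain with fixed (Dirichlet) boundary conditions."""
    c = 2.0 * f0                   # wave speed giving fundamental f0 on a unit string
    k = 1.0 / sr                   # time step
    N = int(1.0 / (c * k))         # grid intervals chosen so the Courant number <= 1
    lam = c * k * N                # Courant number lambda = c*k/h, with h = 1/N
    x = np.linspace(0.0, 1.0, N + 1)
    # triangular "pluck" initial shape, zero initial velocity
    u = np.where(x < 0.3, x / 0.3, (1.0 - x) / 0.7)
    u_prev = u.copy()
    out = np.zeros(int(dur_s * sr))
    for n in range(len(out)):
        u_next = np.zeros_like(u)  # endpoints stay 0: fixed string terminations
        u_next[1:-1] = (2.0 * u[1:-1] - u_prev[1:-1]
                        + lam**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2]))
        u_prev, u = u, u_next
        out[n] = u[int(0.8 * N)]   # read displacement at a pickup position
    return out

audio = pluck_string()
```

Even this tiny scheme exhibits the trade-off discussed in the talk: adding stiffness, loss, nonlinearity or a surrounding 3D acoustic space refines the sound but multiplies the operation count per sample.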
Dr. Stefan Bilbao is currently a Reader in the Music subject area at the University of Edinburgh and co-director of the Acoustics and Audio Group. His background is in Physics (BA, Harvard, 1992) and Electrical Engineering (MSc 1996 and PhD 2001, Stanford University). He was previously a lecturer at the Sonic Arts Research Centre at Queen’s University Belfast (2002-2005), and a postdoctoral research fellow at the Stanford Space Telecommunications and Radioscience Laboratory (2001-2002). He is currently the PI of the NESS project, concerned with the development of large-scale physical modelling synthesis algorithms on parallel hardware for musicians.