Turning the Silence of Space into Sounds
The "Sounds of Space" project is being developed to explore the connections between solar science and sound, to compare visual and aural representations of space data, mostly from NASA's STEREO mission, and to promote a better understanding of the Sun through stimulating interactive software. These programs generate sounds through real-time processing of data previously collected from the Sun. All of the data are multichannel, meaning that the experiments in space measure multiple aspects of a solar wind property at any given moment in time. For example, the solar wind particle instrument measures particles at different energies and sorts the measurements into channels according to their speed. We use this multiplicity of data to control various parameters of the generated sound: a variation in one channel of the data might control the pitch (how high or low), the amplitude (how loud or soft), or the rhythm of a portion of the overall sonic representation.
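The mapping idea above can be sketched in a few lines. This is not the project's Max/MSP code, just a minimal Python illustration, and the channel names, readings, and frequency range below are hypothetical stand-ins for real instrument data.

```python
# Sketch: map one data channel to pitch and another to amplitude.
# All values here are invented for illustration.

def normalize(values):
    """Scale a list of readings into the range 0.0-1.0."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def map_to_pitch(values, low_hz=110.0, high_hz=880.0):
    """Map normalized readings onto a frequency range in Hz."""
    return [low_hz + n * (high_hz - low_hz) for n in normalize(values)]

def map_to_amplitude(values):
    """Map normalized readings onto a 0.0-1.0 gain."""
    return normalize(values)

# Hypothetical per-timestep readings from two instrument channels.
density_channel = [3.2, 5.1, 9.8, 4.4]
speed_channel = [310.0, 420.0, 390.0, 510.0]

pitches = map_to_pitch(density_channel)   # one frequency per timestep
gains = map_to_amplitude(speed_channel)   # one gain per timestep
```

A real sonification would feed these parameter streams to oscillators in real time; the point here is only that each data channel independently drives one dimension of the sound.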
We have developed several computational tools to control, map, and sonify these data, i.e. convert them to sound. The majority of our work has used the Max/MSP programming environment, which allows real-time user interaction with the sonification process. Through this interaction the user can quickly become engaged with the data and gain a depth of understanding not as easily attained through passive observation.
To hear the latest examples of our work, visit the latest web page; for archived examples, visit the examples web page. For even more sounds of space created by scientists and musicians around the world, visit our links page.
To delve into sonification yourself, visit the programs page, where you can download the software programs we have created: one that listens to data a user has collected, one for solar wind particle data from ASCII files, and one for solar wind particle data read from graphical data, in stereo. This most recent work relies on spectrogram image analysis to handle data stored in diverse formats: by reading the changing color values represented in a spectrogram, we can determine the number of channels and dynamically reconfigure the program to sonify accordingly.
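The spectrogram-reading step can be pictured as follows. This sketch (again, not the project's actual code) treats an image as a grid of brightness values in which each pixel row is a data channel and each column is a moment in time; the grid and the threshold are hypothetical stand-ins for real image data.

```python
# Sketch: recover per-channel time series from a spectrogram-like grid
# and count the channels that carry signal worth sonifying.

def channels_from_spectrogram(pixels):
    """Return one time series per row of the spectrogram grid."""
    return [list(row) for row in pixels]

def active_channel_count(pixels, threshold=0.1):
    """Count rows whose brightness ever exceeds the threshold."""
    return sum(1 for row in pixels if max(row) > threshold)

# 3 channels (rows) x 4 timesteps (columns) of brightness in 0.0-1.0.
grid = [
    [0.0, 0.2, 0.9, 0.4],   # low-energy channel with activity
    [0.0, 0.0, 0.05, 0.0],  # quiet channel, stays below threshold
    [0.6, 0.7, 0.8, 0.9],   # high-energy channel with activity
]

series = channels_from_spectrogram(grid)
n = active_channel_count(grid)  # 2 of the 3 channels exceed the threshold
```

A real implementation would decode color values from an image file and map the spectrogram's color scale back to data values, but the reconfiguration idea is the same: the number of rows with signal determines how many sonification voices to set up.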
Sonification Misconception Alert
Most of the sounds on these pages are not sound waves observed in space; that is, most of the data are not waves that travel through a medium by way of changes in the density of that medium. Rather, data from different regions in space, such as solar particles hitting a detector, have been converted into sound. This process is known as sonification. See our sonification page for a more detailed introduction to this concept.
CNMAT (the Center for New Music and Audio Technologies) houses a dynamic group of educational, performance, and research programs focused on the creative interaction between music and technology. CNMAT’s research program is highly interdisciplinary, linking all of UC Berkeley’s disciplines dedicated to the study or creative use of sound. CNMAT has been the leading contributor to our sonification project.