In today’s world, science and art don’t usually mix. For many years we operated under a now-defunct left brain/right brain dichotomy that fostered the presumption that people were good at either math and science or the arts, but not both.
In secondary education, funding for arts classes is often directly threatened by the need to produce higher test scores in science, technology, engineering, and math (STEM) fields. Within the scientific world, experiments are often presented with tables, charts, and graphs that show raw data but nothing more. However, when art and science converge with the goal of representing, communicating, and disseminating data, the result can be a more interesting and evocative presentation that appeals to a larger subset of the general public.
One way to blend art and science is to communicate facts and figures through music. But how do you aurally represent data in a way that is true to the information and allows it to be easily interpreted, while still creating a pleasing musical piece? That was my challenge when, in June 2016, I arrived at the Hubbard Brook Experimental Forest in Woodstock, NH, as part of a Research Experience for Undergraduates program that was investigating ways to represent data as sound.
I came to New Hampshire having just completed my third year at Oberlin College and Conservatory, where I was studying music composition, horn performance, and mathematics. The music I had been writing was meant to be played by living musicians as opposed to computers. This concept of turning data into sound was completely foreign to me.
However, it was already second nature to scientists and musicians at Hubbard Brook working on a project they called Waterviz. They were in the process of creating real-time aural representations of water cycle data, as well as aural representations of pre-existing water cycle data going back to 2010. All of these pieces were created by Marty Quinn, a musician and the owner of Design Rhythmics Sonification Research Lab, who has been involved in data sonification for the past two decades.
After listening to Marty’s pieces, I decided to work with Hubbard Brook’s water cycle data set from 2015. It had 8670 data points taken at hourly intervals for ten different variables. I was tasked with making my own aural representation of this data.
As I was getting started, I made a few calls to my school colleagues who studied in Oberlin’s Technology in Music and Related Arts program and purchased Max/MSP, a visual programming language that seemed like it had some potential for data sonification. Max/MSP allowed me to input the data values for each variable and scale them to appropriate ranges that were usable by the program. The scaled values for each variable could be assigned to a sound or instrument, and the magnitudes of the data values defined that sound or instrument’s pitch and volume. I chose a sound that I thought was representative of each variable and decided how the data would influence it.
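The core of that mapping is simple linear scaling. As a minimal sketch (the function name, input ranges, and the 0-127 output range are illustrative assumptions, not values from the actual Max/MSP patch), the idea looks like this:

```python
def scale(value, in_min, in_max, out_min, out_max):
    """Linearly map value from [in_min, in_max] to [out_min, out_max],
    the same kind of rescaling done before assigning data to a sound."""
    fraction = (value - in_min) / (in_max - in_min)
    return out_min + fraction * (out_max - out_min)

# Example: map a hypothetical streamflow reading (0-50 L/s)
# onto a MIDI-style volume range (0-127).
volume = scale(12.5, 0, 50, 0, 127)  # -> 31.75
```

Once every variable is scaled into a common range like this, the same machinery can drive pitch for one instrument and volume for another.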
To represent streamflow, I used the notation program Finale to write and record a tranquil clarinet melody. The streamflow rate controlled the clarinet's volume. If the stream flowed fast enough, cymbals would also start playing, hitting more frequently with higher streamflow. I wrote and recorded a sprightlier countermelody for the flute; its volume was controlled by solar radiation. I used a vibraphone to represent rain and a celesta to represent snow, with heavier precipitation triggering more and higher pitches on those instruments.
For wind speed, I created a droning sine wave that sounded louder and played at higher octaves with higher wind speed. The panning of this drone was controlled by wind direction. I also chose a sine wave to represent temperature. It oscillated up and down with temperature changes. I used a rolling timpani, or kettledrum, to represent air pressure, with its pitch going up and down along with the pressure.
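A wind mapping like the one described above can be sketched in a few lines. This is a hypothetical illustration of the approach, not the actual patch: the 20 m/s loudness cap, the 5 m/s-per-octave step, and the pan formula are all assumptions made up for the example.

```python
def wind_to_drone(speed, direction_deg, base_freq=110.0):
    """Map wind speed to a drone's loudness and octave, and
    wind direction to its stereo pan position (illustrative ranges)."""
    # Louder with higher wind, capped at an assumed 20 m/s maximum
    gain = min(speed / 20.0, 1.0)
    # Jump up an octave for each 5 m/s of wind speed
    octave = int(speed // 5)
    freq = base_freq * (2 ** octave)
    # Map compass direction (0-360 degrees) to pan (0 = left, 1 = right)
    pan = (direction_deg % 360) / 360.0
    return freq, gain, pan

freq, gain, pan = wind_to_drone(10, 180)  # -> (440.0, 0.5, 0.5)
```

The same shape of function — one data value in, a handful of sound parameters out — underlies each of the instrument mappings.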
A pan flute chord added higher pitches as the soils became more saturated with water. A piano played a chromatic scale that went up and down with changes in humidity. A synthesizer represented evapotranspiration, the water vapor evaporating from the soil and transpired by the trees. It sounded like something breathing, mimicking the trees and soils "respiring" water vapor into the atmosphere.
The result was a fourteen-and-a-half-minute piece that played the entire year’s data, with a new hourly data point heard once every 100 milliseconds. A recording of this piece will join Marty’s other sonifications on Hubbard Brook’s Waterviz website.
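The duration follows directly from the playback rate — a quick check of the arithmetic:

```python
points = 8670          # hourly data points in the year's data set
ms_per_point = 100     # each point sounds for 100 milliseconds
total_seconds = points * ms_per_point / 1000
minutes = total_seconds / 60
print(minutes)  # -> 14.45, i.e. about fourteen and a half minutes
```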
In addition to the full piece, I also made recordings of each variable individually cycling through its entire range. Finally, I made recordings of several hydrological events such as rainstorms, snowmelt, and periods of extended low streamflow. These excerpt recordings were synchronized to stacked graphs of all the variables, with a line moving across them in step with the data, allowing the listener to see the data in graph form while listening to the piece.
I hope that my aural representation of data – and others like it – can help increase the number of people in the general public who access and interact with scientific data. The current prevailing methods of data representation (charts and graphs) make it very difficult for visually impaired people in particular to consume scientific data. I anticipate that sonic representations presented in tandem with graphs will act as a bridge between the scientific community and non-traditional audiences.
I am also interested in presenting data in more aesthetically pleasing ways, as opposed to seemingly cumbersome lists of numbers or bland graphs. That will hopefully encourage members of the non-scientific community to spend more time interacting with the data. Finally, I am excited by the prospect of a world where the humanities and sciences are more closely intertwined, as artists and scientists work together to communicate scientific research and findings and instigate change that benefits everyone.
Torrin J. Hallett is in his fourth year of study at Oberlin College and Conservatory in Ohio, where he is in the double degree program studying music composition, horn performance and mathematics. Torrin began composing when he was fourteen years old; his first published composition came when he was sixteen. Since then, he has written many works for chamber groups and wind ensembles, and he wrote music for the documentary “Consider the Conversation 2: Stories about Cure, Relief, and Comfort.” Torrin is currently serving as composer in residence and assistant conductor for the Northern Ohio Youth Orchestras. Outside of music, Torrin is a professional lumberjack and an Eagle Scout.