Over the summer, I produced five sound sculptures centered around fire-affected areas of the 2020 Holiday Farm fire. The work was part of the Soundscapes of Socioecological Succession (SSS) project that was funded through a Center for Environmental Futures, Andrew W. Mellon 2021 Summer Faculty Research Award from the University of Oregon.
Through field recording, local wood sourcing, and custom electronic design, the five sound sculpture prototypes were one way to generate a unique auditory experience aimed at the general public. The work was designed to unpack the sounds and scenes of wildfires in natural and human systems and to document the regenerative succession of coupled social and ecological processes.
Video 1. Sound sculpture C prototype. Burnt cedar wood and audio sourced from fire-affected area near Blue River, OR.
Socioecological systems emerge from interdependent processes through which people and nature self-organize across space and time (Gunderson and Holling, 2002). STEM-centric studies of socioecological dynamics miss literal and metaphorical connections between people and nature, which are difficult to quantify and to communicate. To address this limitation, the sound sculptures test a new approach to capture SSS as a qualitative record of collective response to catastrophic wildfire.
Like a tree-ring slice that marks age and time, the field recordings captured during visits to fire-affected areas connote a slice of succession activities. Sound recordings of the area are meant to capture multiple scenes and ecological voices, filtered through a raw material from the sites themselves.
Video 2. Sound sculpture D prototype. Wood and audio sourced from fire-affected area near Blue River, OR.
Our sonic environment is polluted by man both in its content and its reflections. This is certainly true even for field recordists, who venture further and further into the wild to break free from noise pollution: a passing airplane, a highway’s din, or even audible underground activity such as fracking (One Square Inch, 2021). Treating site-specific wood as an acoustic resonator — a filter that distorts as much as it renders sound audible — casts a shadow onto the sounds it projects. The physical material acts as a filter: the wood subtly reshapes the spectrum, boosting or cutting different frequencies in the sound.
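The wood's filtering effect can be sketched numerically. Below is a minimal Python illustration (not the project's actual signal chain, which is purely acoustic) of applying a frequency-dependent gain curve to a signal — the digital analogue of a wooden panel boosting or cutting parts of the spectrum:

```python
import numpy as np

def apply_resonator(signal, gain_db):
    """Shape a signal's spectrum with a per-bin gain curve (in dB),
    loosely mimicking how a resonant material boosts or cuts frequencies.
    gain_db must have the same length as the rfft of the signal."""
    spectrum = np.fft.rfft(signal)
    shaped = spectrum * 10.0 ** (gain_db / 20.0)
    return np.fft.irfft(shaped, n=len(signal))

# Toy example: a 1 kHz tone passed through a curve that is flat below
# 2 kHz and cuts everything above by 24 dB (values are illustrative).
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 1000 * t)
freqs = np.fft.rfftfreq(len(tone), 1 / sr)
gain_db = np.where(freqs < 2000, 0.0, -24.0)
shaped = apply_resonator(tone, gain_db)
```

Since the tone sits entirely below the cutoff, it passes through almost unchanged; energy above 2 kHz would be attenuated.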
Our University of Oregon team expanded previous research by sampling the rich SSS at fire-affected sites, including soundscape field recordings, recorded interviews, and collecting “hazard tree” waste material. These materials offer a document of the resiliency of the landscape and illustrate how forest disturbance can set back human-defined sustainable development goals regionally. The development of the five sound sculptures is just one means to inform the public and inspire collective action towards sustainable futures.
Video 3. Sound sculpture E prototype. Wood and audio sourced from fire-affected area near Blue River, OR.
Audio field recordings were captured during two site visits to fire-affected areas, on June 16, 2021 and July 2, 2021. The second visit was to the H.J. Andrews forest and included an interview and tour with Mark Schulze (H.J. Andrews Experimental Forest Director). Bailey Hilgren and I used a few field recording setups, mostly consisting of Bailey recording with a Zoom H6 using its on-board mics and me recording with a Sound Devices 633 field mixer and three mics: Sennheiser MKH 30-P48 and MKH 50-P48 microphones in mid-side configuration and a LOM Uši Pro omnidirectional microphone. The Zoom recordings were captured at 96kHz/24-bit, and the 633 recordings at 192kHz/24-bit. During the second visit, we were able to set up “tree ears”, consisting of two Uši Pro mics taped to a tree and a LOM Geofón low-frequency microphone, which we left recording for several hours in the H.J. Andrews forest (see Figure 2). Bailey organized all the audio recordings using the Universal Category System (UCS), a public domain initiative for the classification of sound effects. While we chose not to release the 30+GB of audio files as a publicly available archive, we have made the audio categorization spreadsheet publicly available (SSS metadata spreadsheet).
Figure 1. Field recording setup at fire-affected site.
Figure 2. “Tree ear” field recording configuration.
During the technical design phase, I asked some secondary research questions. Which audio exciters/transducers work best on non-flat, raw wood surfaces? Which exciters are the most cost-effective solution for an array of speakers? For wall-mounted wood pieces, can I cost-effectively source materials sturdier than aluminum posts?
Figure 3. Sound sculpture prototypes depicting standoffs and speakers.
I tested a few different models: waterproof transducer, round and square exciters, and distributed mode loudspeakers. I also tested different speaker formats: 10W 8ohm, 20W 4ohm, and 20W 8ohm. Unfortunately, exciter models at the desired power outputs (25-30W) were consistently sold out throughout the project, so I was unable to distribute testing equally across similar power outputs. From experience more than a scientific A/B test, I found that the more flexible options for attaching to wood surfaces were the Dayton Audio DAEX25Q-4 Quad Feet 25mm and the Dayton Audio DAEX32SQ-8 Square Frame 32mm Exciter, 10W 8 Ohm. Generally, I realized that in order to get decent output in both frequency response and gain, the low end of $15-20/transducer seems about right. I do not recommend anything below 10W for this type of work. Getting a stereo image was not important and would be difficult given the size of the wooden pieces. I valued volume and minimizing visual distraction, so speakers were meant to be placed behind or under the sculptures. I doubled speakers whenever I used 10W drivers.
Figure 4. Recording a log loader moving hazard tree material.
Audio 1. Log loader field recording (see Figure 4).
For standoffs, I sourced variable-size stainless steel standoff screws used in mounting glass hardware, which worked extremely well on the river wood sound sculpture (Figure 5).
Figure 5. Stainless steel standoffs, 10W 8ohm speakers, and custom electronics board on sound sculpture D prototype.
I sourced audio amplifiers on sale for under $10 each, where $15 is normal pricing. The TPA3116D2 2x50W Class D stereo amplifier boards have performed well on previous projects, and finding them cheaply with an added volume control and power switch was a great addition for fine-tuning amplification in public spaces.
Normally, powering the amplifiers and audio boards is where the real cost comes in, and I was happy to learn that SparkFun’s RedBoard Arduinos can now handle upwards of 15VDC, so I went with their MP3 Player Shield and RedBoard UNO in order to split VDC power between the amplifier and the board (12V, 2A power supplies were adequate for the project and transducer wattage).
Figure 6. Custom electronics board consisting of MP3 player shield, Arduino RedBoard UNO, 2x50W class D amplifier, and power split for up to 15VDC.
Figure 7. Recording site near Eagle Rock along the McKenzie River.
I modified the outdated MP3 player code on Arduino to dynamically handle any number of tracks and named audio files, so that one doesn’t need to rename audio files following the “track001.mp3”, “track002.mp3” convention. Whatever audio files are uploaded onto the SD cards, the filenames simply need to be placed into an array at the top of the code uploaded to the board. Thus, when powered on, the sound sculpture will play an endless loop of the uploaded audio files found on the SD card.
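The looping logic can be sketched outside of Arduino. Below is a hypothetical Python illustration of the same idea (the real implementation is the Arduino sketch on GitHub; the filenames here are made up):

```python
# Hypothetical sketch of the sculpture's playback loop: an array of
# filenames declared at the top, played in an endless cycle.
TRACKS = ["creek_ambience.mp3", "log_loader.mp3", "thrush_call.mp3"]  # made-up names

def next_track(index):
    """Advance to the next track, wrapping back to the first at the end."""
    return (index + 1) % len(TRACKS)

# Five consecutive plays starting from track 0 wrap around the list:
order = []
i = 0
for _ in range(5):
    order.append(TRACKS[i])
    i = next_track(i)
```

On the sculpture this cycle runs indefinitely while powered, so any set of files dropped onto the SD card becomes the loop.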
***For those interested in the Arduino code running on the MP3 players, I have made the code publicly accessible as a repository on Github.
Figure 8. Full electronics module example: 12V 2A power supply, MP3 player shield, SparkFun RedBoard Arduino, TDA 2x50W stereo amplifier, single 10W exciter.
Video 4. Sound sculpture A prototype. Wood and audio sourced from fire-affected area near Blue River, OR.
Selecting the audio for the sound sculptures came through discussions with Bailey around ecological succession, the interviews conducted, and the types of audio that were captured and categorized. We chose four audio bins (categories) to work with: animals, soundscape or ambient, logging or construction, and scientific or interviews. Again, Bailey created a categorical spreadsheet of audio files within these four bins.
Video 5. Sound sculpture A prototype. Wood and audio sourced from fire-affected area near Blue River, OR.
Constructing the sound sculptures involved imagining public space and the materials. There are two pieces for the wall, one for hanging, one for a pedestal, and one for the ground. The sculptures are stand-alone pieces that simply require AC power for showing. See below for a gallery of stills of these works.
By activating sourced raw materials (e.g., “hazard tree” wood) with acoustic signals stemming from local sites, the sound sculptures amplify the regional and collective voice of wildfire succession even as they output a modified version of the input sound.
The process of developing the sound sculptures led to additional ideas for iteration or for incorporating the sculptures within a larger-scale project. For example, in our interviews with Ines Moran and Mark Schulze, we found out about “acoustic loggers,” battery-operated, weather-proof audio field recorders that record audio based upon a timer. We ordered one such acoustic logger for the project, an AudioMoth; however, it did not arrive before the project’s completion. Working acoustic loggers into the project through sampling fire-affected sites would create a unique dataset.
The sound sculptures can be stand-alone works. We appreciated the modular approach to the design, and we could continue that approach or tether sound objects together. Future work could involve spatializing audio across multiple sculptures, similar to previous sound artworks like Wildfire and Awash.
For the sound sculptures themselves, there is gain control at the speaker level but not on the line output of the players. We could add buttons for increasing/decreasing volume on the MP3 boards to better manage levels, and if we want to provide an interactive component to the works, we could add buttons for cycling through tracks on the sound sculptures.
Listening to our environment is essential. In 2015, The United Nations Educational, Scientific, and Cultural Organization (UNESCO) formed a “Charter for Sound” to emphasize sound as a critical signifier in environmental health (LeMuet, 2017). By continuing to incorporate sonic practices (bioacoustics, sound art, field recording) into our work with the environment, we create more pathways to experiencing and understanding the planet we live on.
References / Resources
Gunderson, L.H., Holling, C.S., 2002. Panarchy: Understanding Transformations in Human and Natural Systems. Island Press.
This is a short article on creating video spectrograms (time-frequency plots) of audio files. The work comes from the research project Soundscapes of Socioecological Succession, funded by a Center for Environmental Futures, Andrew W. Mellon 2021 Summer Faculty Research Award.
The example in Video 1 is a spectrogram video created using Matlab. The audio is a recording of a small dynamite blast of a 70″ stump across from Eagle Rock, just past Eagle Rock Lodge on the McKenzie Hwy in Vida, OR.
Video 1. Video of spectrogram with playback barline and synchronized audio file.
I love spectrograms. I’ve worked with time-frequency plots in various ways in the past, namely spectral smoothing music (listen on Spotify), collaborative research (read the paper), and even teaching (Data Sonification course) at the University of Oregon. Yet I am still amazed by the work spectrograms and sound do in the sciences. I knew of theories about animals occupying various frequency spaces within a habitat from the bioacoustics work of Garth Paine and the great multimedia reporting of Andreas von Bubnoff. Yet after an interview with UO visiting researcher Ines Moran, as part of our Soundscapes of Socioecological Succession project, I was further intrigued by how sound, spectrograms, and AI play an integral role in her bioacoustics research on bird communication.
This led me to revisit my work with spectrograms. I was blown away by Merlin ID’s auto spectrogram video app, and I wanted to rethink how I create my own spectrogram videos. I’ve been frustrated with multiple software solutions for generating scrolling spectrogram videos. Lacking a seamless solution other than screen-capturing iZotope RX or Audacity spectrograms, I did some more research into iAnalyse 5 (which replaces the eAnalysis software) and Cornell Lab’s RavenLite, but was unsatisfied with the movie export results. I appreciated the zoom functionality of each but wanted auto-chunking or scrolling of the spectrogram within a high-resolution video.
I didn’t easily discover a straightforward plug-and-play solution (although I’m open to hearing one if you have a suggestion!). I ended up going back to Matlab to see if I could find a pre-existing library or code I could implement. I found a few slightly different versions, none exactly seamless. I ended up refashioning some pre-existing code written by Theodoros Giannakopoulos that generated GIFs from spectrograms (see Figure 1).
Figure 1. Original Gif export using pre-existing Matlab code.
I used this code as a starting point to build out a function that exports videos of spectrograms, in which I can specify the length in seconds of each window. Video 2 depicts example display output of the audio waveform and the spectrogram of a Swainson’s Thrush bird call. I synced the audio afterward in Adobe Premiere. I later removed the waveform to focus on the spectrogram, and I had to get fancy with the x-axis labels to dynamically match windows that could be any number of seconds long.
Video 2. Video output on a single screen, with split waveform and spectrogram view.
While I was unable to get a scrolling spectrogram video in a single piece of software, the auto-chunking feature was quite time-saving. I simply crafted an Adobe Premiere template with a scrolling animation graphic that I can easily edit to match the exact window length, then sync my original audio file to the movie, all within about a minute or two (see Figure 2). The final version has a nice scrolling playback bar across the pages of spectrogram videos.
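The auto-chunking idea — splitting the audio into fixed-length “pages” and rendering one spectrogram per page — reduces to simple index arithmetic. Here is a minimal sketch in Python (the actual implementation is in Matlab; the parameter values are hypothetical):

```python
def chunk_boundaries(n_samples, sample_rate, window_seconds):
    """Split an audio file into fixed-length windows ("pages"),
    returning (start, end) sample indices; the last page may be shorter."""
    samples_per_window = int(window_seconds * sample_rate)
    starts = range(0, n_samples, samples_per_window)
    return [(s, min(s + samples_per_window, n_samples)) for s in starts]

# A 10.5-second file at 48 kHz with 4-second pages yields three pages,
# the last one truncated to the remaining 2.5 seconds.
pages = chunk_boundaries(n_samples=int(10.5 * 48000), sample_rate=48000,
                         window_seconds=4)
```

Each (start, end) pair would then be rendered as one spectrogram frame sequence, with the scrolling playback bar animated across each page in Premiere.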
Figure 2. Screenshot of Adobe Premiere with line graphic that keyframe animates across the spectrogram during playback.
Video 3 displays the spectrogram complete with audio waveform, audio file, and playback barline (audio and playback barline added in Adobe Premiere).
Video 3. Video example with scrolling playback barline.
Video 4 shows the final version of the code output after removing the audio waveform, resizing the graph, and updating the title. Again, adding the playback barline and synchronizing the audio were done in Adobe Premiere.
Video 4. Final version of Matlab code that generates a 1920x1080p spectrogram video the same length as the audio file.
The code gave me an easy way to label the spectrogram and embed the labels in the video. The process has four steps:
1. Run the script in Matlab, which outputs a 1920×1080 video the same length as the audio file.
2. Drag the video into Adobe Premiere with the graphics playback bar template.
3. Drag the audio to the start to match the animation.
4. Export the 1920×1080 video.
The process for one audio file takes about 2-3 minutes from start to finish.
I could make this more dynamic by grabbing the audio file length automatically and setting the frame rate automatically to match, then simply determining how many “screens/pages” I want by editing the function variables.
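That improvement would look something like the following Python sketch (hypothetical; the real code is in Matlab), deriving the video duration, per-page length, and frame count from the audio file itself:

```python
def video_params(n_samples, sample_rate, n_pages, fps=30):
    """Derive video timing from the audio itself: total duration in
    seconds, seconds per page, and total frame count at a chosen
    frame rate. All parameter choices here are illustrative."""
    duration = n_samples / sample_rate
    seconds_per_page = duration / n_pages
    total_frames = round(duration * fps)
    return duration, seconds_per_page, total_frames

# A 60-second recording at 48 kHz split into 4 pages at 30 fps:
duration, page_len, frames = video_params(60 * 48000, 48000, n_pages=4)
```

With values derived this way, the exported video would always match the audio length exactly, removing the manual syncing step in Premiere.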
***For those interested in the Matlab code, I have made it publicly accessible as a repository on Github.