
Wildfire installation. Photo by http://janelleshootsphotos.com

The process behind sound artwork Wildfire

Wildfire is a 48-foot-long speaker array that plays back a wave of fire sounds across its span at the speeds of actual wildfires. The sound art installation strives to have viewers embody the devastating spread of wildfires through an auditory experience.

Wildfire employs sound to investigate how the climate enables destructive wildfires that lead to statewide emergencies. The speed at which fires move can be mimicked in sound. By placing speakers along a surface (one speaker every three feet across 48 feet, sixteen speakers in all), Wildfire uses spatialization techniques to play waves of fire sounds at the speeds of both simulated models and actual wildfire events. By comparing the speed of different fires through sound spatialization, we can hear how quickly fires move under different fire behavior conditions (fuel, topography, and weather).

Stereo audio example. Fire sound moving across stereophonic field at 16 mph.

Stereo audio example. Fire sound moving across stereophonic field at 83 mph.

Wildfire is composed of sixteen 30W speakers, 120’ of speaker cable, sixteen 8” square wood mounts, sixteen 6.25”-diameter wood speaker rings, 64 aluminum speaker post mounts, eight custom electronic boards and enclosures, eight 50W power amps, one custom motherboard and enclosure, eight custom-length Ethernet cables, a custom-built power supply cable, sixteen 15V 4A power supplies, and three 9V 5A power supplies.

Eight different recordings of wildfire sound simulations are played across the 48-foot speaker array in looped playback. A narrator describes each wildfire event before the audio playback of fire sounds. Because audio on all eight stereo channels is triggered at the same time for simultaneous playback, the spatialization is ‘baked in’ to the audio files. The fire soundscapes are audio samples that were simulated in a virtual space to move at the speeds of actual wildfires and captured (read: recorded) as eight stereo audio files at the same spatial locations as the sixteen speakers in the physical world. The virtual mapping and recording process ensures little destructive interference from phase shifts and time delays. I then mixed the resulting files inside Logic Pro X (see Figure below).

Figure. Eight stereo audio files — each track represents audio for one sound FX board, which has stereo playback.

I am always amazed by how differently topics are defined and vocabulary is used when working across disciplines. For example, where I sought to play audio at varying ‘speeds,’ wildfire scientists and firefighters instead describe fires in terms of ‘rate of spread.’ Because fires are not single moving points but lines that can span miles, moving in various directions all at once, speed is difficult for the field to put into practice. The term ‘spread’ and how it is calculated serve wildfire science well, but they required me to think about how to convey destructive rates of spread as a rate a general observer may perceive along a two-dimensional speaker array (speakers mounted along a wall).

In order to distill wildfire science down to essential components for a gallery sound installation, I spent a lot of time speaking with wildfire scientists on the phone, emailing fire labs, estimating wildfire behavior using Rothermel’s spread rate model,[1][2] and converting between the surveyor’s unit of ‘chains’ and miles. I am not a fire scientist; I am indebted to the help I received, but any incongruencies are my own. I compiled eight narratives that juxtapose ‘common’ rates depicted in simulated models with real wildfires that have occurred in US western states over the past ten years, based upon fire behavior (fuel, topography, and weather). These narratives are outlined in Table 1 at the bottom.
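For those who want to follow the arithmetic, converting a linear rate of spread into the playback time across the array is a short calculation. Below is a sketch in bash/awk (1 chain = 66 feet; the example rate of 1280 chains/hour is the Yarnell Hill figure from Table 1):

# linear rate of spread (chains/hour) to mph and to seconds across the 48-foot array
rate=1280	# chains/hour; 1 chain = 66 feet
awk -v r="$rate" 'BEGIN {
  printf "%.1f mph\n", r * 66 / 5280;                          # 16.0 mph
  printf "%.2f seconds across 48 ft\n", 48 / (r * 66 / 3600);  # about 2 seconds
}'

The sq. chains/hour figures in Table 1 measure area growth rather than a linear rate, so this conversion does not apply to them directly.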

Earlier in the year, I worked with Harmonic Laboratory, the art collective I co-direct, on a 120-speaker environmental sound work called Awash.[3] The work was commissioned for the High Desert Museum in Bend, Oregon as part of the Museum’s 2019 Desert Reflections: Water Shapes the West exhibit, which ran from April 26 to Sept. 27, 2019. The 32’ x 8’ work evokes the beauty of the high desert through field recordings, timbral composition, and kinetic movement (Figure below).

Figure. Harmonic Laboratory’s 2019 sound artwork, Awash.

The electronic technology that I implemented in Awash for playing back audio across 120 speakers influenced my design of Wildfire. The electronics in Awash work by sending a basic low-voltage signal from the Arduino Mega motherboard to ten sound FX boards over Ethernet cable, triggering simultaneous playback of audio across all 120 speakers (twelve 3W speakers per board powered by a 20W amplifier circuit). The electronics in Wildfire function in the same way: a low-voltage signal from the motherboard (Arduino Mega) is sent to eight electronic MP3 boards over Ethernet cable, triggering simultaneous playback of audio across all sixteen speakers (Figures below). Instead of the 3W speakers and 20W power amp boards used at the High Desert Museum, I chose to scale down the number of speakers and ramp up the wattage per board, pairing a stereo 50W power amp with two 30W speakers. The result is sixteen channels of audio running across eight stereo boards. And it doesn’t have to be sample-accurate!

Figure. Motherboard: Arduino Mega to eight Ethernet cable jacks.
Figure. Voltage trigger sent over Ethernet cable to each sound FX board. Any millisecond discrepancies in timing were inconsequential to possible phase interference due to the virtual audio recording process.

For Wildfire, I built custom laser-cut acrylic enclosures for the electronic boards (Figure below) using MakeABox.io (note: I found a good list of other services here). The second element was designing and creating custom PCBs for the electronics themselves (Figure below). For the custom PCB, I used Eagle CAD software (SparkFun has a great tutorial!) and then used an Oregon-based manufacturer, OSH Park, to print the boards.

For the sixteen panel mounts and speaker rings, I sourced all wood from my father-in-law’s woodshop; he has collected various woods over the last 50-60 years. The panels were planed, cut, and drilled on-site, and the speaker rings were cut using a drill press. The figure below depicts the raw materials after applying a basic wood varnish. The wood mounts consist of black walnut, pine, and sycamore; the wooden speaker rings consist of alder, ebony, and myrtle.

Figure. Six different woods used for speaker mounts and rings.
Figure. Alder speaker ring, sycamore speaker stand. Photo by: http://janelleshootsphotos.com

In the build-out, I was unable to power both the power amp and the MP3 audio boards from a single power source, even with voltage regulators; a loud hum was evident when splitting the power. A future work could attempt to power everything from a single supply while sharing ground with the motherboard, but the audible hum led me to power the boards separately for this build.

During install, I ran into triggering issues related to the MP3 Qwiik trigger boards. The operating voltage for each MP3 board is between 3 and 3.3V, and I ran four boards from a single 9V 5A power supply using a custom T-tap connector cable and 1117 voltage regulators, registering 3.26V at each power connection. However, upon sending a low-voltage trigger from the motherboard to the MP3 boards, I was unable to successfully trigger audio from the fourth and final board located at the end of the power supply connector cable. The problem remained consistent, even after switching modules, switching boards, testing the Ethernet data cable, testing a different I2C communication protocol in the same configuration, and other troubleshooting tasks. When powering the final board with a different power supply (a 5V 2A supply), I was able to successfully trigger all eight electronic boards at once. It should be noted that the issue seems to have cascaded from my failure to effectively split power from a single power source per electronics module.

Figure. Electronics in Edith Langley Barrett Art Gallery, 2019. Photo by: http://janelleshootsphotos.com

The minimal aesthetic was slightly hindered by the amount of data and power cable running along the floor. There is minimal noise induction with long speaker cable runs, so for my second install at SPRING|BREAK in NYC, I relied on longer speaker cable runs instead of long power and data cables. Speaker cable is cheaper than power cable, which kept costs down, saved time dressing cables, and minimized cabling along the 48’ span, focusing attention on the speakers, wood, and audio. And if I use the MP3 boards again, I would implement the I2C protocol and consolidate the electronics, which would save on data cabling.

Figure. Wildfire at SPRING|BREAK in NYC in March, 2020. Photo from SPRING|BREAK Instagram.

Through the active listening experience of hearing wildfires at realistic speeds, viewers are openly invited to support sustainable and resilient policies, including actions they can take immediately, like creating defensible space around their homes. In the face of wildfires that grow ever more frequent, Wildfire strives sonically to register with listeners the devastation that wildfires cause. Getting the public to support sustainable policies and/or to prepare individually for wildfires helps make communities more resilient to the impacts of wildfires and other disaster-related phenomena caused by climate change.

The work was made possible through the University of Oregon Center for Environmental Futures and the Andrew W. Mellon Foundation. The Impact! exhibition at the Barrett Art Gallery was supported with funds from the Oregon Arts Commission. Thank you to Meg Austin for inviting me to display work at the Barrett Art Gallery, and I am indebted to Sarisha Hoogheem and Matthew Klausner for their hard work in putting the show together. Thank you to Meg Austin and Ashlie Flood for curating Wildfire at SPRING|BREAK in NYC. And kudos again to Matthew Klausner and Jay Schnitt for their hard work in putting the piece up. Thank you to my cousin John Bellona, a career Nevada firefighter, for his insight on western wildfires and contacts in the field. Thank you to Dr. Mark Finney for providing common averages of speed-related to wildfires; Dr. Kara Yedinek for sharing insights on audio frequencies from her fire research; and Sherry Leis, Jennifer Crites, Janean Creighton and the other fire specialists who helped me along the way.

Table 1: Narratives in Wildfire

Feature | Characteristics | Rate of Spread | Time across 48-foot speaker array
Surface Fire: Grass | Low dead fuel moisture content, high wind speed, level terrain | Upper average forward rate of spread, 894 chains/hour | 2.92 seconds
Yarnell Hill Fire, June 30, 2013 | 3-6% dead fuel moisture content, wind speed 15-25 mph, mixed terrain | During Granite Mountain crew deployment, 1280 chains/hour | 2.04 seconds
Crown Fire: Forest | Low dead fuel moisture content, high wind speed, level terrain | Upper average forward rate of spread, 297.6 chains/hour | 8.7 seconds
Delta Fire, near Shasta, California, September 5, 2018 | Moisture content unknown, wind speed unknown, mixed terrain | Initial perimeter rate of spread, 16,993 sq. chains/hour | 1.54 seconds
Surface Fire: Western Grassland, Short Grass | 2% dead fuel moisture content, wind speed 20 mph, level terrain | Perimeter rate of spread, 1250 chains/hour | 2.16 seconds
Long Draw Fire, Eastern Oregon, July 12, 2012 | Moisture content unknown, wind speed unknown, mixed terrain | Average perimeter rate of spread, 61,960 sq. chains/hour | 0.422 seconds
Crown Fire: Pine and Sagebrush | 2% dead fuel moisture content, wind speed 20 mph, level terrain | Perimeter rate of spread, 525 chains/hour | 4.99 seconds
Camp Fire, near Paradise, California, November 8, 2018 | Low moisture content, wind speed 50 mph, mixed terrain | Peak perimeter rate of spread, 67,000 sq. chains/hour | 0.394 seconds

 

[1] F. A. Albini, “Estimating wildfire behavior and effects,” United States Department of Agriculture, Forest Service, Tech. Rep., 1976.
[2] J.H. Scott and R.E. Burgan, “Standard fire behavior fuel models: A comprehensive set for use with Rothermel’s surface fire spread model,” United States Department of Agriculture, Forest Service, Tech. Rep., June 2005.
[3] J. Bellona, J. Park, and J. Schropp, “Awash,” https://harmoniclab.org/portfolio/awash/

The Art of the Cron Job

Sound art installations that require digital computing, especially projects that rely on advanced software, demand added insurance of stability in order to remain up in an unattended space for extended periods of time. For exhibitions, this time period can mean a month or more with hours that vary from business hours to a taxing 24-7. One added insurance for artists relying on computers (e.g., Mac Minis) for unattended digital works is the cron job.

A cron is “a time-based job scheduler” that runs periodically (at time intervals) to help “maintain software environments” (footnote 1). A software utility for Unix (read: Mac), cron automates processes and tasks, allowing the computer to act as your personal docent: checking on installation software, updating variables as part of the work, or fixing issues as they crop up.

I got into cron jobs in 2014 while working with John Park on #Carbonfeed (URL), a multimedia installation that leverages the Twitter API to transform real-time tweets into physical bubbles in tubes of water as well as into a musical composition driven by behavior on Twitter (Figure 1). The piece incorporates a custom node.js script running on a Mac mini. To anticipate power failures, and even to alter hashtag sets on the LCD screens (Figure 2), I needed a way to automate software processes and failsafes. Enter the cron job.

Figure 1. #Carbonfeed (photo by Janelle Rodriguez http://janelleshootsphotos.com)
Figure 2. #Carbonfeed hashtags (photo by Janelle Rodriguez http://janelleshootsphotos.com)

In #Carbonfeed, I used the cron to check whether the software had crashed and automatically reboot it, and, every 8 minutes, to alter the Twitter hashtag sets on the LCDs in order to change the dynamic of the work and create new opportunities for discourse. For a how-to on the cron and cron specifics, please jump to the bottom of this article.
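As a concrete sketch of what that crontab could look like (the script names and paths here are illustrative placeholders, not the exact #Carbonfeed files; the how-to below covers the syntax):

# every minute: check that the installation software is running, and relaunch it if not
*	*	*	*	*	~/Music/carbonfeed/cronjobs/check_sound.sh

# every 8 minutes: rotate the hashtag set shown on the LCDs
*/8	*	*	*	*	~/Music/carbonfeed/cronjobs/rotate_hashtags.sh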

Since #Carbonfeed, whenever I have found myself working on a sound installation that requires advanced software (e.g., Processing, Max/MSP, Logic Pro X), I have inevitably involved a cron. For example, in 2017, I worked with Harmonic Laboratory (URL) on a Mozilla Gigabit Foundation Grant (URL) project called City Synth, which turned the city of Eugene, OR into a musical instrument. The piece took live video feeds from Raspberry Pis (a collaboration completed by the South Eugene Robotics Team, URL), mangled them with a Processing sketch, and used the results to control a live synthesizer running in Logic Pro X. The work was up for a month in the Broadway Commerce Center in downtown Eugene, OR.

Figure 3. City Synth signal flow diagram

 

In 2019, my first solo exhibition at the Edith Barrett Gallery in Utica, NY (curated by Megan C. Austin and Sarisha Hogan and supported by funds from the Oregon Arts Commission) had six sound artworks running for three months. Since I was able to borrow Mac minis for the exhibition, I incorporated cron jobs and scripts to transform the Mac minis into glorified audio players for two of the works. Sound Memorial for the Veteran of the Vietnam War (URL) ran an Automator script upon startup that opened iTunes and played a playlist holding the six-hour-long work (Figure 4). I mixed the 8-channel work down to a stereo headphone mix in order to account for bleed from the other works inside the space. Relay of Memory (URL) used the same script to output computer audio to an FM transmitter, which played the work through nine radios hung on a wall (Figure 5). Cron jobs checked the status of the running software.

Figure 4. Sound Memorial for Veterans of the Vietnam War (photo by Janelle Rodriguez, http://janelleshootsphotos.com)
Figure 5. Relay of Memory (photo by Janelle Rodriguez, http://janelleshootsphotos.com)

The cron utility has been an amazing tool for my sound installation work. I can still recall driving home after installing Aqua•litative (URL) when I received a frantic call from the curator that there was a power outage. In the middle of the call, the power came back on, the computer turned on (a setting to start automatically after power failure), and a minute later the cron kicked in, opening all of the software. I didn’t need to turn around and drive back or walk the curator through how to turn on the computer software. A happy moment.

The cron has saved me countless hours that I know about, and I’m sure many more that I’ll never know about. I have even started to implement the cron in other ways to help with basic tasks in my daily life (see below for code specifics), such that the cron has helped me get closer to what Allan Kaprow describes as the “fluid” and “indistinct” “line between art and life.” Maybe being the overseer of digital automatons is what a 21st-century computing artist feels like (footnote 2).

CRON

This is a walkthrough of the crontab on Mac OS using Terminal. I’ve included some code specifics by theme below. If you use it, please share your work with me and how you implemented your cron! If you like what you’ve read, sign up for my mailing list (URL), follow my music on Spotify (URL), and please share it with friends.

Setting up a cron

Googling helped me in every way possible when working with crontab, but there are three basic steps: open an editor via Terminal, add your cron code (which requires setting how often it will run), and save the file. For more on Terminal, here’s a beginner’s walkthrough, Apple’s user guide, and a command cheat sheet.

1. Open up an editor to add a cron via Terminal

 env EDITOR=nano crontab -e

2. Inside the editor add the executable file to the cron job

 *	*	*	*	*	~/Music/citysynth/cronjobs/citysynth_cron.sh

The asterisks tell the cron how often to run: Minute, Hour, Day of Month, Month, Day of Week. A plain asterisk means “every,” so this entry runs EVERY minute. The command after the timing fields runs a bash script called “citysynth_cron.sh”. The cron below runs every 5 minutes and closes the bash window in Terminal.

*/5	*	*	*	*	osascript -e 'tell application "Terminal" to close (every window whose name contains "bash")';

3. Save and exit the cron.

Ctrl-O saves the file; Ctrl-X exits the editor. You must save the temporary file after editing. When you are done with the cron and want to remove the cron job, follow step 1 to open the editor, then delete the lines (using Ctrl-K) and save the file. For reference, see
http://www.maclife.com/article/columns/terminal_101_creating_cron_jobs

4. Want to know if you have a cron on your machine? List your crons in Terminal with

crontab -l

Adding a bash script.

If you decide to run a bash script via a cron, you’ll need to make the .sh file executable, that is, give the cron the ability to run the script. In Terminal, navigate to the folder where the .sh file lives and change its permissions with

chmod +x bashfile_name

where “bashfile_name” is the name of the .sh script (make sure to include .sh in the filename).

Below is an example of a .sh script that checks to see if an app is running and if not, reopens the app. I included the initial bash line of the file in the code.

#!/usr/bin/env bash

echo "cron job";

PROCESS=api_hashtags-polyphonic
number=$(ps aux | grep $PROCESS | grep -v grep | wc -l)

if [ $number -gt 0 ]
    then
        echo Running;
else 
	echo "sound is Dead";
	# open music player application
	cd ~/Music/carbonfeed/work/sound/; 
	sleep 2;
	open api_hashtags-polyphonic.app; 
fi

Doing it all in one line of code

For recent projects, I have opted to run code directly via the cron instead of relying on bash and AppleScripts. Below is code to start the Chrome web browser at a random time (to the second!) between 8:55 and 9 p.m.

55 20 * * * perl -le 'sleep rand 300' && open -a 'Google Chrome'

Remember, the timing of the cron comes first: Minute, Hour, Day of Month, Month, Day of Week. The cron fires at 8:55 pm, but has a random sleep time (between 0 and 300 seconds, read: between 8:55 and 9:00) and THEN opens the web browser.

Adding in an AppleScript

You can use your cron to trigger an AppleScript (.scpt file), just another way to execute commands on your Mac. Here’s an example of telling Safari to hit the spacebar (the target could just as easily be iTunes).

tell application "Safari"
	activate
end tell
delay 2
tell application "System Events"
	key code 49 -- space bar
end tell

Automator scripts (triggered by cron or system startup)

If cron and bash aren’t your thing, Apple’s Automator app lets you create automated processes straight from a GUI and then save them out as an application (Figure 6). You can also easily trigger the app via a cron or at system startup by going to System Preferences > Users & Groups > Login Items. Login items can be set to run Automator scripts upon computer startup, and configuring the computer to power up automatically after a power failure will help ensure a work stays running.

Figure 6. Automator Script to open iTunes and play a playlist on repeat.

Hope this was helpful. Please get in touch if you have questions or want to share your work with cron in art.

Footnotes
1. Wikipedia, “Cron”. URL: https://en.wikipedia.org/wiki/Cron, accessed August 27, 2020.

2. Allan Kaprow. Essays on the Blurring of Art and Life. University of California Press, Los Angeles. 1993. URL

So you want to distribute your music on streaming platforms?

The keywords in this question are distribute and streaming. Digital distribution is the delivery of your music to digital service providers like Spotify, Apple Music, Amazon Music, TIDAL, Napster, Google Play, and Deezer, among many other streaming platforms.

Digital distribution companies (CD Baby, DistroKid, RouteNote, Mondotunes, ReverbNation, Landr, Awal, Fresh Tunes, Tunecore, Chorus Music, Symphonic, etc.) help get your music onto these digital service platforms. Without a digital distributor, the doors to these outlets are pretty much closed. That said, distribution companies do NOT own your music. They may take revenue from royalties, but you retain your rights. Distribution companies are also NOT stand-alone stores (e.g., Bandcamp), although some offer this service (e.g., CD Baby).

This document is a walk-through going over the steps to digital distribution, from start to release. Over the course of the walk-through, we will create a track and then release that same track on a digital distribution service (all free). The goals of the walk-through are to:

  1. Understand the basics of digital distribution
  2. Take some of the fear out of releasing your music online
  3. Prepare for future self-release work

The walk-through should take about an hour, depending on your familiarity with audio software and how relaxed you are about generating names and titles.

The various components of releasing music in this walk-through consist of

  1. Create a track for release (we will create a pink noise track)
  2. Generate all materials associated with the release 
  3. Register with a digital distribution service (free)
  4. Distribute your work with the digital distribution service
  5. Follow any additional steps you can take (PRO registration, SoundExchange, claiming your artist profile, digital store setup)

Since the hardest part of the release process is the music creation (right?), let’s get over this hurdle by creating a noise track right now, in the next five minutes. Don’t worry: we will create a pink noise track (for relaxation and sleep) to help us skirt around personal aesthetics, notions of perfectionism, genre, and all the things that take time and intentional decision-making. If you already have a music track, just skip the next section.

1. CREATE A PINK NOISE TRACK FOR RELEASE

If you have a track for the release that you want to use INSTEAD of pink noise track(s), please skip this step. If not, read on. Open up Audacity (link) or any free software that can “generate” noise. Audacity is free and contains a noise generator.

Figure 1. Generate Noise menu in Audacity audio software

After selecting Generate > Noise…, choose “Pink Noise” (of course, you may choose White or Brown noise instead). Read about the differences here (link) (link). An amplitude of about 0.7 will work for pink noise, as this will help keep the loudness units of the track in the correct range for streaming services. Read about LUFS here (link).

Choose the length to be between 30 and 40 seconds. We want the length to be ABOVE 30 seconds because streaming services (like Spotify and Amazon) only count a “play” once thirty seconds of the track has been streamed. If you want your music to have “plays,” the tracks need to be longer than 30 seconds (footnote 1).

Figure 2. Noise generator settings in Audacity.
Figure 3. Track after generating pink noise.

Next, we’ll need to add fade-ins and fade-outs. Without fades (at least one at the end), some digital distribution services (and ultimately some streaming service providers) will not accept the track, as they do not allow hard ends to tracks. (Note: if you are specifically creating a loopable track, then you should add “Loopable (No Fade)” to the track title to help get around this hard-cut moderation flag.)

Figure 4. Add two-second fade-in in Audacity using Effect menu
Figure 5. Add two-second fade-out in Audacity using Effect menu

Export the track as a 16-bit, 44.1k .wav file. You may consider this exported audio file our “mixdown.” (Note: while some distribution services allow higher-quality tracks for import, our track settings get us close to our target output for this release.) We are near finishing our track, but we aren’t done. We should first listen to the audio file, and then we may still want to “master” the track, or at the very least check our loudness units (LUFS) relative to our target (streaming services) before preparing the file for distribution (footnote 2).
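As an aside, the generate-fade-export steps can also be done in one pass from the command line. Here is a sketch assuming SoX is installed (the filename and 35-second length are arbitrary choices; the Audacity route above works just as well):

# 35 seconds of pink noise at 0.7 amplitude with 2-second linear fades, written as a 16-bit/44.1 kHz WAV
sox -n -r 44100 -b 16 pink-noise.wav synth 35 pinknoise vol 0.7 fade t 2 35 2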

We can meter our track by opening the file inside any software that can meter LUFS and, ideally, control gain. If you need a free meter to quickly assess integrated LUFS, try Orban (url). Using Logic Pro X, I dragged and dropped the audio file onto a track and inserted the stock “Loudness Meter” plugin on the stereo buss. Playing back the track, the short-term and integrated meters read roughly -13.4dB LUFS. Since most streaming services use integrated LUFS to alter the volume of tracks, a good range for most services is between -12 and -16dB LUFS. At the time of writing, Spotify uses -14dB LUFS. (url) You may choose to alter the gain for the track or keep what you have. Since pink noise is already “mastered” in the production sense that it has equal energy per octave, I am choosing not to add any EQ, compression, or limiting, and I will instead stick with -13.4dB LUFS on my output meter.

Figure 6. Loudness Meter, measuring short-term and integrated LUFS.
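If you would rather check integrated LUFS from the command line than inside a DAW, ffmpeg’s loudnorm filter can run a measurement-only pass (a sketch assuming ffmpeg is installed; the filename is a placeholder):

# prints input integrated loudness (LUFS), true peak, and loudness range without writing a file
ffmpeg -i pink-noise.wav -af loudnorm=print_format=summary -f null -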

If you did happen to alter the gain, you will either want to export out the “mastered” version as .wav or .aiff from your audio software or revisit Audacity to re-export another “mixdown” at a lower volume. Again, you will want to export out audio using uncompressed audio file formats (.wav or .aif), at least until you are ready to deliver to the distribution service.

2. GENERATE ALL MATERIALS ASSOCIATED WITH THE RELEASE

The materials for a release with a distributor include:

  1. Audio file in the correct format (FLAC, .wav, .mp3, etc) and output target volume (e.g., -14dB LUFS)
  2. Cover art for the single/EP/album
  3. Track title
  4. Artist name
  5. Album/EP title (if necessary)
  6. Genre (to categorize the music)
  7. Label name (if any)

While we created a “mastered” version of the audio to be released, the target format may need to be altered before distribution. Services like CD Baby allow uncompressed formats like .wav, but others like RouteNote only take .flac or .mp3 file formats. Since FLAC is a lossless format and the distribution service for this walkthrough is RouteNote, let’s convert our “master” into a .flac file. Audacity handles exporting to this format: just open up your mastered track in Audacity and export the audio as FLAC (footnote 3).
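If you prefer the command line, ffmpeg can handle the same conversion (a sketch; the filenames are placeholders, and because FLAC is lossless no audio quality is lost):

# convert the WAV master to FLAC for upload
ffmpeg -i pink-noise-master.wav -c:a flac pink-noise-master.flac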

Figure 7. FLAC settings in the Export Audio window in Audacity software

Note: you do not want to convert to .mp3 for your release, as this not only reduces the quality but may also introduce short bits of silence (10-20ms) at the beginning and end of your audio track(s). In the case of “Loopable (No Fade)” tracks, .mp3 conversion can actually print silence into the track and cause an audible break at the loop point on streaming services due to the silence added by the codec conversion. This has nothing to do with buffering.

The cover art doesn’t need to be fancy; it just needs to fit the specifications. At the time of this writing, RouteNote has an image database that’s free to use and a photo resizer. If you want to make your own, RouteNote requires 3000×3000 pixel .jpg files. Just find your favorite pink color (RGB or hex) and fill a 3000×3000 pixel canvas with it. I use Adobe Photoshop, but any free image editing software will do. Most other distributors also have free tools you may use to generate cover art. And should you choose to add images as part of your cover art, make sure you have permission first (again, RouteNote has a free image database).
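For a plain solid-color cover, the command line works too. A sketch assuming ImageMagick is installed (the hex color and filename are arbitrary choices):

# 3000x3000 pixel solid-color .jpg for the cover art
convert -size 3000x3000 xc:'#f4a6c1' cover-art.jpg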

Figure 8. Color cover art (what I didn’t use but this image is 3000×3000 pixels!)
Figure 9. My 8-minute cover art for the release made in Photoshop

For this walkthrough, naming may be the hardest part. Maybe? Did I prime you to overthink it? Come up with an artist name and a track title. Just relax and let the word association flow. Seriously. Track titles can be anything— scientific, “Equal Amplitude Per Octave”; direct, “Pink Noise with Fades”; spiritual, “Soothing Pink Noise”; or cheeky, “Pink Panther’s Pink Noise”. The point is to pick a title and move on. The walk-through is about getting comfortable with the process, not getting bogged down by the details— that is, a “perfect” name. Sidenote: you cannot use “Untitled” or “No Name,” as these generic titles can be flagged by the distributor. You should do a quick word association for the artist name as well… remember, you never have to use this artist name again, but you must pick a name.

Afterward, you should get on a streaming service (here) (here) or (here) and do a quick check to make sure your new artist name doesn’t match an existing artist’s (unique names make searching easier, and your work/streams will be attributed to you without added hassle).

Here’s what I came up with for my track (i.e., you shouldn’t use these; now on Spotify):

Artist: Sounding Human
Album/EP title: “Deep Wave” EP (can also be same name as your track)
Track 1: “Deep Wave Pink Noise” and
Track 2: “Deep Wave Pink Noise (Loopable, No Fade)” 

Ready to move on?  What? No?!  Seriously? You don’t have a name yet? Use the letters from your name in this anagram maker. (url) Take one of the top five that appears. This is your artist name.

3. REGISTER WITH ROUTENOTE DISTRIBUTION SERVICE

Register with RouteNote (url). On the RouteNote page, click on the “Join RouteNote” button. All you need is your email and a username. If you already have an account, just log in (footnote 4).

For any release, you have to pick a distribution service (DistroKid, RouteNote, CD Baby, Mondotunes, ReverbNation, Landr, Awal, Fresh Tunes, Tunecore, Chorus Music, Symphonic, etc.). You don’t have to choose the same service for each release, but you cannot release the same music through multiple digital distribution providers. I’ve chosen RouteNote as it’s free to release, will keep your music up after you release, and satisfies the purposes of this walkthrough. Fun fact: you can also share revenue with this service. Please note that all distribution services take some sort of cut, whether upfront in fees or later on in streaming. You always retain the rights to your music. For a full list comparing all the services, check out Ari’s Take (url).

4. DISTRIBUTE YOUR WORK 

Ok. You’re ready to create your release with the distributor. Just log in to RouteNote and click Distribution > Create Release.

Figure 10. RouteNote Distribution menu

You’ll need to add your track title (or EP or album title) to the release. Don’t worry about the UPC, as RouteNote assigns you one for free. A UPC is a universal product code associated with the release. Think of it like the barcode you see on a CD or LP. The UPC is specific to YOUR single/EP/album. Some services, like CD Baby, charge for this; it’s free here.

Figure 11. RouteNote initial Release Data input fields

After entering the initial title and receiving a UPC, there’s a four-step process to the release, which we prepped for in step 2.

Figure 12. RouteNote Release overview (four steps)

1. Album Details. See the image below for all fields. You may choose to use your own name for the C and P copyrights, although you may use the artist name. C is for the underlying composition (the written music) and P is for the recording (what the artist records). Often, the C and P lines on a record are attributed to the record label (e.g., Sub Pop, Matador), but not always. You’ll also need a genre; for something like pink noise, maybe “Easy Listening”? A note about the release date: if you are setting this up for a music release, you’ll want to time it in advance and have an album release strategy. As Bobby Schenk, digital marketing manager for Dub-Stuy records, puts it, “Include the 7-10 day delay in your release strategy. Release earlier rather than later with a scheduled release, as you’ll need to align with your PR machine.”

Figure 13. RouteNote Album details screen

2. Add Audio. This is where you’ll upload ALL audio files. You’ll need the track name, and you’ll be asked to assign some additional metadata to each track (if you’re uploading more than one track). Since the track is pink noise with no lyrics, there will be no language associated with the track. Your track will be assigned an ISRC (‘International Standard Recording Code’), and that ISRC is attached to the recording, not the underlying composition. ISRCs are one important way of tracking streams (read: royalties), as they are individual barcodes for the musical recordings. Read about ISRC (url). Read more on composition vs. recording (url).

Figure 14. RouteNote Upload Track screen
Figure 15. RouteNote Track Metadata screen

3. Add Artwork. We’re halfway there! Next, we need to upload our album/single cover art. Remember, the guidelines call for hi-resolution files; for RouteNote that means 3000×3000 pixel .jpg files only.

Figure 16. RouteNote Upload Artwork screen

4. Choose Your Stores. This part should be simple. What services do you want your music on? Spotify? YouTube? TIDAL? All of them? You can be picky, but often the default is to distribute on all platforms all over the world. RouteNote makes it easy with one button click.

Figure 17. RouteNote Store selection screen
Figure 18. RouteNote Territories selection screen

Now you’re ready to distribute! All you need to do is check over your work and then click on “Distribute Free.” And that’s it! RouteNote will take a cut of your streams (15%) but there are no upfront costs to the process.

Figure 19. RouteNote Completion Screen. Two options for distribution (free vs paid).
Figure 20. RouteNote Post-Distribution. In Review details

5. NOW WHAT?

The release will take about a week or so to go live, at which point you should receive an email. If there are issues with your work (track titles too generic, audio file has copyright issues, etc.), you will receive an email, and you’ll need to resolve all issues before the release can move forward (footnote 5).

While you wait for your release to go through moderation, here are a few things you’ll want to consider as part of any release that you do in the future. Maybe not part of this walkthrough, but certainly if you are getting serious about releasing your music.

1. Check out new music. Listen to the walkthrough release, Sounding Human, on Spotify, 🙂

 https://open.spotify.com/album/2qUZchgXBIWkz7Di6jdFiY?si=_2LF9vwmTICyeSXMCzATTg 

2. Register your work(s) with a Performing Rights Organization (PRO)

If you’re not already part of a PRO (e.g., ASCAP, BMI), you should strongly consider it. You’ll need to be registered with a PRO in order to register your work for publishing (admin) royalties, among other things. Here’s some reading about PROs (url) (url). Quick note: a composition can have multiple recordings (each with its own ISRC), but only one ISWC. What’s an ISWC? Read here (URL).

3. Register your work on SoundExchange

At this point, your music will appear on streaming platforms that have two types of plays: interactive and non-interactive. Interactive streams are where people hit the play button on your music (or it plays from a playlist). However, digital distribution services like RouteNote cannot collect on non-interactive streams (radio-type play). SoundExchange (url) is the collector of these royalties. Registration is free, but you’ll need to upload and claim all your recordings if you want to collect within this market. Want to learn more? Read here (URL)

4. Claim your Artist Profile 

After your release is live, you’ll want to decide whether to claim your artist profile. Doing so allows you to update your profile picture, add a bio, create artist playlists, and even track who is listening to your music every day. Claim your artist profile on Apple (url), Spotify (url), or Amazon (URL).

5. Start a Digital Store

Some services let you sell your music directly to fans/consumers, but many are not digital stores. In this case, you may consider a digital store like Bandcamp (url), where you can sell your music as direct downloads, all from one location.

That’s basically it! In one hour or one pot of coffee (hopefully that’s all it took), you’ve gone from zero to release. If you dug this walkthrough, please share it, follow my music on Spotify (url), and pay it forward in your own musical community. Thanks for being an active participant!

 

Footnotes:
1. 30 seconds for a stream count seems to be an agreed-upon time length. I ran a test on my EP Software 1.17 (Spotify link) with the final track clocking in at 26 seconds. After a year and with friends streaming this track across multiple services, the track still has 0 plays for royalties. If you want to dig deeper on Spotify’s algorithm, which helps support the 30-second rule, check out this article (url).

2. Streaming services typically target -12dB to -16dB integrated LUFS (loudness units relative to full scale). All streaming services use LUFS to act as your own personal DJ, helping adjust the volume between tracks that may come from different genres, eras, or artists. It’s become common knowledge that Spotify uses -14dB LUFS for its target. This means that if you crush your track to an integrated LUFS (overall average loudness) of -8dB, Spotify can very well turn your track down by 6dB, cutting its level in half… meaning you lose all that intensity you worked so hard for. Use a meter! (url)
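The adjustment is simply the target minus your measured value; for the numbers above:

# gain a -14 LUFS target would apply to a track measured at -8 LUFS integrated
awk 'BEGIN { printf "%+.1f dB\n", -14 - (-8) }'	# prints -6.0 dB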

3. Make sure you always listen to your work after you export it! You want to make sure everything sounds correct before you upload your audio file. Do NOT distribute without listening to your final work first.

4. RouteNote registration full disclosure: I included a referral link for registering with RouteNote in step 3 (referral ID: 2f79120f). You get your cut regardless; RouteNote just takes a percentage of its own earnings and passes it on to me. To date, I have earned $0.00 from this. Maybe someday.

5. Note for our walkthrough that a noise track released via RouteNote will not appear on Apple Music / iTunes. I inquired with RouteNote directly, and here’s what their moderation team had to say (email dated 7/24/2020): “Unfortunately iTunes no longer accept white noise/nature sounds content due to the high amount that was being uploaded to them. They have asked us to no longer send it to them, for this reason the store was blocked.” If you go through a different distribution service, you can get noise albums on Apple Music (case in point, here’s an album I created for an Oregon-based birth center: https://music.apple.com/us/album/calming-sounds-for-pregnancy-birth-and-parenting/1522834278)

Challenge Song: Supertramp “The Logical Song”

The students in my Audio Recording Techniques III course (Spring 2019) at the University of Oregon had ten weeks to recreate a recorded song of their choice. They voted to reverse engineer Supertramp’s “The Logical Song” from the band’s 1979 album, Breakfast in America. The goal was to get the song as close as they could to the original recording. They recorded, overdubbed, mixed, mastered, and even played the parts themselves on all elements of the song. I am amazed at what they accomplished. Enjoy!

Kyma: Encapsulation

While teaching Data Sonification at the University of Oregon, we talked a lot about inference preservation, communication of ideas, filtering and bias of data, and, by extension, tool building as a process for supporting sonic hypotheses. To that end, I wanted to empower students with their own tools inside Kyma, so we spent a class walking through the process of Encapsulation.

Encapsulation allows one to take a Sound and “create a simpler, cleaner interface for that Sound by reducing the number of controls and putting them all on the same level. Encapsulating a Sound does not make it more computationally efficient, but it does present a clearer control interface between you and the Sound” (Kyma X Revealed 2004: 293). Or, another computer music way to say it…
Max/MSP::abstraction
Kyma::encapsulation

For those familiar with NeverEngine Labs, one can understand how encapsulation can be used to create some really great Sounds that serve compositional, sonic, aesthetic, and educational goals. Encapsulated Sounds can help one save time, grow as a practitioner, and engage with the growing Kyma community. Tool building and sharing also invite positive activities like research, collaboration, and publication. The Kyma X Revealed section on Encapsulation (pp. 293-303) is a great starter but can be a difficult first reference for the uninitiated. This article seeks to provide a current walkthrough of encapsulation that supplements the existing documentation.

What will you need? Head over to the Kyma Community Library (https://kyma.symbolicsound.com/library/encapsulation-walkthrough/) to grab the walkthrough files. Beyond this article, you will find Kyma X Revealed (pp. 293-303), any software that can create a .png icon (e.g., Adobe Illustrator, Photoshop), and your design-thinking hat helpful.

The process of Encapsulation follows five basic steps:
1. Create a Sound(s) to encapsulate
2. Define your controls and change the values (numbers or !EventVariables) to ?vars
3. Create a new class (Action > “New class from example”)
4. Add default values to the controls to open up Class Editor
5. Add descriptions and icon, set parameter types, and close to Save Class.

Step 1. I created a simple Sound to encapsulate (Figure 1).

Figure 1. Kyma Sound, a one-sample-wide impulse N samples long, ready for Encapsulation.

The Kyma Sound to encapsulate, a one-sample-wide impulse N samples long, is meant to control the amplitude of a single band in a spectral analysis of the same sample length (e.g., 256 samples). Bearing this use case in mind, where the encapsulated Sound will affect a spectral analysis’ amplitudes, Figures 2 and 3 depict the parameter fields of the two Sounds that create the effect (SyntheticSpectrumFromArray and DelayWithFeedback, respectively).

Figure 2. SyntheticSpectrumFromArray Sound parameters ready for encapsulation.
Figure 3. DelayWithFeedback Sound parameters ready for encapsulation

Step 2. I labelled the most helpful controls for the encapsulation process as green ?variables (Figures 2 and 3). Green ?variables are what enable a user to access parameter fields after encapsulation. The three user parameters, ?ImpulseAmplitude, ?samples, and ?Delay, provide the user with the ability to control the amplitude of any single partial in a spectral analysis of n-window size. SyntheticSpectrumFromArray (Figure 2) creates an n-sample-long spectrum with only one envelope. Since Kyma handles spectra in the time domain as Amplitudes in the Left channel and Frequencies in the Right channel, we treat the Partials parameter field more like the sample length of the analysis. With the Envelope parameter field set to 1, a single envelope is generated; there is only one partial to control, with all other envelope amplitudes set to 0. That single envelope’s gain is controlled by ?ImpulseAmplitude. The Left Channel is selected, which means the SyntheticSpectrum Sound will only affect the spectrum partial’s amplitude, not its frequency. [See Gustav Scholda’s in-depth video for how spectral analysis works in Kyma and how to spectrally manipulate frequency and amplitude.]

?samples is meant to match the length of the spectral analysis it will later control. The delay length is also set to the same length, as DelayWithFeedback enables the single envelope to “scrub” across the sample length. In essence, ?Delay enables a user to select which partial’s amplitude they will affect.

Footnote: an esoteric note about this particular Sound. The Amplitudes parameter field of SyntheticSpectrumFromArray expects an array. Because the variable ?ImpulseAmplitude is a green ?variable, Kyma will prompt the user and ask whether the ?variable is an “Element” or an “Array.” Because the Sound is meant to control a single partial, the ?variable is an “Element,” not an “Array.”

Step 3. Time for Encapsulation. From the main menu, select Action > New class from example (Figure 4).

Figure 4. Kyma Action menu, New class from example, which encapsulates the chosen Sound.

Step 4. The menu selection will then generate a user prompt to add default values to the three green ?variables (Figure 5). All variables are “Values,” and whatever is entered becomes default values that one may alter later. For now, one may enter 1 for ?ImpulseAmplitude, 256 for ?samples, and 0 for ?Delay.

Figure 5. Green ?variables default value user prompt.

Step 5. The real encapsulation work begins: adding the Class name, descriptions, and icon, and setting the Input/Output type for formatting look and feel. Figures 6 and 7 depict the encapsulation editing process before and after.

Figure 6. Encapsulation editor before edits are made.
Figure 7. Encapsulation editor after edits are made.

The various fields altered for the encapsulation are as follows. Name is the name of the class, which can be searched for. Class description is the overall description, which can include an overall sonic description, use cases, and user-specific comments.

Parameters are designated before creating a new class. Each ?var ends up as a parameter field; for example, ?samples becomes the parameter field “Samples.” Naming a ?var sets the Class parameter field name. The parameter field in the Class editor contains our default value from the previous step, but it can be changed in the editor. In addition, the Parameter options in the left tab enable one to set the Type, Field Type, and Category of the Parameter, altering how the parameter field behaves and looks. Figure 7 depicts two of the three parameter fields and these options.

Close the Editor window to save the class. You may always edit the class by choosing “Edit class” from the Action menu (Action > Edit class). Figure 8 shows the completed encapsulated Sound.

Figure 8. Kyma Sound after encapsulation. Three user fields generated in the process.

Example

Figure 9 depicts our new One Sample-Wide Impulse Class played through a 256-sample-wide oscilloscope. Since the delay is set to 0.5, we see our single sample residing in the middle of the oscilloscope (the 128th sample). Because the single sample may be moved in time (Delay parameter) and has gain control (ImpulseAmplitude parameter), the Class may be used as a partial picker in spectral analysis.

Figure 9. One Sample-Wide Impulse Class running through an Oscilloscope at full volume, with delay set to 0.5

Figure 10 depicts spectral analysis in Kyma, where amplitudes and frequencies are divided between the left and right channels. The first partial is displayed as the first sample, the second partial as the second sample, etc. Understanding this concept, we may use One Sample-Wide Impulse to control (read: multiply) the amplitudes of the left channel in a spectral analysis.

Figure 10. Live spectral analysis (256 samples) in Kyma shown within an oscilloscope. The left channel holds amplitudes, and the right channel holds frequencies (multiplied by halfSampleRate).

Figure 11 shows how an encapsulated Sound is used to multiply against amplitudes of a Spectral Analysis.

Figure 11. One Sample-Wide Impulse multiplying the Amplitudes of a spectral analysis, so the Class functions as a single partial picker.

Figure 12 shows before-and-after oscilloscope views of the partials, with single-sample-wide (one partial) amplitude control. Delay is set relative (0-1) to the 256 partials in the analysis.

Figure 12. Before and After oscilloscope views of the One Sample-Wide Impulse Class multiplying the Amplitudes of a spectral analysis.

Two audio examples using Beck’s “Dreams” depict the One Sample-Wide Impulse Class in use as a partial picker.

Audio 1. Beck “Dreams” running through live spectral analysis using a 256 sample window. No partial picking.

Audio 2. Beck “Dreams” with the One Sample-Wide Impulse class controlling playback of a single partial of the 256 sample live spectral analysis. Audio sweeps from a singular low partial to high partial selection and then back down again.

Samuel Pellman’s Tower of Voices

By Jon Bellona and Ben Salzman. (Note: This post is a part of a presentation with Ben Salzman at the 2018 Kyma International Sound Symposium in Santa Cruz, CA.)
[Above photo: Nancy L. Ford]

On the Music faculty at Hamilton College since 1979, Samuel Pellman devoted his life to making music with students and turning them on to art. I was happily one of those students. After all, it was Sam who led me to pursue my graduate degree in Music. I have since earned my Music doctorate and had the opportunity to catch up with Sam at a few Kyma conferences over the past few years.

Sam was a sturdy mentor and friend, one I knew I could bounce any idea or question off of. He was just someone I counted on being there when I could use a little help. Tragically, all that changed in November 2017 when Sam was struck and killed while out riding his bike. As former students, Ben (H’14) and I (H’03) look back and see there is much to be gained from Sam’s work, his presence, and his joie de vivre. Sam’s ideas are woven into the sonic fabric of Kyma. Sam may have been at times quiet and his voice soft, but his work remains a powerful force in the sonic arts. His pitch design for the Flight 93 National Memorial (Tower of Voices) sonically embodies the dead in a way that pushes sound to the forefront of remembrance; the National Memorial is one of the first in the country to embrace sound as its defining factor. Sam also developed digital interactive sound installations in the ’80s and ’90s, when MIDI and digital sensors were first coming online. And Sam’s music hoists microtonality and the mathematical roots of equal temperament to the aural top, creating interesting and complex structures amidst electronic synthesis techniques. We will discuss the threads in Sam’s work, especially those using Kyma, and how these threads intertwine with his ultimate work, Tower of Voices.

This webpage contains embedded links that jump to the musical or cultural reference. We encourage you to listen, click, and read along with us as we talk about Sam Pellman and his work. Feel free to skip to the bottom for videos and links.

///// Sam at the Kyma International Sound Symposium (KISS) /////

Over the last eight years, Sam presented work at KISS five times: 2010, 2012, 2014, 2015, and 2017. Sam grew up an organist; he regularly performed at the Clinton United Methodist Church and during convocation ceremonies at Hamilton College. Yet much of Sam’s work at Kyma conferences was interspersed with whole-number ratios and microtonal temperaments. Sam compositionally split the twelve divisions of the octave as much as he performed within their boundaries. For example, Sam’s various Peculiar Galaxies, which are part of his Selected Galaxies (KISS 2012; Ravello Records 7912), use pitches “based on a dorian scale, tuned in 5-limit just-intonation, that is friendly to both quartal and tertian harmonies (i.e., harmonies built of fourths or thirds, respectively)” (Pellman 2012a; 2012b; 2012c).

The Selected Galaxies album also includes Selected Cosmos (KISS 2014), a two-tone sonification of human DNA (as reported by the Human Genome Project) made using Kyma. Sam supports these two sequences of pitch and timbre with drone timbres, whose pitches are Shepard-filtered tones “54 octaves above the sound emitted by an active galactic nucleus in the Perseus Cluster” (Pellman 2015).

///// Tower of Voices /////

Sam’s ultimate work, and perhaps the one that will become his most memorable, is his pitch design for the Tower of Voices. The Tower of Voices is a visual and audible reminder of the heroism of the 40 passengers and crew of United Flight 93, which was hijacked and crashed in Shanksville, PA, on Sept. 11, 2001. According to the National Park Service, “there are no other chime structures like this in the world” (NPS 2018). The Tower is 93 feet tall with 40 chimes measuring from 59 1/4″ to 97 7/16″, with walls “designed to optimize air flow… to reach the interior chime chamber” (ibid). Sam’s pitch design for the forty chimes “allows the sound produced by individual chimes to be musically compatible with the sound produced by the other chimes in the Tower. The intent is to create a set of forty tones (voices) that can connote through consonance the serenity and nobility of the site while also through dissonance recalling the event that consecrated the site” (ibid). While Sam designed the frequencies of the forty chimes, the chimes themselves were built by Gregg Payne, an artist based in Chico, CA.

The Tower of Voices has eight columns with five chimes in each column. In his files, Sam indicated preferences for particular groupings of chimes, which he collected horizontally. For tuning the chimes, Sam based his work on whole-number ratios: “The tuning ratios indicate the frequencies of the chimes relative to a middle-C of 264 hz. The chimes are tuned according to a system of just intonation, based on whole-number ratios” (Pellman 2017). Sam went through five versions of his tuning system before settling on the final one, shown below in Table 1. Each cell in Table 1 gives the pitch, frequency, and whole-number ratio (relative to C = 264 Hz) of a chime tone.

Table 1. Tower of Voices Pitch Design. Each cell gives pitch, frequency (hz), and whole-number ratio relative to C = 264 hz; the forty cells are laid out eight across and five down.

C 264 (1/1) | E 330 (5/4) | E 334 (81/64) | F# 372 (45/32) | G 396 (3/2) | B 495 (15/8) | B 501 (243/128) | C 528 (2/1)
D 149 (9/16) | G 198 (3/4) | D 293 (10/9) | D 297 (9/8) | E 330 (5/4) | F# 367 (25/18) | F# 372 (45/32) | G 396 (3/2)
G 198 (3/4) | E 330 (5/4) | E 334 (81/64) | F# 372 (45/32) | G 396 (3/2) | B 495 (15/8) | B 501 (243/128) | C 528 (2/1)
C 264 (1/1) | D 297 (9/8) | E 330 (5/4) | E 334 (81/64) | G 396 (3/2) | B 495 (15/8) | C 528 (2/1) | C 535 (81/40)
C 264 (1/1) | C 267 (81/80) | D 293 (10/9) | E 330 (5/4) | E 334 (81/64) | F# 367 (25/18) | F# 372 (45/32) | G 396 (3/2)
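
To make the ratio column concrete, here is a small illustrative snippet in generic Smalltalk (Kyma's scripting language is a Smalltalk dialect); the variable names and the collect: idiom are mine, not Sam's, and the first row of Table 1 is used as the example.

"Illustrative sketch only: derive chime frequencies from just-intonation ratios relative to C = 264 hz."
| base ratios frequencies |
base := 264.
ratios := {1/1. 5/4. 81/64. 45/32. 3/2. 15/8. 243/128. 2/1}.     "first row of Table 1"
frequencies := ratios collect: [:r | (base * r) rounded].
"frequencies -> 264 330 334 371 396 495 501 528, matching the first row of Table 1
 to within a hertz of rounding (Table 1 lists 372 for the 45/32 chime)."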

Sam created several sonic prototypes/models throughout the build. Aluminum chimes were recorded in Tuzigoot, AZ, and Sam used these recordings to model the sound of the forty chimes. One of the most striking auralizations is his October 2016 prototype, built using Kyma, which is still available on the National Park Service website for Tower of Voices. That same prototype can also be heard below.

Inside Kyma, Sam used multiple Sample Sounds played through a single MIDIVoice object that was triggered by a custom Max/MSP patch (sending MIDI note-on/off messages). Sam's model uses a lot of memory, and his Multigrid version is only able to play back on a Paca(rana). To optimize playback on just a Paca, we leveraged the MIDIVoice Sound script; in turn, we were able to keep all the ratios in a single script array, referencing a single 264 hz audio sample.

While the Tower of Voices model doesn't represent Sam's final pitch design, Sam did create a final auralization before his death based on his final v05.1 tuning design. For this aural model, Sam relied on a custom Max/MSP patch playing back buffers of five audio files at various speeds (read: frequencies). The model runs by generating random timing bangs, where each bang selects a new entry from a Coll object (an array list of pitch ratios and audio file names). The output of the Coll object first selects the appropriate audio sample to play back and then alters the sample playback speed based upon the pitch ratio. A final bang plays back the audio file. A poly~ object in the patch allows multiple chimes to be played back simultaneously.

Sam never made a Kyma version of his latest tuning design, and since one can hear digital artifacts in the Max/MSP model, we decided to implement Sam's tuning design inside Kyma. We wanted to hear the tuning design with the same high fidelity that Kyma delivers. Using our optimized Kyma Sound with its single MIDIVoice Sound script, we took Sam's v05.1 tuning design and entered its ratios into the script's pitch array. The result is all at once beautiful and all Sam. We recorded a short bit of Sam's final tuning design using Kyma, which can be heard in the audio player immediately below.

Sam's compositional work is a by-product of his tireless passion for students and ideas, for collaborative learning and theoretical concepts, and for the intersection of science and the arts. Sam's upbeat ethos, positive attitude, and dedication to the arts' capacity for hope, optimism, and meaningful dialogue have helped create a catalog of meaningful works. We know Sam will live on in the memory of the Tower of Voices, where, each time a chime rings in remembrance of those who lost their lives on Flight 93, his ideas sound out across the valley in Shanksville, PA.

///// More About Sam /////

Sam Pellman studied with David Cope, Karel Husa, and Robert Palmer. Sam was co-director of the Studio for Transmedia Arts and Related Studies at Hamilton College and oversaw the development of Hamilton's multi-million dollar Kennedy Center for Theatre and the Studio Arts. Sam served as Associate Dean of the Faculty and as a Posse mentor, and was the recipient of the Hamilton College Alumni Association's 2015 Distinguished Service Award. Sam's work can be heard on the Innova and Ravello record labels and found at: http://academics.hamilton.edu/music/spellman/MfS/MfS.htm

///// Selections of Sam’s work /////


Peculiar Galaxies: UGC4881 from Samuel Pellman.


KISS2014 — Selected Cosmos: Sounds of Life, Samuel Pellman from Symbolic Sound.

///// References /////

///// Additional Links /////

Challenge Song: Massive Attack “Teardrop”

The students in my Audio Recording Techniques III (Spring 2018) course at the University of Oregon had ten weeks to select and recreate a recorded song of their choice. They voted on producing a recreation of Massive Attack's "Teardrop." The goal was to get the song as close as they could to the original recording. They recorded (and played parts!), mixed, and mastered all elements of the song in just under ten weeks. I am so proud of my students and what they were able to do. I am posting it here to give myself (and, I hope, you!) a smile when one is needed.

Reading CSV/TSV files in Kyma (part 2 of 2)

In the previous article (part 1 of 2), we explored how to get a single column CSV [comma-separated-values] file working with Kyma. We used this single stream of numbers to generate MIDI messages that controlled pitch, amplitude, timbre, location, and note duration. This article builds upon that knowledge, adding multi-column CSV/TSV [tab-separated-values] files as parameter arrays inside Kyma.

This article will tackle four CSV/TSV data topics.
1. Importing TSV files into the Sample Editor (data as waveform)
2. TSV controls MIDI messages (MIDI Script)
3. TSV controls EventValues (MIDI Script)
4. TSV controls spectra (MIDI Script)

Quick terminology: CSV and TSV files structure data similarly, but each uses a different delimiter (comma vs. tab) within the file. It is important to note that Kyma works extremely well with TSV files. If you use multi-column CSV files, you'll need to either convert your file to TSV (I recommend csvkit) or write CapyTalk that ignores the comma characters. For purposes of this article, all example files use TSV format.

Now’s a good time to download [link to Community Library] [but for now, download is on jpbellona] the example files. Of course, feel free to continue reading without the examples.

1. Importing TSV files in the Sample Editor (data as waveform)

Figure 1. Kyma Sample Editor with Generate tab open

The Sample Editor in Kyma 7 (File > New > Sample file) allows you to generate audio from a variety of inputs (zero, connect points, fourier, polynomial, impulse response, and data file). Using the "from data file" selection in the Generate tab (Figure 1), we can import data from a TSV file, translating the TSV data into samples of amplitude data (i.e., an audio waveform).

Figure 2. Generate Tab of Kyma Sample Editor.

The column number input (Figure 2) selects which column to import, and clicking "Insert" translates that single column into amplitude data. The included file 'BobJames_1-10000.tsv' has two columns, and each column contains 5000 points of data. To show how TSV data is translated into amplitude information (-1 to +1), this .tsv file contains amplitude data taken from an audio file: the first column contains the first 5000 samples of audio, and the second column contains another 5000 samples. Click "Insert" on column 1 and watch as the audio waveform is recreated before our eyes.

Figure 3. 5000 points of amplitude data imported in one click.

Listening back, we hear a kick drum.

Regardless of the values in the TSV file, data is normalized between -1 and +1 in the Sample Editor. Because one data point is translated into one sample of audio, treating these files as short wavetables may be best (4096 samples, anyone?).

Figure 4. Importing column 2 of data file with 5000 points of amplitude data.

Column 2 contains another 5000 values. Below is the audio playback of the Figure 4 waveform, with loop on.

Other, non-audio data can also be imported. The next two examples use 'grid_74_73_61_60_59_48_47_46_36_35.tsv', which contains Palmer Drought Severity Index (PDSI) values for grids within California over the last 2000 years. Each line (data point) represents a single year, and each column accounts for a different grid in California. There are 2003 data point arrays, and each array has ten values. As we import different columns of like data, we can see and hear how the PDSI for different locations in California changes, especially when listening with loop on. Looped continuously, each file generates a nice sonic texture. PDSI grids 61 and 36 (TSV columns 3 and 9) are included below.

Figure 5. Palmer Drought Severity Index (PDSI) data from Grid 61 (near San Diego, CA)
Figure 6. Palmer Drought Severity Index (PDSI) Grid 36 (near San Francisco, CA)


2. TSV controls MIDI messages
Sections 2-4 describe how TSV data controls different sound parameters (MIDI messages, EventValues, spectra tracks) within Kyma. All three sections use the MIDI Script. While this article is not dedicated to Scripts per se, variables (?var), EventValues (!Pan), and MIDI messages may all be controlled by TSV data from within the MIDI Script using CapyTalk. I certainly do not profess to know much about Scripts; however, we can use Scripts to translate our data arrays into sound event controls over time.

Building on the first article, we convert an entire TSV data line into an array (instead of single-value, one-column variables). The conversion to data array variables takes two steps. First, we open the file, calling textFileStream when we declare our file variable.

f := 'SanFran1990-2015_rows2-5.tsv' asFilename textFileStream.

We cannot use readStream here, because we want CapyTalk that will save an array of parameters in one function call. The function we want is nextParameters.

lineValArray := f nextParameters.

nextParameters creates an array of elements for the entire line. Since commas are treated as elements by nextParameters, be sure to use only TSV files.

Figure 7. First line of four-column TSV file saved as data array.
Figure 8. First line of four-column CSV file saved as data array. Notice how commas are treated as values within the array.

The rest of the code in the Sound uses the data array to construct a MIDI message. Instead of one variable, however, there are multiple data points we may utilize within each MIDI message. To access a data point, use (ArrayName at: Index) syntax. For example,

register := ((lineValArray at: 2) roundTo: 12).

The TSV array, lineValArray, returns its second element (the 2nd column of the TSV), which we round to the nearest multiple of 12. In the example, we use this value for octave transposition within our MIDI note-on message. Open up the Sound "TSV_file temperature data as MIDI message" to see more examples of how the TSV values are used in the script. Audio of San Francisco weather data 1990-2015 is below, after a short sketch that puts the pieces of this section together.
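
Here is a minimal sketch of the read loop. It is not the code from the downloadable Sound: the temporary variable names are mine, the atEnd/whileFalse: loop structure is an assumption borrowed from standard Smalltalk stream protocol, and only calls already shown above are used.

| f lineValArray register |
f := 'SanFran1990-2015_rows2-5.tsv' asFilename textFileStream.    "open the TSV file as a text stream"
[f atEnd] whileFalse: [
    lineValArray := f nextParameters.                             "one TSV line -> array of column values"
    register := ((lineValArray at: 2) roundTo: 12).               "2nd column, rounded to the nearest multiple of 12"
    "... use register (and the other columns) to build each MIDI note-on message,
     as the downloadable Sound does ..."
].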


3. TSV controls EventValues

Any Sound passing through a MIDI Script has its parameters available for mapping. This is especially important for mapping TSV data onto control parameters within other Sounds. One way to control parameters is to use EventValues. Inside the Sound "TSV_file controls EventValues", the MIDI Script uses two TSV data values (TSV columns 1 and 2) to control the EventValues !Reverb and !Pan separately. !Reverb and !Pan reference HotValues in the Eugenio Reverb Sound passing through the MIDI Script. Here is how a data value is used to control the EventValue !Pan.

params := f nextParameters.                                    "read one TSV line as an array"
pan := ((params at: 1) into: #({-2.341@0} {1.758@1}) ) abs.    "map column 1 from [-2.341, 1.758] onto [0, 1]"
self controller: !Pan slideTo: pan byTime: (line * 100) ms.    "schedule !Pan to reach this value by (line * 100) ms"

Like before, nextParameters converts the TSV line into an array (params). The next line normalizes the data in TSV column 1 to between 0 and 1 (note: I used the first column's minimum and maximum, -2.341 and 1.758, to achieve this linear mapping) and stores the result in a variable (pan). The last line of code points to the EventValue !Pan and algorithmically sets its value to our variable, pan, within 100 ms (each line is scheduled 100 ms after the previous one). Consecutive data points are interpolated by Kyma. Listening back, we hear how the Reverb and Pan are independently controlled by the data.
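
By analogy (this is my own sketch, not code copied from the Sound), the !Reverb control presumably uses TSV column 2 in the same way; the -4.0 and 4.0 breakpoints below are placeholders, so substitute the actual minimum and maximum of your column.

reverb := ((params at: 2) into: #({-4.0@0} {4.0@1}) ) abs.           "placeholder bounds: map column 2 onto [0, 1]"
self controller: !Reverb slideTo: reverb byTime: (line * 100) ms.    "schedule !Reverb just like !Pan above"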

In the VCS window, !Reverb and !Pan are absent from view. (Figure 9)

Figure 9. Controlling EventValues with MIDI Script removes them from the VCS.

This appears to be because the MIDI Script takes control of these EventValues before they ever reach the VCS. (If you know how to display algorithmically controlled EventValues while a Sound is running, please leave a Reply.)

4. TSV controls Spectra

This last example is a bit more complex: a TSV data array controls an array of spectra over time. The mapping seems pretty straightforward, but during my initial research I got in a little over my head with CapyTalk. (A big thank you to Carla and Kurt for helping out.) In the Sound "TSV_PDSI_CA controls spectra amplitude", navigate to the SpectrumModifier Sound. The AmpScale parameter field contains the CapyTalk

TrackNumber of: {!Fader copies: 20}

This gives each of the first 20 spectral tracks its own EventValue from the array {!Fader copies: 20}. At every time point, we loop through our params array (10 points of data) and set the value of each !Fader. Our ten TSV values are used twice: once for spectra !Faders 0-9 and a second time, in the same loop, for spectra !Faders 10-19.

1 to: params size do: [ :i |
    self controller: (i - 1 of: {!Fader copies: 20})
        slideTo: ((params at: i) into: #({-3.617@0} {0@0.1} {2.186@1}) ) abs
        byTime: (line * 100) ms. "each line is 100 ms"
    "reloop and use params for next 10 Faders too"
    self controller: (i + 9 of: {!Fader copies: 20})
        slideTo: ((params at: i) into: #({-3.617@0} {0@0.1} {2.186@1}) ) abs
        byTime: (line * 100) ms].

The Script only utilizes the first 20 spectral tracks from the OscillatorBank, even though more spectra are available. The TSV data is the Palmer Drought Severity Index (PDSI) for California, and we are tying the data to the spectral amplitudes of the sound of rain. Lower rainfall levels equate to a quieter sound (lower spectral amplitudes), while higher rainfall levels equate to a louder, denser rain sound.

Going a step further, the next Sound "TSV_PDSI_CA controls spectra on/off and amplitude" not only controls the amplitude of 40 spectral tracks but also turns these tracks on and off depending on a threshold. As you can see inside the Script, I am reusing ten data points a bit much. However, now that one can control individual spectra with a Script, larger arrays of data points could easily be assimilated within Kyma (e.g. 128 columns of TSV data for 128 spectral tracks).


Conclusion
As the article outlines, TSV files are just a single click (or a line of code) away from integration with Kyma. Whether data is used to algorithmically control MIDI events, EventValues, or any other type of parameter inside Kyma, one can quickly listen to data in new and interesting ways. Feel free to populate the example files [add link to Community Library] with your own data, or try inserting your data directly into Kyma’s Sample Editor (File > New > Sample File). Please leave a Reply with a link to your own TSV data sound!  #csvkyma

Max/MSP Package: Korg Nano

Ever since Cycling 74 introduced the idea of packages in Max 6.1, I've been pretty excited. Previously, there wasn't a great way to distribute and install tools, objects, externals, and media. And if you wanted to use anyone else's tools, you had to wade through the murky collection of application directories and drop in single files, an unfailing way to ensure that you'd have to re-install those tools after a Max/MSP update.

With packages, Cycling 74 got rid of the mess. Tool creation, installation, and, for me, distribution are clear and simple. Even if I'm developing my own set of abstractions for nobody's computer but my own, packages provide a platform for a confident working practice with long-term benefits. This post outlines the pros of Max packages by walking through a working example of how one can set up their own Max package.

While I have created several Max packages since 2014, this post will outline my latest Max package, Korg Nano. It's a basic example, two objects that comprise a software implementation of the Korg nanoKontrol USB controller, and certainly enough to get one started.

Installation
After downloading the Korg Nano package, unzip the file and place the unzipped folder directly into the ‘packages’ directory.  For Mac users, the folder is Applications > Max 6.1 > packages.  Or, you can read a short article by Cycling 74 on packages for installation.

What It Is
In short, packages provide global access. Autocompletion, media in global search paths, extras in the top Extras dropdown menu, option-clicking help files: it's all there. The Korg Nano package provides a software listener for the 151 controls on the Korg nanoKontrol USB controller. The package is meant to be a plug 'n' play solution for this hardware device (and I use it for prototyping all the time).

After installing Korg Nano in the Max packages directory (make sure you restart Max), navigate to the folder. You will see four folders inside (docs, help, media, patchers) and a README file. Each folder has a unique purpose, and there are many more one can add (extras, javascript, clippings, templates, etc.). If you're curious, there is an "about packages.txt" file in the packages directory that outlines the finer points of Max packages. For now, we'll unpack these four folders (docs, help, media, patchers).

Max/MSP Autocomplete feature for Korg Nano package.

The patchers folder is where you throw your abstractions and objects (not externals), including any additional bpatchers that you may have used to create your objects. Of course, if your package depends upon third-party objects, you can place them here (and within any named subfolder). For Korg Nano, there are two main objects, korgnano and korgnano.inputmenu.  korgnano is built from several bpatchers, which one will see listed in a subfolder (“patchers > korg_nanoKontrol”).

The media folder allows one to place images, audio, and video. This folder becomes global (after restarting Max), so you can also use packages as a way to manage media instead of worrying about file paths when you move from computer to computer. Since Korg Nano is a software implementation of the USB hardware controller, I used image buttons that simulate the look and feel of the hardware controller. Placing the images in the media folder ensures they will be found, regardless of which computer I am using.

The help folder is exactly what one would expect: help files ending with the extension .maxhelp. While help files are useful (e.g. option-click an object to access its help file), Max packages allow one to provide some serious help to the help files. This helpful power boost comes by way of the docs folder.

Korg Nano help file that looks like a standard Max help file.

The docs folder contains reference files that enable hover tooltips, documentation window text, uniform descriptions, and fancy descriptive breakdowns and object links from within the reference window. To understand what is happening in the help file screenshot above, let's dig into the docs folder. Navigate to the "korgnano-0.0.1 > docs > refpages > korgnano > korgnano.inputmenu.maxref.xml" file. This XML file contains all the descriptions that get pulled into the help file. While the file is full of confusing HTML/XML-style tags, one need only look at two examples to see their power.

The first example comes from the first two XML tags, <digest> and <description>. These two description tags show up in the Autocomplete menu, the documentation window, the reference window (outside any help file), and the help file's title object (actually a jsui object that uses the Max application script "helpdetails.js" to parse these XML tags and display them as clean documentation).

The second example of documentation power comes from the <seealsolist> tag near the bottom of the .xml file. One only needs to place additional object names here (e.g. "<seealso name='korgnano'/>") and links automatically appear in the reference documentation window, pointing to your objects' help files. This is handy here, as I want to link the korgnano object and the korgnano.inputmenu object together, since these objects are symbiotic: the korgnano object grabs data from your Korg hardware controller and then sends the controller data directly to korgnano.inputmenu objects.

Docs, help, media, patchers. That's it. A Max package that enables software listening for the Korg nanoKontrol, neatly bundled for distribution, with clear documentation files to help anyone navigate the tools, even me, when I revisit the tool a few months down the line. However, I do not need to distribute to reap the benefits. Clippings, templates, patchers, or even externals that I use often in my own work have a place within a Max package, easily searchable and documented, so my working practice stays efficient and scalable. For anyone working in Max, packages offer a clean way to keep your sh** together.

Korgnano object help file

 

Reference
Korg Nano Max package

Notes
Packages also work with Max 7. While my example was built using Max 6.1, there is no reason why it shouldn’t work in Max 7. Email me if you have issues.

Speaking of issues… if you're having trouble with autocomplete, try creating a message object in a Max patcher with the text "; max db.reset" and clicking it. This forces Max to rebuild its file database, which may take 60+ seconds. Here's the original forum post where I found this fix.