Category Archives: Intermedia

#CarbonFeed

The project #CarbonFeed directly challenges the popular notion that virtuality is disconnected from reality. By sonifying Twitter feeds and pairing individual tweets with a physical data visualization in public spaces, artists Jon Bellona and John Park invite viewers to hear and see the environmental cost of online behavior and the physical infrastructure that supports it.

CarbonFeed works by taking in real-time tweets from Twitter users around the world. Based on a customizable set of hashtags, the work listens for specific tweets, and the content of each incoming tweet generates a real-time sonic composition. An installation-based visual counterpart, compressed air pumped through tubes of water, provides a physical manifestation of each tweet.
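For readers curious how such a pipeline fits together, here is a minimal sketch in Python (using the python-osc library): it filters tweet text against a hashtag set, derives synthesis parameters from each match, and forwards them over OSC to a sound engine. The hashtags, addresses, port, and mappings are illustrative assumptions rather than the artists' actual code, and the real piece listens to Twitter's live stream rather than a hard-coded list.

    # Hypothetical sketch of the CarbonFeed-style pipeline described above.
    from pythonosc.udp_client import SimpleUDPClient

    HASHTAGS = {"#climate", "#energy"}           # stand-ins for the customizable set
    client = SimpleUDPClient("127.0.0.1", 8000)  # sound engine host/port (assumed)

    def handle_tweet(text):
        """Map one matching tweet to a short sonic event."""
        if not HASHTAGS.intersection(text.lower().split()):
            return
        length = len(text)
        pitch = 200 + (length % 100) * 8   # arbitrary mapping: longer tweet, higher pitch
        amp = min(1.0, length / 140)       # normalized against the 140-character limit
        client.send_message("/carbonfeed/tweet", [float(pitch), amp])

    # Stand-in for the live stream; the piece itself subscribes to Twitter's API.
    for tweet in ["#climate the cloud has a footprint", "unrelated chatter"]:
        handle_tweet(tweet)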

To see a running counter of the carbon footprint of digital behavior, learn more about the project, or listen to a song based on your personal Twitter feed, visit http://carbonfeed.org

#CarbonFeed installed at the University of Virginia.

simpleKinect

simpleKinect is an application for sending data from the Microsoft Kinect to any OSC-enabled application. It attempts to improve upon similar software by offering more OpenNI features and more user control. A minimal receiver sketch follows the feature list below.

simpleKinect Features

  • Auto-calibration.
  • Specify OSC output IP and Port in real time.
  • Send CoM (Center of Mass) coordinate of all users inside the space, regardless of skeleton calibration.
  • Send skeleton data (single user), on a joint-by-joint basis, as specified by the user.
  • Manually switch between users for skeleton tracking.
  • Individually select between three joint modes (world, screen, and body) for sending data.
  • Individually determine the OSC output URL for any joint.
  • Save/load application settings.
  • Send distances between joints, in millimeters (on by default).
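
As a rough illustration of the receiving end, the sketch below (Python with the python-osc library) listens on a UDP port and logs every OSC message simpleKinect sends; the port is an assumption and must match the output port configured in the application.

    # Minimal python-osc receiver that logs whatever simpleKinect sends.
    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    def log_message(address, *args):
        # e.g. a joint position in world, screen, or body mode, or a
        # center-of-mass coordinate for an uncalibrated user
        print(address, args)

    dispatcher = Dispatcher()
    dispatcher.set_default_handler(log_message)  # catch every address pattern

    # The listening port below is an assumption; match it to the port
    # specified in the simpleKinect interface.
    server = BlockingOSCUDPServer(("0.0.0.0", 12345), dispatcher)
    server.serve_forever()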

Download simpleKinect.

simpleKinect FAQ page

Projects utilizing simpleKinect

Casting. Electronic composition for solo performer with the Microsoft Kinect and Kyma.

San Giovanni Elemosinario

San Giovanni Elemosinario is a music-for-film work that attempts to recreate a Venetian church through sound. Collaborating with architecture students studying in Venice, Italy, I received sketches of axonometric views, floor plans, column details, entrances, and other structural perspectives. Placing these sketches inside Iannix allowed cursors to trace the architectural renderings in real time. The cursors output data to Kyma, where mappings of that data control oscillators, harmonic resonators, and noise filters, as well as other acoustic treatments (panning, reverb, EQ, frequency shifts, etc.). While no impulse response was recorded, listening tests inside the church established a decay time of roughly three seconds, which guided the design of the spatial reverberation.
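To make that data path concrete, here is a hedged Python sketch (python-osc) of an Iannix-to-Kyma relay of the kind described: cursor positions arrive over OSC and are remapped to oscillator frequency and pan before being forwarded. Iannix's exact message layout and the Kyma-side addresses depend on configuration, so every address, port, and range below is an assumption rather than the piece's actual patch.

    # Hedged relay sketch: Iannix cursor -> parameter mapping -> Kyma.
    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer
    from pythonosc.udp_client import SimpleUDPClient

    kyma = SimpleUDPClient("192.168.1.10", 8000)   # Kyma/Paca(rana) host (assumed)

    def relay(address, *args):
        if not address.startswith("/cursor") or len(args) < 2:
            return
        x, y = float(args[-2]), float(args[-1])    # assume trailing x/y position
        freq = 80.0 * (2.0 ** (x * 5.0))           # a trace spans ~5 octaves above 80 Hz
        kyma.send_message("/osc/freq", freq)       # Kyma-side addresses are assumptions
        kyma.send_message("/osc/pan", max(0.0, min(1.0, y)))

    dispatcher = Dispatcher()
    dispatcher.set_default_handler(relay)

    BlockingOSCUDPServer(("0.0.0.0", 57120), dispatcher).serve_forever()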

A huge thank you to Matthew Burtner and Anselmo Canfora, both of whom made the collaboration possible.
Video/Music: Jon Bellona
Drawing: Olivia Morgan, Alex Picciano

Treason of Images

Brad Garner of Harmonic Laboratory asked for a visual component to his choreography for the 2012 (sub)Urban Projections digital arts festival. The piece began as a single Processing sketch; I split the video between two projectors to fit the venue, the top of a parking lot in Eugene, OR. The work explores male stereotypes, especially in dance, and the text amplifies these portrayals, which are so often placed upon the male body.

Zero Crossing

Zero Crossing is a collaborative work by Harmonic Laboratory. The piece explores the relationships between moving bodies, real and perceived, and the line that exists at the junction of action.

Music composed by Jon Bellona. Choreography by Brad Garner. Digital projections by John Park. The piece was created, in part, for (sub)Urban Projections, a digital arts festival sponsored by the University of Oregon and the City of Eugene. The video shows the premiere performance. Please wear headphones to take advantage of the full audio spectrum.

Human Chimes

Human Chimes transforms users into sounds that bounce between the other users inside the space, reflecting their interactions with every other participant. Participants perceive themselves and others both as transformed visual components projected onto the front wall and as sonic formulations indicating where they are. As people move, the sounds move and change to show shifting personal interactions. As more users enter the space, more sounds are layered upon the existing body. In this way, sound patterns, like our relationships with others, continuously evolve.

The social work dynamically tracks users' locations in real time, transcoding participants into sounds that pan around the space according to their positions. Human Chimes enables users to create, control, and interact with sound and visuals in real time, using a multimedia experience to ignite our curiosity and deepen our playful attitude toward the world around us.
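
A minimal sketch of that position-to-sound mapping, assuming a camera tracker reports each participant's floor position; the room dimensions, port, and OSC addresses below are hypothetical.

    # Each tracked participant becomes a voice whose pan and pitch follow
    # their location in the room.
    from pythonosc.udp_client import SimpleUDPClient

    ROOM_WIDTH, ROOM_DEPTH = 8.0, 6.0            # meters (assumed)
    synth = SimpleUDPClient("127.0.0.1", 9000)   # sound engine (assumed)

    def update_voice(user_id, x, y):
        """Turn one participant's position into pan and pitch for their voice."""
        pan = max(0.0, min(1.0, x / ROOM_WIDTH))        # left/right follows x
        pitch = 220.0 + (1.0 - y / ROOM_DEPTH) * 440.0  # nearer the front, higher
        synth.send_message(f"/chimes/{user_id}/pan", pan)
        synth.send_message(f"/chimes/{user_id}/pitch", pitch)

    # Two participants at different spots; voices layer and evolve as they move.
    update_voice(1, 2.0, 1.5)
    update_voice(2, 6.5, 4.0)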

The work was commissioned in part by the University of Oregon and the City of Eugene, Oregon, and presented as part of the (sub)Urban Projections film festival on Nov. 9, 2011.


Graffiti

The (sub)Urban Projections Film Festival wanted to include live projection bombing in downtown Eugene, OR, and I was commissioned to create an interactive installation that lets a user paint graffiti upon any projected surface. The human interface uses TouchOSC on an iPad or iPhone, which drives my graffiti software. The work was presented each night of the (sub)Urban Projections festival (Nov. 9, 16, and 23, 2011), at the WhiteBox gallery in Portland, OR (Dec. 10, 2011), and at the second (sub)Urban Projections festival (Nov. 7, 11, and 14, 2012).
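
For a sense of the input side, here is a minimal Python sketch (python-osc) that receives XY-pad messages of the kind TouchOSC's stock layouts send and accumulates them as spray-stroke points. The listening port and the rendering step are assumptions; the actual installation software is not reproduced here.

    # TouchOSC XY pad -> spray-stroke points.
    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    strokes = []  # (x, y) points of the tag in progress

    def on_xy(address, x, y):
        strokes.append((x, y))                 # the installation would render
        print(f"spray at ({x:.2f}, {y:.2f})")  # this point onto the projection

    dispatcher = Dispatcher()
    dispatcher.map("/1/xy", on_xy)             # XY pad address in stock layouts

    # Port must match the outgoing port configured in TouchOSC (assumed 8000).
    BlockingOSCUDPServer(("0.0.0.0", 8000), dispatcher).serve_forever()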

Running Expressions

Running Expressions is a real-time performance composition using biofeedback and remote controllers. Written primarily in Kyma and Max/MSP, the piece captures live physiological data to create and control music within an 8-channel audio and video projection environment. The musical performance narrates a distance run and the psychological and emotional impact of the running experience.
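As a hypothetical illustration of a biofeedback mapping of this kind, the Python sketch below lets a heart-rate stream drive tempo and timbral brightness over OSC. The sensor interface, addresses, and ranges are invented for clarity and are not the piece's actual Kyma/Max patch.

    # Heart rate -> musical parameters (illustrative mapping only).
    from pythonosc.udp_client import SimpleUDPClient

    engine = SimpleUDPClient("127.0.0.1", 8000)  # Kyma/Max host (assumed)

    def on_heart_rate(bpm):
        tempo = float(bpm)                  # the music tracks the runner's pulse
        brightness = (bpm - 60.0) / 120.0   # 60-180 bpm opens a filter from 0 to 1
        engine.send_message("/run/tempo", tempo)
        engine.send_message("/run/brightness", max(0.0, min(1.0, brightness)))

    for bpm in [72, 95, 140, 168]:          # stand-in for a live sensor stream
        on_heart_rate(bpm)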

+ Download documentation (.pdf) and the performance software files (Max/MSP/Jitter, OSCulator, and Processing). (.zip, 11.5 MB)

+ Download Kyma performance audio files. (.zip, 45.3 MB)

+ Download Thesis documentation separately. (.pdf, 11.2 MB)

Play! Sequence

Play! Sequence is a multimedia installation for iPod Touch, USB camera, and VGA video display, built with the TouchOSC, Max/MSP/Jitter, and Isadora software applications. By presenting a multitouch sequencer that controls the playback of audio and video masks, Play! Sequence enables the user to interact simultaneously with the space's sonic and visual environment.

The iPod Touch provides a familiar language for the user and for the nature of the tactile interactions. The user can create, edit, and delete three synchronous sixteen-step sequences, thereby changing the evolution and complexity of the piece over time.

Each of the three sequences represents a sonic timbre and a color mask that mirror the user's actions. Within each sonic timbre, the user has control over pitch, rhythm, and amplitude. The color masks follow the sounds across the screen, repeating from the left at the start of each loop. The masks help visualize the user's tactile and sound experience by revealing the user inside the space, and each mask represents one element of the RGB color model.
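
A small data-model sketch, in Python, of the sequencer structure described above: three synchronous sixteen-step sequences whose steps hold pitch and amplitude, with rhythm emerging from which steps are active. The field names and playback loop are assumptions made for clarity; the installation itself runs in Max/MSP/Jitter and Isadora.

    # Three synchronous 16-step sequences, one per RGB channel.
    from dataclasses import dataclass, field

    STEPS = 16

    @dataclass
    class Step:
        active: bool = False
        pitch: int = 60        # MIDI note number
        amplitude: float = 0.8

    @dataclass
    class Sequence:
        color: str             # each sequence maps to one RGB channel
        steps: list = field(default_factory=lambda: [Step() for _ in range(STEPS)])

    sequences = [Sequence("red"), Sequence("green"), Sequence("blue")]

    def tick(step_index):
        """Advance all three sequences in sync, firing any active steps."""
        for seq in sequences:
            step = seq.steps[step_index % STEPS]
            if step.active:
                print(f"{seq.color}: note {step.pitch} at amp {step.amplitude}")

    sequences[0].steps[0].active = True    # a tap on the iPod Touch grid
    tick(0)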

Play! Sequence operates within the framework of natural human interaction, playing off our curiosity and our engagement with objects that we can creatively control. The user manipulates and interacts with the sounds and visuals in real time, driven by the immediate feedback that the system provides.