TaPIR Lab Composer in Residence Concert

In July 2022, TaPIR put out a call for composers to collaborate with the lab on the creation of four new pieces. This concert will present the culmination of a nearly year-long commission and collaboration process. Each piece utilizes experimental computer music techniques, from computer vision and AI sound synthesis to multichannel audio processing and hardware hacking.

Time: Friday, May 19, 2023, 19:30

Venue: Array Space, 155 Walnut Ave



CHRISTOPHER DOBRIAN, I Dreamed of Naïma

NOLAN HILDEBRAND, DADA BENDER

MOLLY JONES, approximations

STEVEN LEWIS, The Illusion of Separateness

REILLY SPITZFADEN, Reach Through


Christopher Dobrian

I Dreamed of Naïma

Performer: Aiyun Huang

About the piece

I Dreamed of Naïma references a composition by John Coltrane in fragmented and distorted fashion, as if recollected in a dream. The computer program, written in Max for Live, senses the sound of the vibraphone, and algorithmically adds its own sounds with the intention of extending and elaborating the instrumental sound. The piece was composed for Aiyun Huang; Chieh Huang was a valuable assistant during its development.

Nolan Hildebrand

DADA BENDER

Performers: Andrew Bell, Nikki Huang, Hoi Tong Keung, Thomas Li, Bevis Ng, Jasmine Tsui

About the piece

DADA BENDER is a piece of music written for six percussionists and six loudspeakers. The music in DADA BENDER was created using noisy electronic sounds derived from raw data sonifications and improvisations on a no-input mixer. Raw data sonification is the process of mapping aspects of non-audio data directly to sound signals. Sonifying raw data often creates glitchy digital noise and is sometimes referred to as data bending. No-input mixing creates sounds by routing a mixer’s outputs back into its inputs to create feedback loops. Like raw data sonification, no-input mixing creates sound through misuse. These noisy, skronky electronic sounds were chopped, arranged, and quantized into rhythms and gestures that were then orchestrated for the percussionists.
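The data-bending idea described above — treating a file's raw bytes as if they were audio samples — can be sketched in a few lines. This is a generic illustration, not the composer's actual tooling; the function name and sample rate are arbitrary choices.

```python
import wave

def databend(infile, outfile, rate=44100):
    """Interpret the raw bytes of any file as 8-bit mono audio samples."""
    with open(infile, "rb") as f:
        raw = f.read()
    with wave.open(outfile, "wb") as w:
        w.setnchannels(1)   # mono
        w.setsampwidth(1)   # 8-bit unsigned samples
        w.setframerate(rate)
        w.writeframes(raw)  # the file's bytes become the waveform
```

Run on, say, an executable or a spreadsheet, the resulting WAV is typically the kind of glitchy digital noise the note describes.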

Noise music has always been strongly connected to Dadaism. Dadaism inherited the term anti-art from Marcel Duchamp, who constantly challenged accepted definitions of art. Noise music too has constantly challenged the idea of what music is through relentless abstract forms, ugly sounds, and high amplitude. Today, the connections between noise music and Dadaism are exemplified by the godfather of noise music, Merzbow, who derives his stage name from Dada artist Kurt Schwitters and his concept of Merz.

Molly Jones

approximations

Performers: Louis Pino and Matti Pulkki

About the piece

approximations for accordion, percussion, and neural networks is a collection of five short movements featuring the sounds of two audio-generating neural networks trained on the sounds of the two performers, Louis Pino and Matti Pulkki. The neural networks (named The Accordionator and The Great Percussionist In The Sky by GPT-3) are AIs who have only ever had one sensory input: the sound of their assigned performer. They’re quirky robots trying to imitate the performers who are, in turn, trying to interact with and imitate the robots.

Movement i, “from the fog of randomness,” mimics the initial stage of training a neural network.  At the beginning of the training process, the network generates statistical noise.  Over hundreds of thousands of training steps, the network learns to generate sounds that more and more resemble the instruments.

Movement ii, “making friends with the robots,” takes us from rumbling machine noise to a robot dance party.  The robots and performers become friends as they create a beat together, closing with a courteous, courtly dance.

Movement iii, “call + response,” gives the performers a chance to imitate the robots who have been trained to imitate them.

Movement iv, “mutual listening,” is a speculative representation of the inner/spiritual experience of the networks.  These AIs aren’t conscious, though.  Or are they?

The title of movement v, “educated guessing,” refers to the way networks generate audio.  A network looks at a chunk of audio samples and guesses what samples should come next.
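That guessing loop can be illustrated with a toy model. Real audio-generating networks predict a distribution over the next sample using a deep network; the sketch below replaces the network with a simple lookup table of which sample values followed each short context in the training audio, purely to show the autoregressive loop. All names here are illustrative, not the piece's actual code.

```python
import random

def train(audio, context=3):
    """Count which sample values followed each context window in the training audio."""
    model = {}
    for i in range(context, len(audio)):
        model.setdefault(tuple(audio[i - context:i]), []).append(audio[i])
    return model

def generate(model, seed, n_samples, context=3):
    """Autoregressive generation: repeatedly guess the next sample from the
    last few, append the guess, and slide the context window forward."""
    out = list(seed)
    for _ in range(n_samples):
        choices = model.get(tuple(out[-context:]))
        out.append(random.choice(choices) if choices else 0)
    return out
```

Trained on a repeating waveform, the loop reproduces the pattern; trained on real instrument recordings (and with a network instead of a table), the same loop produces the uncanny imitations heard in the piece.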

If you have questions about the technical details or want to geek out about generative AI and audio, please contact me at mojones.e@gmail.com.

Steven Lewis

The Illusion of Separateness

Performers: Randall Chaves Camacho, Alex Fraga, Steven Lewis

About the piece

The Illusion of Separateness is an experiment in designing an environment that situates the performers within a particular assemblage of digital technologies, one which features various percussion instruments, computer vision, the Mugic® motion controller, and interactive computer music software. The most musically consequential interactive technologies are the computer vision components and the Mugic® motion controller, as the audio from the virtual devices and percussion is either instantiated or processed through various analyses of body movement. One percussionist uses the screen-based interface to leverage computer vision technologies to manipulate the sound generated by their performing counterpart, while the other player does the same with the wearable Mugic® device.


To control the behavior of these assorted components, the performers must analyze how their gestural motions influence an emergent and continuously shifting sonic morphology. Each player uses pre-programmed parameter mappings and signal routings to manipulate the other’s real-time improvisation as they both navigate their performance and decision-making through an array of virtual audio processing modules. While both performers generate their own original source material, subsequent musical and gestural choices are rarely unilateral; they are always contingent on the decisions the other percussionist makes through their personal approach to improvising within this environment. In effect, each performer must mediate their way through this technological assemblage in a collaborative, reciprocal manner, mutually contributing to a sonic outcome that reveals itself to be more generative than contrived.

Reilly Spitzfaden

Reach Through

Performers: Alex Fraga, Louis Pino, Jonny Smith, Jasmine Tsui

About the piece

In Reach Through, the performers use a custom web app on their smartphones to send motion data to Max/MSP, and use microphones and small amplifiers as instruments to create feedback and other noises. In response to the performers’ gestures, the Max patch manipulates recordings of “cheesy” retro synthesizers in a way that imitates broken CDs and tapes, and plays back “databending” sounds (non-audio data treated as audio, which creates a detailed, crackling mixture of digital noise and pitch). These sounds of failing playback and storage media, together with the retro 80s/90s synthesizers, make me think about memory, nostalgia, and loss in a way I find aesthetically pleasing.
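A common way to get smartphone motion data into Max/MSP is to relay it as OSC messages over UDP, which Max can receive with [udpreceive] and decode with [oscparse]. The piece's actual plumbing isn't specified, so the sketch below is an assumption: a minimal OSC float-message encoder and sender, with the /motion address and port 7400 chosen arbitrarily for illustration.

```python
import socket
import struct

def osc_message(address, *floats):
    """Encode a minimal OSC message with float32 arguments."""
    def pad(b):
        # OSC strings are null-terminated and padded to a multiple of 4 bytes
        return b + b"\x00" * (4 - len(b) % 4)
    msg = pad(address.encode())
    msg += pad(("," + "f" * len(floats)).encode())  # type tag string, e.g. ",fff"
    for f in floats:
        msg += struct.pack(">f", f)  # big-endian float32
    return msg

def send_motion(x, y, z, host="127.0.0.1", port=7400):
    """Forward one accelerometer reading to Max as /motion x y z."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(osc_message("/motion", x, y, z), (host, port))
```

In Max, a [udpreceive 7400] → [oscparse] chain would then expose the three values for mapping onto the patch's playback parameters.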