I Dreamed of Naïma references a composition by John Coltrane in fragmented and distorted fashion, as if recollected in a dream. The computer program, written in Max for Live, senses the sound of the vibraphone, and algorithmically adds its own sounds with the intention of extending and elaborating the instrumental sound. The piece was composed for Aiyun Huang; Chieh Huang was a valuable assistant during its development.
DADA BENDER is a piece of music written for six percussionists and six loudspeakers. The music in DADA BENDER was created using noisy electronic sounds derived from raw data sonifications and improvisations on a no-input mixer. Raw data sonification is the process of mapping aspects of data to produce sound signals. Sonifying raw data often creates glitchy digital noise and is sometimes referred to as data bending. No-input mixing creates sounds by routing a mixer’s outputs back into its inputs to create feedback loops. Like raw data sonification, no-input mixing creates sounds through misuse. These noisy, skronky electronic sounds were chopped, arranged, and quantized into rhythms and gestures that were then orchestrated for the percussionists.
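At its simplest, the data bending described above amounts to reinterpreting a file’s raw bytes as audio samples. The sketch below is a generic illustration of that idea, not the composer’s actual process; the function name and parameters are assumptions:

```python
import wave

def databend(input_path: str, output_path: str, sample_rate: int = 44100) -> int:
    """Reinterpret a file's raw bytes as 8-bit mono audio samples."""
    with open(input_path, "rb") as f:
        raw = f.read()
    with wave.open(output_path, "wb") as wav:
        wav.setnchannels(1)      # mono
        wav.setsampwidth(1)      # 8-bit unsigned samples
        wav.setframerate(sample_rate)
        wav.writeframes(raw)     # every byte becomes one sample
    return len(raw)              # number of samples written
```

Feeding this any non-audio file (an executable, an image, a spreadsheet) yields the glitchy digital noise the note describes, since the bytes were never meant to be a waveform.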
Noise music has always been strongly connected to Dadaism. Dadaism inherited the term anti-art from Marcel Duchamp, who constantly challenged accepted definitions of art. Noise music, too, has constantly challenged the idea of what music is through relentless abstract forms, ugly sounds, and high amplitude. Today, the connections between noise music and Dadaism are exemplified by the godfather of noise music, Merzbow, who derives his stage name from Dada artist Kurt Schwitters and his concept of Merz.
approximations for accordion, percussion, and neural networks is a collection of five short movements featuring the sounds of two audio-generating neural networks trained on the sounds of the two performers, Louis Pino and Matti Pulkki. The neural networks (named The Accordionator and The Great Percussionist In The Sky by GPT-3) are AIs who have only ever had one sensory input: the sound of their assigned performer. They’re quirky robots trying to imitate the performers who are, in turn, trying to interact with and imitate the robots.
Movement i, “from the fog of randomness,” mimics the initial stage of training a neural network. At the beginning of the training process, the network generates statistical noise. Over hundreds of thousands of training steps, the network learns to generate sounds that more and more resemble the instruments.
Movement ii, “making friends with the robots,” takes us from rumbling machine noise to a robot dance party. The robots and performers become friends as they create a beat together, closing with a courteous, courtly dance.
Movement iii, “call + response,” gives the performers a chance to imitate the robots who have been trained to imitate them.
Movement iv, “mutual listening,” is a speculative representation of the inner/spiritual experience of the networks. These AIs aren’t conscious, though. Or are they?
The title of movement v, “educated guessing,” refers to the way networks generate audio. A network looks at a chunk of audio samples and guesses what samples should come next.
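The generation loop described above can be sketched in miniature. In the toy example below, a stand-in predictor (here just a window average with no learned weights, in place of the real trained networks) repeatedly looks at the last few samples and appends its guess for the next one:

```python
import numpy as np

def generate(predict, seed, n_new, window=4):
    """Autoregressive generation: repeatedly guess the next sample
    from the last `window` samples and append the guess."""
    samples = list(seed)
    for _ in range(n_new):
        chunk = np.array(samples[-window:])
        samples.append(predict(chunk))
    return samples

# Stand-in predictor: the mean of the window. A real network's guess
# would come from hundreds of thousands of training steps, not this rule.
toy_predict = lambda chunk: float(chunk.mean())

out = generate(toy_predict, seed=[0.0, 0.0, 0.0, 1.0], n_new=8)
```

The essential shape is the same as in the piece’s networks: each new sample depends only on the chunk that came before it, so the output is an educated guess that compounds on earlier guesses.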
If you have questions about the technical details or want to geek out about generative AI and audio, please contact the composer at mojones.e@gmail.com.
The Illusion of Separateness is an experiment in designing an environment that situates the performers within a particular assemblage of digital technologies, one which features various percussion instruments, computer vision, the Mugic® motion controller, and interactive computer music software. The most musically consequential interactive technologies are the computer vision components and the Mugic® sensor, as the audio from the virtual devices and percussion is either generated or processed through various analyses of body movement. One percussionist uses the screen-based interface to leverage computer vision technologies as the means to manipulate the original sound generated by their performing counterpart, while the other player does the same with the wearable Mugic® device.
To control the behavior of these assorted components, the performers must analyze how their gestural motions influence an emergent and continuously shifting sonic morphology. Each player uses pre-programmed parameter mappings and signal routings to manipulate the other’s real-time improvisation as they both navigate their performance and decision-making through an array of different virtual audio processing modules. While both performers generate their own original source material, subsequent musical and gestural choices are rarely unilateral and are always contingent on the decisions the other percussionist makes through their personal approach to improvising within this environment. In effect, each performer must mediate their way through this technological assemblage in a collaborative, reciprocal manner, thus mutually contributing to a sonic outcome that reveals itself to be more generative than fully contrived.
In Reach Through, the performers use a custom web app on their smartphones to send motion data to Max/MSP, as well as using microphones and small amplifiers as instruments to create feedback and other noises. In response to the performers’ gestures, the Max patch manipulates recordings of “cheesy” retro synthesizers in a way that imitates broken CDs and tapes, and plays back “databending” sounds (non-audio data treated as audio, which creates a detailed, crackling mixture of digital noise and pitch). These sounds of audio playback and/or storage media failing, and of retro 80s/90s synthesizers, make me think about memory, nostalgia, and loss in a way I find aesthetically pleasing.
Fish Yu and KöNG Duo (Bevis and Hoi Tong) collaborated to create Frolic, a multimedia four-movement work exploring games in Hong Kong through percussion, spoken Cantonese, theatre, projections, and electronics. One of the challenges of creating this work was the interaction between the marimba and the malletSTATION (a MIDI controller with a percussion keyboard layout). The revised version adds the vibraphone to bring out the strengths of both the vibraphone and the malletSTATION.
Henki translates as: breath, life, ghost, person, spirit, soul, atmosphere. Commissioned by Matti Pulkki, the creation of Henki was supported by The Sibelius Fund of the Society of Finnish Composers, Koneen Säätiö, and TaPIR Lab.
Henki explores the accordion and the accordionist as an auditive, visual, bodily, and theatrical element through the allusion to breathing in both a literal and a metaphorical sense. In addition to acoustic sound production, the piece utilizes video projection and fixed and live media. In performance, the accordionist appears on stage under a blank canvas with video animations projected on them, while their playing is processed live.
Cave is a collaborative work between Bevis Ng and Fish Yu for solo tam-tam with live processing electronics and immersive audio. The collaboration aimed to create a piece that utilizes the infinite sonic possibilities of the tam-tam. While existing repertoire for tam-tam and electronics, such as Mikrophonie I by Karlheinz Stockhausen, explores ways to make the instrument sound like “other,” Bevis and Fish’s shared artistic vision is to explore the augmentative function of live processing electronics and the wide spectrum of pitch produced by the tam-tam. In addition, the ever-ringing tam-tam sound echoes the title of the piece, resembling the infinite reverberant acoustic inside a cave. Moreover, their shared identity as Hongkongers motivated them to create work that can bring a sign of hope in this post-2019 Hong Kong Protest era. The title Cave is a metaphor for the socio-political environment in Hong Kong now, where people are suffering, trapped in a dark place. Composition-wise, the use of chanting symbolizes the mourning of Hongkongers. While the situation seems to be in despair, the transition from mourning to howling towards the end of the piece reminds us that we are resilient. The unyielding spirit will guide us to the light coming from the exit of the cave.
MadLib is an electro-acoustic piece for open instrumentation with live electronics created using Max software. It was commissioned from composer Louis Pino by Jonny Smith in 2021. The aim was to create a piece that can be customized by the performer in a variety of ways, thereby giving the performer greater creative agency and allowing for a wide array of potential musical outcomes. The piece is inspired by the word game Mad Libs, in which the reader or group of readers is asked to think of and write down random words. These words are then used to fill in the blanks of a prewritten story, usually for comic effect. A core aspect of the piece is the performer uploading their own audio samples, which are then manipulated by the patch through some preset and some personally customizable processes.
A study was held from October 4th to November 6th, 2022, involving various TaPIR researchers learning, experimenting with, and recording their own versions of the piece. The goal of this experiment was to analyze how performers chose to perform and interact with the electronic accompaniment, and to evaluate the piece as a creative, practical tool to aid in learning the Max software. The playlist with the MadLib recordings can be found here. MadLib was premiered in April 2022 at The Space Between conference at McMaster University in Hamilton, Ontario, Canada. Both Smith and Pino performed their own versions of the piece to demonstrate how it can be shaped in a variety of ways by different performers.
Doldrum Études is a composition for three percussionists, piano, and electronics. It was conceived under the auspices of Professor Aiyun Huang and the TaPIR lab at the University of Toronto Faculty of Music and funded with the support of a University of Toronto Excellence Award.
The piece is a pessimistic, gradual exploration of the listless, meaningless, and futile doldrums that many found themselves in somewhere between 2020 and 2022. Compositionally, the work takes a handful of motifs and transposes, augments, retrogrades, and otherwise smashes them together and against walls until they are almost unrecognizable by the conclusion of the piece. This includes an episodic fugue in the middle of the work and the use of musical Morse code at the conclusion (spelling SOS and DOLDRUMS).
To aid in this musical deformation, successive electronic filters are used throughout the composition, each belonging to a ‘Phase’ and altering a given characteristic of the live sound. This begins with pitch degradation, then the addition of a noise filter, followed by a final phase with degradation, noise, and a delay effect. These effects are controlled live through a MUGIC sensor, a small device that outputs various motion-based parameters into a Max patch, which then scales the aforementioned filters based on the live data. In Doldrum Études, each filter was controlled by a player bowing a vibraphone: with MUGIC sensors in specially fitted gloves, the motion of the descending bow in each performer’s hands was used to scale the given effects of each phase through the piece.
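Conceptually, mapping a descending bow to an effect amount is a range-scaling function. Below is a minimal sketch of that idea, written in Python rather than in the actual Max patch; the sensor range and parameter names are illustrative assumptions, not the piece’s real mappings:

```python
def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly map a sensor reading from one range to another,
    clamping to the output range."""
    t = (value - in_lo) / (in_hi - in_lo)
    t = min(max(t, 0.0), 1.0)          # clamp to [0, 1]
    return out_lo + t * (out_hi - out_lo)

# Hypothetical example: map a sensor angle of -90..90 degrees
# to a 0..1 "noise amount" parameter on one of the Phase filters.
noise_amount = scale(45.0, -90.0, 90.0, 0.0, 1.0)  # → 0.75
```

In a Max patch this kind of mapping is typically a one-object affair, but writing it out makes clear what the bowing gesture is actually doing: moving a single number smoothly through a filter’s parameter range.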
Doldrum Études is, musically, a study in banality. Practically, this project was an exploration of how musical ideas can both coexist with and be reinforced by live processing, especially with the novel and exciting MUGIC technology. Doldrum Études was made possible by the gracious support of Aiyun Huang, Gary Kulesha, Andrew Bell, and Louis Pino, and was premiered by Hoi Tong Keung, Jasmine Tsui, Alex Fraga, and Geoffrey Conquer.
Three Roses is a quartet for percussion incorporating two technological devices that give the performers control over lighting and sound design. First is the MUGIC, a gestural sensor developed by violinist Mari Kimura, which you can see on each of the players’ hands. Second is an Arduino, which controls the lights and, through the software Max/MSP, can respond to the performers’ gestures captured by the MUGIC. This piece was commissioned by Aiyun Huang for the TaPIR lab in the early stages of COVID, and as such has gone through multiple iterations, from live concert performance to remote collaboration, eventually settling as an in-person recording project.
Three Roses is split into three movements, each representing a different breed of rose and a different stage of my grandmother’s garden. Moonshadow, a wide-blooming soft purple flower, is a soloistic, meandering walk through a fading garden, accompanied by each of the three other performers in the form of individual memories underlying and influencing the wanderer. Knock Out, a bush of small and bright red roses, is a short and energetic dance between the four players, children running in open space. Heirloom is a descriptor given to plants which have not undergone selective breeding or genetic modification in recent centuries. To me as a young child, this garden seemed absolutely massive and could fully enshroud you from the rest of the world. Being surrounded by this cave of vines and flowers is one of my earliest fragments of a memory.
compound.transverse.oblique. explores concepts of fragility and fracture through simple electronic instruments built with Arduino microcontrollers and percussion instruments.
The Arduino instrument’s exposed circuitry presents a vulnerable and fragile aesthetic that became the central focus of the composition.
Throughout the work, frail sounds dissolve as delicate textures break down and snap under pressure, creating an abstract composition that is intense and unpredictable.
compound. uses simple speaker-based electronic instruments that are extremely precarious in both their playability and their sound. Two percussionists coerce cracks, whispers, and buzzy screeches by scraping amplified coins across Almglocken, while one percussionist plays a large woodblock with a vibra bullet and another rips large pieces of paper.
In oblique., a single timpano is used as a resonator for the Arduino instrument’s speaker and the performers’ voices. Multiple percussionists perform overtone singing into the drumhead and manipulate its tension to create a delicate polyphony between humans and machine.
transverse. is characterized by electronic and acoustic sounds that are melted down and synthesized to create a bright, sharp timbre. Pitches begin in unison and gradually shift by microtones to illustrate harmonic cracks and fractures.