by Jonny Smith
SpokenWeb 2023 Symposium: ReVerb: Echo-Locations of Sound and Space, May 1-3:
In May 2023, Jonny Smith performed and discussed a new solo percussion and multimedia piece, Renderer, composed for him by Quinn Jacobs in 2021. The piece incorporates live sound, pre-recorded audio, and video projection, using each element to shape the audience’s perception of what is actually happening in the performance. As its title suggests, Renderer critiques the ways in which humans try to control (or “render”) the perceptions of others, particularly in the virtual realm, and to present an image of themselves that is not true to reality.
by Andrew Bell
The Aux-Cord Etudes are a series of pedagogical pieces for auxiliary percussion and live electronics by Andrew Gordon Bell. The etudes explore methods of teaching undergraduate percussionists both the skills needed to play triangle, tambourine, bass drum, and cymbals at a high level and the technological knowledge required to perform with live electronics. Both topics are often overlooked: playing these instruments is seen as less musical by many percussionists, and the technical knowledge needed to perform with electronics can seem daunting, discouraging students from exploring that avenue at all. The results are suboptimal. Although these instruments may seem less important, they still require practice to achieve technical facility, and by the time students realize they are interested in performing with technology, they may be so far behind that it is difficult to catch up. These pieces aim to remove this hurdle by giving students a more engaging way to practice auxiliary instruments while introducing different aspects of performing with live electronics. Topics covered include live processing, hyperinstruments, fixed media, OSC, and more.
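Among the topics listed, OSC (Open Sound Control) is the one with a concrete wire format worth illustrating. The sketch below encodes a minimal single-float OSC 1.0 message using only the Python standard library; the address "/etude/level" is a hypothetical example, not an address from the etudes themselves.

```python
import struct

def osc_pad(data: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, per the OSC 1.0 spec."""
    data += b"\x00"
    return data + b"\x00" * (-len(data) % 4)

def osc_message(address: str, value: float) -> bytes:
    """Encode an OSC message carrying one 32-bit float argument."""
    return (
        osc_pad(address.encode("ascii"))  # padded address pattern
        + osc_pad(b",f")                  # type-tag string: one float
        + struct.pack(">f", value)        # big-endian float32 argument
    )

# Hypothetical control message, e.g. a dynamic level sent to a laptop patch.
msg = osc_message("/etude/level", 0.5)
```

The resulting bytes can be sent as a single UDP datagram to any OSC-aware environment such as Max.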
by Alex Fraga
by Hoi Tong Keung & Fish Yu
by Bevis Ng
by Timothy Roth, Benjamin Duinker, Tristan Loria, Aiyun Huang & Michael Thaut
by Jonny Smith & Louis Pino
The Theatre of Schizophonic Performance in John Cage’s Cartridge Music
by Timothy Roth, Tyler Cunningham, Gordon Fry and Bryn Lutek
John Cage’s Cartridge Music (1960) was one of the first works to break the acousmatic tradition of electronic music by incorporating live performers. Where, in acoustic performance, gesture and sound are typically tightly coupled, performance with live electronics makes this link less immediate and at times inscrutable. This alienation of sound from performative gesture, described by R. Murray Schafer as “schizophonia,” creates a unique theatrical scenario (Schafer 1969). In the liner notes for the first recording of the work, Cage wrote that one of his objectives was “to make a theatrical situation involving amplifiers and loudspeakers and live musicians” (Cage 1962d). Although the work developed into one of Cage’s most flexible compositions, a number of restrictions in the score limit the theatrical potential of the electronic framework.
We propose to perform a historically informed version of Cartridge Music for four musicians with a number of modifications to Cage’s original framework. We created hand-held versions of the phono and piezo pickups, which allow us to assemble a diverse array of sounding objects for producing “auxiliary sounds.” We also suspended a number of these instruments, expanding the performance frame beyond the typical tabletop setting of the piece.
The results of these modifications are twofold. First, the sonic palette is significantly expanded. Along with additional “auxiliary sound” objects, freestanding contact microphones and pickups allow for creative variation in the placement and usage of the sensors which contributes a significant textural and timbral depth. Second, incorporation of the pickups as implements and extension of the performance frame unfetters the performing bodies, opening up the piece as a more dynamic theatrical vehicle. In this way, our interpretation refines Cage’s goal of creating theatricality in a live electronic environment while maintaining the integrity of the original framework.
Tough Questions, Better Answers: The Centrality of Creative Practice in a DMA Thesis
by Greg Bruce
Part of Greg Bruce’s research involves first principles of knowledge generation, including methodologies for research-creation. In this presentation, Greg discusses how research-creation complements conventional forms of research and illustrates his ‘problem-practice-exegesis’ methodology for carrying out large-scale research-creation projects, such as a DMA thesis. This research provides a useful framework for artist-scholars who wish to employ their own creative practice as a primary means of investigation.
Sonic Canvas: Digital Art as Musical Improvisation (Video Performance)
by Timothy Roth and Jasmine Tsui
Sonic Canvas is a multidisciplinary digital improvisation performed by visual artist Jasmine Tsui and music technologist Tim Roth.
The work uses a colour-retrieval patch created in Max, a visual programming language, to generate information about the size and location of colour on a screen.
The performers negotiate the sound world together: the artist creates drawings in Procreate, an iPad illustration app, that set control values, and the technologist scales these values and assigns them to different parameters of a synthesizer. Unlike traditional collaboration between musician and artist, the artist now has first-person control over all the sounds produced.
Video information from the iPad can be transmitted to the Max patch over the internet, resulting in a work that is best consumed digitally and adheres to social distancing guidelines.
This project can accommodate numerous digital artists from a variety of mediums and has much room for expansion.
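The scaling step described above, taking raw colour-region values from the screen and mapping them onto synthesizer parameters, can be sketched in a few lines. This is an illustrative stand-in for the Max patch, not its actual code; the parameter ranges are hypothetical.

```python
def scale(value, in_min, in_max, out_min, out_max):
    """Linearly rescale value from one range to another, clamping out-of-range
    input (comparable in spirit to Max's [scale] object)."""
    value = max(in_min, min(in_max, value))
    return out_min + (value - in_min) * (out_max - out_min) / (in_max - in_min)

# Hypothetical mappings: a colour region's pixel area drives amplitude,
# and its horizontal position on a 1920-px-wide screen drives filter cutoff.
amplitude = scale(2500, 0, 10000, 0.0, 1.0)        # area -> 0..1 amplitude
cutoff_hz = scale(960, 0, 1920, 100.0, 8000.0)     # x-position -> cutoff in Hz
```

Each new drawing gesture updates these inputs in real time, so the artist's marks directly steer the synthesis.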
When the outputs of a mixer are routed back into its inputs, the inherent noise of the machine sets off an internal feedback loop, generating a diverse range of sounds depending on the mixer’s volume and EQ settings. Where is my Mind? is an experiment in no-input mixer performance, using motion, light, and biological signals to insert the performer’s own mind and body into the mixer’s feedback loop. Using Arduino, MUGIC, Max, two LEDs, and a homemade light sensor, the performer can play the mixer through intense thought, spatial and facial motion, and, as usual, with hands on the volume and EQ controls.
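The core behaviour described above, the desk re-amplifying its own noise floor until it self-oscillates or saturates, can be shown with a toy numerical simulation. This is a simplified illustration of the principle, not a model of any particular mixer; the gain and noise values are assumptions.

```python
import random

random.seed(1)  # deterministic run for the illustration

def feedback_pass(sample, gain, noise_floor=1e-4):
    """One trip around the loop: the mixer adds its own noise, applies the
    channel gain, and saturates (hard-clips) at full scale."""
    noise = random.uniform(-noise_floor, noise_floor)
    out = gain * (sample + noise)
    return max(-1.0, min(1.0, out))

signal = 0.0
for _ in range(2000):
    # Loop gain above 1 (fader up): the noise floor is re-amplified each
    # pass and the signal grows until it pins against the clip ceiling.
    signal = feedback_pass(signal, gain=1.2)
```

With the gain below 1 the same loop decays back toward the noise floor, which is why fader and EQ moves alone can start, shape, and kill the oscillation.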