Music Composition

ASMRtronica

ASMRtronica is an ongoing project developed in the artist’s home studio during the novel coronavirus pandemic: a manifestation of a desire for intimacy in sound when touch was not possible. It is a style of music applied across several works as Gee develops her own vocabularies of psychosomatic performance.

Through ASMRtronica, Gee brings to life a combination of electroacoustic music and the sounds of Autonomous Sensory Meridian Response (ASMR) videos: clicks, whispers, soft spoken voice, taps, and hand gestures inspired by hypnosis, tactility, intimacy, and verbal suggestion. Through ongoing development of this genre, she explores the sonic limits of the sensorial propositions of ASMR, journeying into embodied and unconscious feedback loops in sound.

We as Waves (2020) is a collaboration between myself and queer playwright Jena McLean. The text in this work is inspired by an essay by Tara Rodgers, a feminist theorist of electronic music. What does it mean to enter into an affective relationship of touch with sound? The work embodies a dark narrative of sonic becoming, aided by hypnosis and a physiological relationship to sound and voice, and closes with the following quotes from queer theologian Catherine Keller:

“As the wave rolls into realization, it may with an uncomfortable passion
fold its relations into the future: the relations, the waves of our possibility,
comprise the real potentiality from which we emerge…”

“We are drops of an oceanic impersonality. We arch like waves,
like porpoises.”

In September 2020 I launched To the Farther as part of MUTEK Montreal’s online exhibition Distant Arcades. It is the first in a series of musical works that explore the limits of tactile whispers, proximity, and hypnotic language through ASMR and electronic sound.
To the Farther is the title of this first iteration: a fresh take on texture, form, and the plasticity of reality under digital transformation; it is also a “remix” of my ASMR recordings made in Machine Unlearning (2020).

credits

To the Farther released September 8, 2020 by Erin Gee. Music composition and art by Erin Gee.
We as Waves (2020) released August 2021 by Erin Gee. Music composition and performance by Erin Gee. Text by Jena McLean. Videography by Michel de Silva.

Audio Placebo Plaza: Montreal Edition

Audio Placebo Plaza is a community sound art project conceived by Erin Gee and Julia E Dyck in collaboration with invited artist Vivian Li.

In June 2021 the trio transformed a former perfume shop in the St Hubert Plaza of Montreal into a pop-up radio station, sensory room, therapist office, and audio production studio, uniting these spaces through the aesthetics of a sandwich shop or cafe to offer customizable audio placebo “specials” and “combos” to the public.

Founded upon principles of feminism, socialism, and audio production excellence, Audio Placebo Plaza invited everyday people to book appointments with artists to discuss how an audio placebo could help improve their lives. These appointments were entirely focused on the individual and were in themselves part of the process. Common topics of discussion included increasing productivity, self-esteem, self-care, social interactivity, brain hacking, mitigating insomnia, and pain management, but also one’s aural preferences, sensitivities, and curiosities. Intake sessions were conducted in a blended telematic/in-person structure to determine one’s familiarity and comfort levels with a variety of psychosomatic audio techniques, including but not limited to soundscapes, binaural beats, simulated social interactions, positive affirmations, drone, participatory vocalization, ASMR, guided meditation, and deep listening.

After each consultation, team members met to discuss the participant’s case, fulfill their “prescription,” and divide the labor amongst the three creators. The collaborations were non-hierarchical, adaptive, and simultaneous: one might work on up to four projects at a time, or trade tasks depending on one’s backlog of labor. Labor was divided into recording sounds, conducting intake sessions, writing scripts, performing spoken or sung content, writing music, editing and mixing audio, cleaning and maintaining the shared spaces, and communicating with visitors and walk-ins.

Audio Placebo Plaza Radio was broadcast through a pirate radio transmitter as well as an internet radio station. We broadcast completed placebos, technical advice and performance practices shared during informal critiques, work sessions in progress through the DAW, and occasional informal chats with visitors. Intake sessions were also broadcast (with the consent of visitors).

Through Audio Placebo Plaza, we explore and develop methods for sound and music that propose emotional labor, listening, collaboration, and “music as repair” (see Suzanne Cusick, 2008) as key elements shaping the sonic-social encounter between artists and the public.

Can placebos help?
Does sound have the power to process complex emotions?
Can music give you what you need?
Is this even music?

Presence

(March 2020) I was quarantining intensely during the coronavirus pandemic when Jen Kutler reached out to ask if I would like to collaborate on a new work that simulates presence and attention over the network. We had never met in real life, but we started talking on the internet every day. We eventually built a musical structure that implicates live webcam, endoscopic camera footage, biosensor data, sounds rearranged by biosensor data, ASMR roleplay, and touch stimulation devices delivering small shocks to each artist. We first developed this work through a month-long intensive online residency at SAW Video, in conversation with many amazing artists, curators, and creative people.

In Presence, artists Erin Gee and Jen Kutler reconfigure voice and touch across the internet through a haptic/physical feedback loop, using affective and physical telematics to structure an immersive electronic soundscape through physiological response.

Technical diagram for Presence, Erin Gee and Jen Kutler 2020

Presence is a telematic music composition for two bodies created during the spring of 2020, at the height of confinement and social distancing during the COVID-19 pandemic in Montreal and New York State. This work has been performed for online audiences by both artists while at home (Montreal/New York), featuring Gee and Kutler each attached to biosensors that collect the unconscious behaviours of their autonomic nervous systems, as well as touch stimulation units that make this data tactile for each artist through transcutaneous nerve stimulation.

Audiences are invited to listen attentively to this networked session of physicalized affect through the sonification of each artist’s biodata, which also slowly triggers an ASMR roleplay actively reconfigured by the bodily reactions of each artist. Music and transcutaneous electronic nerve stimulation are triggered by listening bodies; these bodies are in turn triggered by the sounds and electric pulses. Everything in the system is unconscious, triggering and triggered through networked delays, but present. Through this musical intervention the artists invite listeners to imagine the experience and implicate their own bodies in the networked transmission, witnessing the artists touching the borders of themselves and their physical spaces while in isolation.
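The cross-triggering between the two bodies can be caricatured in a few lines of code. This is a hypothetical sketch, not the artists’ actual patch: the mixing rule, the gain, and the one-step networked delay are assumptions made purely for illustration.

```python
# Toy model of Presence's feedback loop: each artist's biosignal is sent across
# the network to drive the other artist's stimulation unit, whose effect feeds
# back into their own biosignal on the next exchange.
# The update rule and gain value are illustrative assumptions.

def step(signal_a, signal_b, gain=0.5):
    """One exchange: each body's next state mixes its own signal with the
    stimulation derived from the other's (a one-step networked delay)."""
    stim_to_a = gain * signal_b   # B's biodata becomes A's stimulation
    stim_to_b = gain * signal_a   # and vice versa
    return (signal_a + stim_to_a) / 2, (signal_b + stim_to_b) / 2

a, b = 1.0, 0.0
for _ in range(4):
    a, b = step(a, b)
print(a, b)  # the two bodies' states converge toward each other
```

The point of the sketch is the mutual, delayed coupling: neither body is a controller, each is both trigger and triggered.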

Technical Credits

web socket for puredata (wspd) created for Presence by Michael Palumbo. Available on GitHub here.

Biodata circuitry and library created by Erin Gee. Available on GitHub here.

Electronic touch stimulation device for MIDI created by Jen Kutler. Available on GitHub here.

Performance built with a combination of puredata (data routing), Processing (biodata generated visuals), Ableton Live (sounds) and OBS (live telematics) by Erin Gee and Jen Kutler.

Performance and Exhibition History

SAW Video “Stay at Home” Residency March-April 2020

Network Music Festival July 17 2020

Fonderie Darling – As part of Allegorical Circuits for Human Software curated by Laurie Cotton Pigeon. August 13 2020


Echo Grey

Echo Grey is a composition for four voices, feedback musical instruments, and tape part (which features the sounds of a broken image file).

World premiere at Vancouver New Music with Andrea Young, Marina Hasselberg, Sharon Chohi Kim, Micaela Tobin, Michael Day, Braden Diotte, and Erin Gee in November 2016. It has also been performed at Open Space Gallery (Victoria), and Neworks (Calgary).

Moving between word and utterance, the echo’s voice exceeds the signal itself and speaks to a deeper engagement with materiality. In Echo Grey, I composed a series of vocal patterns that emerge directly from breath as raw material, the movement of intake and exhalation made audible. The choir’s engagement with mechanistic, impossible repetition eventually negates the signal: all that is left is the lungs and vocal vibrations of the individual, who gasps, cries in defeat, and whoops in ecstasy. These human voices are punctuated by the feedback of microphones and amplified instruments, and by a tape track composed through process: a bouncing of data back and forth between visual and audio software that eventually results in nothing but glitched statements. This tape track is analogous to the squealing proximity of sender to receiver in the scored feedback parts. The colour grey in the work’s title is inspired by the back-and-forth motion of a 2HB pencil stroking endlessly across an empty pad of paper.

Echo Grey 2016


Song of Seven

A composition for children’s choir featuring seven voices and seven sets of biodata with piano accompaniment.

In this song, young performers contemplate an emotional time in their lives and recount this memory as an improvised vocal solo. The choir is instructed to enter into a meditative state during these emotional solos, deeply listening to the tale and empathizing with the soloist, using imagination to recreate the scene. Choir members are attached to a musical instrument I call the BioSynth, a small synthesizer that sonifies each member’s heartbeat and sweat release as pre-programmed tones. Sweat release, often acknowledged as a robust measure of emotional engagement, is signaled by overtones that appear and reappear over a drone; meanwhile the heartbeat of each chorister is sounded according to blood flow, providing a light percussion.

The musical score combines traditional music notation with vocal games and rhythms determined not by the conductor or score but by beatings of the heart and bursts of sweat. Discreet flashing lights on the synthesizer boxes in front of the choristers allow the singers to discern the rhythms and patterns of their hearts and sweat glands, permitting the composition to incorporate the rhythms of the body into the final score as markers that trigger sonic events.
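The BioSynth’s mapping of body to tone, as described above, might be sketched as follows. This is a hypothetical illustration, not the actual BioSynth firmware: the threshold, drone frequency, and overtone rule are assumed values chosen only to show the shape of the mapping.

```python
# Hypothetical sketch of BioSynth-style sonification: heartbeats become
# percussive triggers, and sweat (GSR) bursts select overtones above a drone.
# All numeric values here are illustrative assumptions.

DRONE_HZ = 110.0  # assumed drone fundamental

def heartbeat_triggers(blood_flow, threshold=0.6):
    """Return sample indices where blood flow crosses the threshold upward."""
    triggers = []
    for i in range(1, len(blood_flow)):
        if blood_flow[i - 1] < threshold <= blood_flow[i]:
            triggers.append(i)
    return triggers

def gsr_overtone(gsr_delta, step=0.05, max_partial=8):
    """Map the size of a sweat burst to an overtone (partial) of the drone."""
    partial = min(max_partial, 1 + int(gsr_delta / step))
    return DRONE_HZ * partial

flow = [0.2, 0.5, 0.7, 0.4, 0.3, 0.65, 0.5]
print(heartbeat_triggers(flow))  # beats at indices 2 and 5
print(gsr_overtone(0.12))        # third partial: 330.0 Hz
```

In the actual instrument each chorister has their own pre-programmed tones; the sketch shows only the general body-to-sound logic.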

This choral composition was workshopped over a one-week residency at the LIVELab (McMaster University) with selected members of the Hamilton Children’s Choir, and facilitated by Hamilton Artists Inc. with support from the Canada Council for the Arts.

For more information

Hamilton Children's Choir
Daniel Àñez (Spanish biography)
Hamilton Artists' Inc
LIVELab
Canada Council for the Arts

Piano accompanist: Daniel Àñez
Hardware design: Martin Peach
Software design: Erin Gee


Larynx Series

Larynx1, Larynx2, Larynx3, Larynx4 (2014)

Epson UltraChrome K3 ink on acid-free paper.

Edition of 5.

86 x 112 cm.

Communication is a collaborative process between listener and speaker that implicates both their sensory bodies. In consideration of what Roland Barthes called the “grain” of the voice, I think about how contemporary technological tools listen to and reproduce this naturalized, sonorous voice, concretize it, compress it, amplify it, and sometimes distort it. What we consider our voice in a technologically mediated environment is a visual-vocal-technological assemblage that implicates amplification, scale, human and digital bodies, and networks. The multiplication and proliferation of voice on someone else’s device happens in asynchronous ways, much as a vocal score is a vocal performance that lies crystallized and dormant until activated by human action.

This series of printed works is a set of vocal quartets created from the original material of the human voice, the larynx, which was amplified/reproduced/echoed through visual perception processes in machine and human cognizers and re-performed by multiple human singers. In endoscopic photography the flesh material of the larynx is extended through the sensory mechanisms of a machine: light bounces off the flesh of the larynx and is interpreted by a camera as pixel data. This digital image is made of raster pixels faithful to their fleshy origins but limited in detail. If one amplifies the raster image of the voice (zooms in), the image reveals its materiality as a technical assemblage. I transformed the raster image into a vector in order to continue playing with bouncing machine processes off one another, to “voice” how a machine might perceive this human larynx. While the vectorization process I used eliminated the fleshy details of the original larynx, the resulting image emphasized the larynx’s architectural structures, which now more closely resembled a topographical map or circuit board. This technologically processed version of the larynx could be infinitely amplified or diminished without loss or distortion.

At this point I detected an unexpected feature: my associative, human perception could see markings that resembled Western notation at the edges of this transformed image of the human voice, complete with staves, bar lines, and notes. My transcription process divided each bar into four equal parts, then transcribed rhythms in a linear relationship to where the small note-like marks appeared horizontally in common 4/4 time. Pitches were interpreted as they appeared vertically on the abstracted staves.

Since there exist four sides to each two-dimensional image, there were four staves for each representation of the larynx in the series. I set this music into four separate vocal partitions for choral song: returning this technologically amplified process of voicing back into multiple human throats.
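The transcription rule described above (four beats per bar read horizontally, pitch read vertically on the abstracted stave) can be sketched as a small program. The pitch ladder, image dimensions, and function names below are illustrative assumptions, not the exact parameters of my process.

```python
# Hypothetical sketch of the Larynx transcription rule: note-like marks on a
# stave are quantized horizontally into four quarter-note slots per 4/4 bar,
# and vertically into staff pitches. All values are illustrative.

PITCHES = ["E4", "F4", "G4", "A4", "B4", "C5", "D5", "E5", "F5"]  # staff positions, bottom up

def transcribe(marks, bar_width=100.0, staff_height=40.0):
    """Map (x, y) mark positions within one bar to (beat, pitch) pairs."""
    notes = []
    for x, y in sorted(marks):
        beat = int(x / bar_width * 4) + 1                 # quarter-note slots 1-4
        pos = int(y / staff_height * (len(PITCHES) - 1))  # vertical position -> pitch index
        notes.append((beat, PITCHES[min(pos, len(PITCHES) - 1)]))
    return notes

marks = [(10.0, 0.0), (40.0, 20.0), (85.0, 40.0)]
print(transcribe(marks))  # [(1, 'E4'), (2, 'B4'), (4, 'F5')]
```

Since each two-dimensional image has four sides, running a rule like this along each edge yields the four staves of a vocal quartet.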

Exhibition/Performance history:

Tellings: A Posthuman Vocal Concert (performance), Toronto Biennial, November 2019. Curated by Myung-Sun Kim and Maiko Tanaka.

Vocales Digitales (solo exhibition), March 26 – May 14, 2016. Hamilton Artists’ Inc., Hamilton, Canada. Curated by Caitlin Sutherland.

Rhubarb, rhubarb, peas and carrots (premiere performance), July 17 – September 5, 2015. Dunlop Art Gallery, Regina, Canada. Curated by Blair Fornwald. Larynx Songs premiered with singers Erin Gee, Carrie Smith, Kristen Smith, and Kaitlin Semple.

Erin Gee and Kelly Andres, August 25 – October 24, 2014. Cirque du Soleil Headquarters, Montreal, Canada. Curated by Eliane Elbogen.

Voice of Echo (solo exhibition), 2014. Gallerywest, Toronto, Canada. Curated by Evan Tyler.

Collections

Larynx3 (edition 1/5) was purchased by the Saskatchewan Arts Board for its permanent collection in 2019.


Swarming Emotional Pianos

A looming projection of a human performer surrounded by six musical chime robots: their music is driven by the shifting rhythms of the performer’s emotional body, transformed into data and signal that activates the motors of the ensemble.

(2012 – ongoing)

Aluminium tubes, servo motors, custom mallets, Arduino-based electronics, iCreate platforms

Approximately 27” x 12” x 12” each

Swarming Emotional Pianos is a robotic installation work that features performance documentation of an actress moving through extreme emotions in five-minute intervals. During these timed performances of extreme surprise, anger, fear, sadness, sexual arousal, and joy, Gee used her own custom-built biosensors to capture the way each emotion affects the heartbeat, sweat, and respiration of the actress. The data from this session drives the musical outbursts of the robotics surrounding the video documentation of the emotional session. Visitors are presented with two windows into the emotional state of the actress: a large projection of her face, paired with a stereo recording of her breath and the sounds of the emotional session, and the normally inaccessible emotional world of physiology, the physicality of sensation as represented by the six robotic chimes.

Micro-bursts of emotional sentiment are amplified by the robots, providing an intimate and abstract soundtrack for this “emotional movie”. These mechanistic, physiological effects of emotion drive the robotics, illustrating the physicality and automation of human emotion. By displaying both of these perspectives on human emotion simultaneously, I am interested in how the rhythmic pulsing of the robotic bodies confirms or denies the visibility and performativity of the face. Does emotion therefore lie in the visibility of facial expression, or in the patterns of bodily sensation? Is the actor sincere in her performance if the emotion is felt rather than displayed?

Custom open-source biosensors that collect heart rate and signal amplitude, respiration amplitude and rate, and galvanic skin response (sweat) have been in development by Gee since 2012. Her GitHub page offers access to the technology if you would like to try it for yourself or contribute to the research.
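As a rough illustration of how a biosignal reading might be turned into a robot chime strike, consider the sketch below. The channel-to-robot assignment and the velocity scaling are invented for this example and do not reflect the installation’s actual code.

```python
# Hypothetical mapping from a normalized biosensor reading to a chime strike,
# in the spirit of the installation's data-driven robotics. Channel order,
# robot assignment, and the MIDI-style velocity range are assumptions.

CHANNELS = ["heart", "gsr", "respiration"]

def strike_command(channel, value, robots=6):
    """Turn a normalized sensor value (0.0-1.0) into (robot_id, velocity)."""
    if not 0.0 <= value <= 1.0:
        raise ValueError("expected a normalized reading")
    # each channel addresses a pair of robots; intensity picks within the pair
    robot_id = CHANNELS.index(channel) * 2 + (1 if value > 0.5 else 0)
    velocity = int(1 + value * 126)  # MIDI-style 1-127 strike strength
    return robot_id, velocity

print(strike_command("gsr", 0.8))  # (3, 101)
```

Distributing the three physiological channels across six robots is one plausible way a "swarm" could voice a single body's data.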

Credits

Thank you to the following for your contributions:

  • In loving memory of Martin Peach (my robot teacher) – Sébastien Roy (lighting circuitry) – Peter van Haaften (tools for algorithmic composition in Max/MSP) – Grégory Perrin (Electronics Assistant)
  • Jason Leith, Vivian Li, Mark Lowe, Simone Pitot, Matt Risk, and Tristan Stevans for their dedicated help in the studio
  • Concordia University, the MARCS Institute at the University of Western Sydney, Innovations en Concert Montréal, Conseil des Arts de Montréal, Thought Technology, and AD Instruments for their support.

Swarming Emotional Pianos (2012-2014) Machine demonstration March 2014 – Eastern Bloc Lab Residency, Montréal

Anim.OS

(2012)

Generative software choir installation in collaboration with Oliver Bown

Inspired by excerpts of Elizabeth Grosz’s book “Architecture from the Outside”, I made recordings of myself singing text that referenced insideness, outsideness, and flexible structures. These recordings were arranged by composer Oliver Bown into networked choral software.

Anim.OS is a networked computer choir developed by Oliver Bown (Sydney) and Erin Gee (Montreal) in 2012. Videography and sound recording by Shane Turner (Montreal).

This is documentation of one of the first tests for improvisation and control of the choir at the University of Sydney.


Orpheux Larynx

(2011)

Vocal work for three artificial voices and soprano, feat. Stelarc.

Music by Erin Gee, text by Margaret Atwood.

I made Orpheux Larynx while in residence at the MARCS Auditory Laboratories at the University of Western Sydney, Australia, in the summer of 2011. I was invited by Stelarc to create a performance work with an intriguing device he was developing there called the Prosthetic Head, a computerized conversational agent that responds to keyboard-based chat input with an 8-bit baritone voice. I worked from the idea of creating a choir of Stelarcs, and developed music for three voices by digitally manipulating the avatar’s voice. Eventually Stelarc’s avatar voices were given the bodies of three robots: a mechanical arm, a modified Segway, and a commercially available device called a PPLbot. I sang along with this avatar choir while carrying my own silent avatar with me on a digital screen.

It is said that after Orpheus’ head was ripped from his body, he continued singing as his head floated down a river. He was rescued by two nymphs, who lifted his head to the heavens to become a star. In this performance, all the characters (Stelarc, my voice, Orpheus, Eurydice, the nymphs) are blended into intersubjective robotic shells that speak and sing on our behalf. The flexibility of the avatar allows a plurality of voices to emerge from relatively few physical bodies, blending past subjects into present and possible future subjects. Orpheus is tripled to become a multi-headed Orpheux: simultaneously disembodied head, humanoid nymph, and deceased Eurydice. The meaning of the work lies in the dissonant proximity between the past and present characters, as well as in my own identity inhabiting the bodies and voices of Stelarc’s prosthetic self.

Credits

Music, video and performance by Erin Gee. Lyrics “Orpheus (1)” and “Orpheus (2)” by Margaret Atwood. Robotics by Damith Herath. Technical Support by Zhenzhi Zhang (MARCs Robotics Lab, University of Western Sydney). Choreography coaching by Staci Parlato-Harris.

Special thanks to Stelarc and Garth Paine for their support in the creation of the project.

This research project is supported by the Social Sciences and Humanities Research Council of Canada and MARCS Auditory Labs at the University of Western Sydney. The Thinking Head project is funded by the Australian Research Council and the National Health and Medical Research Council.

Music: Orpheux Larynx © 2011. Lyrics are the poems “Orpheus (1)” and “Orpheus (2)” by Margaret Atwood, from the poetry collection Selected Poems, 1966 – 1984, currently published by Oxford University Press, © 1990 by Margaret Atwood. In the United States, the poems appear in Selected Poems II, 1976 – 1986, currently published by Houghton Mifflin, © 1987 by Margaret Atwood. In the UK, these poems appear in Eating Fire: Selected Poetry 1965 – 1995, currently published by Virago Press, © 1998 by Margaret Atwood. All rights reserved.

BodyRadio

(2011)

Four-part score for electronic voices in organic bodies, debuted as part of New Adventures in Sound Art’s Deep Wireless Festival of Transmission Art, Toronto, Canada

Body Radio is a composition for four performers that reverses the interiority/exteriority of a radio, which is a human voice in an electronic body. Small wireless microphones are placed directly in the mouths of the performers, who each face a guitar amplifier. The performers control the sensitivity of both the amplifier’s receiving function and the microphone’s sending function in accordance with the score. The final sounds are a combination of inner-mouth noises, breathing, and varying pitches of feedback controlled by the opening and closing of mouths.