Performance

AFFECT FLOW

AFFECT FLOW (2022)
Performance at MUTEK Montreal 2023. Photography by Vivien Gaumand.

2022

AFFECT FLOW is a music performance work of approximately 30 minutes that initiates listeners into a state of “non-naturalist emotion”: emotional manufacture as a technology for survival or pleasure. It is a hybrid of electroacoustic music, live-spoken verbal suggestion, an ensemble of live biofeedback performed on hardware synthesizers, and song.

In AFFECT FLOW I use psychological hacks borrowed from method acting and clinical psychology to move beyond “natural” emotion, combining biofeedback music paradigms with group participation through folk hypnosis, verbal suggestion, roleplay, song, and textural sound.

These performance techniques, which I call “wetware,” challenge the authoritarian aura of quantification, transforming biofeedback into a feminist space of posthumanist connection and expression.

The biofeedback performers (up to 10) in AFFECT FLOW are volunteers, referred to as surrogates, who meet me a half hour before the performance. After a brief musical interlude, I invite the audience to join us in guided visualization and hypnosis led by my voice. Each surrogate operates a BioSynth, a musical instrument of my design that uses physiological markers like heart rate, breathing, and skin conductance as control parameters for electronic sound. The mechanics of the BioSynths are explained clearly, allowing listeners to perceive the shifting mood in the room through the bodies of the performers. This collaborative interplay of bodies gives rise to affect as an ecological relation, transcending individual subjectivity.

A lightbulb illuminates at the feet of each performer when their signals are amplified. Because I control the audio output of each body via a mixing board, I can highlight solos, duets, trios, and ensemble moments live.
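The core idea of the BioSynth – raw biosignals normalized into control parameters for electronic sound – can be sketched in miniature. This is a hypothetical illustration only (the function name, signal ranges, and mappings are my assumptions, not the actual open-source BioSynth firmware):

```python
def biosynth_controls(heart_rate_bpm, skin_conductance_us,
                      hr_range=(50.0, 120.0), sc_range=(1.0, 20.0)):
    """Normalize raw biosignals into 0..1 control values for a synth voice.

    Illustrative mapping: a faster heart rate raises pitch, and higher
    skin conductance (arousal) brightens the timbre. Ranges are placeholder
    defaults, not calibrated values.
    """
    def norm(x, lo, hi):
        # Clamp the normalized value so out-of-range readings stay in 0..1.
        return min(max((x - lo) / (hi - lo), 0.0), 1.0)

    return {
        "pitch": norm(heart_rate_bpm, *hr_range),
        "brightness": norm(skin_conductance_us, *sc_range),
    }
```

In a performance setting, values like these would be polled from the sensors many times per second and routed to the synthesis engine; a mixing board downstream then decides which bodies are heard.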

Credits

Affect Flow (2022)
Music composition and performance by Erin Gee.

Dramaturgy and text by Jena McLean. Poetry by Andrew C. Wenaus.

BioSynth affective hardware synthesizers are an open-source project by Erin Gee. Programming for this iteration by Etienne Montenegro with sonification programming by Erin Gee. PCB design by Grégory Perrin.

Click here for the BioSynth GitHub.

Click here for Tech Rider

Performances:

International Symposium of Electronic Art. CCCB Barcelona, ES, May 2022.

Society for Art and Technology, Montreal CA, July 2022.

Vancouver New Music – Orpheum Annex, Vancouver CA November 2022.

Electric Eclectics Festival, Meaford ON, CA, August 2023.

MUTEK Montreal, CA, August 2023.

AFFECT FLOW (2022) at Vancouver New Music, Vancouver.

We as Waves

We as Waves (2021)
Premiere performance at Akousma Festival, Montreal.
Photography by Caroline Campeau.

2020

ASMRtronica is an ongoing project developed in the artist’s home studio during the coronavirus pandemic: a manifestation of a desire for intimacy in sound when touch was not possible. It is a style of music applied across several works as Gee develops her own vocabularies of psychosomatic performance.

Through ASMRtronica, Gee brings to life a combination of electroacoustic music and the sounds of Autonomous Sensory Meridian Response (ASMR) videos: clicks, whispers, soft spoken voice, taps, and hand gestures inspired by hypnosis, tactility, intimacy, and verbal suggestion. Through ongoing development of this genre, she explores the sonic limits of the sensorial propositions of ASMR, journeying into embodied and unconscious feedback loops in sound.

Credits

We as Waves (2020)
Released August 2021 by Erin Gee.
Music composition and performance by Erin Gee. Text by Jena McLean. Videography by Michel de Silva.

To the Farther (2020)
Released September 8, 2020 by Erin Gee.
Music composition and art by Erin Gee.

We as Waves

We as Waves (2020) is a collaboration between myself and queer playwright Jena McLean. The text in this work is inspired by an essay by Tara Rodgers, feminist theorist of electronic music. What does it mean to enter into an affective relationship of touch with sound? The work embodies a dark narrative of sonic becoming, aided by hypnosis and a physiological relationship to sound and voice, and closes with the following quotes from queer theologian Catherine Keller:

“As the wave rolls into realization, it may with an uncomfortable passion
fold its relations into the future: the relations, the waves of our possibility,
comprise the real potentiality from which we emerge…”

“We are drops of an oceanic impersonality. We arch like waves,
like porpoises.”

We as Waves (2020)

To the Farther

In September 2020 I launched To the Farther as part of MUTEK Montreal’s online exhibition Distant Arcades. It is the first in a series of musical works that explore the limits of tactile whispers, proximity, and hypnotic language through ASMR and electronic sound.

To the Farther is the title of this first iteration: a fresh take on texture, form, and the plasticity of reality under digital transformation, and a “remix” of my ASMR recordings made for Machine Unlearning (2020).

To the Farther (2020)

Presence

Presence (2020)
Screen capture from performance at Network Music Festival 2020. Online.

2020

In Presence, artists Erin Gee and Jen Kutler reconfigure voice and touch across the internet through a haptic/physical feedback loop, using affective and physical telematics to structure an immersive electronic soundscape through physiological response.

(March 2020) I was quarantining intensely during the coronavirus pandemic when Jen Kutler reached out to ask if I would like to collaborate on a new work simulating presence and attention over the network. We had never met in real life, but we started talking on the internet every day. We eventually built a musical structure that implicates live webcam and endoscopic camera footage, biosensor data, sounds rearranged by that data, ASMR roleplay, and touch stimulation devices delivering small shocks to each artist. We first developed this work through a month-long intensive online residency at SAW Video, in conversation with many amazing artists, curators, and creative people.

Presence is a telematic music composition for two bodies created during the spring of 2020, at the height of confinement and social distancing during the COVID-19 pandemic in Montreal and New York state. The work has been performed for online audiences by both artists from home (Montreal/New York), with Gee and Kutler each attached to biosensors that collect the unconscious behaviours of their autonomic nervous systems, as well as touch stimulation units that make this data tactile for each artist through transcutaneous nerve stimulation.

Audiences are invited to listen attentively to this networked session for physicalized affect through the sonification of each artist’s biodata, which also slowly triggers an ASMR roleplay actively reconfigured by the bodily reactions of each artist. Music and transcutaneous electrical nerve stimulation are triggered by listening bodies; these bodies are in turn triggered by the sounds and electric pulses. Everything in the system is unconscious, triggering and triggered through networked delays, yet present. Through this musical intervention the artists invite listeners to imagine the experience and implicate their own bodies in the networked transmission, witnessing the artists touching the borders of themselves and their physical spaces while in isolation.
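The feedback structure – each artist receiving the other’s biodata only after a network delay – can be sketched as a toy simulation. This is a schematic of the cross-coupling only (the actual routing happens in puredata over websockets; names and the fixed-delay model here are my assumptions):

```python
from collections import deque

def networked_loop(a_signal, b_signal, delay=2):
    """Cross-couple two biosignal streams with a fixed network delay.

    Each body "hears" (and is pulsed by) the other's signal, shifted
    by `delay` time steps. Returns what A receives and what B receives.
    """
    # Pre-fill delay buffers with silence (zeros).
    buf_a = deque([0.0] * delay)
    buf_b = deque([0.0] * delay)
    received_by_a, received_by_b = [], []
    for a, b in zip(a_signal, b_signal):
        buf_a.append(a)
        buf_b.append(b)
        received_by_a.append(buf_b.popleft())  # A gets B's delayed data
        received_by_b.append(buf_a.popleft())  # B gets A's delayed data
    return received_by_a, received_by_b
```

Even in this toy form, the structure shows why the piece feels both connected and out of sync: each body is always reacting to the other’s slightly past state.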

Credits

web socket for puredata (wspd) created for Presence by Michael Palumbo. Available on GitHub here.

Biodata circuitry and library created by Erin Gee. Available on GitHub here.

Electronic touch stimulation device for MIDI created by Jen Kutler. Available on GitHub here.

Performance built with a combination of puredata (data routing), Processing (biodata-generated visuals), Ableton Live (sounds), and OBS (live telematics) by Erin Gee and Jen Kutler.

Presence was created in part with the support from SAW Video artist-run centre, Canada.

Exhibition/Performance history

SAW Video “Stay at Home” Residency March-April 2020

Network Music Festival July 17 2020

Fonderie Darling – As part of Allegorical Circuits for Human Software curated by Laurie Cotton Pigeon. August 13 2020

Video

Presence (2020)
Performance by Erin Gee and Jen Kutler at Network Music Festival.

Gallery

Machine Unlearning

Vision calibration from Machine Unlearning (2020).
Photography by Elody Libe. Image courtesy of the artist.

2020

In Machine Unlearning, the artist greets the viewer and slowly offers them a unique neural conditioning “treatment”: sonically reproducing the unraveling outputs of an LSTM algorithm as it “unlearns” through whispering, moving backwards in time through its epochs of training.

This aural treatment is couched in a first-person roleplay scenario that grounds the viewer through a series of simple audiovisual tests. At no point is the neural network technology “seen” – it is instead performed by a human interlocutor, translated into affective vocality and whispered text. The algorithm was created by media artist Sofian Audry and trained on the text of Emily Brontë’s novel Wuthering Heights (1847), chosen in part for its richly poetic syntax, but also for its feminine vocality and conceptual themes of love and intergenerational trauma.

Machine Unlearning is a novel combination of neural network technologies and the popular internet genre of Autonomous Sensory Meridian Response (ASMR). ASMR is a social media genre that has developed largely through massive metrics – upvotes, clicks, comments, subscribes, and likes – in response to audiovisual stimuli that create feelings of mild euphoria, relaxation, and pleasure. ASMR fans seek out video content that causes the physiological reaction of “tingles”: tingling sensations across the skin, a mild body high, or simply a means of falling asleep. Gee considers ASMR a form of psychosomatic body hacking.

By combining machine learning with ASMR, Gee draws parallels between cutting-edge autonomous/non-conscious algorithms and the autonomous/unconscious functions of the human body. Just as ASMRtists use specific sounds and visual patterns in their videos to “trigger” physical reactions in the viewer, machine learning algorithms also unconsciously respond to patterns perceived through limited senses in order to develop learning (and unlearning) results.

The artist’s emphasis on whispering the textual outputs of the algorithm as it slowly “unlearns” allows the listener to grasp the materiality of machine learning processes at a human level, but also at a subconscious one: allowing one’s body to be mildly and charmingly “hacked” through soft and gentle play.
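The idea of “unlearning” – replaying a model’s saved states in reverse epoch order so its output degrades from coherence back toward noise – can be sketched as follows. The checkpoint structure here is hypothetical; the actual algorithm is Sofian Audry’s custom LSTM:

```python
def unlearning_passages(checkpoints, seed, n_chars=80):
    """Yield one generated passage per saved training epoch, most-trained first.

    `checkpoints` maps an epoch number to a text-generation function
    (seed, length) -> str, i.e. a snapshot of the model at that epoch.
    Iterating in descending epoch order moves backwards through training
    time: coherent prose unravels into earlier, noisier outputs.
    """
    for epoch in sorted(checkpoints, reverse=True):
        yield epoch, checkpoints[epoch](seed, n_chars)
```

In the performance, each successive passage in this reverse traversal is whispered aloud, so the audience hears the model forgetting.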

The word “intelligence” in the metaphor of AI points to higher functions of consciousness that algorithms do not possess. While algorithms have not meaningfully achieved humanistic consciousness to date, today’s algorithms act autonomously on sensory information, processing data from their environments in unconscious, automatic ways. The human brain also responds unconsciously and automatically to sensory data in its environment: even if you are not conscious of how hot a stove is, if you place your hand on it, your hand will automatically pull away. These unconscious, physiological actions in the sensory realm point to an area of common experience between algorithms and humans. For more on these ideas, see the work of postmodern literary critic N. Katherine Hayles in her 2017 book Unthought: The Power of the Cognitive Nonconscious. In this sense I wonder if the expression “autonomous intelligence” makes more sense than “artificial intelligence.” Like posthumanist feminist Rosi Braidotti, however, I am deeply suspicious of the humanist pride our species takes in the word “intelligence” as something that confers special status and justification for dominating other forms of life on earth.

Credits

Photography and videography by Elody Libe.

Production Support: Machine Unlearning video installation was produced at Perte de Signal with the support of the MacKenzie Art Gallery for the exhibition To the Sooe (2020) curated by Tak Pham.

The roleplay performance was developed during my artistic residency at Locus Sonus, École Supérieure d’art d’Aix-en-Provence, and Laboratoire PRISM.

Custom LSTM Algorithm created by media artist Sofian Audry

Video

Machine Unlearning (2020)
Videography by Elody Libe

Gallery

This work was first developed as a performance that debuted at Cluster Festival, Winnipeg in 2019. During live performance, each audience member dons a pair of wireless headphones. The performance allows audience members to see the ASMR “result” of the performance for camera, simultaneously with my “backstage” manipulation of props and light in real time.

BioSolo

BioSolo 2016
Photography: Wren Noble

2016

Using the BioSynth, I improvised a set for breath, voice, and my sonified heartbeat and sweat release at No Hay Banda, on an evening that also featured the work of composer Vinko Globokar.

The improvisation is sparing: the goal is to exploit interesting rhythmic moments between heavy breath-song and the heartbeat, while exploring the limits of respiratory activity and its effects on my physiology.

Exhibition/Performance history

BioSolo was first performed at No Hay Banda series in Montreal at La Sala Rossa, organized by Daniel Àñez and Noam Bierstone.

Gallery

BioSolo 2016
Photography: Wren Noble

Song of Seven

Song of Seven (2016)

2016

A composition for children’s choir featuring seven voices and seven sets of biodata with piano accompaniment.

In this song, young performers contemplate an emotional time in their lives and recount this memory as an improvised vocal solo. The choir is instructed to enter a meditative state during these emotional solos, listening deeply to the tale and empathizing with the soloist, using imagination to recreate the scene. Choir members are attached to a musical instrument I call the BioSynth, a small synthesizer that sonifies each member’s heartbeats and sweat release as pre-programmed tones. Sweat release, often acknowledged as a robust measure of emotional engagement, is signaled by overtones that appear and disappear over a drone; meanwhile the heartbeat of each chorister is sounded according to blood flow, providing a light percussion.

The musical score combines traditional music notation with vocal games and rhythms determined not by the conductor or score but by beatings of the heart and bursts of sweat. Discreet flashing lights on the synthesizer boxes in front of the choristers allow the singers to discern the rhythms and patterns of their hearts and sweat glands, permitting the composition to incorporate the rhythms of the body into the final score as markers that trigger sonic events.
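The sweat-to-overtone mapping described above might be sketched like this. The function name, partial counts, and ranges are hypothetical illustrations, not the actual BioSynth sonification code:

```python
def drone_partials(base_hz, sweat_level, max_partials=8):
    """Return the frequencies of a drone's sounding partials.

    `sweat_level` is a normalized 0..1 reading of skin conductance:
    more sweat release (emotional engagement) adds more overtones
    above the fundamental, brightening the drone.
    """
    level = min(max(sweat_level, 0.0), 1.0)  # clamp sensor noise
    n = 1 + round(level * (max_partials - 1))
    # Harmonic series above the fundamental: f, 2f, 3f, ...
    return [base_hz * k for k in range(1, n + 1)]
```

A calm chorister would sound only the fundamental; a burst of sweat momentarily stacks overtones above the drone, then lets them fall away.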

Credits

Piano accompanist: Daniel Àñez

Hardware design: Martin Peach

Software design: Erin Gee

Performance history

This choral composition was workshopped over a one-week residency at the LIVELab (McMaster University) with selected members of the Hamilton Children’s Choir, and facilitated by Hamilton Artists Inc. with support from the Canada Council for the Arts.

Links

Hamilton Children's Choir
Daniel Àñez (Spanish biography)
Hamilton Artists' Inc
LIVElab
Canada Council for the Arts

Video

Song of Seven (2016)

Scores

Song of Seven (2016)

Gallery

Song of Seven (2016)

Erin Gee - Larynx Series

Larynx Series

Larynx1, Larynx2, Larynx3, Larynx4 (2014)
Epson UltraChrome K3 ink on acid-free paper.
Edition of 5.
86 x 112 cm.

2014

What we consider our voice in a technologically mediated environment is a visual-vocal-technological assemblage that implicates amplification, scale, human and digital bodies, and networks. The multiplication and proliferation of voice on someone else’s device happens in asynchronous ways, much as a vocal score is a vocal performance that lies crystallized and dormant until activated by human action.

This series of printed works is a set of vocal quartets created from the original material of the human voice, the larynx, which was amplified/reproduced/echoed through visual perception processes in machine and human cognizers and re-performed by multiple human singers. In endoscopic photography the flesh of the larynx is extended through the sensory mechanisms of a machine: light bounces off the larynx and is interpreted by a camera as pixel data. This digital image is made of raster pixels faithful to their fleshy origins but limited in detail. If one amplifies the raster image of the voice (zooms), the image reveals its materiality as a technical assemblage.

I transformed the raster image into a vector in order to continue bouncing machine processes off one another, to “voice” how a machine might perceive this human larynx. While the vectorization process eliminated the fleshy details of the original larynx, it emphasized the larynx’s architectural structures, which now more closely resembled a topographical map or circuit board. This technologically processed version of the larynx could be infinitely amplified or diminished without loss or distortion. At this point I detected an unexpected feature: my associative, human perception could see markings resembling Western notation at the edges of this transformed image of the human voice, complete with staves, bar lines, and notes. My transcription process divided each bar into four equal parts, transcribing rhythms in a linear relationship to where the small note-like marks appeared horizontally in common 4/4 time. Pitches were interpreted as they appeared vertically on the abstracted staves.

Since each two-dimensional image has four sides, there were four staves for each representation of the larynx in the series. I set this music into four separate vocal parts for choral song: returning this technologically amplified process of voicing back into multiple human throats.
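The transcription rule described above – four beats per bar from a mark’s horizontal position, pitch from its vertical position – could be encoded schematically as follows. This is one hypothetical formalization of the process, not the actual method used for the scores:

```python
def transcribe_stave(marks, bar_width, stave_pitches):
    """Map (x, y) mark positions along one image edge to (beat, pitch) pairs.

    x is quantized into the four beats of a 4/4 bar; y indexes a pitch
    position on the abstracted stave. All parameters are illustrative.
    """
    beat_width = bar_width / 4
    notes = []
    for x, y in marks:
        beat = min(int(x // beat_width), 3)  # clamp to beats 0..3
        pitch = stave_pitches[min(y, len(stave_pitches) - 1)]
        notes.append((beat, pitch))
    return notes
```

Running one edge of an image through a rule like this yields a single vocal part; the four edges together yield the quartet.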

Credits

???

Exhibition/Performance history

Toronto Biennial, November 2019.

Vocales Digitales – Solo exhibition. March 26 – May 14 2016, Hamilton Artists’ Inc.: Hamilton, Canada. Curated by Caitlin Sutherland.

(Premiere performance) Rhubarb, rhubarb, peas and carrots. July 17 – September 5, 2015. Dunlop Art Gallery: Regina, Canada. Curated by Blair Fornwald. Larynx Songs premiered with singers Erin Gee, Carrie Smith, Kristen Smith, and Kaitlin Semple.

Erin Gee and Kelly Andres. August 25 – October 24, 2014. Cirque du Soleil Headquarters: Montreal, Canada. Curated by Eliane Elbogen.

Voice of Echo (Solo Exhibition), 2014. Gallerywest. Toronto, Canada. Curated by Evan Tyler.

(Performance) Tellings: A Posthuman Vocal Concert. Toronto Biennial. Curated by Myung-Sun Kim and Maiko Tanaka.

Collections

Larynx3 (edition 1/5) was purchased by the Saskatchewan Arts Board for their permanent collection in 2019.

Gallery

Photo Credits
???

Erin Gee - Vocaloid Gig At Nocturne (X + 1)

Gig Vocaloid

Gig Vocaloid (2015)
Vocaloid Gig At Nocturne (X + 1)

2015

A video-text pop band from a dystopic future where the human voice is lost and pop music reigns supreme.

Virtual voices are key for these pop stars. Dancing, costumed performers carry tablets that display the human larynx and song lyrics as they dance in sync.

The project is inspired by virtual pop stars such as Hatsune Miku, which exist equally as distributed visual media avatars (holograms, merchandise) and as digital software tools for public, fan-based synthesized vocal creation. GIG VOCALOID is also inspired by boy and girl pop bands, in which individual voices and musicality are often superseded by a pop “character.” This is especially true of the Japanese pop group AKB48, whose 48 female members are voted on by the public for the right to solo singing and “leadership” within the group.

In this pop music context, celebrity character, fashion, and visual appeal are more important than the human singing voice itself, which is often replaced by synthesizers and pitch correction. GIG VOCALOID invokes a fantasy posthumanist future where the human voice is lost, subjectivity is dead, and everyone is celebrating.

Externalizing the human voice beyond the preciousness of the human body, the larynx (typically a hidden, interior aspect of vocal performance) is displayed prominently on tablets. “Lyrics” to the song flash aleatorically through these videos, making the human performers a support for digital artwork. GIG VOCALOID re-localizes the voice beyond the borders of the flesh body in an infectious avatar-dream.

Performance history

GIG VOCALOID is a virtual pop band that had its first performance at the Musée d’art contemporain de Montréal in February 2015 at X + 1, an evening of Internet-inspired art.

Gallery

Gig Vocaloid (2015)
Vocaloid Gig At Nocturne (X + 1)

Erin Gee - 7 Nights of Unspeakable Truth at Nuit Blanche Toronto 2013

7 Nights of Unspeakable Truth

7 Nights of Unspeakable Truth at Nuit Blanche Toronto (2013)
7-channel audio installation, woven blankets, text work
8 hours duration

2013

It’s a search for disembodied voices in technotongues.

7 Nights of Unspeakable Truth is a recording of dusk-till-dawn searches for numbers stations on shortwave radio frequencies. Arranged in order, from day one to day seven, the installation allows one to physically walk through seven evenings of shortwave, synchronized in their respective times, in physical space. This spatialization of each night allows listeners to observe patterns and synchronicities in Gee’s nightly search for unexplained broadcasts that consist only of numbers, tones, and codes.

This body of work is informed by my fascination with mystery, symbolic organization, and communication. I take on the nocturnal patterns of a solitary listener, connecting with other enthusiasts via online chat to share an obscure passion. The patterns of my searching during 7 Nights of Unspeakable Truth are woven directly into blankets – another evening activity, undertaken during Nuit Blanche 2013, in which I encoded my audio searches into a physical form that you could wrap yourself in while listening: two different versions of encoded time on radio airwaves.

More on this work:

Gautier, Philippe-Aubert. “Multichannel sound and spatial sound creation at Sporobole: A short account of live performance, studio design, outdoor multichannel audio, and visiting artists.” Divergence Press #3: Creative Practice in Electroacoustic Music (2016).

Exhibition/Performance history

Nuit Blanche Toronto (2013)

Links

Additional Research by Erin Gee
Academic article by Philippe-Aubert Gautier

Video

7 Nights of Unspeakable Truth (2013)

Gallery

7 Nights of Unspeakable Truth (2013)

Anim.OS

Anim.OS (2012)

2012

Inspired by excerpts of Elizabeth Grosz’s book “Architecture from the Outside”, I made recordings of myself singing text that referenced insideness, outsideness, and flexible structures. These recordings were arranged by composer Oliver Bown into networked choral software.

Anim.OS is a networked computer choir developed by Oliver Bown (Sydney) and Erin Gee (Montreal) in 2012. Videography and sound recording by Shane Turner (Montreal).

This is documentation of one of the first tests for improvisation and control of the choir at the University of Sydney.

Credits

Generative software choir installation in collaboration with Oliver Bown

Video

Anim.OS – Development – Lab Improvisation with Oliver Bown and Erin Gee