Physical Media

Presence

Presence (2020)
Screen capture from performance at Network Music Festival 2020. Online.

2020

In Presence, artists Erin Gee and Jen Kutler reconfigure voice and touch across the internet through a haptic/physical feedback loop, using affective and physical telematics to structure an immersive electronic soundscape through physiological response.

(March 2020) I was quarantining intensely during the coronavirus pandemic when Jen Kutler reached out to me asking if I would like to collaborate on a new work that simulates presence and attention over the network. We had never met in real life, but we started talking on the internet every day. We eventually built a musical structure that implicates live webcam and endoscopic camera footage, biosensor data, sounds rearranged by that data, ASMR roleplay, and touch stimulation devices delivering small shocks to each artist. We first developed this work through a month-long intensive online residency at SAW Video, in conversation with many amazing artists, curators and creative people.

Presence is a telematic music composition for two bodies created during the spring of 2020, at the height of confinement and social distancing during the COVID-19 pandemic in Montreal and New York state. This work has been performed for online audiences by both artists from home (Montreal/New York), with Gee and Kutler each attached to biosensors that collect the unconscious behaviours of their autonomic nervous systems, as well as touch stimulation units that make this data tactile for each artist through transcutaneous nerve stimulation.

Audiences are invited to listen attentively to this networked session for physicalized affect through the sonification of each artist’s biodata, which also slowly triggers an ASMR roleplay that is actively reconfigured by the bodily reactions of each artist. Music and transcutaneous electronic nerve stimulation are triggered by listening bodies; these bodies are in turn triggered by the sounds and electric pulses. Everything in the system is unconscious, triggering and triggered through networked delays, but present. Through this musical intervention the artists invite the listeners to imagine the experience and implicate their own bodies in the networked transmission, to witness the artists touching the borders of themselves and their physical spaces while in isolation.
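The following is a minimal sketch of this kind of biodata-to-trigger mapping, offered only as an illustration; in the actual performance, routing happens in Pure Data via the wspd websocket bridge credited below, and the thresholds, labels and message format here are invented for the example. A rolling baseline is kept over a skin-conductance stream, and sudden rises above that baseline become trigger events that could be forwarded to the remote performer’s sound and touch-stimulation devices.

```python
# Illustrative sketch only (not the artists' actual patch): turning sudden
# rises in a skin-conductance stream into trigger events for the remote
# performer. In the real work, routing happens in Pure Data via the wspd
# websocket bridge; all values and names below are assumptions.
from collections import deque
from dataclasses import dataclass

@dataclass
class Trigger:
    kind: str       # e.g. "asmr_cue" or "tens_pulse" (hypothetical labels)
    intensity: int  # 0-127, MIDI-style velocity

def detect_spikes(samples, window=50, threshold=1.5):
    """Yield a Trigger whenever a sample rises well above the rolling baseline."""
    history = deque(maxlen=window)
    for value in samples:
        if len(history) == window:
            baseline = sum(history) / window
            if value > baseline * threshold:
                # Scale the overshoot into a 0-127 intensity value.
                intensity = min(127, int(value / (baseline * threshold) * 64))
                yield Trigger(kind="tens_pulse", intensity=intensity)
        history.append(value)

if __name__ == "__main__":
    # Simulated skin-conductance stream: a calm baseline with one sudden spike.
    stream = [0.8] * 60 + [2.4] + [0.8] * 10
    for trig in detect_spikes(stream):
        print(trig)  # in performance, this would be sent over the network
```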

Credits

web socket for puredata (wspd) created for Presence by Michael Palumbo. Available on GitHub.

Biodata circuitry and library created by Erin Gee. Available on GitHub.

Electronic touch stimulation device for MIDI created by Jen Kutler. Available on GitHub.

Performance built by Erin Gee and Jen Kutler with a combination of Pure Data (data routing), Processing (biodata-generated visuals), Ableton Live (sounds) and OBS (live telematics).

Presence was created in part with support from SAW Video artist-run centre, Canada.

Exhibition/Performance history

SAW Video “Stay at Home” Residency March-April 2020

Network Music Festival July 17 2020

Fonderie Darling – As part of Allegorical Circuits for Human Software curated by Laurie Cotton Pigeon. August 13 2020

Video

Presence (2020)
Performance by Erin Gee and Jen Kutler at Network Music Festival.

Gallery

to the sooe

to the sooe (2018)
Sofian Audry and Erin Gee. Photography: Alexandre Saunier

2018

A 3D printed sound object that houses a human voice murmuring the words of a neural network trained on the writing of a deceased author.

to the sooe (SLS 3D printed object, electronics, laser-etched acrylic, audio, 2018) is the second piece in a body of work Erin Gee made in collaboration with artist Sofian Audry that explores the material and authorial agencies of a deceased author, an LSTM algorithm, and an ASMR performer.

The works in this series transmit the aesthetics of an AI “voice” that speaks through outputted text, rendered as the sounds of Gee’s softly spoken human vocals, using a human body as a relatively low-tech filter for processes of machine automation. Other works in this series include of the soone (2018) and Machine Unlearning (2018-2019).

to the sooe is a sound object that features a binaural recording of Erin Gee’s voice as she re-articulates the murmurs of a machine learning algorithm learning to speak. Through this work, the artists re-embody the cognitive processes and creative voices of three agents (a deceased author, a deep learning neural net, and an ASMR performer) into a tangible device. These human and nonhuman agencies are materialized in the object through speaking and writing: a disembodied human voice, words etched onto a mirrored, acrylic surface, as well as code written into the device’s silicon memory.

The algorithmic process used in this work is a deep recurrent neural network known as a “long short-term memory” (LSTM) network. The algorithm “reads” Emily Brontë’s Wuthering Heights character by character, familiarizing itself with the syntactical universe of the text. As it reads and re-reads the book, it attempts to mimic Brontë’s style within the constraints of its own artificial “body”, hence finding its own alien voice.
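For readers curious about the mechanics, the sketch below shows what a character-level LSTM of this kind can look like in PyTorch. It is a generic reconstruction under stated assumptions, not the model Audry actually trained: a small network embeds each character, passes it through an LSTM, and predicts the next character, with a sampling loop that lets a trained model “speak” in its own constrained voice.

```python
# A generic character-level LSTM sketch in PyTorch, in the spirit of the
# process described above; the artists' actual model and training code are
# not reproduced here.
import torch
import torch.nn as nn

class CharLSTM(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, x, state=None):
        out, state = self.lstm(self.embed(x), state)
        return self.head(out), state  # logits over the next character

def speak(model, chars, char_to_idx, seed="the ", length=200, temperature=0.8):
    """Generate text one character at a time from a trained model."""
    model.eval()
    idx = torch.tensor([[char_to_idx[c] for c in seed]])
    state, generated = None, list(seed)
    with torch.no_grad():
        for _ in range(length):
            logits, state = model(idx, state)
            probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
            next_idx = torch.multinomial(probs, 1).item()
            generated.append(chars[next_idx])
            idx = torch.tensor([[next_idx]])
    return "".join(generated)

# Usage sketch, assuming `text` holds the novel as a string:
#   chars = sorted(set(text)); char_to_idx = {c: i for i, c in enumerate(chars)}
#   model = CharLSTM(len(chars))  # ...train with cross-entropy on next-char targets...
#   print(speak(model, chars, char_to_idx))
```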

The reading of this AI-generated text by a human speaker allows the listener to experience the neural network’s linguistic journey simultaneously with the augmentation of this speech through vocalization techniques adapted from Autonomous Sensory Meridian Response (ASMR). ASMR uses acoustic “triggers” such as gentle whispering, or fingers scratching or tapping, in an attempt to induce tingling sensations and pleasurable auditory-tactile synaesthesia in the listener. Through these autonomous physiological experiences, the artists hope to reveal the autonomous nature of the listener’s own body, implicating the listener as an already-cyborgian component of the hybrid system in place.

Credits

Sofian Audry – neural network programming and training

Erin Gee – vocal performer, audio recording and editing, electronics

Grégory Perrin – 3D printing design and laser etching

Exhibition history

Taking Care – Hexagram Campus Exhibition @ Ars Electronica, Linz Sept 5-11 2018. Curated by Ana Kerekes.

Printemps Numérique – McCord Museum Montreal, May 29-June 3 2019. Curated by Erandy Vergara.

To the Sooe – MacKenzie Art Gallery, Regina January 26-April 26, 2020. Curated by Tak Pham.

Sounds

to the sooe (2018)

Gallery

Project H.E.A.R.T.

Project H.E.A.R.T. (2017)

2017

A biodata-driven VR game where militainment and pop music fuel a new form of emotional drone warfare.

A twist on popular “militainment” shooter video games, Project H.E.A.R.T. invites the viewer to place their fingers on a custom biodata device and summon their enthusiasm to engage their avatar, Yowane Haku, in “combat therapy.” Fans of Vocaloid characters may recognize Haku as the “bad copy” of Japanese pop celebrity Hatsune Miku, a holographic personage who invites her fans to pour their content and songs into her virtual voice.

The biosensing system features a pulse sensor and a skin conductance sensor of Gee’s design. Drawing on principles of emotional physiology and affective computing, the device gathers data on heart rate and blood flow from the index finger, and skin conductance from the middle and ring fingers of users. The biodata is read by a microcontroller and transferred to Unity VR, enabling emotional interactivity: a user’s enthusiasm (spikes in skin conductance amplitude, elevated heart rate, and shifts in the amplitude of the pulse signal) stimulates the holographic pop star to sing in the virtual warzone, inspiring the military fighters to continue the war and create more enemy casualties. At the end of the experience the user is confronted with their “score” of traumatized soldiers versus enemies killed, with no indication of whether this means they have won or lost the “game”.
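As a rough illustration of how such features might be combined (the project’s actual Unity and firmware code is not shown here), the sketch below folds the three signals described above into a single normalized “enthusiasm” value that a game engine could poll each frame; the weights, thresholds and function name are assumptions.

```python
# Illustrative sketch only: combining skin-conductance spikes, elevated heart
# rate, and shifts in pulse amplitude into one 0.0-1.0 "enthusiasm" value.
# All scaling constants are invented for the example.
def enthusiasm_score(scr_spikes_per_min, heart_rate_bpm, pulse_amp_delta,
                     resting_hr=70.0):
    """Normalize each feature to roughly 0-1 and return their mean."""
    scr_term = min(scr_spikes_per_min / 10.0, 1.0)                   # phasic skin conductance
    hr_term = min(max(heart_rate_bpm - resting_hr, 0) / 40.0, 1.0)   # heart rate above rest
    pulse_term = min(abs(pulse_amp_delta), 1.0)                      # normalized amplitude shift
    return (scr_term + hr_term + pulse_term) / 3.0

# Example: a moderately excited player.
print(enthusiasm_score(scr_spikes_per_min=4, heart_rate_bpm=95, pulse_amp_delta=0.3))
# -> about 0.44; above some threshold, Haku might begin to sing.
```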

The user is thus challenged to navigate the soldiers’ emotional anxieties and summon their positivity to activate Haku’s singing voice as the soldiers battle not only a group of enemies, but also their own lack of confidence in times of global economic instability.

The landscape of Project H.E.A.R.T. was built from geopolitically resonant sites found on Google Maps, creating a dreamlike background for the warzone. In-game dialogue wavers between self-righteous soldier banter typical of video games, and self-help, bringing the VR participant to an interrogation of their own emotional body in a virtual space that conflates war, pop music, drone technology, and perhaps movement-induced VR nausea.

As Kathryn Hamilton pointed out in her 2017 essay “Voyeur Realism” for The New Inquiry,

“VR’s genesis and development is in the military, where it has been used to train soldiers in “battle readiness,” a euphemism for: methods to overcome the innate human resistance to firing at another human being. In the last few years, VR’s usage has shifted 180 degrees from a technology used to train soldiers for war, to one that claims to “amplify” the voices afflicted by war, and to affect “world influencers” who might be able to stop said wars.”

Credits

Narrative Design: Sofian Audry, Roxanne Baril-Bédard, Erin Gee
3D Art: Alex Lee and Marlon Kroll
Animation and Rigging: Nicklas Kenyon and Alex Lee
VFX: Anthony Damiani, Erin Gee, Nicklas Kenyon
Programming: Sofian Audry, Erin Gee, Nicklas Kenyon, Jacob Morin
AI Design: Sofian Audry
Sound Design: Erin Gee, Austin Haughton, Ben Hinckley, Ben Leavitt, Nicolas Ow
BioSensor Hardware Design: Erin Gee and Martin Peach
BioSensor Case Design: Grégory Perrin
BioSensor Hardware Programming: Thomas Ouellet Fredericks, Erin Gee, Martin Peach
Featuring music by Lazerblade, Night Chaser and Austin Haughton
Yowane Haku character designed by CAFFEIN
Yowane Haku Cyber model originally created by SEGA for Hatsune Miku: Project DIVA 2nd (2010)
Project H.E.A.R.T. also features the vocal acting talents of Erin Gee, Danny Gold, Alex Lee, Ben McCarthy, Gregory Muszkie, James O’Calloghan, and Henry Adam Svec.

This project was commissioned by Trinity Square Video for the exhibition Worldbuilding, curated by John G Hampton and Maiko Tanaka, and created thanks to the support of the Canada Council for the Arts and AMD Radeon.

This project would have not been possible without the logistical and technical support of the following organizations:

Technoculture Art and Games Lab (Concordia University)

Concordia University

ASAP Media Services (University of Maine)

Exhibition history

November-December 2017  Worldbuilding @ Trinity Square Video, Toronto

February-March 2018 Future Perfect @ Hygienic Gallery, New London, Connecticut

April 26-28, 2018 @ Digifest, Toronto

June 7-17, 2019 @ Elektra Festival, Montreal

January 2020 @ The Artist Project, Toronto

October 2020 @ Festival LEV Matadero, Spain

Links

Project H.E.A.R.T. official website
Worldbuilding Exhibition Website
Review in Canadian Art
My research blog: Pop and Militainment
Featured on Radiance VR

Video

Project H.E.A.R.T. (2017)
Installation and Gameplay

Gallery


Larynx Series

Larynx1, Larynx2, Larynx3, Larynx4 (2014)
Epson UltraChrome K3 ink on acid-free paper.
Edition of 5.
86 x 112 cm.

2014

What we consider our voice in a technologically mediated environment is a visual-vocal-technological assemblage that implicates amplification, scale, human and digital bodies, and networks. The multiplication and proliferation of a voice on someone else’s device happens in asynchronous ways, much as a vocal score is a vocal performance that lies crystallized and dormant until activated by human action.

This series of printed works is a set of vocal quartets created from the original material of the human voice, the larynx, which was amplified/reproduced/echoed through visual perception processes in machine and human cognizers and re-performed by multiple human singers. In endoscopic photography the flesh material of the larynx is extended through the sensory mechanisms of a machine: light bounces off the flesh of the larynx and is interpreted by a camera as pixel data. This digital image is made of raster pixels faithful to their fleshy origins but limited in detail. If one amplifies the raster image of the voice (zooms in), the image reveals its materiality as a technical assemblage.

I transformed the raster image into a vector in order to continue playing with bouncing machine processes off one another, to “voice” how a machine might perceive this human larynx. While the vectorization process eliminated the fleshy details of the original larynx, the image emphasized the architectural structures of the larynx, which now more closely resembled a topographical map or circuit board. This technologically processed version of the larynx could be infinitely amplified or diminished without loss or distortion. At this point I detected an unexpected feature: my associative, human perception could see markings that resembled Western notation at the edges of this transformed image of the human voice, complete with staves, bar lines and notes. My transcription process involved dividing each bar into four equal parts, then transcribing rhythms in a linear relationship to where the small note-like marks appeared horizontally, in common 4/4 time. Pitches were interpreted as they appeared vertically on the abstracted staves.

Since each two-dimensional image has four sides, there were four staves for each representation of the larynx in the series. I set this music into four separate vocal parts for choral song, returning this technologically amplified process of voicing to multiple human throats.
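The transcription itself was done by hand and by eye, but its logic can be summarized in a short sketch like the one below, under stated assumptions: each note-like mark is taken as an (x, y) position along one edge of the image, x is quantized onto the four beats of a 4/4 bar, and y selects a pitch on the abstracted staff. The pitch set and dimensions are hypothetical.

```python
# A rough sketch of the transcription logic described above; the original
# transcription was done by hand, not by this code, and the pitch set,
# image dimensions and bar count here are assumptions.
def transcribe_edge(marks, image_width, image_height, bars=4, staff_pitches=None):
    """marks: list of (x, y) pixel positions; returns (bar, beat, pitch) tuples."""
    staff_pitches = staff_pitches or ["E4", "F4", "G4", "A4", "B4", "C5", "D5", "E5", "F5"]
    bar_width = image_width / bars
    notes = []
    for x, y in sorted(marks):
        bar = int(x // bar_width)
        # Position within the bar, quantized to four equal beats (common 4/4 time).
        beat = int((x % bar_width) / bar_width * 4)
        # Vertical position picks a pitch from the abstracted staff (top = highest).
        pitch = staff_pitches[int((1 - y / image_height) * (len(staff_pitches) - 1))]
        notes.append((bar + 1, beat + 1, pitch))
    return notes

# Example with three hypothetical marks detected on a 1000 x 400 px edge:
print(transcribe_edge([(120, 300), (510, 90), (880, 200)], 1000, 400))
# -> [(1, 2, 'G4'), (3, 1, 'D5'), (4, 3, 'B4')]
```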

Exhibition/Performance history

MacKenzie Art Gallery January 2020.

Toronto Biennial, November 2019.

Vocales Digitales – Solo exhibition. March 26 – May 14 2016, Hamilton Artists’ Inc.: Hamilton, Canada. Curated by Caitlin Sutherland.

(Premiere performance) Rhubarb, rhubarb, peas and carrots. July 17 – September 5, 2015. Dunlop Art Gallery: Regina, Canada. Curated by Blair Fornwald. Larynx Songs premiered with singers Erin Gee, Carrie Smith, Kristen Smith, and Kaitlin Semple.

Erin Gee and Kelly Andres. August 25 – October 24, 2014. Cirque du Soleil Headquarters: Montreal, Canada. Curated by Eliane Elbogen.

Voice of Echo (Solo Exhibition), 2014. Gallerywest. Toronto, Canada. Curated by Evan Tyler.

(Performance) Tellings: A Posthuman Vocal Concert. Toronto Biennial. Curated by Myung-Sun Kim and Maiko Tanaka.

Collections

Larynx3 (edition 1/5) was purchased by the Saskatchewan Arts Board for their permanent collection in 2019.

Gallery

Photo Credits
???


Swarming Emotional Pianos

Swarming Emotional Pianos (2012 – ongoing)
Aluminium tubes, servo motors, custom mallets, Arduino-based electronics, iCreate platforms
Approximately 27” x 12” x 12” each

2012

A looming projection of a human performer surrounded by six musical chime robots: their music is driven by the shifting rhythms of the performer’s emotional body, transformed into data and signals that activate the motors of the ensemble.

Swarming Emotional Pianos is a robotic installation work that features performance documentation of an actress moving through extreme emotions in five-minute intervals. During these timed performances of extreme surprise, anger, fear, sadness, sexual arousal, and joy, Gee used her own custom-built biosensors to capture the way that each emotion affected the heartbeat, sweat, and respiration of the actress. The data from this session drives the musical outbursts of the robots surrounding the video documentation of the emotional session. Visitors are presented with two windows into the emotional state of the actress: a large projection of her face, paired with a stereo recording of her breath and the sounds of the emotional session, and the normally inaccessible emotional world of physiology, the physicality of sensation as represented by the six robotic chimes.

Micro-bursts of emotional sentiment are amplified by the robots, providing an intimate and abstract soundtrack for this “emotional movie”. These mechanistic, physiological effects of emotion drive the robotics, illustrating the physicality and automation of human emotion. By displaying both of these perspectives on human emotion simultaneously, I am interested in how the rhythmic pulsing of the robotic bodies confirms or denies the visibility and performativity of the face. Does emotion lie within the visibility of facial expression, or in the patterns of sensation within the body? Is the actress sincere in her performance if the emotion is felt as opposed to displayed?

Custom open-source biosensors that collect heart rate and signal amplitude, respiration amplitude and rate, and galvanic skin response (sweat) have been in development by Gee since 2012. Her GitHub page offers access to the technology if you would like to try it for yourself or contribute to the research.
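A hedged sketch of the kind of mapping involved appears below; it is not the installation’s actual control code, and the channel, note and threshold choices are invented. It shows how one frame of biosignal features could be turned into MIDI messages (via the mido library) of the sort a mallet-driving microcontroller might receive.

```python
# Hedged sketch (not the installation's actual control code): mapping one
# frame of biosignal features onto MIDI note events for the robotic chimes.
# Uses the mido library; channel/note assignments are invented for illustration.
import mido

def frame_to_notes(heart_rate_bpm, breath_rate_bpm, skin_conductance_norm):
    """Return a list of MIDI messages for one sensor frame."""
    messages = []
    # Faster heartbeats strike higher chimes; velocity follows arousal (sweat).
    note = 48 + int(min(max(heart_rate_bpm - 50, 0), 60) / 60 * 24)        # C3..C5
    velocity = 40 + int(min(max(skin_conductance_norm, 0.0), 1.0) * 87)    # 40..127
    messages.append(mido.Message('note_on', note=note, velocity=velocity, channel=0))
    # Rapid breathing gates a second, lower chime voice on another channel.
    if breath_rate_bpm > 20:
        messages.append(mido.Message('note_on', note=36, velocity=velocity, channel=1))
    return messages

# Example frame: elevated heart rate, calm breathing, moderate skin conductance.
for msg in frame_to_notes(92, 14, 0.5):
    print(msg)  # in the installation these would go to the robots' MIDI interface
```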

Credits

Thank you to the following for your contributions:

In loving memory of Martin Peach (my robot teacher) – Sébastien Roy (lighting circuitry) – Peter van Haaften (tools for algorithmic composition in Max/MSP) – Grégory Perrin (Electronics Assistant)

Jason Leith, Vivian Li, Mark Lowe, Simone Pitot, Matt Risk, and Tristan Stevans for their dedicated help in the studio

Concordia University, the MARCS Institute at the University of Western Sydney, Innovations en Concert Montréal, Conseil des Arts de Montréal, Thought Technology, and AD Instruments for their support.

Videos

Swarming Emotional Pianos (2012-2014)
Machine demonstration March 2014 – Eastern Bloc Lab Residency, Montréal

Gallery

Swarming Emotional Pianos


7 Nights of Unspeakable Truth

7 Nights of Unspeakable Truth at Nuit Blanche Toronto (2013)
7-channel audio installation, woven blankets, text work
8 hours duration

2013

It’s a search for disembodied voices in technotongues.

7 Nights of Unspeakable Truth is a recording that consists of dusk-till-dawn searches for numbers stations on shortwave radio frequencies. Arranged in order from day one to day seven, the installation allows one to physically walk through seven evenings of shortwave, each synchronized to its respective time, in physical space. This spatialization of each night allows listeners to observe patterns and synchronicities in Gee’s nightly search for unexplained broadcasts that consist only of numbers, tones and codes.

This body of work is informed by my fascination with mystery, symbolic organization and communication. I take on the nocturnal patterns of a solitary listener, connecting to other enthusiasts via online chat in order to share an obscure passion. The patterns of my searching during 7 Nights of Unspeakable Truth are woven directly into blankets, another evening activity undertaken during Nuit Blanche 2013, in which I encoded and wove my audio searches into a physical form that you could wrap yourself in while you listen – two different versions of encoded time on radio airwaves.

More on this work:

Gautier, Philippe-Aubert. “Multichannel sound and spatial sound creation at Sporobole: A short account of live performance, studio design, outdoor multichannel audio, and visiting artists.” Divergence Press #3: Creative Practice in Electroacoustic Music (2016).

Exhibition/Performance history

Nuit Blanche Toronto (2013)

Links

Additional Research by Erin Gee
Academic article by Philippe-Aubert Gautier

Video

7 Nights of Unspeakable Truth (2013)

Gallery

7 Nights of Unspeakable Truth (2013)

Voice of Echo

Voice of Echo Series (2011)
Works for video, audio, and archival inkjet prints.

2011

Propelling the mythology of Narcissus and Echo into a science-fiction future, I translate Echo’s golem-like body into a digital environment.

I became Echo in a silent performance for camera: a love song for an absent Narcissus (who is necessary to give Echo presence at all!). I later interpreted the digital data from these images not in imaging software but in audio software, revealing a noisy landscape of glitch, expressivity and vocality. I bounced the data back and forth between the audio and image software, “composing” the visual and audio work through delays and copy/paste of the image. While the natural world and human perspective created a cruel hierarchy between a human subject/image and a golem-like nymph who was invisible except as voice, technology and machine perspective allow the image and the sound to coexist and presuppose one another. The work is a futurist, emancipatory tale of the non-human wrenching itself from dependency on the human and instead revealing itself as an entangled, co-constitutive force.

What is the Voice of Echo? It exists as repetition – of the human voice, of Narcissus – a voice that extends another’s voice, whose body is somehow more tangible than Echo’s own. The voice of Echo, like other non-human voices, is unconscious and environmental, ambient, existing beyond symbolic content and its repetitions. The voice of Echo exists as a bouncing of processes, a distortion, a glitch, born of a love and desire uttered but never really heard.

I took stills from this love song and translated the raw visual data into an audio editing program, choosing particular interpretation methods to “compose” the echo. I bounced this data between Photoshop and Audacity multiple times, eventually arriving at glitched sounds of data interpretation, as well as an accompanying distorted image for each “song”. Echo may traditionally exist only as a re-utterance of Narcissus’ voice, but in this case her cyberfeminist reimagining points at perverse loops somewhere between love, repetition and becoming.
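The spirit of that process can be approximated in a few lines of code, though the original work was done by hand with Photoshop and Audacity’s raw import rather than a script. In the sketch below the raw bytes of an image file are simply reinterpreted as 8-bit audio samples and written out as a WAV file; the file names are hypothetical.

```python
# A small databending sketch in the spirit of the process described above
# (the original bouncing between Photoshop and Audacity was done by hand):
# the raw bytes of an image file are reinterpreted as 8-bit audio samples.
import wave

def image_bytes_to_wav(image_path, wav_path, sample_rate=44100):
    with open(image_path, "rb") as f:
        raw = f.read()
    with wave.open(wav_path, "wb") as out:
        out.setnchannels(1)          # mono
        out.setsampwidth(1)          # 8-bit samples, like a raw import in Audacity
        out.setframerate(sample_rate)
        out.writeframes(raw)         # pixel data heard as noise, rhythm, glitch

# Usage (file names are hypothetical):
# image_bytes_to_wav("echo_still.tif", "echo_still.wav")
```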

Exhibition history

Dream Machines. TCC Chicago. Curated by Her Environment, August 16-30 2016.

Voice of Echo (solo exhibition) Gallerywest, Toronto. Curated by Evan Tyler, January 5–27, 2012.

Parer Place Urban Screens. Queensland University of Technology, Brisbane AUS. May 18-20 2012.

Uncanny Sound. TACTIC, Cork, Ireland. Curated by Liam Slevin, September 14-24 2012.

Contemporary Projects. Curated by David Garneau and Sylvia Ziemann, Regina SK, 2011.

Links

Essay by G. Douglas Barrett (2011)
Review - Zouch Magazine Toronto

Sounds

Voice of Echo (2011)

Video

Voice of Echo: Song of Love for Technological Eyes (2011)
Silent HD video for monitor playback, 18:01 (looped). Photography by Kotama Bouabane.

Echo is in love with recording technology, particularly the video camera. The mirrors emanating from her throat are concrete manifestations of her voice – the love song intended for the camera’s eye.

Above is the “original video work” that got the call-and-response process started.

Gallery

Voice of Echo (2011)

Formants

Formants (2008)
Fiberglass, plexiglas, hair, copper, wood, electronics
20” x 49” x 27.5”
Image courtesy of InterAccess Gallery.

2008

Formants is an interactive audio sculpture featuring the heads of two female figures that sing when their hair is brushed: a musing on desire, vanity, absent bodies, morality, intimacy and touch.

Credits

(version 1) Pure Data Programming: Michael Brooks

(version 2) Electronics technician and programmer: Martin Peach

Vocalists: Lynn Channing and Christina Willatt

Made with the support of Soil Digital Media Suite

Video

Formants (2008)

Gallery

Formants (2008)