Digital Media

ASMRtronica

In September 2020 I launched To the Farther as part of MUTEK Montreal’s online exhibition Distant Arcades. It is the first in a series of musical works that explore the limits of tactile whispers, proximity, and hypnotic language through ASMR and electronic sound.
To the Farther is the title of this first iteration: a fresh take on texture, form, and the plasticity of reality under digital transformation. It is also a “remix” of my ASMR recordings made for Machine Unlearning (2020).

ASMRtronica is the title of the project as a whole: an ongoing set of sonic configurations exploring haptics, intimacy, and tactility in electronic sound, created progressively and without a fixed schedule over the coming months and years. I move forward in this project through my interest in the sonic limits of ASMR’s conceptual and sensorial propositions, led also by my passion for musical form informed by feminist aesthetics.

credits

To the Farther released September 8, 2020 by Erin Gee. Music composition and art by Erin Gee.

Presence

(March 2020) I was quarantining intensely during the coronavirus pandemic when Jen Kutler reached out to ask if I would like to collaborate on a new work that simulates presence and attention over the network. We had never met in real life, but we started talking on the internet every day. We eventually built a musical structure that implicates live webcam and endoscopic camera footage, biosensor data, sounds rearranged by that data, ASMR roleplay, and touch stimulation devices delivering small shocks to each artist. We first developed this work through a month-long intensive online residency at SAW Video, in conversation with many amazing artists, curators, and creative people.

In Presence, artists Erin Gee and Jen Kutler reconfigure voice and touch across the internet through a haptic/physical feedback loop, using affective and physical telematics to structure an immersive electronic soundscape through physiological response.

Technical diagram for Presence, Erin Gee and Jen Kutler 2020

Presence is a telematic music composition for two bodies created during the spring of 2020, at the height of confinement and social distancing during the COVID-19 pandemic in Montreal and New York state. The work has been performed for online audiences by both artists from home (Montreal/New York), with Gee and Kutler each attached to biosensors that collect the unconscious behaviours of their autonomic nervous systems, as well as touch stimulation units that make this data tactile for each artist through transcutaneous nerve stimulation.

Audiences are invited to listen attentively to this networked session of physicalized affect through the sonification of each artist’s biodata, which also slowly triggers an ASMR roleplay that is actively reconfigured by the bodily reactions of each artist. Music and transcutaneous electronic nerve stimulation are triggered by listening bodies; these bodies are in turn triggered by the sounds and electric pulses. Everything in the system is unconscious, triggering and triggered through networked delays, but present. Through this musical intervention the artists invite listeners to imagine the experience and implicate their own bodies in the networked transmission, witnessing the artists touching the borders of themselves and their physical spaces while in isolation.

Technical Credits

web socket for puredata (wspd) created for Presence by Michael Palumbo. Available on GitHub here.

Biodata circuitry and library created by Erin Gee. Available on GitHub here.

Electronic touch stimulation device for MIDI created by Jen Kutler. Available on GitHub here.

Performance built with a combination of puredata (data routing), Processing (biodata generated visuals), Ableton Live (sounds) and OBS (live telematics) by Erin Gee and Jen Kutler.
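As a rough illustration of the kind of data routing described above (and not the artists’ actual patch), the sketch below shows how a raw biosensor reading might be smoothed and scaled into MIDI range before being passed on to a sound environment such as Ableton Live. The function names, the 10-bit input range, and the smoothing constant are all assumptions for the sake of the example.

```python
def scale_to_midi(raw, raw_max=1023):
    """Map a raw ADC reading (0..raw_max) to a 7-bit MIDI CC value (0..127)."""
    raw = max(0, min(raw, raw_max))  # clamp out-of-range sensor noise
    return round(raw * 127 / raw_max)

def smooth(samples, alpha=0.2):
    """Exponential moving average to tame sensor jitter before sonification."""
    out, acc = [], None
    for s in samples:
        acc = s if acc is None else alpha * s + (1 - alpha) * acc
        out.append(acc)
    return out
```

In a patch like the one diagrammed for Presence, values shaped this way could be forwarded over the websocket link and mapped onto synthesis parameters.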

Performance and Exhibition History

SAW Video “Stay at Home” Residency March-April 2020

Network Music Festival July 17 2020

Fonderie Darling – As part of Allegorical Circuits for Human Software curated by Laurie Cotton Pigeon. August 13 2020

 

Machine Unlearning

Vision calibration from Machine Unlearning (2020). Photography by Elody Libe. Image courtesy of the artist.

In Machine Unlearning, the artist offers a neural conditioning treatment by whispering the unraveling outputs of an LSTM algorithm trained on Emily Brontë’s Wuthering Heights as the algorithm “forgets.” The combination of machine learning and ASMR draws parallels between autonomous algorithms and the autonomous functions of the human body. Just as ASMRtists use specific sounds and visual patterns in their videos to “trigger” physical reactions, acting on the unconscious sensory processing of the listener as they watch, the algorithm also responds unconsciously to patterns perceived by its limited senses in order to develop its learning (and unlearning) processes.
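The work itself uses an LSTM; purely as a toy illustration of what “forgetting” can mean for a trained model, the sketch below blends a learned next-character distribution back toward uniform noise. The function names and the dictionary representation are hypothetical and are not the work’s implementation.

```python
import random

def forget(probs, amount):
    """Blend a learned next-character distribution toward uniform noise.
    amount=0.0 leaves the training intact; amount=1.0 erases it entirely."""
    uniform = 1.0 / len(probs)
    return {c: (1 - amount) * p + amount * uniform for c, p in probs.items()}

def sample(probs, rng):
    """Draw one character from a distribution by inverse-CDF sampling."""
    r, acc = rng.random(), 0.0
    for c, p in probs.items():
        acc += p
        if r <= acc:
            return c
    return c  # guard against floating-point rounding
```

Sampling from progressively more “forgotten” distributions yields text that drifts from Brontë-like patterns toward noise, which is the trajectory the whispered script traces.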

Credits: Photography and videography by Elody Libe.

Production Support: Machine Unlearning video installation was produced at Perte de Signal with the support of the MacKenzie Art Gallery for the exhibition To the Sooe (2020) curated by Tak Pham.

The roleplay performance was developed during my artistic residency at Locus Sonus, École Supérieure d’art d’Aix-en-Provence, and Laboratoire PRISM.

More...

The use of the word “intelligence” in the metaphor of AI focuses on higher functions of consciousness that algorithms do not possess. While algorithms have not meaningfully achieved a humanistic consciousness to date, today’s algorithms act autonomously on sensory information, processing data from their environments in unconscious, automatic ways. The human brain also responds unconsciously and automatically to sensory data in its environment: even if you are not conscious of how hot a stove is, if you place your hand on it, your hand will automatically pull away. These unconscious, physiological actions in the sensory realm point to an area of common experience between algorithms and humans. For more on these ideas, see the work of postmodern literary critic N. Katherine Hayles in her 2017 book Unthought: The Power of the Cognitive Nonconscious. I wonder if the expression “autonomous intelligence” makes more sense than “artificial intelligence”; however, like posthumanist feminist Rosi Braidotti, I am deeply suspicious of the humanist pride our species takes in the word “intelligence” as something that confers special status and justifies domination over other forms of life on earth.

Live Performance

This work was first developed as a performance that debuted at Cluster Festival, Winnipeg in 2019.  During live performance, each audience member dons a pair of wireless headphones.  The performance allows the audience members to see the ASMR “result” of the performance for camera, simultaneous with the ability to see my “backstage” manipulation of props and light in real time.

Machine Unlearning (2019) Performance at Cluster Festival, Winnipeg. Photo: Leif Norman.

Machine Unlearning (2019) Performance at Cluster Festival, Winnipeg. Photo: Leif Norman.

Machine Unlearning (2019) Performance at Cluster Festival, Winnipeg. Photo: Leif Norman.

LAUGHING WEB DOT SPACE

An interactive website and virtual laugh-in for survivors of sexual violence.

The URL: https://laughingweb.space

This website enables survivors to record and listen to the sounds of their laughter, and through the magic of the internet, laugh together. Visitors of any gender that self-identify as survivors are invited to use the website’s interface to record their laughter and join in: no questions asked. Visitors can also listen to previously recorded laughter on loop.

Why laughter? Laughter is infectious, and borne of the air we still breathe. We laugh in joy. We laugh in bitterness. We laugh awkwardly. We laugh in relief. We laugh in anxiety. We laugh because it helps. We laugh because it might help someone else. Laughing is good for our health: soothing stress, strengthening the immune system, and easing pain. Through laughter, we proclaim ourselves as more complex than the traumatic memories that we live with. Our voices echo, and will reverberate in the homes, public places, and headphones of whoever visits.

The site officially launched on October 3rd, 2018! HERE is my video presentation about the meaning of Laughing Web Dot Space from SGFA Tokyo (2019).

Dedicated to Cheryl L’hirondelle

This project was commissioned by Eastern Bloc (Montreal) on the occasion of their 10th anniversary exhibition. For this exhibition, Eastern Bloc invited the exhibiting media artists to present work while thinking of linkages to Canadian media artists that inspired them when they were young. I’m extremely honored and grateful for the conversations that Cheryl L’hirondelle shared with me while I was developing this project.

When I was just beginning to dabble in media art in art school, the net-based artworks of Cheryl L’hirondelle demonstrated to me the power of combining art with sound and songwriting, community building, and other gestures of solidarity on the internet. Exposure to her work was meaningful to me – I was looking for examples of other women using their voices with technology. Skawennati is another great artist who was creating participative web works in the late 90s and early 2000s – you can check out her CyberPowWow here.

Click here to visit Laughing Web Dot Space

Credits

Graphic Design – Laura Lalonde
Backend Programming – Sofian Audry, Conan Lai, Ismail Negm
Frontend Programming – Koumbit

Special thank you to Kai-Cheng Thom, who with wisdom, grace, and passion guided me through many stages of this work’s development.

Exhibition History

October 3–23, 2018 – Eastern Bloc, Montreal. Curated by Eliane Ellbogen

February 16, 2019 –The Feminist Art Project @ CAA Conference – Trianon Ballroom, Hilton NYC.

February 2019 – Her Environment @ Yards Gallery, Chicago. Curated by Chelsea Welch and Iryne Roh.

June 26 to August 11, 2019. SESI Arte Galeria, FILE festival, São Paulo, Brazil.

October 4-5, 2019. Video Presentation and exhibition at Sound::Gender::Feminism::Activism symposium, Tokyo. Click here to watch my video presentation

Press

Fields, Noa/h. (2019). “Dangling Wires: Artists Examine Relationship with Technology in Entanglements.” Scapi Magazine (Chicago). https://scapimag.com/2019/02/05/dangling-wires-artists-examine-relationship-with-technology-in-entanglements/

Fournier, Lauren (2018). “Our Collective Nervous System.” Canadian Art. https://canadianart.ca/interviews/our-collective-nervous-system/

Berson, Amber (2018). “Amplification” Canadian Art. REVIEWS / OCTOBER 23, 2018. https://canadianart.ca/reviews/amplification/

of the soone

A disembodied voice invites the listener to partake in a speculative audio treatment that promises to awaken underdeveloped neural passageways through exposure to the non-human processes of neural network language acquisition.

In this work, media artists Erin Gee and Sofian Audry expose listeners to the architectures of an artificial intelligence algorithm through the sounds of an Autonomous Sensory Meridian Response (ASMR) roleplay. ASMR is a genre of audio and videomaking developed by internet aficionados interested in using specific everyday sounds (whispering, soft voice, crinkling and textured sounds) alongside verbal suggestion to “trigger” pleasant tingling reactions in the body of the listener. The artists combined these ASMR principles of sound with artificial intelligence to create a speculative neural conditioning treatment. In of the soone, the listener encounters a soft female voice that whispers a script written by a machine learning algorithm as it slowly loses its neural training and “forgets.” This combination of algorithmic text and ASMR connects the unconscious, automatic processes of artificial intelligence algorithms to the autonomous reactions of the human body to sound, using intimacy to “hack” into the subconscious of the human listener and recondition neural pathways.

Exhibition History:

October 2020: Digital Cultures: Imagined Futures Audio Programme curated by Joseph Cutts. Adam Mickiewicz Institute, Warsaw, Poland

June 9 to August 19, 2018: Pendoran Vinci. Art and Artificial Intelligence Today  curated by Peggy Schoenegge and Tina Sauerländer. NRW Forum, Düsseldorf, Germany

January 2018: Her Environment @ TCC Gallery, Chicago

 

Text 2018. Courtesy of the artists.

Project H.E.A.R.T.

A biodata-driven VR game where militainment and pop music fuel a new form of emotional drone warfare.

A twist on popular “militainment” shooter video games, Project H.E.A.R.T. invites the viewer to place their fingers on a custom biodata device and summon their enthusiasm to engage their avatar, Yowane Haku, in “combat therapy.” Fans of Vocaloid characters may recognize Haku as the “bad copy” of Japanese pop celebrity Hatsune Miku, a holographic persona who invites her fans to pour their content and songs into her virtual voice.

The biosensing system features a pulse sensor and a skin conductance sensor of Gee’s design. Through principles of emotional physiology and affective computing, the device gathers data on heart rate and blood flow from the index finger, and skin conductance from the middle and ring fingers. The biodata is read by a microcontroller and transferred to Unity VR, facilitating emotional interactivity: a user’s enthusiasm (spikes in skin conductance amplitude, elevated heart rate, and shifts in the amplitude of the pulse signal) stimulates the holographic pop star to sing in the virtual warzone, inspiring the military fighters to continue the war and create more enemy casualties. At the end of the experience the user is confronted with their “score” of traumatized soldiers versus enemies killed, with no indication of whether this means they won or lost the “game.”
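A minimal sketch of how such an “enthusiasm” signal might be derived from the biodata before it reaches the game engine. The function name, the resting heart rate, and the spike threshold are invented for illustration; this is not the project’s actual mapping.

```python
def enthusiasm(heart_rate, scl_window, hr_rest=70.0, spike=0.05):
    """Toy enthusiasm score combining two of the cues named above:
    elevated heart rate plus spikes in skin conductance amplitude.
    scl_window is a list of recent skin-conductance samples."""
    # Heart-rate contribution: how far above a resting baseline we are.
    hr_part = max(0.0, heart_rate - hr_rest) / hr_rest
    # Skin-conductance contribution: sum of sample-to-sample rises
    # that exceed the spike threshold.
    rises = [b - a for a, b in zip(scl_window, scl_window[1:]) if b - a > spike]
    return hr_part + sum(rises)
```

In a setup like the one described, a score such as this would be streamed from the microcontroller pipeline into Unity and thresholded to decide when Haku sings.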

The user is thus challenged to navigate the soldiers’ emotional anxieties and summon their positivity to activate Haku’s singing voice as the soldiers battle not only a group of enemies, but also their own lack of confidence in times of global economic instability.

The landscape of Project H.E.A.R.T. was built from geopolitically resonant sites found on Google Maps, creating a dreamlike background for the warzone. In-game dialogue wavers between self-righteous soldier banter typical of video games, and self-help, bringing the VR participant to an interrogation of their own emotional body in a virtual space that conflates war, pop music, drone technology, and perhaps movement-induced VR nausea.

 

 

As Kathryn Hamilton pointed out in her 2017 essay “Voyeur Realism” for The New Inquiry,

“VR’s genesis and development is in the military, where it has been used to train soldiers in “battle readiness,” a euphemism for: methods to overcome the innate human resistance to firing at another human being. In the last few years, VR’s usage has shifted 180 degrees from a technology used to train soldiers for war, to one that claims to “amplify” the voices afflicted by war, and to affect “world influencers” who might be able to stop said wars.”

Photography by Toni Hafkenscheid.  Images of Worldbuilding exhibition courtesy of Trinity Square Video, 2017.

Exhibition history:

November-December 2017  Worldbuilding @ Trinity Square Video, Toronto

February-March 2018 Future Perfect @ Hygienic Gallery, New London Connecticut

April 26-28, 2018 @ Digifest, Toronto

June 7-17, 2019 @ Elektra Festival, Montreal

January 2020 @ The Artist Project, Toronto

October 2020 @ Festival LEV Matadero, Spain

Credits

Narrative Design: Sofian Audry, Roxanne Baril-Bédard, Erin Gee

3D Art: Alex Lee and Marlon Kroll

Animation and Rigging: Nicklas Kenyon and Alex Lee

VFX: Anthony Damiani, Erin Gee, Nicklas Kenyon

Programming: Sofian Audry, Erin Gee, Nicklas Kenyon, Jacob Morin

AI Design: Sofian Audry

Sound Design: Erin Gee, Austin Haughton, Ben Hinckley, Ben Leavitt, Nicolas Ow

BioSensor Hardware Design: Erin Gee and Martin Peach

BioSensor Case Design: Grégory Perrin

BioSensor Hardware Programming: Thomas Ouellet Fredericks, Erin Gee, Martin Peach

Featuring music by Lazerblade, Night Chaser and Austin Haughton

Yowane Haku character designed by CAFFEIN

Yowane Haku Cyber model originally created by SEGA for Hatsune Miku: Project DIVA 2nd (2010)

Project H.E.A.R.T. also features the vocal acting talents of Erin Gee, Danny Gold, Alex Lee, Ben McCarthy, Gregory Muszkie, James O’Callaghan, and Henry Adam Svec.

Thanks to the support of the Canada Council for the Arts and AMD Radeon, this project was commissioned by Trinity Square Video for the exhibition Worldbuilding, curated by John G Hampton and Maiko Tanaka.

This project would have not been possible without the logistical and technical support of the following organizations:

Technoculture Art and Games Lab (Concordia University)

Concordia University

ASAP Media Services (University of Maine)

Erin Gee - 7 Nights of Unspeakable Truth at Nuit Blanche Toronto 2013

7 Nights of Unspeakable Truth

(2013)

7-channel audio installation, woven blankets, text work

8 hours duration

It’s a search for disembodied voices in technotongues.

“7 Nights of Unspeakable Truth is a recording that consists of dusk-till dawn searches for number stations on shortwave radio frequencies. Arranged in order, from day one to day seven, the installation allows one to physically walk through seven evenings of shortwave, synchronized in their respective times, in physical space. This spatialization of each night allows listeners to observe patterns and synchronicities in Gee’s nightly search for unexplained broadcasts that consist only of numbers, tones and codes.”

This body of work is informed by my fascination with mystery, symbolic organization, and communication. I take on the nocturnal patterns of a solitary listener, connecting with other enthusiasts via online chat to share an obscure passion. The patterns of my searching during 7 Nights of Unspeakable Truth are woven directly into blankets: another evening activity, undertaken during Nuit Blanche 2013, in which I encoded and wove my audio searches into a physical form that you could wrap yourself in while listening – two different versions of encoded time on radio airwaves.

More on this work:

Gautier, Philippe-Aubert. “Multichannel sound and spatial sound creation at Sporobole: A short account of live performance, studio design, outdoor multichannel audio, and visiting artists.” Divergence Press #3: Creative Practice in Electroacoustic Music (2016).

Voice of Echo

Voice of Echo Series: 2011. Works for video, audio, and archival inkjet prints.

Exhibition history:

  • Dream Machines. TCC Chicago. Curated by Her Environment, August 16-30 2016.
  • Voice of Echo (solo exhibition) Gallerywest, Toronto. Curated by Evan Tyler, January 5–27, 2012.
  • Parer Place Urban Screens. Queensland University of Technology, Brisbane AUS. May 18-20 2012.
  • Uncanny Sound. TACTIC, Cork, Ireland. Curated by Liam Slevin, September 14-24 2012.
  • Contemporary Projects. Curated by David Garneau and Sylvia Ziemann, Regina SK, 2011.

Propelling the mythology of Narcissus and Echo into a science-fiction future, I translate Echo’s golem-like body into a digital environment.

I became Echo in a silent performance for camera: a love song for an absent Narcissus (who is necessary to give Echo presence at all!). I later interpreted the digital data from these images not in imaging software but in audio software, revealing a noisy landscape of glitch, expressivity, and vocality. I bounced the data back and forth between the audio and image software, “composing” the visual and audio work through delays and the copy/paste of images. While the natural world and human perspective created a cruel hierarchy between a human subject/image and a golem-like nymph who was invisible except as voice, technology and machine perspective allow the image and the sound to coexist and presuppose one another. The work is a futurist, emancipatory tale of the non-human wrenching itself from dependency on the human, revealing itself instead as an entangled, co-constitutive force.

What is the Voice of Echo? It exists as repetition – of human voice, of Narcissus – a voice that extends another’s voice, a body somehow more tangible than Echo’s own. The voice of Echo and other non-human voices are unconscious and environmental, ambient, existing beyond symbolic content. The voice of Echo exists as a bouncing of processes, a distortion, a glitch, born of a love and desire uttered but never really heard.


I took stills from this love song and translated the raw visual data into an audio editing program, choosing particular interpretation methods to “compose” the echo. I bounced this data between Photoshop and Audacity multiple times, eventually arriving at glitched sounds of data interpretation, as well as an accompanying distorted image for each “song.” Echo may traditionally exist only as a re-utterance of Narcissus’ voice, but in this case her cyberfeminist reimagining points at perverse loops somewhere between love, repetition, and becoming.

 

Below is the “original video work” that got the call-and-response process started.

Voice of Echo: Song of Love for Technological Eyes (2011) silent HD video for monitor playback, 18:01 (looped)  Photography by Kotama Bouabane.

Echo is in love with recording technology, particularly the video camera. The mirrors emanating from her throat are concrete manifestations of her voice – the love song intended for the camera’s eye.