Tag: Grégory Perrin


to the sooe (2018)
Sofian Audry and Erin Gee. Photography: Alexandre Saunier

2018

A 3D printed sound object that houses a human voice murmuring the words of a neural network trained on the writing of a deceased author.

to the sooe (SLS 3D printed object, electronics, laser-etched acrylic, audio, 2018) is the second piece in a body of work Erin Gee made in collaboration with artist Sofian Audry that explores the material and authorial agencies of a deceased author, an LSTM algorithm, and an ASMR performer.

The works in this series transmit the aesthetics of an AI “voice” – text output by a neural network – through the sound of Gee’s softly spoken human vocals, using a human body as a relatively low-tech filter for processes of machine automation. Other works in the series include of the soone (2018) and Machine Unlearning (2018–2019).

to the sooe is a sound object that features a binaural recording of Erin Gee’s voice as she re-articulates the murmurs of a machine learning algorithm learning to speak. Through this work, the artists re-embody the cognitive processes and creative voices of three agents (a deceased author, a deep learning neural net, and an ASMR performer) into a tangible device. These human and nonhuman agencies are materialized in the object through speaking and writing: a disembodied human voice, words etched onto a mirrored acrylic surface, and code written into the device’s silicon memory.

The algorithmic process used in this work is a deep recurrent neural network agent known as a “long short-term memory” (LSTM) network. The algorithm “reads” Emily Brontë’s Wuthering Heights character by character, familiarizing itself with the syntactical universe of the text. As it reads and re-reads the book, it attempts to mimic Brontë’s style within the constraints of its own artificial “body”, and in doing so finds its own alien voice.
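
For readers curious about the technique, the following is a minimal sketch of a character-level LSTM language model of the kind described above, written in PyTorch. It is not the artists' own code: the file name wuthering_heights.txt, the network sizes, and the bare-bones training and sampling loops are illustrative assumptions only.

```python
# Minimal sketch (not the artists' code) of a character-level LSTM:
# the model reads a text one character at a time, learns to predict the
# next character, and can then generate new text in a loosely similar style.
import torch
import torch.nn as nn

text = open("wuthering_heights.txt", encoding="utf-8").read()  # hypothetical local copy
chars = sorted(set(text))
char_to_ix = {c: i for i, c in enumerate(chars)}

class CharLSTM(nn.Module):
    def __init__(self, vocab_size, hidden_size=256, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.lstm = nn.LSTM(hidden_size, hidden_size, num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, vocab_size)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.head(h), state

model = CharLSTM(len(chars))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

seq_len = 100
encoded = torch.tensor([char_to_ix[c] for c in text])

# One pass over the book in fixed-length chunks: predict character t+1 from characters up to t.
for start in range(0, len(encoded) - seq_len - 1, seq_len):
    x = encoded[start:start + seq_len].unsqueeze(0)
    y = encoded[start + 1:start + seq_len + 1].unsqueeze(0)
    logits, _ = model(x)
    loss = loss_fn(logits.reshape(-1, len(chars)), y.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Sampling: feed the model's own guesses back in, one character at a time.
with torch.no_grad():
    ix = torch.tensor([[char_to_ix["t"]]])  # arbitrary seed character
    state, out = None, "t"
    for _ in range(200):
        logits, state = model(ix, state)
        probs = torch.softmax(logits[0, -1], dim=-1)
        ix = torch.multinomial(probs, 1).unsqueeze(0)
        out += chars[ix.item()]
print(out)
```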

The reading of this AI-generated text by a human speaker allows the listener to experience the neural network agent’s linguistic journey and, simultaneously, the augmentation of this speech through vocalization techniques adapted from Autonomous Sensory Meridian Response (ASMR). ASMR involves the use of acoustic “triggers” such as gentle whispering, finger scratching, or tapping in an attempt to induce tingling sensations and pleasurable auditory-tactile synaesthesia in the listener. Through these autonomous physiological experiences, the artists hope to reveal the autonomous nature of the listener’s own body, implicating the listener as an already-cyborgian component of the hybrid system in place.

Credits

Sofian Audry – neural network programming and training

Erin Gee – vocal performer, audio recording and editing, electronics

Grégory Perrin – 3D printing design and laser etching

Exhibition history

Taking Care – Hexagram Campus Exhibition @ Ars Electronica, Linz, September 5–11, 2018. Curated by Ana Kerekes.

Printemps Numérique – McCord Museum, Montreal, May 29–June 3, 2019. Curated by Erandy Vergara.

To the Sooe – MacKenzie Art Gallery, Regina, January 26–April 26, 2020. Curated by Tak Pham.

Sounds

to the sooe (2018)

Gallery

NRW Forum Düsseldorf

My collaborative work with Sofian Audry, of the soone (2018), will be featured in an exciting exhibition at NRW Forum focused on contemporary art and AI, curated by Tina Sauerländer (peer to space).

Artists: Nora Al-Badri & Jan Nikolai Nelles (DE), Jonas Blume (DE), Justine Emard (FR), Carla Gannis (US), Sofian Audry and Erin Gee (CAN), Liat Grayver (ISR/DE), Faith Holland (US), Tuomas A. Laitinen (FI), and William Latham (UK)

Initiated and hosted by Leoni Spiekermann (ARTGATE Consulting)
Curated by Tina Sauerländer and Peggy Schoenegge
At NRW Forum Düsseldorf, Ehrenhof 2, 40479 Düsseldorf, Germany

Preview: May 25 – 27, 2018, during Meta Marathon
Opening: June 8, 2018, 7pm

Exhibition: June 9 – August 19, 2018

We are particularly excited for this exhibition because we will debut a 3D printed enclosure for the work made especially by Grégory Perrin, who has previously worked with me on the sensor box for Project H.E.A.R.T. (2017) as well as an amazing box for the installation of Swarming Emotional Pianos (2015).

NRW Forum website 

peer to space website


Swarming Emotional Pianos

Swarming Emotional Pianos (2012 – ongoing)
Aluminium tubes, servo motors, custom mallets, Arduino-based electronics, iCreate platforms
Approximately 27” x 12” x 12” each

2012

A looming projection of a human performer surrounded by six musical chime robots: their music is driven by the shifting rhythms of the performer’s emotional body, transformed into data and signals that activate the motors of the ensemble.

Swarming Emotional Pianos is a robotic installation work that features performance documentation of an actress moving through extreme emotions in five-minute intervals. During these timed performances of extreme surprise, anger, fear, sadness, sexual arousal, and joy, Gee used her own custom-built biosensors to capture the way each emotion affects the heartbeat, sweat, and respiration of the actress. The data from this session drives the musical outbursts of the robots surrounding the video documentation of the emotional session. Visitors to this work are presented with two windows into the emotional state of the actress: a large projection of her face, paired with a stereo recording of her breath and the sounds of the emotional session, and the normally inaccessible emotional world of physiology, the physicality of sensation represented by the six robotic chimes.

Micro-bursts of emotional sentiment are amplified by the robots, providing an intimate and abstract soundtrack for this “emotional movie”. These mechanistic, physiological effects of emotion drive the robotics, illustrating the physicality and automation of human emotion. By displaying both of these perspectives on human emotion simultaneously, I am interested in how the rhythmic pulsing of the robotic bodies confirms or denies the visibility and performativity of the face. Does emotion therefore lie in the visibility of facial expression, or in the patterns of sensation within her body? Is the actor sincere in her performance if the emotion is felt as opposed to displayed?

Custom open-source biosensors that collect heart rate and signal amplitude, respiration amplitude and rate, and galvanic skin response (sweat) have been in development by Gee since 2012. Click here to access her GitHub page if you would like to try the technology for yourself, or to contribute to the research.
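
As a rough illustration only (this is not the code from the GitHub repository), a sensor stream of this kind might be read from an Arduino-style board over USB serial with pyserial and scaled into control values for motors or musical parameters. The port name, baud rate, comma-separated packet format, and scaling ranges below are all assumptions.

```python
# Illustrative sketch only, not the code from Gee's repository.
# Assumes a hypothetical Arduino-based sensor board streaming comma-separated
# lines of the form "heart,respiration,gsr" over USB serial.
import serial

PORT = "/dev/ttyUSB0"   # assumed port
BAUD = 9600             # assumed baud rate

def scale(value, lo, hi):
    """Clamp and scale a raw sensor reading to the 0.0-1.0 range."""
    return min(max((value - lo) / (hi - lo), 0.0), 1.0)

with serial.Serial(PORT, BAUD, timeout=1) as link:
    while True:
        line = link.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue
        try:
            heart, resp, gsr = (float(v) for v in line.split(","))
        except ValueError:
            continue  # skip malformed packets
        # Map each physiological channel to a 0-1 control value that could
        # drive a mallet motor speed, a note density, or a dynamic level.
        controls = {
            "heart": scale(heart, 40.0, 180.0),       # beats per minute (assumed range)
            "respiration": scale(resp, 0.0, 1023.0),  # raw ADC amplitude (assumed range)
            "gsr": scale(gsr, 0.0, 1023.0),           # raw ADC conductance (assumed range)
        }
        print(controls)
```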

Credits

Thank you to the following for your contributions:

In loving memory of Martin Peach (my robot teacher) – Sébastien Roy (lighting circuitry) – Peter van Haaften (tools for algorithmic composition in Max/MSP) – Grégory Perrin (Electronics Assistant)

Jason Leith, Vivian Li, Mark Lowe, Simone Pitot, Matt Risk, and Tristan Stevans for their dedicated help in the studio

Concordia University, the MARCS Institute at the University of Western Sydney, Innovations en Concert Montréal, Conseil des Arts de Montréal, Thought Technology, and AD Instruments for their support.

Videos

Swarming Emotional Pianos (2012-2014)
Machine demonstration March 2014 – Eastern Bloc Lab Residency, Montréal


Gallery

Swarming Emotional Pianos