
to the sooe

A 3D printed sound object that houses a human voice murmuring the words of a neural network trained on the writing of a deceased author.

to the sooe (SLS 3D printed object, electronics, laser-etched acrylic, audio, 2018) is the second piece in a body of work Erin Gee made in collaboration with artist Sofian Audry that explores the material and authorial agencies of a deceased author, an LSTM algorithm, and an ASMR performer.

The work in this series transmits the aesthetics of an AI “voice”, rendering its outputted text through the sounds of Gee’s softly spoken human vocals and using a human body as a relatively low-tech filter for processes of machine automation. Other works in this series include of the soone (audio, 2018) and Machine Unlearning (livestreamed video, 2018).

to the sooe is a sound object that features a binaural recording of Erin Gee’s voice as she re-articulates the murmurs of a machine learning algorithm learning to speak. Through this work, the artists re-embody the cognitive processes and creative voices of three agents (a deceased author, a deep learning neural net, and an ASMR performer) in a tangible device. These human and nonhuman agencies are materialized in the object through speaking and writing: a disembodied human voice, words etched onto a mirrored acrylic surface, and code written into the device’s silicon memory.

The algorithmic process used in this work is a deep recurrent neural network known as a “long short-term memory” (LSTM) network. The algorithm “reads” Emily Brontë’s Wuthering Heights character by character, familiarizing itself with the syntactical universe of the text. As it reads and re-reads the book, it attempts to mimic Brontë’s style within the constraints of its own artificial “body”, thereby finding its own alien voice.
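To make that process concrete, below is a minimal character-level LSTM sketch in PyTorch. It illustrates the general technique only, not the artists’ actual code: the file path, model size, and training length are assumptions.

```python
# Minimal character-level LSTM sketch (PyTorch). Illustrative only:
# hidden size, sequence length, and step count are assumptions.
import torch
import torch.nn as nn

text = open("wuthering_heights.txt", encoding="utf-8").read()  # hypothetical path
chars = sorted(set(text))
char_to_ix = {c: i for i, c in enumerate(chars)}

class CharLSTM(nn.Module):
    def __init__(self, vocab_size, hidden_size=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.lstm = nn.LSTM(hidden_size, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, vocab_size)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.head(h), state

model = CharLSTM(len(chars))
optimizer = torch.optim.Adam(model.parameters(), lr=2e-3)
loss_fn = nn.CrossEntropyLoss()

data = torch.tensor([char_to_ix[c] for c in text])
seq_len = 100
for step in range(1000):  # real training would run far longer
    i = torch.randint(0, len(data) - seq_len - 1, (1,)).item()
    x = data[i:i + seq_len].unsqueeze(0)          # input characters
    y = data[i + 1:i + seq_len + 1].unsqueeze(0)  # next-character targets
    logits, _ = model(x)
    loss = loss_fn(logits.view(-1, len(chars)), y.view(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Sampling: feed the model's own output back in, one character at a time.
ix = torch.tensor([[char_to_ix["T"]]])
state, out = None, []
for _ in range(200):
    logits, state = model(ix, state)
    probs = torch.softmax(logits[0, -1], dim=0)
    ix = torch.multinomial(probs, 1).view(1, 1)
    out.append(chars[ix.item()])
print("".join(out))
```

Early in training the sampled text is near-random; as the loss falls, word-like shapes and Brontë-inflected cadences emerge. It is this intermediate, half-formed stage of “learning to speak” that the work gives voice to.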


The reading of this AI-generated text by a human speaker allows the listener to experience the neural network agent’s linguistic journey simultaneously with the augmentation of this speech through vocalization techniques adapted from Autonomous Sensory Meridian Response (ASMR). ASMR employs acoustic “triggers” such as gentle whispering and fingers scratching or tapping in an attempt to induce tingling sensations and pleasurable auditory-tactile synaesthesia in the listener. Through these involuntary physiological experiences, the artists hope to reveal the autonomous nature of the listener’s own body, implicating the listener as an already-cyborgian component of the hybrid system in place.

Exhibition History

Taking Care – Hexagram Campus Exhibition at Ars Electronica, Linz, September 5–11, 2018

Credits

Sofian Audry – neural network programming and training

Erin Gee – vocal performer, audio recording and editing, electronics

Grégory Perrin – 3D printing design and laser etching

NRW Forum Düsseldorf

My collaborative work with Sofian Audry, of the soone (2018), will be featured in an exciting exhibition at NRW Forum focused on contemporary art and AI, curated by Tina Sauerländer (peer to space).

Artists: Nora Al-Badri & Jan Nikolai Nelles (DE), Jonas Blume (DE), Justine Emard (FR), Carla Gannis (US), Sofian Audry and Erin Gee (CAN), Liat Grayver (ISR/DE), Faith Holland (US), Tuomas A. Laitinen (FI), and William Latham (UK)

Initiated and hosted by Leoni Spiekermann (ARTGATE Consulting)
Curated by Tina Sauerländer and Peggy Schoenegge
At NRW Forum Düsseldorf, Ehrenhof 2, 40479 Düsseldorf, Germany

Preview: May 25–27, 2018, during Meta Marathon
Opening: June 8, 2018, 7pm

Exhibition: June 9 – August 19, 2018

We are particularly excited for this exhibition because we will debut a 3D printed enclosure for the work, made especially by Grégory Perrin, who previously worked with me on the sensor box for Project H.E.A.R.T. (2017) as well as an amazing box for the installation of Swarming Emotional Pianos (2015).

NRW Forum website 

peer to space website


Swarming Emotional Pianos

A looming projection of a human face surrounded by six musical chime robots driven by biological markers of emotion.

(2012 – ongoing)

Aluminium tubes, servo motors, custom mallets, Arduino-based electronics, iRobot Create platforms

Approximately 27” x 12” x 12” each

The projected face is that of an actor (Laurence Dauphinais or Matthew Keyes), who over 20 minutes moves through extreme emotional states of surprise, fear, anger, sadness, sexual arousal, and joy in five-minute intervals. During the performance, Gee connected the performer to a series of biosensors that monitored how heart rate, sweat, and respiration changed between emotional states.

The music that the robots surrounding the projection screen play as the performer moves between emotional states reacts to these physiological responses: the musical tones and rhythms shift and intensify as heart rate, sweat bursts, blood flow, and respiration change. While the musical result is almost alien to assumptions of what emotional music might sound like, one might encounter the patterns as an abstracted lie-detector test that displays the performer’s unique internal fluctuations moving beneath the surface of the large, projected face. Does emotion lie within the visibility of facial expression, or somewhere in the inaudible made audible, the patterns of sensation within the body? Is the performance sincere if the emotion is felt rather than displayed? Micro-bursts of emotional sentiment are thus amplified by the robots, providing an intimate and abstract soundtrack for this “emotional movie”.
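As a rough illustration of this kind of sonification, the sketch below maps physiological readings onto a pitch and a strike rate for a chime robot. It is a hypothetical mapping written in Python; the actual work uses Max/MSP and Arduino-based electronics, and its real ranges and scale are not published here.

```python
# Illustrative sonification mapping: physiological signals -> chime robot notes.
# All ranges and the pentatonic scale are assumptions, not the artists' mapping.

def scale(value, lo, hi, out_lo, out_hi):
    """Linearly map value from [lo, hi] into [out_lo, out_hi], clamped."""
    t = max(0.0, min(1.0, (value - lo) / (hi - lo)))
    return out_lo + t * (out_hi - out_lo)

PENTATONIC = [0, 2, 4, 7, 9]  # assumed pitch set; any scale would work

def map_reading(heart_rate_bpm, gsr_microsiemens, resp_rate_bpm):
    # Faster heart rate -> denser rhythm (mallet strikes per second).
    strikes_per_sec = scale(heart_rate_bpm, 50, 140, 0.5, 6.0)
    # Skin conductance bursts -> register: more arousal, higher chimes.
    octave = int(scale(gsr_microsiemens, 1, 20, 3, 6))
    # Respiration rate selects the scale degree.
    degree = PENTATONIC[int(resp_rate_bpm) % len(PENTATONIC)]
    midi_note = 12 * octave + degree
    return midi_note, strikes_per_sec

# e.g. an agitated performer:
print(map_reading(heart_rate_bpm=118, gsr_microsiemens=14, resp_rate_bpm=22))
```

The clamped linear scaling is one typical design choice in biosignal sonification: it keeps out-of-range sensor spikes from producing impossible notes while still letting small fluctuations remain audible.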

Emotional-physical outputs are extended through robotic performers as the human actors focus on their internal states, activating their emotions mechanistically as a means of creating change in the body and thus instrumentalizing emotion.

Custom open-source biosensors that collect heart rate and signal amplitude, respiration amplitude and rate, and galvanic skin response (sweat) have been in development by Gee since 2012. Click here to access her GitHub page if you would like to try the technology for yourself or contribute to the research.
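For readers who try the hardware, reading such an Arduino-based sensor stream from a computer might look like the sketch below. The serial port, baud rate, and packet format ("hr,gsr,resp\n") are assumptions made for illustration; consult the firmware in the GitHub repository for the real protocol.

```python
# Hypothetical reader for an Arduino-based biosensor stream (pyserial).
# Port, baud rate, and packet layout are assumptions, not the project's spec.
import serial

ser = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)
while True:
    line = ser.readline().decode("ascii", errors="ignore").strip()
    if not line:
        continue
    try:
        heart_rate, gsr, resp = (float(v) for v in line.split(","))
    except ValueError:
        continue  # skip malformed packets
    print(f"HR {heart_rate:5.1f} bpm  GSR {gsr:5.2f} uS  resp {resp:4.1f} /min")
```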

Credits

Thank you to the following for your contributions:

  • Martin Peach (my robot teacher)
  • Sébastien Roy (lighting circuitry)
  • Peter van Haaften (tools for algorithmic composition in Max/MSP)
  • Grégory Perrin (electronics assistant)
  • Matt Risk, Tristan Stevans, Simone Pitot, and Jason Leith for their hours of dedicated studio help
  • Concordia University, the MARCS Institute at the University of Western Sydney, Innovations en Concert Montréal, Conseil des Arts de Montréal, Thought Technology, and AD Instruments for their support.

Swarming Emotional Pianos (2012–2014), machine demonstration, March 2014, Eastern Bloc Lab Residency, Montréal