LAUGHING WEB DOT SPACE

An interactive website and virtual laugh-in for survivors of sexual violence.

The URL: https://laughingweb.space

Visitors to this site who self-identify as survivors of sexual violence are invited to use the website’s interface to record their laughter and join in, no questions asked. To do so, the user presses the “record” button on the site, which submits their recorded laughter directly to the site’s admin (Erin) for integration into the master laugh-track. Visitors can also listen to the previously recorded laughter of other survivors of sexual violence on loop.
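For illustration, here is a minimal sketch of what the server side of this submission flow might look like, assuming a Python/Flask backend with an invented endpoint and storage folder; the site’s actual implementation (credited below) may differ entirely.

```python
# Minimal sketch of an anonymous laughter-submission endpoint.
# Hypothetical: the real laughingweb.space backend is not documented here.
import uuid
from pathlib import Path

from flask import Flask, request, abort

app = Flask(__name__)
QUEUE_DIR = Path("laughter_queue")  # hypothetical folder the admin reviews
QUEUE_DIR.mkdir(exist_ok=True)

@app.route("/submit-laughter", methods=["POST"])
def submit_laughter():
    # The browser's "record" button would POST the captured audio here.
    audio = request.files.get("audio")
    if audio is None:
        abort(400, "No audio file attached")
    # Store anonymously under a random ID: no account, no questions asked.
    destination = QUEUE_DIR / f"{uuid.uuid4().hex}.webm"
    audio.save(destination)
    return {"status": "received"}, 201

if __name__ == "__main__":
    app.run()
```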

I was putting the finishing touches on laughingweb.space during the 2018 hearing of Brett Kavanaugh, conceiving of it as a humble, mostly anonymous, non-social-media-driven sonic monument for survivors. Whether the laughter is sincere or not, a large body of research demonstrates that even going through the mechanics of joy yields some physiological benefit. Laughter can be made in a spirit of hope, of encouragement, or of wicked defiance – my hope is that this website might help others experience solidarity and togetherness with other (anonymous) survivors during challenging times.

The site officially launched on October 3rd, 2018! But I still consider it to be in beta, because it is currently only fully functional in the Firefox and Google Chrome browsers. But hey, little steps. Safari is coming up next!

Dedicated to Cheryl L’hirondelle

This project was commissioned by Eastern Bloc (Montreal) on the occasion of their 10th anniversary exhibition. For this exhibition, Eastern Bloc invited the exhibiting media artists to present work while thinking of linkages to Canadian media artists that inspired them when they were young. I’m extremely honored and grateful for the conversations that Cheryl L’hirondelle shared with me while I was developing this project.

When I was just beginning to dabble in media art in art school, the net-based artworks of Cheryl L’hirondelle demonstrated to me the power of combining art with sound and songwriting, community building, and other gestures of solidarity on the internet. Exposure to her work was meaningful to me – I was looking for examples of other women using their voices with technology. Skawennati is another great artist who was creating participative web works in the late 90s and early 2000s – you can check out her CyberPowWow here.

Special thank you to Kai-Cheng Thom, who with wisdom, grace, and passion guided me through many stages of this work’s development.

Click here to visit Laughing Web Dot Space

Credits

Graphic Design – Laura Lalonde

Backend Programming – Sofian Audry, Conan Lai, Ismail Negm

Frontend Programming – Koumbit

to the sooe

A 3D printed sound object that houses a human voice murmuring the words of a neural network trained by a deceased author.

to the sooe (SLS 3D-printed object, electronics, laser-etched acrylic, audio, 2018) is the second piece in a body of work Erin Gee made in collaboration with artist Sofian Audry that explores the material and authorial agencies of a deceased author, an LSTM algorithm, and an ASMR performer.

The work in this series transmits the aesthetics of an AI “voice,” rendering its outputted text through the sounds of Gee’s softly spoken human vocals and using a human body as a relatively low-tech filter for processes of machine automation. Other works in this series include of the soone (audio, 2018) and Machine Unlearning (livestreamed video, 2018).

to the sooe is a sound object that features a binaural recording of Erin Gee’s voice as she re-articulates the murmurs of a machine learning algorithm learning to speak. Through this work, the artists re-embody the cognitive processes and creative voices of three agents (a deceased author, a deep learning neural net, and an ASMR performer) into a tangible device. These human and nonhuman agencies are materialized in the object through speaking and writing: a disembodied human voice, words etched onto a mirrored, acrylic surface, as well as code written into the device’s silicon memory.

The algorithmic process used in this work is a deep recurrent neural network agent known as “long short-term memory” (LSTM). The algorithm “reads” Emily Brontë’s Wuthering Heights character by character, familiarizing itself with the syntactical universe of the text. As it reads and re-reads the book, it attempts to mimic Brontë’s style within the constraints of its own artificial “body”, thereby finding its own alien voice.
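As an illustration of this character-by-character process, here is a minimal sketch of training and sampling a char-level LSTM, assuming Python with PyTorch and a local plain-text copy of the novel; the artists’ actual model, hyperparameters, and training regime are not documented here.

```python
# Minimal char-level LSTM sketch (hypothetical parameters throughout).
import torch
import torch.nn as nn

text = open("wuthering_heights.txt").read()  # hypothetical local copy
chars = sorted(set(text))
idx = {c: i for i, c in enumerate(chars)}

class CharLSTM(nn.Module):
    def __init__(self, vocab, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, state=None):
        out, state = self.lstm(self.embed(x), state)
        return self.head(out), state

model = CharLSTM(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# "Reading" the novel character by character: predict each next character.
seq_len = 100
for step in range(1000):  # in effect, re-reading the text in random excerpts
    i = torch.randint(0, len(text) - seq_len - 1, (1,)).item()
    window = [idx[c] for c in text[i : i + seq_len + 1]]
    x = torch.tensor(window[:-1]).unsqueeze(0)
    y = torch.tensor(window[1:]).unsqueeze(0)
    logits, _ = model(x)
    loss = loss_fn(logits.squeeze(0), y.squeeze(0))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Sampling: the network speaks in its own constrained, "alien" voice.
x = torch.tensor([[0]])  # seed with the first character in the vocabulary
state, generated = None, []
for _ in range(200):
    logits, state = model(x, state)
    probs = torch.softmax(logits[0, -1], dim=0)
    x = torch.multinomial(probs, 1).unsqueeze(0)
    generated.append(chars[x.item()])
print("".join(generated))
```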

The reading of this AI-generated text by a human speaker allows the listener to experience simultaneously the neural network agent’s linguistic journey and the augmentation of this speech through vocalization techniques adapted from Autonomous Sensory Meridian Response (ASMR). ASMR involves the use of acoustic “triggers” such as gentle whispering, fingers scratching, or tapping, in an attempt to induce tingling sensations and pleasurable auditory-tactile synaesthesia in the user. Through these autonomous physiological experiences, the artists hope to reveal the autonomous nature of the listener’s own body, implicating the listener as an already-cyborgian component of the hybrid system in place.

Exhibition History

Taking Care – Hexagram Campus Exhibition @ Ars Electronica, Linz, September 5–11, 2018

Credits

Sofian Audry – neural network programming and training

Erin Gee – vocal performer, audio recording and editing, electronics

Grégory Perrin – 3D printing design and laser etching

of the soone

A disembodied voice invites the listener to partake in a speculative audio treatment that promises to awaken underdeveloped neural passageways through exposure to the non-human processes of neural network language acquisition.

of the soone is the first piece in a body of work Erin Gee made in collaboration with artist Sofian Audry that explores the material and authorial agencies of a deceased author, an LSTM algorithm, and an ASMR performer.

The work in this series transmits the aesthetics of an AI “voice,” rendering its outputted text through the sounds of Gee’s softly spoken human vocals and using a human body as a relatively low-tech filter for processes of machine automation.

In of the soone, Gee welcomes the listener to a speculative neural treatment called “language processing and de-processing”, preparing the listener as a subject by dressing them in a fluffy terry robe and an EEG cap to monitor brainwaves. She introduces the listener to the many benefits of this language processing and de-processing “treatment”, as sonic exposure to machine learning processes allows one to subliminally reinvigorate underdeveloped neural-linguistic pathways in their own human mind.

During the aural treatment, the subject listens to Gee’s voice reading out the results of a process enacted by a deep recurrent neural network agent known as “long short-term memory” (LSTM). The algorithm “reads” Emily Brontë’s Wuthering Heights character by character, familiarizing itself with the syntactical universe of the text. As it reads and re-reads the book, it attempts to mimic Brontë’s style within the constraints of its own artificial “body”, thereby finding its own alien voice.

The reading of this AI-generated text by a human speaker allows the listener to experience the neural network agent’s linguistic journey, and to experience the augmentation of this machine-speech through vocalization techniques adapted from Autonomous Sensory Meridian Response (ASMR). ASMR involves the use of acoustic “triggers” such as gentle whispering, fingers scratching or tapping, in an attempt to induce tingling sensations and pleasurable auditory-tactile synaesthesia in the user. Through these autonomous physiological experiences, the work aims to reveal the listener’s own cyborgian qualities as part of the hybrid system in place.

Exhibition History:

April 2018: NRW Forum, Düsseldorf, Germany

March 2018: XXFiles Radio @ Nuit Blanche, Montreal

January 2018: Her Environment @ TCC Gallery, Chicago

Text 2018. Courtesy of the artists.

Project H.E.A.R.T.

A biodata-driven VR game where militainment and pop music fuel a new form of emotional drone warfare.

A twist on popular “militainment” shooter video games, Project H.E.A.R.T. invites the viewer to place their fingers on a custom biodata device and summon their enthusiasm to engage their avatar, Yowane Haku, in “combat therapy.” Fans of Vocaloid characters may recognize Haku as the “bad copy” of Japanese pop celebrity Hatsune Miku, a holographic persona who invites her fans to pour their content and songs into her virtual voice.

The biosensing system features a pulse sensor and a skin conductance sensor of Gee’s design. Through principles of emotional physiology and affective computing, the device gathers data on heart rate and blood flow from the index finger, and skin conductance from the middle and ring fingers. The biodata is read by a microcontroller and transferred to Unity VR, facilitating emotional interactivity: a user’s enthusiasm (spikes in skin conductance amplitude, elevated heart rate, and shifts in the amplitude of the pulse signal) stimulates the holographic pop star to sing in the virtual warzone, inspiring the military fighters to continue the war and create more enemy casualties. At the end of the experience the user is confronted with their “score” of traumatized soldiers versus enemies killed, with no indication of whether this means they won or lost the “game”.
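To make this mapping concrete, here is a minimal sketch of how such an enthusiasm score could be computed from the incoming biodata, assuming Python with pyserial and an invented comma-separated serial format; the project’s actual microcontroller protocol, thresholds, and Unity integration are not documented here.

```python
# Hypothetical enthusiasm scoring from "heart_rate,pulse_amp,gsr" lines.
import serial  # pyserial

PORT = "/dev/ttyUSB0"   # hypothetical device path
BASELINE_GSR = 2.0      # hypothetical resting skin conductance (microsiemens)

def enthusiasm(heart_rate, pulse_amp, gsr, prev_gsr):
    """Crude 0..1 'enthusiasm' score from the three signals."""
    hr_term = max(0.0, (heart_rate - 70.0) / 50.0)  # elevated heart rate
    gsr_spike = max(0.0, gsr - prev_gsr) * 5.0      # phasic sweat burst
    amp_term = min(1.0, abs(pulse_amp))             # blood-flow shift
    return min(1.0, 0.5 * hr_term + 0.3 * gsr_spike + 0.2 * amp_term)

with serial.Serial(PORT, 115200, timeout=1) as link:
    prev_gsr = BASELINE_GSR
    while True:
        line = link.readline().decode(errors="ignore").strip()
        if not line:
            continue
        try:
            hr, amp, gsr = (float(v) for v in line.split(","))
        except ValueError:
            continue  # skip malformed packets
        score = enthusiasm(hr, amp, gsr, prev_gsr)
        prev_gsr = gsr
        # In the installation, a value like this would be forwarded to
        # Unity (e.g. over a socket or OSC) to drive Haku's singing.
        print(f"enthusiasm: {score:.2f}")
```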

The user is thus challenged to navigate the soldiers’ emotional anxieties and summon their positivity to activate Haku’s singing voice as the soldiers battle not only a group of enemies, but also their own lack of confidence in times of global economic instability.

The landscape of Project H.E.A.R.T. was built from geopolitically resonant sites found on Google Maps, creating a dreamlike background for the warzone. In-game dialogue wavers between self-righteous soldier banter typical of video games, and self-help, bringing the VR participant to an interrogation of their own emotional body in a virtual space that conflates war, pop music, drone technology, and perhaps movement-induced VR nausea.

As Kathryn Hamilton pointed out in her 2017 essay “Voyeur Realism” for The New Inquiry,

“VR’s genesis and development is in the military, where it has been used to train soldiers in “battle readiness,” a euphemism for: methods to overcome the innate human resistance to firing at another human being. In the last few years, VR’s usage has shifted 180 degrees from a technology used to train soldiers for war, to one that claims to “amplify” the voices afflicted by war, and to affect “world influencers” who might be able to stop said wars.”

Photography by Toni Hafkenscheid. Images of the Worldbuilding exhibition courtesy of Trinity Square Video, 2017.

Exhibition history:
November-December 2017 Worldbuilding @ Trinity Square Video, Toronto
February-March 2018 Future Perfect @ Hygienic Gallery, New London, Connecticut
April 26-28, 2018 @ Toronto Digifest, Toronto

Credits

Narrative Design: Sofian Audry, Roxanne Baril-Bédard, Erin Gee

3D Art: Alex Lee and Marlon Kroll

Animation and Rigging: Nicklas Kenyon and Alex Lee

VFX: Anthony Damiani, Erin Gee, Nicklas Kenyon

Programming: Sofian Audry, Erin Gee, Nicklas Kenyon, Jacob Morin

AI Design: Sofian Audry

Sound Design: Erin Gee, Austin Haughton, Ben Hinckley, Ben Leavitt, Nicolas Ow

BioSensor Hardware Design: Erin Gee and Martin Peach

BioSensor Case Design: Grégory Perrin

BioSensor Hardware Programming: Thomas Ouellet Fredericks, Erin Gee, Martin Peach

Featuring music by Lazerblade, Night Chaser and Austin Haughton

Yowane Haku character designed by CAFFEIN

Yowane Haku Cyber model originally created by SEGA for Hatsune Miku: Project DIVA 2nd (2010)

Project H.E.A.R.T. also features the vocal acting talents of Erin Gee, Danny Gold, Alex Lee, Ben McCarthy, Gregory Muszkie, James O’Callaghan, and Henry Adam Svec.

Thanks to the support of the Canada Council for the Arts and AMD Radeon, this project was commissioned by Trinity Square Video for the exhibition Worldbuilding, curated by John G Hampton and Maiko Tanaka.

This project would have not been possible without the logistical and technical support of the following organizations:

Technoculture Art and Games Lab (Concordia University)

Concordia University

ASAP Media Services (University of Maine)

BioSolo

Using the BioSynth, I improvised a set for my breath/voice and my sonified heart and sweat release at No Hay Banda, on an evening that also featured the very interesting work of composer Vinko Globokar. The improvisation is very sparing; the goal is to exploit interesting rhythmic moments between heavy breath-song and the heartbeat, all while exploring the limits of respiratory activity and observing what effect it has on my physiology.

Photography: Wren Noble

BioSolo was first performed at No Hay Banda series in Montreal at La Sala Rossa, organized by Daniel Àñez and Noam Bierstone.

Echo Grey

Echo Grey is a composition for four voices, feedback musical instruments, and a tape part (which features the sounds of a broken image file). It received its world premiere at Vancouver New Music with Andrea Young, Marina Hasselberg, Sharon Chohi Kim, Micaela Tobin, Michael Day, Braden Diotte, and Erin Gee in November 2016, and has also been performed at Open Space Gallery (Victoria) and Neworks (Calgary).

The mythological character Echo exists only as a shade, a reflection or bounce. Moving between words and utterances, Echo’s mythological voice exceeds the signal itself and speaks to a deeper engagement with materiality. In Echo Grey, I composed a series of vocal patterns that emerge directly from breath as raw material, the movement of intake and exhalation made audible through mechanistic patterns that are impossible to perform perfectly. The choir’s collective attempt at mechanistically engaging with an impossible repetition eventually negates the signal: all that is left is the lungs and vocal vibrations of the individual, who gasps, cries in defeat, and whoops in ecstasy.

These human voices are simultaneously punctuated by the feedback of microphones and amplified instruments, and by a tape track composed through process – a bouncing of data back and forth between visual and aural software that eventually results in nothing but glitched statements. This tape track is analogous to the squealing proximity of the sender to the receiver in the scored feedback parts, which is in turn analogous to the back and forth of the singers’ breath as they perform. The colour grey in the work’s title is inspired by the back and forth motion of a 2HB pencil stroking endlessly across an empty pad of paper.

Song of Seven: Biochoir

A composition for children’s choir featuring seven voices and seven sets of biodata with piano accompaniment.

In this song, young performers contemplate an emotional time in their lives and recount this memory as an improvised vocal solo. The choir is instructed to enter into a meditative state during these emotional solos, deeply listening to the tale and empathizing with the soloist, using imagination to recreate the scene. Choir members are attached to a musical instrument I call the BioSynth, a small synthesizer that sonifies each individual member’s heartbeat and sweat release as pre-programmed tones. Sweat release, often acknowledged as a robust measure of emotional engagement, is signaled by overtones that appear and reappear over a drone; meanwhile the heartbeat of each chorister is sounded according to blood flow, providing a light percussion.

The musical score combines traditional music notation with vocal games and rhythms determined not necessarily by the conductor or score but by beatings of the heart and bursts of sweat. Discreet flashing lights on the synthesizer boxes in front of the choristers allow the singers to discern the rhythms and patterns of their hearts and sweat glands, which permits the composition to incorporate the rhythms of the body into the final score as markers that trigger sonic events.
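As an illustration of this kind of body-driven sonification, here is a minimal sketch that renders a drone whose overtones fade in at hypothetical sweat-burst times and adds a percussive ping per heartbeat, assuming Python with numpy; the real BioSynth is a hardware synthesizer whose tunings and mappings are not documented here.

```python
# Hypothetical offline rendering of the BioSynth's mapping:
# sweat bursts -> overtones over a drone, heartbeats -> light percussion.
import wave
import numpy as np

SR = 44100
DRONE_HZ = 110.0                     # hypothetical base drone
heartbeats = [0.8, 1.6, 2.3, 3.1]    # hypothetical beat times (seconds)
sweat_bursts = [(1.0, 3), (2.5, 5)]  # hypothetical (time, partial) pairs

t = np.arange(int(SR * 4)) / SR
audio = 0.2 * np.sin(2 * np.pi * DRONE_HZ * t)  # constant drone

# Each sweat burst fades in one overtone above the drone.
for start, partial in sweat_bursts:
    envelope = np.clip((t - start) / 0.5, 0.0, 1.0)
    audio += 0.1 * envelope * np.sin(2 * np.pi * DRONE_HZ * partial * t)

# Each heartbeat adds a short percussive ping.
for beat in heartbeats:
    i, n = int(beat * SR), int(0.05 * SR)
    ping = np.exp(-np.linspace(0, 8, n)) * np.sin(2 * np.pi * 880 * t[:n])
    audio[i : i + n] += 0.4 * ping

audio = (audio / np.max(np.abs(audio)) * 32767).astype(np.int16)
with wave.open("biosynth_sketch.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SR)
    f.writeframes(audio.tobytes())
```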

This choral composition was workshopped over a one-week residency at the LIVELab (McMaster University) with selected members of the Hamilton Children’s Choir, and facilitated by Hamilton Artists Inc. with support from the Canada Council for the Arts.

For more information

Hamilton Children's Choir
Daniel Àñez (Spanish biography)
Hamilton Artists Inc.
LIVElab
Canada Council for the Arts

Piano accompanist: Daniel Àñez
Hardware design: Martin Peach
Software design: Erin Gee

Larynx Series

(2015)

inkjet prints on acid-free paper

34″ × 44″ each

These vector images are derived from endoscopic footage of a human larynx. Within the images I discovered what looked like abstract musical symbols in the margins. These silent songs of the computer-rendered throat have also been transformed into choral songs for four human voices, premiered at the Dunlop Art Gallery, Saskatchewan, in 2015.

Swarming Emotional Pianos

A looming projection of a human face surrounded by six musical chime robots driven by biological markers of emotion.

(2012 – ongoing)

Aluminium tubes, servo motors, custom mallets, Arduino-based electronics, iCreate platforms

Approximately 27” x 12” x 12” each

The projected face is that of an actor (Laurence Dauphinais or Matthew Keyes), who over 20 minutes moves between the extreme emotional states of surprise, fear, anger, sadness, sexual arousal, and joy in five-minute intervals. During the actor’s performance, Gee hooked the performer up to a series of biosensors that monitored how heart rate, sweat, and respiration changed between these emotional states.

The music that the robots surrounding the projection screen play as the actress moves between emotional states reacts to these physiological responses: the musical tones and rhythms shift and intensify as the actress’s heart rate, sweat bursts, blood flow, and respiration change. While the musical result is almost alien to assumptions of what emotional music might sound like, one might encounter the patterns as an abstracted lie-detector test that displays the unique internal fluctuations moving beneath the surface of her large, projected face. Does emotion lie within the visibility of facial expression, or somewhere in the inaudible made audible, the patterns of bodily sensation in her body? Is the actor sincere in her performance if the emotion is felt as opposed to displayed? Micro-bursts of emotional sentiment are thus amplified by the robots, providing an intimate and abstract soundtrack for this “emotional movie”.

Emotional-physical outputs are extended through robotic performers as human actors focus on their internal states and, in fact, activate their emotions mechanistically as a means of creating change in their bodies, thus instrumentalizing emotion.

Custom open-source biosensors that collect heart rate and signal amplitude, respiration amplitude and rate, and galvanic skin response (sweat) have been in development by Gee since 2012. Click here to access her GitHub page if you would like to try the technology for yourself, or contribute to the research.
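For readers who want to experiment in software before building the hardware, here is a minimal sketch of one common way to derive heart rate from a raw pulse trace by threshold-crossing peak detection, assuming Python with numpy; the firmware in Gee’s repository implements its own signal processing, which may differ.

```python
# Hypothetical pulse-to-BPM estimation by rising-edge peak detection.
import numpy as np

def bpm_from_pulse(trace: np.ndarray, sample_rate: float) -> float:
    """Estimate beats per minute from a raw pulse waveform."""
    centered = trace - trace.mean()
    threshold = 0.5 * centered.max()
    above = centered > threshold
    # Rising-edge crossings of the threshold mark individual beats.
    rising = np.flatnonzero(~above[:-1] & above[1:])
    if len(rising) < 2:
        return 0.0
    period = np.diff(rising).mean() / sample_rate  # seconds per beat
    return 60.0 / period

# Synthetic 72 bpm test trace: 1.2 beats/second for 10 s at 100 Hz.
sr = 100.0
t = np.arange(0, 10, 1 / sr)
pulse = np.sin(2 * np.pi * 1.2 * t) ** 21  # sharp, beat-like peaks
print(f"{bpm_from_pulse(pulse, sr):.0f} bpm")
```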

Credits

Thank you to the following for your contributions:

  • Martin Peach (my robot teacher)
  • Sébastien Roy (lighting circuitry)
  • Peter van Haaften (tools for algorithmic composition in Max/MSP)
  • Grégory Perrin (electronics assistant)
  • Matt Risk, Tristan Stevans, Simone Pitot, and Jason Leith for their hours of dedicated studio help
  • Concordia University, the MARCS Institute at the University of Western Sydney, Innovations en Concert Montréal, Conseil des Arts de Montréal, Thought Technology, and AD Instruments for their support.

Swarming Emotional Pianos (2012–2014). Machine demonstration, March 2014 – Eastern Bloc Lab Residency, Montréal.

Gig Vocaloid

A video-text pop band from a dystopic future where the human voice is lost and pop music reigns supreme.

Virtual voices are key for these pop stars. Dancing, costumed performers carry tablets that display the human larynx and song lyrics as they dance in sync.

GIG VOCALOID is a virtual pop band that had its first performance at the Musée d’art contemporain de Montréal in February 2015 at X + 1, an evening of Internet-inspired art.

The project is inspired by virtual pop stars such as Hatsune Miku, who exists equally as a distributed visual media avatar (holograms, merchandise) and as a digital software tool for public, fan-based synthesized vocal creation. GIG VOCALOID is also inspired by boy and girl pop bands, in which individual voices and musicality are often superseded by a pop “character.” This is especially true of the Japanese pop group AKB48, whose 48 female members are voted upon by the public for the right to solo singing and “leadership” within the group.

In this pop music context, celebrity character, fashion, and visual appeal are more important than the human singing voice itself, which is often replaced by synthesizers and pitch correction. GIG VOCALOID invokes a fantasy posthumanist future where the human voice is lost, subjectivity is dead, and everyone is celebrating.

The human larynx (typically a hidden, interior aspect of vocal performance) is displayed prominently on tablets, externalizing the human voice beyond the preciousness of the human body. “Lyrics” to their song flash aleatorically through these videos, enabling human performers to become the support for digital artwork. GIG VOCALOID re-localizes the voice beyond the borders of the flesh body in an infectious avatar-dream.