Performance

BioSolo

Using the BioSynth, I improvised a set for my breath/voice and my sonified heart and sweat release at No Hay Banda, in an evening that also featured the very interesting work of composer Vinko Globokar (Slovenia/France). The improvisation is very sparing: the goal is to exploit interesting rhythmic moments between heavy breath-song and the heartbeat, all the while exploring the limits of respiratory activity and seeing what effect it has on my physiology.

Photography: Wren Noble

BioSolo was first performed as part of the No Hay Banda series at La Sala Rossa in Montreal, organized by Daniel Àñez and Noam Bierstone.

Song of Seven: Biochoir

A composition for children’s choir featuring seven voices and seven sets of biodata with piano accompaniment.

In this song, young performers contemplate an emotional time in their lives and recount this memory as an improvised vocal solo. The choir is instructed to enter into a meditative state during these emotional solos, deeply listening to the tale and empathizing with the soloist, using imagination to recreate the scene. Choir members are attached to a musical instrument I call the BioSynth, a small synthesizer that sonifies each member's heartbeat and sweat release to pre-programmed tones. Sweat release, often acknowledged as a robust measure of emotional engagement, is signaled by overtones that appear and reappear over a drone; meanwhile the heartbeat of each chorister is sounded according to blood flow, providing a light percussion.
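
As an illustration only (the BioSynth itself is custom hardware and software, and its real parameters are not documented here), the following minimal Python sketch shows the kind of mapping described above: a drone whose overtones swell with a skin-conductance trace, plus a soft click at each heartbeat. The pitch, partials, and file name are assumptions made for the example.

```python
# Illustrative sketch only (not the BioSynth firmware): render a drone whose
# overtones swell with a skin-conductance trace, plus a click per heartbeat.
# Pitch, partials, and envelope constants below are assumptions.
import wave
import numpy as np

SR = 44100               # sample rate
DRONE_HZ = 110.0         # assumed drone pitch
PARTIALS = [2, 3, 4, 5]  # overtones that fade in and out with sweat release

def render(beat_times, gsr_trace, duration=10.0):
    t = np.linspace(0.0, duration, int(SR * duration), endpoint=False)
    # Normalize the skin-conductance trace to 0..1 and stretch it over time.
    env = np.interp(t, np.linspace(0.0, duration, len(gsr_trace)), gsr_trace)
    env = (env - env.min()) / (np.ptp(env) + 1e-9)
    audio = 0.3 * np.sin(2 * np.pi * DRONE_HZ * t)
    for n in PARTIALS:                     # overtones follow the GSR envelope
        audio += 0.1 * env * np.sin(2 * np.pi * DRONE_HZ * n * t)
    click = 0.5 * np.exp(-np.arange(2000) / 300.0)  # short percussive decay
    for bt in beat_times:                  # one click per heartbeat
        i = int(bt * SR)
        audio[i:i + 2000] += click[: max(0, len(audio) - i)]
    return audio / np.max(np.abs(audio))

if __name__ == "__main__":
    beats = np.arange(0.5, 9.5, 0.8)               # made-up ~75 bpm pulse
    sweat = np.abs(np.random.randn(100)).cumsum()  # stand-in GSR samples
    pcm = (render(beats, sweat) * 32767).astype(np.int16)
    with wave.open("biosynth_sketch.wav", "wb") as f:
        f.setnchannels(1); f.setsampwidth(2); f.setframerate(SR)
        f.writeframes(pcm.tobytes())
```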

The musical score combines traditional music notation with vocal games and rhythms determined not necessarily by the conductor or score but by beatings of the heart and bursts of sweat. Discreet flashing lights on the synthesizer boxes in front of the choristers allow the singers to discern the rhythms and patterns of their hearts and sweat glands, which permits the composition to incorporate the rhythms of the body into the final score as markers that trigger sonic events.
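
As a rough sketch of how body rhythms can become score markers, the hypothetical example below detects upward threshold crossings in a sampled biosignal and returns their times as trigger points; the threshold, sample rate, and data are invented for illustration and are not taken from the piece.

```python
# Hypothetical sketch: turn a sampled biosignal into discrete trigger times,
# the way sweat bursts or heartbeats can cue sonic events in a score.
from typing import List

def rising_edge_triggers(samples: List[float], threshold: float,
                         sample_rate: float) -> List[float]:
    """Times (seconds) at which the signal crosses `threshold` upward."""
    return [i / sample_rate
            for i in range(1, len(samples))
            if samples[i - 1] < threshold <= samples[i]]

# Made-up skin-conductance trace sampled at 10 Hz; two "sweat burst" markers.
gsr = [0.1, 0.12, 0.11, 0.4, 0.9, 1.2, 0.8, 0.3, 0.2, 0.7, 1.1, 0.5]
print(rising_edge_triggers(gsr, threshold=0.6, sample_rate=10.0))  # [0.4, 0.9]
```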

This choral composition was workshopped over a one-week residency at the LIVELab (McMaster University) with selected members of the Hamilton Children’s Choir, and facilitated by Hamilton Artists Inc. with support from the Canada Council for the Arts.

For more information

Hamilton Children's Choir
Daniel Àñez (Spanish biography)
Hamilton Artists Inc.
LIVELab
Canada Council for the Arts

Piano accompanist: Daniel Àñez
Hardware design: Martin Peach
Software design: Erin Gee

Larynx Series

(2015)

inkjet prints on acid-free paper

34″x 44″ each

These vector images are derived from endoscopic footage of a human larynx. Within the images I discovered what looked like abstract musical symbols in the margins. These silent songs of the computer-rendered throat have also been transformed into choral songs for four human voices, premiered at the Dunlop Art Gallery, Saskatchewan, in 2015.

Swarming Emotional Pianos

A looming projection of a human face surrounded by six musical chime robots driven by biological markers of emotion.

(2012 – ongoing)

Aluminium tubes, servo motors, custom mallets, Arduino-based electronics, iCreate platforms

Approximately 27” x 12” x 12” each

The projected face is that of an actor (Laurence Dauphinais or Matthew Keyes) who, over the course of the performance, moves between extreme emotional states of surprise, fear, anger, sadness, sexual arousal, and joy in five-minute intervals. During the performance, Gee hooks the actor up to a series of biosensors that monitor how heart rate, sweat, and respiration change between these emotional states.

The music that the robots surrounding the projection screen play as the actor moves between emotional states reacts to these physiological responses: the musical tones and rhythms shift and intensify as heart rate, sweat bursts, blood flow, and respiration change. While the musical result is almost alien to assumptions of what emotional music might sound like, one might encounter the patterns as an abstracted lie-detector test that displays the unique internal fluctuations moving beneath the surface of the large, projected face. Does emotion lie within the visibility of facial expression, or somewhere in the invisible made audible, the patterns of sensation within the body? Is the actor sincere in the performance if the emotion is felt as opposed to displayed? Micro-bursts of emotional sentiment are thus amplified by the robots, providing an intimate and abstract soundtrack for this “emotional movie”.
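
To make the kind of mapping described above concrete, here is a hypothetical sketch (not the Max/MSP patches actually used in the work) of how normalized heart rate, skin conductance, and respiration could jointly set how fast and how hard a single chime robot strikes; all constants are invented.

```python
# Hypothetical mapping sketch (not the piece's actual Max/MSP patches):
# physiological measures -> strike density and velocity for one chime robot.
from dataclasses import dataclass

@dataclass
class BioState:
    heart_rate: float   # beats per minute
    gsr: float          # skin conductance, normalized 0..1
    breath_rate: float  # breaths per minute

def strike_plan(state: BioState):
    """Return (strikes per second, strike velocity 0..1)."""
    # Faster heart and breath -> denser rhythm; sweat bursts -> harder hits.
    arousal = (state.heart_rate - 60) / 60 + (state.breath_rate - 12) / 12
    rate = min(8.0, max(0.5, 1.0 + 3.0 * arousal))
    velocity = min(1.0, max(0.1, 0.3 + 0.7 * state.gsr))
    return rate, velocity

print(strike_plan(BioState(heart_rate=96, gsr=0.8, breath_rate=20)))  # ≈ (4.8, 0.86)
```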

Emotional-physical outputs are extended through robotic performers as the human actors focus on their internal states and, in fact, activate their emotions mechanistically, as a means of creating change in their bodies, thus instrumentalizing emotion.

Custom open-source biosensors that collect heart rate and signal amplitude, respiration amplitude and rate, and galvanic skin response (sweat) have been in development by Gee since 2012. See her GitHub page if you would like to try the technology for yourself or contribute to the research.
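
For readers who want a starting point before exploring the repository, the following is an assumed sketch of reading comma-separated sensor frames from an Arduino-style board over a serial port with pyserial and estimating a running heart rate from beat timestamps. The port name, baud rate, and field order are placeholders and may not match the actual open-source firmware.

```python
# Assumed sketch: read comma-separated biosensor frames over serial and
# estimate heart rate from beat timestamps. Port name and field order are
# placeholders; the actual open-source firmware may use a different format.
import time
import serial  # pyserial

PORT = "/dev/ttyUSB0"  # placeholder; use whatever port your board enumerates as
BAUD = 115200

def read_frames(port=PORT, baud=BAUD):
    """Yield (timestamp, pulse, respiration, gsr) tuples from CSV lines."""
    with serial.Serial(port, baud, timeout=1) as ser:
        while True:
            line = ser.readline().decode("ascii", errors="ignore").strip()
            if not line:
                continue
            try:
                pulse, resp, gsr = (float(v) for v in line.split(","))
            except ValueError:
                continue  # skip malformed frames
            yield time.time(), pulse, resp, gsr

def heart_rate_bpm(beat_times, window=10.0):
    """Estimate BPM from beat timestamps within the last `window` seconds."""
    recent = [t for t in beat_times if beat_times[-1] - t <= window]
    if len(recent) < 2:
        return None
    return 60.0 * (len(recent) - 1) / (recent[-1] - recent[0])
```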

Credits

Thank you to the following for your contributions:

  • Martin Peach (my robot teacher)
  • Sébastien Roy (lighting circuitry)
  • Peter van Haaften (tools for algorithmic composition in Max/MSP)
  • Grégory Perrin (electronics assistant)
  • Matt Risk, Tristan Stevans, Simone Pitot, and Jason Leith for their hours of dedicated studio help
  • Concordia University, the MARCS Institute at the University of Western Sydney, Innovations en Concert Montréal, Conseil des Arts de Montréal, Thought Technology, and AD Instruments for their support.

Swarming Emotional Pianos (2012–2014). Machine demonstration, March 2014 – Eastern Bloc Lab Residency, Montréal.

Gig Vocaloid

A video-text pop band from a dystopic future where the human voice is lost and pop music reigns supreme.

Virtual voices are key for these pop stars. Costumed performers carry tablets that display the human larynx and song lyrics as they dance in sync.

GIG VOCALOID is a virtual pop band that had its first performance at the Musée d’art contemporain de Montréal in February 2015 at X + 1, a Nocturne evening of Internet-inspired art.

The project is inspired by virtual pop stars such as Hatsune Miku, who exist equally as distributed visual-media avatars (holograms, merchandise) and as digital software tools for public, fan-based synthesized vocal creation. GIG VOCALOID is also inspired by boy and girl pop bands, in which individual voices and musicality are often superseded by a pop “character.” This is especially true of the Japanese pop group AKB48, whose 48 female members are voted on by the public for the right to solo singing and “leadership” within the group.

In this pop music context, celebrity character, fashion, and visual appeal are more important than the human singing voice itself, which is often replaced by synthesizers and pitch correction. GIG VOCALOID invokes a fantasy posthumanist future where the human voice is lost, subjectivity is dead, and everyone is celebrating.

Externalizing the voice from the preciousness of the human body, the human larynx (typically a hidden, interior aspect of vocal performance) is displayed prominently on the tablets. “Lyrics” to the song flash aleatorically through these videos, enabling the human performers to become the support for a digital artwork. GIG VOCALOID re-localizes the voice beyond the borders of the flesh body in an infectious avatar-dream.

Anim.OS

(2012)

Generative software choir installation in collaboration with Oliver Bown

Inspired by excerpts from Elizabeth Grosz’s book “Architecture from the Outside”, I made recordings of myself singing text that referenced insideness, outsideness, and flexible structures. These recordings were arranged by composer Oliver Bown into networked choral software.

Anim.OS is a networked computer choir developed by Oliver Bown (Sydney) and Erin Gee (Montreal) in 2012. Videography and sound recording by Shane Turner (Montreal).

This is documentation of one of the first tests for improvisation and control of the choir at the University of Sydney.

Orpheux Larynx

(2011)

Vocal work for three artificial voices and soprano, feat. Stelarc.

Music by Erin Gee, text by Margaret Atwood.

I made Orpheux Larynx while in residence at the MARCS Auditory Laboratories at the University of Western Sydney, Australia, in the summer of 2011. I was invited by Stelarc to create a performance work with an intriguing device he was developing there called the Prosthetic Head, a computerized conversational agent that responds to keyboard-based chat input with an 8-bit baritone voice. I worked from the idea of creating a choir of Stelarcs, and developed music for three voices by digitally manipulating the avatar’s voice. Eventually Stelarc’s avatar voices were given the bodies of three robots: a mechanical arm, a modified Segway, and a commercially available device called a PPLbot. I sang along with this avatar-choir while carrying my own silent avatar with me on a digital screen.

It is said that after Orpheus’ head was ripped from his body, he continued singing as his head floated down a river. He was rescued by two nymphs, who lifted his head to the heavens to become a star. In this performance, all the characters (Stelarc’s voice, my voice, Orpheus, Euridice, the nymphs) are blended into intersubjective robotic shells that speak and sing on our behalf. The flexibility of the avatar allows a plurality of voices to emerge from relatively few physical bodies, blending past subjects into present and possible future subjects. Orpheus is tripled to become a multi-headed Orpheux: simultaneously disembodied head, humanoid nymph, and deceased Euridice. The meaning of the work lies in the dissonant proximity between past and present characters, as well as in my own identity inhabiting the bodies and voices of Stelarc’s prosthetic self.

Credits

Music, video and performance by Erin Gee. Lyrics “Orpheus (1)” and “Orpheus (2)” by Margaret Atwood. Robotics by Damith Herath. Technical Support by Zhenzhi Zhang (MARCs Robotics Lab, University of Western Sydney). Choreography coaching by Staci Parlato-Harris.

Special thanks to Stelarc and Garth Paine for their support in the creation of the project.

This research project is supported by the Social Sciences and Humanities Research Council of Canada and MARCS Auditory Labs at the University of Western Sydney. The Thinking Head project is funded by the Australian Research Council and the National Health and Medical Research Council.

Music: Orpheux Larynx © 2011. Lyrics are the poems “Orpheus (1)” and “Orpheus (2)” by Margaret Atwood, from the poetry collection Selected Poems, 1966 – 1984, currently published by Oxford University Press, © 1990 by Margaret Atwood. In the United States, the poems appear in Selected Poems II, 1976 – 1986, currently published by Houghton Mifflin, © 1987 by Margaret Atwood. In the UK, these poems appear in Eating Fire, Selected Poetry 1965 – 1995, currently published by Virago Press, © 1998 by Margaret Atwood. All rights reserved.

BodyRadio

(2011)

Four-part score for electronic voices in organic bodies, debuted as part of New Adventures in Sound Art’s Deep Wireless Festival of Transmission Art, Toronto, Canada

BodyRadio is a composition for four performers that reverses the interiority/exteriority of a radio, which is a human voice in an electronic body. Small wireless microphones are placed directly in the mouths of the performers, who each face a guitar amplifier. The performers control the sensitivity of both the amplifier’s receiving function and the microphone’s sending function in accordance with the score. The final sounds are a combination of inner mouth noises, breathing, and varying pitches of feedback controlled by the opening and closing of mouths.

2plus2: Prologue

(2009)

Performed September 9th, 2009, in Prince Albert, SK.

2plus2 is a collaborative work inspired by the 1967 painting of the same title by Douglas Morton, with musical composition by myself and choreography by Robin Poitras. A goal in my composition was to marry volume and weight with the lively colour and joy embodied in Morton’s work, using such reference points as jazz music, as well as the sonic similarities between vintage sci-fi and nature documentary films. The final work is an exploration of opposite forces, partnering, and twinned bodies, exploring the evolution of a species, learned behavior, community, and time.

This short prologue was premiered in September 2009 as part of CrossHatch: Dance and the Arts in three Saskatchewan cities.

2plus2: Prologue, 2009. In collaboration with choreographer Robin Poitras. Trumpet samples courtesy of Mihai Sorohan.

Poitras and I will continue in 2010 to complete the project with a larger work for two dancers.