VR Commission Update

Here’s a sneak peek at some of the art developed last summer in a residency at the Technoculture Art and Games lab at Concordia University with lead 3D artist Alex Lee, AI designer Sofian Audry, art assistant Marlon Kroll, and research assistant Roxanne Baril-Bédard. Among holographic pop stars who may or may not have their own consciousness to begin with, the project includes rhetorical analysis of post-9/11 counterterrorist video games, reality television, startup culture, and self-help manuals for improving one’s emotional state.

I am implementing the biosensor control system this winter and plan to finalize the game’s art, music, and sound this summer for a launch toward the end of 2017 in an exhibition at Trinity Square Video in Toronto.


In the future, weapons of war possess advanced AI systems that guarantee successful automated combat on behalf of the soldiers wielding them. The military still trains its soldiers in case of equipment failure, but at this point, fighters function more as passive operators. The terrorist threat has nothing comparable in its ranks, and our systems are swift and deadly. Our soldiers manning the machines have never before witnessed violence or devastation at this scale: the largest threat to the soldiers defending our nation’s values today is Post-Traumatic Stress Disorder.

To address this unfortunate state of affairs, the military opened a startup fund to the public, seeking to resolve the issue through technological innovation. Significant scholarships and research funding were provided for researchers willing to devote time to mitigating the psychological crisis. A risky but intriguing proof of concept was eventually presented: a revolutionary form of entertainment for the troops as they fought the terrorist threat.

Yowane Haku became the face of this entertainment: a mobile holographic pop star engineered specifically for psychological distraction on the battlefield.  

The world’s most talented engineers, design consultants, and pop songwriting teams were assembled to endow Haku with every aesthetic and technical element needed to impress not only the troops but the world with her next-generation technology. However, the initial test run of this mobile holographic pop medium in combat trials was… a failure.

On the battlefield, Haku’s perfect body glowed faintly amongst the dust and screams, bullets and explosions passing through her ineffectually, her dance moves precise, her vocalizations on point. But ultimately her pop performance lacked resonance with the battle. Instead of being emboldened by this new entertainment, which was intended to distract them from their gruesome tasks or inspire them in carrying them out, the soldiers found their adverse psychological symptoms… flourishing. Some of the men went mad, laughing maniacally in tune with the holographic presence smiling sweetly at them. It was only due to the superiority of our AI weaponry and automated drone operation that the morally corrupt foreign threat, with its violent and technologically crude methods, was stopped that day. The minds of our soldiers were lost.

Months later, a young pool of startup professionals offered another solution. This vocal minority of engineers… though others called them crazy… had a hunch. For the hologram pop star to “work,” her systems needed access to pure emotion, a human element linked to the trauma of the human soldiers. But it was not clear who, or what, could best provide this emotional link… or what amount of embodied “disruption” it might entail…

This enthusiastically crowdfunded group of millennials completed their groundbreaking research without the strings of ethics funding or institutional control. Human emotion and consciousness now flow directly to Haku via experimental trials in VR technology. Haku rises again on the battlefront.

Simultaneously, a new reality television show has been born of these first human trials. The star of this reality show could be… you.

Could you be the next American Sweetheart?  Do you have what it takes to provide 110% Best Emotional Performance?  Join us through advanced VR technologies, Live and Direct on the battlefield, to find out if you could be fit to man the ultimate weapon of war: Our Next Holographic Idol.

This project is supported by the Canada Council for the Arts and Trinity Square Video’s AMD VR lab.

BioSolo

Using the BioSynth, I improvised a set for my breath/voice and my sonified heartbeat and sweat release at No Hay Banda, on an evening that also featured the very interesting work of composer Vinko Globokar. The improvisation is very sparse: the goal is to exploit interesting rhythmic moments between heavy breath-song and the heartbeat, all while exploring the limits of respiratory activity and observing its effects on my physiology.

Photography: Wren Noble

BioSolo was first performed at No Hay Banda series in Montreal at La Sala Rossa, organized by Daniel Àñez and Noam Bierstone.

Echo Grey

Echo Grey is a composition for four voices, feedback musical instruments, and tape part (which features the sounds of a broken image file).  World premiere at Vancouver New Music with Andrea Young, Marina Hasselberg, Sharon Chohi Kim, Micaela Tobin, Michael Day, Braden Diotte, and Erin Gee in November 2016. It has also been performed at Open Space Gallery (Victoria), and Neworks (Calgary).

The mythological character Echo exists only as a shade, a reflection or bounce. Moving between word and utterance, Echo’s mythological voice exceeds the signal itself and speaks to a deeper engagement with materiality. In Echo Grey, I composed a series of vocal patterns that emerge directly from breath as raw material, the movement of intake and exhalation made audible through mechanistic patterns that are impossible to perform perfectly. The choir’s collective attempt at mechanistically engaging with an impossible repetition eventually negates the signal: all that is left is the lungs and vocal vibrations of the individual who gasps, cries in defeat, and whoops in ecstasy. These human voices are punctuated by the feedback of microphones and amplified instruments, and by a tape track composed through process: a bouncing of data back and forth between visual and audio software that eventually results in nothing but glitched statements. This tape track is analogous to the squealing proximity of sender to receiver in the scored feedback parts, which is in turn analogous to the back and forth of the singers’ breath as they perform. The colour grey in the work’s title is inspired by the back-and-forth motion of a 2HB pencil stroking endlessly across an empty pad of paper.


Song of Seven: Biochoir

In this song, young performers contemplate an emotional time in their lives and recount the memory as an improvised vocal solo. The choir is instructed to enter a meditative state during these emotional solos, deeply listening to the tale and empathizing with the soloist, using imagination to recreate the scene. Choir members are attached to a musical instrument I call the BioSynth, a small synthesizer that sonifies each member’s heartbeat and sweat release as pre-programmed tones. Sweat release, often acknowledged as a robust measure of emotional engagement, is signaled by overtones that appear and reappear over a drone; meanwhile, each chorister’s heartbeat is sounded according to blood flow, providing light percussion.

The musical score combines traditional music notation with vocal games and rhythms determined not by the conductor or score but by beatings of the heart and bursts of sweat. Discreet flashing lights on the synthesizer boxes in front of the choristers allow the singers to discern the rhythms and patterns of their hearts and sweat glands, permitting compositions to incorporate the rhythms of the body into the final score as markers that trigger sonic events.
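As a rough illustration of the mapping described above — heartbeats as light percussion, sweat bursts as overtones over a drone — here is a minimal sketch. This is not the BioSynth firmware or score logic; the drone frequency, threshold, and event names are all hypothetical.

```python
# Hypothetical sketch of the BioSynth mapping: heartbeats become
# percussive triggers, and bursts of sweat (rises in galvanic skin
# response, GSR) add overtones above a drone. Values are illustrative.

DRONE_HZ = 110.0          # base drone frequency (assumption)
GSR_BURST_DELTA = 0.05    # rise in skin conductance counted as a "burst"

def sonify(samples):
    """Map a list of (heartbeat_pulse, gsr_level) pairs to sound events."""
    events = []
    overtone = 1
    prev_gsr = samples[0][1] if samples else 0.0
    for pulse, gsr in samples:
        if pulse:  # each detected heartbeat -> light percussion
            events.append(("percussion", "heartbeat"))
        if gsr - prev_gsr > GSR_BURST_DELTA:  # sweat burst -> next overtone
            overtone += 1
            events.append(("overtone", DRONE_HZ * overtone))
        prev_gsr = gsr
    return events
```

A composition could then treat the emitted events exactly as the text describes: as markers in the score that trigger sonic events.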

This choral composition was workshopped over a one-week residency at the LIVELab (McMaster University) with selected members of the Hamilton Children’s Choir, and facilitated by Hamilton Artists Inc. with support from the Canada Council for the Arts.

For more information

Hamilton Children's Choir
Daniel Àñez (Spanish biography)
Hamilton Artists Inc.
LIVELab
Canada Council for the Arts

Piano accompanist: Daniel Àñez
Hardware design: Martin Peach
Software design: Nicholas Asch, Patrice Coulombe, Erin Gee

Erin Gee - Larynx Series

Larynx Series

(2015)

inkjet prints on acid-free paper

34″x 44″ each

These vector images are derived from endoscopic footage of a human larynx. Within the images I discovered what looked like abstract musical symbols in the margins. These silent songs of the computer-rendered throat have also been transformed into choral songs for four human voices, premiered at the Dunlop Art Gallery, Saskatchewan, in 2015.

Erin Gee - Swarming Emotional Pianos

Swarming Emotional Pianos

(2012 – ongoing)

Aluminium tubes, servo motors, custom mallets, Arduino-based electronics, iCreate platforms

Approximately 27” x 12” x 12” each

Custom biosensors collect heart rate and signal amplitude, respiration amplitude and rate, and galvanic skin response (sweat).

Biodata collection software and affective-data-responsive algorithmic music software built in Max/MSP.

A cybernetic musical performance work that bridges robotics and emotion to create biologically harmonic chamber music. Swarming Emotional Pianos features a set of mobile robots, each housing a bell instrument and lighting components. The music these robots play is determined by a human subject’s physiological responses to emotional states, reflecting affective computing research. These physiological markers include breathing, heart rate, sweat-gland activity, and blood pressure. Research is ongoing into integrating skin-sensitive neural activity into the system via microneurography.
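The pipeline — physiological features in, bell-strike decisions out — might be sketched as follows. The real system is built in Max/MSP; this function, its parameter ranges, and the eight-bell assumption are hypothetical, offered only to make the mapping concrete.

```python
# Illustrative sketch of a biodata-to-music mapping of the kind
# described above. Not the actual Max/MSP software; the mapping
# and its thresholds are assumptions.

def map_biodata_to_note(heart_rate_bpm, breath_rate_bpm, gsr_norm):
    """Turn one frame of physiological data into a bell-strike decision.

    gsr_norm: galvanic skin response normalized to 0..1 (arousal proxy).
    Returns (strike_interval_sec, pitch_index) for a robot's bell.
    """
    # Faster heart rate -> denser strikes (one strike per beat here).
    strike_interval = 60.0 / max(heart_rate_bpm, 1)
    # Higher arousal -> higher bell in a hypothetical 8-bell set;
    # rapid breathing nudges the pitch up one step.
    pitch_index = min(7, int(gsr_norm * 6) + (1 if breath_rate_bpm > 20 else 0))
    return strike_interval, pitch_index
```

In a live setting, each frame of sensor data would be streamed wirelessly (e.g. over a protocol such as OSC) to a robot, which schedules its next strike from the returned interval and pitch.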

My final goal is a live performance in which actors are hooked up to biosensors and their emotional data is wirelessly streamed to the robotic musical instruments; this will require extensive biofeedback testing. I maintain an active dialogue with microneurographer and neurophysiologist Vaughan Macefield in anticipation of networked, telematic performances involving tiny needles inserted directly into nerves that reflect emotional arousal. The use of microelectrode needles inserted directly into the nerves of awake human performers to pick up direct electrical neural activity is a unique technical component of this project.

The goal in creating this work is to illuminate and explore the complex relationships between body and mind in human emotions. Emotional-physical outputs are extended through robotic performers as human actors focus on their internal states, and in fact activate their emotions mechanistically, as a means of creating change in their body, thus instrumentalizing emotion.

Credits

Thank you to the following for your contributions:

  • Martin Peach (my robot teacher) – Sébastien Roy (lighting circuitry) – Peter van Haaften (tools for algorithmic composition in Max/MSP) – Grégory Perrin (Electronics Assistant)
  • Matt Risk, Tristan Stevans, Simone Pitot, and Jason Leith for their hours of dedicated studio help
  • Concordia University, the MARCS Institute at the University of Western Sydney, Innovations en Concert Montréal, Conseil des Arts de Montréal, Thought Technology, and AD Instruments for their support.

Swarming Emotional Pianos (2012-2014) Machine demonstration March 2014 – Eastern Bloc Lab Residency, Montréal

Erin Gee - Vocaloid Gig At Nocturne (X + 1)

Gig Vocaloid

A performative, distributed video-text band from a dystopic future where the human voice is lost and pop music reigns supreme. The virtual voice is a key component of the synthesized pop star. Dancing, costumed performers carry tablets that display the human larynx and song lyrics as they dance in sync.

GIG VOCALOID is a virtual pop band that had its first performance at the Musée d’art contemporain de Montréal in February 2015 at X + 1, an evening of Internet-inspired art.

The project is inspired by virtual pop stars such as Hatsune Miku, who exist equally as distributed visual-media avatars (holograms, merchandise) and as digital software tools for public, fan-based synthesized vocal creation. GIG VOCALOID is also inspired by boy and girl pop bands, in which individual voices and musicality are often superseded by a pop “character.” This is especially true of the Japanese pop group AKB48, whose 48 female members are voted on by the public for the right to solo singing and “leadership” within the group.

In this pop music context, celebrity character, fashion and visual appeal is more important than the human singing voice itself, which is often replaced by synthesizers and pitch correction. GIG VOCALOID invokes a fantasy posthumanist future where the human voice is lost, subjectivity is dead, and everyone is celebrating.

Externalizing the human voice beyond the preciousness of the human body, the human larynx (typically a hidden, interior aspect of vocal performance) is displayed prominently on tablets. “Lyrics” to the song flash aleatorically through these videos, which enable human performers to become the support for a digital artwork. GIG VOCALOID re-localizes the voice beyond the borders of the flesh body in an infectious avatar-dream.

GIG VOCALOID thrives through multiplicity, otherness, and inauthentic copies, so the band exists through five anonymous core members whose identities are not essential.

GIG VOCALOID consists of five masked characters: Cheerful (the leader, they like the colour red and to make a statement) // Timidity (shy and graceful) // Twinkle (cute, optimistic) // Lolo (wild and crazy, the rebel) // and Grace (sophisticated, stoic, strong). Each of these masks is without a fixed gender.


Erin Gee - 7 Nights of Unspeakable Truth at Nuit Blanche Toronto 2013

7 Nights of Unspeakable Truth

(2013)

7-channel audio installation, woven blankets, text work

8 hours duration

“7 Nights of Unspeakable Truth is a long-form composition that consists of documentation of dusk-till-dawn searches for number stations on shortwave radio frequencies. Arranged in order, from day one to day seven, the installation allows one to physically walk through seven evenings of shortwave, synchronized in their respective times, in physical space. This spatialization of each night allows listeners to make comparisons, appreciating patterns demonstrated in Gee’s search as she consults research and online communities to tune into mysterious, unexplained broadcasts that consist only of numbers, tones, and codes.”

This body of work is informed by my fascination with these principles of secrecy, organization, and communication, coupled with the nocturnality of a solitary listener who connects with others via online chat to share an obscure passion. It is a search for disembodied voices in strange technotongues. The patterns of my searching during 7 Nights of Unspeakable Truth are woven directly into blankets; in text artworks I weave together my research into radio technologies, music history, and ancient documents on numbers in order to re-present an encrypted mystery. The 7-channel audio is composed of this listening and searching: seven nights compressed into one enfolded 8-hour experience.

Anim.OS

(2012)

Generative software choir installation in collaboration with Oliver Bown

Inspired by excerpts of Elizabeth Grosz’s book “Architecture from the Outside”, I made recordings of myself singing text that made reference to insideness, outsideness, and flexible structures. These recordings were arranged by software designer and algorithmic composer Oliver Bown into networked choral software which, when installed in a gallery, performs my music on my behalf.

Anim.OS premiered at Tin Sheds Gallery, Sydney. The installation opened with a live performance work featuring Gee (vocals), as well as Laura Altman, Monica Brooks, and Sam Pettigrew (accordion, clarinet, double bass improvisation), with the software choir manipulated live by its creator, Oliver Bown.

Anim.OS is a networked computer choir developed by Oliver Bown (Sydney) and Erin Gee (Montreal) in 2012. Videography and sound recording by Shane Turner (Montreal).

This is documentation of one of the first tests for improvisation and control of the choir at the University of Sydney.

The installation work premiered at Tin Sheds Gallery (Sydney) in August 2012, and was featured in a performance work scored by Erin Gee for Anim.OS choir and three musicians.

Erin Gee and Stelarc - Orpheux Larynx

Orpheux Larynx

(2011)

Vocal work for three artificial voices and soprano, feat. Stelarc.

Music by Erin Gee, text by Margaret Atwood.

I made Orpheux Larynx while in residence at the MARCS Auditory Laboratories at the University of Western Sydney, Australia, in the summer of 2011. I was invited by Stelarc to create a performance work with an intriguing device he was developing there called the Prosthetic Head: a computerized conversational agent that responds to keyboard-based chat input with an 8-bit baritone voice. I worked from the idea of creating a choir of Stelarcs, and developed music for three voices by digitally manipulating the avatar’s voice. Eventually Stelarc’s avatar voices were given the bodies of three robots: a mechanical arm, a modified Segway, and a commercially available device called a PPLbot. I sang along with this avatar-choir while carrying my own silent avatar with me on a digital screen.

It is said that after Orpheus’ head was ripped from his body, he continued singing as it floated down a river. He was rescued by two nymphs, who lifted his head to the heavens to become a star. In this performance, all of the characters (Stelarc’s voice, my voice, Orpheus, Eurydice, the nymphs) are blended into intersubjective robotic shells that speak and sing on our behalf. The flexibility of the avatar allows a plurality of voices to emerge from relatively few physical bodies, blending past subjects into present and possible future subjects. Orpheus is tripled to become a multi-headed Orpheux: simultaneously disembodied head, humanoid nymph, deceased Eurydice. The meaning of the work lies in the dissonant proximity between past and present characters, as well as in my own identity inhabiting the bodies and voices of Stelarc’s prosthetic self.

Credits

Music, video and performance by Erin Gee. Lyrics “Orpheus (1)” and “Orpheus (2)” by Margaret Atwood. Robotics by Damith Herath. Technical Support by Zhenzhi Zhang (MARCs Robotics Lab, University of Western Sydney). Choreography coaching by Staci Parlato-Harris.

Special thanks to Stelarc and Garth Paine for their support in the creation of the project.

This research project is supported by the Social Sciences and Humanities Research Council of Canada and MARCS Auditory Labs at the University of Western Sydney. The Thinking Head project is funded by the Australian Research Council and the National Health and Medical Research Council.

Music: Orpheux Larynx © 2011. Lyrics are the poems “Orpheus (1)” and “Orpheus (2)” by Margaret Atwood, from the poetry collection Selected Poems, 1966–1984, currently published by Oxford University Press, © 1990 by Margaret Atwood. In the United States, the poems appear in Selected Poems II, 1976–1986, currently published by Houghton Mifflin, © 1987 by Margaret Atwood. In the UK, these poems appear in Eating Fire: Selected Poetry 1965–1995, currently published by Virago Press, © 1998 by Margaret Atwood. All rights reserved.