Portfolio

AFFECT FLOW

AFFECT FLOW (2022)
Performance at MUTEK Montreal 2023. Photography by Vivien Gaumand.

2022

AFFECT FLOW is a music performance work of approximately 30 minutes that initiates listeners into a state of “non-naturalist emotion”: emotional manufacture as a technology for survival or pleasure. It hybridizes electroacoustic music, live-spoken verbal suggestion, an ensemble of live biofeedback performed on hardware synthesizers, and song.

In AFFECT FLOW I use psychological hacks borrowed from method acting and clinical psychology to move beyond “natural” emotion, playing with biofeedback music paradigms and group participation through folk hypnosis, verbal suggestion, roleplay, song, and textural sounds.

These performance techniques, which I call “wetware,” challenge the authoritarian aura of quantification, transforming biofeedback into a feminist space of posthumanist connection and expression.

The biofeedback performers (up to 10) in AFFECT FLOW are volunteers referred to as surrogates who meet me a half hour before the performance. After a brief musical interlude, I extend an invitation for the audience to join us in guided visualization and hypnosis led by my voice. Each surrogate operates a BioSynth, a musical instrument of my design that responds to physiological markers like heart rate, breathing, and skin conductance as control parameters for electronic sound. The mechanics of the BioSynths are explained clearly, allowing listeners to perceive the shifting mood in the room through the bodies of the performers. This collaborative interplay of bodies gives rise to affect as an ecological relation, transcending individual subjectivity.

A lightbulb illuminates at the feet of each performer when their signals are amplified. Because I control the audio output of each body via a mixing board, I can highlight solos, duets, trios, and ensemble moments live.
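As a sketch of this mixing logic only (not the actual BioSynth signal chain; all names and gain values are hypothetical), the solo/duet/ensemble selection can be pictured as a gain mask over the performers' audio:

```python
# Illustrative sketch: a mixer that boosts a chosen subset of performers,
# mirroring how solos, duets, and ensemble moments are highlighted live.
# Names and gains are hypothetical, not the actual performance setup.

def mix(signals, featured, solo_gain=1.0, background_gain=0.1):
    """Sum per-performer audio frames, amplifying the featured bodies.

    signals  -- dict of performer name -> list of audio samples
    featured -- set of performer names currently highlighted
    """
    n = len(next(iter(signals.values())))
    out = [0.0] * n
    for name, frames in signals.items():
        gain = solo_gain if name in featured else background_gain
        for i, s in enumerate(frames):
            out[i] += gain * s
    return out

# A "duet": only two of three surrogates are amplified.
signals = {"a": [1.0, 1.0], "b": [1.0, 1.0], "c": [1.0, 1.0]}
duet = mix(signals, featured={"a", "b"})
```

Each output sample here is 1.0 + 1.0 + 0.1 = 2.1: the third body remains quietly present rather than muted, which is one plausible way to keep the ensemble audible behind a duet.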

Credits

Affect Flow (2022)
Music composition and performance by Erin Gee.

Dramaturgy and text by Jena McLean. Poetry by Andrew C. Wenaus.

BioSynth affective hardware synthesizers are an open-source project by Erin Gee. Programming for this iteration by Etienne Montenegro with sonification programming by Erin Gee. PCB design by Grégory Perrin.

Click here for the BioSynth GitHub.

Click here for the Tech Rider.

Performances:

International Symposium of Electronic Art. CCCB Barcelona, ES, May 2022.

Society for Art and Technology, Montreal CA, July 2022.

Vancouver New Music – Orpheum Annex, Vancouver CA, November 2022.

Electric Eclectics Festival, Meaford ON, CA, August 2023.

MUTEK Montreal, CA, August 2023.

AFFECT FLOW (2022) at Vancouver New Music, Vancouver.

We as Waves

We as Waves (2021)
Premiere performance at Akousma Festival, Montreal.
Photography by Caroline Campeau.

2020

ASMRtronica is an ongoing project developed in the artist’s home studio during the novel coronavirus pandemic: a manifestation of a desire for intimacy in sound when touch was not possible. It is a style of music applied across several works as Gee develops her own vocabularies of psychosomatic performance.

Through ASMRtronica, Gee brings to life a combination of electroacoustic music and the sounds of Autonomous Sensory Meridian Response (ASMR) videos: clicks, whispers, soft spoken voice, taps, and hand gestures inspired by hypnosis, tactility, intimacy, and verbal suggestion. Through ongoing development of this genre, she explores the sonic limits of the sensorial propositions of ASMR, journeying into embodied and unconscious feedback loops in sound.

Credits

We as Waves (2020)
Released August 2021 by Erin Gee.
Music composition and performance by Erin Gee. Text by Jena McLean. Videography by Michel de Silva.

To the Farther (2020)
Released September 8, 2020 by Erin Gee.
Music composition and art by Erin Gee.

We as Waves

We as Waves (2020) is a collaboration between myself and queer playwright Jena McLean. The text in this work is inspired by an essay by feminist theorist of electronic music Tara Rodgers. What does it mean to enter into an affective relationship of touch with sound? The work embodies a dark narrative of sonic becoming aided by hypnosis and physiological relationship to sound and voice, closed by the following quotes from queer theologian Catherine Keller:

“As the wave rolls into realization, it may with an uncomfortable passion
fold its relations into the future: the relations, the waves of our possibility,
comprise the real potentiality from which we emerge…”

“We are drops of an oceanic impersonality. We arch like waves,
like porpoises.”

We as Waves (2020)

To the Farther

In September 2020 I launched To the Farther as part of MUTEK Montreal’s online exhibition Distant Arcades. It is the first in a series of musical works that explore the limits of tactile whispers, proximity, and hypnotic language through ASMR and electronic sound.

To the Farther is the title of the first iteration: a fresh take on texture, form, and the plasticity of reality under digital transformation, it is also a “remix” of my ASMR recordings made in Machine Unlearning (2020).

To the Farther (2020)

Audio Placebo Plaza

Audio Placebo Plaza: Montreal Edition

Poster for Audio Placebo Plaza: Montreal Edition (2021)

2021

In June 2021 the trio transformed a former perfume shop in the St Hubert Plaza of Montreal into a pop-up radio station, sensory room, therapist’s office, and audio production studio, uniting these spaces through the aesthetics of a sandwich shop or cafe to offer customizable audio placebo “specials” and “combos” to the public.

Founded upon principles of feminism, socialism, and audio production excellence, Audio Placebo Plaza invited everyday people to book appointments with artists to discuss how an audio placebo could help improve their lives. These appointments were entirely focused on the individual and were themselves part of the process. Common topics of discussion included increasing productivity, self-esteem, self-care, social interactivity, brain hacking, mitigating insomnia, and pain management, but also one’s aural preferences, sensitivities, and curiosities. Intake sessions were conducted in a blended telematic/in-person structure to determine one’s familiarity and comfort levels with a variety of psychosomatic audio techniques including but not limited to soundscapes, binaural beats, simulated social interactions, positive affirmations, drone, participatory vocalization, ASMR, guided meditation, and deep listening.

After each consultation, team members met to discuss the participant’s case, fulfill their “prescription,” and divide the labor amongst the three creators. The collaborations were non-hierarchical, adaptive, and simultaneous: one might work on up to four projects at a time, or trade tasks depending on one’s backlog of labor. Labor was divided into recording sounds, conducting intake sessions, writing scripts, performing spoken or sung content, writing music, editing and mixing audio, cleaning and maintaining the shared spaces, and communicating with visitors or walk-ins.

Audio Placebo Plaza Radio was broadcast through a pirate radio transmitter as well as an internet radio station. We broadcast completed placebos, technical advice and performance practices shared during informal critiques, work sessions in progress through the DAW, and sometimes informal chats with visitors. Intake sessions were also broadcast (with the consent of visitors).

Through Audio Placebo Plaza, we explore and develop methods for sound and music that propose emotional labor, listening, collaboration, and “music as repair” (see Suzanne Cusick, 2008) as key elements shaping the sonic-social encounter between artists and the public.

Can placebos help?
Does sound have the power to process complex emotions?
Can music give you what you need?
Is this even music?

Credits

Audio Placebo Plaza is a community sound art project conceived by Julia E Dyck, Erin Gee and Vivian Li.

Graphic design by Sultana Bambino.

Gallery

Photo Credits
Audio Placebo Plaza Poster

Presence

Presence (2020)
Screen capture from performance at Network Music Festival 2020. Online.

2020

In Presence, artists Erin Gee and Jen Kutler reconfigure voice and touch across the internet through a haptic/physical feedback loop, using affective and physical telematics to structure an immersive electronic soundscape through physiological response.

(March 2020) I was quarantining intensely during the coronavirus pandemic when Jen Kutler reached out to ask if I would like to collaborate on a new work that simulates presence and attention over the network. We had never met in real life, but we started talking on the internet every day. We eventually built a musical structure implicating live webcam and endoscopic camera footage, biosensor data, sounds rearranged by biosensor data, ASMR roleplay, and touch stimulation devices delivering small shocks to each artist. We first developed this work through a month-long intensive online residency at SAW Video, in conversation with many amazing artists, curators, and creative people.

Presence is a telematic music composition for two bodies created during the spring of 2020, at the height of confinement and social distancing during the COVID-19 pandemic in Montreal and New York state. The work has been performed for online audiences by both artists from home (Montreal/New York), with Gee and Kutler each attached to biosensors that collect the unconscious behaviours of their autonomic nervous systems, as well as touch stimulation units that make this data tactile for each artist through transcutaneous nerve stimulation.

Audiences are invited to listen attentively to this networked session for physicalized affect through the sonification of each artist’s biodata, which also slowly triggers an ASMR roleplay that is actively reconfigured by the bodily reactions of each artist. Music and transcutaneous electronic nerve stimulation are triggered by listening bodies; these bodies are in turn triggered by the sounds and electric pulses. Everything in the system is unconscious, triggering and triggered through networked delays, but present. Through this musical intervention the artists invite the listeners to imagine the experience and implicate their own bodies in the networked transmission, to witness the artists touching the borders of themselves and their physical spaces while in isolation.
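The actual network plumbing lives in the wspd and biodata repositories credited below. Purely as an illustration, here is a minimal sketch of one common way to route a biosensor reading into puredata: packing it as an OSC message (the address `/bio/heart` is hypothetical):

```python
import struct

def osc_message(address, value):
    """Pack a single float into a minimal OSC message (address + ",f" tag).

    OSC strings are NUL-terminated and padded to 4-byte boundaries;
    OSC floats are big-endian 32-bit.
    """
    def pad(b):
        # always append 1-4 NULs so the length is a multiple of 4
        return b + b"\x00" * (4 - len(b) % 4)
    return pad(address.encode()) + pad(b",f") + struct.pack(">f", value)

# e.g. a heart-rate reading destined for a Pd patch listening on UDP
msg = osc_message("/bio/heart", 72.0)
```

A sketch like this would be sent over a UDP socket to the port a Pd `netreceive`/`oscparse` chain is listening on; the performance itself used web sockets (wspd) rather than raw UDP.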

Credits

web socket for puredata (wspd) created for Presence by Michael Palumbo. Available on GitHub here.

Biodata circuitry and library created by Erin Gee. Available on GitHub here.

Electronic touch stimulation device for MIDI created by Jen Kutler. Available on GitHub here.

Performance built with a combination of puredata (data routing), Processing (biodata generated visuals), Ableton Live (sounds) and OBS (live telematics) by Erin Gee and Jen Kutler.

Presence was created in part with the support from SAW Video artist-run centre, Canada.

Exhibition/Performance history

SAW Video “Stay at Home” Residency March-April 2020

Network Music Festival July 17 2020

Fonderie Darling – As part of Allegorical Circuits for Human Software curated by Laurie Cotton Pigeon. August 13 2020

Video

Presence (2020)
Performance by Erin Gee and Jen Kutler at Network Music Festival.

Gallery

Machine Unlearning

Vision calibration from Machine Unlearning (2020).
Photography by Elody Libe. Image courtesy of the artist.

2020

In Machine Unlearning, the artist greets the viewer and slowly offers them a unique neural conditioning “treatment”: reproducing, through whispering, the unraveling outputs of an LSTM algorithm as it “unlearns,” moving backwards in time through its epochs of training.

This aural treatment is couched in a first-person roleplay scenario that grounds the viewer through a series of simple audiovisual tests. At no point is the neural network technology “seen” – it is instead performed by a human interlocutor, translated into affective vocality and whispered text. The algorithm was created by media artist Sofian Audry and trained on the text of Emily Brontë’s novel Wuthering Heights (1847). The novel was chosen in part for its richly poetic syntax, but also for its feminine vocality and its themes of love and intergenerational trauma.

Machine Unlearning is a novel combination of neural network technologies and the popular internet genre of Autonomous Sensory Meridian Response (ASMR). ASMR is a social media genre that developed largely through massive metrics – upvotes, clicks, comments, subscribes, and likes – in response to audiovisual stimuli that create feelings of mild euphoria, relaxation, and pleasure. ASMR fans seek out specific video content that causes the physiological reaction of “tingles”: tingling sensations across the skin, a mild body high, or simply a means of falling asleep. Gee considers ASMR a form of psychosomatic body hacking.

By combining machine learning with ASMR, Gee draws parallels between cutting-edge autonomous/non-conscious algorithms and the autonomous/unconscious functions of the human body. Just as ASMRtists use specific sounds and visual patterns in their videos to “trigger” physical reactions in the viewer, machine learning algorithms respond unconsciously to patterns perceived through limited senses in order to produce learning (and unlearning) results.

The artist’s emphasis on whispering the textual outputs of the algorithm as it slowly “unlearns” allows the listener to grasp the materiality of machine learning processes at a human level, but also at a subconscious one: allowing one’s body to be mildly and charmingly “hacked” through soft and gentle play.
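Sofian Audry’s actual LSTM is not reproduced here. Purely to illustrate the performance’s reverse-chronological structure, the following toy simulation stands in for checkpoints saved at each training epoch: it degrades a source text more and more as it steps backwards from the final epoch to the first, the way the whispered outputs dissolve from Brontë-like prose into noise:

```python
import random

def degrade(text, competence, rng):
    """Replace characters with random ones as competence drops toward 0,
    a stand-in for sampling from an earlier, less-trained checkpoint."""
    letters = "abcdefghijklmnopqrstuvwxyz "
    return "".join(c if rng.random() < competence else rng.choice(letters)
                   for c in text)

def unlearning_sequence(text, epochs, rng=None):
    """Yield one sample per epoch, from fully trained back to untrained,
    mimicking a performance that moves backwards through training."""
    rng = rng or random.Random(0)
    for epoch in range(epochs, 0, -1):  # newest checkpoint first
        yield degrade(text, epoch / epochs, rng)

samples = list(unlearning_sequence("wuthering heights", epochs=5))
```

The first sample (competence 1.0) is faithful to the source; the last is mostly noise, tracing the same arc the whispered performance makes audible.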

The use of the word “intelligence” in the metaphor of AI foregrounds higher functions of consciousness that algorithms do not possess. While algorithms have not meaningfully achieved humanistic consciousness to date, today’s algorithms act autonomously on sensory information, processing data from their environment in unconscious, automatic ways. The human brain also responds unconsciously and automatically to sensory data in its environment: even if you are not conscious of how hot a stove is, your hand will pull away automatically when you touch it. These unconscious, physiological actions in the sensory realm point to an area of common experience between algorithms and humans. For more on these ideas, see postmodern literary critic N. Katherine Hayles’s 2017 book Unthought: The Power of the Cognitive Nonconscious. In this way I wonder if the expression “autonomous intelligence” makes more sense than “artificial intelligence”; however, like posthumanist feminist Rosi Braidotti, I am deeply suspicious of the humanist pride our species takes in the word “intelligence” as something that confers special status and justifies domination over other forms of life on earth.

Credits

Photography and videography by Elody Libe.

Production Support: Machine Unlearning video installation was produced at Perte de Signal with the support of the MacKenzie Art Gallery for the exhibition To the Sooe (2020) curated by Tak Pham.

The roleplay performance was developed during my artistic residency at Locus Sonus, École Supérieure d’art d’Aix-en-Provence, and Laboratoire PRISM.

Custom LSTM Algorithm created by media artist Sofian Audry

Video

Machine Unlearning (2020)
Videography by Elody Libe

Gallery

This work was first developed as a performance that debuted at Cluster Festival, Winnipeg, in 2019. During the live performance, each audience member dons a pair of wireless headphones and sees the on-camera ASMR “result” of the performance simultaneously with my “backstage” manipulation of props and light in real time.

Pinch and Soothe

Pinch and Soothe (2019)
Custom biofeedback instruments, microcontroller-based hardware, plexiglass.

2019

An interactive biofeedback sculpture for two: the sounds of the bodies are structured through this score for giving/receiving physical pain and healing.

 

Pinch the hand of the other three times.
Soothe the hand of the other three times.

 

Each action can be performed for a short duration or a long duration, lightly or with force, or intermixed. Listen to the sounds of your bodies, how they shift and react to these interactions. The sounds of your body are in your left ear; the sounds of the other’s body are in your right.

The score moves biofeedback beyond the interface, instead scoring interaction, social experience, and relationships across bodies.

The interface is purely hardware (bodies with limited memory), so no data is retained in the device. The biosensors are non-invasive heart rate, skin conductance, and respiration sensors. The sensors can be exhibited alongside sterile pads if the public chooses to clean them before use.

Each body has its own tone associated with its heart, breath, and skin conductance.

Respiration is mapped to a clear pitch that fades in and out with the breath through the hardware synthesizer.

Heartbeats are heard as low pulses.

Skin conductance is a high, ethereal tone. Because skin conductance is not normally perceived or well understood, think of it as a volume knob for emotion, whether positive or negative, accidental or unconscious. It is particularly active during emotions of delight or curiosity, but it also tends to spike and fall in moments of non-specific emotion, like a beacon for change.
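As an illustrative sketch of this three-voice mapping (the frequencies and names are hypothetical, not the instrument’s actual tuning), one frame of biosignals can be pictured as three oscillator settings:

```python
def voice_parameters(breath, beat, conductance):
    """Map one frame of normalized biosignals (0.0-1.0) to three
    (frequency_hz, amplitude) oscillator pairs, following the mapping
    described above. Frequencies are illustrative only.
    """
    return {
        "respiration": (440.0, breath),               # clear pitch fading with breath
        "heart":       (60.0, 1.0 if beat else 0.0),  # low pulse on each beat
        "conductance": (2000.0, conductance),         # high, ethereal tone
    }

# mid-breath, on a heartbeat, with elevated skin conductance
frame = voice_parameters(breath=0.5, beat=True, conductance=0.8)
```

In the sculpture each participant would hear two such voices at once: their own frame in the left ear, the other body’s in the right.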

Sounds are emitted directly into the headphones. The only auditory witnesses are the performers themselves.

The device itself is totally plug and play. The hardware is intentionally fragile, with non-soldered wires visible; the circuitry is wired so that the power sources are connected and co-dependent: if one half of the circuitry fails, so does the other.

Please note that documentation of this work was partially obtained through social media, as the pandemic made the possible window for capturing human interaction with the work limited.

Credits

PCB designed by Martin Peach.

Exhibition history

To the Sooe. MacKenzie Art Gallery (Regina, Canada) January 2020 – April 2020 (closed early due to the pandemic)

Cluster Festival. (Winnipeg, Canada) March 2019.

Algorithms that Matter. Lydgalleriet (Bergen, Norway). February 2019.

Gallery

Photo Credits
Erin Gee and Cameron Wiest

LAUGHING WEB DOT SPACE

Installation detail of laughingweb.space (2018) by Erin Gee.
Exhibition at Eastern Bloc, Montreal. Photo by Anna Iarovaia.

2018

An interactive website and virtual laugh-in for survivors of sexual violence.

The URL: https://laughingweb.space

This website enables survivors to record and listen to the sounds of their laughter, and through the magic of the internet, laugh together. Visitors of any gender that self-identify as survivors are invited to use the website’s interface to record their laughter and join in: no questions asked. Visitors can also listen to previously recorded laughter on loop.

Why laughter? Laughter is infectious, and borne of the air we still breathe. We laugh in joy. We laugh in bitterness. We laugh awkwardly. We laugh in relief. We laugh in anxiety. We laugh because it is helpful to laugh. We laugh because it might help someone else. Laughing is good for our health: soothing stress, strengthening the immune system, and easing pain. Through laughter, we proclaim ourselves as more complex than the traumatic memories that we live with. Our voices echo, and will reverberate in the homes, public places, and headphones of whoever visits.

Dedicated to Cheryl L’hirondelle

This project was commissioned by Eastern Bloc (Montreal) on the occasion of their 10th anniversary exhibition. For this exhibition, Eastern Bloc invited the exhibiting media artists to present work while thinking of linkages to Canadian media artists that inspired them when they were young. I’m extremely honored and grateful for the conversations that Cheryl L’hirondelle shared with me while I was developing this project.

When I was just beginning to dabble in media art in art school, the net-based artworks of Cheryl L’hirondelle demonstrated to me the power of combining art with sound and songwriting, community building, and other gestures of solidarity on the internet. Exposure to her work was meaningful to me – I was looking for examples of other women using their voices with technology. Skawennati is another great artist who was creating participatory web works in the late 90s and early 2000s – you can check out her project CyberPowWow here.

Credits

Graphic Design – Laura Lalonde
Backend Programming – Sofian Audry, Conan Lai, Ismail Negm
Frontend Programming – Koumbit

Special thank you to Kai-Cheng Thom, who with wisdom, grace, and passion guided me through many stages of this work’s development.

Exhibition History

October 3–23, 2018 – Eastern Bloc, Montreal. Curated by Eliane Ellbogen.

February 16, 2019 – The Feminist Art Project @ CAA Conference – Trianon Ballroom, Hilton NYC.

February 2019 – Her Environment @ Yards Gallery, Chicago. Curated by Chelsea Welch and Iryne Roh.

June 26 to August 11, 2019. SESI Arte Galeria, FILE festival, São Paulo, Brazil.

October 4-5, 2019. Video Presentation and exhibition at Sound::Gender::Feminism::Activism symposium, Tokyo. Click here to watch my video presentation

Links

Laughing Web Dot Space

Press

Fields, Noa/h. (2019). “Dangling Wires: Artists Examine Relationship with Technology in Entanglements.” Scapi Magazine (Chicago). https://scapimag.com/2019/02/05/dangling-wires-artists-examine-relationship-with-technology-in-entanglements/

Fournier, Lauren (2018). “Our Collective Nervous System.” Canadian Art. https://canadianart.ca/interviews/our-collective-nervous-system/

Berson, Amber (2018). “Amplification.” Canadian Art, October 23, 2018. https://canadianart.ca/reviews/amplification/

Gallery

Exhibition at Eastern Bloc, Montreal. Photos by Anna Iarovaia.

to the sooe

to the sooe (2018)
Sofian Audry and Erin Gee. Photography: Alexandre Saunier

2018

A 3D printed sound object that houses a human voice murmuring the words of a neural network trained by a deceased author.

to the sooe (SLS 3D printed object, electronics, laser-etched acrylic, audio, 2018) is the second piece in a body of work Erin Gee made in collaboration with artist Sofian Audry that explores the material and authorial agencies of a deceased author, an LSTM algorithm, and an ASMR performer.

The works in this series transmit the aesthetics of an AI “voice,” rendering its text outputs through the sounds of Gee’s softly spoken vocals and using a human body as a relatively low-tech filter for processes of machine automation. Other works in the series include of the soone (2018) and Machine Unlearning (2018–2019).

to the sooe is a sound object that features a binaural recording of Erin Gee’s voice as she re-articulates the murmurs of a machine learning algorithm learning to speak. Through this work, the artists re-embody the cognitive processes and creative voices of three agents (a deceased author, a deep learning neural net, and an ASMR performer) into a tangible device. These human and nonhuman agencies are materialized in the object through speaking and writing: a disembodied human voice, words etched onto a mirrored, acrylic surface, as well as code written into the device’s silicon memory.

The algorithmic process used in this work is a deep recurrent neural network known as “long short-term memory” (LSTM). The algorithm “reads” Emily Brontë’s Wuthering Heights character by character, familiarizing itself with the syntactical universe of the text. As it reads and re-reads the book, it attempts to mimic Brontë’s style within the constraints of its own artificial “body,” thereby finding its own alien voice.
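As a minimal sketch of what “reading character by character” means in practice, here is the kind of data such a model is trained on: a character vocabulary plus (context, next-character) pairs. A toy fragment stands in for the novel; this is not Audry’s implementation:

```python
def char_dataset(text, context=4):
    """Build a character vocabulary and (context, next-char) index pairs,
    the training data for a character-level model like an LSTM."""
    vocab = sorted(set(text))
    index = {ch: i for i, ch in enumerate(vocab)}
    pairs = [([index[c] for c in text[i:i + context]], index[text[i + context]])
             for i in range(len(text) - context)]
    return vocab, pairs

# toy stand-in for the novel's text
vocab, pairs = char_dataset("wuthering heights")
```

The model sees only these index sequences, never words: its “syntactical universe” is whatever statistical regularities it can extract from which character tends to follow which context, which is why its mimicry of Brontë reads as an alien voice.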

The reading of this AI-generated text by a human speaker allows the listener to experience simultaneously the neural network agent’s linguistic journey and the augmentation of this speech through vocalization techniques adapted from Autonomous Sensory Meridian Response (ASMR). ASMR involves the use of acoustic “triggers” such as gentle whispering or fingers scratching and tapping, in an attempt to induce tingling sensations and pleasurable auditory-tactile synaesthesia in the listener. Through these autonomous physiological experiences, the artists hope to reveal the autonomous nature of the listener’s own body, implicating the listener as an already-cyborgian part of the hybrid system in place.

Credits

Sofian Audry – neural network programming and training

Erin Gee – vocal performer, audio recording and editing, electronics

Grégory Perrin – 3D printing design and laser etching

Exhibition history

Taking Care – Hexagram Campus Exhibition @ Ars Electronica, Linz Sept 5-11 2018. Curated by Ana Kerekes.

Printemps Numérique – McCord Museum Montreal, May 29-June 3 2019. Curated by Erandy Vergara.

To the Sooe – MacKenzie Art Gallery, Regina January 26-April 26, 2020. Curated by Tak Pham.

Sounds

to the sooe (2018)

Gallery

of the soone

of the soone (2018) Print

2018

A disembodied voice invites the listener to partake in a speculative audio treatment that promises to awaken underdeveloped neural passageways through exposure to the non-human processes of neural network language acquisition.

In this work, media artists Erin Gee and Sofian Audry expose listeners to the architectures of an artificial intelligence algorithm through the sounds of an Autonomous Sensory Meridian Response (ASMR) roleplay. ASMR is a genre of audio and videomaking developed by internet aficionados interested in using specific everyday sounds (whispering, soft voice, crinkling and textured sounds) alongside verbal suggestion to “trigger” pleasant tingling reactions in the body of the listener. The artists combined these ASMR principles of sound with artificial intelligence to create a speculative neural conditioning treatment. In of the soone, the listener encounters a soft female voice that whispers a script written by a machine learning algorithm as it slowly loses its neural training and “forgets.” This combination of algorithmic text and ASMR connects the unconscious, automatic processes of artificial intelligence algorithms to the autonomous reactions of the human body to sound, using intimacy to “hack” into the subconscious of the human listener and recondition neural pathways.

Exhibition history

October 2020: Digital Cultures: Imagined Futures Audio Programme curated by Joseph Cutts. Adam Mickiewicz Institute, Warsaw, Poland

June 9 to August 19, 2018: Pendoran Vinci. Art and Artificial Intelligence Today  curated by Peggy Schoenegge and Tina Sauerländer. NRW Forum, Düsseldorf, Germany

January 2018: Her Environment @ TCC Gallery, Chicago

Sounds

of the soone (2018)

Gallery

of the soone – print. text 2018. Courtesy of artists.

Project H.E.A.R.T.

Project H.E.A.R.T. (2017)

2017

A biodata-driven VR game where militainment and pop music fuel a new form of emotional drone warfare.

A twist on popular “militainment” shooter video games, Project H.E.A.R.T. invites the viewer to place their fingers on a custom biodata device and summon their enthusiasm to engage their avatar, Yowane Haku, in “combat therapy.” Fans of Vocaloid characters may recognize Haku as the “bad copy” of Japanese pop celebrity Hatsune Miku, a holographic persona who invites her fans to pour their content and songs into her virtual voice.

The biosensing system features a pulse sensor and a skin conductance sensor of Gee’s design. Through principles of emotional physiology and affective computing, the device gathers data on heart rate and blood flow from the index finger, and skin conductance from the middle and ring fingers. The biodata is read by a microcontroller and transferred to Unity VR, facilitating emotional interactivity: a user’s enthusiasm (spikes in skin conductance amplitude, elevated heart rate, and shifts in the amplitude of the pulse signal) stimulates the holographic pop star to sing in the virtual warzone, inspiring the military fighters to continue the war and create more enemy casualties. At the end of the experience the user is confronted with a “score” of traumatized soldiers versus enemies killed, with no indication of whether this means they won or lost the “game.”
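As an illustrative sketch of this kind of emotional interactivity (the thresholds, weights, and function names are hypothetical, not the game’s actual tuning), an “enthusiasm” score can be read off a single biosignal frame and used to gate the avatar’s voice:

```python
def enthusiasm(heart_rate, scl, resting_hr=70.0, scl_baseline=0.0):
    """Toy enthusiasm score in [0, 1] from one biosignal frame:
    elevated heart rate plus a skin conductance spike above baseline.
    Weights and normalization constants are hypothetical."""
    hr_term = max(0.0, min(1.0, (heart_rate - resting_hr) / 50.0))
    scl_term = max(0.0, min(1.0, scl - scl_baseline))
    return 0.5 * hr_term + 0.5 * scl_term

def haku_sings(score, threshold=0.6):
    """The avatar sings only while enthusiasm stays above threshold."""
    return score >= threshold

calm = enthusiasm(heart_rate=72.0, scl=0.1)      # barely above resting
excited = enthusiasm(heart_rate=120.0, scl=0.9)  # spikes in both signals
```

A calm player leaves Haku silent; an excited one pushes the score past the threshold and keeps her singing in the warzone.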

The user is thus challenged to navigate soldier’s emotional anxieties and summon their positivity to activate Haku’s singing voice as soldiers battle not only against a group of enemies, but also against their own lack of confidence in times of global economic instability.

The landscape of Project H.E.A.R.T. was built from geopolitically resonant sites found on Google Maps, creating a dreamlike background for the warzone. In-game dialogue wavers between self-righteous soldier banter typical of video games, and self-help, bringing the VR participant to an interrogation of their own emotional body in a virtual space that conflates war, pop music, drone technology, and perhaps movement-induced VR nausea.

As Kathryn Hamilton pointed out in her 2017 essay “Voyeur Realism” for The New Inquiry,

“VR’s genesis and development is in the military, where it has been used to train soldiers in “battle readiness,” a euphemism for: methods to overcome the innate human resistance to firing at another human being. In the last few years, VR’s usage has shifted 180 degrees from a technology used to train soldiers for war, to one that claims to “amplify” the voices afflicted by war, and to affect “world influencers” who might be able to stop said wars.”

Credits

Narrative Design: Sofian Audry, Roxanne Baril-Bédard, Erin Gee
3D Art: Alex Lee and Marlon Kroll
Animation and Rigging: Nicklas Kenyon and Alex Lee
VFX: Anthony Damiani, Erin Gee, Nicklas Kenyon
Programming: Sofian Audry, Erin Gee, Nicklas Kenyon, Jacob Morin
AI Design: Sofian Audry
Sound Design: Erin Gee, Austin Haughton, Ben Hinckley, Ben Leavitt, Nicolas Ow
BioSensor Hardware Design: Erin Gee and Martin Peach
BioSensor Case Design: Grégory Perrin
BioSensor Hardware Programming: Thomas Ouellet Fredericks, Erin Gee, Martin Peach
Featuring music by Lazerblade, Night Chaser and Austin Haughton
Yowane Haku character designed by CAFFEIN
Yowane Haku Cyber model originally created by SEGA for Hatsune Miku: Project DIVA 2nd (2010)
Project H.E.A.R.T. also features the vocal acting talents of Erin Gee, Danny Gold, Alex Lee, Ben McCarthy, Gregory Muszkie, James O’Calloghan, and Henry Adam Svec.

This project was commissioned by Trinity Square Video for the exhibition Worldbuilding, curated by John G Hampton and Maiko Tanaka, with the support of the Canada Council for the Arts and AMD Radeon.

This project would not have been possible without the logistical and technical support of the following organizations:

Technoculture Art and Games Lab (Concordia University)

Concordia University

ASAP Media Services (University of Maine)

Exhibition history

November-December 2017  Worldbuilding @ Trinity Square Video, Toronto

February-March 2018 Future Perfect @ Hygienic Gallery, New London Connecticut

April 26-28, 2018 @ Digifest, Toronto

June 7-17, 2019 @ Elektra Festival, Montreal

January 2020 @ The Artist Project, Toronto

October 2020 @ Festival LEV Matadero, Spain

Links

Project H.E.A.R.T. official website
Worldbuilding Exhibition Website
Review in Canadian Art
My research blog: Pop and Militainment
Featured on Radiance VR

Video

Project H.E.A.R.T (2017)
Installation and Gameplay

Gallery