Tag: machine learning

Well Now WTF? online exhibition

Museums are closed. School is cancelled. The world is shut off and we’re stuck indoors. All the bread has been sold and Twitter has lost its mind. Fox News is killing off its own demographic. While everything is cancelled, why not have a show?

In spite of everything, Silicon Valet is pleased to present Well Now WTF?, an online exhibition curated by Faith Holland, Lorna Mills, and Wade Wallerstein, featuring 80 artists with moving image practices, opening April 4, 2020, from 8 to 10 pm EST.

Here is the Facebook link to join in at the opening: https://www.facebook.com/events/150877186204563/

Here is the URL for the exhibition: https://wellnowwtf.siliconvalet.org

With everything going on, we ask ourselves: Well Now WTF? We have no answer, but we do know how to make GIFs. We can come together and use the creative tools at our disposal to build a space for release outside of anxiety-inducing news cycles and banal social media feeds.

As co-curator Lorna Mills suggests, “Why masturbate alone, when we can all be wankers together?”

Well Now WTF? is as much an art show as a community gathering. Beginning with the opening on April 4 and continuing throughout the exhibition, we will hold online events on the site itself and via Twitch, where people can gather and talk as they normally would at a physical exhibition.

Well Now WTF? will be available online at wellnowwtf.siliconvalet.org. The exhibition will be free and open to the public, with a suggested $5, pay-what-you-wish entry that is redistributed to the artists contributing work.

The exhibition will be accompanied by essays by Wade Wallerstein and Seth Watter.

Participating artists: A Bill Miller, Ad Minoliti, Adrienne Crossman, Alex McLeod, Alice Bucknell, Alma Alloro, Andres Manniste, Anneli Goeller, Anthony Antonellis, Antonio Roberts, Ben Sang, Benjamin Gaulon, Carla Gannis, Carlos Sáez, Casey Kauffmann, Casey Reas, Cassie McQuater, Chiara Passa, Chris Collins, Cibelle Cavalli Bastos, Claudia Bitran, Claudia Hart, Clusterduck Collective, Daniel Temkin, Devin Kenny, Don Hanson, Dominic Quagliozzi, Elektra KB, Ellen.Gif, Eltons Kuns, Emilie Gervais, Erica Lapadat-Janzen, Erica Magrey, Erin Gee, Eva Papamargariti, Faith Holland, Geoffrey Pugen, Guido Segni, Hyo Myoung Kim, Ian Bruner, Jan Robert Leegte, Jenson Leonard, Jeremy Bailey, Jillian McDonald, Kamilia Kard, Laura Gillmore, Laura Hyunjhee Kim, Lauryn Siegel, Libbi Ponce, Lilly Handley, Lorna Mills, LoVid, Mara Oscar Cassiani, Mark Dorf, Mark Klink, Maurice Andresen, Maya Ben David, Molly Erin McCarthy, Molly Soda, Nicolas Sassoon, Nicole Killian, Olia Svetlanova, Olivia Ross, Pastiche Lumumba, Peter Burr, Petra Cortright, Rafia Santana, Rea Mcnamara, Rick Silva, Rita Jiménez, Ryan Kuo, Ryan Trecartin, Santa France, Sara Ludy, Sebastian Schmieg, Shawné Michaelain Holloway, Stacie Ant, Sydney Shavers, Terrell Davis, Theo Triantafyllidis, Tiare Ribeaux, Travess Smalley, Wednesday Kim, Will Pappenheimer, Yidi Tsao, Yoshi Sodeoka, and more to be announced

Machine Unlearning

Vision calibration from Machine Unlearning (2020). Photography by Elody Libe. Image courtesy of the artist.

Machine Unlearning is a video installation in the style of an ASMR roleplay, in which the artist offers a treatment administered through a chip she inserts at the back of the viewer’s neck. This chip allows the artist to administer the outputs of an LSTM algorithm directly to the mind of the viewer through her whispered voice, as the algorithm unlearns language it previously learned from the novel Wuthering Heights. The roleplay is complemented by a variety of physiological calibration routines and hypnotic hand gestures that play with visual rhythm and an implied first-person experience.
The combination of machine learning outputs and ASMR is intended to draw parallels between the autonomous systems of algorithms and the autonomous reactions of human sensory systems. Just as ASMRtists use specific sounds and visual patterns in their videos to “trigger” physical reactions, acting on the unconscious sensory processing of the listener as they watch, the algorithm also unconsciously responds to patterns perceived through its limited senses in order to develop its learning (and unlearning) processes.
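
For readers curious about the machinery behind the whispering: the piece does not publish its code, but one plausible way to stage this kind of “unlearning” is to save checkpoints while a character-level LSTM learns Wuthering Heights and then sample from them in reverse order, so the generated text decays from near-Brontë prose back toward noise. The sketch below assumes exactly that; the file names, checkpoint layout, and CharLSTM architecture are hypothetical choices of mine, not the artist’s actual pipeline (a companion training sketch appears under to the sooe further down).

```python
# Hypothetical sketch: replay saved training checkpoints in reverse so a
# character-level LSTM appears to "unlearn" Wuthering Heights.
# File names and architecture are assumptions, not the artist's pipeline.
import glob
import torch
import torch.nn as nn

text = open("wuthering_heights.txt", encoding="utf-8").read()  # assumed local copy of the novel
chars = sorted(set(text))
char_to_ix = {c: i for i, c in enumerate(chars)}

class CharLSTM(nn.Module):
    def __init__(self, vocab, embed=64, hidden=256, layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, embed)
        self.lstm = nn.LSTM(embed, hidden, layers, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.head(h), state

def sample(model, seed="I ", length=200, temperature=0.9):
    """Generate text one character at a time from the current weights."""
    ix = torch.tensor([[char_to_ix[c] for c in seed]])
    out, state = list(seed), None
    with torch.no_grad():
        logits, state = model(ix, state)
        for _ in range(length):
            probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
            nxt = torch.multinomial(probs, 1)
            out.append(chars[nxt.item()])
            logits, state = model(nxt.view(1, 1), state)
    return "".join(out)

model = CharLSTM(len(chars))

# Replay training history backwards: latest checkpoint first, earliest last,
# so the whispered voice gradually "forgets" the novel as the piece unfolds.
for path in sorted(glob.glob("checkpoints/epoch_*.pt"), reverse=True):  # hypothetical files from training
    model.load_state_dict(torch.load(path))
    model.eval()
    print(f"--- {path} ---")
    print(sample(model))
```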

Credits: Photography and videography by Elody Libe.

Production Support: The Machine Unlearning video installation was produced at Perte de Signal with the support of the MacKenzie Art Gallery for the exhibition To the Sooe (2020), curated by Tak Pham.

The roleplay performance was developed during my artistic residency at Locus Sonus, École Supérieure d’Art d’Aix-en-Provence, and Laboratoire PRISM.

More...

The use of the word “intelligence” in the metaphor of AI focuses on higher functions of consciousness that algorithms do not possess. While algorithms have not meaningfully achieved a humanistic consciousness to date, today’s algorithms act autonomously on sensory information, processing data from their environment in unconscious, automatic ways. The human brain also responds unconsciously and automatically to sensory data in its environment: for example, even if you are not conscious of how hot a stove is, if you place your hand on it, your hand will automatically pull away. These unconscious, physiological actions in the sensory realm point to an area of common experience between algorithms and the human. For more on these ideas, see the work of postmodern literary critic N. Katherine Hayles in her 2017 book Unthought: The Power of the Cognitive Nonconscious. In this way I wonder if the expression “autonomous intelligence” makes more sense than “artificial intelligence”; however, like posthumanist feminist Rosi Braidotti, I am deeply suspicious of the humanist pride that our species takes in the word “intelligence” as something that confers a special status and a justification for domination of other forms of life on earth.

Live Performance

This work was first developed as a performance that debuted at Cluster Festival, Winnipeg, in 2019. During the live performance, each audience member dons a pair of wireless headphones. The performance allows audience members to see the ASMR “result” as it is framed for the camera, while simultaneously watching my “backstage” manipulation of props and light in real time.

Machine Unlearning (2019) Performance at Cluster Festival, Winnipeg. Photo: Leif Norman.

Machine Unlearning (2019) Performance at Cluster Festival, Winnipeg. Photo: Leif Norman.

Machine Unlearning (2019) Performance at Cluster Festival, Winnipeg. Photo: Leif Norman.

ASAP Journal

Happy to announce that my short article on machine learning, ASMR, and sound, “Automation as Echo,” written with Sofian Audry, is now published in ASAP/Journal 4.2, in a collection of articles assembled by Jennifer Rhee covering automation from diverse/creative/critical perspectives.

From the article:

“The echo is a metaphor that goes beyond sound, speaking to the physical and temporal gaps in human-computer interaction that open up a space of aesthetic consumption problematized by the impossibility of comprehending machine perspectives on human terms. The echo unfolds in time, but most importantly it unfolds in space: sound travels as a physical interaction between a subject and an object that seemingly “speaks back.”

The mythological nymph Echo “speaks” or “performs” her subjectivity through reflection or imitation of the voice of human Narcissus. Her (incomplete, sometimes humorous, sometimes uncannily resemblant) nonhuman voice is dependent on the human subject, who is also the progenitor of her speech. The relationship between these two mythological entities creates an apt metaphor for machine learning: its processes are not of the human, yet its “neural” functions are crafted in imitation of and in response to human thought. As machine subjectivity is crafted from human subjectivity, we cannot grasp its machined voice, nor perceive its subjective position, through analysis of its various textual, sonic, visual, and robotic outputs alone. Rather, the “voice” of machine learning is fleeting, heard through the spaces, the gaps, the movements between the machine and the human, the vibrational color of nonhuman noise.”

ABOUT ASAP JOURNAL

ASAP/Journal is a peer-reviewed scholarly journal published by Johns Hopkins University Press that explores new developments in post-1960s visual, media, literary, and performance arts. The scholarly publication of ASAP: The Association for the Study of the Arts of the Present, ASAP/Journal promotes intellectual exchange between artists and critics across the arts and humanities. The journal publishes methodologically cutting-edge, conceptually adventurous, and historically nuanced research about the arts of the present.

Canada Council for the Arts Grant

I am proud to announce that I have been awarded a research and creation grant from the Canada Council for the Arts to conduct preliminary research into an interactive installation work involving machine learning (GANs), biosensor data, 3D printed wearables, and method actors. This project is a collaboration with Sofian Audry. I’ll be sure to send you updates as they come!

We acknowledge the support of the Canada Council for the Arts.

About Canada Council for the Arts

The Canada Council for the Arts is Canada’s public arts funder, with a mandate to foster and promote the study and enjoyment of, and the production of works in, the arts. The Council champions and invests in artistic excellence through a broad range of grants, services, prizes and payments to professional Canadian artists and arts organizations. Its work ensures that excellent, vibrant and diverse art and literature engages Canadians, enriches their communities and reaches markets around the world. The Council also raises public awareness and appreciation of the arts through its communications, research and arts promotion activities. It is responsible for the Canadian Commission for UNESCO, which promotes the values and programs of UNESCO in Canada to contribute to a more peaceful, equitable and sustainable future. The Canada Council Art Bank operates art rental programs and helps further public engagement with contemporary arts.


Printemps Numérique Montreal

Musée McCord / McCord Museum – 690 Sherbrooke St W, Montreal, QC H3A 1E9, Canada

Wednesday, May 29 – Sunday, June 2, 2019

Curated by Erandy Vergara

Artists: Sofian Audry, Mara Eagle, Erin Gee, Julia Zamboni


to the sooe (2018), my revocalized machine learning sound artwork inspired by ASMR, made in collaboration with Sofian Audry, is featured in an exhibition at the McCord Museum as part of Printemps Numérique in Montreal.

Click here to learn more about the exhibition including details on the works by the other artists in the show.


Cluster Festival Winnipeg Canada

Cluster Festival: Winnipeg’s most dynamic take on contemporary art and sound.

Cluster X : Feb. 28 – Mar. 3, 2019

Great news: I’m going to Winnipeg for the first time! As a proud Canadian hailing from the prairie provinces, I have always wanted to get to know the experimental art and music scene in Winnipeg. I’m going to be at the 10th edition of Cluster Festival, featured almost every day all weekend! Whether it is a talk on my practice, a public discussion about diversity in music with an amazing bunch of musicians and composers, the Canadian debut of my newest biohardware installation Pinch and Soothe (2019), or a new performance version of Machine Unlearning (2019-20), where I will sweetly whisper an unraveling version of Emily Brontë’s Wuthering Heights into your wireless-headphoned ears as a means of giving you neural stimulation, I’ll be there! For more information, visit www.clusterfestival.com

Cluster Festival X 2019 Promo from Cluster Festival on Vimeo.

CLUSTER 2019 ARTISTS

Andrea Robers • Beast Nest • Davis Plett • Erin Gee • Eliot Britton • Luke Nickel • Grace Hrabi • Hong Kong Exile • Kristen Wachniak • Maksym Chupov-Ryabtsev • Matt Poon • Mirror Frame • Natalie Tin Yin Gan • Remy Siu • Sharmi Basu • Susan Britton • TAK • Vicky Chow • William Kuo • XIE

to the sooe

A 3D printed sound object that houses a human voice murmuring the words of a neural network trained on the writing of a deceased author.

to the sooe (SLS 3D printed object, electronics, laser-etched acrylic, audio, 2018) is the second piece in a body of work Erin Gee made in collaboration with artist Sofian Audry that explores the material and authorial agencies of a deceased author, an LSTM algorithm, and an ASMR performer.

The work in this series transmits the aesthetics of an AI “voice” that speaks through generated text, rendered in the sounds of Gee’s softly spoken human vocals, using a human body as a relatively low-tech filter for processes of machine automation. Other works in this series include of the soone (2018) and Machine Unlearning (2018-2019).

to the sooe is a sound object that features a binaural recording of Erin Gee’s voice as she re-articulates the murmurs of a machine learning algorithm learning to speak. Through this work, the artists re-embody the cognitive processes and creative voices of three agents (a deceased author, a deep learning neural net, and an ASMR performer) into a tangible device. These human and nonhuman agencies are materialized in the object through speaking and writing: a disembodied human voice, words etched onto a mirrored acrylic surface, and code written into the device’s silicon memory.

The algorithmic process used in this work is a deep recurrent neural network agent known as “long short-term memory” (LSTM). The algorithm “reads” Emily Brontë’s Wuthering Heights character by character, familiarizing itself with the syntactical universe of the text. As it reads and re-reads the book, it attempts to mimic Brontë’s style within the constraints of its own artificial “body,” thereby finding its own alien voice.
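
As a rough illustration of this character-by-character reading, here is a minimal training sketch under stated assumptions: the file name wuthering_heights.txt, the architecture, the hyperparameters, and the per-epoch checkpointing are illustrative choices of mine, not the artists’ actual code. Sampling from the resulting weights (the “alien voice”) is sketched in the Machine Unlearning entry above.

```python
# Hypothetical sketch of character-level LSTM training on Wuthering Heights:
# the network reads the novel one character at a time and learns to predict
# the next character. Hyperparameters and file names are illustrative only.
import os
import torch
import torch.nn as nn

text = open("wuthering_heights.txt", encoding="utf-8").read()   # assumed local copy of the novel
chars = sorted(set(text))
char_to_ix = {c: i for i, c in enumerate(chars)}
data = torch.tensor([char_to_ix[c] for c in text], dtype=torch.long)

class CharLSTM(nn.Module):
    def __init__(self, vocab, embed=64, hidden=256, layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, embed)
        self.lstm = nn.LSTM(embed, hidden, layers, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.head(h), state

model = CharLSTM(len(chars))
optim = torch.optim.Adam(model.parameters(), lr=2e-3)
loss_fn = nn.CrossEntropyLoss()
seq_len, batch_size, steps_per_epoch = 128, 32, 500
os.makedirs("checkpoints", exist_ok=True)

for epoch in range(10):                                          # illustrative run length
    for _ in range(steps_per_epoch):
        starts = torch.randint(0, len(data) - seq_len - 1, (batch_size,)).tolist()
        x = torch.stack([data[s:s + seq_len] for s in starts])          # input characters
        y = torch.stack([data[s + 1:s + seq_len + 1] for s in starts])  # next characters to predict
        logits, _ = model(x)
        loss = loss_fn(logits.reshape(-1, len(chars)), y.reshape(-1))
        optim.zero_grad()
        loss.backward()
        optim.step()
    # Save a snapshot per epoch so the model's progress can be revisited
    # (or, replayed backwards, its "unlearning").
    torch.save(model.state_dict(), f"checkpoints/epoch_{epoch:02d}.pt")
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```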


The reading of this AI-generated text by a human speaker allows the listener to experience simultaneously the neural network agent’s linguistic journey and the augmentation of this speech through vocalization techniques adapted from Autonomous Sensory Meridian Response (ASMR). ASMR involves the use of acoustic “triggers” such as gentle whispering and fingers scratching or tapping, in an attempt to induce tingling sensations and pleasurable auditory-tactile synaesthesia in the listener. Through these autonomous physiological experiences, the artists hope to reveal the autonomous nature of the listener’s own body, implicating the listener as an already-cyborgian aspect of the hybrid system in place.

Exhibition History

Printemps Numérique – Montreal, May 29-June 3 2019. Curated by Erandy Vergara.

Taking Care – Hexagram Campus Exhibition @ Ars Electronica, Linz Sept 5-11 2018.

Credits

Sofian Audry – neural network programming and training

Erin Gee – vocal performer, audio recording and editing, electronics

Grégory Perrin – 3D printing design and laser etching

NRW Forum Düsseldorf

My collaborative work with Sofian Audry, of the soone (2018) will be featured in an exciting exhibition at NRW Forum focused on contemporary art and AI, curated by Tina Sauerländer (peer to space).

Artists: Nora Al-Badri & Jan Nikolai Nelles (DE), Jonas Blume (DE), Justine Emard (FR), Carla Gannis (US), Sofian Audry and Erin Gee (CAN), Liat Grayver (ISR/DE), Faith Holland (US), Tuomas A. Laitinen (FI), and William Latham (UK)

Initiated and hosted by Leoni Spiekermann (ARTGATE Consulting)
Curated by Tina Sauerländer and Peggy Schoenegge
At NRW Forum Düsseldorf,  Ehrenhof 2, 40479 Düsseldorf, Germany

Preview: May 25 – 27, 2018, during Meta Marathon
Opening: June 8, 2018, 7pm

Exhibition: June 9 – August 19, 2018

We are particularly excited for this exhibition because we will debut a 3D printed enclosure for the work made especially by Grégory Perrin, who has previously worked with me on the sensor box for Project H.E.A.R.T. (2017) as well as an amazing box for the installation of Swarming Emotional Pianos (2015).

NRW Forum website 

peer to space website

XX FILES PIRATE RADIO

Image: Radio XX hosts Julia Dyck, Belen Arenas and Amanda Harvey.


My soft-spoken whispering sound art of the soone (2018), made in collaboration with Sofian Audry, will be featured as part of the XX Files Radio Show’s programming for Nuit Blanche 2018, Riding the Wave: a pirate radio festival, broadcasting live from a little studio on Van Horne/Waverly at 104.3 FM in Montreal at 6 am on March 4th. Wish I could be there to turn on a real radio to hear it.

Click here for the official website

ABOUT THE XX FILES

The XX Files is the aural-satellite to Montreal-based feminist media arts space Studio XX. This intersectional feminist media collective works to explore all aspects of our techno-world from the perspective of women living it.

The show was started by Deborah VanSlet and Kathy Kennedy in 1996 on CKUT 90.3 FM and continues to feature diverse, compelling feminist perspectives about art, technology, and society. The XX Files makes a feminist statement about our relationship to the digital world through traditional media, as both a feminist public and a social space that allows feminist icons and marginalized narratives to have their voices heard.

The current team is composed of Julia Dyck and Amanda Harvey. The collective continues to host the weekly CKUT show alongside two monthly internet radio shows, one on Montreal’s N10.AS as well as one on France’s CAMP. The collective also presents live audio-visual performances and DJ sets.

In the summer of 2017, The XX Files completed a residency at Studio XX where they produced a triptych of audio documentaries, devised a live A/V performance, and built FM radios. In March of 2018, they are curating and presenting a shortwave pirate radio festival for Nuit Blanche à Montréal.

LOCATION

March 3 – 4, 7:00pm – 7:00am

Broadcasting live from Earth II, 134 Van Horne, Studio 212

Open to the public until 2AM


Live streaming

https://studioxx.org/en/pirate-radio-festival-xxfiles/

Frequency

104.3FM (Van Horne/ Waverley)