“Augmentation and Amplification” was presented as a live-streamed performance during the Covid-19 global pandemic on July 30, 2020, as part of SO⅃OS at Fridman Gallery and CT::SWaM, NYC.
The performance continued my ongoing investigation into creative collaborations between humans and technology, fusing neurodiversity, inclusive creative expression, and adaptability within isolation and confinement.
Vocalist/dancer Mary Esther Carter was alone in the space. All other technology, including multiple microphones and cameras, was run and mixed remotely, allowing the performance to operate in accordance with NYC’s shelter-in-place guidelines. Her vocal partner, an autonomous A.I. entity, exists only online. The live-stream performance interwove live action in the gallery with translucent video overlays, opaque video imagery, and a combination of live, pre-recorded, and technology-generated audio.
A.I. Anne is a machine learning entity created by Richard Savery and myself. Trained on Mary’s voice, A.I. Anne is named after and patterned on my aunt Anne, who was severely autistic and nonspeaking due to apraxia. My aunt could hum emotively but was never able to speak. The virtual A.I. Anne has the ability to vocalize but not to create language. Using deep learning combined with semantic knowledge, A.I. Anne can generate audible emotion and respond to emotion.