Live 3D / Ambisonic audio-visual full dome performance. Collaboration with Will Young and Ben Gannaway of RFID.
Fixed version of the piece presented at Sónar+D, Barcelona, 14-17/06/2017.
Read a review of the show on The Creators Project.
Fragments is an exploration of our relationship with memories: how recollected fragments are woven into an ever-changing narrative, and how emotionally charged events come to be managed as ordered information. What we remember as reality can be seen as a complex, interwoven construction of these partials and the stories they become.
The piece is built around a series of audio interviews in which subjects recall significant past experiences, heavily abstracted in the final work. Environmental elements referenced in the interviews are used to reconstruct these stories in a non-linear way: field recordings and 3D footage are distorted, processed and spatialised to create abstract audio-visual scenes, the narration appearing clearly only as momentary fragments before dissolving again into distorted disarray.
In the composition of Fragments, neither the visual nor the sonic element leads the collaborative process; rather, the work is created synergistically through interlinked iterative processes. Performance parameters of the visuals are tied to sonic processes and vice versa, often in feedback loops, and the work grows out of this interaction. The piece is structured, but there is ample scope for improvisation within that structure, both visually and sonically, and every performance is different.
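As a rough illustration of the kind of cross-modal feedback loop described above, a visual parameter can smoothly follow an audio level while itself feeding a value back into the audio chain. The parameter names, the smoothing coefficient, and the mapping are hypothetical; the actual links run between Ableton/Max for Live and VVVV, not in Python:

```python
# Sketch of a two-way audio/visual parameter link (hypothetical names).
# The real mappings live in the Max for Live and VVVV patches.
def step(audio_level, visual_density, alpha=0.1):
    """One update tick: audio loudness drives a visual density
    parameter, which in turn feeds back an audio modulation depth."""
    visual_density += alpha * (audio_level - visual_density)  # smoothed follower
    mod_depth = 1.0 - visual_density                          # feedback into audio
    return visual_density, mod_depth

density = 0.0
for _ in range(100):
    density, mod = step(audio_level=0.8, visual_density=density)
# density converges toward the driving audio level
```

The smoothing keeps the loop stable: rather than the visual parameter jumping to each new audio value, it eases toward it, so the feedback into the audio side changes gradually.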
Much of the visual content was captured using the new Kinect2 sensor, which records depth and infrared data as well as video. This allowed 3D scenes to be reconstructed from recorded footage and creates enormous potential for further manipulation during the performance in VVVV. The Kinect2 used for the project is currently one of only 500 units in a limited trial release, with a brand-new API and hardware that is far more sensitive than its predecessor.
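The reconstruction step above amounts to unprojecting each depth pixel into 3D space. A minimal sketch of that geometry, assuming a simple pinhole model (the focal lengths and principal point below are illustrative placeholders, not the Kinect2's actual calibration; the real pipeline runs in VVVV):

```python
# Sketch: unproject a depth image into a 3D point cloud.
# fx, fy, cx, cy are placeholder pinhole intrinsics, not a real
# Kinect2 calibration.
import numpy as np

def depth_to_points(depth, fx=365.0, fy=365.0, cx=256.0, cy=212.0):
    """Convert an (H, W) depth map in metres into an (H*W, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx   # back-project through the pinhole model
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Tiny example: a flat surface 2 m from the sensor, at the
# Kinect2 depth resolution of 512 x 424.
cloud = depth_to_points(np.full((424, 512), 2.0))
```

Once the footage is lifted into a point cloud like this, the scene can be relit, deformed and re-projected freely during performance.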
The audio is made entirely from binaural field recordings manipulated in Ableton Live and Max for Live. The sound is spatialised across the Satosphere's 157-speaker array using Ambisonics in MaxMSP.
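Ambisonic spatialisation works by encoding each source into a speaker-independent multichannel format, which is then decoded to the actual array. A minimal sketch of first-order B-format encoding of a mono sample; this shows only the standard encoding maths (traditional B-format with the W channel scaled by 1/√2), not the MaxMSP patch or the decoder tailored to the Satosphere's layout:

```python
# Sketch: first-order Ambisonic (B-format) encoding of a mono source.
# Decoding to the 157-speaker array is a separate stage, not shown.
import math

def encode_fo_bformat(sample, azimuth, elevation):
    """Encode one mono sample at (azimuth, elevation) in radians
    into traditional B-format channels (W, X, Y, Z)."""
    w = sample / math.sqrt(2.0)                               # omnidirectional
    x = sample * math.cos(azimuth) * math.cos(elevation)      # front-back
    y = sample * math.sin(azimuth) * math.cos(elevation)      # left-right
    z = sample * math.sin(elevation)                          # up-down
    return w, x, y, z

# A source directly ahead on the horizon lands entirely in W and X.
w, x, y, z = encode_fo_bformat(1.0, 0.0, 0.0)
```

Because the encoded signal carries direction rather than speaker assignments, the same material can be decoded to any array geometry, which is what makes the format practical for a dome like the Satosphere.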