KIMA: "The Wheel" at Curtain Call

As part of a collaboration with the Analema Group, we created an interactive experience for Ron Arad's Curtain Call, an immersive 360-degree screen. The experience was built in Unity as a custom particle system driven by 7 neural networks, which interpreted audio input from 7 microphones to visualise an avant-garde choir piece.

The piece draws inspiration from both the space of the Roundhouse and the poetry of Evgenia Emets. Built as a particle system on a vector field, it directly maps the 6 microphones assigned to the singers of the choir to 6 different neural networks (referred to as brains) that control the characteristics of 6 particle emitters, with a seventh, central network connected to a main microphone in the centre of the circle. The installation consisted of a two-part performance by the choir, with an intermission in which the audience was encouraged to interact directly with each of the microphones.
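
For clarity, the wiring can be pictured as a small routing table: one entry per singer plus the central microphone. This is only an illustrative sketch; the channel and port numbers are assumptions, not the production values.

```csharp
// Hypothetical routing table: one brain per microphone, one emitter per brain.
public struct BrainRoute
{
    public int micChannel;   // audio input channel for this microphone (assumed)
    public int oscPort;      // port this brain's OSC traffic is assumed to use
    public int emitterIndex; // the particle emitter ("world") it drives
}

public static class KimaRouting
{
    public static readonly BrainRoute[] Routes =
    {
        new BrainRoute { micChannel = 0, oscPort = 12000, emitterIndex = 0 },
        new BrainRoute { micChannel = 1, oscPort = 12001, emitterIndex = 1 },
        new BrainRoute { micChannel = 2, oscPort = 12002, emitterIndex = 2 },
        new BrainRoute { micChannel = 3, oscPort = 12003, emitterIndex = 3 },
        new BrainRoute { micChannel = 4, oscPort = 12004, emitterIndex = 4 },
        new BrainRoute { micChannel = 5, oscPort = 12005, emitterIndex = 5 },
        new BrainRoute { micChannel = 6, oscPort = 12006, emitterIndex = 6 }, // central mic
    };
}
```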

The visual layer consisted of two Unity 5 applications running on two 7thSense Delta Infinity servers, each creating a 6480 x 1920 pixel texture that was sent via Spout to the main video system of the Roundhouse. These textures were mapped onto 12 HD projectors, each spanning 30 degrees of the circular screen.
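
On the Unity side, this amounts to rendering into one large texture per server. A minimal sketch, using only standard Unity calls; the Spout hand-off itself happens in a native plugin, so it is omitted, and the component name here is made up.

```csharp
using UnityEngine;

// Minimal sketch: render the scene into the 6480 x 1920 texture that is
// then shared over Spout (plugin-specific code omitted).
public class ServerOutput : MonoBehaviour
{
    public Camera outputCamera;
    RenderTexture target;

    void Start()
    {
        target = new RenderTexture(6480, 1920, 24); // half of the 360-degree screen
        target.Create();
        outputCamera.targetTexture = target;        // Spout picks this texture up
    }
}
```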

A screenshot of the Unity application running on my laptop. Note the particles and the underlying hexagonal force field, which was progressively revealed as the piece evolved, evoking the idea of the memory of the sounds that had travelled through the installation.

Two possible arrangements of the force nodes in the grid.

The visual aesthetic of the installation was heavily inspired by hexagons, so I wrote a custom algorithm to place the forces that make up the vector field on a hexagonal grid.
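
The placement itself is straightforward. A minimal sketch of the kind of offset grid involved, with names and spacing values as assumptions: every other row is shifted by half a step, giving the triangular/hexagonal lattice visible in the screenshots.

```csharp
using System.Collections.Generic;
using UnityEngine;

public static class HexGrid
{
    // Place force nodes on an offset grid that forms a hexagonal lattice.
    public static List<Vector2> PlaceNodes(int rows, int cols, float spacing)
    {
        var nodes = new List<Vector2>();
        float rowHeight = spacing * Mathf.Sqrt(3f) / 2f; // vertical distance between rows

        for (int r = 0; r < rows; r++)
        {
            float xOffset = (r % 2 == 0) ? 0f : spacing * 0.5f; // shift odd rows
            for (int c = 0; c < cols; c++)
            {
                nodes.Add(new Vector2(c * spacing + xOffset, r * rowHeight));
            }
        }
        return nodes;
    }
}
```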

By constraining the forces to just three directions, we could make the particles follow hexagonal paths, which could then be made more or less dramatic by adjusting the magnitude of the forces.
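
A sketch of that snapping step, assuming the three directions sit 120 degrees apart and each force keeps its original magnitude; the base angle is an illustrative choice.

```csharp
using UnityEngine;

public static class HexForces
{
    // Three unit vectors 120 degrees apart (base angle is an assumption).
    public static readonly Vector2[] Directions = BuildDirections(90f);

    static Vector2[] BuildDirections(float baseAngleDeg)
    {
        var dirs = new Vector2[3];
        for (int i = 0; i < 3; i++)
        {
            float a = (baseAngleDeg + 120f * i) * Mathf.Deg2Rad;
            dirs[i] = new Vector2(Mathf.Cos(a), Mathf.Sin(a));
        }
        return dirs;
    }

    // Snap an arbitrary force to the closest of the three directions,
    // keeping its magnitude so the paths can be made more or less dramatic.
    public static Vector2 Snap(Vector2 force)
    {
        Vector2 best = Directions[0];
        float bestDot = Vector2.Dot(force.normalized, Directions[0]);
        for (int i = 1; i < 3; i++)
        {
            float d = Vector2.Dot(force.normalized, Directions[i]);
            if (d > bestDot) { bestDot = d; best = Directions[i]; }
        }
        return best * force.magnitude;
    }
}
```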

The field was divided into 6 emitters, each with its own settings (force, speed, friction, magnitude), which allowed us to create the conceptual idea of 6 worlds. While we initially built the mappings by hand in the Unity layer, it quickly became apparent that greater expressivity would require a better method.
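
These settings translate naturally into a small serialisable block per world. A sketch with placeholder defaults; the field names simply mirror the parameters listed above.

```csharp
using UnityEngine;

[System.Serializable]
public class EmitterSettings
{
    public float force = 1f;      // strength of the field nodes in this world
    public float speed = 1f;      // particle speed
    public float friction = 0.1f; // damping applied each frame
    public float magnitude = 1f;  // overall scale of the motion
}

public class KimaField : MonoBehaviour
{
    // One settings block per "world", tweakable in the Unity inspector.
    public EmitterSettings[] worlds = new EmitterSettings[6];
}
```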

I had recently come across Wekinator, a tool created by Rebecca Fiebrink for building expressive interfaces with machine learning techniques. I suggested we move to neural networks to control the emitters, both for the increased expressivity and for the ability to train the networks to respond to different singing patterns.

Each network was thus trained to analyse OSC input from an audio analysis application and respond differently to different sound stimuli. We were able to train certain "worlds" to respond to tonal or fricative sounds in a remarkably intuitive manner: we simply set the output we wanted from the network and then provided enough training data, in the most natural way possible, by speaking into a microphone.
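
Once trained, a brain's outputs just need to be scaled into parameter ranges. A minimal sketch, reusing the EmitterSettings block above and assuming the outputs arrive as normalised floats (Wekinator sends them over OSC on its /wek/outputs address); OnBrainOutputs is a hypothetical callback wired to whatever OSC library receives the messages, and the ranges are illustrative values.

```csharp
using UnityEngine;

public class BrainMapping : MonoBehaviour
{
    public EmitterSettings target; // the world this brain drives
    public Vector2 forceRange = new Vector2(0f, 5f);   // min/max, assumed values
    public Vector2 speedRange = new Vector2(0.1f, 3f); // min/max, assumed values

    // Hypothetical callback: invoked when a new set of outputs (0..1 floats)
    // arrives from the trained network.
    public void OnBrainOutputs(float[] outputs)
    {
        if (outputs.Length < 2) return;
        target.force = Mathf.Lerp(forceRange.x, forceRange.y, outputs[0]);
        target.speed = Mathf.Lerp(speedRange.x, speedRange.y, outputs[1]);
    }
}
```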

Detail of one of the emitter worlds, creating complex shapes.

Detail of one of the Wekinator neural networks, showing the parameters the network could control.

The use of these networks was vital in creating a compelling and expressive interface between the artists and the visuals. Moreover, the system became a sort of visual instrument, responding instantly to the choir's voices and letting the artistic vision be realised with ease. Because the mappings no longer had to be built directly, non-technical members of the group were able to independently adjust and explore the behaviour of each world.

A screenshot of the audio feature extractor, written by Sean Soraghan using the JUCE framework, which analysed the audio input from each of the microphones in real time and sent it via OSC to each of the neural networks.

The diagram of the KIMA system and data flow. Audio flows from the microphones to the sound analysis software, which extracts features that are then sent to each neural network (brain); the brains interpret these as stimuli to alter characteristics inside the Unity apps. A set of range controls lets us change parameter ranges and override certain settings for expressivity and performance.

Shot of KIMA from the foyer of the Roundhouse, showing the installation from outside the circle with three singers: one on the central microphone and two on the outer microphones.

KIMA was one of the most ambitious projects I've taken part in. We developed the entire system in about a month, with minimal testing time on the final hardware. This, together with the complexity of the setup and the live nature of the installation (which ran for only a short time), put a lot of pressure on the team. However, the project was a resounding success, and as a testament to that we will be showing a permanent version of the piece at Fondation L'Abri as part of a residency in November 2016.

I wish to thank everyone in the Analema Group for the opportunity to work side by side with you all!

A much-deserved closing party on Hampstead Heath after the show, with all our friends and collaborators.
