TOUCH ME from Friendred on Vimeo.
Live performers thrive on feedback from their audiences, but they generally don't get that feedback until the end of the performance, and any feedback they do get mid-performance rarely changes the experience for others. One exception might be comedians, whose audiences participate through laughter: a comedian feeds off it, elaborating on the parts that get the biggest response, while audience members feed off each other because laughter is contagious.
I thought it would be interesting to design something that allowed a similar interaction between an audience and a dancer. I created a piece of touch- and gesture-sensitive clothing that is worn by the performer but controlled by other individuals responding to the performance.
A dancer connects with a piece of music and moves their body to express that music to whoever is watching. The watcher can then respond to the performance by controlling visual and audio elements on the dancer's clothing. This feedback affects the dancer's performance, but it also changes the experience for the rest of the audience. In effect, individuals watching the dancer become contributors to the performance, not just spectators.
This project has two inputs controlling two outputs: audio (a sound sequencer) and visual (a NeoPixel light strip).
The first input is a capacitive touch sensor (MPR121), whose capacitance changes whenever someone touches one of its electrodes. In total there are 11 outputs; each output triggers one switch in MAX, so the whole sequencer can work properly. MAX constantly reads messages from the serial port, which is why the two inputs cannot run at the same time: they share the same port. After receiving the raw message from the Arduino, MAX converts its ASCII characters into an integer.
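On the Arduino side, detecting which pad was just pressed comes down to comparing two touch bitmasks. A minimal sketch of that logic, assuming an Adafruit-MPR121-style library where `touched()` returns a 12-bit mask (bit n set means electrode n is currently touched); the helper name is illustrative, not the actual project code:

```cpp
#include <cstdint>

// Compare the current touch bitmask with the previous one to find a
// pad that has just been pressed (its bit went from 0 to 1).
// Returns the index of the first newly touched pad, or -1 if none.
int newlyTouchedPad(uint16_t current, uint16_t previous) {
    uint16_t pressed = current & ~previous;  // bits that went 0 -> 1
    for (int pad = 0; pad < 12; ++pad) {
        if (pressed & (1 << pad)) return pad;
    }
    return -1;
}
```

In the Arduino loop, the returned pad index would be written to the serial port (e.g. with `Serial.println(pad)`) as ASCII digits, which is why MAX has to convert the characters back into an integer.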
Then I created a sequencer that triggers one of 11 audio samples (one for each tin-foil output) whenever a message is received from the Arduino. Each tin-foil output acts as both an on and an off switch: the sample keeps playing until the same pad that originally triggered it is pressed again, which stops the sound.
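The toggle behaviour described above can be sketched as a small piece of state per pad. This is a hypothetical model of the MAX patch's logic, not the patch itself:

```cpp
#include <array>

// Toggle logic for the 11-pad sequencer: pressing a pad starts its
// sample, pressing the same pad again stops it.
struct SampleSequencer {
    std::array<bool, 11> playing{};  // one on/off flag per tin-foil pad

    // Called once per pad-index message received from the Arduino.
    // Returns true if the pad's sample should now be playing.
    bool toggle(int pad) {
        playing[pad] = !playing[pad];
        return playing[pad];
    }
};
```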
The second input controls the visual component. I used a Leap Motion controller to recognise five different hand gestures. The Leap Motion generates 15 coordinates: an (x, y, z) point for each of the five fingertips. This coordinate data is sent to Wekinator and trained using machine learning. Wekinator then sends a message to Processing, which selects one of five light settings on the NeoPixel strip, driven through the Arduino.
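Wekinator expects those fingertip positions flattened into a single feature vector of 15 floats. A sketch of that flattening step, with hypothetical names (the real Leap Motion API exposes fingertip positions through its own classes):

```cpp
#include <array>

// Flatten five fingertip (x, y, z) positions into the 15-float
// feature vector Wekinator receives over OSC.
std::array<float, 15> flattenFingertips(
        const std::array<std::array<float, 3>, 5>& tips) {
    std::array<float, 15> features{};
    for (int finger = 0; finger < 5; ++finger) {
        for (int axis = 0; axis < 3; ++axis) {
            features[finger * 3 + axis] = tips[finger][axis];
        }
    }
    return features;
}
```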
If you open OSCulator to inspect the messages, you can see the 15 outputs being sent from Processing. It is important here to understand the difference between Wekinator's three model types: classification, regression, and dynamic time warping. I used dynamic time warping, which is what allows gestures to be recorded over time rather than matched frame by frame. One difficulty was receiving Wekinator's messages in Processing via oscEvent: I experimented to find out which data type Processing could actually read, and then used that value as a switch to forward the message on to the Arduino.
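Before Processing can forward a recognised gesture to the Arduino over serial, the gesture number has to become something the Arduino sketch can read back with a single `Serial.read()`. A minimal sketch of that mapping, assuming five gestures numbered 0 to 4 (the encoding is illustrative; the actual format depends on how the Wekinator outputs are configured):

```cpp
// Map a gesture number (0-4) to a single ASCII command character
// that can be sent over the serial port; '?' flags an out-of-range
// value so the Arduino side can ignore it.
char gestureToCommand(int gesture) {
    if (gesture < 0 || gesture > 4) return '?';
    return static_cast<char>('0' + gesture);  // 0-4 -> '0'-'4'
}
```

On the Arduino side, each command character would select one of the five NeoPixel light settings.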
Processing & Wekinator
Processing & OSCulator
One of the biggest challenges of this project was putting the electronics into the clothing. Fabric selection was vital: I had to choose a material that:
(i) was transparent enough that the light underneath could be seen; (ii) was flexible enough not to restrict the dancer's movement; (iii) was strong enough to take the weight of all the electronics and the microcontroller; (iv) was insulating and non-conductive, to prevent the electronics from shorting.
Another challenge was that the controller in the clothing had to be wired to the same laptop as the Leap Motion. As a next step, I will try connecting the microcontroller over WiFi or Bluetooth so that the dancer no longer needs to be tethered to the laptop.