This work is a dialogue between shader and reality. The metamorphoses that occur when touching the screen heighten the viewer's sense of fragility at the boundary between the dematerialized shader and the physical screen. The screen can be regarded as a spectrum of the experiencer's volatile emotions, distorting its limits and immersing the spectator in a brief detachment from reality. No experience goes uncounted; creating an immaterial boundary is the process by which people let their hearts wander. It all starts with the screen seducing you to interact with it.

SHADER from Friendred on Vimeo.


Shader process video from Friendred on Vimeo.

How this piece was created

When creating a massive spandex interactive installation, there were a few problems I had to consider:

a. The spandex fabric size - rolls usually come at a maximum width of 1.5 metres. If I want to create a piece of 180 x 300 cm, then I have to sew panels together. How will I deal with the seam?
b. I need a good-quality projector of at least 2600 lumens, with a projection distance in the 2-3 metre range. Otherwise it will be hard to fit the setup in one room.
c. I need to create a strong support to allow the performer to touch the screen with a wide range of forces.
d. I need to keep in mind that the range of the Kinect is 500-4000 mm.
e. I need to draw on the screen and create the sounds.

I used a Kinect to capture the depth image. For some of the scenes I read the greyscale depth image each frame and found the brightest pixels, i.e. the points of the fabric pushed closest to the camera. I used these pixels to trigger motion within the scenes. For other scenes I used frame differencing combined with the performer's touch to manipulate the image.
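The per-frame touch detection described above can be sketched as follows. This is a minimal, dependency-free illustration, not the installation's actual code (which reads the live Kinect depth stream); the function names and threshold value are my own assumptions.

```python
def brightest_pixel(frame, width):
    """Return (x, y) of the brightest pixel in a flat greyscale frame.

    On a depth image of stretched spandex, this is the point pressed
    furthest into the screen - the performer's touch.
    """
    idx = max(range(len(frame)), key=lambda i: frame[i])
    return idx % width, idx // width

def frame_diff(prev, cur, threshold=10):
    """Per-pixel absolute difference between two frames.

    Changes below the threshold are zeroed to suppress sensor noise;
    what remains marks where the fabric moved since the last frame.
    """
    return [abs(a - b) if abs(a - b) >= threshold else 0
            for a, b in zip(prev, cur)]
```

In the real piece these per-frame results would then drive the motion in each scene.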

The fabric I chose is spandex. Its elasticity means it can be stretched by more than 20 cm, which is good for depth detection. The projector I used has a 16:9 aspect ratio, so to keep the interaction accurate I had to map both the projector and the Kinect 2 to this same ratio. To make sure the legs of the timber structure did not interfere with the Kinect detection, there is a second mapping which reduces the detection area and then re-adapts the proportions, so that the performer's touch is still registered accurately.
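The two-stage mapping can be illustrated with a simple rectangle-to-rectangle remap. This is a hedged sketch: the rectangle values below are examples, not the piece's actual calibration (the Kinect 2 depth frame is 512 x 424 pixels).

```python
def remap_point(x, y, src, dst):
    """Map a point from one rectangle to another.

    src and dst are (left, top, width, height). Applied once to align
    the Kinect depth frame with the 16:9 projection, and a second time
    to crop out the timber legs while keeping touch positions
    proportionally accurate.
    """
    sx, sy, sw, sh = src
    dx, dy, dw, dh = dst
    return (dx + (x - sx) / sw * dw,
            dy + (y - sy) / sh * dh)
```

Chaining two such remaps (full frame to 16:9, then cropped region back to full output) gives the double mapping described above.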

In total, there are 8 different scenes, which all express distortions of shader and real life, blurring the boundary and submerging the audience in a 'transitional' environment.

In the first scene, triggered when you put your hand in the top-left corner, the iridescent metamorphosis becomes blurred and the wrinkle effect gradually fades. By contrast, if you press your hand slightly towards the right, it shows a peristaltic, dynamic drape effect.

In the second scene, the ripple follows the position of your hand for as long as you press the screen, and returns to its original place when you release. If you use both hands at once, the Kinect detects only one hand at a time, so a flickering ripple jumps between your hands.

In the third scene, the interaction is simply sliding your hands between the left and right corners. The cloth wireframe then flutters slightly with the orientation of your hands.

In the fourth scene, moving black and white stripes induce vertigo. If you push your hand, or any other part of your body, against the screen, you can slow them down and control the vertigo. The middle of the screen becomes the densest concentration of stripes. If you move your hands towards the left side, a cluster of intense light stripes forms on the right side, and vice versa.

In the fifth scene, the interaction is an overlapped 'computational glitch' that grows from a distance in accordance with your position: the glitch appears on the right of the screen if you touch the left area.

In the sixth scene you pick a place on the screen and press it. A hole emerges quickly if you press hard; it stops, or emerges only slightly, if you push lightly. The intensity of the force applied to the screen affects the output.

Scene seven explores the spatial and temporal interaction with the screen. The matrix will transform from stable to volatile, depending on where your hand is positioned along the width of screen.

For scene eight, I used OpenCV to find contours. More specifically, I get the central position of each blob, feed those coordinates into a particle system, and draw several polylines. Unlike the other scenes, this one can be triggered by the user from multiple positions at the same time.
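The piece uses OpenCV's contour finding for this; as a dependency-free sketch of the same idea, a blob's central position can be computed directly from thresholded pixels. The function name and threshold are illustrative, not the installation's actual API.

```python
def blob_centroid(frame, width, threshold=128):
    """Average position of all pixels at or above threshold.

    A stand-in for OpenCV's findContours + moments: the centroid is
    the point that would be handed to the particle system drawing the
    polylines. Returns None when no pixel clears the threshold.
    """
    pts = [(i % width, i // width)
           for i, v in enumerate(frame) if v >= threshold]
    if not pts:
        return None
    n = len(pts)
    return (sum(x for x, _ in pts) / n,
            sum(y for _, y in pts) / n)
```

Running this per connected blob rather than per frame is what allows several touch positions to act at once.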

Developments for the future

a. How can I optimize the program? Since it contains 8 different scenes, the framerate is lower than when running them separately.
b. I need to find a good way to calculate the average position within each cell of a grid, to create a more immersive interaction.
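The grid averaging mentioned in (b) could look something like the sketch below. This is a hypothetical illustration of the planned feature, not code from the piece.

```python
def grid_averages(points, width, height, cols, rows):
    """Average touch position per grid cell.

    Buckets each (x, y) point into a cols x rows grid over a
    width x height screen and returns the mean position of every
    occupied cell, keyed by (col, row).
    """
    cells = {}
    for x, y in points:
        key = (min(int(x / width * cols), cols - 1),
               min(int(y / height * rows), rows - 1))
        cells.setdefault(key, []).append((x, y))
    return {k: (sum(p[0] for p in ps) / len(ps),
                sum(p[1] for p in ps) / len(ps))
            for k, ps in cells.items()}
```

Averaging within cells would smooth out Kinect noise while still letting several regions of the fabric respond independently.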

Sound: KKU NV
Performer: Amy Louise Cartwright
Wood Fabrication: Freddie Hong
Special Thanks For Helping: Jakob, Qiuhan Huang, Arturas, Konstantin Leonenko
Thanks: Theo Papatheodorou

Add-on & Reference