For the past couple of weeks I have been working on the first version of my Soft Meditation piece: a performance in which I meditate while live sensor data from my body is transformed into an animated, artistic visualisation.
For the past year I have been developing, together with a team, the Meditation Lab Experimenter Kit: a tool-kit consisting of a suit with sensors and software that allows you to monitor and optimise your meditation practice through self-experimentation and interaction with the environment.
Soft Meditation is the first application made with this tool-kit. It uses the API to create generic imagery from live sensor data collected with the suit. My aim is to explore whether donating personal data can create a positive, meditative effect in others even though they aren’t meditating themselves.
The title of the performance refers to the environmental psychology term soft fascination, coined by Kaplan and Kaplan as part of their attention restoration theory. In my own words: the theory describes how looking at natural phenomena like waves on the water captures your attention without causing any cognitive strain. That way the mind can restore and refresh. Meditation is all about attention, and I am looking for an easy way to capture the visitors' attention and take them to a place of calm.
Trying to do this with meditation is, despite popular belief, quite hard work. So soft also refers to the gentle and playful way in which, I hope, a meditative state of mind is achieved.
But how do I capture attention in a way that is calming and uplifting? I’ve read some articles (see references below) about the affective properties of motion graphics and compiled an inventory of effects. For my goal, slow, linear motion from left to right would work best. I could then vary the speed and waviness, driven directly by the sensor data, to create more intensity and interest.
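To make this concrete, here is a minimal sketch of how a normalised sensor reading could drive the two motion parameters mentioned above. All names and ranges here are my own assumptions for illustration, not the actual Meditation Lab code:

```java
// Hypothetical mapping from a normalised sensor reading (0.0 to 1.0)
// to two animation parameters: horizontal speed (pixels per frame,
// left to right) and waviness (wave amplitude in pixels).
public class MotionMapping {
    // A slow baseline drift keeps the motion calming even at zero input.
    static final float MIN_SPEED = 0.5f, MAX_SPEED = 3.0f;
    static final float MIN_WAVINESS = 5.0f, MAX_WAVINESS = 40.0f;

    // Clamp the raw value to [0, 1], then interpolate linearly
    // into the target range (like Processing's map() after constrain()).
    static float lerpClamped(float value, float lo, float hi) {
        float t = Math.max(0f, Math.min(1f, value));
        return lo + t * (hi - lo);
    }

    static float speed(float sensor)    { return lerpClamped(sensor, MIN_SPEED, MAX_SPEED); }
    static float waviness(float sensor) { return lerpClamped(sensor, MIN_WAVINESS, MAX_WAVINESS); }
}
```

Clamping before interpolating means a noisy or misbehaving sensor can never push the animation outside the calm range, which matters when the data comes live from the suit.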
For years I’ve been thinking about expressing my inner meditation state through a water metaphor. The movement of water is endlessly fascinating and mysterious, and to my mind perfectly suited to my intentions. I looked for inspiration online, which helped narrow down the choice of software environment.
After exploring various platforms, languages and libraries I ended up with good old Processing as a platform. I found this sketch online which offered a nice starting point to build on. I started modifying it.
Considering I wanted a complex and lively wave animation, I chose pitch (the nodding movement of the head), breathing (top and bottom), finger pressure and heart rate as input sensors.
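One simple way to combine several sensor channels into a single lively wave is to let each channel drive one sine component of a different frequency and sum them. This is only a sketch of the idea under my own assumptions; the weights, frequencies and channel names are illustrative, not the actual sketch I modified:

```java
// Sketch: the wave as a sum of sine components, one per sensor channel.
// All sensor inputs are assumed to be normalised to 0.0 - 1.0.
public class WaveModel {
    // Height of the wave at horizontal position x (pixels) and time t
    // (seconds). Slow breathing drives the long swell, head pitch the
    // medium ripples, finger pressure the fine texture, and heart rate
    // a whole-wave pulse.
    static double height(double x, double t,
                         double pitch, double breath,
                         double pressure, double heartRate) {
        double slow = 20 * breath    * Math.sin(0.01 * x + 0.5 * t);  // long, slow swell
        double mid  = 10 * pitch     * Math.sin(0.03 * x + 1.0 * t);  // medium ripples
        double fast =  5 * pressure  * Math.sin(0.08 * x + 2.0 * t);  // fine texture
        double beat =  3 * heartRate * Math.sin(6.28 * t);            // heartbeat pulse
        return slow + mid + fast + beat;
    }
}
```

In a Processing draw() loop you would evaluate height() across the width of the screen each frame and draw the resulting curve; because every component moves with positive phase velocity, the whole wave drifts in one direction, matching the slow linear motion I was after.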
Interaction with the audience
I have been thinking about how to make the performance multi-directional. I wanted to somehow include the audience in what is happening on the screen. What both the audience and I share are the sounds in the room. I decided to use the marker button provided with the suit to change the animation speed depending on the loudness of the sounds. My idea was that over time the audience would notice the relation between the sounds and the speed.
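A minimal sketch of that interaction, under my own assumptions (the constants and method names are hypothetical): each button press, triggered when I hear a sound, bumps the animation speed, which then decays back towards the calm baseline over the following frames.

```java
// Hypothetical marker-button interaction: sounds in the room, signalled
// by a button press, speed the wave up; silence lets it settle again.
public class SoundSpeed {
    static final double BASELINE = 1.0;  // calm default speed
    static final double BUMP = 2.0;      // speed added per button press
    static final double DECAY = 0.95;    // per-frame decay factor

    double speed = BASELINE;

    // Called when the marker button on the suit is pressed.
    void onMarkerPressed() { speed += BUMP; }

    // Called once per animation frame: ease back towards the baseline.
    void update() { speed = BASELINE + (speed - BASELINE) * DECAY; }
}
```

The exponential decay means a single sound causes a brief swell that fades gently, rather than an abrupt jump back, which fits the soft, non-jarring character of the piece.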
The first performance
I was invited to give a short presentation at the Human-Technology Relations: Postphenomenology and Philosophy of Technology conference at the University of Twente. Instead of a talk I decided to test my prototype. The meditation could only last five minutes. I had programmed the sound of a bell at the beginning and the end. I sat facing the wall while the audience looked at a big screen above my head.
I was a bit nervous about how it would be to meditate in front of some 30 strangers. But once I sat down it was just like it always is: noticing my body (pounding heart) and mind.
I was less pleased with the demo effect. One sensor was not working properly (I still don’t know why). This created hard-edged shapes and motion from right to left: the exact opposite of the intended animation.
I tried pressing the marker button whenever I heard something. But as the performance progressed the room became more and more silent, which I suppose is a sign that it worked, though not something I had counted on.
I am of course interested in the effects of the performance. I supplied the audience with the Brief Mood Introspection Scale (BMIS). Four sub-scores can be computed from the BMIS: Pleasant-Unpleasant, Arousal-Calm, Positive-Tired and Negative-Relaxed mood. I asked the audience to fill in the questionnaire before (as a baseline) and after the performance. Ten questionnaires were returned, of which six were complete and correct. I am working on the results and will report on them in a later post.
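The analysis step I am working on can be sketched roughly as follows: for each of the four sub-scores, average the post-performance minus baseline difference over the returned questionnaires. This is only the difference step; the BMIS scoring key itself, which maps the scale's adjectives to subscales, is not reproduced here.

```java
// Minimal sketch of the before/after BMIS analysis (illustrative only).
public class BmisDelta {
    // before[p][i] and after[p][i]: participant p's sub-score i, in the
    // order Pleasant-Unpleasant, Arousal-Calm, Positive-Tired,
    // Negative-Relaxed.
    static double[] meanDelta(double[][] before, double[][] after) {
        double[] mean = new double[4];
        for (int p = 0; p < before.length; p++)
            for (int i = 0; i < 4; i++)
                mean[i] += (after[p][i] - before[p][i]) / before.length;
        return mean;
    }
}
```

With only six complete questionnaires the means will be noisy, so they can only hint at a direction of effect rather than establish one.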
I was pleased to hear that people were fascinated by the wave and tried to work out what it signified. People found the performance interesting and aesthetically pleasing. We discussed what caused the effects: the context, the staging of me sitting there and people wanting to comply, the animation or the silence? A lot of things to explore further!
One participant came up to me later and explained how much impact the performance had on him. He found it very calming. “Everything just dropped from me” he explained. It also made him think about silence in his life and looking inward more. This is all I can hope to achieve. I continue my research with new energy and inspiration.
The next version of the performance will be on show during the biggest knowledge festival of the southern Netherlands (het grootste kennisfestival van zuidnederland) in Breda on September 13th.
– Feng, Chao, Bartram, Lyn & Gromala, Diane (2016). Beyond Data: Abstract Motionscapes as Affective Visualization. Leonardo, 50. doi:10.1162/LEON_a_01229.
– Lockyer, Matt & Bartram, Lyn (2012). Affective Motion Textures. Computers & Graphics.
– Piff, Paul K., Dietze, Pia, Feinberg, Matthew, Stancato, Daniel & Keltner, Dacher (2015). Awe, the Small Self, and Prosocial Behavior. Journal of Personality and Social Psychology, 108, 883-899. doi:10.1037/pspi0000018.