Wave ‘em like you just don’t care – A couple of gesture-based interaction tests

MacBook Pro with Phidget board and sonar sensor
 
One of the things that we do at the UX Lab is to prototype design ideas really quickly. Seeing/hearing/feeling is believing, so we spend quite some time mocking things up using quick coding, Wizard of Oz techniques, or (quite often) just some duct tape. The aim is mostly to investigate certain design qualities rather than to build production-grade products, in order to rapidly get a hands-on feel for what we are thinking about. Most design theorists would probably refer to this as working iteratively, though we like IDEO’s David Kelley’s description better: thinking with your hands.
 
This spring we worked on how to visualise and interact with data. One issue that we focused a bit more on was that of interacting with material indirectly, such as when you present data as a projection on a wall and want to interact with it. There are of course touch-sensitive surfaces to project onto (or through), but we focused on a passive surface acting only as a screen. This led us to start investigating more physical ways of interacting with the presentation in question.
In recent years we have seen fantastic developments in sensor technologies of all kinds, ranging from grassroots DIY solutions such as Phidgets and Arduino to commercial products such as Microsoft Kinect. We decided to investigate two interaction solutions – one based on the Kinect and the other on a Phidget distance sensor. These trials were done rather quickly, with the sole aim of getting a better understanding of what these technologies could provide us with and of reflecting on the possibilities they could bring to our future work.

 

Trying out the Kinect

Our first trial was with the Kinect. The setup was dead simple: we just bought a Kinect sensor (the Xbox game version; the PC version wasn’t available at the time) and connected it to a MacBook Pro (OS X Lion) via its USB cable. There are various online communities (such as OpenKinect) from which we learnt a lot about getting the setup to work. For the trial we projected Google Earth onto a screen and placed the Kinect sensor just below the projection area. After some fine-tuning we started to play around. Here’s a short video of one of the sessions:

Spinning and zooming in on the earth just by waving one’s hands is actually rather amazing. However, though the Kinect is a great piece of technology, interacting in thin air was not as easy as we might have thought (as the video shows…). One issue was the rather awkward position one had to stand in, raised hands and all, which quickly became tiresome. Another issue, and definitely a more problematic one, was the lack of understandable boundaries. We played around with this solution for quite a while, but it turned out to be very difficult to get a feel for the reach of the Kinect’s field of vision and its coordinates. Without any visible corners or “vertical plane” to relate to, we often moved outside the parameters, leaving the application we tried to control either unresponsive, since the Kinect couldn’t see our hands anymore, or out of control, with the earth spinning wildly because the application interpreted our last gesture before leaving the Kinect’s field of vision as a drag-and-release command.

This was of course really frustrating; however, we shouldn’t rule out the Kinect just yet. Since our chosen setup was rather basic, we couldn’t really do much more with it than just use it. If we had had the time and skills, the boundary issue could have been addressed in a number of ways, for instance by providing some kind of feedback (visuals, audio etc.) that could have given us enough clues to avoid ending up in trouble all the time. This might be something to look into in future projects.
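For the technically curious, here is a minimal sketch of the kind of thing such feedback could build on, using the Python bindings that ship with OpenKinect’s libfreenect. This is not the code we actually ran, and the margin value is just an assumption: grab a depth frame, treat the nearest point as a crude stand-in for the presenter’s hand, and flag when it drifts towards the edge of the sensor’s field of vision.

```python
# A minimal sketch, not our actual glue code: read one depth frame via the
# OpenKinect (libfreenect) Python bindings, locate the nearest point as a
# crude "hand" estimate, and warn when it nears the edge of the frame.
# Assumes the freenect Python wrapper and numpy are installed.
import freenect
import numpy as np

FRAME_W, FRAME_H = 640, 480   # Kinect depth image resolution
MARGIN = 60                   # pixels from the edge at which to warn (assumed value)

def nearest_point():
    depth, _timestamp = freenect.sync_get_depth()  # raw 11-bit depth image
    depth = depth.astype(np.float32)
    depth[depth == 2047] = np.inf                  # 2047 means "no reading"
    y, x = np.unravel_index(np.argmin(depth), depth.shape)
    return int(x), int(y)

def near_boundary(x, y, margin=MARGIN):
    return (x < margin or x > FRAME_W - margin or
            y < margin or y > FRAME_H - margin)

if __name__ == "__main__":
    x, y = nearest_point()
    if near_boundary(x, y):
        print("Warning: about to leave the Kinect's field of vision")  # cue the presenter
    else:
        print("Tracking at (%d, %d)" % (x, y))
```

A real solution would of course track hands properly rather than just the nearest point, but even this level of information would be enough to flash a border or play a sound before tracking is lost.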

A Phidget sonar sensor version

The second trial was based on a Phidget solution (as shown in the first picture of this post). The setup was once again simple, though it demanded a bit more work than the Kinect. Building a sensor like the Kinect was of course not feasible for all kinds of reasons, so instead we decided to only measure the distance between the presenter and the projection surface. This limited the interaction a bit in the sense that it forced the presenter to move within a rather restricted area, but since we just wanted to get an idea of the mix of physical space and what was presented, we decided that it was an OK trade-off. To measure the distance we used a sonar sensor (the MaxBotix EZ-1) connected to a Phidget I/O board (the PhidgetInterfaceKit 8/8/8), which in turn was connected to the same MacBook Pro as before. In order to interpret the readings from the sonar sensor and use them to control some kind of data, we built a prototype in Flash CS5 which reads the Phidget data and uses it to manipulate some graphical objects. The concept we decided to try was that of changing the scale (or granularity) of what was being presented on screen depending on the presenter’s distance from the presentation surface. As shown in the video below, this was illustrated by showing three basic bars (as in a bar chart) when far away from the screen, and then changing these as the presenter gets closer, i.e. disclosing that each bar is actually the sum of four smaller bars (like sub-values), something that would have been rather hard to read from a distance.
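Our prototype itself was written in ActionScript in Flash CS5, but the sensor-reading part is easy to sketch in Python. The snippet below is an illustration under a few assumptions: it uses the classic Phidgets 2.1 Python library (treat the exact class and method names as assumptions), and the centimetre conversion is the rule-of-thumb from the Phidgets documentation for a MaxBotix EZ-1 on a 0–1000 analog input.

```python
# Illustrative sketch, not our Flash prototype: read the sonar value from the
# PhidgetInterfaceKit 8/8/8 and convert it to an approximate distance.
# Assumes the classic Phidgets 2.1 Python library; names may differ in newer APIs.
import time
from Phidgets.Devices.InterfaceKit import InterfaceKit

def sensor_to_cm(sensor_value):
    # InterfaceKit analog inputs report 0-1000; roughly 1.296 cm per unit
    # for the MaxBotix EZ-1 according to the Phidgets docs.
    return sensor_value * 1.296

def on_sensor_change(event):
    if event.index == 0:  # sonar plugged into analog input 0 (our choice here)
        distance_cm = sensor_to_cm(event.value)
        print("Presenter is roughly %.0f cm from the screen" % distance_cm)
        # In the Flash prototype this value drove the bar chart instead.

ik = InterfaceKit()
ik.setOnSensorChangeHandler(on_sensor_change)
ik.openPhidget()
ik.waitForAttach(10000)  # wait up to 10 s for the board to attach

try:
    while True:
        time.sleep(0.1)  # sensor events arrive on the library's own thread
except KeyboardInterrupt:
    ik.closePhidget()
```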

The use of just one sensor with a rather narrow “field of vision” was a bit constraining, but it was also rather easy to get used to. A great thing about building our own prototype was that we could quickly try different things out. For instance, the video above shows the second version of the demo application, in which we introduced an intermediate state of the graph as the user moves closer to the screen (when the bars start flickering). In the first version of the prototype this behaviour wasn’t implemented, making it really hard for the presenter to even understand that the presentation could change unless he/she went close enough for the change to be triggered. By building our own application we could elaborate on this in a very quick and flexible way.
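In rough terms, the distance-to-state mapping in that second version worked like the sketch below (Python again for readability; the threshold values are made up for the illustration, not the ones we actually tuned by hand):

```python
# Illustrative mapping from presenter distance to chart state in the second
# demo version. Threshold values are invented for the example.
NEAR_CM = 120      # closer than this: show the twelve sub-bars
FLICKER_CM = 200   # between the two thresholds: flicker as a hint

def chart_state(distance_cm):
    if distance_cm < NEAR_CM:
        return "detailed"    # each bar split into its four sub-values
    elif distance_cm < FLICKER_CM:
        return "transition"  # bars flicker to signal that more detail is available
    else:
        return "coarse"      # the three summary bars, readable from afar
```

The “transition” state is what addressed the first-version problem: the presenter gets a cue that something will happen before it actually does.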

One observation from this trial was that the sensor was on all the time, which meant that the presenter couldn’t avoid causing the chart to react when moving around, forcing the presenter to be very aware of his/her position in relation to the screen. Another issue was that the sensor reading had to be pre-mapped to the application and/or material being presented. With the Kinect the presenter could more or less behave in any way possible, since only some gestures were mapped to generic computer events. This allowed for a much higher degree of freedom when interacting with the material. In the case of the Phidget sensor the situation was quite the opposite, at least in the prototype we built for this test. This doesn’t have to be a problem though; perhaps it’s just another way of looking at how gesture-based interaction can be addressed and developed for.

This activity provided us with quite a lot of impressions, thoughts and ideas. We are currently not developing anything in particular based on it, but the work lives on as a kind of tangible sketch alongside the other material that we gather and create as we work on various projects. We are also spreading this way of working to other parts of Ericsson as an example of how to investigate areas of interest in a rapid, cost-efficient and, most of all, inspiring way.

 
