Operating Smart Devices from the Back of Your Hand

WatchSense uses a depth sensor, a kind of 3D camera, to capture the movements of the thumb and index finger, not only on the back of the hand but also in the space above it.

A novel input method expands the input space to the back of the hand and the 3-D space above it.
Oliver Dietze

Using a depth sensor that tracks thumb and index finger movements on and above the back of the hand, users can control smartwatches, phones, TVs and other devices, including augmented- and virtual-reality applications.

"Every new product generation has better screens, better processors, better cameras, and new sensors, but regarding input, the limitations remain," explains Srinath Sridhar, a researcher in the Graphics, Vision and Video group at the Max Planck Institute for Informatics.

Sridhar and his team have developed an input method that requires only a small camera to track fingertips in mid-air, as well as the touch and position of the fingers on the back of the hand. This combination enables more expressive interactions than any previous sensing technique.

Regarding hardware, the prototype, which the researchers have named "WatchSense", requires only a depth sensor, a much smaller version of the well-known "Kinect" motion sensor for the Xbox 360 video game console.

With WatchSense, the depth sensor is worn on the user's forearm, about 20 cm from the watch. As a sort of 3D camera, it captures the movements of the thumb and index finger, not only on the back of the hand but also in the space above it. The software developed by the researchers recognizes the position and movement of the fingers within the 3D image, allowing the user to control apps on smartphones or other devices.
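To make the idea concrete, the logic described above, turning a tracked fingertip position into either an on-skin touch or a mid-air gesture, could be sketched roughly as follows. This is a hypothetical illustration, not WatchSense's actual code: the `Fingertip` type, coordinate frame, and 5 mm touch threshold are all assumptions for the example.

```python
# Hypothetical sketch of fingertip-to-event mapping (not the WatchSense API).
from dataclasses import dataclass

@dataclass
class Fingertip:
    finger: str  # "thumb" or "index" (the system distinguishes the two)
    x: float     # mm, across the back of the hand (assumed frame)
    y: float     # mm, along the back of the hand
    z: float     # mm above the skin surface; 0 means touching

TOUCH_THRESHOLD_MM = 5.0  # assumed tolerance for registering skin contact

def classify_event(tip: Fingertip) -> str:
    """Distinguish an on-skin touch from mid-air gesture input."""
    if tip.z <= TOUCH_THRESHOLD_MM:
        return f"{tip.finger}_touch at ({tip.x:.0f}, {tip.y:.0f})"
    return f"{tip.finger}_midair at height {tip.z:.0f} mm"

print(classify_event(Fingertip("index", 30.0, 12.0, 2.0)))   # on-skin touch
print(classify_event(Fingertip("thumb", 10.0, 40.0, 55.0)))  # mid-air gesture
```

The split into touch and mid-air events is what lets the same sensor serve both a touchpad-like surface and free-space gestures.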

"The currently available depth sensors do not fit inside a smartwatch, but from the trend it's clear that in the near future, smaller depth sensors will be integrated into smartwatches," Sridhar says.

Hardware alone is not enough, however. According to Sridhar, the scientists' software also had to solve the challenges of the uneven surface of the back of the hand and the fact that the fingers can occlude each other as they move. "The most important thing is that we can not only recognize the fingers, but also distinguish between them," explains Sridhar, "which nobody else had managed to do before in a wearable form factor. We can now do this even in real time."

The software recognizes the exact positions of the thumb and index finger in the 3D image from the depth sensor, because the researchers trained it to do this via machine learning. In addition, the researchers have successfully tested their prototype in combination with several mobile devices and in various scenarios.
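The machine-learning step mentioned above can be illustrated with a deliberately simplified toy: a nearest-centroid classifier that learns to tell thumb patches from index-finger patches in synthetic depth data. Everything here is an assumption for illustration (the synthetic 8x8 patches, the depth values, the classifier choice); the researchers' actual training pipeline is more sophisticated.

```python
# Toy illustration of learning to distinguish fingers from depth data
# (assumed synthetic data and a nearest-centroid classifier, not the
# authors' pipeline).
import numpy as np

rng = np.random.default_rng(0)

def make_patch(kind: str) -> np.ndarray:
    """Synthetic 8x8 depth patch; thumbs are assumed closer to the sensor."""
    base = 40.0 if kind == "thumb" else 80.0  # assumed mean depths in mm
    return base + rng.normal(0.0, 2.0, size=(8, 8))

# "Training": one mean-depth centroid per class from labeled samples.
centroids = {
    kind: float(np.mean([make_patch(kind).mean() for _ in range(50)]))
    for kind in ("thumb", "index")
}

def predict(patch: np.ndarray) -> str:
    """Assign the patch to the class with the nearest depth centroid."""
    return min(centroids, key=lambda k: abs(patch.mean() - centroids[k]))

print(predict(make_patch("thumb")))   # classified as "thumb"
print(predict(make_patch("index")))   # classified as "index"
```

The point of the toy is only the shape of the approach: labeled depth examples in, a learned decision rule out, which then runs on live sensor frames.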
