Microsoft shows OmniTouch: its vision for the future of touch interfaces (hint: they’re everywhere)

Now, here’s what we want to see more of from Microsoft - innovation. Two teams at Microsoft Research have been working on two very exciting projects and presented their findings at the UIST 2011 user interface symposium. The first project, OmniTouch, is the most eye-catching with its futuristic idea - turn every surface into a touch-enabled space.

How did they do it? By mounting a smaller, Kinect-like device on your shoulder, which lets you interact with any surface using multitouch gestures - tapping, dragging, and even pinch-to-zoom. It sounds almost incredible, and the researchers admit that the first three weeks of development were the hardest.

"We wanted to capitalize on the tremendous surface area the real world provides,” explains Hrvoje Benko, of the Natural Interaction Research group. “The surface area of one hand alone exceeds that of typical smart phones. Tables are an order of magnitude larger than a tablet computer. If we could appropriate these ad hoc surfaces in an on-demand way, we could deliver all of the benefits of mobility while expanding the user’s interactive capability."

To do that, they built the Kinect-like device from a laser pico projector, which casts images onto any surface, and a depth-sensing camera, which is responsible for the magic. Tweaking the depth camera to recognize human fingers as the source of input, and tuning the accuracy of its depth readings, proved key to the project's success.

"Sensing touch on an arbitrary deformable surface is a difficult problem that no one has tackled before," the team notes. Touch surfaces are usually highly engineered devices, whereas the researchers wanted to turn walls, notepads, and hands into interactive surfaces, all while letting the user move about.

So while by now we’re used to Kinect's user recognition, this was a tougher problem to face, as it required sensing not only where and how the user moves, but also whether the user taps on a surface. Since those surfaces differ, depth-camera performance was essential. The researchers managed to get accurate feedback and detect when a finger comes within 0.4” (1 cm) of a surface. At that distance a tap is assumed, and they even managed to maintain that state for actions like dragging and pinching. The researchers' video demonstrates everything we've said so far.
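The tap-detection logic described above boils down to a simple rule: if the depth camera sees the finger within about 1 cm of the surface, a touch is assumed, and that state is held for drags and pinches. Here is a minimal sketch of that idea in Python; all names, thresholds, and the depth readings are illustrative assumptions, not code or data from the actual OmniTouch system.

```python
TOUCH_THRESHOLD_CM = 1.0  # finger within ~1 cm of the surface counts as contact


class TouchTracker:
    """Toy state machine for depth-based touch detection (illustrative only)."""

    def __init__(self, threshold_cm=TOUCH_THRESHOLD_CM):
        self.threshold_cm = threshold_cm
        self.touching = False

    def update(self, finger_depth_cm, surface_depth_cm):
        """Classify one frame, given finger and surface distances from the camera."""
        gap = surface_depth_cm - finger_depth_cm  # finger's height above the surface
        if gap <= self.threshold_cm:
            if not self.touching:
                self.touching = True
                return "touch_down"   # tap begins
            return "touch_move"       # sustained contact: dragging or pinching
        if self.touching:
            self.touching = False
            return "touch_up"         # finger lifted away
        return "hover"


tracker = TouchTracker()
# Hypothetical frames: surface 50 cm away; finger approaches, touches, drags, lifts.
events = [tracker.update(f, 50.0) for f in [47.0, 49.2, 49.5, 47.5]]
# -> ['hover', 'touch_down', 'touch_move', 'touch_up']
```

The real system has to do much more than this, of course - segmenting fingers out of a noisy depth image on deformable surfaces like a palm - but the hover/contact state machine is the core of turning depth data into tap, drag, and pinch events.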


The prototype you see in the image is not small at all - in fact, it's far too big for use in public. But the research team agrees that there are no major barriers to miniaturizing it to the size of a matchbox, at which point it could conveniently be worn as a watch or a pendant.

Now, there was also a second ambitious project, PocketTouch, which might not look as exciting but, if you think about it, is equally thought-provoking. PocketTouch aims to let you interact with your device through different fabrics. This allows “eyes-free” input, so you can use all kinds of gestures without having to look at your device at all times.

The results exceeded expectations, and the only thing left for us now is to wish these technologies reach the mainstream sooner rather than later.

