Unlike the Skinput device created by the two institutions last year, OmniTouch doesn’t require special sensors on the user’s skin.
Instead, it uses a depth-sensing camera – similar to the Microsoft Kinect – to track the user’s finger movements, allowing users to control applications by tapping or dragging their fingers, much as they would with a touchscreen.
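The core trick of depth-based touch sensing can be sketched in a few lines: if a fingertip’s measured depth is nearly the same as the surface behind it, the finger is "touching." The following is a minimal illustration of that idea, not OmniTouch’s actual code; the threshold value and function names are assumptions.

```python
# Hypothetical sketch of depth-based touch detection (not the OmniTouch
# implementation): a fingertip counts as "touching" when its depth is
# within a small threshold of the surface behind it.
import numpy as np

TOUCH_THRESHOLD_MM = 10  # assumed: within 1 cm of the surface = a touch

def is_touching(depth_map, fingertip, surface_depth_mm):
    """Return True if the fingertip at (row, col) lies within
    TOUCH_THRESHOLD_MM of the projected surface."""
    row, col = fingertip
    fingertip_depth = depth_map[row, col]
    return (surface_depth_mm - fingertip_depth) <= TOUCH_THRESHOLD_MM

# Toy 320x240 depth map: flat surface at 600 mm from the camera.
depth = np.full((240, 320), 600, dtype=np.int32)
depth[120, 160] = 550  # fingertip hovering 5 cm above the surface
print(is_touching(depth, (120, 160), 600))  # False (hovering)
depth[120, 160] = 595  # fingertip nearly on the surface
print(is_touching(depth, (120, 160), 600))  # True (touch)
```

A real system would also segment the hand, locate fingertips, and smooth the noisy depth signal over time, but the hover-versus-touch decision reduces to a comparison like this one.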
The projector can superimpose keyboards, keypads and the like onto any surface, automatically adjusting for the surface’s shape and orientation to minimize distortion. It doesn’t need calibration — users can simply wear the device and go.
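Adjusting a projected image to a surface’s shape and orientation is a classic planar-homography problem: given where the four corners of a flat surface appear in projector coordinates, solve for the 3×3 matrix that maps an undistorted interface image onto that quadrilateral. The sketch below shows the standard direct linear transform for this; it is an illustration of the general technique, not OmniTouch’s implementation, and all names and coordinates are made up.

```python
# Hypothetical sketch of homography-based distortion correction (the
# general technique, not OmniTouch's code): map a unit-square interface
# image onto a tilted quadrilateral observed on the target surface.
import numpy as np

def homography(src, dst):
    """Solve the 8 unknowns of H (with h33 fixed to 1) from four
    source/destination point pairs via the direct linear transform."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp(H, point):
    """Apply H to a 2-D point in homogeneous coordinates."""
    x, y, w = H @ np.array([point[0], point[1], 1.0])
    return (x / w, y / w)

# Unit square of the interface image -> quadrilateral on the surface.
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(10, 12), (48, 15), (45, 50), (8, 46)]
H = homography(src, dst)
print(warp(H, (0, 0)))  # lands on the first corner, (10.0, 12.0)
```

Pre-warping every frame of the interface with the inverse of this mapping is what makes a keyboard projected onto a tilted palm or wall appear rectangular to the user.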
“It’s conceivable that anything you can do on today’s mobile devices, you will be able to do on your hand using OmniTouch,” says Chris Harrison of Carnegie Mellon’s Human-Computer Interaction Institute.
The palm of the hand could be used as a phone keypad, or as a tablet for jotting down notes. Maps projected onto a wall could be panned and zoomed with the same finger motions as a conventional multitouch screen.
The device combines a short-range depth camera with a laser pico-projector and is currently mounted on the user’s shoulder. But the team says it could ultimately be the size of a deck of cards, or even integrated into other handheld devices.