You know, for having cold, metal exteriors and mechanical “hearts,” computers are pretty touchy-feely. So are televisions (well, their remotes, anyway). These devices won’t do a damned thing unless a button is pushed, a key is pressed, or a screen is tapped. There are a few exceptions — voice is becoming increasingly powerful, as shown with Apple’s Siri and Google Now — but, for the most part, nothing’s happening unless someone is there to push and poke and swipe. If CES is any indicator, however, this could be changing soon.

Many companies are betting that future user interfaces will focus on gestures instead of touch. It doesn’t matter if it’s Hisense and its gesture-controlled television, Leap Motion and its promise to gesture-ify (that’s a thing, right?) how we interact with computers, or any of the other, too-numerous-to-count companies betting on the interaction method.

In case you haven’t had a chance to see it in person, let me tell you: watching countless people walk past a television or computer, realize it’s got some form of gesture recognition, and proceed to do what I’ve taken to calling the “I’m living in the future!” boogy is great. Watching people at a desktop or a tablet is boring. Watching people throw their hands into the air, step in and out of a camera’s field of view, and try to make a device do their bidding with gestures and over-expression is not.

Some would cite all of this gesticulation as a reason to wave their hands at the future of gesture control (yes, that just happened). I had the chance to use a few of these solutions, and making a “shush” gesture at a television to mute it feels silly whether it’s your first time making the gesture or the tenth. But this is a solvable problem — all it requires is a little bit more time and a smidgen of patience.

Gesture-based controls suffer from a few problems, like limited processing power and imperfect camera technology, but the largest obstacle is discoverability. We’ve known for years that pushing a button, whether it’s physical or virtual, spurs a device into action. But it’s hard to port that interactivity to gesture-based interfaces without simply abstracting the tired mouse-and-keyboard interaction of the present.

How would you fast-forward a movie via gestures, for example? Some might point to their right, others might decide to bring disco back, and still others might expect to just wiggle their head to the side. And then, if you’ve already used that gesture for fast-forwarding, what do you do while you’re browsing? Should gestures change based on context, or should they stay constant, with a discrete gesture for every separate action?
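To make that tradeoff concrete, here’s a toy sketch in Python of the two options. Everything in it (the Context and Gesture names, the action strings) is made up purely for illustration; it isn’t drawn from any SDK these companies actually ship.

```python
from enum import Enum, auto
from typing import Optional


class Context(Enum):
    PLAYBACK = auto()   # watching a movie
    BROWSING = auto()   # flipping through a menu or guide


class Gesture(Enum):
    POINT_RIGHT = auto()
    SHUSH = auto()


# Option 1: context-dependent mapping. The same gesture does different things
# depending on what's on screen: fewer gestures to memorize, more to discover.
CONTEXTUAL_ACTIONS = {
    (Context.PLAYBACK, Gesture.POINT_RIGHT): "fast_forward",
    (Context.BROWSING, Gesture.POINT_RIGHT): "next_page",
    (Context.PLAYBACK, Gesture.SHUSH): "mute",
}

# Option 2: constant mapping. Every action gets its own discrete gesture,
# regardless of context: a bigger vocabulary, but never ambiguous.
GLOBAL_ACTIONS = {
    Gesture.POINT_RIGHT: "fast_forward",
    Gesture.SHUSH: "mute",
}


def dispatch(context: Context, gesture: Gesture, contextual: bool = True) -> Optional[str]:
    """Return the action for a recognized gesture, or None if it isn't mapped."""
    if contextual:
        return CONTEXTUAL_ACTIONS.get((context, gesture))
    return GLOBAL_ACTIONS.get(gesture)


if __name__ == "__main__":
    print(dispatch(Context.PLAYBACK, Gesture.POINT_RIGHT))  # fast_forward
    print(dispatch(Context.BROWSING, Gesture.POINT_RIGHT))  # next_page
```

The contextual table needs fewer gestures but more explaining; the constant one needs more gestures but explains itself. That, in miniature, is the discoverability tension.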

We’re still trying to figure that out. And while we do, these gesture-controlled platforms will have a (justifiably) steep learning curve. Hell, even Apple felt the need to label and demonstrate “slide to unlock” with iOS, despite its near-obviousness. And Apple had the good fortune of developing iOS in secret, allowing it to work on its interactions and then reveal them once they were deemed ready. Many of these manufacturers, rushing to keep up with each other and get in on the ground floor of future interactions, have to do their innovating in the public eye.

Eventually it won’t be so weird to watch people interact with their devices with their whole bodies instead of the tips of their fingers. The technology will help with that a bit as it continues to improve and is able to recognize more gestures or learn more about the person using it and their environment — which is what a company called Cube26 is trying to do with its facial recognition tech — but I suspect that the larger shift will be cultural.

I’m sure the first people who used a keyboard instead of a pen to compose a letter thought it was weird as well. We’re comfortable with what we know, and right now, what we know is keyboards and buttons. But in the future? Well, maybe I’ll have to rename it the “I live in the present!” boogy.
