Spirit Fingers: Gestural Art in Action with the Leap Motion

I received the Leap Motion a couple of weeks back, and have had fun using it sporadically and testing out as wide a variety of apps as I can without breaking the bank. There’s an impressive lineup already available, from educational apps to music, games to random art/physics interactive visualization simulators—and more being released each week—so it’s exciting to see what people are doing with this new technology and to have the chance to engage with it myself.

It must be said: for the vast majority of computing tasks as we currently conceive of them, the current incarnation of the Leap doesn’t add much to practical human-computer interaction. It’s not 100% reliable; in fact it’s probably not even 98% or 95% reliable, at least for many of the first generation of applications designed for it. But the Leap is a fascinating and fantastic device for many other reasons.

One, it’s a great tool for visualizing things, particularly in ways that give users the impression of feeling immersed in an environment, and granting them some sense of agency and control over it in a way that, if not exactly tactile, is at least more visceral and embodied than using a mouse or keyboard. One critique I’ve heard (certainly valid) is that there’s no form of tactile response, making the theoretically ultra-precise spatial recognition/interaction abilities far less precise in practice—finger jitter, disappearing digits, and an unclear gestural vocabulary that differs between apps are some of the things that render it confusing or somewhat vague and finicky in use.

The value proposition is also as yet unclear—most apps are fun diversions, and some are innovative uses of interaction, clever and even genuinely beautiful, but there’s not much here that jumps out as revolutionary, a new paradigm in efficiency, or in general “serious business”. This isn’t anything unexpected at this early stage—but I would love to see the Leap technology seamlessly integrated into Blender, AutoCAD, or other 3D modeling programs, or used to interface with even a rough sculptural modeling app for 3D printers, for example. I would love for it to recognize more complex gestures, and to do so more accurately, and to have a more robust and consistent vocabulary to make it easier for both developers and users to take advantage of the super-cool technology underpinning this remarkably unassuming, pack-of-gum-sized device.

But, that said, damned if the way you can zoom through space with this thing using your hands isn’t awesome. It’s clearly great for manipulating (if not yet building) 3D models—from molecular bio-realms to galactic structures, it gives you the power to fly, rotate, zoom and spin at will. Not all that much superior to a video game in many respects, but the ability to use your hands to attain the illusion of pushing, pulling and manipulating something in space does seem to make it in a sense more real.

My favorite use of this technology, though, is somewhat different—it’s in the way it enables spontaneous creativity, play, and (you might say) even a form of meditation. Many of the most fun apps are ones with fairly standard algorithmic bases, such as particle visualizers or music generators. You attract and repel a star field, or a school of fish, with your fingers. (An aside: this seems a great way to introduce kids to complexity or generative/chaotic systems!) You pluck strings, tap buttons, air-engrave pitch onto a circular representation of a waveform (fun, simple computer-generated music; I’d imagine this also has cool potential for performative/stage use). The important thing for me about these applications is that they encourage a kind of freeform digital (in both meanings of the word) choreography. It makes you aware of the range of motion of your hands, and of the strangely compelling type of control they can wield, with outsized effects translating onscreen. This gives the feeling of a cool sort of precision and agency, with great expressive potential, and gives you a kind of feedback mechanism for experimenting with various gestures, speeds and motions.
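For the curious, the algorithmic core of those attract/repel visualizers is simpler than the effect suggests. Here’s a minimal sketch (my own illustration, not code from any actual Leap app) of one common approach: each frame, every particle feels a clamped inverse-square pull toward a pointer position—which, in a real app, would come from a tracked fingertip:

```python
import math

def step_particles(particles, pointer, strength=0.5, damping=0.9, dt=1.0):
    """Advance a toy 2D particle field one frame.

    Each particle is [x, y, vx, vy]. A positive `strength` attracts
    particles toward the pointer; a negative value repels them.
    """
    updated = []
    for x, y, vx, vy in particles:
        dx, dy = pointer[0] - x, pointer[1] - y
        dist = math.hypot(dx, dy) or 1e-6  # avoid division by zero
        # Inverse-square pull, clamped so nearby particles don't explode
        force = strength / max(dist * dist, 1.0)
        vx = (vx + force * dx / dist * dt) * damping
        vy = (vy + force * dy / dist * dt) * damping
        updated.append([x + vx * dt, y + vy * dt, vx, vy])
    return updated

# One particle at the origin, pointer off to the right:
field = [[0.0, 0.0, 0.0, 0.0]]
for _ in range(10):
    field = step_particles(field, pointer=(10.0, 0.0))
print(field[0][0] > 0.0)  # True: the particle drifted toward the pointer
```

Flip the sign of `strength` (or map it to a gesture, say an open palm versus a pinch) and the same loop scatters the swarm instead—which is most of what’s happening when a star field recoils from your hand.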

As far as games: some are good, fun, and entertaining, but nothing I look forward to playing months from now (though Double Fine’s “Dropchord” has a banging soundtrack, and admittedly there are many I’ve yet to try). As far as productivity apps, designed to help you manipulate the normal functions of your computer by way of fancy gestures: these seem a nice sentiment, but I can’t see how they’ll actually make me more productive. There’s clearly plenty of room for imagination in this space, but I don’t want to scroll and click by waving my fingers in the air—I want developers to invent entirely new gestures that surprise me with their ingenuity, with things I wouldn’t have thought of but that make intuitive sense, like iOS did in reinventing touch-based interfaces. That line, straddling foreign invention and intuitive ease-of-use, is where the magic happens.

To call attention to a couple apps I really like: Kyoto is a great, compact meditative experience—not exactly a game, but more than a simple particle simulation. It has a clear aesthetic, some gameplay mechanics that guide you through it, simple gestures and scenes, and well-integrated sound and graphics that play to the strengths of the Leap; and it’s a short experience (5-10 minutes) so you don’t get bored—a bit of a palate-cleanser after trying several games, Google Earth, and other apps with complex and often frustrating gesture-recognition systems.

I’m also intrigued by the NYTimes app, which is at a simple proof-of-concept stage right now, but is a good testing-ground example of the rich potential for designing new reading experiences. It can be kind of fun to navigate not actual 3D space but the space of information, and see what metaphors translate. I look forward to seeing ways to visualize and navigate time via the Leap, or perhaps traverse the structure of the Internet and other complex infrastructures/networks that exist in the real world yet lie obscured beneath layers of complexity and abstraction.

Given that some of the apps with most potential are currently hobbled by lag and/or hard to control with requisite precision, what I’m most excited for are small experiments that test new paradigms, rather than for show-stopping games and visually exciting applications. I’m particularly interested in interactive visualization possibilities for information, data, narrative, and other things that could lend themselves to immersion and spatial navigation metaphors.

In some sense, regardless of whether the Leap is a major, lasting success, it’s already succeeding at training us early adopters to adopt pieces of a new language of interaction, engendering growing comfort and familiarity with different ways of using digital devices and our bodies in concert. Of course the Wii and Kinect did something similar, and new devices like the Oculus Rift and the Meta AR glasses promise great leaps forward as well. But the Leap is precise, focused, and impressively robust and open—and I like how it treats the hands as special tools that have the potential to interact with our existing computer systems in interesting ways. Like all technology, it’s imperfect, but it’s great to watch it evolve.