It's a fascinating system, with lots of potential, but it's not going to replace actually touching the thing I want to touch on the screen, and it's not going to get rid of the problem of "Gorilla Arm" which happens if you hold your arm out straight for more than a few minutes.
I'm not sure I follow. You do realize you can use the Leap while resting your arm, right? The technology doesn't suddenly stop functioning because it notices you're not getting tired enough.
If your laptop webcam were a Leap sensor, you'd be able to use it to control a virtual keyboard/mouse while resting your arm on pretty much anything around you. You could use your work table as a keyboard. Or if you're lying down, use your belly. Though I'm sure future Leap-ready devices will have their own built-in solution (like a tablet case specially designed to fold forward and serve as support for your hands while typing, or something). There's nothing inherent to the technology that implies you must move your hands more, or hold them in the air any more than you already do with current input devices. People only do so in the demos to show off the technology.
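To make that concrete, here's a rough Python sketch of the "desk as keyboard" idea. The frame/finger objects and every name here are invented for illustration (this is not the real Leap SDK); the point is just that a "keypress" is a fingertip crossing a calibrated surface plane, not a hand waving in the air:

```python
# Hypothetical sketch: treat a resting surface (a desk, your belly) as a
# virtual keyboard by registering a "keypress" whenever a fingertip dips
# below a calibrated surface plane. All names are illustrative.

SURFACE_Y_MM = 0.0     # calibrated height of the resting surface
PRESS_DEPTH_MM = 3.0   # how far past the plane counts as a press

def fingertip_presses(frame):
    """Yield (finger_id, x, z) for each fingertip pressed into the surface."""
    for finger in frame.fingers:
        x, y, z = finger.tip_position  # millimeters; y is height
        if y <= SURFACE_Y_MM - PRESS_DEPTH_MM:
            yield finger.id, x, z

def key_for_position(x, z, layout):
    """Map a surface (x, z) coordinate to a key in a virtual layout.

    `layout` maps key names to (x0, z0, x1, z1) rectangles on the surface.
    """
    for key, (x0, z0, x1, z1) in layout.items():
        if x0 <= x <= x1 and z0 <= z <= z1:
            return key
    return None
```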
I mean, there are a lot of hard and interesting problems created by the Leap that will be fun to try to solve. But a tired arm seems so simple to solve that it's almost boring.
Unless I'm missing something and you're talking about something else entirely?
You seem to have missed the main point: "not going to replace actually touching the thing I want to touch on the screen".
That kind of direct manipulation is unlikely to go away. The Leap will be able to support something like it, but touch itself isn't going anywhere. That said, I realize we are in the midst of overly sensational headlines.
I was just responding to the fact that David Holz referred to existing touch apps as "legacy" apps. It does have a charge to it which is why I used it in the headline. What I would think all the hackers would like about the Leap is that it progressively enhances on top of existing touch (and GUI) UIs. It will replace those things to the extent that it's useful to people, but it doesn't seem like it breaks anything downstream of it.
>>actually touching the thing I want to touch on the screen...
You're not though... Actually touching the thing.
I don't see the massive loss you imply between touching the screen and manipulating objects 'beyond' the screen.
If anything it's the other way round: this is a far finer form of direct action on an interface element than touch, because where touch merely removes the metaphor (my hand moves a thing, which moves a thing, which I can use to do things on the screen), this removes the screen itself.
That simple feat opens up a multitude of new kinds of gestures, as well as giving you double the contact points of most multitouch devices.
Leap Motion needs to figure out how to provide haptic feedback on this thing next...
I disagree entirely. How can you revolutionize touch without touching anything? I watched the demo -- it polled extremely quickly, but the idea that it will improve drawing or gaming seems hilarious and wrong. (In the Fruit Ninja demo, his ability was hilariously bad compared to a touchscreen -- like comparing a mouse to a joystick! And they had the nerve to say it would IMPROVE that game!)
We don't draw and manipulate in 3D; we do it in 2D. And this system is inherently 3D. Could you imagine a keyboard floating invisibly in front of you, with zero tactile feedback and zero idea of where your fingers are? A ghost keyboard! Sounds like hunt-and-peck forever.
If this tech can't replace a simple keyboard, then it CERTAINLY isn't replacing mobile touch.
I still buy the "your arm will get tired" argument. When I draw, about 95% of my arm past the elbow touches the table. With this, it's about 10%. The rest is supported by muscle and will tire -- period. Look at the guy doing the demo! He awkwardly holds the hand with the pencil close to his chest AND needs his other hand to keep it steady (something, you know, a table normally does for you).
I think this is cool and would be great for casual usage and browsing, but I don't see it having any real impact on non-casual gaming or pro applications. Maybe they'll supplement a keyboard/mouse with this, but without ubiquity it seems unlikely any major vendor would incorporate this tech.
However, the fact that 80% of these demos are Google Maps and games that are predisposed to ultra-simple input is super telling -- they can't do anything serious with it yet.
Side note: Have there been any studies (or even informal accounts) on people that hold their arms up a lot (e.g. puppeteers)? Do they still get tired arms easily, or have they gotten so used to it that they can do a day of work without their arms getting tired?
I wonder if this could be a side effect of Leap. The motions required are not as violent as those of conductors, but then again there are finer/weaker muscles in our wrists/fingers that could be affected.
This is called strength training, and yes, if you spend the majority of your day holding your arms up your muscles will adapt and it will become easier to do for longer without as many breaks.
Since we probably should not be sitting in front of a computer for hours at a time, "Gorilla Arm" is a feature, not a bug.
Such new kinds of interfaces might even create incentives to make UI's more efficient in other areas since people would naturally want to limit their time working on the computer.
I work at a computer 8 hours per day. It's my job to make the computer do things. If I could do it more efficiently then I'd _still_ be at the computer 8 hours per day, I'd just be getting more done while I was there.
(Okay, there are meetings to nail down specs, etc. as well. But the majority of my time is definitely spent telling the computer what to do.)
I've been watching this since the public announcement date: preordered, applied for the dev SDK, been active on the forums, and pondered the possibilities (read: waved my hands in the air a lot).
My conclusion: THIS WILL REPLACE THE TRACKPAD.
Not augment, not sit alongside. REPLACE. And not only because the Leap is awesome, but because the trackpad is horrible.
EDIT: You'll remember not upvoting this one when you're proved WRONG.
I don't get it... if I want my computer to take note of my hands I have to physically touch my keyboard or trackpad; any movement lacking this physical contact doesn't affect it. With this, how do I signal my computer to ignore movements which are not intended for it?
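For what it's worth, one common answer to this (just a sketch of the idea, not anything Leap has announced) is an explicit engagement zone: motion only counts as input while your hand is inside a defined volume, much the way a touchscreen only counts actual contact. In hypothetical Python:

```python
# Illustrative sketch of an "engagement zone": hand motion only counts as
# input while the palm is inside a defined interaction volume. All names
# and the frame/hand objects here are hypothetical.

ZONE = {"x": (-150, 150), "y": (100, 300), "z": (-100, 100)}  # millimeters

def is_engaged(palm_position):
    """True if the palm is inside the interaction volume."""
    x, y, z = palm_position
    return (ZONE["x"][0] <= x <= ZONE["x"][1] and
            ZONE["y"][0] <= y <= ZONE["y"][1] and
            ZONE["z"][0] <= z <= ZONE["z"][1])

def handle_frame(frame, dispatch):
    """Forward hands to the application only while they are in the zone."""
    for hand in frame.hands:
        if is_engaged(hand.palm_position):
            dispatch(hand)  # anything outside the zone is simply ignored
```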
Having seen a prototype, I can only say that this tech is a game changer -- comparable to the mouse, but it will make keyboard-style input/UX obsolete. It wouldn't surprise me if the company gets snatched up before it ships a product.
Couldn't one duplicate this with just a few webcams and some computer vision? I haven't read their patents or anything, but I thought I read that the thing just has three webcams in it.
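The geometry, at least, is standard: two calibrated cameras that both see the same fingertip give you its 3D position by triangulation. A minimal OpenCV sketch, assuming you already have projection matrices from calibration (the baseline and pixel coordinates below are placeholders):

```python
import numpy as np
import cv2

# Minimal stereo triangulation sketch: recover a 3D point from matched
# 2D observations in two calibrated cameras. P1 and P2 are 3x4 camera
# projection matrices; real ones would come from stereo calibration.

P1 = np.eye(3, 4)                                            # camera 1
P2 = np.hstack([np.eye(3), np.array([[-60.0], [0.0], [0.0]])])  # ~60 mm baseline

pt1 = np.array([[320.0], [240.0]])   # fingertip pixel in camera 1
pt2 = np.array([[300.0], [240.0]])   # the same fingertip in camera 2

homog = cv2.triangulatePoints(P1, P2, pt1, pt2)  # 4x1 homogeneous point
xyz = (homog[:3] / homog[3]).ravel()             # back to 3D coordinates
print(xyz)
```

The hard part isn't this math; it's doing it robustly, at very high frame rates, with very high accuracy, which is presumably where Leap's secret sauce lies.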
This might be useful in many niche cases, though I doubt it will see widespread usage or replace traditional input methods. I imagine it being useful when you quickly need to switch to another desktop, when you're just browsing the web, or when playing another one of those funny games where you have to juggle balls or something.
Or maybe it'll forever remain one of those cool demo tools, where you show off a model of your newly developed protein to some executives on a large screen.
Not sure you could get more narrow-minded than that. I'd suggest opening up your outlook a few millimeters and try to envision some of the zillions of ways this could be used. Here's a hint: you might not be sitting in front of a PC.
I was at a hackathon over in Santa Clara recently where we were using consumer brain wave reading headsets:
http://www.neurosky.com/
Unfortunately, it looked like only something like 8 signals were read, the two easiest being concentration and relaxation. Blinking was there too, although it had false positives and was delivered as a probability rather than a boolean. But hey, the tech is coming.
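For a sense of what working with signals like these looks like, here's a sketch of the kind of handling code you end up writing. The sample format is invented for illustration, not NeuroSky's actual API:

```python
# Hedged sketch of consuming headset samples like those described above:
# 0-100 concentration/relaxation values plus a blink probability.
# The sample dict format is made up; the real NeuroSky stream differs.

BLINK_THRESHOLD = 0.8  # treat only high-probability blinks as intentional

def handle_sample(sample, on_blink):
    """Process one sample: {'attention', 'meditation', 'blink_probability'}."""
    if sample["blink_probability"] >= BLINK_THRESHOLD:
        on_blink()  # debouncing against false positives omitted for brevity
    return sample["attention"], sample["meditation"]
```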
I like that. I think that's part of the appeal here. I don't mean the electrodes in your brain part, but that there is no friction between your volition and something happening on the screen.
I like this a lot, but I think it will augment instead of replace systems.
So I walk into my living room, say "Computer. Gesture interface" then change around things on my large screen monitor while I remain standing, perhaps setting up some music to play and launching and arranging some apps. Then I say "Computer. Mouse interface" and sit down and begin programming.
Not sure of the details, but this inside a mixed environment would be awesome. Might not want to work all day with my hands extended, but I shouldn't need to sit and find a keyboard for everything, either.
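A toy Python sketch of that mode-switching idea (the command strings and event shape are made up; this only shows the routing):

```python
# Toy sketch of the voice-switched setup described above: a router that
# only forwards events from whichever input mode is currently active.

class InputRouter:
    def __init__(self):
        self.mode = "mouse"  # default to conventional input

    def on_voice_command(self, command):
        """e.g. "Computer. Gesture interface" / "Computer. Mouse interface"."""
        if "gesture" in command.lower():
            self.mode = "gesture"
        elif "mouse" in command.lower():
            self.mode = "mouse"

    def on_event(self, event):
        # Only events from the active input mode reach the applications.
        if event.source == self.mode:
            event.deliver()
```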
I'd also like to see surface displays with touch feedback. I think there are a lot of non-intuitive things like image processing or gaming that could benefit from haptic feedback systems. It's going to be awesome watching these technologies mix with each other over the next ten years or so. Even the simple idea of having multiple screens all over the wall and having them switch depending on which one you're staring at would completely change our relationship with modern technology. (And while we're blue-skying, true 3D would be really nice too.) Perhaps we end up with all surfaces being some sort of color e-ink, and something like the Leap as a standard feature of house, office, and car construction? Could that happen?
Ultimately, I really do think it will be a mixed environment like you say (or a different paradigm entirely). There are a lot of NUI proponents who gas up their work in speech or touch (and touch related) interfaces.
I do agree that they will be used pretty widely (maybe not so much speech...), but all of these technologies remain stop-gaps in the sense that the mouse was a stop-gap solution. They are great stepping stones to something truly natural* that encompasses more than one method of interaction.
* I don't necessarily think "natural" is necessary in the future, but that seems to be what a lot of folks strive for.