7 responses

  1. Kristof
    June 3, 2014

    If a Leap Motion the size of a pack of gum can read your hands, what would a Leap Motion a foot long or a yard long be able to read? Is it scalable? Wouldn’t it be able to scan an entire body if it were larger?

  2. Jeff Kang
    June 7, 2014

    >It was, perhaps, a little too much hype for any startup to live up to out of the gate. After a round of unenthusiastic reviews and slow sales, they’ve had to reposition themselves for the longer haul.

    >Another offering, Cori, would augment voice-controlled systems like Siri with gesture-controls—the combination of the two could improve usability.

    Leap Motion + eye tracker > Leap Motion alone; also: Leap Motion Table Mode (interact with a surface)

    Leap Motion and the eye-tracking companies should partner up and build some more basic, practical applications. E.g., look at an interface element to highlight it, and then do a Leap Motion “click-where-I’m-looking” gesture.

    > “Uwyn’s GameWAVE, available for Mac and Windows, recently debuted in Airspace. It allows you to completely control keyboard and mouse-based video games, as well as your OS.
    > 20 distinct swipe and circle gestures, 6 continuous movement”.

    20!? With an eye tracker ($99, which isn’t that far from the Leap Motion’s price), you probably don’t need as many gestures. E.g., having just a few gestures for single-clicking, double-clicking, and dragging could let you do quite a bit.

    If you plan to have 20 actions, you could just have one Leap Motion gesture that brings up a menu of 20 on-screen buttons, and then use a “select-what-I’m-looking-at” gesture to activate one of the virtual buttons.
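    The idea above — the eye tracker supplies *where* and a small set of air gestures supplies *what* — can be sketched as a tiny dispatch function. This is a hypothetical illustration, not the real Leap Motion or eye-tracker APIs; the gesture names and action labels are made up for the example.

    ```python
    # Hypothetical gaze-plus-gesture dispatch: the eye tracker reports the
    # current gaze point, and a recognized air gesture decides what happens
    # there. A few gestures cover many on-screen targets.

    GESTURE_ACTIONS = {
        "tap": "single-click",
        "double_tap": "double-click",
        "pinch_hold": "drag",
    }

    def dispatch(gaze_point, gesture):
        """Map a recognized air gesture to an action at the gaze point."""
        action = GESTURE_ACTIONS.get(gesture)
        if action is None:
            return None  # unrecognized gesture: do nothing
        x, y = gaze_point
        return {"action": action, "x": x, "y": y}

    # A "tap" while looking at (640, 360) becomes a single-click there.
    print(dispatch((640, 360), "tap"))
    ```

    With this split, adding a 20-button on-screen menu doesn’t require 20 new gestures — the menu buttons are just more gaze targets for the same “select” gesture.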

    Another advantage is that visual interfaces, like on-screen buttons, would require less learning and memorization. Attaching macros to a growing list of unique air gestures, or to multi-key shortcuts like Control + Alt + Shift combinations that you have to memorize, can be difficult to maintain. With on-screen buttons and controls, you can design them to look however you want. Button labels can be descriptive and self-documenting, so it will be easy for people to understand the sample interfaces that get shared with one another. If they’re edited, tweaked, and re-shared, people will be able to see the changes immediately and learn them much faster.

    Leap Motion also needs to hurry up and bring “Table Mode”, where you can use any surface as a touch interface. There is a device called Haptix that transforms any flat surface into a 3-D multitouch surface, and it completed its Kickstarter campaign last fall. It’s similar to the Leap Motion, but it looks like it doesn’t involve as many air gestures. Leap Motion has said that Table Mode is on the roadmap, but that’s it. Doing air gestures all day is not ergonomically friendly.

    A user was able to get Table Mode with the Leap Motion by hacking together some transparent glass so that the Leap Motion could distinguish the hands. Here’s a video of the user playing Reflex.te, a reaction game that’s usually meant for the mouse: youtube.com/watch?v=cgGkwGJcB1c.
