And Leap's use of "Internet of Things" in their recent marketing blast is poorly thought out.
cough now that I have your attention...
The "Internet of Things" (IoT) refers to aThing talking to aThing. So, your IoT-enabled Foo talking to your IoT-enabled Bar. The Leap device is a strange thing to refer to as either a Foo or a Bar, because it is a Sensor that is tracking movement... Unless you plan on using a Leap to tell you when your toast is ready, I can't really see how you'd use one in an IoT context.
Maybe you could be using Leap tech as a replacement for sonar on car bumpers? But unless you can show an order of magnitude improvement across any of the dimensions targeted by sonar (range, environment envelope, sensitivity, cost, ...), that pitch is dead on arrival.
But -- to me -- using "Leap" in the same context as "IoT" doesn't click, sorry.
But that's not why "y'all are thinking too small". Because you are talking about the Leap, when you should be talking about the DATA.
The Leap works to task and standard when attached to machine A. But I want effects and to affect behavior on machines B, C, D and/or * using a pair (or more) of Leaps attached to the device in front of me -- or to BE the device in front of me. I want a Leap attached to a slightly better Raspberry Pi implementation. I want to be able to drop a credit-card sized thing under my left hand, another under my right hand, and a third offset forward of me between the other two. I want to be able to sense both hands doing complex, highly sensitive movement within the combined field (which would be sufficient to cover a 4' round table), and have that motion/position/orientation ("MOP") data show up on one or more full-sized computers nearby.
So take a modern, quality ARM processor on a "plate". The plate has three Leap sensors at tau, tau/3 and 2tau/3, and includes the firmware for all three devices plus sufficient software to publish highly refined MOP information for roughly 30 individual pointables, six hands, etc., using DDS (www.omg.org/dds). ANY external device with the correct DDS DataReaders and Quality of Service could leverage the data for emergent behavior. Directly. At the same time.
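To make the publish side concrete: here is a minimal sketch of what pushing MOP samples off the plate could look like. This is NOT a real DDS DataWriter -- all the names (MopSample, publish, the field layout) are invented for illustration, and UDP multicast stands in for a DDS transport (real DDS adds discovery, typed topics, and QoS on top). It just shows that a sample is a small, well-defined record that any number of listeners on the network can pick up.

```python
import json
import socket
from dataclasses import dataclass, asdict

# Hypothetical MOP (motion/position/orientation) sample for one pointable.
# Field names are invented for illustration; a real system would use an
# IDL-defined topic type generated for DDS.
@dataclass
class MopSample:
    plate_id: str        # which Leap plate produced this sample
    pointable_id: int    # 0..29 in the six-hand, 30-pointable scenario
    position: tuple      # (x, y, z), plate-relative
    direction: tuple     # unit vector the pointable points along
    timestamp_us: int    # microseconds since plate boot

def publish(sample: MopSample, group="239.255.0.1", port=7400):
    """Fire one MOP sample at a multicast group -- a crude stand-in for a
    DDS DataWriter. Anything on the LAN listening on the group 'reads' it."""
    payload = json.dumps(asdict(sample)).encode()
    try:
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
            sock.sendto(payload, (group, port))
    except OSError:
        pass  # sandboxed / no-network environments: still return the payload
    return payload

sample = MopSample("plate-0", pointable_id=7,
                   position=(12.5, 180.0, -40.2),
                   direction=(0.0, -0.7, 0.7),
                   timestamp_us=123456789)
publish(sample)
```

The point is the shape of the thing, not the transport: once a sample is that small, any class of device can consume it.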
A 3-6 person tabletop game that lets you recreate the pod racing from Star Wars: Episode I by hand motion? Hell yeah. The race could be displayed in third-person view (on a big screen) and in first-person views on smaller screens (Oculus Rift, anyone?). And because the data is on DDS, you could have MULTIPLE tables, each with 3-6 players, all in the same virtual space. It will SCALE.
And that's why y'all are thinking too small. To quote Tyler Vernon, you are all insufficiently ambitious.
Get the data off the machine, and let people think big.
... breathe, Rip, breathe
So here are the criteria:
With a Leap plate, you have highly accurate MOP data for up to six hands and 30 pointables, in a highly scalable, shared, global data space. (Less accurate MOP data for N hands and Y pointables is possible now, but in the current ability space you are limited to 2 hands and 11 pointables per physical machine.) You aren't limited by the class, size or capability of the platform consuming the data (i.e., maybe you have a 256-bit Cray supercomputer based on PPC G8 and a Violin Memory 1-petabyte "brick" of flash, as well as a couple of Raspberry Pi series 1s... whatever you want, or whatever you think is technologically possible in the next three years...)
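Here is a toy model of the reader side of that shared data space, again with invented names: a cache that keeps the latest sample per (plate, pointable) key, which is roughly what a DDS keyed topic with KEEP_LAST(1) history QoS gives every subscriber. The point is that the consumer never cares which machine, or how many machines, produced the data.

```python
# Toy model of the shared data space, reader side. Names are illustrative
# only -- this is what any subscriber's view could look like: the latest
# known MOP sample per (plate, pointable), regardless of which box the
# plate is attached to or how many plates are publishing.
class MopCache:
    def __init__(self):
        self._latest = {}

    def on_sample(self, plate_id, pointable_id, sample):
        """Called for every sample that arrives, from ANY plate.
        A newer sample for the same key simply replaces the old one,
        mimicking DDS KEEP_LAST(1) history on a keyed topic."""
        self._latest[(plate_id, pointable_id)] = sample

    def pointables(self, plate_id=None):
        """All currently-known pointables, optionally filtered to one plate."""
        return {k: v for k, v in self._latest.items()
                if plate_id is None or k[0] == plate_id}

cache = MopCache()
cache.on_sample("plate-0", 3, {"pos": (1, 2, 3)})
cache.on_sample("plate-1", 3, {"pos": (9, 9, 9)})
cache.on_sample("plate-0", 3, {"pos": (4, 5, 6)})   # newer sample wins
print(len(cache.pointables()))                       # distinct (plate, pointable) keys
```

Add a second table's worth of plates and nothing in the consumer changes; that's the "it will SCALE" claim in miniature.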
What would you build? Be sufficiently ambitious. Your fantasy must take flight. The Data wants to be FREE.
ALL of the technology needed to implement a Leap Plate is current, modern tech.
If I am able, I will be at the Developer meetup in SF at the end of January. I will have a working demonstration of Leap data being processed by Raspberry Pis in real time. The Leap will be attached to a Windows 7 box. I will be showing the resulting DATA on a Mac. The demo is already working (there are shades of it on YouTube; look for the m1cr0Tub3 channel).