I've searched the Interwebs for quite a while now, but found nearly nothing useful. Either this is a dumb idea or I just used the wrong search terms. So maybe one of you can point me in the right direction.
I want to use the Leap to help me align or match real-world objects with virtual objects in size and orientation. For example, I give the user a real box, an HMD and a Leap. The software can then project various UI elements onto the box, but this only works if it knows where the user put the box. All I know is that the box is somewhere the Leap can see. There is a demo where a menu is attached to the hand; I want the same thing, but attached to some solid object, with the hands operating on it.
Another example: I require the real environment to contain a window, and the user must position the Leap so that the window is visible. The user can then swipe left with his hand to change the appearance of the window in the virtual world. For that, the Leap must tell my software where the window is, its size and orientation, and what the user is doing with his hands.
The Leap guys did something similar with their table-tennis demo, where they use a real table with a virtual ball bouncing on it, but they never mentioned how the software was told where the table is.
Any other projects or examples where this has been tried out and explained in detail?
I assume this is done with some kind of computer vision algorithm to find and identify the objects, but since the Leap measures fingers and hands with millimeter accuracy, maybe it can provide more than just the low-resolution greyscale stereo images...
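The only concrete idea I have so far is a rough (untested) sketch along these lines: enable the image policy, grab a raw IR frame from one of the Leap's cameras, and run ordinary OpenCV detection on it. I'm using a printed chessboard taped onto the box purely as a placeholder for "finding" the object, and the 7x5 pattern size is just an arbitrary assumption on my part:

```cpp
// Rough sketch (untested): pull a raw IR frame from the Leap and look for a
// printed chessboard taped onto the box, as a stand-in for locating the object.
// Assumes the Leap SDK v2 C++ API and OpenCV.
#include <iostream>
#include <vector>
#include <opencv2/opencv.hpp>
#include "Leap.h"

int main() {
    Leap::Controller controller;
    // Raw camera frames are only delivered once the image policy is enabled.
    controller.setPolicy(Leap::Controller::POLICY_IMAGES);

    // In a real app this would live in a Listener callback; polling keeps the sketch short.
    while (true) {
        Leap::Frame frame = controller.frame();
        Leap::ImageList images = frame.images();
        if (images.count() == 0 || !images[0].isValid())
            continue;

        Leap::Image image = images[0];  // left IR camera
        // Wrap the raw greyscale buffer in a cv::Mat without copying.
        cv::Mat ir(image.height(), image.width(), CV_8UC1,
                   const_cast<unsigned char*>(image.data()));

        // Placeholder object detection: locate a known chessboard marker on the box.
        std::vector<cv::Point2f> corners;
        bool found = cv::findChessboardCorners(ir, cv::Size(7, 5), corners);
        if (found) {
            std::cout << "Marker found, first corner at "
                      << corners[0].x << ", " << corners[0].y << std::endl;
            // The next step would presumably be undistorting the points (the Image
            // class has rectify()/warp()) and estimating the box pose, but I
            // haven't gotten that far.
        }
    }
    return 0;
}
```

Is something like this even the intended way to do it, or is there a higher-level API or library I'm missing that handles object tracking on top of the Leap data?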