Hi guys,
I'm fairly new to the workings of Leap Motion. I've only tried a few demos, and based on that I've sold a project where we're going to use one.
We are about to start a project where we intend to let HMD users (HTC Vive Pro) operate physical controls while remaining immersed in VR. We plan to do this by mounting a Leap Motion on the headset and giving the user visual feedback of both the controls and their own hands. Key to success is the alignment of the real world and the virtual world (already working on this), and how accurately Leap Motion manages to position the hands in the scene (the next problem to tackle).
That last part is where I don't know how much control I have. I'm aware that if you keep everything in the virtual world, truthful positioning is less of an issue; you can even mess up your FOV a bit and the user will still cope well enough. But once you mix in physical objects, suddenly things need to line up very precisely. I have seen this video, where they do pretty much exactly what I want to do, so that gives me hope.
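To make the question concrete, here is a minimal sketch of the transform chain I'm assuming I'll need: Leap device frame → headset frame → world frame, written against the old Leap C++ API. The axis remap and the 8 cm mounting offset are pure guesses on my part that would need calibration, and the HMD world pose would come from OpenVR each frame in the real thing.

```cpp
#include <Leap.h>
#include <array>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Row-major 4x4 rigid transform, applied to a point.
struct Transform {
    std::array<float, 16> m;
    Vec3 apply(const Vec3& p) const {
        return { m[0]*p.x + m[1]*p.y + m[2]*p.z  + m[3],
                 m[4]*p.x + m[5]*p.y + m[6]*p.z  + m[7],
                 m[8]*p.x + m[9]*p.y + m[10]*p.z + m[11] };
    }
};

// Leap palm position (millimetres, device frame) -> headset frame (metres).
// The axis remap and the mounting offset are my guesses, to be calibrated.
Vec3 leapToHmd(const Leap::Vector& p) {
    const float mm = 0.001f;                    // Leap reports millimetres
    const Vec3 mount = { 0.0f, 0.0f, -0.08f };  // ~8 cm in front of the eyes (guess)
    return { p.x * mm + mount.x,
             -p.z * mm + mount.y,               // assumed remap for a front-mounted device
             -p.y * mm + mount.z };
}

int main() {
    Leap::Controller controller;
    // Tell the service the device is mounted on a headset.
    controller.setPolicy(Leap::Controller::POLICY_OPTIMIZE_HMD);

    // T_world_hmd would come from the Vive tracking (OpenVR) every frame; identity here.
    Transform worldFromHmd{{ 1,0,0,0,  0,1,0,0,  0,0,1,0,  0,0,0,1 }};

    // In a real app you'd poll in the render loop or use a Leap::Listener.
    Leap::Frame frame = controller.frame();
    if (!frame.hands().isEmpty()) {
        Leap::Vector palm = frame.hands().frontmost().palmPosition();
        Vec3 world = worldFromHmd.apply(leapToHmd(palm));
        std::printf("palm in world space: %.3f %.3f %.3f m\n", world.x, world.y, world.z);
    }
    return 0;
}
```

As far as I can tell, the only parts I control in that chain are the mounting offset and the HMD pose; any error in the Leap hand data itself is what my question below is about.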
So I have a few questions:
1) Assuming I can align my virtual scene with the physical room perfectly, does Leap Motion position the hands accurately enough in the scene for interaction with physical objects at roughly 10 mm precision? Is that level of accuracy something Leap Motion provides out of the box, or something I would have to calibrate and compensate for myself?
2) That's it really.
Cheers