Hi there,
I am using the HoloLens and the Leap Motion together for a research project. Currently I am aiming to align the virtual Leap Motion hand skeletons with my real hands. The Leap Motion is mounted on top of the HoloLens, so I have to apply a Y-translation as well as a Z-translation to account for the offset between the Leap Motion sensor and my eyes. However, applying only these translations does not seem to be sufficient: the hands are still off, and I suspect this is related to the different baseline of the Leap Motion cameras compared to my eyes. The mismatch of the hands varies depending on the position within the FOV.
I have already read through the article about the alignment problem [1], which points out the challenges between physical, virtual, and biological cameras. As far as I understood, the suggested solution there was to match the VCS with the ICD (40 mm).
However, since I am using the HoloLens, my real hands are seen through my biological eyes (IPD ~65 mm), which means that solution does not fit my case: for example, with a 40 mm ICD versus a ~65 mm IPD, each virtual camera would sit 12.5 mm closer to the center than the corresponding eye. So what I would need is a way to solve the alignment problem between IPD and ICD, or maybe I am overlooking something else.
Currently I am using the LMHeadMountedRig and the Leap_Hands_Demo_VR scene as a template. I apply the Y and Z translation myself in LeapServiceProvider::transformFrame(); a sketch of the idea is below.
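Roughly, the offset I apply looks like this minimal sketch. The component name is hypothetical (this is not part of the Leap SDK), and the offset values are placeholders; the actual numbers have to be measured on the physical mount:

```csharp
using UnityEngine;

// Hypothetical component, not part of the Leap SDK: shifts the Leap
// tracking space by the physical mounting offset of the sensor relative
// to the eyes. Attach to the Leap provider's transform.
public class LeapMountOffset : MonoBehaviour
{
    // Placeholder values in meters; measure them on the actual mount.
    [SerializeField] private Vector3 mountOffset = new Vector3(0f, 0.04f, 0.08f);

    void Start()
    {
        // Equivalent to the Y/Z translation applied inside transformFrame():
        // move the tracking space so the tracked hands land where the
        // sensor actually sits relative to the eyes.
        transform.localPosition += mountOffset;
    }
}
```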
Maybe I am overlooking something else, so I am thankful for any hints.
Best wishes!
[1] Alignment Problem: http://blog.leapmotion.com/alignment-problem-position-cameras-augmented-reality/
[Update 1]
I think LeapVRCameraControl::_overrideEyePosition is intended exactly for this alignment purpose, and I will now try to apply the correct shift for the HoloLens there.
[Update 2]
Actually, I think overriding the eye position only makes sense if the real world is seen through the AR Leap Motion cameras. Shouldn't the LMHeadMountedRig already fully handle real-world scale? In any case, I cannot seem to transform the virtual skeletons perfectly onto my real hands with any translation without getting drift or scale problems. I am definitely overlooking something here.
[Update 3]
I was able to reach a really nice alignment of the LM skeletons with my real hands. One thing I did was apply specific offsets along all three axes within the LeapSpace Unity object to account for the physical mount. In addition, since the HoloLens SDK does the stereo rendering using the IPD value from the Device Portal (~58.2 mm), I applied an X-axis offset of 18.2 mm to the left eye's view matrix (sketched below).

The results are very nice: the skeletons match my real hands in position, rotation, and scale. Interacting with virtual objects feels very natural, and collisions occur exactly when I expect them to (with the graphical LM skeleton meshes disabled). Only in a very near viewing frustum does the alignment get off by around 1 cm (worst case), which is completely outside the expected interaction area anyway. At all other distances the alignment is very satisfying.
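For anyone trying to reproduce the left-eye shift, the sketch below shows the general idea. It is a simplified sketch, not my exact code: the component name is hypothetical, the 18.2 mm value is specific to my IPD and mount, and the sign of the shift and the exact callback timing may need adjusting depending on single-pass vs. multi-pass stereo rendering:

```csharp
using UnityEngine;

// Hypothetical sketch of the left-eye view-matrix shift described above.
// Attach to the stereo camera; the offset value is specific to my setup.
[RequireComponent(typeof(Camera))]
public class LeftEyeViewShift : MonoBehaviour
{
    // 18.2 mm shift along the eye's X axis (in meters); the sign depends
    // on your convention and which direction the rendered image is off.
    [SerializeField] private float leftEyeShiftMeters = 0.0182f;

    private Camera cam;

    void Awake() { cam = GetComponent<Camera>(); }

    void OnPreCull()
    {
        // Restore the tracking-driven matrices first so the shift is not
        // accumulated across frames, then re-apply it in view space.
        cam.ResetStereoViewMatrices();
        Matrix4x4 view = cam.GetStereoViewMatrix(Camera.StereoscopicEye.Left);
        Matrix4x4 shift = Matrix4x4.Translate(new Vector3(leftEyeShiftMeters, 0f, 0f));
        cam.SetStereoViewMatrix(Camera.StereoscopicEye.Left, shift * view);
    }
}
```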