Hi everyone, I am trying to fuse multiple sensors and represent them all in one mixed reality (MR) space.
What I need to do is establish a transformation matrix from a stereo camera (an actual stereo camera, not the Leap Motion) to the Leap Motion coordinate system.
My strategy is to capture images from the camera and the Leap Motion at the same time and perform another stereo calibration, from which I can compute the transform from the camera's image coordinate system to the Leap Motion's left (or right) image coordinate system. A sketch of this step is below.
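For anyone trying the same thing, here is a minimal sketch of that calibration step using OpenCV's `cv2.stereoCalibrate` on synchronized checkerboard captures. The file name patterns, board size, square size, and intrinsics files are placeholders for my setup, and the Leap IR cameras have strong distortion, so their single-camera calibration needs care first:

```python
# Sketch: recover the rigid transform (R, T) from my external stereo camera's
# frame to the Leap Motion left IR camera's frame via stereo calibration.
import glob
import cv2
import numpy as np

BOARD_SIZE = (9, 6)    # inner corners of the checkerboard (assumption)
SQUARE_SIZE = 0.025    # square edge length in meters (assumption)

# One planar 3D point set for the board, reused for every view.
objp = np.zeros((BOARD_SIZE[0] * BOARD_SIZE[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD_SIZE[0], 0:BOARD_SIZE[1]].T.reshape(-1, 2)
objp *= SQUARE_SIZE

obj_points, cam_points, leap_points = [], [], []
for cam_file, leap_file in zip(sorted(glob.glob("cam_*.png")),
                               sorted(glob.glob("leap_left_*.png"))):
    cam_img = cv2.imread(cam_file, cv2.IMREAD_GRAYSCALE)
    leap_img = cv2.imread(leap_file, cv2.IMREAD_GRAYSCALE)
    ok_c, corners_c = cv2.findChessboardCorners(cam_img, BOARD_SIZE)
    ok_l, corners_l = cv2.findChessboardCorners(leap_img, BOARD_SIZE)
    if ok_c and ok_l:  # keep only views where both sensors see the board
        obj_points.append(objp)
        cam_points.append(corners_c)
        leap_points.append(corners_l)

# Intrinsics and distortion from prior single-camera calibrations
# (placeholder file names; these must exist for my actual setup).
K_cam, d_cam = np.load("cam_K.npy"), np.load("cam_dist.npy")
K_leap, d_leap = np.load("leap_K.npy"), np.load("leap_dist.npy")

# Fix intrinsics and solve only for the relative pose between the two views.
ret, _, _, _, _, R, T, E, F = cv2.stereoCalibrate(
    obj_points, cam_points, leap_points,
    K_cam, d_cam, K_leap, d_leap, cam_img.shape[::-1],
    flags=cv2.CALIB_FIX_INTRINSIC)
# R, T map points from the external camera frame into the Leap left IR frame.
```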
However, all the data from the Leap Motion is represented in the Leap Motion coordinate system, whose origin I believe is at the top center of the device. I could find the spacing (or baseline) between the left and right IR cameras, with the IR LED sitting at the center between them, but the depth from the device surface down to the LED is what I am missing at the moment.
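To make it concrete, this is the transform chain I am trying to complete. The half-baseline value assumes the cameras are about 40 mm apart; `LED_DEPTH` is the unknown surface-to-LED depth this question is about, and the axis directions and signs here are my assumptions that need checking against the Leap coordinate conventions (x right, y up, z toward the user, reported in millimeters by the API):

```python
# Sketch of the full chain: external camera frame -> Leap left IR camera
# frame -> Leap Motion device frame. R, T below are placeholders for the
# stereoCalibrate output from the previous step.
import numpy as np

R, T = np.eye(3), np.zeros(3)  # replace with stereoCalibrate results
HALF_BASELINE = 0.020          # meters; ~40 mm camera baseline / 2 (assumption)
LED_DEPTH = 0.0                # meters; the unknown depth below the surface

def to_homogeneous(R, t):
    """Pack a 3x3 rotation and a translation into a 4x4 transform."""
    H = np.eye(4)
    H[:3, :3] = R
    H[:3, 3] = np.asarray(t).ravel()
    return H

# External camera -> Leap left IR camera (from the stereo calibration step).
H_cam_to_leapL = to_homogeneous(R, T)

# Leap left IR camera -> device origin. I am assuming the device frame is
# axis-aligned with the left camera, half a baseline toward the center and
# LED_DEPTH below the top surface; the signs here are unverified assumptions.
t_leapL_to_device = np.array([HALF_BASELINE, -LED_DEPTH, 0.0])
H_leapL_to_device = to_homogeneous(np.eye(3), t_leapL_to_device)

# The matrix I ultimately need: stereo camera frame -> Leap device frame.
H_cam_to_device = H_leapL_to_device @ H_cam_to_leapL
```

Everything in this chain is measurable except `LED_DEPTH`, which is why I am stuck.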
If anybody knows how I can get the distance from the surface to the IR LED, I would greatly appreciate it.