It saddens me that you have been working on this so long with such frustration.
I was honestly not very impressed with the paper and would encourage you to reach out to the computer vision community and perhaps learn a bit from the other link I shared.
Regarding the use of LEAP in 2018: the math that translates the camera image into a rendered hand object is already done for you.
If your entire purpose is to identify Parkinsonian hand movements using hand objects, you can start with a single LEAP camera mounted on the desk (a bottom-up point of view) to minimize occlusion for outstretched hands.
By tracking the displacements and velocities of specific points in the hand objects over time, you should be able to scrub outliers from glitchy tracking (curve smoothing) and gather replayable recordings of an individual's Parkinsonian movement profile.
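To make that step concrete, here is a minimal sketch of the scrub-and-smooth idea, assuming per-frame fingertip coordinates (in mm) sampled at a fixed rate; the speed threshold and window size are illustrative, not tuned for Parkinsonian tremor data:

```python
def scrub_and_smooth(positions, dt=1 / 60, max_speed=3000.0, window=5):
    """Drop frames whose implied speed is physically implausible
    (glitchy tracking), then apply a moving-average smooth."""
    # Velocity-based outlier scrub: a real fingertip cannot jump
    # faster than max_speed (mm/s) between consecutive frames.
    clean = [positions[0]]
    for p in positions[1:]:
        speed = abs(p - clean[-1]) / dt
        if speed <= max_speed:
            clean.append(p)
        else:
            clean.append(clean[-1])  # hold the last good sample
    # Moving-average smoothing over a short window.
    half = window // 2
    smoothed = []
    for i in range(len(clean)):
        lo, hi = max(0, i - half), min(len(clean), i + half + 1)
        smoothed.append(sum(clean[lo:hi]) / (hi - lo))
    return smoothed
```

The same logic applies per axis for 3D points; a Savitzky-Golay filter would preserve tremor peaks better than a plain moving average if you need frequency features later.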
LEAP can track fingertip/thumb-tip and palm positions and velocities and render them as a Unity3D hand object. No visor or math needed.
The LEAP Orion 4 Unity3D kit can record the hand movement profiles AND let you apply Unity3D's machine learning/AI capabilities to help you find patterns.
Then, adding hand profiles from cameras placed at various angles becomes a hand object problem, not a computer vision problem.
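To illustrate why that becomes a hand object problem: once each camera gives you tracked 3D joints, fusing views is just merging point estimates. A minimal sketch, where the per-camera confidence score and the (x, y, z) tuples are assumptions about what your tracking layer exposes:

```python
def fuse_joint(estimates):
    """Confidence-weighted average of one joint's position across
    cameras; each estimate is (confidence, (x, y, z))."""
    total = sum(conf for conf, _ in estimates)
    if total == 0:
        raise ValueError("no confident estimate for this joint")
    return tuple(
        sum(conf * pos[axis] for conf, pos in estimates) / total
        for axis in range(3)
    )
```

A camera that loses the hand to occlusion just contributes low (or zero) confidence, and the fused hand object stays continuous without any pixel-level work on your side.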
I think leveraging existing technology will spare you the mathematical heavy lifting entirely and move you toward your actual problem of interest.