Janiregt:
See here
https://developer.leapmotion.com/forums/forums/6/topics/1444
and maybe here:
https://developer.leapmotion.com/forums/forums/6/topics/1743
You could use length instead of width to discriminate tools from each other.
The Leap works (according to the patents filed) by fitting ellipses to stereo scanline slices, essentially a triangulation. The problem is that an ellipse has 5 parameters (baseline position, distance to camera, both axes of the ellipse, angle of rotation), which requires 3 cameras. One big - and completely unutilized - advantage of tool use cases is that a tool can be guaranteed to be cylindrical (constant width, circular cross-section), which means the 4 edge samples from a 2-camera scanline actually overdetermine the solution.
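To make the overdetermination concrete: a circle has only 3 unknowns (center x, depth z, radius), and for a normalized line the tangency condition is linear in those unknowns, so 4 edge samples give 4 equations for 3 unknowns. Here is a minimal numpy sketch under an idealized scanline geometry (two cameras on the x-axis looking along +z); this is just the geometry argument, not anything from the Leap SDK or the patents, and all the numbers in the demo are made up:

import numpy as np

def edge_line(cam_x, angle):
    """Normalized line through camera (cam_x, 0), direction `angle` from +z."""
    d = np.array([np.sin(angle), np.cos(angle)])  # ray direction
    a, c = d[1], -d[0]                            # unit normal (a, c)
    return a, c, -a * cam_x                       # a*x + c*z + e = 0

def fit_circle(edge_lines, sides):
    """Least-squares circle (cx, cz, r) tangent to the given edge lines.

    For a normalized line a*x + c*z + e = 0, tangency is LINEAR in the
    unknowns:  a*cx + c*cz - side*r = -e,  with side = +/-1 per silhouette
    edge. Four samples from two cameras -> four equations, three unknowns,
    so np.linalg.lstsq averages out the sampling noise.
    """
    A = np.array([[a, c, -s] for (a, c, _), s in zip(edge_lines, sides)])
    b = np.array([-e for (_, _, e) in edge_lines])
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol  # (cx, cz, r)

# Demo: two cameras 40 mm apart, a 4 mm wide tool cross-section at (12, 80).
cams, (cx, cz, r) = [0.0, 40.0], (12.0, 80.0, 2.0)
lines, sides = [], []
for b in cams:
    alpha = np.arctan2(cx - b, cz)              # direction to the center
    beta = np.arcsin(r / np.hypot(cx - b, cz))  # half-width of the silhouette
    for s, ang in ((-1, alpha + beta), (+1, alpha - beta)):
        lines.append(edge_line(b, ang + np.random.normal(0, 1e-3)))
        sides.append(s)
print(fit_circle(lines, sides))  # ~ [12. 80. 2.]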
See here for references:
https://community.leapmotion.com/t/leap-patent-application-and-other-references/717
In short, width computation might suck, but straight, clean, constant-reflectivity tools might provide decent length values. The transition to the hand holding them is likely your biggest source of length variation. Maybe a "fork" approach - a clean juncture, with the hand further down - might help. The study done at TU Dortmund - which unfortunately omits a lot of relevant details - used a flat circular plate (think of the hand guard on a fencing weapon) on which the tool was mounted perpendicularly.
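If you do go the length route, something like the following might be enough, assuming the SDK's reported length on tool pointables is usable; the bin boundaries here are made up and would need tuning against the length jitter you actually observe:

# Hypothetical length bins in mm for three physical tools; widen or
# shrink them to match the jitter you see in tool.length.
LENGTH_BINS = {"short": (60, 90), "mid": (90, 120), "long": (120, 160)}

def classify_by_length(tool):
    """Label a Leap tool by its reported length, or None if out of range."""
    for label, (lo, hi) in LENGTH_BINS.items():
        if lo <= tool.length < hi:
            return label
    return None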
The simplest approach might be to map whatever the current pointable "ID" is to your own using spatio-temporal continuity: take the three tools and do a least-squares fit to the last known position/orientation of the three line segments. If your sampling rate is high enough relative to the motion of the tools (and if the angles between the tools do not change), you should be OK. If the tools can move relative to each other, the fit becomes more complicated.
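A sketch of that ID remapping, assuming tip positions and unit direction vectors as numpy arrays; with only three tools you can just brute-force all 3! = 6 assignments, and the orientation weight is an arbitrary knob, not anything from the SDK:

import itertools
import numpy as np

def remap_ids(tracked, detected, w_dir=100.0):
    """Assign this frame's pointable IDs to your own stable tool labels.

    tracked:  {label: (tip_pos, unit_dir)} from the last accepted frame
    detected: {leap_id: (tip_pos, unit_dir)} from the current frame
    w_dir:    arbitrary weight trading off orientation vs. tip distance

    Minimizes summed squared tip displacement plus an orientation penalty
    over all assignments - a brute-force least-squares fit to the last
    known line segments. Returns None if too few tools were detected.
    """
    labels, ids = list(tracked), list(detected)
    best, best_cost = None, np.inf
    for perm in itertools.permutations(ids, len(labels)):
        cost = 0.0
        for label, lid in zip(labels, perm):
            tp, td = tracked[label]
            dp, dd = detected[lid]
            cost += np.sum((tp - dp) ** 2)               # tip displacement
            cost += w_dir * (1.0 - abs(np.dot(td, dd)))  # direction change
        if cost < best_cost:
            best_cost, best = cost, dict(zip(labels, perm))
    return best  # {label: leap_id}

If the tools can move independently, this per-frame nearest-assignment still works as long as no two tools swap places within one sampling interval; past that you would need actual motion prediction per tool.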