The V1 internal/experimental visualizer is intended to be superseded by the V2 skeletal visualizer.
This video has led to a longstanding misconception that V1's internals provide low-level point cloud data. The ongoing developer survey does, however, include a section where you can express needs related to this:
Which of the following low-level APIs would you find the most useful?
- A data stream of raw camera images
- V1 intermediate data (ellipses and other image-processing data) specific to the hand/finger detection and tracking algorithms
- A synthesized point cloud (would produce garbage for objects other than hands/fingers/tools; could be supported by either V1 or V2)
- A synthesized depth map (would produce garbage for objects other than hands/fingers/tools; could be supported by either V1 or V2)
- A synthesized hand mesh (V2-only)
- Other (write-in)