Relaying some of the feedback I've received about this question from our frontend team. We've experimented with graphical mirrors that behave like physical mirrors, reflecting light and the world around you, but we've never mirrored the hand data directly. Part of the problem is that mirroring a hand turns a left hand into a right hand, which confuses our hand models.
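To make the chirality problem concrete: a mirror reflection is a transform with determinant -1, so it turns a right-handed frame into a left-handed one. Here's a minimal standalone illustration in Python (not our actual API):

```python
import numpy as np

# Reflection across the YZ plane (negate X) -- the transform a graphical
# mirror effectively applies to tracked positions.
mirror_x = np.diag([-1.0, 1.0, 1.0])

# A right-handed basis has determinant +1; reflecting it flips the sign.
# That sign flip is the chirality change: joint frames that described a
# left hand now describe a right hand, which the hand models can't drive.
basis = np.eye(3)
print(np.linalg.det(basis))             # +1.0 -> right-handed
print(np.linalg.det(mirror_x @ basis))  # -1.0 -> left-handed
```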
To get what you're looking for, the most efficient strategy would be to modify how our "post-processing" feature works for hand data. Currently it runs after hands have been assigned to the model they're going to drive, which means that by that point it's too late to change the hand's chirality (left vs. right). If we moved that step to run directly after frame data is received, a post-process could flip the hand's chirality and mirror it on the user's local X axis, giving you the mirroring behavior you're looking for.
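As a rough sketch of what such a post-process could look like once the hook runs directly after frame data is received (hypothetical data structures and field names, not our shipped API): mirror every joint across the user's local YZ plane (negate X) and flip the chirality flag before hands are matched to models.

```python
from dataclasses import dataclass
from enum import Enum

class Chirality(Enum):
    LEFT = 0
    RIGHT = 1

@dataclass(frozen=True)
class Joint:
    position: tuple[float, float, float]          # (x, y, z), user's local frame
    rotation: tuple[float, float, float, float]   # quaternion (w, x, y, z)

@dataclass(frozen=True)
class Hand:
    chirality: Chirality
    joints: list[Joint]

def mirror_position(p):
    """Reflect a point across the local YZ plane (negate X)."""
    x, y, z = p
    return (-x, y, z)

def mirror_rotation(q):
    """Conjugate a rotation by the X reflection: (w, x, y, z) -> (w, x, -y, -z)."""
    w, x, y, z = q
    return (w, x, -y, -z)

def mirror_hand(hand: Hand) -> Hand:
    """Hypothetical post-process run right after frame data arrives, before
    model assignment: mirror every joint on the local X axis and flip the
    chirality flag so the opposite hand model picks this hand up."""
    flipped = Chirality.RIGHT if hand.chirality == Chirality.LEFT else Chirality.LEFT
    joints = [Joint(mirror_position(j.position), mirror_rotation(j.rotation))
              for j in hand.joints]
    return Hand(chirality=flipped, joints=joints)
```

The ordering is the whole point: because the chirality flip happens before model assignment, the left-hand model simply receives what is now geometrically a left hand, and nothing downstream needs to know a mirror was involved.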
This is somewhat involved, but we've gotten enough feedback about it that we're considering folding it into a develop branch (not least because it would enable other useful functionality for other use cases). Bearing in mind that we're a small team with a number of competing development priorities right now, I'll keep you posted.