This is admittedly an odd question. It's about Unity, and I'm posting here because I'm very likely not the only one with this issue, but I can't imagine that somebody who has solved it would want to share. Here goes:
I have a calibrated "real life" video feed coming into Unity. I have a special rig, so I can display "real life" inside the scene. Now I want to be able to grab a virtual ball in Unity, and I want the virtual ball to be occluded by my actual hand, at least to some extent. I realize I won't be able to get this perfect, but my thinking is that since I know where the fingers are from the LM, I could "bleed through" the pixels from the live background. The problem is that I don't know how to do this in Unity, and can't even guess at how I'd approach it!
Let's take a guess: wherever the LM tells me the hand is, I draw a virtual hand of approximately the same size, colored flat magenta with no shading. This is rendered to a temporary buffer. Then, in a second pass, I look for magenta in that buffer, and wherever I see it, I draw the background pixel instead.
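To make the two-pass idea concrete, here is a minimal sketch of just the keying/compositing step, written in Python over numpy arrays rather than as a Unity shader (in Unity this would be a fullscreen blit with a small fragment shader, but the per-pixel logic is the same). The function name and the exact key color are my own choices for illustration:

```python
import numpy as np

def composite_magenta_key(rendered, background, key=(255, 0, 255)):
    """Wherever `rendered` is exactly the magenta key colour,
    substitute the corresponding pixel from the live `background` feed."""
    mask = np.all(rendered == np.array(key, dtype=rendered.dtype), axis=-1)
    out = rendered.copy()
    out[mask] = background[mask]  # hand-mask pixels "bleed through"
    return out

# Tiny 1x2 example: left pixel is the magenta hand mask, right is a ball pixel.
rendered   = np.array([[[255, 0, 255], [10, 200, 30]]], dtype=np.uint8)
background = np.array([[[90, 90, 90], [0, 0, 0]]], dtype=np.uint8)
result = composite_magenta_key(rendered, background)
print(result[0, 0])  # [90 90 90] -- background shows through the mask
print(result[0, 1])  # [10 200 30] -- ball pixel is untouched
```

One practical caveat with an exact-color key: lighting, anti-aliasing, or texture filtering can shift the magenta slightly, so a real shader would typically test against a small color tolerance rather than exact equality.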
No, that sounds stupid...
Better: do it all in one pass. The virtual 3D hand that I've placed in the scene to mimic my actual hand is drawn with a custom shader, and for each pixel it draws on the hand, it uses the corresponding pixel from the real-life image. Now the question becomes how to go from a world-space 3D point to that point's X-Y location on the real-life image plane... Ugh.
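For what it's worth, inside Unity that mapping is largely done for you: `Camera.WorldToScreenPoint` / `Camera.WorldToViewportPoint` on the C# side, or `ComputeScreenPos` in a shader, give you the screen-space UV to sample the feed with, provided the virtual camera is calibrated to match the real one. The underlying math is just a pinhole projection; here is a hedged sketch in Python, where the intrinsics values are made-up example numbers, not anything from a real rig:

```python
import numpy as np

def world_to_pixel(p_world, K, R, t):
    """Project a 3D world point to pixel coordinates with a pinhole model.
    K: 3x3 camera intrinsics; R, t: world-to-camera rotation and translation."""
    p_cam = R @ p_world + t   # world space -> camera space
    u, v, w = K @ p_cam       # perspective projection (homogeneous)
    return u / w, v / w       # divide by depth -> pixel coordinates

# Example intrinsics: 500 px focal length, principal point at the
# centre of a 640x480 feed; camera at the origin looking down +Z.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.zeros(3)

# A point 1 m in front of the camera, 0.1 m to the right:
u, v = world_to_pixel(np.array([0.1, 0.0, 1.0]), K, R, t)
print(u, v)  # 370.0 240.0
```

Dividing (u, v) by the image width and height gives the normalized UV a fragment shader would use to sample the live-feed texture.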
Has anybody attempted this?