Google lets you test body tracking AI with "Move Mirror"
According to Google, the goal of Move Mirror is to highlight the capabilities of pose estimation using only a webcam and a computer. The company is hoping that after seeing how lightweight and easy to use the technology is, more developers will be willing to try it out and integrate it into their products.
The body-tracking part of the "mirror" is powered by PoseNet, a machine learning model that detects and tracks 17 keypoints on a person's body: 12 joints and 5 points on the head. Your pose is then matched against a database of images of people in similar positions, and the closest match is displayed next to your webcam feed.
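The matching step can be sketched roughly as follows: flatten the 17 keypoints into a coordinate vector, normalize away the person's position and size in the frame, and rank database poses by cosine similarity. This is a minimal illustration of the idea, not Google's actual implementation; the function names and the simple linear scan are assumptions for the example.

```javascript
// Scale keypoints into a unit bounding box, then L2-normalize the vector,
// so the comparison ignores where the person stands and how large they
// appear in the frame.
function normalizePose(keypoints) {
  const xs = keypoints.map(k => k.x);
  const ys = keypoints.map(k => k.y);
  const minX = Math.min(...xs), minY = Math.min(...ys);
  const scale = Math.max(Math.max(...xs) - minX, Math.max(...ys) - minY) || 1;
  const vec = keypoints.flatMap(k => [(k.x - minX) / scale, (k.y - minY) / scale]);
  const norm = Math.hypot(...vec) || 1;
  return vec.map(v => v / norm);
}

// Cosine similarity between two normalized pose vectors (1 = identical pose).
function cosineSimilarity(a, b) {
  return a.reduce((sum, v, i) => sum + v * b[i], 0);
}

// Linear scan over the database for the pose most similar to the query.
// A real system would use an index structure instead of scanning.
function bestMatch(queryKeypoints, database) {
  const q = normalizePose(queryKeypoints);
  let best = { index: -1, score: -Infinity };
  database.forEach((pose, index) => {
    const score = cosineSimilarity(q, normalizePose(pose));
    if (score > best.score) best = { index, score };
  });
  return best;
}
```

Because of the normalization, a pose that is merely shifted or uniformly scaled scores a perfect 1, which is exactly the invariance a "mirror" needs when people stand at different distances from the webcam.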
Now before you say it's all part of Google's master plan to learn everything about us, Move Mirror is built on TensorFlow.js, which runs the model entirely in your device's browser. No data leaves your computer or smartphone, and nothing is sent to servers. The model is open source, so if you're interested, you can peek inside to see how it's done.
We tested it, and despite occasionally struggling to detect a wrist, it worked as advertised and provided multiple minutes of entertainment. The site has a handy "Make a GIF" button, so you can share the goofiness with your friends.