"Our Live Photos are better!" Google talks behind the scenes on Motion Photos — stabilization secrets revealed


Ever since its own Pixel phones hit the market, it has been obvious that Google is hard at work on its cameras. With the amazing HDR+ feature, which works unlike anything the competition offers, the Pixel phones are capable of taking sharp, detailed, vivid, balanced, and stable photos with very little effort from the user.

With the Pixel 2, Google upped the ante and introduced Motion Photos. Yeah, it's like Apple's Live Photos — when you take a picture, the camera also saves a 3-second video recording, showing what happened immediately before and after you pressed the shutter. Motion Photos are then viewable and easily shareable via Google Photos.

But do you know what makes them different from Apple's version? They are extremely, extremely stable, which makes them look that much more like “moving photos” and less like “I accidentally pressed record at the wrong time”. How does Google do it? In a recent blog post, the company revealed some of the magic that goes on behind the scenes.



So, since the HDR+ technology is already constantly capturing frames while your viewfinder is open, Google put them to use for the Motion Photos feature — whenever you press the shutter button, you also get a 3-second video clip of what happened before and after that moment. Alongside the frames, however, the phone also stores readings from its gyroscope and its optical image stabilization (OIS) sensor. Then, the software begins working its magic.
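For the technically curious, here is a toy sketch of what pairing each captured frame with the nearest sensor reading could look like. This is purely illustrative Python with made-up names (GyroSample, Frame, nearest_gyro_sample), not Google's actual code:

```python
from bisect import bisect_left
from dataclasses import dataclass

@dataclass
class GyroSample:
    timestamp_ns: int           # when the reading was taken
    rotation_rate: tuple        # (x, y, z) angular velocity in rad/s

@dataclass
class Frame:
    timestamp_ns: int
    pixels: object              # decoded image data, e.g. a numpy array

def nearest_gyro_sample(frame: Frame, gyro_log: list) -> GyroSample:
    """Pair a video frame with the gyro reading closest to it in time.

    `gyro_log` is assumed to be sorted by timestamp. This only shows
    the bookkeeping idea: sensor metadata travels with each frame so
    the stabilizer can consult it later.
    """
    times = [s.timestamp_ns for s in gyro_log]
    i = bisect_left(times, frame.timestamp_ns)
    candidates = gyro_log[max(0, i - 1):i + 1]
    return min(candidates, key=lambda s: abs(s.timestamp_ns - frame.timestamp_ns))
```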

First, it scans the clip to identify the object in focus and the background. By tracking various motion vectors on a frame-by-frame basis, it can easily separate background from foreground. However, in cases where the background is multi-layered or color-rich, additional work is required.
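To make the idea concrete, here is a rough OpenCV sketch of a motion-vector-based split: track feature points between two frames, fit the dominant (camera-induced) motion with RANSAC, and treat the outliers as foreground. The thresholds and the RANSAC approach are our own illustrative choices, not a description of Google's pipeline:

```python
import cv2
import numpy as np

def split_foreground_background(prev_gray, curr_gray):
    """Split tracked points into background (following the dominant,
    camera-induced motion) and foreground (moving differently)."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400,
                                  qualityLevel=0.01, minDistance=8)
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    ok = status.ravel() == 1
    p0 = pts[ok].reshape(-1, 2)
    p1 = nxt[ok].reshape(-1, 2)

    # Fit one global similarity transform with RANSAC; inliers are the
    # points consistent with it (background), outliers are foreground.
    _, inliers = cv2.estimateAffinePartial2D(
        p0, p1, method=cv2.RANSAC, ransacReprojThreshold=3.0)
    inliers = inliers.ravel().astype(bool)
    return p1[~inliers], p1[inliers]   # (foreground, background)
```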

This is where the gyroscope and OIS sensor metadata comes into play. The Motion Photos algorithm analyzes the phone's position and movement speed, then checks the previously computed motion vectors against this data. With this type of “parallax mapping”, the software is better able to tell foreground “action” objects apart from the background.
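One way to express that cross-check in code: integrate the gyro readings into a rotation between the two frames, predict where each point should land if the only motion were camera rotation, and flag points that stray too far from the prediction as real foreground motion. Again, a sketch assuming a rotation-only camera model and a made-up threshold, not Google's implementation:

```python
import cv2
import numpy as np

def gyro_residual_mask(p0, p1, rotation, K, threshold_px=4.0):
    """Flag points whose observed motion disagrees with the motion
    predicted from pure camera rotation.

    For a rotation-only camera move, image points shift by the
    homography H = K @ R @ inv(K), where R is the 3x3 rotation
    integrated from gyro readings between the frames and K is the
    camera intrinsics matrix. Large residuals suggest genuine
    foreground motion rather than hand shake.
    """
    H = K @ rotation @ np.linalg.inv(K)
    predicted = cv2.perspectiveTransform(
        p0.astype(np.float32).reshape(-1, 1, 2), H).reshape(-1, 2)
    residual = np.linalg.norm(p1 - predicted, axis=1)
    return residual > threshold_px   # True = likely foreground
```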



Once all that is done, Motion Photos chooses where the short video should be centered, finds a movement path if there is one in the clip, and rotates, skews, and stitches together every frame so that it looks like the phone was held steady the whole time. Additionally, if the clip ends with the user putting the phone away and into their pocket — that part is automatically cropped out, which is why you will very, very rarely end up with a Motion Photo that has an awkward beginning or end. It's almost always centered around your subject and the moment you captured.
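For flavor, here is what a textbook software-stabilization pass of that kind looks like: estimate frame-to-frame motion, smooth the accumulated camera path, and warp every frame onto the smoothed path. This is the classic recipe, not Google's exact method (which also folds in the gyro/OIS data and trims shaky endings):

```python
import cv2
import numpy as np

def stabilize(frames, window=9):
    """Warp frames so the clip plays as if the phone were held steady."""
    # 1) Estimate per-frame motion (dx, dy, rotation angle).
    transforms = []
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for frame in frames[1:]:
        curr = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        p0 = cv2.goodFeaturesToTrack(prev, 200, 0.01, 10)
        p1, st, _ = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None)
        m, _ = cv2.estimateAffinePartial2D(p0[st == 1], p1[st == 1])
        transforms.append((m[0, 2], m[1, 2], np.arctan2(m[1, 0], m[0, 0])))
        prev = curr

    # 2) Accumulate the camera path and smooth it with a moving average
    #    (edge handling is deliberately naive in this sketch).
    path = np.cumsum(transforms, axis=0)
    kernel = np.ones(window) / window
    smooth = np.stack([np.convolve(path[:, i], kernel, mode='same')
                       for i in range(3)], axis=1)
    correction = smooth - path

    # 3) Re-warp each frame with the corrected transform.
    out = [frames[0]]
    h, w = frames[0].shape[:2]
    for frame, t, c in zip(frames[1:], transforms, correction):
        dx, dy, da = t[0] + c[0], t[1] + c[1], t[2] + c[2]
        m = np.array([[np.cos(da), -np.sin(da), dx],
                      [np.sin(da),  np.cos(da), dy]])
        out.append(cv2.warpAffine(frame, m, (w, h)))
    return out
```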


source: Google
