Google explains how the Pixel 4's Motion Sense gestures work

The Pixel 4 turned heads back in October for many reasons, but one of the big ones was the inclusion of Soli radar tech, a first in mobile computing, which made Motion Sense gesture recognition and other key functions possible.

Google recently published an explainer on how the Pixel 4 recognizes gesture controls and user presence. The post revealed some insights into the development process for the Soli module, as well as software development efforts to process the sensory data and incorporate it into system functions.

The Soli chip is built on radar, a technology that’s been in use for decades, but it had to be heavily adapted for the complex gesture recognition on the Pixel 4. Rather than mapping a space and detecting changes in that space, Soli works by detecting changes in signal reflections over time to recognize motion.

The Soli radar transmits 60GHz signals and receives them with three separate receivers. The differences in the received signals across receivers and over time allow the system to map the relative locations of objects in its vicinity (around 20cm from the device) and to detect the direction, speed, and distance of moving objects in 3D space.
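Google hasn’t published Soli’s exact signal chain, but the description lines up with standard millimeter-wave radar processing. The sketch below is a simplified illustration rather than Google’s implementation: the frame dimensions, function names, and antenna spacing are invented placeholders. It shows how repeated FFTs over a frame of reflections can resolve distance and speed, and how phase differences between two receivers can hint at direction.

```python
import numpy as np

# Hypothetical parameters -- Soli's real chirp/frame configuration isn't public.
NUM_CHIRPS = 32         # "slow time" samples (chirps) per frame
SAMPLES_PER_CHIRP = 64  # "fast time" ADC samples per chirp
NUM_RX = 3              # the Soli module uses three receive antennas

def range_doppler_map(frame):
    """Turn one radar frame (chirps x samples) into a range-Doppler map.

    An FFT along fast time resolves distance (range) to reflectors;
    a second FFT along slow time resolves their radial velocity (Doppler).
    """
    range_fft = np.fft.fft(frame, axis=1)                          # distance bins
    doppler_fft = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)  # velocity bins
    return np.abs(doppler_fft)

def angle_from_phase(rx_a, rx_b, wavelength=5e-3, spacing=2.5e-3):
    """Estimate angle of arrival from the phase difference between two
    receivers (classic two-element interferometry; values are placeholders)."""
    phase_diff = np.angle(rx_a * np.conj(rx_b))
    return np.arcsin(phase_diff * wavelength / (2 * np.pi * spacing))

# Example with synthetic data: one complex frame per receiver.
frames = (np.random.randn(NUM_RX, NUM_CHIRPS, SAMPLES_PER_CHIRP)
          + 1j * np.random.randn(NUM_RX, NUM_CHIRPS, SAMPLES_PER_CHIRP))
rd_maps = np.stack([range_doppler_map(f) for f in frames])
print(rd_maps.shape)  # (3, 32, 64): per-receiver range-Doppler maps
```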

By reducing the number of antennas a traditional radar setup would require, Google’s developers were able to shrink the Soli module down to 5 mm x 6.5 mm x 0.873 mm, about the size of the average pinky fingernail and thinner than a quarter. The chip makes up a small part of the Pixel 4’s controversial forehead bezel, which also houses the 3D face unlock sensors and, of course, a selfie camera.



This sensing mechanism is the core of all Soli-related functions, and it’s worth noting that the process doesn’t require any optical input or cameras. Soli doesn’t need to see your face or surroundings to work, sidestepping one potential security concern.

Google also used machine learning to teach Motion Sense to correctly identify several gesture cues. Thousands of volunteers helped train the system to recognize gestures from different people in different contexts. These trained models let Soli filter the sensor data it receives and interpret it accurately.
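Google hasn’t said which model architecture Motion Sense uses, so the snippet below is only a generic supervised-learning sketch: synthetic feature vectors stand in for recorded gesture data, and scikit-learn’s off-the-shelf random forest stands in for whatever classifier Soli actually runs. The point is the shape of the pipeline: many labeled examples from many people go in, and a model that maps new radar features to gesture labels comes out.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical gesture labels; each training example is a flattened
# vector of radar features (e.g. range-Doppler summaries over time).
GESTURES = ["swipe_left", "swipe_right", "reach", "no_gesture"]
rng = np.random.default_rng(0)

# Stand-in for thousands of recorded gesture examples from many volunteers.
X = rng.normal(size=(4000, 128))               # 4000 feature vectors
y = rng.integers(0, len(GESTURES), size=4000)  # 4000 gesture labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# In a real pipeline the classifier would run on live radar frames;
# here we just check that it produces a gesture label per input.
pred = clf.predict(X_test[:1])
print(GESTURES[pred[0]])
```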

The most fundamental way the Pixel 4 uses Soli is to detect user presence, prepping face unlock as the user reaches for the device and enabling superfast unlock times. As for gesture controls, the Pixel 4 shipped with the capability to accept or decline calls, dismiss alarms and timers, and skip forward or back in media playback, all based on user swipes.
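As a rough illustration of the presence idea (this is purely hypothetical logic, not the Pixel 4’s actual software), a presence-gated flow might look something like this: the radar only has to answer whether someone is reaching for the phone, and that answer is enough to start the face unlock sensors early.

```python
from enum import Enum, auto

class Presence(Enum):
    AWAY = auto()      # nothing detected near the phone
    NEARBY = auto()    # a person is within sensing range
    REACHING = auto()  # a hand is moving toward the device

def on_presence_change(state: Presence) -> str:
    """Illustrative logic only -- not the actual Pixel 4 software.

    The idea from Google's post: Soli notices the reach before the user
    touches the phone, so the face unlock sensors can warm up early and
    the phone feels like it unlocks instantly.
    """
    if state is Presence.REACHING:
        return "wake display and start face unlock sensors"
    if state is Presence.NEARBY:
        return "keep ambient display info available"
    return "power down sensors to save battery"

print(on_presence_change(Presence.REACHING))
```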

I’ve never had any issues getting Motion Sense to work reliably, though my colleague Nick, and others, have said their experience has been less positive. In the latest Feature Drop for Pixels, Google also added a pause/play function that activates when you tap the air above the device.

Regardless of your opinion of Motion Sense’s usefulness, it’s clear that a lot of thought went into making it possible. The Soli radar is a fundamental part of the Pixel 4 experience, and Google has proven that it’s invested in expanding Motion Sense’s capabilities further.

The viability of Soli as an input method may always be up for debate, and recent leaks make it seem rather clear that the Pixel 4a, and perhaps even the Pixel 5, will be skipping this new tech. But in any case, the Pixel 4’s Motion Sense is an interesting feature that makes some nifty things possible. Given the effort Google has put into its development, we can expect that the internet giant’s ventures in radar technology will only improve from here.