How Google Glass will change mobile, and how it could fail
This article may contain the personal views and opinions of the author.
How often do we hear jokes about how no one experiences life as it happens these days? We miss out on life because we’re all too busy filtering real-time experiences through the screen of a smartphone or tablet. Google may not have fully figured out the use case for Google Glass, but after seeing Sergey Brin wearing Glass on the subway, and digging through old Glass stories, it struck us as quite simple: rather than filtering real life through a screen, the screen mimics our eye and sees what we see, without demanding our attention.
This is the beauty of invention. The company freely admits that it doesn’t quite know how to market the product; the tech elite don’t quite know either; and ordinary people are a bit unnerved by the technology. But the geeks are excited, because we can simply feel the potential, and the disruptive force that comes with something new.
Really, it’s the strength that Microsoft has always claimed for Windows Phone: that it would get you “in and out and back to life”. But with Google Glass, there theoretically is no more “in and out”. Taken to its logical conclusion, Google Glass (and all subsequent systems like it) would aim to take our digital world and make it part of a HUD for real life, so rather than stopping to look at your phone (and being completely distracted from whatever you are doing, whether walking or, worse, driving), the information simply sits in your line of sight.
The true POV camera
Take a look back at all of the video we’ve seen from Google Glass. For the most part, the pictures may not seem to be anything special at first glance, but the longer you look, the more you see. The camera can capture something much more intimate than anything we’ve seen before. Handheld photography can only get so close, and all “POV” videography feels clunky and off, because there is only so much a handheld camera can do to mimic the movement of our heads. Even a camera strapped to your forehead isn’t quite right, because the perspective is slightly off compared to what we would really see.
But then you see the video that Sergey Brin and the Glass team took while skydiving, and even though the camera is a bit jerky, it still feels somehow natural. Only the best camera operators can approximate what we see when moving our heads from side to side, but Google Glass does it effortlessly, simply because the Glass camera has no choice but to follow our heads and capture the same perspective that we see with our eyes.
The key to success
The photography and videography coming from Google Glass users will be the killer feature, but the make-or-break function for commercial success (assuming the cost comes down fast) will be the UI. We still haven’t seen what the UI will look like, or how users will interact with Glass aside from voice commands.
Google has been learning the lessons of design faster than most companies around, especially since hiring Andy Hertzfeld and Matias Duarte, but the UI design for Google Glass is an entirely new undertaking. This isn’t as simple as giving all Google web products a unified design, or making Android prettier and smoother. With Google Glass, the design team has to straddle the line between giving enough information and being too distracting.
We assume that the key to this will be Android’s TalkBack feature, which was designed to aid users with visual impairments. The trick would be to keep the best of TalkBack while removing the bits that could drive a sighted user mad (like explore-by-touch, which reads out everything on screen to aid with navigation). So, we’d expect to see e-mails and messages read aloud but not displayed, and directions given turn-by-turn, but no map unless the user asks for one (and definitely no map if the user is driving).
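To make that guess concrete, here is a minimal sketch of the kind of “speak it, don’t show it” policy we’re describing. Everything here is hypothetical — Google hasn’t published a Glass API, so the function, its parameters, and the rules are purely our illustration of the behavior above:

```python
# Hypothetical sketch of a minimal-distraction Glass UI policy.
# None of these names come from Google -- this only illustrates the
# rules guessed at above: read messages aloud instead of displaying
# them, speak directions turn-by-turn, and show a map only when the
# user asks for one (and never while driving).

def glass_response(item_type, user_prompted=False, driving=False):
    """Decide how a hypothetical Glass UI might surface an item."""
    if item_type in ("email", "message"):
        return "read aloud"  # spoken, never drawn on the display
    if item_type == "directions":
        if driving or not user_prompted:
            return "speak turn-by-turn"  # no map unless asked, never while driving
        return "show map"
    return "ignore"

print(glass_response("email"))                           # read aloud
print(glass_response("directions"))                      # speak turn-by-turn
print(glass_response("directions", user_prompted=True))  # show map
```

The point of the sketch is simply that every branch defaults to audio; the display is the exception, not the rule.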
Conclusion
If the price is right, and even if nothing else about Google Glass is all that good (which is hard to imagine, given Google’s track record), photography is in for another revolution when this product finally hits. It’s hard to imagine Google completely screwing up the Glass UI, but Google’s admission that it isn’t sure how people will use the product points to the real risk: that lack of knowledge could mean too many items get transferred from phone to Glass.
The Google Glass UI needs to be minimalist and as distraction-free as possible, but still provide enough information to remove the need to look at a smartphone screen. That’s a pretty tall order, but not an impossible one, and it could make all the difference. We should see more soon enough, as the developer units are expected to start going out during events in NYC and San Francisco next week.