Google Lens is an intriguing tool. It works something like a reverse Google image search, where you supply an image and let Google find similar-looking ones, except that it operates on the real world instead of uploaded photos. But it can be much more than that, and Google clearly has the ambition to realize Lens's full potential in the near future.
Lens was added to Google Photos earlier this year, letting users identify objects or text in their photos and pull up more information about them from Google's vast databases. But the newest update, coming at the end of May, will make Lens even more versatile and convenient. Lens will be integrated into other Google services and apps, such as Google Maps and Street View, and will be built directly into the camera apps on future phones from Asus, Motorola, Xiaomi, LG, Sony, Nokia, and OnePlus, among others.
One of the exciting features demoed at Google I/O 2018 showcased Google Lens's ability to recognize text in images. Much like Google Translate, Lens uses OCR (optical character recognition) to let users scan words from an image and paste them on their device in editable text form.
"Style Match" was another upcoming feature shown off at I/O 2018. It allows Lens to scan an image of an article of clothing and find other clothes and accessories that would go well with it.
Google's ultimate vision for Lens, however, goes far beyond these examples. The idea is that, eventually, Google Lens will be "like a visual browser for the world around you," says Aparna Chennapragada, vice president of product for AR, VR, and vision-based products at Google. That means simply firing up your camera app (which will already have Lens built in) and pointing it at something. The goal is for Lens to automatically start scanning your surroundings and surfacing relevant information about what it sees.
Expect these features and more to roll out to Google Lens in the coming months.