Smartphone photography battle moves from cameras to chips

Several low- and mid-range phones now sport more than one camera on the back, but the number of cameras mounted on the rear panel of a smartphone does not equate to the quality of photographs produced by the device. If that were the case, Google's Pixel phones, with their single rear-facing camera, never would have garnered the praise and recognition they have received as one of the best (if not the best) handsets for taking photos.

The secret lies in the specialized, AI-capable chips that companies like Google, Huawei, Samsung, and Apple use to improve the images snapped by their devices. The Pixels, for example, use a specialized Visual Core chip that has been inside each unit from the Pixel 2 series forward. The chip includes machine learning functionality that automatically adjusts the camera's settings to match the lighting and other aspects of the scene being photographed. It also drives the HDR+ feature, which combines several images to produce the best possible shot.
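
To make the idea of multi-frame merging concrete, here is a minimal, hypothetical sketch (in Python with NumPy, not Google's actual HDR+ pipeline) of the simplest form of burst merging: averaging several aligned frames of the same scene to suppress noise. A real pipeline also aligns, weights, and tone-maps the frames.

```python
import numpy as np

def merge_burst_average(frames):
    """Illustrative burst merge: averaging aligned frames suppresses random
    sensor noise. (Alignment, ghost rejection, and tone mapping -- all part
    of a real HDR+-style pipeline -- are omitted from this sketch.)"""
    stack = np.stack([f.astype(np.float32) for f in frames])
    merged = stack.mean(axis=0)
    return np.clip(merged, 0, 255).astype(np.uint8)

# Toy usage: a burst of eight noisy frames of the same (here random) scene
burst = [np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8) for _ in range(8)]
photo = merge_burst_average(burst)
```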

AI/Machine Learning-capable chips have become the latest battleground among premium handset manufacturers


During Apple's new product event on Tuesday, the company mentioned how part of the A13 Bionic chipset contains a "neural engine" that helps the new iPhones take usable photos in low-light conditions. As the tech giant pointed out during the introduction of the new iPhone Pro models, this is computational photography, which relies on digital image processing as opposed to optical processing. An example of this is the Deep Fusion feature that will be enabled on the new units via a software update sometime this fall. As we've already explained, this will allow the phone to capture eight images before the shutter button is pressed. Add the image captured when the shutter is pressed, and the neural engine analyzes nine images in a split second to determine which combination of them creates the best shot. Unlike the HDR+ process, which averages out the multiple images, Apple says that Deep Fusion will use the nine images to assemble a 24MP photo, going pixel by pixel to produce a picture with high detail and low noise.
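
Apple has not published how Deep Fusion actually works, but the contrast it draws with averaging can be illustrated with a hypothetical per-pixel weighted merge: each of the nine frames contributes more at pixels where it carries more local detail. The sketch below (Python with NumPy and SciPy; the function name and the detail measure are our own assumptions) is only meant to show that idea, not Apple's algorithm.

```python
import numpy as np
from scipy.ndimage import laplace  # simple local-detail (sharpness) measure

def fuse_per_pixel(frames):
    """Hypothetical pixel-by-pixel fusion: rather than averaging all frames
    equally, weight each frame at each pixel by its local detail, so sharp,
    low-noise pixels dominate the final image."""
    stack = np.stack([f.astype(np.float32) for f in frames])       # (N, H, W)
    detail = np.abs(np.stack([laplace(f) for f in stack])) + 1e-6  # detail score per frame/pixel
    weights = detail / detail.sum(axis=0, keepdims=True)           # normalize across frames
    fused = (stack * weights).sum(axis=0)
    return np.clip(fused, 0, 255).astype(np.uint8)

# Toy usage: nine grayscale frames (eight pre-captures plus the shutter frame)
frames = [np.random.randint(0, 256, (480, 640), dtype=np.uint8) for _ in range(9)]
result = fuse_per_pixel(frames)
```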


Reuters cites Ryan Reith in a new report published today. Reith, who works for research firm IDC, says that these AI/Machine Learning chips have become the latest battlefield where premium smartphone manufacturers are taking on the competition. Reith notes that the manufacturers competing in this arena are the ones able to invest in the chips and software required to optimize the cameras on their handsets. He says, "Owning the stack today in smartphones and chipsets is more important than it’s ever been, because the outside of the phone is commodities." The IDC program vice president also pointed out that these chips will be used in future devices, and he mentioned Apple's rumored AR headset as a future beneficiary of the company's work on neural engines. "It’s all being built up for the bigger story down the line - augmented reality, starting in phones and eventually other products," he said.


While adding features like Night Mode and an Ultra-wide camera might sound revolutionary the way Apple explains it, the company is simply catching up to some of the more innovative Android manufacturers. And with both Huawei and Google about to unleash their latest premium handsets, it will be interesting to see where the major manufacturers stand once all the dust settles.


13 Comments

1. sgodsell

Posts: 7363; Member since: Mar 16, 2013

Deep Fusion sounds a lot like the Pixel's Top Shot. Reading in several images before and after to find the best shot.

2. Well-Manicured-Man

Posts: 688; Member since: Jun 16, 2015

It is not about finding the best shot. It is about merging the best out of each shot into the best possible image.

5. sgodsell

Posts: 7363; Member since: Mar 16, 2013

I wonder how that will work with something that is constantly moving in every image. Even moving leaves in the trees. It will be interesting to see how that works.

4. DolmioMan

Posts: 332; Member since: Jan 08, 2018

Sounds nothing like Top Shot. Google’s Top Shot sounds exactly like the select key photo feature for Live Photos in iOS 11.

7. Fred3

Posts: 521; Member since: Jan 16, 2018

Not really. It's actually just like Google's but with a different name. Samsung is working on one next

12. DolmioMan

Posts: 332; Member since: Jan 08, 2018

No, this hasn’t been done before afaik. Deep Fusion pulls data from all 3 camera sensors so Google can’t have done it before because they’ve only ever used a single camera. The “top shot” feature requires motion photo to be activated (exactly the same as Live Photos from iOS 9), from that you can select any frame of the motion photo to be the “top shot”. Apple added a feature in either iOS 10 or iOS 11 called “key photo” where you select which frame from the Live Photo you want to be the key photo. Somebody may have done it before Apple but Google definitely did it after Apple.

3. Vokilam

Posts: 1192; Member since: Mar 15, 2018

Google is gonna top all that, you’ll be taking a photo of a flower, and Google will replace that with a better DSLR photo it finds on the internet - voila.

6. blingblingthing

Posts: 960; Member since: Oct 23, 2012

Lol. Good one

11. Vokilam

Posts: 1192; Member since: Mar 15, 2018

The thing is - there’s so much digital manipulation to the photo you’re taking - how do you take any credit for it anymore? I think this post-processing should not be a permanent solution - as soon as they figure out how to take photos naturally this good - they should dump all this digital post-processing.

8. cheetah2k

Posts: 2254; Member since: Jan 16, 2011

Love the jargon but as always with Apple it's smoke and mirrors.. Always thinking of creative end notes to sell outdated tech.. PASS

9. kbalthom

Posts: 4; Member since: Jun 06, 2017

Take an idea from Google and rename it to a cooler name . .

13. DolmioMan

Posts: 332; Member since: Jan 08, 2018

How has Google ever attempted this before? For starters the software pulls data from multiple sensors and Google has only ever used one camera.

10. gadgetpower

Posts: 155; Member since: Aug 23, 2019

Amazing what Apple is doing with their chip. For sure, many pros will use the iPhone for their work.

