"Deep Fusion" explained: First look at Apple's most innovative camera feature

The new iPhone 11 family comes with new and better cameras, but one ground-breaking camera feature will not be available on these iPhones from the very start; it will instead arrive later as a software update.

Apple calls this special feature "Deep Fusion", and it is a brand-new way of taking pictures in which the Neural Engine inside the Apple A13 chip uses machine learning to create the output image.


The result is a photo with a stunning amount of detail, great dynamic range, and very low noise. The feature relies on machine learning and works best in low to medium light.

Phil Schiller, Apple's chief camera enthusiast and head of marketing, demonstrated the feature with a single teaser picture and explained how it works.

How "Deep Fusion" works:

  • it shoots a total of 9 images
  • even before you press the shutter button, it has already captured 4 short images and 4 secondary images
  • when you press the shutter button, it takes 1 long exposure photo
  • then, in just 1 second, the Neural Engine analyzes the fused combination of the long and short images, picks the best among them, and goes through all 24 million pixels one by one to optimize for detail and low noise (a rough sketch of this flow follows after the list)
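
To make the sequence above a bit more concrete, here is a minimal, purely illustrative sketch in Swift of what "capture several frames, then keep the best data pixel by pixel" could look like. The Frame type, the exposure values, and the scoring rule are invented for this example; Apple's actual pipeline runs on the Neural Engine inside the A13 and is not exposed through any public API.

```swift
// Conceptual sketch only: NOT Apple's implementation or a public API.
// The types, exposure values and per-pixel scoring below are hypothetical,
// chosen only to illustrate "nine frames in, one fused image out".

struct Frame {
    let exposure: Double   // relative exposure time (made-up values)
    let pixels: [Double]   // one brightness value per pixel
}

// Stand-in for sensor output: random brightness values.
func randomPixels(_ count: Int) -> [Double] {
    (0..<count).map { _ in Double.random(in: 0...1) }
}

// Before the shutter press, four short frames and four secondary frames
// are already buffered; the shutter press adds one long exposure.
func captureFrames(pixelCount: Int) -> [Frame] {
    let short = (0..<4).map { _ in Frame(exposure: 0.25, pixels: randomPixels(pixelCount)) }
    let secondary = (0..<4).map { _ in Frame(exposure: 0.5, pixels: randomPixels(pixelCount)) }
    let long = Frame(exposure: 1.0, pixels: randomPixels(pixelCount))
    return short + secondary + [long]   // nine frames in total
}

// Toy stand-in for the Neural Engine step: for every pixel position,
// score each frame's sample and keep the best one.
func fuse(_ frames: [Frame]) -> [Double] {
    let pixelCount = frames[0].pixels.count
    return (0..<pixelCount).map { i in
        frames.map { $0.pixels[i] * $0.exposure }.max() ?? 0
    }
}

let frames = captureFrames(pixelCount: 12)   // a real photo would have ~24 million pixels
let output = fuse(frames)
print("Fused \(frames.count) frames into \(output.count) output pixels")
```

The point is only the shape of the process: a buffer of pre-captured short frames, one long exposure taken at the shutter press, and a per-pixel selection step that runs over all of them.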


It is truly the arrival of computational photography, and Apple claims this is the "first time a neural engine is responsible for generating the output image". In typical Apple fashion, it also jokingly calls this "computational photography mad science". Whichever definition you pick, we can't wait to see this new era of photography on the latest iPhones this fall (if you are wondering exactly when, there are no specifics, but past practice suggests the end of October).
