Nvidia's new AI fixes noisy photos in milliseconds with the help of deep learning and a pinch of magic
If exposure is not the issue, it's zooming — we often want to capture something at a distance from us. However, there's only so much a tiny sensor with a small lens can do. Regardless of our smartphone's ability to churn out high-res photos, zoom in or crop these photos and you will see varying degrees of noise digging into the details and watering down whatever it is you wanted to zoom in on.
For years now, manufacturers have been compensating for these hardware limitations by developing complex algorithms that clear up noise and sharpen images digitally. These work to varying degrees of success. Google's HDR+ deserves a shoutout here, but Samsung, LG, Apple, and other major players have also worked plenty of magic with their own software implementations.
However, it appears that the next step in smartphone camera evolution is nigh. Researchers from Nvidia, Aalto University, and MIT have developed a complex AI algorithm that takes image de-noising to the next level.
The team set out to develop a software solution based on deep learning techniques that can fix noisy photos without the need for "clean" samples to base its work on.
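The core statistical insight behind training without clean samples is that, under a squared-error loss, a model pushed toward many independently noisy versions of the same scene converges toward their mean — which, for zero-mean noise, is the clean signal itself. A minimal NumPy sketch of that intuition (not Nvidia's actual network, and using made-up signal and noise parameters for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "clean image": a simple 1-D gradient we never show the "learner".
clean = np.linspace(0.0, 1.0, 100)

# Many independent noisy observations of the same underlying signal.
noisy_targets = clean + rng.normal(0.0, 0.1, size=(1000, 100))

# The L2-optimal per-pixel estimate fit only to noisy targets is their mean,
# which approaches the clean signal as observations accumulate — the same
# principle that lets a denoiser train on noisy data alone.
estimate = noisy_targets.mean(axis=0)

print(float(np.abs(estimate - clean).max()))  # small residual error
```

In the researchers' setup the averaging is done implicitly by a neural network over a large dataset of noisy image pairs, rather than explicitly per pixel as here.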
According to the researchers, the project was a success and sometimes even performed better than similar algorithms that use a clean sample in their de-noising processes. A number of samples, as well as a YouTube video, have been provided. As we can see, the results are definitely impressive.
Sure, they look a little softer than the actual target image, but the AI seems to have done an impeccable job of restoring the images. It's worth noting that it has also been used to remove randomly positioned text from various photos, and did so with great success. Could this spell doom for watermarks?
The AI could be trained to perform its task in minutes, and the actual de-noising of images took milliseconds. The team used Nvidia Tesla P100 GPUs with Nvidia's TensorFlow deep learning framework, accelerated by cuDNN. In other words, that's data-center-grade hardware, and we don't expect to see this AI working on smartphones or personal computers soon... but it's definitely on the horizon.