Nvidia's new AI fixes noisy photos in milliseconds with the help of deep learning and a pinch of magic

Even though smartphone manufacturers have done wonders with the cameras in their contemporary flagships, every user still suffers the bane of noisy photos every once in a while. This is because the camera sensors in our handsets need to be small enough to fit, and they are physically incapable of collecting enough light in a reasonable time, so the ISO is boosted to compensate. An ISO boost brightens the overall image, but it also introduces noise. This is, of course, rarely a problem out in the summer sun, but try to take a selfie in a bar and you will get the results we are talking about.
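For intuition, here is a tiny toy simulation (our own illustration, not anything from Nvidia's work) of why an ISO boost brightens an image without making it any cleaner: the gain multiplies the sensor's noise right along with the signal.

```python
import numpy as np

# Toy ISO model (an illustrative assumption): a dim scene plus a fixed
# sensor read-noise floor, amplified by two different analog gains.
rng = np.random.default_rng(0)

scene = np.full((256, 256), 0.05)                # dim scene, little light collected
read_noise = rng.normal(0.0, 0.01, scene.shape)  # sensor noise floor

low_iso = 1.0 * (scene + read_noise)    # dark image, noise stays small
high_iso = 16.0 * (scene + read_noise)  # 16x brighter, but noise is 16x too

print(f"brightness / noise std at low ISO:  {low_iso.mean():.3f} / {low_iso.std():.4f}")
print(f"brightness / noise std at high ISO: {high_iso.mean():.3f} / {high_iso.std():.4f}")
```

Both outputs have the same signal-to-noise ratio, which is why the brightened bar selfie still looks grainy.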

If exposure is not the issue, then zooming is, since we often want to capture something at a relative distance from us. However, there's only so much a tiny sensor with a small lens can do. Regardless of our smartphones' ability to churn out high-res photos, zoom in or crop those photos and you will see varying degrees of noise digging into the details and watering down whatever it is you wanted to zoom in on.

For years now, manufacturers have been compensating for these hardware limitations by developing complex algorithms that clear up noise and sharpen images digitally. These work with varying degrees of success. Google's HDR+ deserves a shout-out here, but Samsung, LG, Apple, and other major players have also done a lot of magic with their own software implementations.

However, it appears that the next step in smartphone camera evolution is nigh. Researchers from Nvidia, Aalto University, and MIT have developed a complex AI algorithm that takes image de-noising to the next level.



The team set out to develop a software solution based on deep learning techniques that can fix noisy photos without the need for "clean" samples to base its work on.
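Conceptually, the trick is to train the network to map one noisy shot of a scene to a second, independently noisy shot of the same scene; under an L2 loss the network's best guess converges toward the mean, which is the clean image. Here is a minimal sketch of that idea in TensorFlow; the tiny architecture and Gaussian noise model are our own illustrative assumptions, not the researchers' exact setup.

```python
import tensorflow as tf

# Minimal sketch of "denoising without clean samples": the model is
# trained to map one noisy version of an image to ANOTHER, independently
# noisy version of the same image. No clean target is ever shown to it.

def make_denoiser():
    # The paper uses a much deeper U-Net-style model; three conv layers
    # are enough to demonstrate the training scheme.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
        tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
        tf.keras.layers.Conv2D(3, 3, padding="same"),
    ])

def add_noise(images, stddev=0.1):
    # Toy Gaussian sensor-noise model (assumption for illustration).
    return images + tf.random.normal(tf.shape(images), stddev=stddev)

model = make_denoiser()
model.compile(optimizer="adam", loss="mse")  # L2 between two noisy shots

# `clean_batch` exists only to synthesize toy data; the loss never sees it.
clean_batch = tf.random.uniform((8, 64, 64, 3))
noisy_input = add_noise(clean_batch)   # first noisy exposure
noisy_target = add_noise(clean_batch)  # second, independent noisy exposure

model.fit(noisy_input, noisy_target, epochs=1, verbose=0)
denoised = model.predict(noisy_input, verbose=0)
```

The important detail is that the clean image never appears in the loss; two imperfect captures of the same scene are all the training signal the network needs.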

According to the researchers, the project was a success and sometimes even performed better than similar algorithms that use a clean sample in their de-noising processes. A number of samples, as well as a YouTube video, have been provided. As we can see, the results are definitely impressive.

Sure, they look a little softer than the actual target image, but the AI seems to have done an impeccable job of restoring the images. It's worth noting that it has also been used to remove randomly positioned text from various photos, and it did so with great success. Could this spell doom for watermarks?



The AI can be trained to perform its task in minutes, and the actual de-noising of images takes milliseconds. The team used Nvidia Tesla P100 GPUs with the TensorFlow deep learning framework, accelerated by Nvidia's cuDNN library. In other words, that's data-center-grade hardware, and we don't expect to see this AI working on smartphones or personal computers anytime soon... but it's definitely on the horizon.

source: Nvidia


7 Comments

1. master-mkk

Posts: 214; Member since: Aug 27, 2014

I think this feature can be useful for smartphone cameras in very low light situations. The thing is, it needs a lot of power, but nonetheless this could be the way to go, especially since smartphone sensors can't be very big due to size limitations.

2. Hollowmost

Posts: 421; Member since: Oct 10, 2017

If Nvidia sells the algorithm to OEMs, we will have full-frame-sensor low-light capabilities in our smartphones. I love technology.

3. worldpeace

Posts: 3129; Member since: Apr 15, 2016

Smartphone OEMs? It will probably take at least a decade for smartphone hardware to be good enough to even use this. The Tesla P100 mentioned above has 20 TFLOPS of processing power, while current smartphone flagship GPUs barely reach 0.5 TFLOPS. Even if a smartphone GPU did have 20 TFLOPS, the P100 would still be faster because it's specialized for deep learning (the P100 doesn't work well for gaming; a 1080 Ti with 11 TFLOPS gets double the FPS of a P100 in PC games, but the 1080 Ti is much slower for training AI). And the article above didn't mention how many P100s were used for that sample (it only said GPUs).

6. mixedfish

Posts: 1553; Member since: Nov 17, 2013

The 5G future in smartphones is about cloud processing. Obviously a mobile phone won't be able to achieve the necessary processing power within 10 years, but a photo can easily be uploaded and processed on a server, especially once JPEG is phased out in favor of JPEG XS.

4. digitalw

Posts: 79; Member since: Dec 20, 2011

And the real situation where this can be used is 3D rendering. Rendering a noise-free image on a PC can take 1, 2, 3... 5... 8 hours, depending on the machine and the complexity of the scene, while rendering just a noisy image can take a few seconds. That's where Nvidia's AI comes into play, finishing the job in the blink of an eye before the next frame is rendered... Using this technique can save weeks of rendering.

7. mootu

Posts: 1503; Member since: Mar 16, 2017

I wouldn't hold your breath. Nvidia didn't complete these in the blink of an eye; if you read the research paper, you will find that some of these images took up to 13 hours.

5. PhoneCritic

Posts: 1354; Member since: Oct 05, 2011

Love that picture of the little girl. It was originally used back in the old Commodore Amiga days as the cover of an issue of AmigaWorld magazine. Brings back good memories of the good old days of computing, with graphics cards and graphics accelerators.

