Google's RAISR uses machine learning to create high-quality upscaled versions of low-res images
In the blog post that unveiled RAISR, Google Research Scientist Peyman Milanfar says that RAISR obtains results comparable to or better than current super-resolution upscaling methods. While the quality is often comparable, Google claims that RAISR works 10 to 100 times faster than current alternatives. Another of RAISR's strengths is its ability to avoid recreating aliasing artifacts contained in the original low-resolution image.
Standard upscaling methods create new pixels by applying simple, fixed combinations of the existing pixels in the low-resolution image. Such linear filters are fast, but their results are not always satisfactory.
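To make the "fixed linear combination" idea concrete, here is a minimal bilinear upscaler in Python. This is our own illustration, not Google's code: every output pixel is computed with the same blending rule from its four nearest low-resolution neighbors, no matter what the image content looks like at that spot.

```python
import numpy as np

def bilinear_upscale(img, factor):
    """Upscale a 2-D grayscale image by an integer `factor` using
    bilinear interpolation: each output pixel is a fixed linear
    combination of its four nearest low-res neighbors."""
    h, w = img.shape
    out_h, out_w = h * factor, w * factor
    # Map every output pixel back to fractional coordinates in the input grid
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]   # vertical blend weights
    wx = (xs - x0)[None, :]   # horizontal blend weights
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy
```

Because the weights depend only on pixel position, not on edges or textures in the image, the result tends to look soft, which is exactly the shortcoming RAISR targets.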
Instead of following the traditional route, Google's RAISR applies filters selectively to each pixel in the image. RAISR uses machine learning to determine which filter works best for each pixel. It does this by analyzing the differences between upscaled versions of low-resolution images and their high-resolution counterparts. As the system processes more image pairs, it gets better at predicting which filter will work best on each pixel. Obviously, this is a simplified explanation of how the system works. If you're interested in learning more about RAISR, head over to the Google Research blog post through the source link below.
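The train-from-pairs idea above can be sketched in a few dozen lines of Python. This is a heavily simplified toy, not RAISR's actual implementation: the `bucket` hash and its 8 orientation bins are stand-ins for RAISR's richer pixel hash, and a real pipeline would train on cheaply upscaled images paired with their true high-resolution versions.

```python
import numpy as np

PATCH = 5  # filter size (assumption; chosen small for the sketch)

def bucket(patch):
    """Hash a patch into one of 8 buckets by its rough gradient direction.
    A simplified stand-in for the per-pixel hashing RAISR uses."""
    gy, gx = np.gradient(patch)
    angle = np.arctan2(gy.sum(), gx.sum())            # crude dominant direction
    return int((angle + np.pi) / (2 * np.pi) * 8) % 8  # 8 orientation bins

def train_filters(pairs):
    """Learn one least-squares filter per bucket from
    (upscaled low-res, true high-res) image pairs."""
    A = {b: [] for b in range(8)}
    y = {b: [] for b in range(8)}
    r = PATCH // 2
    for lo, hi in pairs:
        for i in range(r, lo.shape[0] - r):
            for j in range(r, lo.shape[1] - r):
                patch = lo[i - r:i + r + 1, j - r:j + r + 1]
                b = bucket(patch)
                A[b].append(patch.ravel())
                y[b].append(hi[i, j])
    filters = {}
    for b in range(8):
        if A[b]:
            # Filter that best maps each patch in this bucket to the true pixel
            filters[b], *_ = np.linalg.lstsq(
                np.array(A[b]), np.array(y[b]), rcond=None)
    return filters

def apply_filters(lo, filters):
    """Enhance an upscaled image by applying the learned filter
    chosen per pixel via the same hash used during training."""
    out = lo.copy()
    r = PATCH // 2
    for i in range(r, lo.shape[0] - r):
        for j in range(r, lo.shape[1] - r):
            patch = lo[i - r:i + r + 1, j - r:j + r + 1]
            f = filters.get(bucket(patch))
            if f is not None:
                out[i, j] = patch.ravel() @ f
    return out
```

The key design point the sketch preserves is that filter *selection* is a cheap hash lookup at inference time; all the expensive learning happens once, offline, which is where RAISR's claimed speed advantage comes from.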
Now that you've been introduced to how RAISR works, here are a few sample photos:
The potential of a fast image-upscaling technology is immense. Aside from enabling the refresh of any photo taken at what is, by modern standards, a low resolution, we can imagine RAISR being integrated into Google's Camera app, Google Photos, and a range of other apps.