Google Is Using Machine Learning To Show High-Quality Images Without Requiring Extra Bandwidth

Google is turning to machine learning to compress images without losing quality or taking up extra bandwidth.

Called RAISR (Rapid and Accurate Image Super-Resolution), Google says its machine learning technology can use 75% less bandwidth per image without compromising on quality.

This could mean improved loading speeds and lower data costs for those viewing the images.

Introduced in November last year, and currently available only on Google+, RAISR improves image quality using a filter-like process: it takes a high-quality and a low-quality version of the same image, then learns filters that best upgrade the lower-quality version.

Google explains: “With RAISR, we instead use machine learning and train on pairs of images, one low quality, one high, to find filters that, when applied selectively to each pixel of the low-res image, will recreate details that are of comparable quality to the original.

“RAISR can be trained in two ways. The first is the “direct” method, where filters are learned directly from low and high-resolution image pairs.

“The other method involves first applying a computationally cheap upsampler to the low resolution image (as in the figure above) and then learning the filters from the upsampled and high resolution image pairs.

“While the direct method is computationally faster, the 2nd method allows for non-integer scale factors and better leveraging of hardware-based upsampling.”
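The second method described above can be sketched in a few lines of NumPy. This is a simplified, hypothetical illustration, not Google's implementation: real RAISR learns many filters bucketed by local gradient statistics (angle, strength, coherence), whereas this toy version fits a single least-squares filter mapping patches of a cheaply upsampled image to the matching high-resolution pixels.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_patches(img, k=3):
    """Collect all k x k patches and their centre coordinates from a 2-D image."""
    h, w = img.shape
    r = k // 2
    patches, centers = [], []
    for i in range(r, h - r):
        for j in range(r, w - r):
            patches.append(img[i - r:i + r + 1, j - r:j + r + 1].ravel())
            centers.append((i, j))
    return np.array(patches), centers

# Toy training pair: a high-res image and a simulated low-res capture of it.
high = rng.random((16, 16))
low = high[::2, ::2]                          # low-res version (2x downsample)
upsampled = np.kron(low, np.ones((2, 2)))     # computationally cheap upsampler

X, centers = extract_patches(upsampled)                  # patches of the upsampled image
y = np.array([high[i, j] for i, j in centers])           # matching high-res pixels

# Learn one filter by least squares: applied to each patch, it approximates
# the corresponding high-res pixel. (RAISR learns a whole bank of such
# filters and picks one per pixel; this sketch fits just one globally.)
f, *_ = np.linalg.lstsq(X, y, rcond=None)

restored = X @ f   # filtered estimate of the high-res pixel values
```

Because the identity filter (pass the patch centre through unchanged) is one of the candidates least squares can choose, the learned filter can never do worse on the training pair than the cheap upsampler alone, which is why this style of training reliably recovers detail.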

As it stands, Google is using RAISR to improve around 1 billion images a week on Google+.

And although it is currently only available for Google+, the tech giant is expected to start using RAISR on more devices and apps in the coming weeks.
