
Google's New Compression Tool Uses 75% Less Bandwidth Without Sacrificing Image Quality (thenextweb.com) 103

An anonymous reader quotes a report from The Next Web: Google just released an image compression technology called RAISR (Rapid and Accurate Super Image Resolution) designed to save your precious data without sacrificing photo quality. Claiming to use up to 75 percent less bandwidth, RAISR analyzes both low and high-quality versions of the same image. Once analyzed, it learns what makes the larger version superior and simulates the differences on the smaller version. In essence, it's using machine learning to create an Instagram-like filter to trick your eye into believing the lower-quality image is on par with its full-sized variant. Unfortunately for the majority of smartphone users, the tech only works on Google+ where Google claims to be upscaling over a billion images a week. If you don't want to use Google+, you'll just have to wait a little longer. Google plans to expand RAISR to more apps over the coming months. Hopefully that means Google Photos.
This discussion has been archived. No new comments can be posted.

  • by JasterBobaMereel ( 1102861 ) on Friday January 13, 2017 @08:07AM (#53660071)

    ....is a lie, it reduces image quality just in a way you cannot see visually

    If all you want to do is look at the image, this is fine, but anything else that needs its full quality will be sacrificed

    • it reduces image quality just in a way you cannot see visually

      If all you want to do is look at the image, this is fine, but anything else that needs its full quality will be sacrificed

      I'd love to see the Steg implementation

    • Maybe it's only Google's spy algorithms that need the full res, the rest of the world can have the shitty version.
    • ....is a lie, it reduces image quality just in a way you cannot see visually

      If all you want to do is look at the image, this is fine, but anything else that needs its full quality will be sacrificed

      Actually, I think you could probably see it if the device you're using isn't already so high-def that you can't tell the smallest details anyway. What they do is just request a 1/4 size image and then upscale it. Woo, clever.

      • by swillden ( 191260 ) <shawn-ds@willden.org> on Friday January 13, 2017 @12:21PM (#53661537) Journal

        ....is a lie, it reduces image quality just in a way you cannot see visually

        If all you want to do is look at the image, this is fine, but anything else that needs its full quality will be sacrificed

        Actually, I think you could probably see it if the device you're using isn't already so high-def that you can't tell the smallest details anyway. What they do is just request a 1/4 size image and then upscale it. Woo, clever.

        No, they request a 1/4 size image, then upscale it, then selectively restore details to portions of the image that humans pay attention to. The result isn't much larger than the 1/4 size image, but looks much better to people.

        I've been doing something vaguely similar (though not automatically) for years in my portrait photography. I selectively sharpen (actually, oversharpen) key facial features (especially eyes) that are the things that people focus on when looking at a portrait. This makes the whole image seem sharper and more vibrant, though it isn't. In fact, if the entire image were sharpened in the same way it would look terrible. This is especially useful when I shoot with a soft-focus filter which creates a very nice dreamy effect but can make the subject look dull. Soft focus plus sharpened eyes (and, often, lips -- it depends) make a beautiful portrait which people find more appealing and "realistic" than without the phony sharpening. Similarly, reduced overall resolution with detail retained in the right places makes an image look as good as the full resolution version, even though it's not.
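        The selective-sharpening trick described above is easy to sketch: apply an unsharp mask only inside the region the viewer fixates on, leaving everything else soft. This is an illustrative toy on a synthetic image, with a hypothetical "eye box" region and a home-rolled 3x3 unsharp mask, not anyone's production workflow:

```python
import numpy as np

def unsharp(img, amount=1.5):
    # 3x3 box blur (edge-padded), then add back the high-frequency residual.
    p = np.pad(img, 1, mode="edge")
    blur = sum(p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
               for dy in range(3) for dx in range(3)) / 9.0
    return img + amount * (img - blur)

# Synthetic "portrait": flat grey field with a darker block standing in
# for the feature the viewer focuses on (e.g. the eyes).
img = np.full((64, 64), 128.0)
img[24:40, 24:40] = 60.0

# Oversharpen only the region of interest; the rest is left untouched.
result = img.copy()
r0, r1, c0, c1 = 20, 44, 20, 44          # hypothetical eye box
result[r0:r1, c0:c1] = unsharp(img[r0:r1, c0:c1])
```

        Only the edges inside the box get the overshoot; flat areas (inside or outside the box) are unchanged, which is why the effect reads as "sharper" without looking processed.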

        • by Anonymous Coward

          I selectively sharpen (actually, oversharpen) key facial features

          That's weird, I sharpen the breasts and pussy and blur that face in my photos.

        • by BlackPignouf ( 1017012 ) on Friday January 13, 2017 @04:46PM (#53663511)

          +1.

          It's really impressive how much a difference sharp eyes make. I like taking close-up portraits with my 85mm f/1.4 on a full frame sensor.
          99% of the whole picture is basically completely out of focus. If the other 1% falls on the eyes, the picture looks perfectly sharp.
          It's junk otherwise.

          • by swillden ( 191260 ) <shawn-ds@willden.org> on Friday January 13, 2017 @05:02PM (#53663643) Journal

            +1.

            It's really impressive how much a difference sharp eyes make. I like taking close-up portraits with my 85mm f/1.4 on a full frame sensor. 99% of the whole picture is basically completely out of focus. If the other 1% falls on the eyes, the picture looks perfectly sharp. It's junk otherwise.

            Yup. When people look at portraits, they look first, last and middle at the eyes. My slight oversharpening brings out detail in the irises and lashes that people don't consciously notice but really make the image "pop".

            Most of photography is understanding how humans see images and enhancing (with various techniques, including composition, focus, lighting, post-processing etc., etc.) the portions that the photographer wants the audience to look at, in ways the audience finds compelling. In hindsight it's obvious that you can take random photos and go the other direction, losing detail that no one cares about, without degrading human perception of the image. Doing it well requires some understanding of the content of the image, though, so it takes a smart-ish system.

    • I wonder how well this compresses compared with FLIF [slashdot.org], which actually is lossless?

    • by ( 4475953 )

      Don't worry, you will still be able to find even the smallest detail after a few days of zoom and enhance. [knowyourmeme.com]

    • by Anonymous Coward

      I bet the difference is immediately noticeable, that's why they are afraid to show a full size example of this magical zoom and enhance technology.

      I'll stick to WebP lossless for archival and JPEG 95% for general use.

      • by Anonymous Coward

        Why would you use an obscure format like WebP for archival instead of something well-known like 24-bit PNG?

    • by PoopJuggler ( 688445 ) on Friday January 13, 2017 @10:04AM (#53660733)

      is a lie, it reduces image quality just in a way you cannot see visually

      If you can't see any difference then the visual quality is the same. Don't conflate visual quality with informational purity. Claiming it's a lossless conversion would be a lie, but that's not what they're claiming.

      • by arth1 ( 260657 )

        If you can't see any difference then the visual quality is the same.

        That is only true as long as the encoder controls the presentation.
        If a user is allowed to do things like zoom, rotate to non-square angles or even calibrate gamut, fidelity problems can become visible even if not seen in "standard presentation".

        • If a user is allowed to do things like zoom, rotate to non-square angles or even calibrate gamut, fidelity problems can become visible

          If you want to do that, then right-click to download the original image. But for the other 99.9% of the time, this will save bandwidth.

          • by arth1 ( 260657 )

            If you want to do that, then right-click to download the original image. But for the other 99.9% of the time, this will save bandwidth.

            True for the first two, but not for gamut corrections. Those also take effect on the first view, and defeat perceptual optimizations made for a different gamut.
            The underlying problem is the age-old one for the web, where publishers mistakenly believe that they can control the presentation. It becomes a trade-off where you make things better for most, but worse for some.

            A similar problem is seen where images are adjusted for LCD monitors, using subpixels to increase the apparent resolution or color clarity (o

    • by mjwx ( 966435 ) on Friday January 13, 2017 @11:50AM (#53661301)

      ....is a lie, it reduces image quality just in a way you cannot see visually

      If all you want to do is look at the image, this is fine, but anything else that needs its full quality will be sacrificed

      Well that's kind of the point. They didn't develop this for image manipulation tools... they developed it to save bandwidth on websites. If you can't visually tell the difference, Mission Hay-Fucking-Complished.

    • by Miamicanes ( 730264 ) on Friday January 13, 2017 @01:18PM (#53662071)

      How well do Google-compressed images deal with enlargement compared to JPEG, JPEG2000, etc.? It's nice to say a new algorithm reduces file size without visual consequences, but compression artifacts can manifest themselves in new, unforeseen ways. And future upsizing algorithms might end up being able to get better results from one due to "useless" data the other discards.

      Case in point: VHS had a nominal resolution of approximately 160x480 or 512 (with color resolution that barely approximated 40x480/512). But with extreme oversampling of a wider tape path (so you also capture unintended sideband artifacts), you can clean up & resample the video in ways that would be frankly *impossible* if your only remaining source copy was literally a 160x480/512 mpeg-1 capture.

      This is a big deal for preservation of analog media. It's deteriorating by the week, but for videotape in particular, there's no good way to massively oversample a decaying source in a way that will let us restore it better in the future. What we *need* is a videotape capture device with a dense array of read heads the full width of the tape, in at least two staggered rows (so row 2's sensors are centered between row 1's sensors), so the state of the entire tape can be captured (the dense array is needed because VCRs recorded diagonally via rotating heads to increase the tape speed relative to the read head... it "kind of" worked, but the capture quality with normal VCRs is *profoundly* impaired if the capture VCR's tracking deviates from the recording VCR's tracking... and the recording VCR's tracking ITSELF might have been "wobbly"). We now have the ability to make dense read heads, and sufficiently cheap phase-change magneto-optical storage space (e.g., non-LTH BD-R), to do high-density two-dimensional linear capture so the tracking can be handled after the fact via software.

      This isn't sci-fi. There are already floppy drive controllers that can use a normal PC quad-density floppy drive to oversample a 5-1/4" C64/Apple II/etc. floppy (~25 sectors/track, ~35 tracks at 40-track stepping) at 50+ sectors/track and 80-track stepping. They can recover data from old floppy disks that would have been *unreadable* by the original drives & computers **years** ago. And with some floppy mods to give you 160 or 320 track steppings and slow down the rotation speed, even more disks become readable. Not to mention, even the lesser method can trivially overcome disk-based copy protection (most of which depended on storing data in ways that old drives could semi-reliably read, but couldn't reliably/easily write, or wouldn't, if you used the official kernel/OS/BIOS/API).

      Anyway, the point is, for capturing decaying analog content from decaying media, compression is BAD if you ever want to be able to restore or enhance it someday.

      • by AmiMoJo ( 196126 )

        This is going to be used for social media. Google sees emerging markets where bandwidth is limited and figures it can get an edge by providing better quality images on slow connections. They do this by taking an original large image, down-scaling it to 1/4 size to reduce bandwidth, and then scaling it back up again on the device.

        The scaling up uses machine learning to improve the result. Essentially it learns what things look like in real life, and then uses that knowledge to fill in the 75% of pixels that
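        The downscale-then-learned-upscale idea can be sketched with a single learned filter: make a 1/4-size version of a training image, upscale it cheaply, then solve a least-squares problem for a 3x3 filter that maps cheap-upscale patches back to the true pixels. (The actual RAISR paper learns many such filters, bucketed by patch gradient angle, strength, and coherence; this single-global-filter version on random data is only a toy illustration of the principle.)

```python
import numpy as np

rng = np.random.default_rng(0)

def downscale2(img):
    # 2x2 box average: the "1/4 size" version sent over the wire.
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upscale2(img):
    # Nearest-neighbour upscale standing in for a cheap on-device upscaler.
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def patches(img, k=3):
    # One row per pixel: its k*k edge-padded neighbourhood, flattened.
    h, w = img.shape
    p = np.pad(img, k // 2, mode="edge")
    return np.stack([p[i:i + h, j:j + w].ravel()
                     for i in range(k) for j in range(k)], axis=1)

# Training pair: a high-res image and its cheaply-upscaled low-res version.
hi = rng.random((32, 32))
cheap = upscale2(downscale2(hi))

# Least-squares 3x3 filter mapping cheap-upscale patches to true pixels.
A, b = patches(cheap), hi.ravel()
filt, *_ = np.linalg.lstsq(A, b, rcond=None)

# Applying the learned filter beats the cheap upscale alone (in MSE).
restored = (patches(cheap) @ filt).reshape(hi.shape)
err_cheap = np.mean((cheap - hi) ** 2)
err_filt = np.mean((restored - hi) ** 2)
```

        The identity filter (center tap = 1) is one of the candidates the solver considers, so the learned filter can never do worse than the cheap upscale on the training data; on real photos the per-bucket filters are what recover edges and texture.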

    As long as they do NOT lose the original picture but only use this for bandwidth management, it is OK.
  • How well does it do if you replace the lower quality picture with a doodle? Of course this will be no good to the internet unless they open source it.
    • by djinn6 ( 1868030 )
      You should replace it with a meme. This is the internet after all.
    Probably quite badly. I know that Fraunhofer used an a cappella song by Dido as their "acid test" for judging various mp3 compression schemes. Complex sounds and images compress easily... it's the ones that are the equivalent of a pencil sketch on a napkin or a flute solo that make the limits of a compression algorithm *really* stand out.

  • This is awesome (Score:3, Interesting)

    by Esteanil ( 710082 ) on Friday January 13, 2017 @08:12AM (#53660097) Homepage Journal

    I can't wait until you get technology like this combined with eye tracking to decide on-the-fly what parts of your VR experience are the most visually important and can optimize rendering accordingly.

    One of my main pet peeves with current VR is that I can't see why you'd need to render at full resolution outside of the eye's focus area, which should make it possible to massively reduce the rendering required to get amazing quality.

    If you can also optimize by using machine learning to decide which areas are perceptually important that should make it possible to focus your processing resources even better on the parts that matter for the visual experience.
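    The eye-tracked idea boils down to a per-tile resolution scale that falls off with distance from the gaze point. A minimal sketch, with a made-up falloff constant, floor, and screen layout (real foveated renderers would tie these to eccentricity in degrees and the headset's optics):

```python
import math

def lod_scale(tile_center, gaze, falloff=0.01, floor=0.25):
    # Full resolution at the gaze point, dropping off with distance,
    # clamped to a floor so the periphery never collapses entirely.
    dx = tile_center[0] - gaze[0]
    dy = tile_center[1] - gaze[1]
    dist = math.hypot(dx, dy)
    return max(floor, 1.0 / (1.0 + falloff * dist))

gaze = (960, 540)                        # hypothetical tracked gaze, pixels
scales = {(x, y): lod_scale((x, y), gaze)
          for x in range(0, 1920, 240) for y in range(0, 1080, 270)}
```

    A renderer would then shade each tile at `scale * full_resolution`; the perceptual-importance idea from the parent would simply feed a second weight into the same clamp.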

    • by JustNiz ( 692889 )

      >> One of my main pet peeves with current VR is that I can't see why you'd need to render at full resolution outside of the eye's focus area,

      I really can't imagine that you can reduce the quality of any part of the image enough to make a performance difference without at least subconsciously noticing on some level, though. That said, nVidia and other companies are already working on rendering only where you're looking, determined by active eye tracking sensors.

      It seems to me that the real visual quality

      • I pretty much agree with your statements, with the caveat that the kind of LOD the OP is talking about would be really, really cool. Hard to do though: you need to track the eye and switch detail basically before the eye can register the image. If you're running at 60fps that means you get about two frames to switch detail levels to an appropriate setting, and really you probably want to get it done in that first frame to make it less likely that the player will notice.

        That takes more than just power, that

        • by JustNiz ( 692889 )

          For those of us with top-end GPUs I just hope we will be able to turn that "feature" off, because I will bet a whole dollar that I will be able to notice it and won't like it.

    • There's a company working on exactly that.
      http://www.theverge.com/circui... [theverge.com]
      https://www.getfove.com/ [getfove.com]

      I can't remember where, but I think I saw a hands-on review that said that it actually works really well.
      I found it slightly hard to believe, given that raising the detail after you've moved focus to another place (especially with saccades) is going to have some delay.

    • by GNious ( 953874 )

      Eh, FOVE?

  • 52 Billion images a year? Damn, that's way more traffic than I ever would have expected on G+

  • by Chuq ( 8564 ) on Friday January 13, 2017 @08:20AM (#53660141) Journal

    ... something they stole from Pied Piper.

  • Better article (Score:5, Informative)

    by alexhs ( 877055 ) on Friday January 13, 2017 @08:23AM (#53660167) Homepage Journal

    Summary's links are fact-free ads.
    I found this one [thestack.com], that has the merit to link to the arXiv article [arxiv.org] about the process.

  • Given the number of pictures harvested by Google over the years, and provided that many people send the same boring pics (Eiffel t., China w., s. of Liberty...), in gmail for instance Google just has to put the index of the same stock picture they already have (say 8 bytes) and that's it. For an 8 MB pic, that's a 99.9999% compression rate.
  • by drafalski ( 232178 ) on Friday January 13, 2017 @08:43AM (#53660275)

    RAISR (Rapid and Accurate Super Image Resolution) does not work... try Rapid and Accurate Image Super Resolution

  • "Rapid and Accurate Super Image Resolution" should give RASIR.

    Which illiterate philistine came up with "RAISR"?

  • Google is AAF (Score:2, Insightful)

    by Virtucon ( 127420 )

    Shit, first it was vp8/WEBM, but momentum seems to have died on that; now there's vp9 and it's better than vp8, and now images. Google, you're annoying as fuck with the moving targets on your open standards, and while I think it's great that we now have another way to store images, we still have GIF, PNG, SVG, JPEG and even your own )(*@)(*! WEBP [google.com], which is based on VP8, which you don't like anymore. So now with RAISR what do we all do, start buying dart boards to figure out what standards we as ISVs shou

    • by Anonymous Coward

      WEBM is video, and the momentum didn't die, when you view a "gif" on imgur you're actually viewing a webm, they just decided to use the wrong extension name. WEBM uses the VP9 codec. WEBP on the other hand is just a container format for the VP8 codec, which was derived from how frames were stored in WEBM when it was VP8-based. WEBP could upgrade to VP9 without changing the format, however it would require developers to link against a new library. And, the changes to VP9 were mostly advancements in moving vi

    • Google you're annoying as fuck with the moving targets on your open standards, and while I think it's great that we now have another way to store images but we still have GIF, PNG, SVG, JPEG and even your own )(*@)(*! WEBP

      This isn't a new standard. The images processed by this algorithm are standard JPEGs, just adjusted to reduce image complexity in a way that is imperceptible to humans.

      • And everybody agrees JPEG is old, tired and long in the tooth, with old patent issues. Then there was JPEG-2000, but again patent issues. Again, why would Google push this on top of what's essentially something that collectively we've been told is dying and encumbered by *possible* patent issues? I can see from the press info and details that they've come up with a way to use ML in a new way, great. But again, why not on top of WEBP, their own great new way of doing this, and not JPEG? You can convert JPE

        • And everybody agrees JPEG is old, tired and long in the tooth, with old patent issues. Then there was JPEG-2000, but again patent issues. Again, why would Google push this on top of what's essentially something that collectively we've been told is dying and encumbered by *possible* patent issues? I can see from the press info and details that they've come up with a way to use ML in a new way, great. But again, why not on top of WEBP, their own great new way of doing this, and not JPEG? You can convert JPEGs to WEBP, why not? Oh, the browsers don't support WEBP but do JPEG?

          You're missing the forest for the trees, I think.

          This technique is entirely independent of image format. You could do it with JPEG, or WEBP or anything you like... you could even do it with lossless compression formats, though you'd obviously be making them lossy. The researchers used JPEG because it was convenient.

    • can we ask that you make up your damn minds, please?!?!

      So what you're saying is: Please cease progress! Did I translate that right?

        • No, Google, eat your own dog food. WEBP is a great standard; show me how RAISR is better than WEBP, since both are under the same roof.

  • by Guppy ( 12314 ) on Friday January 13, 2017 @09:53AM (#53660647)

    Let's hope Google has had the forethought to have the image recognition algorithm pre-screen for images containing numbers, letters, and diagrams. Pattern-matching compression can be pretty scary when it decides two patterns are close enough:

    http://www.dkriesel.com/en/blo... [dkriesel.com]

  • by Anonymous Coward

    The algorithm is not for compression, but for enhancing a low resolution version of the image.

  • Rapid and Accurate Super Image Resolution

    Sounds like the name of a Japanese game show.

  • What's the Weissman score?!

  • Duplicate of Is Google's AI-Driven Image-Resizing Algorithm Dishonest? [slashdot.org] (November 19, 2016)
