A Comprehensive Guide to WebP
This is a guest post courtesy of Jonathan Klein. The post originally appeared on Jonathan’s blog on Feb 20th, 2013.
WebP (pronounced “weppy”) is an ambitious and promising new image format that was created by a team at Google back in September of 2010, but browser adoption has been slow. At the time of this writing the only major browsers that support WebP are Chrome and Opera, even though support in native applications is strong (albeit via plugins in many cases). In this article I want to present an unbiased, holistic view of the WebP landscape as it stands today. My end goal is to further the conversation about alternate image formats and image compression on the web, because I believe this is one of the biggest opportunities for making web browsing faster across all devices and networks.
The description that Google provides for WebP is concise and explicit:
WebP is a new image format that provides lossless and lossy compression for images on the web. WebP lossless images are 26% smaller in size compared to PNGs. WebP lossy images are 25-34% smaller in size compared to JPEG images at equivalent SSIM index. WebP supports lossless transparency (also known as alpha channel) with just 22% additional bytes. Transparency is also supported with lossy compression and typically provides 3x smaller file sizes compared to PNG when lossy compression is acceptable for the red/green/blue color channels.
The link above also has more information about how WebP achieves those smaller image sizes, as well as the container and lossless bitstream specifications. There have been a number of posts about real-world situations where WebP has succeeded, a few of which are linked below.
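To make the container specification mentioned above concrete: a WebP file is a RIFF container whose first twelve bytes identify it. As a small illustrative sketch (not part of the original post), a pure-Python sniffer can check those bytes:

```python
def is_webp(data: bytes) -> bool:
    """Return True if the byte string starts with a WebP RIFF header.

    A WebP file begins with the ASCII tag 'RIFF' (bytes 0-3), a 4-byte
    little-endian size field (bytes 4-7), and the tag 'WEBP' (bytes 8-11).
    """
    return len(data) >= 12 and data[0:4] == b"RIFF" and data[8:12] == b"WEBP"
```

This is the same kind of magic-number check that image libraries and browsers use to identify the format before handing the payload to a decoder.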
- Torbit saw a ~54% drop in file size from using WebP
- Google reduced image size in the Chrome Web Store by ~30% with WebP
- Another test showed a 40% improvement on average
- Opera Turbo saw a 22% reduction in data transfer and a 260% speed improvement by using WebP
WebP offers a number of benefits:

- Smaller images – this one is obvious, and was explained in detail above.
- It can replace both PNGs and JPEGs, so we could use one image format for most web content.
- It is royalty-free, open source, and anyone can contribute to or implement the spec.
- The project is backed by Google, and thus has a lot of resources behind it.
There are also some drawbacks and open concerns:

- Google’s original study used JPEGs as the source images, and looked at peak signal-to-noise ratio (PSNR) as a comparison metric, which is “accepted as a poor measure of visual quality”. Google did a second study using structural similarity (SSIM) vs. bits per pixel (bpp) that showed similar results as the first, which makes this concern moot.
- WebP lacks support for key image features, specifically EXIF data, ICC color profiles, alpha channel, and 4:4:4 YCrCb color representation. As detailed above, ICC color profiles and alpha transparency were added in a later version of WebP. I can’t speak to EXIF data or 4:4:4 YCrCb color, but if you know more about this please let me know and I’ll update this point. Update: Vikas Arora from the WebP team let me know that WebP now supports EXIF and XMP metadata as well.
- There is a cost associated with new image formats – developers have to decide which one to use and clients need to support it forever. There are third-party vendors (like Torbit) and open source modules (mod_pagespeed / ngx_pagespeed) that will automate this for you, so developers no longer have to do the work themselves.
- Progressive decoding of WebP requires a separate library, instead of reusing the existing WebM decoder.
- During the transition period when support is partial, you have to keep duplicate images on your server, creating both JPEGs and WebP images. This eats up storage space, and can make image retrieval slower (for example, if your working set used to fit in memory, doubling it may force you to go to disk for some images).
- CDNs and intermediate proxies have to cache both versions of the image, reducing the cache hit ratio.
- Currently the lack of context in Accept headers means that all WebP responses are marked as Cache-Control: private so the origin can do user agent detection (more on this below).
- Rolling out new image formats is hard.
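If clients did advertise image/webp in their Accept headers (which, as noted above, is largely not the case today, forcing user agent detection instead), content negotiation could be cache-friendly. Here is an illustrative sketch; the function name and response shape are my own, not from any particular server framework:

```python
def pick_image_variant(accept_header: str, base_name: str) -> tuple:
    """Choose between a .webp and .jpg variant of an image based on the
    request's Accept header, returning (filename, response_headers).

    If the client advertises image/webp, serve the WebP variant; otherwise
    fall back to JPEG. Sending 'Vary: Accept' lets shared caches store both
    variants keyed on the Accept header, instead of the origin having to
    mark responses Cache-Control: private and sniff the user agent.
    """
    accepts_webp = "image/webp" in accept_header.lower()
    filename = base_name + (".webp" if accepts_webp else ".jpg")
    headers = {
        "Content-Type": "image/webp" if accepts_webp else "image/jpeg",
        "Vary": "Accept",
        "Cache-Control": "public, max-age=31536000",
    }
    return filename, headers
```

The trade-off from the list above still applies: intermediaries now cache two objects per image, but at least the responses stay publicly cacheable.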
There are many dimensions to consider when thinking about the performance implications of image formats. Most people gravitate toward quality (subjective) and compression sizes (performance). Both are interesting, but the elapsed time required to download the image is only part of the problem. After the image arrives on the local system there are several steps before it’s seen on the screen. The exact steps depend on the browser, operating system, and hardware. The basic pipeline looks something like this:
- System memory is allocated and the encoded image is stored in this memory.
- Additional system memory is allocated, and the image is decoded at native resolution into this intermediary.
- The native image may need to be converted into another intermediary format.
- The appropriate intermediary is copied to the GPU (in hardware-accelerated scenarios).
- Finally, the image is drawn to the screen.
On mobile-caliber devices it’s not uncommon to see these operations take several hundred milliseconds. Some browsers attempt to do this work in parallel, so it may not directly block execution on the primary thread, but the elapsed time of this process impacts what are often referred to as TTG (time to glass) metrics.
To improve TTG on the client you need to streamline this process. GPUs are massively parallel, highly specialized silicon that can perform graphics operations more efficiently than general-purpose CPU silicon; some operations can be performed 1000x more quickly on the GPU. There is broad hardware support for traditional image formats like JPEG today, which speeds up this process, and these opportunities don’t exist for WebP.
For WebP to make the web faster, the elapsed time saved downloading the images (from fewer bytes) needs to be greater than the additional cost of getting the image to the screen (TTG). That’s not the case in the real world today. And over time, other image formats including JPEG have more headroom to improve. Modern hardware is amazing!
This is a point that few people talk about, and it’s great that Jason was willing to explain it. If you spend more incremental time getting an image actually displayed on the screen than you save by downloading fewer bytes, that image has actually made the web slower.
I corresponded with Ilya about this issue, and he said that while WebP decoding is about 1.4x slower than JPEG, this difference is largely overwhelmed by the speed improvement that comes from sending fewer bytes across the wire. Some additional comments from him are below.
First off, to give the IE team credit, I do think they are ahead of most other browser vendors when it comes to their GPU + mobile browser story; in fact, probably significantly so. I think Chrome and FF will close this gap in the not-so-distant future, but credit where credit’s due. Now, having said that, I don’t know the specific details of Jason’s test, but my guess is he’s testing image-intensive apps, with good network connectivity (3.9/4G), on the latest Nokia hardware, and with minimal or no concern for bandwidth caps… That’s fair, but it’s not representative.
Most users are stuck on non-HSPA+ connections, with middle-of-the-road hardware, and are using an older (a year or more) WebKit build. This profile, compared to what he’s [likely] analyzing, is night and day. In fact, in markets where mobile browsing is already dominant (India), bandwidth is the #1 concern; bits are expensive, literally. Hence, I take any argument about GPU performance with a grain of salt. Yes, it is definitely an important metric, and one that all of us will likely pay much closer attention to in the future, but it is not the limiting factor today for the vast majority of the population.
This is where WebP helps and why we’re investing all the effort into it… Not to say that WebP is the answer to all things. It just happens to be the best option at this very moment. I hope we have more image formats, with even better compression and better “time to glass,” in the future! These requirements are not entirely exclusive.
There is an important distinction to mention here: when we are talking about hardware-accelerated browsers where image decoding happens on the GPU, lack of hardware support for WebP can cause a significant increase in TTG. If we look at browsers that do image decoding on the CPU, the time difference is not as large (~1.4x, as Ilya mentioned). In the former case, it is likely that in the real world the increased TTG overwhelms the decrease in download time from WebP in at least some, and possibly most, cases. In the latter case, the decrease in download time likely dominates the difference in decoding time in at least some, and possibly most, cases. We clearly need more hard data to be published in this area to further inform this debate, and I will update this post as studies are published.
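The trade-off in the last few paragraphs can be made concrete with a toy break-even model (the numbers below are illustrative assumptions, not measurements): WebP is a net win only when the download time saved by the smaller payload exceeds the extra decode time.

```python
def webp_net_savings_ms(jpeg_bytes: int, webp_bytes: int,
                        bandwidth_bps: float,
                        jpeg_decode_ms: float,
                        decode_slowdown: float = 1.4) -> float:
    """Net time-to-glass change in ms (positive means WebP is faster).

    download savings = bits saved / bandwidth
    decode penalty   = extra decode time from the slower WebP decoder
    """
    download_savings_ms = (jpeg_bytes - webp_bytes) * 8 / bandwidth_bps * 1000
    decode_penalty_ms = jpeg_decode_ms * (decode_slowdown - 1)
    return download_savings_ms - decode_penalty_ms
```

For example, on a 1 Mbps mobile link, shrinking a 100 KB JPEG by 30% saves about 240 ms of download time, while a 50 ms JPEG decode at a 1.4x slowdown costs only 20 ms extra, so WebP wins comfortably. On a 100 Mbps link the download savings fall to ~2.4 ms and the decode penalty dominates, which is exactly the hardware-accelerated-browser scenario described above.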
I’ll also reiterate that if you take speed out of the equation and just talk about the cost of the bytes on the wire (due to bandwidth caps), WebP provides significant and unambiguous benefits.
If we are able to overcome these challenges, here are some rough numbers that gauge the size of the opportunity. When Google Instant launched, Google claimed that if everyone in the world used it we would save more than 3.5 billion seconds a day, or 11 hours saved every second. This works out to about 450 human work-years (assuming 40-hour work weeks) of time per day. If you think about the time savings we could get by reducing the cost of delivering and rendering images on the web, you can very quickly see that we would save many times that. This means that billions of dollars’ worth of human time could be saved each year by finding a solution to this problem that works universally.
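The arithmetic behind those figures checks out, using only the numbers quoted above:

```python
seconds_saved_per_day = 3.5e9  # Google Instant's claimed daily savings

# 3.5 billion seconds spread over 86,400 wall-clock seconds in a day,
# expressed in hours: roughly 11 hours saved every second.
hours_saved_per_second = seconds_saved_per_day / 86_400 / 3600

# One work-year at 40 hours/week for 52 weeks, in seconds.
work_year_seconds = 40 * 52 * 3600

# Roughly 450-470 human work-years of time saved per day.
work_years_per_day = seconds_saved_per_day / work_year_seconds
```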
WebP also provides a tremendous amount of value on mobile devices with bandwidth caps. As I said above, 60% of bytes served on the web come from images, and for many mobile sites/apps that number is going to be much higher. Take any mobile shopping app, for example: once you have the app on your phone, you only have to download a few bytes of JSON data plus images for all of the products you are looking at. WebP is a huge win here. Even if the increased decoding time offsets some of the improvement in download time, people are still saving money on bandwidth usage.
In addition to talking to Jason and Ilya, I’ve reached out to Mozilla to get more opinions about why adoption has been slow, and I will update this post as I get more feedback. If you have more examples of WebP studies, additional benefits/drawbacks, or information about the political and technical reasons that are blocking adoption, please post in the comments. If you prefer to contact me anonymously, you can email me directly at email@example.com.
Improving the web is difficult, but in the words of Ilya Grigorik, “some uphill battles are worth fighting – I think this is a good one.”
Published at DZone with permission of Mehdi Daoudi, DZone MVB. See the original article here.