How HTTP/2 Is Changing Web Performance Best Practices



This article is featured in the new DZone Guide to Performance and Monitoring, Volume III. Get your free copy for more insightful articles, industry statistics, and more. 

The Hypertext Transfer Protocol (HTTP) underpins the World Wide Web. If that sounds dated, consider that the version of the protocol most commonly in use, HTTP 1.1, is nearly 20 years old. When it was ratified back in 1997, floppy drives and modems were must-have digital accessories and Java was a new, up-and-coming programming language. Ratified in May 2015, HTTP/2 was created to address significant performance problems with HTTP 1.1 in the modern web era. Adoption of HTTP/2 has increased in the past year as browsers, web servers, commercial proxies, and major content delivery networks have committed to or released support.

Unfortunately for people who write code for the web, transitioning to HTTP/2 isn’t always straightforward, and a speed boost isn’t automatically guaranteed. The new protocol challenges some common wisdom when building performant web applications, and many existing tools—such as debugging proxies—don’t support it yet. This article is an introduction to HTTP/2 and how it changes web performance best practices.

Binary Frames: The “Fundamental Unit” Of HTTP/2

One benefit of HTTP 1.1 (over non-secure connections, at least) is that it supports interaction with web servers using plain text: in a telnet session on port 80, typing GET / HTTP/1.1 followed by a Host header and a blank line returns an HTML document on most web servers. Because it’s a text protocol, debugging is relatively straightforward.
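The text protocol is easy to reproduce without telnet. The sketch below starts Python’s built-in HTTP server on a local port (so no external host is needed) and sends the same request bytes a telnet session would type:

```python
import socket
import threading
from http.server import HTTPServer, SimpleHTTPRequestHandler

class Handler(SimpleHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # the handler defaults to HTTP/1.0

# Start a throwaway local server on an ephemeral port.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Speak the text protocol directly, exactly as a telnet session would.
sock = socket.create_connection(("127.0.0.1", server.server_port))
sock.sendall(b"GET / HTTP/1.1\r\nHost: 127.0.0.1\r\nConnection: close\r\n\r\n")

response = b""
while chunk := sock.recv(4096):
    response += chunk
sock.close()
server.shutdown()

# The status line and headers come back as human-readable text.
print(response.split(b"\r\n")[0].decode())  # prints: HTTP/1.1 200 OK
```

Both the request and the response are ordinary text, which is exactly the property HTTP/2’s binary framing gives up.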

Instead of text, requests and responses in HTTP/2 are represented by a stream of binary frames, described as the “basic protocol unit” in the HTTP/2 RFC. Each frame has a type that serves a different purpose. The authors of HTTP/2 realized that HTTP 1.1 would exist indefinitely (the Gopher protocol is still out there, after all), so the binary frames of an HTTP/2 request map onto an HTTP 1.1 request to ensure backwards compatibility.

There are some new features in HTTP/2 that don’t map to HTTP 1.1, however. Server push (also known as “cache push”) and stream reset are features that correspond to types of binary frames. Frames can also have a priority that allows clients to give servers hints about the priority of some assets over others.

Other than using Wireshark 2.0, one of the easiest ways to actually see the individual binary frames is by using the net-internals tab of Google Chrome (type chrome://net-internals/#http2 into the address bar). The data can be hard to understand for large web pages. Rebecca Murphey wrote a useful tool for displaying it visually in the command line.

Additionally, the protocol used to fetch assets can be displayed in the Chrome web developer tools—right click on the column header and select “Protocol”.

All major browsers require HTTP/2 connections to be secure. This is done for a practical reason: an extension of TLS called Application-Layer Protocol Negotiation (ALPN) lets servers know the browser supports HTTP/2 (among other protocols) and avoids an additional round trip. This also helps services that don’t understand HTTP/2, such as proxies—they see only encrypted data over the wire.

Reducing Latency With Multiplexing

A key performance problem with HTTP 1.1 is latency, or the time it takes to make a request and receive a response. This issue has become more pronounced as the number of images and amount of JavaScript and CSS on a typical webpage continue to increase. Every time an asset is fetched, a new TCP connection is generally needed. This requirement is important for two reasons: the number of simultaneous open TCP connections per host is limited by browsers, and there’s a performance penalty incurred when establishing new connections. If a physical web server is far away from users (for example, a user in Singapore requesting a page hosted at a data center on the U.S. East Coast), latency also increases. This scenario is not uncommon—one recent report says that more than 70% of global Internet traffic passes through the unmarked data centers of Northern Virginia.

HTTP 1.1 offers different workarounds for latency issues, including pipelining and the Keep-Alive header. However, pipelining was never widely implemented, and the Keep-Alive header suffered from head-of-line blocking: the current request must complete before the next one can be sent.

In HTTP/2, multiple asset requests can reuse a single TCP connection. Unlike HTTP 1.1 requests that use the Keep-Alive header, the requests and response binary frames in HTTP/2 are interleaved and head-of-line blocking does not happen. The cost of establishing a connection (the well-known “three-way handshake”) has to happen only once per host. Multiplexing is especially beneficial for secure connections because of the performance cost involved with multiple TLS negotiations.

[Figure: Requests for multiple assets on a single host use a single TCP connection in HTTP/2.]

Implications For Web Performance: Goodbye Inlining, Concatenation, And Image Sprites?

HTTP/2 multiplexing has broad implications for front-end web developers. It removes the need for several long-standing workarounds that aim to reduce the number of connections by bundling related assets, including:

  • Concatenating JavaScript and CSS files: Combining smaller files into a larger file to reduce the total number of requests.
  • Image spriting: Combining multiple small images into one larger image.
  • Domain sharding: Spreading requests for static assets across several domains to increase the total number of open TCP connections allowed by the browser.
  • Inlining assets: Bundling assets with the HTML document source, including base-64 encoding images or writing JavaScript code directly inside <script> tags.
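As a concrete illustration of the inlining workaround, an image can be base-64 encoded into a data URI and embedded directly in markup, trading a separate request for a larger HTML document. A minimal sketch (the image bytes below are placeholder data, not a real image):

```python
import base64

# Placeholder bytes standing in for a small image file's contents.
image_bytes = b"\x89PNG fake image data"

# Inlining: encode the bytes and embed them in the HTML itself,
# so the browser makes no separate request for the asset.
data_uri = "data:image/png;base64," + base64.b64encode(image_bytes).decode("ascii")
img_tag = f'<img src="{data_uri}" alt="logo">'

print(img_tag)
```

Under HTTP/2, the extra request this avoids is cheap, while the inlined bytes can never be cached independently of the document.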

With unbundled assets, there is greater opportunity to aggressively cache smaller pieces of a web application. It’s easiest to explain this with an example:

[Figure: A concatenated and fingerprinted CSS file unbundles into four smaller fingerprinted files.]

A common concatenation pattern has been to bundle style sheet files for different pages in an application into a single CSS file to reduce the number of asset requests. This large file is then fingerprinted with an MD5 hash of its contents in the filename so it can be aggressively cached by browsers. Unfortunately, this approach means that a very small change to the visual layout of the site, like changing the font style for a header, requires the entire concatenated file to be downloaded again.
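The fingerprinting step can be sketched in a few lines; the helper and file names below are illustrative. Because the digest is derived from the file contents, any edit yields a new URL, while untouched files keep their long-lived cache entries:

```python
import hashlib

def fingerprint(name: str, content: bytes) -> str:
    """Return a cache-busting filename like 'header-<md5>.css'."""
    digest = hashlib.md5(content).hexdigest()
    stem, _, ext = name.rpartition(".")
    return f"{stem}-{digest}.{ext}"

css_v1 = b"h1 { font-family: serif; }"
css_v2 = b"h1 { font-family: sans-serif; }"  # a one-line style tweak

print(fingerprint("header.css", css_v1))
print(fingerprint("header.css", css_v2))
# The two names differ, so browsers re-fetch only the changed file;
# every other fingerprinted asset stays cached.
```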

When smaller asset files are fingerprinted, significant amounts of JavaScript and CSS components that don’t change frequently can be cached by browsers—a small refactor of a single function no longer invalidates a massive amount of JavaScript application code or CSS.

Lastly, deprecating concatenation can reduce front-end build infrastructure complexity. Instead of having several pre-build steps that concatenate assets, they can be included directly in the HTML document as smaller files.

Potential Downsides Of Using HTTP/2 In The Real World

Optimizing only for HTTP/2 clients potentially penalizes browsers that don’t yet support it. Older browsers still prefer bundled assets to reduce the number of connections. As of February 2016, caniuse.com reports global browser support of HTTP/2 at 71%. Much like dropping Internet Explorer 8.0 support, the decision to adopt HTTP/2 or go with a hybrid approach must be made using relevant data on a per-site basis.

As described in a post by Khan Academy Engineering that analyzed HTTP/2 traffic on its site, unbundling a large number of assets can actually increase the total number of bytes transferred. With zlib, compressing a single large file is more efficient than compressing many small files separately. The effect can be significant on an HTTP/2 site that has unbundled hundreds of assets.
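The compression effect is easy to reproduce with zlib directly: one concatenated blob lets the compressor exploit redundancy across all of the content, while compressing each small file separately pays fixed per-file overhead and loses cross-file repetition. A sketch with synthetic module data:

```python
import zlib

# Synthetic "modules" with the kind of repetition real JS/CSS has.
modules = [f"function module{i}() {{ return 'value-{i}'; }}\n".encode() * 20
           for i in range(50)]

bundled = len(zlib.compress(b"".join(modules)))
unbundled = sum(len(zlib.compress(m)) for m in modules)

print(f"one concatenated file: {bundled} compressed bytes")
print(f"fifty separate files:  {unbundled} compressed bytes")
# The separate files total noticeably more bytes on the wire.
```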

Using HTTP/2 in browsers also requires assets to be delivered over TLS. Setting up TLS certificates can be cumbersome for the uninitiated. Fortunately, open-source projects such as Let’s Encrypt are working on making certificate registration more accessible.

A Work In Progress

Most users don’t care what application protocol your site uses—they just want it to be fast and work as expected. Although HTTP/2 has been officially ratified for almost a year, developers are still learning best practices when building faster websites on top of it. The benefits of switching to HTTP/2 depend largely on the makeup of the particular website and what percentage of its users have modern browsers. Moreover, debugging the new protocol is challenging, and easy-to-use developer tools are still under construction.

Despite these challenges, HTTP/2 adoption is growing. According to researchers scanning popular web properties, the number of top sites that use HTTP/2 is increasing, especially after CloudFlare and WordPress announced their support in late 2015. When considering a switch, it’s important to carefully measure and monitor asset- and page-load time in a variety of environments. As vendors and web professionals educate themselves on the implications of this massive change, making decisions from real user data is critical. In the midst of a website obesity crisis, now is a great time to cut down on the total number of assets regardless of the protocol.

4/4 Major Browser Vendors Agree: HTTPS Is Required

Firefox, Internet Explorer, Safari, and Chrome all agree: HTTPS is required to use HTTP/2 in the first place. This is critical because of a new extension to Transport Layer Security (TLS) that allows browsers and servers to negotiate which application-layer protocol to use. When a TLS connection is established, the client advertises support for HTTP 1.1, SPDY, or HTTP/2, and the server selects the protocol to use, without an additional round trip.
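Python’s standard ssl module exposes this negotiation directly: a client advertises the protocols it supports with set_alpn_protocols, and after the handshake selected_alpn_protocol reports what the server chose. A minimal client-side sketch (no connection is actually made below; example.org is a stand-in host):

```python
import ssl

# Build a client context that advertises HTTP/2 first, HTTP 1.1 as fallback.
ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2", "http/1.1"])

# Once a socket is wrapped and the TLS handshake completes, the chosen
# protocol is available with no extra round trip:
#
#   with socket.create_connection(("example.org", 443)) as raw:
#       with ctx.wrap_socket(raw, server_hostname="example.org") as tls:
#           tls.selected_alpn_protocol()  # "h2" if the server agrees
```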

Because of changes Google recently announced, it’s critical that backend SSL libraries are updated before Chrome drops support for the older Next Protocol Negotiation (NPN) standard in favor of Application-Layer Protocol Negotiation (ALPN). Unfortunately, for almost every modern Linux distribution, this means compiling web server software from source against OpenSSL version 1.0.2 (not a trivial task).

With the latest version of OpenSSL installed on servers, however, it’s possible to check hosts for HTTP/2 support from the command line:

me@ubuntu-trusty-64:~$ echo | openssl s_client -alpn h2 -connect google.com:443 | grep ALPN
ALPN protocol: h2

A web-based tool from KeyCDN and the is-http2 package can also help determine host support.

The transition to the new protocol is relatively straightforward for sites that are already delivered securely. For non-secure sites, web servers (and potentially CDNs) will need to be correctly configured for HTTPS. New open-source projects such as Let’s Encrypt aim to make this process as easy, free, and automated as possible. Of course, regardless of HTTP/2 support, moving to HTTPS is becoming more important. Some search engines now use secure sites as a positive signal in page ranking, and privacy advocates and industry experts strongly recommend it.

Determining Back End and Content Delivery Network Support

If HTTPS is properly configured, the next step is determining if the server or proxy software supports HTTP/2. The IETF HTTP Working Group maintains a comprehensive list of known implementations on its website, and popular web servers have all released or committed to support. Most popular application development languages have HTTP/2 packages as well.

Server or Cloud Provider    HTTP/2 Support
Apache                      > 2.4.17
NGINX                       > 1.9.5
Microsoft IIS               Windows Server 2016 Technical Preview
Google AppEngine            Available with TLS
Amazon S3                   No (as of 1/16)

Support for the full suite of HTTP/2 features, especially server push, is not guaranteed. It’s necessary to read the release notes to determine which features are fully supported.

If your site uses assets delivered by a Content Delivery Network (CDN), major vendors like CloudFlare and KeyCDN already support the new protocol even if your back end doesn’t. With some providers, enabling HTTP/2 between clients and edge locations can be as easy as toggling a radio button on a web form.


CDN                 Supports HTTP/2 as of Jan. 2016?
CloudFlare          Yes
KeyCDN              Yes
Amazon CloudFront   No
Using Wireshark For Debugging

HTTP/2 tooling still has a long way to go before catching up with HTTP 1.1. Because HTTP/2 is a binary protocol, simple debugging using telnet won’t work, and standard debugging proxies like Charles and Fiddler do not offer support as of January 2016.

In the first part of this article, we discussed how to use Chrome Net Internals (chrome://net-internals#http2) to debug traffic. For more advanced analysis, the low-level C API of the nghttp2 library (or its Python bindings) or Wireshark 2.0 is needed. Here, we’ll focus on Wireshark.

Configuring Wireshark to view HTTP/2 frames requires additional setup because all traffic is encrypted. To view Firefox or Chrome HTTP/2 traffic, you have to log TLS session keys to a file specified by the environment variable SSLKEYLOGFILE. On Mac OS X, set the environment variable before launching the browser from the command line (equivalent instructions exist for Windows):

$ export SSLKEYLOGFILE=~/Desktop/tls_fun.log 
$ open -a Google\ Chrome https://nghttp2.org/

Wireshark must be configured to use the SSLKEYLOGFILE in the preferences menu under the “SSL” protocol listing.

When starting Wireshark for the first time, a network interface needs to be selected. Filtering on port 443 is a good idea, since all HTTP/2 traffic in Chrome is secure. After clicking on the shark icon, recording begins for all traffic sent over that interface. The output can be overwhelming, but it’s easy to isolate HTTP/2 traffic by typing “http2” into the filter text box. Captured HTTP/2 packets can then be decrypted into individual HTTP/2 binary frames.

Using the tabs at the bottom of the data panel, it’s possible to see the decrypted frames. HEADERS frames, which are always compressed, can also be displayed decompressed.

The Transition Is Not Yet Straightforward

For many web applications in early 2016, transitioning to HTTP/2 is not yet straightforward. Not only is HTTPS required in order to use the new protocol in browsers, it’s likely that server software will also need to be upgraded. In some cases, particularly with Backend-as-a-Service providers or Content Delivery Networks, HTTP/2 support might not be available—or even promised—yet. Lastly, easy-to-use debugging tools are still being worked on.

As many teams have already discovered, it is likely that migrating any large site to HTTP/2 will contain surprises. Despite these challenges, many large web properties have successfully launched HTTP/2 support with significant performance benefits. Carefully measuring real-user performance and understanding the limitations of current tooling is helpful for making the transition as smooth as possible.

Additional Resources

This article was written by Clay Smith, with contributions of technical feedback and invaluable suggestions from Jeff Martens, Product Manager for New Relic Browser, and web performance expert Andy Davies.



