
What is SPDY? Deployment Recommendations


Originally written by Sehoon Park

In the middle of this year, a news report announced that Facebook plans to support Google's SPDY protocol at scale, and that it is already implementing SPDY/2. Here is the official response from Facebook that I found on this topic. Among the various efforts Google has devised and suggested to make the Web faster, I think SPDY is the one that will become a new industry standard, and it is expected to be included in HTTP/2.0.

SPDY (pronounced "speedy") is a new protocol Google has proposed as part of its effort to "make the Web faster." It addresses the shortcomings of HTTP, which was devised for the early Internet environment, so that the present and future Internet can be used more efficiently.

In this article, I will provide a brief introduction to the features and merits of SPDY, describe the current state of SPDY support, and explain what to do and what to consider when introducing it.

When was the latest version of HTTP released?

HTTP 0.9 was first announced in 1991, and HTTP 1.0 and 1.1 were released in 1996 and 1999, respectively; HTTP has not changed in the 10-plus years since. Today, however, a typical webpage is roughly 20 times bigger, with 20 times more HTTP requests, than a webpage in the 1990s. Table 1 below shows data quoted from Google I/O 2012.

Date           Mean page size   Mean # of requests per page   Mean # of domains
Nov. 15, 2010  702 KB           74                            10
May 5, 2012    1059 KB          84                            12

Table 1: Comparison of Mean Webpage Size in 2010 and 2012.

The Yahoo! main page in 1996 was 34 KB, only about 1/30 of the mean webpage size in 2012. There is a significant gap even between 2010 and 2012, let alone between the 1990s and 2012. The mean page size and number of requests keep growing as user experience becomes more sophisticated and high-speed Internet spreads.

The characteristics of today's webpages differ from those of the past as follows:

  • Consist of many more resources.
  • Use multiple domains.
  • Operate more dynamically.
  • Place more emphasis on security.

Considering how today's web environment differs from the past, Google announced the SPDY protocol, which addresses the disadvantages of HTTP. SPDY focuses especially on reducing load latency.

Features of SPDY

Figure 1 below shows the layers of SPDY compared to the traditional TCP/IP layer model.


Figure 1: HTTP vs. SPDY.

The features of SPDY can be summarized as follows:

  • Always operates on Transport Layer Security (TLS).
    • TLS is the successor of Secure Sockets Layer (SSL). Because they are two versions of the same protocol, the names TLS and SSL are sometimes used interchangeably, including in this article.
  • Therefore, SPDY applies only to websites served over HTTPS.

HTTP Header compression

HTTP headers contain largely redundant content that is resent with every request, so simply compressing them improves performance significantly. According to Google, compressing HTTP headers reduces their size by 10-35% even for the initial request, and by 80-97% when requests are repeated over a long-lived connection. Header compression is even more useful when upload bandwidth is limited, as on mobile devices. As the average HTTP header is now around 2 KB and growing, the benefit of compressing headers is expected to grow as well.
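To illustrate the redundancy, the sketch below compresses two consecutive requests through a single zlib stream, roughly as SPDY/2 does per connection. The header values are made up, and the SPDY-defined compression dictionary is omitted for simplicity:

```python
import zlib

# Two consecutive requests on the same connection share most header text.
req1 = (b"GET /index.html HTTP/1.1\r\n"
        b"Host: www.example.com\r\n"
        b"User-Agent: Mozilla/5.0 (X11; Linux x86_64) Gecko/20100101\r\n"
        b"Accept: text/html,application/xhtml+xml\r\n"
        b"Accept-Encoding: gzip, deflate\r\n"
        b"Cookie: session=abc123; theme=dark\r\n\r\n")
req2 = req1.replace(b"/index.html", b"/style.css")

# SPDY keeps one compression stream per connection, so the second
# request's headers compress against data already seen in the first.
comp = zlib.compressobj()
first = comp.compress(req1) + comp.flush(zlib.Z_SYNC_FLUSH)
second = comp.compress(req2) + comp.flush(zlib.Z_SYNC_FLUSH)

print(len(req1), len(first))    # initial request: modest savings
print(len(req2), len(second))   # repeated request: drastic savings
```

The second request shrinks to a small fraction of its original size, matching the 80-97% figure for long-lived connections.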

Binary protocol

Because SPDY uses binary framing rather than text-based framing, parsing is faster and less error-prone.


Multiplexing

SPDY handles multiple independent streams concurrently over a single connection. Unlike HTTP, which handles one request at a time per connection, responding to requests in sequence, SPDY handles multiple requests and responses concurrently over a small number of connections. And unlike HTTP pipelining, which uses a FIFO queue so that one delayed response delays all the others, SPDY handles each request and response independently.

Full-duplex interleaving and stream prioritization

Because SPDY allows interleaving (one stream's frames can be mixed with another's in transit) together with stream prioritization, higher-priority data can cut in while lower-priority data is being transferred and arrive earlier.
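The combined effect of interleaving and prioritization can be sketched with a simple priority queue. This is a toy model, not the actual framing logic; the frame names and priority numbers are made up:

```python
import heapq
from itertools import count

# Frames from concurrent streams share one connection; lower numbers
# mean higher priority, and arrival order breaks ties.
arrival = count()
queue = []

def enqueue(priority, frame):
    heapq.heappush(queue, (priority, next(arrival), frame))

enqueue(4, "image frame 1")
enqueue(4, "image frame 2")
enqueue(0, "html frame 1")    # arrives last but is sent first

sent = [heapq.heappop(queue)[2] for _ in range(len(queue))]
print(sent)  # ['html frame 1', 'image frame 1', 'image frame 2']
```

The HTML frame overtakes the image frames that were queued before it, which is exactly the "jump into the process of transportation" behavior described above.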

Server push

Servers can push content without client requests, with no need for workarounds such as Comet or long polling. Unlike inlining, SPDY server push keeps resources cacheable while using the same or less bandwidth. To implement server push, however, you need additional web server application logic.

No need to rewrite a website

Except for features that require additional implementation, such as server push, you don't need to change the website itself to apply SPDY; however, both the browser and the server must support SPDY. SPDY is completely transparent to browser users: there is no protocol scheme like "spdy://", and the browser displays nothing to indicate that SPDY is in use.

Based on these characteristics, the following table summarizes the differences between HTTP and SPDY.

                    HTTP/1.1                                     SPDY
Secure              Not default                                  Default
Header compression  No                                           Yes
Multiplexing        No                                           Yes
Full-duplex         No                                           Yes
Prioritization      No (a browser employs heuristics instead)    Yes
Server push         No                                           Yes
DNS lookups         More                                         Fewer
Connections         More                                         Fewer

Table 2: HTTP/1.1 vs. SPDY.

In short, SPDY is a protocol designed to use TCP connections more efficiently by improving HTTP's data transfer format and connection management. As a result of these efforts, in a test with the top 25 websites, SPDY was 39-55% faster than HTTP + SSL.

Why does SPDY need TLS?

Why does SPDY use TLS even though TLS adds latency for encryption and decryption? Google's SPDY whitepaper gives the following answer:

"In the long term, the importance of web security will only grow, so by specifying TLS as the substrate of SPDY we want to get better security in the future. We also need TLS for compatibility with the current network infrastructure; in other words, we need it to avoid compatibility issues with communication going through existing proxies."

Despite this stated reason, if you look at the actual implementation, SPDY depends heavily on TLS's Next Protocol Negotiation (NPN) extension. The NPN extension determines whether a request arriving on port 443 is SPDY, and which version of SPDY it uses, in order to decide whether subsequent communication should be handled with SPDY. Without TLS NPN, an additional round trip would be needed to negotiate SPDY.
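NPN was later superseded by ALPN (RFC 7301), which plays the same role: the client advertises the protocols it speaks inside the TLS handshake, and the server picks one, with no extra round trip. A rough client-side sketch with Python's standard ssl module follows; "spdy/3" and "http/1.1" are the conventional protocol identifiers, but treat the details as illustrative:

```python
import ssl

# Advertise SPDY and plain HTTP during the TLS handshake; the server
# selects whichever it supports, most preferred first.
context = ssl.create_default_context()
context.set_alpn_protocols(["spdy/3", "http/1.1"])

# After wrapping a connected socket, e.g.
#   tls = context.wrap_socket(sock, server_hostname=host)
# the negotiated protocol is available via:
#   tls.selected_alpn_protocol()   # "spdy/3", "http/1.1", or None
print(type(context).__name__)
```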

Efforts for Standardization 

SPDY is being developed as an open networking protocol and has been proposed to the IETF as a basis for HTTP/2.0. SPDY is a sub-project of Google's Chromium project, so the Chromium client implementation and server tools are all developed as open source.

Future of SPDY

Most recently, SPDY Draft 3 was released, and Draft 4 is under development. Features likely to be added in Draft 4 include:

  • Name resolution push
  • Certificate data push
  • Explicit proxy support

The final goal of SPDY is to provide a page within "a single connection setup time + bytes/bandwidth time".

Browsers, Servers, Libraries and Web Services Supporting SPDY

Currently, a variety of browsers and servers support SPDY, and Google, which originally proposed SPDY, already serves almost all of its services over it. The browsers, servers, libraries and services supporting SPDY are as follows.

Browsers Supporting SPDY

As of July 2012, the following is a list of browsers which support the SPDY protocol.

Google Chrome/Chromium

Chrome and Chromium have supported SPDY since their initial versions. You can inspect active SPDY sessions in Chrome/Chromium at the following URI: chrome://net-internals/#events&q=type:SPDY_SESSION%20is:active. For example, if you visit www.gmail.com, you can see multiple SPDY sessions being created there. Chrome for Android also supports SPDY.

Firefox 11 and later versions

SPDY support was added in Firefox 11, but was not enabled by default until Firefox 13. Enter about:config (the URI for Firefox settings) and check network.http.spdy.enabled to see whether SPDY support is enabled. Firefox 14 for Android also supports SPDY.

Amazon Silk

The Silk browser on the Kindle Fire, Amazon's Android-based tablet, also supports SPDY. It uses SPDY to communicate with Amazon's EC2 service.

Default browser of Android 3.0 and higher

The default browser of Android 3.0 (Honeycomb) and 4.0 (Ice Cream Sandwich) supports SPDY.

For more information about which browsers support SPDY, see http://caniuse.com/spdy.

Servers and Libraries Supporting SPDY

SPDY support is being driven mainly by major web servers and application servers, and a variety of libraries implementing SPDY are also under development.


Nginx released a beta version of its SPDY module on June 15, 2012, and has provided patches continuously since. See the "SPDY: 146% faster" Slideshare presentation by the Nginx team to learn more.


Jetty also provides the SPDY module.


The SPDY module for Apache 2.2 is also being developed.


In addition, SPDY implementations for Python, Ruby and Node.js servers have been developed or are in development. There are several SPDY C libraries, including libspdy, spindly and spdylay. A library for using SPDY on iOS is also being developed.


Netty has provided an SPDY package since version 3.3.1, released in 2012.


SPDY support in Tomcat is currently under development and should arrive in Tomcat 8.

Services Using SPDY

As mentioned earlier, Google has already converted almost all of its services, including Search, Gmail and Google+, to HTTPS, and serves them over SPDY. Google App Engine also supports SPDY when HTTPS is used, and Twitter uses SPDY when serving traffic via HTTPS.

However, only a few of the many websites on the Internet use SPDY. According to a Netcraft survey conducted in May 2012, of about 660 million websites only 339 were using SPDY. In other words, apart from Google and Twitter, hardly any major website uses SPDY.

When SPDY is Not Very Efficient

SPDY is not always fast; in the following situations, you may not get any performance improvement.

When using only HTTP

Because SPDY always requires SSL, it needs additional SSL handshake time. Therefore, if you convert an HTTP site to HTTPS just to support SPDY, the SSL handshake may cancel out any clear performance improvement.

When there are too many domains

SPDY operates per domain. This means it requires as many connections as there are domains, and request multiplexing works only within a single domain. Moreover, since it is difficult to make every domain support SPDY, you may not get SPDY's benefits when there are too many domains. In particular, if your CDN does not support SPDY, you should not expect a performance improvement.

When HTTP is not the bottleneck

For most pages, HTTP is not the bottleneck. For example, when one resource can be downloaded only after another has finished, SPDY will not be very effective.

When Round-Trip-Time (RTT) is low

SPDY is more efficient when RTT is high. When RTT is very low, for example in communication between servers within a data center (IDC), SPDY offers little benefit.

When a page has very few resources

For pages with six or fewer resources, SPDY offers little benefit because there is not much value in reusing a connection.

Things to Do to Introduce SPDY 

When you introduce SPDY, you need to carry out the following tasks to apply it most efficiently.

Application Level

Use only one connection

For better SPDY performance and more efficient use of Internet resources, use as few connections as possible. With a small number of connections, SPDY can pack data into packets better, achieve better header compression, check connection status less often, and handshake less often. From the network's point of view, fewer connections also mean more efficient TCP behavior and less bufferbloat.

Bufferbloat is a phenomenon in which excessive buffering of packets at routers or switches causes high latency and reduced throughput. Because memory is cheap and designers try to avoid dropping packets, router and switch buffers keep growing. Packets that should have been discarded earlier survive longer, and as a result they undermine TCP's congestion avoidance algorithm and degrade overall network performance.

Avoid domain sharding

Domain sharding is a workaround for the limit on concurrent downloads per hostname (generally six in modern browsers). If you use SPDY and follow the "use a single connection" recommendation, you don't need domain sharding. Worse, domain sharding causes additional DNS queries and makes applications more complex.

Use server push instead of inlining

Inlining stylesheets or scripts is often used to reduce the number of HTTP requests, and thus round trips, in web applications. However, inlining makes web pages less cacheable and increases their size because of base64 encoding. SPDY server push avoids both problems.
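The base64 overhead mentioned above is easy to quantify. This small sketch, using stand-in binary data rather than a real image, shows the roughly one-third inflation:

```python
import base64

# Inlining binary data in a page (e.g. as a data: URI) forces base64
# encoding, which emits 4 output bytes for every 3 input bytes.
image = bytes(range(256)) * 40          # stand-in for real image data
inlined = base64.b64encode(image)

overhead = len(inlined) / len(image)
print(overhead)   # about 1.33
```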

Use request prioritization

With SPDY's request prioritization feature, the client can inform the server of the relative priority of resources. A common simple heuristic is html > js, css > everything else.
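A toy version of that heuristic might look as follows. The numeric levels follow SPDY/3's range of 0 (highest) to 7 (lowest), but the exact mapping here is an assumption for illustration:

```python
# Illustrative mapping onto SPDY/3's 0 (highest) .. 7 (lowest) range.
PRIORITY = {
    "text/html": 0,               # the document itself comes first
    "application/javascript": 1,  # then scripts and stylesheets
    "text/css": 1,
}

def spdy_priority(content_type):
    # Everything else (images, fonts, ...) gets a middling priority.
    return PRIORITY.get(content_type, 4)

print(spdy_priority("text/html"), spdy_priority("image/png"))
```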

Choose the proper size of a SPDY frame

Although the SPDY spec allows large frames, a small frame is sometimes preferable because it lets interleaving work better.

SSL Level

Use a smaller, full certificate chain

The size of the certificate chain has a huge influence on connection initialization performance. The more certificates in the chain, the longer it takes to verify them, and the more space they occupy in initcwnd.

initcwnd is the initial value of the TCP congestion window, used by the TCP congestion control algorithm. The congestion window is a sender-side window whose size is controlled by the congestion control algorithm, unlike the TCP (receive) window, which is a receiver-side limit.

In addition, if a server does not provide the full certificate chain, the client spends extra round trips fetching the intermediate certificates. Therefore, with a large, incomplete certificate chain, it takes longer for an application to start using the connection.
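To check what chain a server actually sends, a command like the following can be used (the hostname is a placeholder):

```shell
# Print every certificate the server presents during the handshake;
# a complete chain should end at (or just below) a well-known root.
openssl s_client -connect example.com:443 -showcerts </dev/null
```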

Use a wildcard certificate (e.g., *.naver.com) if possible

Wildcard certificates let you reduce the number of connections and take advantage of SPDY's connection sharing. However, since wildcard certificates are issued by a certificate authority, you may need to negotiate with it and pay extra costs.

Do not set the size of SSL write buffer too large

If the SSL write buffer is too large, a TLS application record will span multiple packets. Since an application can process a record only once it is complete, a record spread over multiple packets causes additional latency. Google's servers use 2 KB buffers.

TCP Level

Set the initcwnd of the server to at least 10

initcwnd is the main bottleneck affecting the initial loading time of a page. With plain HTTP you can work around it by opening multiple connections concurrently, attaining an effective initial congestion window of n × initcwnd. Since a single connection is preferable with SPDY, however, it is better to set initcwnd to a large value from the start. In old Linux kernels this value is fixed at 2-3, with no way to adjust it; it was chosen for the reliability and bandwidth of early TCP networks and is unsuitable for today's more stable, higher-bandwidth networks. Linux kernel 3.0 added a way to adjust it, and the latest kernels already default to 10 or higher.
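On a kernel that supports it, initcwnd can be raised per route with the ip command. The gateway and device names below are placeholders; substitute your own default route as shown by `ip route show`:

```shell
# Find the current default route, e.g.:
#   default via 192.168.0.1 dev eth0
ip route show

# Raise the initial congestion window on that route to 10 segments.
ip route change default via 192.168.0.1 dev eth0 initcwnd 10
```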

Disable tcp_slow_start_after_idle

On Linux, tcp_slow_start_after_idle is set to 1 by default. This causes the congestion window to shrink back to initcwnd when the connection goes idle, restarting TCP slow start.

TCP slow start is an algorithm that sends an initial congestion window of initcwnd packets and then grows the congestion window until it reaches the maximum allowed by the network or by the receiver's TCP window. If initcwnd is small, more round trips are needed for the window to reach the network's maximum, which increases initial page load time.
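Assuming the window doubles once per RTT, the effect of initcwnd on round trips is easy to work out. The 64-segment target below is just an illustrative figure, not a value from the article:

```python
import math

# Round trips for the congestion window to grow from initcwnd to
# `target` segments, doubling once per RTT under slow start.
def rtts_to_reach(initcwnd, target=64):
    return math.ceil(math.log2(target / initcwnd))

print(rtts_to_reach(3))    # old fixed value: 5 round trips
print(rtts_to_reach(10))   # initcwnd of 10: 3 round trips
```

Going from initcwnd 3 to 10 saves two full round trips before the window reaches the illustrative target, which is why raising initcwnd matters so much for initial page load.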

Because slow-start restart eliminates the advantage of SPDY's single connection, this setting should be disabled; you can change it with the sysctl command. Disabling it is also advantageous when using HTTP keepalive.
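On Linux this might look like the following (run as root; the persistence step is an assumption about your distribution's sysctl setup):

```shell
# Check the current value (1 = slow start restarts after idle).
sysctl net.ipv4.tcp_slow_start_after_idle

# Disable it for the running kernel.
sysctl -w net.ipv4.tcp_slow_start_after_idle=0

# Persist across reboots.
echo "net.ipv4.tcp_slow_start_after_idle = 0" >> /etc/sysctl.conf
```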

If SPDY is Really Introduced …

In conclusion, actually introducing SPDY requires considering a variety of issues and modifying applications and servers, and the cost of adopting the protocol cannot be ignored. For a web application running on Tomcat, which does not yet support SPDY, you must weigh the cost of changing the web application server as well as the cost of implementing server push.

Costs aside, what should we take into account first when introducing SPDY to a real service? I have chosen three things to consider.

The service should be the one that already uses HTTPS

Services that use only HTTP gain little from SPDY, and you would have to pay the cost of introducing SSL as well.

You should be able to change the Linux kernel

Even CentOS 6.3, the latest version released on July 9, 2012, still uses kernel 2.6.32. Adjusting initcwnd is supported only from kernel 3.0, and the performance improvement from adjusting initcwnd is significant enough that you should upgrade the kernel if possible.

Consider the ratio of users of SPDY supported browsers

In Korea, many users still use IE, which does not support SPDY. On mobile, iOS does not yet support SPDY, while Android 3.0 and higher does. Therefore, until enough users have SPDY-capable browsers, carefully weigh the performance improvement SPDY brings against the cost of introducing it.



Published at DZone with permission of Esen Sagynov, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
