
How Does LZ4 Acceleration Work?


LZ4 acceleration searches in wider increments, which reduces the number of potential matches it finds, increasing compression speed at the cost of compression ratio.


LZ4 has an interesting feature called acceleration. It allows you to trade compression ratio for compression speed. This is quite interesting for several scenarios. In particular, while a higher compression ratio is almost always good, you want to take the transfer speed into account as well. For example, if I'm interested in writing to the disk, and the disk write rate is 400 MB/sec, it isn't worth using the default acceleration level (which compresses at about 385 MB/sec), and I can raise the acceleration so the speed of compression will not dominate my writes.
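To make that trade-off concrete, here is a small Python sketch. The 385 and 400 MB/sec figures are the ones mentioned above; the 2:1 compression ratio and the function itself are assumptions made up for illustration. The idea is that with a pipelined writer, throughput is bounded by the slower stage.

```python
def effective_write_rate(compress_mb_s, disk_mb_s, ratio):
    # We can feed input bytes at compress_mb_s, and the disk can absorb
    # compressed output at disk_mb_s, which corresponds to
    # disk_mb_s * ratio in terms of *input* bytes. The slower of the
    # two stages limits the overall rate.
    return min(compress_mb_s, disk_mb_s * ratio)

# Default acceleration: compression (385 MB/s) is the bottleneck.
print(effective_write_rate(385, 400, 2.0))
# A faster acceleration level: the disk becomes the bottleneck only
# once compression outruns disk_mb_s * ratio.
print(effective_write_rate(700, 400, 2.0))
```

Once the compression stage is faster than what the disk can absorb (ratio considered), raising acceleration further stops paying off.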

You can read more about it here.

We started playing with this feature today, which brought up the question: what does this actually do?

This is a really good question, but before I can answer it, we need to understand how compression works. Here is a very simple illustration of the concept.

void Compression(byte[] input, Stream output)
{
	var hashTable = new int[16 * 1024];
	SetArrayValues(hashTable, -1); // -1 means "no previous occurrence"

	for (int i = 0; i < input.Length - sizeof(int); i++)
	{
		// hash the current 4 bytes into the table (the mask keeps the index positive)
		var current = (BitConverter.ToInt32(input, i) & 0x7FFFFFFF) % hashTable.Length;
		var prevRef = hashTable[current];
		hashTable[current] = i;
		if (prevRef == -1)
			continue; // first time we see these 4 bytes

		var sizeOfBackReference = FindMatch(i, prevRef);
		if (sizeOfBackReference < sizeof(long))
			continue; // not worth it

		WriteBackReference(output, prevRef, sizeOfBackReference);
		i += sizeOfBackReference;
	}
}

We created a very stupid, size-constrained hash table that maps from the current 4 bytes to the previous position where we saw those 4 bytes. When we find a match, we check how long the match is and then write it out as a back reference.

Note that if we don’t find a match, we update the last-seen position for this value and move one byte forward to see if there is a match at that location. This way, we are scanning the past 64KB to see if there is a match. This is meant to be a very rough approximation of how compression works, so don’t get too hung up on the details.
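For the curious, the same hash-table idea can be written as runnable Python. This is an illustration of the scheme above, not real LZ4; the function name, the `min_match` threshold, and returning tuples instead of emitting a real output stream are all made up for this post.

```python
def find_back_references(data: bytes, min_match: int = 8):
    """Report (position, previous position, length) back references."""
    last_pos = {}  # 4-byte window -> last offset where we saw it
    refs = []
    i = 0
    while i < len(data) - 4:
        key = data[i:i + 4]
        prev = last_pos.get(key, -1)
        last_pos[key] = i
        if prev == -1:
            i += 1  # never seen these 4 bytes; move one byte forward
            continue
        # extend the match as far as it goes
        length = 0
        while i + length < len(data) and data[prev + length] == data[i + length]:
            length += 1
        if length < min_match:
            i += 1  # not worth a back reference
            continue
        refs.append((i, prev, length))
        i += length
    return refs

# A repeating input produces one long back reference to the first copy.
print(find_back_references(b"abcdefgh" * 3))  # [(8, 0, 16)]
```

A real compressor would also emit the literal bytes between matches; here we only report the matches themselves.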

The key point with acceleration is that it changes what we do when we don’t find a match. Instead of moving forward by one byte, we skip by more than that. The actual logic is in here, and if I’m reading it correctly, it will probe the data to compress in increasingly wider gaps until it finds a match that it can use to reduce the output size.

Acceleration tells it to jump in even wider increments as it searches for a match. This reduces the number of potential matches it finds but also significantly reduces the amount of work that LZ4 needs to do with comparing the data stream, hence how it both accelerates the speed and reduces the compression ratio.
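If I'm reading the LZ4 sources correctly, the skip logic boils down to something like this Python paraphrase. The constant mirrors `LZ4_skipTrigger` from the C sources; the function itself is mine, written for illustration.

```python
LZ4_SKIP_TRIGGER = 6  # constant from the LZ4 sources

def probe_steps(acceleration: int, misses: int):
    """Step sizes used for the first `misses` unsuccessful probes.

    In LZ4 the counter starts at acceleration << LZ4_skipTrigger and the
    step is counter >> LZ4_skipTrigger, so the step widens as misses
    accumulate, and higher acceleration starts the steps wider.
    """
    search_match_nb = acceleration << LZ4_SKIP_TRIGGER
    steps = []
    for _ in range(misses):
        steps.append(search_match_nb >> LZ4_SKIP_TRIGGER)
        search_match_nb += 1
    return steps

print(probe_steps(1, 4))   # default: single-byte steps at first
print(probe_steps(64, 4))  # heavy acceleration: 64-byte steps immediately
```

With the default acceleration of 1, the step stays at one byte for the first 64 misses and then doubles; with a high acceleration, even the very first probe jumps far ahead, which is exactly the "wider increments" behavior described above.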


performance, speed, lz4, acceleration

Published at DZone with permission of

Opinions expressed by DZone contributors are their own.
