
How Does LZ4 Acceleration Work?


LZ4's acceleration setting makes the match search skip ahead in wider increments, which finds fewer potential matches but speeds up compression at the cost of compression ratio.


LZ4 has an interesting feature called acceleration. It allows you to trade compression ratio for compression speed. This is quite interesting for several scenarios. In particular, while a higher compression ratio is almost always good, you want to take the transfer speed into account as well. For example, if I'm interested in writing to the disk, and the disk write rate is 400 MB/sec, it isn't worth using the default acceleration level (which compresses at about 385 MB/sec); I can trade away some compression so that the speed of compression will not dominate my writes.
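A back-of-the-envelope way to see that tradeoff, assuming compression and disk writes run as a pipeline (the function, its name, and the 0.5/0.6 compressed-size fractions are illustrative assumptions, not LZ4 figures; the 385 and 400 MB/sec numbers are from the scenario above):

```python
def effective_write_rate(compress_mb_s, compressed_fraction, disk_mb_s):
    """Uncompressed MB/sec the pipeline can sustain.

    compressed_fraction is compressed size / uncompressed size, so the
    disk can absorb disk_mb_s / compressed_fraction MB/sec of input.
    The slower of the two stages is the bottleneck.
    """
    return min(compress_mb_s, disk_mb_s / compressed_fraction)

# Default level: compression (385 MB/sec) is the bottleneck, not the disk.
print(effective_write_rate(385, 0.5, 400))   # 385.0
# Higher acceleration: faster compression, worse ratio; still a net win here.
print(effective_write_rate(600, 0.6, 400))   # 600.0
```

The point being that a better ratio buys you nothing once compression itself is slower than the device you are feeding.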

You can read more about it here.

We started playing with this feature today, which brought up the question: what does it actually do?

This is a really good question, but before I can answer it, we need to understand how compression works. Here is a very simple illustration of the concept.

void Compression(byte[] input, Stream output)
{
	// Maps the hash of the current 4 bytes to the last position where we saw them
	var hashTable = new int[16 * 1024];
	SetArrayValues(hashTable, -1);

	for (int i = 0; i < input.Length - sizeof(int); i++)
	{
		var current = BitConverter.ToInt32(input, i) % hashTable.Length;
		var prevRef = hashTable[current];
		hashTable[current] = i;
		if (prevRef == -1)
			continue; // never seen these 4 bytes before, move one byte forward

		var sizeOfBackReference = FindMatch(i, prevRef);
		if (sizeOfBackReference < sizeof(long))
			continue; // not worth it

		WriteBackReference(output, prevRef, sizeOfBackReference);
		i += sizeOfBackReference;
	}
}

We created a very simple, size-constrained hash table that maps from the current 4 bytes to the previous position where we saw those 4 bytes. When we find a match, we check how long the match is and then write it out as a back reference.

Note that if we don't find a match, we update the last match position for this value and move one byte forward to see if there is a match in that location. This way, we are scanning the past 64KB to see if there is a match. This is meant to be a very rough approximation of how compression works; don't get too hung up on the details.
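The FindMatch call in the sketch above is left undefined; a minimal version (in Python for brevity, and just as rough an approximation) simply counts how many bytes after the two positions agree:

```python
def find_match(data, i, prev_ref):
    """Length of the common run of bytes between data[i:] and data[prev_ref:].

    The longer the run, the more bytes a single back reference replaces.
    Overlapping runs are fine, as in LZ-style compression generally.
    """
    n = 0
    while i + n < len(data) and data[prev_ref + n] == data[i + n]:
        n += 1
    return n

# "abc" at position 4 matches "abc" at position 0 for 3 bytes, then diverges.
print(find_match(b"abcXabcabc", 4, 0))  # 3
```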

The key point with acceleration is that it impacts what we do when we don't find a match. Instead of moving forward by one byte, we skip ahead by more than that. The actual logic is in here, and if I'm reading it correctly, it probes the data to compress at increasingly wide intervals until it finds a match that it can use to reduce the output size.

Acceleration tells it to jump in even wider increments as it searches for a match. This reduces the number of potential matches it finds but also significantly reduces the amount of work that LZ4 needs to do with comparing the data stream, hence how it both accelerates the speed and reduces the compression ratio.
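Here is my reading of that skip logic as a sketch (Python; the skip trigger of 6 matches the LZ4 source, but the function and its names are mine, not the library's):

```python
LZ4_SKIP_TRIGGER = 6  # step grows by 1 every 64 failed probes (2**6)

def probe_positions(start, acceleration, probes):
    """Positions the search visits after repeated failed match attempts.

    Mirrors LZ4's step = (search_match_nb++ >> LZ4_skipTrigger), where
    search_match_nb starts at acceleration << LZ4_skipTrigger, so a
    higher acceleration begins with a larger step from the first probe.
    """
    positions = []
    pos = start
    search_match_nb = acceleration << LZ4_SKIP_TRIGGER
    for _ in range(probes):
        positions.append(pos)
        step = search_match_nb >> LZ4_SKIP_TRIGGER
        search_match_nb += 1
        pos += step
    return positions

print(probe_positions(0, 1, 4))  # [0, 1, 2, 3] -- default: one byte at a time
print(probe_positions(0, 4, 4))  # [0, 4, 8, 12] -- acceleration=4 skips wider
```

With the default acceleration of 1, the first 64 probes move one byte at a time, then two bytes, and so on; raising the acceleration starts the search with that larger stride immediately, so far fewer positions are ever compared.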



Published at DZone with permission of
