
Bulk vs Individual Compression

In this article, take a look at bulk vs individual compression.


I'd like to share something very brief and very obvious: compression works better with large amounts of data. That is, if you have to compress 100 sentences, you'd better compress them in bulk rather than one sentence at a time. Let me illustrate that:

Java

public static void main(String[] args) throws Exception {
    List<String> sentences = new ArrayList<>();
    for (int i = 0; i < 100; i++) {
        StringBuilder sentence = new StringBuilder();
        for (int j = 0; j < 100; j++) {
            sentence.append(RandomStringUtils.randomAlphabetic(10)).append(" ");
        }
        sentences.add(sentence.toString());
    }
    // size of all 100 sentences compressed as one blob...
    byte[] compressed = compress(StringUtils.join(sentences, ". "));
    System.out.println(compressed.length);
    // ...versus the sum of the sizes of each sentence compressed individually
    System.out.println(sentences.stream().collect(Collectors.summingInt(sentence -> compress(sentence).length)));
}


The compress method uses commons-compress to easily produce results for multiple compression algorithms:

Java

public static byte[] compress(String str) {
    if (str == null || str.length() == 0) {
        return new byte[0];
    }
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    try (CompressorOutputStream gzip = new CompressorStreamFactory()
            .createCompressorOutputStream(CompressorStreamFactory.GZIP, out)) {
        gzip.write(str.getBytes("UTF-8"));
    } catch (Exception ex) {
        throw new RuntimeException(ex);
    }
    // the stream must be closed (flushed) before the output bytes are read
    return out.toByteArray();
}

The results are as follows, in bytes (note that there's some randomness, so algorithms are not directly comparable):

Why is that an obvious result? Because of the way most compression algorithms work: they look for patterns in the raw data and build a dictionary of those patterns (a very rough description). A larger input gives the algorithm more repetition to find and reuse, and the fixed per-stream overhead (headers, checksums) is paid once instead of once per record.
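The pattern effect is easy to reproduce even with the JDK's built-in Deflater, no commons-compress needed. A minimal sketch (class and method names are mine): a highly repetitive input compresses far below a patternless input of the same length.

```java
import java.util.zip.Deflater;

public class PatternDemo {
    // Deflate the input in one shot and return the compressed size in bytes.
    static int deflatedSize(byte[] input) {
        Deflater deflater = new Deflater();
        deflater.setInput(input);
        deflater.finish();
        byte[] buf = new byte[input.length + 64]; // incompressible input grows only slightly
        int len = deflater.deflate(buf);
        deflater.end();
        return len;
    }

    public static void main(String[] args) {
        // 10,000 bytes of one repeated phrase: a pattern the dictionary can exploit
        byte[] repetitive = "the quick brown fox ".repeat(500).getBytes();
        // 10,000 bytes of random data: nothing for the dictionary to reuse
        byte[] random = new byte[10_000];
        new java.util.Random(42).nextBytes(random);

        System.out.println("repetitive -> " + deflatedSize(repetitive));
        System.out.println("random     -> " + deflatedSize(random));
    }
}
```

The repetitive input shrinks to a small fraction of its size, while the random input barely shrinks at all, which is the same mechanism behind the bulk-vs-individual gap: across 100 sentences, the compressor sees shared patterns that no single sentence contains.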

How is that useful? In big data scenarios where the underlying store supports per-record compression (e.g. a database or search engine), you may save a significant amount of disk space if you bundle multiple records into one stored/indexed record.
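A minimal sketch of that bundling idea, using only the JDK's GZIPOutputStream (the helper name, bundle size, and newline delimiter are my assumptions, not a prescribed format): group N records per compressed blob so the compressor sees patterns across records instead of one record at a time.

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import java.util.zip.GZIPOutputStream;

public class Bundling {
    // Hypothetical helper: gzip records in bundles of `bundleSize` rather than individually.
    static List<byte[]> compressInBundles(List<String> records, int bundleSize) throws Exception {
        List<byte[]> bundles = new ArrayList<>();
        for (int i = 0; i < records.size(); i += bundleSize) {
            // join a slice of records with a delimiter so they can be split after decompression
            String joined = String.join("\n",
                    records.subList(i, Math.min(i + bundleSize, records.size())));
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            try (GZIPOutputStream gzip = new GZIPOutputStream(out)) {
                gzip.write(joined.getBytes(StandardCharsets.UTF_8));
            }
            bundles.add(out.toByteArray());
        }
        return bundles;
    }
}
```

The trade-off to keep in mind: reading or updating a single record now requires decompressing its whole bundle, so the bundle size is a knob between storage savings and record-level access cost.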

This is not generically useful advice, though; you should check your particular datastore's implementation. For example, MS SQL Server supports both row and page compression. Cassandra compresses at the SSTable level, so how you structure your rows may not matter. And if you are storing data in files, one large compressed file is more efficient than many individually compressed files.

Disk space is cheap, so playing with data bundling and compression may look like premature optimization. But in systems that operate on large datasets, it's a decision that can save a lot of storage costs.


Published at DZone with permission of Bozhidar Bozhanov, DZone MVB.

