Bulk vs Individual Compression
In this article, we take a look at bulk vs. individual compression.
I'd like to share something very brief and very obvious - that compression works better with large amounts of data. That is, if you have to compress 100 sentences, you'd better compress them in bulk rather than one sentence at a time. Let me illustrate that:
// Uses java.util.List/ArrayList, java.util.stream.Collectors, and
// Apache Commons Lang (RandomStringUtils, StringUtils)
public static void main(String[] args) throws Exception {
    // Generate 100 "sentences" of 100 random 10-letter words each
    List<String> sentences = new ArrayList<>();
    for (int i = 0; i < 100; i++) {
        StringBuilder sentence = new StringBuilder();
        for (int j = 0; j < 100; j++) {
            sentence.append(RandomStringUtils.randomAlphabetic(10)).append(" ");
        }
        sentences.add(sentence.toString());
    }
    // Compress all sentences joined into one string...
    byte[] compressed = compress(StringUtils.join(sentences, ". "));
    System.out.println(compressed.length);
    // ...versus compressing each sentence separately and summing the sizes
    System.out.println(sentences.stream()
            .collect(Collectors.summingInt(sentence -> compress(sentence).length)));
}
The compress method uses commons-compress, which makes it easy to generate results for multiple compression algorithms:
public static byte[] compress(String str) {
    if (str == null || str.length() == 0) {
        return new byte[0];
    }
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    try (CompressorOutputStream gzip = new CompressorStreamFactory()
            .createCompressorOutputStream(CompressorStreamFactory.GZIP, out)) {
        gzip.write(str.getBytes("UTF-8"));
        // explicit close so the GZIP trailer is flushed before reading the buffer
        gzip.close();
        return out.toByteArray();
    } catch (Exception ex) {
        throw new RuntimeException(ex);
    }
}
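The snippet above hard-codes GZIP. Since the factory takes the algorithm name as a parameter, the same bulk-vs-individual measurement can be repeated for other codecs. Here is a minimal sketch of how that might look (the class name and sample input are mine, not from the original code):

import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

import org.apache.commons.compress.compressors.CompressorOutputStream;
import org.apache.commons.compress.compressors.CompressorStreamFactory;
import org.apache.commons.lang3.StringUtils;

public class CompressionComparison {

    // Same idea as compress() above, but the algorithm name is a parameter,
    // so the measurement can be repeated per codec
    public static byte[] compress(String str, String algorithm) {
        if (str == null || str.isEmpty()) {
            return new byte[0];
        }
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (CompressorOutputStream cos = new CompressorStreamFactory()
                .createCompressorOutputStream(algorithm, out)) {
            cos.write(str.getBytes(StandardCharsets.UTF_8));
            cos.close(); // flush the trailer before reading the buffer
            return out.toByteArray();
        } catch (Exception ex) {
            throw new RuntimeException(ex);
        }
    }

    public static void main(String[] args) {
        // Sample input; in the article's experiment this would be the generated sentences
        String sample = StringUtils.repeat("the quick brown fox jumps over the lazy dog ", 50);
        // GZIP, BZIP2 and DEFLATE ship with commons-compress;
        // XZ would additionally need the org.tukaani:xz dependency
        for (String algorithm : new String[] {
                CompressorStreamFactory.GZIP,
                CompressorStreamFactory.BZIP2,
                CompressorStreamFactory.DEFLATE }) {
            System.out.println(algorithm + ": " + compress(sample, algorithm).length + " bytes");
        }
    }
}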
The results, in bytes, show that the bulk-compressed string comes out noticeably smaller than the sum of the individually compressed sentences (note that there's some randomness in the input, so the algorithms are not directly comparable across runs).
Why is that an obvious result? Because of the way most compression algorithms work - they look for repeated patterns in the raw data and build a dictionary of those patterns (a very rough description). The more data there is in a single stream, the more repetition there is to exploit, and the fixed per-stream overhead is paid only once.
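To make the per-record overhead concrete: every GZIP stream starts with an empty pattern dictionary and carries a fixed header and trailer (about 18 bytes in total), so a very short input can easily come out larger after compression. A small sketch using the JDK's own GZIPOutputStream (not part of the original article's code):

import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class TinyInputOverhead {
    public static void main(String[] args) throws Exception {
        // A single short "sentence": too little data for the deflate
        // dictionary to find useful repetition
        byte[] raw = "lorem ipsum dolor sit amet".getBytes(StandardCharsets.UTF_8);

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (GZIPOutputStream gzip = new GZIPOutputStream(out)) {
            gzip.write(raw);
        }

        // Each stream also carries a fixed GZIP header and trailer
        // (~18 bytes total), paid once per compressed record
        System.out.println("raw: " + raw.length + " bytes, gzipped: " + out.size() + " bytes");
    }
}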
How is that useful? In big data scenarios where the underlying store supports per-record compression (e.g. a database or search engine), you may save a significant amount of disk space if you bundle multiple records into one stored/indexed record.
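What such bundling could look like in practice is sketched below. The record format, delimiter, and batching scheme are made-up assumptions for illustration; the trade-off is that reading a single record then means fetching and decompressing the whole batch:

import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.zip.GZIPOutputStream;

public class RecordBundling {

    // Hypothetical bundling step: instead of compressing and storing each
    // record individually, join a batch into one blob, compress it once,
    // and store it under a single key (e.g. "batch-000042")
    public static byte[] bundleAndCompress(List<String> records) throws Exception {
        String bundle = String.join("\n", records); // newline-delimited records
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (GZIPOutputStream gzip = new GZIPOutputStream(out)) {
            gzip.write(bundle.getBytes(StandardCharsets.UTF_8));
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        List<String> records = List.of(
                "{\"user\":\"alice\",\"event\":\"login\"}",
                "{\"user\":\"bob\",\"event\":\"login\"}",
                "{\"user\":\"alice\",\"event\":\"logout\"}");
        byte[] blob = bundleAndCompress(records);
        System.out.println("one stored record, " + blob.length + " bytes for "
                + records.size() + " logical records");
    }
}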
This is not generically useful advice, though; you should check the particular datastore implementation. For example, MS SQL Server supports both row and page compression, and Cassandra compresses at the SSTable level, so it may not matter how you structure your rows. Certainly, if you are storing data in files, storing it in one file and compressing it is more efficient than compressing multiple files separately.
Disk space is cheap, so playing with data bundling and compression may be seen as premature optimization. But in systems that operate on large datasets, it's a decision that can save you a lot in storage costs.
Published at DZone with permission of Bozhidar Bozhanov, DZone MVB.