Engineers use garbage collection logs primarily to troubleshoot memory-related problems and tune their GC settings. However, we are seeing several innovative enterprises use garbage collection logs for the following purposes.
1. Lowering AWS Bills
Most applications saturate memory before saturating other resources (CPU, network bandwidth, storage). As a result, teams typically upgrade their EC2 instance size to get additional memory rather than additional CPU or network bandwidth. With the right memory size settings and GC parameters, you can run effectively on basic EC2 instances; you don't have to upgrade to larger EC2 configurations, which directly cuts your AWS bill. Analyzing GC logs thoroughly will help you arrive at optimal memory size settings.
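One input to that sizing exercise is the peak heap occupancy measured right after collections (the live set), which GC logs record on every GC event. Below is a minimal Java sketch, assuming JDK 9+ unified GC log lines of the form `512M->128M(1024M)`; the regex and the sample lines are illustrative, not a complete parser.

```java
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class HeapSizer {
    // Matches the post-GC occupancy in the "before->after(committed)" triple
    // printed by JDK 9+ unified GC logging, e.g. "... 512M->128M(1024M) 3.456ms"
    private static final Pattern AFTER_GC = Pattern.compile("->(\\d+)M\\(");

    // Peak heap occupancy observed immediately after collections, in MB.
    // This live-set ceiling is a reasonable starting point for choosing -Xmx.
    public static long peakAfterGcMb(List<String> logLines) {
        long peak = 0;
        for (String line : logLines) {
            Matcher m = AFTER_GC.matcher(line);
            if (m.find()) {
                peak = Math.max(peak, Long.parseLong(m.group(1)));
            }
        }
        return peak;
    }

    public static void main(String[] args) {
        List<String> sample = List.of(
            "[1.203s][info][gc] GC(0) Pause Young (Normal) (G1 Evacuation Pause) 512M->128M(1024M) 3.456ms",
            "[2.417s][info][gc] GC(1) Pause Young (Normal) (G1 Evacuation Pause) 640M->200M(1024M) 4.100ms");
        System.out.println("Peak live set after GC: " + peakAfterGcMb(sample) + " MB");
    }
}
```

If the peak live set is far below the committed heap, the instance is likely over-provisioned for memory and a smaller EC2 size may suffice.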
2. Micro Metrics: Catch Performance Problems in the Test Environment
Despite thorough stress testing in the test environment, performance problems still find their way to production. This is because many enterprises measure only macro metrics such as CPU utilization, memory utilization, and response time. Macro metrics don't give visibility into acute degradations, and these acute degradations are the ones that manifest as major performance problems in production. If proper micro metrics are measured in the test environment, several performance problems can be caught in the testing phase itself. You can gather all the memory-related micro metrics from the GC logs themselves.
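One such micro metric is the object creation (allocation) rate, which can be derived from two consecutive young-GC log entries: bytes allocated between collections equal the heap used before the current GC minus the heap used after the previous one. A minimal sketch, with illustrative numbers:

```java
public class AllocRate {
    // Approximate allocation rate between two consecutive young collections:
    // allocated MB = heap used before this GC - heap used after the previous GC,
    // divided by the elapsed wall-clock time between the two events.
    public static double allocationRateMbPerSec(long prevAfterMb, long currBeforeMb,
                                                double prevTimeSec, double currTimeSec) {
        return (currBeforeMb - prevAfterMb) / (currTimeSec - prevTimeSec);
    }

    public static void main(String[] args) {
        // Previous GC left 128 MB at t=1.2s; the next GC starts with 640 MB at t=2.4s.
        double rate = allocationRateMbPerSec(128, 640, 1.2, 2.4);
        System.out.printf("Allocation rate: %.1f MB/s%n", rate);
    }
}
```

A sudden jump in this rate between builds is exactly the kind of acute degradation that macro metrics miss.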
3. Production Monitoring and Alerting
The industry has seen several interesting application performance monitoring (APM) tools. Yet none of these tools provides insightful metrics on garbage collection. By insightful metrics, we mean:
- Memory problem detection: memory leaks, consecutive full GCs, GC starvation, etc.
- GC KPIs: Latency, throughput, footprint, etc.
- Object creation rate, promotion rate, reclamation rate, etc.
- GC pause time statistics: Duration distribution, average, count, average interval, min/max, and standard deviation.
- GC causes statistics: Duration, percentage, min/max, and total.
- GC phase-related statistics: Each GC algorithm has several sub-phases. For G1, for example: initial-mark, remark, young, full, concurrent mark, and mixed.
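Two of the KPIs above, throughput and pause-time statistics, are straightforward to compute once pause durations have been parsed out of the log. A minimal sketch (the pause values are illustrative):

```java
import java.util.List;

public class GcKpis {
    // Throughput: fraction of wall-clock time the application ran,
    // i.e. time NOT spent in stop-the-world GC pauses.
    public static double throughput(List<Double> pausesMs, double wallClockMs) {
        double paused = pausesMs.stream().mapToDouble(Double::doubleValue).sum();
        return (wallClockMs - paused) / wallClockMs;
    }

    // One of the pause-time statistics listed above: the worst-case pause.
    public static double maxPause(List<Double> pausesMs) {
        return pausesMs.stream().mapToDouble(Double::doubleValue).max().orElse(0);
    }

    public static void main(String[] args) {
        List<Double> pauses = List.of(3.4, 12.1, 5.0); // pause durations in ms
        System.out.printf("throughput=%.4f, maxPause=%.1f ms%n",
                throughput(pauses, 10_000), maxPause(pauses));
    }
}
```

A throughput below roughly 95% or a max pause beyond the application's latency budget is usually a signal to tune.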
It's not that APM tools are uninterested in these metrics; rather, they don't have the data. This data is available only in GC logs and within the JVM runtime. With the advent of GC log analysis APIs, you can proactively monitor your applications' GC logs and raise alerts whenever a threshold is breached. The machine learning capabilities of such an API not only help you detect memory problems but also predict future ones.
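The alerting half of that workflow can be sketched as a simple threshold check over metrics parsed from the GC log (or returned by a log analysis API). The metric names and limits below are illustrative, not recommendations:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class GcAlerts {
    // Compare GC metrics against per-metric upper limits and report breaches.
    // Metric names ("fullGcCount", "maxPauseMs") and limits are hypothetical.
    public static List<String> breaches(Map<String, Double> metrics,
                                        Map<String, Double> maxAllowed) {
        List<String> alerts = new ArrayList<>();
        for (var limit : maxAllowed.entrySet()) {
            Double value = metrics.get(limit.getKey());
            if (value != null && value > limit.getValue()) {
                alerts.add(limit.getKey() + "=" + value
                        + " exceeds limit " + limit.getValue());
            }
        }
        return alerts;
    }

    public static void main(String[] args) {
        var metrics = Map.of("fullGcCount", 4.0, "maxPauseMs", 180.0);
        var limits  = Map.of("fullGcCount", 0.0, "maxPauseMs", 500.0);
        breaches(metrics, limits).forEach(System.out::println);
    }
}
```

In production, each breach would be routed to the team's paging or notification channel rather than printed.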
4. Continuous Integration
Catching defects during the development phase is much cheaper than catching them during the testing phase. As part of their Continuous Integration (CI) process, several enterprises run stress tests. GC logs generated during those tests are programmatically analyzed through a GC log analysis API. If thresholds such as object creation rate, full GC count, or GC interval time are breached, the build fails automatically. It's a very powerful way to catch performance problems at code-commit time.
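Such a gate boils down to a pass/fail check over the metrics extracted from the stress-test GC log. A minimal sketch; the threshold values are illustrative, and each team would pick its own:

```java
public class GcGate {
    // CI gate over the three thresholds mentioned above. A build passes only if
    // every GC metric from the stress run is within its (illustrative) limit.
    public static boolean buildPasses(double objectCreationMbPerSec,
                                      int fullGcCount,
                                      double minGcIntervalSec) {
        return objectCreationMbPerSec <= 500.0  // allocation-rate ceiling
            && fullGcCount == 0                 // no full GCs during the stress test
            && minGcIntervalSec >= 1.0;         // collections at least 1s apart
    }

    public static void main(String[] args) {
        // These values would come from analyzing the stress-test GC log.
        boolean ok = buildPasses(320.0, 0, 2.5);
        if (!ok) {
            System.exit(1); // a non-zero exit code fails most CI pipelines
        }
        System.out.println("GC gate passed");
    }
}
```

Wiring this into the pipeline is then a one-line build step: run the check after the stress test and let the exit code decide the build status.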