Your phone rings. It’s 3 AM. A notification says your Java application is in trouble: CPU is spiking, latency is rising, and users are waiting. Where do you start?
Knowing how to analyze GC logs in Java is often the fastest path to answers. Garbage collection logs reveal what’s happening inside the JVM over time: allocation pressure, memory reclamation, pause durations, promotion behavior, and GC overhead. Unlike heap dumps and thread dumps, which capture a single moment, GC logs provide a continuous view of memory behavior under real production load.
In this guide, you’ll learn a practical, step-by-step approach to reading GC logs, identifying bottlenecks, and making informed tuning decisions to keep your JVM stable and performant.
What Are GC Logs in Java?
GC logs record garbage collection activity inside the JVM. A typical entry captures:
- When each GC event took place
- How much memory each region held before and after the event
- How much memory was reclaimed, and what percentage of the heap that represents
- How long application threads were paused

Fig: What Are GC Logs in Java
Unlike point-in-time snapshots, GC logs show memory behavior over time: allocation rates, promotion patterns, and pause frequency. This temporal view makes them the most practical source of information for analyzing real-world JVM memory dynamics under production load.
What Information Does a GC Log Contain?
Most of the time, memory problems don’t show themselves directly. Instead, we see symptoms: the application responding slowly, CPU usage climbing, or throughput falling short of what the system should handle. GC logs turn these symptoms into things we can measure.
GC logs help us identify several classes of problems, including:
- Memory leaks: The Old Generation keeps growing and garbage collection fails to reclaim memory.
- Mis-sized heaps: The Young Generation or the whole heap is collected far too often.
- GC thrashing: Collections run almost continuously and consume an excessive share of CPU.
- Pause bottlenecks: Long garbage collection pauses stall application threads and degrade response times.
Without GC logs, it is hard to figure out what is going on; with them, we can diagnose the problem from real data.
If you want to go deeper, this guide to GC log analysis provides a good foundation.
GC Logs vs Heap Dumps vs Thread Dumps
Each diagnostic artifact serves a different purpose. Let’s take a look at this table and try to understand it more clearly:
| Artifact | GC Logs | Thread Dumps | Heap Dumps |
| --- | --- | --- | --- |
| What they answer | How is memory behaving over time? | What is the JVM executing at this moment? | What objects exist right now, and why? |
| What they show | Temporal GC activity | Thread execution state | Complete memory snapshot |
| Nature of data | Continuous timeline | Point-in-time state | Point-in-time state |
| Best used for | Allocation pressure, pause behavior, memory leaks | Deadlocks, contention, CPU-bound threads | Object retention analysis, leak root-cause investigation |
For JVM performance analysis, GC logs are typically the first place to start.
How to Enable GC Logging in Java
By default, GC logging isn’t enabled. To turn it on, we configure startup flags that tell the JVM to record GC activity; these logs are what we will rely on when analyzing memory behavior in the JVM.
Unified Logging was introduced in Java 9, so the configuration differs from Java 8.
GC Logging Flags in Java 8
Java 8 uses the legacy verbose GC flags. Here are some useful JVM arguments:
| JVM Flag | Purpose | Why It Matters |
| --- | --- | --- |
| -XX:+PrintGCDetails | Enables detailed GC event logging | Required for analyzing memory before/after collection |
| -XX:+PrintGCDateStamps | Adds absolute timestamps | Critical for correlating GC events with application logs |
| -XX:+PrintGCApplicationStoppedTime | Logs total stop-the-world pause time | Helps quantify latency impact |
| -Xloggc:/path/to/gc.log | Specifies GC log output file | Ensures logs are persisted for analysis |
| -XX:+UseGCLogFileRotation | Enables automatic log rotation | Prevents unbounded file growth |
| -XX:NumberOfGCLogFiles | Sets number of rotated log files | Controls historical retention |
| -XX:GCLogFileSize | Defines maximum size per log file | Avoids disk exhaustion in long-running systems |
Without proper JVM flags, GC logs won’t be captured at the required verbosity level. If you need a consolidated reference of production-ready JVM options, you can refer to our comprehensive JVM arguments master sheet, which documents commonly used tuning and diagnostic parameters.
Unified GC Logging in Java 9 and Later
From Java 9 onward, the legacy GC flags were replaced by the Unified Logging framework, configured through the -Xlog option:
-Xlog:gc*,gc+heap=info,gc+age=trace:file=/var/log/app/gc.log:time,uptime,level,tags:filecount=5,filesize=20M
| Component | Purpose |
| --- | --- |
| gc* | Logs all GC-related events |
| gc+heap=info | Heap layout and occupancy details |
| gc+age=trace | Object aging and promotion tracking |
| time,uptime,level,tags | Adds structured, correlatable metadata |
| filecount,filesize | Enables built-in log rotation |
Unified Logging allows precise control over verbosity through tagged categories, enabling targeted diagnostics without unnecessary noise.
Best Practices for GC Log Collection in Production
For production-grade analysis, logging must be deliberate and sustained over the long term:
- Always enable timestamps, so GC events can be correlated with application logs
- Use log rotation to bound disk usage
- Persist logs centrally, so they survive restarts and instance churn
- Capture logs during both baseline periods and incidents, so you have a healthy reference to compare against
- Avoid over-verbose logging in production; trace-level tags belong in targeted diagnostics
For a broader JVM performance perspective, you may also reference Java GC best practices.
Understanding the Anatomy of a GC Log Entry
A GC log entry represents a single garbage collection event. Its structure differs depending on the JVM version (Java 8 vs 9+) and the GC algorithm (Parallel, CMS, G1, ZGC, etc.). Manual parsing is tedious and error-prone because formats differ and logs are extremely verbose. For this reason, automated analyzers such as GCeasy are popular: they support multiple GC formats and convert raw logs into structured, actionable insights.
Sample GC Log Snippet
As mentioned earlier, an actual log file is very long and not easy to read. Here’s a sample taken from a GC log:
[2025-06-11T12:30:27.807+0200] Using G1
[2025-06-11T12:30:27.822+0200] Version: 17.0.1+12-LTS-39 (release)
[2025-06-11T12:30:27.822+0200] CPUs: 4 total, 4 available
[2025-06-11T12:30:27.822+0200] Memory: 8067M
[2025-06-11T12:30:27.822+0200] Large Page Support: Disabled
[2025-06-11T12:30:27.822+0200] NUMA Support: Disabled
[2025-06-11T12:30:27.822+0200] Compressed Oops: Enabled (32-bit)
[2025-06-11T12:30:27.822+0200] Heap Region Size: 1M
[2025-06-11T12:30:27.822+0200] Heap Min Capacity: 8M
[2025-06-11T12:30:27.822+0200] Heap Initial Capacity: 128M
[2025-06-11T12:30:27.822+0200] Heap Max Capacity: 2018M
[2025-06-11T12:30:27.822+0200] Pre-touch: Disabled
[2025-06-11T12:30:27.822+0200] Parallel Workers: 4
[2025-06-11T12:30:27.822+0200] Concurrent Workers: 1
[2025-06-11T12:30:27.822+0200] Concurrent Refinement Workers: 4
[2025-06-11T12:30:27.822+0200] Periodic GC: Disabled
[2025-06-11T12:30:27.822+0200] CDS archive(s) mapped at: [0x0000000800000000-0x0000000800bc0000-0x0000000800bc0000), size 12320768, SharedBaseAddress: 0x0000000800000000, ArchiveRelocationMode: 0.
[2025-06-11T12:30:27.822+0200] Compressed class space mapped at: 0x0000000800c00000-0x0000000840c00000, reserved size: 1073741824
[2025-06-11T12:30:27.822+0200] Narrow klass base: 0x0000000800000000, Narrow klass shift: 0, Narrow klass range: 0x100000000
[2025-06-11T12:30:27.901+0200] GC(0) Pause Young (Normal) (G1 Evacuation Pause)
[2025-06-11T12:30:27.901+0200] GC(0) Using 3 workers of 4 for evacuation
[2025-06-11T12:30:27.916+0200] GC(0) Pre Evacuate Collection Set: 0.1ms
[2025-06-11T12:30:27.916+0200] GC(0) Merge Heap Roots: 0.1ms
[2025-06-11T12:30:27.916+0200] GC(0) Evacuate Collection Set: 8.0ms
[2025-06-11T12:30:27.916+0200] GC(0) Post Evacuate Collection Set: 0.2ms
[2025-06-11T12:30:27.916+0200] GC(0) Other: 0.4ms
[2025-06-11T12:30:27.916+0200] GC(0) Eden regions: 13->0(8)
[2025-06-11T12:30:27.916+0200] GC(0) Survivor regions: 0->2(2)
[2025-06-11T12:30:27.916+0200] GC(0) Old regions: 0->5
[2025-06-11T12:30:27.916+0200] GC(0) Archive regions: 0->0
[2025-06-11T12:30:27.916+0200] GC(0) Humongous regions: 2->2
[2025-06-11T12:30:27.916+0200] GC(0) Metaspace: 147K(384K)->147K(384K) NonClass: 142K(256K)->142K(256K) Class: 4K(128K)->4K(128K)
It contains a header with information about the JVM, followed by an entry for each GC event.
We would normally use a log analyzer such as GCeasy to extract and graph information from it.
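That said, even without an analyzer, a small parser can pull structure out of entries like the `Eden regions: 13->0(8)` line above (region count before the pause, after the pause, and the new target size in parentheses). Below is a minimal sketch; the class and method names are illustrative, not part of any standard API:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegionLineParser {
    // Matches G1 region-transition lines, e.g. "Eden regions: 13->0(8)".
    // Group 2 = regions occupied before the pause, group 3 = after,
    // optional group 4 = the new target size shown in parentheses.
    private static final Pattern REGION_LINE = Pattern.compile(
            "(Eden|Survivor|Old|Archive|Humongous) regions: (\\d+)->(\\d+)(?:\\((\\d+)\\))?");

    /** Returns {before, after} region counts, or null if the line doesn't match. */
    static int[] parse(String line) {
        Matcher m = REGION_LINE.matcher(line);
        if (!m.find()) return null;
        return new int[] { Integer.parseInt(m.group(2)), Integer.parseInt(m.group(3)) };
    }

    public static void main(String[] args) {
        String line = "[2025-06-11T12:30:27.916+0200] GC(0) Eden regions: 13->0(8)";
        int[] r = parse(line);
        // 13 Eden regions were evacuated down to 0 by this pause.
        System.out.printf("before=%d after=%d reclaimed=%d regions%n", r[0], r[1], r[0] - r[1]);
    }
}
```

Multiplying region counts by the Heap Region Size from the log header (1M here) converts these figures into megabytes.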
Below is an extract from a GCeasy report, highlighting key JVM performance indicators such as throughput, CPU time, and pause duration distribution. Visualizing pause buckets makes it easier to detect latency outliers and tail-risk events.

Fig: Example GC log analysis report illustrating key JVM performance indicators
For a full breakdown of a GCeasy report, see this video: Key Sections of a GCeasy Report.
Key Components in a GC Log Event
Let’s break down the key components:
- GC Throughput (99.193%): Percentage of total JVM uptime spent running application code rather than garbage collection. Remember, higher is better. If it drops below 95%, we may be running into problems.
- CPU Time (2 hrs 3 min 14 sec): Total processor time consumed by GC threads, indicating how expensive memory management is.
- Avg Pause GC Time (1.446 sec): Average stop-the-world pause per GC event, reflecting steady-state latency impact.
- Max Pause GC Time (12.99 sec): Longest single GC pause observed, highlighting worst-case latency risk.
- GC Duration Distribution: Breakdown of pause times into buckets, exposing tail latency and outlier GC events.
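These indicators are easy to sanity-check by hand. Throughput, for example, is just the share of uptime not spent in stop-the-world pauses. The sketch below uses hypothetical figures (one day of uptime, roughly 698 seconds of cumulative pause), chosen only to land near the 99.19% shown above:

```java
public class GcThroughput {
    /** % of JVM uptime spent running application code rather than paused in GC. */
    static double throughputPercent(double uptimeSec, double totalGcPauseSec) {
        return 100.0 * (uptimeSec - totalGcPauseSec) / uptimeSec;
    }

    public static void main(String[] args) {
        // Hypothetical figures: one day of uptime, 698 s of cumulative STW pause.
        double t = throughputPercent(86_400, 698);
        System.out.printf("GC throughput: %.3f%%%n", t);
        // A common rule of thumb: below ~95%, GC overhead deserves investigation.
        System.out.println(t >= 95.0 ? "healthy" : "investigate GC overhead");
    }
}
```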
Common GC Event Types (Minor, Full, Mixed, Concurrent)
GC logs contain different event types depending on the collector in use, but most fall into a few core categories:
- Young Generation GC (Minor GC): Short stop-the-world pause triggered by Eden space exhaustion, reclaiming short-lived objects.
- Full GC: Heap-wide stop-the-world collection and compaction event impacting both Young and Old generations.
- Mixed GC (G1GC): G1-specific event that collects all Young regions and selected Old regions to avoid costly Full GCs.
- Concurrent GC Phases: Background marking and sweeping phases executed alongside application threads to minimize pause time.

Fig: GC Events Overview
Step-by-Step: How to Analyze GC Logs in Java
Here are the steps you can follow when analyzing GC logs:
Step 1: Check GC Frequency
Measure event cadence to identify allocation pressure and excessive Young GC activity.
Step 2: Analyze Pause Time
Quantify the average, max, and tail pause durations to assess the real latency impact.
Step 3: Evaluate Heap Utilization
Compare occupancy before and after each GC to judge reclamation efficiency and spot unhealthy retention trends.
Step 4: Identify Promotion Rates
Track the movement of objects from the Young to the Old Generation to gauge aging pressure and Old Gen growth.
Step 5: Detect Full GC Patterns
Look for recurring Full GCs, rising post-GC baselines, or compaction loops that indicate memory leaks or heap mis-sizing. For examples of real-world GC behavior signatures and how to interpret them, refer to this collection of interesting garbage collection patterns.
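The first three steps reduce to simple arithmetic once pause times and occupancy figures have been extracted from the log. A sketch with made-up numbers (not taken from the sample log above):

```java
import java.util.Arrays;

public class GcStepAnalysis {
    static double avgPauseMs(double[] pauses) { return Arrays.stream(pauses).average().orElse(0); }
    static double maxPauseMs(double[] pauses) { return Arrays.stream(pauses).max().orElse(0); }

    /** % of occupied heap reclaimed by one GC event. */
    static double reclaimedPct(double beforeMb, double afterMb) {
        return 100.0 * (beforeMb - afterMb) / beforeMb;
    }

    public static void main(String[] args) {
        // Hypothetical data extracted from a 5-minute log window.
        double[] pauseMs    = {8.8, 12.1, 9.5, 210.0, 10.2};
        double[] heapBefore = {96, 101, 99, 612, 104};   // MB occupied before each GC
        double[] heapAfter  = {14, 15, 16, 580, 18};     // MB occupied after each GC

        // Step 1: frequency — 5 events over 5 minutes = 1 GC/min.
        System.out.printf("Frequency: %.1f GCs/min%n", pauseMs.length / 5.0);
        // Step 2: average vs worst-case pause exposes tail latency.
        System.out.printf("Avg pause: %.1f ms, max: %.1f ms%n",
                avgPauseMs(pauseMs), maxPauseMs(pauseMs));
        // Step 3: events reclaiming little memory hint at retention problems.
        for (int i = 0; i < heapBefore.length; i++) {
            double pct = reclaimedPct(heapBefore[i], heapAfter[i]);
            if (pct < 20) System.out.printf("Event %d reclaimed only %.0f%% — suspicious%n", i, pct);
        }
    }
}
```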
Key GC Metrics to Monitor for JVM Health
The following quantitative indicators help baseline JVM health and detect emerging memory pressure early:
| Metric | What It Measures | Why It Matters |
| --- | --- | --- |
| GC Throughput | % of total uptime spent outside GC | Low throughput indicates excessive GC overhead impacting application performance |
| Allocation Rate | Speed at which objects are created in Eden | High allocation rate drives frequent Minor GCs and CPU pressure |
| Promotion Rate | Volume of objects moved from Young to Old Gen | High promotion signals aging pressure and potential Old Gen saturation |
| GC Pause Distribution | Distribution of pause times across percentiles | Identifies tail latency and rare but disruptive GC events |
Monitoring these metrics continuously allows teams to detect memory inefficiencies before they escalate into Full GC storms or OutOfMemoryErrors.
For a deeper breakdown of these JVM performance indicators and how to interpret them in production, refer to this detailed guide to Java garbage collection key metrics.
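Two of these metrics are worth computing explicitly. Allocation rate is the Eden space consumed between consecutive young GCs divided by the interval between them; promotion rate is the Old Gen growth across a young collection. A sketch with illustrative numbers (the figures are assumptions, not measurements):

```java
public class GcRates {
    /** MB/s of new objects allocated in Eden between two consecutive young GCs. */
    static double allocationRateMbPerSec(double edenConsumedMb, double intervalSec) {
        return edenConsumedMb / intervalSec;
    }

    /** MB promoted into Old Gen by one young GC (Old Gen growth across the event). */
    static double promotedMb(double oldBeforeMb, double oldAfterMb) {
        return oldAfterMb - oldBeforeMb;
    }

    public static void main(String[] args) {
        // Hypothetical: Eden filled 512 MB in the 8 s between two young GCs...
        System.out.printf("Allocation rate: %.0f MB/s%n", allocationRateMbPerSec(512, 8));
        // ...and Old Gen grew from 300 MB to 312 MB across one of them.
        System.out.printf("Promotion: %.0f MB per young GC%n", promotedMb(300, 312));
    }
}
```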
Common GC Problems and How to Identify Them from Logs
GC logs are most valuable when you can pattern-match log signatures to real production issues. Below are common GC-related problems and how they typically appear in logs.
| Problem | GC Log Signature | What It Indicates |
| --- | --- | --- |
| Frequent Young GCs | High Minor GC cadence with short intervals between events | Excessive allocation churn or undersized Young Generation |
| Long Full GC Pauses | Multi-second stop-the-world events with heap-wide compaction | Old Gen saturation, heap fragmentation, or marking inefficiency |
| Memory Leak Indicators | Rising Old Gen occupancy after each GC with poor reclamation | Progressive object retention leading toward OutOfMemoryError |
| GC Overhead Limit Exceeded | java.lang.OutOfMemoryError: GC overhead limit exceeded | JVM spending excessive CPU in GC while reclaiming minimal memory |
For detailed real-world GC signatures and interpretation techniques, see this guide to GC log analysis.
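The memory-leak signature in particular can be checked mechanically: if Old Gen occupancy measured right after each Full GC keeps climbing, memory is being retained. A simple heuristic sketch (a production check would tolerate noise and look at longer windows):

```java
public class LeakIndicator {
    /**
     * True if the post-GC Old Gen baseline rises strictly across every
     * observation — the classic "staircase" pattern of a memory leak.
     */
    static boolean risingBaseline(double[] postGcOldGenMb) {
        for (int i = 1; i < postGcOldGenMb.length; i++) {
            if (postGcOldGenMb[i] <= postGcOldGenMb[i - 1]) return false;
        }
        return postGcOldGenMb.length >= 2;
    }

    public static void main(String[] args) {
        // Hypothetical post-Full-GC Old Gen occupancies (MB) over five cycles.
        double[] leaking = {410, 436, 471, 502, 540};   // never drops back: leak suspect
        double[] healthy = {410, 380, 415, 372, 390};   // baseline oscillates: fine
        System.out.println("staircase pattern? " + risingBaseline(leaking));
        System.out.println("staircase pattern? " + risingBaseline(healthy));
    }
}
```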
GC Log Analysis for Different Garbage Collectors
As a Java engineer, selecting the right GC algorithm is a critical architectural decision. The garbage collector runs inside the same JVM as your application, continuously reclaiming the memory left behind by object allocations. As allocation rates increase, GC activity intensifies correspondingly. Since the application and the GC compete for CPU resources, the choice of collector directly impacts latency, throughput, and overall system stability.
Each GC algorithm follows a different compaction strategy, pause model, and concurrency design, which also means each produces distinct log patterns. Understanding these differences is essential when analyzing GC logs and tuning JVM performance. Across OpenJDK releases, HotSpot has shipped seven GC algorithms (note that CMS was deprecated in JDK 9 and removed in JDK 14):
- Serial GC
- Parallel GC
- CMS GC (Concurrent Mark & Sweep)
- G1 GC
- ZGC
- Shenandoah GC
- Epsilon GC
For a deeper comparison of GC algorithms and their behavior models, see this guide.
The following video explains how different GC algorithms behave and how their log patterns differ in real-world JVM environments:
Tools to Analyze GC Logs in Java
Raw GC logs are verbose, format-dependent, and hard to interpret manually. GC analysis tools turn unstructured log entries into actionable performance figures. A solid GC log analyzer usually gives you:
- GC throughput calculations
- Pause time distribution and percentiles
- Trends in allocation and promotion rates
- Heap growth and retention performance analysis
- Leak detection indicators
- CPU time consumed by GC threads
By converting raw logs into visual reports and summarized KPIs, these tools can considerably reduce diagnostic time.
There are various options available, including GCeasy, GCViewer, and GarbageCat. GCeasy supports nearly all JVM variants and GC algorithms; it automatically parses logs and presents a production-ready analysis. For a good overview of the top GC log analysis tools, see this detailed guide.
GC Tuning Based on Log Insights
GC logs don’t just diagnose problems: they guide tuning decisions. The table below maps common log symptoms to potential JVM adjustments.
| Observed Symptom (From GC Logs) | Likely Cause | Possible Tuning Direction |
| --- | --- | --- |
| Frequent Minor GCs | High allocation rate or small Young Gen | Increase Young Gen size (-Xmn) or adjust NewRatio |
| High Promotion Rate | Objects aging too quickly | Tune Survivor ratio or increase Survivor space |
| Rising Old Gen Baseline | Memory retention growth | Investigate memory leaks; if you are sure there are none, increase heap size (-Xmx) |
| Long Full GC Pauses | Old Gen pressure or fragmentation | Increase heap, switch to G1/ZGC, or adjust region size |
| Low GC Throughput (<95%) | Excessive GC overhead | Increase heap or reduce allocation churn |
| GC Overhead Limit Exceeded | JVM spending too much time in GC | Increase heap or fix the retention problem |
The following video walks through practical JVM tuning decisions based on GC behavior and real log analysis:
Best Practices for Continuous GC Monitoring
GC analysis should not begin only when a production incident strikes. It should start in the testing phase and continue across the entire application lifecycle. Proactive monitoring allows early identification of memory pressure and prevents performance regressions from reaching production.
- Enable GC logging in staging and performance test environments
- Automate GC log analysis as part of CI/CD pipelines
- Use REST APIs from GC analysis platforms to validate performance baselines
- Centralize GC logs via log pipelines (ELK, Splunk, etc.)
- Track GC throughput and pause percentiles over time
- Alert on abnormal Full GC frequency
- Monitor Old Gen post-GC baseline trends
- Correlate GC pauses with application latency metrics
By building GC log analysis into CI/CD workflows, teams are able to identify allocation spikes, promotion pressure, and pause regressions before code reaches production. Continuous visibility transforms GC from a reactive diagnostic tool into a preventive observability layer.
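Alongside log files, the JVM exposes cumulative GC counters in-process through the standard java.lang.management API, which is handy for feeding GC counts and times into an existing metrics pipeline. A minimal sketch:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcMonitor {
    public static void main(String[] args) {
        // Each bean represents one collector (e.g. "G1 Young Generation",
        // "G1 Old Generation") and reports cumulative counts and times.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: %d collections, %d ms total%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
        // Sampling these counters periodically and diffing them yields GC
        // frequency and overhead without parsing log files at all.
    }
}
```

This complements rather than replaces GC logs: the beans expose running totals, while the logs explain individual events.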
For a more comprehensive JVM tuning and operational perspective, see this detailed guide on Java garbage collection best practices.
Quick GC Log Analysis Checklist
Use this diagnostic cheat sheet when analyzing GC logs in Java:
- Identify the active GC algorithm
- Check Minor and Full GC frequency
- Measure average and maximum pause times
- Review pause percentile distribution (P95 / P99)
- Compare heap occupancy before and after GC
- Examine promotion rate and Survivor space behavior
- Track Old Gen post-GC baseline growth
- Verify GC throughput remains within healthy range (>95% for most systems)
- Look for repeated Full GC cycles or compaction loops
- Check for OutOfMemoryError or GC overhead warnings
If multiple indicators trend negatively together, escalate to deeper heap analysis or tuning adjustments.
Conclusion: Mastering GC Log Analysis in Java
Understanding how to analyze GC logs in Java is an important part of JVM performance tuning. GC logs offer a time-based view of allocation pressure, promotion behavior, pause and latency impact, and heap efficiency that neither heap dumps nor thread dumps can provide.
Systematic reading exposes memory leaks, mis-sized heaps, GC thrashing, and latency bottlenecks before they grow into performance problems and, eventually, outages. The logs also provide the measurable evidence needed to make accurate tuning decisions and validate settings.
When GC log analysis is integrated with testing, CI/CD pipelines, and continuous monitoring, teams move from reactive troubleshooting to proactive performance engineering. For today’s Java applications, GC logs are not just optional diagnostics: they are necessary metrics for stability, scalability and capacity planning.

