In recent times, Uber has been experiencing exponential growth in its traffic. This spike in traffic volume exposed several memory-related performance bottlenecks in their platform: long garbage collection (GC) pauses, memory corruption, out-of-memory (OOM) errors, and memory leaks. In this brilliant article, the Uber engineering team summarizes their optimization journey: the challenges they faced, the tools they used, and the best practices they followed to fix their application’s performance bottlenecks.
One of the tools they used in their optimization journey was our GCeasy tool. In the article, they discuss how they used GCeasy to study garbage collection pause times, reclaimed bytes, and other GC metrics. As the builders of GCeasy, we are delighted to see it.
In one part of the article, the Uber engineering team documented how they went about identifying thread(s) that were creating tonnes of objects in memory. One of their application threads was gathering vital JVM metrics and publishing them to their internal TSDB server for monitoring purposes. This thread was accidentally configured to capture JVM metrics every millisecond instead of every second. That is, the thread was capturing and transmitting vital JVM metrics 1,000 times per second instead of once per second. This caused significant object creation in the JVM, adding unnecessary overhead.
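To illustrate how easy such a misconfiguration is to make, here is a minimal, hypothetical sketch (not Uber's actual code) of a metrics-publishing thread scheduled with `ScheduledExecutorService`, where the only bug is the wrong `TimeUnit` in the period argument:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class MetricsPublisher {
    static final AtomicLong publishCount = new AtomicLong();

    static void publishJvmMetrics() {
        // Each run allocates metric objects; at ~1,000 runs per second this
        // creates significant allocation pressure on the JVM.
        long usedHeap = Runtime.getRuntime().totalMemory()
                      - Runtime.getRuntime().freeMemory();
        publishCount.incrementAndGet();
        // A real publisher would serialize `usedHeap` and send it to the TSDB.
    }

    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
        // Intended: publish once per second. Bug: the unit is MILLISECONDS,
        // so the task fires roughly 1,000x more often than intended.
        scheduler.scheduleAtFixedRate(
                MetricsPublisher::publishJvmMetrics,
                0, 1, TimeUnit.MILLISECONDS); // should be TimeUnit.SECONDS

        Thread.sleep(100);                    // observe for 0.1 seconds
        scheduler.shutdownNow();
        System.out.println("runs in 100 ms: " + publishCount.get());
    }
}
```

Running this for just 100 ms shows the task firing dozens of times, when the intent was roughly once every 10 runs of the whole observation window, which is exactly the kind of silent overhead that only shows up as GC pressure.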
To identify this misconfigured thread, the Uber engineering team had to take multiple snapshots of thread dumps from the application. From these thread dumps, they had to identify the threads that were in the ‘RUNNABLE’ state. From those, they had to identify the threads that were *consistently* in the ‘RUNNABLE’ state across multiple thread dump snapshots to spot the misconfigured thread. All of these steps are tedious. On top of that, they have to be done manually. Besides that, there is an element of hit and miss in identifying such ‘hot’ threads.
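The steps above can be sketched in code. This is a simplified, hypothetical illustration (not what Uber or any tool actually runs) using the JDK's `ThreadMXBean` to take several in-process thread snapshots and report the threads that are RUNNABLE in every snapshot:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import java.util.HashMap;
import java.util.Map;

public class HotThreadFinder {
    /**
     * Takes `snapshots` thread dumps, `intervalMs` apart, and returns the
     * id -> name map of threads that were RUNNABLE in every snapshot.
     */
    public static Map<Long, String> findConsistentlyRunnable(
            int snapshots, long intervalMs) throws InterruptedException {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        Map<Long, Integer> runnableCounts = new HashMap<>();
        Map<Long, String> names = new HashMap<>();

        for (int i = 0; i < snapshots; i++) {
            // dumpAllThreads(false, false): skip lock details for speed
            for (ThreadInfo info : mx.dumpAllThreads(false, false)) {
                if (info.getThreadState() == Thread.State.RUNNABLE) {
                    runnableCounts.merge(info.getThreadId(), 1, Integer::sum);
                    names.put(info.getThreadId(), info.getThreadName());
                }
            }
            Thread.sleep(intervalMs);
        }

        // Keep only threads seen as RUNNABLE in every single snapshot.
        Map<Long, String> hot = new HashMap<>();
        runnableCounts.forEach((id, count) -> {
            if (count == snapshots) hot.put(id, names.get(id));
        });
        return hot;
    }
}
```

Even this toy version shows why doing it by hand across hundreds of threads is painful: the correlation of thread identity across snapshots is the tedious part, and a thread that happens to be momentarily parked when a snapshot is taken can slip through, which is the hit-and-miss aspect.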
Modern Java applications tend to contain hundreds (sometimes thousands) of threads. It’s not a trivial job to narrow down the exact thread(s) consuming an enormous amount of CPU and memory from those hundreds of threads. However, you can use the fastThread thread dump analysis tool to instantly identify such hot threads, without the hit-and-miss guesswork. Here is a real case study on how a major trading application in North America used the fastThread tool to identify such hot threads and fix the problem instantly.