Clean up the leak!


Even with the most sophisticated and thorough QA process, bugs will make it into your production systems. Even the most experienced developers will at times be confident about a certain release, only to then face the reality of a production environment, where things just don’t work out the same way they did during testing. This is not a verdict on the quality of your code; it’s a part of life – your customers and users can always surprise you. What’s important is how you react to such incidents.

You may have very few bugs in your releases, but if the mitigation of these incidents is not satisfactory, the user experience – which grades your work at the end of the day – will dip rapidly. This post does not aim to provide a Swiss Army knife for every scenario, but it addresses a class of problems that is far more frequent than it should be: the infamous condition known to every Java application administrator as the out of memory error.


It is not news to any experienced application administrator or developer that one needs to identify what is causing excessive memory usage. It may be a bug in the software, or it may be a problem with how a certain feature is used – but regardless of the cause, you will want to see the evidence, the footprints left during and after the leak. Nor are the methods new or complicated: you capture and analyze garbage collection logs and, ideally, a heap dump, to see not only how the problem happened but also what exactly may have caused it.

The complication comes when you only face such an issue in a production environment and you then realize that the required diagnostic parameters are not in place – you have no data to analyze. Specifically, if you did not enable garbage collection logging and automatic heap dump generation, all you see are the user complaints and possibly some vague log messages. Enabling these parameters would normally require a restart of the application, which in turn results in even more unhappy users – far from ideal.
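For reference, these are the kinds of diagnostic flags you would ideally have passed at startup. The application name and paths below are illustrative placeholders; note also that on JDK 9 and later the PrintGC* flags were superseded by unified -Xlog:gc* logging:

```shell
# Illustrative diagnostic startup flags for a pre-JDK 9 HotSpot JVM
# (myapp.jar and the paths are placeholders -- adjust to your environment)
java -XX:+PrintGCDetails \
     -XX:+PrintGCDateStamps \
     -Xloggc:/var/log/myapp/gc.log \
     -XX:+HeapDumpOnOutOfMemoryError \
     -XX:HeapDumpPath=/var/dumps/myapp \
     -jar myapp.jar
```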


So how about we avoid this additional restart and enable the required parameters at runtime? You didn’t think it was possible? Neither did I, until I was introduced to a simple little utility called jinfo that is bundled with the JDK! Putting it as simply as possible: there is a set of JVM arguments that can be changed at runtime, and luckily these include the ones related to garbage collection logging and heap dump generation. Use the following command to list all the parameters that you can alter:

java -XX:+PrintFlagsFinal -version | grep manageable

  intx CMSAbortablePrecleanWaitMillis         = 100 {manageable} 
  intx CMSTriggerInterval                     = -1 {manageable} 
  intx CMSWaitDuration                        = 2000 {manageable}
  bool HeapDumpAfterFullGC                    = false {manageable}
  bool HeapDumpBeforeFullGC                   = false {manageable}
  bool HeapDumpOnOutOfMemoryError             = false {manageable}
  ccstr HeapDumpPath                          = {manageable}
  uintx MaxHeapFreeRatio                      = 100 {manageable}
  uintx MinHeapFreeRatio                      = 0 {manageable}
  bool PrintClassHistogram                    = false {manageable}
  bool PrintClassHistogramAfterFullGC         = false {manageable}
  bool PrintClassHistogramBeforeFullGC        = false {manageable}
  bool PrintConcurrentLocks                   = false {manageable}
  bool PrintGC                                = false {manageable} 
  bool PrintGCDateStamps                      = false {manageable}
  bool PrintGCDetails                         = false {manageable}
  bool PrintGCID                              = false {manageable}
  bool PrintGCTimeStamps                      = false {manageable}

How does it work? For the long version, visit the related documentation, but in short you just use the following format (this example sets the PrintGCDetails option to true):

jinfo -flag +PrintGCDetails <PID>
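In practice you will usually want more than GC detail. Assuming the target JVM’s process ID is in $PID, a session might look like the following – the flag names come from the manageable list above, while the dump path is only an example:

```shell
# Enable a heap dump on the next OutOfMemoryError
jinfo -flag +HeapDumpOnOutOfMemoryError $PID

# Point the dump at a volume with enough free space
# (the path is an example -- pick one that suits your host)
jinfo -flag HeapDumpPath=/var/dumps/myapp $PID

# Boolean flags are switched with +/-; valued flags use name=value
jinfo -flag -PrintGCDetails $PID
```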

Final words

Saving yourself a restart in business-critical environments is desirable, but make sure not to rely on the jinfo tool as part of automated processes: Oracle marks this utility as experimental and unsupported, and it may or may not be part of future JDK releases – see the related tech note for more information.

The settings applied or altered by jinfo are not persistent, nor will they be visible in the output of ps. The effective values of manageable parameters can, however, be queried using the jinfo tool itself.
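To check what a manageable flag is currently set to, pass its name to jinfo without a +/- prefix or a value, and it prints the effective setting:

```shell
# Query the effective value of a flag on the running JVM
# (prints the flag in -XX: form, e.g. -XX:-HeapDumpOnOutOfMemoryError
#  when it is disabled)
jinfo -flag HeapDumpOnOutOfMemoryError $PID
```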

Add this handy little utility to your tool belt and remember to use it when you’re in trouble: save yourself (and your users) a restart by modifying the diagnostic JVM arguments whenever needed! Need to share this information? Use our knowledge base article: How to change JVM arguments at runtime to avoid application restart