In this video I explain 21 JVM parameters which are well suited for most server applications. If you have any questions, check the links below for more information or just ask in the comments section.

I run several Java enterprise server applications. I have often wondered: what are the best „default“ JVM settings to start with when putting a server application into production? I read a lot on the web, tried several things myself, and want to share what I have found out so far. Links containing more information about JVM optimization can be found here:

http://blog.sokolenko.me/2014/11/javavm-options-production.html

http://www.petefreitag.com/articles/gctuning/

http://stas-blogspot.blogspot.de/2011/07/most-complete-list-of-xx-options-for.html

 

So let’s start:

-server

Use „-server“: all 64-bit JVMs use the server VM by default anyway. This setting generally optimizes the JVM for long-running server applications instead of fast startup. The JVM collects more data about the Java byte code during program execution and generates the most efficient machine code via JIT compilation.

-Xms<heap size>[g|m|k] -Xmx<heap size>[g|m|k]

The „-Xmx/-Xms“ settings specify the maximum and minimum size of the JVM heap memory. For servers, both parameters should have the same value to avoid heap resizing at runtime. I have applications running with 16 GB heap sizes without any issues.

Depending on your application, you will have to experiment to find out how much memory is best suited for your use case.
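For illustration only: a service that should run with a fixed 4 GB heap could be started as shown below. The heap size and the application jar name are just placeholders, pick what fits your application.

java -server -Xms4g -Xmx4g -jar myapp.jar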

-XX:MaxMetaspaceSize=<metaspace size>[g|m|k]

Java 8 no longer has a „Permanent Generation“ (PermGen) but requires additional „Metaspace“ memory instead. This memory is used, in addition to the heap memory we specified before, to store class metadata.

The default size is unlimited – I tend to limit MaxMetaspaceSize to a somewhat generous value. That way, in case something goes wrong with the application, the JVM will not hog all the memory of the server.

My suggestion: let your application run for a couple of days to get a feeling for how much Metaspace it uses under normal conditions. On the next restart of the application, set the limit to, for example, double that value.
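To get a feeling for the current Metaspace usage of a running JVM, you can for example use the „jstat“ tool that ships with the JDK – the process id 12345 below is just a placeholder:

jstat -gc 12345

On Java 8, the MC and MU columns of the output show the Metaspace capacity and the Metaspace actually in use (in KB).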

-XX:+CMSClassUnloadingEnabled

Additionally, you might want to allow the JVM to unload classes which are held in memory although no code points to them anymore. If your application generates lots of dynamic classes, this is what you want.

-XX:+UseConcMarkSweepGC

This option makes the JVM use the Concurrent Mark Sweep (CMS) collector – it can do much of its work concurrently with program execution, but in some circumstances a „full GC“ with a „stop-the-world“ (STW) pause might still occur. I have read many articles and came to the conclusion that this GC is still the best one for server workloads.

-XX:+CMSParallelRemarkEnabled

The option CMSParallelRemarkEnabled makes the (stop-the-world) remark phase use multiple threads in parallel, which shortens the remark pause – something you want if your server has many cores (and most servers do).

-XX:+UseCMSInitiatingOccupancyOnly
-XX:CMSInitiatingOccupancyFraction=<percent>

Normally the GC uses heuristics to decide when it is time to clear memory. With the default settings, the GC might kick in too late, causing full GCs.
Some sources say it might be a good idea to disable the heuristics altogether and just use the old generation occupancy to start a CMS collection cycle. Values around 70% worked fine for all of my applications and use cases.
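Putting the CMS-related flags together, a configuration that starts a CMS cycle at 70% old generation occupancy could look like this – the percentage is simply the value that worked for me, adjust it for your own workload:

-XX:+UseConcMarkSweepGC
-XX:+CMSParallelRemarkEnabled
-XX:+UseCMSInitiatingOccupancyOnly
-XX:CMSInitiatingOccupancyFraction=70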

-XX:+ScavengeBeforeFullGC

This option tells the GC to first free memory by collecting the „young generation“ – i.e. fairly new objects – before doing a full GC.

-XX:+CMSScavengeBeforeRemark

CMSScavengeBeforeRemark attempts a minor (young generation) collection before the CMS remark phase, so there is less to scan and the remark pause afterwards stays short.

-XX:+CMSClassUnloadingEnabled

The option „-XX:+CMSClassUnloadingEnabled“ (already mentioned above) tells the JVM to unload classes which are no longer needed by the running application. If you deploy WAR files to an application server like WildFly, Tomcat or GlassFish without restarting the server after the deployment, this flag is for you.

-XX:+ExplicitGCInvokesConcurrentAndUnloadsClasses

The option „-XX:+ExplicitGCInvokesConcurrentAndUnloadsClasses“ is especially important if your application uses RMI (remote method invocation). The use of RMI causes the JVM to do a full GC every hour! This can be a very bad idea for large heap sizes, because the full GC pause might take up to several seconds. It is better to run a concurrent GC instead and try to unload unused classes to free up more memory – which is exactly what this option does.
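As far as I know, this hourly full GC is triggered by RMI's distributed garbage collector, whose interval is controlled by the following system properties (the default of 3600000 ms equals one hour). If explicit GCs cannot be avoided completely, raising this interval is another option – the six-hour value below is only an example:

-Dsun.rmi.dgc.client.gcInterval=21600000
-Dsun.rmi.dgc.server.gcInterval=21600000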

-XX:+PrintGCDateStamps
-verbose:gc
-XX:+PrintGCDetails
-Xloggc:"<path to log>"

The options shown here write all GC-related information to the specified log file. You can see how well your GC configuration works by inspecting this log.
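Put together, the GC logging part of the command line could look like this – the log path is just an example, use whatever fits your environment:

-verbose:gc
-XX:+PrintGCDetails
-XX:+PrintGCDateStamps
-Xloggc:/var/log/myapp/gc.log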

I personally prefer to use the „Visual GC“ plug-in for the „VisualVM“ tool to monitor the general JVM and GC behaviour.

-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=<path to dump>`date`.hprof

When your JVM runs out of memory, you will want to know why. Since an OutOfMemoryError might be hard to reproduce and you want to get your production server up and running again quickly, you should specify a path for a heap dump. When things have settled down, you can analyze the dump afterwards.
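A concrete example could look like the lines below – the dump directory is only an illustration, and as far as I know the JVM will pick a file name on its own if HeapDumpPath points to a directory:

-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/var/dumps/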

-Djava.rmi.server.hostname=<external IP>
-Dcom.sun.management.jmxremote.port=<port>

These options let you specify an IP and port for JMX – you will need this port to be open in order to connect remotely to a JVM running on a server with tools like VisualVM. This way you can gain deep insights into the CPU and memory usage, GC behaviour, class loading and thread count of your application.
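A minimal setup for remote VisualVM access could look like the following – the IP and port are placeholders, and note that this example disables authentication and SSL, which you should only ever do inside a trusted network:

-Djava.rmi.server.hostname=192.168.1.100
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=9010
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false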

VisualVM
Lastly, I would like to recommend the VisualVM tool, which is bundled with the Java 8 JDK. You can use it to gain more insights into how your specific application behaves on the JVM – like CPU and memory usage, thread utilisation and much more.

Visual GC
VisualVM can be extended with a plug-in called „Visual GC“. It shows you very detailed information about the usage of the young and old generation object spaces. You can easily spot problems with garbage collection simply by looking at these graphs while the application is running.
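To sum things up, here is what a complete start command combining the parameters discussed above could look like. All concrete values – heap size, Metaspace limit, paths, IP, port and the application jar – are only examples and need to be adapted to your application; add the remaining JMX properties from the previous section if you need remote monitoring:

java -server \
 -Xms4g -Xmx4g \
 -XX:MaxMetaspaceSize=512m \
 -XX:+UseConcMarkSweepGC \
 -XX:+CMSParallelRemarkEnabled \
 -XX:+UseCMSInitiatingOccupancyOnly \
 -XX:CMSInitiatingOccupancyFraction=70 \
 -XX:+ScavengeBeforeFullGC \
 -XX:+CMSScavengeBeforeRemark \
 -XX:+CMSClassUnloadingEnabled \
 -XX:+ExplicitGCInvokesConcurrentAndUnloadsClasses \
 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
 -Xloggc:/var/log/myapp/gc.log \
 -XX:+HeapDumpOnOutOfMemoryError \
 -XX:HeapDumpPath=/var/dumps/ \
 -Djava.rmi.server.hostname=192.168.1.100 \
 -Dcom.sun.management.jmxremote.port=9010 \
 -jar myapp.jar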

Thank you very much for watching! If you liked the video, you might consider giving it a „thumbs up“. If you have any questions, just put them in the comments section. I will reply as quickly as possible.