Tuesday, August 11, 2009

Mysterious JVM crashes explained

We've recently suffered from mysterious crashes of the JVM running our application server. The JVM (Sun JDK 1.5.0_06, though we later reproduced the problem with JDK 1.6.0_14 as well) simply crashed with all sorts of memory-related errors. A few examples:

java.lang.OutOfMemoryError: requested 8388608 bytes for uint64_t in /BUILD_AREA/jdk1.5.0_06/hotspot/src/share/vm/utilities/growableArray.cpp. Out of swap space?
java.lang.OutOfMemoryError: requested 157286476 bytes for JvmtiTagHashmapEntry* in /BUILD_AREA/jdk1.5.0_06/hotspot/src/share/vm/prims/jvmtiTagMap.cpp. Out of swap space?
java.lang.OutOfMemoryError: requested 16 bytes for CHeapObj-new. Out of swap space?

The server was not out of swap space and it had plenty of free memory left, so that was not the problem.

I started googling around and found plenty of reports of these kinds of errors, but they turned out to have many possible causes.

In our case, the problem turned out to be that we allowed the JVM to use too much memory... The JVM was configured with a 2GB heap (and a 256MB perm size) and was running on a 32-bit machine with 4GB of memory (Linux 2.6.x kernel, without hugemem or anything like that). As it turns out, in this kind of configuration Linux allows each process a little over 3GB of address space at most. On top of the heap and the perm space, the JVM also needs native memory for things like thread stacks, the JIT code cache and GC bookkeeping. So when our application was requiring a lot of memory, the JVM tried to allocate (perhaps temporarily, to perform garbage collection) more memory than the Linux kernel would give it, resulting in one of the above types of crashes.
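To make the arithmetic concrete, here is a minimal sketch (not from our codebase) of the address-space budget under the flags we were using. The ~3GB process limit and the 256MB perm size are hardcoded assumptions here, since the perm size comes from a startup flag rather than something plain Java code can easily query:

// Run on the affected machine with the original flags:
//   java -Xmx2g -XX:MaxPermSize=256m AddressSpaceBudget
public class AddressSpaceBudget {
    public static void main(String[] args) {
        long maxHeap = Runtime.getRuntime().maxMemory();  // roughly the -Xmx value
        long permSize = 256L * 1024 * 1024;               // assumed: -XX:MaxPermSize=256m
        long processLimit = 3L * 1024 * 1024 * 1024;      // assumed: ~3GB user address space on 32-bit Linux

        long nativeHeadroom = processLimit - maxHeap - permSize;
        System.out.printf("max heap:        %,d bytes%n", maxHeap);
        System.out.printf("perm size:       %,d bytes%n", permSize);
        System.out.printf("native headroom: %,d bytes%n", nativeHeadroom);
        // Thread stacks, the JIT code cache, GC bookkeeping and direct buffers
        // all have to fit in that headroom; when it runs out, the JVM dies with
        // the "Out of swap space?" errors above instead of a normal OutOfMemoryError.
    }
}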

Configuring the JVM with a smaller heap (1.5GB) caused our application to simply go out of memory with a regular "java.lang.OutOfMemoryError: Java heap space". But at least it no longer just crashed, and after a few hours with a memory profiler (YourKit) we found the cause of the excessive memory usage in our application as well.
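For comparison, a toy program like this (just an illustration, not our application) produces that "regular" heap-space error rather than a hard crash, because all of its allocations stay inside the Java heap the JVM has already reserved:

import java.util.ArrayList;
import java.util.List;

// Run with a small heap, e.g.: java -Xmx64m HeapExhaustion
public class HeapExhaustion {
    public static void main(String[] args) {
        List<byte[]> hog = new ArrayList<byte[]>();
        while (true) {
            hog.add(new byte[1024 * 1024]); // grab 1MB chunks until the heap fills up
        }
        // Ends with: java.lang.OutOfMemoryError: Java heap space
    }
}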

I still wonder if the JVM or the Linux kernel could arrange for a more meaningful error message, though.