As a Java developer, you don't need to know the internal details of the JVM most of the time. The virtual machine is not a trivial piece of technology, and learning its internals can be intimidating. But there are times when you find something interesting in the logs which seems to be important. Like this message:
CodeCache is full. The compiler has been disabled. Try increasing the code cache size using -XX:ReservedCodeCacheSize=
The compiler has been disabled?
Yikes. That sounds bad, but the application is still running as expected. Fortunately, this message likely won't jump out at you, as it is related to a specific usage pattern which, in my opinion, isn't very common. But what is the CodeCache anyway, and how can it be filled?
The JVM uses several types of caches, and the CodeCache is one of them. As your application executes, your code (to be precise, bytecode) is interpreted, and some parts are compiled by the JIT (just-in-time) compiler to native code, which can be executed directly by the CPU. Interpretation is a relatively slow process, but if a section of code isn't invoked frequently, that is fine. Compilation to native code takes additional time (and even more with optimizations), but if the code is executed frequently, the speed gain is worth it. And how does the CodeCache fit into the equation? Well, the compiled native code is stored in the CodeCache.
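You don't need a profiler to see how full the cache is; the standard java.lang.management API exposes the CodeCache as a memory pool. A minimal sketch (the class name is mine, and the pool names differ between JDK versions: JDK 7/8 report a single "Code Cache" pool, while JDK 9+ splits it into several "CodeHeap" pools):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;
    import java.lang.management.MemoryUsage;

    public class CodeCacheUsage {
        public static void main(String[] args) {
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                String name = pool.getName();
                // Match both the JDK 7/8 and the JDK 9+ naming schemes.
                if (name.contains("Code Cache") || name.startsWith("CodeHeap")) {
                    MemoryUsage usage = pool.getUsage();
                    System.out.printf("%s: used = %d KB, max = %d KB%n",
                            name, usage.getUsed() / 1024, usage.getMax() / 1024);
                }
            }
        }
    }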
Your application usually has a fixed size. The CodeCache grows after the application starts, but at some point every critical section of code has been compiled and the growth stops. The default size of the cache should be enough for most applications, but if not, it can be increased. Not to mention that the cache should flush compiled sections of code when it's full to make room for new ones. And according to our profiling, this flushing really happens. So how can you fill it to the point of stopping the compiler?
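For reference, the sizing and flushing behaviour is controlled by HotSpot flags along these lines (app.jar is just a placeholder, and the defaults depend on the JDK version and on whether tiered compilation is enabled): -XX:ReservedCodeCacheSize sets the maximum size, -XX:InitialCodeCacheSize the starting size, -XX:+UseCodeCacheFlushing allows the sweeper to flush cold code, and -XX:+PrintCodeCache prints the cache layout when the JVM exits.

    java -XX:ReservedCodeCacheSize=256m \
         -XX:InitialCodeCacheSize=32m \
         -XX:+UseCodeCacheFlushing \
         -XX:+PrintCodeCache \
         -jar app.jar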
In one of our projects, we use the Janino compiler to compile many generated classes, and even though they are used only for a brief moment to generate data, they are still compiled to native code and saved to the cache. But for some reason, the code of the generated classes was never flushed. This is indeed weird behavior, because these problems with the CodeCache were related to JDK 7 and addressed in JDK 8.
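To illustrate the pattern (this is not our production code, and the expression and parameters are made up), generating classes with Janino can look roughly like this; every cook() call compiles the expression into a brand new class:

    import org.codehaus.janino.ExpressionEvaluator;

    public class GeneratedClassChurn {
        public static void main(String[] args) throws Exception {
            for (int i = 0; i < 10_000; i++) {
                // Each iteration produces a fresh generated class.
                ExpressionEvaluator ee = new ExpressionEvaluator();
                ee.setParameters(new String[] {"a", "b"}, new Class[] {int.class, int.class});
                ee.setExpressionType(int.class);
                ee.cook("a * " + i + " + b");
                // In our application the generated code runs long enough for the JIT
                // to compile it to native code, which then lands in the CodeCache.
                System.out.println(ee.evaluate(new Object[] {2, 3}));
            }
        }
    }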
To be more precise, flushing was observable right up until the cache was filled; once it was full, it stayed that way. It's quite possible the sweeper was invoked but no code was flushed. It's hard to tell why the code is stuck, because the sweeping process is opaque to the user, and the decision whether a method should be flushed is calculated at runtime from several parameters.
There are several ways to fight a full CodeCache, but not all of them are feasible:
Assign more memory to the CodeCache - this is, however, a temporary solution, as in our case it would still fill up eventually
Minimize the number of generated classes and the amount of code per class - another temporary solution
Separate the class compilation into a different process which is killed when done - we can do this because our classes are used only briefly to generate data
Mark the generated classes for interpretation only - can be done for statically-known classes but probably not at runtime (see the flag sketch below the list)
Disable the JIT during the class compilation - can be set when the application is started, and can probably be done at runtime using Compiler.disable(), but we aren't sure it's supported in all JVMs (see the sketch below the list)
Investigate why the classes are stuck and solve the source of the problem
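For the interpretation-only option, the exclusion would be expressed with the -XX:CompileCommand flag. The package below is a made-up placeholder, and the exact pattern syntax varies between JDK versions (older JDKs also accept a slash-separated class name with a dot before the method name):

    java -XX:CompileCommand=exclude,com.example.generated.*::* -jar app.jar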
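For disabling the JIT at runtime, what we had in mind is the old java.lang.Compiler hook, whose behavior is documented as JVM-dependent and may be a no-op (it is also deprecated in newer JDKs). A rough sketch, assuming a hypothetical generateData() method:

    // Ask the JVM to suspend JIT compilation while the generated classes run.
    // Whether this has any effect is implementation-specific.
    Compiler.disable();
    try {
        generateData();   // hypothetical method that cooks and runs the Janino classes
    } finally {
        Compiler.enable();
    }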
For now, we aren't sure which way to take. It will probably involve more cache profiling, so it's possible we will return to this topic in one of our future blog posts.