I've been benchmarking my app and analyzing it with JMC. I've noticed that under load it does quite a bit of JIT compiling. If I send a large number of transactions per second, the compile time spikes, and it grows proportionally with how heavy the load test against the app is.
I've also observed that Code Cache usage slowly rises. So I decided to raise the Code Cache reservation to 500MB as a test. Bad move! Now it's spending even more time on JIT compilation.
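For context, this is roughly the change I made (-XX:ReservedCodeCacheSize is the standard HotSpot flag for the code cache reservation; myapp.jar is just a placeholder for my application):

    # reserve 500 MB for compiled code instead of the default
    java -XX:ReservedCodeCacheSize=500m -jar myapp.jar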
Then I explicitly disabled code cache flushing via -XX:-UseCodeCacheFlushing. However, I noticed that the peak Code Cache usage is larger than the current size. This leads me to a couple of questions:
- Does the JVM try to cache every JIT compilation?
- Why is the peak Code Cache size bigger than the current size even though I disabled flushing? (The snippet after this list shows how I'm reading the current and peak values.)
- Is there "temporary" compiled code that's automatically removed after the function ends?
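For reference, here is a minimal sketch of how I'm comparing the current and peak values outside of JMC, using the standard MemoryPoolMXBean API. The name check is an assumption: depending on the JVM version and whether the segmented code cache is enabled, this is either a single "Code Cache" pool or several "CodeHeap ..." pools.

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;
    import java.lang.management.MemoryUsage;

    public class CodeCacheStats {
        public static void main(String[] args) {
            // Print current vs. peak usage for every code-cache-related memory pool
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                if (pool.getName().contains("Code")) {
                    MemoryUsage usage = pool.getUsage();
                    MemoryUsage peak = pool.getPeakUsage();
                    System.out.printf("%s: used=%d KB, peak=%d KB%n",
                            pool.getName(),
                            usage.getUsed() / 1024,
                            peak.getUsed() / 1024);
                }
            }
        }
    }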
In the HotSpot JVM, all JIT-compiled methods stay in the CodeCache until they are reclaimed.
UseCodeCacheFlushing affects reclamation of cold (but still valid) compiled methods. However, the CodeCache may also contain obsolete or invalidated methods ("zombies"), which are subject to purging at the next sweep cycle even with -XX:-UseCodeCacheFlushing. There is a separate JVM flag, -XX:-MethodFlushing, which prevents sweeping the CodeCache altogether, including zombie methods.
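As a quick way to see the difference, you could compare runs like these (a sketch; myapp.jar is a placeholder). The jcmd Compiler.codecache command prints the current code cache usage of a running JVM if you want to inspect it alongside JMC.

    # cold (but still valid) compiled methods are kept; zombies are still swept
    java -XX:-UseCodeCacheFlushing -jar myapp.jar

    # sweeping is disabled entirely, so even zombie methods are never purged
    java -XX:-MethodFlushing -jar myapp.jar

    # inspect code cache usage of a running JVM
    jcmd <pid> Compiler.codecache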