Given the following code:
public class Test {
    static int[] big = new int[10000];

    public static void main(String[] args) {
        long time;
        for (int i = 0; i < 16; i++) {
            time = System.nanoTime();
            getTimes();
            System.out.println(System.nanoTime() - time);
        }
    }

    public static void getTimes() {
        int d;
        for (int i = 0; i < 10000; i++) {
            d = big[i];
        }
    }
}
The output shows a decreasing duration trend:
171918
167213
165930
165502
164647
165075
203991
70563
45759
43193
45759
44476
45759
52601
47897
48325
Why is the same code in getTimes being executed in less than one third of the time after it has been executed 8 times or more? (Edit: it does not always happen exactly at the 8th iteration, but somewhere between the 5th and the 10th.)
The fact that what you see is the result of some JIT optimization should be clear by now, looking at all the comments you received. But what is really happening, and why is that code almost always optimized after the same number of iterations of the outer for loop? I'll try to answer both questions, but please remember that everything explained here applies only to Oracle's HotSpot VM. There is no Java specification that defines how a JVM's JIT should behave.
First of all, let's see what the JIT is doing while running that Test program with the -XX:+PrintCompilation flag, e.g. java -XX:+PrintCompilation Test (the plain JVM is enough for this; there is no need to load the debugging shared library, which is only required for some of the UnlockDiagnosticVMOptions options). In the resulting output (omitting a few lines at the beginning that show other methods being compiled), the printlns from your code are interleaved with diagnostic information about the compilations the JIT is performing. Each compilation line contains, in order: a timestamp, the compilation id, the attributes of the compilation, the tiered compilation level, the method name, the osr_bci (if available) and the size of the generated code.
Keeping only the lines related to getTimes, it's clear that getTimes is being compiled more than once, and every time in a different way.

The % symbol means that on-stack replacement (OSR) has been performed: the 10k loop contained in getTimes has been compiled in isolation from the rest of the method, and the JVM replaced that section of the method's code with the compiled version. The osr_bci is an index that points to this new compiled block of code.

The next compilation is a classic JIT compilation that compiles the whole getTimes method (the size is still the same because there is nothing else in that method other than the loop).

The third time, another OSR is performed, but at a different tiered level. Tiered compilation was added in Java 7 and basically allows the JVM to choose between the client and server JIT modes at runtime, switching freely between the two when necessary. Client mode performs a simpler set of optimization strategies, while server mode can apply more sophisticated optimizations which, on the other hand, have a bigger cost in terms of time spent compiling.
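To make OSR a bit more concrete, here is a minimal, hypothetical sketch (the OsrDemo class name and the loop bound are mine, not part of the original program) of the typical situation where it kicks in: a method that is invoked only once but contains a long-running loop. Running it with -XX:+PrintCompilation should normally show a compilation line for main marked with %.

    public class OsrDemo {
        public static void main(String[] args) {
            // main is invoked only once, so it can never reach the method
            // invocation threshold; only the backward branches of this loop
            // become "hot". The JIT compiles the loop and swaps the compiled
            // version in while main is still running: on-stack replacement.
            long sum = 0;
            for (int i = 0; i < 1000000; i++) {
                sum += i;
            }
            // Printing the result keeps the loop from being trivially removed.
            System.out.println(sum);
        }
    }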
I will not go into the details of the different modes or of tiered compilation; if you need additional information, I recommend Java Performance: The Definitive Guide by Scott Oaks, and also check this question, which explains what changes between the levels.
Back to the output of PrintCompilation, the gist here is that, from a certain point in time, a sequence of compilations of increasing complexity is performed until the method apparently becomes stable (i.e. the JIT doesn't compile it again).
So, why does all this start at that certain point in time, after 5-10 iterations of the main loop?
Because the inner getTimes loop has become "hot".

The HotSpot VM usually defines as "hot" those methods that have been invoked at least 10k times (that's the historical default threshold; it can be changed using -XX:CompileThreshold=<num>, and with tiered compilation there are now multiple thresholds), but in the case of OSR I'm guessing that it is performed when a block of code is deemed "hot" enough, in terms of absolute or relative execution time, inside the method that contains it.
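A practical consequence, if you only care about steady-state numbers, is to warm the method up past those thresholds before timing it. Below is a minimal sketch along those lines (the WarmupTest class name and the 20,000-iteration warm-up count are arbitrary choices of mine, not values prescribed by the JVM):

    public class WarmupTest {
        static int[] big = new int[10000];

        public static void main(String[] args) {
            // Warm-up phase: call the method well past the default compilation
            // thresholds so the JIT has already compiled it (and, with tiered
            // compilation, most likely re-compiled it at a higher tier) before
            // any measurement is taken.
            for (int i = 0; i < 20000; i++) {
                getTimes();
            }
            // Measurement phase: timings should now be far more stable.
            for (int i = 0; i < 16; i++) {
                long time = System.nanoTime();
                getTimes();
                System.out.println(System.nanoTime() - time);
            }
        }

        public static void getTimes() {
            int d;
            for (int i = 0; i < 10000; i++) {
                d = big[i];
            }
        }
    }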
Additional References
PrintCompilation Guide by Krystal Mok
Java Performance: The Definitive Guide by Scott Oaks