Why is calling a function so slow in JavaScript?


Here's something really odd I noticed recently: calling a function seems to take significantly longer than the work the function itself does, as in this fiddle:

jQuery('button').click(function () {
    console.time('outer');
    dostuff.call(this);
    console.timeEnd('outer');
});

var dostuff = function () {
    console.time('inner');
    jQuery(this).css('background', 'red');
    jQuery(this).css('border', 'solid black');
    jQuery(this).css('margin', '0 5px 0');
    jQuery(this).css('padding', '0px');
    console.timeEnd('inner');
};

The console output shows the outer timer is WAY slower than the actual work the function does... Why does this happen, and more importantly, how can I reduce this overhead in time-critical code?

outer: timer started show:23
17:12:49.020 inner: timer started show:29
17:12:49.021 inner: 1.8ms show:34
17:12:49.023 outer: 5.1ms show:25
17:12:51.368 outer: timer started show:23
17:12:51.368 inner: timer started show:29
17:12:51.370 inner: 1.2ms show:34
17:12:51.370 outer: 2.47ms show:25
17:12:54.094 outer: timer started show:23
17:12:54.095 inner: timer started show:29
17:12:54.096 inner: 1.92ms show:34
17:12:54.098 outer: 3.67ms

There is 1 answer

12
AudioBubble

Micro-benchmarks can be very misleading, which is why they're generally discouraged. To properly measure the timing of anything, you generally need to do a sufficient amount of meaningful work (work that causes side effects, not work that, say, computes something temporarily only to discard it). Otherwise you start measuring the more dynamic factors (caching, paging, a single branch misprediction, etc.).
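For example, here's a minimal sketch of that idea (the iteration count and the `sink` variable are illustrative, not prescriptive): average the cost over many runs while keeping a result visible, so the engine can't discard the work as dead code.

// Sketch: average over many iterations; keep a result ("sink") so the
// engine can't optimize the work away as dead code.
var runs = 10000;
var sink = 0;
console.time('average');
for (var i = 0; i < runs; i++) {
    sink += Math.sqrt(i); // work whose result we actually keep
}
console.timeEnd('average');
console.log(sink); // use the result; divide the total time by runs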

Your question is a bit too broad, as answering it accurately would require exact knowledge of the internals of the specific JavaScript engine being used. We'd have to know the exact disassembly being emitted by the compiler, and when, to say for sure. It might help you get more accurate answers if you state the exact JavaScript engine you're using.

That said, if your engine uses a JIT, then, as you probably already know, it translates instructions on the fly (to a bytecode IR or directly to machine code).

Some of these do translation on a per-function basis. If that's the case, the first call to a function will be significantly more expensive than subsequent calls, because you're paying the "first-encounter" overhead of the JIT compiling code it hasn't seen before.
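A rough way to observe this yourself (the function body and timer labels here are just illustrative) is to time the same call several times in a row and compare:

// Sketch: the first call often pays first-encounter JIT costs;
// subsequent calls to the same function are typically much cheaper.
function work() {
    var total = 0;
    for (var i = 0; i < 100000; i++) total += i;
    return total;
}

console.time('call 1'); work(); console.timeEnd('call 1');
console.time('call 2'); work(); console.timeEnd('call 2');
console.time('call 3'); work(); console.timeEnd('call 3');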

If you're using a tracing JIT, then the JIT analyzes the common branches of execution ("hot paths"), and you could likewise see a kind of "first-encounter" overhead while it traces code paths, until a common branch of execution is established.
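As an illustrative sketch only (the branch shape below is made up), this is the kind of code a tracing JIT cares about: it records and compiles the commonly taken path first, so calls made while the trace is still being recorded pay extra overhead.

// Sketch: one branch is "hot" (taken nearly every call); a tracing JIT
// records and compiles that path, while the rarely taken "cold" branch
// may stay interpreted or fall back to a slower mechanism.
function classify(n) {
    if (n >= 0) {
        return Math.sqrt(n);  // hot path: taken for almost every input
    }
    return -Math.sqrt(-n);    // cold path: almost never taken
}

var total = 0;
for (var i = 0; i < 1000000; i++) total += classify(i);
console.log(total);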

Anyway, all of this is hypothetical, but it's worth noting that with many JITs, the first-time execution of code is significantly skewed in performance (much more expensive than subsequent executions of the same code). That may explain your results, or it may not.

But whatever the case may be, you generally want to steer clear of micro-benchmarks, especially as more and more dynamic factors get involved (and a scripting language adds far more of them than native code does). We're using very complex hardware and compilers nowadays that try to predict things for us, and with dynamically compiled code there is yet another element, besides the hardware and the operating system, doing that on the fly. When we measure things at too granular a level, combined with these dynamic factors, we cease to measure the speed of our code and instead start reverse-engineering those dynamic factors.
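If you do need numbers for time-critical code, one hedged pattern (the iteration counts and the `button` element here are placeholders) is to warm the code up untimed, so any JIT compilation happens outside the measured region, then average many timed calls:

// Sketch: warm up first, then average many calls with performance.now().
var button = document.querySelector('button'); // placeholder target element
for (var w = 0; w < 10; w++) dostuff.call(button); // warm-up, untimed

var calls = 100;
var t0 = performance.now();
for (var i = 0; i < calls; i++) dostuff.call(button);
var t1 = performance.now();
console.log('average per call:', (t1 - t0) / calls, 'ms');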