There's a lot of capital C, capital S Computer Science going into JavaScript via the TraceMonkey, SquirrelFish, and V8 projects. Do any of these projects (or others) address the performance of DOM operations, or are they purely related to JavaScript computation?
How do the various JavaScript optimization projects affect DOM performance?
There are 2 answers
They're pure JavaScript. Unless a particular DOM method call is implemented in JS, they'll have little effect (not that there hasn't been work done on reducing the overhead of such calls, however).
DOM optimization is a whole 'nother kettle of squirrels monkeys spiders fish... The layout and even rendering engines come into play, and each browser has its own implementation and optimization strategy.
The performance of pure DOM operations (getElementById/getElementsByTagName/querySelector, nextSibling, etc) is unaffected, as they're already implemented in pure C++.
How the JS engine improvements will affect performance does depend to an extent on the particular techniques used for the performance improvements, as well as on the performance of the DOM->JS bridge.
An example of the former is TraceMonkey's dependence on all calls being to JS functions. Because a trace effectively inlines the path of execution, at any point where the JS hits code that cannot be inlined (native code, true polymorphic recursion, exception handlers) the trace is aborted and execution falls back to the interpreter. The TM developers are doing quite a lot of work to improve the amount of code that can be traced (including handling polymorphic recursion); however, realistically, tracing across calls to arbitrary native functions (eg. the DOM) isn't feasible. For that reason I believe they are looking at implementing more of the DOM in JS (or at least in a JS-friendly manner). That said, when code is traceable TM can do an exceptionally good job, as it can lower most "objects" to more efficient and/or native equivalents (eg. use machine ints instead of the JS Number implementation).
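As a rough illustration of why that matters (my own hypothetical sketch, not anything from the TM source; sumSquares, sumWidths, and nodes are made-up names), compare a loop a tracing JIT loves with one that keeps calling into native code:

    // A numeric loop like this is the ideal case for a tracing JIT: once it
    // gets hot, the whole body can be compiled into a trace operating on
    // machine numbers directly.
    function sumSquares(n) {
        var total = 0;
        for (var i = 0; i < n; i++) {
            total += i * i;   // pure JS arithmetic, fully traceable
        }
        return total;
    }

    // Put a native DOM accessor in the loop and the picture changes: the call
    // into native code can't be inlined into the trace, so a purely tracing
    // engine has to abandon it and fall back to the interpreter for the loop.
    function sumWidths(nodes) {
        var total = 0;
        for (var i = 0; i < nodes.length; i++) {
            total += nodes[i].offsetWidth;   // native getter breaks the trace
        }
        return total;
    }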
JavaScriptCore (which is where SquirrelFish Extreme lives) and V8 take a more similar approach to each other in that they both JIT all JS code immediately and produce code that is more speculative (eg. if you are doing a*b they generate code that assumes a and b are numbers, and falls back to exceptionally slow code if they aren't). This has a number of benefits over tracing, namely that you can JIT all code regardless of whether or not it calls native code, throws exceptions, etc, which means a single DOM call won't destroy performance. The downside is that all code is speculative -- TM will inline calls to Math.floor, etc, but the best JSC/V8 can do would be equivalent to rewriting a = Math.floor(0.5) as a = (Math.floor == realFloor) ? inline : Math.floor(0.5).
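To make the operand speculation concrete, here is roughly what the generated fast path amounts to, written out as ordinary JS (my own pseudo-code; mul_jitted and slowGenericMultiply are made-up names, the latter standing in for the engine's generic path):

    // What the source says:
    function mul(a, b) {
        return a * b;
    }

    // Roughly what a speculative JIT emits for it: bet that both operands are
    // numbers, and keep a slow escape hatch for everything else.
    function mul_jitted(a, b) {
        if (typeof a === "number" && typeof b === "number") {
            return a * b;                    // fast path: plain machine multiply
        }
        return slowGenericMultiply(a, b);    // hypothetical stand-in for the full
                                             // JS semantics: ToNumber, valueOf,
                                             // string coercion, etc.
    }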
Guarding on the identity of Math.floor like that has costs both in performance and memory usage, and it also isn't particularly feasible. The reason is the up-front compilation: whereas TM only JITs code after it has run (and so knows exactly what function was called), JSC and V8 have no real basis on which to make such an assumption and basically have to guess (and currently neither attempts this). The one thing that V8 and JSC do to try and compensate for this problem is to track what they have seen in the past and incorporate that into the path of execution. Both use a combination of techniques to do this caching: in especially hot cases they rewrite small portions of the instruction stream, and in other cases they keep out-of-band caches. Broadly speaking, if you have code that does a.x * a.y, V8 and JSC will check the 'implicit type'/'Structure' of a twice -- once for each access -- and then check that a.x and a.y are both numbers, whereas TM will generate code that checks the type of a only once and can (all things being equal) just multiply a.x and a.y without checking that they're numbers.

If you're looking at pure execution speed there's currently something of a mixed bag, as each engine does appear to do better at certain tasks than the others -- TraceMonkey wins in many pure maths tests, V8 wins in heavily dynamic cases, JSC wins if there's a mix. Of course, while that's true today it may not be tomorrow, as we're all working hard to improve performance.
The other issue I mentioned was the DOM<->JS binding cost -- this can actually play a very significant part in web performance. The best example of this is Safari 3.1/2 vs Chrome on the Dromaeo benchmark. Chrome is based off of the Safari 3.1/2 branch of WebKit, so it's reasonably safe to assume similar DOM performance (compiler differences could cause some degree of variance). In this benchmark Safari 3.1/2 actually beats Chrome despite having a JS engine that is clearly much, much slower; this is basically due to the bindings between JSC and WebCore (the DOM/rendering/etc of WebKit) being more efficient than those between V8 and WebCore.
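A practical upshot (a hand-rolled illustration, not taken from Dromaeo; doSomething is a placeholder name) is that every property read or method call on a DOM object pays that binding cost, so hoisting repeated lookups out of the loop helps on any engine:

    var list = document.getElementById("items");

    // Crosses the JS<->DOM bridge for list.childNodes and .length on every iteration:
    for (var i = 0; i < list.childNodes.length; i++) {
        doSomething(list.childNodes[i]);   // doSomething: placeholder for real work
    }

    // Fetches the collection and its length once; only the per-item access
    // still has to go through the bindings:
    var nodes = list.childNodes, len = nodes.length;
    for (var j = 0; j < len; j++) {
        doSomething(nodes[j]);
    }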
Currently, looking at TM's DOM bindings seems unfair, as they haven't completed all the work they want to do (alas), so they just fall back on the interpreter :-(
..
Errmmm, that went on somewhat longer than intended, so the short answer to the original question is "it depends" :D