IronPython import performance with compiled code


I am doing some experiments with IronPython 2.6.1 and the clr.CompileModules function to compile my large scripts into assemblies. Testing has shown good cold-start performance improvements, but in some cases importing the compiled module is actually slower than executing a large string that contains my code.
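Roughly, the compile step looks like this - a sketch, with placeholder assembly and module file names standing in for my real ones:

// Sketch of the compile step. clr.CompileModules is only exposed on the
// Python side, so I drive it from a tiny script via the hosting API.
// 'MyScripts.dll', 'module1.py' and 'module10.py' are placeholders.
using IronPython.Hosting;
using Microsoft.Scripting;
using Microsoft.Scripting.Hosting;

class CompileStep
{
    static void Main()
    {
        ScriptEngine engine = Python.CreateEngine();
        ScriptSource compileScript = engine.CreateScriptSourceFromString(
            "import clr\n" +
            "clr.CompileModules('MyScripts.dll', 'module1.py', 'module10.py')",
            SourceCodeKind.Statements);
        compileScript.Execute(engine.CreateScope());
    }
}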

My question is, if I use something like

scope.Engine.Execute(string.Format("from {0} import {0}", theModule), scope);

or the ImportModule function, even though I get a new ScriptScope back, does the DLR cache the imports made in other ScriptScopes? So if module 1 and module 10 import the same type, do I only take the performance hit once?
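By "the ImportModule function" I mean the Python.ImportModule hosting helper, which I call roughly like this (same usings as above; the assembly name and module name are placeholders):

// Load the compiled assembly so its modules are importable, then bring
// one module into its own ScriptScope. Names are placeholders.
ScriptEngine engine = Python.CreateEngine();
engine.Runtime.LoadAssembly(System.Reflection.Assembly.LoadFrom("MyScripts.dll"));
ScriptScope moduleScope = Python.ImportModule(engine, "module1");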

Is using clr.CompileModules preferable over scope.Compile()? My understanding is that the on-the-fly compile is useful if I don't want to manage extra assemblies and only want to pay the compile cost once.
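For comparison, the on-the-fly path I have in mind is roughly this (same usings as above; the script string is a stand-in for my real generated source):

// Compile once in memory and reuse the CompiledCode in as many scopes as needed.
ScriptEngine engine = Python.CreateEngine();
string myLargeScript = "x = 42";  // placeholder for the real generated script
ScriptSource source = engine.CreateScriptSourceFromString(myLargeScript, SourceCodeKind.Statements);
CompiledCode compiled = source.Compile();    // pay the compile cost once
compiled.Execute(engine.CreateScope());      // run in a fresh scope whenever needed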


1 Answer

Dino Viehland (accepted answer):

The DLR doesn't cache the imports but IronPython does.

I think your understanding is correct - clr.CompileModules is usually good for a startup benefit. You can also combine it with ngen'ing the assemblies for even better startup perf. If you aren't doing that already, it's probably the reason you sometimes see worse performance: when running your code from source we can avoid the JIT by interpreting it first, but if you compile we always need to JIT. Compiling + ngen is the best of both worlds, other than needing to set all of that up.
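For reference, ngen'ing the compiled assembly is a one-liner from an elevated Developer/SDK command prompt (MyScripts.dll being the placeholder assembly name used in the question):

ngen install MyScripts.dll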