I've written a Python module, much of which is wrapped in @numba.jit decorators for speed. I've also written lots of tests for this module, which I run (on Travis-CI) with py.test. Now I'm trying to look at the coverage of these tests using pytest-cov, which is just a plugin that relies on coverage (with hopes of integrating all of this with coveralls).
Unfortunately, it seems that using numba.jit on all those functions makes coverage think the functions are never used -- which is kind of the case. So I'm getting basically no reported coverage with my tests. This isn't a huge surprise, since numba takes that code and compiles it, so the Python code itself really never runs. But I was hoping there'd be some of that magic you sometimes see with Python...
Is there any useful way to combine these two excellent tools? Failing that, is there any other tool I could use to measure coverage with numba?
(I've made a minimal working example showing the difference here.)
The best thing might be to disable the numba JIT during coverage measurement. That relies on you trusting the correspondence between the Python code and the JIT'ed code, but you need to trust that to some extent anyway.
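Numba provides a NUMBA_DISABLE_JIT environment variable for exactly this: when it is set to a non-zero value, the @numba.jit decorator acts as a no-op and the decorated functions run as plain Python, which coverage can trace. A minimal sketch of one way to wire this into a pytest run -- the variable has to be set before numba is imported, and a top-level conftest.py is imported by py.test before your test modules:

```python
# conftest.py -- a minimal sketch; put this at the root of your test tree.
# py.test imports conftest.py before collecting test modules, so the
# variable is set before numba (and your jitted module) gets imported.
import os

# With NUMBA_DISABLE_JIT set, @numba.jit leaves the decorated functions
# as ordinary Python, so coverage sees every line they execute.
os.environ["NUMBA_DISABLE_JIT"] = "1"
```

Equivalently, set it on the command line, e.g. NUMBA_DISABLE_JIT=1 py.test --cov=mymodule (where mymodule is a stand-in for your package name). You'd still want a separate run without the variable set, so the compiled versions get exercised too.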