I have a number of projects where I use the pytest.mark.xfail marker to mark tests that fail but shouldn't, so that a failing test case can be added before the issue is fixed. I do not want to skip these tests, because if something I do causes them to start passing, I want to be informed of that so that I can remove the xfail marker to avoid regressions.
The problem is that because xfail tests actually run until they fail, any lines hit leading up to the failure are counted as "covered", even if they are not part of any passing test, which gives me misleading metrics about how much of my code is actually tested as working. A minimal example of this is:
pkg.py
def f(fail):
    if fail:
        print("This line should not be covered")
        return "wrong answer"
    return "right answer"
test_pkg.py
import pytest
from pkg import f

def test_success():
    assert f(fail=False) == "right answer"

@pytest.mark.xfail
def test_failure():
    assert f(fail=True) == "right answer"
Running python -m pytest --cov=pkg, I get:
platform linux -- Python 3.7.1, pytest-3.10.0, py-1.7.0, pluggy-0.8.0
rootdir: /tmp/cov, inifile:
plugins: cov-2.6.0
collected 2 items
tests/test_pkg.py .x [100%]
----------- coverage: platform linux, python 3.7.1-final-0 -----------
Name     Stmts   Miss  Cover
----------------------------
pkg.py       5      0   100%
As you can see, all five lines are covered, but lines 3 and 4 are only hit during the xfail test.
The way I handle this now is to set up tox to run something like pytest -m "not xfail" --cov && pytest -m xfail (sketched below), but in addition to being a bit cumbersome, that only filters out tests carrying the xfail mark, which means that conditional xfails also get filtered out, regardless of whether or not the condition is met.
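The tox setup is roughly the following (a simplified sketch; the deps and the --cov target are illustrative):

[testenv]
deps =
    pytest
    pytest-cov
commands =
    pytest -m "not xfail" --cov=pkg
    pytest -m xfail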
Is there any way to have coverage or pytest not count coverage from failing tests? Alternatively, I would be OK with a mechanism to ignore coverage from xfail tests that only ignores conditional xfail tests if the condition is met.
Since you're using the pytest-cov plugin, take advantage of its no_cover marker. When a test is annotated with pytest.mark.no_cover, code coverage will be turned off for that test. The only thing left to implement is applying the no_cover marker to all tests marked with pytest.mark.xfail, in your conftest.py:
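A minimal version of the hook (a sketch; it assumes pytest >= 3.6, where get_closest_marker is available):

import pytest

def pytest_collection_modifyitems(items):
    # turn off coverage collection for every test carrying an xfail marker
    for item in items:
        if item.get_closest_marker('xfail'):
            item.add_marker(pytest.mark.no_cover)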
Running your example will now yield:
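Since the two lines reached only in the failing test are no longer counted, the report should look roughly like this (header details will vary with your platform and versions):

tests/test_pkg.py .x                             [100%]

----------- coverage: platform linux, python 3.7.1-final-0 -----------
Name     Stmts   Miss  Cover
----------------------------
pkg.py       5      2    60%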
Edit: dealing with a condition in the xfail marker

The marker arguments can be accessed via marker.args and marker.kwargs, so if you e.g. have a marker with a condition and a reason, you can read both off the marker object; see the sketch below.
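For example, given a made-up conditional marker such as

import sys
import pytest

@pytest.mark.xfail(sys.platform == 'win32', reason='fails on windows')
def test_something():
    ...

access the arguments inside the hook with

marker = item.get_closest_marker('xfail')
condition = marker.args[0] if marker.args else True  # positional condition; True when absent
reason = marker.kwargs.get('reason')                 # 'fails on windows'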
To consider the condition flag, the hook from above can be modified as follows:
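A sketch of the modified hook; note that it only handles boolean conditions, not the string conditions that pytest also accepts (those are evaluated later and would always be truthy here):

import pytest

def pytest_collection_modifyitems(items):
    for item in items:
        marker = item.get_closest_marker('xfail')
        if marker is None:
            continue
        # an xfail marker without positional arguments is unconditional
        condition = marker.args[0] if marker.args else True
        if condition:
            item.add_marker(pytest.mark.no_cover)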