Background: In one project I use pytest with a helper function that performs the final assert for many tests. The source code looks roughly like this:
tsthelper.py:

def check(obj1, obj2, *args):
    # do stuff: calculate (using args) two strings str1 and str2 from the objects
    assert str1 == str2

test_arg1.py:

from tsthelper import check

def test_arg1_():
    # Setup obj1 and obj2
    check(obj1, obj2, arg1)

def test_arg2_():
    # Setup obj1 and obj2
    check(obj1, obj2, arg2)

...

def test_no_arg1_should_fail():
    # Setup obj1 and obj2
    check(obj1, obj2)
So, most test functions are supposed to pass the assertion; only the last one is supposed to fail.
I don't think the xfail marker is the right idea, because pytest then counts the failure separately. What I want is for the failure of the last function to be treated as correct behaviour, which pytest should count towards the successes in its stats.
What do I have to decorate the failing function with so that it is treated as a success?
The idea of having different functions explicitly testing different behaviors is to enable easier debugging, better coverage, and more flexibility, especially if you are using test-driven development (TDD).
To achieve your desired behavior you can use pytest.raises in the test where you want the check() assertion to fail. When check() raises an AssertionError, the test function treats that as the correct behavior and the test passes.
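A minimal sketch of that, reusing the names from your example (obj1 and obj2 are set up as in your question; the setup itself is elided):

test_arg1.py:

import pytest
from tsthelper import check

def test_no_arg1_should_fail():
    # Setup obj1 and obj2
    ...
    # the test passes only if check() raises an AssertionError
    with pytest.raises(AssertionError):
        check(obj1, obj2)

If check() does not raise, pytest.raises itself fails the test, so the expected failure becomes the success condition and is counted as a normal pass.

Better Approaches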
The following are some more Pythonic alternatives to keep your tests clean and simple to understand.
Change the check() function to only return the result of str1 == str2, and have each test function explicitly assert that the result of check() is True or False.

Code Without Parameterization
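A sketch of that variant, keeping the names from your example (the actual string calculation is elided):

tsthelper.py:

def check(obj1, obj2, *args):
    # calculate (using args) two strings str1 and str2 from the objects
    ...
    return str1 == str2

test_arg1.py:

from tsthelper import check

def test_arg1_():
    # Setup obj1 and obj2
    assert check(obj1, obj2, arg1)

def test_no_arg1_should_fail():
    # Setup obj1 and obj2
    assert not check(obj1, obj2)

This keeps the expectation visible in each test: the test that should "fail" now asserts the negative result explicitly, so it is a plain pass in pytest's stats.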
With parameterization
This will essentially run the same test body for each of the different inputs as a separate test, and it reduces code duplication.
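A minimal sketch building on the bool-returning check() above; the parameter tuples are illustrative, not taken from your project:

test_arg1.py:

import pytest
from tsthelper import check

# each tuple is one test case: the extra arguments for check() and the expected result
@pytest.mark.parametrize("extra_args, expected", [
    ((arg1,), True),
    ((arg2,), True),
    ((), False),   # the no-argument case is expected not to match
])
def test_check(extra_args, expected):
    # Setup obj1 and obj2
    ...
    assert check(obj1, obj2, *extra_args) == expected

Each parameter set shows up as a separate test in the report, so the "should fail" case is just one more passing test.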
You can read more about it in the pytest documentation on parametrizing tests.