marking a test as xfail with double parameterize in pytest


I have a pytest test that tests several inputs against two different databases. I do this by applying the parametrize mark twice:

import pytest


@pytest.mark.parametrize(
    "input_type",
    [
        pytest.param("input_1"),
        pytest.param("input_2"),
    ],
)
@pytest.mark.parametrize(
    "db_type",
    [
        pytest.param("db_type_1"),
        pytest.param("db_type_2"),
    ],
)
def test_inputs(input_type, db_type):  # test body omitted here
    ...

What I observe is that only one combination fails: for example, running input_1 against db_type_2 fails due to a bug, while running the same input against the other database passes. I want to mark only the input_1/db_type_2 combination as xfail, while all other combinations stay unmarked. I can't find how to do so.

If I mark db_type_2 as xfail:

@pytest.mark.parametrize(
    "db_type",
    [
        pytest.param("db_type_1"),
        pytest.param("db_type_2", marks=pytest.mark.xfail),
    ],
)

then every input is xfailed against db_type_2, which is not the behaviour I'm looking for. Can somebody help me with this?


There are 2 answers

Guy:

You could create a function that is executed at test collection time and handles both the parametrization and the xfail mark based on the data:

import itertools

import pytest


def data_source():
    input_types = ['input_1', 'input_2']
    db_types = ['db_type_1', 'db_type_2']

    for tup in itertools.product(input_types, db_types):
        marks = []
        # Only the known-bad combination gets the xfail mark
        if tup == ('input_1', 'db_type_2'):
            marks.append(pytest.mark.xfail(reason=f'{tup}'))
        yield pytest.param(tup, marks=marks)


@pytest.mark.parametrize('test_data', data_source())
def test_example(test_data):
    assert test_data != ('input_1', 'db_type_2')

itertools.product makes it easy to add another list of inputs without modifying any other part of the code except the condition that selects the failing combinations.
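For illustration, a minimal sketch of such an extension, assuming a hypothetical api_versions dimension:

def data_source():
    input_types = ['input_1', 'input_2']
    db_types = ['db_type_1', 'db_type_2']
    api_versions = ['v1', 'v2']  # hypothetical third dimension

    for tup in itertools.product(input_types, db_types, api_versions):
        marks = []
        # Only this condition changes; compare the first two fields only
        if tup[:2] == ('input_1', 'db_type_2'):
            marks.append(pytest.mark.xfail(reason=f'{tup}'))
        yield pytest.param(tup, marks=marks)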

hoefling:

You can't mark the test based on the complete set of arguments in pytest.mark.parametrize/pytest.param; the information about the other arguments is simply not there yet. I usually move the post-processing of test parameters into a separate fixture, which can then alter the test based on the complete set of test arguments. Example:

import pytest


@pytest.mark.parametrize('x', range(10))
@pytest.mark.parametrize('y', range(20))
def test_spam(x, y):
    assert False

We have 200 tests overall; suppose we want to xfail the tests for x=3, y=15 and x=8, y=1. We add a new fixture xfail_selected_spams that gets access to both x and y before test_spam starts and appends the xfail marker to the test instance if necessary:

@pytest.fixture
def xfail_selected_spams(request):
    # Both parametrized arguments are available here
    x = request.getfixturevalue('x')
    y = request.getfixturevalue('y')

    allowed_failures = [
        (3, 15), (8, 1),
    ]
    if (x, y) in allowed_failures:
        # Attach the marker to this test instance only
        request.node.add_marker(pytest.mark.xfail(reason='TODO'))

To register the fixture, use pytest.mark.usefixtures:

@pytest.mark.parametrize('x', range(10))
@pytest.mark.parametrize('y', range(20))
@pytest.mark.usefixtures('xfail_selected_spams')
def test_spam(x, y):
    assert False

When running the tests now, we get 198 failed and 2 xfailed, since the two selected tests are expected to fail.
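As a side note: if the test module itself can't be modified, the same selection can be done from conftest.py via the pytest_collection_modifyitems hook. A minimal sketch under that assumption, reusing the x/y parameters from above:

# conftest.py
import pytest

# Parameter combinations that are expected to fail
XFAIL_PARAMS = [(3, 15), (8, 1)]


def pytest_collection_modifyitems(config, items):
    for item in items:
        callspec = getattr(item, 'callspec', None)
        if callspec is None:
            continue  # not a parametrized test
        if (callspec.params.get('x'), callspec.params.get('y')) in XFAIL_PARAMS:
            item.add_marker(pytest.mark.xfail(reason='TODO'))

This keeps the xfail bookkeeping in one place without touching the test function itself.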