Runtime statistics for parallel test function calls (with pytest-benchmark or some other plugin)


I want to use a plugin like pytest-benchmark to show me runtime statistics for the EXPERIMENT_SIZE parallel calls test_valid_submission(0), ..., test_valid_submission(EXPERIMENT_SIZE - 1).

To achieve this, I have the following code using pytest-benchmark (similar to Grouping Parametrized Benchmarks with pytest):

import pytest

# let the benchmark fixture time each parallel call of test_valid_submission
@pytest.mark.parametrize("counter", range(EXPERIMENT_SIZE))
def test_performance_under_load(benchmark, counter):
    benchmark(test_valid_submission, counter)

When I call

pytest --workers auto --tests-per-worker auto -vv --benchmark-only --benchmark-verbose --benchmark-group-by=func

I hoped to get a benchmark summary table at the end, with min, max, mean, and standard deviation for the runtimes of my EXPERIMENT_SIZE parallel test_valid_submission() calls. Unfortunately, no benchmark summary table is printed (see details below).

@hoefling commented that pytest-benchmark doesn't support running and collecting benchmark data in parallel.

Is there another pytest plugin (or other solution) that can

  • collect the EXPERIMENT_SIZE parallel test_valid_submission(x) calls and group them together
  • compute min, max, mean, and standard deviation of the runtimes of the parallel calls in each group
  • support multiple groups, e.g. one for test_valid_submission(x) and one for test_invalid_submission(x)
  • print the statistics at the end of the test run, similar to the pytest-benchmark summary table mentioned above? (A rough sketch of what I have in mind follows this list.)
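
To illustrate the kind of aggregation I am after, here is a rough hand-rolled sketch (a hypothetical conftest.py, not an existing plugin). It groups the per-test durations that pytest already records by test function name and prints min/max/mean/standard deviation in the terminal summary; I have not verified whether these reports are collected correctly across pytest-parallel's process workers:

# conftest.py -- hypothetical sketch, not a replacement for pytest-benchmark
import statistics
from collections import defaultdict

def pytest_terminal_summary(terminalreporter):
    # group durations of passed tests by their function name
    durations = defaultdict(list)
    for report in terminalreporter.stats.get("passed", []):
        # "test/test_x.py::test_performance_under_load[2]" -> "test_performance_under_load"
        func_name = report.nodeid.split("::")[-1].split("[")[0]
        durations[func_name].append(report.duration)

    terminalreporter.write_sep("=", "runtime statistics per test function")
    for func_name, times in sorted(durations.items()):
        terminalreporter.write_line(
            "%s: n=%d min=%.2fs max=%.2fs mean=%.2fs stdev=%.2fs" % (
                func_name,
                len(times),
                min(times),
                max(times),
                statistics.mean(times),
                statistics.stdev(times) if len(times) > 1 else 0.0,
            )
        )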

Details about pytest-benchmark

With pytest-benchmark, EXPERIMENT_SIZE=3, iterations=1, and rounds=1, I get the following output (irrelevant lines removed). Even with EXPERIMENT_SIZE >= 5, it still shows rounds=5 and prints no statistics.

 ============================== test session starts ==========================================
 platform linux -- Python 3.6.10, pytest-5.3.5, py-1.8.1, pluggy-0.13.1 -- /anaconda3/envs/reg/bin/python
 benchmark: 3.2.3 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
 plugins: repeat-0.8.0, cov-2.8.1, bdd-3.2.1, clarity-0.3.0a0, benchmark-3.2.3, parallel-0.1.0
 collected 22 items
 pytest-parallel: 8 workers (processes), 3 tests per worker (threads)

 ...

 test/test_x.py::test_valid_submission SKIPPED

 Computing precision for time.perf_counter ... 50.99ns.
 Calibrating to target round 5.00us; will estimate when reaching 2.50us (using: time.perf_counter, precision: 50.99ns).
 Computing precision for time.perf_counter ... 48.98ns.
 Calibrating to target round 5.00us; will estimate when reaching 2.50us (using: time.perf_counter, precision: 48.98ns).
 Computing precision for time.perf_counter ... 49.01ns.
 Calibrating to target round 5.00us; will estimate when reaching 2.50us (using: time.perf_counter, precision: 49.01ns).
 Measured 1 iterations: 105.72s.   Running 5 rounds x 1 iterations ...
 Measured 1 iterations: 105.73s.   Running 5 rounds x 1 iterations ...
 Measured 1 iterations: 117.20s.   Running 5 rounds x 1 iterations ...
 Ran for 339.53s.   Ran for 350.11s.   Ran for 461.53s.

test/test_x.py::test_performance_under_load[2] PASSED
test/test_x.py::test_performance_under_load[1] PASSED
test/test_x.py::test_performance_under_load[0] PASSED

========================== 3 passed, 19 skipped in 714.05s (0:11:54) ========================

Using benchmark.pedantic(test_valid_submission, args=[counter], iterations=1, rounds=1) instead does not lead to printed statistics either.
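
For completeness, this is the pedantic variant I tried, inside the same parametrized test as above:

@pytest.mark.parametrize("counter", range(EXPERIMENT_SIZE))
def test_performance_under_load(benchmark, counter):
    # force exactly one round with one iteration instead of letting
    # pytest-benchmark calibrate the number of rounds itself
    benchmark.pedantic(test_valid_submission, args=[counter], iterations=1, rounds=1)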
