Passing the Parallel API tests in PettingZoo for custom multi-agent environment

from pettingzoo.test import (
    parallel_api_test,
    parallel_seed_test,
    max_cycles_test,
    performance_benchmark,
)

I have a custom multi-agent environment that extends ParallelEnv. It already passes parallel_api_test, and I plan to pass the remaining tests as well before starting training:

  1. parallel_seed_test
...
  File "D:\anaconda3\Lib\site-packages\pettingzoo\test\seed_test.py", line 139, in parallel_seed_test
    check_environment_deterministic_parallel(env1, env2, num_cycles)
  File "D:\anaconda3\Lib\site-packages\pettingzoo\test\seed_test.py", line 108, in check_environment_deterministic_parallel
    assert data_equivalence(actions1, actions2), "Incorrect action seeding"

I have no idea how to pass this one. I tried adding np.random.seed() calls inside my observation_space and action_space functions, but the sampled actions still aren't deterministic. Are there steps I can follow to pass the seed test and make my environment's results reproducible? My current best guess is sketched below.
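
From skimming seed_test.py, it looks like the test resets two copies of the environment with the same seed and then samples actions with env.action_space(agent).sample(), so my assumption is that the Space objects themselves need to be seeded in reset(), and that observation_space/action_space must return the same cached object on every call (otherwise the seed lands on a throwaway copy). Here is a minimal sketch of that pattern; the class, spaces, and agent names are placeholders, not my real environment:

import functools

import numpy as np
from gymnasium.spaces import Box, Discrete
from pettingzoo import ParallelEnv

class SeededEnvSketch(ParallelEnv):  # placeholder, not my real env
    metadata = {"name": "seeded_sketch_v0"}

    def __init__(self):
        self.possible_agents = ["car_0", "car_1"]
        self.agents = []

    # lru_cache makes these return the SAME Space object on every
    # call, so the seed applied in reset() sticks to the object the
    # test later samples from.
    @functools.lru_cache(maxsize=None)
    def observation_space(self, agent):
        return Box(low=-1.0, high=1.0, shape=(4,), dtype=np.float32)

    @functools.lru_cache(maxsize=None)
    def action_space(self, agent):
        return Discrete(5)

    def reset(self, seed=None, options=None):
        self.agents = self.possible_agents[:]
        # seed the env's own RNG so the dynamics are reproducible
        self.np_random = np.random.default_rng(seed)
        if seed is not None:
            # seed each agent's spaces, offset per agent so they
            # don't all produce identical samples
            for i, agent in enumerate(self.possible_agents):
                self.observation_space(agent).seed(seed + i)
                self.action_space(agent).seed(seed + i)
        observations = {a: self.observation_space(a).sample() for a in self.agents}
        infos = {a: {} for a in self.agents}
        return observations, infos
    # step()/render()/close() omitted for brevity

Is this the right way to wire the seed through, or is there a recommended approach?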

  2. max_cycles_test
...
  File "D:\anaconda3\Lib\site-packages\pettingzoo\test\max_cycles_test.py", line 6, in max_cycles_test
    parallel_env = mod.parallel_env(max_cycles=max_cycles)
                   ^^^^^^^^^^^^^^^^
AttributeError: 'MultiAgentHighway' object has no attribute 'parallel_env'

I'm not sure how to call this one: the traceback suggests max_cycles_test expects a module with a parallel_env() factory rather than an env instance. My environment already has an end_of_sim parameter, the maximum number of steps after which the simulation is closed forcefully; is that what max_cycles should map to? My attempt is sketched after this item.
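
Based on the traceback, max_cycles_test takes a module and calls mod.parallel_env(max_cycles=...), so I assume I need a module-level factory plus a way to map max_cycles onto my own step limit. Something like this, where the module name and the end_of_sim mapping are my guesses:

# multi_agent_highway.py  (hypothetical module layout)
def parallel_env(max_cycles=100, **kwargs):
    # forward the test's max_cycles to my own forced-stop limit,
    # assuming end_of_sim counts the same steps as max_cycles
    return MultiAgentHighway(end_of_sim=max_cycles, **kwargs)

# test script
import multi_agent_highway
from pettingzoo.test import max_cycles_test

max_cycles_test(multi_agent_highway)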

  3. performance_benchmark: I had to convert my ParallelEnv to an AEC environment with parallel_to_aec() to use this (invocation shown after the output below).
2466.7955100048803 turns per second
123.33977550024402 cycles per second
Finished performance benchmark
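
For completeness, this is roughly how I invoked the benchmark (my constructor arguments omitted):

from pettingzoo.test import performance_benchmark
from pettingzoo.utils import parallel_to_aec

# performance_benchmark expects an AEC env, hence the conversion
aec_env = parallel_to_aec(MultiAgentHighway())
performance_benchmark(aec_env)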

How should I interpret these numbers? Is there a rough baseline for what counts as fast enough? Please advise.

Thank you in advance :)
