We are using Dredd to test our API and have been using Python hooks to successfully separate the API description document (and the way Dredd consumes it) from the rest of the test logic.
My question is: is it possible to incorporate negative tests into our workflow? If so, what would be the most efficient method/tool for this?
A few examples to illustrate:
- We have a sign-in request which validates the 200 response when the user enters correct credentials (username, password). We also want to add a test for wrong credentials that runs as part of the same `dredd` command; for this we need to run the sign-in request twice, once with correct credentials and once with wrong ones.
The problem: currently we don't know how to run any request more than once with different logic for each execution.
- We have a get-user-profile-details request which we want to run once at the beginning of the test suite (right after user creation) and once after all the other requests have been executed (add measurements, join/leave group, etc.).
The problem: the same as above.
The question is simple, and I'm sure there must be some way of doing this, but it would also be helpful to know whether we are looking for the answer in the right place: is Dredd the correct tool for this kind of task?
API Blueprint supports specifying multiple requests and responses (many-to-many). The following structure is a valid API Blueprint action:
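A minimal sketch of such an action (the `/signin` endpoint, field names, and payloads are made up for illustration):

```
## Sign In [POST /signin]

+ Request Correct Credentials (application/json)

        {"username": "jane", "password": "correct-password"}

+ Request Wrong Credentials (application/json)

        {"username": "jane", "password": "wrong-password"}

+ Response 200 (application/json)

        {"token": "abc123"}

+ Response 401 (application/json)

        {"error": "Invalid username or password"}
```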
Dredd has support for this, albeit limited. You need to have them as request-response pairs:
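Rewriting the same hypothetical example as explicit pairs, where each request is immediately followed by the response it should produce:

```
## Sign In [POST /signin]

+ Request Correct Credentials (application/json)

        {"username": "jane", "password": "correct-password"}

+ Response 200 (application/json)

        {"token": "abc123"}

+ Request Wrong Credentials (application/json)

        {"username": "jane", "password": "wrong-password"}

+ Response 401 (application/json)

        {"error": "Invalid username or password"}
```

Each pair should then be executed by Dredd as a separate transaction, which is what lets the same endpoint be hit twice in a single run, and your hooks can target each transaction by name to apply different logic to each execution.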
If you generate documentation from the same API Blueprint, I advise you to split it into two documents: the first with positive scenarios, to be both tested and presented to users, and the second with negative scenarios, to be tested only. This way you can still keep your documentation readable.
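For example (assuming the files are named this way and your API runs locally on port 3000), you could test both with separate runs such as `dredd positive.apib http://localhost:3000` and `dredd negative.apib http://localhost:3000`, while publishing documentation only from the first file.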