Lately I've been learning a lot about acceptance testing, BDD and TDD (ATDD too). My main problem is that I find it difficult to know which test cases I should include in the acceptance tests and which I should include in the unit tests. Here's a concrete use case as an example:
Imagine you are creating a backend app (an HTTP API) that keeps track of which videogames you are playing, have played and plan to play. Imagine one of the requirements is "the user must be able to add videogames with a status of 'playing', 'played' or 'plan to play' to their library".
In this case, what should you be covering in acceptance tests and what should you be covering in unit tests?
I'm looking for a clear criterion for deciding which scenarios (tests) are best covered by acceptance tests (doing BDD) and which are better covered by unit tests (doing TDD). Maybe there isn't one, and only experience over time will tell me.
What I know so far is that acceptance tests assert that the code does what users want; we want a behavioural specification that captures the intention of the user (the WHAT). In other words, acceptance tests should focus on testing the system's behaviour and features from the end user's perspective. Unit tests, meanwhile, focus on technical details and may not directly relate to the functionality and requirements that end users care about. Many unit tests aren't requirements from the end user's perspective at all; they just specify how a certain piece of code should behave (the HOW). But this criterion seems quite vague to me, and I find it hard to see clearly what acceptance tests should cover versus unit tests.
Another question: should the acceptance tests cover all the corner cases? If so, won't there be a lot of repetition between the acceptance tests and the unit tests?
Examples of what I mean: a user trying to add a game that already exists in their library. Do you test this in an acceptance test but also in unit tests? Another example could be validating that a username is at least 4 characters in the user-creation use case, or that a password is not null and has at least 8 characters. Do you write separate tests for each of these and test them twice, in both an acceptance test and a unit test, just so the acceptance test tells you that the feature is failing while the unit test shows you where it is failing?
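To make that concrete, here is roughly the duplication I'm picturing, sketched in pytest. The `api_client` fixture, the endpoint, and the domain types (`Library`, `Game`, `DuplicateGameError`) are all made up for illustration:

```python
import pytest

# Hypothetical domain types, invented for this example.
from mygames.domain import Library, Game, DuplicateGameError


# Acceptance level: the duplicate rule, exercised through the HTTP API.
# `api_client` is a hypothetical pytest fixture wrapping an HTTP client.
def test_adding_duplicate_game_is_rejected(api_client):
    api_client.post("/library/games", json={"title": "Celeste", "status": "played"})
    response = api_client.post(
        "/library/games", json={"title": "Celeste", "status": "played"}
    )
    assert response.status_code == 409  # Conflict


# Unit level: the same rule, tested directly on the domain object.
def test_library_rejects_duplicate_game():
    library = Library()
    library.add(Game(title="Celeste", status="played"))
    with pytest.raises(DuplicateGameError):
        library.add(Game(title="Celeste", status="played"))
```

Both tests pin down the same rule, which is exactly what makes me wonder whether the duplication is intended or wasteful.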
Thanks for taking the time to read this.
Strictly speaking, acceptance tests are a group of tests that tell you whether or not you can deploy/deliver/ship the software. They don't even have to be automated. Decades ago, they would typically be manual, performed by a separate QA department.
Ultimately, the decision to ship is rarely a single programmer's. Again, in the old days, one or more managers were involved, and deciding to ship was a big decision. It also happened irregularly.
As you move toward Continuous Delivery/Deployment, you'll have to automate such tests. External stakeholders may still have something to say about what constitutes acceptance.
If you're fortunate, you may be able to work with stakeholders to define a set of acceptance tests, with the understanding that if all acceptance tests pass, no more decisions need to be taken: You can deploy the software.
More common is the scenario where a team of developers have to guess what constitutes a set of acceptance tests, because external stakeholders don't have time to be involved at that level of technical detail.
In that scenario (which, in my experience, is much more common), there are no real acceptance tests, but only tests at varying levels of granularity.
Should you mostly write and perform tests of large-scale system behaviour, or of low-level unit behaviour? There's a tension that you need to resolve.
Testing actual system behaviour is closer to acceptance testing, so it better exercises the value that the system provides. At this level, you can also consider each test a useful regression test. If such a test fails, that ought to be a strong indication that there's something wrong with the system, or that you introduced a breaking change. Ideally, all tests should be tests like that.
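Using your videogame example, such a test might look like the following sketch (the `api_client` fixture and endpoint are invented; the point is that the test only touches the public HTTP surface):

```python
# A behaviour-level regression test: it only talks to the public HTTP API,
# so it keeps passing across internal refactorings and fails only when
# observable behaviour breaks. `api_client` is a hypothetical fixture.
def test_added_game_shows_up_in_library(api_client):
    api_client.post(
        "/library/games", json={"title": "Hades", "status": "plan to play"}
    )
    games = api_client.get("/library/games").json()
    assert {"title": "Hades", "status": "plan to play"} in games
```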
The problem with only testing the 'whole system' is that such tests tend to be complex, slow, difficult to write, and hard to maintain. You may also run into a problem with combinatorial explosions, as explained by J.B. Rainsberger: if one component has five relevant code paths and another has seven, covering all combinations through the whole system takes up to 5 × 7 = 35 tests, while covering each component in isolation takes only 5 + 7 = 12.
Unit tests address many of those very real problems. They are easier to write and maintain, they run faster, and you need orders of magnitude fewer of them to cover the same paths. On the other hand, they don't really exercise any externally visible behaviour, and a failing unit test may only indicate that someone changed an implementation detail somewhere in the code base.
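To illustrate that flip side, a unit test is often coupled to a design decision rather than to behaviour anyone asked for. A sketch, again with invented names, assuming a library that happens to index games by lower-cased title:

```python
from mygames.domain import Library, Game  # hypothetical domain module


# A unit test pinned to an implementation detail: it assumes the library
# stores games keyed by lower-cased title. If you later switch to keying
# by an ID instead, this test fails even though no user-visible behaviour
# has changed.
def test_games_are_keyed_by_lowercased_title():
    library = Library()
    library.add(Game(title="Hades", status="playing"))
    assert "hades" in library.games_by_title
```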
The test pyramid remains relevant.