Monday, 4 March 2013

Testing Strategy part 3 - Acceptance Tests

After looking at code coverage and unit tests, I'd like to jump to the other end of the testing spectrum and examine what sort of Bayesian evidence an acceptance test provides. Again, this is skirting around the central question of "How can you demonstrate that your tests are correct?"

The first thing I need to establish is what type of test I refer to with the label 'acceptance test'. This term can mean different things. Sometimes it refers simply to any test that the client performs to determine 'acceptance' of a piece of software (also sometimes referred to as the 'User Acceptance Test', or simply UAT, phase of a project). In my usage, I apply more stringent constraints. To me, an acceptance test:

  • Always exercises the system through exposed interfaces. These interfaces may be user interfaces, or they may be published APIs, but an acceptance test doesn't delve beneath this surface level (a.k.a. black-box testing).
  • Tests complete, end-to-end scenarios. A single acceptance test will exercise many parts of a system in order to achieve some useful task that the user desires. 
  • Is written in conjunction with the user(s). This is an extremely important aspect: without user input into the scenario being tested, all that is gained is the developer's view of what they think the user wants.
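To make the constraints above concrete, here is a minimal sketch of such a test, assuming a hypothetical order-processing system exposed through a single public facade. All names (`OrderSystem`, `place_order`, `order_status`) are illustrative, and the tiny in-memory class only exists so the example runs; the acceptance test itself touches nothing but the exposed interface.

```python
class OrderSystem:
    """Minimal in-memory stand-in for 'the system under test'.

    In a real acceptance test this would be the deployed system,
    driven through its UI or published API; the test below never
    reaches beneath that surface.
    """

    def __init__(self):
        self._stock = {"widget": 5}
        self._orders = {}
        self._next_id = 1

    def place_order(self, item, quantity):
        # Exposed interface: reject orders the stock cannot cover.
        if self._stock.get(item, 0) < quantity:
            raise ValueError("insufficient stock")
        self._stock[item] -= quantity
        order_id = self._next_id
        self._next_id += 1
        self._orders[order_id] = (item, quantity)
        return order_id

    def order_status(self, order_id):
        # Exposed interface: report the state of a placed order.
        return "accepted" if order_id in self._orders else "unknown"


def test_customer_can_place_an_order():
    # End-to-end scenario agreed with the user: a customer places
    # an order and sees it accepted, using only the public API.
    system = OrderSystem()
    order_id = system.place_order("widget", 2)
    assert system.order_status(order_id) == "accepted"
```

Note that the scenario itself ("a customer places an order and sees it accepted") is the part that must come from the user; the code is merely its executable form.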

Considering the above in terms of Bayesian evidence, I personally see a passing acceptance test as much stronger evidence of a correct system than code coverage or passing unit tests. Conversely, a failing acceptance test is weaker evidence of an incorrect system than a failing unit test.
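This claim can be sketched as a straightforward Bayes update. The probabilities below are illustrative assumptions, not measurements: the key one is that an end-to-end acceptance test rarely passes on a broken system, so a pass shifts belief strongly towards "correct".

```python
def posterior_correct(prior, p_pass_if_correct, p_pass_if_broken):
    # Bayes' theorem: P(correct | test passes).
    numerator = p_pass_if_correct * prior
    evidence = numerator + p_pass_if_broken * (1 - prior)
    return numerator / evidence

prior = 0.5  # agnostic starting belief that the system is correct

# Assumed likelihoods: a broken system is unlikely to pass a whole
# end-to-end scenario, but may well pass any single narrow unit test.
acceptance = posterior_correct(prior, 0.99, p_pass_if_broken=0.05)
unit = posterior_correct(prior, 0.99, p_pass_if_broken=0.50)

print(round(acceptance, 2))  # roughly 0.95
print(round(unit, 2))        # roughly 0.66
```

Under these (made-up) numbers, one passing acceptance test moves belief in correctness from 0.5 to about 0.95, while one passing unit test only reaches about 0.66, which matches the intuition above.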

To understand this, I examine my own perceptions. If I have a system with a set of acceptance tests that all pass when I run them, I'm very confident that the system is correct. Conversely, if an acceptance test fails, my confidence is shaken a little, but I start by checking that I have set things up correctly, or I check the system manually to establish whether it is a real failure, an intermittent test, or something similar. In other words, my first suspicion on a single acceptance test failure isn't that the system is wrong, but that some factor outside the system is at fault. Contrast this with unit tests, where my first suspicion on a failure is that the system itself is at fault.

I'm starting to get to the point where I can address the central question, but I want to examine some tests that sit between unit tests and acceptance tests first, then focus on that central question exclusively.


Further reading: If Bayesian reasoning is a fairly new concept to you, I can recommend An Intuitive Explanation of Bayes' Theorem. I can also highly recommend the LessWrong Sequences.
