Chapter 2 - Do we need QA?
Why we don’t need QA
Note
In this chapter we use the term “QA” to mean Manual Testing. This does not include UAT, which is considered distinct from QA.
Many organizations that have instituted some level of regimented software testing split the work between automated tests and manual tests run by a QA (Quality Assurance) department.
This seems like a sensible solution:
- Create automated (unit) tests to ensure discrete, independent components of the system operate correctly.
- Create integration tests to ensure components work together as expected, then deliver release candidates to QA staff, who pull all the levers and push all the buttons in an effort to prove the tests wrong.
In theory this should work well, but in practice it can have a significant negative effect on the overall process of creating software, for a range of reasons.
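To make the first half of that split concrete, here is a minimal sketch of a unit test, using Python’s standard unittest module; the apply_discount function is a hypothetical stand-in for one discrete component of the system:

```python
import unittest


def apply_discount(price: float, discount_pct: float) -> float:
    """Hypothetical production code: reduce a price by a percentage."""
    return round(price * (1 - discount_pct / 100), 2)


class ApplyDiscountTest(unittest.TestCase):
    """Unit test: exercises one independent component in isolation."""

    def test_ten_percent_discount(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(49.99, 0), 49.99)


if __name__ == "__main__":
    unittest.main()
```

Integration tests follow the same pattern but exercise several components together, typically against real (or realistic) dependencies rather than a single isolated function.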
Manual testing doesn’t scale
Manual testing requires humans, who make mistakes
Engineers won’t test if they don’t have to
Why we DO need UAT
Despite the premise of this discussion being in support of automated testing, a comprehensive software development methodology would be incomplete with only automated tests. The positive impact of humans in the process should not be overlooked, and there are several reasons why their role cannot be easily duplicated or automated.
The key distinction here is that QA is not the same as UAT (User Acceptance Testing), which can, and should, be adopted for testing usability. That is: does the function do what the user expected it would? Is it easy to access and understand? These are tests of the way the software is defined, rather than of whether the code works.
Importantly, however, the UAT role does not have to be filled by internal staff. A staged release cycle in which external beta testers are engaged can often yield the same (or greater) value in identifying potential usability problems.
Key point:
User Acceptance Testing is important, but it should not be used to identify bugs, and it can be significantly streamlined through the introduction of staged releases.
The whole is greater than the sum
Engineers are not always right
An engineer working in isolation may create an amazingly complex solution, backed by a series of automated tests that ensure it works perfectly in every conceivable situation, and yet the solution can still be completely wrong. A function that produces result “A”, with an accompanying test asserting that it produces result “A”, is pointless if the function was supposed to produce result “B”.
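A hypothetical sketch of the problem (the discount requirement below is invented purely for illustration):

```python
def stacked_discount(price: float, first_pct: float, second_pct: float) -> float:
    # The engineer read the requirement as additive stacking:
    # 10% + 10% == 20% off. This is result "A".
    return price * (1 - (first_pct + second_pct) / 100)


def test_stacked_discount():
    # The test asserts result "A" -- exactly what the code produces,
    # so the automated suite passes...
    assert stacked_discount(100.0, 10, 10) == 80.0
    # ...but the requirement ("B") was multiplicative stacking:
    # 100 * 0.9 * 0.9 == 81.0. The suite cannot catch this, because
    # the same misunderstanding is baked into the assertion itself.
```

Both the code and its test embody the same wrong interpretation, so every automated check passes while the requirement goes unmet.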
Software requirements are notoriously vague and tend to be in a constant state of flux. Humans will always be required to ensure that the original requirement was not only satisfied, but correctly interpreted by the implementing engineer(s).
Sometimes it’s just not testable
Scalable Manual Testing
As previously highlighted, manual testing (traditional QA) doesn’t scale well as system complexity increases; the role of humans in the testing process nonetheless remains important.
In situations where automated tests are difficult to create, or where such testing is unlikely to uncover the rare and/or unexpected code paths that lead to bugs, one of the most powerful approaches is to crowd-source testing from your current user base. Large companies like Google and Facebook use this technique regularly; it involves segmenting your user base either automatically or manually.
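As a rough sketch of the mechanics (the hashing scheme and bucket count here are illustrative assumptions, not a description of how any particular company implements it), a stable cohort assignment might look like this:

```python
import hashlib


def user_bucket(user_id: str, buckets: int = 100) -> int:
    """Deterministically map a user to one of `buckets` cohorts.

    A stable hash (rather than random assignment) guarantees that a
    given user always lands in the same cohort, so their experience
    stays consistent across sessions and rollout stages.
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % buckets
```

Manual segmentation then amounts to maintaining an explicit opt-in list, while automated segmentation selects whole cohorts by bucket, as described in the sections below.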
Manual user segmentation
Automated user segmentation
Often used in conjunction with manual segmentation, automated segmentation exists where newer versions of the software are released to a portion of the user base without those users being enrolled in a specific early-access program. A common approach is to stage this over several iterations:
- Release the new version exclusively to early access members
- Release the new version to a portion of the wider user base with their consent
- Release the new version to a portion of the wider user base without their consent
- Continue to expand the exposure until it encompasses the entire user base
If done in a controlled manner, this can be a very safe and very effective way to elicit both bug reports and, crucially, feedback on usability and/or product-market fit for the new version.
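One possible shape for that controlled expansion, reusing the user_bucket helper from the earlier sketch (the stage percentages and function names are illustrative assumptions):

```python
import hashlib


def user_bucket(user_id: str, buckets: int = 100) -> int:
    """Stable cohort assignment, as in the earlier sketch."""
    return int(hashlib.sha256(user_id.encode("utf-8")).hexdigest(), 16) % buckets


def sees_new_version(user_id: str, early_access: bool, rollout_pct: int) -> bool:
    """Expose early-access members first, then ramp everyone else by bucket."""
    if early_access:
        return True
    return user_bucket(user_id) < rollout_pct


# Illustrative ramp: exposure widens over successive release stages.
for pct in (0, 1, 5, 25, 100):
    exposed = sum(sees_new_version(f"user-{i}", False, pct) for i in range(10_000))
    print(f"rollout_pct={pct:>3}: {exposed / 100:.1f}% of sampled users exposed")
```

Because bucket assignment is stable, widening rollout_pct only ever adds users to the exposed group; no user flips back and forth between versions as the rollout progresses.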