I’ve noted a surge in the desire to automate testing. I know, if you’ve been plugged into the testing scene for any amount of time, my first statement will be quite underwhelming. You might even be tempted to navigate away from this page — don’t do it!
This one will be quick and to the point, but it’s something I feel rather passionate about. Testing is such a diverse activity that no single technique can encapsulate it. And while the focus of this post is automated testing and the pesticide paradox, I’m going to explain why you need to be investing in full-spectrum testing rather than ascribing “silver bullet” status to automated testing solutions.
First, what’s automated testing, and why is it valuable? At a basic level, automated testing is the execution of test step specifications (more commonly known as test cases) without requiring manual input to complete. Think automated API testing tools like SoapUI and Postman, or browser automation tools like Selenium (whose IDE adds record and playback). Applied intelligently and purposefully, these tools are immensely powerful and capable of saving tremendous amounts of time and money.
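At its simplest, an automated test is just a script that exercises the system and checks the result with no human in the loop. Here’s a minimal sketch using Python’s built-in `unittest`; the function under test, `slugify`, is hypothetical and stands in for any unit of your own system:

```python
import unittest

def slugify(title):
    """Hypothetical function under test: turn a title into a URL slug."""
    return "-".join(title.lower().split())

class SlugifyTests(unittest.TestCase):
    """Each test method is a test-case specification executed unattended."""

    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Pesticide Paradox"), "pesticide-paradox")

    def test_collapses_internal_whitespace(self):
        self.assertEqual(slugify("full   spectrum testing"),
                         "full-spectrum-testing")

if __name__ == "__main__":
    # No manual input required: the runner finds and executes every test.
    unittest.main(exit=False, verbosity=0)
```

Once written, the whole suite reruns on every build for free, which is exactly where the time savings come from.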
Automated testing can also be used for regression-style maintenance testing of systems that have not undergone significant change (save perhaps in one small area). This provides a data touch point that offers a quick pulse check on the system without allocating significant resources. However, there’s a catch to all this, and that’s the pesticide paradox.
The pesticide paradox is a testing term that also doubles as a principle of testing: if the same tests are run over and over, they eventually stop finding new defects, just as surviving pests grow resistant to a repeated pesticide. In a nutshell, once a test case has been run, its usefulness declines rapidly. Think of the money required to crop-dust a field: after the first two flights, the expense of spraying probably begins to outweigh the benefit of the treatment. This is true in testing as well.
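The decline is easy to picture with a toy model (the defect IDs and numbers below are entirely illustrative): a fixed suite catches the bugs it was designed around on the first run; once those are fixed, rerunning the identical suite finds nothing new, while defects outside its reach survive untouched.

```python
# Illustrative toy model of the pesticide paradox (all data hypothetical).

# Defects currently lurking in the system, by identifier.
defects = {"D1", "D2", "D3", "D4", "D5"}

# A static automated suite only ever detects the defects it was built around.
suite_coverage = {"D1", "D2"}

def run_suite(live_defects, coverage):
    """Return the defects this suite can newly detect on this run."""
    return live_defects & coverage

# Run 1: the suite earns its keep.
found_first = run_suite(defects, suite_coverage)
defects -= found_first          # the caught bugs get fixed

# Run 2: the identical suite, sprayed on the same field again.
found_second = run_suite(defects, suite_coverage)

print(len(found_first), len(found_second))  # prints: 2 0
```

The surviving defects are, in effect, resistant to this particular pesticide; only new or varied tests will ever reach them.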
The high efficiency and time savings of automated testing, while significant, are ultimately countered by the time-cost of creating and maintaining the test suites themselves. If your testing strategy relies predominantly on automated testing and the system under test is complex, it’s incumbent upon you to help your management understand that testing a complex system (especially one with a GUI) requires a multi-level, multi-faceted testing approach that’s built on more than “Get it done fast.”
Only with a testing lifecycle that pairs the supporting activities (planning, analysis, review) with both static and dynamic testing techniques (of which automation is just one) can a system be tested completely and sufficiently. Do you agree or disagree with me? Let me know in the comments below!