October 18, 2018

So, you’ve decided to sign on the dotted line.  “Soon,” you think, “we’ll have the web app of our dreams.  It’ll do all the things, make all the profits, solve all the problems — and it’ll work perfectly!”  There’s one detail missing, though: you turned down the sales representative’s suggestion of a dedicated, trained testing group assigned to the Scrum team.  The sage Han Solo once said, “I have a bad feeling about this.”

This seems to happen more often than you might think.  As the scenario goes, the product owner (for example) thinks they’re already paying for top-quality code (which they are), so why pay more for “quality”?  “By the time a shippable piece of software makes it to UAT,” you think, “it will have been unit and integration tested by the development team, cross-checked by the BAs, and reviewed by UAT testing staff on premises.”

In the end, if the system doesn’t crash and the high-level requirements appear to be fulfilled under test, then everything’s good, right?

Not necessarily.  If this sounds like you or someone you know, you (or they) may be accepting “kiddie testing” as an appropriate measure of quality [Beizer 1990].

So, what is kiddie testing?  In short, kiddie testing is doing something in the system, observing an output, and concluding that since the system did, in fact, do something (i.e., it didn’t crash), it must be OK.  Sounds ridiculous, right?  Thankfully, testing is no longer considered the necessary evil it once was, and it appears to be more in demand than ever [Black, Van Veenendaal, and Graham 2012].  If that’s the case, though, how can the scenario I just described still occur?
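To make the distinction concrete, here’s a minimal sketch in Python.  The discount function and its behavior are hypothetical, invented purely for illustration; the point is the difference between observing *an* output and asserting the *expected* output:

```python
# Hypothetical function used for illustration only (not from any real system).
def apply_discount(price, percent):
    """Return price reduced by percent; percent is expected to be 0-100."""
    return price - price * (percent / 100)

# "Kiddie test": run it, observe that it returns *something*, call it good.
result = apply_discount(100.0, 10)
print(result)  # It printed a number and didn't crash -- ship it?

# Actual testing: assert expected outputs, including boundary values.
assert apply_discount(100.0, 10) == 90.0    # nominal case
assert apply_discount(100.0, 0) == 100.0    # boundary: no discount
assert apply_discount(100.0, 100) == 0.0    # boundary: full discount

# An out-of-range percent silently produces a negative price.  The "kiddie
# test" above would never surface this; boundary value analysis does.
assert apply_discount(100.0, 150) == -50.0  # defect: needs a design decision
```

The kiddie test and the real tests exercise the same code; only the second one can tell you whether the code is actually correct.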

From my perspective, we still don’t fully understand software quality assurance testing beyond traditional unit testing, integration testing, and, more recently, automated test frameworks.

If we proceed with those three frameworks alone, a short list of questions follows:

  • Who’s writing, maintaining, and monitoring the test policy?
  • Who’s writing, maintaining, and monitoring the test strategy?
  • Who’s writing, maintaining, and monitoring the test approach?
  • Who’s managing the testing lifecycle?
  • Who’s representing the testing lifecycle in management meetings?
  • Who’s evaluating which static testing techniques are best suited to the test objects each sprint?
  • Who’s evaluating which dynamic testing techniques are best suited to the test objects of each sprint?
  • Who’s defining all the possible test conditions?
  • Who’s coordinating and/or administering the meeting(s) that define risk and priority of the test conditions and objects?
  • Who’s writing the test plan(s)?
  • Who’s writing the test case specifications?
  • Who’s writing the test step specifications?
  • Who’s implementing and executing the test plans, test case specifications, and test step specifications?
  • Who’s evaluating and reporting results from full-spectrum test activities?
  • Who’s determining that all testing control items have been satisfactorily completed?

The list could go on.  The point is that the list above is only a short sample, and developers are primarily concerned with development (not to mention human, like the rest of us).  Mistakes happen; they introduce defects (bugs), and those defects sometimes result in failures (but not always).  Nevertheless, how early we systematically test for, identify, and fix these defects defines how expensive the application will be to build in the long run.  Kiddie testing, when it comes down to it, is just another way of saying little to no testing.
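The “how early” point can be sketched with a little arithmetic.  The multipliers below are assumptions for illustration, loosely inspired by the often-cited rule of thumb that fixing a defect gets more expensive the later it is found; the actual numbers vary by organization:

```python
# Assumed relative cost multipliers per phase -- illustrative only.
PHASE_COST = {
    "requirements": 1,
    "design": 5,
    "coding": 10,
    "system test": 20,
    "production": 100,
}

def repair_cost(defects_found, phase):
    """Relative cost of fixing `defects_found` defects caught in `phase`."""
    return defects_found * PHASE_COST[phase]

# The same ten defects, caught early versus late:
print(repair_cost(10, "requirements"))  # 10 units of effort
print(repair_cost(10, "production"))    # 1000 units of effort
```

Under these assumed multipliers, the same ten defects cost a hundred times more to fix in production than during requirements analysis, which is exactly the budget question kiddie testing quietly defers.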

I’ll dare only one more question in this post, for fear of overusing the question mark: how much do you really want to spend in the long term, and how is kiddie testing helping you stay within budget?  Let me know what you think in the comments below.

Sources:

  1. Beizer, B. (1990) Software Testing Techniques, 2nd edition.  Van Nostrand Reinhold: Boston.
  2. Black, R., van Veenendaal, E., and Graham, D. (2012) Foundations of Software Testing.  Cengage Learning EMEA.

About the Author


Mark Hearon is an accomplished, ISTQB certified, and award-recognized QA consultant.  Methodical and skilled in marrying the software development lifecycle with the testing lifecycle, Mark has a proven track record of success in full-spectrum test management for AVIO's clients.  Mark has championed testing best practices in all phases of product development from planning and discovery through deployment and test closure.  Focused on team growth and measurable results, Mark stands ready to mak

Join the Conversation
