The Trials of Testing

Software testing, while one of the most important tasks in a development project, is often misunderstood and abused by everyone from programmers and managers to testers.

Wikipedia calls testing "an empirical investigation conducted to provide stakeholders with information about the quality of the product or service under testing, with respect to the context in which it is intended to operate."

This definition, like most that try to make software into a science, is bunk. The definition of testing that I buy and try to instill in others is that testing is done to find bugs in a piece of software before the user does.

When a project subscribes to this empirical ideology, a number of problems follow. The first is that developers write sloppy code because they somehow feel that testing is something done by others, who will use imperfect and fancy overpriced tools to dissect their product and provide statistics to men who create graphs and report yea or nay on whether the program works.

These are the kind of developers who don't write unit tests because they think they are somehow absolved of their responsibility to test in the way people throw litter on the street because they think it's the job of public servants to clean up after them.

"Hey buddy, I'm a developer, I write code, your job is to test it, haw, haw."

When such a division is created, with fiefdom-happy managers assembling "tiger (ugh) teams" of system testers, the battle begins: the testers are pitted against the arrogant developers and, on your mark, bang, the race is on to find and report bugs.

The testers pummel the system from all angles, find dialogs that can't be resized that should be and ones that can that shouldn't, and mine the obscure conditions under which the product behaves oddly, ending every day with huge smiles after raising record defect counts against the product.

Before you know it there are a gazillion defects raised against a product that may have nothing seriously wrong with it anyway.

The manageress who runs the project is now shown charts of defect counts, drawn with red lines against an ever-expanding y-axis. "Holy crikey, project number boy, it looks like the project is on fire. Let's pour water on it by stopping feature coding, because we should be fixing defects."

The developers curse and kick because they have to fix supposed defects rather than write new functionality, the testers clap with joy as they raise more bogus defects, e-mail wars ensue over who controls priority and severity, and the entire project is soured.

This all stems from the initial premise that testing is an empirical investigation for stakeholders. It's not, never was, and never will be.

Step one is to take automated testing tools away from the testers; they just get consumed by them the way a child gets fascinated by Internet sites with penguins or dogs that can chat to each other.

Tell testers their role is to find defects before users do, and tell them to talk to users and get rid of any envy they have that they are not programmers.

Tell programmers they must write unit tests that run as part of the build process, and that a failed test is like a failed compilation: the function is incomplete.
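To make that concrete, here is a minimal sketch of the idea using Python's standard `unittest` module; the `parse_version` function and the test names are hypothetical, invented purely for illustration. The point is that the test file is something a build script runs, and a failure stops the build the same way a compile error would.

```python
import unittest


def parse_version(text):
    """Split a dotted version string like '2.10.3' into a tuple of ints."""
    return tuple(int(part) for part in text.split("."))


class ParseVersionTest(unittest.TestCase):
    def test_orders_numerically_not_lexically(self):
        # '2.10' must sort after '2.9'; plain string comparison gets
        # this wrong, which is exactly the bug a unit test should catch
        # before a user does.
        self.assertGreater(parse_version("2.10"), parse_version("2.9"))

    def test_splits_all_components(self):
        self.assertEqual(parse_version("1.2.3"), (1, 2, 3))


if __name__ == "__main__":
    # A build script would run this file (or `python -m unittest`);
    # unittest exits non-zero on any failure, so the build stops there.
    unittest.main(exit=False)
```

Wiring the test runner into the build, rather than leaving it as an optional step, is what turns "a failed test" into "an incomplete function" in practice.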

Finally, tell the managers to back off and stop using empirical nonsense to gauge the health of their project. They should go and talk to users, listen to feedback, and maybe get out a little more and look over the edge of their cubicle. Their role is to facilitate, not to govern, and any spare time they have between sending e-mails, booking meetings, or writing minutes could be spent trying out their team's product, pretending to be a user, and maybe helping to find flaws and defects. In the process they might even become knowledgeable about the thing they're supposedly in charge of.

By Joe Winchester

Copyright © 2009 Ulitzer, Inc. - All Rights Reserved.


