2009/02/27

On testing tools

Here is a scenario that seems to play out all too often.

A young software development organization decides to address the lack of quality in the software they produce by adding some testers. They hire a bunch of people (usually with little or no development skills) to do some variation of manual software testing. Before long, the testers realize that regression defects are particularly dangerous, because they can be introduced in parts of the application that have already made it through manual testing and aren't eyeballed often enough. Enter manual regression testing. This amounts to maintaining lists of things to test, and performing those tests by hand time, after time, after time, after time.

Everything is fine and dandy so far. There are a couple of annoying problems with manual regression testing, though. It is a time-consuming activity, not particularly rewarding, and, when you repeat the same sequence of tests for the fifth time, simply boring. Management also doesn't like the fact that it takes their testers a week to bless a new release. This becomes painfully obvious when some urgent patch needs to be pushed out into production, and there is no way to do it other than just pushing the deploy button and praying. Therefore, someone (management or the testers themselves) comes up with the idea of automating the regression tests. This task, naturally, falls to the testing department. What does the testing department do? They start looking for a testing tool, of course!

The decision matrix for selecting a tool typically contains two heavyweight factors: (1) it should be able to drive our application's front-end; (2) it should be usable by a non-programmer (since there are no programmers in the testing department). Eventually, they come across a sales brochure for one of a breed of commercial products (hmm... let's call it Irrational Droid... or Pluto MacWalker). The brochure basically says: here is a tool. You point it at your app, click some buttons, the tool remembers what you did, spits it out as a script, and opens it in an editor. You sprinkle in some assertions, maybe some parameterization, and - voila! - you have a test suite. It's all so simple, a monkey could do it. And it costs a mere $100,000.
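To make that promise concrete, here is a rough sketch of the capture/playback workflow, rendered in Python for readability (the real tools ship their own proprietary languages). The RecordedApp stub and every name in it are hypothetical; this only illustrates the record-then-sprinkle-assertions idea, not any particular product's output.

    class RecordedApp:
        """Stand-in for the GUI-driving runtime such a tool would ship."""
        def __init__(self):
            self.title = ""
        def click(self, control):
            # Fake the application's responses so the sketch is runnable:
            if control == "File > New Order":
                self.title = "New Order"
            elif control == "Save":
                self.title = "Order saved"
        def type_text(self, field, text):
            pass  # a real runtime would send keystrokes to the control

    def test_create_order(app, customer):
        # The next three lines are what the recorder would capture:
        app.click("File > New Order")
        app.type_text("customer_name", customer)
        app.click("Save")
        # ...and this is the assertion you "sprinkle" in by hand:
        assert app.title == "Order saved"

    # "Parameterization" usually amounts to replaying the same
    # recording over a table of inputs:
    for customer in ("ACME Corp", "Initech"):
        test_create_order(RecordedApp(), customer)

At sales-demo scale this really does work; the trouble starts later.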

The brochure may not mention a few minor things. The tool uses a home-grown scripting language, poorly designed and buggy. The language's standard library consists of two and a half modules, which is more than enough for a sales demo. Encapsulation is not supported. The text editor provided with the tool is worse than Notepad, which, by the way, cannot be used as a substitute because some parts of the script are saved in a binary file. Version control cannot be used for the same reason. External libraries written in another language can be attached (through some sort of Rube Goldberg device), and may even work. Or the tool makes some other heinous design compromises to appease the capture/playback gods.

However, the tool does know how to drive an application front-end, and can be used by a non-programmer to do test automation. Since those two constraints have the heaviest weights in the decision matrix (and the latter, incidentally, excludes all the open-source tools, written by programmers for programmers), the Droid wins the contest. Now the testing department invests a non-trivial amount of money in licenses and training, and starts using the tool. Eventually, they discover that doing anything significantly bigger than a sales demo is an exercise in anger management, and the resulting suite is broken within two weeks, because something changed in the app. None of the developers on the team are willing to touch the Droid with a ten-foot pole to fix it, and the licenses are too expensive to give one to everyone on the team anyway. So the only practical recourse is to recapture the whole shebang. After several attempts to salvage the situation, the automation project is abandoned, and the lesson is learned: test automation is too expensive, and not worth it.

What went wrong here? In my experience, it always seems to start with the notion that some expensive tool will allow a non-programmer to do test automation. This doesn't work, for the very same reason tool-driven approaches don't work in normal software development. In fact, test automation is no different from any other software development, so the failure shouldn't even be surprising.

How do you avoid this situation? First of all, accept that test automation takes 90% development skill and 10% testing skill. A non-programming tester with a few days of training can certainly contribute tests to an existing suite, but don't expect them to design the underlying automation framework well on their own. Therefore, staff the exercise accordingly. And when evaluating technologies to aid your automation effort, look for something the development team would be comfortable with. What you normally need is a library that can drive the front-end of your application, in a language that the developers already know, or would at least not hate learning. Most of the time, particularly when your app is a web app, you should not spend money on tool licenses - there are perfectly adequate open-source alternatives.
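For a web app, for instance, the open-source Selenium library fits this description: it drives a real browser from a general-purpose language the team already uses. Below is a minimal sketch in Python; the URL, field names, and expected text are made-up assumptions, not a real application. The page-object class is the part the developers build; the test at the bottom is the part a trained tester can write and maintain.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    class LoginPage:
        """Framework layer (developer-owned): hides navigation and locators."""
        def __init__(self, driver):
            self.driver = driver
            self.driver.get("http://example.com/login")  # hypothetical URL
        def log_in(self, user, password):
            self.driver.find_element(By.NAME, "username").send_keys(user)
            self.driver.find_element(By.NAME, "password").send_keys(password)
            self.driver.find_element(By.NAME, "submit").click()

    def test_valid_login_shows_welcome():
        """Test layer: simple enough for a non-programming tester to extend."""
        driver = webdriver.Firefox()
        try:
            LoginPage(driver).log_in("alice", "correct-password")
            assert "Welcome" in driver.page_source
        finally:
            driver.quit()

    if __name__ == "__main__":
        test_valid_login_shows_welcome()

The payoff of the extra layer is maintainability: when the front-end changes, you fix LoginPage once, instead of recapturing every test that happens to log in - exactly the failure mode that kills the Droid-style suites.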

My next post will be about performance/scalability testing strategies.
