2009/02/27

On testing tools

Here is a scenario that seems to play out all too often.

A young software development organization decides to address the lack of quality in the software it produces by adding some testers. It hires a bunch of people (usually with little or no development skills) to do some variation of manual software testing. Before long, the testers realize that critical regression defects are a particularly dangerous thing, because they can be introduced in parts of the application that have already made it through manual testing and aren't eyeballed often enough. Enter manual regression testing. This amounts to maintaining lists of things to test, and performing those tests by hand, time after time, after time, after time.

Everything is fine and dandy so far. There are a couple of annoying problems with manual regression testing, though. It is a time-consuming activity, not a particularly rewarding one, and, when you repeat the same sequence of tests for the fifth time, simply boring. Management also doesn't like the fact that it takes their testers a week to bless a new release. This becomes painfully obvious when some urgent patch needs to be pushed out into production, and there is no way to do it other than just pushing the deploy button and praying. Therefore, someone (management or the testers themselves) comes up with the idea of automating the regression tests. This task, naturally, falls to the testing department. What does the testing department do? They start looking for a testing tool, of course!

The decision matrix for selecting a tool typically contains these two heavyweight factors: (1) it should be able to drive our application's front end; (2) it should be usable by a non-programmer (since there are no programmers in the testing department). Eventually, they come across a sales brochure for one of a breed of commercial products (hmm... let's call it Irrational Droid... or Pluto MacWalker). The brochure basically says: here is a tool. You point it at your app, click some buttons, the tool remembers what you did, spits it out as a script, and opens it in an editor. You sprinkle in some assertions, maybe some parameterization, and - voila! - you have a test suite. It's all so simple, a monkey could do it. And it costs a mere $100,000.

The brochure may not mention a few minor things. The tool uses a home-grown scripting language, poorly designed and buggy. The standard library of the language consists of two and a half libraries, which is more than enough for a sales demo. Encapsulation is not supported. The text editor provided with the tool is worse than Notepad, which, by the way, cannot be used instead, because some parts of the script are saved in a binary file. Version control cannot be used for the same reason. External libraries written in another language can be attached (through some sort of Rube Goldberg device), and may even work. Or there are some other heinous design compromises made to appease the capture/playback gods.

However, it does know how to drive the application front end, and it can be used by a non-programmer to do test automation. Since those two constraints have the heaviest weights in the decision matrix (and the latter, incidentally, excludes all the open-source tools, written by programmers for programmers), the Droid wins the contest. Now the testing department invests a non-trivial amount of money in licenses and training, and starts using the tool. Eventually, they discover that doing anything significantly bigger than a sales demo is an exercise in anger management, and the resulting suite is broken within two weeks, because something changed in the app. None of the developers on the team are willing to touch the Droid with a ten-foot pole to fix it, and the licenses are too expensive to hand out to everyone on the team anyway. So, the only practical recourse is to recapture the whole shebang. After several attempts to salvage the situation, the automation project is abandoned, and the lesson is learned: test automation is too expensive, and not worth it.

What went wrong here? In my experience, it always seems to start with the notion that some expensive tool will allow a non-programmer to do test automation. This doesn't work, for the very same reason tools-driven approaches don't work in normal software development. In fact, test automation is not at all different from any other software development, so this isn't even surprising.

How do you avoid this situation? First of all, accept that it takes 90% development skill and 10% testing skill to do test automation. A non-programming tester, with a few days of training, can certainly contribute tests to an existing suite, but don't expect them to design the underlying automation framework well on their own. Therefore, staff this exercise accordingly. And when evaluating technologies to aid your automation effort, look for something that the people on the development team would be comfortable with. What you normally need is a library that can drive the front end of your application, in a language that the developers already know, or would at least not hate learning. Most of the time, particularly when your app is a web app, you should not spend money on tool licenses - there are perfectly adequate open-source alternatives.
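
To make that concrete: driving a web front end from plain Java with an open-source library can look roughly like the sketch below, which uses Selenium RC. The application URL, element ids, and the login scenario are all made up for illustration.

    import com.thoughtworks.selenium.DefaultSelenium;
    import com.thoughtworks.selenium.Selenium;

    // A minimal sketch of driving a web front end from plain Java with the
    // open-source Selenium RC library. The application URL, element ids, and
    // the login scenario are hypothetical.
    public class LoginSmokeCheck {
        public static void main(String[] args) {
            Selenium selenium = new DefaultSelenium(
                    "localhost", 4444, "*firefox", "http://localhost:8080/myapp");
            selenium.start();
            try {
                selenium.open("/login");
                selenium.type("id=username", "testuser");
                selenium.type("id=password", "secret");
                selenium.click("id=login_button");
                selenium.waitForPageToLoad("30000");
                if (!selenium.isTextPresent("Welcome, testuser")) {
                    throw new AssertionError("Login did not reach the welcome page");
                }
            } finally {
                selenium.stop();
            }
        }
    }

The point is not this particular library, but the fact that the whole thing is ordinary code that any developer on the team can read, refactor, and keep under version control.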

My next post will be about performance/scalability testing strategies.

2009/02/22

Functional test automation through UI is hard, part 2

In the previous post, I described a mismatch in the abstraction level between automated UI tests and business requirements that those tests purport to express. I also promised to tell something about how to deal with it. First, let me admit (with a due dose of regret) that I've no idea how to make the problem go away.

Automation is expensive. Automation costs can easily be so high that manual regression testing would be cheaper and more efficient. This is especially true for UI tests (they are easy to perform manually and expensive to automate). So, the art of test automation is to be selective about what you automate, and to learn how to cut down the costs of automation.

Avoid UI tests


I don't mean avoid them altogether. Recognize situations where test automation through the UI is too expensive, and simply don't do it in those cases.

Limit the scope


It is often wise to use automated UI tests for smoke testing only. Rely on manual testing to make sure that all your buttons are aligned, dropdown lists have the right values, and so on. Use automated tests as a fast-fail mechanism, protecting the development team against commits that completely break the application. If your application has 10 primary use cases, write 10 UI tests, each covering a success path through one of those cases, and stop there - until you really feel the need to add more UI tests.

Opt for service layer tests whenever possible


OK, we all know that manual regression testing is slow, mind-numbing, and costly. Luckily, there is another way to automate tests. The service layer should have an API that fits the business problem quite nicely. That makes it a great medium for automating tests that are about some kind of transaction processing, or workflow, or business rules, and not really about UI behavior per se. So, if your app has some complex back-end logic and a dumb UI (which, for the sake of maintainability, is how business software should strive to be), write automated functional tests that talk to the service layer directly.

Functional testing through the service layer should not be confused with unit testing. The kind of tests I'm talking about here should drive the entire application, minus the (dumb) UI, and their purpose is to catch the integration bugs that may creep into the glue between your carefully unit-tested classes.
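
To illustrate, here is a rough sketch of what such a test could look like with JUnit. The OrderService, the way it is bootstrapped, and the discount rule being checked are all hypothetical placeholders for whatever your application actually has - the point is that the test exercises the real service layer against a test database, with no UI in the loop.

    import static org.junit.Assert.assertEquals;

    import org.junit.Before;
    import org.junit.Test;

    // A sketch of a functional test that drives the application through its
    // service layer instead of the UI. OrderService, Order, and the
    // TestApplication bootstrap helper are hypothetical placeholders for
    // whatever wiring your application actually has.
    public class OrderProcessingFunctionalTest {

        private OrderService orderService;

        @Before
        public void setUp() {
            // Bootstraps the whole application (minus the UI) against a
            // test database, then hands out the real service object.
            orderService = TestApplication.getOrderService();
        }

        @Test
        public void bulkOrdersGetVolumeDiscount() {
            // 1000 widgets at $1.00 each, with a hypothetical 10% bulk discount.
            Order order = orderService.placeOrder("customer-42", "WIDGET", 1000);
            assertEquals("Bulk orders should get the volume discount",
                    90000, order.getTotalInCents());
        }
    }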

Postpone automating UI tests


In any new functionality, the UI has the most bugs and is the most likely to change in the later stages of development, when customers get their hands on the app and realize what else they need to change. If you implement an automated UI test early on, it is virtually guaranteed to need significant rework, too. Incidentally, new functionality in active development is regularly eyeballed by humans, so regression bugs in it are caught early anyway. It's new regression bugs in the old functionality that go unnoticed until someone in production support gets that proverbial 3am phone call.

Therefore, do not automate UI tests for new functionality until it's past an active test/change feedback loop or two. Typically, this means a couple of weeks or even a month after a feature was first submitted to testing.

Don't use UI level tests to drive development


This is a corollary to the above. As executable specifications, automated UI tests are grossly expensive and force you to specify too many details too early in the process. It's just another form of the Big Design Up-Front anti-pattern.

If you can't avoid it, then do it right!


So far, I have covered a few possible ways to avoid automating tests through the UI. Now let's look at it from another angle: if you can reduce the cost of UI test automation, you won't need to avoid it so much. :) Here are some suggestions for doing that.

Treat test automation as software development


Test automation is software development first, and testing second. Doing it takes as much sophistication in development tools and techniques as writing production code does.

Assign the task to a programmer


Most testing departments trying to introduce test automation make this mistake - they assign the task to a tester. The right mix of skills required to do automation is 10% testing and 90% software development. If you happen to have someone in your testing department who could do a convincing job as a senior developer, awesome. Otherwise, you are much better off recruiting someone from development to do it. You can train testers to do it, but the chance that someone with no development experience will figure it out on their own looks slim (I have seen a few failures, and no success stories so far).

Use version control


Putting your test suite in the same version control repository as your production code is the most basic thing you can do to bring some order to the affair. As silly as it sounds (to a developer), most first-time test automation initiatives I've seen didn't use version control. In fact, I've seen at least two major commercial test automation tools that didn't even support version control (along the lines of "large binary file that has to be present, cannot be built from sources, and changes every time you run a test"). To me, that looks like reason enough to avoid using the tool. Once again: whenever you are developing software, YOU MUST use version control.

Have a design


So, there is this huge abstraction mismatch between functional requirements and UI constructs, which I mentioned before. To bridge abstraction mismatches, software is structured in layers. Since test automation is software development, the same principle applies. Typically, you want to have at least three layers: the test script, the UI map, and the UI driver. The UI driver is basically a library for manipulating UI controls directly (e.g., Selenium or Watir). Test scripts should be written in terms that match the analyst's thinking as closely as possible - I think you should *literally* try to make them readable by a non-technical business analyst or customer. The UI map is the layer that converts the actions described by test scripts into UI driver calls.

For example, a test script can read like "create user profiles for Joe and Jane, search for Joe, Joe's profile should be in the search results". The UI map would know that "search for Joe" means typing "Joe" into the text box with the id of "search_query", clicking the button with the id of "search_button", and making sure that we landed on a page with the URL of /search?query=Joe and the title "MyApp - Search Results for Joe", rather than "HTTP 500 - Internal Server Error".
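
To make the layering concrete, here is a rough sketch in Java on top of Selenium RC and JUnit. The SearchPage class plays the role of the UI map; the element ids, page title, and application URL are hypothetical, and the profile-creation step from the scenario above is omitted for brevity.

    import static org.junit.Assert.assertTrue;

    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;

    import com.thoughtworks.selenium.DefaultSelenium;
    import com.thoughtworks.selenium.Selenium;

    // The UI-map layer: it knows element ids, page titles, and URLs, so the
    // test scripts above it don't have to. Everything here is hypothetical.
    class SearchPage {
        private final Selenium selenium;

        SearchPage(Selenium selenium) {
            this.selenium = selenium;
        }

        void searchFor(String name) {
            selenium.type("id=search_query", name);
            selenium.click("id=search_button");
            selenium.waitForPageToLoad("30000");
            if (!selenium.getTitle().startsWith("MyApp - Search Results")) {
                throw new AssertionError("Search did not land on the results page");
            }
        }

        boolean resultsContain(String name) {
            return selenium.isTextPresent(name);
        }
    }

    // The test-script layer reads (almost) like the business scenario.
    public class UserSearchTest {
        private Selenium selenium;

        @Before
        public void startBrowser() {
            selenium = new DefaultSelenium("localhost", 4444, "*firefox",
                    "http://localhost:8080/myapp"); // hypothetical app URL
            selenium.start();
        }

        @After
        public void stopBrowser() {
            selenium.stop();
        }

        @Test
        public void existingProfileShowsUpInSearchResults() {
            SearchPage search = new SearchPage(selenium);
            search.searchFor("Joe");
            assertTrue("Joe's profile should be in the search results",
                    search.resultsContain("Joe"));
        }
    }

Note that nothing in the test method mentions element ids or URLs; when the UI changes, only the UI map needs to be updated.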

Do not rely on capture/playback


In the light of "test automation is software development", capture/playback is code generation, and of a bad kind. It produces badly designed, repetitive, unreadable, and therefore unmaintainable code. If you use a captured script as a way to discover exactly what is going on between the browser and the server (i.e., as some sort of network sniffer aware of higher-level protocols), that's OK. If you are checking this generated stuff into your version control repository verbatim, you are probably making a big mistake.

Use continuous integration


It should be obvious that if you have some automated tests, you will be better off running them as often as practically possible - for the sake of discovering regression bugs as soon as possible, but also for the sake of keeping the tests themselves relevant. When some change breaks an automated test, it's much easier to figure out what has changed and who made the change, and deal with it right there and then, instead of running the entire suite once a month and discovering that half of the tests are broken for all sorts of strange reasons. Having said that, UI tests (unlike unit and service-level integration tests) are usually too cumbersome for developers to run as part of the regular pre-commit process. Therefore, they should not be included in the main continuous integration loop. It's better to run them in a separate CI loop and tolerate broken builds there.
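
One simple way to arrange this (a sketch, assuming JUnit 4 and hypothetical test class names) is to collect the browser-driving tests into their own suite, and have only the secondary CI loop run that suite while the main build runs everything else:

    import org.junit.runner.RunWith;
    import org.junit.runners.Suite;

    // A separate entry point for the slow, browser-driving tests. The main
    // pre-commit build runs only unit and service-layer tests; a second CI
    // loop runs this suite on its own schedule. The listed classes are
    // hypothetical examples of UI tests.
    @RunWith(Suite.class)
    @Suite.SuiteClasses({
        UserSearchTest.class,
        CheckoutSmokeTest.class,
        LoginSmokeTest.class
    })
    public class UiSmokeTestSuite {
        // No code needed - the annotations tell JUnit what to run.
    }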

Remember about the truck number


Automated tests should be maintainable by more than one person. Ideally, any developer (note: developer, not tester!) on the team should be able to do it. By the way, this puts some interesting constraints on the choice of automation tools. And since this text is already too long for a blog, and the plane I'm on is about to land, I'll just use this observation as a segue. Next time I'm on a plane, I'll try writing down my thoughts about automation tools. Which is a somewhat painful subject...

2009/02/06

Must you always use a rich domain?

Ask a Java developer in my neck of the woods to create a web application (of the usual "shovel some data from Oracle to HTML" variety) and s/he will probably come up with an instant architecture. Some sort of MVC => Spring => domain layer => Hibernate => database. Blueprint done, let's go write some code now.

What can possibly be wrong with this? The Spring => domain layer => Hibernate part. There are many simple web applications where a full-blown domain persisted by a full-blown object-relational mapper and wired together by a full-blown dependency injection framework with aspect-oriented programming features provides no value, but carries a big price in terms of complexity, scalability, and long-term maintenance. And every complex web application that I have seen so far has had some areas where a rich domain backed by an ORM was not the best way to go, either.

A classic example is a web application that is all about searching and viewing some stuff stored in a relational database. Usually, there is some data manipulation involved, but it's all CRUD. Whenever this kind of application has a domain, it is inevitably an anemic one - in other words, there are getters and setters, maybe some data validation logic, and not much interesting behavior. Unfortunately, there seems to be a blind spot here: every greenfield application must have a domain layer complete with DI and ORM. Surely, all this byte-code manipulation voodoo and angle-brackety declarative configuration goodness must have some cost to it? And it does!


First, there is always a performance penalty to pay. The moment you decide to go with a full-scale ORM, you probably increase production hardware costs by at least 50%. I don't have any hard numbers to back this up, so this is just the subjective opinion of someone in the trenches who is considered "a performance dude" at ThoughtWorks. In a world where developer time is expensive and hardware is not, this is a great tradeoff, as long as you actually save developer time.

But here is the catch. When there is no rich domain, no need for distributed transaction management, advanced caching strategies, or other things of that nature, you may not be saving anything - quite the contrary. You just end up writing more code to wire all those decoupled layers together, running longer builds (a much bigger productivity killer than most people realize), dealing with more "interesting" problems, reading much larger and less informative exception stack traces, and generally working harder than necessary.

And then there is the maintenance cost. Conceptual complexity, all this cool voodoo, is hard on production support. It creates more situations that regular support people can't cope with on their own and have to escalate to platform experts.

One major selling point for Hibernate is that it eliminates a lot of boilerplate JDBC code for mapping data from rows to objects. One day, Sun will hopefully bake something like LINQ into Java, and the boilerplate data mapping issue will be gone for good. Until then, there are libraries out there that do just this (IBATIS comes to mind) at a tiny fraction of the Spring+Hibernate complexity cost.

Now, there are people who take this to the other extreme, and just mix SQL with markup. Although it's a great design for a "Hello, World"-type system, I'm not radical enough to advocate this for anything bigger.

So, next time you start writing an application that is 90% data display and 10% CRUD manipulation -- if something like Rails or Django is not an option -- please at least think about the MVC => service => hand-coded SQL option.
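
For the sake of illustration, here is roughly what the hand-coded SQL part can look like with nothing but plain JDBC; the users table, its columns, and the UserSummary DTO are made up.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;

    import javax.sql.DataSource;

    // A sketch of a hand-coded SQL finder sitting behind the service layer.
    // The "users" table, its columns, and the UserSummary DTO are made up.
    public class UserSearchDao {
        private final DataSource dataSource;

        public UserSearchDao(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        public List<UserSummary> findByName(String namePrefix) throws SQLException {
            String sql = "select id, name, email from users where name like ? order by name";
            Connection connection = dataSource.getConnection();
            try {
                PreparedStatement statement = connection.prepareStatement(sql);
                statement.setString(1, namePrefix + "%");
                ResultSet rs = statement.executeQuery();
                List<UserSummary> results = new ArrayList<UserSummary>();
                while (rs.next()) {
                    results.add(new UserSummary(
                            rs.getLong("id"), rs.getString("name"), rs.getString("email")));
                }
                return results;
            } finally {
                connection.close();
            }
        }

        // Simple DTO for displaying search results - no ORM, no proxies.
        public static class UserSummary {
            public final long id;
            public final String name;
            public final String email;

            public UserSummary(long id, String name, String email) {
                this.id = id;
                this.name = name;
                this.email = email;
            }
        }
    }

A service method can call this directly and hand the list to the view; there is nothing here that a production support person can't follow with a debugger and a SQL console.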
