2009/02/22

Functional test automation through UI is hard, part 2

In the previous post, I described a mismatch in the abstraction level between automated UI tests and the business requirements that those tests purport to express. I also promised to say something about how to deal with it. First, let me admit (with a due dose of regret) that I have no idea how to make the problem go away.

Automation is expensive. Automation costs can easily run so high that manual regression testing would be cheaper and more efficient. This is especially true for UI tests (they are easy to perform manually and expensive to automate). So, the art of test automation is to be selective about what you automate, and to learn how to cut the costs of what you do automate.

Avoid UI tests


I don't mean avoiding them altogether. Recognize situations where test automation through the UI is too expensive, and simply don't do it in those cases.

Limit the scope


It is often wise to use automated UI tests for smoke testing only. Rely on manual testing to make sure that all your buttons are aligned, dropdown lists have the right values, and so on. Use automated tests as a fail-fast mechanism, protecting the development team against commits that completely break the application. If your application has 10 primary use cases, write 10 UI tests, each covering a success path through one of these cases, and stop there - until you really feel the need to add more UI tests.
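To make this concrete, here is a minimal sketch of what one such success-path test might look like, using Selenium's Python bindings as the UI driver. The application URL and all the element ids are hypothetical placeholders; a real suite would pull them out into a UI map layer (more on that below).

    import unittest
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    class OrderSmokeTest(unittest.TestCase):
        """One success path through a hypothetical 'place an order' use case."""

        def setUp(self):
            self.driver = webdriver.Firefox()

        def tearDown(self):
            self.driver.quit()

        def test_place_order_success_path(self):
            d = self.driver
            d.get("http://localhost:8080/")  # hypothetical app URL
            d.find_element(By.ID, "search_query").send_keys("widget")
            d.find_element(By.ID, "search_button").click()
            d.find_element(By.ID, "add_to_cart").click()
            d.find_element(By.ID, "checkout").click()
            # A smoke test only asserts that the happy path completes;
            # pixel-level checks stay with the manual testers.
            self.assertIn("Order Confirmation", d.title)

    if __name__ == "__main__":
        unittest.main()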

Opt for service layer tests whenever possible


OK, we all know that manual regression testing is slow, mind-numbing, and costly. Luckily, there is another way to automate tests. The service layer should have an API that fits the business problem quite nicely. That makes it a great medium for automating tests that are about some kind of transaction processing, or workflow, or business rules, and not really about UI behavior per se. So, if your app has some complex back-end logic and a dumb UI (which, for the sake of maintainability, is what business software should strive for), write automated functional tests that talk to the service layer directly.

Functional testing through the service layer should not be confused with unit testing. The kind of tests I'm talking about here should drive the entire application, minus the (dumb) UI, and their purpose is to catch integration bugs that may creep into the glue between your carefully unit-tested classes.
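As a sketch of what such a test might look like - UserProfileService and its methods are hypothetical stand-ins for whatever API your service layer actually exposes - note that it reads entirely in business terms, with no UI vocabulary at all:

    import unittest
    from myapp.services import UserProfileService  # hypothetical service layer module

    class ProfileSearchTest(unittest.TestCase):
        def setUp(self):
            # Wire the real service layer to a test database, so the test
            # exercises the glue between the unit-tested classes.
            self.service = UserProfileService(database="test")

        def test_created_profile_appears_in_search(self):
            self.service.create_profile(name="Joe")
            self.service.create_profile(name="Jane")
            results = self.service.search("Joe")
            self.assertEqual(["Joe"], [profile.name for profile in results])

    if __name__ == "__main__":
        unittest.main()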

Postpone automating UI tests


In any new functionality, the UI has the most bugs and is the most likely part to change in the later stages of development, when customers get their hands on the app and realize what else they need changed. If you implement an automated UI test early on, it is virtually guaranteed to need significant rework, too. Incidentally, new functionality in active development is regularly eyeballed by humans, so regression bugs in it are caught early anyway. It's new regression bugs in old functionality that go unnoticed until someone in production support gets that proverbial 3am phone call.

Therefore, do not automate UI tests for new functionality until it has been through an active test/change feedback loop or two. Typically, this means a couple of weeks, or even a month, after the feature was first submitted to testing.

Don't use UI level tests to drive development


A corollary to the above. As executable specifications, automated UI tests are grossly expensive and pin down too many details too early in the process. It's just another form of the Big Design Up-Front anti-pattern.

If you can't avoid it, then do it right!


So far, I've covered a few ways to avoid automating tests through the UI. Now let's look at it from another angle: if you can reduce the cost of UI test automation, you won't need to avoid it so much. :) Here are some suggestions for doing exactly that.

Treat test automation as software development


Test automation is software development first, and testing second. Doing it well takes as much sophistication in development tools and techniques as writing production code does.

Assign the task to a programmer


Most testing departments trying to introduce test automation make this mistake - they assign the task to a tester. The right mix of skills required to do automation is 10% testing and 90% software development. If you happen to have someone in your testing department who could do a convincing job as a senior developer, awesome. Otherwise, you are much better off recruiting someone from development to do it. You can train testers to do it, but the chances of someone with no development experience figuring it out on their own look slim (I have seen a few failures, and no success stories so far).

Use version control


Putting your test suite in the same version control repository as your production code is the most basic thing you can do to bring some order to the affair. As silly as it sounds (to a developer), most first-time test automation initiatives I've seen didn't use version control. In fact, I've seen at least two major commercial test automation tools that didn't even support version control (along the lines of "large binary file that has to be present, cannot be built from sources, and changes every time you run a test"). To me, that looks like reason enough to avoid using the tool. Once again: whenever you are developing software, YOU MUST use version control.

Have a design


So, there is this huge abstraction mismatch between functional requirements and UI constructs, which I mentioned before. To bridge abstraction mismatches, software is structured in layers. Since test automation is software development, the same principle applies. Typically, you want at least three layers: the test script, the UI map, and the UI driver.

The UI driver is basically a library to manipulate the UI controls directly (e.g., Selenium or Watir). Test scripts should be written in terms that match the analysts' thinking as closely as possible. I think you should *literally* try to make them readable by a non-technical business analyst, or customer. The UI map is the layer that converts the actions described by test scripts into UI driver calls. For example, a test script can read like "create user profiles for Joe and Jane, search for Joe, Joe's profile should be in the search results". The UI map would know that "search for Joe" means typing "Joe" into the text box with the id of "search_query", clicking the button with the id of "search_button", and making sure that we landed on a page with the URL /search?query=Joe and the title "MyApp - Search Results for Joe", rather than on "HTTP 500 - Internal Server Error".
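Here is a sketch of how those three layers might look in code, with Selenium's Python bindings as the UI driver. The element ids, URL, and page title come straight from the example above; the module layout, the BASE_URL, and the profile form are assumptions:

    import unittest
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    BASE_URL = "http://localhost:8080"  # hypothetical

    # UI map layer: converts business actions into UI driver calls.
    class UIMap:
        def __init__(self, driver):
            self.driver = driver

        def create_user_profile(self, name):
            self.driver.get(BASE_URL + "/profiles/new")  # hypothetical form page
            self.driver.find_element(By.ID, "profile_name").send_keys(name)
            self.driver.find_element(By.ID, "save_button").click()

        def search_for(self, name):
            self.driver.get(BASE_URL + "/")
            self.driver.find_element(By.ID, "search_query").send_keys(name)
            self.driver.find_element(By.ID, "search_button").click()

        def search_results_shown_for(self, name):
            # We should land on the results page, not on
            # "HTTP 500 - Internal Server Error".
            return (self.driver.current_url.endswith("/search?query=" + name)
                    and ("Search Results for " + name) in self.driver.title)

    # Test script layer: reads like the analyst's sentence.
    class ProfileSearchUITest(unittest.TestCase):
        def test_profile_appears_in_search_results(self):
            app = UIMap(webdriver.Firefox())
            try:
                app.create_user_profile("Joe")
                app.create_user_profile("Jane")
                app.search_for("Joe")
                self.assertTrue(app.search_results_shown_for("Joe"))
            finally:
                app.driver.quit()

    if __name__ == "__main__":
        unittest.main()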

Do not rely on capture/playback


In light of "test automation is software development", capture/playback is code generation, and of a bad kind: it produces badly designed, repetitive, unreadable, and therefore unmaintainable code. If you use a captured script as a way to discover exactly what is going on between the browser and the server (i.e., as a sort of network sniffer aware of higher-level protocols), that's OK. If you are checking this generated stuff into your version control repository verbatim, you are probably making a big mistake.
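For contrast, this is the style of code such tools tend to emit (a hypothetical but representative reconstruction): every step is a raw, copy-pasted driver call with no shared vocabulary, so a single renamed element id breaks every script that ever touched it. Compare it with the UI map version above, where the same action exists in exactly one place.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    # Captured-script style: no layers, no shared vocabulary.
    driver = webdriver.Firefox()
    driver.get("http://localhost:8080/profiles/new")
    driver.find_element(By.ID, "profile_name").send_keys("Joe")
    driver.find_element(By.ID, "save_button").click()
    driver.get("http://localhost:8080/profiles/new")
    driver.find_element(By.ID, "profile_name").send_keys("Jane")
    driver.find_element(By.ID, "save_button").click()
    # ...the same three lines again, in every test that needs a profile.
    driver.quit()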

Use continuous integration


It should be obvious that if you have some automated tests, you will be better off running them as often as practically possible - for the sake of discovering regression bugs as soon as possible, but also for the sake of keeping the tests themselves relevant. When some change breaks an automated test, it's much easier to figure out what changed, who made the change, and deal with it right there and then, instead of running the entire suite once a month and discovering that half the tests are broken for all sorts of strange reasons. Having said that, UI tests (unlike unit and service-level integration tests) are usually too cumbersome for developers to run as part of the regular pre-commit process. Therefore, they should not be included in the main continuous integration loop. It's better to run them in a separate CI loop and tolerate occasional broken builds there.
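One way to arrange this - a sketch, assuming the suites live in separate tests/unit, tests/service, and tests/ui directories - is a runner script where the main CI loop executes only the fast suites on every commit, and a second, slower CI job invokes the UI suite on its own schedule:

    import sys
    import unittest

    def run_suite(path):
        suite = unittest.defaultTestLoader.discover(path)
        result = unittest.TextTestRunner(verbosity=1).run(suite)
        return result.wasSuccessful()

    if __name__ == "__main__":
        if "ui" in sys.argv:
            # Separate CI loop: slow, and occasional breakage is tolerable.
            ok = run_suite("tests/ui")
        else:
            # Main CI loop: runs on every commit and must stay green.
            ok = run_suite("tests/unit") and run_suite("tests/service")
        sys.exit(0 if ok else 1)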

Remember the truck number


Automated tests should be maintainable by more than one person. Ideally, any developer (note: developer, not tester!) on the team should be able to do it. By the way, this puts some interesting constraints on the choice of automation tools. And since this text is already too long for a blog post, and the plane I'm on is about to land, I'll just use this observation as a segue. Next time I'm on a plane, I'll try writing down my thoughts about automation tools. Which is a somewhat painful subject...

3 Comments:

At February 23, 2009 at 11:20 AM, Anonymous said...

Hi Alex! Great post - made me want to read the first one. Any chance you could please include a link when you do "In the previous post"?

Will go have a look now; I'm sure it must be there somewhere...

- Liz

 
At February 24, 2009 at 10:28 PM, Alexey Verkhovsky said...

Hi, Liz,

Fixed that, and posted on TW internal list - it's been a while since we had a decent holy war there :)

 
At June 16, 2009 at 12:10 AM, DrummerDaveF said...

Please give LiquidTest a try ( www.jadeliquid.com/liquidtest ) and see if it doesn't resolve many of the issues you mention above. It certainly tries to address some of the classic automated-tool issues.

With LiquidTest, JadeLiquid recommends version control of tests; it creates test scripts through record/capture that I can figure out (without a degree or history in development), does record and playback using official APIs, runs on build or CI boxes, and, as the tests are less fragile, they can be written earlier in the development phase than would normally be practical.

 
