User interface tests are a bit tricky – it is easy to get carried away and waste quite a lot of time without getting any real benefit. However, if planned properly, automated UI tests can have a great effect on the project. Here are a few tips on how to make the most of automated user-interface tests.

The Achilles' heel of automated tests

The problem with user interface tests is not how to execute them, but what to test. Since human testers would often check the whole round-trip of information, from the GUI to the back-end report, the first instinct is to replicate that and try to verify business domain rules through the UI. This is the Achilles' heel of UI tests, and it ultimately leads to an enormous waste of time.

The user interface often forms a thick layer around the business domain code (at least in properly designed applications), and it is often very hard to peel that onion in an automated test. A lot of clicking, selecting and dragging may be required for a simple variable assignment. Testing a business rule properly through the user interface may require a lot of very similar tests, with only minor workflow differences, and maintaining such tests is a real pain when the business rule changes (and it is when, not if). If the project has more than one customer, especially when the front-end is a web application, the user interface is often slightly different for each customer. The differences can be just enough to make all those UI tests customer-specific, so that they have to be recorded or written for each client environment, which makes test maintenance even harder. But the business rules typically sit below the user interface, in a common back-end or middleware, so it is much better to test them there, using an acceptance testing framework such as FIT.
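To illustrate, a business rule such as estimating delivery time can be exercised directly against the domain code with a FIT column fixture, with no UI involved at all. The sketch below is only meant to show the shape of such a test; the DeliveryEstimator class and its method are hypothetical stand-ins for whatever the real business code looks like.

```java
import fit.ColumnFixture;

// A FIT column fixture that calls the business code directly.
// DeliveryEstimator is a hypothetical domain class, used here only
// to show the shape of a below-the-UI acceptance test.
public class DeliveryEstimateFixture extends ColumnFixture {

    // Input columns of the FIT table
    public String destinationCountry;
    public int numberOfBooks;

    // Output column: FIT compares the returned value with the expected
    // value written in the table cell
    public int estimatedDays() {
        return new DeliveryEstimator().estimateDays(destinationCountry, numberOfBooks);
    }
}
```

The accompanying FIT table simply lists destinationCountry, numberOfBooks and estimatedDays() as columns, with one row per expected case, so adding or changing an example of the rule means editing a table rather than recording another click-through.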

Functional unit tests through the user interface are even harder to write and maintain properly. For a start, it is hard to isolate code units in end-to-end calls, and user interface tests (especially web tests) run at least an order of magnitude slower than direct code tests. Proper unit tests have to run very quickly, so that people can execute them after any significant change. If the unit test suite takes a long time to run, it becomes more of an obstacle than a helpful tool.

Testing to save our face

There are, however, some things that it makes very good sense to test through the UI. Workflow and session rules are a good example. The UI layer often controls session rules, such as which features are not accessible if the user is not logged in. Presentation-layer workflow, for example asking the user to register a credit card after the account is verified, is also defined at this level. So user interface tests are the only place where such rules can be validated. However, I don’t believe that there is much benefit in having an extensive test suite for workflow or session rules in most applications. A large suite of user-interface tests will just be a pain to maintain and will break every time the UI changes. If your clients are like mine, this will be quite often. The good thing is that mistakes in the workflow get discovered quickly, because clients are directly exposed to the user interface and will notice such problems.
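As a concrete (if simplified) example, such a session rule could be checked with a short UI test along these lines. The sketch uses Selenium WebDriver and JUnit purely as an illustration; the URL and element names are assumptions, not something taken from a real application.

```java
import org.junit.Assert;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class SessionRuleTest {

    @Test
    public void anonymousVisitorIsSentToTheLoginForm() {
        WebDriver driver = new FirefoxDriver();
        try {
            // Ask for a page that should only be visible to logged-in users
            driver.get("https://shop.example.com/account/orders");

            // The session rule lives in the UI layer, so this is the place
            // to check it: an anonymous visitor should land on the login form
            Assert.assertTrue(driver.getCurrentUrl().contains("/login"));
            Assert.assertTrue(driver.findElement(By.name("username")).isDisplayed());
        } finally {
            driver.quit();
        }
    }
}
```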

On the other hand, since the user interface is the only thing that customers really see in a typical application, a small problem in that part can render the whole system effectively unusable. A minor CSS problem or a typo in the HTML login form can prevent users from logging in at all. These kinds of problems are easy to fix once they are spotted, but they can be quite embarrassing, and they sort of spoil the whole thing. At that point, it does not really matter whether all the business rules are properly implemented and all acceptance and unit tests run successfully. Customers cannot use the web site, so everything else is simply irrelevant. Users only see the user interface, and they can get “blinded” by it.

So, what can we do to prevent such problems? Automate key business scenarios end-to-end and run them once a day (and before every release). I don’t think that any customer-facing web site has more than ten such functions, so these tests can be written for each flavour or customer environment. A typical online bookstore, for example, has only two key scenarios as far as I am concerned: one is registration, and the other is logging in and purchasing a book. Making sure that those things work before the release is a must, so for me those verifications are really face-saving tests.
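A face-saving test for the second scenario could look roughly like the sketch below, again using Selenium WebDriver and JUnit as illustrative tools, with made-up URLs, element names and test data. It only drives the happy path from login to order confirmation and deliberately ignores the correctness of prices, taxes and the like.

```java
import org.junit.Assert;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class PurchaseScenarioTest {

    @Test
    public void customerCanLogInAndBuyABook() {
        WebDriver driver = new FirefoxDriver();
        try {
            // Log in with a known test account
            driver.get("https://shop.example.com/login");
            driver.findElement(By.name("username")).sendKeys("test-customer");
            driver.findElement(By.name("password")).sendKeys("secret");
            driver.findElement(By.name("login")).click();

            // Put a known book into the shopping cart
            driver.get("https://shop.example.com/books/some-known-book");
            driver.findElement(By.id("add-to-cart")).click();

            // Complete the order using the account's stored payment details
            driver.get("https://shop.example.com/checkout");
            driver.findElement(By.id("confirm-order")).click();

            // Only check that the workflow completed, not the business rules
            Assert.assertTrue(driver.getPageSource().contains("Thank you for your order"));
        } finally {
            driver.quit();
        }
    }
}
```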

Face-saving tests often touch quite a big part of the system. The purchasing example will quickly run through login, store browsing, the shopping cart and order completion. It will not check whether those processes are 100% correct, but it will make sure that the basic workflow completes correctly. It is very important to focus on that, not on the validity of business rules such as calculating taxes and delivery times, or else the tests become too complicated. Business rules should be covered and validated by unit and acceptance tests, and since those also must pass before the release, face-saving tests should only check whether the scenario completed successfully. They should also check how long it takes to perform a key scenario, to make sure that a few changed optimisation plans in the database did not suddenly make the entire system appear too slow.
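One lightweight way to add that timing check to the sketch above is JUnit 4’s timeout parameter, which fails the test if the whole scenario takes longer than the given number of milliseconds. The thirty-second budget here is just an assumed figure; pick whatever would count as “too slow” for your customers.

```java
// Same scenario as in the purchase sketch above, but with a time budget:
// the test fails if the end-to-end run takes longer than 30 seconds,
// even when the workflow itself completes. The figure is only an example.
@Test(timeout = 30000)
public void customerCanLogInAndBuyABook() {
    // ...the same Selenium steps as in the purchase scenario sketch...
}
```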

Since customers work directly with the user interface, it may be a very good idea to get them involved in writing and maintaining the UI tests. This is especially useful when multiple customers share the same back-end but have slightly different web applications.