Leaving behind the traditional ‘request-wait-reload page’ model, Web 2.0 broke many boundaries and brought online applications and sites much closer to an interactive desktop, but best practices from the desktop world have yet to penetrate the mind of the common Web 2.0 developer. The paradigm shift from server-side to client-side workflow created a void in best practices for Web development. Like any new cool and funky technology, Web 2.0 has many nice new features, but it comes with a set of new problems, at least new in the area of client-side browser development. Simply ignoring these issues can cause big problems, from support headaches to serious security exploits - but there is no need to re-invent the wheel. Most of those problems were solved on the desktop a long time ago.

Here are some common mistakes with Ajax web sites, and how to avoid them.

Clint Eastwood all over again

Spaghetti code and cowboy programming, proven wrong on so many platforms and in so many technologies, now have another chance to show their bad side with Ajax. Since the traditional request-refresh page model is being abandoned, and there is no tried-and-tested workflow framework to use in JavaScript, many developers have given in to the temptation to mix layout and workflow logic. Once the applications move into support mode, and change requests start coming in, things (will) simply fall apart. The word "will" is in brackets because many Web 2.0 applications are so new that the pain of support has not tested them yet - but, to quote J.R.R. Tolkien, 'It will not do to leave a live dragon out of your plans if you live near one'.

Having a non-visible state, responsible for the workflow behind the layout, is a proven best practice that has survived the test of time - it is now almost thirty years since Model-View-Controller was first described, and even if there is no ready-made popular MVC framework for JavaScript, a little discipline in coding can go a long way.
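
As a rough illustration, even a simple model object kept between the server responses and the DOM code separates workflow from layout; the names below (catalogueModel, renderBookList, the 'bookList' element) are invented for this sketch and not part of any framework.

  // A rough sketch of keeping a non-visible model separate from the code
  // that touches the DOM. All names here are placeholders.
  var catalogueModel = {
    authorId: null,
    books: [],
    setBooks: function (authorId, books) {
      this.authorId = authorId;
      this.books = books;
      renderBookList(this);             // the view only reads from the model
    }
  };

  function renderBookList(model) {
    var items = '';
    for (var i = 0; i < model.books.length; i++) {
      items += '<li>' + model.books[i].title + '</li>';
    }
    document.getElementById('bookList').innerHTML = '<ul>' + items + '</ul>';
  }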

The weakest link

Another common anti-pattern in Web 2.0 applications is taking the Ajax approach too far – so far that it becomes impossible to use links for navigating through the application. Hyperlinks are one of the main reasons why the web became so popular – they are incredibly flexible, and can be bookmarked, stored, exported and interlinked, yet all those fine features are quickly disregarded in favour of a bit of background processing.

A typical mistake with links is not making important sections of the site accessible by hyperlinks. It's nice not to refresh the whole display when someone goes from the main page to the product catalogue, selects a book and then clicks on the author's name to see all their work. However, there is no reason to make users do all that every time they want to see those books. Links are also good because they can easily be used from various parts of the Web site – for example, having a link for an author's books allows a drill-down to that page from search results, author listings and the home page.

Again, using a non-visible background state allows the application to easily initialise a 'saved game' from GET/POST request parameters. The second part of the solution is to enable users to save their game, which is also relatively easy if the state is kept in the background. Google Maps offers a good example – the 'link to this page' button gives the visitor the navigation state serialised into GET parameters. If the navigation state cannot fit into the URL, there is the option of sending it to the server for storage and returning just a state ID, but this is typically a signal that the browser has taken over too much responsibility.
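
For illustration, something along these lines is usually enough: the state that drives the page is turned into a query string for a 'link to this page' feature, and read back on load to initialise the background state. The parameter names here are hypothetical.

  // Sketch: serialise the navigation state into GET parameters, and read it
  // back when the page loads.
  function stateToUrl(state) {
    return location.protocol + '//' + location.host + location.pathname +
           '?author=' + encodeURIComponent(state.authorId) +
           '&page=' + encodeURIComponent(state.page);
  }

  function stateFromUrl() {
    var state = { authorId: null, page: 1 };
    var pairs = location.search.substring(1).split('&');
    for (var i = 0; i < pairs.length; i++) {
      var pair = pairs[i].split('=');
      if (pair.length !== 2) continue;
      if (pair[0] === 'author') state.authorId = decodeURIComponent(pair[1]);
      if (pair[0] === 'page') state.page = parseInt(decodeURIComponent(pair[1]), 10);
    }
    return state;
  }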

Too submissive

There are a few common anti-patterns related to the use of forms, again probably arising because the traditional submit-and-reload model is being abandoned, with no established best practice to replace it.

A common way to handle forms in the background is to intercept the submission with an onSubmit handler. After processing, handlers typically return false to stop the form from actually being submitted. However, a common mistake is not taking care of exceptions – if the onSubmit handler throws an exception, the form will get submitted and the page will reload. If you don't have to, don't mix and match – just return false in the onSubmit handler, and use a custom procedure for background processing (called by a button click, not form submission).
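
A minimal sketch of that split might look like this; the element ids are invented, and submitInBackground stands for the background procedure (one possible version of it appears later in this article).

  // The form's onsubmit always returns false, and background processing is
  // triggered from a button's onclick instead.
  document.getElementById('orderForm').onsubmit = function () {
    return false;                       // never let the browser submit the form
  };

  document.getElementById('orderButton').onclick = function () {
    try {
      submitInBackground(document.getElementById('orderForm'));
    } catch (e) {
      // an unexpected error is at least visible, and the form still
      // cannot fall through to a normal submission
      alert('Could not process the request: ' + e.message);
    }
  };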

Introducing asynchronous form processing requires a lot more care over user interaction. With normal form submission, users get instant notification that their request is being processed, and that does not happen automatically with Ajax forms. Neither does the browser stop the user from re-submitting the same form, or from editing a field once it has already been submitted. If the request ends successfully, in many cases the input fields should be hidden or at least cleared. Asynchronous processing, if done properly, looks much better, but involves more work. Here is a good starting point for the form processing model, to be modified as needed:

  • disable all form input elements, and change the cursor to 'wait'.
  • send the asynchronous request.
  • if the request ends successfully, enable the form elements again, clear all field contents, show a success message and return the cursor to normal.
  • if the request ends with an error, enable the form elements again without clearing field contents, show the error message and return the cursor to normal.

It is good practice to provide a 'Cancel' button if the action can take a long time (and the business rules allow cancelling), which would simply re-activate the form.
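
As a minimal sketch of that model with a plain XMLHttpRequest (the form field name, the 'formStatus' element and the '/saveOrder' URL are assumptions made for this example):

  function createRequest() {
    if (window.XMLHttpRequest) return new XMLHttpRequest();
    return new ActiveXObject('Microsoft.XMLHTTP');    // older Internet Explorer
  }

  function setFormEnabled(form, enabled) {
    for (var i = 0; i < form.elements.length; i++) {
      form.elements[i].disabled = !enabled;
    }
    document.body.style.cursor = enabled ? 'default' : 'wait';
  }

  function submitInBackground(form) {
    setFormEnabled(form, false);                      // step 1: block the form
    var request = createRequest();
    request.open('POST', '/saveOrder', true);         // step 2: send the request
    request.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
    request.onreadystatechange = function () {
      if (request.readyState !== 4) return;
      setFormEnabled(form, true);                     // re-enable either way
      if (request.status === 200) {
        form.reset();                                 // step 3: clear on success
        showMessage('Saved.');
      } else {
        showMessage('Error: ' + request.status);      // step 4: keep field contents
      }
    };
    request.send('comment=' + encodeURIComponent(form.comment.value));
  }

  function showMessage(text) {
    document.getElementById('formStatus').innerHTML = text;
  }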

The third common mistake with forms is using GET for everything. Browsers can (and typically will) cache GET requests. So, when only GET is used, there is typically also some form of cache prevention, either on the server (HTTP headers) or on the client (a request timestamp added to the URL to make it unique). The browser cache is a good thing if used properly, and there is no reason to turn it off completely - a much simpler solution is to use GET for documents which are not user-specific, and POST to execute procedures and other user-specific requests.
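
In code, the distinction is simply a matter of which verb each call uses; the URLs below are illustrative, createRequest() is the helper from the form-processing sketch above, and response handlers are omitted for brevity.

  // GET for a shared, cacheable document - the same for every user,
  // so the browser cache can do its job
  var bookRequest = createRequest();
  bookRequest.open('GET', '/catalogue/book/1234', true);
  bookRequest.send(null);

  // POST for a procedure call that changes server-side state -
  // not cached by the browser
  var basketRequest = createRequest();
  basketRequest.open('POST', '/basket/add', true);
  basketRequest.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
  basketRequest.send('bookId=1234');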

Don't hesitate to use old-fashioned submission if it saves you trouble – you can provide asynchronous feedback in that case as well. A good practice is to post to an invisible IFrame, and then use Ajax to provide progress updates. This is especially effective when uploading files through forms.
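
A rough sketch of that combination, assuming the server exposes some way to report upload progress (the 'uploadFrame' iframe, the '/uploadProgress' URL and its plain-text percentage response are all invented for this example):

  function uploadWithProgress(form) {
    form.target = 'uploadFrame';        // <iframe name="uploadFrame" style="display:none">
    form.submit();                      // normal submission, but the page does not reload
    var timer = setInterval(function () {
      var request = createRequest();    // helper from the form-processing sketch
      request.open('GET', '/uploadProgress', true);
      request.onreadystatechange = function () {
        if (request.readyState !== 4) return;
        showMessage('Uploaded ' + request.responseText + '%');
        if (request.responseText === '100') clearInterval(timer);   // finished
      };
      request.send(null);               // the server should mark this response as non-cacheable
    }, 1000);
  }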

Too much concurrency

Browsers limit the number of concurrent requests to the same server, so too much Ajax might kill the user experience, especially if requests start timing out. And this limit is relatively low – Internet Explorer, for example, only allows two simultaneous connections.

If XSLT or some similar template type is being used to render HTML from XML on the client, a good way to reduce concurrency is to cache the templates in the client page – this may significantly improve the application performance.
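
One way to do that, sketched here for browsers that support XSLTProcessor (Internet Explorer needs its own transformNode path, omitted here); the template URL is invented and createRequest() is the helper from the earlier sketch.

  // Load the XSLT template once and keep it in a page-level variable, so
  // repeated renderings do not generate new requests.
  var cachedTemplate = null;

  function renderWithTemplate(xmlDocument, targetId) {
    if (cachedTemplate === null) {
      var request = createRequest();
      request.open('GET', '/templates/bookList.xsl', false);   // one-off, synchronous
      request.send(null);
      cachedTemplate = request.responseXML;
    }
    var processor = new XSLTProcessor();
    processor.importStylesheet(cachedTemplate);
    var fragment = processor.transformToFragment(xmlDocument, document);
    var target = document.getElementById(targetId);
    target.innerHTML = '';
    target.appendChild(fragment);
  }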

A common error is also to expect concurrent responses to return in the order of the requests. The style may come back before the content, for example, and that will happen quite often. To handle inter-dependent requests, either post both and wait for both to finish, or post the second from the onSuccess handler of the first.
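
The second option looks roughly like this with plain XMLHttpRequest completion handlers; the URLs and the applyStyle/showContent helpers are illustrative.

  // Request the content only after the style has arrived, instead of assuming
  // the two responses return in the order they were sent.
  function loadStyleThenContent() {
    var styleRequest = createRequest();
    styleRequest.open('GET', '/catalogue/style', true);
    styleRequest.onreadystatechange = function () {
      if (styleRequest.readyState !== 4) return;
      applyStyle(styleRequest.responseText);
      var contentRequest = createRequest();          // the second request starts only now
      contentRequest.open('GET', '/catalogue/content', true);
      contentRequest.onreadystatechange = function () {
        if (contentRequest.readyState !== 4) return;
        showContent(contentRequest.responseText);
      };
      contentRequest.send(null);
    };
    styleRequest.send(null);
  }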

Traal Security

Ajax makes it really easy to call back-end APIs, so developers may be tempted to expose a lot more than they should, thinking it will go unnoticed because the URLs are not displayed in the address bar. I've seen SQL scripts being sent over the wire, and authorisation controlled in the browser. It is, undoubtedly, easier to program the application that way, but it leaves a big security hole open. Web users are not like the Ravenous Bugblatter Beast of Traal, and someone will quickly exploit the relaxed security of an Ajax application. SQL queries are easy to spot using Tamper Data or a TCP proxy tool, and with enough examples it is relatively easy to guess similar generic processing APIs, if no special care was taken to hide or encrypt methods. A much better way to fight exploits is to handle security on the back end, where it should be handled in the first place, and not to allow direct access to critical resources from the browser.

A similar, but arguably less painful, mistake is relying on JavaScript to validate form data – even with the best intentions, an unrelated JS error on the page may stop the validation script from executing before the form gets submitted. Front-end validation can be done as a quick and responsive way to notify the user about common problems, but the data will have to be validated on the back end again.
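
In other words, client-side checks like the one below are purely a usability aid (the field name is invented for this sketch), and the server repeats the same rules before storing anything:

  function validateOrderForm(form) {
    if (form.email.value.indexOf('@') === -1) {
      alert('Please enter a valid e-mail address.');
      return false;                     // quick feedback, not a security measure
    }
    return true;                        // the back end still re-validates
  }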

With Ajax, the browser can and should handle part of the workflow, and can even execute a few business rules. But the ultimate responsibility for security, sessions and business rules must stay on the server – that is the only part of the system really under our control.

Checklist for an Ajax release

Here is my 10-point check-list of things that really should not be forgotten for Ajax applications:

  1. Our pages use MVC or a similar principle, and we don't mix layout with code.
  2. Our pages read request parameters and use them to initialise the page.
  3. Our pages enable users to build direct links to the content.
  4. We care about security - not publishing the API does not mean that people will not misuse it.
  5. Our pages block forms and provide feedback during background form processing, and allow users to cancel long requests.
  6. As far as business rules and security are concerned, the entry-point to the system is not the browser, but the back-end application.
  7. Our pages cache relatively static content, like XSLT templates, and do not request them all the time.
  8. We don't expect asynchronous responses to come in the same order as requests.
  9. We use POST for procedure calls and user-specific processing, GET for other content requests.
  10. The browser handles front-end user interaction, layout and workflows, but not much more.