Golden rule of web caching

Effective content caching is one of the key features of scalable web sites. Although there are several out-of-the-box options for caching with modern web technologies, a custom-built cache still provides the best performance.

The primary aim of caching is to speed up processing and reduce load on critical resources. In the case of web sites, the most critical resource is usually the database, so dataset caching is a common way to speed up a site. It is relatively easy to implement, and it can be introduced almost transparently into an existing system. But that is just painting the corpse. First of all, dataset caching requires the same source data to be re-processed over and over, taking up unnecessary CPU cycles. Also, a lot of data from recordsets is often not displayed directly, so dataset caching can waste a lot of memory. In fact, I’ve seen sites where this approach caused serious problems by fragmenting memory under heavy load and leaving the application without enough memory to run.

Although a web site will, no doubt, benefit significantly even from a dataset cache, it will scale much better if the cached content is closer to the final product. How much closer depends on the application. The golden rule of web caching is: for caching to be most effective, cache as close to the final product as possible.

Ideally, cache content into static files

The best option for caching, if possible, is to use static files on disk and let web servers publish those files directly. All web servers process static files very efficiently, which leaves more resources for content that genuinely needs dynamic processing. Content on disk does not take up valuable memory space needed by the web application, and it minimises the CPU cost per request.

Here are the results of a stress test I recently ran on a fairly powerful web server machine running IIS6 over a gigabit network. I measured the number of requests per second during a two-minute stress load while serving a couple of files using several techniques.

| Requests/s \ file size (KB)  | 64       | 32       | 16       | 8         | 4         | 2         | 1          | 0.5        |
| ---------------------------- | -------- | -------- | -------- | --------- | --------- | --------- | ---------- | ---------- |
| Theoretical network capacity | 2,048.00 | 4,096.00 | 8,192.00 | 16,384.00 | 25,600.00 | 51,200.00 | 102,400.00 | 204,800.00 |
| Static                       | 1,786.16 | 3,531.10 | 7,013.06 | 11,545.83 | 13,154.12 | 14,278.29 | 14,336.17  | 14,371.84  |
| ASP-SSI                      | 1,752.93 | 3,514.08 | 6,823.75 | 7,599.36  | 8,057.98  | 7,874.10  | 8,246.19   | 8,358.91   |
| ASP-Include                  | 1,781.35 | 3,527.30 | 5,686.10 | 6,053.17  | 6,309.94  | 6,372.38  | 6,449.43   | 6,463.96   |
| ASP-Nocache                  | 233.93   | 493.50   | 925.55   | 1,562.07  | 2,622.32  | 3,507.78  | 4,001.26   | 4,309.78   |
| ASP-Cache                    | 1,264.07 | 2,540.00 | 3,571.16 | 4,878.81  | 5,265.62  | 5,497.21  | 5,653.36   | 5,708.31   |
| ASPX-Nocache                 | 1,560.03 | 2,000.51 | 3,641.76 | 4,500.16  | 5,390.68  | 5,770.47  | 6,561.65   | 6,150.49   |
| ASPX-Cache                   | 1,782.68 | 3,507.56 | 6,612.81 | 9,301.24  | 9,542.29  | 9,100.19  | 11,591.58  | 11,375.98  |
| ASPX-Include                 | 1,777.88 | 3,491.74 | 6,447.18 | 8,964.44  | 9,359.95  | 9,281.08  | 11,304.62  | 11,198.98  |
| ASPX-SSI                     | 1,790.19 | 3,568.56 | 6,497.78 | 9,032.19  | 9,267.66  | 9,707.82  | 11,224.71  | 11,000.50  |
  • Static – file served directly by IIS from disk
  • ASP-SSI – file included using #include SSI directive in an ASP file (turning on the SSI engine)
  • ASP-Include – file included using #include SSI directive in an ASP file, but with an ASP if (true) statement around the include (so turning on the ASP engine as well)
  • ASP-Cache – file read from disk using FSO, but cached in an Application object for up to 10 seconds.
  • ASP-Nocache – file read from disk on every request using FSO
  • ASPX-SSI – file included using #include SSI directive in an ASPX file
  • ASPX-Include – file included using #include SSI directive in an ASPX file, with an additional C# if (true) block around the include
  • ASPX-Cache – file read from disk but cached using ASP.NET page caching for 10 seconds
  • ASPX-NoCache – file read from disk on each request, page caching turned off
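The ASP-Cache test case (read from disk once, then serve from memory for up to 10 seconds) can be sketched as a small timed cache. This is a minimal illustration of the technique, not the actual test harness; the class name `TimedFileCache` is my own:

```python
import time

class TimedFileCache:
    """Cache file contents in memory for a fixed number of seconds,
    mimicking the ASP-Cache test case (an Application-object cache
    with a 10-second expiry)."""

    def __init__(self, ttl_seconds=10):
        self.ttl = ttl_seconds
        self._entries = {}  # path -> (expires_at, content)

    def read(self, path):
        now = time.monotonic()
        entry = self._entries.get(path)
        if entry is not None and entry[0] > now:
            return entry[1]  # cache hit: no disk access at all
        # Cache miss or expired entry: go to disk and refresh the cache.
        with open(path, "rb") as f:
            content = f.read()
        self._entries[path] = (now + self.ttl, content)
        return content
```

The trade-off is visible in the table: this kind of cache avoids disk reads, but every request still pays the full cost of the dynamic engine, which is why it cannot catch up with plain static files.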

The difference between “Static” and “ASPX-Cache” is the pure overhead of the ASP.NET engine. For a 4 KB file, we get a 38% performance increase from a file-based cache even over the ASP.NET page caching mechanism. Compared to a simple cache based on the Application object in ASP, the increase is 150%. And this is just for serving completely static content; a typical web application would pull the content from a database or format it, leading to much bigger differences. Also interesting is the difference between ASP-Include and ASP-SSI, which is effectively the cost of turning on the ASP engine (having an if (true) statement in the ASP code). The good news is that the corresponding difference in ASP.NET is negligible.

With larger files we hit the bandwidth bottleneck first, so the benefits of a file-based cache are less visible. However, with the growing number of Ajax-based sites, pages are being split into smaller independent requests, so the overhead becomes quite noticeable. Although ASP.NET page caching is a great utility, given that it requires almost no work to implement, a file-based cache can also be shared across a server farm, allowing multiple machines to use the same content.

Caching into static files also brings the benefit of automatic support for last-modification timestamps. For browsers with HTTP 1.1 support, if the same files are downloaded often (live Ajax updates), the server can reply with just a “304 Not Modified” header, without sending any file content at all. A cache that works directly from disk files also allows us to use lightweight HTTP servers such as lighttpd to get some extra performance from the same hardware.
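The 304 exchange is something web servers do automatically for files on disk, but a minimal sketch shows what happens under the hood. The function name `serve_with_304` is my own; it models the server's conditional-GET check against the file's modification time:

```python
import os
from email.utils import formatdate, parsedate_to_datetime

def serve_with_304(path, if_modified_since=None):
    """Return (status, headers, body) for a static file, answering
    "304 Not Modified" when the client's cached copy is still current.
    A sketch of the HTTP 1.1 conditional GET that disk-based caching
    gets for free."""
    mtime = int(os.stat(path).st_mtime)
    last_modified = formatdate(mtime, usegmt=True)
    if if_modified_since:
        client_time = parsedate_to_datetime(if_modified_since)
        if int(client_time.timestamp()) >= mtime:
            # Client already has this version: send headers only, no body.
            return 304, {"Last-Modified": last_modified}, b""
    with open(path, "rb") as f:
        body = f.read()
    return 200, {"Last-Modified": last_modified}, body
```

For frequently polled Ajax endpoints this saves the entire response body, which is often most of the bandwidth.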

Not just for pre-publishing

A file-based cache is often used for pre-publishing content, but there is a simple trick that makes this technique work for on-demand publishing, even for content that must often be generated on the fly. Most web servers I have worked with, including IIS and Apache, allow us to override the 404 “Not Found” handler. This can be used to implement the cache-miss scenario: when a user requests a file that has not yet been cached, IIS will not find it on disk and will call the 404 handler; we can then generate the file using dynamic processing (ASP/ASP.NET), store it to disk for the next request, and send the content back to the client. The same technique can be used with Apache and PHP.

An important note for this technique is that requests must be mapped completely to the file path of the URL, not to GET parameters. So, for example, instead of using http://myserver/search.aspx?query=fitnesse, we would use something like http://myserver/search/fitnesse.query. A 404 handler would then be set up on the search folder to perform a search based on the requested URL path and store the result into the fitnesse.query file. You can generally choose any extension you want, but avoid ASP/ASPX and other standard extensions, to prevent IIS from turning on ASP/ASP.NET processing when serving those files. HTML and TXT extensions are also not a good choice: IE uses “smart” caching by default, so HTML files are downloaded just once per page. If you fire a background Ajax request twice for an HTML file, only the first will actually go to the server. TXT and HTML files may also be cached by transparent proxy systems, so it’s best to avoid those extensions as well.
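The mapping between query parameters and cacheable paths needs to work in both directions: outgoing links must use the path form, and the 404 handler must recover the original parameter from the path. A minimal sketch, with function names of my own choosing and the `.query` extension from the example above:

```python
import posixpath

def query_to_path(term):
    """Map a search term to a cacheable URL path instead of a GET parameter,
    e.g. /search.aspx?query=fitnesse becomes /search/fitnesse.query."""
    return "/search/%s.query" % term

def path_to_query(url_path):
    """Inverse mapping, used by the 404 handler on the search folder to
    recover the search term from the requested file path."""
    name = posixpath.basename(url_path)
    if not name.endswith(".query"):
        raise ValueError("not a cached search path: %r" % url_path)
    return name[: -len(".query")]
```

Real terms would also need URL-safe encoding of characters that cannot appear in file names, which is omitted here for brevity.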



7 thoughts on “Golden rule of web caching”

  1. Excellent article! Thanks for sharing. I’m a little confused, though, by the results table: I don’t understand the file size part. The column header says “file (KB)”, so does a value like 1,786.16 mean 1.7 MB (approximately), or is that the size in bytes? I see that number increasing as the requests per second decrease. Were you serving files of larger sizes? Could you please give more details on how the test was done? Thanks a million!

  2. Oh, sorry, I also wanted to ask about another thing, you mention here that IE uses smart caching with .html files, so, are you saying that IE ignores any cache-control headers for .html files?

  3. Hi – 1,786.16 means that the stress test recorded 1,786.16 requests per second for a 64 KB file.

    Re IE and HTML: it does not honour the headers every time; we had big problems with that. When we renamed the files to .xml, everything worked fine.

  4. Thanks a lot, it’s actually much clearer now. I could only suggest that you also add Response.WriteFile(), as it could very possibly be used in an application for serving static files. I’m currently working on a site that includes a social network section, so I have profiles for users. I know that IIS can serve static pages faster, but unfortunately I have some dynamic parts on the profile pages (e.g. the sign-out link in the page header if the user is logged on, etc.), so I’m considering using Response.WriteFile() to work around this.

    One more thing, could you please add the machine specs (esp. the CPU, memory and OS)? This would be really useful.

    Again, great article, Keep up the good work!

  5. Well, actually there’s something I wanted to point out. AFAIK, Windows caches recently opened files, but unfortunately I don’t have enough information on how exactly it does this (I tried to search the web, but there doesn’t seem to be much information available on the subject, so I still need to do some more searching). Could you please give more details on this? Were you serving a limited number of files? This is probably not the real situation on production servers, so could the test have been affected by Windows file caching?

    Sorry for spamming your comments section :)

  6. One more thing, could you please add the machine specs (esp. the CPU, memory and OS)? This would be really useful.

    This was a while ago, so I may not remember all the details, but off the top of my head, web servers used in the test were dual-core 2.4ghz machines with 4 GB RAM running IIS6 on Windows 2003 server.

    Were you serving a limited number of files?

    Yes, I think it was about 10 files or something like that. IIS also caches recently opened files, and that is exactly why frequently accessed stuff should be put into static files.
