Tuesday, August 15, 2006

Web Performance Engineering [1]

This post is the first in a new series on how to build Web pages, sites and applications that perform well -- by design. I will be combining my own observations with online research and recommendations on best practices contributed by my colleague Ben Rushlo. Ben makes his living measuring Web site performance and giving companies advice on how to improve the performance of their sites and applications.

Although performance is crucial to the effectiveness of online applications, good advice on the subject is surprisingly scarce on the Web, and even scarcer in books about Web design. You do sometimes find small sections (rarely chapters) of such books devoted to performance, but their content is usually weak and almost never adds up to a systematic treatment of the issues. I won't bore you with proof, but I do mention one example below.

On the other hand, the exceptions are certainly worth noting. Recently I found an excellent discussion of 10 Realistic Steps to a Faster Web Site, posted in February 2006 by Alexander Kirk, a Web application programmer in Vienna, Austria. I mention his profession because in my experience, a lot of advice about performance written by programmers completely ignores the central rule of all performance optimization -- to speed anything up, remove the biggest bottleneck.

Applying just this one rule repeatedly (think about it) will always produce the best results. And in the world of distributed systems, the biggest bottlenecks will almost always be related to the way an application uses a network, not to the way a computer (client or server) processes some program code. In other words, the application's logic is usually far more important than the way a programmer coded it.
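
To make this concrete, here is a minimal sketch (my own illustration, not from Kirk's post; the host name is just a placeholder) of what "remove the biggest bottleneck" means in practice: before tuning any code, time the phases of a page fetch separately, and put the work wherever the time actually goes.

    # Time each phase of a plain HTTP fetch separately, then attack
    # whichever phase dominates. The host name is a placeholder.
    import http.client
    import time

    def profile_fetch(host, path="/"):
        t0 = time.time()
        conn = http.client.HTTPConnection(host, 80, timeout=10)
        conn.connect()                 # DNS lookup + TCP handshake
        t1 = time.time()
        conn.request("GET", path)
        response = conn.getresponse()  # server think time, up to first byte
        t2 = time.time()
        body = response.read()         # transferring the payload itself
        t3 = time.time()
        conn.close()
        print(f"connect:    {t1 - t0:.3f}s")
        print(f"first byte: {t2 - t1:.3f}s")
        print(f"download:   {t3 - t2:.3f}s ({len(body)} bytes)")

    profile_fetch("www.example.com")

On most real pages, the connect, first-byte, and download phases will dwarf anything a code profiler could shave off the server -- which is exactly the point about network usage above.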

Kirk obviously understands this. After complaining about an example of misguided performance advice (read the comments), he presents his view of a systematic approach to Web application performance analysis. I will not attempt to summarize it beyond simply listing the outline (sketches illustrating a couple of its steps follow), which is:
1. Determine the bottleneck
1.1. File Size
1.2. Latency
2. Reducing the file size
3. Check what’s causing a high latency
3.1. Is it the network latency?
3.2. Does it take too long to generate the page?
3.3. Is it the rendering performance?
4. Determine the lagging component(s)
5. Enable a Compiler Cache
6. Look at the DB Queries
7. Send the correct Modification Data
8. Consider Component Caching (advanced)
9. Reducing the Server Load
9.1. Use a Reverse Proxy (needs access to the server)
9.2. Take a lightweight HTTP Server (needs access to the server)
10. Server Scaling (extreme technique)
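
As one illustration, step 2 (reducing the file size) is often the cheapest win, because HTML is highly repetitive and compresses extremely well. Here is a quick sketch (mine, not Kirk's; the markup is a made-up stand-in for a real page) of the kind of saving gzip compression delivers:

    import gzip

    # Made-up, repetitive markup standing in for a typical HTML page.
    html = ("<html><body>"
            + "<p>Lorem ipsum dolor sit amet, consectetur.</p>" * 500
            + "</body></html>").encode("utf-8")

    compressed = gzip.compress(html)
    print(f"raw:     {len(html)} bytes")
    print(f"gzipped: {len(compressed)} bytes "
          f"({100 * len(compressed) / len(html):.0f}% of original)")

Real pages compress less dramatically than this contrived example, but cutting text payloads by well over half is routine.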
Kirk's discussion does omit a few topics (subjects for future posts in this series) that I think are important, but it is an excellent starting point. While his wording implies a reactive approach -- he writes about tuning or improving the performance of an existing site -- most of his guidelines relate to best practices in site design and engineering. So there is no reason why they should not be implemented proactively, without waiting for problems to surface.
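
As an example of building one of these practices in from the start, consider step 7 ("Send the correct Modification Data"): a server that honors HTTP's If-Modified-Since header can answer repeat requests with a tiny 304 Not Modified response instead of resending the whole page. Here is a sketch using only Python's standard library (the function name and shape are my own, not tied to any framework):

    import os
    from email.utils import formatdate, parsedate_to_datetime

    def modification_headers(path, if_modified_since=None):
        """Decide between 200 and 304 for a static file by honoring
        the If-Modified-Since request header."""
        mtime = os.path.getmtime(path)
        since = None
        if if_modified_since:
            try:
                since = parsedate_to_datetime(if_modified_since)
            except (TypeError, ValueError):
                pass  # unparseable header; treat it as absent
        if since is not None and int(mtime) <= since.timestamp():
            return 304, {}  # the browser's cached copy is still fresh
        return 200, {"Last-Modified": formatdate(mtime, usegmt=True)}

A full implementation would also send Cache-Control headers and support ETags, but even this much eliminates the payload transfer entirely for every repeat visitor whose copy is still current.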
