NOTE: All posts in this blog have been migrated to Web Performance Matters.
All updated and new content since 1/1/2007 is there. Please update your bookmarks.

Monday, August 28, 2006

Web Performance Engineering [4]

Continuing my series of posts on Web performance guidelines, today I'm reviewing one chapter of a new book, Deliver First Class Web Sites: 101 Essential Checklists, by Shirley Kaiser of SKDesigns, published by Sitepoint in July 2006.

Sitepoint is known among Web developers for its practical publications, and Kaiser deserves credit for including a full chapter on site performance alongside all the other useful advice in this book.

As its title states, this book does not cover topics in great depth. Each checklist item is stated, then discussed briefly. But Kaiser does include enough detail to justify each recommendation, and often includes examples of how to do it. Chapter 11 (Web Site Optimization) is 19 pages long, presenting 41 recommendations arranged into six checklists:
Creating Clean, Lean Markup
Minimizing URLs
Optimizing CSS
Optimizing JavaScript
Supporting Speedy Server Responses
Optimizing Images, Multimedia, and Alternative Formats
However, once I started reviewing Kaiser's checklists, I soon noticed a remarkable similarity between her choice of topics and those covered in Andrew King's book, Speed Up Your Site -- the subtitle of which also happens to be Web Site Optimization.

A bit more scrutiny, summarized in this spreadsheet, revealed similarities in the naming, wording, and organization of the checklist items too. Although Kaiser cites Speed Up Your Site in her first paragraph, she does not actually recommend it explicitly, or mention any collaboration with Andrew King. But she seems to have relied heavily on that one source as the main inspiration for her Web Site Optimization chapter.

To give Kaiser her due, rather than simply parroting all of King's recommendations, she has ignored the more extreme ones. All the same, her material still shares most of King's limitations and omissions, many of which I listed earlier when I reviewed his book, and which I plan to cover in future posts in this series.

In conclusion, if you'd like a handy summary of the material in King (except for his excellent discussion of the Psychology of Performance), Kaiser's book has it, plus about 300 more pages of useful checklists on other topics. I recommend it, because I don't think you can go far wrong with any book from Sitepoint. If you want to read more about the same topics, you can find Speed Up Your Site on sale at Amazon these days for under half price. But don't assume that either book will give you a well-rounded picture of Web Site Optimization issues and techniques.

For more ideas about that, continue reading Performance Matters, and I'll do my best to fill in the holes.

Friday, August 25, 2006

Are Online Retailers Ready for Business?

Every year, more and more shoppers turn to the Web for their holiday shopping, with total sales in 2006 projected to be in the multi-billion dollar range. But will online retailers be up to the task?

My team at Keynote recently studied 25 top online retailers in three categories: Books and Music, Electronics, and Apparel. The study involved measuring a typical customer's navigation path through each site -- from Home Page, to Search, to Product Details, and finally to Checkout.

Using computers ("agents") that emulate the behavior of a customer using an IE browser, we measured the exact same sequence of steps on each site from 10 locations throughout the US, over various connection speeds, every hour for a month (mid-May through mid-June, 2006). This produces a large data sample (over 6,000 data points per site), which we then subject to extensive statistical analysis, so that by the time we publish the report we have a reliable picture of each site's health and readiness.
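As a rough sanity check on that sample size, the schedule described above works out as follows (a back-of-the-envelope sketch; the exact measurement mix is Keynote's and is not shown here):

```python
# Back-of-the-envelope check of the sample size described above.
# Assumptions (illustrative, not Keynote's exact schedule): 10 agent
# locations, one measurement per hour from each, over a 30-day window.
locations = 10
measurements_per_day = 24  # one per hour
days = 30

total = locations * measurements_per_day * days
print(total)  # 7200 scheduled measurements per site
```

A scheduled 7,200 runs per site, less the runs that fail or are discarded, is consistent with the "over 6,000 data points per site" cited above.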

This is the second year we've conducted this particular study, and the 2006 results were surprising. While the top-ranked sites continue to provide excellent service -- almost perfect availability, excellent download speeds, and very little inconsistency -- the lowest-ranked sites had some serious failings. Without doubt, the performance problems we saw were bad enough to dissatisfy customers and impact the bottom line. They fall into three general areas:
  1. Outages:
    We consider a transaction (the sequence of steps) to be unavailable if any part of the purchase path fails so that the customer is unable to complete their transaction. And when 30% or more of our measurements during an hour report a site as unavailable, we count it as an outage.

    Only 4 of the 25 measured sites registered no outages during the month, while some of the least reliable sites had more than 15 hours of downtime. These hours are during the peak period each day (8 a.m. to midnight EDT). Apparel sites were especially outage-prone, averaging over 4 outages each.

    We all know how frustrating it can be to try to buy something online only to have the process fail midway through, or to not even be able to get to the site's Home Page. I'm not sure which is the more annoying. And if these major retail sites can't stay up under relatively light summertime loads, how will they respond when their traffic increases dramatically during the holiday season? In the increasingly competitive retail marketplace, being down for even a single hour at that time of year can have a significant financial and brand impact.

  2. Load Handling:
    One area of our analysis involves a site's ability to keep up with its current load, without any performance degradation. On a site that is built with sufficient capacity, performance does not change noticeably as traffic fluctuates during the day. At this time of year, most retail sites should be idling, because traffic is light. Indeed, we saw that the best sites, Barnes and Noble and Gap, had virtually no slowdowns each day, indicating that they can handle their present load comfortably.

    Of course, it takes a controlled load-testing project to discover what happens when traffic volumes are doubled, tripled, or quadrupled. But judging from our measurements, the best sites do appear well prepared to handle increases in their load. In contrast, several sites already display significant load-handling issues, slowing down by as much as 100% each day. For example, a page that normally appears in 3 seconds would take 6 seconds under load. Not only does this kind of sluggish behavior annoy current customers, it suggests that these sites are highly likely to crumble under the increased load once holiday shoppers begin to fill their online storefronts -- unless something is done before then to increase capacity.

  3. Dial-Up Performance:
    While many of us can't remember the last time we used dial-up, a large percentage of users still shop over slower connections. Even those who aren't on dial-up may be stuck with a poor wireless connection, a slow hotel network, or a satellite link that isn't much faster. So site designers shouldn't forget about their bandwidth-challenged customers.

    Yet that's exactly what seems to be happening. Many retail sites seem set on alienating the dial-up user. The sites we tested averaged 30-40 seconds per page for dial-up users, which adds up fast in a 10-page shopping transaction. In some cases we even clocked Home Pages taking over 100 seconds to download. A few sites got it right, Dell being a good example: it had the fastest Home Page in the study for dial-up users because it serves a slightly trimmer page to this audience.
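The outage rule described in item 1 can be sketched as a simple classifier (a minimal sketch: the 30% threshold and hourly grouping follow the definition above; everything else is illustrative):

```python
# Classify an hour of measurements as an outage, per the rule above:
# an hour counts as an outage when 30% or more of its measurements
# report the transaction as unavailable.
OUTAGE_THRESHOLD = 0.30

def is_outage(results):
    """results: list of booleans, True = transaction completed."""
    if not results:
        return False
    failure_rate = results.count(False) / len(results)
    return failure_rate >= OUTAGE_THRESHOLD

# One hour of measurements, e.g. from 10 agent locations:
print(is_outage([True] * 7 + [False] * 3))  # True  (30% failed)
print(is_outage([True] * 9 + [False] * 1))  # False (10% failed)
```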
In short, the best sites are ready, and their performance sets the standard for the retail industry. But even among the leading retailers we studied, many could do a lot to improve the quality of the online shopping experiences they offer.

More importantly, if one of these sites is struggling now, what will happen in December, when I desperately need to buy my last-minute gifts? I'm not optimistic they will be ready to take my business. And I absolutely hate fighting the crowds at the local shopping mall, so -- along with thousands of others like me -- I'll be buying from their competitor's site.

Update #1: To hear a longer (12.5 min) discussion of these results and related issues in the retail industry, listen to a StorefrontBacktalk Week In Review Audiocast discussion I took part in. Click on the link for the section on whether the major E-Commerce sites are ready for the holiday rush or the holiday crash.

Update #2: I was also interviewed about the study today (8/25) on CNBC's Closing Bell program. As you might expect, that interview was a lot shorter than the StorefrontBacktalk Audiocast. You can view a replay online; the username is keynote and password is keynote0895.

Tuesday, August 22, 2006

Web Performance Engineering [3]

Continuing my series on Web performance guidelines, today I am reviewing another book -- Speed Up Your Site, by Andrew B. King, published by New Riders in 2003.

A while back, when I was reviewing Web Usability Books, I promised to cover Speed Up Your Site, but never got around to doing so -- for reasons I will explain. A full table of contents listing all 19 chapters is available online; in summary, the book has six parts:
Part I - The Psychology of Performance (38 pages)
Part II - Optimizing Markup: HTML and XHTML (99 pages)
Part III - DHTML Optimization: CSS and JavaScript (111 pages)
Part IV - Graphics and Multimedia Optimization (85 pages)
Part V - Search Engine Optimization (39 pages)
Part VI - Advanced Optimization Techniques (79 pages)
Part of my difficulty in reviewing this book was that I have mixed feelings about it. Naturally, I am always pleased to see an entire book devoted to performance issues. Also, Part I is particularly good. Containing 77 references, it is a very well-researched survey of an important subject rarely covered in books about Usability. On the other hand, I am not nearly as impressed with the rest of its advice and guidelines, for two reasons: their correctness, and their completeness.

First, correctness. Regarding the actual content, my issue is not that Speed Up Your Site contains factual errors. As far as I can tell, its content is accurate. But some of its recommendations -- although they may indeed improve performance -- are so unnatural that they are not really the correct way to tackle the problem. This concern is summarized by Alexander Bunkenburg in this review on Amazon.com:
This book concentrates almost exclusively on sending fewer bytes from the server to the browser. It gives a large collection of tricks how to write shorter html, xhtml, css, and JavaScript. Some of these tricks are useful. Others however go against standards, and some seriously go against maintainability. I'd be reluctant to give this book to my team. One may be tempted into shaving off bytes, spending a big effort and yet producing unmaintainable code.
The problems that can result from deliberately violating standards are highlighted by David Rose in another Amazon review:
In today's world, where "standards based" coding is becoming more prevalent and adherence to the W3C standards for HTML coding is being recommended, this book just grated on me. While there is a great deal of great information, there are also a large number of "gotchas" to watch out for as well.

The book proposes to use HTML tags without their corresponding closing tags, not to use required elements whenever possible, avoid using quotes in HTML tags, and many other ways of creating "non-valid" code. This will "optimize" your code a bit more by reducing the characters in it, but it will also create problems for you in the future.

In summary, while the book does give a lot of good information, it often steers you away from standard code. If you are unsure what is considered "standard" and required for creating valid XHTML/CSS, you are best served skipping this book as it will teach you to create invalid code.
Bunkenburg's review also touches on my second area of concern -- completeness. Any book about speeding up Web site performance should (in my view) at least mention all the important topics, to let the reader know what options exist. Some important topics that receive little or no coverage in Speed Up Your Site are:
  • Page Types: All pages are not created equal, and users' tolerance for delays changes depending on where they are in their interaction with a site. So a single page design approach cannot be applied to all pages.
  • Maximize content reuse: A common mistake as sites grow is to use many different names (URLs) for the same thing, reducing the efficiency of browser caching.
  • Ratio of HTML base to content: There is a significant difference between the way browsers handle the base (or index) portion of the page, and referenced content elements, which can affect page download time.
  • Image resizing: For efficient page rendering, it helps to specify HTML HEIGHT and WIDTH tags for all embedded images. And for download speed, avoid resizing images in the browser.
  • Performance of SSL: All online business sites use encryption, and pages that use SSL encryption incur significant overheads. This is an issue that demands a separate section in any book about site performance.
  • Content Delivery Networks (CDNs): Akamai went public in 1999, and was widely used by 2002. Mirror Image, Cable and Wireless (formerly Digital Island), and Speedera also offered CDN services in 2002. How can any book about speeding up your site not even mention this technology?
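The image-resizing point, at least, lends itself to a quick automated check. Here is a minimal sketch (the sample markup is invented for illustration) that flags img tags missing explicit dimensions, using Python's standard html.parser:

```python
from html.parser import HTMLParser

class ImgSizeChecker(HTMLParser):
    """Flag <img> tags that omit width or height, which forces the
    browser to reflow the page layout as each image arrives."""
    def __init__(self):
        super().__init__()
        self.unsized = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            names = {name for name, _ in attrs}
            if not {"width", "height"} <= names:
                self.unsized.append(dict(attrs).get("src", "?"))

# Illustrative markup, not taken from any real site:
page = '<img src="logo.gif" width="88" height="31"><img src="hero.jpg">'
checker = ImgSizeChecker()
checker.feed(page)
print(checker.unsized)  # ['hero.jpg']
```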
In fairness to King, any discussion of the book's coverage should point out the following disclaimer, which appears in its Introduction:
Although the primary emphasis is on optimizing client-side technologies, this book also covers server-side techniques and compression to squeeze the maximum performance out of your site. These are all techniques that most designers and authors can control. Instead of focusing on esoteric server-side tuning limited to system administrators, this book focuses on optimizing the content that you deliver. For a server-oriented look at performance, see Web Performance Tuning, by Patrick Killelea.
But most of the omissions I listed are not related to server-side tuning, and of the two that are (SSL and CDNs), only SSL is discussed by Patrick Killelea, whose only recommendation is to use an SSL accelerator card. And in 2002, CDN technology could hardly be considered "esoteric", especially in a book which, to quote its author, is "not for beginners".

There is a lot more to be said on most of the above topics, and in future posts I will expand upon them.

Friday, August 18, 2006

Web Performance Engineering [2]

Today I'm going to look at another list of Top Ten Web Performance Tuning Tips, following up on my promise to review Web site and application performance advice.

Today's list of tuning tips was created by Patrick Killelea, the author of Web Performance Tuning, first published by O'Reilly in 1998, then revised in 2002. When the second edition came out, Patrick also updated his 1998 top ten list, presumably to reflect changes in the rapidly maturing Internet and Web environment. But O'Reilly still publishes the 1998 list alongside the 2002 list without any further explanation, even though just four recommendations appear on both lists!

I see this as evidence that publishers are a lot more interested in selling a book than they are in the usefulness of its content. So let's blame O'Reilly and give Patrick the benefit of the doubt here, and focus on his latest list only. Abbreviating his recommendations, they are:
1. Check for compliance with standards
2. Minimize use of JavaScript and style sheets
3. Turn off the Web server's reverse DNS lookups
4. Try out a free analysis tool (to find bottlenecks)
5. Use simple servlets or CGI
6. Get more memory
7. Index your database tables well
8. Make fewer database queries
9. Look for packet loss and retransmission
10. Monitor your Web site's performance
When someone publishes a top ten list, I expect it to include the ten most important and useful recommendations -- especially when its author has written the most comprehensive book available on the subject. In this case, even allowing for the maturing of the Web since 2002, I have no idea how Patrick could have come up with this list. I have three problems with it -- what's in it, what's not in it, and its order. Today I will tackle mainly the first area; here are some brief thoughts about each of his recommendations:
  1. Check for standards compliance by using Weblint or other HTML checking tools.
    Content that conforms to the HTML 4.0 standard will load faster and work in every browser because the browser then knows what to expect. Note that Microsoft-based tools create content that does not even use the standard ASCII character set, but instead uses many proprietary Microsoft characters that will display in Netscape as question marks and can slow down rendering.
  Complying with standards is always a good thing of course, but it's rarely a performance issue. And how can a browser compatibility problem be rated the top performance guideline? In the 458-page book it merits just 37 words, headed Watch out for Composition Tools with a Bias. Beware of biased guidelines, I say.
  2. Minimize the use of JavaScript and style sheets.
    JavaScript is a major source of incompatibility, browser hangs, and pop-up advertising. Style sheets require separate downloads before the page can be displayed. There are some nice features to both JavaScript and style sheets, but at a big cost. Life is better without them.
  Wrongheaded, even in 2002. Today this advice is ridiculous -- JavaScript and CSS are core features of most Web sites. How can O'Reilly even keep this on their site?
  3. Turn off reverse DNS lookups in the Web server.
    If left on, reverse DNS will log a client's machine name rather than IP address, but at a large performance cost. It is better left off. You can always run log analysis tools which look up the names later.
  Outdated, even in 2002. This was good advice in 1997: prior to Apache 1.3, HostnameLookups defaulted to On, which adds latency to every request because a DNS lookup must finish before the request is completed. In Apache 1.3, this setting defaults to Off. This should still appear on a much longer checklist, because security concerns might prompt someone to turn on HostnameLookups. But it doesn't belong at #3 in the top ten.
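For reference, the relevant directive looks like this (shown for Apache 1.3 and later; the surrounding configuration is omitted):

```
# httpd.conf -- log client IP addresses instead of hostnames.
# Off is the default in Apache 1.3+; setting it explicitly documents
# the intent. Log analyzers can resolve the names offline later.
HostnameLookups Off
```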
  4. Try out a free analysis tool.
    I've provided a free analysis tool at my Web site that can tell you whether or not your bottleneck is in DNS, or because of connection time or content size, or is on the server side. Work on improving the slowest part first.
  The core idea here -- improving the slowest part first -- is a great recommendation; it should have been at the top of the list. It's in the book too, on page 163. On the other hand, the free tool has now been replaced (check the link) by a graph of local house prices. After some digging, I found that Patrick does still have a page about his book, which also contains tons of links to software tools, so it would take a while to figure out which one he meant. But this kind of Web research shouldn't have to be a treasure hunt -- how hard would it be to rewrite the guideline and get O'Reilly to update their site?
  5. Use simple servlets or CGI.
    Use simple servlets, CGI, or your Web server's API rather than any distributed object schemes like CORBA or EJB. Distributed object schemes are intended to improve a programmer's code-writing productivity, but they do so at an unacceptable cost in performance for end-users.
  This is reasonable advice, although the examples need updating -- CGI is legacy technology now, and newer application services like ASP.NET and low-level APIs like ISAPI, NSAPI, Apache extensions, etc. are faster. But the central idea is this: when a site handles a lot of business transactions, back-end communication overheads add up fast, and in the worst examples become the bottleneck that forces you to spread the load across more servers. So anything you can do to minimize the resources consumed per transaction will cut service times and increase server capacity. And probably save money in the process, too -- money that could be spent on the next item.
  6. Get more memory.
    Your Web server, middleware, and database all will probably do better with more memory, if they still use their hard disks frequently. Hard disks are literally about a million times slower than memory, so you should buy more memory until the disks are phased out.
  Absolutely! You'll never be able to throw away your disks, but a key goal of tuning should be to find ways to use them less. Prioritize your hardware resources from fastest to slowest -- memory, processor, disks, LAN, Internet -- and try to reduce use of the slower ones by moving work to the faster ones.
  7. Index your database tables well.
    Spectacular improvements are possible if you are inadvertently doing full-table scans on every hit of a particular URL. Indexes allow you to go directly to the data you need.
  As opposed to indexing them badly, I suppose. This tuning guideline certainly does not apply to the Web exclusively; it's important whenever databases are used. But it's probably worth repeating in this context, in case anyone creating Web applications thinks that databases use magic to find things. By the way, you can also get spectacular improvements by replacing incompetent programmers and improving their poor designs. But I'd strongly recommend not hiring them in the first place.
  8. Make fewer database queries.
    If you can cache content in your middleware or servlets, do it. Making connections to a database and using those database connections is typically a bottleneck for performance.
  Right! And if you can send less content to the browser, do that too. In fact, doing less work is always a sure way to improve performance. That's a general rule everyone should know, so general that I would not even include it in this list. I consider it part of a tuning framework -- a systematic way to approach any tuning project, not just speeding up Web applications.
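The caching idea behind this tip can be sketched in a few lines. Here is a minimal memoizing wrapper (the query function, its return value, and the 60-second TTL are all illustrative, not from Killelea's book):

```python
import time

def cached(ttl_seconds):
    """Memoize a function's results for ttl_seconds, so repeated
    requests for the same data skip the database round trip."""
    def decorator(fn):
        store = {}
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit and now - hit[1] < ttl_seconds:
                return hit[0]          # cache hit: no query issued
            result = fn(*args)
            store[args] = (result, now)
            return result
        return wrapper
    return decorator

calls = []

@cached(ttl_seconds=60)
def product_details(product_id):       # stand-in for a real DB query
    calls.append(product_id)
    return {"id": product_id, "name": "widget"}

product_details(42)
product_details(42)                    # served from cache
print(len(calls))  # 1 -- only one "database query" was made
```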
  9. Look for packet loss and retransmission.
    There are many network snooping and monitoring tools to help you do this. Intermittent slowness is often due to packets being lost or corrupted. This is because a time-out period needs to pass before the packet is retransmitted.
  This is useful advice, as far as it goes -- noisy connections can ruin your response times. But the guideline should really suggest what to do about the problem if you have it, and that's a subject for a future post. I'm not sure if it will make my top ten list either; I'll have to wait and see what else I come up with.
  10. Set up monitoring and automated graphing of your Web site's performance.
    This information is free online in Chapter 4 of the second edition of Web Performance Tuning.
  Indeed! Measurements usually beat guesswork and clairvoyance. You've probably heard the popular saying that you can't manage what you don't measure, and I've already spent more than enough time researching it. All the same, it's not really a tuning guideline. I'd call it a performance management principle, so I don't think it actually belongs in this list at all.
So, to sum up my audit of Patrick's list of ten guidelines, I vote to reject two altogether (#1 and #2), downgrade one (#3) to a priority well outside my top ten, accept four (#4, #5, #6, and #7), restate two (#8 and #10) as general principles that don't belong on this list, and reserve judgment on one (#9).

That opens up 5 or 6 slots for the things that Patrick missed -- but what should they be? I will tackle that subject in a follow-up post.

Thursday, August 17, 2006

Baseball and the Price of Gas

This is only tangentially related to the usual subjects I cover in this blog, but it certainly relates to the way I approach research and blogging. I am always doing research online, and during summer evenings and weekends that activity is often accompanied by the day's radio broadcast of the Oakland A's baseball game -- the best baseball team here in the San Francisco Bay Area, by any objective standard.

Tonight was no different, and A's broadcaster Robert Buan caught my attention when he opened the post-game show. He pointed out that in winning tonight, the A's have secured their biggest lead in their division since September 30, 1992. As a fan of the team, this is interesting; to everyone else it's probably instantly forgettable. But more intriguing was what he actually said -- in the very first sentence of the program. He opened his show with this statement:

According to a reliable authority, wikipedia.com, the last time the A's had a lead of six and a half games in the American League West, gas was selling for $1.38.

I couldn't help reflecting on how much the Web is changing the way everyone approaches information and research!

Tuesday, August 15, 2006

Reporting Web Application Responsiveness

In a previous post, I discussed some complications of measuring Rich Internet Applications (RIAs). In particular, I concluded that …
… to report useful measurements of the user experience of response times, instead of relying on the definition of physical Web pages to drive the subdivision of application response times, we must break the application into what we might call logical pages. To do this, a measurement tool must recognize meaningful application milestones or markers that signal logical boundaries of interest for reporting, and thus subdivide the application so that we can identify and report response times by logical page.
Today I am going to look inside the logical page, and consider what happens when the application responds to a user action. I have previously written about the stages of this process here, and in this paper.

Two Standard Service Level Metrics
Because Web pages are constructed using many separately downloaded components, low level monitoring tools can collect a ton of data about the communications activity triggered by a user interface action. And for diagnostic and tuning purposes, it is always useful to measure each separate component of the download process. But for most service level monitoring, this low level data can usually be summarized in just two important metrics: initial response time and page download time.

From the perspective of a typical user, these metrics represent two distinct events in the page download experience. After 'clicking' on a page, the first measures the apparent pause until the site begins to respond, while the second measures the time for that response to download.

The elapsed time between these two events can be large enough to affect the way pages are designed. Because pages containing many images take a while to download over slower connections, such pages are normally laid out so that the most popular links are displayed early in the page download process, allowing a user to navigate to another part of the site without having to wait for all the page content to complete.

But while initial response time and page download time provide a good indication of the user’s experience of a traditional Web application, they may be less useful for some RIAs. In these applications, the time to complete a page download might include additional asynchronous activity that does not affect the user’s experience of the current page, such as fetching additional content (like executable script files, application data, images, or even streams) for future use within the application.

An Additional Metric for RIAs
This suggests that, while the two existing metrics are still useful, a standard scheme for measuring and reporting RIA responsiveness probably needs to also include a metric that represents the time from user interface action until an intermediate event, one that we might call page display complete. This metric would reflect the time it takes to download everything needed to complete the display of the current page.

Even that definition is ambiguous, because it may not be obvious when some pages are 'completely displayed'. I see two issues here. First, what about parts of the page outside the display window, which won't be visible until the user scrolls? Let's deal with that by assuming the user has a screen of unlimited size, so that the entire page could be visible without scrolling.

Second, what if a page includes a streamed video, or a series of images that are displayed in sequence, each replacing the previous one? On reflection, I conclude that the 'page display complete' event should occur when the complete display is first visible -- in my examples, when the video begins playing, or the first image is displayed. That's because measuring to the last view of the display would make this metric a lot less useful as a measure of a user's experience of responsiveness. And nowhere between those two extremes offers any possibility of generality across applications.

Summarizing What’s Needed
So now I have defined three distinct metrics that seem to have fairly universal applicability when measuring the responsiveness of RIAs:
Initial response time
The elapsed time from a user interface action (a mouse click, or any other user action that triggers a download process) until the browser places the page in a state that permits a user to perform another action (technically, the user is unblocked). We could also say that this is the time until the new page first becomes usable.

Page display time
The elapsed time from a user interface action until the complete resulting page is first displayed, or could be first displayed if the screen were large enough.

Page download time
The elapsed time from a user interface action until the complete resulting page and all associated content elements have been downloaded. This time includes all secondary or background downloads triggered by the user's action, whether or not they are required for the current page display.
These definitions provide only a rough statement of requirements for RIA measurement -- none is technically precise. In practice, any attempt to implement any one of these requirements could present problems for some applications, browsers, or measurement tools, and I will write about some of those complications in a future post. But I think it is useful to identify these generally applicable goals for measuring and reporting on the responsiveness of Rich Internet Applications.
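To make the three definitions concrete, here is a minimal sketch that derives each metric from a timeline of page-load events (the event names and timestamp values are invented for illustration; they are not from any particular measurement tool):

```python
# Derive the three responsiveness metrics from a timeline of events.
# Timestamps are seconds since the user's action (illustrative values).
events = {
    "user_action": 0.0,         # mouse click or equivalent
    "user_unblocked": 1.2,      # browser lets the user act again
    "display_complete": 3.5,    # current page fully drawn (or drawable)
    "last_download_done": 9.0,  # all background fetches finished
}

initial_response_time = events["user_unblocked"] - events["user_action"]
page_display_time = events["display_complete"] - events["user_action"]
page_download_time = events["last_download_done"] - events["user_action"]

print(initial_response_time)  # 1.2
print(page_display_time)      # 3.5
print(page_download_time)     # 9.0
```

Note how the three values are nested: for a traditional Web page the display and download times coincide, while for an RIA with background downloads the gap between them can be large.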


Web Performance Engineering [1]

This post is the first in a new series on how to build Web pages, sites and applications that perform well -- by design. I will be combining my own observations with online research and recommendations on best practices contributed by my colleague Ben Rushlo. Ben makes his living measuring Web site performance and giving companies advice on how to improve the performance of their sites and applications.

Despite the crucial contribution of performance to online application effectiveness, good advice on the subject is surprisingly scarce on the Web, and even scarcer in books about Web design. Although you do sometimes find small sections (rarely chapters) of such books devoted to performance, their content is usually weak and almost never a systematic treatment of the issues. I won't bore you with proof, but I do mention one example below.

On the other hand, the exceptions are certainly worth noting. Recently I found an excellent discussion of 10 Realistic Steps to a Faster Web Site, posted in February 2006 by Alexander Kirk, a Web application programmer in Vienna, Austria. I mention his profession because in my experience, a lot of advice about performance written by programmers completely ignores the central rule of all performance optimization -- to speed anything up, remove the biggest bottleneck.

Applying just this one rule repeatedly (think about it) will always produce the best results. And in the world of distributed systems, the biggest bottlenecks will almost always be related to the way an application uses a network, not to the way a computer (client or server) processes some program code. In other words, the application's logic is usually far more important than the way a programmer coded it.

Kirk obviously understands this. After complaining about an example of misguided performance advice (read the comments), he presents his view of a systematic approach to Web application performance analysis. I will not attempt to summarize it beyond simply listing the outline, which is:
1. Determine the bottleneck
1.1. File Size
1.2. Latency
2. Reducing the file size
3. Check what’s causing a high latency
3.1. Is it the network latency?
3.2. Does it take too long to generate the page?
3.3. Is it the rendering performance?
4. Determine the lagging component(s)
5. Enable a Compiler Cache
6. Look at the DB Queries
7. Send the correct Modification Data
8. Consider Component Caching (advanced)
9. Reducing the Server Load
9.1. Use a Reverse Proxy (needs access to the server)
9.2. Take a lightweight HTTP Server (needs access to the server)
10. Server Scaling (extreme technique)
Kirk's discussion does omit a few topics (subjects for future posts in this series) that I think are important, but this is an excellent starting point. While his wording implies a reactive approach to his subject matter (writing about tuning or improving the performance of an existing site), most of his guidelines relate to best practices in site design and engineering. So there is no reason why they should not be implemented proactively, without waiting for problems to surface.
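Kirk's first step -- deciding whether the bottleneck is file size or latency -- can be sketched with simple arithmetic (the link parameters and round-trip counts below are illustrative assumptions, not measurements):

```python
def dominant_bottleneck(page_bytes, bandwidth_bps, latency_s, round_trips):
    """Split download time into a transfer part (file size / bandwidth)
    and a latency part (round trips x network latency), and report
    which dominates -- Kirk's step 1, in miniature."""
    transfer_time = page_bytes * 8 / bandwidth_bps  # seconds
    latency_time = round_trips * latency_s          # seconds
    return "file size" if transfer_time > latency_time else "latency"

# A 200 KB page over 56 kbps dial-up: transfer dominates (~29 s).
print(dominant_bottleneck(200_000, 56_000, 0.1, 20))      # file size
# A 20 KB page over 1.5 Mbps DSL, 20 round trips: latency dominates.
print(dominant_bottleneck(20_000, 1_500_000, 0.1, 20))    # latency
```

This is, of course, exactly why the dial-up results in the retail study above were so poor: at dial-up bandwidths, page weight swamps everything else.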
