
Yahoo's 14 Performance Optimization Principles


14 tips for optimizing website performance and improving website access speed


Also known as "Yahoo's Fourteen Rules". This reminds me of the ignorant me of a year ago, when I foolishly went to the university town to interview for a front-end job. At the time I thought I was pretty good, having watched two sets of CSS videos over the winter and summer vacations; I even re-watched them to warm up before setting off. Ah yes, this is a sliding door; ah yes, this is absolute positioning; ah yes, this is clearing floats...

Uncle Biao interviewed me that day. I didn't know him then. He was dressed all in black: black T-shirt, black skin, black hat, black sunglasses, and a little black stubble. After finishing the written test questions, I talked haltingly with him and found it wasn't working at all. His first question was: what are the "Yahoo Fourteen Rules"? I was stumped. Pardon? I had never heard of them, and so I died in battle. After I got home I posted a log about it on QQ Space, though I didn't understand it well at the time. Today I spent the whole day reading up on it, and I'm posting it to share with everyone:

I believe the Internet has become an ever more indispensable part of people's lives. Ajax, Flex, and other rich-client technologies make it a "pleasure" to experience many functions that could once only be implemented in C/S applications; Google, for example, has moved all of the most basic office applications onto the web. Convenient as this is, it undoubtedly makes pages slower and slower. I am a front-end developer, and performance is where we matter: according to Yahoo's research, the back end accounts for only 5% of response time, while the front end accounts for as much as 95%, of which 88% can be optimized.

[Figure: life cycle of a Web 2.0 page]
The above is a life cycle diagram of a Web 2.0 page. Engineers vividly describe its four stages as "pregnancy, birth, graduation, and marriage." If we are aware of this process when we click a web link, instead of seeing it as a simple request and response, we can dig out many details that can improve performance. Today I listened to a lecture by Taobao's Xiao Ma on the Yahoo development team's research into web performance. I felt I gained a lot and want to share it on my blog.

I believe many people have heard of the 14 rules for optimizing website performance. More information can be found at developer.yahoo.com

1. Reduce the number of HTTP requests as much as possible [content]

2. Use CDN (Content Delivery Network) [server]

3. Add Expires header (or Cache-control) [server]

4. Gzip components [server]

5. Place CSS style at the top of the page [css]

6. Move scripts to the bottom (including inline ones) [javascript]

7. Avoid using Expressions in CSS [css]

8. Make JavaScript and CSS external files [javascript] [css]

9. Reduce DNS queries [content]

10. Compress JavaScript and CSS (including inline) [javascript] [css]

11. Avoid redirects [server]

12. Remove duplicate scripts [javascript]

13. Configure entity tags (ETags) [server]

14. Enable AJAX caching

There is a Firefox plug-in called YSlow, integrated into Firebug, which you can use to easily check how your website performs on each of these points.

[Figure: YSlow results for the author's site, Xifengfang]
This is the result of using YSlow to evaluate my website, Xifengfang. Unfortunately, it scores only 51. Heh. The scores of major Chinese websites are not high either; I just tested Sina and NetEase, and both scored 31. Yahoo (US), meanwhile, scores a full 97! That shows the effort Yahoo has put into this. Judging from the 14 rules they summarized and the 20 newly added points, there are many details we really don't think about, and some practices even seem a little extreme.

Article 1. Reduce the number of HTTP requests as much as possible (Make Fewer HTTP Requests)

HTTP requests are expensive, so finding ways to reduce their number naturally speeds up a page. Common methods include merging css and js (combining a page's css files into one, and likewise its js files), image maps, and CSS sprites. Of course, css and js are often split into multiple files for reasons of structure and sharing. The Alibaba Chinese site's approach at the time was to develop them separately and merge the js and css in the background: the browser still made a single request, but during development the files could still be managed as many, which made maintenance and reuse easier. Yahoo even recommends writing the home page's css and js directly into the page file instead of referencing them externally: the home page gets so many visits that this saves another two requests. In fact, many domestic portals do this.

CSS sprites simply merge the background images on a page into one image, then use the CSS background-position property to pick out the slice each element shows as its background. Taobao and the Alibaba Chinese site currently do this; if you are interested, take a look at their background images.

http://www.csssprites.com/ is a tool site that automatically merges the images you upload and gives you the corresponding background-position coordinates, outputting the result in PNG and GIF format.
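As a minimal sketch (the file name and coordinates are hypothetical), a sprite that packs two 16x16 icons side by side might be used like this:

      .icon { width: 16px; height: 16px; background-image: url(sprite.png); }
      /* shift the sprite so the right slice shows through the 16x16 window */
      .icon-home { background-position: 0 0; }
      .icon-mail { background-position: -16px 0; }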

Article 2. Use a content delivery network (Use a Content Delivery Network)

To be honest, I don't know much about CDNs. Simply put: by adding a new layer of network architecture on top of the existing Internet, a CDN publishes the site's content to the cache servers closest to users, and DNS load balancing determines each user's origin so they fetch what they need from a nearby cache server. A user in Hangzhou gets content from a server near Hangzhou; a user in Beijing from a server near Beijing. This effectively shortens the time data spends traveling across the network and increases speed. For more detail, see the Baidu Baike entry on CDN. By moving static content to a CDN, Yahoo! reduced end-user response times by 20% or more.

[Figure: CDN technical diagram]
[Figure: CDN networking diagram]
Article 3. Add an Expires/Cache-Control header (Add an Expires Header)

More and more images, scripts, css, and Flash are embedded in today's pages, and visiting them inevitably triggers many HTTP requests. In fact, we can cache these files by setting an Expires header. Expires specifies, through the response headers, how long a given type of file may be cached in the browser. Most images and Flash files do not need to change after release, so once cached, the browser no longer downloads them from the server but reads them straight from its cache, which greatly speeds up repeat visits to the page. A typical HTTP/1.1 response header looks like this:

      HTTP/1.1 200 OK
      Date: Fri, 30 Oct 1998 13:19:41 GMT
      Server: Apache/1.3.3 (Unix)
      Cache-Control: max-age=3600, must-revalidate
      Expires: Fri, 30 Oct 1998 14:19:41 GMT
      Last-Modified: Mon, 29 Jun 1998 02:28:12 GMT
      ETag: "3e86-410-3596fbbc"
      Content-Length: 1040
      Content-Type: text/html

This can be done by setting Cache-Control and Expires through server-side scripts.

For example, to set expiration 30 days out in PHP:

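A minimal sketch of such a snippet, using PHP's standard header() and gmdate() (the quoted original may have differed):

      <?php
      // cache for 30 days: emit both Expires and Cache-Control headers
      $offset = 60 * 60 * 24 * 30;
      header("Expires: " . gmdate("D, d M Y H:i:s", time() + $offset) . " GMT");
      header("Cache-Control: max-age=" . $offset);
      ?>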

It can also be done by configuring the server itself; I'm not very clear on that, haha. Friends who want to know more can refer to http://www.web-caching.com/

As far as I know, the Expires time on the Alibaba Chinese site is currently 30 days. There have been problems with it, though: the expiration time of scripts in particular needs careful thought, otherwise it may take a long time for clients to "notice" changes after the corresponding script is updated. I ran into this before on the suggest project. So think carefully about what should be cached and what should not.

Article 4. Enable Gzip compression (Gzip Components)

The idea of gzip is to compress a file on the server side before transmitting it, which can significantly reduce the transfer size; once the transfer completes, the browser decompresses the content and processes it. All current browsers support gzip "well", and not only browsers: the major crawlers recognize it too, so SEOers can rest assured. And gzip compresses aggressively: a 100K page on the server can typically be shrunk to around 25K before being sent to the client. For the specifics of the algorithm, see the CSDN article "Gzip Compression Algorithm". Yahoo particularly emphasizes gzipping all text content: html (php), js, css, xml, txt... Our website does well here; it rates an A. In the past our home page did not, because it carried many ad-delivery scripts whose owners did not gzip their js, which dragged our page down too.
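For Apache 2.x, a minimal mod_deflate setup might look like this (a sketch; adjust the MIME types to your site):

      # compress text responses before sending them to clients
      AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css application/javascript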

The above three points are mostly server-side matters, and I have only a superficial understanding of them. Please correct me if I am wrong.

Article 5. Put stylesheets at the top of the page (Put Stylesheets at the Top)

Put stylesheets at the top of the page. Why? Because browsers such as IE and Firefox will not render content until the CSS that styles it has fully arrived. The reason is as simple as what Brother Ma said. CSS stands for Cascading Style Sheets, and "cascading" means later css can override earlier css, and higher-priority css can override lower-priority css. This layering was briefly mentioned at the bottom of the css !important article; here we only need to know that css can be overridden. Since earlier rules can be overwritten, it is entirely reasonable for the browser to wait until the css has fully loaded before rendering with it. The problem with putting stylesheets at the bottom is that in many browsers, IE included, it prevents progressive display of the page's content: the browser blocks display to avoid having to redraw page elements, and the user sees only a blank page. Firefox does not block display, but that means some elements may need repainting after the stylesheet downloads, which causes flickering. So we should load css as early as possible.

Following this line of thought, if we look more carefully there are further areas that can be optimized. Take the two css files included on this site: from their media attributes you can see that the first stylesheet is for the screen and the second is the print stylesheet. Judging by users' habits, printing a page always happens after the page is displayed, so a better approach would be to dynamically add the print css after the page has loaded, which can increase speed a little. (Haha)
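A minimal sketch of that idea, assuming a hypothetical print stylesheet at /css/print.css:

      <script type="text/javascript">
      // after the page has loaded, append the print stylesheet so it
      // never delays the initial render
      window.onload = function () {
          var link = document.createElement("link");
          link.rel = "stylesheet";
          link.type = "text/css";
          link.media = "print";
          link.href = "/css/print.css"; // hypothetical path
          document.getElementsByTagName("head")[0].appendChild(link);
      };
      </script>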

Article 6. Put Scripts at the Bottom of the page (Put Scripts at the Bottom)

Placing scripts at the bottom of the page has two benefits. First, it prevents script execution from blocking the page download. While a page loads, when the browser reads a js statement it interprets and executes it before reading on. If you don't believe it, write an infinite loop in js and see whether the content below it ever appears. (setTimeout and setInterval behave somewhat like multithreading: rendering of the following content continues until the corresponding timer fires.) The browser behaves this way because js may at any moment call location.href or otherwise completely interrupt the page's processing; it naturally has to wait for execution to finish before loading more. Placing scripts at the end of the page therefore effectively shortens the load time of the page's visual elements. Second, scripts block parallel downloads. The HTTP/1.1 specification suggests browsers make no more than two parallel downloads per hostname (IE is fixed at 2; other browsers such as FF default to 2, though the new IE8 can reach 6), so distributing image files across multiple hostnames yields more than two parallel downloads. But while a script file is downloading, the browser will not start any other parallel downloads.
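In markup terms the idea is simply this (the file names are placeholders):

      <html>
      <head>
          <!-- stylesheets at the top, so the page renders progressively -->
          <link rel="stylesheet" type="text/css" href="styles.css">
      </head>
      <body>
          <p>Visible content renders without waiting for scripts...</p>
          <!-- scripts at the bottom, so they block neither rendering nor downloads -->
          <script type="text/javascript" src="app.js"></script>
      </body>
      </html>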

Of course, for any given site, the feasibility of loading scripts at the bottom is still debatable. Take the Alibaba Chinese site's pages: there is inline js in many places, and the display of the page relies heavily on it. I admit this is far from the idea of unobtrusive scripting, but many "historical problems" are not so easy to solve.

Article 7. Avoid using Expressions in CSS (Avoid CSS Expressions)

But this adds two more layers of meaningless nesting, which is definitely not good. A better way is needed.

Article 8. Put JavaScript and CSS in external files (Make JavaScript and CSS External)

I think this one is easy to understand. It is done not only for performance but also for ease of code maintenance. Writing css and js into the page itself saves two requests but increases the page size; if external css and js are already cached, no extra HTTP requests occur at all. Of course, as mentioned before, developers of some special pages will still choose to inline their css and js.

Article 9. Reduce DNS Lookups

On the Internet, domain names and IP addresses correspond to each other. The domain name (kuqin.com) is easy for people to remember, but computers don't recognize it; between computers it must be converted to an IP address, and each machine on the network corresponds to an independent IP address. The conversion between domain name and IP address is called domain name resolution, also known as a DNS query. A DNS resolution takes 20-120 milliseconds, and until it completes the browser will not download anything under that domain name. So reducing DNS query time speeds up page loading. Yahoo recommends limiting the number of hostnames in a page to 2-4, which requires good overall planning of the page. At present we do not do well here; many ad-delivery systems are dragging us down.

Article 10. Compress JavaScript and CSS (Minify JavaScript and CSS)

The effect of compressing js and css is obvious: fewer bytes in the page. A smaller page naturally loads faster. Besides shrinking the size, compression also offers a little protection. We do this well. Common compression tools include JSMin and the YUI Compressor; in addition, http://dean.edwards.name/packer/ provides a very convenient online compressor. On the jQuery site you can see the difference in size between the compressed and uncompressed versions of the js file:
[Figure: size comparison of compressed vs. uncompressed jQuery]
Of course, compression has one drawback: the readability of the code is gone. I believe many front-end friends have run into this: Google's effects look cool, but its source is a mass of characters squeezed together, with even the function names replaced. Sweat! Wouldn't maintaining your own code like that be terribly inconvenient? The approach now used across Alibaba's Chinese sites is to compress js and css on the server side at release time, which keeps maintaining our own code very convenient.
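For reference, the YUI Compressor mentioned above runs from the command line; a sketch (the jar's version number varies by release):

      java -jar yuicompressor-2.4.8.jar site.js -o site-min.js
      java -jar yuicompressor-2.4.8.jar styles.css -o styles-min.css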

Article 11. Avoid redirects (Avoid Redirects)

Not long ago I saw the article "Internet Explorer and Connection Limits" on IEBlog. For example, when you enter a URL that omits its trailing slash, such as http://www.kuqin.com, the server automatically answers with a 301 redirect to http://www.kuqin.com/; you can see it happen in the browser's address bar. Such a redirect naturally costs time. Of course this is just one example; redirects happen for many reasons, but what never changes is that every additional redirect adds another web request, so they should be reduced as much as possible.
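On the wire, the extra round trip looks roughly like this (a sketch of the 301 response):

      HTTP/1.1 301 Moved Permanently
      Location: http://www.kuqin.com/
      Content-Type: text/html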

Article 12. Remove Duplicate Scripts

This one I know without even being told, and not only from the performance angle but also from that of code standards. But we have to admit that very often we add possibly duplicated code because we want to get things done quickly. Perhaps a unified css framework and js framework could solve our problem better. Xiaozhu's point is right: code should not only not be duplicated, it should be reusable.

Article 13. Configure entity tags (Configure ETags)

I don't really understand this one either, haha. I found a fairly detailed explanation on InfoQ, "Using ETags to Reduce Web Application Bandwidth and Load"; interested readers can look it up.

Article 14. Make Ajax Cacheable

Does Ajax need caching too? When making ajax requests, we often append a timestamp to defeat caching. But it's important to remember that "asynchronous" does not imply "instantaneous": even though AJAX messages are generated dynamically and affect only one user, they can still be cached.
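A minimal sketch of the two styles (the URLs and the version parameter are hypothetical):

      <script type="text/javascript">
      var xhr = new XMLHttpRequest();

      // cache-busting: a fresh timestamp forces a new request every time
      xhr.open("GET", "/data/mail.json?t=" + new Date().getTime(), true);

      // cacheable alternative: key the URL to the data's version instead,
      // so the browser can reuse the response until the data really changes:
      // xhr.open("GET", "/data/mail.json?v=20091203", true);

      xhr.send(null);
      </script>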


For now, what we can do is on the css side: spriting, compression, removing redundancy, and writing and organizing things sensibly, so that our css rates an "A" in YSlow. As for the server-side items, the road ahead is long; we'll learn them slowly... As long as the enthusiasm is there, we'll get there sooner or later...


One more addition: the fourteen rules have since been expanded considerably. You can see a detailed analysis in this article:

http://uicss.cn/yslow/#more-12319

YSlow now lists 23 items, shown in the figure below:

[Figure: the 23 YSlow rules]
Reduce the number of HTTP requests
Merge images, CSS, and JS to shorten the wait for first-time visitors.

Use a CDN
Nearby caching ==> intelligent routing ==> load balancing ==> WSA whole-site dynamic acceleration.

Avoid empty src and href
When a link tag's href attribute or a script tag's src attribute is empty, the browser uses the current page's URL as the value at render time, loading the page's own content as if it were the resource.

Specify Expires headers
Make content cacheable and avoid unnecessary HTTP requests on subsequent visits.

Use gzip to compress content
Compressing any text-type response, including XML and JSON, is worthwhile.

Put CSS at the top and JS at the bottom
This keeps js loading from blocking the resources after it.

Avoid CSS expressions

Put CSS and JS in external files
The purpose is caching, though sometimes, to reduce requests, they are written directly into the page; weigh this against the ratio of PV to IP.

Trade off the number of DNS lookups
Fewer hostnames save response time, but note that fewer hostnames also mean fewer parallel downloads on the page. IE can download only two files from the same domain name at once, so on image-heavy pages IE users' download speed suffers; this is why Sina creates N second-level domain names to host its images.

Streamline CSS and JS

Avoid redirects
Same domain: take care to avoid redirects caused by a missing trailing slash "/". Cross-domain: use Alias or mod_rewrite, or create a CNAME (a DNS record that maps one domain name to another).

Remove duplicate JS and CSS
Including a script repeatedly not only adds extra HTTP requests but wastes time on repeated evaluation: in both IE and Firefox, whether or not the script is cacheable, the JavaScript is re-evaluated each time it is included.

Configure ETags
An ETag tells the browser whether an element in its cache matches the one on the origin server, and it is more flexible than the Last-Modified date. For example, if a file is modified 10 times within one second, the ETag can still judge accurately, since it is built from the Inode (the file's index-node number), MTime (modification time), and Size; this avoids the problem that UNIX records MTime only to the second. For server clusters, use only the last two components. See "Using ETags to Reduce Web Application Bandwidth and Load".

Make AJAX cacheable
"Asynchronous" does not mean "instant": Ajax does not guarantee that the user spends no time waiting for the asynchronous JavaScript and XML response.

Use GET for AJAX requests
With XMLHttpRequest, POST in the browser is a two-step process: send the headers first, then the data. GET, which fetches its data in a single step, therefore makes more sense.

Reduce the number of DOM elements
Is there a more appropriate tag you could use? Prefer semantic tags and avoid piling up meaningless ones.

Avoid 404s
Some sites turn the 404 page into "Are you looking for ***?". This improves the user experience but also wastes server resources (database lookups and so on). Worst of all is an external JavaScript link that breaks and returns 404: first, the failed load disrupts parallel downloading; second, the browser will try to execute any plausibly useful fragment of the 404 response body as JavaScript.

Reduce cookie size; use cookie-free domains
For images, CSS, and other static files: Yahoo!'s static files all live on yimg.com, so requests for them avoid repeatedly sending the main domain's (yahoo.com) cookies.

Don't use filters
Translucent PNG24 requires the AlphaImageLoader filter in IE6; don't use it casually. Calmly cut the image down to PNG8 or JPG instead.

Don't scale images in HTML

Keep favicon.ico small and cacheable

Minimize HTTP Requests

tag: content

80% of the end-user response time is spent on the front-end. Most of this time is tied up in downloading all the components in the page: images, stylesheets, scripts, Flash, etc. Reducing the number of components in turn reduces the number of HTTP requests required to render the page. This is the key to faster pages.

One way to reduce the number of components in the page is to simplify the page's design. But is there a way to build pages with richer content while also achieving fast response times? Here are some techniques for reducing the number of HTTP requests, while still supporting rich page designs.

Combined files are a way to reduce the number of HTTP requests by combining all scripts into a single script, and similarly combining all CSS into a single stylesheet. Combining files is more challenging when the scripts and stylesheets vary from page to page, but making this part of your release process improves response times.

CSS Sprites are the preferred method for reducing the number of image requests. Combine your background images into a single image and use the CSS background-image and background-position properties to display the desired image segment.

Image maps combine multiple images into a single image. The overall size is about the same, but reducing the number of HTTP requests speeds up the page. Image maps only work if the images are contiguous in the page, such as a navigation bar. Defining the coordinates of image maps can be tedious and error prone. Using image maps for navigation is not accessible either, so it's not recommended.

Inline images use the data: URL scheme to embed the image data in the actual page. This can increase the size of your HTML document. Combining inline images into your (cached) stylesheets is a way to reduce HTTP requests and avoid increasing the size of your pages. Inline images are not yet supported across all major browsers.
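For illustration, the scheme looks like this inside a stylesheet (the base64 payload is truncated here, so this exact rule is only a sketch):

      /* the image bytes travel with the (cached) stylesheet; no extra request */
      .logo { background-image: url("data:image/png;base64,iVBORw0KGgo..."); }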

Reducing the number of HTTP requests in your page is the place to start. This is the most important guideline for improving performance for first time visitors. As described in Tenni Theurer's blog post Browser Cache Usage - Exposed!, 40-60% of daily visitors to your site come in with an empty cache. Making your page fast for these first time visitors is key to a better user experience.


Use a Content Delivery Network

tag: server

The user's proximity to your web server has an impact on response times. Deploying your content across multiple, geographically dispersed servers will make your pages load faster from the user's perspective. But where should you start?

As a first step to implementing geographically dispersed content, don't attempt to redesign your web application to work in a distributed architecture. Depending on the application, changing the architecture could include daunting tasks such as synchronizing session state and replicating database transactions across server locations. Attempts to reduce the distance between users and your content could be delayed by, or never pass, this application architecture step.

Remember that 80-90% of the end-user response time is spent downloading all the components in the page: images, stylesheets, scripts, Flash, etc. This is the Performance Golden Rule. Rather than starting with the difficult task of redesigning your application architecture, it's better to first disperse your static content. This not only achieves a bigger reduction in response times, but it's easier thanks to content delivery networks.

A content delivery network (CDN) is a collection of web servers distributed across multiple locations to deliver content more efficiently to users. The server selected for delivering content to a specific user is typically based on a measure of network proximity. For example, the server with the fewest network hops or the server with the quickest response time is chosen.

Some large Internet companies own their own CDN, but it's cost-effective to use a CDN service provider, such as Akamai Technologies, EdgeCast, or Level 3. For start-up companies and private web sites, the cost of a CDN service can be prohibitive, but as your target audience grows larger and becomes more global, a CDN is necessary to achieve fast response times. At Yahoo!, properties that moved static content off their application web servers to a CDN (both 3rd party as mentioned above as well as Yahoo's own CDN) improved end-user response times by 20% or more. Switching to a CDN is a relatively easy code change that will dramatically improve the speed of your web site.


Add an Expires or a Cache-Control Header

tag: server

There are two aspects to this rule:

For static components: implement "Never expire" policy by setting far future Expires header
For dynamic components: use an appropriate Cache-Control header to help the browser with conditional requests

Web page designs are getting richer and richer, which means more scripts, stylesheets, images, and Flash in the page. A first-time visitor to your page may have to make several HTTP requests, but by using the Expires header you make those components cacheable. This avoids unnecessary HTTP requests on subsequent page views. Expires headers are most often used with images, but they should be used on all components including scripts, stylesheets, and Flash components.

Browsers (and proxies) use a cache to reduce the number and size of HTTP requests, making web pages load faster. A web server uses the Expires header in the HTTP response to tell the client how long a component can be cached. This is a far future Expires header, telling the browser that this response won't be stale until April 15, 2010.

      Expires: Thu, 15 Apr 2010 20:00:00 GMT

If your server is Apache, use the ExpiresDefault directive to set an expiration date relative to the current date. This example of the ExpiresDefault directive sets the Expires date 10 years out from the time of the request.

      ExpiresDefault "access plus 10 years"

Keep in mind, if you use a far future Expires header you have to change the component's filename whenever the component changes. At Yahoo! we often make this step part of the build process: a version number is embedded in the component's filename, for example, yahoo_2.0.6.js.

Using a far future Expires header affects page views only after a user has already visited your site. It has no effect on the number of HTTP requests when a user visits your site for the first time and the browser's cache is empty. Therefore the impact of this performance improvement depends on how often users hit your pages with a primed cache. (A "primed cache" already contains all of the components in the page.) We measured this at Yahoo! and found the number of page views with a primed cache is 75-85%. By using a far future Expires header, you increase the number of components that are cached by the browser and re-used on subsequent page views without sending a single byte over the user's Internet connection.


Gzip Components

tag: server

The time it takes to transfer an HTTP request and response across the network can be significantly reduced by decisions made by front-end engineers. It's true that the end-user's bandwidth speed, Internet service provider, proximity to peering exchange points, etc. are beyond the control of the development team. But there are other variables that affect response times. Compression reduces response times by reducing the size of the HTTP response.

Starting with HTTP/1.1, web clients indicate support for compression with the Accept-Encoding header in the HTTP request.

      Accept-Encoding: gzip, deflate

If the web server sees this header in the request, it may compress the response using one of the methods listed by the client. The web server notifies the web client of this via the Content-Encoding header in the response.

      Content-Encoding: gzip

Gzip is the most popular and effective compression method at this time. It was developed by the GNU project and standardized by RFC 1952. The only other compression format you're likely to see is deflate, but it's less effective and less popular.

Gzipping generally reduces the response size by about 70%. Approximately 90% of today's Internet traffic travels through browsers that claim to support gzip. If you use Apache, the module configuring gzip depends on your version: Apache 1.3 uses mod_gzip while Apache 2.x uses mod_deflate.

There are known issues with browsers and proxies that may cause a mismatch in what the browser expects and what it receives with regard to compressed content. Fortunately, these edge cases are dwindling as the use of older browsers drops off. The Apache modules help out by adding appropriate Vary response headers automatically.

Servers choose what to gzip based on file type, but are typically too limited in what they decide to compress. Most web sites gzip their HTML documents. It's also worthwhile to gzip your scripts and stylesheets, but many web sites miss this opportunity. In fact, it's worthwhile to compress any text response including XML and JSON. Image and PDF files should not be gzipped because they are already compressed. Trying to gzip them not only wastes CPU but can potentially increase file sizes.

Gzipping as many file types as possible is an easy way to reduce page weight and accelerate the user experience.


Put Stylesheets at the Top

tag: css

While researching performance at Yahoo!, we discovered that moving stylesheets to the document HEAD makes pages appear to be loading faster. This is because putting stylesheets in the HEAD allows the page to render progressively.

Front-end engineers that care about performance want a page to load progressively; that is, we want the browser to display whatever content it has as soon as possible. This is especially important for pages with a lot of content and for users on slower Internet connections. The importance of giving users visual feedback, such as progress indicators, has been well researched and documented. In our case the HTML page is the progress indicator! When the browser loads the page progressively the header, the navigation bar, the logo at the top, etc. all serve as visual feedback for the user who is waiting for the page. This improves the overall user experience.

The problem with putting stylesheets near the bottom of the document is that it prohibits progressive rendering in many browsers, including Internet Explorer. These browsers block rendering to avoid having to redraw elements of the page if their styles change. The user is stuck viewing a blank white page.

The HTML specification clearly states that stylesheets are to be included in the HEAD of the page: "Unlike A, [LINK] may only appear in the HEAD section of a document, although it may appear any number of times." Neither of the alternatives, the blank white screen or flash of unstyled content, are worth the risk. The optimal solution is to follow the HTML specification and load your stylesheets in the document HEAD.


Put Scripts at the Bottom

tag: javascript

The problem caused by scripts is that they block parallel downloads. The HTTP/1.1 specification suggests that browsers download no more than two components in parallel per hostname. If you serve your images from multiple hostnames, you can get more than two downloads to occur in parallel. While a script is downloading, however, the browser won't start any other downloads, even on different hostnames.

In some situations it's not easy to move scripts to the bottom. If, for example, the script uses document.write to insert part of the page's content, it can't be moved lower in the page. There might also be scoping issues. In many cases, there are ways to workaround these situations.

An alternative suggestion that often comes up is to use deferred scripts. The DEFER attribute indicates that the script does not contain document.write, and is a clue to browsers that they can continue rendering. Unfortunately, Firefox doesn't support the DEFER attribute. In Internet Explorer, the script may be deferred, but not as much as desired. If a script can be deferred, it can also be moved to the bottom of the page. That will make your web pages load faster.


Avoid CSS Expressions

tag: css

CSS expressions are a powerful (and dangerous) way to set CSS properties dynamically. They were supported in Internet Explorer starting with version 5, but were deprecated starting with IE8. As an example, the background color could be set to alternate every hour using CSS expressions:

      background-color: expression( (new Date()).getHours()%2 ? "#B8D4FF" : "#F08A00" );

As shown here, the expression method accepts a JavaScript expression. The CSS property is set to the result of evaluating the JavaScript expression. The expression method is ignored by other browsers, so it is useful for setting properties in Internet Explorer needed to create a consistent experience across browsers.

The problem with expressions is that they are evaluated more frequently than most people expect. Not only are they evaluated when the page is rendered and resized, but also when the page is scrolled and even when the user moves the mouse over the page. Adding a counter to the CSS expression allows us to keep track of when and how often a CSS expression is evaluated. Moving the mouse around the page can easily generate more than 10,000 evaluations.

One way to reduce the number of times your CSS expression is evaluated is to use one-time expressions, where the first time the expression is evaluated it sets the style property to an explicit value, which replaces the CSS expression. If the style property must be set dynamically throughout the life of the page, using event handlers instead of CSS expressions is an alternative approach. If you must use CSS expressions, remember that they may be evaluated thousands of times and could affect the performance of your page.
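A sketch of the one-time pattern, using a hypothetical helper named altBgcolor: the first evaluation writes an explicit value over the property, so the expression is never consulted again.

      <style>
      p { background-color: expression( altBgcolor(this) ); }
      </style>
      <script type="text/javascript">
      function altBgcolor(el) {
          // setting an explicit inline value replaces the expression
          el.style.backgroundColor = (new Date()).getHours() % 2 ? "#F08A00" : "#B8D4FF";
      }
      </script>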


Make JavaScript and CSS External

tag: javascript, css

Many of these performance rules deal with how external components are managed. However, before these considerations arise you should ask a more basic question: Should JavaScript and CSS be contained in external files, or inlined in the page itself?

Using external files in the real world generally produces faster pages because the JavaScript and CSS files are cached by the browser. JavaScript and CSS that are inlined in HTML documents get downloaded every time the HTML document is requested. This reduces the number of HTTP requests that are needed, but increases the size of the HTML document. On the other hand, if the JavaScript and CSS are in external files cached by the browser, the size of the HTML document is reduced without increasing the number of HTTP requests.

The key factor, then, is the frequency with which external JavaScript and CSS components are cached relative to the number of HTML documents requested. This factor, although difficult to quantify, can be gauged using various metrics. If users on your site have multiple page views per session and many of your pages re-use the same scripts and stylesheets, there is a greater potential benefit from cached external files.

Many web sites fall in the middle of these metrics. For these sites, the best solution generally is to deploy the JavaScript and CSS as external files. The only exception where inlining is preferable is with home pages, such as Yahoo!'s front page and My Yahoo!. Home pages that have few (perhaps only one) page view per session may find that inlining JavaScript and CSS results in faster end-user response times.

For front pages that are typically the first of many page views, there are techniques that leverage the reduction of HTTP requests that inlining provides, as well as the caching benefits achieved through using external files. One such technique is to inline JavaScript and CSS in the front page, but dynamically download the external files after the page has finished loading. Subsequent pages would reference the external files that should already be in the browser's cache.
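One sketch of that technique, with hypothetical paths: the front page inlines its JavaScript, then primes the cache once onload fires.

      <script type="text/javascript">
      // front page only: after load, fetch the external file that the
      // inner pages reference, so it is already cached when needed
      window.onload = function () {
          var js = document.createElement("script");
          js.src = "/js/site.js"; // hypothetical shared script
          document.getElementsByTagName("head")[0].appendChild(js);
      };
      </script>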


Reduce DNS Lookups

tag: content

The Domain Name System (DNS) maps hostnames to IP addresses, just as phonebooks map people's names to their phone numbers. When you type www.yahoo.com into your browser, a DNS resolver contacted by the browser returns that server's IP address. DNS has a cost. It typically takes 20-120 milliseconds for DNS to lookup the IP address for a given hostname. The browser can't download anything from this hostname until the DNS lookup is completed.

DNS lookups are cached for better performance. This caching can occur on a special caching server, maintained by the user's ISP or local area network, but there is also caching that occurs on the individual user's computer. The DNS information remains in the operating system's DNS cache (the "DNS Client service" on Microsoft Windows). Most browsers have their own caches, separate from the operating system's cache. As long as the browser keeps a DNS record in its own cache, it doesn't bother the operating system with a request for the record.

Internet Explorer caches DNS lookups for 30 minutes by default, as specified by the DnsCacheTimeout registry setting. Firefox caches DNS lookups for 1 minute, controlled by the network.dnsCacheExpiration configuration setting. (Fasterfox changes this to 1 hour.)

When the client's DNS cache is empty (for both the browser and the operating system), the number of DNS lookups is equal to the number of unique hostnames in the web page. This includes the hostnames used in the page's URL, images, script files, stylesheets, Flash objects, etc. Reducing the number of unique hostnames reduces the number of DNS lookups.

Reducing the number of unique hostnames has the potential to reduce the amount of parallel downloading that takes place in the page. Avoiding DNS lookups cuts response times, but reducing parallel downloads may increase response times. My guideline is to split these components across at least two but no more than four hostnames. This results in a good compromise between reducing DNS lookups and allowing a high degree of parallel downloads.
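For instance, a page might spread its components across two hostnames like this (the hostnames are hypothetical):

      <img src="http://img1.example.com/photo-1.jpg">
      <img src="http://img2.example.com/photo-2.jpg">
      <script type="text/javascript" src="http://img1.example.com/site.js"></script>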


Minify JavaScript and CSS

tag: javascript, css

Minification is the practice of removing unnecessary characters from code to reduce its size thereby improving load times. When code is minified all comments are removed, as well as unneeded white space characters (space, newline, and tab). In the case of JavaScript, this improves response time performance because the size of the downloaded file is reduced. Two popular tools for minifying JavaScript code are JSMin and YUI Compressor. The YUI compressor can also minify CSS.
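As a made-up before-and-after illustration:

      /* before minification: comments and whitespace intact */
      function totalPrice(price, taxRate) {
          var tax = price * taxRate; // sales tax
          return price + tax;
      }

      /* after minification (names kept; only comments and whitespace removed) */
      function totalPrice(price,taxRate){var tax=price*taxRate;return price+tax;}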

Obfuscation is an alternative optimization that can be applied to source code. It's more complex than minification and thus more likely to generate bugs as a result of the obfuscation step itself. In a survey of ten top U.S. web sites, minification achieved a 21% size reduction versus 25% for obfuscation. Although obfuscation has a higher size reduction, minifying JavaScript is less risky.

In addition to minifying external scripts and styles, inlined script and style blocks can and should also be minified.
