Website Optimization: Try to Escape from 404 – Page Not Found

Article ID : Tip-003
Article Topic : Website Optimization
Article Title : Try to Escape from 404 – Page Not Found


Preface

A web client always creates a request for each resource referenced in the page it is rendering, and if the server cannot find that resource, it responds with 404 – Page Not Found.

Explanation

When we create a website with a number of web pages and embedded resources like images (in HTML and CSS), JavaScript files or cascading style sheets, it is quite possible that we skip including something, or reference something that is not present in the application. If the missing resource is visible, it can be caught and taken care of. But what if it is not visible?

Let’s talk about favicon.ico. Microsoft started the concept of the favorites icon with Internet Explorer 5. Since then the trend has caught on and has been adopted by the major browsers, which fetch and display the favorites icon in the address bar or on the tab (for browsers supporting tabs). Earlier the idea was to fetch this icon only when the page was added to the favorites menu. But today, if the browser has no previous visit recorded for the site a page belongs to, it will create a request for domain/favicon.ico. Once the data is fetched it is cached by the browser. But if the server returns a 404 error, the browser will try to fetch it again every time a page is requested from that domain.

In most cases, even after the deployment of the site is over, we skip this small thing, including favicon.ico in the application root, and the server is bothered again and again by requests for information it simply does not have. Every time, without getting frustrated, it calmly says 404 – Page Not Found.

How to Resolve

You can avoid this very common situation, specifically for the icon case, by:

  • Placing a favicon.ico file in the root of the application.
  • Specifying the page icon in the header of the page.
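For example, the second option can be sketched like this in the page header (the path is just an illustration; point it at wherever your icon actually lives):

```html
<head>
  <!-- Tell the browser where the icon is, so it never requests a missing /favicon.ico -->
  <link rel="shortcut icon" href="/favicon.ico" type="image/x-icon" />
</head>
```

Combined with an actual favicon.ico in the application root, this covers both browsers that honor the link tag and those that blindly request domain/favicon.ico.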

There are a number of tools available out there (freeware or paid) which can help you inspect the web requests and responses for your browser. Using them you can identify what was requested and what was received in response. Fiddler is a free tool used for exactly this purpose.

Conclusion

Any request which cannot be fulfilled by the server is a burden to the server, to the network, and to the client making that request. We can avoid such requests with a little caution.


Website Optimization: Serve Resources from Different Hosts

Article ID : Tip-002
Article Topic : Website Optimization
Article Title : Serve Resources from Different Hosts


Preface

Today all modern browsers are multi-threaded, meaning they can render contents while downloading more than one resource simultaneously. But there are still a few restrictions: traditionally there cannot be more than 2 parallel request threads to the same host.

Explanation

When your browser finds an embedded resource in your web page, it creates a request to the host to deliver that resource. The browser adds these requests to a per-host resource download queue, where they are served on a FIFO basis. If two resources are on the same host and a third one is on another host, it is quite possible that you receive the third one before the first or second is downloaded. Here are a few facts:

1.   Resource Download Queue
Each browser adds resource requests to a download queue. While parsing the HTML contents, if the browser comes across an embedded resource, like an image file or a style sheet file, it adds it to the download queue.
2.   Priority on the Basis of Resource Type
Every embedded resource download request goes into the queue as and when the parser comes across it, and parsing of the rest continues. In the case of a JavaScript file, however, things are a bit different. If the browser finds a script file embedded in the web page, it stops parsing the rest of the page and waits for that file to be downloaded first. So if that file is late in the queue, the wait and parsing time is high.
3.   Number of Requests per Host
Today’s browsers are multi-threaded, serving content through parallel downloads to reduce wait time. But the HTTP/1.1 specification recommends limiting this to 2 parallel connections to the same host. So if many files are embedded in a web page, it is going to take considerably longer to fetch those resources.

How to Resolve

If the contents are served from different hosts, the browser can create separate threads for them and download them in parallel. It is practically impossible to have a separate host for each resource, but we can have separate hosts for different types of resources. In real scenarios even separate resource-type hosts are difficult, so we can fool the browser by serving the resources from different subdomains, like images from image.domain.com and scripts from script.domain.com. Here are a few points in short:

  • Create separate subdomains for different types of objects.
  • Separate user images from the interface images and serve them from different subdomains.
  • You can use an alias in place of actually creating a subdomain in the DNS server, like image.domain.com -> domain.com. However, efficiency is at its maximum when you have no more than 4 aliases per domain.
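A minimal sketch of the idea (the subdomain names follow the examples above and are only illustrative):

```html
<!-- Interface images come from one subdomain... -->
<img src="http://image.domain.com/header.png" alt="Header" />
<!-- ...and scripts from another, so the browser opens a separate connection queue per host -->
<script type="text/javascript" src="http://script.domain.com/site.js"></script>
```

Because the per-host connection limit applies to each hostname separately, the two downloads above no longer compete for the same two connections.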

Conclusion

For rich user interface sites, where a high number of images and media files are embedded, this technique is quite useful. Simply split the resources across multiple hosts and serve your users a much faster experience.


Website Optimization: Reduce Total Number of Objects per Web Page

Article ID : Tip-001
Article Topic : Website Optimization
Article Title : Reduce Total Number of Objects per Web Page


Preface

Whenever a web page is requested from the server, the browser also seeks the embedded objects like style sheets, script files, images, media files etc. and tries to download them along with the page. Each download creates a web request.

Explanation

A web page design always consists of a number of embedded objects. But sometimes a good UI turns out to be expensive from the perspective of the user’s time. A user will wait an average of about 10 seconds for a page to load without feedback (before refreshing / retrying). Reducing the number of objects in a page can reduce the wait time effectively. Here are a few facts.

1.   Average Header Size
Each response consists of a header along with the contents, and the header alone is approximately 512 bytes. Even if a page’s images are well optimized to reduce their size, this header information is still added to each image download. A page containing 100 such images has to download about 50 additional KB (100 x 512 bytes), which is significant even when each image itself is very small.
2.   Round-Trip Latency
Each request takes an average of 0.2 seconds to complete a round trip. So a page with 100 objects can be delayed by up to 20 seconds while loading, irrespective of the speed of the internet connection.
3.   Packet Loss
Approximately 0.7 percent of the data transferred in packets is lost, and lost packets have to be requested again from the server. Each request creates its own packets of bytes, leading to more loss-and-recovery cycles.

How to Resolve

Even if the page is optimized for the size of its objects, a few other things play a considerable role in website optimization. Reducing the number of embedded objects is one of them. Here are a few tips on how to achieve that.

  • Stitch images together (create a sprite).
  • Combine different style sheets into one on the basis of their media type, like screen, print etc.
  • Combine script files according to their features and functionality.
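The first tip can be sketched as follows (the file name, sizes and offsets are illustrative): two icons share one downloaded image, and CSS shifts the visible region of it.

```html
<style type="text/css">
  /* icons.png is one combined image; each icon is a 16x16 region of it */
  .icon        { width: 16px; height: 16px; background: url(icons.png) no-repeat; }
  .icon-home   { background-position: 0 0; }      /* first 16px strip */
  .icon-search { background-position: -16px 0; }  /* second 16px strip */
</style>
<span class="icon icon-home"></span>
<span class="icon icon-search"></span>
```

One request replaces two, and the per-request header and latency overhead shrinks accordingly.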

Conclusion

There are situations where it is required to have many embedded objects in a page, but we should still try to reduce the number to about 20 per page, and we should use cacheable objects so that they are not requested again every time the page is requested.


Great Indian Developer Summit 2009


Four well-packed days of information and technology, the latest in trend and set to change the future: that is the Great Indian Developer Summit 2009. Add to that thousands of participants, 75+ presentations on technologies and 15+ labs driven by experts from all over the globe, all organized by media leader Saltmarch Media in the great environment of IISc, Bangalore.

DAY 2:
Rich Internet Applications

This year, as in previous years, the summit set out to prove that whatever new ideas come to a developer’s mind can be achieved, to the satisfaction not only of the developer but of the consumer too.

As a developer and designer, I pulled myself into the second day of the summit, because on this day the experts were going to talk about RIA (Rich Internet Applications), the future of web applications. On this particular day GIDS covered the following topics:

Web 2.0 & Social Applications
Enterprise 2.0
Mashups
Social Networking
Ajax In Action
Comet
Dynamic Scripting
Browsers & Rich UI
Rich Web Security
Rich Web Stories

There were five parallel sessions going on at a time, so I was not able to attend every session. But I was able to focus on the sessions of my interest. I am a developer on the Microsoft platform with a keen interest in new Microsoft technologies, so I picked the sessions targeting Silverlight 3.

I attended the following sessions:

  • Unravelling the New in Microsoft Silverlight 3 by Nahas Mohammed (4 stars)
  • Deep Dive – Microsoft Silverlight Pipelines by Praveen Srivatsa (3 stars)
  • Building Rich UI using ASP.NET AJAX, AJAX Control Toolkit & jQuery by Harish Ranganathan (4 stars)
  • Reusable Components for Building Killer RIAs (on Adobe Flex) by Anirudh Sasikumar (3 stars)

You might be wondering, when I am talking about Microsoft Silverlight, what was I doing in a session on Flex? The reason is very simple: when you have the chance to learn new technologies and compare them side by side with similar ones, why shouldn’t you? So before leaving for the day I grabbed this opportunity too. I have also rated the presenters along with their content. Overall, I would say I had a nice Thursday. The biggest takeaway for me was the numerous seeds of growth opportunities, the combined result of huge exposure to technology and GIDS.

To conclude this post, I would like to encourage readers to participate in these kinds of events. They not only give you a chance to explore the new and happening world around you, but also provide a good networking environment, which in turn helps you grow in your career.

HTML Application (HTA) – Email IDs Extractor

I have written this application for those who want a simple and clean tool for extracting email IDs from text content. Actually, I had needed such a tool for a long time, ever since I got a requirement to create a database of email IDs from the Internet. I tried a lot of tools available on the Internet, but they were all so complex that one could quickly find oneself in trouble with them. So I thought of writing one myself, simple and usable by anyone with basic application knowledge. But the big question was which language or platform to pick. The answer was an HTML Application using JavaScript and VBScript.

Here I am going to tell you how to use this simple but useful tool. First, download the application from here.

So you have the RAR archive containing the tool. When you run the tool you will see the screen shown here:

[Screenshot: tool startup screen]

After the welcome screen you will see a prompt telling you that you don’t have any previously saved email IDs in the database:

[Screenshot: first-time load prompt]

Click OK and you are ready to scrape any text containing email IDs. Just grab some text containing email IDs, paste it into the left-hand textbox, and press Enter.

[Screenshot: extracted email IDs collected in the database list]

Here you go: all the unique email IDs are fetched from the text and collected in the right-hand list in a fraction of a second. The tool is smart enough not to create a duplicate database entry for an existing email ID. Whenever you reopen the Email IDs Extractor it keeps the previously saved email IDs, as long as the emailids.txt file exists in the same location as the tool (the database is text-file based).
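The heart of such an extractor, matching plus de-duplication, can be sketched in a few lines of JavaScript (this is my illustration of the idea, not the tool’s actual source; the pattern is deliberately simplified):

```javascript
// Extract unique email IDs from arbitrary text, preserving first-seen order
function extractEmailIds(text) {
  // Simplified pattern: word characters, dots, plus and hyphen, then @domain
  var matches = text.match(/[\w.+-]+@[\w-]+(?:\.[\w-]+)+/g) || [];
  var seen = {};
  var unique = [];
  for (var i = 0; i < matches.length; i++) {
    if (!seen[matches[i]]) {   // skip duplicates, like the tool does
      seen[matches[i]] = true;
      unique.push(matches[i]);
    }
  }
  return unique;
}

var sample = "Contact a@example.com or b@example.org; a@example.com again.";
console.log(extractEmailIds(sample)); // [ 'a@example.com', 'b@example.org' ]
```

The same logic works inside an HTA, since HTAs run the browser’s script engine.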

Whenever you need to use the database, just open emailids.txt in any text editor.

It is that simple!

I hope you will enjoy this tool. Don’t hesitate to drop me an email if you have any trouble using it. I will upgrade the tool according to the feedback I receive.

Enjoy!!!