When you have a global application, your servers can't all be located close to all of your users. Some users will have to hop, skip, and jump across the network to reach your app. In most cases this is OK, but here's something to keep in mind.
Huge pages: by this I mean pages that grow beyond even 50 - 100K. Anytime you start entering that range (could be because of search results, or something else), you have to remember that network latency and bandwidth start coming into play. Why? If the user has to download a large dynamic page, the time it takes will generally piss the user off. That's why we use smaller, better-compressed images and whatnot (JPEG, PNG...).
But what about the regular run-of-the-mill pages? The ASP and JSP pages? What about those pages when they get into the 500K range, or, in our latest case, 3.9MB!!!!
For a long, long time now (~2004 or maybe even earlier), web servers have been able to zip up the data before sending it across, and have the browser unzip it. (HTTP/1.1 defines both "deflate" and "gzip" as content encodings; these days most of us use gzip.) To get a web server to do that, the browser needs to let the server know it can accept zipped content and that it's OK for the server to send it.
Accept-Encoding: gzip, deflate
That's the header the browser sends to tell the server it can handle compressed responses. Well... the server side of this is not turned on by default in IIS 6.0, so you can imagine what happened when we moved some pages to .NET in our application and the "ViewState" variable grew incredibly large... we ended up with a 4MB page being transmitted across the Atlantic to a user on a DSL connection... haha! Guess what the user felt?! :)
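To make the negotiation concrete, here's roughly what the exchange looks like on the wire (the host and page are made up for illustration):

GET /search.aspx HTTP/1.1
Host: www.example.com
Accept-Encoding: gzip, deflate

HTTP/1.1 200 OK
Content-Type: text/html
Content-Encoding: gzip
Vary: Accept-Encoding

The browser sees Content-Encoding: gzip and unzips the body before rendering; if it hadn't sent Accept-Encoding, a well-behaved server would have sent the page uncompressed. The Vary header tells any caches in between that the response differs depending on the encoding the client asked for.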
The other thing to remember is that you have both static content (JS, CSS, HTML) and dynamic content (ASP, JSP, etc.). For static content, it's obvious that you want a cached zipped file to be sent out, and after the first time, if you've configured it right, the browser shouldn't re-request the same file at all (hint: set the content-expiration header).
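As a rough illustration (the dates and max-age here are made up), a static file served with both compression and a far-future expiration might come back with headers like these, after which the browser can serve it straight from its own cache:

HTTP/1.1 200 OK
Content-Type: text/css
Content-Encoding: gzip
Cache-Control: max-age=2592000
Expires: Thu, 31 Dec 2009 23:59:59 GMT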
For dynamic content you're not going to get cached files - the server has to compress each response on the fly - but think about it: if you have enough CPU headroom on your server and you're serving large pages, you could turn this on as well (make sure you have a baseline, so that if compression gets too hard on the server, you can turn it off). You may also be able to do this at the directory level to zip only certain directories, as sketched below.
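For what it's worth, here's a sketch of how you'd do this on IIS 6.0 with the adsutil.vbs script that ships in C:\Inetpub\AdminScripts (the site number "1" and the "SearchResults" directory are hypothetical; check your own metabase paths before running anything like this):

REM Turn compression on server-wide (static and dynamic)
cscript adsutil.vbs set w3svc/filters/compression/parameters/HcDoStaticCompression true
cscript adsutil.vbs set w3svc/filters/compression/parameters/HcDoDynamicCompression true

REM Or zip only a specific directory (site 1, hypothetical "SearchResults" folder)
cscript adsutil.vbs set w3svc/1/root/SearchResults/DoDynamicCompression true

REM Restart IIS so the metabase change takes effect
iisreset

From what I've read, you may also need to add your page extensions (aspx, asp) to the compression scheme's HcScriptFileExtensions list, or IIS will skip those pages.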
If you really don't want your global users to feel the pain, this is a setting you may want to turn on. IIS 7.0, I'm told, comes with compression turned on by default for static content.
According to some websites (and my feeble attempt to research Apache), compression is not turned on there by default either - you have to configure mod_deflate yourself.
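If you're on Apache 2.x, the equivalent (as far as I can tell) is mod_deflate; something roughly like this in httpd.conf, with the MIME types adjusted to taste:

LoadModule deflate_module modules/mod_deflate.so
AddOutputFilterByType DEFLATE text/html text/plain text/css application/x-javascript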
This would make a huge difference to your global users.
Again, I found this by using the Coradiant tool, which told us that the damn page was 4MB and the user was halfway across the Atlantic on a DSL connection.
I deduced that we weren't using gzip and confirmed it using the YSlow plugin for Firefox.
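If you don't have a plugin handy, a quick way to check any page is to send the Accept-Encoding header yourself with curl and look for Content-Encoding in the response (the URL is a stand-in; -D - dumps the response headers to the console and -o /dev/null discards the body):

curl -s -H "Accept-Encoding: gzip" -D - -o /dev/null http://www.example.com/search.aspx

If the page comes back without a Content-Encoding: gzip header, the server isn't compressing it.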
Good luck! And if my faithful reader(s) want to tell me about their experiences with this, that would be nice! (Of course, we may be behind the curve here in not having enabled it, but I'm curious how many others actually have. :-) )