Friday, August 15, 2008

IIS 6 & Dynamic Compression!

Ok - this really got me! In my previous post, I spoke about the cool factor with regard to enabling compression. What got me was that our application dynamically generated Crystal Reports. Once I enabled compression, this feature broke entirely. It was a disaster. We had to roll back compression.

Looking at the code and everything else, I got nowhere! Finally, I decided to turn off compression for those particular directories. Ohh...what a nightmare. As you can probably tell from my previous posts, I've always worked in the Apache-Tomcat/Linux environment, and not so much in the IIS/Windows environment. Well....I got a crash course in deciphering the oh-so-flawed documentation on Microsoft's website (where they have "disable" instead of "enable" and vice versa in the command examples - very confusing for a beginner, let me tell you).

I tried running the adsutil.vbs script directly, as suggested in the Microsoft docs:

adsutil.vbs set /LM/W3SVC//root/directory1/directory2/DoDynamicCompression FALSE

This didn't do much - the compression was still taking place. Eventually, it turned out that I had to make a separate metabase entry for every directory that I didn't want compressed, so that IIS had something to attach the property to. After struggling with the metabase, I did the following.

For every directory where I wanted to disable dynamic compression, I created an entry in the metabase.xml file:

<IIsWebDirectory Location="/LM/W3SVC//root/directory1/directory2"
    DoDynamicCompression="FALSE" />


You could also use adsutil.vbs to create the web directory and then disable dynamic compression on it:
adsutil.vbs create "/LM/W3SVC//root/directory1/directory2" "IIsWebDirectory"
adsutil.vbs set /LM/W3SVC//root/directory1/directory2/DoDynamicCompression FALSE
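
To double-check that the entry took, you can read the property back with adsutil's "get" command - it should report FALSE:

adsutil.vbs get /LM/W3SVC//root/directory1/directory2/DoDynamicCompression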

Of course, if the directory already had other settings in the metabase - SSL requirements and such - creating a fresh entry could wipe them out. So I checked the existing entries for those directories and just copied them over instead of using adsutil.

Once I did that, dynamic compression finally stopped and things started running fine again. My Google searches led me to one author who actually said that there needs to be such an entry, and that was the "aha" moment. It would have been nice for Microsoft to mention, in their voluminous notes about this property, that the IIsWebDirectory entry needs to be created. In all fairness, the metabase property documentation did say "IIsWebDirectory", but as a novice, I expected that if the "root" itself was defined as an IIsWebDirectory, everything below it would get picked up....


Well...that was my learning experience. Hopefully, others who stumble upon this will have a less harrowing experience than I had!

Wednesday, August 06, 2008

Network Latency.....

When you have a global application, your servers cannot be located close to all of your users. Some users will have to hop, skip, and jump to get to your app. While in most cases this is OK, here's something to keep in mind.

Huge pages: By this I mean pages which can grow to 50-100K or more. Anytime you start entering that range (could be because of search results, or something else), you have to remember that network latency will start coming into play. Why? If the user has to download a dynamic page of a large size, the time it takes to download will generally piss the user off. That's why we have smaller images, better-compressed images and what not (jpeg, png...).

But what about the regular run-of-the-mill pages - the ASP and JSP pages? What about those pages when they get into the 500K range, or in our latest case, 3.9MB!!!!

For a long, long time now (~2004 or maybe even earlier), web servers have had the ability to zip up the data and send it across, and have the browsers unzip it. (Both "deflate" and "gzip" are standard HTTP/1.1 content encodings; most of us use gzip now.) To get a web server to do that, the browser needs to let the server know that it can accept zipped content and that it's OK to send it.

Accept-Encoding: gzip, deflate

That's the header the browser sends to tell the server it can handle compressed content. Well...for IIS 6.0, server-side compression is not turned on by default - so you can imagine what happened when we moved some pages to .NET in our application and the "ViewState" variable grew incredibly large...we ended up with a 4MB page being transmitted over the network, across the Atlantic, to a user on a DSL connection.....haha! Guess what the user felt?! :)
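
For the curious, here's roughly what the exchange looks like when compression kicks in (the URL, host and sizes here are made up for illustration):

GET /reports/search.asp HTTP/1.1
Host: www.example.com
Accept-Encoding: gzip, deflate

HTTP/1.1 200 OK
Content-Type: text/html
Content-Encoding: gzip
Content-Length: 52814

The browser transparently unzips the body before rendering it - so a multi-megabyte page goes over the wire at a fraction of its size.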

The other thing to remember is that you have both "static" content (js, css, html) and dynamic content (asp, jsp, etc.). For static content, it's obvious that you want a cached zip file to be sent out, and after the first time, if you've coded it right, the browser shouldn't re-request the same file (hint: set the content-expiration header).
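
As a sketch, the response headers for a static file might look something like this (the date and max-age are placeholders - tune them to how often your static files actually change):

HTTP/1.1 200 OK
Content-Type: text/css
Content-Encoding: gzip
Cache-Control: max-age=604800
Expires: Fri, 22 Aug 2008 00:00:00 GMT

With headers like these, the browser serves the file from its local cache instead of asking the server again.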

For dynamic content, you're not going to get saved files, but think about it: if you have enough headroom on your server, and you are expecting large pages, you could turn this on as well (make sure you have a baseline, so that if it gets too hard on the server, you can turn it off). You may also be able to do this at the directory level, to zip only certain directories - see the sketch below.
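
A minimal sketch of what that looks like on IIS 6 with adsutil.vbs (the "bigpages" directory is a placeholder; HcDoStaticCompression and HcDoDynamicCompression are the global switches under the compression filter's Parameters key):

adsutil.vbs set W3SVC/Filters/Compression/Parameters/HcDoStaticCompression TRUE
adsutil.vbs set W3SVC/Filters/Compression/Parameters/HcDoDynamicCompression TRUE
adsutil.vbs set /LM/W3SVC//root/bigpages/DoDynamicCompression TRUE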

If you really don't want your global users to feel the pain, this is a setting you may want to turn on. IIS 7.0, I'm told, comes with this turned on by default for static content.

According to some websites (and my feeble attempt to find out about Apache), I'm told it is not turned on by default there either.
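
For the Apache folks, a minimal sketch of turning it on with mod_deflate (Apache 2.x; assumes the module was built and your paths match a standard install):

# httpd.conf - load the compression module and compress the usual text types
LoadModule deflate_module modules/mod_deflate.so
AddOutputFilterByType DEFLATE text/html text/plain text/css application/x-javascript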

This would make a huge difference to your global users.

Again - I found this by using the Coradiant tool, which told us that the damn page was 4MB and the user was halfway across the Atlantic on a DSL connection.

I deduced that we weren't using gzip, and confirmed it using the YSlow plugin for Firefox.

Good luck! - and if my faithful reader(s) want to tell me about their experiences with this, it would be nice! (Of course, we may be behind the curve here in not having enabled it, but I'm curious how many others actually have :-) )