Monday, December 22, 2008

Comparisons - Microsoft software and Iron Man!

I'm trying to understand Microsoft Operations Manager - it's a beast. It's powerful, but it's complex and, at this point, very mind numbing.

I was just starting on this when someone walked over to my desk and asked me, "What's going on?" I couldn't resist comparing it to the beast in Iron Man.

The "real" Iron Man has a very nice finesse robotic suit - where as the beast - is over engineered and can't do everything that the "real Iron Mans" robotic suit can do not to mention the beast is ugly and has a huge footprint!

I say this because when we develop tools / software, we try to make them as flexible as possible. However, when you give someone a tool with every lever possible, you need to find a way to make sure it has some value right out of the box. Otherwise, all you're left with is frustration and possibly a loss of users, not to mention productivity!

Executive Salaries - Can we please STOP talking about them!

It seems that the media can't let go of the salaries of some CEOs and management at companies. I for one am sick and tired of hearing about it.

You really have 2 options -
1. Find a person who can do a better job at a lower salary.
2. Become a board member of the company and do something about it.

I don't get why the media or people have to complain about someone else's salary. If someone is not doing their job, fire them. If they are doing it, or protected you as an investor from significant losses, then they deserve their salary. If you think you can do a better job - go for it, convince the board of directors.

What I don't get is why the hell you are talking about it day in and day out, when there are other significant topics to be discussed! Some executives don't deserve their salary, but when the company is not doing anything about it, why are you - perhaps not even an investor - so worried about someone else's pay!

Friday, December 12, 2008

Republicans killing the bailout

I have only one question - weren't most of these guys fired during this previous election?

Didn't they see what happened when a big company like Lehman went down? You really, really don't cut off the other hand when one of them is bleeding.

Update: Now they're going back to the TARP funds. Didn't the Democrats ask the White House to do that in the first place?

The Republicans can have a new mantra - "Republicans - Our Way or the Highway" or "Republicans - Ask Us How to Lower Your Net Worth"

Friday, December 05, 2008

Perl vs Java!

I had to write a simple program to send an email when a log file didn't have the right data. This log file is always appended to - and the last line is what I needed.

That meant reading the file backwards - or perhaps from the end. I wanted to find something really simple in Java to do it. As much as I looked (FileInputStream, FileReader, etc.), it all involved copious amounts of work. Since I didn't know the length of each line, I couldn't easily seek to the end of the file and set an offset!

Here's one of the examples I found: readingfilebackwards for java.
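For the curious, here's roughly what the Java route involves - a minimal sketch of my own (not from that article; the class and method names are placeholders), using RandomAccessFile to seek near the end and scan backwards for the last newline. Even this "simple" version is a fair bit of ceremony:

```java
import java.io.IOException;
import java.io.RandomAccessFile;

public class LastLine {
    // Returns the last non-empty line of a file by scanning backwards.
    static String lastLine(String path) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(path, "r")) {
            long pos = raf.length() - 1;
            // Skip trailing newline characters
            while (pos >= 0) {
                raf.seek(pos);
                int c = raf.read();
                if (c != '\n' && c != '\r') break;
                pos--;
            }
            long end = pos + 1; // one past the last character of the line
            // Scan backwards to the previous newline
            while (pos >= 0) {
                raf.seek(pos);
                int c = raf.read();
                if (c == '\n' || c == '\r') break;
                pos--;
            }
            byte[] buf = new byte[(int) (end - pos - 1)];
            raf.seek(pos + 1);
            raf.readFully(buf);
            return new String(buf);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(lastLine(args[0]));
    }
}
```

Run it as `java LastLine mylog.txt`.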

Here's how I did it in perl:
open FILE "$file";
@lines = reverse ;
foreach $line(@lines)
{
do something
}
Simplicity simplicity simplicity!

Human Communication changes....

When the Mumbai blasts happened, one thing was new - the social networks. Twitter! The number of messages on #mumbai during the blasts made for almost a blow-by-blow account. There are stories of people inside the hotel using Twitter to find out if it was really true, and blasts were occurring. Twitter for communication!! The next generation of quicker communication! The move to SMSes and Twitter! Smart phones are changing the way we communicate with each other.

I'm excited to think of what the next generation of communication is going to be - while wondering - are we as humans going to lose the actual social network of sitting at a table and making a deal?

Mumbai Blasts

This was a little disconcerting on all fronts. A number of questions raced through my head.

1. How could India, after the Parliament attack in Dec 2001, be so woefully unprepared with the placement of its commandos?
1. Why should it take 9 hrs for the commandos to get there?
1. Why on earth should there be only one unit based in one place for a country as large and populous as India?
and many more (and yes, they're all 1 because they're all screaming questions).

That said, after the initial 60 hours of grueling siege by just 10 people, we now have a situation where we're not doing anything. The reason the ISI or whoever keeps coming after us is that there's a ridiculously poor response to an attack. Right now there's a woeful lack of leadership in India. I heard there were protests in India with slogans like "A country of lions led by a leadership of donkeys" (I paraphrase) - but the quote is such an accurate depiction of most of the leaders in the country today.

I don't know what to say at this point. I feel so helpless (for lack of a better word) right now. I really wish I were in India. I wish I could be putting together some sort of team - a leadership team of sorts - to start a grassroots movement. Maybe not now, but in 5 years' time, we could be in a better position to demand more from our leaders, or put better leaders in place, or perhaps be leaders ourselves?

All that, though, starts from the individual. The more I think about it, the more I believe that what it will take to get a better country is each individual starting to think about the betterment of their surroundings, talking to their friends and neighbours, and forming groups and networks. It seems so simple - but when you have a diverse group of people whose first thought every day is "how am I going to earn my lunch today" - it gets immensely difficult.

The middle class, per se, is already caught up in chasing material wealth - so the middle class turns a blind eye, and when nothing else works, bribes its way to what it needs. We need to stop doing that. We need to ask for better leaders - and we need to start making sure that our kids can become better people!

Tuesday, November 25, 2008

Hardware at a Software problem?

I'm trying to get people to change their thinking. Should we really be throwing more hardware at inefficient code? Not all code is inefficient, but some of it is! What is the cost tradeoff? People say it's ok to throw money at hardware because it's cheaper than getting a human to fix the code, and everybody is overworked and underpaid!

I've always thought that you fix problems at the software level, so that your software runs absolutely efficiently and performs better - but is this a new way of thinking? Hardware is cheap (especially in the Intel / AMD world as compared to the Sun world), developers are expensive? I understand that at some point it will get significantly more expensive to change the hardware, but till then?

What would you do?

Monday, November 24, 2008

Faster Disks for Dbs?

I was having a conversation with my colleagues on Db performance when it veered into solid state drives for TempDb. The assumption is that since TempDb is write-heavy with comparatively few reads, the faster the disks can write, the better it is, and the more you have the I/Os split up, the faster it will be.

That got me thinking - for places that manage a number of databases, or even one with high volume, wouldn't it make a significant difference if we dropped in a solid state drive for Temp? This got me thinking further - if all of Temp is expendable, and not really stored, why should it be a drive at all?

Why not use RAM Disk instead? Power outage - doesn't matter - since the data is all Temp anyways!
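For SQL Server at least, pointing TempDb at a RAM disk is just a file move - a sketch, assuming a RAM disk is already mounted at a hypothetical R: drive (tempdev/templog are the default logical file names; the change takes effect after a restart):

```sql
-- Move TempDb's files to the RAM disk (hypothetical R: volume)
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'R:\tempdb.mdf');

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'R:\templog.ldf');
```

Since TempDb is rebuilt at every startup anyway, losing the RAM disk's contents on a power cycle costs nothing.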

But what about taking it up a notch? (If it's not done already - I'm claiming rights to it :-) ). Why don't storage providers like EMC or others offer trays of just RAM for Temp storage? DBAs could potentially create the TempDb / tablespace on the RAM disks. When DBAs are managing a large number of instances (Oracle), they could just as well point to the RAM disk in this tray.

Wouldn't that significantly improve performance? Instead of always looking at increasing I/Os on disks - especially for Temp, wouldn't this be a better way to go?

I'm writing this before any research on this topic, but I couldn't stop myself! Have any of you done any experiments with this? Do you know of / have any articles on this topic? If so - can you send me a link :)

Thursday, November 13, 2008

India - Stark Contrasts!

I recently came back from my vacation in India. I love India. It's where I was born, where I studied and was raised for the better part of my life. Now that I live in the US, and only go back for vacations and family visits, I can talk about how it has developed, and perhaps not....

What I found in most places I traveled to was... the roads need to get better. People need to be more civic-minded. People need to realize that the Government cannot do everything! Of course, you can also see that Government employees are only interested in making sure their wallets are well padded - more money flows under the table than in public. That's for another discussion.

The reason I write this is - I've visited a couple of software development shops there. They're awesome. Some of the larger ones have canteens to buy food at (not vending machines, but actual food) and a very nice atmosphere. I was surprised at the number of flat screens. I also visited a number of malls (you cannot believe the number of these complexes that have sprouted up all over the city - even in near-villages!). You can get some of the finest designer items, from Gucci to Versace to anything you name it. What's even more surprising is that most of these shops have buyers - and most of them are middle class working men and women!

Here's where the stark contrast comes in. When you enter any of these malls, it feels like you've been transported into a different country. The creature comforts, the designer brands and what not. Exit that mall and you're back in the midst of bad roads and sometimes very polluted air. I think this says more about the government of the city, state & country than anything else. The people in India are very creative. They make the best of what they have. Yet the Government does not reciprocate - all its employees do is pad their pockets with bribes and spend that money in these malls, while no services are provided to the public.

Until that mentality changes, until the people demand more, get educated (they try, but a huge uneducated population is better for the politicians - give them a beer and ask them for their vote) and DEMAND more - I don't think the city is going to change.

They changed to an open market because the country as a whole was pushed to the brink by its debt and policies. I hope the country is not pushed to the brink again due to the lack of infrastructure. This is probably the best time for India to reinvest in infrastructure - not just by private industry, but by the government too....

I guess I could have summed this whole post into one word - Infrastructure!

I wish I did have pictures though. It would definitely show the stark contrasts.

Monday, September 29, 2008

Another Open letter - call to arms for the Rescue package

Another open letter to Obama...something I wish he would say or something along these lines...

Dear American People: Today Congress didn't pass a very important bill, and I understand many of you called your Congressman or Congresswoman to oppose it because it's a bailout for Wall St. and their bad investments. However unfortunate it is, what affects us affects them, and vice versa. Today, the big banks and investment houses on Wall Street are crumbling because they are unable to get loans. What this means for us, the ordinary consumers, is that when we want to buy a car, there won't be money available; when you want a loan to grow your small business, buy a tractor to farm your land, or buy seed, the interest rate you pay will be beyond the normal, and perhaps won't allow you to make a profit. The young people who are starting a family and need a loan to buy a house won't have the ability to get a mortgage. This is why this bill is important. While it looks ridiculous that we, the taxpayers, have to bear the burden, it is because the stakes are so high that the government has to intervene. The Government must be the place where an ordinary American can go to get help, but it should also be the place where businesses come as a last resort.

Tonight, we must take this step to help these businesses while keeping our core principles of oversight and protection for the taxpayer. We must make sure that the American financial system comes out better and healthier, and this is why this package is so important.
I urge you to call your representative in the house and senate and tell them that you will not let America fail and they should not let America fail.
You voted them to be your leader and representative in the government. Tell them to lead and represent you.

Thursday, September 25, 2008

My letter to Obama wrt bailout.....

Hi Everyone,

I have seldom participated vociferously in the current election cycle (though people sitting with me at work may disagree ;-) ). However, with the current climate and McCain's latest stunt, I really wanted to put something out -- so that you may comment (I'm also sending this as a suggestion to Senator Obama). Below is the text of the email I sent to the campaign (I have no idea if they'll even see it, but it sure trumps anything they're talking about so far :-) )

Best Regards,

Murali

Sub: Alternative to Bailout
Dear Senator Obama,

Your response to McCain's political stunt is amazing, especially within the context of the current economic crisis. The problem right now is that there is an endless cycle of homeowners and communities losing a lot of value in their property, as well as losing their homes to foreclosure.

To stem the problem, you could / should propose something along the following lines:

1. The markets have liquidity problems. So let's solve that by injecting liquidity instead of buying the bad loans. (You could inject 50-100 billion dollars to give markets instant liquidity, giving the Fed the authority to buy GOOD commercial paper instead of bad assets, if it doesn't already have it. The current problem is that banks aren't lending because of their balance sheets; the Fed can surely lend and even make a profit!)
2. Invest 200 billion dollars into easing the burden on homeowners by modifying bankruptcy laws (let judges modify the mortgage contracts) and converting bad mortgages into 30-yr fixed ones. Those who can't show their income would automatically lose their homes. Only help those who have been cheated; those who went into their homes with full knowledge need to be held responsible for their actions.
3. Propose that people who can afford their homes but go into foreclosure will be brought before a judge - this (along with my previous point) will zap the Republican mantra of "What about all those people who made the right choice of not buying homes they couldn't afford?"
4. Propose that people who do go into foreclosure MUST NOT destroy the home or property, or they will face criminal action (this will prevent foreclosed homes from depreciating in value even further, and they can be sold off quicker).

These investments, coupled with a major push to resolve the underlying crisis (by media, marketing and just American resolve), will, I'm sure, get us out of the crisis much faster than anything GW or McCain can propose.

Sincerely,



PS: While I generally try to keep my comments just to tech, this is a much bigger issue which will touch everyone in almost any industry. So, I'm posting it :-). My faithful reader(s), please comment.

Tuesday, September 16, 2008

SPE - Software Performance Engineering

In my previous life as a performance engineer (I still am one, but that's for later), my boss used to harp on the 7 steps of SPE. Out of the blue, he would jump out and ask us: what is the SPE methodology, and name the steps. Most of us would just stare blankly, occasionally coming up with one or two of them (seldom in the right order).

I was reminded of it yesterday, and it was suggested that I blog it. So - what are these famed 7 steps?

1. Assess performance risk
2. Identify critical use cases
3. Select key performance scenarios
4. Establish performance objectives
5. Construct performance models
6. Determine software resource requirements
7. Determine system resource requirements


Most of those are self-explanatory. The funny thing is, in all the days I was there, we always used to try to get to that methodology and implement it. Ironically, we were doing it anyway - almost anybody who does anything with PerfEng ends up doing it, just perhaps not that clearly defined.

The only one most people may not do is Construct Performance Models - for existing systems, I would replace that with Construct a Baseline Performance Model. For new systems you can hypothesise away, but till you actually run a test, you're never going to get anything.

So - a tip of the imaginary hat to my old boss. And the bigger thing, which I'm sure he'll be happy about, is that I follow it to this day. It's insane to think that anybody in the PerfEng world wouldn't be doing any or all of it. Defining it and splitting it out into its own parts is, IMHO, more academic than anything else. It's good to document those things, but in the long run, if you get stuck in the process, you don't have time to implement.

Friday, August 15, 2008

IIS 6 & Dynamic Compression!

Ok - this really got me! In my previous post, I spoke about the cool factor with regard to enabling compression. What got me was that our application dynamically generated Crystal Reports. Once I enabled compression, this feature broke entirely. It was a disaster. We had to roll back compression.

Looking at the code and everything else, I got nowhere! Finally, I decided to turn off compression for those particular directories. Ohh... what a nightmare. As you can probably tell from my previous posts, I've always worked in the Apache-Tomcat / Linux environment, and not so much in the IIS / Windows environment. Well... I got a crash course in understanding the oh-so-flawed documentation on Microsoft's website (where they have disable instead of enable and vice versa in the command examples - very confusing for a beginner, let me tell you).

I tried running the adsutil.vbs script directly, as suggested in the Microsoft docs:

adsutil.vbs set /LM/W3SVC//root/directory1/directory2/DoDynamicCompression FALSE

This didn't do much; the compression was still taking place. Eventually, it turned out that I had to make a separate metabase entry for every directory that I didn't want compressed, since IIS had to get to it. After struggling with the metabase, I did the following.

For every directory where I wanted to disable dynamic compression, I created an entry in the metabase.xml file:

<IIsWebDirectory Location="/LM/W3SVC//root/directory1/directory2"
DoDynamicCompression="FALSE" />


You could also use adsutil.vbs to create the web directory and then disable dynamic compression:
Adsutil.vbs create "/LM/W3SVC//root/directory1/directory2" "IISWebDirectory"
Adsutil.vbs set /LM/W3SVC//root/directory1/directory2/DoDynamicCompression FALSE

Of course, if the directory requires SSL or any such conditions, they may be lost. So, I checked the previous entries for the directories and just copied them over instead of using adsutil.

Once I did that, it eventually stopped dynamic compression and things started running fine again. My Google searches led me to one author who actually said that there needs to be such an entry, and that was the "aha" moment. It would have been nice for Microsoft to mention, in their voluminous notes about this property, that the IIsWebDirectory needs to be created. In all fairness, the metabase property documentation did say "IIsWebDirectory", but as a novice, I expected that if the "root" itself was defined as an IIsWebDirectory, everything else would get picked up....


Well... that was my learning experience. Hopefully, others who stumble upon this will have a less harrowing experience than I had!

Wednesday, August 06, 2008

Network Latency.....

When you have a global application, your servers cannot all be located in the same geolocation as all your users. Some users will have to hop, skip and jump to get to your app. While in most cases this is ok, here's something to keep in mind.

Huge pages: by this I mean pages which can grow to over 50-100K. Anytime you start entering that range (could be because of search results, or something else), you have to remember that network latency will start coming into play. Why? If the user has to download a dynamic page of a large size, the time it takes to download will generally piss the user off. That's why we have smaller images, better compressed images and what not (jpeg, png...).

But what about the regular run of the mill pages? The Asp, JSP pages? What about those pages when they get into the 500K range or in our latest case 3.9Mb!!!!

For a long, long time now (~2004 or maybe even earlier), webservers have had the ability to zip up the data and send it across, and have the browsers unzip it (HTTP/1.1 supports both "deflate" and "gzip"; most of us use gzip now). To get a webserver to do that, you need to let it know that you can accept zipped content and that it's ok to send you such content.

Accept-Encoding: gzip, deflate

That's the header the browser sends to the server to allow it to do that. Well... for IIS 6.0, this is not turned on by default - so you can imagine what happened when we moved some pages to .NET in our application and the "ViewState" variable grew incredibly large... we ended up with a 4Mb page being transmitted over the network, across the Atlantic, to a user on a DSL connection..... haha! Guess what the user felt?! :)
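To get a feel for how much a bloated page can shrink, here's a quick sketch (hypothetical data, not our actual page) that compresses a repetitive blob - the kind of thing a huge ViewState is full of - using Java's built-in GZIPOutputStream:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

public class GzipDemo {
    // Gzip a byte array in memory and return the compressed bytes.
    static byte[] gzip(byte[] data) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(data);
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // A stand-in for a page bloated with repetitive markup / ViewState
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 50000; i++) {
            sb.append("<input type=\"hidden\" name=\"__VIEWSTATE\" value=\"AAAA\"/>");
        }
        byte[] page = sb.toString().getBytes();
        byte[] zipped = gzip(page);
        System.out.println("original:   " + page.length + " bytes");
        System.out.println("compressed: " + zipped.length + " bytes");
    }
}
```

On data this repetitive, gzip typically cuts the size by well over 90% - exactly why enabling compression matters so much for that transatlantic DSL user.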

The other thing to remember is that you have both "static" (js, css, html) and dynamic content (asp, jsp etc). For static content, it's obvious that you want a cached zip file to be sent out, and after the first time, if you've coded it right, the browser shouldn't re-request the same file (hint: set the content-expiration header).

For dynamic content you're not going to get saved files, but think about it: if you have enough headroom on your server and you are expecting large files, you could set this too. (Make sure you have a baseline, so that if it gets too hard on the server, you can turn it off.) You may also be able to do this at the directory level, to zip only certain directories.

If you really don't want your global users to feel pain - this is a setting you may want to turn on. IIS 7.0 I'm told comes with this turned on by default for static content.

According to some websites (and in my feeble attempt to find out about Apache) - I'm told that it is not turned on by default.

This would make a huge difference to your global users.

Again - I found this by using the Coradiant tool - it told us that the damn page was 4Mb and the user was halfway across the Atlantic, on a DSL connection.

I deduced that we weren't using gzip and confirmed by using the YSlow plugin for Firefox.

Good luck! - and if my faithful reader(s) want to tell me about their experiences with this, it would be nice! (Of course, we may be behind the curve here in not having enabled it, but I'm curious how many others actually have :-) )

Thursday, May 22, 2008

Microsoft's LiveSearch cashback program

Ooohhh... is this the Google killer? Are they going to grab market share from Google? Apparently some of the analysts think so. Hahaha is all I can say. I won't be surprised if people search on Google for "cash back" and then go to Microsoft's cashback site to take advantage of the offer. That's still secondary. Here's the kicker....

I picked the very first thing I saw on Live Search - a Canon PowerShot A470, gray - and with cash back, the lowest offer was $107.31.

I did a Google Product search for the same thing - Cheapest Price - $97.99 from Amazon!

Here's the thing, you can offer people all the candy they want, but once the candy runs out, they're not going to stay for a second longer. It's like the politicians and their promises, it only lasts till election day.

So, for Microsoft to "disrupt" the model, I sure hope they come up with a better search engine, not better candy. Otherwise, all they're going to do is spend their money and get nothing for it in return.

Ohh - and by the way - from what I see on Microsoft's site, when the vendor charges you more, you get a bigger cash back (of course, it's not enough to beat the lower price by any margin).

I think I'm going to stick with Google. I'm waiting for Google to announce its next quarter's results. It would be really nice to see Google announcing better returns and more searches, and Microsoft announcing.... well....!

Tools to Evaluate Web Design

I've spoken about how much I like Coradiant, especially the way it can capture all of the Web users' session data etc.

On a different note - while that's good for evaluating the response time of pages etc., what about how much users actually like a webpage, or what they're doing on it? I came across this posting from sixrevisions: 7 Incredibly Useful Tools for Evaluating a Web Design. Sixrevisions is going to become a daily stop for me now!

I already use YSlow and it's pretty neat, especially when you want to track down what's taking the most time, or how best to optimize your webpage - but there are a few more tools that caught my eye.

Also, in case you miss it, there is a note about Crazy Egg from one of the commenters on sixrevisions:

ncdeveloper

April 22nd, 2008

Crazy Egg is NOT for a production website by any means. I use it for the EPA websites and while it looks nice and flashy, it is not accurate at all after a few weeks. The counts just seem to go loopy after a few weeks. Support is just as weak. Every time I called, it sounded as if I woke the guy up or pulled him away from a tv show. Obviously a small company that was unprepared for moderate to high volume. If you have clients that want a real and accurate heatmap, look elswhere.


Please note that the comment is there only because I haven't tried it yet, but it's also a fairly significant one. I'm sure the reader(s) of this blog will understand that I'm in no way for or against using any of the tools on that site. I've tried only YSlow - and I highly recommend it though :)

Thursday, May 15, 2008

MySQL applications

Well, I've been using MySQL for the past few months now, and this list is again a good one to keep handy, i.e. handy applications for a MySQL database:

http://sixrevisions.com/tools/applications_mysql_databases/

Some really cool Web Developer tools

Ok....this got "Dugg" and is on my list. I've tried some of the tools provided on these pages, but some of the tools (FireBug & Web Developer for Firefox) are just amazing.

Here's a list of the Top 20 Useful tools for Web Development.
http://sixrevisions.com/tools/20_web_development_tools/

And if you do use IE (ughh)
http://sixrevisions.com/tools/internet_explorer_extensions_addons_web_developers/

Apparently, one of my colleagues who used the Microsoft Web Developer toolbar a long time ago had tremendous trouble with it. He doesn't know how it is now.

Anyways - now there's more to look at.

Monday, May 12, 2008

SQL Statements - Always evaluate on the right side of the expression

While trying to tune a SQL Stored procedure, the DBA came across this statement....

select X from [table] where id * -1 = id

That meant the optimizer didn't know what index to use, and that simple query took almost half a second with thousands of reads.

All he did was switch that around
select X from [table] where id = id * -1

It executed in 0 seconds.

The entire stored procedure shed more than 1/2 the time it took to return data!

His simple statement: Always evaluate on the Right Hand Side!
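The general principle, in my words (not the DBA's): keep the indexed column bare on one side and push the arithmetic to the other, so the optimizer can seek on the index. A hypothetical sketch (table and parameter names made up):

```sql
-- Non-sargable: the expression wraps the indexed column, forcing a scan
SELECT X FROM orders WHERE order_id * -1 = @val;

-- Sargable: the column stays bare; the arithmetic moves to the other side
SELECT X FROM orders WHERE order_id = @val * -1;
```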

Truesight TCP Timeout

So, this past week, while looking at the TrueSight data from Coradiant, something jumped out at me. Though the data was being returned on the screen (from the Web app), TrueSight was reporting it as a timeout (server timeout). This was odd.

I put in a call to the friendly Coradiant tech support. In 2 minutes, the support tech got to the bottom of it. It turns out that TrueSight's TCP timeout was set to 30 seconds. Well, with our Ajax call, that just didn't work. Getting the timeout set to 300 seconds (yes, horrible performance - I know, we are working on that too) took care of it.

The information on how to make this change is not public, and the change must be made by a TrueSight tech. So, while it's not something we can change ourselves today, the new TrueSight image will at least allow us to change it!

Wednesday, April 16, 2008

Netezza

This is just a quick post....

A colleague of mine was talking about how he was working with Netezza on a project of his, and I thought I'd look it up. It seems to be a super fast Data warehousing database. It's actually a DW Appliance!!!!

They move the analytics right next to the database, and things you would otherwise need to do by pulling data out are instead done right on the Db. Wow!

Something to check out I guess - and see if it will make sense to your app.



Tuesday, April 15, 2008

Etags - Performance slowdowns / network bandwidth waste

A friend and ex-colleague of mine (Anand) introduced me today to the Firefox plugin YSlow.

I quickly installed it on Firefox since I already had Firebug installed. This is an awesome plugin!

I ran it on our internal website, and found a number of things, but what caught my eye was Etags. I never knew what they were and wanted to dig into it a little more.

So what are Etags?
From Yahoo:

Entity tags (ETags) are a mechanism that web servers and browsers use to determine whether the component in the browser's cache matches the one on the origin server. (An "entity" is another word for a "component": images, scripts, stylesheets, etc.) ETags were added to provide a mechanism for validating entities that is more flexible than the last-modified date. An ETag is a string that uniquely identifies a specific version of a component. The only format constraints are that the string be quoted. The origin server specifies the component's ETag using the ETag response header.

Here's the entire article: ETags

Yahoo further goes on to talk about what the problem is with Etags.

The problem with ETags is that they typically are constructed using attributes that make them unique to a specific server hosting a site. ETags won't match when a browser gets the original component from one server and later tries to validate that component on a different server, a situation that is all too common on Web sites that use a cluster of servers to handle requests.

Apparently, the problem exists with both Apache and IIS 5.0 & 6.0 servers. What does this mean for load-balanced servers? It means that if a request switches from one webserver to another for any reason, your cached image is reloaded completely instead of the server just sending a quick 304 - a waste of bandwidth and perhaps worse performance. (Basically, by using an ETag on load-balanced servers, your proxy caching and content caching don't work as well as intended!)

By removing the ETag completely (especially in LB scenarios), the browser can instead just validate with the Last-Modified header.
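On Apache, the fix described in the article boils down to a single directive (a sketch - IIS needs a metabase change instead, per the linked Microsoft article):

```
FileETag None
```

With that set, Apache stops emitting ETag headers, and caches fall back to validating against Last-Modified alone.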

I'm reiterating here, what's already said very eloquently. Be sure to read the article. They have code for fixing both Apache & IIS (link to Microsoft's article)

Monday, April 07, 2008

Keeping tests short & Data Manageable: An Example

A few days ago I wrote up a blog post with some essential points to remember while performance testing. Here's why I wrote it and why I think it's a valid approach.

We had to test a very complicated application which had many facets: a Web component, a database component, lots of Windows applications, COM objects, and so on and so forth.

I first asked: What is it we want to find out from this test?
The response: Given what we do and our major transactions, how will our application be affected by migrating from Db Version X to Db Version X+1?

That was it: a simple, concise objective.

What did I (we) do?
Background: All our transactions start from the Web layer, i.e. the browser.

I used Coradiant to find the most traversed paths and listed the top 50 use cases. From there, I spoke to the project manager to get a sense of which use cases were most critical. For this particular test, all I cared about were the most critical ones. We came up with 5 (not including, of course, the Login component).

Next: Develop the scripts and test them in house. We then took this highly portable script (I used VTS in LoadRunner to give me unique users and such) to the Testing Center.

What did we set up? The boxes were all set up by the DBA and sys admin.
The scripts were so simple that it took very little time after the boxes were set up to kick off the tests (mind you, even the scenario was created at the test center): around half an hour or less. This is what I mean by keeping tests simple and reusable.

How did we test?
We first tested on Db Version X. We got X number of transactions and looked at critical response times.

Next we tested on Db Version X+1. We got a significantly lower number of transactions and much larger response times for a few transactions. We were immediately able to tell the vendor that something was wrong. By keeping the test length short and the number of transactions small, we were able to hand this data back immediately. Right after validating that our test was not bogus (by resetting all the variables and running the test again), we were able to tell our Db vendor that there was a problem. We did this in less than 2 hours from start to finish (2 hours after we started running tests on the new Db version).

This was because we were able to see our data really quickly. I stress this a lot: it's important to look at your data soon after the test. The most important part of your test is the data. If you get overwhelmed by it, you're doing your entire team a disservice.
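The before/after comparison itself can be kept just as simple as the test. Here is a sketch with made-up numbers; the transaction names and the 1.5x threshold are illustrative, not our real data:

```python
# Median response times in seconds from the two runs (illustrative values).
db_version_x = {"login": 0.8, "search": 1.2, "report": 2.1}
db_version_x1 = {"login": 0.9, "search": 4.8, "report": 2.2}

def regressions(before, after, factor=1.5):
    """Flag transactions whose response time grew more than factor-fold."""
    return {name: (before[name], after[name])
            for name in before
            if after[name] / before[name] > factor}

flagged = regressions(db_version_x, db_version_x1)
# Only "search" exceeds the 1.5x threshold in this made-up data.
```

A ten-line diff like this is exactly the kind of result you can hand back to a vendor within the hour.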

How did we solve it? We got the vendor to give us a patch that was expected in the following service pack. It immediately got our transaction count up, and we were off to the races.

This was a simple validation test. There are far more complicated things in performance testing. The reason I show this as an example is that I believe when people performance test, we try to get a lot of data, and that is good if that's what you're looking for; but if you want to validate a build or something like this, getting overwhelmed with data is not a good thing.

Caveat: If you're doing a full-scale Db X to Db Y migration, then you need a much more comprehensive test. Also, if you're going to look for memory leaks in your application, or any other kind of leak, you're probably going to need a long-running test (maybe an hour or even more) to see them. (I should mention here that my colleague actually reminded me of memory leaks and the use of long-running tests after my previous post, and I thought it would be good to note it here.)

Friday, April 04, 2008

Coradiant - How I use it

In my previous post, Performance Testing, I spoke about using Coradiant to figure out what to performance test. I must note here that Coradiant is only good for HTTP / HTTPS applications (basically web apps; for other apps, you can't use this tool).

So, what do I use Coradiant for?

1. Keep an eye on the application as a whole.
2. Figure out why, how and where a user is having trouble. (Especially when they call the help desk and it gets escalated, this is just an awesome way to figure out where the trouble is: is it the network or the server?)
3. Figure out what to put in my Loadrunner tests.
4. Help the sys admins by creating some really useful watch points to let us know what the error pages are. We found that our application was so clean (in terms of 404s), the only offending link was a missing css.
5. See if users having trouble are having trouble because of application issues or network issues, so that the correct people are notified.
6. Proactively find where pages are taking too long and let people know.
7. Use their Truesight & TruesightBI product to generate some Before and After fix graphs to validate any particular fix that has been moved into production.

Performance Testing...

Well, I've left my old job as a Performance Engineer and moved into a new job with some Perf Eng responsibilities. I've used LoadRunner to do my performance testing, so some of the terminology may be unique to LoadRunner.

Well, this got me reflecting on how best to implement a process here so that we can performance test well, standing up the whole process rapidly from scratch.

Here's what I came up with.
0. Figure out WHY you are testing: Response time? Capacity planning? Figure out WHAT you want to report (current performance? performance after changes? available headroom on hardware?). Now you can plan....

1. First off, identify what you want to test: Sure, you know you want to test your critical use cases, but what about after that? Sit down with some users and figure out what they use the most. Or you can use a really cool tool like Coradiant to do what you need. Using a tool like Coradiant will give you the most accessed pages, and you can pull user sessions to see the most used paths. Guessing will get you only so far, but if you want to regularly test your application so that your users are always happy, tools like Coradiant are very handy.

Test anything new that's going to be added to the application. Most web applications are in a constant state of development, and anything new you add should also be tested, lest you end up with a bunch of very unhappy users.

2. Keep your tests small and repeatable: Testing 100 things at the same time will overload you with data and nothing more. You may also end up producing conditions that will never occur. That doesn't mean you should test every use case individually (which in my opinion may give you some data but will probably never catch race conditions, deadlocks, etc.); it means keep a core number of tests.

When you add something new to the mix, be prepared for changes in your numbers. If you are looking for the exact same numbers, then you should be doing the exact same tests.

3. Length of tests: Depending upon the application, you need to define the length of your tests. Tests that are too long mean you'll have to wait a long time for results. Really short tests don't give you any reasonable data, because the users have not yet reached a stable state. I used to do 1-hour tests, but now I think that was overkill. A 10-minute ramp-up (100 VUsers) and a 15-minute test would have given us just about the same amount of information as an hour-long test.

4. Don't overcomplicate your testing: Remember your audience. Remember that you want to prove / test whether the application will perform well under load. There are 2 different types of load:
1. Large data set
2. Large number of users on the system.

For regular testing, you need to find some sort of midpoint and test there. If you have written your performance tests right, you should be able to simulate #2 very easily. For #1, you will have to rely on your developers to provide an adequate data set.
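Point #2 can be approximated even without a commercial tool. Here is a toy sketch using threads as stand-in virtual users; the sleep simulates server response time, so the numbers are meaningless except as a shape for the harness:

```python
import random
import threading
import time

def virtual_user(user_id, results):
    # Stand-in for one scripted transaction; a real test would hit the app.
    start = time.time()
    time.sleep(random.uniform(0.01, 0.05))  # simulated server response time
    results[user_id] = time.time() - start

results = {}
users = [threading.Thread(target=virtual_user, args=(i, results))
         for i in range(50)]
for u in users:
    u.start()
for u in users:
    u.join()

average = sum(results.values()) / len(results)
```

A real tool adds ramp-up schedules, unique test data per user, and transaction-level reporting on top of this basic fan-out/join pattern.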

Most important of all, always remember, you need to be providing useful information back to the team. Doing the same test 20 different ways will not give you useful results. Unless you define what & why you're testing, you're perhaps wasting your time.

Wednesday, March 26, 2008

Spring with ACEGI

There are a number of posts out there that cover Spring with ACEGI.
I found this interesting and useful tutorial when trying to hook up the password encoder with ACEGI:

http://www.i-proving.ca/space/Technologies/Acegi+Security+System+for+Spring
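For reference, the wiring in that tutorial boils down to something like the following. This is a sketch against the Acegi 1.x API; the bean ids, the `userDetailsService` reference, and the MD5 encoder choice are illustrative, so verify the class names against the tutorial:

```xml
<!-- Encoder bean: hashes the submitted password before comparison. -->
<bean id="passwordEncoder"
      class="org.acegisecurity.providers.encoding.Md5PasswordEncoder"/>

<!-- Hook the encoder into the DAO authentication provider. -->
<bean id="daoAuthenticationProvider"
      class="org.acegisecurity.providers.dao.DaoAuthenticationProvider">
  <property name="userDetailsService" ref="userDetailsService"/>
  <property name="passwordEncoder" ref="passwordEncoder"/>
</bean>
```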