A few days ago I wrote up a blog post with some essential points to remember while performance testing. Here's why I wrote it and why I think it's a valid approach.
We had to test a very complicated application with many facets: a Web component, a database component, lots of Windows applications, COM objects, and so on and so forth.
I first asked: What is it we want to find out from this test?
The response: Given what we do and our major transactions, how will our application be affected by migrating from Db version X to Db version X+1?
That was it: a simple, concise objective.
What did I (we) do?
Background: All our transactions start from the Web layer, i.e. the browser.
I used Coradiant to find the most traversed paths and listed the top 50 use cases. From there, I spoke to the project manager to get a sense of which use cases were most critical. For this particular test, all I cared about were the most critical ones. We came up with 5 (not including, of course, the Login component).
Next: Develop the scripts and test them in house. We then took this highly portable script (I used VTS in LoadRunner to give me unique users and such) to the Testing Center; a sketch of the kind of script I mean is below.
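To make that concrete, here is a minimal sketch of the kind of VuGen action I mean: a Login step followed by one critical use case, each wrapped in its own LoadRunner transaction so the response times report separately. The URLs, form field names, and the {UniqueUser}/{Password} parameters are placeholders for illustration, not the actual application; in our case the unique values came out of VTS.

/* Minimal VuGen action sketch: placeholder URLs and field names,
   parameterized credentials. Each step gets its own transaction so
   Login and the critical use case show up separately in the results. */
Action()
{
    lr_start_transaction("Login");

    web_submit_data("login",
        "Action=http://app.example.com/login",   /* placeholder URL */
        "Method=POST",
        "Mode=HTML",
        ITEMDATA,
        "Name=username", "Value={UniqueUser}", ENDITEM,  /* unique per Vuser */
        "Name=password", "Value={Password}",   ENDITEM,
        LAST);

    lr_end_transaction("Login", LR_AUTO);

    lr_think_time(5);   /* rough user think time between steps */

    lr_start_transaction("Critical_UseCase_1");

    web_url("summary_report",
        "URL=http://app.example.com/reports/summary",    /* placeholder URL */
        "Resource=0",
        "Mode=HTML",
        LAST);

    lr_end_transaction("Critical_UseCase_1", LR_AUTO);

    return 0;
}

Because the script is just a couple of parameterized transactions, it is easy to carry to another environment and point at a different box, which is what made it so portable.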
What did we set up: The boxes were all set up by the DBA and the sys admin.
The scripts were so simple that it took very little time after the boxes were set up to kick off the tests (mind you, even the scenario was created at the test center): around half an hour or less. This is what I mean by keeping tests simple and reusable.
How did we test?
We first tested on Db version X. We got X number of transactions and looked at the critical response times.
Next we tested on Db version X+1. We got a significantly lower number of transactions and much higher response times for a few transactions. We were immediately able to tell the vendor that something was missing. By keeping the test length short, as well as the number of transactions small, we were able to give this data back immediately. That means that right after we validated that our test was not bogus (by resetting all the variables and running the test again), we were able to tell our Db vendor that there was a problem. We did all this in less than 2 hours from start to finish (that is, 2 hours after we started running tests on the new Db version).
This was because we were able to see our data really quickly. I stress this a lot: it's important to look at your data soon after the test. The most important part of your test is the data. If you get overwhelmed by it, you're doing your entire team a disservice.
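Since the whole point was getting a quick side-by-side view of the two runs, here's a small standalone C sketch of the kind of comparison we needed. It assumes each run's summary has been exported to a simple CSV of the form TransactionName,Count,AvgRespSeconds; the file names and that format are assumptions for illustration, not native LoadRunner Analysis output.

/* Quick side-by-side comparison of two test runs.
   Assumed input format per line: TransactionName,Count,AvgRespSeconds
   (hypothetical export, not a LoadRunner Analysis file). */
#include <stdio.h>
#include <string.h>

struct row { char name[64]; long count; double avg; };

static int load(const char *file, struct row *rows, int max)
{
    FILE *fp = fopen(file, "r");
    char line[256];
    int n = 0;

    if (fp == NULL) { perror(file); return 0; }
    while (n < max && fgets(line, sizeof line, fp) != NULL)
        if (sscanf(line, "%63[^,],%ld,%lf",
                   rows[n].name, &rows[n].count, &rows[n].avg) == 3)
            n++;                 /* header or malformed lines are skipped */
    fclose(fp);
    return n;
}

int main(void)
{
    struct row base[50], after[50];
    int nb = load("dbX_summary.csv",  base,  50);   /* baseline: Db version X   */
    int na = load("dbX1_summary.csv", after, 50);   /* candidate: Db version X+1 */
    int i, j;

    printf("%-24s %10s %10s %12s %12s\n",
           "Transaction", "Count X", "Count X+1", "Avg X (s)", "Avg X+1 (s)");
    for (i = 0; i < nb; i++)
        for (j = 0; j < na; j++)
            if (strcmp(base[i].name, after[j].name) == 0)
                printf("%-24s %10ld %10ld %12.2f %12.2f\n",
                       base[i].name, base[i].count, after[j].count,
                       base[i].avg, after[j].avg);
    return 0;
}

The exact mechanics don't matter; what matters is that with a short test and a handful of transactions, this kind of comparison takes minutes, not days.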
How did we solve it? We got the vendor to give us a patch that was expected in the following service pack. It immediately got our transaction count up, and we were off to the races.
This is a simple validation test. There are a lot more complicated things in performance testing. The reason I show this as an example is that I believe that when people performance test, we try to get a lot of data, and that is good if that's what you're looking for; but if you want to validate a build or something like this, getting overwhelmed with data is not a good thing.
Caveat: If you're doing a full-scale Db X to Db Y migration, then you need a much more comprehensive test. Also, if you're going to look for memory leaks in your application, or any other kind of leak, you're probably going to need a long-running test (maybe an hour or even more) to see it. (I should mention here that my colleague actually reminded me of memory leaks and the use of long-running tests after my previous post, and I thought it would be good to note it here.)
Friday, April 04, 2008
Performance Testing...
Well, I've left my old job as a Performance Engineer and moved into a new job with some Perf Eng responsibilities. I've used LoadRunner to do my performance testing, so some of the terminology may be unique to LoadRunner.
Well, this got me reflecting and thinking about how best to implement a process here from scratch, so that we are able to performance test well and rapidly.
Here's what I came up with.
0. Figure out WHY you are testing: Response time? Capacity planning? Figure out WHAT you want to report (current performance? performance after changes? available headroom on the hardware?). Now you can plan.
1. First off, identify what you want to test: Sure, you know that you want to test your critical use cases, but what about after that? Sit down with some users and figure out what they use the most. Or you can use a really cool tool like Coradiant to do what you need. A tool like Coradiant will give you the most accessed pages, and you can look at user sessions to see the most used paths. Guessing will get you only so far, but if you want to regularly test your application so that your users are always happy, tools like Coradiant are very handy (if you don't have one, a rough log-based substitute is sketched at the end of this point).
Test anything new that's going to be added to the application. Most Web applications are in a constant state of development, and anything new you add should also be tested, lest you end up with a bunch of very unhappy users.
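If you don't have a tool like Coradiant, a rough substitute is to count hits in the web server access log. The sketch below is a minimal standalone C example; the log file name, the candidate paths, and the assumption that a substring match is good enough are all placeholders for illustration.

/* Rough "most accessed pages" count from an access log.
   File name and candidate paths are placeholders; real traffic analysis
   (sessions, full paths) is what a tool like Coradiant gives you. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *paths[] = { "/login", "/search", "/reports/summary", "/orders/new" };
    long counts[4] = { 0, 0, 0, 0 };
    char line[4096];
    FILE *fp = fopen("access.log", "r");
    int i;

    if (fp == NULL) { perror("access.log"); return 1; }

    while (fgets(line, sizeof line, fp) != NULL)
        for (i = 0; i < 4; i++)
            if (strstr(line, paths[i]) != NULL)
                counts[i]++;

    fclose(fp);
    for (i = 0; i < 4; i++)
        printf("%-20s %ld hits\n", paths[i], counts[i]);
    return 0;
}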
2. Keep your tests small and repeatable: Testing 100 things at the same time will overload you with data and nothing more. You may also end up producing conditions that will never occur. That doesn't mean you should test every use case individually (which in my opinion may give you some data, but will probably never catch race conditions, deadlocks, etc.); it means keep a core set of tests.
When you add something new to the mix, be prepared for changes in the numbers. If you are looking for the exact same numbers, then you should be running the exact same tests.
3. Length of tests: Depending on the application, you need to define the length of your tests. Tests that are too long mean you'll have to wait a long time for results. Really short tests don't give you any reasonable data, because the users have not yet reached a stable state. I used to do 1-hour tests, but now I think that was overkill. A 10-minute ramp-up (100 VUsers) and a 15-minute test would have given us just about the same amount of information as an hour-long test.
4. Don't overcomplicate your testing: Remember your audience. Remember that you want to prove / test whether the application will perform well under load. There are 2 different types of load:
1. Large data set
2. Large number of users on the system.
For regular testing, you need to find some sort of midpoint and test there. If you have written your performance tests right, you should be able to simulate (2) very easily (a small sketch of the kind of unique-user data I mean follows below). For (1), you will have to rely on your developers to provide an adequate data set.
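For (2), parameterized users usually get you most of the way. The sketch below is a tiny standalone C program that writes a CSV parameter file with a batch of unique test accounts for the Vusers to consume; the file name, column names, and account/password pattern are assumptions for illustration. For (1), the same idea applies, but the seeding has to happen in the application's own database, which is why I'd lean on the developers.

/* Generates a simple parameter file of unique test accounts for the Vusers.
   File name, column names, and the account/password pattern are placeholders. */
#include <stdio.h>

int main(void)
{
    FILE *fp = fopen("users.dat", "w");
    int i;

    if (fp == NULL) { perror("users.dat"); return 1; }

    fprintf(fp, "UniqueUser,Password\n");
    for (i = 1; i <= 100; i++)                       /* one row per Vuser */
        fprintf(fp, "perfuser%04d,Password%04d\n", i, i);

    fclose(fp);
    return 0;
}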
Most important of all, always remember that you need to be providing useful information back to the team. Doing the same test 20 different ways will not give you useful results. Unless you define what and why you're testing, you're perhaps wasting your time.