What methodology would you use to measure the load capacity of a software server application?


I have a high-performance software server application that is expected to get increased traffic in the next few months.

I was wondering what approach or methodology is good to use in order to gauge if the server still has the capacity to handle this increased load?


There are 3 answers

Dmitri T (best answer):

I think you're looking for Stress Testing and the scenario would be something like:

  1. Create a load test simulating current real application usage

  2. Start with current number of users and gradually increase the load until

    • you reach the "increased traffic" amount
    • or errors start occurring
    • or you start observing performance degradation

    whichever comes first

  3. Depending on the outcome, you can either state that your server handles the increased load without issues, or you will have identified the saturation point and the first bottleneck

  4. You might also want to execute a Soak Test - leave the system under prolonged high load for several hours or days; this way you can detect memory leaks or other capacity problems.
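The ramp-up in steps 1-3 can be sketched in a few lines. This is only a sketch: `request_fn`, the thresholds, and the step sizes are placeholders, and a real test would normally use a dedicated tool such as JMeter rather than hand-rolled code.

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def ramp_up(request_fn, start_users, max_users, step, requests_per_step=50,
            max_error_rate=0.01, max_latency_s=0.5):
    """Increase concurrency until the target is reached, errors appear,
    or latency degrades - whichever comes first.

    Returns None if the target load was handled cleanly, otherwise
    (users, error_rate, p95_latency) at the first failing step."""
    for users in range(start_users, max_users + 1, step):
        latencies, errors = [], 0

        def timed(_):
            t0 = time.perf_counter()
            try:
                request_fn()
            except Exception:
                return None  # count any exception as a failed request
            return time.perf_counter() - t0

        with ThreadPoolExecutor(max_workers=users) as pool:
            for result in pool.map(timed, range(requests_per_step)):
                if result is None:
                    errors += 1
                else:
                    latencies.append(result)

        error_rate = errors / requests_per_step
        # ~95th percentile; infinite if every request failed
        p95 = statistics.quantiles(latencies, n=20)[-1] if len(latencies) > 1 else float("inf")
        if error_rate > max_error_rate or p95 > max_latency_s:
            return users, error_rate, p95  # saturation point / first bottleneck
    return None  # target reached without issues
```

The key design point is that each step holds the load constant long enough to get a stable error rate and latency percentile before moving up; ramping without measuring per step tells you only *that* the server fell over, not *where*.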

More information: Why ‘Normal’ Load Testing Isn’t Enough

Neville Kuyt:

I would start by collecting baseline data on critical resources - typically CPU, memory usage, disk usage, and network usage - and tracking them over time. If any of those resources shows regular spikes where it stays at 100% capacity for more than a fraction of a second under current usage, you have a bottleneck somewhere. In that case, you cannot accept additional load without risking outages.
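The "pinned at 100% for more than a fraction of a second" check can be automated once you have utilization samples. The sketch below is my own illustration (the function name and parameters are invented); how you collect the samples - psutil, /proc, or a monitoring agent - is up to you.

```python
def sustained_saturation(samples, sample_interval_s, min_duration_s, threshold=100.0):
    """Return True if utilization stays at or above `threshold` for at
    least `min_duration_s` in a row.

    `samples` is a list of utilization percentages taken every
    `sample_interval_s` seconds (e.g. CPU or disk busy percent)."""
    needed = max(1, int(min_duration_s / sample_interval_s))
    run = 0
    for value in samples:
        run = run + 1 if value >= threshold else 0  # count consecutive saturated samples
        if run >= needed:
            return True
    return False
```

Run it per resource over your baseline window; any resource that trips it under *current* traffic is already a bottleneck before you add load.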

Next, I'd figure out what the bottleneck resource for your application is - it varies between applications, and it's the bottleneck resource that stops you from scaling further. Your CPU might be almost idle while you're thrashing disk I/O, for instance. Identifying it is a tricky process - load and stress testing are the way to go.

If you can resolve the bottleneck by buying better hardware, do so - it's much cheaper than rewriting the software. If you can't buy better hardware, look at load balancing. If you can't load balance, you've got to look at application architecture and implementation and see if there are ways to move the bottleneck.

It's quite typical for the bottleneck to move from one resource to the next - you get CPU under control, but when you increase traffic you start spiking disk I/O; once you resolve that, you may run into another CPU limit.

Rick James:

Test the product with one-tenth the data and traffic. Be sure the activity is 'realistic'.

Then consider what will happen as traffic grows: will RAM, disk, CPU, network, etc. usage grow linearly or not?
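One way to answer the "linearly or not" question from scaled-down runs: measure resource usage at a few load levels and compare the usage-per-unit-load ratios. This is a sketch with invented names and an illustrative tolerance; roughly constant ratios suggest linear scaling, while climbing ratios warn of super-linear growth.

```python
def growth_ratios(loads, usages):
    """Usage per unit of load at each measured level."""
    return [u / l for l, u in zip(loads, usages)]

def looks_linear(loads, usages, tolerance=0.10):
    """True if every ratio stays within `tolerance` of the first one,
    i.e. usage grows roughly proportionally with load."""
    ratios = growth_ratios(loads, usages)
    base = ratios[0]
    return all(abs(r - base) / base <= tolerance for r in ratios)
```

For example, CPU-seconds of 1.0, 2.1, and 3.9 at 10x, 20x, and 40x load is roughly linear; 1.0, 2.5, and 8.0 at the same loads is not, and extrapolating the latter to full production traffic would be optimistic.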

While you are doing that, look for "hot spots". Optimize them.

Will you be serving web pages? Databases? Etc. Each of these scales differently. (In other words, you haven't provided enough detail in your question.)

Most canned benchmarks focus on one small aspect of computing; applying the results to a specific application is iffy.