Endurance testing. Load testing. Scalability testing. Spike Testing. Stress… smoke… volume… Testing, testing, testing! Are all these performance testing types testing your patience? Do you have questions about what they are, when to use them, and how?

The good news is, we can help. One of the key pillars of our AI-led Customer Experience Transformation Platform is automated load and performance testing services, which support the practice of agile, efficient continuous testing. So we’d like to share some of our curated subject matter expertise in the realm of load testing solutions to help demystify these terms and their importance within the contact center development and QA world. For a quick introduction or a refresher to help you define these types of contact center testing and understand when, why and where to use them, check out our infographic: ‘7 Types of Automated Performance Testing Every Contact Center Needs’, then meet us back here for a deep dive on (drumroll please…) throughput.
Throughput? What’s that?
Throughput – which admittedly sounds like a mini golf or bowling move (but isn’t!) – is the measurement of the amount of work, or number of tasks, that can be completed within a given timeframe. Basically, it tells you the rate at which your contact center system can process a workload.
This is important to measure and know, because for every application and piece of your contact center environment, there’s a lot going on. Many users, both internal and external (think your contact center agents and customers) are making lots of requests and demands on your system, most likely at all times. It’s vital, therefore, to ensure your system can handle the type of capacity you’re likely to need… which, to further complicate things, can vary depending on time of year or in alignment with marketing campaigns, special programs, or annual events like open enrollment or tax season, for example.
How do you measure throughput?
Typically, throughput is measured in terms of the “transactions or requests per second” that your application can handle from the beginning to the end of your performance test.
Before you test, it’s a good idea to set a throughput goal – a number of requests per second, minute, or hour that you’re comfortable with your system handling. This allows you to see how your system stacks up against that aim once the test is complete.
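The arithmetic here is simple enough to sketch. Below is a minimal illustration of the calculation – the request counts, duration, and goal are all made-up numbers, not benchmarks from any real system:

```python
def throughput_per_second(completed_requests: int, duration_seconds: float) -> float:
    """Throughput = work completed / time taken."""
    return completed_requests / duration_seconds

# Suppose a 5-minute test run completed 4,500 requests end to end:
measured = throughput_per_second(4_500, 5 * 60)  # 15.0 requests/second

# Compare against the goal you set before the test (hypothetical value):
goal = 12.0  # requests/second we're comfortable with
meets_goal = measured >= goal
print(f"measured={measured:.1f} req/s, goal={goal} req/s, pass={meets_goal}")
```

The same ratio works for per-minute or per-hour goals; just keep the units consistent on both sides of the comparison.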
Hmm… I’m a little hungry and could use a quick snack before we continue. Let’s leave the contact center and head to the ice cream shop real quick!
At DevOps Dairy down the road, Manager Lamargene Gumbody knows just how many visitors, like us, he can handle in a day. How? He’s measured throughput!
Lamargene has timed it, and knows that his single, dependable employee, Brainfreeze Brian, can fill a cone from a soft serve machine in one minute flat. DevOps Dairy has only one soft-serve machine and only one employee, so this super basic (and delicious!) example demonstrates that Lamargene can expect that Brian will be able to serve one ice cream enthusiast per minute. That’s his throughput.
For the clearest picture of system performance and scalability, it’s a good idea to measure the throughput of different types of requests, too. For example, you could measure throughput based on the frequency, criticality, and intensity of requests, to see how many pass successfully in a test interval.
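One way to break throughput down by request type is to tally successful requests per type over the test interval. This is only a sketch – the request types and results below are invented for illustration:

```python
from collections import Counter

# Hypothetical log of (request_type, succeeded) pairs from one test interval.
results = [
    ("ivr_call", True), ("chat_message", True), ("ivr_call", False),
    ("chat_message", True), ("agent_lookup", True), ("ivr_call", True),
]
interval_seconds = 60.0

# Count only the requests that passed, grouped by type.
passed = Counter(rtype for rtype, ok in results if ok)
for rtype, count in passed.items():
    print(f"{rtype}: {count / interval_seconds:.3f} successful requests/second")
```

A breakdown like this can reveal that, say, chat traffic sails through while a critical IVR path is quietly failing.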
What does throughput tell you?
When you run any type of performance test to exercise your system – discovering bugs, issues, or failures – and measure throughput, the results tell you about the performance and scalability of your contact center system.
- High throughput numbers: Good news! This generally means that the performance is good. Your system is telling you that it can handle the expected (or sometimes even a larger) number of transactions or requests across the time period you’ve defined for your test.
- Low throughput numbers: Surprise! This generally means the opposite. Your system is struggling to handle requests and may suffer when traffic volumes – like inbound phone calls to your IVR or a flurry of chat messages submitted to your chatbot or conversational AI assistant – overload your system, causing delays, breakdowns, or even outright failures.
How do you use throughput to improve results?
During a performance test, many factors influence how well your collective contact center infrastructure is working (or not working): hardware capabilities, network bandwidth, system configuration, code efficiency, and database performance. Testing your system’s performance – whether with ad-hoc or homegrown tests or with the assistance of a performance testing services provider – involves simulating user (agent or customer, for example) activities, generating the realistic workloads you’re likely to face, and then measuring response times to ensure they meet the desired performance goals. Varying the load conditions and the number of requests will help you determine how much stress your contact center system can handle and expose any bottlenecks or breakpoints that should be addressed.
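The shape of such a test – simulate requests, vary the load, watch throughput and latency – can be sketched in a few lines. This is a toy harness under stated assumptions: `simulated_request` is a stand-in that sleeps instead of calling a real IVR or chat endpoint, and the user counts are arbitrary:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def simulated_request() -> float:
    """Stand-in for a real IVR/chat request; sleeps to mimic ~10 ms service time."""
    start = time.perf_counter()
    time.sleep(0.01)
    return time.perf_counter() - start

def run_load(concurrent_users: int, requests_per_user: int):
    """Fire requests with a given concurrency; return (throughput, worst latency)."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        latencies = list(pool.map(lambda _: simulated_request(),
                                  range(concurrent_users * requests_per_user)))
    elapsed = time.perf_counter() - start
    return len(latencies) / elapsed, max(latencies)

# Vary the load to look for the point where throughput stops scaling:
for users in (1, 5, 10):
    tput, worst = run_load(users, requests_per_user=10)
    print(f"{users:>2} users: {tput:6.1f} req/s, worst latency {worst * 1000:.1f} ms")
```

In a real engagement you’d replace the sleep with actual calls against a test environment, but the pattern of stepping up concurrency and recording both throughput and latency is the same.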
Armed with this data regarding breakpoints and maximum throughput (also known in the industry as a fixed upper-bound constraint), you can now identify and home in on specific areas for improving your system performance.
We’ve put together a few suggested steps to follow in order to use throughput to improve your system performance:
- Establish your baseline: Measure your system’s throughput under normal operating conditions. This baseline serves as a reference point for all future comparisons.
- Identify bottlenecks: Analyze throughput results to identify any bottlenecks or performance limitations within the system that don’t measure up against your baseline. These can be due to hardware, network, software, database, or configuration issues. Pinpointing these bottlenecks will help you to prioritize and focus your efforts.
- Tune performance: Now that you know where the bottlenecks exist, concentrate tuning work on those areas. This could look like optimizing code, improving database queries, adjusting configuration settings, enhancing network infrastructure, or upgrading hardware components. By addressing the identified issues, you can increase the system’s throughput.
- Run another test: Now that you’ve tuned the system, it’s time to load test your work and ensure you’ve solved the identified issues. Run the same test and measure your throughput to validate that the work done actually addressed the issue and improved performance. Don’t be shy… apply a little pressure and gradually increase the load to really test your optimization efforts. And if your system is expected to handle growing or varying workloads, conduct scalability testing at this point too, to see how well your fine-tuned system can handle the heat.
- Monitor and adjust: New code, updates to customer journeys, third-party services or applications, and outside factors like network stability, cloud contact center platforms, or carrier services are constantly in flux. Therefore, it’s important to apply the practices of continuous testing, constantly exercising and monitoring your system’s throughput with real-world scenarios. Fortunately, and unfortunately, we live in a world of rapidly changing technology, and given the pace of change, we can’t afford to operate under the belief that what worked yesterday will still work today!
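The baseline-compare-retest loop in the steps above boils down to one question: did the new measurement beat, match, or fall below the baseline? Here’s a minimal sketch of that check – the baseline, tolerance, and sample values are all hypothetical:

```python
# Baseline throughput (req/s) measured under normal conditions before tuning.
BASELINE = 15.0

def compare_to_baseline(measured: float, baseline: float = BASELINE,
                        tolerance: float = 0.10) -> str:
    """Classify a post-tuning test run against the established baseline."""
    if measured >= baseline * (1 + tolerance):
        return "improved"
    if measured >= baseline * (1 - tolerance):
        return "unchanged"
    return "regressed"  # a bottleneck may have crept back in

print(compare_to_baseline(18.2))  # → improved
print(compare_to_baseline(12.0))  # → regressed
```

Wiring a check like this into a continuous testing pipeline is what turns a one-off load test into ongoing monitoring: any run that comes back "regressed" flags a change worth investigating.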
Throughput is an extremely important metric to measure, but it’s also only one aspect of system performance to consider, and should be evaluated alongside other performance metrics, such as response time, resource utilization, error rates, and – of course! – agent and overall customer experience.