Get playful with performance testing

I recently had the honour of attending my second WOPR (Workshop on Performance and Reliability), this time in Melbourne, Australia. This year’s theme was quite different – ‘Cognitive Bias in Performance Testing’. In other words, what biases are there which affect the objectivity and accuracy of our work?

Date: 16 March 2018

We covered a huge array of discussion points and I think it would be a fool’s errand for me to attempt to summarise all of it. Instead, I want to focus on the single theme which I think will most change the way I approach my work.

Experiment or simulation?

In Performance Testing, we have the concept of a ‘workload model’. Different people call it different things, but, ultimately, it’s about building some kind of mathematical model of how much load a software system will be under. This ‘load’ has different dimensions including ‘volume’ (e.g. ‘pages per second’ or ‘business scenarios per hour’) and ‘concurrency’ (e.g. ‘concurrent users’). We also care about the nature of that load – for example, what proportion of the time are our users browsing, searching or purchasing products on our website?
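
As a rough illustration, a workload model can be as simple as a structured description of the scenario mix and target volumes. Here is a minimal sketch in Python; every scenario name and figure is invented for the sake of the example.

```python
# A minimal sketch of a workload model for a hypothetical e-commerce site.
# All scenario names and figures below are invented for illustration.
workload_model = {
    "target_throughput_per_hour": 12_000,   # business scenarios per hour
    "concurrent_users": 400,                # expected concurrency at peak
    "scenario_mix": {                       # proportion of total load
        "browse_products": 0.60,
        "search_products": 0.25,
        "purchase_product": 0.15,
    },
}

# Translate the mix into per-scenario hourly volumes.
for scenario, share in workload_model["scenario_mix"].items():
    volume = workload_model["target_throughput_per_hour"] * share
    print(f"{scenario}: {volume:.0f} scenarios/hour")
```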

When we build our workload models, we strive for realism. The aim is to plug the workload model into our load test suite and simulate what we expect real users to do in the real world. In practice, though, we often fall far short of that. How many of us validate our workload models against production? And when we do, how often do we find our model is wildly different from what happens in the real world?
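
One lightweight way to validate a model is to derive the actual traffic mix from production access logs and compare it with what we assumed. A minimal sketch, assuming a combined-format access log where the request path is the seventh whitespace-separated field:

```python
from collections import Counter

# Count requests per endpoint from a production access log.
# The file name and log layout are assumptions for this sketch;
# adjust the field index to match your own log format.
counts = Counter()
with open("access.log") as log:
    for line in log:
        fields = line.split()
        if len(fields) > 6:
            counts[fields[6]] += 1   # request path in a combined-format log

total = sum(counts.values())
for path, count in counts.most_common(10):
    print(f"{path}: {count / total:.1%} of traffic")
```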

New ways of workload modelling

This obsession with realism can be unhealthy. For one, it can lead to over-engineered workload models which take an enormous amount of time to build yet provide little or no additional value over something strung together from assumptions. It can also suck the life out of the performance testing process – aiming to represent reality is an impossible task. So how can we do workload modelling differently?

The purpose of workload modelling is to inform the volume and nature of the load we apply during load testing. With that in mind, what’s *actually* important is not simulating realistic load but finding and diagnosing performance issues during testing. How close our model is to the real world is irrelevant if we achieve that outcome.

I’m not saying we should be arbitrarily firing load at a system to understand its performance (that would be a terrible idea), but there are times when we could be a little easier on ourselves and fill in some blanks to get testing going sooner. This is where the ‘experimentation’ comes into play and I think this kind of energy is what is going to take us into the new world of exploratory performance testing.

Be curious

We generally build a number of end-to-end business scenarios into our load test suite. Something I’m sometimes guilty of myself is waiting until all of these scenarios are scripted before running any substantial testing. What if we started testing with just one scenario? Many performance issues are not tied to a specific function, so even a single scenario will pick them up. By doing this, we get the chance to find performance issues much sooner with significantly less effort. The point is to run *something* as an experiment and see what shakes out.
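
For instance, a single browse scenario is enough to start generating load. A minimal sketch using Locust; the endpoints and think times are placeholders, not taken from any real system:

```python
from locust import HttpUser, task, between

# A single end-to-end browse scenario: enough to start finding issues
# long before the full suite of scenarios is scripted.
# Endpoints and wait times are placeholders for this sketch.
class BrowseUser(HttpUser):
    wait_time = between(1, 5)  # think time between actions, in seconds

    @task
    def browse_products(self):
        self.client.get("/")            # landing page
        self.client.get("/products")    # product listing
        self.client.get("/products/1")  # a product detail page
```

Something like `locust -f browse.py --host https://your-test-environment` will start generating load and surfacing slow pages and errors while the remaining scenarios are still being written.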

Or even earlier, how often do you navigate around your system (assuming a web application) using a web proxy before you build any test assets at all? There’s plenty to discover:

  • Is there anything obviously slow even with just a single user?
  • Are there any errors or timeouts which look performance related?
  • Is client-side caching enabled?
  • Are the responses being compressed?
  • Are there components being called sequentially which could be called in parallel?

There’s no need to wait weeks or months to discover these kinds of issues. Be curious.
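
Even before firing up a proxy, a quick script can surface some of these signals. A minimal sketch using the requests library against a placeholder URL:

```python
import requests

# Fetch a single page and inspect a few performance-related signals.
# The URL is a placeholder; point it at your own test environment.
url = "https://your-test-environment.example.com/products"
response = requests.get(url, headers={"Accept-Encoding": "gzip"})

print(f"Status:            {response.status_code}")
print(f"Response time:     {response.elapsed.total_seconds():.2f}s")
print(f"Cache-Control:     {response.headers.get('Cache-Control', 'not set')}")
print(f"Content-Encoding:  {response.headers.get('Content-Encoding', 'not compressed')}")
```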

This leads to another topic we covered in an interactive session – exploratory performance testing. What would it look like? How are people doing this now? What new features do we need from our tools to help us make it happen? I do not have conclusive answers to these questions yet, but it has the potential to put creativity back into performance testing and help us find performance defects faster.

Be playful

My point is that it’s easy to get obsessed with perfection. We want our testing to be just like the real world, but that’s unachievable. Our job is really about finding and diagnosing performance issues – and this requires an experimental (or dare I say ‘playful’) mindset. It may sound simple, but this has been a revelation to me.

Some of this blog is inspired by findings from the 26th Workshop on Performance and Reliability (WOPR26), held on 5-7 March 2018 in Melbourne. The theme was ‘Cognitive Biases in Performance Testing’. Participants in the workshop included: Stephen Townshend, Aravind Sridharan, Joel Deutscher, Harinder Seera, Diana Omuoyo, Derek Mead, Andy Lee, Srivalli Aparna, Sean Stolberg, Scott Stevens, Eric Proegler, Paul Holland, Tim Koopmans, Ben Rowan, Stuart Moncrieff and John Gallagher.
