I spent quite some time in 2015 trying to improve the performance of web-scale digital services. Of course it wasn’t just me; I was working with the teams at Kainos to deliver new digital services for UK citizens.
So what did I learn?
Very often performance optimisation is under-valued. For some software projects performance is a box that needs to be ticked at the 11th hour before the first production release. For some folks it is all about performance testing using tool X from Acme Corporation. Others successfully test and optimise but still the production service fails when real users get at it. And of course there are those that don’t bother at all.
Let’s be really clear then: performance optimisation is more important than all the features of a service combined (hint: users won’t use your features if they are slow or unresponsive). That is a different perspective to start from, and it will challenge how you address performance.
I say “performance optimisation” because many reduce it to performance testing. And that means hiring a performance tester who knows how to use tool X from Acme Corp. But testing (and tooling) is only one small part of performance optimisation. Don’t be fooled into a false sense of safety just because you do performance testing or you selected an industry-leading tool. Make sure you are analysing the results and fixing the service (by service I mean the application and infrastructure together), and repeating. And repeating again (hint: you can’t stop performance optimisation if you keep adding features).
It’s helpful to consider performance to be a feature, not some non-functional bucket of technical debt to risk-manage. Once performance is a feature it becomes harder to de-prioritise. You can do this in multiple ways, but one is to make performance part of the acceptance criteria for your user stories. That way it can only be skipped through the intentional negligence of your teams.
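For example, one lightweight way to express this is a response-time budget written into the story’s automated acceptance tests. Here is a minimal sketch in Python (pytest style), assuming a hypothetical /check-eligibility endpoint, a made-up test environment URL and a 500 ms budget agreed with the product owner:

```python
# A minimal sketch of a performance acceptance check, assuming a hypothetical
# /check-eligibility endpoint and a 500 ms budget agreed with the product owner.
# Run it alongside the story's functional acceptance tests (e.g. with pytest).
import time
import urllib.request

BASE_URL = "https://test.example.service.gov.uk"  # assumed test environment
RESPONSE_BUDGET_SECONDS = 0.5                      # assumed target from the story

def test_check_eligibility_meets_response_budget():
    start = time.monotonic()
    with urllib.request.urlopen(f"{BASE_URL}/check-eligibility?nino=QQ123456C") as resp:
        assert resp.status == 200
    elapsed = time.monotonic() - start
    assert elapsed <= RESPONSE_BUDGET_SECONDS, (
        f"took {elapsed:.3f}s, budget is {RESPONSE_BUDGET_SECONDS}s"
    )
```

The point is not the tooling; it is that the story cannot be called done while the check fails.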
To allow your product owners to do this you cannot leave performance optimisation to the end. Nor can you leave the provisioning of a technical environment for representative performance testing to the end. You will need to be ready to performance optimise early. Having a feature should mean having a performance-optimised feature. So how early is early enough? I would suggest starting when you understand the user need and have chosen to build a service for production – this aligns with the start of the Beta phase for UK Digital by Default.
There are some who get performance optimisation. They test, they analyse and they fix. But when real users start to use their service there are still performance problems. Unfortunately these folks often get snared by the NFR Trap: the Non-Functional Requirements did not describe the level of real-world usage that real users impose on the service. Instead, very complicated, detailed, hard-to-understand requirements were constructed by a smart software architect who didn’t understand the user need or the user context.
So challenge your performance NFRs. If you’re not satisfied, tear them up and write new performance targets. Ask yourself some hard questions about how real users will actually use the service.
If you just cannot agree on performance targets then an alternative is to provide product owners with quantitative data on the capacity available when using the feature. This data can be gathered by stress testing on a production-like environment with a full dataset. It allows the performance risk to be assessed by product owners or senior managers.
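As a sketch of what that quantitative data might look like, here is a deliberately simple Python load driver, assuming a hypothetical /search endpoint on a production-like environment. In practice you would use your load-testing tool of choice; the point is the shape of the output – concurrency, throughput and latency percentiles – which is evidence a product owner can reason about:

```python
# A minimal stress-test sketch, assuming a hypothetical /search endpoint on a
# production-like environment. Real projects would use a proper load tool;
# the point is the shape of the output: concurrency, throughput, percentiles.
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

BASE_URL = "https://perf.example.service.gov.uk"  # assumed production-like environment
CONCURRENCY = 50
REQUESTS = 1000

def timed_request(_):
    start = time.monotonic()
    with urllib.request.urlopen(f"{BASE_URL}/search?q=passport") as resp:
        resp.read()
    return time.monotonic() - start

started = time.monotonic()
with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    timings = sorted(pool.map(timed_request, range(REQUESTS)))
wall = time.monotonic() - started

print(f"concurrency={CONCURRENCY} throughput={REQUESTS / wall:.1f} req/s")
print(f"p50={statistics.median(timings):.3f}s "
      f"p95={timings[int(len(timings) * 0.95)]:.3f}s "
      f"max={timings[-1]:.3f}s")
```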
Experienced folks worry that there will be performance issues with a newly live service; it can really kill a launch. To build confidence, be ready to publish your performance results to your whole project team. To do this you will need to make them intelligible (so typical NFRs won’t cut it). Think of summary performance targets that can demonstrate the progress of performance optimisation, and the risks if the feature is accepted in its current state.
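One way to keep the published results intelligible is to report each summary target alongside what was measured, and nothing else. A sketch, with made-up target names and figures:

```python
# A sketch of an intelligible performance summary: one line per target,
# measured value, and a plain pass/fail. Targets and figures are illustrative.
TARGETS = {
    "Search results returned (95th percentile seconds)": (1.0, 0.82),   # (target, measured)
    "Application submitted (95th percentile seconds)":   (2.0, 2.60),
    "Peak concurrent users supported":                    (500, 350),   # higher is better here
}

for name, (target, measured) in TARGETS.items():
    better_is_lower = "percentile" in name
    ok = measured <= target if better_is_lower else measured >= target
    status = "MET" if ok else "AT RISK"
    print(f"{status:8} {name}: target {target}, measured {measured}")
```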
So you’ve heard of TDD — Test-Driven Development. Test-Driven Infrastructure brings the same result-oriented approach to infrastructure builds that TDD does for applications.
Very often infrastructure is built using a waterfall process even when the development teams are agile. This has the downside that working infrastructure only appears near the end, and its quality rests on the smarts of the architect. And given that the Production environment tends to be provisioned late in the project lifecycle, this exacerbates the performance risks.
Instead bring the benefits of agile to your infrastructure. Be test-driven, particularly with performance tests that drive optimisation and prove the infrastructure works. Starting with infrastructure tests – performance, failure, security – means your infrastructure design is proven as you develop it. Scaling can be added later as an iteration and tested further to find the sweet spot for your workloads.
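As an illustration of what “infrastructure tests first” can mean in practice, here is a sketch of pytest-style checks against an environment. The host name, ports and thresholds are assumptions; failure and security tests would sit alongside these in the same suite, written before the infrastructure is built:

```python
# A sketch of test-driven infrastructure checks (pytest style). Host names,
# ports and thresholds are assumptions; write the tests before the build,
# then make the infrastructure pass them.
import socket
import ssl
import time
import urllib.request

HOST = "www.example.service.gov.uk"   # assumed service host
HEALTH_URL = f"https://{HOST}/healthcheck"

def test_https_port_is_reachable():
    with socket.create_connection((HOST, 443), timeout=5):
        pass

def test_tls_certificate_is_valid():
    ctx = ssl.create_default_context()
    with socket.create_connection((HOST, 443), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST):
            pass  # a failed handshake raises and fails the test

def test_healthcheck_responds_within_budget():
    start = time.monotonic()
    with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
        assert resp.status == 200
    assert time.monotonic() - start < 1.0  # assumed 1 s budget
```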
So how will you analyse your Production infrastructure as you test it early? Your expensive performance test tooling won’t help much here. Instead embrace the DevOps Doctrine and bring forward all of the traditional ops monitoring features (hint: they are typically only needed after go-live, so they don’t get built until the end). This way you can test and optimise your monitoring as you test during development. Development and Ops together!
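Even before a full monitoring stack exists, instrumenting the application from day one gives you something to analyse during those early tests. A minimal sketch of a timing decorator that emits structured log lines your ops tooling can later aggregate (the operation and function names are illustrative):

```python
# A minimal sketch of bringing monitoring forward: time each operation and
# emit a structured log line that ops tooling can aggregate later.
import functools
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("metrics")

def timed(operation):
    """Decorator that logs the duration of the wrapped call in milliseconds."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            try:
                return fn(*args, **kwargs)
            finally:
                elapsed_ms = (time.monotonic() - start) * 1000
                log.info('{"metric": "%s.duration_ms", "value": %.1f}', operation, elapsed_ms)
        return wrapper
    return decorator

@timed("search_applications")
def search_applications(query):
    time.sleep(0.05)  # stand-in for real work
    return []

search_applications("passport")
```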
Be ready for performance defects you cannot fix. You will need to make bold decisions based on the risk to your service.
This is more likely than you may realise, even if you have the skills to fix your own code. If, for example, you have selected commercial products to include in your service, you are already exposed to this risk. You will not have the ability to fix issues yourself and will instead be dependent on the vendor.
So when selecting products, get assurances before you buy that there are real-world case studies or lab test results demonstrating the product can meet your performance targets. If your project buys anyway then you are dependent on the vendor unless you design your way out of it. Doing so is clearly a last resort, but if the risk is too great you may need it to protect the users of your service.
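“Designing your way out” usually means putting something of yours between the user and the vendor component. A sketch of one common shape, a hard timeout plus a cached fallback around a slow vendor lookup (the vendor client below is a stand-in, not a real product API):

```python
# A sketch of insulating users from a slow vendor component: call it with a
# hard timeout and fall back to the last known good answer. The vendor client
# below is a stand-in, not a real product API.
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

_cache = {}
_pool = ThreadPoolExecutor(max_workers=10)

def slow_vendor_lookup(reference):
    time.sleep(3)                      # simulate an occasionally slow product
    return {"reference": reference, "status": "valid"}

def lookup_with_protection(reference, timeout_seconds=0.5):
    future = _pool.submit(slow_vendor_lookup, reference)
    try:
        result = future.result(timeout=timeout_seconds)
        _cache[reference] = result     # remember the last good answer
        return result
    except FutureTimeout:
        # Serve stale data (or a graceful "try again later") rather than hang.
        return _cache.get(reference, {"reference": reference, "status": "unknown"})

print(lookup_with_protection("APP-12345"))
```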
Data remains one of the most difficult elements to reach agreement on, especially for government services given information security and privacy. Information Security custodians may inform you that you may not use real data for non-production purposes, for reasons. This often results in no full-sized real data being available for functional and performance testing.
However it is clear that you cannot performance test without a full-sized dataset. There are also risks in using synthetic full-sized datasets for functional and performance testing, given that the characteristics of the real data can be important (hint: data is more important than the code; invest more of your time getting the data right).
So make the clear case for full-sized real data for performance testing. Take it to senior management if necessary. It may be necessary to pseudo-anonymise personally identifiable information as part of the agreement to use it, but this is worth it.
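As an illustration of what that pseudo-anonymisation might look like, here is a sketch that keeps the size and shape of a full dataset but replaces identifying fields with keyed, deterministic substitutes. The file and column names are made up, and the key would be held by the data custodian, not the project:

```python
# A sketch of pseudo-anonymising a full-sized dataset: identifying fields are
# replaced with deterministic keyed hashes so joins and volumes are preserved,
# while the originals never reach the test environment. Names are illustrative.
import csv
import hashlib
import hmac

SECRET_KEY = b"rotate-me-and-keep-out-of-source-control"  # assumed, held by data custodian
PII_COLUMNS = {"name", "nino", "email"}                    # assumed identifying columns

def pseudonymise(value):
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

with open("applications.csv", newline="") as src, \
     open("applications_pseudo.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        writer.writerow({k: pseudonymise(v) if k in PII_COLUMNS else v
                         for k, v in row.items()})
```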
Hey, so what about those top 10 optimisations you mentioned at the beginning…
In summary then, here are 10 things I’d recommend you follow to help mitigate performance risks.