Is Performance testing before go-live a worthwhile practice?
Rajesh Sarangapani
For every performance testing professional and purist out there who has been religiously practicing proactive performance testing before go-live, isn’t this a sacrosanct question?
We can expect that they would invariably react with numerous examples of how they proactively identified performance issues before go-live, leading to better hardware utilization and improved customer experience — and, over and above that, how they protected the brand from being tarnished or even improved sales through this proactive approach.
Of course, the benefits of proactive performance initiatives far outweigh those of “throwing more hardware at application performance problems.” But how relevant is this practice in the age of:
- DevOps methods of software delivery, where the expectation is to deliver new capabilities at supersonic speed
- Cloud adoption, where additional hardware can be provisioned by a simple policy or rule
- New deployment strategies that can control rollouts to specific services, servers, and users
- Our inability to establish test environments at the right scale or size to reflect the complexity of the application ecosystem
- New observability tools that accelerate diagnostics so that root causes can be identified quickly; coupled with DevOps delivery methods, changes can be pushed and tested more swiftly in production than in test environments
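To make the cloud “policy/rule” point concrete: provisioning rules can be as simple as a CPU threshold that adds or removes instances. The sketch below is a toy illustration in Python, not any particular cloud provider’s API; the thresholds, bounds, and function name are all illustrative assumptions.

```python
def desired_instances(current: int, avg_cpu_pct: float,
                      scale_up_at: float = 70.0, scale_down_at: float = 30.0,
                      min_instances: int = 2, max_instances: int = 10) -> int:
    """Toy autoscaling rule: add an instance when average CPU exceeds the
    upper threshold, remove one below the lower threshold, and otherwise
    hold steady — always staying within the configured bounds.
    (Thresholds and bounds here are illustrative, not recommendations.)"""
    if avg_cpu_pct > scale_up_at:
        return min(current + 1, max_instances)
    if avg_cpu_pct < scale_down_at:
        return max(current - 1, min_instances)
    return current
```

Real autoscaling policies add cooldown periods and smoothing over metric windows, but the core decision is often this simple — which is exactly why some teams feel capacity problems can be absorbed rather than tested for up front.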
A few clients who have implemented CI/CD pipelines also question whether they need to do performance testing at all, since they can release fixes to production as issues arise. Why should they invest money in proactive performance testing when they can get issues fixed in production faster than before? And as applications migrate to the cloud, does pitching early performance testing bring any value to the table when the op-ex route can effectively maintain those applications in the cloud?
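One middle path between “test before go-live” and “fix it in production” is to embed a lightweight performance gate in the CI/CD pipeline itself, so a release only proceeds when latency stays within budget. The sketch below is a minimal, hypothetical example: it load-tests a stand-in function (a real gate would hit a staging endpoint over HTTP) and fails the build if 95th-percentile latency exceeds a threshold. All names, request counts, and budgets here are assumptions for illustration.

```python
import concurrent.futures
import random
import time

def handle_request() -> float:
    """Stand-in for a real service call; returns observed latency in
    seconds. A real gate would time an HTTP request to staging."""
    start = time.perf_counter()
    time.sleep(random.uniform(0.005, 0.02))  # simulated service work
    return time.perf_counter() - start

def p95(latencies) -> float:
    """95th-percentile latency using the nearest-rank method."""
    ordered = sorted(latencies)
    rank = max(0, int(len(ordered) * 0.95) - 1)
    return ordered[rank]

def run_gate(requests: int = 200, workers: int = 20,
             budget_s: float = 0.050) -> bool:
    """Fire concurrent requests and pass only if p95 is within budget."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        latencies = list(pool.map(lambda _: handle_request(), range(requests)))
    observed = p95(latencies)
    print(f"p95 = {observed * 1000:.1f} ms (budget {budget_s * 1000:.0f} ms)")
    return observed <= budget_s

if __name__ == "__main__":
    # In a pipeline, a non-zero exit code would block the release.
    raise SystemExit(0 if run_gate() else 1)
```

A gate like this is not a substitute for a full pre-go-live performance test, but it catches gross regressions on every build at near-zero marginal cost — which is often the pragmatic answer to the “why test at all?” question.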
So, for all practitioners out there, the obvious question that invariably needs to be answered is: what is the right “timing” to conduct performance tests and assessments?
- Start early
- Before go-live
- Shift right: After go-live
- Shift right: Use observability platforms to conduct RCA and fix issues
- No need to test, throw more hardware
We think there is no silver-bullet answer that mitigates performance risks in every situation.
The answer needs to be framed by considering organizational context and culture, budget spend versus risk appetite, business criticality versus brand identity, delivery practice maturity, the technology adoption curve, and timelines to market introduction.
All of these factors determine how an organization reacts to this challenge of testing “early” versus “late”, or “no testing” versus “quick diagnosis and fix”, rather than investing in long cycles of performance testing before go-live.
There is definitely no single approach that can be termed better than the others. You may have to opt for a context-specific approach that entails performance engineering practices apt for that context, balancing risks, time, and costs.
Cigniti has built a dedicated Performance Testing CoE that focuses on providing solutions around performance testing and engineering for our global clients. We perform an in-depth analysis of each client’s context and provide a holistic approach that can range from component-level practices, dynamic profiling, and capacity evaluation to testing and reporting, helping isolate bottlenecks and provide appropriate recommendations.
Cigniti’s contextual “performance engineering framework” enables clients with services that can deliver end-to-end testing & engineering, while our “analytics-driven workload modeler” helps avoid all workload modeling hassles. We maintain a dedicated pool of resources with expertise in a wide range of tools, technology stacks, and processes.
Cigniti teams follow a robust, contextual performance testing methodology to test applications.
Cigniti has delivered over 100 performance test engagements, providing 40% to 70% cost savings and over 50% efficiency gains. The applications we commonly test span HTTP, HTTPS, XML, SOAP, Java-based protocols, FTP, UI-driven, and headless workloads.
We have strong partnerships with tool vendors, which helps our clients choose the right tool based on their requirements and budget.
Need help? Talk to our Performance Testing experts to learn more about the right “timing” to conduct performance tests and assessments.