No business should treat performance as an afterthought, because slow-loading or unresponsive applications can quickly cut into revenue. Unsatisfied users will most likely not return and will move on to more reliable services. Therefore, I highly recommend integrating performance aspects into your development chain.
This is the third blog post on the performance engineering maturity model. Here I will outline activities that help you improve the performance of your business applications. The chart below shows the nine practices of the performance engineering maturity model; each practice is split into three activities across the maturity levels. Lower-level activities are easier to implement but less effective.
Keep in mind that there is no need to reach the highest level across all domains. Each organization should decide which activities suit it best.
Activities of the build domain
I recommend trying to reach a higher level in the build domain: the largest proportion of performance failures is caused by poor application design, so your chances of success here are quite high. Below I will outline the three practices and their performance engineering activities.
Early Performance Analysis (PA)
- PA3: Comparison of actual response times with earlier releases
- PA2: Response time measurement is part of unit tests
- PA1: Developers investigate long running requests
Standards and Metrics (SM)
- SM3: Follow a specified performance engineering process and verify agreed key performance metrics
- SM2: Educate developers in performance analysis and use supportive tools
- SM1: Specify a set of key performance indicators
Design for Performance (DP)
- DP3: Performance quality gates are in place and used to ensure that new releases are within agreed boundaries
- DP2: Performance tests are fundamental part of the build process
- DP1: Follow performance best practices
Activities of the test domain
Verification and validation of non-functional requirements is a fundamental step towards reliable business applications. Below I will outline the three practices of the test domain and their performance engineering activities.
Single User Performance Checks (SP)
- SP3: End-to-End backend performance measurement
- SP2: Response time measurement is part of automated regression tests
- SP1: Manual response time measurement
Realistic Multi-User Performance Tests (RM)
- RM3: A production-like performance test environment is in place for business-critical applications
- RM2: Calculate load pattern based on given environment capacity and run test scenarios to verify performance requirements
- RM1: Document the analysis approach and use performance testing tools to simulate concurrent users and measure response times
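In practice you would use a dedicated load testing tool for RM1, but the core idea of simulating concurrent users and measuring response times can be sketched in a few lines. The `send_request` function is a placeholder assumption standing in for a real request against the application.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def send_request():
    """Stand-in for one user request against the application (assumption)."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulated backend processing time
    return time.perf_counter() - start

def run_load_test(concurrent_users, requests_per_user):
    """Simulate concurrent users and collect per-request response times."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        futures = [pool.submit(send_request)
                   for _ in range(concurrent_users * requests_per_user)]
        return [f.result() for f in futures]

timings = run_load_test(concurrent_users=5, requests_per_user=4)
average = sum(timings) / len(timings)
p95 = sorted(timings)[int(0.95 * (len(timings) - 1))]
```

Scaling `concurrent_users` according to the load pattern calculated from your environment capacity is the step up to RM2.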
Test Boundaries (TB)
- TB3: The performance testing approach includes page design analysis, and performance engineers maintain an antipattern knowledge base
- TB2: Test and optimize applications to provide the best possible user experience at locations with slow WAN connections
- TB1: Simulate future growth load patterns
Activities of the operate domain
Small changes in usage patterns can have a huge impact on user experience. Therefore, I recommend trying to reach a higher performance engineering maturity level in the operate domain, because this helps you continuously monitor and optimize your production environment.
Detection of Slow Running Requests (DR)
- DR3: Use synthetic performance monitoring figures to detect slow running applications
- DR2: Use log files for analysis of slow running requests
- DR1: Users identify and escalate performance issues, and the majority of those issues remain unsolved
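As a minimal sketch of DR2, the snippet below scans access-log lines for requests above a response time threshold. The log format, URLs, and threshold are assumptions for illustration; adapt the pattern to your own log layout.

```python
import re

# Hypothetical access-log format: "<timestamp> <url> <duration in ms>"
LOG_LINES = [
    "2021-03-01T10:00:01 /search 420",
    "2021-03-01T10:00:02 /checkout 2750",
    "2021-03-01T10:00:03 /login 180",
    "2021-03-01T10:00:04 /search 3100",
]

LOG_PATTERN = re.compile(r"^(\S+) (\S+) (\d+)$")

def slow_requests(lines, threshold_ms=1000):
    """Return (url, duration) pairs for requests above the threshold."""
    hits = []
    for line in lines:
        match = LOG_PATTERN.match(line)
        if match and int(match.group(3)) > threshold_ms:
            hits.append((match.group(2), int(match.group(3))))
    return hits

# The two requests above one second stand out for deeper analysis.
slow = slow_requests(LOG_LINES)  # [('/checkout', 2750), ('/search', 3100)]
```

Running such an analysis regularly surfaces slow requests before users have to escalate them.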
Collection of Performance Metrics (PM)
- PM3: Application and service owners use real user performance metrics and visualize them on dashboards
- PM2: Developers, testers, and support teams have access to performance metrics for all critical applications
- PM1: Integrate and use performance monitoring metrics for business critical applications
Performance Metrics Drive Business Decisions (PB)
- PB3: Customer impact metrics drive ongoing improvement and innovation initiatives
- PB2: Specialists regularly review actual performance metrics and provide tuning recommendations
- PB1: User experience monitoring is in place for all critical applications
There is more to come…
Keep in mind that there is no need to reach the highest level and implement all activities. Use this maturity model for comparison with your competitors and as a supportive guideline to improve your performance engineering activities over time.
In my final post on the PEMM I will provide a worksheet that you can use right away to measure your performance maturity level.