Part 4 – Assessment of your Performance Engineering Maturity Level

Every business depends on reliable, responsive applications, and with the rise of digitalisation, user experience will become more important than ever before. Treating performance as an afterthought directly impacts your revenue. Therefore, I highly recommend closing the blind spots in your performance engineering approach.

Obviously, it’s not possible to reach a mature performance engineering level overnight, but you should be aware that there may be gaps which hold you back from achieving an excellent user experience. Here are three simple steps to help you identify the maturity level of your performance engineering activities.

Step 1 – Assessment matrix

Answer the questionnaire to understand how well you’ve implemented performance considerations in your development chain.

  • Answer a question with Y (yes) if you follow this approach
  • Answer a question with N (no) if you don’t have this activity in place

You can use this sample assessment matrix. I’ve answered the questions for a sample firm and will use their results in the two steps below.

Step 2 – Your Firm’s high water mark

Extract your score for each practice and create a spider chart; this will give you a better understanding of potentially missed opportunities. The diagram below contains the high water mark of our sample firm. Clearly, they have weak areas in their test and operate domains.


Step 3 – Your Firm’s maturity level

Finally, calculate the average of your high water mark for each domain to get the corresponding performance engineering maturity level.


The chart above outlines that

  • our sample firm’s focus is on design for performance, as they have reached a very mature level in their build domain.
  • they have weak areas in the test and operate domains, which will most likely result in degraded user experience, reduced reactivity, and missed opportunities.
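The high-water-mark and averaging steps above can be sketched in a few lines of Python. This is only an illustrative sketch: the Y/N answers below are hypothetical, and it assumes a practice’s high water mark is the highest consecutive level answered with Y.

```python
# Hypothetical Y/N answers per practice, grouped by domain, keyed by maturity level.
answers = {
    "build": {
        "Early Performance Analysis": {1: "Y", 2: "Y", 3: "Y"},
        "Standards and Metrics":      {1: "Y", 2: "Y", 3: "N"},
        "Design for Performance":     {1: "Y", 2: "Y", 3: "Y"},
    },
    "test": {
        "Single User Performance Checks":         {1: "Y", 2: "N", 3: "N"},
        "Realistic Multi-User Performance Tests": {1: "Y", 2: "N", 3: "N"},
        "Test Boundaries":                        {1: "N", 2: "N", 3: "N"},
    },
    "operate": {
        "Detection of slow running requests":           {1: "Y", 2: "N", 3: "N"},
        "Collection of Performance Metrics":            {1: "Y", 2: "Y", 3: "N"},
        "Performance metrics drive business decisions": {1: "N", 2: "N", 3: "N"},
    },
}

def high_water_mark(levels):
    """Highest consecutive maturity level answered with Y (0 if none)."""
    mark = 0
    for level in (1, 2, 3):
        if levels.get(level) == "Y":
            mark = level
        else:
            break
    return mark

# Step 2: high water mark per practice; Step 3: domain average = maturity level.
for domain, practices in answers.items():
    marks = [high_water_mark(levels) for levels in practices.values()]
    maturity = sum(marks) / len(marks)
    print(f"{domain}: high water marks {marks}, maturity level {maturity:.1f}")
```

The per-domain averages printed here are what you would plot to see, at a glance, which domain lags behind.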

Once you’ve identified the blind spots in your development process, you can create a remediation plan. As already mentioned in my earlier posts on this topic, most companies operate at level 2.

A higher maturity level will reduce the stress of firefighting and eliminate war rooms. Finally, thanks to early failure identification, you will save money, keep your timelines, and improve user experience.


Part 3 – Performance Engineering Maturity Model at Work

No business should treat performance as an afterthought, because slow-loading or unresponsive applications can quickly have an impact on revenue. Unsatisfied users will most likely not return and will move on to other, more reliable services. Therefore, I highly recommend integrating performance aspects into your development chain.

This is the third blog post on the performance engineering maturity model. Now I will outline activities which help you improve the performance of your business applications. The chart below outlines the nine practices of the performance engineering maturity model; each practice is split into three activities across the maturity levels. Low maturity level activities are less effective but easier to implement.


Keep in mind that there is no need to reach the highest level across all domains. Each organization should decide which activities suit it best.

Activities of the build domain

I recommend trying to reach a higher level in the build domain: the biggest proportion of performance failures is due to bad application design, so your success ratio here is quite high. Below I will outline the three practices and their performance engineering activities.

Early Performance Analysis (PA)

  • PA3: Comparison of actual response times with earlier releases
  • PA2: Response time measurement is part of unit tests
  • PA1: Developers investigate long running requests
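The PA2 activity, measuring response time inside unit tests, can be sketched as follows. This is a minimal illustration, not the model’s prescribed implementation: `process_order()` and the 50 ms budget are hypothetical examples.

```python
# Sketch of PA2: a unit test that asserts a response-time budget alongside
# the functional check. process_order() and the budget are illustrative.
import time
import unittest

def process_order(order_id):
    # Placeholder for the business logic under test.
    return {"id": order_id, "status": "confirmed"}

class OrderPerformanceTest(unittest.TestCase):
    RESPONSE_TIME_BUDGET = 0.050  # seconds; an agreed key performance metric

    def test_process_order_within_budget(self):
        start = time.perf_counter()
        result = process_order(42)
        elapsed = time.perf_counter() - start
        # Functional assertion and performance assertion side by side.
        self.assertEqual(result["status"], "confirmed")
        self.assertLess(elapsed, self.RESPONSE_TIME_BUDGET)
```

Run with `python -m unittest` like any other test; a release that blows the budget fails the build just like a functional regression, which also gives you the PA3 comparison against earlier releases for free if you log the measured times.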

Standards and Metrics (SM)

  • SM3: Follow a specified performance engineering process and verify agreed key performance metrics
  • SM2: Educate developers in performance analysis and use supportive tools
  • SM1: Specify a set of key performance indicators

Design for Performance (DP)

  • DP3: Performance quality gates are in place and used to ensure that new releases are within agreed boundaries
  • DP2: Performance tests are a fundamental part of the build process
  • DP1: Follow performance best practices

Activities of the test domain

Verification and validation of non-functional requirements is a fundamental step towards reliable business applications. Below I will outline the three practices of the test domain and their performance engineering activities.

Single User Performance Checks (SP)

  • SP3: End-to-End backend performance measurement
  • SP2: Response time measurement is part of automated regression tests
  • SP1: Manual response time measurement

Realistic Multi-User Performance Tests (RM)

  • RM3: A production-like performance test environment is in place for business-critical applications
  • RM2: Calculate load patterns based on the given environment capacity and run test scenarios to verify performance requirements
  • RM1: Document the analysis approach and use performance testing tools to simulate concurrent users and measure response times
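One common way to derive a load pattern from environment capacity, as in RM2, is Little’s Law: concurrent users N = throughput X × (response time R + think time Z). The sketch below uses hypothetical capacity and timing figures purely for illustration.

```python
# Sketch of RM2: deriving a load pattern from environment capacity
# via Little's Law, N = X * (R + Z). All figures are hypothetical.

def concurrent_users(throughput_per_sec, response_time_sec, think_time_sec):
    """Little's Law: concurrent users = throughput * (response + think time)."""
    return throughput_per_sec * (response_time_sec + think_time_sec)

# Assume production handles 200 requests/s and the test environment
# has half that capacity; users think for 10 s between steps.
production_throughput = 200.0          # requests per second
test_env_capacity_ratio = 0.5
target_throughput = production_throughput * test_env_capacity_ratio  # 100 req/s

users = concurrent_users(target_throughput,
                         response_time_sec=0.8,
                         think_time_sec=10.0)
print(f"Simulate ~{users:.0f} concurrent users")  # ~1080
```

Feeding a number like this into your load testing tool keeps the simulated pressure proportional to what the scaled-down test environment can realistically represent.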

Test Boundaries (TB)

  • TB3: The performance testing approach includes page design analysis, and performance engineers maintain an antipattern knowledge base
  • TB2: Test and optimize applications to provide highest possible user experience on slow WAN locations
  • TB1: Simulate future growth load patterns

Activities of the operate domain

Small changes in usage patterns can have a huge impact on user experience. Therefore, I recommend trying to reach a higher performance engineering maturity level in the operate domain, because this will help you continuously monitor and optimize your production environment.

Detection of slow running requests (DR)

  • DR3: Use synthetic performance monitoring figures to detect slow running applications
  • DR2: Use log files for analysis of slow running requests
  • DR1: User identify and escalate performance issues and the majority of those will not be solved

Collection of Performance Metrics (PM)

  • PM3: Application and service owners use real-user performance metrics and visualize them on dashboards
  • PM2: Developers, testers, and support teams have access to performance metrics of all critical applications
  • PM1: Integrate and use performance monitoring metrics for business-critical applications

Performance metrics drive business decisions (PB)

  • PB3: Customer impact metrics drive ongoing improvement and innovation initiatives
  • PB2: Specialists regularly review actual performance metrics and provide tuning recommendations
  • PB1: User experience monitoring is in place for all critical applications

There is more to come…

Keep in mind that there is no need to reach the highest level and implement all activities. Use this maturity model for comparison with your competitors and as a supportive guideline to improve your performance engineering activities over time.

In my final post on the PEMM, I will provide a worksheet which you can use right away to measure your performance engineering maturity level.

Part 2 – How to bring the Performance Engineering Maturity Model into play

This post is the second in my series on the performance engineering maturity model (PEMM). In the first, I introduced the overall idea; now it’s time to go into more detail.

Whenever an application doesn’t fulfill its performance requirements, it’s clear that something went wrong, and root-cause analysis can be very challenging. In my experience, a proactive approach is much better because it gives you confidence and spares you frustrated users and unresponsive applications.

As already mentioned in my previous post, I’ve created a performance engineering maturity model which provides the required transparency about best practices in this discipline. Obviously, not every company needs to reach the highest level in every available performance engineering subject. Therefore, I use a maturity level based approach which allows tailor-made adoption and improvement over time. In this post, I will outline the three domains and nine practices of this maturity model.


The 3 Domains

I’m convinced that only a lifecycle performance engineering approach will help companies reach an acceptable user experience. Therefore, I think it’s essential that performance considerations become part of every stage in the development chain. Businesses that build highly responsive applications consider performance from day one of their development process, verify performance requirements in pre-production stages, and continuously monitor and improve performance metrics in production. In summary, the three domains of the performance engineering maturity model are build, test, and operate.

The 9 Practices

There are nine performance engineering practices, and each is divided into three maturity levels.

Build domain: The biggest proportion of performance issues is related to failures in software design and implementation. Therefore, integrate non-functional aspects into your early design and development considerations. The three practices of the build stage are:

  • Early performance analysis
  • Standards and metrics
  • Design for performance

Test domain: High latency or peak load patterns can result in unsatisfactory response times. Some applications behave totally differently under concurrent user load. Failures range from slow-loading pages to crashes of the entire system. Obviously, the risk of ignoring non-functional testing aspects is too high, and therefore the performance engineering practices in the test domain are:

  • Single user performance checks
  • Realistic multi-user checks
  • Test boundaries

Operate domain: Successful organizations are proactive. They know how to derive business decisions from their most valuable assets: their customer impact metrics. The operational performance engineering practices include the following:

  • Detection of slow running requests
  • Collection of performance metrics
  • Performance metrics drive business decisions

There is more to come…

In a subsequent post, I will outline the 27 activities of the performance engineering maturity model, followed by another post about how you can calculate your maturity level in this discipline.

Part 1 – Introducing a Performance Engineering Maturity Model

Speed is everything, but not everything is speed! Nobody enjoys slow-loading or error-prone applications, and bad user experience already has a significant impact on commercial revenue. I’m still amazed that the average load time of mobile sites is 19 seconds. Personally, I won’t wait more than 5 seconds for a page to load.

I’ll now give you some simple steps you can use right away to identify the blind spots in your software development chain which hold you back from providing a better user experience. There are different strategies, but I’ve decided to use a maturity level based method because it allows tailor-made, step-by-step improvement over time.

Whether you follow a waterfall or an agile development approach, there are three critical touchpoints you need to consider if you want to achieve a satisfying user experience. I call these touchpoints domains, and the PEMM allows you to measure your maturity level within each.


Level 1 – Firefighting

Businesses which operate at this level completely ignore performance best practices and solve the majority of their issues in production. Their users are often very frustrated. Their monitoring approach is reactive and log file based. The majority of performance failures are well known, but they take very long to resolve. Organizations operating in this mode expose themselves to high risks.

Level 2 – Performance Validation

Companies which have reached this level understand that testing of non-functional requirements is necessary. They have processes and tools in place which allow them to simulate production load patterns, and they share performance metrics across their organizations. For business-critical applications, they have performance monitoring in production.

Level 3 – Performance Driven

Organizations which have integrated early design and architecture validation into their development process have reached the highest level in this discipline. They have a proactive performance monitoring strategy in place, continuously analyze and improve use cases, and focus on the end user experience. Their business units understand the value of user experience and application performance metrics.

There is more to come…

According to Forrester Research, most companies currently operate at level 2. With the rise of digital services, user experience and reliability will become more important, and therefore it’s a good idea to improve your performance engineering maturity level.

In my next post, I will write about how you can bring this model into play.