
How do you Manage Security Risks in Open Source?

Open source is at the heart of almost every application. If you have ever developed a new application from scratch, the chance is very high that you’ve built it on open source components. In this post, I will outline the security risks related to open source and give you a mitigation approach.

Reasons for open source

According to Gartner, 99% of mission-critical application portfolios within Global 2000 companies contain open source components. The complexity of our services is increasing: users expect easy-to-use, responsive applications, while at the same time IT costs must be reduced. One way to deal with these growing expectations and limited resources is to build new applications on open source libraries, which helps developers speed up construction considerably.

Implementing critical functions such as encryption or asynchronous processing can be both time-consuming and challenging because there are many pitfalls involved. One is the in-depth knowledge a particular topic requires, which quickly leads to many hours of research. Another is that a self-made component may be erroneous. Therefore, many developers avoid reinventing the wheel and prefer open source components.

Risks

Your applications rely heavily on open source libraries. I assume that you have a robust security test concept in place which also includes secure code scans according to industry standards. But are you also aware of the risks introduced by your open source components?

A static application security testing solution cannot identify vulnerabilities without the actual source code. Typically, you don’t have the source of the open source libraries used in your business applications, so your code scan solution will not point out any vulnerabilities within them.

Another often ignored risk is the license terms of your open source components. While those libraries are free, neglecting to comply with their license requirements may result in business and technical risks.

Mitigations

First of all, you should be aware of all open source libraries used across your applications and development projects. This open source inventory is essential because whenever a vulnerability becomes known, you can quickly identify the affected applications and apply a bugfix.

Secondly, regularly check your open source libraries for known vulnerabilities. Whenever you are using outdated or vulnerable components, you should consider upgrading to a fixed version.
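
If you want to see how such a check could look in practice, here is a minimal sketch that queries the public OSV vulnerability database (osv.dev) for each entry of a hypothetical inventory. The inventory format and the package versions are illustrative assumptions, not a recommendation for a specific tool.

```python
# Minimal sketch: query the public OSV API (https://osv.dev) for known
# vulnerabilities in the open source components of your inventory.
# The inventory entries below are illustrative assumptions.
import requests

inventory = [
    {"ecosystem": "PyPI", "name": "requests", "version": "2.19.0"},
    {"ecosystem": "Maven", "name": "org.apache.logging.log4j:log4j-core", "version": "2.14.1"},
]

def known_vulnerabilities(component):
    """Return the OSV advisories recorded for one component."""
    payload = {
        "package": {"name": component["name"], "ecosystem": component["ecosystem"]},
        "version": component["version"],
    }
    response = requests.post("https://api.osv.dev/v1/query", json=payload, timeout=10)
    response.raise_for_status()
    return response.json().get("vulns", [])

for component in inventory:
    vulns = known_vulnerabilities(component)
    if vulns:
        ids = ", ".join(v["id"] for v in vulns)
        print(f"{component['name']} {component['version']}: {ids} -> consider upgrading")
    else:
        print(f"{component['name']} {component['version']}: no known vulnerabilities")
```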

Finally, track what open source licenses you have used in your applications including their dependencies.

There are several secure code scan platforms out there which also provide an integrated solution for open source analysis. Personally, I recommend the Checkmarx Application Security Testing (CxSAST) solution.

Why Monitoring is essential in DevOps

Time to market is more important than ever before because competition is on the rise. Successful retailers deploy a new release every 12 seconds. In this blog post, I will outline the reasons for short release sprints and shine a light on the fundamental role of monitoring in a DevOps environment.

Benefits of the DevOps approach

Tight release sprints are a challenge for your development pipeline. Design-implement-test-deploy slots are short and require excellent collaboration between your teams. Daily standup meetings help to improve and distribute knowledge about the new product quickly.

Ongoing learning and product optimization are another key aspect for teams working in DevOps mode. Especially if you develop innovative products, the requirements are often not yet in place, which makes the design very tricky. Tight feedback loops are essential: your teams continuously create and develop new features, and your clients review them. Due to this involvement, the acceptance of new products is high.

Testing and deployment to production are also no longer a pain in DevOps projects. All repetitive tasks such as functional regression tests, performance tests, and the necessary security checks are highly automated. QA specialists focus mainly on the new features, while automated tests verify the core functionality and quickly provide an overview of the actual quality of the new product.

Automated deployment solutions enable teams to promptly roll out new features or switch back to the previous version if something does not work as expected. Thanks to this high degree of automation in test and deployment activities, the failure rate is low. All parties involved are willing to share their product knowledge. Once the new release is successfully deployed, your operations teams provide the necessary insight and feedback. They continuously collect key performance metrics and share them across the organization.

Monitoring in DevOps environments

DevOps is no guarantee for error-free software. The pace of new features and changed components is high, complexity is typically increasing, and often hundreds of microservices are used. Whenever a failure occurs, there is not much time to identify the root cause and work on an appropriate resolution.

Successful DevOps teams have found a solution for this dilemma. They understood that profound monitoring is key to adequate mean-time-to-repair cycles. An appropriate monitoring approach includes all layers, all transactions, and all environments. Often they use a monitoring platform which captures the end user experience, allows a drill-down through technical components, and supports a vertical analysis of error hot spots.

Your Takeaways

  • Automation is essential in DevOps
  • Monitoring of all transactions provides the required insights
  • Horizontal and vertical drill down allows a quick hotspot analysis
  • Share monitoring data across the organization

Don’t put the advantages of DevOps projects at risk. I recommend improving your monitoring strategy so you can keep pace with your competitors and realize more of the benefits of the DevOps approach.

Performance is Everyone’s Business

Retailers such as Amazon set the user experience bar extremely high, and it seems that this is one of their secret recipes. I am not a passionate Amazon shopper, but sometimes I buy technical stuff from their fantastic online shop. Regardless of whether I use my mobile, tablet or desktop computer, their websites load fast. Several clicks later I place my order, and within a few days, the new equipment arrives.

Performance is a vertical discipline

Maybe you are not in competition with those resellers, but in the past they have quickly expanded into new fields, and suddenly your former uniqueness may disappear. Once you are in direct competition, the time available to speed up your applications will be very short.

Responsive and reliable services require a holistic approach. Let’s assume that your developers did an excellent job and considered performance from day one in their design decisions, and your test teams simulated adequate multi-user tests on a close-to-production environment. Several months after the app has been deployed to production, your formerly responsive system sucks, and the blame game starts.

Your users become extremely frustrated. They avoid using your business application. Some of them raise tickets and spend hours talking with your support team about the slowness of the application. Business units escalate the topic to your upper management, and the pressure on IT gets higher and higher. Daily war room sessions end without any outcome. There is a lot of trial and error, but nobody can tackle the performance issue.

After a while, your teams identify gaps in the monitoring chain because they are not able to correlate their data lakes, and there is no solution in place that captures the data flows. In fact, nobody has an idea of how the application components interact. This learning is essential, because from this point on your teams understand two things: first, that performance is a vertical discipline, and second, that they need a transaction monitoring solution which captures the flow across their application components 24x7.

Benefits of a performance-first enterprise

Businesses which consider performance from day one in their development pipeline save money. Often they learned this through an experience such as the one outlined above. Today, they implement, test and operate their business applications with performance in mind. All parties agree that ongoing analysis, optimization, and innovation is the best recipe for reliable and responsive business applications. Thanks to this proactive mentality, war room sessions are no longer required.

There is no reason for endless firefighting. Besides short resolution times and excellent user experience, your teams will have more time for challenging tasks such as optimization and innovation. Making performance everyone’s business is also an easy way to have more fun at work.

Turn the Ship Around: From Firefighting to Performance Driven

Jumping from one performance hotspot to the next can be very frustrating because there is never enough time to eliminate the issues. Successful companies addressed these troubles years ago. If you are still in firefighting mode, don’t worry: I will give you insights into this dilemma and a way out of this frustrating situation.

The Problem

Slow-loading or poorly performing business applications are a nightmare. The longer a slowdown continues, the higher the pressure from your customers. They expect a quick bugfix and do not understand why their IT department is not able to provide better service quality. Endless war room sessions, trial-and-error experiments and regular detailed status reports make these exercises even more annoying.

Traditional software development initiatives often neglect non-functional aspects. The business clearly describes its functional demands, software designers and developers construct the required software, and testing specialists cover those requirements with sufficient test cases. After deployment to production, the response time sucks and your business application is almost unusable during peak hours.

It takes ages for your support teams to become aware of these slowdowns. Their log-file-based monitoring approach does not deliver the necessary insight. System resource utilization is below the agreed boundaries, so your infrastructure teams recommend solving the issue with a hardware ramp-up. Several days later your user community is still frustrated because the performance remains unacceptable.

The Solution

The performance disaster described above is not a one-way street. You can always turn the ship around by integrating performance considerations into your development chain. Obviously, organizations struggling with frustrated users due to slow-responding applications need to understand the gaps which hold them back.

Based on my experience from hundreds of performance engineering projects, non-functional requirements are essential and should never be treated as an afterthought. Once you’ve documented the required aspects, your developers can consider them in their implementation decisions and your test teams can organize the required test scenarios.

Finally, you should monitor all transactions from the end user perspective. There are powerful application and user experience monitoring solutions out there which will give you the essential insights. Those tools also come with analytics features and enable your support teams to triage complex performance issues.

The easiest way to turn the ship around is to assess your performance engineering maturity and improve it step by step according to my maturity model.

Keep doing the good work!

A Forward-Looking Application Monitoring Strategy

Over the past few years, I’ve worked with companies on the transformation of their monitoring strategy, and the outcome was fantastic: the user experience and reliability of their business-critical applications improved dramatically. In fact, a modern application monitoring strategy is above all a matter of doing the right things.

Organizations often rely on an outdated monitoring approach. They don’t have active monitoring of their business-critical applications in place. Instead, the customers who work with the applications create a ticket whenever the expected functionality doesn’t work properly. When a ticket arrives, a support analyst tries to reproduce the reported problem, which is often not possible due to the lack of information and data available. Regrettably, the problem is solved only hours or even days later, and the customers are unhappy that they had to wait so long for a solution to their issues.

Outages are a pain because they lead to lost revenue and, in the worst case, to a bad reputation. There is no error-free software, and therefore you have to find ways to deal with this uncertainty. I will now give you three simple steps which help you to mitigate those risks and gain excellent insight into your business applications.

Step 1

Actively monitor the user experience of your production applications. A robot executes your important use cases according to a specified schedule, and depending on the result of those executions your support team is alerted. This synthetic execution of important use cases is especially essential during non-working hours, when nobody is using your application. When it comes to tools, I recommend Silk Performance Manager from Micro Focus because it’s easy to use and very powerful.
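
To illustrate the idea behind such a robot (without implying that this is how Silk Performance Manager works internally), here is a minimal sketch of a synthetic check: one important use case is executed on a schedule, its response time is measured, and the support team is alerted on failures or slow responses. The URL, the threshold and the alert hook are assumptions.

```python
# Minimal sketch of a synthetic "robot" check: execute an important use case
# on a schedule, measure the response time and alert when the check fails or
# becomes too slow. Endpoint, threshold and alerting are assumed placeholders.
import time
import requests

CHECK_URL = "https://shop.example.com/checkout/health"   # assumed endpoint
THRESHOLD_SECONDS = 3.0
INTERVAL_SECONDS = 300  # run every 5 minutes, also outside working hours

def alert(message: str) -> None:
    # Placeholder: hook this into e-mail, SMS or your incident tool.
    print(f"ALERT: {message}")

def run_check() -> None:
    start = time.monotonic()
    try:
        response = requests.get(CHECK_URL, timeout=10)
        elapsed = time.monotonic() - start
        if response.status_code != 200:
            alert(f"Use case failed with HTTP {response.status_code}")
        elif elapsed > THRESHOLD_SECONDS:
            alert(f"Use case slow: {elapsed:.2f}s (threshold {THRESHOLD_SECONDS}s)")
    except requests.RequestException as error:
        alert(f"Use case unreachable: {error}")

if __name__ == "__main__":
    while True:
        run_check()
        time.sleep(INTERVAL_SECONDS)
```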

Step 2

You should monitor all transactions from the end user’s perspective. Some problems have an impact on several users, while others affect the whole user community. For ongoing improvements and efficient root-cause analysis, this kind of monitoring is essential. dynaTrace is the market leader in this user experience and application monitoring field. Their platform provides many outstanding features such as automatic problem detection, artificial intelligence, and excellent integration possibilities.
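
As a rough illustration of what monitoring every transaction means on the code level (commercial platforms such as dynaTrace instrument this automatically), here is a minimal sketch of a decorator that tags each business transaction with its name, duration and outcome. The record_measurement() sink is an assumed placeholder for a real metrics backend.

```python
# Minimal sketch of transaction-level response time capture. The decorator
# measures one business transaction end to end and forwards name, duration
# and success flag to an assumed metrics sink.
import functools
import time

def record_measurement(transaction: str, duration_ms: float, success: bool) -> None:
    # Placeholder: forward to your monitoring backend (HTTP, StatsD, ...).
    print(f"{transaction}: {duration_ms:.1f} ms, success={success}")

def monitored_transaction(name: str):
    """Decorator that measures one business transaction end to end."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            try:
                result = func(*args, **kwargs)
                record_measurement(name, (time.monotonic() - start) * 1000, True)
                return result
            except Exception:
                record_measurement(name, (time.monotonic() - start) * 1000, False)
                raise
        return wrapper
    return decorator

@monitored_transaction("place_order")
def place_order(order_id: str) -> str:
    time.sleep(0.05)  # stand-in for the real business logic
    return f"order {order_id} placed"
```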

Step 3

Finally, collect system monitoring metrics. Your application won’t deliver an adequate user experience if CPU, memory, network or IO metrics are permanently in critical ranges. Therefore, collect low-level metrics and raise alerts if thresholds are exceeded. Tool-wise, you can choose between commercial and open source solutions; most companies have this kind of monitoring already in place, and the low-level monitoring landscape is huge. Look at the solution from Nagios if you consider closing gaps in this discipline. A good user and performance monitoring solution also provides infrastructure monitoring features.
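
For illustration, here is a minimal sketch of such a low-level check using the psutil library; the thresholds and the alert hook are assumptions, and dedicated tools such as Nagios of course cover this discipline far more completely.

```python
# Minimal sketch of low-level resource monitoring with psutil
# (pip install psutil). Thresholds and alerting are assumed placeholders.
import psutil

THRESHOLDS = {"cpu_percent": 85.0, "memory_percent": 90.0, "disk_percent": 90.0}

def alert(metric: str, value: float, limit: float) -> None:
    # Placeholder: forward to your alerting channel.
    print(f"ALERT: {metric} at {value:.1f}% exceeds threshold of {limit:.1f}%")

def check_system() -> None:
    metrics = {
        "cpu_percent": psutil.cpu_percent(interval=1),
        "memory_percent": psutil.virtual_memory().percent,
        "disk_percent": psutil.disk_usage("/").percent,
    }
    for name, value in metrics.items():
        if value > THRESHOLDS[name]:
            alert(name, value, THRESHOLDS[name])

if __name__ == "__main__":
    check_system()
```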

Once you’ve implemented your proactive monitoring strategy, don’t forget a continuous review of the collected metrics. Take 30 minutes per month for each of your applications and review the captured user experience, response time, throughput, error rate and system resource utilization metrics of the last 30 days.
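
A minimal sketch of what such a monthly review could condense the raw data into, assuming an export of (timestamp, response time, error flag) samples from your monitoring tool:

```python
# Minimal sketch: condense 30 days of collected measurements into a few
# figures worth discussing. The sample list is an assumed export format.
from statistics import median, quantiles

# (timestamp, response_time_ms, had_error) tuples for one application
samples = [("2019-03-01T10:00", 420.0, False), ("2019-03-01T10:05", 1310.0, True)]  # ...

response_times = [duration for _, duration, _ in samples]
error_rate = sum(1 for _, _, failed in samples if failed) / len(samples)

print(f"median response time: {median(response_times):.0f} ms")
print(f"95th percentile:      {quantiles(response_times, n=20)[-1]:.0f} ms")
print(f"error rate:           {error_rate:.1%}")
```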

Some data scientists argue:

“The truth is in your data”

I fully agree with this argument and I believe that, once you’ve implemented a forward-thinking application monitoring strategy, you will share the same opinion.

 

Part 4 – Assessment of your Performance Engineering Maturity Level

Every business depends on reliable and responsive applications, and with the rise of digitalisation user experience will become more important than ever before. Treating performance as an afterthought will have a direct impact on your revenue. Therefore, I highly recommend closing the blind spots in your performance engineering approach.

Obviously, it’s not possible to reach a mature performance engineering level overnight, but you should be aware that there may be gaps which hold you back from achieving excellent user experience. I’ll now give you three simple steps which help you to identify the maturity level of your performance engineering activities.

Step 1 – Assessment matrix

Answer the questionnaire and understand how well you’ve implemented performance considerations in your development chain.

  • Acknowledge questions with Y (yes) if you follow this approach
  • Acknowledge questions with N (no) if you don’t have this activity in place

You can use this sample assessment matrix. I’ve answered the questions for a sample firm and will use their results in the two steps below.

Step 2 – Your Firm’s high water mark

Extract your score for all practices and create a spider chart, because this will give you a better understanding of possibly missed opportunities. The diagram below contains the high-water mark of our sample firm. Obviously, they have weak areas in their test and operate domains.

[Figure: High-water mark of the sample firm across the nine PEMM practices (spider chart)]

Step 3 – Your Firm’s maturity level

Finally, calculate the average of your high-water marks for each domain to get the corresponding performance engineering maturity level.
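
For those who prefer code over a spreadsheet, here is a minimal sketch of this calculation: the Y/N answers per practice are turned into a high-water mark (the highest level whose activity is in place), and the per-domain average gives the maturity level. The sample answers below are illustrative assumptions, not the exact figures of the sample firm.

```python
# Minimal sketch of Step 3: derive the high-water mark per practice from the
# Y/N answers and average it per domain. Answers are illustrative assumptions.
answers = {
    "build":   {"PA": [True, True, False], "SM": [True, True, True], "DP": [True, True, True]},
    "test":    {"SP": [True, False, False], "RM": [True, False, False], "TB": [False, False, False]},
    "operate": {"DR": [True, True, False], "PM": [True, False, False], "PB": [False, False, False]},
}

def high_water_mark(levels):
    """Highest maturity level (1-3) whose activity is in place, 0 if none."""
    return max((index + 1 for index, done in enumerate(levels) if done), default=0)

for domain, practices in answers.items():
    marks = {practice: high_water_mark(levels) for practice, levels in practices.items()}
    maturity = sum(marks.values()) / len(marks)
    print(f"{domain:<8} {marks} -> maturity level {maturity:.1f}")
```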

[Figure: Performance engineering maturity level per domain for the sample firm]

The chart above outlines that

  • our sample firm’s focus is on design for performance, because they have reached a very mature level in their build domain.
  • they have weak areas in the test and operate domains, which will most likely result in reduced user experience, reactivity, and missed opportunities.

Once you’ve identified the blind spots in your development process, you can create a remediation plan. As already mentioned in my earlier posts on this topic, most companies operate at level 2.

A higher maturity level will reduce the stress related to firefighting and eliminate war rooms. Finally, due to early failure identification, you will save money, keep your timelines and improve the user experience.

Part 3 – Performance Engineering Maturity Model at Work

No business should treat performance as an afterthought, because slow-loading or unresponsive applications can quickly have an impact on revenue. Unsatisfied users will most likely not return and will move on to other, more reliable services. Therefore, I highly recommend integrating performance aspects into your development chain.

This is the third blog post on the performance engineering maturity model. Now I will outline activities which help you to improve the performance of your business applications. The chart below outlines the nine practices of the performance engineering maturity model, and each practice is split into three activities across the maturity levels. Low-maturity activities are less effective but easier to implement.

[Figure: The nine PEMM practices, each with three activities across the maturity levels]

Keep in mind that there is no need to reach the highest level across all domains. Each organization should decide which activities suit it best.

Activities of the build domain

I recommend trying to reach a higher level in the build domain, because the biggest proportion of performance failures is due to bad application design, and therefore your success ratio is quite high. Below I will outline the three practices and their performance engineering activities.

Early Performance Analysis (PA)

  • PA3: Comparison of actual response times with earlier releases
  • PA2: Response time measurement is part of unit tests (see the sketch after this list)
  • PA1: Developers investigate long running requests
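
To make PA2 more tangible, here is a minimal sketch of a unit test with a response time assertion; the function under test and the 200 ms budget are illustrative assumptions.

```python
# Minimal sketch of activity PA2: a response time assertion inside an
# ordinary unit test, so regressions surface during the build.
import time
import unittest

def search_products(term: str) -> list:
    time.sleep(0.01)  # stand-in for the code under test
    return [f"{term}-1", f"{term}-2"]

class SearchPerformanceTest(unittest.TestCase):
    def test_search_stays_within_budget(self):
        start = time.monotonic()
        results = search_products("notebook")
        elapsed_ms = (time.monotonic() - start) * 1000
        self.assertTrue(results)
        self.assertLess(elapsed_ms, 200, f"search took {elapsed_ms:.0f} ms, budget is 200 ms")

if __name__ == "__main__":
    unittest.main()
```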

Standards and Metrics (SM)

  • SM3: Follow a specified performance engineering process and verify agreed key performance metrics
  • SM2: Educate developers in performance analysis and use supportive tools
  • SM1: Specify a set of key performance indicators

Design for Performance (DP)

  • DP3: Performance quality gates are in place and used to ensure that new releases are within agreed boundaries
  • DP2: Performance tests are a fundamental part of the build process
  • DP1: Follow performance best practices

Activities of the test domain

Verification and validation of non-functional requirements is a fundamental step towards reliable business applications. Below I will outline the three practices of the test domain and their performance engineering activities.

Single User Performance Checks (SP)

  • SP3: End-to-End backend performance measurement
  • SP2: Response time measurement is part of automated regression tests
  • SP1: Manual response time measurement

Realistic Multi-User Performance Tests (RM)

  • RM3: Production like performance test environment for business-critical applications is in place
  • RM2: Calculate a load pattern based on the given environment capacity and run test scenarios to verify performance requirements (a small calculation example follows this list)
  • RM1: Document the analysis approach and use performance testing tools to simulate concurrent users and measure response times
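
As a small calculation example for RM2, the sketch below derives the number of concurrent virtual users from the expected business volume using Little's Law (concurrent users = arrival rate x time spent in the system); all figures are illustrative assumptions.

```python
# Minimal sketch for activity RM2: derive a load pattern from the expected
# business volume using Little's Law. All figures are assumptions.
orders_per_hour = 1800                  # expected peak business volume
arrival_rate = orders_per_hour / 3600   # orders per second
response_time_s = 2.0                   # budgeted response time per order
think_time_s = 28.0                     # time a user spends between requests

concurrent_users = arrival_rate * (response_time_s + think_time_s)
print(f"simulate about {concurrent_users:.0f} concurrent virtual users")
# -> simulate about 15 concurrent virtual users
```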

Test Boundaries (TB)

  • TB3: Performance testing approach includes page design analysis and performance engineers have an antipattern knowledgebase
  • TB2: Test and optimize applications to provide highest possible user experience on slow WAN locations
  • TB1: Simulate future growth load patterns

Activities of the operate domain

Small changes in usage patterns can have a huge impact on user experience. Therefore, I recommend trying to reach a higher performance engineering maturity level in the operate domain, because this will help you to permanently monitor and optimize your production setting.

Detection of slow running requests (DR)

  • DR3: Use synthetic performance monitoring figures to detect slow running applications
  • DR2: Use log files for analysis of slow running requests
  • DR1: Users identify and escalate performance issues, and the majority of those are not solved

Collection of Performance Metrics (PM)

  • PM3: Application and service owners use real user performance metrics and visualize them on dashboards
  • PM2: Developer, tester, and support teams have access to performance metrics of all critical applications
  • PM1: Integrate and use performance monitoring metrics for business critical applications

Performance metrics drive business decisions (PB)

  • PB3: Customer impact metrics drive ongoing improvement and innovation initiatives
  • PB2: Specialists regularly review actual performance metrics and provide tuning recommendations
  • PB1: User experience monitoring is in place for all critical applications

There is more to come…

Keep in mind that there is no need to reach the highest level and implement all activities. Use this maturity model for comparison with your competitors and as a supportive guideline to improve your performance engineering activities over time.

In my final post concerning the PEMM, I will provide a worksheet which you can use right away to measure your performance engineering maturity level.

Part 2 – How to bring the Performance Engineering Maturity Model into play

This post is the second in my series concerning the performance engineering maturity model (PEMM). In the first, I introduced the overall idea, and now it’s time to go into more detail.

Whenever an application doesn’t fulfill its performance requirements, it’s clear that something went wrong, and the root-cause analysis can be very challenging. Based on my experience, a proactive approach is much better because it gives confidence and protects you from frustrating users with unresponsive applications.

As already mentioned in my previous post, I’ve created a performance engineering maturity model which provides the required transparency about best practices in this discipline. Obviously, there is no need for all companies to reach the highest level in all available performance engineering subjects. Therefore, I use a maturity-level-based approach which allows a tailor-made adoption and improvement over time. In this post, I will outline the three domains and nine practices of this maturity model.

[Figure: The three domains and nine practices of the PEMM]

The 3 Domains

I’m convinced that only a lifecycle performance engineering approach will help companies to reach acceptable user experience. Therefore, I think it’s essential that performance considerations become part of every stage in the development chain. Businesses which build highly responsive applications consider performance from day one in their development process, verify performance requirements on pre-production stages, and continuously monitor and improve performance metrics in production. In summary, the three domains of the performance engineering maturity model are: build, test and operate.

The 9 Practices

There are nine performance engineering practices, and each is divided into the three maturity levels.

Build domain: The biggest proportion of performance issues is related to failures in software design and implementation. Therefore, integrate non-functional aspects into your early design and development considerations. The three practices of the build stage are:

  • Early performance analysis
  • Standards and metrics
  • Design for performance

Test domain: High latency or peak load patterns can result in unsatisfactory response times. Some applications behave totally differently under concurrent user load. The failure range goes from slow-loading pages to crashes of the entire system. Obviously, the risk of ignoring non-functional testing aspects is too high, and therefore the performance engineering practices in the test domain are:

  • Single user performance checks
  • Realistic multi-user checks
  • Test boundaries

Operate domain: Successful organizations are proactive. They know how to derive business decisions from their most valuable assets: their customer impact metrics. The operational performance engineering practices include the following:

  • Detection of slow running requests
  • Collection of performance metrics
  • Performance metrics drive business decisions

There is more to come…

In a subsequent post, I will outline the 27 activities of the performance engineering maturity model, followed by another post about how you can calculate your maturity level in this discipline.

Part 1 – Introducing a Performance Engineering Maturity Model

Speed is everything, but not everything is speed! Nobody enjoys slow-loading or erroneous applications, and bad user experience already has a significant impact on commercial revenue. I’m still astonished that the average response time of mobile sites is 19 seconds. Personally speaking, I won’t wait more than 5 seconds for a page to load.

I’ll now give you some simple steps you can use right away to identify the blind spots in your software development chain which hold you back from providing a better user experience. There are different strategies, but I’ve decided to use a maturity-level-based method because this allows a tailor-made, step-by-step improvement over time.

Whether you follow a waterfall or an agile development approach, there are three critical touchpoints you need to consider if you want to reach a satisfying user experience. I call these touchpoints domains, and the PEMM will allow you to measure your maturity level within each of them.

[Figure: PEMM overview – the three domains and the three maturity levels]

Level 1 – Firefighting

Businesses which operate at this level completely ignore performance best practices and solve the majority of their issues in production. They often have very frustrated users. Their monitoring approach is reactive and log-file based. The majority of performance failures are well known, but it takes very long to solve them. Organizations which operate in this mode expose themselves to high risks.

Level 2 – Performance Validation

Companies which have reached this level understand that testing of non-functional requirements is necessary. They have processes and tools in place which allow the simulation of production load patterns, and they share performance metrics across their organizations. For business-critical applications, they have performance monitoring in production in place.

Level 3 – Performance Driven

Organizations which have integrated early design and architecture validation into their development process have reached the highest level in this discipline. They have a proactive performance monitoring strategy in place, continuously analyze and improve their use cases, and focus on the end user experience. Their business units understand the value of user experience and application performance metrics.

There is more to come…

According to Forrester Research, most companies currently operate at level 2. With the rise of digital services, user experience and reliability will become more important, and therefore it’s a good idea to improve your performance engineering maturity level.

In my next post, I will write about how you can bring this model into play.

Quick Start Guide for Security Tests

Software testers are sometimes unable to cope with the verification of security requirements because of their very technical nature. In this post, I will give you some guidance and orientation which you can use right away for your application security testing activities.

Step 1 – Static Application Security Tests

First of all, make sure that static application security testing or a secure code review according to security standards such as the OWASP Top 10, SANS Top 25 or PCI is conducted. Bear in mind that vulnerabilities have to be eliminated at their root, which is the source code. You can do a manual code review or, even better, an automatic analysis using professional or freeware tools. Also, please verify the security risks related to the open source libraries used by your application under test.
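
As one possible automation example (using Bandit, a freeware static analysis tool for Python code bases, purely for illustration), the sketch below runs a scan in the build and fails it on high-severity findings; the scanned path and the JSON field names are assumptions that may need adjusting to your setup.

```python
# Minimal sketch: run a static application security scan during the build
# and fail on high-severity findings. Bandit is used as a freeware example;
# the path "src" is an assumed source directory.
import json
import subprocess
import sys

scan = subprocess.run(
    ["bandit", "-r", "src", "-f", "json"],
    capture_output=True, text=True,
)
report = json.loads(scan.stdout)
high_findings = [r for r in report.get("results", []) if r.get("issue_severity") == "HIGH"]

for finding in high_findings:
    print(f"{finding['filename']}:{finding['line_number']} {finding['issue_text']}")

sys.exit(1 if high_findings else 0)
```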

Step 2 – Dynamic Application Security Tests

Secondly, there are also security risks related to the surrounding application infrastructure. The application server, operating system, and additional runtime libraries can lead to serious hazards. Such so-called dynamic application security tests are always tool-based and baseline-scan your runtime environment against known security issues. Typically, such application scans or penetration tests should be executed during system or user acceptance testing.

Step 3 – Business Security Tests

Lastly, your testing activities should also include so-called business security tests, which focus on sensitive areas such as authentication, authorization, and session management. I also recommend paying attention to the login procedures and permission management of the applications under test. Conduct positive and negative tests for the critical areas mentioned above.
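
A minimal sketch of such a negative test: a protected resource must not be accessible without a valid session. The URL and the expected status codes are assumptions about the application under test.

```python
# Minimal sketch of a negative business security test (pytest style):
# without credentials the API must deny access rather than leak data.
import requests

BASE_URL = "https://app.example.com"   # assumed application under test

def test_orders_require_authentication():
    response = requests.get(f"{BASE_URL}/api/orders", allow_redirects=False, timeout=10)
    assert response.status_code in (401, 403), (
        f"expected 401/403 without a session, got {response.status_code}"
    )
```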

All things considered, a robust security testing approach incorporates early secure code reviews as well as dynamic and business security tests. Keep on doing the good work, and please share your own security testing experience and strategy with me.