Talk on Fast and Reliable Software at the MF Summit on 20-21 June 2017 in Düsseldorf

Come and visit the Micro Focus Summit in Düsseldorf on 20-21 June 2017. I will talk about fast and reliable software on 21 June at 11:00 am. Registration, agenda and more details are available at this link.

Implement your Automated Web Page Design Analysis

In recent years, static websites have all but disappeared, and with the rise of technology, companies provide more and more of their services online. The former static web pages have been replaced with content-rich, dynamic websites. Frequent changes in content and website layout can have a high impact on end-to-end response times.

In this post, I will show you a simple approach to implementing an automated web page design analysis based on the open source tools PhantomJS and YSlow.

Setup Details

The good thing is that all components required for this automated web page design analysis are open source and free. You will need the following tools:

  • PhantomJS – a headless WebKit browser and automation solution
  • YSlow – analyzes web pages against page design best practices
  • Atom – a powerful editor for easy scripting and test execution

First of all, install PhantomJS in your environment. There is a detailed installation description on the project website which you can use right away.

Secondly, download YSlow for PhantomJS and customize the yslow.js file: open it in any editor, add the line var system = require('system'); at the top, and replace all references to phantom.args with system.args.
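
For orientation, the change looks roughly like this (a sketch; where exactly the argument references live varies between YSlow versions):

    // at the top of yslow.js: import PhantomJS' system module
    var system = require('system');

    // then replace each reference to the deprecated phantom.args, e.g.:
    // before: var args = phantom.args;
    var args = system.args.slice(1);

Note that system.args includes the script name at index 0, while the deprecated phantom.args did not, so depending on your YSlow version you may need the slice(1) shown above or plain system.args.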

Thirdly, install the Atom editor and enable its run command. Atom is an extremely powerful editor with tons of plugins. I used its run command to execute command line scripts.

Finally, test the installation with the command below. In Atom, after you have enabled the run command, open a new window, insert the command phantomjs yslow.js --help and press Ctrl-R.

Run the Analysis

PhantomJS and YSlow are powerful tools and provide many features which you can use right away for your automated web page design analysis. Personally, I recommend starting with the basic command and working your way through to the more advanced features.

Basic

In this mode, you will get a high-level page design analysis which consists of the size of your page, the overall score, the number of requests and the page load time. Execute the command below on the machine you configured earlier.

phantomjs yslow.js -info basic -format plain http://focusaps.com

The picture below contains the output of this command. It shows that the given website has an overall page design score of 76 out of 100, a size of 1.5 MB and a load time of 3.2 seconds.

[Screenshot: YSlow basic output in plain format]

Detailed

The detailed mode provides more insights into the weak areas of your website. It also supports predefined thresholds and the TAP output format, which is understood by many tools such as Jenkins. Run the command below on your machine.

phantomjs yslow.js -info grade -format tap -threshold C http://focusaps.com

You will get the following output including relevant tuning hints which you can share with your developers.

[Screenshot: YSlow grade output in TAP format]

I believe you now have many integration ideas for the page design analysis. Automation is that easy. Add the automated checks to your build process, your testing procedures and your daily checks on production environments. You will see that this really helps to identify deviations proactively.
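
To make the build integration concrete, here is a minimal Node.js wrapper sketch (hypothetical file name and threshold; it assumes the basic JSON output exposes the overall score under the key o, so check the output of your YSlow version):

    // check-pagespeed.js - fail the build if the YSlow score drops below a minimum
    var execFile = require('child_process').execFile;

    var MIN_SCORE = 80;                  // hypothetical quality gate
    var URL = 'http://focusaps.com';     // page under test

    execFile('phantomjs', ['yslow.js', '-info', 'basic', '-format', 'json', URL],
      function (err, stdout) {
        if (err) { console.error(err.message); process.exit(2); }
        var result = JSON.parse(stdout); // basic mode prints one JSON object
        console.log('overall page design score: ' + result.o);
        process.exit(result.o >= MIN_SCORE ? 0 : 1); // non-zero exit fails the job
      });

A build server such as Jenkins treats the non-zero exit code as a failed step, which is all you need for a simple page design quality gate.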

Web Page Design Analysis is not a one-off Exercise

Small things matter most, and this is not only true for day-to-day activities. Minor changes in application configuration can have a significant impact on the end-to-end user experience. In this post, I will give you insights into the nature of such changes and some simple steps towards proactive detection of speed degradations.

Changes and their Impact

Frequent modifications to the look and feel of websites are very much appreciated. Nowadays, websites are not only used for advertising or generating commercial revenue. Companies try to stand out from the crowd and design websites which underline their image. Navigation has become easier since web designers understood that the number of clicks required to buy products is essential for their business.

Very common web page design failures are the absence of compression, large images, blocking JavaScript and videos in autoplay mode. Software test professionals are extremely familiar with those nasty pitfalls. They detect and eliminate them during QA stages. However, once the new business application has been deployed to production, nobody cares about the impact of these minor changes, and slowly the speed of the formerly quick-loading website is gone. Suddenly, frustrated users drop their shopping trip on your site, and revenue declines.

Automated detection of slowdowns

You can avoid the frustrating scenario above. It’s not rocket science and probably easier than you might imagine. Quality assurance does not end with the deployment of the new website to production. You won’t have a test plan for your live system, but an automated health and performance monitoring solution with meaningful test cases is required.

There are great cloud-based and locally hosted monitoring platforms out there which you can use. Replace reactivity with a proactive health monitoring solution. Automation is a great fit when it comes to repetitive things such as the periodic execution of your monitoring test cases. Set up performance and availability boundaries and let alerts flow out if your website slows down for whatever reason.

It’s good to know that the speed of a website is below expectations, but this helps you little if nobody is there to dive deeper, understand the cause and fix the issue. As already mentioned above, there are those minor adjustments which can impact the end-to-end response time. Based on my experience, a good way to detect the real problem behind a slowdown is to implement the QA checks used in testing stages on production as well.

Actionable insights

As a performance engineer, I verify the page design of new applications in pre-production. Google and Yahoo provide powerful tools which make this analysis quite easy. The good thing is that those solutions detect issues and provide insights into the actual root cause, such as disabled caching, large images or blocking JavaScript. It makes much sense that your health monitoring solution also checks the page speed score of your web pages on a regular basis.

Recently, during some research for another paper, I became aware that automated page design analysis is practically free. In my next post, I will outline how you can implement your self-made automated page design monitoring solution based on PhantomJS, Netsniff.js and YSlow.

Transform Performance Metrics to Actionable Insights

Digital services are eating the world. Billions of websites provide almost everything you can imagine, and the competition is on the rise. End users are more and more in the driving seat because they decide which services they will use. Digital customers often abandon services which provide a bad user experience. Therefore, excellent application performance has become a top priority for many organizations. After outlining both performance metrics and the term actionable insights, I will illustrate how you can transform the former into the latter.

According to Techopedia, actionable insights are

“analytics result that provides enough data to make an informed decision.”

When it comes to performance engineering, specialists collect metrics which help them understand the root cause of hotspots. Decades ago it was very common that slowdowns were solved with additional hardware. Those times are gone. Nowadays, engineers have to deal with questions such as:

  • Why is the number of frustrated users on the rise?
  • Why is page speed score below the target?
  • Why does service response time exceed allowed thresholds?
  • Why is the average size of our web page > 1 MB?
  • Why is our system not scalable?
  • Why is our application running out of memory?

When it comes to answering those questions, you can follow a trial-and-error approach or derive the tuning recommendation from your collected performance metrics. I highly recommend the latter because the former will result in endless war rooms and trial-and-error exercises.

What are the typical performance metrics?

1. User layer. Modern real user monitoring solutions provide powerful insights into the activities performed by end users in your applications. They often use a JavaScript injection approach and add small functions to your web pages, which allows detailed capturing of last-mile performance figures such as actions performed, client errors, actual bandwidth, client times, rendering time and much more (a sketch of such a snippet follows below).

2. Service layer. User interactions lead to service calls, and whenever a slowdown arises you should be able to identify the time being spent in your middleware. Typically, those service-based metrics include response times, throughput, error rates, exceptions, 3rd-party service call figures, heap statistics and much more.

3. System layer. Network, CPU, memory and IO metrics are also critical factors for our applications. If they are frequently above acceptable thresholds, they can quickly degrade overall application performance.
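
To make the user layer more concrete, here is a minimal sketch of the kind of snippet such monitoring solutions inject, based on the standard Navigation Timing API (the metric names and the collection endpoint are illustrative, not a specific vendor’s API):

    // runs in the browser; injected into the monitored page
    window.addEventListener('load', function () {
      // wait one tick so loadEventEnd is populated
      setTimeout(function () {
        var t = performance.timing;
        var metrics = {
          dns:      t.domainLookupEnd - t.domainLookupStart,
          connect:  t.connectEnd - t.connectStart,
          ttfb:     t.responseStart - t.navigationStart,  // time to first byte
          domReady: t.domContentLoadedEventEnd - t.navigationStart,
          pageLoad: t.loadEventEnd - t.navigationStart
        };
        // ship the figures to a (hypothetical) collection endpoint
        navigator.sendBeacon('/rum/collect', JSON.stringify(metrics));
      }, 0);
    });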

Extensive data analytics and data processing

Almost every business application collects some log file data, but if you intend to nail down performance hotspots, the information provided in log files is most likely not sufficient. You will need each and every user transaction to be taken into account, including your horizontal components such as web servers, application servers and databases, as well as 3rd-party services. Obviously, you will deal with big data volumes. Monitoring and testing platforms are nowadays well equipped to manage millions of such transactions.

Take a proper course of action

Forward-thinking players have started implementing performance analytics engines. Their vision is clearly the reduction of operational effort, and their mission is to make everyone act as a performance engineer. In the past, specialists were responsible for the correlation of those insights. Such experts are aware of many slowdown patterns, and their radar identifies them quickly.

In the past, experienced engineers transformed performance metrics into actionable insights. However, in recent years automation has slowly been taking over. Algorithms now decide whether the user experience is acceptable and point out the root cause of performance slowdowns. Such artificial performance advisers are still at an early stage, but the potential is immense.

Performance Testing in a Dynamic World

In many fields such as finance, engineering or politics, groundbreaking changes are ongoing. However, our human skill of adapting to new situations will help us deal with these disruptions.

In this post, I will shine a light on challenges in software engineering, more specifically on load and performance testing in a dynamic environment.

What are the difficulties we are facing?

For many years we planned performance tests in advance. Requirement engineers documented non-functional aspects. Software developers designed and implemented the new system with those requirements in mind. Finally, testing teams verified and validated the requirements and handed the new product over to operations teams.

This stage-by-stage approach is disappearing more and more in an agile world. Nowadays, a single team is responsible for the design, implementation, testing and operation of the new product. Excellent collaboration is fundamental to the success of teams operating in this mode. When it comes to load and performance testing, the biggest hurdles are time constraints, the frequency of changes and the often only partially available system under test.

What are the pillars of a dynamic performance testing approach?

First of all, you need to work on your application and environment monitoring. If you are not able to capture all transactions on development or production stages, you’ll lose too much time on troubleshooting. Ideally, you integrate real user, application performance and component monitoring, and you share all metrics with your project members.

Secondly, implement and continually execute service-based performance tests. Even if your new system is not completely integrated, it makes sense to evaluate the response times of your new services under multi-user load conditions. Provide the results of those tests in online dashboards and grant access to the whole team. Set thresholds for your most important performance metrics such as throughput, error rate and response time, and clearly communicate any violation (a minimal sketch of such a check follows below).
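
As an illustration, even a small Node.js script can exercise a single service under concurrent load and flag threshold violations (a sketch with a hypothetical endpoint and thresholds, not a replacement for a full load testing tool):

    // service-check.js - fire N concurrent requests and check response times
    var http = require('http');

    var URL = 'http://localhost:8080/api/orders'; // hypothetical service endpoint
    var USERS = 20;                               // concurrent virtual users
    var MAX_MILLIS = 500;                         // response time threshold
    var done = 0, violations = 0;

    function finish(violated) {
      if (violated) violations++;
      if (++done === USERS) {
        console.log(violations + ' of ' + USERS + ' requests violated the threshold');
        process.exit(violations === 0 ? 0 : 1);   // non-zero exit flags the violation
      }
    }

    function fireRequest() {
      var start = Date.now();
      http.get(URL, function (res) {
        res.resume();                             // drain the response body
        res.on('end', function () {
          finish(res.statusCode !== 200 || Date.now() - start > MAX_MILLIS);
        });
      }).on('error', function () { finish(true); });
    }

    for (var i = 0; i < USERS; i++) fireRequest();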

Finally, don’t forget end-to-end performance tests of the fully integrated application. While service-based tests are required to find issues in early stages, the E2E test in a close-to-production environment is the final validation and absolutely required.

Don’t forget that performance engineering is more a journey than a destination.

Why Monitoring is essential in DevOps

Time to market is more important than ever before because the competition is on the rise. Successful retailers deploy a new release every 12 seconds. In this blog post, I will outline the reasons for short release sprints and shine a light on the fundamental role of monitoring in a DevOps environment.

Benefits of the DevOps approach

Tight release sprints are a challenge for your development pipeline. Design-implement-test-deploy slots are short and require excellent collaboration between your teams. Daily standup meetings will help to improve and distribute the knowledge about the new product quickly.

Ongoing learning and product optimization is another key aspect for teams working in DevOps mode. Especially if you develop innovative products, the requirements are often not in place, and therefore the design is very tricky. Tight feedback loops are essential. Your teams will continuously create and develop new features, and your clients will review them. Due to this involvement, the acceptance of new products is high.

Testing and deployment to production are also no longer a pain in DevOps projects. All repetitive tasks such as functional regression tests, performance tests and necessary security checks are highly automated. QA specialists focus mainly on the new features. Automated tests verify the core functionality and quickly provide an overview of the actual quality of the new product.

Automated deployment solutions enable teams to promptly roll out new features or switch back to the previous version if something does not work as expected. Thanks to this high degree of automation in test and deployment activities, the failure rate is low. All parties involved are willing to share their product knowledge. Once the new release has been successfully deployed, your operations teams provide the necessary insight and feedback. They continuously collect key performance metrics and share them across the organization.

Monitoring in DevOps environments

DevOps is no guarantee of error-free software. The pace of new features and changed components is high. Typically, complexity is increasing, and often hundreds of microservices are used. Whenever a failure occurs, there is not much time to identify the root cause and work on an appropriate resolution.

Successful DevOps teams have found a solution for this dilemma. They understood that profound monitoring is key to adequate mean-time-to-repair cycles. An appropriate monitoring approach includes all layers, all transactions and all environments. Often they use a monitoring platform which captures the end-user experience and allows a drill-down through technical components and a vertical analysis of error hotspots.

Your Takeaways

  • Automation is essential in DevOps
  • Monitoring of all transactions provides the required insights
  • Horizontal and vertical drill-downs allow a quick hotspot analysis
  • Share monitoring data across the organization

Don’t put the advantages of DevOps projects at risk. I recommend improving your monitoring strategy to keep pace with your competitors and realize more of the benefits of the powerful DevOps approach.

A Forward-Looking Application Monitoring Strategy

Over the past few years, I’ve worked with companies on the transformation of their monitoring strategy, and the outcome was fantastic. User experience and reliability of their business-critical applications have been dramatically improved. In fact, a modern application monitoring strategy is mainly a matter of doing the right things.

Organizations often rely on an outdated monitoring approach. They don’t have active monitoring of their business-critical applications in place. Only the customers who work with the applications create a ticket if the expected functionality doesn’t work properly. Whenever a ticket arrives, a support analyst tries to reproduce the identified problem, which is often not possible due to the lack of information and data available. Regrettably, the problem is solved hours or even days later, and the customers are not happy that they had to wait so long for the solution to their issues.

Outages are a pain because they lead to losses in revenue and, in the worst case, to a bad reputation. There is no error-free software, and therefore you have to find ways to deal with this uncertainty. I will now give you three simple steps which help you mitigate those risks and gain excellent insight into your business applications.

Step 1

Actively monitor user experience in production applications. A robot executes your important use cases on a specified schedule, and depending on the results of those executions your support team is alerted. Especially during non-working hours, when nobody is using your application, this synthetic execution of important use cases is essential. When it comes to tools, I recommend Silk Performance Manager from Micro Focus because it’s easy to use and very powerful.
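
If you want to see the principle at work before choosing a tool, here is a minimal PhantomJS sketch of such a synthetic check (hypothetical URL and threshold; a real robot would walk through whole use cases, not just one page load):

    // synthetic-check.js - run with: phantomjs synthetic-check.js
    var page = require('webpage').create();

    var URL = 'http://focusaps.com';   // entry page of the monitored use case
    var MAX_MILLIS = 4000;             // alerting threshold
    var start = Date.now();

    page.open(URL, function (status) {
      var millis = Date.now() - start;
      if (status !== 'success') {
        console.log('ALERT: page not reachable');
        phantom.exit(2);
      } else if (millis > MAX_MILLIS) {
        console.log('ALERT: page loaded in ' + millis + ' ms (limit ' + MAX_MILLIS + ' ms)');
        phantom.exit(1);
      } else {
        console.log('OK: page loaded in ' + millis + ' ms');
        phantom.exit(0);
      }
    });

Scheduled every few minutes, for example via cron, the exit code is enough to drive alerting even during non-working hours.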

Step 2

You should monitor all transactions from the end user’s perspective. Some problems have an impact on several users, while others affect the whole user community. For ongoing improvements and efficient root cause analysis, this kind of monitoring is essential. dynaTrace is the market leader in this user experience and application monitoring field. Their platform provides many outstanding features such as automatic problem detection, artificial intelligence and excellent integration possibilities.

Step 3

Finally, collect system monitoring metrics. Your application won’t deliver adequate user experience if CPU, memory, network or IO metrics are permanently in critical ranges. Therefore, collect low-level metrics and raise alerts if thresholds are exceeded. Tool-wise, you can choose between commercial and open source solutions; most companies have this kind of monitoring already in place. The low-level monitoring landscape is huge. Look at the solution from Nagios if you consider closing gaps in this discipline. A good user and performance monitoring solution also provides infrastructure monitoring features.
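
For illustration, even Node’s built-in os module is enough for a crude low-level check (a sketch with hypothetical thresholds; dedicated tools such as Nagios do this far more thoroughly):

    // system-check.js - crude CPU and memory threshold check
    var os = require('os');

    var load = os.loadavg()[0] / os.cpus().length;   // normalized 1-minute load
    var memUsed = 1 - os.freemem() / os.totalmem();  // fraction of memory in use

    if (load > 0.8)                                  // note: loadavg is always 0 on Windows
      console.log('ALERT: CPU load at ' + (load * 100).toFixed(0) + '%');
    if (memUsed > 0.9)
      console.log('ALERT: memory usage at ' + (memUsed * 100).toFixed(0) + '%');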

Once you’ve implemented your proactive monitoring strategy, don’t forget a continuous review of the collected metrics. Take 30 minutes per month for each of your applications and review the captured user experience, response times, throughput, error rate and system resource utilization metrics of the last 30 days.

Some data scientists argue:

“The truth is in your data”

I fully agree with this argument and I believe that, once you’ve implemented a forward-thinking application monitoring strategy, you will share the same opinion.

The 3 User Experience Antipatterns

The digital transformation is ongoing, and user experience will become more important than ever before. Obviously, there are many benefits of this development, but there are also flipsides. This post will outline typical user experience antipatterns.

The importance of user experience
Nowadays there are still areas in our day-to-day activities where digital services are very limited or not available. Many businesses are closing their digitalization gaps and will come up with new or improved services. The growing portfolio will lead to picky users who choose only those services with excellent user experience.

According to a study from DoubleClick, user experience already has an impact on revenue. Firms with average web page response times below 5 seconds generate twice as much commercial revenue as their competitors.

Naturally, websites with low latency will lead to satisfied users. Here are tips you can use right away: you’ll make your online customers happy if you avoid the following three antipatterns.

1. No design for user experience
Application design is often extremely function-oriented. Design specialists create a detailed description of the new features and focus more on the visual representation than on non-functional aspects. Developers bring their preferred frameworks and create a rich browser-based application. Often nobody cares about design best practices.

2. No testing for user experience
At least one test case for each requirement will be documented and executed. As there are no non-functional requirements, nobody will verify performance, scalability, usability or throughput. In the best case, a penetration test will be performed. Once all tests have been executed and the identified defects have been resolved, the new app will be installed in production.

3. No monitoring of user experience
Application support teams manually watch the log files, or in the best case there is an automated error pattern detection in place. System resource monitoring is available but not used at all. Issues identified by business users cannot be reproduced, and often the ticket is closed without any corrective action.

Keep up the good work and kick the antipatterns mentioned above out of your software development chain. How do you deal with user experience?

There are few things more frustrating than a slow application

Have you ever been affected by a slow loading or unresponsive application? According to recent studies, a typical user is willing to wait up to 4 seconds for a page to load; users who leave a slow site will often never return.

Obviously, performance is a crucial aspect of our business applications besides their primary objective to work properly. However, many treat it simply as an afterthought.

One project I was recently involved in successfully launched a new account opening application. In this project, both functional and non-functional requirements were tested methodically pre-deployment. In a collaboration between performance and development specialists, we implemented some significant optimizations which improved response times by a factor of 5.

[Infographic: benefits of performance testing]

All things considered, performance optimization can mean different things, but the value of building efficient code is clear. Projects that consider non-functional tests in early stages reduce their costs dramatically and avoid frustrating fire-fighting in production.

Effective Synthetic Monitoring

Why do we need synthetic monitoring, and how should we integrate it into our software development chain?

Nowadays, downtimes of business applications result in a loss of revenue because user orders cannot be placed, and those users may not return in the future. Uptime has become a major concern of business and IT departments.

Based on my experience, manual monitoring of application availability, accuracy and performance is very time-consuming and too expensive. A much better approach is to identify some critical use cases within the affected applications, automate them and execute them regularly at the required business locations.

This so-called synthetic monitoring allows you to identify downtimes before they affect the end user. In addition, performance, accuracy and availability metrics can be permanently collected and used to raise a ticket if certain thresholds are violated.

The flipside of synthetic monitoring is that a change in an application under monitoring can result in false alerts. To avoid such situations, you should make synthetic monitoring part of your development chain and test your synthetic scripts during the acceptance tests of your business applications as well.

Ideally, you should make your performance engineering team responsible for the monitoring platform and the maintenance of the synthetic scripts. In addition, I recommend selecting a synthetic monitoring suite which allows you to re-use existing performance testing scripts.