As the complexity of our IT landscape increases, our customers have higher performance expectations than ever before.
What is the real value of rich applications developed with performance and security in mind, and why do we often realize far too late that we have lost something important? […]
Recently, I have been asked for a brief description of how to build outstandingly responsive, lightning-fast applications. A detailed description of all the whys and hows won't […]
Gone are the times of static websites; thanks to AJAX and other new concepts, interaction with web-based services feels very natural nowadays. Content gets updated dynamically, and there […]
Many companies are cutting their IT costs. Outsourcing, out-tasking, and the elimination of redundancies are some of the measures by which organizations try to reduce spending on digital services. The competition is […]
New technology is popping up almost overnight, and the complexity of our applications is increasing. Virtualization, artificial intelligence, microservices, and machine learning are just the beginning, and there is much more to come. Some people argue that performance is the most important feature.
Even the best business application struggles from time to time: it responds slowly to user queries or is temporarily unavailable. Organizations have learned that monitoring critical components is the way to go. In this post, I will explain the benefits and drawbacks involved in this development.
Users' speed expectations are on the rise, and companies have started investing in the optimization of their services and business processes. Outstanding players such as Amazon understood that competition is fierce and that a bad user experience quickly leads to growing abandonment rates. Once those frustrated users start spending their money on other, more reliable websites, it is already too late.
User experience is the most important success criterion, and the expectations of our users are permanently on the rise. According to a recent study from Akamai, in 2006 the average business user expected response times of 4 seconds; today, 49% expect load times of 2 seconds or less. In this post, I will shine a light on why organizations fail to meet their users' speed expectations, followed by simple steps towards performance by nature.
With the rise of technology, the complexity of our business applications has dramatically increased. Virtualization, microservices, and artificial intelligence will soon dominate our IT landscape. In this post, I will write about trouble spots and proactive solutions.
Over the past decade, companies large and small have started integrating load and performance testing into their development processes. There are many good reasons for this evolution. In this post, I will outline why testing of performance requirements has become so popular.
In recent years, purely static websites have all but disappeared, and with the rise of technology, companies provide more and more of their services online. The formerly static web pages have been replaced with content-rich, dynamic websites. Frequent changes in content and website layout can have a high impact on end-to-end response times. In this post, I will give you a simple approach to implementing an automated web page design analysis based on the open source tools PhantomJS and YSlow.
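Such an analysis can be wired into a scheduled job that fails when a page's score drops. Below is a minimal sketch of the evaluation step only; the function name, the simplified per-page report shape (`url`/`score` fields), and the threshold of 80 are my assumptions for illustration, not the post's actual implementation, and the real YSlow JSON output would need to be mapped into this shape first.

```python
import json

def check_yslow_report(report_json: str, min_score: int = 80) -> list[str]:
    """Return a complaint per page scoring below min_score.

    Assumes a simplified report: a JSON array of {"url": ..., "score": ...}
    objects. The actual YSlow report format differs and must be mapped.
    """
    complaints = []
    for page in json.loads(report_json):
        if page["score"] < min_score:
            complaints.append(f"{page['url']}: score {page['score']} < {min_score}")
    return complaints

# Hypothetical report for two pages, one below the assumed threshold.
sample = json.dumps([
    {"url": "https://www.example.com/", "score": 92},
    {"url": "https://www.example.com/shop", "score": 71},
])
print(check_yslow_report(sample))
```

A non-empty result could then be used to fail the nightly build or raise an alert.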
Small things matter most, and this is not only true for day-to-day activities. Minor changes in application configuration can have a significant impact on end-to-end user experience. In this post, I will give you insights into the nature of such changes and some simple steps towards proactive detection of speed degradations.
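One simple way to detect such degradations proactively is to compare current response times against a recorded baseline and flag transactions that have slowed beyond a tolerance. The sketch below is illustrative only; the function name, the `{transaction: seconds}` input shape, and the 20% tolerance are my assumptions, not the post's method.

```python
def detect_degradations(baseline: dict, current: dict, tolerance: float = 0.20) -> list:
    """Return names of transactions whose current response time exceeds
    the baseline by more than `tolerance` (20% by default).

    Both inputs map a transaction name to its response time in seconds.
    Transactions missing from the baseline are skipped.
    """
    return [
        name for name, seconds in current.items()
        if name in baseline and seconds > baseline[name] * (1 + tolerance)
    ]
```

Run against each test cycle, this turns a silent configuration-induced slowdown into an explicit, reviewable finding.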
Performance testing has become a fundamental step in many software development projects. "Test early and repeat often" applies to load and performance testing as well: it is not a one-time shot, and there are some pitfalls involved. In this post, I will outline the three most frequently used performance test varieties.