Tutorials and resources on how to apply test automation in software testing
The first part of this article presented some of the current challenges of performance testing. It also discussed the data and time pillars of performance testing. This second part covers the resource and cost aspects, and the author shares some final thoughts on the future of performance testing.
Are your full stack acceptance tests slow, non-deterministic and hard to maintain? You’re not alone. Imagine running hundreds of them in a few seconds, giving the same result every time. How do you think a feedback loop that fast would affect your team’s productivity?
More and more organizations build test automation frameworks based on WebDriver and Appium to test their web and mobile projects. A big part of why there are so many flaky tests is that we don’t treat our tests as production code. Moreover, we don’t treat our framework as a product.
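To illustrate the "tests as production code" idea, here is a minimal Java sketch of a WebDriver check that synchronizes on explicit conditions instead of fixed sleeps, one common source of flakiness. The URL, element IDs and expected text are hypothetical, and a real framework would wrap this in page objects and a test runner.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

import java.time.Duration;

public class LoginSmokeTest {

    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            // Hypothetical application under test.
            driver.get("https://example.com/login");

            // Explicit wait: block on an observable condition rather than a
            // fixed Thread.sleep(), which fails as soon as timing shifts.
            WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
            WebElement loginButton = wait.until(
                    ExpectedConditions.elementToBeClickable(By.id("login")));
            loginButton.click();

            // Wait for the post-login element instead of assuming how long
            // the page takes to render.
            WebElement greeting = wait.until(
                    ExpectedConditions.visibilityOfElementLocated(By.id("greeting")));
            if (!greeting.getText().contains("Welcome")) {
                throw new AssertionError("Unexpected greeting: " + greeting.getText());
            }
        } finally {
            driver.quit();  // Always release the browser, even on failure.
        }
    }
}
```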
It doesn’t matter if you are developing software with Java, .NET, PHP or another language: if you need to do performance testing, it will be a challenging task, especially nowadays with microservices architectures, clusters and very complex systems. This presentation addresses the most common pitfalls of performance tests. The presenter shares his experience gained through demanding experiments and often frustrating failures.
Running more than 5,000 automated system tests on a deployed application with outgoing connections to about 25 other systems, each with its own dependencies, where test data is complex and needs to be kept in sync, is a great challenge. Doing it every night, year after year, with the requirement to fail only in the event of actual errors in the application under test, is a nightmare.
One of the most widely touted drawbacks of automated tests is that they work in a strictly bounded context: they can only detect problems for which they are specifically programmed. The standard automated test has a bunch of assertions in its last step, so by definition it cannot detect an ‘unknown’ problem. Because of this narrow focus, automated tests are occasionally compared to dumb robots. It takes a lot of time and effort to write and support them, yet their return on investment is still marginal.
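A minimal JUnit 5 sketch of that pattern, with a hypothetical calculator inlined to keep it self-contained: the assertions in the last step define everything the test can notice, and any defect outside them passes silently.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class DiscountTest {

    // Hypothetical system under test, inlined so the sketch compiles on its own.
    static class PriceCalculator {
        double totalWithDiscount(double price, double discount) {
            return price * (1.0 - discount);
        }
    }

    @Test
    void tenPercentDiscountIsApplied() {
        PriceCalculator calculator = new PriceCalculator();

        double total = calculator.totalWithDiscount(100.0, 0.10);

        // The assertion below is the test's entire "awareness": a wrong
        // currency format, a slow response or a corrupted log line would
        // all go undetected, because nothing here checks for them.
        assertEquals(90.0, total, 0.001);
    }
}
```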
People in the software testing field can hold completely opposing opinions about the process of test automation. Sometimes, after a while, the initial expectations and hopes placed in this substantially costly investment are not justified at all. In simple words, test automation does not bring the expected benefits. In this article, we will try to understand why such situations happen and how to avoid the most common mistakes.