The advantages of automated testing probably don't need much explanation. For testers, automation means faster and easier work, freeing us to focus more on manual tests. But how do developers view automated testing?
Advantages of Automated Testing
Confidence
Software development isn't just about adding new features or fixing bugs. If we want a project to remain viable, we occasionally need to "clean up," refactor, or update the libraries we use. However, this also means retesting the entire product. That can require preparing new test cases, determining the scope of testing, thinking through what has changed and what the change could have affected, and allocating time for all of it. This increases the risk that we overlook or omit something - simply because it doesn't seem important to us, or because the time for testing is limited. And the more frequent the changes, the less likely it is that anyone will devote time to testing them.
Automated testing gives us the confidence to make any changes. Thanks to automation, we don't have to fear changes because we can essentially test them immediately. We can improve the project on multiple levels and always have an overview of how the changes have affected it. Without improvements, the project would soon start to "rot."
Costs
Both manual and automated testing cost money, of course. Although the investment in automation may seem high, its return is essentially immediate. We can then add more tests or modify existing ones at minimal cost. Automation helps reduce overall costs and frees precious time for more detailed manual testing.
Scalability
Scalability is another undeniable advantage. If we test the product on multiple supported systems and want to add support for another, we immediately know where we stand. With manual testing, this would mean a lot of repetitive work; automated tests can run on an (essentially) unlimited number of configurations.
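As a minimal sketch of this idea in Python with pytest, a parametrized test can run the same check across every supported configuration; the configuration list and the test body here are hypothetical placeholders, not product code:

import pytest

# Hypothetical list of supported configurations; adding support for
# a new system means adding one more entry here.
CONFIGS = [
    ("linux", "postgres"),
    ("linux", "sqlite"),
    ("windows", "postgres"),
    ("windows", "sqlite"),
]

@pytest.mark.parametrize("platform,database", CONFIGS)
def test_storage_roundtrip(platform, database):
    # Illustrative body: a real test would set up the given
    # platform/database combination and exercise the product on it.
    assert platform in {"linux", "windows"}
    assert database in {"postgres", "sqlite"}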
Reliability of Automated Tests
With automated tests, we must keep in mind that a suite is only as reliable as the least reliable component in the testing chain. Tests must be deterministic; chance cannot play a role. If a test can randomly fail, the probability that at least one such false failure occurs grows with the number of tests we run.
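A common source of non-determinism is hidden randomness. One way to remove chance from a test, sketched here in Python (the shuffle_deck function is a made-up example), is to pass in an explicitly seeded random generator:

import random

def shuffle_deck(deck, rng):
    # Hypothetical function under test: shuffles a copy of the deck
    # using the supplied generator instead of global random state.
    deck = list(deck)
    rng.shuffle(deck)
    return deck

def test_shuffle_is_deterministic_with_fixed_seed():
    # The same seed always produces the same order, so the test
    # cannot randomly pass or fail.
    result = shuffle_deck(range(5), random.Random(42))
    assert result == shuffle_deck(range(5), random.Random(42))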
When creating automated tests, it's therefore necessary to know the system under test, its input parameters, and its configurations well. Tests should not depend on each other, and their order should not matter. Tests should also have no side effects: at the beginning, a test should establish all necessary prerequisites, and at the end it should return the system to its original state. A larger number of small, independent tests gives us greater flexibility.
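One way to make this setup-and-teardown discipline explicit, assuming pytest, is a fixture. pytest's built-in tmp_path fixture already gives each test a fresh directory, so tests cannot depend on each other or on execution order; the explicit cleanup below just makes the "return the system to its original state" step visible:

import pytest

@pytest.fixture
def workspace(tmp_path):
    # Setup: establish the prerequisites the test needs.
    (tmp_path / "state.txt").write_text("initial")
    yield tmp_path
    # Teardown: remove everything the test created, restoring
    # the original state.
    for file in tmp_path.iterdir():
        file.unlink()

def test_writes_do_not_leak(workspace):
    (workspace / "output.txt").write_text("result")
    assert (workspace / "output.txt").read_text() == "result"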
When to Start with Automation
The best time to start is immediately: "as soon as it does something, it should be tested." When we want to start with automated tests, we need to design the infrastructure and strategy, and that is best done at the beginning of the project, along with everything else being planned. Adding automation to a project that is already running is considerably harder.
When to Write Tests
Should we prepare the logic first and then the tests? That depends on the developer's preferences, the project setup, or the technology used. The order matters less than thinking about both. Sometimes it's good to iterate: when adding logic, think about how it will be tested, then write the tests and return to the logic again, repeating until we're satisfied with the result.
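A minimal illustration of this iteration in Python with pytest (the slugify function is a made-up example): each test captures the next requirement, and the logic is extended until it passes.

def slugify(title):
    # Hypothetical function grown test by test; .strip() was added
    # in a later iteration to satisfy the second test below.
    return title.strip().lower().replace(" ", "-")

def test_slugify_basic():
    # First iteration: states what we want from the logic.
    assert slugify("Hello World") == "hello-world"

def test_slugify_ignores_surrounding_whitespace():
    # Second iteration: a new requirement drove a change in the logic.
    assert slugify("  Hello World  ") == "hello-world"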
How Often to Run Tests
Ideally (automatically) after each project build. A developer can also run tests manually after any change that could affect something. It's therefore important that tests are easy to run and take a reasonable amount of time. It's also advisable to divide tests into multiple layers so that, for example, only a subset needs to run.
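Dividing tests into layers can be as simple as tagging them, sketched here with pytest markers; the marker names "unit" and "integration" are our own choice and would be registered in the project's pytest configuration:

import pytest

@pytest.mark.unit
def test_price_calculation():
    # Fast, isolated test: cheap enough to run after every change.
    assert round(100 * 1.21, 2) == 121.0

@pytest.mark.integration
def test_order_end_to_end():
    # Slower test touching external systems; run less often,
    # e.g. only after the nightly build. Body is illustrative.
    assert True

# Run only the fast layer:      pytest -m unit
# Run everything except it:     pytest -m "not unit"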
Coverage and Prioritization
The pursuit of 100% code coverage can make refactoring harder: tests coupled to the system's internal structure break whenever that structure changes, even when behavior doesn't. A large number of tests is a problem in itself, too - the suite takes longer to run and demands more maintenance. When designing tests, we should therefore prioritize coverage of public application interfaces and business-critical functionality: the areas where a potential bug would have the biggest impact.
Evaluating Automated Tests
Some tests may "glow red" in the results, but that doesn't necessarily indicate a defect or block the release. Some failures may correspond to known bugs; in other cases we may be waiting for a third-party fix. What matters is knowing why those tests fail: for each such test, we should be able to attach a note with the justification. Before releasing a version, of course, such tests should not exist, or their number should be minimal. The interpretation of results should thus stay clear, and the classic passed/failed should suffice.
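In pytest, for instance, a known failure can carry its justification directly in the code via the xfail marker, so the report distinguishes it from an unexpected failure; the test body and reason string here are illustrative:

import pytest

@pytest.mark.xfail(reason="known bug, waiting for a third-party fix")
def test_export_handles_unicode_filenames():
    # Fails today for a documented reason; pytest reports it as
    # XFAIL rather than a plain failure. Once the fix lands, the
    # test starts passing (XPASS), reminding us to remove the mark.
    raise NotImplementedError("third-party library bug")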