Joana Roque Durao
Testing is considered one of the good coding practices, yet few developers are encouraged to write tests. Time to make testing great again.
There are basically two reasons why software testing falls by the wayside these days. On the one hand, I notice that testing is deliberately left out of the development process. You can see this tendency particularly in outsourced projects, where time is money. On the other hand, software developers simply overlook the importance of writing tests. Surprisingly, they prefer the old “run-and-try-things” approach, which actually takes more time, is more error-prone, and needs to be repeated for every future change within the application.
Ignoring testing is counterproductive
But let me be clear: whatever your reason is, neglecting software testing is counterproductive. In this ever-expanding world of Software-as-a-Service, Platforms-as-a-Service and mobile apps developed at the speed of light, it’s a deal breaker when an application isn’t working, as users can google new alternatives within a minute.
It’s increasingly important to show that online services are robust and can constantly offer new features without breaking existing functionality. It’s one of the reasons that good testing procedures and practices are still a widely discussed topic. Related articles, such as the one you are reading now, continuously pop up in the article feeds of developer communities.
The old “run-and-try-things” approach takes more time, is more error-prone, and needs to be repeated for all future changes within the app.
Currently, and luckily, I’m working for a company where the time to write tests has always been considered an integral part of development. Here it’s enforced that no task is finished without tests proving that the code works. A CI tool helps me and my colleagues prevent merging code with failing tests, so that during peer review we can ensure the responsible person verifies and corrects the code before merging it into our staging and production branches.
What defines good software testing?
But merely enforcing the activity is of course not enough for great software testing. After all, what actually defines a good test? Which components of an application should be tested? At which architectural levels should tests occur? What are the best procedures to ensure code quality? These are questions that should be addressed when designing a testing setup for an application.
The answers to these questions depend on several factors, and there is no general formula that applies to every case. That’s why I’ve written the first part of a series of articles that aims to provide some answers, drawn from my current experience with a product implemented in a microservices architecture.
Software testing requirements
A good test is a logical proof, written in code, that our function will always work. It shouldn’t only run the happy path for the function and then expect no errors. It should verify that the changes implemented in the function were actually applied and that the return value is the correct one for the given inputs. It should cover all of the situations in which a function/method doesn’t work as intended. Sometimes all of the possible failure situations can be found, sometimes not, but a good test should at least cover all of the corner cases: think about null or empty inputs and incorrect formats.
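As a minimal sketch of this idea, assume a hypothetical `parse_age()` helper (the function name and rules are illustrative, not from a real codebase). Good tests verify the actual return value on the happy path and cover the corner cases mentioned above: missing input, empty input, incorrect format, and out-of-range values.

```python
# Hypothetical function under test.
def parse_age(raw):
    """Parse a user-supplied age string into an int, or raise ValueError."""
    if raw is None or raw.strip() == "":
        raise ValueError("age is required")
    if not raw.strip().isdigit():
        raise ValueError("age must be a whole number")
    age = int(raw)
    if age > 150:
        raise ValueError("age is out of range")
    return age

def test_happy_path():
    # Check the actual return value, not just "no error occurred".
    assert parse_age("42") == 42

def test_corner_cases():
    # Missing, empty, badly formatted and out-of-range inputs must all fail.
    for bad in [None, "", "   ", "forty-two", "-3", "500"]:
        try:
            parse_age(bad)
            raise AssertionError(f"{bad!r} should have been rejected")
        except ValueError:
            pass

# A runner such as pytest would normally collect these; called directly here.
test_happy_path()
test_corner_cases()
```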
One useful technique for detecting most of the error-prone situations is to continuously analyse the code coverage of the tests. Some companies block merges into their codebases when test coverage falls below a certain threshold. This is highly encouraged, as it pushes developers to build more comprehensive tests and therefore minimises the possibility of errors in the long term.
But it is very easy to get good coverage without having good tests. To avoid this, it is also important to review the code of the tests themselves, whether through peer review or another mechanism, to ensure their quality. As a development team, you want to reduce the need to refactor or improve test code in the future.
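To illustrate the pitfall, here is a sketch with a hypothetical `discount()` function: the first test executes every line, so a coverage tool reports 100%, yet it asserts nothing about the result and would keep passing even if the formula were wrong.

```python
# Hypothetical function under test.
def discount(price, percent):
    """Apply a percentage discount to a price."""
    if percent < 0 or percent > 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

def test_discount_weak():
    # Executes every line of discount() -> 100% coverage reported,
    # but a bug like `1 + percent / 100` would still pass unnoticed.
    discount(100, 10)

def test_discount_strong():
    # A meaningful test pins down the actual behaviour.
    assert discount(100, 10) == 90.0
    assert discount(50, 0) == 50.0

test_discount_weak()
test_discount_strong()
```

Reviewing test code catches exactly this difference: both tests look fine on a coverage report, but only the second one proves anything.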
To mock or not to mock
A typical application nowadays interacts with several outside components: for example, databases and other data stores, external APIs, message queues, and other custom applications/services in the same environment.
While it is not required that every test verifies the interaction with all external components, most of the common problems that could occur should be covered. One way of doing this without setting up an entire development environment is to use mocked entities that replicate a component with a given behaviour. This way, failures in communication, failures in data retrieval, and other such cases can be detected quickly, without the need to set up an instance of the component.
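As a sketch of this, assume a hypothetical `UserService` that fetches profiles through an injected HTTP client (the class and method names are illustrative). With Python’s `unittest.mock`, the client can be replaced to simulate both successful data retrieval and a communication failure, with no real server involved:

```python
from unittest.mock import Mock

# Hypothetical service: fetches a user profile via an injected HTTP client.
class UserService:
    def __init__(self, http_client):
        self.http_client = http_client

    def display_name(self, user_id):
        try:
            profile = self.http_client.get_json(f"/users/{user_id}")
        except ConnectionError:
            return "unknown"  # degrade gracefully on a communication failure
        return profile.get("name", "anonymous")

# Simulate successful data retrieval.
ok_client = Mock()
ok_client.get_json.return_value = {"name": "Joana"}
assert UserService(ok_client).display_name(1) == "Joana"

# Simulate a communication failure -- no network or server required.
failing_client = Mock()
failing_client.get_json.side_effect = ConnectionError("network down")
assert UserService(failing_client).display_name(1) == "unknown"
```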
Integration tests with more complex and comprehensive checks should be included.
In order to use mocks, it’s also important to design our software components so that interactions with external components are separated from the core functionality. Having separate components manage the interactions with external entities makes the application more extensible and modular, as it becomes much easier to replace the current external-component interface if it ever needs to be switched to another technology or tool.
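This separation can be sketched as an abstract interface plus interchangeable implementations (the names below are hypothetical). The business logic depends only on the abstraction, so tests can inject an in-memory store, and production can swap in a database-backed one without touching the service:

```python
from abc import ABC, abstractmethod

# Abstract storage interface: the rest of the application depends only
# on this, never on a concrete database driver.
class OrderStore(ABC):
    @abstractmethod
    def save(self, order_id, payload): ...

    @abstractmethod
    def load(self, order_id): ...

# One interchangeable implementation; a Postgres- or Redis-backed class
# implementing the same interface could replace it later.
class InMemoryOrderStore(OrderStore):
    def __init__(self):
        self._data = {}

    def save(self, order_id, payload):
        self._data[order_id] = payload

    def load(self, order_id):
        return self._data.get(order_id)

# Business logic receives the store through its constructor, so tests
# can inject an in-memory (or mocked) store instead of a real database.
class OrderService:
    def __init__(self, store):
        self.store = store

    def place_order(self, order_id, items):
        self.store.save(order_id, {"items": items, "status": "placed"})
        return self.store.load(order_id)

service = OrderService(InMemoryOrderStore())
assert service.place_order(1, ["book"])["status"] == "placed"
```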
The usage of mocks is important and encouraged, but of course it does not substitute for integration tests. Integration tests with more complex and comprehensive checks should also be included. These tests confirm the behaviour of the application as a whole, exercise the required and optional components in the environment, and detect breaking changes when a communication interface between components is modified because of bugs or updates. My promise: I will discuss the importance and usage of integration tests in the second part of this series.
Test Methodology and Organisation
Personally, I think the way that tests are organised should be a choice of the development team, made with the consideration of the stack, colleagues’ experience and the project management method in use.
One thing that works for me is to define tests by scenarios, based on the inputs of the functions. Think of a formula like if <input> then <output>. For example, if my input is empty, then the function returns error x. I don’t necessarily think in terms of requirement or user-story scenarios (except when defining end-to-end tests), but in terms of thinking through all the possible scenarios in which the function might run, even in isolation from the application. In this way, situations where a software component is tightly coupled to an application can be detected, improving its chances of reusability in the future.
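This if <input> then <output> style lends itself to table-driven tests: each scenario is a pair of input and expected outcome, and one loop exercises them all. A minimal sketch, assuming a hypothetical `normalize_username()` helper:

```python
# Hypothetical helper under test.
def normalize_username(raw):
    """Lowercase and trim a username; raise ValueError on empty input."""
    if raw is None or raw.strip() == "":
        raise ValueError("username is empty")
    return raw.strip().lower()

# Each scenario: (input, expected output or expected exception type).
scenarios = [
    ("Alice", "alice"),    # happy path
    ("  Bob  ", "bob"),    # surrounding whitespace is trimmed
    ("", ValueError),      # if input is empty then an error is raised
    (None, ValueError),    # if input is missing then an error is raised
]

for raw, expected in scenarios:
    if isinstance(expected, type) and issubclass(expected, Exception):
        try:
            normalize_username(raw)
            raise AssertionError(f"{raw!r} should have failed")
        except expected:
            pass
    else:
        assert normalize_username(raw) == expected
```

Adding a new scenario is then a one-line change to the table, which keeps the test honest about exactly which input/output pairs it guarantees.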
Another important question is: at which point of the development process is it appropriate to write tests? In my own case, I organise my development in the following phases (and I’m sure most developers do the same):
- Read the requirements/issue description to solve
- Write the code to implement or solve the issue in question
- Write tests to run the function or improve the existing tests to prove that the issue is being corrected by the fix
- Run tests, detect failures, correct the implementation and repeat until there are no more failures left.
Notice I do not mention “running the function in main with different kinds of inputs”. That is essentially manual testing, and the cases that would be run are most likely the very ones that should be coded into the tests. By avoiding manual testing, the time otherwise wasted running every possible failure case by hand is spent writing the code that automates their execution, which is far more convenient and leads to fewer errors.
Automated software testing forever
Other development techniques may be used instead and are equally valid, such as Test-Driven Development and Behaviour-Driven Development, amongst others. The important thing is to ensure that, no matter which technique the developers choose, writing tests is encouraged and never forgotten, replacing the need for manual tests.