Thursday, January 16, 2014

TDD, what to test?

TDD is a development strategy or methodology, but it doesn't tell us what to test for a given solution. However, defining clearly what to test is so essential that without it we won't benefit from TDD.

According to Kent Beck, TDD stems from the awareness of a gap between requirements (decisions) and solutions (the feedback to those requirements). TDD is a technique to control and mitigate that gap. How does TDD help? Through the Red/Green/Refactor cycle. That is, we first write a test (based on a requirement, of course) and it fails, since there are no solution components supporting it yet (Red). Next, we develop the required solution components and make the test pass (Green). We are not done yet; passing the test is not enough. The third step, Refactor, is essential for keeping the code DRY and the design consistent. And this is why TDD is also called Test Driven "Design".
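As a minimal sketch of the cycle (assuming JUnit 4 and a made-up DiscountCalculator requirement, not anything from a real project), the test below is written first and fails, then the simplest implementation makes it pass, and refactoring follows with the tests as a safety net:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Step 1 (Red): this test is written first and fails
// until DiscountCalculator exists and implements the rule.
public class DiscountCalculatorTest {

    @Test
    public void ordersOfOneHundredOrMoreGetTenPercentOff() {
        DiscountCalculator calculator = new DiscountCalculator();
        assertEquals(90.0, calculator.priceAfterDiscount(100.0), 0.001);
    }

    @Test
    public void smallOrdersPayFullPrice() {
        DiscountCalculator calculator = new DiscountCalculator();
        assertEquals(50.0, calculator.priceAfterDiscount(50.0), 0.001);
    }
}

// Step 2 (Green): the simplest implementation that makes both tests pass.
// Step 3 (Refactor): with the tests as a safety net, the magic numbers
// could later be extracted into named constants or a pricing policy object.
class DiscountCalculator {
    double priceAfterDiscount(double amount) {
        return amount >= 100.0 ? amount * 0.9 : amount;
    }
}
```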

Writing tests is based on requirements, and requirements are usually provided in the form of user stories or use cases, so it is a process of examining the solution from the outside. In other words, writing a test is like telling a story about how the user (or another external entity) interacts with the solution: how the user provides certain input data and how the solution responds in an expected manner based on that input and the environment setup/configuration. Whether we realize it or not, by writing tests we are focusing on interaction, inventing the interface, and discovering business/system functions. This is one of the amazing parts of TDD.
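To make that concrete, here is a hypothetical illustration (the Account class and its methods are invented for this post, and JUnit 4 is assumed): the test reads like a short story of the interaction, and writing it first is what forces us to invent the interface.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class AccountInteractionTest {

    @Test
    public void withdrawingReducesTheBalance() {
        // Given: the environment/setup the story starts from
        Account account = new Account(200.0);

        // When: the user's interaction with the solution
        account.withdraw(80.0);

        // Then: the expected response
        assertEquals(120.0, account.getBalance(), 0.001);
    }
}

// The interface below was "discovered" by writing the test first:
// the test told us an Account needs a constructor taking an opening
// balance, a withdraw operation, and a way to observe the balance.
class Account {
    private double balance;

    Account(double openingBalance) { this.balance = openingBalance; }

    void withdraw(double amount) { this.balance -= amount; }

    double getBalance() { return balance; }
}
```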

TDD keeps the team focused and tells us what "done" means and when we are done. This is all good. But when it comes to identifying test cases (what to test), things get murky. In practice, unit tests are often overcomplicated: they try to test a whole business process flow covering multiple business functions, and to do that developers build "test services" that orchestrate multiple business service functions. Conversely, some unit tests are made too granular to be meaningful. The question is how to write unit tests at the appropriate level.

Some people say what to test depends on the requirements, but that guidance is too vague to follow. Fortunately, the "interaction" view mentioned above implies that all public/interface methods should be tested, which in turn means that public methods should be carefully identified, though that reaches into the design effort of encapsulation. Another aspect to consider is code coverage: tests should cover all code, i.e. tests should be designed to traverse each logical path. But code coverage doesn't mean we have to prepare test cases for every method. We don't need them for private methods, because those are implicitly included; we don't need dedicated test cases for public methods in lower layers either; but we do have to prepare test cases for the top-layer public methods.
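A small sketch of what "implicitly included" means, again with made-up names and JUnit 4 assumed: the test cases target only the public method, but the inputs are chosen so that every branch of the private helpers is exercised along the way.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class ShippingFeeServiceTest {

    private final ShippingFeeService service = new ShippingFeeService();

    // Each test targets the public method; together they cover
    // both branches of the private helper without testing it directly.
    @Test
    public void lightParcelsShipAtBaseRate() {
        assertEquals(5.0, service.feeFor(1.0), 0.001);
    }

    @Test
    public void heavyParcelsPayASurcharge() {
        assertEquals(12.0, service.feeFor(10.0), 0.001);
    }
}

class ShippingFeeService {

    // Public/interface method: this is what gets explicit test cases.
    public double feeFor(double weightKg) {
        return baseFee() + surchargeFor(weightKg);
    }

    // Private helpers: no dedicated tests, but both branches are
    // traversed by the tests of feeFor above.
    private double baseFee() {
        return 5.0;
    }

    private double surchargeFor(double weightKg) {
        return weightKg > 5.0 ? 7.0 : 0.0;
    }
}
```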

So, instead of asking the general question of what to test, we should ask: what is the minimum set of tests that, when passing, makes us feel confident about the solution? In a three-layer paradigm, tests are mainly written against the business service layer. That being said, we should avoid putting business logic in the controller layer or the view layer and keep it in the service layer instead. Considering that unit tests are prepared by developers, the idea of "testing the top-layer public methods" still applies, just from the developer's perspective. For example, if a developer only codes data access layer methods that support upper-layer functions, then he needs to set up test cases for those data access layer public methods.
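Here is one possible shape of such a service-layer test, assuming JUnit 4 plus Mockito and hypothetical OrderService/OrderRepository types: the data access layer is mocked, so the test exercises only the business service layer's public method.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

public class OrderServiceTest {

    @Test
    public void totalIsSummedAcrossTheCustomersOrders() {
        // The data access layer is stubbed out, so the test exercises
        // only the business logic in the service layer.
        OrderRepository repository = mock(OrderRepository.class);
        when(repository.findAmountsByCustomer("c-42"))
                .thenReturn(new double[] { 10.0, 25.5 });

        OrderService service = new OrderService(repository);

        assertEquals(35.5, service.totalSpentBy("c-42"), 0.001);
    }
}

// Hypothetical lower-layer interface: it gets no dedicated test here;
// it is exercised indirectly through the service-layer test.
interface OrderRepository {
    double[] findAmountsByCustomer(String customerId);
}

// The business service layer: the "top-layer public method" from the
// developer's perspective, and therefore what the test cases target.
class OrderService {
    private final OrderRepository repository;

    OrderService(OrderRepository repository) { this.repository = repository; }

    public double totalSpentBy(String customerId) {
        double total = 0.0;
        for (double amount : repository.findAmountsByCustomer(customerId)) {
            total += amount;
        }
        return total;
    }
}
```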

In addition, unit tests are "white-box" in nature, which means we can explicitly prepare test data sets based on the intention and implementation of a public interface method so that all paths and branches are traversed.
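For instance, if we know the implementation has a guard clause, a grace period, and a normal case (a made-up LateFeePolicy, JUnit 4 assumed), we can pick one data point per branch:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class LateFeePolicyTest {

    private final LateFeePolicy policy = new LateFeePolicy();

    // Knowing the implementation, each data point below is chosen
    // to force a different branch of calculateFee.
    @Test(expected = IllegalArgumentException.class)
    public void negativeDaysAreRejected() {            // branch 1: guard clause
        policy.calculateFee(-1);
    }

    @Test
    public void withinGracePeriodThereIsNoFee() {      // branch 2: grace period
        assertEquals(0.0, policy.calculateFee(3), 0.001);
    }

    @Test
    public void afterGracePeriodFeeGrowsPerDay() {     // branch 3: normal case
        assertEquals(4.0, policy.calculateFee(9), 0.001);
    }
}

class LateFeePolicy {
    public double calculateFee(int daysLate) {
        if (daysLate < 0) {
            throw new IllegalArgumentException("daysLate must not be negative");
        }
        if (daysLate <= 5) {
            return 0.0;                 // grace period: no fee
        }
        return (daysLate - 5) * 1.0;    // 1.0 per day beyond the grace period
    }
}
```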
