Growing Object-Oriented Software, Guided By Tests (chapter 5)

Feb 08, 22

Q: How do we maintain the TDD cycle?

A: Start each feature with an acceptance test.

We should only use terminology from the application’s domain (and not the underlying technologies).

This will help to decouple the technology from the logic, and protects us if the underlying technology needs to change.

Writing an acceptance test before coding will help to clarify exactly what we need to implement. The precision required to express the requirements in a form that can be automatically tested will help to uncover any implicit assumptions.
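For example, here is a rough sketch of what a domain-level acceptance test might look like (a hypothetical library-lending domain of my own, using JUnit 5, not an example from the book) - note that nothing in it mentions HTTP, JSON or database tables:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

// The test speaks only in domain terms: members, books, borrowing.
class BorrowingABookTest {

    private final Library library = new Library();

    @Test
    void memberCanBorrowAnAvailableBook() {
        library.addBook("Growing Object-Oriented Software");

        boolean borrowed = library.borrow("alice", "Growing Object-Oriented Software");

        assertTrue(borrowed, "an available book should be lendable to a member");
    }

    // Tiny in-memory stand-in so the sketch compiles on its own; in a real system this
    // role is played by a test driver that talks to the deployed application.
    static class Library {
        private final java.util.Set<String> availableTitles = new java.util.HashSet<>();

        void addBook(String title) {
            availableTitles.add(title);
        }

        boolean borrow(String memberName, String title) {
            return availableTitles.remove(title);
        }
    }
}
```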

The failing tests will keep us focused on implementing the limited features they describe and will improve our chances of actually delivering them (heh)

These tests will also help us to look at the system from the point of view of a user. We will focus on what users need from the software, rather than being guided by an implementer’s assumptions about what they want.

Unit tests, on the other hand, test objects in isolation, which does not lend itself to thinking about the system as a whole. They do not show how the smaller units interact with each other - this is why acceptance tests are so important.


We should seek to separate the acceptance tests into two groups: tests that catch regressions, and tests that push the system forward and influence the design.

Once an acceptance test is passing, it should always pass. Any failures indicate regression in our system. If requirements change, the affected acceptance tests should be isolated from the main test suite until they are functional again.
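In JUnit 5, for example, a test affected by a requirements change can be quarantined with @Disabled until the system catches up (hypothetical checkout tests, just to show the mechanism):

```java
import org.junit.jupiter.api.Disabled;
import org.junit.jupiter.api.Test;

class CheckoutAcceptanceTest {

    @Test
    void acceptedOrdersAreConfirmedImmediately() {
        // once green, this must stay green - a failure here is a regression
    }

    // The requirement changed: discounts are now applied at payment time, not at checkout.
    // Rather than leaving the build red, keep this test out of the main run until the
    // system has caught up with the new behaviour.
    @Disabled("requirement changed - pending rework of discount handling")
    @Test
    void discountIsAppliedAtCheckout() {
        // to be rewritten against the new requirement
    }
}
```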


Where should we start? The authors recommend starting with the simplest possible success case (this is good for morale and will also help to shake out the failure cases, which can be added as tests later).

In the process of creating the ‘happy path’ acceptance test, an engineer should make notes and build a checklist of pain points and error-handling cases to tackle once the happy-path test is green. This will help us to stay on track and not get distracted. However, we must remember that the feature is not complete until the tasks on the checklist are done.
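Something like this, perhaps (a hypothetical money-transfer example in JUnit 5) - the happy path gets the test, everything else goes on the checklist for later:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class TransferMoneyTest {

    // Checklist noted down while writing the happy path; each item becomes its own
    // test once this one is green:
    // TODO: transferring more than the available balance
    // TODO: transferring to an unknown account
    // TODO: two transfers from the same account at the same time

    @Test
    void transferMovesMoneyBetweenAccounts() {
        Account source = new Account(100);
        Account target = new Account(0);

        source.transferTo(target, 30);

        assertEquals(70, source.balance());
        assertEquals(30, target.balance());
    }

    // Minimal in-file stand-in for the real domain class, just to keep the sketch self-contained.
    static class Account {
        private int balance;

        Account(int openingBalance) {
            this.balance = openingBalance;
        }

        void transferTo(Account other, int amount) {
            balance -= amount;
            other.balance += amount;
        }

        int balance() {
            return balance;
        }
    }
}
```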

Try to write the test to be as clear as possible about the behaviour you would like to see from the system or object.

We completely ignore that the code will not compile, and just concentrate on giving the test a great description.

We know we have implemented enough of the supporting code when the test fails in the way that we expect (so basically the code compiles at this point, but our assertions fail because the logic is not yet present). There should also be a clear error message explaining what needs to be done.

Only then do we start writing the code to make the test pass.


Always watch the test fail before writing code to make it pass. This avoids so-called ‘evergreen’ tests that never fail and therefore test nothing. Also, if the test fails in a way we didn’t expect, we should fix that before trying to make the test pass - we want to be sure we are building on a solid logical understanding of the domain.
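As a sketch (a hypothetical Roman-numeral formatter, JUnit 5): just enough production code exists for the test to compile, so the failure we see is the assertion we expected, with a message that tells us what to do next, rather than a compile error or a NullPointerException:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class RomanNumeralTest {

    @Test
    void fourIsWrittenAsIV() {
        assertEquals("IV", new RomanNumeral().format(4),
                "4 should be written using subtractive notation");
    }
}

// Just enough implementation to make the test compile and fail for the right reason:
// the assertion, not a missing class.
class RomanNumeral {
    String format(int value) {
        return ""; // placeholder - the real logic comes once we have watched the expected failure
    }
}
```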

We should also strive to provide useful error messages when the test fails. This will help us/our team with debugging in the future.

We keep adjusting the test code and rerunning the tests until the failure messages guide us to the problem with the code.

There is an emphasis on making the diagnostic messages clear and accurate. This will help us with future debugging, but will also help us to clarify the domain problems as we progress.
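For instance (a hypothetical audit-log check, JUnit 5 plus Hamcrest), naming the expectation and using a matcher produces a diagnostic that points straight at the problem, where a bare assertTrue would only report ‘expected true but was false’:

```java
import java.util.List;

import org.junit.jupiter.api.Test;

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.hasItem;

class NotificationDiagnosticsTest {

    @Test
    void declinedPaymentsAreRecordedInTheAuditLog() {
        List<String> auditLog = List.of("payment declined: order-42");

        // The reason string and the matcher together describe the expectation, so a
        // failure reports what the audit log actually contained and what was missing.
        assertThat("audit log after a declined payment", auditLog,
                hasItem("payment declined: order-42"));
    }
}
```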

Again, we should be testing ‘from the outside in’, allowing our acceptance tests to guide us in our internal implementations. We should ‘develop from the inputs to the outputs’, filling in the internal implementation once we have robust tests at the boundaries of the system. If we start by writing unit tests, we could waste time creating domain objects that fit poorly with the wider system and ultimately need to be discarded.

Here is a super important point - when unit testing:

WE SHOULD TEST BEHAVIOUR, NOT METHODS

We should strive to test behaviour to reduce coupling to implementation details. Rather than testing the methods themselves explicitly, test the behaviour you would like the code to exhibit. This way we can see which features of the system are being tested. We need to know how to use the class to achieve a goal, not how to exercise all the paths through its code. woooOOOooooah

Test names should describe features
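As an illustration (plain JUnit 5, using a standard java.util.Queue just to have something to test): method-focused names like testAdd() or testPoll() say nothing about intent, whereas behaviour-focused names read like a specification of the feature:

```java
import java.util.ArrayDeque;
import java.util.Queue;

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

class QueueBehaviourTest {

    @Test
    void handsOutItemsInTheOrderTheyWereAdded() {
        Queue<String> queue = new ArrayDeque<>();
        queue.add("first");
        queue.add("second");

        assertEquals("first", queue.poll());
        assertEquals("second", queue.poll());
    }

    @Test
    void reportsItselfEmptyOnceAllItemsHaveBeenTaken() {
        Queue<String> queue = new ArrayDeque<>();
        queue.add("only item");
        queue.poll();

        assertTrue(queue.isEmpty());
    }
}
```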

Listen to the tests - if the system is hard to test, maybe it is time to refactor the production code? Make it less coupled, more extensible, etc.

Tests are a great way to expose when you are backing yourself into a ‘design corner’.

The refactoring should also be relatively painless if you are testing behaviour and not the methods. We will also have the bonus of a test suite to catch regressions, woohoo!
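A sketch of what listening to the tests can lead to (a hypothetical OrderService and Mailer, JUnit 5): if OrderService constructed its mailer internally, the test would have no way to observe what was sent; struggling to write that test is the signal to refactor and pass the collaborator in:

```java
import java.util.ArrayList;
import java.util.List;

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// The dependency is an interface passed in from outside, which gives tests a seam.
interface Mailer {
    void send(String to, String body);
}

class OrderService {
    private final Mailer mailer;

    OrderService(Mailer mailer) {
        this.mailer = mailer;
    }

    void place(String order) {
        mailer.send("orders@example.com", "placed: " + order);
    }
}

class OrderServiceTest {

    @Test
    void sendsAConfirmationMailWhenAnOrderIsPlaced() {
        List<String> sentMails = new ArrayList<>();
        OrderService service = new OrderService((to, body) -> sentMails.add(to + ": " + body));

        service.place("coffee grinder");

        assertEquals(List.of("orders@example.com: placed: coffee grinder"), sentMails);
    }
}
```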

How do we know which granularity to test at? It depends and should be adjusted on a case-by-case basis.

The main thing is that we can change/refactor the code and be covered by a noice test suite, which will give us confidence that our changes won’t fuck shit up.

Remember - the only constant in software engineering is change ;)