Growing Object-Oriented Software, Guided By Tests (chapter 23)
Feb 26, 22

The real point of a test is not to pass, but to fail. A failing test gives us great insight into our system and any regressions.
That being said, we need to ensure that failing tests are easy to diagnose when they happen. We want to avoid having to use the debugger when a test fails!
Synchronize with version control frequently, so incremental changes are easy to roll back if a test starts failing unexpectedly.
We should strive to make the tests fail ‘informatively’ - giving us as much information at runtime as possible.
Here are some tips:
### Use small, focused, well-named tests

Keep each test small, so that its name alone gives you essentially all the information you need.
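For instance, a minimal JUnit 4 sketch (`DiscountCalculator` and its `discountFor` method are hypothetical, stubbed inline so the example runs):

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class DiscountCalculatorTest {
    // Minimal stand-in for the hypothetical class under test.
    static class DiscountCalculator {
        int discountFor(int orderTotalInCents) {
            return orderTotalInCents >= 100_00 ? 10 : 0;
        }
    }

    // The name states the scenario and the expected outcome, so a failure
    // report is readable before we even open the test body.
    @Test
    public void appliesNoDiscountBelowTheMinimumSpend() {
        assertEquals(0, new DiscountCalculator().discountFor(9_99));
    }
}
```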
### Use explanatory assertion messages
If the failure output of a test is not clear, it will be hard to understand what went wrong just by looking at the console. Make your assertion messages clear for the reader.
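As a sketch: JUnit 4's `assertEquals` accepts a message as an optional first argument (the discount values here are made up):

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class AssertionMessageTest {
    @Test
    public void reportsWhatWasBeingChecked() {
        int actualDiscount = 50; // stand-in for the value under test

        // Without a message, a failure prints only the raw values:
        // "Expected <50>, got <60>".
        assertEquals(50, actualDiscount);

        // With a message, the failure also says what the numbers mean.
        assertEquals("discount applied to a first-time order", 50, actualDiscount);
    }
}
```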
### Highlight details with matchers
Try to draw the reader’s attention to the critical information in the test failure message by using matchers.
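A minimal sketch with Hamcrest's `hasItem` matcher (the discount strings are invented for illustration):

```java
import java.util.List;
import org.junit.Test;
import static java.util.Arrays.asList;
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.hasItem;

public class MatcherDiagnosticsTest {
    @Test
    public void describesTheCollectionOnFailure() {
        List<String> appliedDiscounts = asList("normal discount", "loyalty discount");

        // If this assertion fails, Hamcrest prints both the item it looked
        // for and the full collection it searched, rather than just "false".
        assertThat(appliedDiscounts, hasItem("normal discount"));
    }
}
```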
### Use self-describing values
You could also try introducing helper methods that take a value and return its name, to pinpoint incorrect output. For example, this:
```
- Test Discount
$ Expected <50>, got <60>
```

becomes this:

```
- Test Discount
$ Expected <normal discount>, got <super discount>
```
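One way to get output like that, sketched in JUnit 4 (the constants and the `describe` helper are hypothetical):

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class DiscountNamingTest {
    static final int NORMAL_DISCOUNT = 50;
    static final int SUPER_DISCOUNT = 60;

    // Maps a raw value back to its symbolic name, so a failure reads
    // "Expected <normal discount>, got <super discount>" instead of
    // "Expected <50>, got <60>".
    private static String describe(int discount) {
        if (discount == NORMAL_DISCOUNT) return "normal discount";
        if (discount == SUPER_DISCOUNT) return "super discount";
        return "unknown discount <" + discount + ">";
    }

    @Test
    public void appliesTheNormalDiscount() {
        int actual = NORMAL_DISCOUNT; // stand-in for the value under test
        assertEquals(describe(NORMAL_DISCOUNT), describe(actual));
    }
}
```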
### Obviously canned values
If values are hard to explain, consider using ‘funky’ values that would obviously never appear in production data (this is debatable :hehe). For example, when choosing an invalid ID for a user, we could set their ID to -1.
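A quick sketch of the idea (`UserDirectory` is hypothetical, stubbed inline so the example runs):

```java
import org.junit.Test;
import static org.junit.Assert.assertFalse;

public class InvalidUserTest {
    // -1 can never be a real database ID, so if it ever shows up in a
    // failure message we know the test fixture, not production data,
    // produced it.
    private static final long INVALID_USER_ID = -1;

    // Minimal stand-in for the hypothetical class under test.
    static class UserDirectory {
        boolean contains(long userId) {
            return userId > 0; // real IDs are positive in this sketch
        }
    }

    @Test
    public void rejectsLookupsForUnknownUsers() {
        assertFalse(new UserDirectory().contains(INVALID_USER_ID));
    }
}
```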
Always make sure to watch a test fail before attempting to make it pass!
We could also add an extra step to the red-green-refactor cycle: report.
This means that during the refactoring stage, we also look at how we can improve our diagnostic messages. This will aid maintenance in the future, when we have forgotten what the code/tests do!