Adopting Test Driven Development Without Losing Velocity

April 4, 2022
10 min read

Very few practices in modern software engineering elicit a wider range of emotions and responses than test-driven development (TDD). This is especially true for teams that haven't adopted TDD yet, and for their stakeholders. Perhaps one of the greatest ironies of the agile revolution is that we don't apply agile principles to process changes themselves. By looking at TDD through an agile lens, we will see how to adopt it without losing velocity.

What’s the MVP of TDD?

As agile practitioners, we are taught to identify the MVP of what we want to build: the quickest outcome that we can get into users' hands. For TDD, our developers (current and future) are our users. The value TDD brings is primarily twofold:

1. Warns the future developer about regressions. A healthy code base is one where a failing test sets off warning klaxons for the developer to reconsider what they are doing.

2. Identifies when the present-day developer is done. If your tests are passing and you have the functionality you need, then you know you are done. That is, it lets you answer "Do I need to write more code?" with a no.
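
To make that concrete, here is a minimal sketch in Python with pytest; `apply_discount` is a hypothetical function standing in for your real production code:

```python
import pytest

# `apply_discount` is a hypothetical stand-in for real production code.
def apply_discount(price: float, percent: float) -> float:
    return price * (1 - percent / 100)

def test_discount_is_applied():
    # Value 1: if a later change breaks discounting, this test fails
    # loudly and warns the future developer of the regression.
    assert apply_discount(price=100.0, percent=10) == pytest.approx(90.0)

def test_zero_discount_leaves_price_unchanged():
    # Value 2: once the tests pass and the behavior is what you need,
    # "Do I need to write more code?" is answered with a no.
    assert apply_discount(price=100.0, percent=0) == pytest.approx(100.0)
```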

Where’s the Friction?

One of the most useful ways to look at how we do things is through the perspective of friction, bad and good. Bad friction comes from practices that slow us down without sufficient ROI (such as "this meeting could've been an email"). Good friction comes from practices that slow us down but deliver enough ROI that we should keep doing them (such as pull request reviews). For this article, we'll focus on the sources of bad friction when adopting TDD. (TDD as a practice is good friction; if you don't already accept that, then you should know that this article's purpose isn't to change your mind.)

Another way of asking this question is, what is difficult about adopting TDD?

Thrown into the deep end by unrelatable examples

Many tutorials and training sessions on TDD use toy examples, such as writing a function that sums its parameters, to teach TDD. The friction comes from the fact that much of the code most of us software engineers write day in and day out is business logic, which doesn't cleanly parallel the examples often used for teaching TDD. It is like sitting down at a piano or with a guitar for the first time and being expected to play one of Bach's sonatas when all you've ever seen is people playing Chopsticks on YouTube.

Not knowing the patterns

Being effective with TDD is largely about knowing the patterns for testing common situations. Consider a typical web application; there are three general layers: the controller or API endpoint layer, the service or business logic layer that processes the controller's input, and a data access layer that may talk to a database or another application. Testing each layer differs significantly from testing the others, but within each layer there are common patterns.
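
To illustrate one of those layer-specific patterns, here is a sketch of a service-layer test in Python with pytest, where a mock stands in for the data access layer; `OrderService` is a hypothetical class invented for this example:

```python
from unittest.mock import Mock

# A hypothetical business logic class: it depends on a repository
# (the data access layer) but contains no data access code itself.
class OrderService:
    def __init__(self, repository):
        self.repository = repository

    def order_total(self, order_id):
        items = self.repository.get_line_items(order_id)
        return sum(item["price"] for item in items)

def test_order_total_sums_line_items():
    repo = Mock()  # stands in for the real data access layer
    repo.get_line_items.return_value = [{"price": 5.0}, {"price": 7.5}]

    service = OrderService(repository=repo)

    # Only the business logic is exercised; no database is involved.
    assert service.order_total(order_id=42) == 12.5
    repo.get_line_items.assert_called_once_with(42)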

For example, with an API endpoint, we need to test the URI and the response format. Do that a few times and you'll figure out not only the code needed to test those with the framework of your choice but also the edge cases you'll want to cover (e.g., if the URI takes a query parameter, have a test for when there is no query parameter value).

Then we add more requirements. Another routine requirement for API endpoints is validating user input; at least it had better be, if we don't want to fall victim to stored XSS attacks or meet Little Bobby Tables. So now we need to figure out how to write a test that ensures the input is being validated (depending on the framework you are using, you might be able to extract the actual validation logic into a separate unit test entirely, which would be great). Now we've expanded the corpus of API endpoint patterns we know how to test. When we go to add new endpoints in the future, we know how to write tests for these cases.
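
Here is a sketch of those endpoint patterns using Flask and pytest; the `/search` endpoint and its deliberately naive validation rule are assumptions made up for illustration:

```python
import pytest
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

@app.get("/search")
def search():
    query = request.args.get("q", "")
    if "<" in query or ">" in query:  # naive validation, for illustration
        abort(400)
    return jsonify(results=[], query=query)

@pytest.fixture
def client():
    return app.test_client()

def test_search_returns_json(client):
    # Pattern: verify the URI and the response format.
    response = client.get("/search?q=widgets")
    assert response.status_code == 200
    assert response.is_json

def test_search_handles_missing_query_param(client):
    # Edge case: no query parameter value at all.
    response = client.get("/search")
    assert response.status_code == 200

def test_search_rejects_markup_in_input(client):
    # Pattern: input validation, guarding against stored XSS.
    response = client.get("/search?q=<script>alert(1)</script>")
    assert response.status_code == 400
```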

Not knowing how to write a failing test

This is similar to the previous point, but there is a not insignificant difference between them: no matter how many patterns you know, you will always encounter novel scenarios that you have no idea how to test. You have two choices: bang your head against the wall while screaming "I must write a failing test first," or realize that what you are doing is not writing production code but rather exploratory work to figure out what you need to do. Even with 15 years of experience under my belt, I still regularly find myself in cases I don't know how to test from the outset. Haven't tested a SAML processing flow before? Good luck writing that failing test first.

Misunderstanding when a test has to be written first

This might be the most common, yet controversial, source of friction when adopting TDD, and we hinted at it in the previous point. Far too many TDD advocates and far too many TDD newbies think you always have to have a failing test before you can write any code. That's simply not helpful. What TDD actually says is that you should have a failing test before you write or modify production code. Need to play around to figure out how to even accomplish something before thinking about how to test it? Great! Do that. You're still a TDD practitioner.

Adopting TDD

Now that we know what the TDD MVP is and some of the roadblocks teams encounter when adopting it, here is how I train people to adopt TDD.

1. Figure out what you need to do. We all start out coding this way; it is how we approach every other challenge in life. Go forth and write your code. Use git so you'll know exactly what you've edited. At this point, we haven't done anything to lose velocity.

2. Now that we’ve figured out how we need to modify the code, write tests for your changed code. Think through edge cases and different input values and write tests for those. At this point, we’ve not taken significantly longer because the alternative scenario where tests aren’t written hasn’t incorporated the external costs of not writing good tests (i.e., bugs and production issues).

3. Here’s the crucial step. Take out your changed code (not the tests) and add it back in chunks. The chunk size will require judgment: You need your code to be compilable so tests can run, etc. After each chunk, run your tests, some will fail and some won’t.

4. Once all your tests are passing, do you have code left over?

5. If yes, then the leftover code is either code you didn't need (thanks, tests!) or evidence that you need to write more tests. Only you can say which.

6. If no, then you have decent reason to believe your code is properly tested. (As with most things in life, this requires operating in good faith. The approach won't help if you just want to game the system to cross something off a list.)
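
Here is the sketch promised in step 2: edge-case tests for a freshly changed function, using pytest's parametrization. `parse_quantity` is a hypothetical stand-in for whatever you just finished editing in step 1:

```python
import pytest

# Hypothetical function just written in step 1 of the workflow.
def parse_quantity(raw: str) -> int:
    raw = raw.strip()
    if not raw.isdigit():
        raise ValueError(f"not a valid quantity: {raw!r}")
    return int(raw)

@pytest.mark.parametrize("raw, expected", [
    ("3", 3),      # the happy path
    ("0", 0),      # boundary value
    ("  7 ", 7),   # surrounding whitespace
])
def test_parse_quantity_accepts_valid_input(raw, expected):
    assert parse_quantity(raw) == expected

@pytest.mark.parametrize("raw", ["", "-1", "abc"])
def test_parse_quantity_rejects_invalid_input(raw):
    with pytest.raises(ValueError):
        parse_quantity(raw)
```

For steps 3 and 4, you would then pull out the body of `parse_quantity`, re-add it in compilable chunks, and run pytest after each chunk to watch the tests fail and then recover.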

What just happened? You met the MVP of TDD, you ensured you had failing tests for production code (that was part of the point of steps 3 and 4), and you learned patterns. The next time you write similar code, you'll know how to start with failing tests.

The beauty of this approach is that it meets people where they are, lets them get the benefits of writing tests before they can do TDD-by-the-book 100% of the time, and keeps stakeholders from worrying about lost velocity.