The TDD Zealot

January 12, 2011 · 7 comments

On December 22, 2010, Mark Seemann published a blog post, The TDD Apostate. Since then several friends and associates have sent me links to his post as if to say, “See, TDD is not all it is cracked up to be.” It is an excellent post, and I agree with much of what the author says, as well as with the comments by his readers, but there are a few points he makes that seem misleading and that perhaps underrate the true value of TDD for those who do not practice it.

One of the challenges in discussing TDD and design is that different people have different views of what TDD is. For the purposes of this post (and my feelings about TDD in general), when I refer to TDD I mean test-first development that strives for 100% code coverage but no more (for more on the misuse of TDD please see my other posts, Triangulation and That’s Not TDD, as well as my many posts on effective ways to use TDD).

I believe the author’s point in his post was that you can’t blindly follow TDD and necessarily end up with a good design. That is true because you can’t blindly follow any approach and expect to come up with a good design; if you could, then computers would write software and programmers would be out of a job.

However, with even a dim awareness of a few key things, I believe that test-first development can help us emerge good designs, and it often shows us good design alternatives if we pay attention.

The author used Bob Martin’s SOLID principles as his criteria for good design. Out of the 5 SOLID principles (Single Responsibility Principle, Open-Closed Principle, Liskov Substitution Principle, Interface Segregation Principle and Dependency Inversion Principle) he says TDD only helps with the Interface Segregation Principle and a little with the Open-Closed Principle so he scores TDD with 1.5 out of 5 for driving us towards the SOLID principles.

Here’s how I score it:

SRP: “God objects” are very hard to test because they have so many responsibilities. I believe TDD supports making classes and methods have a single responsibility so they are easier to test. For example, if I have a class with 6 responsibilities I’d need 64 tests to ensure those responsibilities don’t interact (the formula is 2^n). If I have 6 classes that each encapsulate one of those 6 responsibilities then I only need 6 tests (plus integration tests) to ensure they work. If I want to make it home for dinner instead of writing tests all night, I’ll follow TDD, as it drives me towards the Single Responsibility Principle and makes my classes and methods have only one responsibility so testing is easier. I count one point for TDD.
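A minimal sketch of this idea, with hypothetical class names of my own (not from the original posts): when each class owns exactly one responsibility, each needs only one focused test, rather than tests for every combination of tangled behaviors.

```python
class TaxCalculator:
    """One responsibility: compute the tax on an amount."""
    def tax(self, amount, rate=0.10):
        return round(amount * rate, 2)

class ReportFormatter:
    """One responsibility: format a labeled value for display."""
    def line(self, label, value):
        return f"{label}: {value:.2f}"

# One focused test per class, instead of 2^n tests covering every
# interaction of behaviors tangled inside a single "god" class.
assert TaxCalculator().tax(100.0) == 10.0
assert ReportFormatter().line("Total", 110.0) == "Total: 110.00"
```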

OCP: The author gives the Open-Closed Principle half a point; I give it a full point because one of the easiest ways to make an issue open for extension is to put it behind an abstraction, and we know this is good design (when it is not over-design). When we encapsulate issues behind abstractions we build better, more robust code. Furthermore, the OCP helps us identify responsibilities so we can more easily mock them out for testing, and it helps us push our business rules into factories instead of spreading them across our system. Again, TDD shows me that I must mock out everything not under the control of the test, which often leaves these issues open for extension later. That makes 2 points for TDD.
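As a hedged illustration (the policy names are hypothetical, not from the post): once the varying issue sits behind an abstraction, new behavior arrives as a new subclass, with no edits to existing, tested code.

```python
from abc import ABC, abstractmethod

class DiscountPolicy(ABC):
    """Abstraction: the point of variation is open for extension."""
    @abstractmethod
    def apply(self, price): ...

class NoDiscount(DiscountPolicy):
    def apply(self, price):
        return price

class HolidayDiscount(DiscountPolicy):
    """Added later purely by extension -- no existing code was modified."""
    def apply(self, price):
        return price * 0.9

def checkout(price, policy: DiscountPolicy):
    # The client is closed for modification: it only sees the abstraction.
    return policy.apply(price)

assert checkout(100.0, NoDiscount()) == 100.0
assert checkout(100.0, HolidayDiscount()) == 90.0
```

The same abstraction is what makes the collaborator easy to mock out in a test.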

LSP: I agree with the author; no point here, and I think this is an important distinction. TDD does not drive us to make subclasses substitutable for their superclasses. Of course, my car does not drive me to the store; I drive it. I drive TDD to use the Liskov Substitution Principle, but I do not think that blindly following the mechanics of TDD gets me to follow the Liskov Substitution Principle, so I agree with the author on this point. The score is still 2 points for TDD.
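The textbook rectangle/square illustration (mine, not from either post) shows why TDD alone won’t catch this: each class passes its own tests, yet substituting the subclass breaks a client’s expectations.

```python
class Rectangle:
    def __init__(self, w, h):
        self.w, self.h = w, h
    def set_width(self, w):
        self.w = w
    def area(self):
        return self.w * self.h

class Square(Rectangle):
    """Preserves the square invariant but breaks the superclass contract."""
    def set_width(self, w):
        self.w = self.h = w

def stretch(rect):
    # Client assumes set_width leaves the height alone.
    rect.set_width(5)
    return rect.area()

assert stretch(Rectangle(2, 3)) == 15  # width 5 * original height 3
assert stretch(Square(2, 2)) == 25     # same call, surprising result
```

Both assertions pass, which is exactly the problem: nothing in the test-first mechanics flags that Square is not substitutable for Rectangle.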

ISP: Disagree, for the same reason as the SRP. Having multiple, unrelated reasons to call the same API is a cohesion issue, and a fat interface is also harder to mock. Because of the possible interactions between multiple orthogonal callers, we would need more tests than if we segregated the interfaces. Following the Interface Segregation Principle makes code easier to test. My score so far is 3 points for TDD.
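A small sketch of the mocking argument, with hypothetical interface names: when the interface is segregated, a test double only has to fake the single method its client actually uses.

```python
from abc import ABC, abstractmethod

class Reader(ABC):
    @abstractmethod
    def read(self): ...

class Writer(ABC):
    @abstractmethod
    def write(self, data): ...

class FakeReader(Reader):
    """Only one method to fake, because the interface is segregated."""
    def read(self):
        return "stub data"

def display(source: Reader):
    # The client depends only on the narrow role it needs.
    return source.read().upper()

assert display(FakeReader()) == "STUB DATA"
```

Had Reader and Writer been one fat interface, every fake for the display test would also have to stub out write, and tests would multiply with the interactions.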

DIP: I agree with the author: TDD does drive us towards the Dependency Inversion Principle and helps us design interfaces based on the client’s needs. As my friend and colleague Scott Bain likes to say, “Think of the test as your first client.” When we write the test first we are focusing on building a stable signature for our methods up front, and this is good design.  The score is 4 points for TDD.
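One way the “test as first client” idea plays out (my hypothetical names, not Scott Bain’s example): writing the test first pushes the dependency out to the caller, so the class depends on an abstraction the test can supply.

```python
class Notifier:
    """Depends on a sender supplied by its caller, not a concrete gateway."""
    def __init__(self, sender):
        self.sender = sender
    def alert(self, msg):
        return self.sender.send(f"ALERT: {msg}")

class RecordingSender:
    """Test double standing in for a real email/SMS gateway."""
    def __init__(self):
        self.sent = []
    def send(self, msg):
        self.sent.append(msg)
        return True

# The test is the first client, and it dictated the injected dependency.
sender = RecordingSender()
assert Notifier(sender).alert("disk full") is True
assert sender.sent == ["ALERT: disk full"]
```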

Total: My score is 4 points out of 5 points for TDD. I think that is pretty good. I think TDD does help us build better code based on the SOLID principles but the SOLID principles are just one way to get better code and just because code follows the SOLID principles does not mean that it is well designed. We must also employ other principles, practices and techniques.

What TDD does not help us with is finding the right abstractions; that requires other techniques. I agree with the author on this point but we shouldn’t infer that TDD is worthless. We can’t build a house with only a hammer but that doesn’t mean hammers are useless when building a house.

I read an article a few years ago from one of the developers at Object Mentor who was showing how you can use TDD to emerge the design of a parser. I can’t locate the article now, but I recall it was about 50 pages long and it showed how, based on need, to emerge the design of a full parser just in time and without a lot of reworking. It was very interesting. Unfortunately, the design the author ended up with was a giant switch statement, but if this author knew patterns I’m sure he would have refactored to a Chain of Responsibility and ended up with a great design.
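For readers unfamiliar with the pattern, here is a minimal sketch (my own hypothetical token kinds, not the Object Mentor parser) of what a switch statement refactored into a Chain of Responsibility might look like:

```python
class Handler:
    """Each handler deals with one token kind or passes along the chain."""
    def __init__(self, successor=None):
        self.successor = successor
    def handle(self, token):
        if self.matches(token):
            return self.process(token)
        if self.successor:
            return self.successor.handle(token)
        raise ValueError(f"unparsed token: {token!r}")

class NumberHandler(Handler):
    def matches(self, token):
        return token.isdigit()
    def process(self, token):
        return ("number", int(token))

class WordHandler(Handler):
    def matches(self, token):
        return token.isalpha()
    def process(self, token):
        return ("word", token)

# New token kinds are added by inserting a handler, not by growing a switch.
chain = NumberHandler(WordHandler())
assert chain.handle("42") == ("number", 42)
assert chain.handle("foo") == ("word", "foo")
```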

TDD is only a tool, a vehicle, but we still must drive it with the intention of getting somewhere. The SOLID principles give us some guidance with design, but the problem with principles is that they tell us what to strive for, not how to get it. The SOLID principles point to some specific aspects of good design, but conforming to these principles alone does not guarantee a good design. If I had to limit my design guidance to the bare minimum, I would focus on code qualities rather than the SOLID principles. Fortunately, I can use both, and together with a good understanding of patterns and some basic practices, I believe TDD can help us emerge good designs as needed much of the time.

I have a list of 18 principles, practices, techniques and qualities that I teach in my Software Development Essentials classes. You cannot blindly follow them to get a good design any more than you can blindly wield carpentry tools and expect to build a beautiful home. However, with some intelligence and awareness you can use TDD and the other 17 things, along with about a dozen design patterns, and have a powerful set of tools to help with design in most situations.

What I believe the mechanism of TDD (test-first TDD) helps us with is sequence. A list of ingredients does not make a recipe; we also need the instructions to show how to combine them, and this is often what is missing when developers try to emerge designs. TDD can show us the “when” of emergent design based on needs and unfolding requirements, and the refactor step in TDD tells us when we should focus on design in the small, but we must use other techniques to show us the “how” of good design.

Building software is one of the most complex activities we can engage in. Good software design takes experience, more experience than sitting through a 3-day class. I’ve been at it for over a quarter of a century, so I’ve seen how various design approaches play out. I know the common mistakes that developers tend to make, and I carefully selected those 18 things to address them. At the core of everything are 6 code qualities that I believe all good principles and practices proceed from.

So what are these mysterious code qualities? Stick around. In the coming weeks, through a series of blog posts, I will introduce you to 6 code qualities that you can use to evaluate a design and drive towards better designs (with your eyes open, of course).

{ 4 comments }

Mark Seemann January 13, 2011 at 2:14 am

It may surprise you, but I agree with most of what you wrote; I certainly agree on the overall conclusion. TDD is a tool, but skill is required to properly wield it.

I also agree that TDD is only one of many tools that enable us to create great software. SOLID is another tool, but just one of them. I picked it because it’s relatively well-known, but I might as well have picked other ‘measures’ of code quality, such as cyclomatic complexity, cohesion, POLA, CQS, etc. The reason I chose SOLID is that it constitutes a cohesive set of principles, whereas had I chosen other principles, my selection would have looked (and perhaps been) more subjective. However, I believe the analysis and conclusion still stand.

The only place where I really disagree with you is on the scoring of the SOLID principles. Not so much on OCP, because it’s not that important whether we grant ½ or 1 point, but more when it comes to the SRP and ISP. In this particular case I think you are letting your own high skill level get in the way :)

Your argument that TDD drives SRP hinges on the assumption that a God Class would exhibit high cohesion. When that is the case, I certainly agree with you that TDD would be painful and thus drive us towards SRP.

However, God Classes certainly don’t have to be cohesive – after all, when we discuss God Classes, we’ve already entered anti-pattern territory. It’s easy to TDD new functionality for such a class: just write the tests that exercise the new method. If you need more state than the class already maintains, just add new properties as you go along – don’t worry about protecting invariants: as long as all tests are green, you’ve followed TDD to the letter ;)

The resulting class would be a horrible mess. Not only would it have low cohesion, but it would most likely also have tight temporal coupling. With low cohesion your 2^n formula does not apply.

The same argument applies to the ISP, which is why I gave them both zero points.

While I’m exaggerating a bit to make a point, I’ve certainly seen enough real-life production code created with the TDD ‘process’ to be confident that this is more than an academic exercise. There’s lots of bad code out there that was created with TDD, and often the developers don’t even realize it.

I’ll be posting more on this subject, but one of the reasons I wrote the post was actually in defense of Test-First. Too many times I’ve seen code go bad because of lack of skill, but if TDD was involved, developers or project managers blame TDD. They throw away their unit tests and carry on as they did before.


Jeff L. January 13, 2011 at 12:37 pm

Hi Toran–

Great article. Regarding the parser: I deemed the ugly switch statement sufficient at the time, and patterns would have been mildly helpful but overkill IMO. I’d probably go the next step were I to build it again. What can I say, that’s 11 years ago (and yes I did know about dem pattern t’ings even in those dark ages).

Re: TDD, let’s not forget one main reason to do TDD, and that’s because it enables us to refactor with high confidence.

I do think TDD scores high against SOLID, including the SRP, for many of the reasons you state. (I think people quickly find that it’s far easier to test-drive cohesive and decoupled classes…)

But yeah, I’m getting the advantage of hindsight every time I code a bit, and I use Beck’s four simple design rules to *really* make sure I’ve produced a good design. TDD is really only the first of these four rules. I’ve done exploration in the past on the other three rules–no duplication, high expressiveness, minimal # of classes and methods–and how they relate to SOLID. Test-after just doesn’t give you as much ability to shape. Sure, you can do it (I did it for over 15 years prior to learning TDD), but it’s riskier and thus slower, and therefore not nearly as much gets done to “fix” the design as needs to be done.

Mark’s point about there being plenty of bad code out there produced with TDD is dead on–as is one of the uglier results (i.e. people throwing away the tests). This is what Tim Ottinger & I refer to as the “bad test death spiral”–see http://agileinaflash.blogspot.com/2009/09/stopping-bad-test-death-spiral.html.

When you do TDD, you *must* take advantage of the opportunities to keep the design clean via refactoring, and when you do so, you must know what good design is, whether you use SOLID, simple design, or other heuristics to get there. TDD in the absence of good design sense and a will to ensure it ends up in the code will generate crap.

Jeff


Nat January 13, 2011 at 4:02 pm

I think that if TDD guides you towards the Dependency Inversion Principle, then it must also guide you to the Liskov Substitution Principle. If objects depend on abstract interfaces, then the implementations of those interfaces must be “plug compatible”, or the objects in the system would not be able to collaborate correctly, and you’d have failing tests.


davidbernstein January 18, 2011 at 4:04 pm

I am not so sure about this, but remember my post is in the context of Mark Seemann’s excellent post referenced above. Here’s how I see it: the Liskov Substitution Principle is about using inheritance to classify rather than to specialize, and it underscores the need for abstract base classes. The Dependency Inversion Principle helps us write APIs that are useful to our callers by taking the caller’s perspective, but it does not necessarily tell us to put the interface behind an abstraction, so I don’t think they are dealing with the same thing.



