Quick Testing Tips: One Class, One Test?

Posted by Matthias Noback

I've mentioned this several times without explaining it: the rule that every class should have a test, or that every class method should have a test, does not make sense at all. Still, it's a rule that many teams follow. Why? Maybe they used to have a #NoTest culture and they never want to go back to it, so they establish a rule that is easy to enforce. When reviewing, you only have to check: does the class have a test? Okay, great. Is it a bad test? No problem, at least it is a test. I already explained why I think you need to make an effort to not just write any test, but to write good tests (see also: Testing Anything; Better Than Testing Nothing?). In this article I'd like to take a closer look at the numbers: one class, one test.

A Platonic concept of object-oriented programming

Some say there are, in essence, two types of thinking: one based on Plato, one based on Aristotle. This may not really be the case, but it's an attractive thought. I find that in testing, at least, there is a similar distinction. The rule that every class should have a corresponding test class is a Platonic rule. It is similar to Plato's ideal world, which is (roughly speaking) a universe of ideas on which our concrete reality is based. Given that a programming problem is also an idea, and that classes are ideas as well, the problems and their class-based solutions all live in that ideal world. The programmer's job is to extract those classes from the world of ideas and turn them into code that we can run on our computers. All the classes we'll ever need already exist; we only have to discover them. Once we have found such a class (eureka!) we know, because of our team rule, that we now also have to create a test class for it. Why? Because we are mere mortals, and in the process of bringing that perfect class from the immaterial realm into our computer, we make mistakes. The real class is less perfect than the ideal class.

Classes are arbitrary things

Why doesn't all of this make sense? Because we don't discover classes as they "really are". We arbitrarily decide on a set of properties and methods (data and behavior) that we want to keep together. The size of a class changes over time: we extract a new class, or we extract a method, and we often do the opposite too: we let the data and behavior of one class be absorbed into another one, and so on. Why? Because we want to fix some code smells, or because we made a design mistake, want to try a different design pattern, or want to revert an earlier decision.

If the size and shape of a class is arbitrary, how can we link the number of required tests to the number of classes?

This is a known problem for the testing school that follows the one class, one test rule. They have to decide: when I extract class B from class A, do I write new tests for class B? Maybe not, because B's behavior is indirectly covered by the test we already have for class A. Maybe yes, because I can mock B and test A and B separately. The same reasoning goes for methods: do you test each method separately? What happens when you extract a new method (maybe in another class, maybe in the same class)? Do you write new tests because you have to follow the rules?
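To make this concrete, here is a minimal sketch in PHPUnit. All the names are made up for illustration: DiscountCalculator plays the role of class A, DiscountPolicy is the class B we extracted from it, and the discount rule itself is hypothetical. The existing test for A keeps covering B indirectly, so the extraction didn't force us to write a DiscountPolicyTest (this sketch assumes PHP 8.1 or later for the "new in initializers" default):

```php
<?php

use PHPUnit\Framework\TestCase;

// Class B, extracted from class A. Deliberately not final, so the
// mock-based variant shown further down remains possible.
class DiscountPolicy
{
    public function percentageFor(int $orderTotalInCents): int
    {
        // Hypothetical rule: orders of 100.00 or more get a 10% discount.
        return $orderTotalInCents >= 10000 ? 10 : 0;
    }
}

// Class A, which now delegates part of its work to the extracted class B.
class DiscountCalculator
{
    public function __construct(
        private DiscountPolicy $policy = new DiscountPolicy(),
    ) {
    }

    public function discountedTotal(int $orderTotalInCents): int
    {
        $percentage = $this->policy->percentageFor($orderTotalInCents);

        return $orderTotalInCents - intdiv($orderTotalInCents * $percentage, 100);
    }
}

final class DiscountCalculatorTest extends TestCase
{
    public function testLargeOrdersGetADiscount(): void
    {
        // DiscountPolicy is exercised indirectly; extracting it did not
        // require a separate test class.
        self::assertSame(9000, (new DiscountCalculator())->discountedTotal(10000));
    }

    public function testSmallOrdersGetNoDiscount(): void
    {
        self::assertSame(9999, (new DiscountCalculator())->discountedTotal(9999));
    }
}
```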

It's clear that in practice this one class, one test rule needs a lot of re-evaluation, leading to lots of discussion in the team. Furthermore, it leads to demotivating testing practices, where you spend a lot of time changing tests, or writing tests that are mock-heavy or otherwise too close to the implementation of the subject under test.
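For contrast, this is the kind of mock-heavy test the previous paragraph warns about, reusing the hypothetical DiscountCalculator and DiscountPolicy from the sketch above. It asserts how the collaborators talk to each other rather than what the outcome is, so it breaks as soon as you inline DiscountPolicy again or change the conversation between the two classes:

```php
final class MockHeavyDiscountCalculatorTest extends TestCase
{
    public function testDelegatesToThePolicy(): void
    {
        // Pinning down the exact method call couples this test to the
        // current shape of the collaboration, not to observable behavior.
        $policy = $this->createMock(DiscountPolicy::class);
        $policy->expects(self::once())
            ->method('percentageFor')
            ->with(10000)
            ->willReturn(10);

        self::assertSame(
            9000,
            (new DiscountCalculator($policy))->discountedTotal(10000)
        );
    }
}
```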

An Aristotelian alternative

What is the alternative? In my experience, it makes a lot more sense to follow an Aristotelian approach. Down-to-earth: what do we have in front of us? What are we working on? What kind of test does it need? Can we test this at a somewhat higher level of abstraction than that of this class, which we only happen to use? We shouldn't be focused on classes anyway, since they are just the way we write code as object-oriented programmers: classes are an implementation detail. What matters is the behavior of the application as a whole. What value does it provide to the user?

When we focus on the bigger picture, we can separate the essential from the accidental. If our test covers only the essential parts, we can leave all the accidental parts inside the black box. We can then freely change those parts and still have a passing test, because the test doesn't focus on the details. I find this a very rewarding approach to testing. It's not as demotivating, because you don't spend a lot of time rewriting tests. As a bonus, these tests tell the bigger story, and help the reader understand what's going on in the code, and for what reason. So they serve as documentation too; future programmers won't have to reinvent or reverse engineer all the business rules again.
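Here's a sketch of what such a behavior-focused test could look like. All names (RegisterUserHandler, InMemoryUserRepository, the lowercasing rule) are hypothetical; the point is that the test states a business-level fact and stays silent about how many classes collaborate behind the scenes:

```php
<?php

use PHPUnit\Framework\TestCase;

final class InMemoryUserRepository
{
    /** @var list<string> */
    private array $emails = [];

    public function add(string $email): void
    {
        $this->emails[] = $email;
    }

    public function hasUserWithEmail(string $email): bool
    {
        return in_array($email, $this->emails, true);
    }
}

final class RegisterUserHandler
{
    public function __construct(private InMemoryUserRepository $repository)
    {
    }

    public function handle(string $email): void
    {
        // Normalization is an accidental detail; it could later move to a
        // dedicated EmailAddress value object without breaking the test.
        $this->repository->add(strtolower($email));
    }
}

final class RegistrationTest extends TestCase
{
    public function testARegisteredUserCanBeFoundByTheirEmailAddress(): void
    {
        $repository = new InMemoryUserRepository();

        (new RegisterUserHandler($repository))->handle('Matthias@example.com');

        self::assertTrue($repository->hasUserWithEmail('matthias@example.com'));
    }
}
```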

PHP testing
Comments
Tuomas Valtonen

I feel that you need both approaches.

The higher level you go, the more errors you will catch. What I mean by this is that your test breaks for almost anything that breaks the feature, but you will not know what is broken; you just know that something is broken. If you have those lower-level tests in place, some of them will fail as well, and they will communicate to you in much greater detail what is broken.

With only high-level tests, it can get very tricky to test some edge cases. For example, when you have two different behaviors causing the same result, it is very difficult to prove which one caused it and that the other didn't.

On the other hand, having only those lower-level tests will not guarantee that the user can actually use the feature; they will only tell you that all the pieces work if you connect them properly. So you need that higher-level test there as well.

Matthias Noback

That's a good point, thanks for writing it down here. I think, in general, projects tend to have either too many high-level or too many low-level tests, and I'm personally aiming to get more tests somewhere in the middle. I don't advocate testing all the possible edge cases through high-level tests, and I agree that you need to zoom in closer to the source (the class) in that case.

At the same time, there are a number of things you can do to let the code produce better errors at all levels, so that it doesn't matter that much at which level they surface. As an example, I saw some high-level tests that hide all problems behind a 500 error page; of course, that isn't helpful. But if you can skip the HTTP layer in your tests, you'll have the exception in plain sight.
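A small sketch of that idea, with hypothetical names throughout: the test calls the application service directly instead of going through an HTTP kernel, so a failure reaches PHPUnit as a plain exception with its message and stack trace, instead of being rendered into an opaque 500 response:

```php
<?php

use PHPUnit\Framework\TestCase;

final class PaymentFailed extends \RuntimeException
{
}

final class PayInvoiceService
{
    public function pay(string $invoiceId, int $amountInCents): void
    {
        if ($amountInCents <= 0) {
            // Behind an HTTP layer this exception would be swallowed and
            // rendered as "500 Internal Server Error".
            throw new PaymentFailed('Amount must be positive');
        }

        // ... record the payment ...
    }
}

final class PayInvoiceTest extends TestCase
{
    public function testPayingANonPositiveAmountFailsWithAClearError(): void
    {
        $this->expectException(PaymentFailed::class);
        $this->expectExceptionMessage('Amount must be positive');

        (new PayInvoiceService())->pay('invoice-1', -100);
    }
}
```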