"Yes, I know. Our tests aren't perfect, but it's better to test anything than to test nothing at all, right?"
Let's look into that for a bit. We'll try the "Fowler Heuristic" first:
One of my favourite things I learned from consulting with Martin Fowler (and there were many) is that he would often ask: "Compared to what?"
- Agile helps you ship faster!
- Compared to what?
Often there is no baseline.
The baseline here is: no tests. The statement is that having some kind of test is better than having no tests. Without any context, this is evidently true. Since we know that tests are good, and we want tests for our code, but we have no tests yet, adding some kind of test gets us at least one step closer to the end goal.
However, without context, most statements can't be judged for their value. So let's add some context. My guess is that most development teams want tests for their code for two reasons:
1. They want the tests to form a safety net for when they're making changes to that horrible legacy code.
2. They want to understand why something was implemented the way it is by reading the tests.
To judge the value of any test over no test we should find out if those few tests that developers write in a #NoTest code base are actually helpful to achieve 1 and 2.
One Class, One Test
In my experience, developers have much, much less experience writing tests than writing production code. And the little experience they have is usually in writing unit tests (maybe because most "TDD" tutorials focus on unit tests). A beginning unit tester writes tests that stay very close to the implementation itself. The unit test is usually written after the production code, which leads the developer to simply repeat the entire implementation logic in the test. This often results in about three times as much test code as production code. In the end this practice leads neither to a safety net nor to an understanding of why something is implemented the way it is. It just leads to a lot of unmaintainable test code that breaks all the time, for all kinds of irrelevant reasons.
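To make the anti-pattern concrete, here is a minimal sketch. It uses plain `assert()` calls instead of a real test framework like PHPUnit, and a hypothetical `PriceCalculator` class; the point is that a test which repeats the implementation's formula can never fail when the formula itself is wrong, while a test that states an independently verified expected value actually documents the behaviour.

```php
<?php

// Hypothetical production code: a small price calculator.
class PriceCalculator
{
    public function totalWithVat(float $net, float $vatRate): float
    {
        return round($net * (1 + $vatRate), 2);
    }
}

$calculator = new PriceCalculator();

// Anti-pattern: the test repeats the implementation's formula.
// If the formula is wrong, the test is wrong in exactly the same way,
// so it will always pass. It also says nothing about the "why".
assert($calculator->totalWithVat(100.0, 0.21) === round(100.0 * (1 + 0.21), 2));

// Better: state the expected outcome as a concrete, independently
// verified value. This documents the behaviour: 21% VAT on a net
// price of 100.00 should yield a total of 121.00.
assert($calculator->totalWithVat(100.0, 0.21) === 121.00);

echo "OK\n";
```

The second assertion is the one that can actually catch a bug: if someone changes the formula by accident, the test still insists the answer is 121.00.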
Another common minimal approach to testing is to go The Framework Way. Whatever the framework describes in its "Testing" chapter, the team will do. Mostly this results in tests that revolve around performing web UI interactions and checking what ends up in the database afterwards. These tests also break for all kinds of unrelated reasons, which makes them annoying to maintain, and they are usually quite slow. This reduces the value of the safety net they provide. Furthermore, in most cases they still don't document the "why". They show a number of steps and what's supposed to happen, but they don't explain why that happens in this case. There is often no clear connection between the start and the end of the test.
Based on my experience with different teams and different projects, this leads me to think that it's definitely not better to write just any test than no test at all. If you don't know what types of tests you should write for each area of your application, you'll end up with an unmaintainable test suite and demotivating team standards like "every class should have a test". If anything, you'll get people to dislike writing tests. At that point, the principle that "any test is better than no test" has achieved the opposite of its intended effect.
Instead of writing just any test, focus on writing good tests. Work on tests together, treat them as specifications (which makes it easier to include the "why", something we programmers often forget). While doing so, make sure that writing tests is Fun, Easy, and Effective. The FEE for this is that you have to invest in:
- Test tools that help you run specific tests really quickly (Can you right-click a specific test method and run it in PhpStorm? You should!)
- Tests that give quick results (How long does it take to run the relevant tests before committing? If it's more than a few seconds, fix it!)
- Test cases that are cheap to add, so you can easily show how the code behaves when different data is provided (How much do you have to copy/paste to create a new test class or test method? Make sure it's just a few lines!)
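As a sketch of that last point, here is a table-driven test in the spirit of a PHPUnit data provider, reduced to plain PHP with `assert()` so it stays self-contained (`isLeapYear` is a hypothetical example function). Adding another behaviour example costs exactly one line, no copy/paste:

```php
<?php

// Hypothetical production code under test.
function isLeapYear(int $year): bool
{
    return $year % 4 === 0 && ($year % 100 !== 0 || $year % 400 === 0);
}

// Each row is one test case: [input, expected outcome, reason].
// The "reason" column is where the "why" of the specification lives.
$cases = [
    [2024, true,  'divisible by 4'],
    [1900, false, 'century years are not leap years'],
    [2000, true,  'unless they are divisible by 400'],
    [2023, false, 'ordinary year'],
];

foreach ($cases as [$year, $expected, $reason]) {
    assert(isLeapYear($year) === $expected, "Year $year: $reason");
}

echo "All cases passed\n";
```

In a real project a PHPUnit `@dataProvider` (or a data provider attribute) plays the same role: the test method is written once, and every new row is a new documented example of the behaviour.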
This approach, I claim, is Effective: when the test suite no longer works against you, it becomes a trusted safety net instead of an annoying maintenance burden. More tests will be created, and they will be of better quality than one-class-one-test or make-request-then-look-in-the-database tests.
I'm sure this takes a lot of effort. But just as everybody understands Technical Debt in production code and invests in improving its design, you know you have Test Debt, both in your project and in your own experience as a developer. So, go, go, go!