Triggered by Marco Pivetta, who apparently said during his talk at Symfony Live Berlin, "If you still don't have tests, this is unprofessional", I thought I'd tweet about it too: "It's good for someone to point this out from time to time".
I don't like it when a blog post is about tweets, just as I don't like it when a news organization quotes tweets, as if they were some important source of wisdom. But since this kind of tweet tends to provoke many reactions (often emotionally charged ones), I thought it would be smart to take the discussion off Twitter and write something more nuanced instead.
First of all, many developers experience testing as slowing them down. They see the solution, so they can just write the code and be done with it. Just like any other case of technical debt, failing to write tests means you're cutting corners; it will feel like you're going fast, but you'll certainly go slower over time. In the end, untested code becomes very hard to deal with. If you or someone else looks at it a year later, it will be hard to understand what its purpose was, and it will be difficult to change anything about it without breaking it. On top of that, having no tests prevents the original developer's design from evolving with the new discoveries and insights of later developers.
But to an "inexperienced" developer, technical debt doesn't look like technical debt. It just looks like "getting things done". Yes, you've reached your short-term goal faster (although I actually doubt that), but at the same time you forgot about the long-term goals. The software has to support development for longer than just this week, month, or even year. A lot of the code that I see as a developer and consultant is older than a year, usually much older. And it always has the same problem: so much technical debt has been accumulated that the team isn't able to perform anymore. They are being slowed down by the project itself. Management doesn't trust the IT department (anymore) because they fail to deliver.
This becomes a matter of acceptance: you can never fully fix this kind of situation; you have to live with it instead. However, there are things you can do to regain some of that speed, and it's my job to point them out. One of them, always, is testing. The problem is: if you don't know how to test, it's going to be difficult. You'll need discipline, and we all know that under pressure or in stressful conditions, discipline won't survive. You want to do something, and maybe you have seen a glimpse of what testing can bring, but with a hard deadline in front of you, or a disappointing sprint behind you, you'll forget about it and do what you did before: just write the code. And then you're back to cutting corners; you're introducing technical debt again.
Two things should be clear by now:

- Code without tests cannot survive for years, and
- You won't write tests if you feel like you don't have to.
Let's say your project is supposed to live for longer than, say, a year. Or your project already lives longer than a year. In both cases, if you don't write tests for code you add or modify, you're actively limiting its chance of survival. This is a direct problem for the company you're working for. It means the application at the core of its business is in danger, maybe not today, but certainly within a few years. If your company creates software for other companies instead, it's just as "unprofessional" to ship that code without tests, knowing that by doing so you don't contribute to a long and prosperous future. Your customers will be building their businesses on top of something with an expiration date.
(This is becoming a true blog post now: direct, critical, even harsh maybe. But it's good to say these things out loud every now and then.)
Given human nature, and our inclination not to write tests, I wish we were all taught to only write programs that:
- Have tests, and therefore
- Can be tested, and ideally,
- For which we start writing tests before even considering an implementation
Unfortunately, programming education rarely focuses on testing as a part of the job that can't be skipped, neglected, or ignored. Therefore I envy Kent Beck's daughter:
"I taught Bethany, my oldest daughter, TDD as her first programming style when she was about age 12. She thinks you can't type in code unless there is a broken test. The rest of us have to muddle through reminding ourselves to write the tests."
— Kent Beck, "Dealing with Failure", from "Test-Driven Development: By Example", Addison-Wesley Professional (2002)
If you start learning how to test code as soon as you start learning how to write code, you won't feel slowed down by it so much. But for most programmers, myself included, it's the other way around: we only start acquiring testing skills after years and years of writing code without tests. So yes, we'll feel like "babies", taking baby steps, when we used to go so fast.
And this is why you should never postpone the day you start testing your software. Start now. Start small if you have to, but do something. Do something every day, and talk to your co-workers about testing. When you start working on your next task, ask yourself: how would I test this? How do I know that I haven't made a mistake (without opening up a browser, that is)? How do I know that I'll be finished (in other words: what are the acceptance criteria)?
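Those questions can often be answered in code before any implementation exists. Here's a minimal sketch in Python (the discount rule, the function name, and the numbers are all hypothetical, purely for illustration):

```python
# Hypothetical task: "orders of 100.00 or more get a 10% discount".
# Test-first: the test functions below state the acceptance criteria;
# they would be written (and fail) before the implementation exists.

def calculate_discount(order_total: float) -> float:
    """Return the discount for an order: 10% from 100.00 upwards, else nothing."""
    if order_total >= 100.0:
        return round(order_total * 0.10, 2)
    return 0.0

def test_no_discount_below_threshold():
    assert calculate_discount(99.99) == 0.0

def test_ten_percent_discount_from_threshold():
    assert calculate_discount(100.0) == 10.0
    assert calculate_discount(250.0) == 25.0

if __name__ == "__main__":
    test_no_discount_until_threshold = test_no_discount_below_threshold
    test_no_discount_until_threshold()
    test_ten_percent_discount_from_threshold()
    print("all tests pass")
```

The tests encode the acceptance criteria up front, and "I'll be finished" simply means "these tests pass"; with a test runner such as pytest you wouldn't even need the `__main__` block.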
The good thing is, once you start getting the hang of it, you'll notice that you become calmer, feeling more secure about your work as a programmer:
"When we test first, we reduce the stress, which makes us more likely to test. There are lots of other elements feeding into stress, however, so the tests must live in other virtuous cycles or they will be abandoned when stress increases enough. But the immediate payoff for testing—a design and scope control tool—suggests that we will be able to start doing it, and keep doing it even under moderate stress."
— Kent Beck, "Test-Driven Development Patterns", from "Test-Driven Development: By Example", Addison-Wesley Professional (2002)
And in my experience, this is likely to reduce the stress that made you "skip the tests" in the first place. Plus, of course, you're contributing to the future that your project deserves. So you won't just personally benefit from testing and becoming better at it, but you'll also be a better professional, since you make a valuable contribution to the project and the company you're working for.
There's another interesting moral aspect to being a professional here: should you work this hard for the project to survive? What if nobody cares? What if your software supports immoral practices? What if your company has no moral vision, and just wants to make a lot of money? What if somebody prevents you from doing the right thing? What if your efforts are nullified? I think these are very interesting questions, but I'd like to save them for another post.
Enough polemics for now. Instead, I'd like to reply to some questions.
Dalibor Karlović mentioned on Twitter:
[Testing] can be more expensive if:
- you don't know how to actually test so you're inefficient
- you're testing the wrong things / in a wrong way
- team does not change during the entirety of the project's lifecycle
- project is small, well known scope and domain, has short known TTL
I think I've addressed the first two issues by now: you should never postpone the day you start testing, or indeed you will be inefficient, test things in the wrong way, and so on. Of course, learning this takes time and, like all learning, it will be difficult. And of course you'll make mistakes, just as with regular coding. So in my opinion it's perfectly fine to accept the additional cost.
If your team stays the same for the duration of the entire project, I agree that testing may not deliver all of its expected value in the area of documentation and specification ("just ask the developer who worked on that"). However, this objection doesn't take the future of the project into consideration, and of course the future can't be predicted (including who will be on the team, and how long the project will live).
The last point is relevant, though. If you are absolutely certain that the project is short-lived, you may not want to invest in a test suite. Another reason not to write tests could be that the project simply doesn't produce much value, that nobody really depends on it, that the customer is willing to pay much more later on for fixing bugs or adding new features, or that the customer is ready to accept that the project is delivered as-is and won't be maintained. This sounds a little cynical, but these are possibilities you have to consider. Still, I do want to point out that very few projects will actually meet these criteria for not having to write tests.
I think Dalibor actually agrees on this, because he continues:
"In these cases, I'd argue you'll see little value from tests.
The key point being, as a business, you either can't (fixed team? yeah right) or don't want to be (making throw-away projects, badly) in this position. If your throw-away project grows into more, you'll wish you'd spent money on tests when you had the chance. Now you're probably already missing key people (team turnover), forgot/lost key insider knowledge and are stuck with maintaining a project built like a throw-away. Not fun."
I agree, as a development team/company you never want to be in this position.
Mathias Verraes said:
Not disagreeing, but for the sake of exploring this line of thought: why stop there? Eg "If you don't use a strongly typed pure FP language, it's unprofessional"
I found this question hard to answer, because it's a clever diversion. I think what's implied is that what we consider essential actually evolves. Empirically speaking, I think this is true: the evolution seems to go from coding-without-tests to coding-with-tests. Once you've reached that "level", you don't want to go back, so you'll say: tests are essential. It seems to me that functional programming could be considered the next step in the evolution of programming. Once you know about it, you'll find object-oriented programming an outdated and dangerous practice. There is more truth in that than I'd like to admit for now. But I think we should again divert the discussion from evolution to ethics: what is the morally right thing to do?
Then you can say: whatever the current way of programming is, if you know the morally right thing to do, you should do it. So, if you're an object-oriented programmer you know that you should be testing your code. Then do it. If you're a functional programmer there is the same question: are you doing the right thing? Once it becomes apparent that it's not right to write functional code without XXX, then make sure to do XXX.
Given all that we know about quality in software and efficient development processes, I think it's safe to say that testing software is the right thing to do. Many of us don't do it (including myself, when I'm back from my holiday and have forgotten why it's important), and that's what we need to work on.
Well said. I work in a company that has very old legacy code, say 20 years. The code is not well structured: no separation of concerns, spaghetti code. Overall, we spend more time fixing bugs than writing code. How do I test code like this? Because to me it looks nearly impossible.