Paradigm: a nice word that means "a theory or a group of ideas about how something should be done, made, or thought about" (Merriam-Webster). In software development we have them too. From the philosophy and history of science courses I've taken, I remember that scientists working within different paradigms have great difficulty understanding each other and appreciating each other's work.
It's an extremely common problem in legacy code bases: a new way of doing things was introduced before the team decided how to get the old way out.
I've mentioned this several times without explaining it: the rule that every class should have a test, or that every class method should have a test, makes no sense at all. Still, it's a rule that many teams follow. Why? Maybe they used to have a #NoTest culture that they never want to go back to, so they establish a rule that is easy to enforce. When reviewing, you only have to check: does the class have a test? Okay, great. Is it a bad test? No problem, at least it's a test. I already explained why I think you need to make an effort not to write just any test, but to write good tests (see also: Testing Anything; Better Than Testing Nothing?). In this article I'd like to take a closer look at the numbers: one class - one test.
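To make the problem concrete, here is a minimal sketch (in Python, with hypothetical class names) of the kind of test that satisfies the "every class has a test" rule while verifying almost nothing:

```python
import unittest


class DiscountCalculator:
    """A hypothetical production class."""

    def calculate(self, price: float, percentage: float) -> float:
        return price - price * percentage / 100


class DiscountCalculatorTest(unittest.TestCase):
    """Satisfies the 'one class, one test' rule, yet proves almost nothing."""

    def test_it_can_be_instantiated(self) -> None:
        # The class now officially "has a test", but no behavior is verified:
        # calculate() could return anything and this test would still pass.
        self.assertIsInstance(DiscountCalculator(), DiscountCalculator)


if __name__ == "__main__":
    unittest.main()
```

A reviewer checking only for the existence of a test file would wave this through, which is exactly why the rule is so easy to enforce and so weak as a quality measure.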