System and integration tests need database fixtures. These fixtures should be representative and diverse enough to "fake" normal usage of the application, so that the tests using them will catch any issues that might occur once you deploy the application to the production environment. There are many different options for dealing with fixtures; let's explore some of them.
Generate fixtures the natural way
The first option, which I assume not many people choose, is to start up the application at the beginning of a test, then navigate to specific pages, submit forms, click buttons, etc., until finally the database has been populated with the right data. At that point, the application is in a useful state and you can continue with the act/when and assert/then phases. (See the recent article "Pickled State" by Robert Martin on the topic of tests as specifications of a finite state machine.)
Populating the database like this isn't really the same as loading database fixtures, but these activities can have the same end result. The difference is that the natural way of getting data into the database (using the user interface of the application) leads to top-quality data:
- You don't need to violate the application's natural boundaries by talking directly to the database. You approach the system as a black box, and don't need to leverage your knowledge of its internals to get data into the database.
- You don't have to maintain these fixtures separately from the application. They will be recreated every time you run the tests.
- This means that these "fixtures" never become outdated, incomplete, invalid, inconsistent, etc. They are always correct, since they use the application's natural entry points for entering the data in the first place.
However, as you know, the really big disadvantage is that running those tests will become very slow. Creating an account, logging in, activating some settings, filling in some more forms, etc., every time before you can verify anything: that's going to take a lot of time. So honestly, though it would be great, this is not a realistic scenario in most cases. Instead, you might consider something else:
Generate once, reload for every test case
Instead of navigating the application and populating the database one form at a time, for every test case again, you could do it once, and store some kind of snapshot of the resulting data set. Then for the next test case you could quickly load that snapshot and continue with your test work from there.
This approach has all the advantages of the first option, but it will make your test suite run a lot faster. The risk is that the resulting set of fixtures may not be diverse enough to test all the branches of the code that needs to be tested.
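As a minimal sketch of this idea (the `TableSnapshot` helper and its method names are made up for illustration; in practice you might shell out to a tool like mysqldump instead of copying rows through PDO):

```php
// Hypothetical helper for "generate once, reload for every test case".
final class TableSnapshot
{
    /** @var array<string, array<int, array<string, mixed>>> */
    private array $rows = [];

    // Run once, after the application itself has populated the database.
    public function capture(PDO $pdo, array $tables): void
    {
        foreach ($tables as $table) {
            $this->rows[$table] = $pdo
                ->query('SELECT * FROM ' . $table)
                ->fetchAll(PDO::FETCH_ASSOC);
        }
    }

    // Run before every test case: much faster than navigating the
    // application's user interface again.
    public function restore(PDO $pdo): void
    {
        foreach ($this->rows as $table => $rows) {
            $pdo->exec('DELETE FROM ' . $table);

            foreach ($rows as $row) {
                $columns = implode(', ', array_keys($row));
                $placeholders = implode(', ', array_fill(0, count($row), '?'));
                $pdo
                    ->prepare("INSERT INTO $table ($columns) VALUES ($placeholders)")
                    ->execute(array_values($row));
            }
        }
    }
}
```

The capture step is as slow as the "natural" option, but it runs only once per test suite instead of once per test case.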
With both of these options, you may also end up with a chicken/egg problem. You may need some data to be in the database first, to make it even possible to navigate to the first page where you could start building up the fixtures. Often this problem itself may provide useful feedback about the design of your application:
- Possibly, you have data in the database that shouldn't be there anyway (e.g. a country codes table that might as well have been a text file, or a list of class constants).
- Possibly, the data can only end up in the database by manual intervention; something a developer or administrator gets asked to do every now and then. In that case, you could consider implementing a "black box alternative" for it (e.g. a page where you can accomplish the same thing, but with a proper form or button).
If these are not problems you can easily fix, you may consider using several options combined: first, load in some "bootstrap" data with custom SQL queries (see below), then navigate your way across the application to bring it in the right state.
But, there are other options, like:
Insert custom data into the database
If you don't want to or can't naturally build up your fixtures (e.g. because there is no straightforward way to get it right), you can in fact do several alternative things:
- Use a fixture tool that lets you use actually instantiated entities as a source for fixtures, or
- Manually write `INSERT` queries (possibly with the same net result).
Option 1 has proven useful if you use your database as some anonymous storage thing that's used somewhere behind a repository. If you work with an ORM, that is probably the case. Option 2 is the right choice if your database is this holy thing in the centre of your system, and:
- The data in this database is often inconsistent or incomplete, and/or
- Other applications are also reading from or writing to this database.
Manually writing fixtures in that case allows you to also write "corrupt" fixtures on purpose and verify that your application code is able to deal with that.
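As a sketch of what that could look like (the `loadFixtures` helper, the `purchase_orders` schema, and the values are all hypothetical), the handwritten `INSERT` statements are executed directly through the database connection, and one of them is deliberately inconsistent:

```php
// Hypothetical helper: execute handwritten INSERT statements through PDO.
// $pdo is assumed to connect to the same kind of database you run in
// production (see the P.S. at the end about not swapping it for SQLite).
function loadFixtures(PDO $pdo, array $statements): void
{
    foreach ($statements as $sql) {
        $pdo->exec($sql);
    }
}

// Fixtures for one specific scenario, written by hand. The second row is
// "corrupt" on purpose (no supplier), to verify that the application code
// can deal with it.
$fixtures = [
    "INSERT INTO purchase_orders (id, supplier_id) VALUES (1, 42)",
    "INSERT INTO purchase_orders (id, supplier_id) VALUES (2, NULL)",
];
```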
There's still one problematic issue, an issue which all of the above solutions have in common: shared data between all or many test cases. One radical approach that in my experience works really well, is to:
Insert custom data for each test case
What happens when you load a lot of data into the database (no matter how you do that), and run all the tests from this starting point?
- All the tests start relying on some data that was not consciously put there.
- You can't easily find out which data a test relies on, so you can't easily replace or modify the fixtures, without breaking existing tests.
- Even adding new data might have a negative influence on the results of other tests.
In my experience, even the tiniest bit of "inheritance" you use in the process of loading fixtures will always come back to bite you. Just like when you use class inheritance, when you use fixture inheritance you may find certain things impossible to achieve. That's why, when it comes to fixtures, you should apply something like the "prefer composition over inheritance" rule. But I often take this one step further: no composition, no inheritance (and no fear of duplication): just set up the fixtures specifically for one test (class or suite, possibly even method or scenario).
This has several advantages:
- The fixture data is unique for the test, so you can be very specific, tailoring the fixtures to your needs.
- You can even document why each part of the data set is there.
- The set of fixture data is small, leading to fast load times.
- You can safely modify fixtures, even remove them, without worrying about some remote test breaking.
There is one disadvantage I can think of: it takes more work to prepare those fixtures. However, the time spent writing fixtures is easily won back by the sheer joy and ease of maintaining them. In fact, I find that "fixture maintenance" is hardly a thing.
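To make this concrete, here's a minimal sketch (the `invoices` schema and all the names are made up for illustration): the test inserts exactly the rows it needs, and each row documents why it's there.

```php
// Fixtures set up by and for a single test only (hypothetical schema).
function insertInvoiceFixtures(PDO $pdo): void
{
    // Overdue and unpaid: the one row the report should contain.
    $pdo->exec("INSERT INTO invoices (id, due_date, paid) VALUES (1, '2018-01-01', 0)");

    // Paid: documents that paid invoices are excluded from the report.
    $pdo->exec("INSERT INTO invoices (id, due_date, paid) VALUES (2, '2018-01-01', 1)");
}

// The (simplified) query under test.
function overdueInvoiceIds(PDO $pdo, string $today): array
{
    $statement = $pdo->prepare(
        'SELECT id FROM invoices WHERE paid = 0 AND due_date < ?'
    );
    $statement->execute([$today]);

    return $statement->fetchAll(PDO::FETCH_COLUMN);
}
```

Because no other test relies on these two rows, you can change or remove them without checking the rest of the suite.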
As a conclusion, you should consider an important "meta" question too:
Do your objects really need to be reconstituted from the database? What if the repository itself would - when used in a test case - be able to just store and retrieve objects in-memory? This often requires a bit of architectural rework using the Dependency inversion principle. But afterwards, you probably won't need to test every part of your application with a full-featured database anymore.
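A minimal sketch of that idea (all names here are hypothetical): production code depends only on an interface, so tests can swap in an in-memory implementation.

```php
// The abstraction the rest of the application depends on.
interface OrderRepository
{
    public function save(Order $order): void;

    public function getById(int $id): Order;
}

// A minimal entity, just enough to make the sketch self-contained.
final class Order
{
    public function __construct(private int $id)
    {
    }

    public function id(): int
    {
        return $this->id;
    }
}

// Used in tests: no database involved, so no fixtures to load at all.
final class InMemoryOrderRepository implements OrderRepository
{
    /** @var array<int, Order> */
    private array $orders = [];

    public function save(Order $order): void
    {
        $this->orders[$order->id()] = $order;
    }

    public function getById(int $id): Order
    {
        if (!isset($this->orders[$id])) {
            throw new RuntimeException('No order with id ' . $id);
        }

        return $this->orders[$id];
    }
}
```

The production implementation of `OrderRepository` would talk to the real database, and only the tests for that implementation still need a full-featured database.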
P.S. Just don't replace MySQL with SQLite for speed. It's still much better to test actual database interactions against the real thing. Testing with SQLite doesn't prove that it's going to work with the real database in production. See also my previous article "Mocking at architectural boundaries - Persistence & Time".
A very important "trick" in finding the flow in life is: do what you like most. Of course, you have to do things you don't like (and then you need different life hacks), but when you can do something you like, you'll find that you'll be more successful at it.
When it comes to blogging, I find that it helps to follow my instincts, to write about whatever I like to write about at the moment. I can think of a list of things that need blogging about, but I end up not writing about them because they don't light the fire inside me (anymore).
Start writing very soon
So, when I decided in January of this year to publish a blog post every week, this was the rule to be applied: whenever I felt the urge to write an article about something I thought was very interesting at the moment, I'd just do it. It turns out that this sometimes requires me to write on the train, in the plane, at night. But I'd still do it (or at least, I'd do it within a certain amount of time, like a maximum of one day after I had the idea). Otherwise, the idea would fade away, just like its importance.
Starting to write should be very easy
Besides starting to write about some idea early after its conception, another important trick is to make that "starting to write" step as easy as possible. It should be very easy to:
- Start working on a new blog post
- Modify an existing one
- Add code samples
- Publish it
(One thing that could improve in my own workflow is: it should be very easy to add images...)
For me, static site generator Sculpin, combined with an automated Docker setup, helps a lot with this rule. For an idea about the setup, check out Lucas van Lierop's open source blog.
Imagine your audience, but never let the imaginary audience judge you
When writing, it helps to imagine who will be reading it. I tend to mostly write with my direct co-workers as the audience in mind. I make sure to add links to reference material, in case the reader needs a refresher on a topic, or is just not familiar with it. It's always good to take a meta-perspective while writing, considering the places in the text where you may lose readers.
Although considering the audience while writing is a good idea, you should never give them a voice while writing. You shouldn't allow them to interrupt your good work and point out that
- somebody else has already written about it (and better),
- this is not very original,
- this is only interesting for 2 people,
- this contains so little information, it doesn't deserve to be called a blog post,
- and so on...
Leave the judging to the real people who'll eventually read your article. And even then, don't let yourself be disappointed by their comments. By the way, even when lots of people are reading your posts, they usually don't comment at all, in which case: don't worry about that either!
Besides judging, people may also provide you with good feedback. Since it's your blog, you're still allowed to ignore it, but you could of course decide to do something with it too.
Don't publish the first article immediately
In my experience, if you write one article and publish it immediately, you'll have another mountain to climb before you publish the second article. In other words: that second article won't come, and you'll feel very bad about it too. So, my trick is to write two or more articles before starting to publish one. Seeing this queue of articles makes me feel like I'm in control, and I don't have to worry about the next deadline anymore; I'll make it anyway, even if I don't write for two weeks in a row. This is also a great way to deal with holidays: I just write a few posts in advance, and publish them even when I'm not working.
Finding good topics
For me, a great source of topics are questions asked during workshop sessions, or conference/meetup talks. These questions are a clear sign that not everybody "just knows". For example, the discussion about "where to generate an ID" comes up often during workshops. In such a case, I provide a brief summary to the participants, but point to the article for more details. A similar source of interesting topics are conversations with colleagues.
Another source is programming work itself. Whenever I feel like I found a good solution for something, or even found a solution "template" or "pattern", I like to write about it to gather feedback. The "ORMless" article is a nice example of that.
Finally, sometimes I feel like blogging when I notice a discussion on Twitter about a topic I have an opinion on/experience with/etc.
With these rules, tips and tricks, it turned out that my goal to publish one blog post every week was quite achievable. It took me about 3 hours every week to keep up with the rhythm. I suspect that actually having such a rhythm is part of why it worked. I also realize that it's still a lot of time to invest, and that 3 hours a week is optimistic if you're starting out as a blogger. Which is why I'm not saying: "you can do it too". But I still believe that these suggestions might come in handy. Please let me know how it went...
The downsides of starting with the domain model
All the architectural focus on having a clean and infrastructure-free domain model is great. It's awesome to be able to develop your domain model in complete isolation; just a bunch of unit tests helping you design the most beautiful objects. And all the "impure" stuff comes later (like the database, UI interactions, etc.).
However, there's a big downside to starting with the domain model: it leads to inside-out development. The first negative effect of this is that when you start with designing your aggregates (entities and value objects), you will definitely need to revise them when you end up actually using them from the UI. Some aspects may turn out to be not so well-designed at all, and will make no sense from the user's perspective. Some functionality may have been designed well, but only theoretically, since it will never actually be used by any real client, except for the unit test you wrote for it.
Some common reasons for these design mistakes are:
- Imagination ("I think it works like that.")
- The need for symmetry ("We have an `accept()` method, so we should also have a `reject()` method.")
- Working ahead of schedule ("At some point we'll need this, so let's add it now.")
My recent research about techniques for outside-in development made me believe that there are ways to solve these problems. These approaches to development are known as Acceptance Test-Driven Development (ATDD), or sometimes just TDD, Behavior-Driven Development (BDD), or Specification By Example (SBE). When starting out with the acceptance criteria, providing real and concrete examples of what the application is supposed to do, we should be able to end up with less code in general, and a better model that helps us focus on what matters. I've written about this topic recently.
The downsides of starting with the smallest bricks
Taking a more general perspective: starting with the domain model is a special case of starting with the "smallest bricks". Given the complexity of our work as programmers, it makes sense to begin "coding" a single, simple thing. Something we can think about, specify, test and implement in at most a couple of hours. Repeating this cycle, we could create several of these smaller building blocks, which together would constitute a larger component. This way, we can build up trust in our own work. This problem-solving technique helps us gradually reach the solution.
Besides the disadvantages mentioned earlier (to be summarized as the old motto "You Ain't Gonna Need It"), the object design resulting from this brick-first approach will naturally not be as good as it could be. In particular, encapsulation qualities are likely to suffer from it.
The downsides of your test suite as the major client of your production code
With only unit tests/specs and units (objects) at hand, you'll find that the only client of those objects will be the unit test suite itself. This does allow for experiments on those objects, trying out different APIs without ruining their stability. However, you will end up with a sub-optimal API. These objects may expose some of their internals just for the sake of unit testing.
You may have had this experience when unit testing an object with a constructor, where you just want to make sure that the object "remembers" all the data you pass to it through its constructor, e.g.

```php
final class PurchaseOrderTest extends TestCase
{
    public function it_can_be_constructed_with_a_supplier_id(): void
    {
        $supplierId = new SupplierId(...);

        $purchaseOrder = new PurchaseOrder($supplierId);

        self::assertEquals($supplierId, $purchaseOrder->supplierId());
    }
}
```
I mean, you know a purchase order "needs" a supplier ID, but why does it need it? Shouldn't there be some aspect about how you continue to use this object that reveals this fact? Maybe you're making a wrong assumption here, maybe you're even fully aware that you're just guessing.
Try following the rule that a domain expert should be able to understand the names of your unit test methods, and that they would even approve of the way you're specifying the object. It will be an interesting experiment that is worth pushing further than you may currently be comfortable with. The fact that an object can be constructed with "something something" as constructor arguments is barely interesting. What you can do with the object after that will be definitely worth specifying.
Experiment 1: Don't test your constructor
Don't even let your object have any properties, until the moment it turns out you have to. You may have had a similar experience with event sourcing: all the relevant data will be inside the domain event, but you don't necessarily have to copy the data to a property of the entity. The same goes for this experiment: only copy data into an attribute, once you find out that you actually need it, because of some other behavior of the object after constructing it.
This experiment will prevent you from adding unneeded attributes.
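As a sketch of that event sourcing analogy (the class names here are hypothetical), the constructor only records a domain event; there is no `$supplierId` property until some behavior actually needs one:

```php
// A minimal identifier, just to make the sketch self-contained.
final class SupplierId
{
    public function __construct(private int $id)
    {
    }
}

final class PurchaseOrderCreated
{
    public function __construct(public SupplierId $supplierId)
    {
    }
}

final class PurchaseOrder
{
    /** @var list<object> */
    private array $recordedEvents = [];

    // Note: no $supplierId property. The relevant data lives inside the
    // recorded domain event; we only copy it to a property once some
    // other behavior of the object turns out to need it.
    public function __construct(SupplierId $supplierId)
    {
        $this->recordedEvents[] = new PurchaseOrderCreated($supplierId);
    }

    /** @return list<object> */
    public function recordedEvents(): array
    {
        return $this->recordedEvents;
    }
}
```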
So, what happens in a constructor should serve other purposes than just copying data into an attribute. But the same goes for the data that can get out. Looking again at the constructor test above, you can see that we needed to add a `supplierId()` getter method to `PurchaseOrder`, just so we can verify that the data gets copied into one of the object's attributes. The time that passes between constructing a `PurchaseOrder` and calling `supplierId()` on it proves that the object can remember its supplier ID. But what's the purpose of that?
Try following the rule that every getter should serve at least one real client, i.e. one caller that is not the unit test itself. The need for "getting something out of an object" can be a very real one of course, but it's a use case that should be consciously implemented by the developer. It should not be there just to make the object "unit-testable".
Experiment 2: Don't add any getters
Don't add any getters just for your unit test. Only add them if there is a real usage for the information you can "get" from the getter, outside of the test suite.
Basically, these two experiments are two sides of the same coin. They can be summarized as: don't keep unnecessary state, don't unnecessarily expose state.
As an example, a `PurchaseOrder` can have lines, each containing a product ID and a quantity. A first test idea would be this:

```php
final class PurchaseOrderTest extends TestCase
{
    public function you_can_add_a_line_to_it(): void
    {
        $purchaseOrder = new PurchaseOrder();
        $productId = ...;
        $quantity = ...;

        $purchaseOrder->addLine($productId, $quantity);

        self::assertEquals(
            [
                new PurchaseOrderLine($productId, $quantity)
            ],
            $purchaseOrder->lines()
        );
    }
}
```
Unless there is a real client for the `lines()` method, besides this unit test, we should not simply start exposing the array of lines this `PurchaseOrder` uses to remember the lines. In fact, along the way it may turn out that for the actual use cases our application implements, there will be no real client needing a `lines()` method. Maybe all that's needed is a …

Instead, you should think about the (domain) invariants you need to protect here. Like "you can't add two lines for the same product", or "you can place a purchase order when it has at least one line":
```php
final class PurchaseOrderTest extends TestCase
{
    public function you_cannot_add_two_lines_for_the_same_product(): void
    {
        $purchaseOrder = new PurchaseOrder();
        $productId = ...;
        $quantity = ...;
        $purchaseOrder->addLine($productId, $quantity);

        $this->expectException(...);

        $purchaseOrder->addLine($sameProduct = $productId, $quantity);
    }

    public function you_cannot_place_a_purchase_order_when_it_has_no_lines(): void
    {
        $purchaseOrder = new PurchaseOrder();

        // add no lines

        $this->expectException(...);

        $purchaseOrder->place();
    }

    public function you_can_place_a_purchase_order_when_it_has_at_least_one_line(): void
    {
        $purchaseOrder = new PurchaseOrder();
        $purchaseOrder->addLine(...);

        $purchaseOrder->place();

        /*
         * Depending on how a purchase order is actually used, you need to
         * decide how you want to verify that the order was "placed".
         * Again, don't immediately go and add a getter for that! :)
         */
    }
}
```
Of course, the implementation of `PurchaseOrder` now definitely needs a way to remember previously added lines, but we leave that implementation detail to `PurchaseOrder` itself. The tests don't even mention the child entity `PurchaseOrderLine`, leaving its implementation details to `PurchaseOrder` as well. This is a big win for the flexibility of `PurchaseOrder`.
There's one other experiment that I'd like to mention here, which will help you write only the necessary code:
Experiment 3: Don't create any value objects
Start out with primitive-type values only, and let the potential value object wrapping these values prove their own need. For example, accept two floats, latitude and longitude. Accept an email address as a plain-old string value. Only when you're repeating knowledge that's particular for the concept that these values represent, introduce a value object, extracting the repeated knowledge into its own unit.
As an example, latitude and longitude may begin to form a code smell called "Data Clump": two values that are always found together. These values may also be validated at different points (e.g. verifying that they are between -180 and 180, or whatever). However, these value objects are naturally implementation details of the aggregates they belong to. To achieve the maximum level of encapsulation and data hiding for the aggregate, you could (and should) introduce value objects only to protect the overall domain invariants of the aggregate itself.
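A sketch of where such a refactoring could end up (the `Coordinates` name and the exact ranges are assumptions; geographic latitude runs from -90 to 90, longitude from -180 to 180): the repeated validation knowledge now lives in exactly one place.

```php
// Hypothetical value object extracted from a latitude/longitude data clump.
final class Coordinates
{
    public function __construct(
        private float $latitude,
        private float $longitude
    ) {
        // The validation that used to be repeated at different points
        // is now enforced in a single place.
        if ($latitude < -90.0 || $latitude > 90.0) {
            throw new InvalidArgumentException('Latitude out of range');
        }
        if ($longitude < -180.0 || $longitude > 180.0) {
            throw new InvalidArgumentException('Longitude out of range');
        }
    }

    public function latitude(): float
    {
        return $this->latitude;
    }

    public function longitude(): float
    {
        return $this->longitude;
    }
}
```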
All of these experiments are not meant to say that you shouldn't have constructors, attributes, getters, or value objects. They are meant to shake up your approach to testing while designing your objects. Tests are not the main use case of an object; they should guide the development process and make sure you write exactly the code that you need. This means not sacrificing the object's encapsulation just to make it testable.
Write the code, and only the code, that supports the use case you're implementing. Write the code, and only the code, that proves that your objects behave the way you are specifying them to behave.
This amounts to adopting an outside-in approach and to making unit tests black-box tests. When you do this, you may let go of rules like "every class has to have a unit test". You may also stop creating test doubles for everything, including domain objects. Which itself could lead you to resolve the question "how do I mock final classes?" once and for all: you don't need to do that.