Today I'm happy to release my latest book, "Microservices for everyone"! 90% of it was done in July but you know what happens with almost-finished projects: they remain almost-finished for a long time. I did some useful things though, like preparing a print edition of the book.
Finally, and I'm very proud of that, I received Vaughn Vernon's foreword for this book. You may know Vaughn as the author of "Implementing Domain-Driven Design". It made a lot of sense to ask him to write the foreword, since many of the ideas behind microservice architecture can be traced back to messaging and integration patterns, and to Domain-Driven Design. Vaughn is an expert in all of these topics, and, as was to be expected, he wrote an excellent foreword.
[…] Microservices, if used responsibly and properly, will tend to solve these three big problems. Teams can be made smaller and more fluid, allowing each developer to work on multiple microservices. This is also a significant investment in developer knowledge of the overall business. There will be one repository and one database (or database schema) for each microservice, having little to no impact on other microservices and teams. Further, the cloud can be an important move toward the successful on-time deployment and resiliency of microservices, and even better support the specific infrastructural and technical mechanism needs of each microservice. Spin up a new server or cluster in no time, and keep them running with no new administrative burden on your organization.
— Vaughn Vernon
I've been asked: "Is this book written for PHP developers?" There are two answers:
- No. The code samples in this book and in the accompanying source code repositories are written in PHP, but you don't need to know PHP to be able to understand what's going on. In fact, the main text of the book doesn't make any assumptions about the languages and tools you use.
- Yes. My secret mission was to make PHP developers more familiar with the concept of asynchronous service integration and related topics like eventual consistency.
So basically, I expect this book to be interesting for everyone, no matter what programming language they use or prefer.
Where to buy
Besides the completed e-book (EPUB, MOBI and PDF), you can now order physical copies on Amazon. You can also order the book in non-US stores, e.g. in the UK and Germany.
To celebrate the release, I offer you a discount on the e-book (use this link). I'd love to do the same for Amazon purchases, but the publishing platform doesn't offer this functionality.
What people say about this book
Release time is not a time to be modest, so here are a number of comments about this book which make me very happy:
"This book showed me what I needed to know to actually start playing with microservices. It had the right blend of why and how without too much focus on implementation details. Highly recommended!" — Beau Simensen
"Read, Learn, Succeed! A comprehensive and really complete guide for creating microservices from scratch! Matthias can abstract the topic complexity in this book that is really, for everyone." — Christophe Willemsen
"As Microservices become more and more popular each day, it's important for professional developers to familiarize themselves with the basic concepts. As with his previous books, Matthias explains these concepts well, in a clear and concise manner. His examples are useful, and the reader is presented with a solid introduction to using Microservices. Highly recommended!" — Mark Badolato
During the last few days I've been finishing my latest book, Microservices for everyone. I've used the Leanpub self-publishing platform again. I wrote about the process before and still like it very much. Every now and then Leanpub adds a useful feature to their platform too, one of which is an export functionality for downloading a print-ready PDF.
Previously I had to cut up the normal book PDF manually, so this tool saved me a lot of work. Though it's a relatively smart tool, the resulting PDF isn't completely print-ready for all circumstances (to be honest, that would be a bit too much to ask from any tool!). For example, I wanted to use this PDF file to publish the book using Amazon's print-on-demand self-publishing service CreateSpace, but I also wanted to order some copies at a local print shop (preferably using the same source files). In this post I'd like to share some of the details of making the print-ready PDF even more print-ready, for whoever may be interested in this.
Preparing the inside of the book
I found that several things needed to be fixed before all layout issues were gone:
- Some lines which contained inline, pre-formatted code, like `AVeryLongClassName`, would not auto-wrap to the next line. Where this is the case, it triggers a warning in CreateSpace's automatic review process: the margin becomes too small for print. I fixed these issues by cutting these long strings into multiple parts, adding soft hyphens (`\-`) in between them.
- Some images appeared to be too large. This was because Leanpub shows all images with a 100% width. Vertically oriented images will appear to be much larger than horizontally oriented ones.
I added some whitespace to the image's source file to force a "horizontal" rendering, but I later found out that you can also specify image positioning options, like width, float, etc.
- Some images had a resolution that's too low for printing. Once I realized this, I started adding images and illustrations with higher resolutions than required. Unfortunately I had to redraw some of the illustrations manually in order to get higher resolution versions... Something to keep in mind from the beginning of the writing process!
The result of Leanpub's print-ready PDF export is a PDF containing colored code snippets and images. In order to turn it into a grayscale PDF document, I googled a bit and found a good solution. I now use Ghostscript to do the PDF conversion, using the following options:
-o /print/book.pdf \
This takes the `book.pdf` document from the `/preprint` directory, removes transparency, embeds all used fonts, converts the document to grayscale, and stores the images with a 300 DPI resolution (which is excellent for print). It then saves the resulting PDF file in the `/print` directory.
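For reference, a complete invocation along these lines might look as follows. This is a sketch based on Ghostscript's `pdfwrite` options, not the exact command I used; adjust paths and settings for your own book:

```shell
# Convert /preprint/book.pdf to a grayscale, print-ready /print/book.pdf.
# Targeting PDF 1.3 makes pdfwrite flatten transparency, since PDF 1.3
# doesn't support it.
gs \
  -o /print/book.pdf \
  -sDEVICE=pdfwrite \
  -dCompatibilityLevel=1.3 \
  -sColorConversionStrategy=Gray \
  -dProcessColorModel=/DeviceGray \
  -dEmbedAllFonts=true \
  -dSubsetFonts=true \
  -dDownsampleColorImages=true \
  -dColorImageResolution=300 \
  /preprint/book.pdf
```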
Preparing the cover
I designed the cover image using Gimp. The size and layout of the cover image are based on the number of pages, the thickness of the paper I wanted to use, the size of the PDF ("Technical", i.e. 7 x 9.1 inches) and the cut margin for the book printer. I put all this information in one spreadsheet (allowing me to use constants, variables, and simple derivations):
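For those who prefer code over spreadsheets, the derivation can be sketched like this. The page count and per-page spine thickness are assumptions for illustration only; take the real numbers (bleed, paper thickness) from your printer's specifications:

```php
<?php

// All numbers below are hypothetical examples; check your printer's specs.
$trimWidth  = 7.0;      // inches ("Technical" size)
$trimHeight = 9.1;
$bleed      = 0.125;    // cut margin on the outer edges
$pageCount  = 300;      // assumed page count
$perPage    = 0.002252; // assumed spine thickness contributed per page

// Spine width grows linearly with the number of pages.
$spine = $pageCount * $perPage;

// The cover wraps around: bleed + back + spine + front + bleed.
$coverWidth  = $bleed + $trimWidth + $spine + $trimWidth + $bleed;
$coverHeight = $bleed + $trimHeight + $bleed;

printf("spine: %.4f in, cover: %.4f x %.4f in\n", $spine, $coverWidth, $coverHeight);
// prints: spine: 0.6756 in, cover: 14.9256 x 9.3500 in
```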
Using the information in this sheet I could then create an image file of the right size, and put visual guidelines at the right places:
I always miss Photoshop when I'm working with Gimp. It can do most of what I want, except store CMYK images... That's very unfortunate and frustrating. I've been trying to overcome this issue in various ways (uploading an RGB image to a website to let it be converted to CMYK, using Ghostscript, etc.). The final and automated solution came from ImageMagick's `convert` tool. The only problem was that you need to feed it color profiles. I have absolutely no clue what these are, but I downloaded some from the Adobe website and was able to use them in the following command:
```
convert /preprint/cover-rgb.png \
  +profile icm \
  -profile RGB.icc \
  -profile CMYK.icc \
```
The options mean: `+profile icm` removes any color profile in use, and the two `-profile` options then convert from the RGB profile to the CMYK profile.
The conversion process more or less keeps the look and feel of the original RGB/screen-based cover image intact. I'm curious what a real print looks like. When CreateSpace's review process is finished, I'll be sure to order a sample copy for one last proof-reading session.
I don't know when the final version of the book will be released yet. When I do, I'll blog about it here.
In the previous article we discussed a sensible layer system, consisting of three layers: Domain, Application, and Infrastructure.
The infrastructure layer, containing everything that connects the application's use cases to "the world outside" (like users, hardware, other applications), can become quite large. As I already remarked, a lot of our software consists of infrastructure code, since that's the realm of things complicated and prone to break. Infrastructure code connects our precious clean code to:
- The filesystem
- The network
- The ORM
- The web framework
- Third party web APIs
The layering system already offers a very useful way of separating concerns. But we can improve the situation by further analyzing the different ways in which the application is connected to the world. Alistair Cockburn calls these connection points the "ports" of an application in his article "Hexagonal architecture". A port is an abstract thing; it will not have any representation in the code base (except as a namespace/directory, see below). It can be something like:
In other words: there is a port for every way in which the use cases of an application can be invoked (through the UserInterface, through an API, through a TestRunner, etc.) as well as for all the ways in which data leaves the application (to be persisted, to notify other systems, etc.). Cockburn calls these primary and secondary ports. I often use the words input and output ports.
What exactly a port is and isn’t is largely a matter of taste. At the one extreme, every use case could be given its own port, producing hundreds of ports for many applications.
— Alistair Cockburn
For each of these abstract ports we need some code to make the connection really work. We need code for dealing with HTTP messages to allow users to talk to our application through the web. We need code for talking with a database (possibly speaking SQL while doing so), in order for our data to be stored in a persistent way. The code to make each port actually work is called "adapter code". We write at least one adapter for every port of our application.
Adapters, which are very concrete and contain low-level code, are by definition decoupled from their ports, which are very abstract, and in essence just concepts. Since adapter code is code related to connecting an application to the world outside, adapter code is infrastructure code and should therefore reside in the infrastructure layer. And this is where ports & adapters and layered architecture play well together.
If you remember the dependency rule from my previous article, you know that code in each layer can only depend on code in the same layer or in deeper layers. Of course the application layer can use code from the infrastructure layer at runtime, since it gets everything injected as constructor arguments. However, the classes themselves will only depend on things more abstract, i.e. interfaces defined in their own layer or a deeper one. This is what applying the dependency inversion principle entails.
When you apply the principle everywhere, you can now write alternative adapters for your application's ports. You could run an experiment with a Mongo adapter side by side with a MySQL adapter. Also, you can make the tests that exercise application layer code a lot faster by replacing the real adapter with something faster (for example, an adapter that doesn't make network or filesystem calls, but simply stores things in memory).
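As a sketch of what this looks like in code (hypothetical names throughout): the port is an interface defined in the application layer, and each adapter is an infrastructure class implementing it. The use case depends only on the interface, so a fast in-memory adapter can stand in for the real one in tests:

```php
<?php

// The "Persistence" port, owned by the application layer.
interface MemberRepository
{
    public function save(string $id, string $name): void;

    public function nameOf(string $id): ?string;
}

// An infrastructure adapter meant for tests: it makes no network or
// filesystem calls, it simply stores everything in memory.
final class InMemoryMemberRepository implements MemberRepository
{
    private $members = [];

    public function save(string $id, string $name): void
    {
        $this->members[$id] = $name;
    }

    public function nameOf(string $id): ?string
    {
        return $this->members[$id] ?? null;
    }
}

// Application layer code only depends on the interface; the concrete
// adapter (MySQL, Mongo, in-memory, ...) is injected as a constructor argument.
final class RegisterMember
{
    private $repository;

    public function __construct(MemberRepository $repository)
    {
        $this->repository = $repository;
    }

    public function register(string $id, string $name): void
    {
        $this->repository->save($id, $name);
    }
}

$repository = new InMemoryMemberRepository();
$useCase = new RegisterMember($repository);
$useCase->register('member-1', 'Vaughn');
echo $repository->nameOf('member-1'), "\n"; // prints "Vaughn"
```

Swapping in a MySQL adapter would not require touching `RegisterMember` at all; only the constructor argument changes.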
Knowing which ports and adapters your application has or should have, I recommend reflecting them in the project's directory/namespace structure as well:
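As an illustration, such a structure might look like this (hypothetical port and adapter names; adjust them to your application's actual ports):

```
src/
    Application/            # use cases and port interfaces
    Infrastructure/
        UserInterface/
            Web/            # HTTP adapter
            Cli/            # console adapter
        Persistence/
            Mysql/          # MySQL adapter
            InMemory/       # in-memory adapter for tests
```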
Having specialized adapters for running tests is the main reason why Cockburn proposed the ports & adapters architectural style in the first place. Having a ports & adapters/hexagonal architecture increases your application's testability in general.
At the same time, when we start replacing real dependencies with fake ones, we should not forget to test the real thing. This kind of test is what Freeman and Pryce call an integration test. It thoroughly tests one adapter. This means it tests infrastructure code, limited to one port. While doing so, it uses and calls as many "real" things as possible, i.e. it calls a real external web API, creates real files, and uses a real database (not a faster SQLite replacement, but the real deal - how would you know the persistence adapter for MySQL works if you use SQLite instead?).
Integrating Bounded Contexts
Now, for the Domain-Driven Design fans: when integrating bounded contexts, I find that it makes sense to designate a port for each context integration point too. You can read a full example using a REST API in chapter 13, "Integrating Bounded Contexts", of Vaughn Vernon's book "Implementing Domain-Driven Design". The summary is: there's an Identity & Access context, which keeps track of active user accounts and assigned roles, and there is a Collaboration context, which distinguishes different types of collaborators: authors, creators, moderators, etc. To remain consistent with Identity & Access, the Collaboration context will always directly ask Identity & Access whether a user with a certain role exists in that context. To verify this, it makes an HTTP call to the REST API of Identity & Access.
In terms of ports & adapters, the integration relation between these two contexts can be modelled as an "IdentityAndAccess" port in the Collaboration context, together with an adapter for that port which you could call something like "Http", after the technical protocol used for communication through this port. The directory/namespace structure would become something like this:
```
Collaboration/
    Infrastructure/
        UserInterface/
            Http/           # Serving a RESTful HTTP API
        IdentityAndAccess/
            Http/           # HTTP client for I & A's REST API
```
You could even use a "faux" port adapter if you like. This adapter would not make a network call but secretly reach into Identity & Access's code base and/or database to get the answers it needs. This could be a pragmatic and stable solution, as long as you're aware of the dangers of not making a bounded context actually bounded. After all, bounded contexts were meant to prevent a big ball of mud, where the boundaries of a domain model aren't clear.
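A minimal sketch of such a port with a "faux" adapter, using hypothetical names (`IdentityAndAccess`, `userHasRole`). Here a plain array stands in for a direct query on Identity & Access's database:

```php
<?php

// The "IdentityAndAccess" port, owned by the Collaboration context.
interface IdentityAndAccess
{
    public function userHasRole(string $userId, string $role): bool;
}

// "Faux" adapter: instead of making an HTTP call to the REST API of
// Identity & Access, it answers from locally available data. Pragmatic,
// but it quietly couples you to the other context's internals.
final class FauxIdentityAndAccess implements IdentityAndAccess
{
    private $rolesByUser;

    public function __construct(array $rolesByUser)
    {
        $this->rolesByUser = $rolesByUser;
    }

    public function userHasRole(string $userId, string $role): bool
    {
        return in_array($role, $this->rolesByUser[$userId] ?? [], true);
    }
}

$identityAndAccess = new FauxIdentityAndAccess(['user-1' => ['Author']]);
var_dump($identityAndAccess->userHasRole('user-1', 'Author')); // bool(true)
```

An `Http` adapter implementing the same interface could replace the faux one later without any change to the Collaboration context's application code.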
This concludes my "Layers, ports and adapters" series. I hope it gives you some useful suggestions for your next project - or for (parts of) your current one. I'd be happy to hear about your experiences in the field. If you have anything to share, please do so in the comment section below this post.
Also, I would be stupid not to mention that I offer in-house training on these topics as well, in case you want to experience layered/hexagonal architecture hands-on.