Layers, ports & adapters - Part 3, Ports & Adapters

Posted by Matthias Noback

In the previous article we discussed a sensible layer system, consisting of three layers:

  • Domain
  • Application
  • Infrastructure

Infrastructure

The infrastructure layer, containing everything that connects the application's use cases to "the world outside" (like users, hardware, other applications), can become quite large. As I already remarked, a lot of our software consists of infrastructure code, since that's the realm of things complicated and prone to break. Infrastructure code connects our precious clean code to:

  • The filesystem
  • The network
  • Users
  • The ORM
  • The web framework
  • Third party web APIs
  • ...

Ports

The layering system already offers a very useful way of separating concerns. But we can improve the situation by further analyzing the different ways in which the application is connected to the world. Alistair Cockburn calls these connection points the "ports" of an application in his article "Hexagonal architecture". A port is an abstract thing; it will not have any representation in the code base (except as a namespace/directory, see below). It can be something like:

  • UserInterface
  • API
  • TestRunner
  • Persistence
  • Notifications

In other words: there is a port for every way in which the use cases of an application can be invoked (through the UserInterface, through an API, through a TestRunner, etc.) as well as for all the ways in which data leaves the application (to be persisted, to notify other systems, etc.). Cockburn calls these primary and secondary ports. I often use the words input and output ports.

What exactly a port is and isn’t is largely a matter of taste. At the one extreme, every use case could be given its own port, producing hundreds of ports for many applications.

— Alistair Cockburn

Adapters

For each of these abstract ports we need some code to make the connection really work. We need code for dealing with HTTP messages to allow users to talk to our application through the web. We need code for talking with a database (possibly speaking SQL while doing so), in order for our data to be stored in a persistent way. The code to make each port actually work is called "adapter code". We write at least one adapter for every port of our application.

Adapters, which are very concrete and contain low-level code, are by definition decoupled from their ports, which are very abstract, and in essence just concepts. Since adapter code is code related to connecting an application to the world outside, adapter code is infrastructure code and should therefore reside in the infrastructure layer. And this is where ports & adapters and layered architecture play well together.

If you remember the dependency rule from my previous article, you know that code in each layer can only depend on code in the same layer or in deeper layers. Of course the application layer can use code from the infrastructure layer at runtime, since it gets everything injected as constructor arguments. However, the classes themselves will only depend on things more abstract, i.e. interfaces defined in their own layer or a deeper one. This is what applying the dependency inversion principle entails.
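To make this concrete, here is a minimal sketch in PHP of the Persistence port (all class, namespace and table names are made up for illustration): a repository interface lives in the domain layer, an application service depends only on that interface, and an SQL-speaking adapter in the infrastructure layer implements it.

<?php

// One-file sketch with made-up names; in a real project every class would
// live in its own file, following the directory structure described below.

namespace Acme\Orders\Domain\Model {
    final class Order
    {
        private $id;

        public function __construct(string $id) { $this->id = $id; }

        public function id(): string { return $this->id; }
    }

    // The interface lives in the domain layer: it states what the application
    // needs, not how the infrastructure provides it.
    interface OrderRepository
    {
        public function save(Order $order): void;

        public function byId(string $id): Order;
    }
}

namespace Acme\Orders\Application {
    use Acme\Orders\Domain\Model\Order;
    use Acme\Orders\Domain\Model\OrderRepository;

    // The use case depends only on the abstraction; a concrete repository is
    // injected as a constructor argument at runtime.
    final class PlaceOrder
    {
        private $orderRepository;

        public function __construct(OrderRepository $orderRepository)
        {
            $this->orderRepository = $orderRepository;
        }

        public function __invoke(string $orderId): void
        {
            $this->orderRepository->save(new Order($orderId));
        }
    }
}

namespace Acme\Orders\Infrastructure\Persistence\Sql {
    use Acme\Orders\Domain\Model\Order;
    use Acme\Orders\Domain\Model\OrderRepository;

    // The adapter in the infrastructure layer supplies the low-level details
    // (here: plain PDO and SQL). It depends on the deeper layer, never the
    // other way around.
    final class SqlOrderRepository implements OrderRepository
    {
        private $connection;

        public function __construct(\PDO $connection) { $this->connection = $connection; }

        public function save(Order $order): void
        {
            $statement = $this->connection->prepare('INSERT INTO orders (id) VALUES (:id)');
            $statement->execute(['id' => $order->id()]);
        }

        public function byId(string $id): Order
        {
            $statement = $this->connection->prepare('SELECT id FROM orders WHERE id = :id');
            $statement->execute(['id' => $id]);
            $row = $statement->fetch(\PDO::FETCH_ASSOC);

            if ($row === false) {
                throw new \RuntimeException(sprintf('Order "%s" not found', $id));
            }

            return new Order($row['id']);
        }
    }
}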

Once you apply this principle everywhere, you can write alternative adapters for your application's ports. You could run an experiment with a Mongo adapter side by side with a MySQL adapter. You can also make the tests that exercise application layer code a lot faster by replacing the real adapter with a lightweight one (for example, an adapter that doesn't make network or filesystem calls, but simply stores things in memory).
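Sticking with the made-up OrderRepository interface from the sketch above, such an alternative adapter for the same Persistence port could look like this; because both adapters implement the same interface, the application layer code doesn't know or care which one it gets.

<?php

// An alternative adapter for the same (hypothetical) Persistence port: it
// keeps everything in memory, which makes application layer tests fast.

namespace Acme\Orders\Infrastructure\Persistence\InMemory;

use Acme\Orders\Domain\Model\Order;
use Acme\Orders\Domain\Model\OrderRepository;

final class InMemoryOrderRepository implements OrderRepository
{
    /** @var Order[] indexed by order id */
    private $orders = [];

    public function save(Order $order): void
    {
        $this->orders[$order->id()] = $order;
    }

    public function byId(string $id): Order
    {
        if (!isset($this->orders[$id])) {
            throw new \RuntimeException(sprintf('Order "%s" not found', $id));
        }

        return $this->orders[$id];
    }
}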

Directory structure

Knowing which ports and adapters your application has or should have, I recommend reflecting them in the project's directory/namespace structure as well:

src/
    <BoundedContext>/
        Domain/
            Model/
        Application/
        Infrastructure/
            <Port>/
                <Adapter>/
                <Adapter>/
                ...
            <Port>/
                <Adapter>/
                <Adapter>/
                ...
            ...
    <BoundedContext>/
        ...

Testing

Having specialized adapters for running tests is the main reason why Cockburn proposed the ports & adapters architectural style in the first place. A ports & adapters (hexagonal) architecture increases your application's testability in general.

At the same time, when we start replacing real dependencies with fake ones, we should not forget to test the real thing. This kind of test is what Freeman and Pryce call an integration test. It thoroughly tests one adapter, which means it tests infrastructure code, limited to one port. While doing so, it uses and calls as many "real" things as possible: it calls a real external web API, it creates real files, and it uses a real database (not a faster SQLite replacement, but the real deal - how would you know the persistence adapter for MySQL works if you use SQLite instead?).
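A sketch of such an integration test, assuming PHPUnit and the made-up SqlOrderRepository from the earlier sketch: it treats the adapter as a black box, so whatever it stores, it should be able to retrieve again from a real (test) database.

<?php

// Integration test for one adapter of the Persistence port. It talks to a
// real test database instead of mocking the connection.

namespace Acme\Orders\Infrastructure\Persistence\Sql;

use Acme\Orders\Domain\Model\Order;
use PHPUnit\Framework\TestCase;

final class SqlOrderRepositoryTest extends TestCase
{
    public function testItPersistsAndRetrievesAnOrder(): void
    {
        // A real connection, pointing at a test database that resembles production.
        $connection = new \PDO('mysql:host=127.0.0.1;dbname=orders_test', 'test_user', 'secret');
        $connection->exec('DELETE FROM orders');

        $repository = new SqlOrderRepository($connection);

        $repository->save(new Order('order-1'));

        self::assertSame('order-1', $repository->byId('order-1')->id());
    }
}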

Integrating Bounded Contexts

Now, for the Domain-Driven Design fans: when integrating bounded contexts, I find that it makes sense to designate a port for each context integration point too. You can read a full example using a REST API in chapter 13, "Integrating Bounded Contexts", of Vaughn Vernon's book "Implementing Domain-Driven Design". The summary is: there's the Identity & Access context, which keeps track of active user accounts and assigned roles, and there is a Collaboration context, which distinguishes different types of collaborators: authors, creators, moderators, etc. To remain consistent with Identity & Access, the Collaboration context will always directly ask Identity & Access whether a user with a certain role exists in that context. To verify this, it makes an HTTP call to the REST API of Identity & Access.

In terms of ports & adapters, the integration relation between these two contexts can be modelled as an "IdentityAndAccess" port in the Collaboration context, together with an adapter for that port which you could call something like "Http", after the technical protocol used for communication through this port. The directory/namespace structure would become something like this:

src/
    IdentityAndAccess/
        Domain/
        Application/
        Infrastructure/
            Api/
                Http/ # Serving a RESTful HTTP API
    Collaboration/
        Domain/
        Application/
        Infrastructure/
            IdentityAndAccess/
                Http/ # HTTP client for I & A's REST API

You could even use a "faux" port adapter if you like. This adapter would not make a network call but secretly reach into Identity & Access's code base and/or database to get the answers it needs. This could be a pragmatic and stable solution, as long as you're aware of the dangers of not making a bounded context actually bounded. After all, bounded contexts were meant to prevent a big ball of mud, where the boundaries of a domain model aren't clear.
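Continuing the sketch above, such a faux adapter could implement the same port but call straight into Identity & Access's code (the GetUserRoles service is a made-up name standing in for whatever that context exposes):

<?php

namespace Collaboration\Infrastructure\IdentityAndAccess\Faux;

use Collaboration\Application\IdentityAndAccess;
use IdentityAndAccess\Application\GetUserRoles; // hypothetical service from the other context

// A "faux" adapter: no network call, it reaches directly into the other
// bounded context's application layer.
final class FauxIdentityAndAccess implements IdentityAndAccess
{
    private $getUserRoles;

    public function __construct(GetUserRoles $getUserRoles) { $this->getUserRoles = $getUserRoles; }

    public function userHasRole(string $userId, string $role): bool
    {
        return in_array($role, $this->getUserRoles->ofUser($userId), true);
    }
}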

Conclusion

This concludes my "Layers, ports and adapters" series. I hope it gives you some useful suggestions for your next project - or that you'll try to apply it to (parts of) your current project. I'd be happy to hear about your experiences in the field. If you have anything to share, please do so in the comment section below this post.

Also, I would be stupid not to mention that I offer in-house training on these topics as well, in case you want to experience layered/hexagonal architecture hands-on.

Tags: PHP, architecture, design, hexagonal architecture
Comments
Ondřej Frei

Hello @matthiasnoback, thanks for a very interesting article, I'll definitely try applying these principles in my code. One thing is not clear to me though - you mention that the ORM should be used in the infrastructure layer, however the domain layer surely defines entities. How should the domain layer entities be related to the ORM ones from the infrastructure layer?

Matthias Noback

Just to be clear: entities live in the Domain layer. There are no entities in the Infrastructure layer. The thing is, if you use an ORM, it will probably force you to change certain things about your entities to make them persistable. That's just how it is. I put my entities in Domain, my repository interfaces too, and I then provide an implementation of those repository interfaces in the infrastructure layer, so the Domain and Application code is unaware of how things are persisted exactly. They just assume that something is going to take care of that.

Ondřej Frei

It's clear now, thanks! :)

Juan

Hi @matthiasnoback, I also read Nat Pryce's article about the kinds of tests related to Hexagonal Architecture, and I think that he says an Integration Test is not for testing the "real thing" (he calls it the third-party package) as you said, but just for testing our adapter code that translates port to "real thing" and vice versa, i.e. the test ensures the translation is done correctly. Regards, Juan.

Matthias Noback

Good point; this isn't often addressed correctly. The thing is: how can you be sure that, for instance, your Doctrine-specific repository implementation works correctly? By mocking out the EntityManager you wouldn't know if you've made the correct assumptions about it (or have used the correct annotations ;)). By replacing the MySQL database with an Sqlite one, you won't know either (since the differences are vast). So, what's left to do is test the repository implementation against Doctrine ORM, connecting to a real database, one that resembles the one you use in production.

Juan

I think that it could be achieved with a kind of «listener» in the database that would listen for the data that is going to be inserted (e.g. Oracle triggers). The test would have to compare that data with the expected data. This way we would be testing the adapter like Nat Pryce says. Regards, Juan.

Matthias Noback

Yeah, I usually do something like call the method that is supposed to store the data, then call a method that is supposed to retrieve the data and see if it returns an object that looks like the one I expected, given that the database does its work.

Juan

Hello Matthias. The «method that retrieves the data» is from the adapter? If so, you are relying on the thing you are testing to check its own test. I think that the method that retrieves the data should be some software of the database itself. Regards.

Matthias Noback

It's the method of my repository class (which is indeed an adapter). I wouldn't go test the actual database, but the whole adapter as "black box". That is, I test that what I persist, I can also retrieve from the repository and it will be the same thing (I'm writing a new post about this topic by the way).

Juan

Hi Matthias, I don't mean to test the database either, maybe I didn't express well what I wanted to say. I mean we are testing the adapter (i.e. the repository implementation), and you use the adapter (the read method) to check the test of itself (the write method). I think that we shouldn't use a component under test to check the test of itself. Looking forward to your new post about this topic then. Regards.

Matthias Noback

Okay, well, I'll explain more about it in this next post indeed ;) That doesn't mean I can't be wrong on this topic, of course.

Juan

Ok, I was just giving my opinion, and it seems we don't agree. I don't know if you are wrong or not, I'm not anyone to judge that :) anyway, discussing is good for gaining knowledge. Thanks for your attention.

Matthias Noback

No problem, thanks for joining the discussion here!

Juan

Well, from the hexagonal point of view, the driven actor (secondary actor) is the database. Any code implementing the technology-agnostic driven port and accessing the database would be the driven adapter. Testing any database driven adapter means that if we say, for instance, "save X" to the port, the database receives Y=M(X) to insert it into the tables. And that if we know that Y=M(X) is stored in the database and we say to the port "retrieve X", then X is returned by the port. (X is the domain model, Y the persistence model, and M the mapping between both models done by the adapter.) How to test this? That's a very good question :) but I think that's what Nat Pryce says in the article. Regards, Juan.

fbourigault

Thank you for this great series!
However I have a practical question. I want to abstract the Doctrine DBAL transactional method and the Propel2 transaction. I would write a Transaction port, then two adapters, but where does the interface live?

Cherif BOUCHELAGHEM

What I usually do is decorate the application services (command handlers) with a transaction. Following the rule "put interfaces where they are used", I suggest putting the interfaces in the application layer and the implementations in the infrastructure layer. Here is an example:
The decorator interface, which can be implemented by Doctrine or Propel:
https://github.com/dddinphp...

A base application service where the decorator is used:

https://github.com/dddinphp...

Doctrine transaction implementation in the infrastructure layer:

https://github.com/dddinphp...
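For illustration, a simplified sketch of that idea (these names are made up and not necessarily the same as in the linked code):

<?php

// Application layer: the abstraction for running an operation atomically.
namespace Acme\Application {
    interface TransactionalSession
    {
        public function executeAtomically(callable $operation);
    }

    // Decorates a command handler (here simply a callable) with a transaction.
    final class TransactionalCommandHandler
    {
        private $session;
        private $handler;

        public function __construct(TransactionalSession $session, callable $handler)
        {
            $this->session = $session;
            $this->handler = $handler;
        }

        public function handle($command)
        {
            return $this->session->executeAtomically(function () use ($command) {
                return ($this->handler)($command);
            });
        }
    }
}

// Infrastructure layer: a Doctrine implementation of that interface.
namespace Acme\Infrastructure\Persistence\Doctrine {
    use Acme\Application\TransactionalSession;
    use Doctrine\ORM\EntityManagerInterface;

    final class DoctrineSession implements TransactionalSession
    {
        private $entityManager;

        public function __construct(EntityManagerInterface $entityManager)
        {
            $this->entityManager = $entityManager;
        }

        public function executeAtomically(callable $operation)
        {
            // Doctrine's transactional() wraps the operation in a database transaction.
            return $this->entityManager->transactional($operation);
        }
    }
}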

Cherif BOUCHELAGHEM

Hey @matthiasnoback, thank you for this article. Is it fine to integrate bounded contexts via a library API? I mean not using a RESTful API but invoking methods on classes/objects, which can be used as an anti-corruption layer that acts as a facade for the upstream BC.

Matthias Noback

If you're reading up on service integration: a synchronous HTTP-based integration isn't preferable at all ;) I'm writing about this in my new book "Microservices for everyone"; the runtime dependency will be a danger to the stability of the system at large. There are different solutions there, one of them being a code-based dependency. I like this as a pragmatic first step to decoupling the code. That way, the rule for coupling will become: the adapter for connecting one BC to another BC is the only place where a direct dependency on (some) code from that other BC is allowed.

Cherif BOUCHELAGHEM

Thank you for taking the time to answer my question. Yes, what I mean is a code-based dependency on some service implementations in the infrastructure layer; the downstream BC will have a facade to connect with those services.

Matthias Noback

A facade would be the proper term indeed. It basically hides underlying implementation details, which you can change later on if service boundaries need to be more firmly enforced.

Cherif BOUCHELAGHEM

Many thanks for your sharing!

Xu Ding

Would you mind explaining more on the topic of Bounded Contexts? I find it mind-twisting to find a proper Bounded Context.

Matthias Noback

I'd suggest Vaughn Vernon's book Domain-Driven Design Distilled, or Implementing Domain-Driven Design on this topic.

gabi rusu

Thank you, Matthias for all your articles! They are of great help for us. I have a question regarding Bounded Contexts as well. Let's say we have an e-commerce application in which we have a Catalog module. Would it be correct to consider Category and Product as Bounded Contexts, each of them having Application, Domain and Infrastructure layers? They are both living in the Catalog domain, right? Did I understand correctly this abstract/vague (at least for me) concept of Bounded Contexts?

Matthias Noback

Hi, thanks! Same recommendation for you: Vaughn Vernon explains it well in his "Implementing Domain-Driven Design" book.