Assertions and assertion libraries

Posted by Matthias Noback

When you're looking at a function (an actual function or a method), you can usually identify several blocks of code in there. There are pre-conditions, there's the function body, and there may be post-conditions. The pre-conditions are there to verify that the function can safely proceed to do its real job. Post-conditions may be there to verify that you're going to give something back to the caller that will make sense to them.

In a quasi-mathematical way: most pre-conditions verify that the input is within the supported domain of the function. Post-conditions verify that the output is within the range of the function. A mathematical function (as far as I know) only verifies its pre-conditions, because bad input could've been provided for which the function just doesn't work (e.g. 1/x doesn't work for input x = 0). Once the input has been validated, the function will yield an answer which doesn't have to be verified. Every answer will be valid no matter what.

It works the same way for pre-conditions and post-conditions in code: you'll usually find only pre-conditions, not post-conditions. Quite often, however, you won't even find pre-conditions, but "medio-conditions": input validation that happens everywhere inside the function.

This is not a desirable situation: for the function body to be as clear as possible, we'd want to push all pre-condition checks to the top of the function. Then we'll end up with a function body where "nothing can go wrong".
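
As a sketch of that difference (averageRating() is a made-up example), pre-conditions pushed to the top look like this:

function averageRating(array $ratings): float
{
    // Pre-conditions, all at the top of the function
    if (count($ratings) === 0) {
        throw new InvalidArgumentException('Provide at least one rating');
    }

    foreach ($ratings as $rating) {
        if (!is_int($rating)) {
            throw new InvalidArgumentException('Every rating should be an integer');
        }
    }

    // The function body: from here on, "nothing can go wrong"
    return array_sum($ratings) / count($ratings);
}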

Sometimes the programming language itself can help with these pre-conditions: for instance, the language may support strict typing, which prevents certain types of invalid input to be provided. Some languages offer more advanced ways of defining pre-conditions, like pattern matching.
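
For instance, here's a minimal sketch of PHP's strict typing mode, where a parameter type takes over the work of a manual pre-condition check:

declare(strict_types=1);

// The int type declaration replaces a manual is_int() check:
// passing e.g. the string '3' now causes a TypeError
function repeat(string $text, int $times): string
{
    return str_repeat($text, $times);
}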

PHP doesn't have a lot of options, and before PHP 7 we didn't even have a way to define parameter types using primitives like int, string, etc. So many of us have been doing manual assertions at the top of functions, to verify some basic aspects of the provided arguments, like this:

if (!is_string($name)) {
    throw new InvalidArgumentException('$name should be a string');
}

This leads to lots of code duplication, even across projects, so it's a great opportunity for code reuse. Benjamin Eberlei created a popular library for it (beberlei/assert), and Bernhard Schussek created a variation on it (webmozart/assert). Both have become quite commonly used in projects. They offer useful shortcuts like Assert::string($value) and Assert::greaterThan($value, $limit), which check the value and throw an InvalidArgumentException if an expectation is not met. You can provide custom exception messages as well:

Assertion::string($name, '$name should be a string');

The funny thing is, PHP already has a built-in assertion tool. It's not as convenient as the assertion functions that these libraries provide. You'd have to write all the checks yourself:

assert(is_string($name), '$name should be a string');

On the other hand, it has one interesting feature that exposes the core idea of assertions: you can turn them off (e.g. in a production environment) without touching the code. Even though you can't easily turn off an assertion library once you start using it, this still makes for a very interesting test of whether you're using such a library in the correct way: entertain the thought that you would turn the assertions off; would the system still function correctly?
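
For reference, this on/off switch is an ini setting (values as documented in the PHP manual):

; Development: assertion code is generated and executed
zend.assertions = 1

; Production: assertion code isn't even compiled in, so it has zero cost
zend.assertions = -1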

I think this deserves a bit of an explanation. We should first consider why we need assertions in the first place. The answer is that some callers may provide bad data as input arguments to our function, so we need to protect it against this bad data. We throw an exception because things aren't going to work out well if we just carry on. The culprit, however, isn't the innocent user of our program; it's the caller of the function. So we'd never want an InvalidArgumentException to bubble up to the user.

So the first rule of using assertions is: don't use assertions to validate user input, use them to validate function arguments. This means that, as long as the user uses the application in a way that is valid and supported by our user interface (e.g. they are not trying to "hack" our system by tampering with POST request data), they should never receive a useless "500 Internal Server Error" response because some assertion failed. The other way around: if you find an assertion exception in your logs, and assuming that all your users are innocent, you know that something is wrong with your user interface, since it apparently allows the user to accidentally provide the wrong data.

// $page is taken from the request's query parameters
$page = ...;

// Don't do this: a failed assertion would surface as a useless 500 error
Assertion::greaterThan($page, 0, 'The page query parameter should be larger than 0');
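
What we'd want instead is to validate the user input explicitly and translate a violation into a user-facing error. A sketch (the request and response details are made up):

$page = (int) ($request->query['page'] ?? 1);

if ($page < 1) {
    // A helpful "400 Bad Request" response, instead of a 500 caused by a LogicException
    return new Response('The page query parameter should be larger than 0', 400);
}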

User input will indeed be a reason for functions to fail. But so are external failures in outgoing calls. If a function reaches out to the database to fetch an entity by its ID, the entity may not exist (anymore) and the call will fail. Before you make that call to the database, you don't know whether the function will fail or not.

This is why language designers usually make a distinction between LogicExceptions and RuntimeExceptions. Both extend the generic Exception class, but their intent is different. A RuntimeException is a failure caused by external, unpredictable things: the database, the filesystem, the network, etc. A LogicException is a programming mistake. It shouldn't have happened, but somehow, somewhere, a programmer didn't use a function well. Can you guess what the parent class of InvalidArgumentException is? It's LogicException, and rightfully so. Whenever an assertion triggers an exception, you know that you have made a mistake.

This brings us to the second rule of using assertions: don't use assertions to validate return values from other functions.

$id = 123;
$entity = $repository->findById($id);

// Don't use an assertion here
Assertion::isInstanceOf($entity, Entity::class);

// Instead, throw a RuntimeException, or a domain-specific one
if ($entity === null) {
    throw new RuntimeException('Entity with ID ' . $id . ' not found');
}
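
Such a domain-specific exception could look like this (EntityNotFound is a made-up name):

final class EntityNotFound extends RuntimeException
{
    public static function withId(int $id): self
    {
        return new self('Entity with ID ' . $id . ' not found');
    }
}

// Inside the repository:
if ($entity === null) {
    throw EntityNotFound::withId($id);
}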

Another example of making an assertion about a return value:

$dateTime = DateTimeImmutable::createFromFormat('d/m/Y', $dateString);

Assertion::isInstanceOf($dateTime, DateTimeImmutable::class);

The real problem here is that DateTimeImmutable::createFromFormat() has a design issue: it returns either false or a DateTimeImmutable instance. This isn't good form. If it's impossible to construct an object from the provided $dateString argument, this function should throw an exception. Once it does, we don't need to make an assertion about its return value. The solution in code would be to introduce a wrapper with a more appropriate API, e.g.

final class DateTime
{
    public static function createFromFormat(
        string $format, 
        string $dateString
    ): DateTimeImmutable {
        $dateTime = DateTimeImmutable::createFromFormat($format, $dateString);

        if (!$dateTime instanceof DateTimeImmutable) {
            throw new InvalidArgumentException(
                'The provided date string is in the wrong format' 
            );
        }

        return $dateTime;
    }
}
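
Usage would then look like this:

try {
    $dateTime = DateTime::createFromFormat('d/m/Y', $dateString);
} catch (InvalidArgumentException $exception) {
    // e.g. report the problem back to whoever provided the date string
}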

The above example also demonstrates a more general rule for assertions: don't use assertions as a replacement for exceptions. If you think about it, you can replace every if branch which throws an exception with an assertion. This may seem like a useful trick, because it saves you from writing a unit test for that branch:

/*
 * There's a separate branch in the code that throws this exception, 
 * so theoretically it should be covered with an extra unit test.
 */
if ($entity === null) {
    throw new RuntimeException('Entity with ID ' . $id . ' not found');
}

/*
 * There's no longer a separate branch, so the unit test for the happy
 * path of this function will also cover this line, even though it 
 * won't trigger the exception.
 */
Assertion::isInstanceOf($entity, Entity::class);

There's more to talk about with regard to unit testing, and the big question to me is: should we write unit tests to verify that our assertions work?

Assertions should be used as sanity checks. In that sense, they are more like a trace: evidence that someone called a function with an incompatible piece of data. For that reason, you usually don't need to write specific unit test cases that catch the exceptions produced by these assertions.

Why? Let's get back to the beginning of this post: many things that we use assertions for could also be verified at the level of the programming language itself. You may know this from experience if you've worked with PHP 5 and added lots of assertions like Assertion::string() and the like, until PHP 7 came along and you could remove all of those assertions. It's just that PHP is still quite limited with respect to which function pre-conditions can be checked by the language.

The same goes for the type system. For instance, if your language supports union types, like something is either This or That, you don't have to write an assertion for that anymore. With pattern matching, things become even more advanced, and you could omit assertions like "there should be at least one element in the list".
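
As a sketch of the union type case (This and That are placeholder class names; union types require PHP 8.0 or later):

function process(This|That $input): void
{
    // No instance-of assertion needed here: the language itself
    // guarantees that $input is an instance of This or That
}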

Now let's combine this with the idea that it should be possible to switch off assertions and still have a working program (except that it may be harder to debug the weird issues that the assertions would otherwise have caught). Should or shouldn't we write unit tests for assertions? I find that not every assertion is equally important, so not every assertion requires an extra test.

My rules of thumb: if a better type system would be able to fix it, don't test it. For example:

// Verify that all elements of a list are of a certain type
Assertion::allIsInstanceOf($list, Element::class);

// And all the primitive type assertions for PHP 5 applications
Assertion::string($value);
Assertion::boolean($value);
// etc.

On the other hand, if you're asserting that an input value is within the allowed domain, test it.

For example:

// Verify that the value is within a certain range:
Assertion::greaterThan($value, 0);
Assertion::lessThan($value, 10);
// etc.

// Verify that a string matches a certain pattern:
Assertion::regex($value, '/\d+/');
Assertion::alnum($value);
// etc.

// Verify a number of elements:
Assertion::count($value, 2);

This explains why I find myself mostly testing assertions in the constructors of value objects: value objects are much like native language types, except they usually limit the domain of the input arguments.
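
For example, here's a sketch of such a value object and the kind of test I'd write for it (Percentage is a made-up example, using beberlei/assert and PHPUnit):

final class Percentage
{
    private int $value;

    public function __construct(int $value)
    {
        // The int type is handled by the language; these assertions
        // limit the domain of the argument, so they deserve a test
        Assertion::greaterOrEqualThan($value, 0);
        Assertion::lessOrEqualThan($value, 100);

        $this->value = $value;
    }
}

final class PercentageTest extends TestCase
{
    public function testItCanNotBeNegative(): void
    {
        $this->expectException(InvalidArgumentException::class);

        new Percentage(-1);
    }
}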

Conclusion

Assertions are sanity checks. If they were left out, you should still have a correctly functioning application. They should never become user-facing errors.

Useful rules of thumb for working with assertions and assertion libraries are:

  • Use them to validate function arguments, but only at the beginning of a function.
  • Instead of making assertions about another function's return value, throw an exception inside that other function.
  • Don't use assertions as a replacement for exceptions.
  • If a better type system would be able to fix it, use an assertion, but don't write a unit test for its exception.
  • If an assertion validates the domain of a value, write a unit test that shows that it works.

Final classes by default, why?

Posted by Matthias Noback

I recently wrote about when to add an interface to a class. After explaining good reasons for adding an interface, I claim that if none of those reasons apply in your situation, you should just use a class and declare it "final".

PHP 5 introduces the final keyword, which prevents child classes from overriding a method by prefixing the definition with final. If the class itself is being defined final then it cannot be extended.

— PHP Manual, "Final Keyword"

An illustration of the final syntax for class declarations:

final class CanNotBeExtended
{
    // ...
}

class CanStillBeExtended
{
     // ...
}

class Subclass extends CanStillBeExtended
{
    // override methods here
}

// Produces a Fatal error:

class Subclass extends CanNotBeExtended
{
    // ...
}

For a couple of years now I've been using the final keyword everywhere (thanks to Marco Pivetta for getting me on track!). When I see a class that's not final, it feels to me like it's a very vulnerable class. Its internals are out in the open; people can do with it what they want, not only what its creator has imagined.

Still, I also remember my initial resistance to adding final to every class definition, and I often have to defend myself during workshops, so I thought it would help if I explained all about it here.

The alternative: non-final classes

Omitting the final keyword is the standard for many developers, who reason that they should allow others to reuse the class and change just a part of its behavior by extending it. At first this may seem like a very considerate thing to do, but there are several downsides to it.

You are the initial designer of the class, and you had one particular use case in mind when you created it. Using the class for something else by overriding part of its behavior may jeopardize its integrity. The state of the extended object may become unpredictable or inconsistent with the state of the original object. Or the behavior may change in such a way that the subclass can no longer be considered a proper substitute for the base class.

Furthermore, the developer who creates a subclass of your class to override its behavior probably doesn't need all of the internals of the parent class. Still, the subclass inherits those internals (data structures, private methods), and will soon arrive at the point where it has to work around them.

In terms of future development there's another issue: the class that has now been subclassed, still has a life on its own. Its clients may have different needs over time, so changes will be made to it. These changes may affect the subclasses too, since they likely rely on a particular implementation detail of the parent class. When this detail changes, the subclass will break.

From all these issues we have to conclude that by allowing subclassing, we will end up with an undesirable situation. The future of the subclass and the parent class will become intertwined. Refactoring the parent class will be quite hard, because who knows which classes are relying on a particular implementation detail. And since the subclass doesn't need all of the parent class's data and behaviors, they may act like they are the same kind of thing, but in reality, they aren't.

Replacing is better than overriding

So changing the behavior of a class shouldn't be done by subclassing that class and overriding methods. But what else can we do? In most cases, actually modifying the code of the original class isn't possible, either because the class is used and relied upon elsewhere in the project, or because it's part of a vendor package and modifying its code simply isn't an option.

I'd even say that changing code shouldn't be the preferred way of changing behavior in the first place. If you can change the behavior of an object by reconfiguring it, that is, by swapping out its constructor arguments, you should do it.

This reminds us of the Dependency inversion principle, according to which we should depend on abstractions, and the Open/closed principle, which helps us design objects to be reconfigurable without touching their code.
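
A sketch of what reconfiguring through constructor arguments looks like (Mailer, Transport, and the implementations are made-up names):

interface Transport
{
    public function send(Message $message): void;
}

final class Mailer
{
    private Transport $transport;

    public function __construct(Transport $transport)
    {
        $this->transport = $transport;
    }

    // ... methods that compose messages and delegate to $this->transport
}

// Change the behavior by swapping out the constructor argument,
// not by subclassing Mailer:
$mailer = new Mailer(new SmtpTransport());
$mailer = new Mailer(new NullTransport()); // e.g. in a test environment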

What about the Template Method pattern?

There's an alternative approach for changing the behavior of a class. It's a behavioral pattern described in the classic "Design Patterns: Elements of Reusable Object-Oriented Software". The pattern is called "Template Method":

Define the skeleton of an algorithm in an operation, deferring some steps to subclasses. Template Method lets subclasses redefine certain steps of an algorithm without changing the algorithm’s structure.

This solution is slightly better than allowing subclasses to override a parent class's behavior, since it limits what can be changed. It renders the parent class unusable on its own by marking it as abstract, which prevents it from having a life of its own. In the Symfony Security component we find an excellent example of the Template Method pattern in the abstract Voter class. It helps users implement their own authorization voter by providing the bulk of the algorithm that is supposed to be the same for all voters. The user only needs to implement supports() and voteOnAttribute(), which are both smaller parts of the overall algorithm. Symfony wouldn't be able to guess these implementations, since they are project-specific.

abstract class Voter implements VoterInterface
{
    public function vote(TokenInterface $token, $subject, array $attributes)
    {
        $vote = self::ACCESS_ABSTAIN;

        foreach ($attributes as $attribute) {
            if (!$this->supports($attribute, $subject)) {
                continue;
            }

            $vote = self::ACCESS_DENIED;
            if ($this->voteOnAttribute($attribute, $subject, $token)) {
                return self::ACCESS_GRANTED;
            }
        }

        return $vote;
    }

    abstract protected function supports($attribute, $subject);

    abstract protected function voteOnAttribute($attribute, $subject, TokenInterface $token);
}

It's really nice, but still, there's nothing in here that couldn't have been solved with composition instead of inheritance. An equivalent solution would've been:

// The user needs to implement this interface, instead of extending from Voter:

interface AttributeVoter
{
    public function supports($attribute, $subject);

    public function voteOnAttribute($attribute, $subject, TokenInterface $token);
}

// The Security package provides this standard Voter implementation:

final class Voter implements VoterInterface
{
    private $attributeVoter;

    public function __construct(AttributeVoter $attributeVoter)
    {
        $this->attributeVoter = $attributeVoter;
    }

    public function vote(TokenInterface $token, $subject, array $attributes)
    {
        $vote = self::ACCESS_ABSTAIN;

        foreach ($attributes as $attribute) {
            // We delegate calls to the injected AttributeVoter

            if (!$this->attributeVoter->supports($attribute, $subject)) {
                continue;
            }

            $vote = self::ACCESS_DENIED;
            if ($this->attributeVoter->voteOnAttribute($attribute, $subject, $token)) {
                return self::ACCESS_GRANTED;
            }
        }

        return $vote;
    }
}

Following the reasoning from "When to add an interface to a class" we might even reconsider the presence of a VoterInterface in the example above. If all the voting is done in the same way, that is, the voting algorithm is the same in all situations, we don't actually want users to write their own implementations of VoterInterface. So we also might provide only the Voter class, not the interface (I'm not actually sure it's the case for the Symfony Security component, but it's a good design choice to consider in your own projects).

The proposed solution doesn't involve inheritance anymore. It leaves the internals of all the involved classes to themselves. That is, these classes can all be final, and can be safely refactored now. Another advantage is that we could reuse each class in a different scenario. Final classes are more like building blocks ready for use, instead of templates that still need to be made concrete.

Composition over inheritance

You may have heard the phrase "Composition over inheritance" before; this is what is meant by that. In most cases, you should prefer a solution that involves composition over a solution that involves inheritance. To be more specific: if you feel the need to reconfigure an object, to change parts of an algorithm, or to rewrite part of the implementation, consider creating a new class instead of overriding an existing one. If you need to represent a hierarchy of classes, where subclasses are proper substitutes for their parent classes, that would be the classic situation where you may still consider inheritance. Even then, the result may be better if you implement abstract interfaces instead of inheriting from concrete parent classes. Still, this is very tricky to get right (I personally regret most of my previous work involving inheritance), so if you can, you should prefer composition.

Extension should be a distinct use case

As a class designer you create a class in such a way that it provides a number of useful things its users can do with it. For example, they can:

  • instantiate it,
  • call a method on it,
  • get some information from it,
  • or change something about it.

"Reconfiguring it to change part of its behavior" should also be on this list, as a separate item. And so should "allowing it to be extended to override part of its behavior". It should be a deliberate decision to allow the user to do that with your class. Allowing a class to be extended has (often limiting) consequences for its design, and its future. So at least you shouldn't allow it by default.

"Final" pushes everyone in the right direction

So if allowing users to subclass a class shouldn't be the standard, then not allowing it should be. In other words: adding final to a class declaration should be your default. This simple trick will lead everyone in the right direction: towards classes that are smaller, have fewer maintenance issues, are easier to refactor, and act more like building blocks that can be reused in different parts of the project.

"You can always remove final if you want to"

I've often heard one argument for using final that I want to address here: "You can always remove final if you want to." In a sense, this is right; you can always allow extending a class whenever you want. But the same is true for adding "final": it's always just a few keystrokes away. As discussed in this article, I don't think it's the right attitude; it's better to close down than to open up. Opening up after having been closed is asking for all the trouble of inheritance described above. So instead of removing "final" from a class declaration, always aim for a solution that replaces part of the existing solution and uses composition to allow for reconfiguration.

"My mocking tool doesn't work with final classes"

One objection against final classes that I've often heard is: "I like what you say, but my mocking tool doesn't work with final classes". Indeed, most don't. It makes sense, because the test doubles that such a tool generates usually look something like this:

final class CanNotBeExtended
{
}

// This produces a Fatal error:

class TestDoubleForCanNotBeExtended32789473246324369823903 extends CanNotBeExtended
{
    // override implementations of parent class here...
}

It's unfortunate, but the tools are not to blame. They'd have to do some sneaky things like hooking into the class loader to make the class non-final (in fact, that's quite doable). But they don't, so the truth gets slapped in our face. All that the tool is saying is that a final class can't be extended - there's nothing problematic about that fatal error. Instead, whenever we feel the need to replace a final class with a test double, we should consider two options:

  1. Maybe we shouldn't want to replace the real thing, i.e. the class.
  2. Maybe we should introduce something we can replace, i.e. an interface.

Option 1 often applies when you're trying to mock things like entities and value objects. Just don't do it; use the real objects in your tests.

Option 2 should be applied in most other cases. When the class should've had an interface, add one, and create a test double for the interface instead.
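
A sketch of option 2 (Clock and SystemClock are made-up names; the test double is created with PHPUnit's createMock()):

interface Clock
{
    public function currentTime(): DateTimeImmutable;
}

final class SystemClock implements Clock
{
    public function currentTime(): DateTimeImmutable
    {
        return new DateTimeImmutable('now');
    }
}

// In a test (inside a PHPUnit TestCase), create a double for the
// interface, not for the final class:
$clock = $this->createMock(Clock::class);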

Conclusion

In conclusion: make your classes final by default. One trick to help you with that is to modify the IDE template for new classes to automatically add final for you. Also, make the "final" discussion part of your technical code review. Ask: why is this class not final? The burden of proof should be on the author of the class (and if they don't agree, talk to them about the merits of using "final").


Reusing domain code

Posted by Matthias Noback

Last week I wrote about when to add an interface to a class. The article finishes with the claim that classes from the application's domain don't usually need an interface. The reason is that domain code isn't going to be swapped out with something else. This code is the result of careful modelling work, based on the business domain that you work with. And even if you'd work on, say, two financial software projects in a row, you'll find that the models you produce for each of them will differ in many subtle (if not radical) ways. Paradoxically, you'll find that in practice a domain model can sometimes be reused after all. There are some great examples out there. In this article I explain different scenarios of where and how reuse could work.

Reuse-in-the-small

In "Facts and Fallacies of Software Engineering" (2002), Robert Glass speaks about reuse-in-the-small:

Reuse-in-the-small (libraries of subroutines) began nearly 50 years ago and is a well-solved problem.

Reuse in software is quite possible, but only if the amount of shared code is relatively small. Examples of components with the right size in the PHP ecosystem would be:

  • Flysystem (filesystem abstraction)
  • ProxyManager (proxy generation)
  • JMSSerializer (object serialization)
  • Symfony Validator (validation)

And so on. The idea being: if the library is basically a utility function that "got a bit out of hand", but is flexible at the same time, supporting many different use cases, then we can speak of successful reuse. In particular because PHP package maintainers tend to set a very high standard for themselves: every quality indicator should be green, 100%, etc. And so it can happen that these packages have millions and millions of downloads.

By the way, I also count frameworks as successful reuse-in-the-small: most of them are a collection of utility-like libraries anyway, and they rarely get in the way in terms of flexibility, at least in my experience; you can build any web application on top of any of them.

Reuse-in-the-large and software diversity

If a reusable component is too large, many aspects of it will be irrelevant, or even counter-productive, for its users. This leads to objections like:

  • When I'd use this component in my project, I'll be downloading way too much stuff that I'll never actually need.
  • Considerable parts of this component don't work as I want them to, so maybe I should roll my own.
  • This component is good today, since we're in the prototype phase, but probably in about a year, it will limit us.

In Glass's terms, we're talking about reuse-in-the-large, and he posits that:

Reuse-in-the-large (components) remains a mostly unsolved problem, even though everyone agrees it is important and desirable.

Software projects in general are very diverse. Every project comes with its own requirements, its own domain experts, its own team, and everything about it is special (although some product owners would be better off not thinking that their project was so very special). Still, one might say, there should be some common ground. Some domain knowledge will be potentially reusable, like a Money class, or a DateTime class, right?

If there are enough common problems across projects and even application domains, then component-based approaches will eventually prevail. If, as many suspect, the diversity of applications and domains means that no two problems are very similar to one another, then only those common housekeeping functions and tasks are likely to be generalized, and they constitute only a small percentage of a typical program’s code.

Over the past few years we've had some excellent articles making the rounds, which prove the point that "no two problems are the same". I'd like to mention Ross Tuck's "Precision Through Imprecision: Improving Time Objects" and Mathias Verraes's "Type Safety and Money". In these articles, we learn by example how design decisions are based on domain knowledge, that different situations require different decisions, and that designs resulting from those decisions can't be useful in every other situation. Trying to use things like a Money object in situations where they just don't belong, is very much like the saying: "trying to fit a round peg in a square hole".

In particular, reusing domain code will make us feel like we have to ignore or work around certain aspects of it. Which is why, instead of reusing this code from some shared location, we might be better off copying it. By copying the code and using it in a different situation, we can find out if the code really has potential for reuse. And it gets even better: it'll be easier to find the right abstractions, since we'll be able to clearly see what's essential to the thing we're trying to reuse, once we see how it may serve other use cases.

We won't have any of these advantages if we aim to reuse the code from the outset. Which is why Glass gives us one of his "rules-of-three":

[...] a reusable component should be tried out in three different applications before it will be sufficiently general to accept into a reuse library.

Reuse-in-the-even-larger: reusing entire subdomains

It's funny that, while reuse-in-the-large is deemed an unsolved problem, today we see several larger reusable software projects which are (according to Glass, against all odds) very successful. We see large reusable components for e-commerce software, like Sylius, Spryker, Spark, etc. These are not just oversized utility functions; they provide complete solutions for running commercial internet-based businesses, and offer features like inventory management, payments, and invoicing. There's bound to be a lot of domain decisions in that code. This contradicts the reuse-in-the-large problem of software diversity. Even though no two problems/projects are the same, these components still aim to solve many problems at once, and given their popularity, we should conclude that they are indeed examples of successful reuse-in-the-large.

How to explain this?

In part, I think, because most of us don't want to spend time building the equivalent components from the ground up. As it says on the Spark homepage:

Spark is a Laravel package that provides scaffolding for all of the stuff you don't want to code.

Another reason might be that these components are still relatively small and focused; there's usually the option to write your own component to replace a third-party one. Limiting the scope of a package to a small part of the domain therefore contributes to its success. At that point you can make a conscious decision: which part of the domain is special in your case, which parts require more careful modelling, and where can your application make a difference?

Eric Evans has written extensively about how and when reusing a domain model could work, in his book "Domain-Driven Design" (2003). He explains how you can divide the overall domain of an application or software system into several subdomains. He calls this "strategic distillation", because the aim is to find out what your core domain is, separating it from the other subdomains, including so-called "generic" ones. Evans summarizes this as follows:

Identify cohesive subdomains that are not the motivation for your project. Factor out generic models of these subdomains and place them in separate MODULES. Leave no trace of your specialties in them. Once they have been separated, give their continuing development lower priority than the CORE DOMAIN, and avoid assigning your core developers to the tasks (because they will gain little domain knowledge from them). Also consider off-the-shelf solutions or published models for these GENERIC SUBDOMAINS.

For application developers this is useful advice, because it helps you focus your development effort on areas where your application can stand out amongst many others. At the same time, it will help you decide for which parts of your application you should rather use an existing library or external service, also known as an "off-the-shelf solution".

It's also useful advice for package developers: application developers looking for off-the-shelf solutions are the potential users of the packages that you publish. So whenever you consider extracting part of your application into a reusable package, ask yourself whether it can help others get to their core domain quicker.

Once more, for application developers this is crucial advice: when using these third-party solutions in your own applications, consider once again if you've done your strategic distillation right. Can you be certain that the off-the-shelf solution won't get in your way when you're continuously improving the code for your core domain?

Conclusion

Reuse-in-the-small is definitely possible. Reuse-in-the-large is deemed to be impossible, because no two problems/projects are alike, but practice proves otherwise: there are reusable components covering entire subdomains which are nonetheless quite successful. The chance of success is bigger if such a reusable component covers a generic subdomain. Using an off-the-shelf solution in such a case saves development effort, which can instead be redirected to the core domain. Another requirement is that the component is flexible enough to let you replace the parts that don't fit your requirements well, and/or small enough to be replaced or rewritten in its entirety.

Epilogue

Here's something interesting to watch: a talk by Eric Evans called "Exploring Time". In it, he discusses date/time calculations in terms of instants and intervals. This could be considered a generic subdomain that's part of almost every larger domain. Pointing out that existing date/time handling APIs don't make a lot of sense, he comes up with better models. One big conclusion for me is that modelling effort in generic subdomains will improve the models in many ways. Even though we don't know all the situations in which a generic model will be used, we can still do a better job by extensively thinking about and researching aspects of the domain.
