Test-driving repository classes - Part 1: Queries

Posted by Matthias Noback

There's something I've only been doing for about a year now, and I've come to like it a lot. My previous testing experience was mostly at the level of unit tests or functional/system tests. What was left out of the equation was integration tests. The perfect example is a test that proves that your custom repository class works well with the type of database you use for the project, and possibly with the ORM or database abstraction library you use to talk to that database. A test for a repository can't be a unit test; that wouldn't make sense, since you'd leave a lot of assumptions untested. So: no mocking allowed.

But how do you test everything that goes on in a repository? Well, I found a nice way of doing so, one that even allows you to use some kind of test-driven approach. In this article I'll cover the first of the two main use cases for repositories: querying the database and returning objects based on the result. The other use case - storing and loading objects - will be discussed in another article.

What's a query?

In essence, a query defines criteria for selecting certain records from the database. Compare it to a real-world equivalent: you have some kind of room full of stuff, and you send in a robot with a list of selection criteria. It examines the things one by one to see if they match those criteria, and if something matches, it takes it along and brings it to you. How can you verify that the robot works well? Sure, you put some things in the room that match the criteria, and see if it fetches those for you. However, you should also drop some things in the room that don't match the criteria, and verify that the robot doesn't take those too.

For repository classes it works the same way. Testing whether some query in the repository class works means loading fixtures for records that match the criteria. But you shouldn't forget to also add some records that don't match the criteria. Otherwise, you wouldn't notice the difference between your elaborate SELECT query and a SELECT * WHERE 1 query. So basically you have to come up with examples, as well as counter-examples. It turns out that these counter-examples can be used to test-drive the query itself.

As an example, suppose you need a query that returns a list of products, but only those that are active. You'd start with the simplest situation: a single product that should be returned:

INSERT INTO products (product_id) VALUES (1);

You then write the query for this:

$result = $this->connection
    ->createQueryBuilder()
    ->from('products')
    ->select('*')
    ->execute()
    ->fetchAll();

In your test you can then verify that this query indeed loads this one product. At this point I often add another record to the database, to prove that the query is capable of returning more than one object (i.e. there's no "limit" in place or anything):

INSERT INTO products (product_id) VALUES (1);
INSERT INTO products (product_id) VALUES (2);
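
A minimal sketch of what such an integration test could look like - the class, helper, and repository method names here are illustrative, assuming a PHPUnit setup where a base test class provides the repository and a fixture loader:

final class ProductRepositoryTest extends IntegrationTestCase
{
    /**
     * @test
     */
    public function it_loads_all_products()
    {
        // Load the SQL fixtures shown above
        $this->loadFixtures(__DIR__ . '/fixtures/products.sql');

        $products = $this->repository->listProducts();

        // Both fixture records should be returned
        self::assertCount(2, $products);
    }
}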

Implement the first requirement

The query we wrote doesn't yet take the "is active" flag for products into consideration. So now, before you dive in and modify the query, you need to show that the code as it is doesn't yet implement all the requirements. So you add a counter-example; an inactive product:

-- Active products
INSERT INTO products (product_id, is_active) VALUES (1, 1);
INSERT INTO products (product_id, is_active) VALUES (2, 1);

-- Inactive product - should be ignored
INSERT INTO products (product_id, is_active) VALUES (3, 0);

A failing test

If you run the test again, it will fail, because it returns an extra product that wasn't expected to be there. This is the traditional "red" phase of TDD.

You then modify the query, adding a "where" clause that will exclude inactive products:

$result = $this->connection
    // ...
    ->andWhere('is_active = 1')
    // ...
    ->fetchAll();

Run the test again, and the light will be green again.

Green; implement another requirement

Now you can continue with the next requirement: the product also needs to be in a group of products that's marked as containing "stock" products. With every extra requirement, we run into another variation we need to check. Consider a product that is active, but is not in the right kind of product group; we have to add a counter-example for that:

-- Active products (1, 2)
INSERT INTO products (product_id, is_active) VALUES (1, 1);
INSERT INTO products (product_id, is_active) VALUES (2, 1);

-- Inactive product (3) - should be ignored
INSERT INTO products (product_id, is_active) VALUES (3, 0);

-- Active product, but in a non-stock product group (100) - should be ignored
INSERT INTO product_groups (product_group_id, has_stock_products) VALUES (100, 0);
INSERT INTO products (product_id, is_active, product_group_id) VALUES (4, 1, 100);

Running the tests now makes the new product pop up in the result, which we don't want, so we modify the query:

$result = $this->connection
    // ...
    ->andWhere('is_active = 1')
    ->innerJoin(
        'products',
        'product_groups',
        'product_groups',
        'product_groups.product_group_id = products.product_group_id'
    )
    ->andWhere('product_groups.has_stock_products = 1')
    // ...
    ->fetchAll();

This would seem to make it work, but when we re-run the tests now, the result is empty. Products 1 and 2 aren't in a product group yet, so the "inner join" will filter them out. So we have to modify the fixtures to fix this. The same goes for product 3 actually; we should put it in a stock product group, to verify that the "active" flag is still taken into account:

-- Active products (1, 2), in a stock-product group (101)
INSERT INTO product_groups (product_group_id, has_stock_products) VALUES (101, 1);
INSERT INTO products (product_id, is_active, product_group_id) VALUES (1, 1, 101);
INSERT INTO products (product_id, is_active, product_group_id) VALUES (2, 1, 101);

-- Inactive product (3), in a stock-product group (101) - should be ignored
INSERT INTO products (product_id, is_active, product_group_id) VALUES (3, 0, 101);

-- Active product (4), but in a non-stock product group (100) - should be ignored
INSERT INTO product_groups (product_group_id, has_stock_products) VALUES (100, 0);
INSERT INTO products (product_id, is_active, product_group_id) VALUES (4, 1, 100);

And so on, and so on. For every new requirement, first add a counter-example to the fixtures, then watch it pop up in the query result. Then modify the query to ensure that it doesn't. This is the Red - Green - (Refactor) TDD cycle for queries in repository classes. By the way, I find it helpful to add important IDs or other characteristics as comments to the fixtures, so it's easy for the reader to figure out what's special about a certain record.
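
To keep tests isolated, the fixtures can be reloaded before every test. Here's a sketch of what the base test class used above might look like - ConnectionFactory and the cleanup strategy are assumptions for the example, not prescriptions:

use Doctrine\DBAL\Connection;
use PHPUnit\Framework\TestCase;

abstract class IntegrationTestCase extends TestCase
{
    /** @var Connection */
    protected $connection;

    /** @var ProductRepository */
    protected $repository;

    protected function setUp()
    {
        // How the test connection is created is project-specific
        $this->connection = ConnectionFactory::createTestConnection();
        $this->repository = new ProductRepository($this->connection);

        // Start every test from a clean slate
        $this->connection->executeUpdate('DELETE FROM products');
        $this->connection->executeUpdate('DELETE FROM product_groups');
    }

    protected function loadFixtures($sqlFile)
    {
        // Naive statement splitting; good enough for simple fixture files
        $statements = array_filter(
            array_map('trim', explode(';', file_get_contents($sqlFile)))
        );

        foreach ($statements as $statement) {
            $this->connection->executeUpdate($statement);
        }
    }
}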

Helpful?

I find this approach very helpful. It helps you take small steps when working on repositories, where you may often feel insecure about getting the queries right (and about understanding your ORM well). Test-driving your queries can prevent you from making stupid mistakes when loading records for specific owners (tenants, users). With this approach you can actually prove that you're loading the right things, by adding some records owned by other tenants or users to your fixtures.

Just like with unit tests, it also helps make the code satisfy new requirements, whenever they become relevant. You have a good starting point for making amendments, a clear view on what's there, and a safety net in case you're worried that modifying the query in some way will accidentally result in unwanted records being loaded.

Potential problems

So far I've found that implementing more requirements can become a bit tedious: you have to add more and more counter-examples, in particular if you also want to test how the repository deals with different data and data types, and want to verify that it hydrates objects correctly.

Still, being able to safely write queries is something I've wanted for a long time, and now that I can, I no longer have to wonder whether the data I see on my screen is actually correct. More than once I've made the mistake of only testing repositories by clicking through the pages in the browser. Seeing anything at all on the screen was sufficient "proof" to me that the query I wrote was correct. In practice, of course, it often turned out there was an awful mistake hidden in the query, and the data on the screen wasn't the right data at all. With this test-driven approach, the test has already proven that the repository loads the correct records; nothing more, nothing less.

Tags: PHP, design, fixtures, testing, database

Assertions and assertion libraries

Posted by Matthias Noback

When you're looking at a function (an actual function or a method), you can usually identify several blocks of code in there. There are pre-conditions, there's the function body, and there may be post-conditions. The pre-conditions are there to verify that the function can safely proceed to do its real job. Post-conditions may be there to verify that you're going to give something back to the caller that will make sense to them.

In a quasi-mathematical way: most pre-conditions verify that the input is within the supported domain of the function. Post-conditions verify that the output is within the range of the function. A mathematical function (as far as I know) only verifies its pre-conditions, because bad input could've been provided for which the function just doesn't work (e.g. 1/x doesn't work for input x = 0). Once the input has been validated, the function will yield an answer which doesn't have to be verified. Every answer will be valid no matter what.

It works the same way for function pre-conditions and post-conditions in code; you'll usually only find pre-conditions, no post-conditions. Quite often, however, you won't even find pre-conditions but "medio-conditions": input validation that happens everywhere inside the function.

This is not a desirable situation: for the function body to be as clear as possible, we'd want to push all pre-condition checks to the top of the function. Then we'll end up with a function body where "nothing can go wrong".
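
To illustrate (a made-up example; Meeting and Attendee are hypothetical classes), compare validation that's interleaved with the work to pre-conditions pushed to the top:

// "Medio-conditions": validation is interleaved with the actual work
function scheduleMeeting($title, array $attendees)
{
    if (!is_string($title)) {
        throw new InvalidArgumentException('$title should be a string');
    }

    $meeting = new Meeting($title);

    if (count($attendees) === 0) {
        throw new InvalidArgumentException('Provide at least one attendee');
    }

    foreach ($attendees as $attendee) {
        $meeting->invite($attendee);
    }

    return $meeting;
}

// Pre-conditions pushed to the top; in the body, nothing can go wrong
function scheduleMeetingRevised($title, array $attendees)
{
    if (!is_string($title)) {
        throw new InvalidArgumentException('$title should be a string');
    }
    if (count($attendees) === 0) {
        throw new InvalidArgumentException('Provide at least one attendee');
    }

    $meeting = new Meeting($title);

    foreach ($attendees as $attendee) {
        $meeting->invite($attendee);
    }

    return $meeting;
}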

Sometimes the programming language itself can help with these pre-conditions: for instance, the language may support strict typing, which prevents certain types of invalid input to be provided. Some languages offer more advanced ways of defining pre-conditions, like pattern matching.

PHP doesn't have a lot of options, and before PHP 7 we didn't even have a way to define parameter types using primitives like int, string, etc. So many of us have been doing manual assertions at the top of functions, to verify some basic aspects of the provided arguments, like this:

if (!is_string($name)) {
    throw new InvalidArgumentException('$name should be a string');
}

This leads to lots of code duplication, across projects even, so it's a great opportunity for code reuse. Benjamin Eberlei created a popular library for it, and Bernhard Schussek created a variation on it. Both have become quite commonly used in projects. They offer useful shortcuts like Assert::string($value) and Assert::greaterThan($value, $limit), which will check the value and throw an InvalidArgumentException if an expectation is not met. You can provide custom exception messages as well:

Assertion::string($name, '$name should be a string');

The funny thing is, PHP already has a built-in assertion tool. It's not as convenient as the assertion functions that these libraries provide. You'd have to write all the checks yourself:

assert(is_string($name), '$name should be a string');

On the other hand, it has one interesting feature that exposes the core idea of assertions: the fact that you can turn them off (e.g. in a production environment), without touching the code. Even though you can't easily turn off an assertion library once you start using it, I still think it's a very interesting test to see if you're using such a library in the correct way: just entertain the thought that you would turn the assertions off, would the system still function correctly?
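
For reference, in PHP itself this switch is a matter of configuration: since PHP 7 the zend.assertions ini directive controls whether assert() calls are executed or not even compiled at all:

; php.ini in development: execute assertions, throw AssertionError on failure
zend.assertions = 1
assert.exception = 1

; php.ini in production: assert() calls are not even compiled
zend.assertions = -1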

I think this deserves a bit of an explanation. We should first consider why we need assertions in the first place. The answer is that some callers may provide bad data as input arguments to our function, so we need to protect it against this bad data. We throw an exception because things aren't going to work out well if we just carry on. The culprit, however, isn't the innocent user of our program; it's the caller of the function. So we never want an InvalidArgumentException to bubble up to the user.

So the first rule of using assertions is: don't use them to validate user input, use them to validate function arguments. This means that, given that the user uses the application in a way that is valid and supported by our user interface (e.g. they're not trying to "hack" our system by tampering with POST request data), they should never receive a useless "500 Internal server error" response because some assertion failed. The other way around: if you find an assertion exception in your logs, and you assume all your users are innocent, you know that something is wrong with your user interface, since it apparently allows the user to accidentally provide the wrong data.

// $page is taken from the request's query parameters
$page = ...;

// Don't do this: if the assertion fails, the user gets a meaningless
// "500 Internal server error" response
Assertion::greaterThan($page, 0, 'The page query parameter should be larger than 0');

User input will indeed be a reason for functions to fail. But so are external failures in outgoing calls. If a function reaches out to the database to fetch an entity by its ID, the entity may not exist (anymore) and the call will fail. Before you make that call, you don't know whether the function will fail or not. This is why language designers usually distinguish between LogicExceptions and RuntimeExceptions. Both extend the generic Exception class, but their intent is different. A RuntimeException is a failure caused by external, unpredictable things: the database, the filesystem, the network, etc. A LogicException is a programming mistake: it shouldn't have happened, but somehow, somewhere, a programmer didn't use a function well. Can you guess what the parent class of InvalidArgumentException is? It's LogicException, and rightfully so. Whenever an assertion throws such an exception, you know that you have made a programming mistake.

This brings us to the second rule of using assertions: don't use assertions to validate return values from other functions.

$id = 123;
$entity = $repository->findById($id);

// Don't use an assertion here
Assertion::isInstanceOf($entity, Entity::class);

// Instead, throw a RuntimeException, or a domain-specific one
if ($entity === null) {
    throw new RuntimeException('Entity with ID ' . $id . ' not found');
}

Another example of making an assertion about a return value:

$dateTime = DateTimeImmutable::createFromFormat('d/m/Y', $dateString);

Assertion::isInstanceOf($dateTime, DateTimeImmutable::class);

The real problem here is that DateTimeImmutable::createFromFormat() has a design issue: it returns either false or a DateTimeImmutable instance. This isn't good form. If it's impossible to construct an object from the provided $dateString argument, the function should throw an exception. Once it does, we no longer need to make an assertion about its return value. The solution in code would be to introduce a wrapper with a more appropriate API, e.g.

final class DateTime
{
    public static function createFromFormat(
        string $format,
        string $dateString
    ): DateTimeImmutable {
        $dateTime = DateTimeImmutable::createFromFormat($format, $dateString);

        if (!$dateTime instanceof DateTimeImmutable) {
            throw new InvalidArgumentException(
                'The provided date string is in the wrong format' 
            );
        }

        return $dateTime;
    }
}
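
Calling code then no longer needs an assertion about the return value; a malformed date string simply surfaces as an exception:

// Succeeds, or throws an InvalidArgumentException:
$dateTime = DateTime::createFromFormat('d/m/Y', '21/03/2019');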

The above example also demonstrates a more general rule for assertions: don't use assertions as a replacement for exceptions. If you think about it, you can replace every if branch which throws an exception with an assertion. This may seem like a useful trick, because it saves you from writing a unit test for that branch:

/*
 * There's a separate branch in the code that throws this exception, 
 * so theoretically it should be covered with an extra unit test.
 */
if ($entity === null) {
    throw new RuntimeException('Entity with ID ' . $id . ' not found');
}

/*
 * There's no longer a separate branch, so the unit test for the happy
 * path of this function will also cover this line, even though it 
 * won't trigger the exception.
 */
Assertion::isInstanceOf($entity, Entity::class);

There's more to talk about with regard to unit testing, and the big question to me is: should we write unit tests to verify that our assertions work?

Assertions should be used as sanity checks. They are like a trace: evidence that someone called a function with an incompatible piece of data. Because of that, you usually don't need to write specific unit test cases that catch the exceptions produced by these assertions.

Why? Let's get back to the beginning of this post: many of the things we use assertions for could also be verified at the level of the programming language itself. You may know this from experience if you've worked with PHP 5 and added lots of assertions like Assertion::string() and the like, until PHP 7 came along and you could remove all of them. It's just that PHP is still quite limited with respect to which function pre-conditions can be checked by the language.

The same goes for the type system. For instance, if your language supports union types, where something is either This or That, you no longer have to write an assertion for it. With pattern matching, things become even more advanced, and you could omit assertions like "there should be at least one element in the list".
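
PHP has since gained native union types (as of PHP 8.0), which makes the difference concrete; the function names here are just for illustration:

// Without union types, the pre-condition needs an assertion:
function loadProductLegacy($id)
{
    Assertion::true(is_int($id) || is_string($id), 'ID should be an int or a string');

    // ...
}

// With native union types (PHP 8.0+), the language enforces it:
function loadProduct(int|string $id)
{
    // ...
}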

Now let's combine this with the idea that it should be possible to switch off assertions and still have a working program (except that it may be harder to debug the weird issues that assertions would otherwise have caught). Should or shouldn't we write unit tests for assertions? I find that not every assertion is equally important, and so not every assertion requires an extra test.

My rules of thumb are: if a better type system would be able to fix it, don't test it. For example:

// Verify that all elements of a list are of a certain type
Assertion::allIsInstanceOf($list, Element::class);

// And all the primitive type assertions for PHP 5 applications
Assertion::string($value);
Assertion::boolean($value);
// etc.

On the other hand, if you're asserting that an input value is within the allowed domain, test it.

For example:

// Verify that the value is within a certain range:
Assertion::greaterThan($value, 0);
Assertion::lessThan($value, 10);
// etc.

// Verify that a string matches a certain pattern:
Assertion::regex($value, '/\d+/');
Assertion::alnum($value);
// etc.

// Verify a number of elements:
Assertion::count($value, 2);

This explains why I find myself mostly testing assertions in the constructors of value objects: value objects are much like native language types, but they usually limit the domain of their input arguments.
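
A sketch of such a value object and a test for one of its assertions (the Percentage class is my own illustration; the assertion methods are from the beberlei/assert library):

final class Percentage
{
    private $value;

    public function __construct(int $value)
    {
        // The int type is enforced by the language; these assertions
        // narrow the domain down to 0-100
        Assertion::greaterOrEqualThan($value, 0);
        Assertion::lessOrEqualThan($value, 100);

        $this->value = $value;
    }
}

// In the unit test for Percentage:
public function testItDoesNotAcceptAValueAbove100()
{
    $this->expectException(InvalidArgumentException::class);

    new Percentage(101);
}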

Conclusion

Assertions are sanity checks. If they were left out, you should still have a correctly functioning application. They should never become user-facing errors.

Useful rules of thumb for working with assertions and assertion libraries are:

  • Use them to validate function arguments, but only at the beginning of a function.
  • Instead of making assertions about another function's return value, throw an exception inside that other function.
  • Don't use assertions as replacement for exceptions.
  • If a better type system would be able to fix it, use an assertion, but don't write a unit test for its exception.
  • If an assertion validates the domain of a value, write a unit test that shows that it works.
Tags: PHP, design, assertions

Final classes by default, why?

Posted by Matthias Noback

I recently wrote about when to add an interface to a class. After explaining good reasons for adding an interface, I claim that if none of those reasons apply in your situation, you should just use a class and declare it "final".

PHP 5 introduces the final keyword, which prevents child classes from overriding a method by prefixing the definition with final. If the class itself is being defined final then it cannot be extended.

— PHP Manual, "Final Keyword"

An illustration of the final syntax for class declarations:

final class CanNotBeExtended
{
    // ...
}

class CanStillBeExtended
{
     // ...
}

class Subclass extends CanStillBeExtended
{
    // override methods here
}

// Produces a Fatal error:

class SubclassOfFinal extends CanNotBeExtended
{
    // ...
}

For a couple of years now I've been using the final keyword everywhere (thanks to Marco Pivetta for getting me on track!). When I see a class that's not final, it feels like a very vulnerable class to me. Its internals are out in the open; people can do with it what they want, not only what its creator has imagined.

Still, I also remember my initial resistance to adding final to every class definition, and I often have to defend myself during workshops, so I thought it would help if I explained all about it here.

The alternative: non-final classes

Omitting the final keyword is the standard for many developers, who reason that they should allow others to reuse the class and change just a part of its behavior by extending it. At first this may seem like a very considerate thing to do, but there are several downsides to it.

You are the initial designer of the class, and you had one particular use case in mind when you created it. Using the class for something else by overriding part of its behavior may jeopardize its integrity. The state of the extended object may become unpredictable or inconsistent with the state of the original object. Or the behavior may change in such a way that the subclass can no longer be considered a proper substitute for the base class.

Furthermore, the developer who creates a subclass of your class to override its behavior probably doesn't need all of the internals of the parent class. Still, the subclass inherits those internals (data structures, private methods), and will soon arrive at the point where it has to work around them.

In terms of future development there's another issue: the class that has now been subclassed still has a life of its own. Its clients may have different needs over time, so changes will be made to it. These changes may affect the subclasses too, since they likely rely on a particular implementation detail of the parent class. When that detail changes, the subclass will break.

From all these issues we have to conclude that by allowing subclassing, we end up in an undesirable situation. The futures of the subclass and the parent class become intertwined. Refactoring the parent class will be quite hard, because who knows which subclasses rely on a particular implementation detail? And since the subclass doesn't need all of the parent class's data and behavior, the two may act like they're the same kind of thing, but in reality they aren't.

Replacing is better than overriding

So changing the behavior of a class shouldn't be done by subclassing it and overriding methods. But what else can we do? In most cases, actually modifying the code of the original class isn't possible: either the class is used and relied upon elsewhere in the project, or modifying its code simply isn't an option because it's part of a vendor package.

I'd even say that changing code shouldn't be the preferred way of changing behavior in the first place. If you can change the behavior of an object by reconfiguring it, that is, by swapping out constructor arguments, you should do it.

This reminds us of the Dependency inversion principle, according to which we should depend on abstractions, and the Open/closed principle, which helps us design objects that can be reconfigured without touching their code.
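
A quick sketch of what reconfiguring through constructor arguments looks like (the Mailer and transport classes are made up for this example):

// The Mailer delegates the actual sending to its constructor argument
$mailer = new Mailer(new SmtpTransport('smtp.example.com'));

// Swapping out that argument changes the behavior, without subclassing
// Mailer or touching its code
$mailer = new Mailer(new SpoolingTransport('/var/spool/mail'));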

What about the Template Method pattern?

There's an alternative approach for changing the behavior of a class. It's a behavioral pattern described in the classic "Design Patterns: Elements of Reusable Object-Oriented Software". The pattern is called "Template Method":

Define the skeleton of an algorithm in an operation, deferring some steps to subclasses. Template Method lets subclasses redefine certain steps of an algorithm without changing the algorithm’s structure.

This solution is slightly better than allowing subclasses to override arbitrary parts of a parent class's behavior, since it limits what can be changed. By marking the parent class abstract, it can't be used on its own, which prevents it from having a life of its own. In the Symfony Security component we find an excellent example of the Template Method pattern in the abstract Voter class. It helps users implement their own authorization voter by providing the bulk of the algorithm that is supposed to be the same for all voters. The user only needs to implement supports() and voteOnAttribute(), which are both smaller parts of the overall algorithm. Symfony wouldn't be able to guess these implementations, since they are project-specific.

abstract class Voter implements VoterInterface
{
    public function vote(TokenInterface $token, $subject, array $attributes)
    {
        $vote = self::ACCESS_ABSTAIN;

        foreach ($attributes as $attribute) {
            if (!$this->supports($attribute, $subject)) {
                continue;
            }

            $vote = self::ACCESS_DENIED;
            if ($this->voteOnAttribute($attribute, $subject, $token)) {
                return self::ACCESS_GRANTED;
            }
        }

        return $vote;
    }

    abstract protected function supports($attribute, $subject);

    abstract protected function voteOnAttribute($attribute, $subject, TokenInterface $token);
}

It's really nice, but still, there's nothing in here that couldn't have been solved with composition instead of inheritance. An equivalent solution would've been:

// The user needs to implement this interface, instead of extending from Voter:

interface AttributeVoter
{
    public function supports($attribute, $subject);

    public function voteOnAttribute($attribute, $subject, TokenInterface $token);
}

// The Security package provides this standard Voter implementation:

final class Voter implements VoterInterface
{
    private $attributeVoter;

    public function __construct(AttributeVoter $attributeVoter)
    {
        $this->attributeVoter = $attributeVoter;
    }

    public function vote(TokenInterface $token, $subject, array $attributes)
    {
        $vote = self::ACCESS_ABSTAIN;

        foreach ($attributes as $attribute) {
            // We delegate calls to the injected AttributeVoter

            if (!$this->attributeVoter->supports($attribute, $subject)) {
                continue;
            }

            $vote = self::ACCESS_DENIED;
            if ($this->attributeVoter->voteOnAttribute($attribute, $subject, $token)) {
                return self::ACCESS_GRANTED;
            }
        }

        return $vote;
    }
}
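
Using the composition-based variant might then look something like this (IsOwnerVoter and Document are hypothetical user-land classes, not part of Symfony):

final class IsOwnerVoter implements AttributeVoter
{
    public function supports($attribute, $subject)
    {
        return $attribute === 'IS_OWNER' && $subject instanceof Document;
    }

    public function voteOnAttribute($attribute, $subject, TokenInterface $token)
    {
        // Only the owner of the document gets access
        return $subject->ownerUsername() === $token->getUsername();
    }
}

// Composition instead of inheritance:
$voter = new Voter(new IsOwnerVoter());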

Following the reasoning from "When to add an interface to a class", we might even reconsider the presence of VoterInterface in the example above. If all the voting is done in the same way, that is, if the voting algorithm is the same in all situations, we don't actually want users to write their own implementations of VoterInterface. So we might provide only the Voter class, not the interface (I'm not actually sure this is the case for the Symfony Security component, but it's a design choice worth considering in your own projects).

The proposed solution no longer involves inheritance. It leaves the internals of all the involved classes to themselves; these classes can all be final, and can now be safely refactored. Another advantage is that we can reuse each class in a different scenario. Final classes are more like building blocks that are ready to use, instead of templates that still need to be made concrete.

Composition over inheritance

You may have heard the phrase "composition over inheritance" before; this is what it means. In most cases, you should prefer a solution that involves composition over one that involves inheritance. More specifically: if you feel the need to reconfigure an object, to change parts of an algorithm, or to rewrite part of the implementation, consider creating a new class instead of overriding an existing one. If you need to represent a hierarchy of classes, where subclasses are proper substitutes for their parent classes, that would be the classic situation where you may still consider inheritance. Even then, the result may be better if you don't inherit from concrete parent classes but implement abstract interfaces. Still, this is very tricky to get right (I personally regret most of my previous work involving inheritance), so if you can, you should still prefer composition.

Extension should be a distinct use case

As a class designer you create a class in such a way that it provides a number of useful things its users can do with it. For example, they can:

  • instantiate it,
  • call a method on it,
  • get some information from it,
  • or change something about it.

"Reconfiguring it to change part of its behavior" should also be on this list, as a separate item. And so should "allowing it to be extended to override part of its behavior". It should be a deliberate decision to allow the user to do that with your class. Allowing a class to be extended has (often limiting) consequences for its design, and its future. So at least you shouldn't allow it by default.

"Final" pushes everyone in the right direction

So if allowing users to subclass a class shouldn't be the standard, then not allowing it should be. In other words: adding final to a class declaration should be your default. This simple trick will lead everyone in the right direction: towards classes that are smaller, have fewer maintenance issues, are easier to refactor, and act more like building blocks that can be reused in different parts of the project.

"You can always remove final if you want to"

I've often heard one argument for using final that I wanted to address here: "You can always remove final if you want to." In a sense, this is right: you can always start allowing a class to be extended whenever you want. But the same is true for adding "final" - it's always just a few keystrokes away. Still, as discussed in this article, I don't think that's the right attitude; it's better to close down than to open up. Opening up a class after it has been closed is asking for all the trouble with inheritance described above. Instead of removing "final" from the class declaration, always aim for a solution that replaces part of the existing solution and uses composition to allow for reconfiguration.

"My mocking tool doesn't work with final classes"

One objection against final classes that I've often heard is: "I like what you say, but my mocking tool doesn't work with final classes". Indeed, most don't. It makes sense, because the test doubles that such a tool generates usually look something like this:

final class CanNotBeExtended
{
}

// This produces a Fatal error:

class TestDoubleForCanNotBeExtended32789473246324369823903 extends CanNotBeExtended
{
    // override implementations of parent class here...
}

It's unfortunate, but the tools are not to blame. They'd have to do sneaky things, like hooking into the class loader, to make the class non-final (in fact, that's quite doable). But they don't, so the truth gets slapped in our face. All the tool is saying is that a final class can't be extended - there's nothing problematic about that fatal error. Instead, whenever we feel the need to replace a final class with a test double, we should consider two options:

  1. Maybe we shouldn't want to replace the real thing, i.e. the class.
  2. Maybe we should introduce something we can replace, i.e. an interface.

Option 1 is often useful when you're trying to mock things like entities and value objects. Just don't do it.

Option 2 should be applied in most other cases. When the class should've had an interface, add one, and create a test double for the interface instead.
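
A sketch of option 2 (the Mailer and Message classes are mine, made up for the example):

// The final class gets an interface:
interface Mailer
{
    public function send(Message $message): void;
}

final class SmtpMailer implements Mailer
{
    public function send(Message $message): void
    {
        // talk to the actual SMTP server...
    }
}

// In tests, a hand-written double implements the same interface:
final class FakeMailer implements Mailer
{
    /** @var Message[] */
    public $sent = [];

    public function send(Message $message): void
    {
        $this->sent[] = $message;
    }
}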

Conclusion

In conclusion: make your classes final by default. One trick to help you with that is to modify the IDE template for new classes to automatically add final for you. Also, make the "final" discussion part of your technical code review. Ask: why is this class not final? The burden of proof should be on the author of the class (and if they don't agree, talk to them about the merits of using "final").

Tags: PHP, design, reuse, package design