Shared value object in concept but with different validation rules


I have read the similar questions Is there any concept in DDD about shared value objects and Value object design rules in ddd and they both make sense when a VO is 100% shared.

I have multiple domain entities that share the same Name value object (VO) concept.

public record Name
{
    public string Value { get; }

    private const uint MinLength = ValidationConstants.MinNameLength;

    private const uint MaxLength = ValidationConstants.MaxNameLength;

    public Name(string value)
    {
        value = value.Trim();

        if (!value.LengthIsBetween(MinLength, MaxLength))
        {
            throw new InvalidResourceNameException(
                $"Name must be provided and be between {MinLength} and {MaxLength} characters (inclusive)"
            );
        }

        Value = value;
    }
}

Where the concept differs between the entities is in the MinNameLength and MaxNameLength allowed lengths. When constructing the Name I could pass in the MinNameLength and MaxNameLength values, but this doesn't feel right, as the VO should be responsible for its own invariants (correct me if I am wrong?).

Would it be more in keeping with DDD to repeat the Name VO for each entity, or to define a Name VO that can then be extended to meet each entity's specific requirements?

public record EntityName : Name
{
    public EntityName(string value) : base(value) { }

    // Entity-specific stuff
}

There is 1 answer

Accepted answer by VoiceOfUnreason

Where the concept differs between the entities is in the MinNameLength and MaxNameLength allowed lengths. When constructing the Name I could pass in the MinNameLength and MaxNameLength values, but this doesn't feel right, as the VO should be responsible for its own invariants (correct me if I am wrong?).

TL;DR - if you have two different policies, you probably have two different "value objects", and in a programming environment where you are intending that static analysis tools can detect when the "wrong" value object is passed, then that will normally mean two different types.

In the case where these two different types are sufficiently similar, then they might have a common base type, or they might have traits/mixins in common -- that will help reduce the amount of "duplication" at work.
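In C#, that common-base-type shape might look something like the following sketch. The type names and length bounds here are hypothetical, not taken from the question; the point is only that the shared validation lives in one place while each entity's name remains a distinct type.

```csharp
using System;

// One shared validation helper in an abstract base; each policy gets
// its own concrete type so the compiler can tell them apart.
public abstract record BoundedName
{
    public string Value { get; }

    protected BoundedName(string value, int minLength, int maxLength)
    {
        value = (value ?? string.Empty).Trim();
        if (value.Length < minLength || value.Length > maxLength)
        {
            throw new ArgumentException(
                $"Name must be between {minLength} and {maxLength} characters (inclusive)");
        }
        Value = value;
    }
}

// Hypothetical entity names: same concept, different policies, different types.
public sealed record CustomerName : BoundedName
{
    public CustomerName(string value) : base(value, 1, 50) { }
}

public sealed record ProductName : BoundedName
{
    public ProductName(string value) : base(value, 3, 100) { }
}
```

With this arrangement, a method declared as `void Rename(ProductName name)` simply cannot be handed a CustomerName; the mismatch is a compile error rather than a runtime surprise.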


OK, longer version:

public record Name
{
    public string Value { get; }
    
    // ...
}

Here's the big riddle: what assumptions are clients allowed to make about Name.Value?

What we're really talking about here is the contract of Name; if the client satisfies all of the preconditions of the contract, what are the postconditions that the Name type promises in exchange?

If we have one context that promises that Name.Value will be fewer than ten characters, and another context that promises that Name.Value will be more than ten characters, then we have two different contracts.

And if we have two different contracts, then we will normally want two different types, so that the automatic checking of the machine can detect faults that connect consumers of one contract with providers of a different contract.

(These aren't DDD ideas, of course -- the lineage goes back to Hoare 1969; in an "object oriented" context, you are more likely to hear these ideas expressed in the language of Bertrand Meyer's "design by contract".)
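To make the ten-character example above concrete, here is a sketch (all names hypothetical) of the two contracts as two types, with a consumer that depends on one of them:

```csharp
using System;

// Contract: Value is fewer than ten characters.
public sealed record ShortName
{
    public string Value { get; }
    public ShortName(string value)
    {
        if (value.Length >= 10)
            throw new ArgumentException("ShortName promises fewer than ten characters.");
        Value = value;
    }
}

// Contract: Value is more than ten characters.
public sealed record LongName
{
    public string Value { get; }
    public LongName(string value)
    {
        if (value.Length <= 10)
            throw new ArgumentException("LongName promises more than ten characters.");
        Value = value;
    }
}

public static class BadgePrinter
{
    // This consumer depends on the ShortName contract.
    public static string Print(ShortName name) => $"[{name.Value}]";
}

// BadgePrinter.Print(new LongName("a much longer name"));  // does not compile
```

The commented-out last line is exactly the fault the machine can now catch: a provider of one contract wired to a consumer of the other.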

So what we see here:

    public Name(string value)
    {
        value = value.Trim();

        if (!value.LengthIsBetween(MinLength, MaxLength))
        {
            throw new InvalidResourceNameException(
                $"Name must be provided and be between {MinLength} and {MaxLength} characters (inclusive)"
            );
        }

        Value = value;
    }

Is an attempt to dynamically express the precondition of the Name constructor. Which is to say, in order that we can be guaranteed that Name can fulfill its entire contract (including the postconditions on Name.Value described above), the caller of the Name constructor must pass a string value satisfying some preconditions.

Part of what the Name constructor is doing here is acting as a firewall - we can't (because of the design of the programming language we are using to implement our solution) express the static constraints on the general purpose string data type, so we instead create our value type and have a bunch of statically verified constraints on that, with the only string -> Name function having a dynamic check.
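The firewall shape can be sketched as follows (bounds and type names are hypothetical): there is exactly one string -> Name conversion carrying the dynamic check, and everything downstream accepts Name, never a raw string, so it can rely on the invariant without re-validating.

```csharp
using System;

public sealed record Name
{
    public string Value { get; }

    // The only string -> Name path: the single dynamic check in the system.
    public Name(string value)
    {
        value = (value ?? string.Empty).Trim();
        if (value.Length is < 1 or > 100)   // hypothetical bounds
            throw new ArgumentException("Name must be 1-100 characters (inclusive).");
        Value = value;
    }
}

public static class Greeter
{
    // Downstream code never sees an unvalidated name; the type is the firewall.
    public static string Greet(Name name) => $"Hello, {name.Value}!";
}
```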

Now the bad news: the code above is following the patterns that were designed for detecting programmer errors.

And those patterns look like they'd be suitable for sanitizing untrusted inputs as well; but that isn't quite the same problem, and it isn't at all clear that we should be using the solution to one problem to solve the other.

(Recommended reading: Parse, Don't Validate by Alexis King.)

One way of expressing the tension: programmer errors are not supposed to happen, so they certainly qualify as an exceptional condition, and we would certainly prefer that our design not be cluttered with a bunch of explicit error handling code. On the other hand, invalid inputs are certainly not an exceptional condition, especially near system boundaries, so it is much less clear that exceptions are the right way to design our sanitization code.

(Either way "works", of course, in the sense that the computer will interpret the instructions correctly. We're really talking about the implications of the design, and what effects the different trade-offs have on short- and long-term developer productivity.)
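For boundary input, the "parse, don't validate" idea often shows up in C# as the familiar Try-pattern: a non-throwing factory for untrusted input, alongside (or instead of) a throwing constructor for programmer errors. A sketch, with hypothetical names and bounds:

```csharp
using System;

public sealed record UserName
{
    public string Value { get; }

    private UserName(string value) => Value = value;

    // Non-throwing entry point for untrusted boundary input: invalid input
    // is an expected outcome here, not an exceptional one.
    public static bool TryCreate(string? input, out UserName? name)
    {
        var trimmed = (input ?? string.Empty).Trim();
        if (trimmed.Length is >= 1 and <= 100)   // hypothetical bounds
        {
            name = new UserName(trimmed);
            return true;
        }

        name = null;
        return false;
    }
}
```

A caller at the system boundary then handles the failure case explicitly: `if (UserName.TryCreate(rawInput, out var name)) { /* proceed with name */ } else { /* report the invalid input */ }` -- no exception, and downstream code still only ever sees a valid UserName.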