What is the correct way of designing an aggregate in DDD? For example, you need to create a user, and to create it you need an id, an email, a living address and a born address.
So we can do it like this (I'll use PHP as it's my main language):
class User
{
    private function __construct(
        private UserId $id,
        private Email $email,
        private Address $livingAddress,
        private Address $bornAddress
    ) {}

    public static function create(
        string $uuid,
        string $email,
        string $city,
        string $country,
        string $address,
        string $bornCity,
        string $bornCountry,
        string $bornAddress,
    ): self {
        return new self(
            UserId::fromString($uuid),
            new Email($email),
            new Address($city, $country, $address),
            new Address($bornCity, $bornCountry, $bornAddress)
        );
    }
}
In this case we follow the rule that the aggregate root is responsible for checking all invariants, because it creates all the value objects and each of them carries its own validation. But it adds complexity to the actual User class: we have too many parameters being passed, etc.
Another possible solution is to build it like this:
class User
{
    private function __construct(
        private UserId $id,
        private Email $email,
        private Address $livingAddress,
        private Address $bornAddress
    ) {}

    public static function create(
        UserId $id,
        Email $email,
        Address $livingAddress,
        Address $bornAddress
    ): self {
        return new self(
            $id,
            $email,
            $livingAddress,
            $bornAddress
        );
    }
}
Now the User object is smaller, maybe even more "elegant", but we break the rule that the aggregate should check all invariants.
And the third option is probably to use a Factory. I won't provide full code here, but here is how I see it (a rough sketch follows): the factory takes the raw data, creates the value objects, and passes them to the aggregate, so the aggregate's creation looks like the second example. Again the aggregate is not responsible for all invariants, but a Factory is one of the patterns allowed by DDD, so I think that's fine.
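A minimal sketch of such a factory (the UserFactory class and the fromRawData() method are illustrative names; the value objects and User::create() are the ones from the second example above):

class UserFactory
{
    public function fromRawData(
        string $uuid,
        string $email,
        string $city,
        string $country,
        string $address,
        string $bornCity,
        string $bornCountry,
        string $bornAddress
    ): User {
        // The factory parses the raw input into value objects; each value
        // object enforces its own validation, and the aggregate receives
        // only already-valid types.
        return User::create(
            UserId::fromString($uuid),
            new Email($email),
            new Address($city, $country, $address),
            new Address($bornCity, $bornCountry, $bornAddress)
        );
    }
}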
I know there is probably no single right way, but I want to hear some best practices and suggestions from someone with solid knowledge of DDD about how to do it properly.
we break the rule that the aggregate should check all invariants.
I think you are misunderstanding this rule.
The aggregate is responsible for domain dynamics: how information changes over time. In particular, it is responsible for ensuring that the information stored is internally consistent.
That's not the same problem as input validation, which as a rule we want to solve as close to the boundary as we can manage.
Imagine, if you will, a form submission that looks like
...&userId=alphabet-soup&...
Sure, "alphabet-soup" is a string, but it's not consistent with the schemas described in RFC 4122, so something has gone Very Wrong, and we should be bailing out with a client error rather than forwarding the suspect data to the domain model.
See also Parse, Don't Validate (Alexis King, 2019)
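In PHP terms the idea looks roughly like the sketch below (RegisterUserHandler and BadRequestException are illustrative names, and I am assuming the value objects throw InvalidArgumentException on malformed input):

final class RegisterUserHandler
{
    public function handle(array $request): User
    {
        try {
            // Parse the raw strings into value objects right at the boundary.
            $id     = UserId::fromString($request['userId']);
            $email  = new Email($request['email']);
            $living = new Address($request['city'], $request['country'], $request['address']);
            $born   = new Address($request['bornCity'], $request['bornCountry'], $request['bornAddress']);
        } catch (InvalidArgumentException $e) {
            // "alphabet-soup" fails here, at the edge, and becomes a client
            // error (e.g. HTTP 400) instead of reaching the domain model.
            throw new BadRequestException($e->getMessage());
        }

        return User::create($id, $email, $living, $born);
    }
}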
I am currently working on a DDD system that is composed of several bounded contexts. Two of them are:
Context "account management": Only staff members are allowed to work here. The idea is to manage customer accounts (address, phone numbers, contacts, etc.) and to verify the account of a customer (basically checking whether the data the customer supplied is valid).
Context "website": I can log in as a customer and edit my data (change my address, for example).
Here is the issue:
A user logged in to the account management context is by definition an employee. So I can assume that changes made here are "trustworthy" in the sense of "the data is verified". A simplified variant of the appservice looks like this:
class AccountAppService
{
    public function changeAddress(string $accountId, string $address): void
    {
        $account = $this->accountRepository->ofId(new Guid($accountId));
        $account->changeAddress(new Address($address));
    }
}
This is the appservice I call when an employee changes an address. Note that there is no IdentityService that I inject/use to know who the employee is, as that is not relevant here. The Account entity would emit an AccountAddressChanged event after its changeAddress() method is successfully called, like so:
class Account implements Entity
{
    public function changeAddress(Address $address): void
    {
        $this->address = $address;
        DomainEventSubscriber::instance()->publish(new AccountAddressChanged($this));
    }
}
But I also need to reflect changes as soon as a customer edits data on the website. I plan to do this asynchronously via events such as "AccountAddressChangedViaWebsite". The account management context will subscribe to and handle that event, setting the corresponding account to "unverified" again. So a simplified subscriber in the account management context could look like:
class AccountAddressChangedViaWebsiteSubscriber
{
    public function handle(AccountAddressChangedViaWebsite $event): void
    {
        $accountId = $event->accountId();
        $address = $event->getAddress();
        $this->accountService->changeAddress($accountId, $address);
    }
}
Now the question: employees call the appservice directly, customers go via subscribers. If we say "we have to re-verify an account after the customer updates his data", that sounds like a domain concept.
Domain concepts belong in entities or domain services, but not in application services or subscribers, as far as I know. This implies to me that the following should be avoided (note the last line calling unverifyAccount()):
class AccountAddressChangedViaWebsiteSubscriber
{
    public function handle(AccountAddressChangedViaWebsite $event): void
    {
        $accountId = $event->accountId();
        $address = $event->getAddress();
        $this->accountService->changeAddress($accountId, $address);
        $this->accountService->unverifyAccount($accountId);
    }
}
This is domain logic that is somewhat hidden in a subscriber, which seems odd. I have a gut feeling that this should be the responsibility of a domain service, but how would the domain service know whether it was called via an external event (through a subscriber) or via a command?
I could pass a sort of "Originator" value object that tells me whether the actor causing this is an employee or an external system. Example:
class OriginatorService
{
    public function changeAddress(Originator $originator, Account $account, Address $address): void
    {
        $account->changeAddress($address);
        if (($originator instanceof Employee) === false) {
            $account->unverify();
        }
    }
}
Here I delegate the responsibility of deciding what to do to a domain service. But might double-dispatching the OriginatorService into the Account entity be a good solution? That way the entity could check who caused the change by asking the passed-in OriginatorService and could unverify itself.
I guess I am going down the DDD rabbit hole here, but what are your experiences/best practices in such a case?
The simplest answer is probably to introduce UnverifiedAddress as a concept in your model, rather than trying to treat "Address" as a universal idea with the verification bolted on as an afterthought.
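A rough sketch of that idea (UnverifiedAddress, changeAddressFromWebsite(), verifyAddress() and the unverifiedAddress field are illustrative names, reusing the Address value object and event publishing from the question):

final class UnverifiedAddress
{
    public function __construct(private Address $address) {}

    public function address(): Address
    {
        return $this->address;
    }
}

class Account implements Entity
{
    // Website context: the customer changed the address, so the account now
    // holds an address that still needs verification by a staff member.
    public function changeAddressFromWebsite(UnverifiedAddress $address): void
    {
        $this->unverifiedAddress = $address;
        DomainEventSubscriber::instance()->publish(new AccountAddressChanged($this));
    }

    // Account management context: a staff member has verified the data.
    public function verifyAddress(): void
    {
        $this->address = $this->unverifiedAddress->address();
        $this->unverifiedAddress = null;
    }
}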
I'm trying to store a "Role" object and then get a list of Roles, as shown here:
public class Role
{
    public Guid RoleId { get; set; }
    public string RoleName { get; set; }
    public string RoleDescription { get; set; }
}

// Function that stores a role:
private void StoreRole(Role role)
{
    using (var docSession = docStore.OpenSession())
    {
        docSession.Store(role);
        docSession.SaveChanges();
    }
}

// It then returns, and another function calls this:
public List<Role> GetRoles()
{
    using (var docSession = docStore.OpenSession())
    {
        var roles = from role in docSession.Query<Role>() select role;
        return roles.ToList();
    }
}
However, in GetRoles I am missing the last inserted record/document. If I wait 200 ms and then call this function, the item is there.
So I am out of sync?!
How can I solve this, or alternatively, how could I know when the result is in the document store and available for querying?
I've used transactions, but cannot figure this out. Update and delete are just fine, but when inserting I need to delay my 'List' call.
You are treating RavenDB as if it is a relational database, and it isn't. Load and Store are ACID operations in RavenDB, Query is not. Indexes (necessary for queries) are updated asynchronously, and in fact, temporary indexes may have to be built from scratch when you do a session.Query<T>() without a durable index specified. So, if you are trying to query for information you JUST stored, or if you are doing the FIRST query that requires a temporary index to be created, you probably won't get the data you expect.
There are methods of customizing your query to wait for non-stale results, but you shouldn't lean on these too much, because they're indicative of a bad design. It is better to do the same thing in a way that embraces eventual consistency: either change your model so you get consistency via Load/Store (perhaps you could have one document that defines ALL of the roles in a list?), or change the application flow so you don't need to Store and then immediately Query.
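To make the model suggestion concrete, here is a sketch of the "one document holding all roles" idea (the RolesDocument class and the "roles/all" document id are made up for illustration):

public class RolesDocument
{
    public string Id { get; set; }          // a well-known id such as "roles/all"
    public List<Role> Roles { get; set; }
}

private void StoreRole(Role role)
{
    using (var docSession = docStore.OpenSession())
    {
        // Load is ACID, so this always sees the latest committed state.
        var doc = docSession.Load<RolesDocument>("roles/all");
        if (doc == null)
        {
            doc = new RolesDocument { Id = "roles/all", Roles = new List<Role>() };
        }
        doc.Roles.Add(role);
        docSession.Store(doc);
        docSession.SaveChanges();
    }
}

public List<Role> GetRoles()
{
    using (var docSession = docStore.OpenSession())
    {
        // Load instead of Query: no index involved, so no staleness.
        var doc = docSession.Load<RolesDocument>("roles/all");
        return doc != null ? doc.Roles : new List<Role>();
    }
}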
An additional way of solving this is to query the index with WaitForNonStaleResultsAsOfLastWrite() turned on inside the save function. That way when the save is completed the index will be updated to at least include the change you just made.
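For example (a sketch assuming the classic IDocumentSession query customization available in RavenDB 2.x/3.x; check the API of the version you are using):

private void StoreRole(Role role)
{
    using (var docSession = docStore.OpenSession())
    {
        docSession.Store(role);
        docSession.SaveChanges();

        // Block until the index has caught up with the write we just made,
        // so a subsequent GetRoles() call will see it.
        docSession.Query<Role>()
            .Customize(x => x.WaitForNonStaleResultsAsOfLastWrite())
            .ToList();
    }
}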
You can read more about this here
I have a business requirement to only send permissioned properties in our response payload. For instance, our response DTO may have several properties, and one of them is SSN. If the user doesn't have permission to view the SSN then I never want it to appear in the JSON response. The second requirement is that we send null values if the client has permission to view or change the property. Because of the second requirement, setting the properties that the user cannot view to null will not work; I still have to return null values.
I have a solution that will work. I create an ExpandoObject by reflecting over my DTO and adding only the properties that I need. This is working in my tests.
I have looked at implementing ITextSerializer. I could use that and wrap my response DTO in another object that holds a list of properties to skip. Then I could roll my own SerializeToString() and SerializeToStream(). I don't really see any other way at this point. I can't use JsConfig with a SerializeFn because the properties to skip would change with each request.
So I think that implementing ITextSerializer is a good option. Are there any good examples of it being implemented? I would really like to reuse all the hard work that was already done in the serializer and take advantage of its great performance. In an ideal world I would just need to add a check in WriteType.WriteProperties() to see whether the property is one that should be written, but that is internal and, really, most of them are, so I can't take advantage of them.
If someone has some insight, please let me know! Maybe I am making the implementation of ITextSerializer a lot harder than it really is?
Thanks!
Pull request #359 added the property "ExcludePropertyReference" to the JsConfig and the JsConfigScope. You can now exclude references in scope like I needed to.
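Usage is roughly along these lines (a sketch based on the pull request description; the exact property name, whether it is singular or plural, and the "TypeName.PropertyName" reference format should be checked against your version of ServiceStack.Text):

using (var scope = JsConfig.BeginScope())
{
    // Exclude Test.SSN only for serialization done inside this scope,
    // so the excluded properties can differ per request.
    scope.ExcludePropertyReferences = new[] { "Test.SSN" };

    var json = new Test { Name = "Joe", SSN = "123-45-6789" }.ToJson();
    // json contains Name but not SSN.
}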
I would be hesitant to write my own serializer. I would try to find solutions that you can plug into the existing ServiceStack code. That way you will have to worry less about updating DLLs and breaking changes.
One potential solution would be decorating your properties with a custom attribute that you could reflect upon and obscure the property values. This could be done in the Service before serialization even happens. This would still include values that the user does not have permission to see, but I would argue that if you null those properties out they won't even be serialized to JSON anyway. If you keep all the properties the same, you will keep the benefits of strongly typed DTOs.
Here is some hacky code I quickly came up with to demonstrate this. I would move this into a plugin and make the reflection faster with some sort of property caching, but I think you will get the idea.
Hit the url twice using the following routes to see it in action.
/test?role
/test?role=Admin (hack to pretend to be an authenticated request)
[System.AttributeUsage(System.AttributeTargets.Property)]
public class SecureProperty : System.Attribute
{
    public string Role { get; set; }

    public SecureProperty(string role)
    {
        Role = role;
    }
}

[Route("/test")]
public class Test : IReturn
{
    public string Name { get; set; }

    [SecureProperty("Admin")]
    public string SSN { get; set; }

    public string SSN2 { get; set; }
    public string Role { get; set; }
}

public class TestService : Service
{
    public object Get(Test request)
    {
        // hack to demo roles.
        var usersCurrentRole = request.Role;

        var props = typeof(Test).GetProperties()
            .Where(prop => ((SecureProperty[])prop
                .GetCustomAttributes(typeof(SecureProperty), false))
                .Any(att => att.Role != usersCurrentRole));

        var t = new Test
        {
            Name = "Joe",
            SSN = "123-45-6789",
            SSN2 = "123-45-6789"
        };

        foreach (var p in props)
        {
            p.SetValue(t, "xxx-xx-xxxx", null);
        }

        return t;
    }
}
Require().StartHost("http://localhost:8080/",
    configurationBuilder: host => { });
I created this demo in ScriptCS. Check it out.
In Domain-Driven Design, Order and OrderLines are usually seen as an aggregate, with Order as the root. Normally, once an order is created, it cannot be changed. In my case, however, it can: each order has a state determining whether the order can be changed or not.
In this case, are both Order and OrderLines their own “aggregate root”? I need to be able to update order lines, so I figure they should have their own repository. But I do not want to retrieve or persist order lines without the order. So this indicates that there is still an aggregate where Order is the root, with a factory method to create order lines (Order.CreateOrderLine(quantity, text, …)).
Another approach could be to update the Order when the order lines collection has been modified, and then call UpdateOrder(Order). I would need some way of detecting that only the collection should be updated, and not the Order itself (using Entity Framework).
What do you think?
Order lines shouldn't be an aggregate of their own, and don't need their own repository. Your aggregate should be set up something like this...
public class Order
{
    private List<OrderLine> _orderLines;
    private OrderState _orderState;

    public IEnumerable<OrderLine> OrderLines
    {
        get { return _orderLines.AsReadOnly(); }
    }

    public OrderState Status
    {
        get { return _orderState; }
    }

    public void DeleteOrderLine(Guid orderLineID)
    {
        if (Status.IsProcessed)
            throw new InvalidOperationException("You cannot delete items from a processed order");

        OrderLine lineToRemove = _orderLines.Find(ol => ol.Id == orderLineID);
        _orderLines.Remove(lineToRemove);
    }

    public void AddOrderLine(Product product, int quantity)
    {
        if (Status.IsProcessed)
            throw new InvalidOperationException("You cannot add items to a processed order");

        OrderLine line = new OrderLine(product.ProductID, (product.Price * quantity), quantity);
        _orderLines.Add(line);
    }
}
Entity Framework has some built-in features to detect changes to your objects. This is explained here (conveniently with an order/order lines example): http://msdn.microsoft.com/en-us/library/dd456854.aspx
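A sketch of how that could look for this aggregate (StoreContext, its Orders set, the Id property and the orderId/product/quantity arguments are assumptions for illustration; the lambda form of Include needs the System.Data.Entity namespace):

public void AddLineToOrder(Guid orderId, Product product, int quantity)
{
    using (var context = new StoreContext())
    {
        // Load the root together with its lines so both are tracked by the same context.
        var order = context.Orders
            .Include(o => o.OrderLines)
            .Single(o => o.Id == orderId);

        // Mutate the aggregate only through its own methods.
        order.AddOrderLine(product, quantity);

        // SaveChanges runs change detection and persists the modified
        // OrderLines collection together with the Order.
        context.SaveChanges();
    }
}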
I have an account object that creates a user like so:
public class Account
{
    public ICollection<User> Users { get; set; }

    public User CreateUser(string email)
    {
        User user = new User(email);
        user.Account = this;
        Users.Add(user);
        return user;
    }
}
In my service layer, when creating a new user, I call this method. However, there is a rule that the user's email MUST be unique to the account, so where does this go? To me it should go in the CreateUser method, with an extra line that just checks that the email is unique to the account.
However, if I were to do this then ALL the users for the account would need to be loaded, and that seems like a bit of an overhead to me. It would be better to query the database for the user's email, but doing that in the method would require a repository in the account object, wouldn't it? Maybe the answer then is, when loading the account from the repository, instead of doing:
var account = accountRepository.Get(12);
// instead do
var account = accountRepository.GetWithUserLoadedOnEmail(12, "someone@example.com");
Then the account object could still check the Users collection for the email and it would have been eagerly loaded in if found.
Does this work? What would you do?
I'm using NHibernate as an ORM.
First off, I do not think you should use exceptions to handle "normal" business logic like checking for duplicate email addresses. This is a well-documented anti-pattern and is best avoided. Keep the constraint on the DB and handle any duplicate exceptions, because they cannot be avoided entirely, but try to keep them to a minimum by checking first. I would not recommend locking the table.
Secondly, you've put the DDD tag on this question, so I'll answer it in a DDD way. It looks to me like you need a domain service or factory. Once you have moved this code into a domain service or factory, you can inject a UserRepository into it and call it to see whether a user already exists with that email address.
Something like this:
public class CreateUserService
{
    private readonly IUserRepository userRepository;

    public CreateUserService(IUserRepository userRepository)
    {
        this.userRepository = userRepository;
    }

    public bool CreateUser(Account account, string emailAddress)
    {
        // Check if there is already a user with this email address
        User userWithSameEmailAddress = userRepository.GetUserByEmailAddress(emailAddress);
        if (userWithSameEmailAddress != null)
        {
            return false;
        }

        // Create the new user; depending on your aggregates this could be a factory method on Account
        User newUser = new User(emailAddress);
        account.AddUser(newUser);

        return true;
    }
}
This allows you to separate the responsibilities a little and use the domain service to coordinate things. Hope that helps!
If you have properly specified the constraints on the users table, the add should throw an exception telling you that there is already a duplicate value. You can either catch that exception in the CreateUser method and return null or some duplicate user status code, or let it flow out and catch it later.
You don't want to test if it exists in your code and then add, because there is a slight possibility that between the test and the add, someone will come along and add the same email, which would cause the exception to be thrown anyway...
public User CreateUser(string email)
{
    try
    {
        User user = new User(email);
        user.Account = this;
        user.Insert();
        return user;
    }
    catch (SqlException e)
    {
        // It would be best to check for the exception code from your db...
        return null;
    }
}
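To act on the comment about checking the exception code: against SQL Server, a unique-constraint violation surfaces as error number 2627 (constraint) or 2601 (duplicate key in a unique index), so the catch could be narrowed roughly like this (a sketch; adjust for your own database and data access layer):

public User CreateUser(string email)
{
    try
    {
        User user = new User(email);
        user.Account = this;
        user.Insert();
        return user;
    }
    catch (SqlException e)
    {
        // SQL Server: 2627 = unique constraint violation, 2601 = duplicate key in a unique index.
        if (e.Number == 2627 || e.Number == 2601)
        {
            return null; // duplicate email
        }
        throw; // anything else is unexpected, let it bubble up
    }
}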
Given that "the rule that the users email MUST be unique to the account", then the most important thing is to specify in the database schema that the email is unique, so that the database INSERT will fail if the email is duplicate.
You probably can't prevent two users adding the same email nearly-simultaneously, so the next thing is that the code should handle (gracefully) an INSERT failure cause by the above.
After you've implemented the above, seeing whether the email is unique before you do the insert is just optional.