I am currently using J Oliver's EventStore, which, as I understand it, uses GUIDs for the Stream ID, and this is what is used to build my aggregate root.
From a CQRS and DDD perspective, I should be thinking about the domain, not GUIDs.
So, if I do a GET (MVC client), am I right to say that my URL should carry the identity of my domain object (aggregate root), and from that I get the GUID from the read store, which is then used to build my aggregate root from the event store?
Or should I pass the GUID around to my forms and pass it back as a hidden form variable? At least that way I know the aggregate root ID and do not have to query the read store.
I suppose the first way is the correct one (not using GUIDs in forms), as then all my GETs and POSTs deal with the identities of domain objects rather than GUIDs, which the client will not know.
I suppose this also allows me to build a REST-based API which focuses on resources and their identities rather than system-generated GUIDs. Please correct me if I am wrong.
In my opinion you are on the right track here. The UI should rely solely on the read model and not really care about the aggregates. Then, when you need to send a command, you can use the read model to get the ID of the aggregate you are interested in. The read model should be very fast to read from anyway (that's the whole reason behind using different models for reads and writes) and very easy to cache if you need to. This will also give you much nicer URLs.
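To make the suggested flow concrete, here is a minimal sketch in ASP.NET MVC terms. Everything here (IReadModel, ICommandBus, ProductDto, the slug-based identity) is hypothetical scaffolding to illustrate the shape, not a prescribed API:

    using System;
    using System.Web.Mvc;

    // Hypothetical read-store facade and command dispatcher.
    public interface IReadModel { ProductDto GetProductBySlug(string slug); }
    public interface ICommandBus { void Send(object command); }

    public class ProductDto
    {
        public Guid AggregateId { get; set; } // the event store's stream GUID
        public string Slug { get; set; }      // the domain identity used in URLs
        public string Name { get; set; }
    }

    public class RenameProduct
    {
        public Guid AggregateId { get; private set; }
        public string NewName { get; private set; }
        public RenameProduct(Guid id, string newName) { AggregateId = id; NewName = newName; }
    }

    public class ProductsController : Controller
    {
        private readonly IReadModel _readModel;
        private readonly ICommandBus _commandBus;

        public ProductsController(IReadModel readModel, ICommandBus commandBus)
        {
            _readModel = readModel;
            _commandBus = commandBus;
        }

        // GET /products/blue-widget: the URL carries the domain identity, not a GUID.
        public ActionResult Details(string slug)
        {
            return View(_readModel.GetProductBySlug(slug));
        }

        // POST: resolve slug -> GUID via the read store, then send the command
        // to the write side, which rebuilds the aggregate from the event store.
        [HttpPost]
        public ActionResult Rename(string slug, string newName)
        {
            var dto = _readModel.GetProductBySlug(slug);
            _commandBus.Send(new RenameProduct(dto.AggregateId, newName));
            return RedirectToAction("Details", new { slug });
        }
    }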
I have two questions related to CQRS and Domain Driven Design (DDD).
As far as I understand the segregation idea behind CQRS, one would have two separate models, a read model and a write model. Only the write model would have access to and use the business domain model by means of commands. The read model, however, directly translates database content to DTOs by means of queries and has no access to the business domain at all.
For context: I am writing a web application back-end that provides calculation services for particle physics. Now my questions:
1.) My business domain logic contains some functions that calculate mathematical values as output from given measurement-system configurations as input. So, technically, these are read-only queries that calculate the values on the fly and do not change any state in any model. Thus, they should be part of the read model. However, because the functions are heavily domain-related, they have to be part of the domain model, which again is part of the write model.
How should I make these calculation functions available for the front-end via my API when the read model, that should contain all the queries, does not have any access to the domain model?
Do I really have to trigger a command to save all the calculations to the database such that the read model can access the calculation results? These "throw-away" calculations will only be used short-term by the front-end and nobody will ever have to access the persistent calculation results later on.
It's the measurement configuration that has to be persistent, not the calculation results. Those will be re-calculated many times, whenever the user hits the "calculate" button on the front-end.
2.) I also feel like I duplicate quite a bit of data-validation code, because both the read model and the write model have to deserialize and validate the same or very similar request parameters in the process chain: HTTP request body -> JSON -> unvalidated DTO -> validated value -> command/query.
How should I deal with that? Can I share validation code between read model and write model? This seems to dissolve the segregation.
Thanks in advance for all your help and ideas.
I think what you have is a set of domain services that, given an input, return an output.
As you said, these services are located in the domain. But nothing prevents you from using them in the read model. As long as you don't change the domain inside the functions, you can use them in any layer above the domain.
If, for any reason, this solution is not viable (because, for example, the services require domain objects that you cannot or don't want to build on the query side), you can always wrap the domain services inside application services. There you take a plain object as input, do all the transformations to the domain one, call the domain service, and return the resulting value.
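A short sketch of such a wrapper. All names (MeasurementConfiguration, FieldStrengthCalculator, the DTOs) are invented for illustration, and the formula is a placeholder:

    using System;

    // Domain layer: a value object plus a pure, stateless calculation service.
    public class MeasurementConfiguration
    {
        public double Current { get; private set; }
        public double Turns { get; private set; }
        public double Length { get; private set; }

        public MeasurementConfiguration(double current, double turns, double length)
        {
            if (length <= 0) throw new ArgumentException("Length must be positive.");
            Current = current; Turns = turns; Length = length;
        }
    }

    public class FieldStrengthCalculator
    {
        // Read-only: computes an output from the input, changes no state.
        public double Calculate(MeasurementConfiguration c) => c.Turns * c.Current / c.Length;
    }

    // Application layer: maps plain DTOs to domain objects and back, so the
    // query side never depends on domain types directly.
    public class CalculationRequestDto { public double Current, Turns, Length; }
    public class CalculationResultDto { public double Value; }

    public class CalculationAppService
    {
        private readonly FieldStrengthCalculator _calculator = new FieldStrengthCalculator();

        public CalculationResultDto Calculate(CalculationRequestDto r)
        {
            var config = new MeasurementConfiguration(r.Current, r.Turns, r.Length);
            return new CalculationResultDto { Value = _calculator.Calculate(config) };
        }
    }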
For the second question, you can build a validation service in the domain layer, as a set of services or simple functions. Again, nothing prevents you from using them in the validation steps.
I've done the same in my last web app: the validation step of the form data calls a set of domain services that are also used when I build the domain objects during the handling of a command. Changing the validation in the domain then also changes the web-related validation. You validate twice (before building the command and during the building of the domain object), but that's OK.
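As a tiny illustration (names made up), one domain rule can back both validation steps:

    using System;

    // Domain layer: the single place where the rule lives.
    public static class EmailRules
    {
        public static bool IsValid(string email)
        {
            return !string.IsNullOrWhiteSpace(email) && email.Contains("@");
        }
    }

    // Domain layer: the value object enforces the rule when a command is handled.
    public class Email
    {
        public string Value { get; private set; }

        public Email(string value)
        {
            if (!EmailRules.IsValid(value))
                throw new ArgumentException("Invalid e-mail address.", "value");
            Value = value;
        }
    }

    // Web layer: form validation calls the same rule before building the command,
    // e.g. if (!EmailRules.IsValid(form.Email)) return ValidationError(...);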
Take a look at ports/adapters or the onion architecture: it helps a lot in understanding what should stay inside a layer and what can be used by an overlapping layer.
Just to add my two pence worth.
We have microservices that use Mongo as a backend. We use .NET Core with MediatR (https://github.com/jbogard/MediatR) to implement a CQRS pattern in the microservices. We have .Query and .Command features.
Now, in our world we never went to Event Sourcing. So, at the Mongo level, our read and write models (entities) are the same. What worked best for us was to have a single entity model which could be transformed into different models (if needed) via each command/query handler.
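For example, a query feature in that style might look roughly like this. The Customer entity and the query are made up; IRequest/IRequestHandler come from MediatR and IMongoCollection from the official MongoDB driver:

    using System;
    using System.Threading;
    using System.Threading.Tasks;
    using MediatR;
    using MongoDB.Driver;

    // The single entity model stored in Mongo.
    public class Customer
    {
        public Guid Id { get; set; }
        public string Name { get; set; }
    }

    // A .Query feature: a request plus a handler that shapes the shared entity
    // into whatever this particular feature needs.
    public class GetCustomerName : IRequest<string>
    {
        public Guid Id { get; set; }
    }

    public class GetCustomerNameHandler : IRequestHandler<GetCustomerName, string>
    {
        private readonly IMongoCollection<Customer> _customers;

        public GetCustomerNameHandler(IMongoCollection<Customer> customers)
        {
            _customers = customers;
        }

        public async Task<string> Handle(GetCustomerName request, CancellationToken ct)
        {
            var entity = await _customers.Find(c => c.Id == request.Id)
                                         .FirstOrDefaultAsync(ct);
            return entity == null ? null : entity.Name;
        }
    }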
If you are doing pure CQRS via event sourcing (so different read and write models) or something else, then please ignore me! But this worked best for us.
Recently I've been trying to make my web application use separated layers.
If I understand the concept correctly I've managed to extract:
Domain layer
This is where my core domain entities, aggregate roots, and value objects reside. I'm forcing myself to have a pure domain model, meaning I do not have any service definitions here. The only things I define here are the repositories, which are actually hidden because Axon Framework implements them for me automatically.
Infrastructure layer
This is where Axon implements the repository definitions for my aggregates in the domain layer.
Projection layer
This is where the event handlers are implemented to project the data for the read model, using MongoDB to persist it. It does not know anything other than the event model (plain data classes in Kotlin).
Application layer
This is where the confusion starts.
Controller layer
This is where I'm implementing the GraphQL/REST controllers. This controller layer uses the command and query model, meaning it has knowledge of the Domain Layer commands as well as the Projection Layer query model.
As I've mentioned, the confusion starts with the application layer; let me explain it a bit with a simplified example.
Consider that I want a domain model to implement Pokemon fighting logic. I need to use the PokemonAPI, which would provide me data such as the Pokemon names, stats, etc.; this is an external API I would use to get some data.
Let's say I have the domain implemented like this:
(Keep in mind that I've stretched this implementation so that it surfaces some of the issues that I have in my own domain.)
Pokemon {
    id: ID
}

PokemonFight {
    id: ID
    pokemon_1: ID
    pokemon_2: ID

    handle(cmd: Create) {
        publish(PokemonFightCreated)
    }

    handle(cmd: ProvidePokemonStats) {
        // providing the stats for the pokemons
        publish(PokemonStatsProvided)
    }

    handle(cmd: Start) {
        // fights only when both pokemon stats were provided
        publish(PokemonsFought)
    }
}
The flow of data between the layers would be like this:
User -> [HTTP] -> Controller -> [CommandGateway] -> (Application | Domain) -> [EventGateway] -> (Application | Domain)
Let's assume that two pokemons are created, and the use case of the pokemon fight is basically: when the fight gets created, the stats are provided, and when the stats are provided, the fight automatically starts.
This use-case logic can be solved by using an event processor or even a saga.
However, as you can see in the PokemonFight aggregate, there is a ProvidePokemonStats command, which basically provides their stats; but my domain does not know how to get such data. That data is provided by the PokemonAPI.
This confuses me a bit, because the use case would need to be implemented on both layers: in the application (so it provides the stats using the external API) and also in the domain? The domain use case would just use purely domain concepts. But shouldn't I have one place for the use cases?
If I think about it, the only purpose of a saga/event processor that lives in the application layer is to provide proper data to my domain so it can continue with its use cases. So when the external API fails, I send a command to the domain and then it can decide what to do.
For example, I could just put every saga/event processor in the application layer, so when I decide to change some automation flow I know exactly which module I need to edit and where to find it.
The other confusion is when I have multiple domains and I want to create a use case that uses many of them and connects the data between them. It immediately rings in my brain that this should be the application layer, which would use the domain APIs to control the use case, because I don't think that I should add a dependency on a different domain in the core one.
TL;DR
What layer should be responsible for implementing an automated process between aggregates (it can be a single aggregate, but you know what I mean) if the process requires some external API data?
What layer should be responsible for implementing an automated process between aggregates that live in different domains / microservices?
Thank you in advance, and I'm also sorry if what I've written sounds confusing or is too much text; I would highly appreciate any answers about layering DDD applications and the proper locations of the components.
I will try to put it clearly. If you use CQRS:
On the Write Side (commands): the application services are the command handlers. A command handler accesses the domain (repositories, aggregates, etc.) in order to implement a use case.
If the use case needs to access data from another bounded context (microservice), it uses an infrastructure service (via dependency injection). You define the infrastructure service interface in the application service layer, and the implementation in the infrastructure layer. The infrastructure then accesses the remote microservice, via HTTP/REST for example, or through integration events.
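A C# sketch of that wiring, reusing the Pokemon example from the question (the repository, command, and service names are all hypothetical):

    using System;

    // Application layer: the interface (port) the command handler depends on.
    public interface IPokemonStatsService
    {
        PokemonStats GetStats(Guid pokemonId);
    }

    public class PokemonStats { public int Attack; public int Defense; }
    public class ProvidePokemonStatsCommand { public Guid FightId, Pokemon1, Pokemon2; }
    public interface IPokemonFightRepository { PokemonFight GetById(Guid id); }
    public class PokemonFight
    {
        public void ProvideStats(PokemonStats a, PokemonStats b) { /* domain logic */ }
    }

    // Application layer: the command handler implements the use case.
    public class ProvidePokemonStatsHandler
    {
        private readonly IPokemonStatsService _stats;     // injected infrastructure service
        private readonly IPokemonFightRepository _fights; // injected repository

        public ProvidePokemonStatsHandler(IPokemonStatsService stats,
                                          IPokemonFightRepository fights)
        {
            _stats = stats;
            _fights = fights;
        }

        public void Handle(ProvidePokemonStatsCommand cmd)
        {
            var fight = _fights.GetById(cmd.FightId);
            fight.ProvideStats(_stats.GetStats(cmd.Pokemon1), _stats.GetStats(cmd.Pokemon2));
        }
    }

    // Infrastructure layer: the adapter that actually calls the remote PokemonAPI.
    public class HttpPokemonStatsService : IPokemonStatsService
    {
        public PokemonStats GetStats(Guid pokemonId)
        {
            // The HTTP call to the external PokemonAPI would go here.
            throw new NotImplementedException();
        }
    }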
On the Read Side (queries): the application service is the query method (I think you call it a projection), which accesses the database directly. There's no domain here.
Hope it helps.
I do agree your wording might be a bit vague, but a couple of things pop up in my mind which might steer you in the right direction.
Mind you, the wording makes it so that I am not 100% sure whether this is what you're looking for. If it isn't, please comment and correct me on the answer I'll provide, so I can update it accordingly.
Now, before your actual question, I'd firstly like to point out the following.
What I am guessing you're mixing up is the notion of the Messages and your Domain Model belonging to the same layer. To me personally, the Messages (a.k.a. your Commands, Events, and Queries) are your public API. They are the language your application speaks, so they should be freely sharable with any component and/or service within your Bounded Context.
As such, any component in your 'application layer' contained in the same Bounded Context should be allowed to be aware of this public API. The one in charge of the API will be your Domain Model, that's true, but these concepts have to be shared to be able to communicate with one another.
That said, the component which will provide the stats to your aggregate can be viewed from two directions, I think:
1) It's a component that handles a specific 'Start Pokemon Match' command. This component has the smarts to know to first retrieve the stats prior to dispatching a Create and a ProvidePokemonStats command, thus ensuring it will consistently create a working match with the stats in it, by not dispatching either command if the external stats-retrieval API fails.
2) Your angle in the question is to have an Event Handling Component that reacts to the creation of a Match. From here, I'd say a short-lived saga would be in place, as you'd need to deal with the failure scenario of not being able to retrieve the stats. A regular Event Handler is likely too lean to deal with this correctly.
Regardless of which of the two options you select, this service will deal with messages, a.k.a. your public API. As such it's within your application, and not a component others will ever deal with directly.
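A rough, framework-neutral C# sketch of that short-lived saga (in Axon this would be a @Saga with @SagaEventHandler methods; every name below is hypothetical):

    using System;

    public class PokemonFightCreated { public Guid FightId, Pokemon1, Pokemon2; }
    public class PokemonStatsProvided { public Guid FightId; }
    public class PokemonStats { }

    public interface IStatsPort { PokemonStats GetStats(Guid pokemonId); }
    public interface ICommandSender { void Send(object command); }

    public class ProvidePokemonStats
    {
        public readonly Guid FightId; public readonly PokemonStats Stats1, Stats2;
        public ProvidePokemonStats(Guid f, PokemonStats s1, PokemonStats s2)
        { FightId = f; Stats1 = s1; Stats2 = s2; }
    }
    public class CancelPokemonFight
    {
        public readonly Guid FightId;
        public CancelPokemonFight(Guid f) { FightId = f; }
    }
    public class StartPokemonFight
    {
        public readonly Guid FightId;
        public StartPokemonFight(Guid f) { FightId = f; }
    }

    // Application layer: reacts to domain events, fetches the external data,
    // and feeds it back to the domain as commands.
    public class PokemonFightSaga
    {
        private readonly IStatsPort _stats;
        private readonly ICommandSender _commands;

        public PokemonFightSaga(IStatsPort stats, ICommandSender commands)
        {
            _stats = stats;
            _commands = commands;
        }

        public void On(PokemonFightCreated e)
        {
            try
            {
                _commands.Send(new ProvidePokemonStats(
                    e.FightId, _stats.GetStats(e.Pokemon1), _stats.GetStats(e.Pokemon2)));
            }
            catch (Exception) // e.g. the external stats API is down
            {
                // Tell the domain and let it decide what to do (cancel, retry, ...).
                _commands.Send(new CancelPokemonFight(e.FightId));
            }
        }

        // Once the stats are in the aggregate, start the fight; the saga then ends.
        public void On(PokemonStatsProvided e)
        {
            _commands.Send(new StartPokemonFight(e.FightId));
        }
    }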
When it comes to your second question, I feel the same notion still holds. Two distinct applications/microservices suggest all the more that you're talking about two different Bounded Contexts. There, certainly, a Saga would be in place to coordinate the operations between both contexts. Note that between Bounded Contexts you want to share consciously when it comes to the public API, as you'd ideally not expose everything to the outside world.
Hope this helps you out and if not, like I said, please comment and provide me guidance how to answer your question properly.
Consider a typical Breeze controller that limits the results of a query to entities that the logged in user has access to. When the browser calls SaveChanges, does Breeze verify on the server that the entities reported as modified are from the original set?
To put it another way, does the EFContextProvider (in the case of Entity Framework) keep track of entities that have been handed out, so it can check against malicious data passed to SaveChanges? Or does BeforeSaveEntity need to validate that the user has access to the changed entities?
You must guard against malicious data in your BeforeSaveEntity or BeforeSaveEntities methods (a sketch follows the list below).
The idea that the EFContextProvider would keep track of entities that have already been handed out is probably something that we would NOT want to do, because:
1. The EFContextProvider would no longer be stateless, which was a design goal to facilitate scaling.
2. You would still need to guard against malicious data for "Added" entities in the BeforeXXX methods.
3. It is actually a valid use case for some of our users to "modify" entities without having first queried them.
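For illustration, such a guard might look like this. EFContextProvider, EntityInfo, and the BeforeSaveEntity hook are Breeze types; the Order entity, its UserId column, AppDbContext, and GetCurrentUserId() are hypothetical:

    using System;
    using Breeze.WebApi; // the exact namespace varies by Breeze version

    public class SecuredContextProvider : EFContextProvider<AppDbContext>
    {
        // Return true to include the entity in the save; throw to reject the save.
        protected override bool BeforeSaveEntity(EntityInfo entityInfo)
        {
            var order = entityInfo.Entity as Order;
            if (order != null && order.UserId != GetCurrentUserId())
            {
                throw new InvalidOperationException("Not authorized to save this entity.");
            }
            return true;
        }

        private Guid GetCurrentUserId()
        {
            // Hypothetical: read the authenticated user from the request context.
            throw new NotImplementedException();
        }
    }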
I'm rather new at this, but I've come to understand the security risks of using Breeze to expose an IQueryable<>. Would someone please suggest some best practices (or merely some recommendations) for securing an IQueryable collection that's exposed to JavaScript? Thanks.
I would not expose any data via IQueryable that should not be sent to the client via a random query. So a projection could be exposed, or a DTO.
I'm not sure if this answers your question, though... What "security risks" are you worried about?
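To illustrate the projection/DTO idea (the Customer entity, AppDbContext, and CustomerDto are made up; [BreezeController] and EFContextProvider are Breeze types):

    using System;
    using System.Linq;
    using System.Web.Http;
    using Breeze.WebApi; // the exact namespace varies by Breeze version

    [BreezeController]
    public class CustomersController : ApiController
    {
        private readonly EFContextProvider<AppDbContext> _ctx =
            new EFContextProvider<AppDbContext>();

        // Expose a shaped projection instead of the raw entity set: whatever
        // query the client composes, it can only touch the fields selected here.
        [HttpGet]
        public IQueryable<CustomerDto> Customers()
        {
            return _ctx.Context.Customers
                .Select(c => new CustomerDto { Id = c.Id, Name = c.Name });
            // Sensitive columns (e.g. a credit limit) never leave the server.
        }
    }

    public class CustomerDto
    {
        public Guid Id { get; set; }
        public string Name { get; set; }
    }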
I second this question. But to add some specifics along the lines of the questions that Ward asked:
In securing queryable services, two traditional issues come to mind:
1) Vertical security: which items is the currently logged-in user (based on user identity or roles) NOT allowed to see in the UI? Those need to be removed from the queryable list. IMO, this can be done as part of the queryable ActionFilter magic by chaining some exclude logic onto the returned IQueryable (see the sketch after this list).
2) Horizontal security: some models contain fields that are not appropriate for the logged-in user to see (and/or edit). This is more difficult to handle, as it's not a matter of just removing instances from the returned IQueryable. The returned class has a different shape, and can therefore be handled either by the JSON formatter omitting the fields based on security (which, AFAIK, screws up the Breeze metadata), or by returning a DTO, in which case, since the DTO doesn't exist in the metadata, it's not a full-lifecycle (updatable) class? (I am asking this, not stating it.)
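For 1), the exclusion can be composed onto the queryable before it leaves the server, e.g. (hypothetical Order entity with a UserId column, in a typical Breeze/Web API controller):

    using System;
    using System.Linq;
    using System.Web.Http;
    using Breeze.WebApi; // the exact namespace varies by Breeze version

    public class OrdersController : ApiController
    {
        private readonly EFContextProvider<AppDbContext> _ctx =
            new EFContextProvider<AppDbContext>();

        [HttpGet]
        public IQueryable<Order> Orders()
        {
            var userId = GetCurrentUserId(); // hypothetical auth helper
            // The client's query composes on top of this Where, so rows
            // belonging to other users can never be returned.
            return _ctx.Context.Orders.Where(o => o.UserId == userId);
        }

        private Guid GetCurrentUserId()
        {
            throw new NotImplementedException(); // read from the auth context
        }
    }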
I would like to see either built-in support or easy-to-implement recipes for number 2). Perhaps some sample code to amend the client-side metadata to make DTOs work perfectly fine commingled with model objects. The newest VS 2012 SPA templates (in the TodoList app) seem to push DTO variants of the model objects on both the queryable and insert/update sides. This is similar to traditional MVC view models...
Finally, I'd add a request for auto-handling of the over-posting security issue for inserts and updates. This is the reciprocal aspect of 2): some users should not be able to edit certain fields.
I model a User as an aggregate root, and a User is composed of an Identifier value object as well as an Email value object. Both value objects can uniquely identify a User; however, the email is allowed to change and the identifier cannot.
In most examples of DDD I have seen, a repository for an aggregate root only fetches by identifier. Would it be correct to add another method that fetches by email to the repository? Am I modeling this poorly?
I would say yes, it is appropriate for a repository to have methods for retrieving aggregates by something other than the identity. However, there are some subtleties to be aware of.
The reason that many repository examples only retrieve by ID is based on the observation that repositories coupled with the structure of aggregates cannot fulfill all query requirements. For instance, if you have a query which calls for some fields from an aggregate as well as some fields from a referenced aggregate and some summary data, the corresponding aggregate classes cannot be used to represent this data. Instead, a dedicated read-model is needed. Therefore, querying responsibilities are decoupled from the repository. This has several advantages (queries can be served by a dedicated de-normalized store) and it is the principal paradigm of CQRS. In this type of architecture, domain classes are only retrieved by the repository when some behavior needs to execute. All read-only use cases are served by read-models.
The reason that I think it appropriate for a repository to have a GetByEmail method is based on YAGNI and battling complexity. You can allow your application to evolve as requirements change and grow. You don't need to jump to CQRS and separate read/write stores right away. You can start with a repository that also happens to have a query method. The only thing to keep in mind is that you should try to retrieve entities by ID when you need to invoke some behavior on those entities.
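Concretely, the repository might look like this (User, UserId, and Email stand in for your own types):

    // The collection-like contract: retrieval by ID for behavior-invoking use
    // cases, plus one query method for the unique e-mail lookup.
    public interface IUserRepository
    {
        User GetById(UserId id);
        User GetByEmail(Email email); // secondary lookup by a unique value object
        void Add(User user);
    }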
I would put this functionality into a service/business layer that is specific to your User object. Not every object is going to have an Email identifier. This seems more like business logic than the responsibility of the repository. I am sure you already know this, but here is a good explanation of what I am talking about.
I would not recommend it, but you could have a specific implementation of your repository for a User that exposes a GetByEmail(string emailAddress) method. That said, I still like the service idea.
I agree with what eulerfx has answered:
"You need to ask yourself why you need to get the AR using something other than the ID."
I think it would be rather obvious that you do not have the ID, but you do have some other unique identifier, such as the e-mail address.
If you go with CQRS, you need to first determine whether the data is important to the domain or only to the query store. If you require the data to be 100% consistent, then that changes things slightly. You would, for instance, need 100% consistency if you are checking whether an e-mail address exists in order to satisfy a unique constraint. If the queried data is at any time stale, you will probably run into problems.
Remember that a repository represents a collection of sorts. So if you do not need to actually operate on the AR (command side), but you have decided that where you are using your domain is appropriate, then you could always add a ContainsEMailAddress method to the repository. Otherwise, you could also have a query side for your domain data store, since your domain data store (an OLTP-type store) is 100% consistent, whereas your query store (an OLAP-type store) may only be eventually consistent, as is typical of CQRS with a separate query store.
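For example, the uniqueness check on registration could then stay on the command side, along these lines (all names hypothetical):

    using System;

    public class RegisterUser { public Guid UserId; public string Email; }
    public class User { public User(Guid id, string email) { /* ... */ } }

    public interface IUserRepository
    {
        bool ContainsEMailAddress(string email);
        void Add(User user);
    }

    // The handler checks the 100%-consistent domain store, not the possibly
    // stale query store, to enforce the unique e-mail constraint.
    public class RegisterUserHandler
    {
        private readonly IUserRepository _users;

        public RegisterUserHandler(IUserRepository users) { _users = users; }

        public void Handle(RegisterUser cmd)
        {
            if (_users.ContainsEMailAddress(cmd.Email))
                throw new InvalidOperationException("E-mail address already registered.");

            _users.Add(new User(cmd.UserId, cmd.Email));
        }
    }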
"In most examples of DDD I have seen, a repository for an aggregate root only fetches by identifier."
I'd be curious to know what examples you've looked at. According to the DDD definition, a Repository is "a mechanism for encapsulating storage, retrieval, and search behavior which emulates a collection of objects."
Search obviously includes getting a root or a collection of roots by all sorts of criteria, not only their IDs.
Repository is a perfect place for GetCustomerByEmail(), GetCustomersOver18(), GetCustomersByCountry(...) and so on.
"Would it be correct to add another method that fetches by email to the repository?" I would not do that. In my opinion, a repository should only have methods for getting by ID, saving, and deleting.
I'd rather ask why you don't have the user ID in the command handler in which you want to fetch the user and call a domain method on it. I don't know exactly what you are doing, but for the login/register scenario I would do the following. When a user logs in, he passes an email address and a password, and you do a query to authenticate the user; this would not use the domain or a repository (those are just for commands), but some query implementation which returns a UserDto containing the user ID. From that point on you have the user ID. The next scenario is registration: the command handler to create a new user would create a new user entity, and then the user needs to log in.
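A sketch of that query-side login, using Dapper purely as an assumption (the Users table and UserDto are made up):

    using System;
    using System.Data;
    using Dapper;

    public class UserDto
    {
        public Guid Id { get; set; }
        public string Email { get; set; }
    }

    public class AuthenticationQueries
    {
        private readonly IDbConnection _db;

        public AuthenticationQueries(IDbConnection db) { _db = db; }

        // A pure read: no domain model, no repository. Returns null when the
        // credentials don't match; otherwise the DTO carries the user ID that
        // subsequent commands will use.
        public UserDto Authenticate(string email, string passwordHash)
        {
            return _db.QuerySingleOrDefault<UserDto>(
                "select Id, Email from Users where Email = @email and PasswordHash = @passwordHash",
                new { email, passwordHash });
        }
    }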