To support offline clients, I want to evaluate how to fit Multi-Version Concurrency Control (MVCC) into a CQRS/DDD system.
Learning from CouchDB, I was tempted to give each Entity a version field. However, there are other concurrency-versioning algorithms, such as vector clocks, which made me think that perhaps I should not expose a version concept on each Entity and/or Event at all.
Unfortunately, most of the implementations I have seen assume that the software runs on a single server, where the timestamps for the events come from one reliable source. However, if some events are generated remotely AND offline, the local client's clock offset becomes a problem. In that case, a plain timestamp does not seem a reliable basis for ordering my events.
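To make the ordering problem concrete, here is a minimal sketch of the vector-clock idea, which orders events by per-replica counters instead of wall-clock time (the types are illustrative only, not from any particular library):

```csharp
using System.Collections.Generic;
using System.Linq;

public enum ClockOrder { Before, After, Equal, Concurrent }

public sealed class VectorClock
{
    private readonly Dictionary<string, long> _counters = new Dictionary<string, long>();

    // Increment the local replica's counter whenever it generates an event.
    public void Tick(string replicaId) =>
        _counters[replicaId] = _counters.TryGetValue(replicaId, out var c) ? c + 1 : 1;

    // Concurrent means neither clock happened-before the other,
    // so those events need explicit conflict resolution.
    public ClockOrder CompareTo(VectorClock other)
    {
        bool less = false, greater = false;
        foreach (var replica in _counters.Keys.Union(other._counters.Keys))
        {
            var mine = _counters.TryGetValue(replica, out var a) ? a : 0;
            var theirs = other._counters.TryGetValue(replica, out var b) ? b : 0;
            if (mine < theirs) less = true;
            if (mine > theirs) greater = true;
        }
        if (less && greater) return ClockOrder.Concurrent;
        if (less) return ClockOrder.Before;
        if (greater) return ClockOrder.After;
        return ClockOrder.Equal;
    }
}
```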
Does this force me to evaluate some form of MVCC solution not based on timestamps?
What implementation details must an offline-CQRS client evaluate to synchronize a delayed chain of events with a central server?
Is there any good opensource example?
Should my DDD Entities and/or CQRS Query DTOs provide a version parameter?
I manage a version number and it has worked out well for me. The nice thing about the version number is that you can make your code very explicit when dealing with concurrency conflicts. My approach is to ensure that all my DTOs carry the version number of the aggregate they are associated with. When I send in a command, it carries the current version as seen on the client. This number may or may not be in sync with the actual version of the aggregate, i.e. when the client has been offline. Before the event is persisted, I check that the version number is the one I expected; if not, I check the intervening events to see whether any of them actually conflict. Only if they do, do I raise an exception. This is essentially a very fine-grained form of optimistic concurrency. If you're interested, I've written more detail, including some code samples, on my blog: http://danielwhittaker.me/2014/09/29/handling-concurrency-issues-cqrs-event-sourced-system/
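In outline, the check looks something like the sketch below; the types and names are illustrative only, not the exact code from the post:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative types only; the blog post's actual code differs.
public interface IEventStore
{
    int GetCurrentVersion(Guid aggregateId);
    IReadOnlyList<object> GetEventsSince(Guid aggregateId, int version);
    void Append(Guid aggregateId, int version, object @event);
}

public sealed class ConcurrencyException : Exception
{
    public ConcurrencyException(Guid id, int expected, int actual)
        : base($"Aggregate {id}: expected version {expected} but found {actual}.") { }
}

public static class OptimisticAppender
{
    // Decides whether two events cannot coexist; in practice this is domain-specific.
    private static bool ConflictsWith(object committed, object incoming) =>
        committed.GetType() == incoming.GetType();

    public static void Append(IEventStore store, Guid aggregateId, int expectedVersion, object newEvent)
    {
        var currentVersion = store.GetCurrentVersion(aggregateId);

        if (currentVersion != expectedVersion)
        {
            // The client was behind (e.g. offline); inspect only the events it has not seen.
            var missed = store.GetEventsSince(aggregateId, expectedVersion);

            // Reject only genuine business conflicts; unrelated events are tolerated.
            if (missed.Any(e => ConflictsWith(e, newEvent)))
                throw new ConcurrencyException(aggregateId, expectedVersion, currentVersion);
        }

        store.Append(aggregateId, currentVersion + 1, newEvent);
    }
}
```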
I hope that helps.
I suggest you have a look at Greg's presentation on the subject. It might have the answers you're looking for: https://skillsmatter.com/skillscasts/1980-cqrs-not-just-for-server-systems
I guess you should rethink your domain: separate the remote-client logic into its own bounded context and integrate it with the other BCs using the known DDD principles for BC interoperation.
I'm new to DDD and CQRS and I'm planning to build a simple application to improve my skills a bit.
What I'm planning to do is a simple Taxi Corp application.
Requirements:
Client orders a taxi.
Client can have only one order at a time.
Driver picks an order.
Driver can have only one order at a time.
Driver goes to client.
Client enters cab.
Course starts.
Course finishes.
Client is charged and driver is paid.
And so on.
I can see there can be three aggregates: Client, Order and Driver. I want to split them into separate microservices. Do you think that's a good idea, or should I start with one microservice?
I'm currently focused on ordering a taxi. First I need to check that the client doesn't already have a course assigned; only then can I create an order. After the order is created, I need to assign it to the client. Since only one aggregate can be updated/created per request, I wonder how to do this correctly. I've read about Process Managers and I think one would be very useful here. I've even drawn a schema of the communication. Can anyone tell me if my approach is correct and give me some tips on how to go further?
[Diagram: process of creating an order]
Do you think it's a good idea, or should I start with one microservice?
I refer you to the wisdom of John Gall:
A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over, beginning with a working simple system.
Instead of worrying about microservices, give your attention to messages.
Someone said: "If you have more microservices than customers, you are doing it wrong".
And if you really follow the CQRS/ES approach, the resulting system is much easier to split apart than a traditional ORM monolith.
So focus on the domain first and start with a monolith.
Start with a microservices design even if you get it wrong at first; you will gain better insight into the desired architecture, because problems in a microservices design show themselves very soon.
Client and driver are both users of the system and have some commonalities, so you can consider them one domain and one microservice.
Consider an order-manager microservice to assign a client and a driver to a trip by their IDs. The order database may include a trips table with two ID keys for driver-Id and client-Id and some columns for the different states. After each trip finishes you can remove it from the trips table and insert it into an archive table; alternatively, leave it there and partition the table daily to keep database performance high.
Consider an accounting microservice for keeping payments and transactions. It's OK to use NoSQL databases for the other microservices, but do use a SQL database for your transactions.
You may need another microservice for reporting and dashboards; mirror the other databases into a new one for reporting.
You also need an API gateway to route requests to the microservices and handle authentication.
Your process is a set of events. You will definitely expand the system later and will perhaps have some long-running tasks, so it is better to have a message broker and implement your flow as an event/task flow, using patterns like event sourcing.
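As a rough illustration, the flow above could be captured as plain event contracts that travel over whichever broker you choose; the names and shapes below are made up:

```csharp
using System;

// Illustrative event contracts for the taxi flow; names and shapes are hypothetical.
public record TaxiOrdered(Guid OrderId, Guid ClientId, DateTime OrderedAtUtc);
public record OrderPickedByDriver(Guid OrderId, Guid DriverId);
public record CourseStarted(Guid OrderId);
public record CourseFinished(Guid OrderId, decimal Fare);
public record ClientCharged(Guid OrderId, Guid ClientId, decimal Amount);
public record DriverPaid(Guid OrderId, Guid DriverId, decimal Amount);
```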
I can see there can be three aggregates: Client, Order and Driver. I want to split them into separate microservices. Do you think that's a good idea, or should I start with one microservice?
They all belong to the same bounded context. Bounded contexts translate nicely to microservices (see Eric Evans' video: https://www.infoq.com/news/2015/06/dddx-microservices-boundaries). But don't start by designing a microservice; that's the wrong order. Design your bounded context first, then, if it makes sense, create a microservice around it following the hexagonal architecture.
After the order is created, I need to assign it to the client. As during one request only one aggregate can be updated/created, I wonder how to do it correctly.
This is the perfect example of why you need to do it all in the same process.
But if you do want to go with multiple microservices, think about eventual consistency (https://en.wikipedia.org/wiki/Eventual_consistency) and create a message-driven architecture between your services. It might be too much work, in my opinion, but for learning purposes it can be a good idea.
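As a rough sketch of that idea, a process manager can react to an event from one service and issue a command to another, accepting eventual consistency in between; all names here are hypothetical and not tied to any particular framework:

```csharp
using System;
using System.Threading.Tasks;

// Hypothetical message types flowing between the Order and Client services.
public record OrderCreated(Guid OrderId, Guid ClientId);
public record AssignOrderToClient(Guid OrderId, Guid ClientId);

// Abstraction over whatever bus/broker delivers commands.
public interface ICommandSender
{
    Task Send<TCommand>(TCommand command);
}

public sealed class OrderAssignmentProcessManager
{
    private readonly ICommandSender _sender;
    public OrderAssignmentProcessManager(ICommandSender sender) => _sender = sender;

    // Invoked by the message bus when the Order service publishes OrderCreated;
    // the Client aggregate is updated later, in its own transaction.
    public Task Handle(OrderCreated @event) =>
        _sender.Send(new AssignOrderToClient(@event.OrderId, @event.ClientId));
}
```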
In "Implementing Domain-Driven Design", Vernon give detailed examples for integrating bounded context with a messaging or REST based solution, it also mention database integration, but I understand it is not a very clean solution to share database or at least db tables between BC.
But what if the 2 BCs I want to integrate are hosted locally on the same server, is it really a good idea to use a messaging/rest/rpc solution ? (which seems more suitable for a remotely hosted BC to me)
Otherwise, except with DB integration, what are the other alternatives ? Hosting both BC in the same process and calling it directly (still using adapters and translators for clean seperation) ?
Thanks
You could look into using something like 0MQ for inter-process communication on the same server. In the past I've also just hosted things in the same process, as you suggest, and used interfaces / in-memory messaging to separate out the contexts.
Everything is about trade-offs in the end, so you just need to decide what level of isolation you are willing to accept. The simplest solution would be to separate inside a solution via folders and interfaces, the other end of the spectrum being completely separate servers.
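As a rough sketch of the in-process end of that spectrum (purely illustrative, not tied to any library), contexts can talk through a small in-memory publisher behind an interface:

```csharp
using System;
using System.Collections.Generic;

// Minimal in-memory publisher used to keep two bounded contexts decoupled
// inside one process; purely illustrative, not tied to any particular library.
public interface IIntegrationEventPublisher
{
    void Publish<TEvent>(TEvent @event);
    void Subscribe<TEvent>(Action<TEvent> handler);
}

public sealed class InMemoryPublisher : IIntegrationEventPublisher
{
    private readonly Dictionary<Type, List<Delegate>> _handlers = new();

    public void Subscribe<TEvent>(Action<TEvent> handler)
    {
        if (!_handlers.TryGetValue(typeof(TEvent), out var list))
            _handlers[typeof(TEvent)] = list = new List<Delegate>();
        list.Add(handler);
    }

    public void Publish<TEvent>(TEvent @event)
    {
        if (_handlers.TryGetValue(typeof(TEvent), out var list))
            foreach (Action<TEvent> handler in list)
                handler(@event); // Each BC translates the event at its own boundary.
    }
}
```

Swapping the in-memory implementation for a 0MQ- or broker-backed one later only touches the adapter, not the contexts themselves.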
I don't think that location should come into play w.r.t. integration between BCs.
There really are other factors to consider such as guaranteed delivery to the recipient in order to ensure that the processing takes place. This should be required whether or not the two BCs are hosted on the same server.
Another reason to ignore location is that when you need to scale, your architecture should be able to handle it from the get-go.
As tomliversidge mentioned it is possible to use some deployment mechanisms such as non-durable messaging to speed up things but there will definitely be a trade-off and that has to be a conscious decision.
When implementing CQRS with Domain Driven Design, we separate our command interface from our query interface.
My understanding is that at the domain level this reduces complexity significantly (especially when using event sourcing), since your read model will be different from your write model. So that looks like a separate domain service for your read and write bounded contexts.
At the application level, do we need a separate application service for the read and write separations of our domain?
I've been playing devil's advocate on the matter. My thought is that it could be overkill, requiring clients to know the difference. But then I think about how a consuming web service might use it: generally, it will issue GET requests for reading and POST requests for writing, which means it already knows the difference.
The benefit I see is cleaner application services.
The real value is having a properly separated read model and domain model. They do fundamentally different things and often have very different shapes. It's entirely possible for the read model to contain an amalgam of data from several domain objects, for example.
When you think about how they are used and the way they function within an application, you can start to appreciate the need for the separation. The classic example is to consider the number of writes compared to the reads in a typical application: the reads massively outnumber the writes. By maintaining the difference you can optimise each side for its respective role.
Another aspect to bear in mind is that a 'post' will carry a command, not a view model (which may contain a read model). If you use a CQRS approach you need to adapt the way you do queries and posts. In fact, you can achieve a much more descriptive language, rather than simply reflecting a view model back and forth to the server.
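To illustrate the difference in shape, compare a command with a read-model DTO; both types below are made up for the example:

```csharp
using System;

// The POST carries an intent, expressed in domain language, not a view model.
public record ChangeCustomerAddress(Guid CustomerId, string Street, string City, string PostCode);

// The GET returns a denormalised read model assembled for one screen.
public record CustomerSummaryDto(
    Guid CustomerId,
    string FullName,
    string FormattedAddress,
    int OpenOrderCount,        // an amalgam of data from several aggregates
    DateTime LastOrderOnUtc);
```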
If you're interested, I have a blog post which outlines the high-level overview of a typical CQRS architecture. You can find it here: A step by step overview of a typical CQRS application. I hope you find it useful.
A final point: we are in the process of adding new functionality and have found the separation to be very helpful. Changes to one side don't impact the other in the way they otherwise might.
I'd been using RestKit for the last two years, but recently I've started thinking about moving away from this monolithic framework, as it seems to be real overkill.
Here are my reasons for moving on:
There is a big need to use NSURLSession for background fetches, and RestKit has only an experimental branch for the transition to AFNetworking 2.0, with no actual date for when the transition will be finished. (Main reason.)
No need for Core Data support in the networking library, as there is no need for fully functional offline data storage.
Headaches with the new concept of response/request descriptors: they don't support different parameters in path patterns (e.g. an access-token parameter), and there is no way to create an object request operation in one line with a custom descriptor. Here I am losing the features of the object manager as a facade.
I. The biggest loss for me in leaving RestKit is the object-mapping process.
Could you recommend standalone libraries you use that have shown themselves to be flexible and stable?
II. And as I said, I need no fully functional storage, but I still need some caching support in places.
I've heard that NSURLCache has become useful in the last OS release.
Did you use it and what's the strategy?
Does it return cached API responses when network connection is down?
III. Does anybody face the same problems?
What solutions have you applied?
Maybe someone could give some advice about the architecture they use in multiple apps with pure AFNetworking?
I. In agreement with others who have commented, AFNetworking + Mantle is a simple and effective way to interact with a RESTful API and to replace the RestKit object-mapping process you will miss.
II. The answer to your caching requirements depends heavily on context. However, for my recent functional requirements I have found that caching a view model per controller screen, and only caching reference data returned by APIs, lets me keep the application logic relatively simple while still giving the user some continuity. A simple error notification for connectivity issues can be handled in a cross-cutting manner.
III. One architectural thought relevant here is to ensure that the APIs the app depends on provide data shaped for the app experience. This lets your app focus on what it is good at (a very slick user experience) and moves logic into the APIs, closer to dependencies such as data. A further benefit is reduced chattiness in the app.
I'm taking a crack at some of the concepts behind distributed domain-driven design and I'm building a proof of concept. I have three C# solutions, each with a specific responsibility within the overall system.
The solutions I have are:
The write model (receives commands from a client and creates and sends events)
The read model (receives events from write model, creates a database and exposes DTO services to the client, could potentially be 2 separate solutions)
The client (calls services to get needed data and sends commands to the write model)
All three solutions use messaging (commands, events) through a service bus. (MassTransit in my case).
My main question is: Is it common practice to create an assembly with the messages and have each solution reference that assembly?
Extra credit: Is there anything I'm doing that seems weird or problematic in this POC? Any additional info I should be aware of when creating this type of a system?
Is it common practice to create an assembly with the messages and have each solution reference that assembly?
Yes. This is a common practice with messaging systems in general. For example, many NServiceBus samples employ this approach. Think of this assembly as representing your contract. In systems built upon different platforms this representation would come in the form of an XSD schema or some other schema definition mechanism.
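As a rough illustration, such a shared contracts assembly usually contains nothing but serialisable message definitions; the names below are made up:

```csharp
// Contents of a hypothetical shared 'Contracts' assembly referenced by the
// write model, read model, and client.
using System;

namespace Poc.Contracts
{
    // Command: sent by the client to the write model.
    public record RegisterCustomer(Guid CustomerId, string Name, string Email);

    // Event: published by the write model, consumed by the read model.
    public record CustomerRegistered(Guid CustomerId, string Name, string Email, DateTime RegisteredAtUtc);
}
```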
Is there anything I'm doing that seems weird or problematic in this POC? Any additional info I should be aware of when creating this type of a system?
Everything seems to be well fitted to CQRS so far. To be fair, I should mention that it can be easy to get carried away with CQRS as a silver bullet and structure entire systems around it. It is often a wise decision to forgo CQRS altogether. Keep your focus on the business domain and use CQRS as an architectural style to implement your system, not to guide its model.