High Scalability in Domain-Driven Design

I'm using DDD for a service-oriented application intended to transmit a high volume of messages between a high volume of web clients (i.e., browsers).
Because in the context of required functionality, the need for transmission outweighs the need for storage, I love the idea of relying on RAM primarily and minimizing use of the database.
However I'm unclear on how to architect this from a scalability point of view. A web farm creates high availability of service endpoints and domain logic processing. But no matter how many servers I have, it seems they must all share a common repository so that their data is consistent.
How do I build this repository so that it's as scalable as possible? How can it be spread across an array of physical machines in a manner such that all machines are consistent and each couldn't care less if another goes down?
Also since touching the database will be required occasionally (e.g., when a client goes missing and messages intended for it must be stored until it returns), how should I organize my memory-based code and data access layer? Are they both considered "the repository"?

There are several ways to solve this issue. No single answer can really cover it all...
One method to ensure your scalability is to simply scale the hardware. Write your web services to be stateless so that you can run a web farm (all running the same identical services, pointing to the same DB) and turn your DB into a cluster. Clustered databases run over multiple servers and work on the same storage. However, this scenario can get complicated and expensive quite quickly.
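To illustrate the "stateless services" point, here is a rough TypeScript sketch (all names are invented): per-client state lives behind a shared-store interface rather than in process memory, so any server in the farm can handle any request. The in-memory implementation is only a stand-in for whatever shared store (clustered DB, Redis, etc.) you actually use.

```typescript
// Shared store abstraction: in production this would sit on a clustered DB,
// Redis, or similar, reachable by every server in the farm.
interface SharedMessageStore {
  push(clientId: string, message: string): Promise<void>;
  drain(clientId: string): Promise<string[]>;
}

// Stand-in implementation so the sketch runs; NOT suitable for a real farm.
class InMemoryMessageStore implements SharedMessageStore {
  private queues = new Map<string, string[]>();

  async push(clientId: string, message: string): Promise<void> {
    const queue = this.queues.get(clientId) ?? [];
    queue.push(message);
    this.queues.set(clientId, queue);
  }

  async drain(clientId: string): Promise<string[]> {
    const queue = this.queues.get(clientId) ?? [];
    this.queues.delete(clientId);
    return queue;
  }
}

// Stateless endpoint: no per-client state is held in instance fields,
// so this object can live on any server behind the load balancer.
class MessageEndpoint {
  constructor(private readonly store: SharedMessageStore) {}

  send(toClientId: string, body: string): Promise<void> {
    return this.store.push(toClientId, body);
  }

  poll(clientId: string): Promise<string[]> {
    return this.store.drain(clientId);
  }
}
```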
Some interesting links:
http://scale-out-blog.blogspot.com/2009/09/future-of-database-clustering.html
http://en.wikipedia.org/wiki/Server_farm
Another method is to look at architecture. CQRS is a common architectural model that supports scalability. Its name stands for Command Query Responsibility Segregation: the model uses different databases for reading and writing. That seems contradictory, but if you study it, it becomes natural and you wonder why you never thought of it before. Simply put, most apps do a lot more reading than writing, and writing tends to be a lot more complicated than reading (requiring business rule validation etc.), so why not separate the two? You can use your expensive transactional database for writing and a cheap, perhaps NoSQL-based or open-source, database spread over multiple read servers. Your read model is then optimized for the screens of your application(s), whereas the write model is optimized solely for writing and is in fact a DDD-based set of repositories.
There's just not enough room here to cover this option in detail, but CQRS is a good way of achieving scalability and even ease of development, once you have a CQRS framework in place. There are many other advantages to CQRS, such as ease of auditing (if you combine it with the "event sourcing" technique, which is common in CQRS-based environments).
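To make the split concrete, here is a minimal TypeScript sketch with hypothetical names (not any particular framework's API): the write side goes through a command handler and a DDD repository where the business rules live, while the read side is a thin query service over a store shaped for the screens.

```typescript
// Write side: commands go through the domain model and a repository.
class Message {
  constructor(public readonly id: string, private delivered = false) {}

  markDelivered(): void {
    if (this.delivered) throw new Error("Message already delivered"); // business rule
    this.delivered = true;
  }
}

interface MessageRepository {
  getById(id: string): Promise<Message>;
  save(message: Message): Promise<void>;
}

class MarkMessageDeliveredHandler {
  constructor(private readonly repository: MessageRepository) {}

  async handle(command: { messageId: string }): Promise<void> {
    const message = await this.repository.getById(command.messageId);
    message.markDelivered();
    await this.repository.save(message);
  }
}

// Read side: queries bypass the domain model and hit a store shaped for the UI.
interface InboxReadStore {
  findInbox(clientId: string): Promise<Array<{ id: string; preview: string }>>;
}

class InboxQueryService {
  constructor(private readonly readStore: InboxReadStore) {}

  getInbox(clientId: string) {
    return this.readStore.findInbox(clientId); // no validation, just a projection
  }
}
```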
Some interesting links:
http://cqrsinfo.com
http://abdullin.com/cqrs
http://blog.fossmo.net/post/Command-and-Query-Responsibility-Segregation-(CQRS).aspx

Are you ready for some reading? There are a lot of options, but I believe you should start by learning about the advantages of modern distributed NoSQL databases, and learn from the experience of Facebook, LinkedIn and other friends. Start here:
http://highscalability.com/
http://nosql-database.org/

Related

How to classify services in microservices?

I am new to microservices. I come from a monolithic background; in my current environment I have different kinds of services for different purposes, like search, file, email and notification. I have taken many courses, but in them the instructor separates each entity, gives it its own database and creates an API for it (like a separate shopping cart entity and product entity). It makes no sense to me; I am not getting what the real-world use of microservices is, or how to make a separate component into its own microservice.
Can anyone give a real project example?
Thanks in advance
Read this and this. Also look here and here. I don't think anyone will give a link to a real working project, so you can try this.
I am not getting what the real-world use of microservices is
As you have heard in all of those tutorials, the microservices architecture mostly offers these advantages:
Smaller services are easier to maintain and develop.
You can scale specific services rather than the whole project (monolith). For example, you can scale service-1 to 4 instances so that request traffic is split across those 4 instances, scale service-2 to 2 instances, and so on (load balancing). These services may also be distributed across different servers and locations.
If one service fails, it does not take down the whole system, since the services are independent.
Services can be reused for other scenarios or features.
A small team can work on each service, which makes both the project and the development flow easier to manage.
It also suffers from these disadvantages:
The individual services are simple and small, but the system as a whole is complex, so the design phase is critical.
Performance can suffer, and it takes extra work to improve it (different types of caching at different levels).
Transactions are complex and time-consuming to develop. Imagine a simple update that has to be projected to other services; you have to consider failure and rollback strategies (SAGA); see the sketch after this list.
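As a rough illustration of that rollback point, here is a hedged TypeScript sketch of an orchestrated saga: every step carries a compensating action, and if a later step fails, the completed ones are compensated in reverse order. The stock and payment services below are placeholders, not a real API.

```typescript
// A saga step: a forward action plus a compensating action (illustrative only).
interface SagaStep {
  name: string;
  execute(): Promise<void>;
  compensate(): Promise<void>;
}

// Run steps in order; if one fails, compensate the completed ones in reverse.
async function runSaga(steps: SagaStep[]): Promise<void> {
  const completed: SagaStep[] = [];
  for (const step of steps) {
    try {
      await step.execute();
      completed.push(step);
    } catch (err) {
      for (const done of completed.reverse()) {
        await done.compensate(); // best-effort rollback of earlier steps
      }
      throw err;
    }
  }
}

// Placeholder services standing in for real microservice calls.
const stock = {
  reserve: async (orderId: string) => console.log(`stock reserved for ${orderId}`),
  release: async (orderId: string) => console.log(`stock released for ${orderId}`),
};
const payment = {
  charge: async (orderId: string) => { throw new Error(`card declined for ${orderId}`); },
  refund: async (orderId: string) => console.log(`refunded ${orderId}`),
};

runSaga([
  { name: "reserve-stock", execute: () => stock.reserve("42"), compensate: () => stock.release("42") },
  { name: "charge-card", execute: () => payment.charge("42"), compensate: () => payment.refund("42") },
]).catch((err) => console.log(`saga failed: ${(err as Error).message}`));
```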
how to make a separate component into its own microservice
This is the most challenging part of microservices. You need a deep study of Domain-Driven Design (DDD).
Decompose by subdomain
Decompose by Business Capabilities
Can anyone give a real project example?
There are many projects that develop microservices with different patterns. I think you have to start your own and get your hands dirty.

Questions pertaining to micro-service architecture

I have a couple of questions that exist around micro service architecture, for example take the following services:
orders,
account,
communication &
management
Question 1: From what I read, I understand that each service is supposed to have ownership of the data pertaining to that service, so orders would have an orders database. How important is that data ownership? Would microservices make sense if they all called into one traditional database, such that all data pertaining to the services existed in one database? If so, are there any implications of structuring the services this way?
Question 2: Services should be able to communicate with one another. How would that statement be any different from simply curling an existing API and basing the logic on that response? Is calling a service more efficient than simply curling the API?
Question 3: Is it worth it? Now, I understand this is a massive generality, and it's fundamentally predicated on the needs of the business. But when that discussion has been had, was the rebuild worth it? And what challenges can you expect to face?
I will try to answer all the questions.
With respect to all services using the same database: if you do so, you have two main problems. First, the database becomes a bottleneck because all requests go to the same point. Second, you have coupled all your services, so if the database goes down or needs to be updated, all your services are affected. (The database becomes a single point of failure.)
The communication between services can be whatever your services need (synchronous, asynchronous, via message passing (message broker), etc.); it all depends on the use cases you have to support. The recommended way to achieve temporal decoupling is to use a message broker like Kafka. That way your services don't have to know each other, and if some of them go down, the others keep working; when they come up again, they can continue processing the messages they have pending. However, if your services need to respond synchronously, you can define synchronous communication between services and use a circuit breaker to behave properly when the callee service is down.
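If you do go synchronous, the circuit breaker idea is easy to sketch. The following is a simplified, hypothetical TypeScript version (real breakers add half-open probing, metrics, etc.): after a few consecutive failures, calls fail fast for a cool-down period instead of piling onto a service that is down. The endpoint URL is made up.

```typescript
// Minimal circuit breaker sketch: after a few consecutive failures, calls fail
// fast for a cool-down period instead of hitting the downed service again.
class CircuitBreaker {
  private failures = 0;
  private openedAt = 0;

  constructor(
    private readonly maxFailures = 3,
    private readonly resetAfterMs = 10_000
  ) {}

  async call<T>(fn: () => Promise<T>): Promise<T> {
    if (this.isOpen()) {
      throw new Error("circuit open: callee assumed down, failing fast");
    }
    try {
      const result = await fn();
      this.failures = 0; // a success closes the circuit again
      return result;
    } catch (err) {
      this.failures += 1;
      if (this.failures >= this.maxFailures) {
        this.openedAt = Date.now();
      }
      throw err;
    }
  }

  private isOpen(): boolean {
    return (
      this.failures >= this.maxFailures &&
      Date.now() - this.openedAt < this.resetAfterMs
    );
  }
}

// Usage: wrap the synchronous call to another service (hypothetical endpoint).
const breaker = new CircuitBreaker();

async function getOrder(id: string) {
  return breaker.call(async () => {
    const res = await fetch(`http://orders-service/api/orders/${id}`);
    if (!res.ok) throw new Error(`orders service returned ${res.status}`);
    return res.json();
  });
}
```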
A microservices architecture is far more complicated to make work, to monitor and to debug than a traditional monolithic architecture, so it is only worth it if you have very demanding scalability and availability requirements, and/or if the system is so large that it requires several teams working on different parts of it and it is advisable to avoid dependencies among them, so that each team can work at its own pace deploying its own services.

Should I be moving to a microservices based architecture?

I am working on a monolith system. All of its code is in one repository (Web API and background workers). The system is written in Node.js, and MongoDB (Mongoose) is used as the data store. My goal is to set a new path for how the project should evolve. At first I was wondering if I could move towards a microservices-based architecture.
The monolith architecture creates some problems:
If my background workers need to scale, I have to deploy the whole project to the server despite only using a small fraction of it.
The whole system must be redeployed when code changes. What if the payment processor calls a webhook while the system is being redeployed?
The advantages of using microservices are quite obvious:
Smaller code base for an individual microservice. Easier to reason about.
Ability to select programming tools best for particular use case.
Easier to scale.
Looking at the current code, I noticed that Mongoose ODM (Object Document Mapper) models are used across the whole project to create, query and update documents in the database. As a principle of good programming, all such interactions with the database should be abstracted; business logic should not leak into other system layers. I could do that by introducing the REPOSITORY pattern (Domain-Driven Design). While the code is still shared between the web API and its background workers, it is not a hard task to do.
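For illustration, here is a minimal TypeScript sketch of what that repository abstraction might look like, assuming a made-up Order model rather than the project's real ones: the business logic depends only on the interface, and Mongoose stays behind the adapter.

```typescript
import { Schema, model } from "mongoose";

// Domain-facing shape: callers never see Mongoose documents.
interface Order {
  id: string;
  customerId: string;
  total: number;
}

// The abstraction the business logic (and background workers) depend on.
interface OrderRepository {
  findById(id: string): Promise<Order | null>;
  save(order: Order): Promise<void>;
}

// Mongoose-backed implementation, kept behind the interface.
// (A mongoose.connect(...) call is assumed to happen at startup.)
const OrderModel = model(
  "Order",
  new Schema({
    _id: { type: String, required: true },
    customerId: { type: String, required: true },
    total: { type: Number, required: true },
  })
);

class MongooseOrderRepository implements OrderRepository {
  async findById(id: string): Promise<Order | null> {
    const doc = await OrderModel.findById(id).lean();
    return doc
      ? { id: String(doc._id), customerId: doc.customerId, total: doc.total }
      : null;
  }

  async save(order: Order): Promise<void> {
    await OrderModel.updateOne(
      { _id: order.id },
      { customerId: order.customerId, total: order.total },
      { upsert: true }
    );
  }
}
```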
If I decide to extract the repositories into standalone microservices, then a whole bunch of problems arise:
Some sort of query language must be introduced to accommodate complex search queries.
The interface must provide a way to iterate over search results (cursor-based navigation) without returning all database documents over the network.
Since the project is in its early stages and I am the only developer, going to a microservices-based architecture seems like overkill. Maybe there are other approaches I should consider?
Extracting business logic and database interaction into a separate repository and sharing it among services, to avoid complex communication protocols between services?
Based on my experience working with microservices for the last few years, it seems like overkill in your current scenario but pays off in the long term.
Based on the information stated above, my thoughts are:
Code Structure - Applying Microservices Architecture (MSA) in the above context does not mean separating the DAO, business logic etc.; rather, it is about designing the system around business functions. For example, if it is an eCommerce application, then you can have shipping, cart and search as separate services, which can further be divided into smaller services. Read more about domain-driven design here.
Deployment Unit - Keeping each microservice as an independent deployment unit is a key principle. Hence, take a vertical slice of the application and package it as a Docker image with the application code, app server (if any), database and OS (Linux etc.).
Communication - With MSA, communication between services becomes key, and the general practice is to stick with a message-oriented approach (read about reactive systems and reactive programming for more insight).
PaaS Solution - There are multiple PaaS solutions available which you can apply so that you don't need to worry about all the other aspects like container management, container orchestration, auto-scaling, configuration management, log management and monitoring. See the following PaaS solutions:
https://www.nanoscale.io/ by TIBCO
https://fabric8.io/ - by RedHat
https://openshift.io - by RedHat
Cloud Vendor Platforms - AWS, Azure and Google Cloud all have specific support for microservice apps from the deployment perspective, which you can use as an alternative if you don't want to run a PaaS solution in your organization.
Hope these pointers help in understanding the overall landscape so that you can structure your architecture for future needs.
I am working on a monolith system... My goal is to set a new path for how the project should evolve. At first I was wondering if I could move towards a microservices-based architecture.
In what ways do you need to evolve the project? Will it be mostly bugfixes, adding features, improving performance and/or scalability? Do you anticipate other developers collaborating in the future? Are you currently having maintenance issues? The answers to these questions (and many more) should be considered in guiding your choices.
You seem to be doing your homework around the pros and cons of a microservice architecture, so if you haven't asked yourself why you're even doing this in the first place, now would be a good time to do so.
Maybe there are other approaches I should consider?
There's always the good old don't-break-what's-working ;)

Integration of bounded contexts locally

In "Implementing Domain-Driven Design", Vernon give detailed examples for integrating bounded context with a messaging or REST based solution, it also mention database integration, but I understand it is not a very clean solution to share database or at least db tables between BC.
But what if the 2 BCs I want to integrate are hosted locally on the same server, is it really a good idea to use a messaging/rest/rpc solution ? (which seems more suitable for a remotely hosted BC to me)
Otherwise, except with DB integration, what are the other alternatives ? Hosting both BC in the same process and calling it directly (still using adapters and translators for clean seperation) ?
Thanks
You could look into using something like 0MQ for inter-process communication on the same server. I've also in the past just hosted things in the same process as you suggest and just used interfaces / in-memory messaging to separate out contexts.
Everything is about trade-offs in the end, so you just need to decide what level of isolation you are willing to accept. The simplest solution would be to separate inside a solution via folders and interfaces, the other end of the spectrum being completely separate servers.
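To show what the same-process option can look like, here is a hedged TypeScript sketch with invented contexts (Sales and Billing): each context talks to the other only through an interface, and an adapter/translator maps between their models, so you keep the separation without any network hop.

```typescript
// Billing context exposes a port (interface) instead of its internals.
namespace Billing {
  export interface InvoiceService {
    openInvoice(customerId: string, amountCents: number): string; // returns invoice id
  }

  export class InMemoryInvoiceService implements InvoiceService {
    private seq = 0;
    openInvoice(customerId: string, amountCents: number): string {
      this.seq += 1;
      console.log(`invoice ${this.seq} opened for ${customerId}: ${amountCents} cents`);
      return String(this.seq);
    }
  }
}

// Sales context defines what it needs in its own terms...
namespace Sales {
  export interface BillingGateway {
    billOrder(order: { customerId: string; totalEuros: number }): string;
  }

  export class OrderPlacementService {
    constructor(private readonly billing: BillingGateway) {}
    placeOrder(customerId: string, totalEuros: number): string {
      // ...Sales-side domain logic would go here...
      return this.billing.billOrder({ customerId, totalEuros });
    }
  }
}

// ...and an adapter/translator maps between the two models, in process.
class SalesToBillingAdapter implements Sales.BillingGateway {
  constructor(private readonly invoices: Billing.InvoiceService) {}
  billOrder(order: { customerId: string; totalEuros: number }): string {
    return this.invoices.openInvoice(order.customerId, Math.round(order.totalEuros * 100));
  }
}

const sales = new Sales.OrderPlacementService(
  new SalesToBillingAdapter(new Billing.InMemoryInvoiceService())
);
sales.placeOrder("customer-1", 19.99);
```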
I don't think that location should come into play w.r.t. integration between BCs.
There really are other factors to consider such as guaranteed delivery to the recipient in order to ensure that the processing takes place. This should be required whether or not the two BCs are hosted on the same server.
Another reason to ignore location is that when you need to scale, your architecture should be able to handle it from the get-go.
As tomliversidge mentioned, it is possible to use some deployment mechanisms, such as non-durable messaging, to speed things up, but there will definitely be a trade-off and that has to be a conscious decision.

Separate application service for command / query in CQRS implementation in Domain Driven Design?

When implementing CQRS with Domain Driven Design, we separate our command interface from our query interface.
My understanding is that at the domain level this reduces complexity significantly (especially when using event sourcing), in the sense that your read model will be different from your write model. So that looks like a separate domain service for your read and write bounded contexts.
At the application level, do we need a separate application service for the read and write separations of our domain?
I've been playing devil's advocate on the matter. My thought is that it could be overkill, requiring clients to know the difference. But then I think about how a consuming web service might use it. Generally, it will issue GET requests for reading and POST for writing, which means it already knows.
I see the benefits being cleaner application services.
The real value is having a properly separated read model and domain model. They do fundamentally different things and often have very different shapes. It's entirely possible for the read model to contain an amalgam of data from differing domain objects, for example.
When you think about how they are used and the way they function within an application, you can start to appreciate the need for the separation. The classic example here is to consider the number of writes compared to the reads in a typical application. The reads massively outnumber the writes. By maintaining a distinction you can optimise each side for its respective role.
Another aspect to bear in mind is that a 'post' will constitute a command, not a view model (which may contain a read model). If using a CQRS approach, you need to adapt the way you do queries and posts. In fact you can achieve a much more descriptive language, rather than simply reflecting a view model back and forth to a server.
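To picture the difference, here is a short hedged TypeScript sketch (the vehicle example and all names are invented): the command is expressed in domain language and handled by a write-side application service, while the query side just returns a view model shaped for the screen.

```typescript
// A command captures intent in domain language; it is not a view model.
interface RegisterVehicleCommand {
  kind: "RegisterVehicle";
  registrationNumber: string;
  ownerId: string;
}

// The view model is shaped for a screen and may merge data from several sources.
interface VehicleSummaryView {
  registrationNumber: string;
  ownerName: string;       // denormalized from the owners side
  openRecallCount: number; // denormalized from a recalls source
}

// Write-side application service: accepts commands and enforces rules.
class VehicleCommandService {
  private registered = new Set<string>();

  handle(command: RegisterVehicleCommand): void {
    if (this.registered.has(command.registrationNumber)) {
      throw new Error("vehicle already registered"); // business rule on the write side
    }
    this.registered.add(command.registrationNumber);
  }
}

// Read-side application service: no rules, just lookups of prepared views.
class VehicleQueryService {
  constructor(private readonly views: Map<string, VehicleSummaryView>) {}

  getSummary(registrationNumber: string): VehicleSummaryView | undefined {
    return this.views.get(registrationNumber);
  }
}
```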
If you're interested, I have a blog post which outlines the high-level overview of a typical CQRS architecture. You can find it here: A step by step overview of a typical CQRS application. Hope you find it useful.
Final point. We are in the process of adding new functionality and have found the separation to be very helpful. Changes to one side don't impact the other in the same way as they might.
