node.js using domain driven design

I am in the process of moving an application from C# to Node.js. I am learning Node.js as I go along, so I am a Node.js newbie. I am reading the book Patterns, Principles, and Practices of Domain-Driven Design and found a lot of great information my current project could benefit from.
For instance, the book contains a sample e-commerce application with three bounded contexts: sales, shipping, and billing. Each bounded context is responsible for its own database, and each runs in an instance of NServiceBus. This seems to be a great approach, as everything runs under the same solution but in different projects. In translating this to Node.js, I have a few areas of confusion.
1) I am having a rough time finding good examples that incorporate Node.js with DDD like the e-commerce example above. Part of the hurdle is the difference in how OOP is handled.
2) In the book's sample code, each bounded context is its own project and runs within NServiceBus. In translating this to Node.js (I am using VS Code), would I need to create a separate parent folder for each bounded context and have each bounded context listen on a different port, if I want all of the bounded contexts to run on the same server until I need to scale?
3) NServiceBus allows messages and events to be passed back and forth. For Node.js, what service bus technology (preferably open source and able to run on Linux-based machines) would reliably provide the kind of functionality NServiceBus provides? Should I just use RabbitMQ alone for this, including publishing events?
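As a minimal sketch of what question 2 describes (the context names and ports are illustrative, not taken from the book), each bounded context could be a separate Node.js process that owns its own HTTP listener:

```js
// sales/index.js - one bounded context, one process, one port (illustrative)
const http = require('http');

const PORT = process.env.PORT || 3001; // shipping could use 3002, billing 3003, and so on

const server = http.createServer((req, res) => {
  // Only sales-context routes live here; other contexts run in their own processes.
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ context: 'sales', status: 'ok' }));
});

server.listen(PORT, () => {
  console.log(`sales bounded context listening on port ${PORT}`);
});
```

Each context would then live in its own folder with its own database connection, and the contexts would be started as separate processes on the same server until scaling requires separate hosts.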

You might be interested in wolkenkit, a CQRS and event-sourcing framework for Node.js and JavaScript that plays very well together with domain-driven design (DDD).
Besides the actual framework (which is published as the npm module wolkenkit), there are a number of sample applications available that show how things work:
wolkenkit-todomvc is the classical TodoMVC application modeled using DDD
wolkenkit-boards is a team collaboration software similar to Trello
wolkenkit-nevercompletedgame is the wolkenkit version of the game at nevercompletedgame.com
wolkenkit-geocaching is an application to manage caches for geocaching
Apart from that, you might want to take a look at the wolkenkit documentation, especially the downloadable brochure, which explains DDD, event sourcing, and CQRS: what they are, how they relate to each other, and so on.
PS: Please note that I am one of the authors of wolkenkit, so please take my answer with a grain of salt.

I would suggest going through the npm modules tagged with ddd:
https://www.npmjs.com/browse/keyword/ddd
and tagged with service bus:
https://www.npmjs.com/browse/keyword/servicebus
There is also a JavaScript Domain-Driven Design book by Philipp Fehre.
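To make the service-bus part of the original question more concrete, here is a minimal sketch of publishing and subscribing to domain events over RabbitMQ with the amqplib package (the exchange name, event shape, and local broker URL are assumptions for illustration):

```js
// events.js - publish/subscribe over RabbitMQ using amqplib (npm install amqplib)
const amqp = require('amqplib');

const EXCHANGE = 'domain-events'; // illustrative exchange name

async function publish(routingKey, event) {
  // A real application would reuse one connection/channel instead of opening one per call.
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  await ch.assertExchange(EXCHANGE, 'topic', { durable: true });
  ch.publish(EXCHANGE, routingKey, Buffer.from(JSON.stringify(event)), { persistent: true });
  await ch.close();
  await conn.close();
}

async function subscribe(pattern, handler) {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  await ch.assertExchange(EXCHANGE, 'topic', { durable: true });
  const { queue } = await ch.assertQueue('', { exclusive: true });
  await ch.bindQueue(queue, EXCHANGE, pattern);
  ch.consume(queue, (msg) => {
    if (msg) {
      handler(JSON.parse(msg.content.toString()));
      ch.ack(msg);
    }
  });
}

// Usage: the sales context publishes, shipping subscribes.
// publish('sales.order-placed', { orderId: '42' });
// subscribe('sales.*', (event) => console.log('shipping received', event));
```

RabbitMQ on its own gives you the transport; the higher-level features NServiceBus adds (sagas, retries, timeouts) would have to be layered on top or taken from a library such as the ones mentioned in the other answers.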

This post is a couple of years old, but for anyone still interested there's a DDD framework for Typescript/node at:
https://github.com/node-ts/ddd
As well as an NServiceBus inspired message bus at:
https://node-ts.github.io/bus/
They're designed to work together to build message-driven DDD systems with Node.

Related

How to classify services in microservices?

I am new to microservices. I am coming from a monolithic background; in my current environment I have different kinds of services for different purposes, like search, file, email, and notification. I have taken many courses, but in them the instructor separates each entity, gives it its own database, and creates an API for it (like a separate shopping cart entity and product entity). It makes no sense to me; I am not getting what the real-world use of microservices is, or how to separate components so that each one becomes its own microservice.
Can anyone give a real project example?
Thanks in advance
Read this and this. Also look here and here. I don't think that anyone will give a link to a real working project, so you can try this.
I am not getting what the real-world use of microservices is
Mostly, as you have heard in all of those tutorials, a microservices architecture offers these advantages:
Smaller services are easier to maintain and develop.
You can easily scale specific services rather than the whole project (monolith). For example, you might scale service-1 to 4 instances so that request traffic is split across those 4 instances, scale service-2 to 2 instances, and so on (load balancing). These services may also be distributed across different servers and locations.
If one service fails, it does not bring down the whole system, since the services are independent.
Services can be reused for other scenarios or features.
A small team can work on each service, and it is easier to manage both the project and the development flow.
It also suffers from some disadvantages:
Each service is simple and small, but the system as a whole is complex, so the design phase is critical.
Performance can suffer, and extra work is required to improve it (different types of caching at different levels).
Transactions are complex and time-consuming to develop. Imagine that a simple update has to be projected to other services; you have to consider failure and rollback strategies (the saga pattern), as in the sketch after this list.
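A rough sketch of the saga idea from the last point, with hand-rolled compensation and hypothetical service calls (not tied to any particular saga library):

```js
// A hand-rolled saga: run the steps in order; on failure, undo the completed ones in reverse.
async function runSaga(steps) {
  const completed = [];
  try {
    for (const step of steps) {
      await step.execute();
      completed.push(step);
    }
  } catch (err) {
    for (const step of completed.reverse()) {
      await step.compensate(); // rollback strategy
    }
    throw err;
  }
}

// Illustrative usage; reserveStock, chargeCard, etc. are hypothetical service calls.
// runSaga([
//   { execute: () => reserveStock(order), compensate: () => releaseStock(order) },
//   { execute: () => chargeCard(order),   compensate: () => refundCard(order) },
// ]);
```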
how to separate components so that each one becomes its own microservice
This is the most challenging part of microservices. You need to study domain-driven design (DDD) in depth.
Decompose by subdomain
Decompose by business capability
Can anyone give a real project example?
There are many projects that implement microservices with different patterns. I think you have to start your own and get your hands dirty.

Should I be moving to a microservices based architecture?

I am working on a monolith system. All of its code is in one repository (Web API and background workers). The system is written in Node.js, and MongoDB (via Mongoose) is used as the data store. My goal is to set a new path for how the project should evolve. At first I was wondering if I could move towards a microservices-based architecture.
The monolithic architecture creates some problems:
If my background workers need to scale, I have to deploy the whole project to the server despite only using a small fraction of it.
The whole system must be redeployed when code changes. What if the payment processor calls a webhook while the system is being redeployed?
The advantages of using microservices are quite obvious:
A smaller code base for each individual microservice, which is easier to reason about.
The ability to pick the programming tools best suited to a particular use case.
Easier to scale.
Looking at the current code, I noticed that Mongoose ODM (Object Document Mapper) models are used across the whole project to create, query, and update documents in the database. As a principle of good programming, all such interactions with the database should be abstracted; business logic should not leak into other system layers. I could do that by introducing the REPOSITORY pattern (Domain-Driven Design). While the code is still shared across the Web API and its background workers, this is not a hard task to do.
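A minimal sketch of such a repository wrapping a Mongoose model (the Order model and its fields are illustrative, not taken from the actual project):

```js
// orderRepository.js - keeps Mongoose details behind one interface (illustrative model)
const mongoose = require('mongoose');

const orderSchema = new mongoose.Schema({ customerId: String, total: Number, status: String });
const OrderModel = mongoose.model('Order', orderSchema);

class OrderRepository {
  async findById(id) {
    return OrderModel.findById(id).lean();
  }

  async save(order) {
    return OrderModel.create(order);
  }

  async findByCustomer(customerId) {
    return OrderModel.find({ customerId }).lean();
  }
}

module.exports = new OrderRepository();
```

Callers depend only on the repository's methods, so changing the persistence details later does not ripple through the business logic.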
If I decide to extract the repositories into standalone microservices, then a whole bunch of problems arise:
Some sort of query language must be introduced to accommodate complex search queries.
The interface must provide a way to iterate over search results (cursor-based navigation) without returning all database documents over the network; see the sketch below.
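As a sketch of that cursor-based navigation, the repository could expose a paging method keyed on _id rather than returning everything at once (reusing the illustrative OrderModel from the sketch above):

```js
// Cursor-based paging: the caller passes back the last _id it saw (null for the first page).
async function findPage({ afterId = null, limit = 50 } = {}) {
  const filter = afterId ? { _id: { $gt: afterId } } : {};
  const docs = await OrderModel.find(filter).sort({ _id: 1 }).limit(limit).lean();
  const nextCursor = docs.length ? docs[docs.length - 1]._id : null;
  return { docs, nextCursor };
}
```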
Since the project is in its early stage and I am the only developer, going to a microservices-based architecture seems like overkill. Maybe there are other approaches I should consider?
For example, extracting business logic and database interaction into a separate repository (package) and sharing it among the services, to avoid complex communication protocols between them?
Based on my experience working with microservices over the last few years, it seems like overkill in your current scenario but pays off in the long term.
Based on the information stated above, my thoughts are:
Code structure - Applying a microservices architecture (MSA) in the above context does not mean separating the DAO, business logic, etc.; rather, it is about designing the system around business functions. For example, if it is an e-commerce application, you can have shipping, cart, and search as separate services, which can be further divided into smaller services. Read more about domain-driven design here.
Deployment unit - Keeping each microservice as an independent deployment unit is a key principle. Hence, take a vertical slice of the application and package it as a Docker image with the application code, app server (if any), database, and OS (Linux, etc.).
Communication - With MSA, communication between services becomes key, and hence the general practice is to stick with a message-oriented approach (read about reactive systems and reactive programming for more insight).
PaaS solutions - There are multiple PaaS solutions available which you can use so that you don't need to worry about other aspects like container management, container orchestration, auto-scaling, configuration management, log management, and monitoring. See the following PaaS solutions:
https://www.nanoscale.io/ by TIBCO
https://fabric8.io/ - by RedHat
https://openshift.io - by RedHat
Cloud vendor platforms - AWS, Azure, and Google Cloud all have specific support for microservice applications from a deployment perspective, which you can use as an alternative if you don't want to run a PaaS solution in your organization.
Hope these pointers help in understanding the overall landscape so that you can structure your architecture for future needs.
I am working on a monolith system... My goal is to set a new path for how the project should evolve. At first I was wondering if I could move towards a microservices-based architecture.
In what ways do you need to evolve the project? Will it be mostly bugfixes, adding features, improving performance and/or scalability? Do you anticipate other developers collaborating in the future? Are you currently having maintenance issues? The answers to these questions (and many more) should be considered in guiding your choices.
You seem to be doing your homework around the pros and cons of a microservice architecture, so if you haven't asked yourself why you're even doing this in the first place, now would be good time to do so.
Maybe there are other approaches I should consider?
There's always the good old don't-break-what's-working ;)

Integration of bounded contexts locally

In "Implementing Domain-Driven Design", Vernon give detailed examples for integrating bounded context with a messaging or REST based solution, it also mention database integration, but I understand it is not a very clean solution to share database or at least db tables between BC.
But what if the 2 BCs I want to integrate are hosted locally on the same server, is it really a good idea to use a messaging/rest/rpc solution ? (which seems more suitable for a remotely hosted BC to me)
Otherwise, except with DB integration, what are the other alternatives ? Hosting both BC in the same process and calling it directly (still using adapters and translators for clean seperation) ?
Thanks
You could look into using something like 0MQ for inter-process communication on the same server. I've also, in the past, just hosted things in the same process as you suggest and used interfaces / in-memory messaging to separate out the contexts.
Everything is about trade-offs in the end, so you just need to decide what level of isolation you are willing to accept. The simplest solution would be to separate inside a solution via folders and interfaces, the other end of the spectrum being completely separate servers.
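A minimal sketch of the in-process, in-memory option, using Node's built-in EventEmitter as the seam between two bounded contexts (the context, event, and function names are made up for illustration):

```js
// In-process integration: both BCs share a process but only talk through events.
const { EventEmitter } = require('events');
const bus = new EventEmitter();

// Sales BC publishes a domain event without knowing who listens.
function placeOrder(order) {
  // ... sales-context domain logic would run here ...
  bus.emit('sales.order-placed', { orderId: order.id, items: order.items });
}

// Hypothetical shipping-context entry point.
function scheduleShipment(shipment) {
  console.log('shipping: scheduling', shipment);
}

// Shipping BC subscribes through an adapter that translates the event
// into its own model, keeping the contexts cleanly separated.
bus.on('sales.order-placed', (event) => {
  const shipment = { orderRef: event.orderId, lines: event.items }; // translator
  scheduleShipment(shipment);
});
```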
I don't think that location should come into play w.r.t. integration between BCs.
There really are other factors to consider such as guaranteed delivery to the recipient in order to ensure that the processing takes place. This should be required whether or not the two BCs are hosted on the same server.
Another reason to ignore location is that when you need to scale, your architecture should be able to handle it from the get-go.
As tomliversidge mentioned, it is possible to use some deployment mechanisms, such as non-durable messaging, to speed things up, but there will definitely be a trade-off, and that has to be a conscious decision.

Is it common practice to have a shared and dedicated assembly or library for messages in a distributed system?

I'm taking a crack at some of the concepts behind distributed domain-driven design and I'm building a proof of concept. I have three C# solutions that each have a specific responsibility within the overall system.
The solutions I have are:
The write model (receives commands from a client and creates and sends events)
The read model (receives events from write model, creates a database and exposes DTO services to the client, could potentially be 2 separate solutions)
The client (calls services to get needed data and sends commands to the write model)
All three solutions use messaging (commands, events) through a service bus. (MassTransit in my case).
My main question is: Is it common practice to create an assembly with the messages and have each solution reference that assembly?
Extra credit: Is there anything I'm doing that seems weird or problematic in this POC? Any additional info I should be aware of when creating this type of a system?
Is it common practice to create an assembly with the messages and have each solution reference that assembly?
Yes. This is a common practice with messaging systems in general. For example, many NServiceBus samples employ this approach. Think of this assembly as representing your contract. In systems built upon different platforms this representation would come in the form of an XSD schema or some other schema definition mechanism.
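In the Node.js world that the overall thread is about, the analogue would be a shared package that every solution installs and that contains nothing but the message contracts; a rough sketch (package layout and message names are illustrative):

```js
// contracts/index.js - a shared npm package holding only message definitions,
// the Node.js analogue of the shared messages assembly in the .NET solutions.
const PlaceOrder = (orderId, items) => ({ type: 'PlaceOrder', orderId, items });         // command
const OrderPlaced = (orderId, placedAt) => ({ type: 'OrderPlaced', orderId, placedAt }); // event

module.exports = { PlaceOrder, OrderPlaced };
```

The write model, read model, and client would each list this package as a dependency, so the contract lives in exactly one place.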
Is there anything I'm doing that seems weird or problematic in this POC? Any additional info I should be aware of when creating this type of a system?
Everything seems well fitted to CQRS so far. To be fair, I should mention that it can be easy to get carried away with CQRS as a silver bullet and structure systems around it. It is often a wise decision to forgo CQRS altogether. Keep the focus on the business domain and use CQRS as an architectural style to implement your system, not to guide its model.

DDD with .NET - Is there common infrastructure library available?

We're starting a web application using DDD and CQRS (using the ncqrs framework), and before we get started writing our own infrastructure class library, I wanted to see if any are already available.
I'd think at least some basic interfaces and common implementations for writing to the file system, sending emails, etc. could be used in any project.
Those types of services are sufficiently context-dependent to be unyielding to common frameworks beyond the facilities provided by the .NET Framework. There may be frameworks centered around specific tasks, such as emailing; however, you're better off selecting a solution that fits the requirements, rather than the converse. Instead, consider reviewing some sample DDD projects as listed here.
I agree with what eulerfx stated earlier. I'd add that if you depend upon a framework for using DDD and CQRS, then you risk depending on the framework and not truly understanding what is happening. As a result, you may miss what DDD (and CQRS) is providing to you.
I will state that I started off learning about CQRS by using a framework (NCQRS in fact), but my DDD knowledge was based on Evans' book and I didn't look for a framework for modeling my domain. As each domain is unique to the problem, I think it's hard to truly have a framework that "helps" you implement DDD.
In retrospect, I wish I had not gone with NCQRS right from the start as I missed or passed over some of the subtleties of the CQRS pattern.
There are probably some DDD frameworks out there, but I'd recommend forgoing them and build your own. You'll thank yourself later.
Hope this helps. Good luck!
You can try my library CoreDdd (documentation here, blog posts about it here). It contains support for DDD (entities, aggregate roots) and CQRS (commands, queries). There is no support for writing to the file system or sending emails; use standard .NET for that.
