How should I handle interaction between multiple aggregate roots? - domain-driven-design

I have read this post, in which Udi Dahan talks about many-to-many relationships.
In that example he explains that, for a many-to-many relationship like the one between a job and job boards, and within the bounded context of adding a job to a job board, the aggregate root would be the job board and you just add the job to it.
In the comment section he also explains that in a different bounded context the job would be the aggregate root, which makes sense since the job can exist without the job board and there are many operations you could perform on a job that do not affect the job board.
I have a similar problem but I cannot seem to figure out how this would work. I have two issues:
In the case where we need to delete a job, it looks like, depending on whether the job has been posted or not, we either delete the job alone or delete the job and also remove it from the board, which would mean modifying two aggregate roots in the same transaction. Also, where should this code go? A domain service?
If job and job board can be two different aggregates, the job entity needs to exist in both contexts, so how do we deal with this? Just create two job classes with duplicated data?
UPDATE 1:
So this is the scenario I'm dealing with... I have a routing app: I have requests, which represent trip requests; I have routes, which have stops; and each stop has one or more requests. In order to create a route, I use an external service that does the routing and stores the routing result in a routing table.
The problem is that I do not know how to model this relationship. Here is a use case to consider: request cancellation is a process that, depending on the state of the request and the state of the route, can lead to different actions:
The request is not routed (not assigned to a route): just cancel the request, and that is it.
The request is routed and the route is scheduled: cancel the request, remove the request from the route, and re-create the route (using an external library). Since removing a request may lead to removing a stop, I need to re-create the route internals; it is still an update.
The request is routed and the route is en route: I mark the request as a no-show and update the route.
So at first I thought that request, route, and routing table are separate aggregates, but that means I need to modify more than one aggregate in the same transaction (either by using a service or by using domain events).
I'm not sure if it makes sense to create a higher-level aggregate root (with request and route data, and eventually routing data) because I won't always have all the data to load the aggregate root; in fact, most of the time I'll have only a portion of it, either one request or a route with multiple requests.
I'm open to suggestions, because I cannot seem to find a solution to this.
UPDATE 2:
To add some more context, here is more detail about the entities:
Request: represents a trip request; it has several states with a defined workflow.
Route: has a defined workflow with defined transitions and a collection of stops; each stop has a collection of payloads, and each payload has a request id (Route -> stops[] -> payloads[] -> requestId).
Routing: represents the result of calling a routing engine, which, based on a series of requests that you want to route, generates the route(s).
These entities are stored in MongoDB collections. Let's look at the use cases:
UC - Request Cancellation
I can cancel a request using the request id only, but depending on the state of the request I may need to modify the route also.
1. The request is NOT routed: with the request id, I get the request and cancel it. This one is simple.
2. The request is routed and the route is scheduled: I need to get the request, then get the route and all the requests tied to that route (including the one that triggered the command), then remove the payloads (and the stop, if it has only one payload) tied to the request. Since this can change the stops, I need to re-create the route using an external API (the routing engine) and create an entry in the routing table.
3. The request is routed and the route is en route: I need to get the request, then get the route and all the requests tied to that route (including the one that triggered the command), mark the request as a no-show, and also mark the payload as a no-show.
UC - Start a route
Once a route is created and scheduled, I can start it, which means modifying the state of the route, the stops, the payloads, and the associated requests.
As you can see in the use cases, route, request, and routing table are very closely related, so at first I thought of having separate aggregate roots (Request is an AR, Route is an AR, Routing is an AR), but this means modifying more than one AR in the same transaction.
Now let's see what an AR that holds all the entities would look like:
class Aggregate {
  // routeData may be missing when only a single request is being handled
  constructor(routeData, requests = []) {
    this.routeData = routeData;
    this.requests = requests;
  }
}
So let's look at the use cases again:
UC - Request Cancellation
1. In this scenario I only have the data of one request, so I have to leave routeData empty, which does not sound right.
2. In this one I have route and request data, so I'm good.
3. In this one I have route and request data, so I'm good.
The main problem here is that some operations are done on a single request, some on the route, and some on both. So I cannot always load the aggregate by id, or at least not always by the same id.

There is no such thing as "two aggregate roots in the same transaction". Transactions are scoped to a single aggregate, since in theory each aggregate should live in its own microservice. The proper way to update two or more aggregates in an atomic way is with a saga. Sagas are a complex/advanced topic; I recommend avoiding them if you can by re-thinking your design.
Splitting an entity between two bounded contexts is perfectly fine and most of the time necessary, but these entities should be adapted to fit their context. For example, in the boards bounded context the "job" entity could be a "board card", which will not have the same properties as the "job" entity from the jobs bounded context.
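A minimal TypeScript sketch of that split (all class and property names are made up for illustration, not taken from the post):

// Jobs bounded context: the full Job aggregate with its own lifecycle.
class Job {
  constructor(
    public readonly jobId: string,
    public title: string,
    public description: string,
    public status: "draft" | "published" | "archived" = "draft",
  ) {}

  archive(): void {
    this.status = "archived";
  }
}

// Boards bounded context: only what a board needs to know about a job.
class BoardCard {
  constructor(
    public readonly jobId: string, // shared identity; duplicating a few fields is fine
    public title: string,
  ) {}
}

class JobBoard {
  private cards: BoardCard[] = [];

  constructor(public readonly boardId: string) {}

  addCard(card: BoardCard): void {
    if (this.cards.some((c) => c.jobId === card.jobId)) return; // idempotent add
    this.cards.push(card);
  }

  removeCard(jobId: string): void {
    this.cards = this.cards.filter((c) => c.jobId !== jobId);
  }
}

The two models share only the job id; each context keeps just the properties it needs, so updating one does not require a transaction spanning the other.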

Related

Where should calculations be done in a MEAN stack app

I am building an ecommerce website for a project for my portfolio, and I wanted to know where the calculations should be done for the cart.
Normally I use React and create a model folder, a route folder, and a controller folder, but the way I was taught Angular, it seems like the services act like the routes, and the actual calls to the database are done in the Node server file, which I am sure I could separate into its own controller file. My question is: where should the calculations for the cart be done before I send the order to the database? I thought about doing it in the cart component before the order is placed, but should it be done in the services, or in the backend in the controller? I am just trying to figure out what the standard is.
When writing an Angular app, I think it is important to adhere to the following principles:
Components - should have a single responsibility for simple view logic only, shouldn't reach out to the server, and shouldn't do complex calculations and/or logic that is not related to the view.
Services - should have a single responsibility for reusable/shared and complex logic, do the outbound communication with the server, and act as data stores (using BehaviorSubjects).
Therefore, if your calculations are needed to update the view of the cart, I would vote that these calculations need to be made at the component. If your calculations are needed to update the items or the request to be sent to the server, they need to be made at the service.
Remember, the component "shouldn't know" how the data comes to it or how it is manipulated or sent to the server. The component should only know, given any data - how to present it in the view, and shouldn't "worry about" how that data came to it. Similarly, the component shouldn't know how the data is calculated before being sent to the server, and this would fall within the responsibility of the service that works with and processes the cart data and builds the request to the server.
However, you always have to consider the security of your app and whether a malicious data modification on the client side can affect your cart. If such calculations affect the app's security, they should at least be validated at the server, if not fully delegated to it.
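As a rough sketch of that split (the CartService and CartItem names here are made up for illustration):

import { Component, Injectable } from "@angular/core";
import { BehaviorSubject } from "rxjs";

interface CartItem {
  name: string;
  unitPrice: number;
  quantity: number;
}

// Service: owns the cart data and the calculation logic.
@Injectable({ providedIn: "root" })
export class CartService {
  private readonly items = new BehaviorSubject<CartItem[]>([]);

  addItem(item: CartItem): void {
    this.items.next([...this.items.value, item]);
  }

  // The calculation lives in the service, not the component.
  get total(): number {
    return this.items.value.reduce(
      (sum, item) => sum + item.unitPrice * item.quantity,
      0,
    );
  }
}

// Component: only presents the data it is given.
@Component({
  selector: "app-cart-summary",
  template: `<p>Total: {{ cart.total }}</p>`,
})
export class CartSummaryComponent {
  constructor(public cart: CartService) {}
}

The server should still re-validate the total before persisting the order, as noted above.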
I don't know exactly what calculation you need, but since it is an e-commerce website I assume it is simple math such as the total payment amount at checkout.
The main role of the server is communicating with the database. If a task does not involve interacting with that data, you can do the calculation on the client side. Keeping it on the client side gives you direct access to the details of your formula and reduces the round trips between client and server.

Send a PUT/GET/POST request to JHipster in one single transaction

I am quite new to JHipster and have problems understanding some of its functionality. Hence, here is my question.
I have the following two microservices.
Microservice 1 (MS1) has the following data structures in Java:
class Lead {
    Customer customer;
    Deal deal;
}

class Customer {
    Integer phoneNumber;
    // etc...
}

class Deal {
    Integer value;
    // etc...
}
Microservice 2 (MS2) is a JHipster generated database.
The DB only has the following SQL tables :
CUSTOMER
LEAD
When changes happen in Microservice 1, I send 2 separate PUT requests from MS1 to MS2.
first a request to update CUSTOMER through the /customer API in MS2
if the update is OK, then a request to update DEAL through the /deal API in MS2
For a Lead update to succeed, the PUT requests to Customer and Deal should all be OK. If updating one table fails, all should fail.
Hence, I would like to avoid sending two separate requests, to avoid the case where the CUSTOMER request is OK and the DEAL request fails for whatever reason.
If possible, I would like to send one single transaction through an API such as /lead that updates the two tables.
What is the best way I can achieve this without creating an extra table for LEAD?
e.g., a layer/service that I should generate using JHipster.
If possible (though not strictly necessary), I would like to avoid touching code that is frequently regenerated (e.g., Customer, Deal).
Please kindly direct me to documentation too, if it already exists. The docs are quite hard to understand, so I am not sure whether any current page specifically addresses this problem. Thank you.
This is a common issue when directly exposing JPA entities from a CRUD REST API.
Your persistence model does not need to be your API model.
If two entities are related and should be updated within the same transaction, it means that they should be updated with one atomic API request.
So, you could define a new resource with a DTO combining your two entities, exposed by a new API endpoint that you would code manually (so there is no need for an additional table).
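To illustrate the idea only (a real JHipster backend would be Java/Spring; this is a generic TypeScript/Express sketch with made-up names and a stubbed persistence layer):

import express from "express";

// Combined API model: one payload updates both entities atomically.
interface LeadDTO {
  customer: { id: number; phoneNumber: number };
  deal: { id: number; value: number };
}

// Stand-in for a real transactional persistence layer.
const db = {
  async transaction(
    work: (tx: { update: (table: string, row: object) => Promise<void> }) => Promise<void>,
  ): Promise<void> {
    const tx = { update: async (_table: string, _row: object) => { /* write the row */ } };
    await work(tx); // a real implementation would commit here, or roll back on error
  },
};

const app = express();
app.use(express.json());

// One manually coded endpoint instead of two generated CRUD endpoints.
app.put("/api/leads", async (req, res) => {
  const lead = req.body as LeadDTO;
  try {
    await db.transaction(async (tx) => {
      await tx.update("customer", lead.customer);
      await tx.update("deal", lead.deal);
    });
    res.sendStatus(204); // both updates committed
  } catch {
    res.status(500).json({ error: "lead update failed" }); // neither committed
  }
});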
As you are using a microservices architecture, you could have a similar situation between MS1 and MS2; there you could not use a transaction, and you would then have to implement remediation.

Application-side join ORM for Node?

To start: I've tried Loopback. Loopback is nice but does not allow for relations across multiple REST data services, but rather makes a call to the initial data service and passes query parameters that ask it to perform the joined query.
Before I go reinventing the wheel and writing a massive wrapper around Loopback's loopback-rest-connector, I need to find out if there are any existing libraries or frameworks that already tackle this. My extensive Googling has turned up nothing so far.
In a true microservice environment, there is a service per database.
http://microservices.io/patterns/data/database-per-service.html
From this article:
Implementing queries that join data that is now in multiple databases is challenging. There are various solutions:
Application-side joins - the application performs the join rather than the database. For example, a service (or the API gateway) could retrieve a customer and their orders by first retrieving the customer from the customer service and then querying the order service to return the customer’s most recent orders.
Command Query Responsibility Segregation (CQRS) - maintain one or more materialized views that contain data from multiple services. The views are kept by services that subscribe to the events that each service publishes when it updates its data. For example, the online store could implement a query that finds customers in a particular region and their recent orders by maintaining a view that joins customers and orders. The view is updated by a service that subscribes to customer and order events.
EXAMPLE:
I have 2 data microservices:
GET /pets - Returns an object like
{
"name":"ugly",
"type":"dog",
"owner":"chris"
}
and on a completely different microservice....
GET /owners/{OWNER_NAME} - Returns the owner info
{
"owner":"chris",
"address":"under a bridge",
"phone":"123-456-7890"
}
And I have an API-level microservice that is going to call these two data services. This is the microservice where I will be applying this.
I'd like to be able to establish a model for Pet such that, when I query pet, upon a successful response from GET /pets, it will "join" with owners (send a GET /owners/{OWNERS_NAME} for all responses), and to the user, simply return a list of pets that includes their owner's data.
So GET /pets (maybe something like Pets.find()) would return
{
"name":"ugly",
"type":"dog",
"owner": "chris",
"address": "under a bridge",
"phone": "123-456-7890"
}
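A minimal sketch of the application-side join that produces the merged response above, done in the API-level service (the service base URLs are made up, and GET /pets is assumed to return an array):

interface Pet {
  name: string;
  type: string;
  owner: string;
}

interface Owner {
  owner: string;
  address: string;
  phone: string;
}

// Hypothetical base URLs for the two data microservices.
const PETS_SERVICE = "http://pets-service";
const OWNERS_SERVICE = "http://owners-service";

async function getPetsWithOwners(): Promise<Array<Pet & { address: string; phone: string }>> {
  const pets: Pet[] = await fetch(`${PETS_SERVICE}/pets`).then((r) => r.json());

  // De-duplicate owner lookups so each owner is fetched only once.
  const ownerNames = [...new Set(pets.map((p) => p.owner))];
  const owners: Owner[] = await Promise.all(
    ownerNames.map((name) =>
      fetch(`${OWNERS_SERVICE}/owners/${encodeURIComponent(name)}`).then((r) => r.json()),
    ),
  );
  const ownersByName = new Map(owners.map((o) => [o.owner, o]));

  // Merge the owner's fields into each pet record.
  return pets.map((pet) => {
    const ownerInfo = ownersByName.get(pet.owner);
    return {
      ...pet,
      address: ownerInfo?.address ?? "",
      phone: ownerInfo?.phone ?? "",
    };
  });
}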
Applying any model/domain logic in your API gateway is a bad decision and is considered bad practice. The API gateway should only handle your system's CAS (relying on the auth service, which holds that logic), convert incoming external requests into internal system requests (different headers / requester payload data), proxy the formatted requests to the services for any other work, receive their responses, take care of encapsulating errors, and present every response in the proper external form.
Another point: if a lot of joins between two models are required for the application's core flow (validation/scoping etc.), then perhaps you should reconsider which business domain your models/services are bound to. If it's the same business domain, perhaps they should be together. The principles of Domain-Driven Design helped me understand where the real boundaries between microservices are.
If you work with Loopback (like we do, and you face the same problem we faced - that Loopback has no proper join implementation), you can have a separate report/combined-data service, which is the only one that can access all the service databases, and which does so only for READ purposes - i.e. queries. Provide it with separately set-up, read-only, wide access to the databases - instead of having only one datasource set up (a single database), it should be able to read from all the databases that are in scope for this query-join DB user.
Such a service should be able to generate proper joins with the expected output schema from configuration JSON - like Loopback models (that's what I did in the same situation). Once that abstraction is done, it's pretty simple to build/add any query with arbitrarily complex joins. It's clean and easy to reason about. Also, it's DBA friendly. For me this approach has worked well so far.

node.js: writing test cases for social networking like APIs

I'm developing APIs for a social networking web application for learning purposes. When I started to write test cases I got stuck on how to organize/write them. I'm initially proceeding like this:
First, set up global database initialization:
I need some users' auth tokens to test my routes, so I decided to set up this information in the global context. There is other information that also needs to be set up in the global context, so I'm setting that up too.
Then, for each route:
I started to write test cases with the intention that each route's test cases would be independent of each other.
And after completing all test suites:
I thought I would clean up my database.
The problems I'm facing with this approach are:
Say I want to test four routes named /users, /users/:id/my_invites, /send_invites, /response_invites. Further suppose I'm only interested in writing test cases for the GET request and response for /users and /users/:id/invites, and for POSTing data in the case of the others.
/send_invites and /response_invites definitely trigger actions on the server side that modify the database state.
As we can see, these routes affect the state of other routes' data. Say one user sends an invite to another user and gets a true/false response; for that user the request was successful, but how do we ensure that the other user actually received the invitation without checking his received-invitation documents (i.e. through another route) in the first route's test cases? In other words, /send_invites affects /users/:id/my_invites,
because these routes are dependent on each other.
So the questions I want to ask are:
How do I write test cases for these routes so that each route is independent?
I tried with three dummy test users in the global context, trying all sorts of combinations of them in all test suites. My test suite presently touches more than one route in order to check a single route's real functionality.
Can anyone suggest a better solution for writing test cases for the above-mentioned scenarios?
Maybe my question is too long or not clear. Please let me know and help me if you can.
My opinion:
First, the basics: EACH test case should have a fresh new state. That means that before you check one scenario, you want to flush your database and insert the data prepared for that scenario. You can use a real database or mock one with in-memory data, however you prefer.
Second, each endpoint potentially affects a lot of tables in your database, so it's perfectly normal to check the state of your data, whichever tables it lives in, to verify that the information is correct.
Mocha and other test frameworks have functions that help you do that, like beforeEach and afterEach, which set up and tear down your data around each test case.
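A minimal sketch of that approach with Mocha, Chai, and Supertest (the app import, the seed helpers, and the response shapes are all hypothetical, loosely based on the routes in the question):

import request from "supertest";
import { expect } from "chai";
import { app } from "./app";                         // hypothetical Express app
import { resetDb, seedUsers } from "./testHelpers";  // hypothetical helpers

describe("POST /send_invites", () => {
  let alice: { id: string; token: string };
  let bob: { id: string; token: string };

  // Fresh state before every test: flush the database and re-seed it.
  beforeEach(async () => {
    await resetDb();
    [alice, bob] = await seedUsers(["alice", "bob"]);
  });

  it("creates an invite that shows up in the recipient's invite list", async () => {
    await request(app)
      .post("/send_invites")
      .set("Authorization", `Bearer ${alice.token}`)
      .send({ to: bob.id })
      .expect(200);

    // Asserting through the second route is fine: it verifies the observable
    // behaviour of /send_invites, while the data itself was seeded fresh above.
    const res = await request(app)
      .get(`/users/${bob.id}/my_invites`)
      .set("Authorization", `Bearer ${bob.token}`)
      .expect(200);

    expect(res.body).to.have.lengthOf(1);
  });
});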

Queue Requests To MVC Controller

I have an interesting problem that I need to solve and I have no clue where to even start. I am writing an MVC web application that takes a list of records via a form and makes an Ajax call for each. The controller that the Ajax call hits uses a resource that can only process one request at a time. The simple solution is to make the Ajax calls synchronous; however, that hangs the browser and gives a poor experience.
Also, it is possible that multiple users might use this app concurrently, so queuing on the client side will not work.
Anyone have any suggestions?
Mike
Well first off, my requirements are not quite the same as yours. My problem was that my backend database tends to be a little slow, and user responsiveness was extremely important.
Therefore, I had to remove the database interaction from the equation.
My solution has two main parts:
Maintain a server side cache of the data
Create a separate process to contain all database work that can interact with the server
The separate process was implemented as a named pipe WCF service hosted by a windows service.
The basic process overview is:
User clicks "Save", Ajax post the form to an Mvc controller
The controller updates the cache data, then invokes the WCF pipe
The service pushes the data into a concurrent queue (along with a session ID), and returns a guid token
The controller returns the token as a JSON response.
jQuery Ajax handler intercepts the response and saves the token into a UI element that represents the "Saved" form.
The service itself works like this:
On start create a timer.
On Timer tick:
Stop the timer.
Remove all queued work items from the concurrent queue
Send each item to be processed by the work processor
Add each item to the "Completed" or "Has an Error" concurrent dictionary, keyed by the earlier session ID (along with some timekeeping data to eliminate stale entries). This includes the original work token.
Start the timer again.
Back in user land, there is a javascript setInterval loop running:
Ajax request to the server (Heartbeat controller)
The controller connects to the service, and passes the current session id
The service returns all items from the "Completed" and "Error" dictionaries
The controller returns the lists as JSON object arrays
The javascript loops through the returned lists and uses the tokens to do appropriate UI updates
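A rough sketch of that client-side polling loop in TypeScript (the original uses jQuery against an ASP.NET MVC heartbeat controller; the endpoint name and payload shape here are made up):

interface CompletedItem {
  token: string;
  status: "completed" | "error";
  message?: string;
}

// Poll the heartbeat endpoint every few seconds and update the UI
// for every form whose token comes back as finished.
function startHeartbeat(sessionId: string, intervalMs = 3000): number {
  return window.setInterval(async () => {
    const res = await fetch(`/heartbeat?sessionId=${encodeURIComponent(sessionId)}`);
    if (!res.ok) return; // try again on the next tick

    const items: CompletedItem[] = await res.json();
    for (const item of items) {
      // The token was stored on the form element when the save was queued.
      const form = document.querySelector<HTMLElement>(`[data-token="${item.token}"]`);
      if (!form) continue;
      form.dataset.status = item.status;
      form.classList.toggle("save-error", item.status === "error");
    }
  }, intervalMs);
}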
The end result is a very responsive UI despite the slow backend persistence server.
If you want any specific portions of implementation code let me know.
