Bulk update of aggregates based on state change of an aggregate - domain-driven-design

I'm working on an event sourced application following DDD and CQRS principles, which allows the posting of ads to sell goods.
There's one specific invariant that I'm trying to model which seems to involve a bulk update of ARs, and I don't really know how to go about it.
The invariant is as such:
A Member can post an ad
A Member could be banned by an Admin
If a Member is banned, his ads must be suspended
For the purposes of the discussion, an Ad needs to have a status, as a Member can buy an item by clicking on an ad, so it's important to know whether an ad is active.
I have designed my aggregate roots as such:
Member
Ad
Order
A Member can be a buyer or a seller, depending on the context, so I decorate the member object as needed.
When ads are published, they are of course inserted in a read model.
Now, when a Member is banned, there's an associated event that the Member AR raises:
MemberWasBanned (MemberId)
My question is: how do I go about finding every Ad that the member owns and suspending them?
While I could rely on the member status for a buy transaction, it's important that the Ad tracks its own status: other similar operations could, for instance, trigger an email to the member indicating that his ads were suspended for a given reason.
So my best approach after a lot of thinking is to create a long-running process, in which I create a handler for MemberWasBanned, then go look for the member's active ads in the read model and issue commands to suspend them one by one.
I thought of using a process manager, but I read that you shouldn't access the read side from a PM, and in most cases a PM determines the command sent to ONE AR.
Am I missing something?

If you have a messaging mechanism, maybe you can "explode" the MemberWasBanned event.
Publish the MemberWasBanned (or equivalent) event through your messaging pipeline and subscribe to it from the context that handles ads. When this event is received, you can explode it into multiple DisableAd events that are also sent through your messaging system, each of them targeting one current ad of the banned member.
Each of these events then writes to only a single aggregate (one ad, disabling it) when it is processed by the messaging mechanism.
Meanwhile, the member's banned status will prevent further ads from being posted, so you will be safe on that end as well.
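As a rough sketch of that handler (the read model, command bus and message shapes below are illustrative assumptions, not a specific framework's API), the policy could look like this:

```typescript
// Event published by the Member AR.
interface MemberWasBanned {
  memberId: string;
}

// One command per Ad aggregate to suspend.
interface SuspendAd {
  adId: string;
  reason: string;
}

interface AdReadModel {
  // Returns the ids of all ads currently active for a member.
  findActiveAdIdsByOwner(memberId: string): Promise<string[]>;
}

interface CommandBus {
  send(command: SuspendAd): Promise<void>;
}

// Policy / process handler: reacts to MemberWasBanned and issues one
// SuspendAd command per active ad, so each command still targets a single
// aggregate (one Ad) in its own transaction.
class SuspendAdsOnMemberBanned {
  constructor(
    private readonly ads: AdReadModel,
    private readonly commandBus: CommandBus,
  ) {}

  async handle(event: MemberWasBanned): Promise<void> {
    const adIds = await this.ads.findActiveAdIdsByOwner(event.memberId);
    for (const adId of adIds) {
      await this.commandBus.send({ adId, reason: 'owner-banned' });
    }
  }
}
```

No invariant spans multiple aggregates here; each ad is suspended in its own transaction and eventual consistency does the rest.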

Related

Need help improving a design with aggregate roots

I have the following scenario:
You need to create a Request before it can become a Shop and get an Owner Account.
So one day you register a Request.
Two days later a manager reviews and approves your Request, which means the system has to create the Shop and the Owner Account.
In my model I thought that Request, Shop and Owner Account were 3 Aggregate Roots, but then I read that I cannot update more than one aggregate in one transaction because they may be (and in fact they are, because the Owner Account is in an external authentication service) on separate db servers.
The thing is, I still have a Request, and when it gets approved I need to create 2 aggregate roots: the Shop (with all the shop attributes; I only have some invariants on the data, like the limit of contact emails or phones) and the Owner Account.
Then one Owner Account can be allowed to edit someone else's Shop (like a collaborator).
How could I model it?
Thanks!
From your requirements, my design would be:
Two bounded contexts:
Shopping: it has two aggregates (Request and Shop).
Authentication: one aggregate (OwnerAccount).
There is eventual consistency:
(1) The Request aggregate has a method "approve". This method creates the RequestApproved event.
(2) The Shopping BC publishes the RequestApproved event.
(3) The Authentication BC is a subscriber to this event. It reacts to the event by creating an OwnerAccount aggregate.
(4) The constructor of the OwnerAccount aggregate creates the OwnerAccountCreated event.
(5) The Auth BC publishes the OwnerAccountCreated event.
(6) The Shopping BC is a subscriber to this event. It reacts to the event by creating a Shop aggregate.
The transaction creating the Shop aggregate is different from the one that created the Request aggregate.
Here's a diagram:
(Note: There's a message queue for each event type. Another option would be just one queue for all event types)
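As a rough sketch of steps (2) through (6), assuming a simple EventBus abstraction and the event names above; the bus, topic names and persistence calls are placeholders rather than a real framework:

```typescript
import { randomUUID } from 'node:crypto';

interface RequestApproved { requestId: string; ownerEmail: string; }
interface OwnerAccountCreated { ownerAccountId: string; requestId: string; }

interface EventBus {
  publish(topic: string, event: object): Promise<void>;
  subscribe(topic: string, handler: (event: any) => Promise<void>): void;
}

// --- Authentication BC: reacts to RequestApproved (steps 3-5) ---
function wireAuthenticationContext(bus: EventBus) {
  bus.subscribe('RequestApproved', async (event: RequestApproved) => {
    const ownerAccountId = randomUUID();
    // ... persist the new OwnerAccount aggregate here, in its own transaction ...
    const created: OwnerAccountCreated = { ownerAccountId, requestId: event.requestId };
    await bus.publish('OwnerAccountCreated', created);
  });
}

// --- Shopping BC: reacts to OwnerAccountCreated (step 6) ---
function wireShoppingContext(bus: EventBus) {
  bus.subscribe('OwnerAccountCreated', async (event: OwnerAccountCreated) => {
    // Create the Shop aggregate in its own transaction, referencing
    // event.ownerAccountId from the Authentication BC.
    // ... load the approved Request, build the Shop, persist it ...
  });
}
```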

How to handle incomplete stripe connected account onboarding?

I am after the best practice for handling incomplete Stripe connected account onboarding.
When onboarding goes smoothly, everything is simple. But there are fiddly edge cases everywhere, which result in a lot of permutations of values for the account's requirements.
These include:
current_deadline
currently_due
disabled_reason
errors
eventually_due
past_due
pending_verification
This creates a lot of complexity.
I need a simple way to:
figure out if the connected user needs to be notified of something (i.e. that they need to give more info), and
what to tell them.
My current strategy is to check if errors is empty, and if not, simply display them along with a link to manage the user's stripe account so they can address the errors.
But I'm worried this strategy will miss things (perhaps minor things that could be addressed before they become errors).
TL;DR I suspect most users will onboard without any problem, but for the few who do have issues, I want to ensure the app notifies them that they need to address them. What is the best way to do this? (using the information in requirements or other info)
When handling identity verification manually using the API, a simple way to check whether your connected user might need to be notified to provide more info is to look at the charges_enabled and payouts_enabled properties on the user's account object. If either of these two properties is false, then you might need to reach out to the connected user for more information.
In cases where the connected user's charges and payouts are disabled, you would use the disabled_reason property on the requirements hash to learn the reason why charges and/or payouts are disabled. The possible disabled reasons are all documented here, but I'll list them out nonetheless:
action_required.requested_capabilities: You need to request capabilities for the connected account. For details, see Request and unrequest capabilities.
requirements.past_due: Additional verification information is required to enable payout or charge capabilities on this account.
requirements.pending_verification: Stripe is currently verifying information on the connected account.
rejected.fraud: Account is rejected due to suspected fraud or illegal activity.
rejected.terms_of_service: Account is rejected due to suspected terms of service violations.
rejected.listed: Account is rejected because it's on a third-party prohibited persons or companies list (such as financial services provider or government).
rejected.other: Account is rejected for another reason.
listed: Account might be on a prohibited persons or companies list (Stripe will investigate and either reject or reinstate the account appropriately).
under_review: Account is under review by Stripe.
other: Account isn't rejected but is disabled for another reason while being reviewed.
Using the disabled_reason, you can assess whether the user needs to be notified with a request for more information (i.e., requirements.past_due), whether they need to be notified for another reason (e.g., rejected.listed), or whether you need to make programmatic changes to the user's Stripe account (e.g., action_required.requested_capabilities).
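A hedged sketch of that check with the official stripe Node library; the notification helpers are placeholders for your own code, and the exact disabled_reason strings depend on your API version:

```typescript
import Stripe from 'stripe';

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY ?? '');

async function checkConnectedAccount(accountId: string): Promise<void> {
  const account = await stripe.accounts.retrieve(accountId);

  // If both are true, the account is fully enabled and nothing is due.
  if (account.charges_enabled && account.payouts_enabled) return;

  const reason = account.requirements?.disabled_reason;

  if (reason === 'requirements.past_due') {
    // The connected user must provide more information: tell them what.
    notifyUser(accountId, account.requirements?.currently_due ?? []);
  } else if (reason === 'action_required.requested_capabilities') {
    // Platform-side fix: request the missing capabilities programmatically.
    await requestMissingCapabilities(accountId);
  } else if (reason?.startsWith('rejected.')) {
    // Rejected accounts generally can't be fixed by the user filling in a form.
    notifyUser(accountId, [`Account rejected: ${reason}`]);
  }
  // 'requirements.pending_verification', 'under_review', etc. usually need
  // no user action; Stripe is still processing.
}

// Placeholder implementations so the sketch compiles.
function notifyUser(accountId: string, items: string[]): void { /* ... */ }
async function requestMissingCapabilities(accountId: string): Promise<void> { /* ... */ }
```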

Decouple account and user service

I'm trying to decouple these two: the account service and the user service.
On registration, the user enters their account information (business name, tax number, etc.) plus user information (profile settings, etc.).
I'm wondering what the best approach is to decouple these two services on signup, so that the one form submission gets stored in both services.
Currently I have a setup where, when a user registers, a POST request is sent to the auth service, and the auth service fires a webhook to the account and user services.
I feel like this is prone to errors and might not be the standard approach.
I'd like to know how to do this properly, i.e. how do Netflix, Google, etc. do this?
Many thanks!
You need a mechanism for implementing transactions that span multiple microservices. In this case, the microservice architecture suggests using the saga design pattern with compensating transactions.
A saga is a sequence of local transactions across a set of microservices. Each local transaction updates one microservice's own database and publishes a message or event that starts the next transaction in the saga, in the next microservice. If a transaction fails in any microservice, the saga performs a series of compensating transactions that undo the changes made by the previous transactions.
Sagas are divided into two types:
Choreography: each transaction publishes events that in turn trigger transactions in other microservices (without an orchestrator).
Orchestration: there is a separate orchestrator that manages the transactions and tells the participants which transactions need to be performed.
You should also keep in mind that when you add such a pattern to the architecture, its complexity increases significantly; for more information about the problems and disadvantages, see the link.
More details can be found at links 1 and 2.
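For illustration, a minimal choreography-style sketch of the signup flow, assuming a generic message broker interface and made-up event names (UserRegistered, AccountCreated, AccountCreationFailed):

```typescript
interface Broker {
  publish(topic: string, payload: object): Promise<void>;
  subscribe(topic: string, handler: (payload: any) => Promise<void>): void;
}

// Auth service: owns the signup entry point and starts the saga.
function wireAuthService(broker: Broker) {
  // Called by the signup HTTP endpoint after credentials are stored.
  return async function onSignup(form: { email: string; businessName: string }) {
    await broker.publish('UserRegistered', form);
  };
}

// Account service: local transaction + follow-up event (or compensation).
function wireAccountService(broker: Broker) {
  broker.subscribe('UserRegistered', async (form) => {
    try {
      // ... insert account row (business name, tax number, ...) ...
      await broker.publish('AccountCreated', { email: form.email });
    } catch {
      // Compensating event: the auth service can disable/remove the credentials.
      await broker.publish('AccountCreationFailed', { email: form.email });
    }
  });
}

// User service: final step of the saga.
function wireUserService(broker: Broker) {
  broker.subscribe('AccountCreated', async (payload) => {
    // ... insert the profile/settings row for payload.email ...
  });
}
```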
Likely yes, the approach is correct: the service among the two that gets the data directly from the frontend has the responsibility of sending an asynchronous event to the other service, so that it can also mutate the data in its database and data consistency remains maintained, i.e. the paradigm of eventual consistency itself.
The only thing is that the message carriers/brokers should be highly available.

How to access multiple remote services in transaction-like manner

I have an endpoint responsible for creating a paid subscription. However, in order to create the subscription I need to access multiple different services in succession:
1) create subscription with a token provided by front-end (generated by a direct call from front-end app to payment system) (Call to the payment system)
2) get Billing information to save in database (Call to the payment system)
3) save some of billing info (f_name, l_name) and provided shipping info (Call to the database)
4) subscribe customer to the mailing list (Call to the email service provider)
Any of these steps can fail due to the service being unavailable, problems with the internet connection in the DC, or any number of other problems that are not controllable by developers. Are there any options for processing all of this in a transaction-like manner to avoid partial completion (e.g. we create the subscription but don't write to the database)?
I am using Node.js, if this helps.
Have a look at the Saga pattern for microservices. This could essentially be laid out as a service which you contact when you want to create a subscription. It knows every step involved and, on top of that, also knows how to roll back every transaction should any step fail.
Upon making a request, the service would just start doing all the necessary requests/queries and then either:
Return successfully
Rollback all transactions that have happened so far and return an error
This obviously relies on all of your services being able to revert to known good state.
Another approach would be to use two-phase/n-phase commits, but they may impose a big performance penalty, which is not desirable for something user-facing.
You may want to read through this discussion on HackerNews where this problem is discussed in far more detail.
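Since the question mentions Node.js, here is a minimal in-process sketch of that idea in TypeScript, with hypothetical client wrappers (payments, db, mailing) standing in for the real integrations; each completed step registers a compensation that is run in reverse order if a later step fails:

```typescript
interface ShippingInfo { address: string; city: string; country: string; }

interface PaymentsClient {
  createSubscription(token: string): Promise<{ id: string; customerId: string }>;
  cancelSubscription(id: string): Promise<void>;
  getBillingInfo(customerId: string): Promise<{ firstName: string; lastName: string; email: string }>;
}
interface Db {
  saveCustomerInfo(info: { firstName: string; lastName: string; shipping: ShippingInfo }): Promise<string>;
  deleteCustomerInfo(id: string): Promise<void>;
}
interface MailingList {
  subscribe(email: string): Promise<void>;
}

type Compensation = () => Promise<void>;

async function createPaidSubscription(
  token: string,
  shipping: ShippingInfo,
  payments: PaymentsClient,
  db: Db,
  mailing: MailingList,
) {
  const compensations: Compensation[] = [];
  try {
    // 1) Create the subscription at the payment provider.
    const subscription = await payments.createSubscription(token);
    compensations.push(() => payments.cancelSubscription(subscription.id));

    // 2) Fetch billing details from the payment provider (read-only, no undo needed).
    const billing = await payments.getBillingInfo(subscription.customerId);

    // 3) Persist billing + shipping info locally.
    const recordId = await db.saveCustomerInfo({
      firstName: billing.firstName,
      lastName: billing.lastName,
      shipping,
    });
    compensations.push(() => db.deleteCustomerInfo(recordId));

    // 4) Subscribe the customer to the mailing list.
    await mailing.subscribe(billing.email);

    return subscription;
  } catch (err) {
    // Roll back completed steps in reverse order. Compensations can fail too,
    // so in practice you would log them and retry (e.g. from a queue).
    for (const undo of compensations.reverse()) {
      await undo().catch((e) => console.error('compensation failed', e));
    }
    throw err;
  }
}
```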

How/where to load value object that is entity in different BC in DDD and CQRS/ES based system

I have gone through this great tutorial: http://www.cqrs.nu/tutorial/cs/01-design and am trying to build my app the same way.
From my previous questions I know that what is an entity in one BC can be represented as a value object containing the identifier of the entity in the other BC and, optionally, any other parameters. (I put those identifiers in a Core BC.)
Now, let's say I have two BCs: PlansEditor and Subscribing.
In PlansEditor, given there is an Editor (a person with that role), when he creates a Plan with some params like frequency and vip, then the Plan is created.
In Subscribing, given there is a Customer and some available Plans, when the customer subscribes to a Plan, then a Subscription is created.
A Plan here could be just a VO with the frequency and VIP parameters, because the Subscription needs just those (to protect its invariants). But the customer clicks to subscribe to a plan, and a request with the id of the plan is sent.
So I start my BC lifecycle with the id of the customer and the id of the plan he subscribed to. I need to load/create that Plan "value object" (which is really an entity in the PlansEditor BC) by the given identifier somewhere.
Where and how should I acquire that Plan VO in the Subscribing BC?
I thought of doing it in the application layer through the Event Repository, to be able to send a SubscribeCustomerToPlanCommand(Customer, Plan) containing this value object. But the Subscribing BC would then need to know that Plan exists as an entity in PlansEditor in order to load it and translate it to a Plan VO, which is unacceptable: one BC would need the other to function properly.
Or should I just get the information needed to create the Plan VO from some read model and do that in the application layer; is that convenient? Or how and where? :)
As far as I can tell, Plan wouldn't be a VO in either context. Just because one BC has authority over an entity's life cycle doesn't mean the entity becomes a VO in other contexts (though it could).
If the downstream context needs to stay aware of the life cycle of an entity in a remote context, then that entity should probably be modeled as an entity in the downstream context as well.
For instance, I suppose that Plans can be deactivated at some point and it wouldn't make sense to allow subscribing to these? That would be a good indicator that Plan is not a simple value in the Subscribing context.
There are multiple strategies when it comes to BC integration, but if you favor availability over consistency then messaging is most likely the better option. The upstream context would publish events on a messaging infrastructure and these would get consumed by the downstream context, allowing it to keep its local copy of the entity state in sync.
"BC would then need to know that Plan as entity in PlansEditor exists
in order to load it and translate it to Plan as VO, which is
unacceptable - one BC would need other to function properly"
Any integration strategy will require some level of coupling, but that coupling should be abstracted away in an anti-corruption layer.
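As an illustration of that last point, a small sketch of the Subscribing side keeping a local Plan copy in sync behind an anti-corruption layer; the upstream message shape, topic name and store interface are assumptions:

```typescript
// Upstream (PlansEditor) event, as it arrives on the wire.
interface PlanPublishedMessage {
  plan_id: string;
  frequency: string;
  is_vip: boolean;
  status: 'active' | 'deactivated';
}

// Local representation of Plan inside the Subscribing context: only what
// subscriptions need to protect their invariants.
interface Plan {
  id: string;
  frequency: string;
  vip: boolean;
  active: boolean;
}

interface PlanStore {
  save(plan: Plan): Promise<void>;
}

// Anti-corruption layer: translates the upstream contract into the local model,
// so the Subscribing BC never depends on PlansEditor's internal types.
function toLocalPlan(msg: PlanPublishedMessage): Plan {
  return {
    id: msg.plan_id,
    frequency: msg.frequency,
    vip: msg.is_vip,
    active: msg.status === 'active',
  };
}

// Subscriber keeping the local copy in sync with upstream events.
function wirePlanSync(
  subscribe: (topic: string, handler: (m: any) => Promise<void>) => void,
  store: PlanStore,
) {
  subscribe('plans-editor.plan-published', async (msg: PlanPublishedMessage) => {
    await store.save(toLocalPlan(msg));
  });
}
```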
