I have a Provision, and this Provision has a state with constraints on its transitions. For example: I can accept a Provision only if its current state is Authorized.
Since this is clearly a business rule I code it inside my domain model:
public void Accept()
{
    if (this.State == ProvisionsState.Authorized)
        this.State = ProvisionsState.Accepted;
    else
        throw new InvalidOperationForProvision("The provision has to be authorized.");
}
So far so good, but... I have a command (a sort of DTO containing only the ProvisionId) and a corresponding handler. When a client wants to accept a provision, it puts an AcceptCommand DTO on a bus. Right now the AcceptCommandHandler takes this command from the bus and handles it.
public void Handle(AcceptCommand command)
{
    var provision = Repository.GetById(command.ProvisionId);
    provision.Accept();
    ...
}
If the InvalidOperationForProvision isn't raised, everything will be fine and a ProvisionAcceptedEvent will be sent. So far so good (as far as I know). But the question is: what happens if the exception is raised?
Bear in mind that some buses will retry the command many times, even if the command is certain to fail (e.g. a disabled provision will never be authorized, so I will never accept it, but the accept command will still be there).
Considering that you intend to process the command asynchronously, the InvalidOperationForProvision exception must be handled in the command handler itself. Some kind of compensating action might be required.
Having said that, it's probably a good idea to do some validation, if possible, on the command itself before putting it on the bus, and reject it if it is not good enough. For example, if you happen to have access to the ProvisionsState before issuing the command, do your check and accept or reject the command accordingly. That way you can give immediate feedback to the user.
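As a minimal sketch of handling the exception in the handler (the Bus member and the ProvisionAcceptRejectedEvent type are illustrative assumptions, not taken from your code), you could catch the domain exception and publish a rejection instead of re-throwing, so the bus does not keep retrying a command that can never succeed:

public void Handle(AcceptCommand command)
{
    var provision = Repository.GetById(command.ProvisionId);

    try
    {
        provision.Accept();
        Bus.Publish(new ProvisionAcceptedEvent(command.ProvisionId));
    }
    catch (InvalidOperationForProvision ex)
    {
        // Compensating action: publish a failure/rejection message (or move the
        // command to an error queue) instead of re-throwing, so the bus does not
        // keep retrying a command that can never succeed.
        Bus.Publish(new ProvisionAcceptRejectedEvent(command.ProvisionId, ex.Message));
    }
}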
http://codingcraft.wordpress.com/2014/07/05/application-service-design-with-commands/
http://www.udidahan.com/2009/12/09/clarified-cqrs/
I am working on a backend and trying to implement CQRS patterns.
I'm pretty clear about events, but sometimes struggle with commands.
I've seen that commands are requested by users, for example ChangePasswordCommand. However, at the implementation level the user is just calling an endpoint, handled by some controller.
I can inject a UserService into my controller, which will handle the domain logic, and this is how basic tutorials do it (I use Nest.js). However, I feel that maybe this is where I should use a command - so should I execute a ChangePasswordCommand in my controller and then let the domain module handle it?
The important thing is that I need a return value from the command, which is not a problem from an implementation perspective, but it doesn't look good in terms of CQRS - I would ADD and GET at the same time.
Or maybe the last option is to execute the command in the controller and then emit an event (PasswordChangedEvent) in the command handler. Next, wait until the event comes back and return the value from the controller.
This last option seems quite good to me, but I have problems with a clean implementation inside the request lifecycle.
I am basing this on:
https://docs.nestjs.com/recipes/cqrs
While the answer by @cperson is technically correct, I would like to add a few nuances to it.
First, something that may not be clear from that answer, where it advises to "emit an event (PasswordChangedEvent) in command handler". This is what I would prefer as well, but watch out:
The Command is part of the infrastructure layer, and the Event is part of the domain.
So from the command you should trigger code on the AggregateRoot that emits the event.
This can be done with mergeObjectContext or eventBus.publish (see the NestJS docs).
Events can be applied from other domain objects, but the aggregate usually is the emitter (upon commit).
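To make that flow concrete, here is a minimal, framework-agnostic sketch (written in C#; in NestJS you would use mergeObjectContext on the aggregate as per its docs, and all names below are purely illustrative): the command handler only loads the aggregate and calls a domain method, and it is the aggregate that records the event, which gets published when the aggregate is committed.

using System;
using System.Collections.Generic;

public record ChangePasswordCommand(Guid UserId, string NewPassword);
public record PasswordChangedEvent(Guid UserId);

public class User // aggregate root
{
    public Guid Id { get; }
    public List<object> PendingEvents { get; } = new List<object>();

    public User(Guid id) => Id = id;

    public void ChangePassword(string newPassword)
    {
        // validate / hash / update state here...
        PendingEvents.Add(new PasswordChangedEvent(Id)); // the aggregate emits the event
    }
}

public interface IUserRepository
{
    User GetById(Guid id);
    void Save(User user); // commit: publishes user.PendingEvents to the event bus
}

public class ChangePasswordHandler
{
    private readonly IUserRepository _users;
    public ChangePasswordHandler(IUserRepository users) => _users = users;

    public void Handle(ChangePasswordCommand command)
    {
        var user = _users.GetById(command.UserId);
        user.ChangePassword(command.NewPassword); // domain logic lives on the aggregate
        _users.Save(user);                        // events are published upon commit
    }
}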
The other point I wanted to address is the assumption of an event-sourced architecture, i.e. applying CQRS/ES. While CQRS is often used in combination with Event Sourcing, nothing prescribes doing so. Event Sourcing can give additional advantages, but it also comes with significant added complexity. You should carefully weigh the pros and cons of having ES.
In many cases you do not need Event Sourcing. Having just CQRS already gives you a lot of benefits: your domain / bounded contexts stay well contained, reads and writes are separated, commands and queries each have a single responsibility (more SOLID in general), the architecture is cleaner, and so on. On a higher level it is easier to shift focus from 'how do I implement this (CRUD-wise)?' to 'how do these user requirements fit in the domain model?'.
Without ES you can have a single relational database and persist using e.g. TypeORM. You can persist events, but it is not required. In many scenarios you can avoid the eventual consistency where clients need to subscribe to events (maybe you just use them to drive sagas and update read-side views/projections).
You can always start with just CQRS and add Event Sourcing later, when the need arises.
As your architecture evolves, you may find that you require a command bus if you are using Processes/Sagas to manage workflows and inter-aggregate communication. If and when that is the case, it will naturally make sense to use that bus for all commands.
The following is the method I would prefer:
Execute the command in the controller and then emit an event (PasswordChangedEvent) in the command handler. Next, wait until the event comes back and return the value from the controller.
As for implementation details, in .NET, we use a SignalR websockets service that will read the event bus (where all events are published) and will forward events to clients that have subscribed to them.
In this case, the workflow would be:
The user posts to the controller.
The controller appends the command to the command bus.
The controller returns an ID identifying the command.
The client (browser client) subscribes to events relating to this command.
The command is received by the domain service and handled. An event is emitted to the event store.
The event is published to the event bus.
The event listener subscription service receives the event, finds the subscription, and sends the event to the client.
The client receives the event and notifies the user.
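For steps 1-3 of that workflow, a minimal sketch in C#/ASP.NET Core might look like the following (the ICommandBus abstraction, the route, and the DTO shapes are assumptions for illustration; they are not part of any specific framework):

using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

public record ChangePasswordRequest(Guid UserId, string NewPassword);
public record ChangePasswordCommand(Guid CommandId, Guid UserId, string NewPassword);

public interface ICommandBus
{
    Task SendAsync(object command);
}

[ApiController]
[Route("api/password")]
public class PasswordController : ControllerBase
{
    private readonly ICommandBus _commandBus;
    public PasswordController(ICommandBus commandBus) => _commandBus = commandBus;

    [HttpPost]
    public async Task<IActionResult> ChangePassword(ChangePasswordRequest request)
    {
        var commandId = Guid.NewGuid();
        await _commandBus.SendAsync(new ChangePasswordCommand(commandId, request.UserId, request.NewPassword));

        // 202 Accepted: the command is handled asynchronously; the client
        // subscribes (e.g. over websockets) to events correlated by commandId.
        return Accepted(new { commandId });
    }
}

The client would then use the returned commandId to subscribe (step 4), and the websockets service forwards any event carrying that commandId back to it (steps 7-8).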
I have an asp.net core Web Api application.
In my application I have a Web API method for which I want to prevent multiple simultaneous requests from the same user. I don't mind requests from different users running simultaneously.
I am not sure how to create the lock and where to put it. I thought about creating some kind of dictionary which would contain the user id and performing the lock on that item, but I don't think I'm getting it right. Also, what will happen if there is more than one server behind a load balancer?
Example:
Let's assume each registered user can run 10 long tasks each month. I need to check for each user whether he has exceeded his monthly limit. If the user sends many simultaneous requests to the server, he might be allowed to perform more than 10 operations. I understand that I need to put a lock on the method, but I do want to allow other users to perform this action simultaneously.
What you're asking for is fundamentally not how the Internet works. The HTTP and underlying IP protocols are stateless, meaning each request is supposed to run independent of any knowledge of what has occurred previously (or concurrently, as the case may be). If you're worried about excessive load, your best bet is to implement rate limiting/throttling tied to authentication. That way, once a user burns through their allotted requests, they're cut off. This will then have a natural side-effect of making the developers programming against your API more cautious about sending excessive requests.
Just to be a bit more thorough here, the chief problem with the approach you're suggesting is that I know of no way it can be practically implemented. You can use something like SemaphoreSlim to create a lock, but that needs to be static so that the same instance is used for each request. Being static is going to limit your ability to use a dictionary of them, which is what you'd need for this. It can technically be done, I suppose, but you'd have to use a ConcurrentDictionary, and even then there's no guarantee of single-threaded additions. So concurrent requests for the same user could load concurrent semaphores into it, which defeats the entire point. I suppose you could front-load the dictionary with a semaphore for each user from the start, but that could become a huge waste of resources, depending on your user base. Long and short, when you're finding a solution this darn difficult, it's a good sign you're likely trying to do something you shouldn't be doing.
EDIT
After reading your example, I think this really just boils down to an issue of trying to handle the work within the request pipeline. When there's some long-running task to be completed or just some heavy work to be done, the first step should always be to pass it off to a background service. This allows you to return a response quickly. Web servers have a limited amount of threads to handle requests with, and you want to service the request and return a response as quickly as possible to keep from exhausting your threadpool.
You can use a library like Hangfire to handle your background work, or you can implement an IHostedService as described here to queue work on. Once you have your background service ready, you would then just immediately hand off to it any time you get a request to this endpoint, and return a 202 Accepted response with a URL the client can hit to check the status. That solves your immediate issue of not wanting a ton of requests to this long-running job to bring your API down. The endpoint is now essentially doing nothing more than telling something else to do the work and then returning immediately.
For the actual background work you'd be queuing, you can check the user's allowance, and if they have exceeded 10 requests (your rate limit), you fail the job immediately without doing anything. If not, then you can actually start the work.
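As a rough sketch of that hand-off (the JobQueue/JobWorker types and the route are illustrative assumptions built on the standard Channel + BackgroundService pattern, not a specific library):

using System;
using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Hosting;

public record LongJob(string UserId, Guid JobId);

// Simple in-memory queue shared between the controller and the worker.
public class JobQueue
{
    private readonly Channel<LongJob> _channel = Channel.CreateUnbounded<LongJob>();
    public ValueTask EnqueueAsync(LongJob job) => _channel.Writer.WriteAsync(job);
    public ValueTask<LongJob> DequeueAsync(CancellationToken ct) => _channel.Reader.ReadAsync(ct);
}

// Hosted service that drains the queue outside the request pipeline.
public class JobWorker : BackgroundService
{
    private readonly JobQueue _queue;
    public JobWorker(JobQueue queue) => _queue = queue;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            var job = await _queue.DequeueAsync(stoppingToken);
            // Check the user's monthly allowance here; fail the job immediately
            // if the limit is exceeded, otherwise run the long task.
        }
    }
}

[ApiController]
[Route("api/jobs")]
public class JobsController : ControllerBase
{
    private readonly JobQueue _queue;
    public JobsController(JobQueue queue) => _queue = queue;

    [HttpPost]
    public async Task<IActionResult> Start(string userId)
    {
        var jobId = Guid.NewGuid();
        await _queue.EnqueueAsync(new LongJob(userId, jobId));
        // Return immediately with a status URL the client can poll.
        return Accepted($"/api/jobs/{jobId}");
    }
}

JobQueue would be registered as a singleton and JobWorker with AddHostedService; the status endpoint that the returned URL points at is not shown here.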
If you like, you can also enable webhook support to notify the client when the job completes. You simply allow the client to set a callback URL that you should notify on completion, and then when you've finished the work in the background task, you hit that callback. It's on the client to handle things on their end and decide what happens when the callback is hit. They might, for instance, decide to use SignalR to send out a message to their own users/clients.
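A tiny sketch of that completion callback (the payload shape and where callbackUrl comes from are assumptions):

using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

public static class WebhookNotifier
{
    private static readonly HttpClient Http = new HttpClient();

    // Called by the background job when it finishes (callbackUrl was supplied
    // by the client when the job was created).
    public static Task NotifyAsync(string callbackUrl, Guid jobId, bool succeeded) =>
        Http.PostAsJsonAsync(callbackUrl, new { jobId, succeeded });
}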
EDIT #2
I actually got a little intrigued by this. While I still think it's better for you to offload the work to a background process, I was able to create a solution using SemaphoreSlim. Essentially you just gate every request through the semaphore, where you check the current user's remaining requests. This does mean that other users must wait for this check to complete, but then you can release the semaphore and actually do the work. That way, at least, you're not blocking other users during the actual long-running job.
First, add a field to whatever class you're doing this in:
private static readonly SemaphoreSlim _semaphore = new SemaphoreSlim(1, 1);
Then, in the method that's actually being called:
await _semaphore.WaitAsync();
bool allowed;
try
{
    // get remaining requests for user
    allowed = remaining > 0;
    if (allowed)
    {
        // decrement remaining requests for user
        // (this must be done before the semaphore is released)
    }
}
finally
{
    // release before doing the actual work so other users are not blocked
    _semaphore.Release();
}

if (allowed)
{
    // now do the work
}
else
{
    // handle user out of requests (return error, etc.)
}
This is essentially a bottleneck. To do the appropriate check and decrement, only one thread can go through the semaphore at a time. That means if your API gets slammed, requests will queue up and may take a while to complete. However, since the check is probably just going to be something like a SELECT query followed by an UPDATE query, it shouldn't take long for the semaphore to be released. You should definitely do some load testing and keep an eye on it, though, if you're going to go this route.
I am using the Simple Service Bus from Codeplex and have a handler that provides me with a message and an IMessageContext.
public void Handle(MyEnquiryMessage message, IMessageContext context)
I store both of these in a list and let the handler complete. At some point in the future I do some processing and try to send a reply by taking the context that I stored and calling:
context.Endpoint.MessageBus.Reply(myResponse)
Unfortunately this throws an exception “Object reference not set to an instance of an object”. Is this asynchronous way of replying possible or can “reply” only be used within the handler?
I don't know Simple Service Bus, but I would guess that your context is only valid inside the handler. If you want to send back a response later, you need to gather all the data you need from the context while you are still in the handler and simply do a 'send' at that later stage.
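A rough sketch of that idea (the types and members below are illustrative, not part of Simple Service Bus; the point is to copy plain data out of the context inside the handler rather than holding on to the context itself):

public class PendingReply
{
    public string ReturnAddress { get; set; }   // whatever the context exposes about the sender
    public MyEnquiryMessage Message { get; set; }
}

private readonly List<PendingReply> _pendingReplies = new List<PendingReply>();

public void Handle(MyEnquiryMessage message, IMessageContext context)
{
    _pendingReplies.Add(new PendingReply
    {
        // Read the sender/return address from the context here, while it is still valid.
        ReturnAddress = /* e.g. the sender endpoint name exposed by the context */ null,
        Message = message
    });
}

// Later, when the deferred processing runs, construct the response and do a
// normal send addressed to the stored ReturnAddress instead of calling Reply
// on the stale context.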
Even so, it sounds a bit strange to perform the processing 'later' when it could probably be handled by another endpoint that processes a relevant message type. Without more information it is difficult to tell, but your design may not be optimal.
I have a couple of questions to which I am not finding any exact answer. I've used CQRS before, but probably I was not using it properly.
Say that there are 5 services in the domain: Gateway, Sales, Payments, Credit and Warehouse. During the process of a user registering with the application, the front-end submits a few commands; once the user is registered, the same front-end then sends a few other commands to create an order and apply for credit.
Now, what I usually do is create a gateway, which receives all public commands; these are validated and, if valid, transformed into domain commands. I only use events to store data, and if one service needs some action to be performed in another service, a domain command is sent directly from one service to the other. But I've seen other systems in which event handlers are used for more than storing data. So my questions are: what are the limits to what event handlers can do? And is it correct to send commands between services when a specific service requires that some other service performs an action, or is it more correct to have the initial service raise an event and let the handler in the other service perform that action? I am asking this because I've seen events like INeedCreditApproved where I was expecting to see a domain command like ApproveCredit.
Any input is welcome.
You're missing an important concept here - Sagas (Process Managers). You have a long-running workflow and it's better expressed centrally.
Sagas listen to events and emit commands. So an OrderAccepted event will start a Saga, which then emits ApproveCredit and ReserveStock commands, to be sent to the Credit and Warehouse services respectively. The Saga can then listen to command success/failure events and compensate appropriately, for example by emitting a SendEmail command or whatever else.
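A minimal sketch of such a saga (all type names and the ICommandSender abstraction are illustrative; no particular framework is assumed):

using System;

public record OrderAccepted(Guid OrderId, Guid CustomerId, decimal Total);
public record CreditRejected(Guid OrderId, Guid CustomerId, string Reason);
public record ApproveCredit(Guid OrderId, Guid CustomerId, decimal Total);
public record ReserveStock(Guid OrderId);
public record ReleaseStock(Guid OrderId);
public record SendEmail(Guid CustomerId, string Body);

public interface ICommandSender
{
    void Send(object command);   // routes the command to the owning service
}

public class OrderFulfilmentSaga
{
    private readonly ICommandSender _commands;
    public OrderFulfilmentSaga(ICommandSender commands) => _commands = commands;

    // Started by the OrderAccepted event.
    public void Handle(OrderAccepted e)
    {
        _commands.Send(new ApproveCredit(e.OrderId, e.CustomerId, e.Total));
        _commands.Send(new ReserveStock(e.OrderId));
    }

    // Compensate if the Credit service reports failure.
    public void Handle(CreditRejected e)
    {
        _commands.Send(new ReleaseStock(e.OrderId));
        _commands.Send(new SendEmail(e.CustomerId, "Your credit application was rejected: " + e.Reason));
    }
}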
One year ago I was doing the former - sending commands from event handlers when a specific service requires that some other service performs an action - but then I made the stupid decision to switch to the latter, having the initial service raise an event and letting the handler in the other service perform the action. It worked at first, but it was the most stupid decision I could have made. Now I am switching back to sending commands from event handlers.
You can see that other people like Rinat do similar things with event ports/receptors and it is working for them, I think:
http://abdullin.com/journal/2012/7/22/bounded-context-is-a-team-working-together.html
http://abdullin.com/journal/2012/3/31/anatomy-of-distributed-system-a-la-lokad.html
Good luck
Background:
I have a system that hosts WCF services inside a Windows Service with NetTCP binding. To add a new service to the collection you simply add the standard WCF config entries inside <system.serviceModel><services> and then add a line inside a custom configuration section that tells the hosting framework it needs to initialize the service. Each service is initialized with its own background thread and AppDomain instance to keep everything isolated.
Here is an example of how the services are initialized:
Host
  - ServerManager
    - ServiceManager
      - BaseServerHost
The ServerManager instance has a collection of ServiceManagers that each correlate to a single service instance which is where the standard WCF implementation lies (ServiceHost.Open/Close, etc). The ServiceManager instance instantiates (based on the config - it has the standard assembly/type definition) an instance of the service by use of the BaseServerHost base class (abstract). Every service must inherit from this for the framework to be able to use it. As part of the initialization process BaseServerHost exposes a couple of events, specifically an UnhandledException event that the owning ServiceManager attaches itself to. (This part is critical in relation to the question below.)
This entire process works exceptionally well for us (one instance is running 63 services) as I can bring someone on who doesn't know anything about WCF and they can create services very quickly.
Question:
The problem I have run into is with background threading. A majority of the exposed methods on our endpoints do a significant amount of work after a standard insert/update/delete call, such as sending messages to other systems. To keep performance up (the front-end is web-based) we let the initial insert/update/delete method do its thing and then fire off a background thread to handle all the stuff an end-user doesn't need to wait on. This approach works great until an exception in that background thread goes unhandled and brings the entire Windows service down, which I understand is by design (and I'm OK with that).
Based on all of my research I have found that there is no way to implement a global try/catch (short of the config hack that restores the .NET 1.1 behaviour for unhandled exceptions on background threads), so my team will have to go back and add try/catch blocks in the appropriate places. That aside, what I've found is that on the endpoint side the WCF hosting appears to run each call on its own thread, and getting that thread to talk to the "parent" has been a nightmare. From the service viewpoint here is the layout:
Endpoint (svc - inherits from BaseServerHost, mentioned above)
  - Business Layer
    - Data Layer
When I catch an exception on a background thread in the business layer I bubble it up to the Endpoint instance (which inherits from BaseServerHost) which then attempts to fire BaseServerHost's UnhandledException event for this particular service (which was attached to by the owning ServiceManager that instantiated it). Unfortunately the event handler is no longer there so it does nothing at all. I've tried numerous things to get this to work and thus far all of my efforts have been in vain.
When looking at the full model (shown below), the Business layer needs to know about its parent Endpoint (this works), the Endpoint needs to know about the running BaseServerHost instance, and that instance needs to know about the ServiceManager that is hosting it, so that errors can be bubbled all the way up for use in our standard logging procedures.
Host
  - ServerManager
    - ServiceManager   <==================
      - BaseServerHost                  ||
        - Endpoint (svc)                ||
          - Business Layer  <============
            - Data Layer
I've tried static classes with no luck and even went as far as making ServerManager static and exposing its previously internal collection of ServiceManagers (so they can be shut down), but that collection is always empty or null too.
Thoughts on making this work?
EDIT: After digging a little further I found an example of exactly how I envision this working. In a standard ASP.NET website, on any page/handler etc. you can use the HttpContext.Current property to access the current context for that request. This is exactly how I would want this to work with a "ServiceManager.Current" returning the owning ServiceManager for that service. Perhaps that helps?
Maybe you should look into doing something with CallContext:
http://msdn.microsoft.com/en-us/library/system.runtime.remoting.messaging.callcontext.aspx
You can use either SetData/GetData or LogicalSetData/LogicalGetData, depending on whether you want your ServiceManager to be associated with one physical thread (SetData) or a "logical" thread (LogicalSetData). With LogicalSetData you could make the same ServiceManager instance available within a thread as well as within that thread's "child" threads. Will try to post a couple of potentially useful links later when I can find them.
Here is a link to the "Virtual Singleton Pattern" on codeproject.
Here is a link to "Thread Singleton"
Here is a link to "Ambient Context"
All of these ideas are similar. Essentially, you have an object with a static Current property (can be get or get/set). Current puts its value in (and gets it from) the CallContext using either SetData (to associate the "Current" value with the current thread) or LogicalSetData (to associate the "Current" value with the current thread and to flow the value to any "child" threads).
HttpContext is implemented in a similar fashion.
System.Diagnostics.CorrelationManager is another good example that is implemented in a similar fashion.
I think the Ambient Context article does a pretty good job of explaining what you can accomplish with this idea.
Whenever I discuss CallContext, I try to also include a link to this entry from Jeffrey Richter's blog.
Ultimately, I'm not sure if any of this will help you or not. One case where it would be useful is a multithreaded server application (maybe each request is fulfilled by a thread and multiple requests can be fulfilled at the same time on different threads) where you might have a ServiceManager per thread. In that case, you could have a static Current property on ServiceManager that would always return the correct ServiceManager instance for the current thread because it stores the ServiceManager in the CallContext. Something like this:
using System.Runtime.Remoting.Messaging;

public class ServiceManager
{
    static string serviceManagerSlot = "ServiceManager";

    public static ServiceManager Current
    {
        get
        {
            ServiceManager sm = null;
            object o = CallContext.GetData(serviceManagerSlot);
            if (o == null)
            {
                o = new ServiceManager();
                CallContext.SetData(serviceManagerSlot, o);
            }
            sm = (ServiceManager)o;
            return sm;
        }
        set
        {
            CallContext.SetData(serviceManagerSlot, value);
        }
    }
}
Early in your process, you might configure a ServiceManager for use in the current thread (or current "logical" thread) and then store it in the "Current" property:
ServiceManager sm = new ServiceManager(/* thread-specific properties? */);
ServiceManager.Current = sm;
Now, whenever you retrieve ServiceManager.Current in your code, it will be the correct ServiceManager for the thread in which you are currently executing.
This whole idea might not really be what you want.
From your comment you say that the CallContext data that you try to retrieve in the event of an exception is null. That probably means that exception is being raised and/or caught on a different thread than the thread on which the CallContext data was set. You might try using LogicalSetData to see if that helps.
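To illustrate the difference, here is a small standalone sketch (not from the original code; it targets the .NET Framework, where CallContext lives in System.Runtime.Remoting.Messaging): data stored with SetData stays on the thread that set it, while data stored with LogicalSetData flows to child threads/tasks.

using System;
using System.Runtime.Remoting.Messaging;
using System.Threading.Tasks;

class CallContextDemo
{
    static void Main()
    {
        CallContext.SetData("physical", "visible only on this thread");
        CallContext.LogicalSetData("logical", "flows to child threads");

        Task.Run(() =>
        {
            // Runs on a thread-pool thread: the per-thread slot is empty here,
            // but the logical slot has flowed with the execution context.
            Console.WriteLine(CallContext.GetData("physical") ?? "<null>");
            Console.WriteLine(CallContext.LogicalGetData("logical") ?? "<null>");
        }).Wait();
    }
}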
As I said, I don't know if any of this will help you, but hopefully I have been clear enough (and the examples have also been clear enough) so you can tell if these ideas apply to your situation or not.
Good luck.