Please help me understand this picture.
What is the explanation of this process?
What is the difference between authorizing and checking?
There is no "difference": these two states simply exist independently of each other at the same time. E.g. when Rejected is reached, the whole state machine ends exceptionally. Otherwise it will eventually finish and transition to Delivered after both Dispatching and Authorize have passed, no matter in which order either of them occurred.
Customers can pay for an order instantly or later. When the order is pay-later, I want to draw a notation in an activity diagram that signifies that the customer must pay within two days. If the customer does not pay within two days, the system will mark the order as canceled.
In this attached image, the first swimlane is for the actor Customer, and the second swimlane is for the actor System. I created a time event notation that signifies that the customer must pay within 48 hours. Then, I placed the merge/branch node on the customer swimlane to signify that the customer is the actor that must make the payment.
The issue that I thought of about my current diagram is that someone might misunderstand the time event notation. Someone might understand the notation as a sign that the system will always wait 48 hours before marking the order as canceled or awaiting shipment. In reality, the system will mark the order as awaiting shipment as soon as the customer pays. However, if the customer doesn't pay after 48 hours, the system will mark the order as canceled.
How can I draw a better diagram to signify the above description?
An accept time event action (e.g. an AcceptEventAction with a single TimeEvent trigger) cannot have an input flow, so your diagram is invalid and therefore does not show what you want.
The guards on the flows after a Decision must be written in square brackets ([]).
I placed the merge/branch node on the customer swimlane to signify that the customer is the actor that must make the payment.
but this is checked by the system independently of the customer, so this is wrong/unclear
The fact that the two actions creating the order are not in the customer swimlane is also wrong/unclear to me
After the action that creates the order with the awaiting-payment status, you can start a new timer dedicated to the current order of the customer. If the customer pays within 2 days, the corresponding timer is deleted.
But that can produce a lot of timers. Alternatively, you can store each order together with its timeout in a FIFO and use a single timer. If the customer pays within 2 days, the corresponding order is removed from the FIFO.
That single timer can periodically check the stored orders, but such polling wakes up the system even when nothing needs to be done.
Better, the single timer is started when the first order is stored; when the system wakes up it handles the orders that have expired, and then, if the FIFO has become empty, the timer is stopped, otherwise it is re-armed according to the deadline of the first (oldest) order in the FIFO.
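To make that last variant concrete, here is a minimal C# sketch, assuming every order has the same 48-hour deadline so that FIFO order matches deadline order; the class and member names (PaymentDeadlineWatcher, cancelOrder) are purely illustrative and not part of any framework.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

public class PaymentDeadlineWatcher : IDisposable
{
    private readonly object _sync = new object();
    // Orders are enqueued in creation order, so the head always has the earliest deadline.
    private readonly Queue<(Guid OrderId, DateTime Deadline)> _pending = new();
    private readonly HashSet<Guid> _paid = new();
    private readonly Action<Guid> _cancelOrder; // callback that marks an order as canceled
    private Timer _timer;                       // the single timer, re-armed for the oldest order

    public PaymentDeadlineWatcher(Action<Guid> cancelOrder) => _cancelOrder = cancelOrder;

    public void OrderCreated(Guid orderId, TimeSpan deadline)
    {
        lock (_sync)
        {
            _pending.Enqueue((orderId, DateTime.UtcNow + deadline));
            if (_timer == null) ArmTimerForHead(); // the timer only exists while orders are pending
        }
    }

    public void PaymentReceived(Guid orderId)
    {
        lock (_sync) _paid.Add(orderId);           // the entry is dropped when it reaches the head
    }

    private void ArmTimerForHead()
    {
        var due = _pending.Peek().Deadline - DateTime.UtcNow;
        if (due < TimeSpan.Zero) due = TimeSpan.Zero;
        _timer = new Timer(_ => OnTimer(), null, due, Timeout.InfiniteTimeSpan);
    }

    private void OnTimer()
    {
        lock (_sync)
        {
            // Cancel every expired, unpaid order at the head of the FIFO.
            while (_pending.Count > 0 && _pending.Peek().Deadline <= DateTime.UtcNow)
            {
                var (orderId, _) = _pending.Dequeue();
                if (!_paid.Remove(orderId)) _cancelOrder(orderId);
            }
            _timer.Dispose();
            _timer = null;
            if (_pending.Count > 0) ArmTimerForHead(); // re-arm for the next oldest deadline
        }
    }

    public void Dispose() => _timer?.Dispose();
}
```

OrderCreated would be called by the action that creates the order with the awaiting-payment status, and PaymentReceived when the payment arrives; at most one timer exists at any time.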
Per #qwerty_so's comment, I have decided to use an interruptible region. The interruption trigger of this region is the system accepting payment. Here's the new diagram.
EDIT
As per #bruno's comments and #Axel Scheithauer's comments, I have included a more complete crop of my activity diagram. An Accept Time Event Action seems to be able to have incoming edges/flows, contrary to #bruno's comments. Furthermore, I believe that the earlier incomplete screenshot was what caused the confusion about my diagram.
I also revised my diagram so that the interruptible region's signal comes from the Accept Time Event Action instead of the Accept Event Action.
Diagram 1:
Diagram 2:
I am trying to learn more about CQRS and Event Sourcing (Event Store).
My understanding is that a message queue/bus is not normally used in this scenario - a message bus can be used to facilitate communication between Microservices, however it is not typically used specifically for CQRS. The way I see it at the moment, though, a message bus would be very useful for guaranteeing that the read model eventually gets back in sync (hence eventual consistency), e.g. when the server hosting the read model database is brought back online.
I understand that eventual consistency is often acceptable with CQRS. My question is: how does the read side know it is out of sync with the write side? For example, let's say there are 2,000,000 events created in Event Store on a typical day and 1,999,050 are also written to the read store. The remaining 950 events are not written because of a software bug somewhere, or because the server hosting the read model is offline for a few seconds, etc. How does eventual consistency work here? How does the application know to replay the 950 events that are missing at the end of the day or the x events that were missed because of the downtime ten minutes ago?
I have read questions on here over the last week or so that talk about messages being replayed from the event store, e.g. this one: CQRS - Event replay for read side. However, none of them explain how this is done. Do I need to set up a scheduled task that runs once per day and replays all events that were created since the date the scheduled task last succeeded? Is there a more elegant approach?
I've used two approaches in my projects, depending on the requirements:
Synchronous, in-process Readmodels. After the events are persisted, in the same request lifetime, in the same process, the Readmodels are fed with those events. In case of a Readmodel's failure (a bug or a catchable error/exception), the error is logged, that Readmodel is skipped, and the next Readmodel is fed with the events, and so on. Then the Sagas follow, which may generate commands that generate more events, and the cycle repeats.
I use this approach when the impact of a Readmodel's failure is acceptable by the business, when the readiness of a Readmodel's data is more important than the risk of failure. For example, they wanted the data immediately available in the UI.
The error log should be easily accessible on some admin panel so someone would look at it in case a client reports inconsistency between write/commands and read/query.
This also works if you have your Readmodels coupled to each other, i.e. one Readmodel needs data from another, canonical Readmodel. Although this seems bad, it's not; it always depends. There are cases when you trade updater code/logic duplication for resilience.
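To make the first approach concrete, here is a rough C# sketch of the synchronous feeding loop; IEvent, IReadmodel and the error-logging callback are assumed abstractions, not types from a specific library.

```csharp
using System;
using System.Collections.Generic;

public interface IEvent { }

public interface IReadmodel
{
    string Name { get; }
    void Apply(IReadOnlyList<IEvent> events);
}

public class SynchronousReadmodelFeeder
{
    private readonly IReadOnlyList<IReadmodel> _readmodels;
    private readonly Action<string, Exception> _logError;

    public SynchronousReadmodelFeeder(IReadOnlyList<IReadmodel> readmodels,
                                      Action<string, Exception> logError)
    {
        _readmodels = readmodels;
        _logError = logError;
    }

    // Called in the same request lifetime, right after the events have been persisted.
    public void Feed(IReadOnlyList<IEvent> newEvents)
    {
        foreach (var readmodel in _readmodels)
        {
            try
            {
                readmodel.Apply(newEvents);
            }
            catch (Exception ex)
            {
                // Log and skip: the next Readmodel still gets the events.
                _logError(readmodel.Name, ex);
            }
        }
    }
}
```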
Asynchronous, in-another-process Readmodel updater. I use this when I want total separation of a Readmodel from the other Readmodels, when a Readmodel's failure should not bring the whole read side down, or when a Readmodel needs another language, different from the monolith. Basically this is a microservice. When something bad happens inside a Readmodel, it is necessary that some authoritative higher-level component is notified, e.g. an Admin is notified by email or SMS.
The Readmodel should also have a status panel, with all kinds of metrics about the events that it has processed, whether there are gaps, and whether there are errors or warnings; it should also have a command panel where an Admin can rebuild it at any time, preferably without system downtime.
In any approach, the Readmodels should be easily rebuildable.
How would you choose between a pull approach and a push approach? Would you use a message queue with a push (events)?
I prefer the pull based approach because:
it does not use another stateful component like a message queue, yet another thing that must be managed, that consumes resources and that can (and so eventually will) fail
every Readmodel consumes the events at the rate it wants
every Readmodel can easily change at any moment what event types it consumes
every Readmodel can easily be rebuilt at any time by requesting all the events from the beginning
the order of events is exactly the same as in the source of truth, because you pull from the source of truth
There are cases when I would choose a message queue:
you need the events to be available even if the Event store is not
you need competing/parallel consumers
you don't want to track which messages you have consumed; as they are consumed they are automatically removed from the queue
This talk from Greg Young may help.
How does the application know to replay the 950 events that are missing at the end of the day or the x events that were missed because of the downtime ten minutes ago?
So there are two different approaches here.
One is perhaps simpler than you expect - each time you need to rebuild a read model, just start from event 0 in the stream.
Yeah, the scale on that will eventually suck, so you won't want that to be your first strategy. But notice that it does work.
For updates with not-so-embarrassing scaling properties, the usual idea is that the read model tracks metadata about the stream position used to construct the previous model. Thus, the query from the read model becomes "What has happened since event #1,999,050?"
In the case of Event Store, the call might look something like:
EventStore.ReadStreamEventsForwardAsync(stream, 1999050, 100, false)
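Expanded into a hedged sketch of a catch-up loop: the connection calls follow the classic EventStore.ClientAPI shapes (IEventStoreConnection, StreamEventsSlice, ResolvedEvent), while ICheckpointStore and ReadModelCatchUp are made up here just to show where the tracked position lives.

```csharp
using System;
using System.Threading.Tasks;
using EventStore.ClientAPI;

// Hypothetical abstraction: where did this read model stop last time?
public interface ICheckpointStore
{
    Task<long> LoadAsync(string readModel);           // next event number to read
    Task SaveAsync(string readModel, long position);  // ideally saved together with the read data
}

public class ReadModelCatchUp
{
    private readonly IEventStoreConnection _connection;
    private readonly ICheckpointStore _checkpoints;

    public ReadModelCatchUp(IEventStoreConnection connection, ICheckpointStore checkpoints)
    {
        _connection = connection;
        _checkpoints = checkpoints;
    }

    // "What has happened since the event I last processed?" - read forward in batches of 100.
    public async Task CatchUpAsync(string stream, string readModel, Func<ResolvedEvent, Task> project)
    {
        var position = await _checkpoints.LoadAsync(readModel); // e.g. 1999050
        StreamEventsSlice slice;
        do
        {
            slice = await _connection.ReadStreamEventsForwardAsync(stream, position, 100, false);
            foreach (var resolved in slice.Events)
                await project(resolved);                        // update the read model

            position = slice.NextEventNumber;
            await _checkpoints.SaveAsync(readModel, position);
        } while (!slice.IsEndOfStream);
    }
}
```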
The application doesn't know that it hasn't processed some events due to a bug.
First of all, I don't understand why you assume that the number of events written on the write side must equal the number of events processed by the read side. Some projections may subscribe to the same event, and some events may have no subscriptions on the read side.
In case of a bug in a projection or in the infrastructure that resulted in a certain projection being invalid, you might need to rebuild that projection. In most cases this would be a manual intervention that resets the projection's checkpoint to 0 (the beginning of time), so that the projection picks up all events from the event store from scratch and reprocesses all of them.
The event store should have a global sequence number across all events starting, say, at 1.
Each projection has a position tracking where it is along the sequence number. The projections are like logical queues.
You can clear a projection's data and reset the position back to 0 and it should be rebuilt.
In your case the projection fails for some reason, like the server going offline, at position 1,999,050, but when the server starts up again it will continue from that point.
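For illustration, a small sketch of that clear-and-reset step; IProjectionStore and ICheckpointStore here are assumed abstractions, not a specific product's API, and the rebuild itself is done by the projection's normal catch-up processing.

```csharp
using System.Threading.Tasks;

public interface IProjectionStore
{
    Task ClearAsync(string projection);                  // drop the projection's read data
}

public interface ICheckpointStore
{
    Task ResetAsync(string projection, long position);   // move its position in the global sequence
}

public class ProjectionAdmin
{
    private readonly IProjectionStore _store;
    private readonly ICheckpointStore _checkpoints;

    public ProjectionAdmin(IProjectionStore store, ICheckpointStore checkpoints)
    {
        _store = store;
        _checkpoints = checkpoints;
    }

    public async Task RebuildAsync(string projection)
    {
        await _store.ClearAsync(projection);
        await _checkpoints.ResetAsync(projection, 0);
        // The projection now replays all events from the beginning of the global
        // sequence the next time it catches up, rebuilding its data from scratch.
    }
}
```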
Can someone give me some guidance on how to use OmniThreadLibrary to perform a "queued" task? It could be anything, but in this question let me use the example of sending 100 emails. I want to use only 3 threads at a time.
How do I queue the sending of emails?
How do I get notified when 1 thread is finished and ready to receive the next email/task?
How do I get the result of the email send? (i.e. OK, or an error occurred).
I have seen some very short examples of using OTL, but I need something a bit more comprehensive, to understand how to perform the above.
Can someone explain how to do the above, or point me to an example that covers something similar?
Thanks
I have a scenario where a BPEL process with a parallel flow calls an asynchronous process in parallel and waits for the callbacks. I added two correlation sets: one to correlate to the calling BPEL process instance and one to correlate to the Receive in each flow path. But I am receiving a conflictingReceive fault response, and this error:
ERROR [PICK] org.apache.ode.bpel.common.FaultException: {Selector plinkInstnace= {PartnerLinkInstance partnerLinkDecl=OPartnerLink#41,scopeInstanceId=9601},ckeySet=[{CorrelationKey setId=AsynchCorr, values=[hello]}, {CorrelationKey setId=FlowCorr, values=[flow 2:]}],opName=onResult,oneWay=yes,mexId=<null>,idx=0,route=one}
I am using Apache ODE with Tomcat. Can you help me find a solution to this problem? It is driving me mad! I can send you sample projects if you can help.
The problem was that I thought the message is matched against the correlationSet property value. I have defined another correlationSet with the same flow_property. I updated the files in the comment above with the actual solution. CallerProcess.bpel
I've been looking at CQRS, but I find it restrictive when it comes to showing the result of commands in, let's say, a web application.
It seems to me that, using CQRS, one is forced to refresh the whole view or parts of it to see the changes (using a second request), because the original command request will only store an event which is to be processed in the future.
In a Web Application, is it possible that a Command request could carry the result of the event it creates back to the browser?
The answer to the headline of this question is quite simple: nothing, void, or, from a web browser/REST point of view, 200 OK with an empty body.
Commands applied to the system (if the change is successfully committed) do not yield a result. And if you wish to keep the business logic on the server side, then yes, you do need to refresh the data by executing yet another request (query) to the server.
However, most often you can get rid of the second round trip to the server. Take a table where you modify a row and press a save button: do you really need to re-query the table? Or, when a user submits a comment on a blog post, just append the comment to the other comments in the DOM without the round trip.
If you find yourself wanting the modified state returned from the server you need to think hard about what you are trying to achieve. Most scenarios can be changed so that a simple 200 OK is more than enough.
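For illustration, a minimal ASP.NET Core-style sketch of a command endpoint in this style; ICommandBus and AddCommentCommand are made-up names, and the bus either commits the command or throws.

```csharp
using System;
using Microsoft.AspNetCore.Mvc;

public record AddCommentCommand(Guid PostId, string Text); // illustrative command

public interface ICommandBus
{
    void Send(object command); // throws if the command cannot be committed
}

[ApiController]
[Route("api/comments")]
public class CommentsController : ControllerBase
{
    private readonly ICommandBus _commandBus;

    public CommentsController(ICommandBus commandBus) => _commandBus = commandBus;

    [HttpPost]
    public IActionResult Post(AddCommentCommand command)
    {
        _commandBus.Send(command); // the change is committed (or an exception bubbles up)
        return Ok();               // 200 OK with an empty body; the client updates the DOM itself
    }
}
```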
Update: regarding your question about queuing incoming commands: it's not recommended to queue incoming commands, since this can produce false positives (a command is successfully received and queued, but then fails when it tries to modify the state of the system). There is one exception to the rule, and that is if you have a system with an append-only model as state. Then it is safe to defer the mutation of the system state until later, provided the command is valid.
Udi Dahan's article Clarified CQRS is always a good read on this topic: http://www.udidahan.com/2009/12/09/clarified-cqrs/
Async commands are a strange thing to do in CQRS, considering that commands can be accepted or rejected.
I wrote about it, mentioning the debate between Udi Dahan's vision and Greg Young's vision on my blog: https://www.sunnyatticsoftware.com/blog/asynchronous-commands-are-dangerous
Answering your question: if you strive to design the domain objects (aggregates?) in a transactional way, where every command initiates a transaction that ends in zero, one or more events (independently of whether some process manager later picks up one of those events and initiates another transaction), then I see no reason to return an empty command result. It is extremely useful for the external actor that initiates the use case to receive a command result indicating things like whether the command was accepted or not, which events it produced, or which specific state the domain now has (e.g. the aggregate version).
When you design a system in CQRS with asynchronous commands, it's a fallacy to expect that the command will succeed and that there will be a quick state change that you'll be notified about.
Sometimes the domain needs to communicate with external services (domain services?) in an asynchronous way, depending on those services' APIs. That does not mean the domain cannot synchronously produce meaningful domain events informing of what's going on and which changes have occurred in the domain. For example, the following flow makes a lot of sense:
Actor sends a sync command PurchaseBasket
Domain uses an external service to MakePayment and knows that the payment is being processed
Domain produces the events BasketPurchaseAttempted and/or PaymentRequested or similar
Still, synchronously, the command returns a 200 OK result with a payload indicating some information about what has happened. Even if the payment hasn't completed, because the payment platform is asynchronous, at least the actor has meaningful knowledge about the result of the transaction it initiated.
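For illustration, a sketch of the kind of synchronous command result argued for here; the shape is illustrative and not taken from Eventuous or any other library.

```csharp
using System;
using System.Collections.Generic;

public abstract record DomainEvent;
public record BasketPurchaseAttempted(Guid BasketId) : DomainEvent;
public record PaymentRequested(Guid BasketId, decimal Amount) : DomainEvent;

// Returned synchronously to the actor that sent PurchaseBasket: whether the command
// was accepted, which events it produced, and the resulting aggregate version.
public record CommandResult(
    bool Accepted,
    Guid AggregateId,
    long AggregateVersion,
    IReadOnlyList<DomainEvent> Events,
    string Error = null);
```

Even though the payment itself completes later, this result already tells the actor that its purchase attempt was accepted and recorded.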
Compare this design with an asynchronous one
Actor sends an async command PurchaseBasket
The system returns a 202 Accepted with a transaction Id ("thanks for your interest, we'll call you, this is your ticket number").
In a separate process, the domain initiates a process manager or similar with the payment platform, and when the process completes (if it completes, assuming the command is accepted and there are no business rules that forbid the basket purchase), the system can start notifying the actor.
Think about how to test both scenarios. Think about how to design the UX to accommodate this. What would you show in the UI in the second scenario? Would you assume the command was accepted? Would you display the transaction Id with a thank-you message and "please wait"? Would you take a big leap of faith and keep the user waiting on a loading screen until the async process finishes, notified by a web socket or a polling strategy after XXX seconds?
Async commands in CQRS are a dangerous thing and make us lazy domain designers.
UPDATE: the accepted answer suggests not returning anything, and I fully disagree. Check out the Eventuous library and you'll see that returning a result is extremely helpful.
Also, if an async command can't be rejected it's... because it's not really a command but a fact.
UPDATE: I am surprised my answer got negative votes, especially because Greg Young, the creator of the CQRS term, literally says in his book about CQRS:
One important aspect of Commands is that they are always in the imperative tense; that is, they are telling the Application Server to do something. The linguistics with Commands are important. A situation could for instance arise with a disconnected client where something has already happened, such as a sale, and it could want to send up a "SaleOccurred" Command object. When analyzing this, is the domain allowed to say no, that this thing did not happen? Placing Commands in the imperative tense linguistically shows that the Application Server is allowed to reject the Command; if it were not allowed to, it would be an Event. For more information on this see "Events".
While I understand certain authors are biased towards the solutions they sell, I'd go to the main source of information on CQRS, regardless of how many hundreds of implementations out there return void when they could return something to inform the requester as soon as possible. It's just an implementation detail, but thinking that way will help you model the solution better.
Greg Young, again, the guy who coined the CQRS term, also says
CQRS and Event Sourcing describe something inside a single system or component.
The communication between different components/bounded contexts (which ideally should be event driven and asynchronous, although that's not a requirement either) is outside the scope of CQRS.
PS: ignoring an event is not the same as rejecting a command. Rejection implies a direct answer to the command sender, something "difficult" if you return nothing to the sender (not even a correlation ID).
Source:
https://gregfyoung.wordpress.com/tag/cqrs/
https://cqrs.files.wordpress.com/2010/11/cqrs_documents.pdf