I have a situation in a CQRS project where I have to log a user's request for information (query) then optionally start a workflow based on the response from the data store. The user is making a request for information which requires immediate feedback. At the same time, the system optionally starts a workflow to analyse the request. How do I implement this in CQRS since the request is neither a 'pure' query nor a 'pure' command?
Edit:
To add some more context to this: The application is like a search application, where the user types in a query and the application returns a result. But the application also logs the query and could start a workflow depending on the response from the server. The application also "remembers" the user's last few queries and uses them to give context to the new query.
Additionally, the query response may not be synchronous. A background worker may be responsible for delivering the result to the client.
Though you've given us little to work with I think this question has a simple answer:
I disagree that the request is neither a 'pure' query nor a 'pure' command. The request is a pure query, because it is not a request for an analysis, but a request for information. The analysis that optionally gets triggered by the request is a command, but a command in the context of the query event. The system, or more specifically the event handler, is therefore the actor in the context of the command; the user is the actor in the context of the query.
No query is ever side-effect free. It is the intention that makes it a query.
Such a request is a command.
In simple OOP, I've often modeled this kind of message as a void method with an out parameter.
For example, in a financial model, I had an advisory contract (entity and aggregate root) enforcing rules to build a financial recommendation (entity, immutable). The publication of the definitive recommendation was modeled with a command like this:
public interface IAdvisoryContract
{
    ContractNumber Number { get; }

    // lot of well documented commands and queries here...

    /// 90 lines of documentation here...
    void PublishRecommendation(
        IUser advisor, IAssetClassBreakdownAnalysis assetClassAnalysis,
        IVolatilityAnalysis tevAnalysis, Time time,
        out IRecommendation newRecommendation);

    event EventHandler<EventArgs<RecommendationNumber>> RecommendationPublished;
}
In CQRS it depends on your infrastructure: for example, in a similar situation over HTTP, I used a POST returning the relevant info to the client.
What about sending a notification message after the query executes? I would probably use a decorator like:
public QueryRs query(QueryRq rq) {
    final QueryRs rs = target.query(rq);
    notifier.notifyQueryDone(rs);
    return rs;
}
Then make the workflow subscribe to and consume the message. I'm not sure whether the query is still considered to change state in this solution?
Related
Using the Axon Framework, what is the best way to trigger a command after another command has succeeded?
For example, a command creates an aggregate (and then an entity). After the entity has been created, we need to create another aggregate/entity in another domain (as a child of the first entity, for example).
The second entity has to be created using a command on an aggregate, but where should the second command be initiated? In the #EventSourcingHandler of the first aggregate? In the #EventHandler when saving the first entity? By using a saga?
Another point/question:
What is the best way to trigger a command from a query? For example, I have a query to get some records, and if a record does not exist I would like to create it automatically and then send the result back from the query method. Do I have to use the command gateway to send a create command from the query handler class and then wait for the result? Or is there another way to do it?
Thanks for support and help.
Alexandre
what is the best way to trigger a command after another command has succeeded?
Ignoring the "best way" in that question, you are most likely looking for a process manager. In Axon Framework, those are called Sagas.
What is the best way to trigger a command from a query? For example, I have a query to get some records and if a record does not exist, I would like to create it automatically
I would be very, very careful doing something like that.
First of all, what if you receive the same query multiple times in a short time? How do you know if the command has been already sent or not?
Second, you don't want to wait for an update while responding to a query as you can't know how long that would take or if it will happen at all.
One alternative is to use a subscription query. Indicate in the initial response that data is incomplete and let the client trigger an update via appropriate command flow. Since the client is subscribed to the query, it will receive the update once it is complete.
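A minimal sketch of that flow with Axon's subscription query API might look like the following (RecordQuery, RecordView and CreateRecordCommand are invented names for this example; the API calls themselves are standard Axon 4):

import org.axonframework.commandhandling.gateway.CommandGateway;
import org.axonframework.messaging.responsetypes.ResponseTypes;
import org.axonframework.queryhandling.QueryGateway;
import org.axonframework.queryhandling.SubscriptionQueryResult;

class RecordLookupService {

    private final QueryGateway queryGateway;
    private final CommandGateway commandGateway;

    RecordLookupService(QueryGateway queryGateway, CommandGateway commandGateway) {
        this.queryGateway = queryGateway;
        this.commandGateway = commandGateway;
    }

    void lookup(String recordId) {
        // Subscribe to the query: we get the initial (possibly incomplete) result
        // plus a stream of updates emitted by the projection later on.
        SubscriptionQueryResult<RecordView, RecordView> result = queryGateway.subscriptionQuery(
                new RecordQuery(recordId),
                ResponseTypes.instanceOf(RecordView.class),
                ResponseTypes.instanceOf(RecordView.class));

        result.initialResult().subscribe(initial -> {
            // isIncomplete() is a flag on the hypothetical RecordView read model.
            if (initial.isIncomplete()) {
                // The client, not the query handler, triggers the command flow.
                commandGateway.send(new CreateRecordCommand(recordId));
            }
        });

        // The update arrives once the projection has processed the resulting event.
        result.updates().subscribe(this::render);
    }

    private void render(RecordView view) {
        // Update the UI / push the response to the caller.
    }
}

On the projection side, the query handler component would push updates to subscribers with QueryUpdateEmitter.emit(RecordQuery.class, q -> q.getRecordId().equals(view.getId()), view) once the record has actually been created.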
Using the Axon framework, what is the best way to trigger a command after another command has succeeded? For example, a command will create an aggregate (and then the entity); after the entity has been created, we need to create another aggregate/entity on another domain (as a child of the first entity, for example).
First of all, a command NEVER creates an aggregate. A command expresses the intention to create an aggregate. The command handler (in Axon, typically a constructor on the aggregate) should verify that all conditions to create the aggregate are met and then apply an aggregate-created event. That event actually creates the aggregate and initializes its state. Check the Axon documentation on commands and events for more details. Note also that only events are stored in the event store; commands are fire and forget.
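As a rough illustration of that create flow in Axon (CreateSomethingCommand and SomethingCreatedEvent are made-up names; @Aggregate assumes the Spring integration):

import org.axonframework.commandhandling.CommandHandler;
import org.axonframework.eventsourcing.EventSourcingHandler;
import org.axonframework.modelling.command.AggregateIdentifier;
import org.axonframework.modelling.command.AggregateLifecycle;
import org.axonframework.spring.stereotype.Aggregate;

@Aggregate
public class Something {

    @AggregateIdentifier
    private String id;

    protected Something() {
        // Required by Axon to reconstruct the aggregate from its events.
    }

    @CommandHandler
    public Something(CreateSomethingCommand command) {
        // Verify the conditions for creation here; throwing rejects the command.
        AggregateLifecycle.apply(new SomethingCreatedEvent(command.getId()));
    }

    @EventSourcingHandler
    public void on(SomethingCreatedEvent event) {
        // Only the event initializes state; this runs on creation and on every replay.
        this.id = event.getId();
    }
}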
The aggregate-created event applied by the aggregate can create the entity via an event handler on a projection, for example (I assume the entity exists in the scope of a projection in a read model).
Since you mention that the creation of the entity is followed by another aggregate being created, sagas (as mentioned by Milen) come into play. The saga is started by the aggregate-created event of the first aggregate and sends a create command for the second aggregate. Then the same flow as for the first aggregate repeats (verify all conditions to create the aggregate and apply the creation event).
If you need a guarantee that the first entity exists before the second aggregate is created, you could try sending a kind of first-entity-created event from the projection, catch this event in your saga, and let that saga handler fire the command to create the second aggregate. I don't know for sure whether projections are allowed to send commands, although I cannot imagine why not... Domain events can be used to cross the boundaries of a microservice.
It sounds more complicated than it is, but be sure to check the chapter about sagas in the Axon docs as well.
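A compressed sketch of such a saga (event and command names are again hypothetical; @Saga and @Autowired assume the Spring integration):

import org.axonframework.commandhandling.gateway.CommandGateway;
import org.axonframework.modelling.saga.EndSaga;
import org.axonframework.modelling.saga.SagaEventHandler;
import org.axonframework.modelling.saga.StartSaga;
import org.axonframework.spring.stereotype.Saga;
import org.springframework.beans.factory.annotation.Autowired;

@Saga
public class CreateChildSaga {

    @Autowired
    private transient CommandGateway commandGateway;

    @StartSaga
    @SagaEventHandler(associationProperty = "firstAggregateId")
    public void on(FirstAggregateCreatedEvent event) {
        // React to the creation of the first aggregate by commanding the creation of the second.
        commandGateway.send(new CreateSecondAggregateCommand(event.getFirstAggregateId()));
    }

    @EndSaga
    @SagaEventHandler(associationProperty = "firstAggregateId")
    public void on(SecondAggregateCreatedEvent event) {
        // Nothing left to coordinate; the saga ends here.
    }
}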
The second entity has to be created using a command on an aggregate, but where to initiate the second command? In the #EventSourcingHandler of the first aggregate? In the #EventHandler when saving the first entity? By using a saga?
I think it is a bit odd to create entities using commands on an aggregate. Aggregates handle commands related to the state changes they allow from a business point of view. What is your intention in doing so? Entities (assuming you mean JPA entities), as mentioned earlier, are used to "summarize" all events that led to the current state of the aggregate and often live in the read model (in a CQRS architecture). #EventHandlers listen for events and typically update the entity correspondingly in a projection. #EventSourcingHandlers live on the aggregate and respond to events to update the aggregate's current state, which has to be known to verify the validity of future commands. Commands are closely related to the business-process actions performed on the aggregate.
I hope this clarifies your questions a bit although I fear they just create more questions ;-)
I am working on a project and want to try to adhere to DDD principles. As I've been going about it I've come across some questions that I hope someone will be able to help me with.
The project is a request system with each request having multiple request types inside it.
When a request is submitted it will be in a status of AwaitingApproval and will get routed to different people sequentially according to a set of rules as below:-
1) If the request only contains request types that don't need intermediate approval, it will be routed to a processing department who will be the one and only approval in the chain.
2) If the initiator of the request is a Level 1 manager, it will require approvals from Level 2, Level 3 and Level 4 managers.
3) If the initiator is a Level 2 manager, the request will be as in 2) but without the need for Level 2 approval, for obvious reasons.
4) If the request contains a request type that increases a monetary value by, let's say, >$500, it will require the approval of a Level 4 manager.
A request at any of the stages can either be Approved, Rejected or Rejected With Changes. Approve takes it to the next level in the approval chain. Reject ends the process entirely.
Reject With Changes allows the user to send the request back to any of the previous approvers as appropriate, who can then do the same, with an Approve potentially sending it back through the chain again if it was a monetary change; if the reject-with-changes came from the processing department, it will be re-assigned straight back to them.
Initially, I considered that we had an aggregate root of a Request with a RequestStatus using the State pattern.
So I would have something like
interface IState {
    IState Approve();
}

class Request {
    IState _currentstate = new AwaitingApprovalState();
    string _assignee;

    void AssignTo(string person) {
        _assignee = person;
    }

    void Approve() {
        _currentstate = _currentstate.Approve();
    }
}

class AwaitingApprovalState : IState {
    public IState Approve() {
        return new ApprovedState();
    }
}

class ApprovedState : IState {
    public IState Approve() {
        return new Level2ManagerApprovedState();
    }
}
This got me to a point but I kept getting caught in knots. I think I am missing something in my initial model.
Some questions that occur
1) Where does the responsibility of working out who the next manager in the chain is to assign the request? Does that belong in the state class implementations or somewhere else like on the Request itself?
2) Currently a new request is in AwaitingApprovalState and if I approve it goes straight to ApprovedState. Where does the logic go that determines that because I don't require any intermediate approvals it should go straight to the processing department?
3) If there is a reject with modifications how do we go back to previous levels - I have considered some sort of StatusHistory entity.
I have considered maybe that this is some sort of workflow component but want to avoid that as much as possible.
Any pointers or ideas would be very much appreciated
It often makes sense to model processes as histories of related events. You might imagine this as a log of activity related to a specific request. Imagine somebody getting messages from different departments and writing down the messages in a book:
Request #12354 submitted.
Request #12354 received level 2 approval: details....
Request #12354 received level 3 approval: details....
To figure out what work needs to be done next, you just review what has already happened. Load all of the events, fold them into an in-memory representation, and then query that structure.
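A minimal sketch of that fold, with invented event types for the request log above:

import java.util.List;

// Hypothetical events written into the "book" for a request.
sealed interface RequestEvent permits Submitted, LevelApproved, RejectedWithModifications {}
record Submitted(String requestId) implements RequestEvent {}
record LevelApproved(String requestId, int level) implements RequestEvent {}
record RejectedWithModifications(String requestId, String details) implements RequestEvent {}

class RequestState {

    int highestApprovalLevel = 0;
    boolean rejectedWithModifications = false;

    // Fold the history into an in-memory representation, then query that structure.
    static RequestState replay(List<RequestEvent> history) {
        RequestState state = new RequestState();
        for (RequestEvent event : history) {
            if (event instanceof LevelApproved approved) {
                state.highestApprovalLevel = Math.max(state.highestApprovalLevel, approved.level());
            } else if (event instanceof RejectedWithModifications) {
                state.rejectedWithModifications = true;
            }
        }
        return state;
    }

    boolean needsLevelFourApproval() {
        return highestApprovalLevel < 4;
    }
}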
Where does the responsibility of working out who the next manager in the chain is to assign the request?
Something like that would probably be implemented in a domain service; if the aggregate doesn't contain the information that it needs to do work, then it has to ask somebody else.
A common pattern for this would be a "stateless" service that knows how to find the right manager, given a set of values which describe the state of the aggregate. The aggregate knows what state it is in, so it passes the values describing its state to the service to get the answer.
Manager levelFourManager = managers.getLevelFourManager(...)
Where does the logic go that determines that because I don't require any intermediate approvals it should go straight to the processing department?
Probably into the aggregate itself, eventually.
Rinat Abdullin put together a very good tutorial on evolving process managers, which is very much in line with Greg Young's talk Stop Over Engineering.
You've got some query in your model like
request.isReadyForProcessing()
In the early versions of your model, the request might answer false until some human operator has told it that "yes, you are ready"; then, over time you start adding in the easy cases to compute.
boolean isReadyForProcessing() {
    return aHumanSaidImReadyForProcessing() || ImOneOfTheEasyCasesToProcess();
}
What "send to processing" actually means probably doesn't live in the aggregate. We might borrow the domain service idea again, this time to communicate with an external system
void notify(ProcessingClient client) {
    if (this.isReadyForProcessing()) {
        client.process(this.id);
    }
}
The processing client might be doing real work, or it might just be sending a message somewhere else -- the aggregate model doesn't really care.
Part of the point of domain model, as a pattern, is that our domain calls for the coordination/orchestration of messages between objects in the model. If we didn't need that complexity, we'd probably look at something more straightforward, like transaction scripts. The printed version of Patterns of Enterprise Application Architecture dedicates a number of pages to describing these.
If there is a reject with modifications how do we go back to previous levels - I have considered some sort of StatusHistory entity.
Yes, that -- RejectWithModifications is just another message to write into the book, and that gives you more information to consider when answering questions.
Request #12354 submitted.
Request #12354 received level 2 approval: details....
Request #12354 received level 3 approval: details....
Request #12354 rejected with modifications: details....
I understand what you're saying and it makes great sense. I still get caught up in implementation details.
That is not your fault.
The literature is weak.
Does the log of events (let's call it ActivityLog) live on the Request aggregate, or is it its own aggregate, like in the Cargo DDD samples?
Putting it into the aggregate is probably the right place to start; it might not stay there. Finding a decent local minimum for your current system is probably better than trying to find the global minimum right away.
Are there differences between domain events as per Evans in the blue book and more recent domain events?
Maybe; it's also tangled because domain events aren't necessarily the sort of thing people are talking about when they say "event sourcing".
Need to see the wood for the trees.
The only thing that has worked for me, is to regularly go back to first principles, working through solutions piece by piece, and watching like a hawk for implicit assumptions.
1) Where does the responsibility of working out who the next manager in the chain is to assign the request? Does that belong in the state class implementations or somewhere else, like on the Request itself?
It depends. It could be in Request itself, it could be in a Domain Service.
As an aside, I would recommend, if feasible, not determining exactly who the next validator is when the Request transitions to its next state, but later. Sending a notification and displaying the validation request on a dashboard are consequences of domain state changes but not state changes per se; they don't need to happen atomically with the operation on Request but can happen at a later time.
If you manage to dissociate the bit that looks up validator data for request follow-up from the logic that determines what the next type of validator is (Level 1 manager, Level 2 manager, etc.), you will probably spare yourself some complex modelling of the Request aggregate.
2) Currently a new request is in AwaitingApprovalState and if I approve it goes straight to ApprovedState. Where does the logic go that determines that because I don't require any intermediate approvals it should go straight to the processing department?
Same as 1)
3) If there is a reject with modifications how do we go back to previous levels - I have considered some sort of StatusHistory entity.
You could either work out who the previous validation group was, using the same kind of logic as for determining the next group, or you could store a history of past states as a private member of Request alongside _currentState.
To explain this, let's assume there are different types of request:
Purchase (requires manager approval, e.g. a request by a Level 2 manager requires approval from Level 3 and above managers)
BusinessMeet (No Approval Needed)
As we can see, there are different types of request with different approval cycles, and more such types will be added in the future.
Now let's see how we would define the current structure in DDD:
PurchaseRequest Aggregate extends RequestAgg
- request id
- requested by
- purchase info -- description of the purchase
- requesting manager's level
- pending-on managers list -- list of managers with level
- approved-by managers list -- list of managers with level
- next manager for approval -- manager with level
- status {approved, pending}
BusinessMeetRequest Aggregate extends RequestAgg
- request id
- requested by
- status {approved, pending} -- by default it should be approved
ApprovalRequestAgg
- request id
- manager id
- request type
- status {approved, rejected}
When the user makes a request, they hit the API with either a PurchaseRequest or a BusinessMeetRequest.
Let's say the user submits a purchase request; a PurchaseRequestAgg will then be created.
Based on the PurchaseRequestCreated event, a ProcessManager will listen to the event and create a new aggregate, ApprovalRequestAgg, which holds the manager id.
The manager will be able to see the requests they need to approve from the ApprovalRequest read model. Since ApprovalRequest has the request id and request type, they can fetch the actual purchase request to see its details. After this they can either approve or reject, which emits an ApprovalRequestApproved or ApprovalRequestRejected event.
Based on that event, something will update the PurchaseRequestAgg, and the PurchaseRequestAgg will emit an event (let's say after approval) PurchaseRequestAcceptedByManager.
Now something listens to that event and the loop above repeats, as sketched below.
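A rough, framework-agnostic sketch of the process-manager step described above (all type names are invented):

// Reacts to PurchaseRequestCreated and creates the approval aggregate for the next manager.
class PurchaseApprovalProcessManager {

    private final ApprovalRequestRepository approvalRequests;

    PurchaseApprovalProcessManager(ApprovalRequestRepository approvalRequests) {
        this.approvalRequests = approvalRequests;
    }

    void on(PurchaseRequestCreated event) {
        // Work out the first manager in the chain and create the approval request for them.
        String nextManagerId = event.nextManagerForApproval();
        approvalRequests.save(new ApprovalRequest(event.requestId(), nextManagerId, "PURCHASE"));
    }
}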
**In the above solution the only problem is that adding a new type of request will take time.**
Another way could be to have a single RequestAgg for all requests:
RequestAgg
- request id
- type
- info
- status
and the algorithm for routing approvals to the managers is written in the ProcessManager.
I think this will help you. If you still have doubts, ping again :)
When you use Node's EventEmitter, you subscribe to a single event. Your callback is only executed when that specific event is fired up:
eventBus.on('some-event', function(data){
// data is specific to 'some-event'
});
In Flux, you register your store with the dispatcher, then your store gets called when every single event is dispatched. It is the job of the store to filter through every event it gets, and determine if the event is important to the store:
eventBus.register(function(data){
    switch(data.type){
        case 'some-event':
            // now data is specific to 'some-event'
            break;
    }
});
In this video, the presenter says:
"Stores subscribe to actions. Actually, all stores receive all actions, and that's what keeps it scalable."
Question
Why and how is sending every action to every store [presumably] more scalable than only sending actions to specific stores?
The scalability referred to here is more about scaling the codebase than scaling in terms of how fast the software is. Data in flux systems is easy to trace because every store is registered to every action, and the actions define every app-wide event that can happen in the system. Each store can determine how it needs to update itself in response to each action, without the programmer needing to decide which stores to wire up to which actions, and in most cases, you can change or read the code for a store without needing to worry about how it affects any other store.
At some point the programmer will need to register the store. The store is very specific to the data it'll receive from the event. How exactly is looking up the data inside the store better than registering for a specific event, and having the store always expect the data it needs/cares about?
The actions in the system represent the things that can happen in a system, along with the relevant data for that event. For example:
A user logged in; comes with user profile
A user added a comment; comes with comment data, item ID it was added to
A user updated a post; comes with the post data
So, you can think about actions as the database of things the stores can know about. Any time an action is dispatched, it's sent to each store. So, at any given time, you only need to think about your data mutations a single store + action at a time.
For instance, when a post is updated, you might have a PostStore that watches for the POST_UPDATED action, and when it sees it, it will update its internal state to store off the new post. This is completely separate from any other store which may also care about the POST_UPDATED event—any other programmer from any other team working on the app can make that decision separately, with the knowledge that they are able to hook into any action in the database of actions that may take place.
Another reason this is useful and scalable in terms of the codebase is inversion of control; each store decides what actions it cares about and how to respond to each action; all the data logic is centralized in that store. This is in contrast to a pattern like MVC, where a controller is explicitly set up to call mutation methods on models, and one or more other controllers may also be calling mutation methods on the same models at the same time (or different times); the data update logic is spread through the system, and understanding the data flow requires understanding each place the model might update.
Finally, another thing to keep in mind is that registering vs. not registering is sort of a matter of semantics; it's trivial to abstract away the fact that the store receives all actions. For example, in Fluxxor, the stores have a method called bindActions that binds specific actions to specific callbacks:
this.bindActions(
"FIRST_ACTION_TYPE", this.handleFirstActionType,
"OTHER_ACTION_TYPE", this.handleOtherActionType
);
Even though the store receives all actions, under the hood it looks up the action type in an internal map and calls the appropriate callback on the store.
I've been asking myself the same question, and can't see technically how registering adds much, beyond simplification. I will pose my understanding of the system so that hopefully, if I am wrong, I can be corrected.
TL;DR: EventEmitter and Dispatcher serve similar purposes (pub/sub) but focus their efforts on different features. Specifically, the 'waitFor' functionality (which allows one event handler to ensure that a different one has already been called) is not available with EventEmitter. Dispatcher has focused its efforts on the 'waitFor' feature.
The final result of the system is to communicate to the stores that an action has happened. Whether the store 'subscribes to all events, then filters' or 'subscribes to a specific event' (filtering at the dispatcher) should not affect the final result: data is transferred in your application. (A handler always only switches on the event type and processes it; it doesn't want to operate on ALL events.)
As you said "At some point the programmer will need to register the store.". It is just a question of fidelity of subscription. I don't think that a change in fidelity has any affect on 'inversion of control' for instance.
The added (killer) feature in Facebook's Dispatcher is its ability to 'waitFor' a different store to handle the event first. The question is, does this feature require that each store has only one event handler?
Let's look at the process. When you dispatch an action on the Dispatcher, it (omitting some details):
iterates all registered subscribers (to the dispatcher)
calls the registered callback (one per store)
the callback can call 'waitFor()', passing a 'dispatchId'. This internally references the callback registered by a different store, which is executed synchronously, causing the other store to receive the action and be updated first. This requires that 'waitFor()' is called before your code which handles the action.
The callback called by 'waitFor' switches on action type to execute the correct code.
the callback can now run its code, knowing that its dependencies (other stores) have already been updated.
the callback switches on the action 'type' to execute the correct code.
This seems a very simple way to allow event dependencies.
Basically, all callbacks are eventually called, but in a specific order, and each then switches to only execute specific code. So it is as if we only triggered a handler for the 'add-item' event on each store, in the correct order.
If subscriptions were at a callback level (not 'store' level), would this still be possible? It would mean:
Each store would register multiple callbacks to specific events, keeping references to their 'dispatchTokens' (same as currently)
Each callback would have its own 'dispatchToken'
The user would still 'waitFor' a specific callback, but it would be a specific handler for a specific store
The dispatcher would then only need to dispatch to callbacks of a specific action, in the same order
Possibly the smart people at Facebook have figured out that adding the complexity of individual callbacks would actually be less performant, or possibly it is just not a priority.
In our Wicket application I need to start a long-running operation. It will communicate with an external device and provide a result after some time (up to a few minutes).
Java-wise the long running operation is started by a method where I can provide a callback.
public interface LegacyThingy {
    void startLegacyWork(WorkFinished callback);
}

public interface WorkFinished {
    public void success(Whatever ...);
    // failure never happens
}
On my Wicket page I plan to add an Ajax button to invoke startLegacyWork(...), providing an appropriate callback. For the result I'd display a panel that polls for the result using an AbstractAjaxTimerBehavior.
What boggles my mind is the following problem:
To keep state Wicket serializes the component tree along with the data, thus the data needs to be wrapped in serializable models (or detachable models).
So to keep the "connection" between the result panel and the WorkFinished callback I'd need some way to create a link between the "we serialize everything" world of Wicket and the "Hey I'm a Java Object and nobody manages my lifetime" world of the legacy interface.
Of course I could store ongoing operations in a kind of global map and use a Wicket detachable model that looks them up by id ... but that feels dirty and I don't assume that's the correct way. (It opens up a whole can of worms regarding the lifetime of such things.)
Or am I looking at this from a completely wrong angle for how to do long-running operations from Wicket?
I think the approach with the global map is good. Wicket also uses something similar internally - org.apache.wicket.protocol.http.StoredResponsesMap. This is a special map that keeps the generated responses for REDIRECT_TO_BUFFER strategy. It has the logic to keep the entries for at most some pre-configured duration and also can have upper limit of entries.
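A rough sketch of that registry idea combined with a detachable model (OperationRegistry, WorkFinishedResult and the eviction concerns are invented for illustration; only LoadableDetachableModel is actual Wicket API):

import java.io.Serializable;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.wicket.model.LoadableDetachableModel;

// Application-scoped registry keeping the non-serializable legacy results out of the page.
// A real implementation needs an eviction policy (max age / max entries), like StoredResponsesMap.
class OperationRegistry {

    private static final Map<String, WorkFinishedResult> RESULTS = new ConcurrentHashMap<>();

    static String newOperationId() {
        return UUID.randomUUID().toString();
    }

    static void complete(String id, WorkFinishedResult result) {
        RESULTS.put(id, result);
    }

    static WorkFinishedResult lookup(String id) {
        return RESULTS.get(id); // null until the legacy callback has fired
    }
}

// The page only serializes the operation id; the model re-resolves the result on each request.
class OperationResultModel extends LoadableDetachableModel<WorkFinishedResult> {

    private final String operationId;

    OperationResultModel(String operationId) {
        this.operationId = operationId;
    }

    @Override
    protected WorkFinishedResult load() {
        return OperationRegistry.lookup(operationId);
    }
}

// Placeholder for whatever data the WorkFinished callback delivers.
record WorkFinishedResult(String payload) implements Serializable {}

The timer behavior on the result panel can then simply check whether the model object is non-null and stop polling once it is.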
I've been looking at CQRS, but I find it restricting when it comes to showing the result of commands in, let's say, a web application.
It seems to me that using CQRS, one is forced to refresh the whole view or parts of it to see the changes (using a second request), because the original command request will only store an event which is to be processed in the future.
In a Web Application, is it possible that a Command request could carry the result of the event it creates back to the browser?
The answer to the headline of this question is quite simple: nothing, void, or from a web browser/REST point of view, 200 OK with an empty body.
Commands applied to the system (if the change is successfully committed) do not yield a result. And in the case that you wish to keep the business logic on the server side, yes, you do need to refresh the data by executing yet another request (query) to the server.
However, most often you can get rid of the second roundtrip to the server. Take a table where you modify a row and press a save button: do you really need to update the table? Or, in the case where a user submits a comment on a blog post, just append the comment to the other comments in the DOM without the round trip.
If you find yourself wanting the modified state returned from the server you need to think hard about what you are trying to achieve. Most scenarios can be changed so that a simple 200 OK is more than enough.
Update: Regarding your question about queuing incoming commands: it's not recommended to queue incoming commands, since this can produce false positives (a command was successfully received and queued, but when it tries to modify the state of the system it fails). There is one exception to the rule, and that is if you have a system with an append-only model as state. Then it is safe to queue the mutation of the system state until later, provided the command is valid.
Udi Dahan's article Clarified CQRS is always a good read on this topic: http://www.udidahan.com/2009/12/09/clarified-cqrs/
Async commands are a strange thing to do in CQRS, considering that commands can be accepted or rejected.
I wrote about it, mentioning the debate between Udi Dahan's vision and Greg Young's vision on my blog: https://www.sunnyatticsoftware.com/blog/asynchronous-commands-are-dangerous
Answering your question: if you strive to design the domain objects (aggregates?) in a transactional way, where every command initiates a transaction that ends in zero, one or more events (independently of whether some process manager later picks up one of those events and initiates another transaction), then I see no reason to have an empty command result. It's extremely useful for the external actor that initiates the use case to receive a command result indicating things like whether the command was accepted or not, which events it produced, or which specific state the domain is now in (e.g. aggregate version).
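To make that concrete, the shape of such a result could be as simple as this (an illustrative record, not any particular library's type):

import java.util.List;

// Returned synchronously to the actor that sent the command.
record CommandResult(
        boolean accepted,          // was the command accepted or rejected?
        List<String> eventTypes,   // which domain events the transaction produced
        long aggregateVersion      // the specific state the aggregate is now at
) {
    static CommandResult rejected() {
        return new CommandResult(false, List.of(), -1L);
    }
}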
When you design a system in CQRS with asynchronous commands, it's a fallacy to expect that the command will succeed and that there will be a quick state change that you'll be notified about.
Sometimes the domain needs to communicate with external services (domain services?) in an asynchronous way, depending on those services' APIs. That does not mean that the domain cannot produce meaningful domain events, synchronously informing of what's going on and which changes have occurred in the domain. For example, the following flow makes a lot of sense:
Actor sends a sync command PurchaseBasket
Domain uses an external service to MakePayment and knows that the payment is being processed
Domain produces the events BasketPurchaseAttempted and/or PaymentRequested or similar
Still, synchronously, the command returns the result 200 OK with a payload indicating some information about what has happened. Even if the payment hasn't completed because the payment platform is asynchronous, at least the actor has meaningful knowledge about the result of the transaction it initiated.
Compare this design with an asynchronous one
Actor sends an async command PurchaseBasket
The system returns a 202 Accepted with a transaction Id, indicating "thanks for your interest, we'll call you, this is the ticket number".
In a separate process, the domain initiates a process manager or similar with the payment platform, and when the process completes (if it completes, assuming the command is accepted and there are no business rules that forbid the purchase basket), then the system can start notifying the actor.
Think about how to test both scenarios. Think about how to design the UX to accommodate this. What would you show in the second scenario in the UI? Would you assume the command was accepted? Would you display the transaction Id with a thank-you message and "please wait"? Would you take a big leap of faith and keep the user waiting with a loading screen, waiting for the async process to finish and be notified via a web socket or polling strategy for XXX seconds?
Async commands in CQRS are a dangerous thing and make us lazy domain designers.
UPDATE: the accepted answer suggests not returning anything, and I fully disagree. Check out the Eventuous library and you'll see that returning a result is extremely helpful.
Also, if an async command can't be rejected it's... because it's not really a command but a fact.
UPDATE: I am surprised my answer got negative votes, especially because Greg Young, who coined the term CQRS, says literally in his book about CQRS:
One important aspect of Commands is that they are always in the imperative tense; that is, they are telling the Application Server to do something. The linguistics with Commands are important. A situation could arise, for example with a disconnected client, where something has already happened, such as a sale, and the client could want to send up a "SaleOccurred" Command object. When analyzing this, is the domain allowed to say no, that this thing did not happen? Placing Commands in the imperative tense linguistically shows that the Application Server is allowed to reject the Command; if it were not allowed to, it would be an Event. For more information on this see "Events".
While I understand certain authors are biased towards the solutions they sell, I'd go to the main source of info on CQRS, regardless of how many hundreds of implementations out there return void when they could return something to inform the requester ASAP. It's just an implementation detail, but thinking that way helps to model the solution better.
Greg Young, again, the guy who coined the CQRS term, also says
CQRS and Event Sourcing describe something inside a single system or component.
The communication between different components/bounded contexts (which ideally should be event driven and asynchronous, although that's not a requirement either) is outside the scope of CQRS.
PS: ignoring an event is not the same as rejecting a command. Rejection implies a direct answer to the command sender; something "difficult" if you return nothing to the sender (not even a correlation ID?).
Source:
https://gregfyoung.wordpress.com/tag/cqrs/
https://cqrs.files.wordpress.com/2010/11/cqrs_documents.pdf