First of all, let me state that I am new to Command Query Responsibility Segregation and Event Sourcing (Message-Driven Architecture), but I'm already seeing some significant design benefits. However, there are still a few issues on which I'm unclear.
Say I have a Customer class (an aggregate root) that contains a property called postalAddress (an instance of the Address class, which is a value object). I also have an Order class (another aggregate root) that contains (among OrderItem objects and other things) a property called deliveryAddress (also an instance of the Address class) and a string property called status.
The customer places an order by issuing a PlaceOrder command, which triggers the OrderReceived event. At this point in time, the status of the order is "RECEIVED". When the order is shipped, someone in the warehouse issues a ShipOrder command, which triggers the OrderShipped event. At this point in time, the status of the order is "SHIPPED".
One of the business rules is that if a Customer updates their postalAddress before an order is shipped (i.e., while the status is still "RECEIVED"), the deliveryAddress of the Order object should also be updated. If the status of the Order were already "SHIPPED", the deliveryAddress would not be updated.
Question 1. Is the best place to put this "conditionally cascading address update" in a Saga (a.k.a., Process Manager)? I assume so, given that it is translating an event ("The customer just updated their postal address...") to a command ("... so update the delivery address of order 123").
Question 2. If a Saga is the right tool for the job, how does it identify the orders that belong to the user, given that an aggregate can only be retrieved by its unique ID (in my case a UUID)?
Continuing on, given that each aggregate represents a transactional boundary, if the system were to crash after the Customer's postalAddress was updated (the CustomerAddressUpdated event being persisted to the event store) but before the OrderDeliveryAddressUpdated event could be persisted (i.e., between the two transactions), then the system is left in an inconsistent state.
Question 3. How are such "violations" of consistency rules detected and rectified?
In most instances the delivery address of an order should be independent of any other data change, as a customer may want the order sent to an arbitrary address. That being said, I'll give my 2c on how you could approach this:
Is the best place to handle this in a process manager?
Yes. You should have an OrderProcess.
How would one get hold of the correct OrderProcess instance given that it can only be retrieved by aggregate id?
There is nothing preventing one from adding an additional lookup mechanism that associates data with an aggregate id. In my experimental, going-live-soon mechanism called shuttle-recall I have an IKeyStore mechanism that associates any arbitrary key to an AR Id. So you would be able to associate something like [order-process]:customerId=CID-123; as a key to some aggregate.
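For illustration, such a key store might look like this minimal sketch (the interface and method names here are assumptions and may differ from the actual shuttle-recall API):

public interface IKeyStore
{
    // Associate an arbitrary key with an aggregate root id.
    void Add(Guid aggregateId, string key);

    // Resolve a previously associated key back to an aggregate id, if any.
    Guid? Find(string key);
}

// Usage: register the key when the process starts, resolve it later when
// the CustomerAddressUpdated event arrives.
// keyStore.Add(orderProcessId, $"[order-process]:customerId={customerId};");
// var processId = keyStore.Find($"[order-process]:customerId={customerId};");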
How are such "violations" of consistency rules detected and rectified?
In most cases they could be handled out-of-band, if possible. Should I order something from Amazon and attempt to change my address after the order has shipped, the order is still going to the original address. In your case of linking the customer postal address to the active order address, you could notify the customer that n number of orders have had their addresses updated but that a recent order (within some tolerance) has not.
As for the system going down before processing, you should have some guaranteed delivery mechanism to handle this. I do not regard these domain events in the same way I regard system events in a messaging infrastructure such as a service bus.
Just some thoughts :)
An Aggregate can use a View; this fact is described in Vaughn Vernon's book:
Such Read Model Projections are frequently used to expose information to various clients (such as desktop and Web user interfaces), but they are also quite useful for sharing information between Bounded Contexts and their Aggregates. Consider the scenario where an Invoice Aggregate needs some Customer information (for example, name, billing address, and tax ID) in order to calculate and prepare a proper Invoice. We can capture this information in an easy-to-consume form via CustomerBillingProjection, which will create and maintain an exclusive instance of CustomerBillingView. This Read Model is available to the Invoice Aggregate through the Domain Service named IProvideCustomerBillingInformation. Under the covers this Domain Service just queries the document store for the appropriate instance of the CustomerBillingView.
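For illustration, the Domain Service from the quote might be implemented along these lines (a minimal sketch; the IDocumentStore abstraction and method names are assumptions):

public interface IProvideCustomerBillingInformation
{
    CustomerBillingView GetBillingFor(Guid customerId);
}

public class CustomerBillingInformationProvider : IProvideCustomerBillingInformation
{
    private readonly IDocumentStore documentStore; // assumed read-model store

    public CustomerBillingInformationProvider(IDocumentStore documentStore)
    {
        this.documentStore = documentStore;
    }

    // Under the covers, just query the document store for the view.
    public CustomerBillingView GetBillingFor(Guid customerId)
    {
        return documentStore.Load<CustomerBillingView>(customerId);
    }
}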
Let's imagine our application should allow creating many users, but with unique names. Commands/Events flow:
CreateUser{Alice} command sent
UserAggregate checks UsersListView, since there are no users with name Alice, aggregate decides to create user and publish event.
UserCreated{Alice} event published // By UserAggregate
UsersListProjection processed UserCreated{Alice} // for simplicity let's think UsersListProjection just accumulates users names if receives UserCreated event.
CreateUser{Bob} command sent
UserAggregate checks UsersListView, since there are no users with name Bob, aggregate decides to create user and publish event.
UserCreated{Bob} event published // By UserAggregate
CreateUser{Bob} command sent
UserAggregate checks UsersListView; since the projection has not yet processed the first UserCreated{Bob}, there are still no users with name Bob, so the aggregate decides to create the user and publish the event.
UserCreated{Bob} event published // By UserAggregate, for the second time
UsersListProjection processed the first UserCreated{Bob}.
UsersListProjection processed the second UserCreated{Bob}.
The problem is that UsersListProjection did not have time to process the first event and contained stale data, and the aggregate used this stale data. As a result, 2 users with the same name were created.
how to avoid such situations?
how to make aggregates and projections consistent?
how to make aggregates and projections consistent?
In the common case, we don't. Projections are consistent with the aggregate at some time in the past, but do not necessarily have all of the latest updates. That's part of the point: we give up "immediate consistency" in exchange for other (higher leverage) benefits.
The duplication that you refer to is usually solved a different way: by using conditional writes to the book of record.
In your example, we would normally design the system so that the second attempt to write Bob to our data store would fail because of a conflict. Also, we prevent duplicates from propagating by ensuring that the write to the data store happens-before any events are made visible.
What this gives us, in effect, is a "first writer wins" write strategy. The writer that loses the data race has to retry/fail/etc.
(As a rule, this depends on the idea that both attempts to create Bob write that information to the same place, using the same locks.)
A common design to reduce the probability of conflict is to NOT validate against the "read model", but to instead use the aggregate's own data in the data store. That doesn't necessarily eliminate all data races, but it reduces the width of the window.
Finally, we fall back on Memories, Guesses and Apologies.
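To make the conditional write concrete, here is a minimal sketch of a "first writer wins" create; the event-store API (Append, ExpectedVersion, ConcurrencyException) is illustrative rather than any specific library's:

public void Handle(CreateUser command)
{
    // Both attempts to create "Bob" target the same stream, so they
    // contend on the same "lock" in the book of record.
    var streamId = $"user-{command.Name}";

    try
    {
        // Conditional append: fails if the stream already exists, and the
        // write happens-before any events become visible.
        eventStore.Append(streamId, ExpectedVersion.NoStream,
            new UserCreated(command.Name));
    }
    catch (ConcurrencyException)
    {
        // The loser of the data race retries, fails, or apologizes.
        throw new DuplicateUserException(command.Name);
    }
}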
It's important to remember in CQRS that every write model is also a read model for the reads that are required to validate a command. Those reads are:
checking for the existence of an aggregate with a particular ID
loading the latest version of an entire aggregate
In general a CQRS/ES implementation will provide that read model for you. The particulars of how that's implemented will depend on the implementation.
Those are the only reads a command handler ever needs to perform, and if a query can be answered with no more than those reads, the query can be expressed as a command (e.g. GetUserByName{Alice}) which, when handled, does not emit events. The benefit of such read-only commands is that they can be strongly consistent, because they are limited to a single aggregate. Not all queries, of course, can be expressed this way, and if a query can tolerate eventual consistency, it may not be worth paying the coordination tax for strong consistency that you typically pay by making it a read-only command. (Command handling limited to a single aggregate is generally strongly consistent, but there are cases, e.g. when the events form a CRDT and an aggregate can live in multiple datacenters, where even that consistency is loosened.)
So with that in mind:
CreateUser{Alice} received
user Alice does not exist
persist UserCreated{Alice}
CreateUser{Alice} acknowledged (e.g. HTTP 200, ack to *MQ, Kafka offset commit)
UserListProjection updated from UserCreated{Alice}
CreateUser{Bob} received
user Bob does not exist
persist UserCreated{Bob}
CreateUser{Bob} acknowledged
CreateUser{Bob} received
user Bob already exists
command-handler for an existing user rejects the command and persists no events (it may log that an attempt to create a duplicate user was made)
CreateUser{Bob} ack'd with failure (e.g. HTTP 409 Conflict, ack to *MQ, Kafka offset commit)
UserListProjection updated from UserCreated{Bob}
Note that while the UserListProjection can answer the question "does this user exist?", the fact that the write-side can also (and more consistently) answer that question does not in and of itself make that projection superfluous. UserListProjection can also answer questions like "who are all of the users?" or "which users have two consecutive vowels in their name?" which the write-side cannot answer.
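For illustration, the command handlers behind that flow might look like this minimal sketch; the persistence calls (Exists, AppendToNewStream, Load) and the Result type are assumptions, not a particular framework's API:

public Result Handle(CreateUser command)
{
    // The only read this handler needs: does the aggregate already exist?
    if (users.Exists(command.Name))
    {
        // Existing user: reject the command and persist no events
        // (optionally log the duplicate attempt).
        return Result.Conflict($"user {command.Name} already exists");
    }

    // A conditional write to a stream keyed by name guards against the
    // race between the check above and a concurrent CreateUser.
    users.AppendToNewStream(command.Name, new UserCreated(command.Name));
    return Result.Ok();
}

// A read-only command as described above: it is handled against a single
// aggregate, emits no events, and is strongly consistent.
public Result Handle(GetUserByName query)
{
    return users.Exists(query.Name)
        ? Result.Ok(users.Load(query.Name))
        : Result.NotFound();
}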
I am working on a project in which we define two aggregates: "Project" and "Task". The Project, in addition to other attributes, has the points attribute. These points are distributed to the tasks as they are defined by users. In a use case, the user assigns points for some task, but the project must have these points available.
We currently model this as follows:
“task.RequestPoints(points)”: this method will create an aggregate PointsAssignment with attributes points and taskId, which in its constructor issues a PointsAssignmentRequested domain event.
The handler of the event issued will fetch the project related to the task and the PointsAssignment aggregate, and call the method “project.assignPoints(pointsAssignment, service)”; that is, it will pass the PointsAssignment aggregate as a parameter along with a service to calculate the difference between the current points of the task and the desired points.
If points are available, the project will modify its points attribute and issue a “ProjectPointsAssigned” domain event that will contain the pointsAssignmentId attribute (in addition to others)
The handler of this last event will fetch the PointsAssignment and confirm it via “pointsAssignment.Confirm()”; this aggregate will issue a PointsAssignmentConfirmed domain event.
The handler for this last event will bring up the associated task and call “task.AssignPoints(pointsAssignment.points)”.
My question is: is it correct to pass the PointsAssignment aggregate to the project method in step 2? That was the only way I found to relate the aggregates.
Note: We have created the PointsAssignment aggregate so that in case of failure we could save the error via “pointsAssignment.Reject(reasonText)” and display it to the user, since we are using eventual consistency (1 aggregate per transaction).
We thought about using a Process Manager (PointsAssignmentProcess), but in the same way we would need the third aggregate PointsAssignment to correlate this process.
I would do it a little bit differently (that doesn't mean more correct).
Your project doesn't need to know anything about the PointsAssignment.
If your project is the one that has the available points for use, it can have simple methods of removing or adding points.
RemovePointsCommand -> project->removePoints(points)
AddPointsCommand -> project->addPoints(points)
Then, you would have an event handler that would react to the PointsAssignmentRequested event (I imagine this event has the id of the project, the number of points, and maybe a status field, from what you said).
This eventHandler would only do:
on(PointsAssignmentRequested) -> dispatch command (RemovePointsCommand)
// Note that here it would be wise for the client to send an ID for this operation, so it can be done asynchronously.
That command can either succeed or fail, and both outcomes can dispatch events:
RemovePointsSucceeded
RemovePointsFailed
// Remember that you have a correlation id from earlier persisted
Then, you would have a final eventHandler that would do:
on(RemovePointsSucceeded) -> PointsAssignment.succeed() // Dispatches PointsAssignmentSucceeded
on(PointsAssignmentSucceeded) -> task.AssignPoints(pointsAssignment.points)
On the fail side
on(RemovePointsFailed) -> PointsAssignment.fail() // Dispatches PointsAssignmentFailed
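Putting those handlers together, a sketch might look like this (the dispatcher and repository abstractions, and the event property names, are assumptions):

public class PointsAssignmentHandlers
{
    private readonly ICommandDispatcher dispatcher;             // assumed
    private readonly IRepository<PointsAssignment> assignments; // assumed

    public PointsAssignmentHandlers(ICommandDispatcher dispatcher,
        IRepository<PointsAssignment> assignments)
    {
        this.dispatcher = dispatcher;
        this.assignments = assignments;
    }

    public void On(PointsAssignmentRequested e)
    {
        // The PointsAssignment id doubles as the correlation id.
        dispatcher.Send(new RemovePointsCommand(e.ProjectId, e.Points, e.PointsAssignmentId));
    }

    public void On(RemovePointsSucceeded e)
    {
        assignments.Get(e.PointsAssignmentId).Succeed(); // dispatches PointsAssignmentSucceeded
    }

    public void On(RemovePointsFailed e)
    {
        assignments.Get(e.PointsAssignmentId).Fail();    // dispatches PointsAssignmentFailed
    }
}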
This way you don't have to mix aggregates together; all they know are each other's IDs, and they can work without knowing anything about the schema of other aggregates, avoiding undesired coupling.
I see the semantics of this problem as exactly those of a bank transfer.
You have the bank account (project)
You have money in this bank account (points)
You are transferring money through a transfer process (pointsAssignment)
You are transferring money to an account (task)
The bank account should only have minimal operations of withdrawing and depositing; it does not need to know anything about the transfer process.
The transfer process needs to know from which account it is withdrawing and to which account it is depositing.
I imagine your PointsAssignment being like
{
    "projectId": "X",
    "taskId": "Y",
    "points": 10,
    "status": ["issued", "succeeded", "failed"]
}
Suppose there are two microservices: Order and Inventory. There is an API in the order service that takes ProductId, Qty, etc. and places the order.
Ideally an order should only be allowed to be placed if inventory exists in the inventory service. People recommend the Saga pattern or other distributed transactions. That is fine, and eventual consistency will be utilized.
But what if somebody wants to abuse the system? They can push orders with products (ProductIds) which are either invalid or out of inventory. The system will be taking all these orders, placing them in the queue, and the Inventory service will be handling these invalid orders.
Shouldn't this be handled upfront (in the order service) rather than pushing these invalid orders to the next level (especially where the productId is invalid)?
What are the recommendations to handle these scenarios?
What are the recommendations to handle these scenarios?
Give your order service access to the data that it needs to filter out undesirable orders.
The basic plot would be that, while the Inventory service is the authority for the state of inventory, your Orders service can work with a cached copy of the inventory to determine which orders to accept.
Changes to the Inventory are eventually replicated into the cache of the Orders service -- that's your "eventual consistency". If Inventory drops offline for a time, Orders can continue providing business value based on the information in its cache.
You may want to be paying attention to the age of the data in the cache as well -- if too much time has passed since the cache was last updated, then you may want to change strategies.
Your "aggregates" won't usually know that they are dealing with a cache; you'll pass along with the order data a domain service that supports the queries that the aggregate needs to do its work; the implementation of the domain service accesses the cache to provide answers.
So long as you don't allow the abuser to provide his own instance of the domain service, or to directly manipulate the cache, then the integrity of the cached data is ensured.
(For example: when you are testing the aggregate, you will likely be providing cached data tuned to your specific test scenario; that sort of hijacking is not something you want the abuser to be able to achieve in your production environment).
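A minimal sketch of that arrangement, with illustrative names for the cache and the domain service:

public interface IInventoryLookup
{
    bool IsAvailable(string productId, int quantity);
}

public class CachedInventoryLookup : IInventoryLookup
{
    private readonly IInventoryCache cache; // replicated from the Inventory service

    public CachedInventoryLookup(IInventoryCache cache)
    {
        this.cache = cache;
    }

    public bool IsAvailable(string productId, int quantity)
    {
        // The Orders service answers from its local cache; Inventory remains
        // the authority, so this answer may be slightly stale.
        var entry = cache.Get(productId);
        return entry != null && entry.QuantityOnHand >= quantity;
    }
}

// The aggregate only ever sees the domain service, never the cache itself:
// order.Place(command.Lines, inventoryLookup);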
You most definitely would want to ensure up-front that you catch as many invalid business cases as possible. There are a couple of ways to deal with this. It is the same situation as one would have when booking a seat on an airline, although they do over-booking, which we'll ignore for now :)
Option 1: You could reserve an inventory item as part of the order. This is more of a pessimistic approach, but your item would be reserved while you wait for the order to be confirmed.
Option 2: You could accept the order only if there is an inventory item available but not reserve it and hope it is available later.
You could also create a back-order if the inventory item isn't available and you want to support back-orders.
If you go with option 1 you could miss out on a customer if an item has been reserved for customer A and customer B comes along and cannot order. If customer A decides not to complete the order, the inventory item becomes available again, but customer B has now gone off somewhere else to try and source the item.
As part of the fulfillment of your order you have to inform the inventory bounded context that you are now taking the item. However, you may now find that both customers A and B have accepted their quotes and created an order for the last item. One is going to lose out. At this point the order that cannot be fulfilled will trigger a mail to the customer informing them of the unfortunate situation, and perhaps create a back-order, or ask the customer to try again in X number of days.
Your domain experts should make the call as to how to handle the scenarios and it all depends on item popularity, etc.
I will not try to convince you to not do this checking before placing an order and to rely on Sagas as it is usually done; I will consider that this is a business requirement that you must implement.
This seems like a new sub-domain to me: bad-behavior-prevention (or whatever you want to call it) that comes with a new responsibility: to prevent abusers. You could add this responsibility to the Order microservice but you would break the SRP. So, it should be done in another microservice.
This new microservice is called from your API Gateway (if you have one) or from the Orders microservice.
If you do not want to add a new microservice (for various reasons), then you could implement this new functionality as a module inside of the Orders microservice, but I strongly recommend making it highly decoupled from its host (separate and private persistence/database/table).
I'm looking for an advice related to the proper way of implementing a rollback feature in a CQRS/event-sourcing application.
This application allows a group of editors to edit and update some editorial content, an editorial news item for instance. We implemented the user interface so that each field has an auto-save feature, and now we would like to give our users the possibility to undo the operations they did, so that it is possible to roll back the editorial news to a previously known state.
Basically we would like to implement something like the undo command that you have in Microsoft Word and similar text editors. In the backend, the editorial news is an instance of an aggregate defined in our domain and called Story.
We have discussed some ideas to implement the rollback and we are looking for advice based on real-world experience in similar projects. Here are our considerations about this feature.
How rollback works in real world business domains
First of all, we all know that in real world business domains what we are calling rollback is obtained via some form of compensation event.
Imagine a domain related to some sort of service for which it is possible to buy a subscription: we could have an aggregate representing a user subscription and an event describing that a charge has been associated to an instance of the aggregate (the particular subscription of one of the customers). A possible implementation of the event is as follows:
public class ChargeAssociatedToSubscriptionEvent : DomainEvent
{
    public Guid SubscriptionId { get; set; }
    public decimal Amount { get; set; }
    public string Description { get; set; }
    public DateTime DueDate { get; set; }
}
If a charge is wrongly associated to a subscription, it is possible to fix the error by means of an accreditation associated to the same subscription and having the same amount, so that the effect of the charge is completely balanced and the user gets back their money. In other words, we could define the following compensation event:
public class AccreditationAssociatedToSubscription : DomainEvent
{
    public Guid SubscriptionId { get; set; }
    public decimal Amount { get; set; }
    public string Description { get; set; }
    public DateTime AccreditationDate { get; set; }
}
So if a user is wrongly charged for an amount of 50 dollars, we can compensate the error by means of an accreditation of 50 dollars to the user subscription: this way the state of the aggregate has been rolled back to the previous state.
Why things are not as easy as they seem
Based on the previous discussion, the rollback seems quite easy to be implemented. If you have an instance of the story aggregate at the aggregate revision B and you want to roll it back to a previous aggregate revision, say A (with A < B), you just have to do the following steps:
check the event store and get all the events between revisions A and B
compute the compensation event for each of the occurred events
apply the compensation events to the aggregate in the reverse order
Unfortunately, the second step of the previous procedure is not always possible: given a generic domain event, it is not always possible to compute its compensation event, because the amount of information contained inside the event may not be enough to do that. Maybe it is possible to wisely define all the events so that they contain enough information to be able to compute the corresponding compensation event, but in the current state of our application there are several events for which computing the compensation event is not possible, and we would prefer to avoid changing the shape of our events.
A possible solution based on state comparison
The first idea to overcome the issues with compensation events is computing the minimum set of events needed to roll back the aggregate by comparing the current state of the aggregate with the target state. The algorithm is basically the following (a sketch of the comparison step appears after the list):
get an instance of the aggregate at the current state (call it B)
get an instance of the aggregate at the target state (call it A) by applying only the first n events persisted inside the event store (our repository allows us to do that by specifying the aggregate id and the desired point in time at which to materialize the aggregate)
compare the two instances and compute the minimum set of events to be applied to the aggregate in the state B in order to change its state to A
apply the computed events to the aggregate
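For illustration, the comparison step might look like this for a single field (StoryTitleChangedEvent and Title are hypothetical names; a real diff would cover every revertible property of the Story):

private IEnumerable<DomainEvent> ComputeRollbackEvents(Story current, Story target)
{
    // Re-stating the target's value as a new event rolls the field back
    // without rewriting history.
    if (current.Title != target.Title)
    {
        yield return new StoryTitleChangedEvent
        {
            StoryId = current.Id,
            Title = target.Title
        };
    }

    // ... compare the remaining fields the same way
}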
A smarter approach based on event replay
Another way to solve the problem of rolling back to a previous state of the aggregate could be doing the same thing that the aggregate repository does when an aggregate is materialized at a specific point in time. In order to do that we should define an event, say StoryResettedEvent, whose effect is to reset the state of the aggregate by completely emptying it, and then do the following steps:
apply the StoryResettedEvent to our aggregate so that its state is emptied
get the first n events for the aggregate we are working on (all the events from the first saved event up to the target state A)
apply all the events to the aggregate instance
The main problem I see with this approach is the event to empty the state of the aggregate: it seems somewhat artificial, not a real domain event with a business meaning, but rather a trick to implement the rollback functionality.
The third way: persisting the compensation event each time an event is saved inside the event store
The third way we figured out to get what we need is based again on the concept of compensation event. The basic idea is that each event of the application could be enriched with a property containing the corresponding compensation event.
At the point in the code where an event is raised it is possible to immediately compute the compensation event for the event to be raised (based on the current state of the aggregate and the shape of the event), so that the event can be enriched with this information, which will then be saved inside the event store. By doing so the compensation events are always available, ready to be used in case of a rollback request. The downside of this solution is that each domain event must be modified, and only a small fraction of the compensation events we compute and save inside the event store will ever be used for an actual rollback (most of them will never be used).
Conclusions
In my opinion the best option to solve the problem is using the algorithm based on state comparison (the first proposed solution), but we are still evaluating what to do.
Has anyone already had a similar requirement? Is there any other way to implement a rollback? Are we completely missing the point and following bad approaches to the problem?
Thanks for helping, any advice will be appreciated.
How the compensation events are generated should be the concern of the Story aggregate (after all, that's the point of an aggregate in event sourcing - it's just the validator of commands and generator of events for a particular stream).
Presumably you are following something like a typical CQRS/ES flow:
client sends an Undo command, which presumably says what version it wants to undo back to, and what story it is targeting
The Undo Command Handler loads the Story aggregate in the usual way, possibly from a snapshot and/or by applying the aggregate's events to the aggregate.
In some way, the command is passed to the aggregate (possibly a method call with args extracted from the command, or just passing the command directly to the aggregate)
The aggregate "returns" in some way the events to persist, assuming the undo command is valid. These are the compensating events.
compute the compensation event for each of the occurred events
...
Unfortunately, the second step of the previous procedure is not always possible
Why not? The aggregate has been passed all previous events, so what does it need that it doesn't have? The aggregate doesn't just see the events you want to roll back, it necessarily processes all events for that aggregate ever.
You have two options really - reduce the book-keeping that the aggregate needs to do by having the command handler help out in some way, or the whole process is managed internally by the aggregate.
Command handler helps out:
The command handler extracts from the command the version the user wants to roll back to, and then recreates the aggregate as-of that version (applying events in the usual way), in addition to creating the current aggregate. Then the old aggregate gets passed to the aggregate's undo method along with the command, so that the aggregate can then do state comparison more easily.
You might consider this to be a bit hacky, but it seems moderately harmless, and could significantly simplify the aggregate code.
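A sketch of that command handler, with illustrative repository and aggregate method names:

public void Handle(UndoStory command)
{
    // The current aggregate, built in the usual way (snapshot and/or events).
    var current = repository.Load<Story>(command.StoryId);

    // The aggregate as-of the version the user wants to return to.
    var target = repository.Load<Story>(command.StoryId, command.TargetVersion);

    // The aggregate itself does the state comparison and "returns" the
    // compensating events to persist.
    var events = current.Undo(target);
    repository.Save(current, events);
}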
Aggregate is on its own:
As events are applied to the aggregate, it adds to its state whatever book-keeping it needs to be able to compute the compensating events if it receives an undo command. This could be a map of compensating events, pre-computed, a list of every previous state that can potentially be reverted to (to allow state comparison), the list of events the aggregate has processed (so it can compute the previous state itself in the undo method), or whatever it needs, and it just stores it in its in-memory state (and snapshot state, if applicable).
The main concern with the aggregate doing it on its own is performance - if the size of the book-keeping state is large, the simplification of allowing the command handler to pass the previous state would be worthwhile. In any case, you should be able to switch between the approaches at any time in the future without any issues (except possibly needing to rebuild your snapshots, if you have them).
My 2 cents.
For the rollback operation, an orchestration class will be responsible for handling it. It will publish an aggregate_modify_generated event, and a projection on the other end will fetch the current state of the aggregates after receiving this event. Now when any of the aggregates fails, it should generate a failure event; upon receiving it, the orchestration class will generate an aggregate_modify_rollback event, which will be received by that projection, which will then set the aggregate state to the previously fetched state.
One common projector can do the task, because the events will carry the aggregate id.
I am implementing an application with domain driven design and event sourcing. I am storing all domain events in a DomainEvents table in SQL Server.
I have the following aggregates:
- City
  + Id
  + Enable()
  + Disable()
- Company
  + Id
  + CityId
  + Enable()
  + Disable()
- Employee
  + Id
  + CompanyId
  + Enable()
  + Disable()
Each one encapsulates its own domain logic and invariants. I designed them as separate aggregates, because one city may have thousands (maybe more) of companies, and a company may also have a very large number of employees. If these entities belonged to the same aggregate, I would have to load them together, which in most cases would be unnecessary.
Calling Enable or Disable will produce a domain event (e.g. CityEnabled, CompanyDisabled or EmployeeEnabled). These events contain the primary key of the enabled or disabled entity.
Now my problem is a new requirement forcing me to enable/disable all related Companies if a City is enabled/disabled. The same is required for Employees, if a Company is enabled/disabled.
In my event handler, which is invoked when for example CityDisabled has occurred, I need to execute a DisableCompanyCommand for each company belonging to that city.
But how would I know what companies should be affected by that change?
My thoughts:
Querying the event store is not possible, because I can't use conditions like 'where CityId = event.CityId'
Letting the parent know its child ids and putting all child ids in every event the parent produces is also a bad idea, because the event creator shouldn't care who will consume the events later. Only information belonging to the event that happened should be in the event.
Executing the DisableCompanyCommand for every company. Only the companies having the matching CityId would change their state. Even though I would do that asynchronously, it would produce a huge overhead, loading every company on those events. And for every company getting disabled, the same procedure would have to be repeated to disable all employees.
Creating read models mapping ParentIds to ChildIds and loading the childIds according to the parentId in the event. This sounds like the most appropriate solution, but the problem is, how would I know if a new Company is created while I am disabling the existing ones?
I am not satisfied with any of the solutions above. Basically the problem is determining the aggregates affected by an event that has occurred.
Maybe you have better solutions?
What you are describing can be resolved by a Saga/Process Manager that listens to the CityDisabled event. It then finds the CompanyIds of all Companies in that City (by using an existing read model or by maintaining a private state of CityIds x CompanyIds) and sends each one a DisableCompany command.
The same applies to the CompanyDisabled event, regarding the disabling of Employees.
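A sketch of that process manager, with an assumed read-model lookup and command dispatcher:

public class CityStatusProcessManager
{
    private readonly ICompanyIdsByCity lookup;      // read model: CityId -> CompanyIds
    private readonly ICommandDispatcher dispatcher; // assumed

    public CityStatusProcessManager(ICompanyIdsByCity lookup, ICommandDispatcher dispatcher)
    {
        this.lookup = lookup;
        this.dispatcher = dispatcher;
    }

    public void Handle(CityDisabled e)
    {
        // Translate one event into a command per affected Company.
        foreach (var companyId in lookup.GetCompanyIds(e.CityId))
            dispatcher.Send(new DisableCompanyCommand(companyId));
    }
}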
P.S. Disabling a City/Company/Employee seems like CRUD to me, these don't seem terms from a normal ubiquitous language, it's not very DDD-ish but I consider your design as being correct in regard to this question.
Do your requirements mean you have to fire a CompanyDisabled event when disabling a city?
If not - and your requirement is just that a disabled city means all its companies are disabled - then on your read model projection you'd listen for CityDisabled events and mark the companies disabled in your read model. (If your requirements are to fire an event for each company, then Constantin's answer is good.)
Your model is more of a child/parent relationship - it's kind of a break with traditional "blue book" thinking, but I recommend representing this relationship in your domain with more than a CityId.
In my app something like this would be coded as
public Task Handle(DoSomething command, IHandlerContext ctx)
{
    var city = ctx.For<City>().Get(command.CityId);
    var company = city.For<Company>().Get(command.CompanyId);
    company.DoSomething();
}

public class Company : Entity<City>
{
    public void DoSomething()
    {
        // Parent is the City
        if (Parent.Disabled)
            throw new BusinessException("City is disabled");

        Apply<SomethingDone>(x => {
            x.CityId = Parent.Id;
            x.CompanyId = Id;
            ...
        });
    }
}
(Pseudo code is NServiceBus style code and using my library Aggregates.NET)
It's quite probable that you don't have to explicitly enforce rules like 'enable/disable all related Companies if a City is enabled/disabled' on the domain (write) side at all.
If so, there's no need to disable all Companies when a City is disabled, within the domain. As Charles mentioned in his answer, just introduce a rule that, e.g., "a Company is disabled if it is disabled itself (directly) or its City is disabled". The same goes for a Company and its Employees.
This rule should be realized on the read side. A Company in the read model will have 2 properties: the first one is Enabled, which is directly mapped from the domain; the second one is EnabledEffective, which is calculated based on the Company's Enabled value and its City's Enabled value. When a CityDisabled event happens, the read model's event handler traverses all of the City's Companies in the read model and sets their EnabledEffective property to false; when a CityEnabled event happens, the handler sets the EnabledEffective property of every Company of the City back to its own Enabled value. It is the EnabledEffective property that you will use in the UI.
The logic can be a bit more complex with CompanyEnabled/CompanyDisabled event handling (with respect to Employees), as you must take into account both the event info and the enabled/disabled status of the host City.
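For illustration, the read-side handlers described above might look like this minimal sketch (the read-model repository API is an assumption):

public void Handle(CityDisabled e)
{
    foreach (var company in readModel.GetCompaniesByCity(e.CityId))
    {
        company.EnabledEffective = false; // overridden by the disabled City
        readModel.Save(company);
    }
}

public void Handle(CityEnabled e)
{
    foreach (var company in readModel.GetCompaniesByCity(e.CityId))
    {
        company.EnabledEffective = company.Enabled; // back to its own value
        readModel.Save(company);
    }
}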
If the (effective) enabled/disabled status of a Company/Employee is really needed on the domain side (e.g. affecting the way these aggregates handle their commands), consider taking the EnabledEffective value from the read side and passing it along with the command object.