Azure Service Bus: Ordered Processing of Session Sequences

Are there any recommended architectural patterns with Service Bus for ensuring ordered processing of nested groups of messages which are sent out of order? We are using Sessions, but when it comes down to ensuring that a set of Sessions must be processed sequentially in a certain order before moving onto another set of Sessions, the architecture becomes cumbersome very quickly. This question might best be illustrated with an example.
We are using Service Bus to integrate changes in real-time from a database to a third-party API. Every N minutes, we get notified of a new 'batch' of changes from the database which consists of individual records of data across different entities. We then transform/map each record and send it along to an API. For example, a 'batch' of changes might include 5 new/changed 'Person' records, 3 new/changed 'Membership' records, etc.
At the outer-most level, we must always process one entire batch before we can move on to another batch of data, but we also have a requirement to process each type of entity in a certain order. For example, all 'Person' changes must be processed for a given batch before we can move on to any other objects.
There is no guarantee that these records will be queued up in any order which is relevant to how they will need to be processed, particularly within a 'batch' of changes (e.g. the data from different entity types will be interleaved).
We actually do not necessarily need to send the individual records of entity data in any order to the API (e.g. it does not matter in which order I send those 5 Person records for that batch, as long as they are all sent before the 3 Membership records for that batch). However, we do group the messages into Sessions by entity type so that we can guarantee homogeneous records in a given session and target all records for that entity type (this also helps us support a separate requirement we have when calling the API to send a batch of records when possible instead of an individual call per record to avoid API rate limiting issues). Currently, our actual Topic Subscription containing the record data is broken up into Sessions which are unique to the entity type and the batch.
"SessionId": "Batch1234\Person"
We are finding that it is cumbersome to manage the requirement that all changes for a given batch must be processed before we move on to the next batch, because there is no Session which reliably groups those "groups of entities" together (let alone ensures those groups of entities are themselves processed in a certain order). There is, of course, no concept of a 'session of sessions', and we are currently handling this by having a separate 'Sync' queue whose messages each represent an entire batch of changes that needs to be processed and list which sessions of data are contained in that batch:
"SessionId": "Batch1234",
"Body":
{
"targets": ["Batch1234\Person", "Batch1234\Membership", ...]
}
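To make the setup concrete, the publishing side looks roughly like the following (a simplified sketch using the Azure.Messaging.ServiceBus SDK; the connection string, topic and queue names are placeholders):

using Azure.Messaging.ServiceBus;

// Simplified sketch; connection string, topic and queue names are placeholders.
string connectionString = "<service-bus-connection-string>";
string personJson = "{ \"id\": 42, \"name\": \"...\" }";

await using var client = new ServiceBusClient(connectionString);

// Each record goes to the topic, grouped into a session per batch + entity type.
ServiceBusSender recordSender = client.CreateSender("changes");
await recordSender.SendMessageAsync(new ServiceBusMessage(personJson)
{
    SessionId = @"Batch1234\Person"
});

// One Sync message per batch lists the sessions that make up that batch.
ServiceBusSender syncSender = client.CreateSender("sync");
await syncSender.SendMessageAsync(new ServiceBusMessage(
    @"{ ""targets"": [""Batch1234\\Person"", ""Batch1234\\Membership""] }")
{
    SessionId = "Batch1234"
});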
This is quite cumbersome, because something (e.g. a Durable Azure Function) now has to orchestrate the entire process by watching the Sync queue and then spinning off separate processors that it oversees to ensure correct ordering at each level (which makes concurrency management and scalability much more complicated to deal with). If this is indeed a good pattern, then I do not mind implementing the extra orchestration architecture to ensure a robust, scalable implementation. However, I cannot help feeling that I am missing something or not thinking about the architecture the right way.
Is anyone aware of any other recommended pattern(s) in Service Bus for handling ordered processing of groups of data which themselves contain groups of data which must be processed in a certain order?

For the record, I'm not a Service Bus expert specifically.
The entire batch construct sounds painful - can you do away with it? Often if you have a painful input, you'll have a painful solution - the old "crap in, crap out" maxim. Sometimes it's just hard to find an elegant solution.
Do the 'sets of sessions' need to be processed in a specific order?
Is a 'batch' of changes = a session?
I can't think of a specific pattern, but a "divide and conquer" approach seems reasonable (which is roughly what you have already?):
Watch for new batches, when one occurs hand it off to a BatchProcessor.
BatchProcessor applies all the rules to the batch, as you outlined.
Consider having the BatchProcessor dump its results on a queue of some kind which is the source for the API - that way you have some kind of isolation between the batch processing and the API (a rough sketch follows).
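If it helps, here is a very rough sketch of what that BatchProcessor loop could look like with the Azure.Messaging.ServiceBus SDK - treat the topic/subscription names and the SendToApiAsBatchAsync helper as placeholders:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

// Rough sketch of the BatchProcessor idea; names and the API helper are stand-ins.
class BatchProcessor
{
    private readonly ServiceBusClient _client;
    public BatchProcessor(ServiceBusClient client) => _client = client;

    public async Task ProcessBatchAsync(IReadOnlyList<string> targetSessions)
    {
        // targetSessions comes from the Sync message and is already in the
        // required entity order, e.g. Batch1234\Person before Batch1234\Membership.
        foreach (string sessionId in targetSessions)
        {
            await using ServiceBusSessionReceiver receiver =
                await _client.AcceptSessionAsync("changes", "api-subscription", sessionId);

            IReadOnlyList<ServiceBusReceivedMessage> messages;
            while ((messages = await receiver.ReceiveMessagesAsync(100, TimeSpan.FromSeconds(5))).Count > 0)
            {
                await SendToApiAsBatchAsync(messages);   // hypothetical call to the third-party API
                foreach (ServiceBusReceivedMessage message in messages)
                    await receiver.CompleteMessageAsync(message);
            }
        }
        // Only when every target session has drained is the batch complete, and
        // the next Sync message can be picked up.
    }

    private Task SendToApiAsBatchAsync(IReadOnlyList<ServiceBusReceivedMessage> messages)
        => Task.CompletedTask; // stand-in for the real API call
}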

Related

How to handle (partially) dependant aggregate roots?

I have a domain concept of Product.
Product has some GeneralDetails, let's say: sku, name, description.
At the same time, Product has a ProductCalculations part where accountants can put different values like purchasePrice, stockLevelExpenses, wholeSalesPrice, retailPrice.
So, so far, Product would look something like:
class Product {
    GeneralDetails Details;
    ProductCalculations Calculations;

    void ChangeDetails(GeneralDetails details) { }
    void Recalculate(ProductCalculations calculations) { }
}
This setup would make Product an aggregate root. But now, I want to split it in a way that a product manager can input/update product details and then an accountant can step in and independently change calculations for a given product without concurrency issues.
That would suggest splitting it into 2 separate aggregate roots.
But then, deleting the ProductDetails aggregate must mean deleting ProductCalculations too, and it should happen in a transactional way.
Assuming they are 2 aggregate roots, meaning they have 2 separate repositories with corresponding Delete methods, how do I implement this as an atomic transaction?
The only thing I can think of is to raise an event when ProductDetails gets deleted and have a handler (DomainService) that uses some special repository that handles transactions over multiple aggregate roots.
Is there some problem with that approach and/or is there some better way to handle it?
PS.
I cannot allow eventual consistency when ProductDetails is deleted.
PS2.
Based on comments from @Jon, Details and Calculations creation and deletion should be synced in a way that when Details are created/deleted, Calculations are also created/deleted.
On the other hand, their updates should be completely independent.
I think the answer to your question depends somewhat on what data storage technology you're using and your data storage model, because if you can push operation transactionality to the data layer, things get much easier.
If you're using a document-oriented database (Cosmos DB, MongoDB, etc...), I would model and store your Product aggregate (including Details and Calculations) as a single document and you get the atomic transaction and concurrency checking for free from the database.
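For example, a minimal sketch with the MongoDB .NET driver, reusing the GeneralDetails/ProductCalculations types from the question and adding an illustrative Version property for the concurrency check:

using System;
using System.Threading.Tasks;
using MongoDB.Driver;

// Sketch only: the whole Product aggregate lives in one document, with a Version
// field used as an optimistic concurrency check. Names are illustrative.
public class ProductDocument
{
    public string Id { get; set; }
    public int Version { get; set; }
    public GeneralDetails Details { get; set; }
    public ProductCalculations Calculations { get; set; }
}

public static class ProductStore
{
    public static async Task SaveAsync(IMongoCollection<ProductDocument> products, ProductDocument product)
    {
        int expectedVersion = product.Version;
        product.Version++;

        // Replacing the single document is atomic; the Version filter makes the
        // write a no-op if someone else changed the product in the meantime.
        var filter = Builders<ProductDocument>.Filter.Eq(p => p.Id, product.Id)
                   & Builders<ProductDocument>.Filter.Eq(p => p.Version, expectedVersion);

        ReplaceOneResult result = await products.ReplaceOneAsync(filter, product);
        if (result.MatchedCount == 0)
            throw new InvalidOperationException("Concurrent modification detected.");
    }
}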
If you must store these as separate documents/records in your data store, then providing atomic transactions and concurrency checking becomes your concern. For years folks (especially those using Entity Framework) have been using the Unit of Work pattern to batch up multiple repository operations and submit them to the database as a single operation (EF-specific UoW implementation). Rob Conery suggests here that a better option is to use Command objects to encapsulate a multi-part operation that needs to be executed as a single transaction.
In any event, I would encourage you to keep the management of this operation within Product, so that consumers of Product are unaware of what's going on during the save - they just blissfully call product.SaveAsync() and they don't need to know whether that's causing one record update or ten. As long as Product is injected with the repositories it needs to get the job done, there's no need to have a separate domain service to coordinate this operation. There's nothing wrong with Product listening for events that its children raise and responding appropriately.
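As a minimal sketch of that idea (the repository interfaces are hypothetical, and the ambient System.Transactions scope only gives you atomicity if the underlying data provider enlists in it):

using System;
using System.Threading.Tasks;
using System.Transactions;

// Sketch only: Product coordinates its own repositories so a delete removes
// both Details and Calculations atomically. Repository interfaces are hypothetical.
public interface IProductDetailsRepository { Task DeleteAsync(Guid productId); }
public interface IProductCalculationsRepository { Task DeleteAsync(Guid productId); }

public class Product
{
    private readonly Guid _id;
    private readonly IProductDetailsRepository _details;
    private readonly IProductCalculationsRepository _calculations;

    public Product(Guid id, IProductDetailsRepository details, IProductCalculationsRepository calculations)
    {
        _id = id;
        _details = details;
        _calculations = calculations;
    }

    public async Task DeleteAsync()
    {
        // Both deletes commit or roll back together; callers never see a
        // Product with Details but no Calculations (or vice versa).
        using var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled);
        await _details.DeleteAsync(_id);
        await _calculations.DeleteAsync(_id);
        scope.Complete();
    }
}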
Hope this helps!
" I cannot allow eventual consistency when ProductDetails is deleted"
Why not? What would be the business cost of having Inventory.Product exist while Finance.Product doesn't or vice-versa?
"but then that accountant can step in and intependently change calculations for given product"
That's pretty much what eventual consistency is, no?
If you really can't have eventual consistency then use a domain service to create/delete two distinct aggregate roots in a single transaction, but ask yourself how you are going to do that if the information is not entirely provided by the same end user?
I agree with @plalx on almost every point. However, I want to add my bit to the discussion.
I've found that there is usually very little cost in creating two or more related aggregates inside a single transaction (inside a single bounded context). After all, if those aggregates don't exist yet there cannot be a concurrency conflict, so there is no contention and not much difference. Further, you don't need to deal with partially created state (given that the state is split between aggregates). It is possible to do this using eventual consistency, and there are situations where that is a better approach, but most of the time there is no great benefit. Even Vernon, in his book Implementing Domain-Driven Design, mentions this use case as a "valid reason to break the rules".
Deleting more than one aggregate is a different story. What should happen if you delete an aggregate that another user is updating at the same time? The probability of such a conflict increases with the number of aggregates you try to modify/delete in the same transaction. Is there always an upstream/downstream relationship between those aggregates? I mean, if a user deletes A and B must also be deleted, does the user who is updating B have no "power" or "voice" to cancel that deletion, since she is providing more information to the state of the aggregate?
Those are very tricky questions, and most of the time it is something you need to discuss with a domain expert; there are very few real scenarios where eventual consistency is something you can't afford. I discovered that in many cases it is preferable to put a "flag" marking the aggregate as "inactive", notifying that it will be deleted after some period of time. If no user with enough permission requests that the aggregate become active again, then it gets deleted. That helped users not kill themselves when they deleted some aggregate by mistake.
You've mentioned that you don't want a user to spend hours modifying one aggregate if there is a deletion, but that is something a transaction doesn't help much with. This is very dependent on the whole architecture, though. That user could have loaded the aggregate into her own memory space and then a deletion occurs. It doesn't matter if you delete inside a transaction; the user is still wasting time. A better solution could be to publish a domain event that triggers some sort of push notification to the user, so she knows that a deletion happened and can stop working (or request a cancellation of that deletion, if you follow such an approach).
For the reports and calculations, there are many cases where those "scripts" can skip records whose sibling aggregate is gone, so users don't notice that a part is missing or that there is not yet complete consistency.
If for some reason you still need to delete several aggregates in the same transaction you just start a transaction in an application service and use repositories to perform the deletion, analogous to the creation case.
So, to summarize:
The rule of "modify one aggregate per transaction" is not that important when it comes to creating aggregates.
Deletion of many aggregates works quite well (most of the time) with eventual consistency, and very often just disabling those aggregates, one at a time, is better than performing the deletion immediately.
Preventing a user from wasting time is better achieved with proper notifications than with transactions.
If there is a real need to perform those actions inside a single transaction, then manage that transaction in the application and be explicit. Using a domain service to perform all the required operations (except for the transaction, which is mostly an application concern) brings that logic back to the domain layer.

Cassandra counter usage

I am finding some difficulties in the data modeling of an application which may involve the use of counters.
The app is basically a messaging app. Messages are capped for free users, hence the initial plan of using a counter column to keep track of the total count.
I've discovered that batches (logged or not) cannot contain operations on both standard tables and counter ones. How do I ensure correctness if I cannot batch the operation I am trying to perform and the counter update together? Is the counter type really needed if there's basically no race condition on the column, given that it is associated with each individual user?
My second idea would be to use a standard int column so that it can be used inside batches. Is this a viable option?
Thank you
If you can absolutely guarantee that each user will produce only one update at a time then you could rely on plain ints to perform the job.
The problem, however, is that you will need to perform a read-before-write anti-pattern. You could work around this as well, e.g. by skipping the read part: cache your ints and perform in-memory updates followed by writes only. This is viable by coupling your system with a caching server (e.g. Redis).
And thinking about it, you will still need to read these counters at some point, because if the number of messages a free user can send is bound to some value then you need to perform a check when they log in/try to send a new message/look at the dashboard/etc. and block their action.
Another option (if you store the messages sent by each user somewhere and don't want to add complexity to your system) could be to directly count them with a SELECT COUNT... type query, even if this can become pretty inefficient very quickly in the Cassandra world.
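To illustrate the plain-int, read-before-write variant with the DataStax C# driver (keyspace, table and column names are made up; this is safe only under the one-update-at-a-time assumption above):

using System;
using System.Linq;
using Cassandra;

// Sketch of the plain-int, read-before-write approach described above.
var cluster = Cluster.Builder().AddContactPoint("127.0.0.1").Build();
ISession session = cluster.Connect("messaging");

Guid userId = Guid.NewGuid();
Guid messageId = Guid.NewGuid();
string body = "hello";

// Read the current count (the read-before-write part). Safe only because a
// single user produces one update at a time.
Row row = session.Execute(
    new SimpleStatement("SELECT sent_count FROM user_quota WHERE user_id = ?", userId)).FirstOrDefault();
int sent = row?.GetValue<int>("sent_count") ?? 0;

// Because sent_count is a regular int column, this update can share a logged
// batch with the message insert; a counter column could not.
var batch = new BatchStatement()
    .Add(new SimpleStatement(
        "INSERT INTO messages (user_id, message_id, body) VALUES (?, ?, ?)", userId, messageId, body))
    .Add(new SimpleStatement(
        "UPDATE user_quota SET sent_count = ? WHERE user_id = ?", sent + 1, userId));
session.Execute(batch);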

Choosing a NoSQL database for storing events in a CQRS designed application

I am looking for a good, up-to-date and "decision-helping" explanation of how to choose a NoSQL database engine for storing all the events in a CQRS-designed application.
I am currently a newcomer to all things NoSQL (but learning): please be clear and do not hesitate to explain your point of view in an (almost too) precise manner. This post may serve other newcomers like me.
This database will:
Be able to insert 2 to 10 rows per update requested by the front view (in my case, updates are frequent). Think of thousands of updates per minute; how would it scale?
Critically need to be consistent and failure safe, since events are the source of truth of the application
Not need any link between entities (like RDBMS does) except maybe a user ID/GUID (I don't know if it's critical or needed yet)
Receive events containing 3 to 10 "columns" (a sequence ID, an event name, a datetime, a JSON/binary encoded parameter bag, some context information...). Without steering your point of view toward a column-oriented type of database, it may be document-oriented if it fits all the other requirements
Be used as a queue or sent to/read from an external AMQP system like RabbitMQ or ZeroMQ (haven't worked on that part yet; if you could also argue/explain...) since view projections will be built upon events
Need some kind of filtering by sequence ID like SELECT * FROM events WHERE sequence_id > last_sequence_id for subscribers (or queue systems) to be able to synchronize from a given point
I heard of HBase for CQRS event storing, but maybe MongoDB could fit? Or even Elasticsearch (would not bet on that one..)? I'm also open to RDBMS for consistency and availability.. but what about the partition tolerance part..?
Really I'm lost, I need arguments to make a pertinent choice.
https://geteventstore.com/ is a database designed specifically for event streams.
They take consistency and reliability of the source of truth (your events) very seriously and I use it myself to read/write thousands of events a second.
I have a working, in production implementation of MongoDB as an Event store. It is used by a CQRS + Event sourcing web based CRM application.
In order to provide a 100% transaction-less but transaction-like guarantee for persisting multiple events in one go (all events or none of them), I use a MongoDB document as an events commit, with events as nested documents. As you know, MongoDB has document-level locking.
For concurrency I use optimistic locking, using a version property for each Aggregate stream. An Aggregate stream is identified by the doublet (Aggregate class x Aggregate ID).
The event store also stores the commits in relative order, using a sequence on each commit, incremented for each commit and protected using optimistic locking.
Each commit contains the following:
aggregateId : string, probably a GUID,
aggregateClass: string,
version: integer, incremented for each aggregateId x aggregateClass,
sequence: integer, incremented for each commit,
createdAt: UTCDateTime,
authenticatedUserId: string or null,
events: list of EventWithMetadata,
Each EventWithMetadata contains the event class/type and the payload as string (the serialized version of the actual event).
The MongoDB collection has the following indexes:
aggregateId, aggregateClass, version as unique
events.eventClass, sequence
sequence
other indexes for query optimization
These indexes are used to enforce the general event store rules (no events are stored for the same version of an Aggregate) and for query optimizations (the client can select only certain events - by type - from all streams).
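As a rough sketch of the commit shape and the unique index with the MongoDB .NET driver (field names follow the list above; everything else is illustrative):

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using MongoDB.Bson;
using MongoDB.Driver;

// Illustrative only: one commit document per aggregate write, events nested inside.
public class EventWithMetadata { public string EventClass { get; set; } public string Payload { get; set; } }

public class Commit
{
    public ObjectId Id { get; set; }
    public string AggregateId { get; set; }
    public string AggregateClass { get; set; }
    public int Version { get; set; }
    public long Sequence { get; set; }
    public DateTime CreatedAt { get; set; }
    public string AuthenticatedUserId { get; set; }
    public List<EventWithMetadata> Events { get; set; }
}

public static class EventStoreSetup
{
    public static async Task EnsureIndexesAsync(IMongoCollection<Commit> commits)
    {
        // The unique index is what enforces optimistic locking: two writers
        // persisting the same (aggregateId, aggregateClass, version) collide.
        var keys = Builders<Commit>.IndexKeys
            .Ascending(c => c.AggregateId)
            .Ascending(c => c.AggregateClass)
            .Ascending(c => c.Version);
        await commits.Indexes.CreateOneAsync(
            new CreateIndexModel<Commit>(keys, new CreateIndexOptions { Unique = true }));
    }

    public static async Task AppendAsync(IMongoCollection<Commit> commits, Commit commit)
    {
        try
        {
            await commits.InsertOneAsync(commit); // single document => atomic, all events or none
        }
        catch (MongoWriteException e) when (e.WriteError.Category == ServerErrorCategory.DuplicateKey)
        {
            throw new InvalidOperationException("Concurrency conflict: stale aggregate version.", e);
        }
    }
}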
You could use sharding by aggregateId to scale, if you strip the global ordering of events (the sequence property) and move that responsibility to an event publisher, but this complicates things, as the event publisher needs to stay synchronized (even in case of failure!) with the event store. I recommend doing this only if you need it.
Benchmarks for this implementation (on Intel I7 with 8GB of RAM):
total aggregate write time was: 7.99, speed: 12516 events written per second
total aggregate read time was: 1.43, speed: 35036 events read per second
total read-model read time was: 3.26, speed: 30679 events read per second
I've noticed that MongoDB was slow on counting the number of events in the event store. I don't know why but I don't care as I don't need this feature.
I recommend using MongoDB as an event store.
I have a .NET Core event sourcing implementation project: https://github.com/jacqueskang/EventSourcing
I started with relational database (SQL Server and MySQL) using entity framework core.
Then moved to AWS so I wrote a DynamoDB extension.
My experience is that a relational DB can do the job perfectly, but it depends on the requirements and your technical stack. If your project is cloud based then the best option might be the cloud provider's NoSQL database, like AWS DynamoDB or Azure CosmosDB, which are powerful in performance and provide additional features (e.g. DynamoDB can trigger a notification or a Lambda function).

Rebuild queries from domain events by multiple aggregates

I'm using a DDD/CQRS/ES approach and I have some questions about modeling my aggregate(s) and queries. As an example consider the following scenario:
A User can create a WorkItem, change its title and associate other users to it. A WorkItem has participants (associated users) and a participant can add Actions to a WorkItem. Participants can execute Actions.
Let's just assume that Users are already created and I only need userIds.
I have the following WorkItem commands:
CreateWorkItem
ChangeTitle
AddParticipant
AddAction
ExecuteAction
These commands must be idempotent, so I can't add the same user or action twice.
And the following query:
WorkItemDetails (all info for a work item)
Queries are updated by handlers that handle domain events raised by WorkItem aggregate(s) (after they're persisted in the EventStore). All these events contain the WorkItemId. I would like to be able to rebuild the queries on the fly, if needed, by loading all the relevant events and processing them in sequence. This is because my users usually won't access WorkItems created one year ago, so I don't need to have these queries processed. So when I fetch a query that doesn't exist, I could rebuild it and store it in a key/value store with a TTL.
Domain events have an aggregateId (used as the event streamId and shard key) and a sequenceId (used as the eventId within an event stream).
So my first attempt was to create a large Aggregate called WorkItem that had a collection of participants and a collection of actions. Participant and Actions are entities that live only within a WorkItem. A participant references a userId and an action references a participantId. They can have more information, but it's not relevant for this exercise. With this solution my large WorkItem aggregate can ensure that the commands are idempotent because I can validate that I don't add duplicate participants or actions, and if I want to rebuild the WorkItemDetails query, I just load/process all the events for a given WorkItemId.
This works fine because since I only have one aggregate, the WorkItemId can be the aggregateId, so when I rebuild the query I just load all events for a given WorkItemId.
However, this solution has the performance issues of a large Aggregate (why load all participants and actions to process a ChangeTitle command?).
So my next attempt is to have different aggregates, all with the same WorkItemId as a property, but only the WorkItem aggregate has it as an aggregateId. This fixes the performance issues, and I can still update the query because all events contain the WorkItemId, but now my problem is that I can't rebuild it from scratch, because I don't know the aggregateIds of the other aggregates, so I can't load their event streams and process them. They have a WorkItemId property, but that's not their real aggregateId. Also, I can't guarantee that I process events sequentially, because each aggregate will have its own event stream, but I'm not sure if that's a real problem.
Another solution I can think of is to have a dedicated event stream to consolidate all WorkItem events raised by the multiple aggregates. So I could have event handlers that simply append the events fired by the Participant and Actions to an event stream whose id would be something like "{workItemId}:allevents". This would be used only to rebuild the WorkItemDetails query. This sounds like a hack... basically I'm creating an "aggregate" that has no business operations.
What other solutions do I have? Is it uncommon to rebuild queries on the fly? Can it be done when events from multiple aggregates (multiple event streams) are used to build the same query? I've searched for this scenario and haven't found anything useful. I feel like I'm missing something that should be very obvious, but I haven't figured out what.
Any help on this is very much appreciated.
Thanks
I don't think you should design your aggregates with querying concerns in mind. The Read side is here for that.
On the domain side, focus on consistency concerns (how small can the aggregate be while the domain still remains consistent within a single transaction), concurrency (how big can it be and not suffer concurrent access problems / race conditions?) and performance (would we load thousands of objects in memory just to perform a simple command? -- exactly what you were asking).
I don't see anything wrong with on-demand read models. It's basically the same as reading from a live stream, except you re-create the stream when you need it. However, this might be quite a lot of work for not an extraordinary gain, because most of the time entities are queried just after they are modified. If on-demand becomes "basically every time the entity changes", you might as well subscribe to live changes. As for "old" views, the definition of "old" is that they are not modified any more, so they don't need to be recalculated anyway, regardless of whether you have an on-demand or continuous system.
If you go the multiple-small-aggregates route and your Read Model needs information from several sources to update itself, you have a couple of options:
Enrich emitted events with additional data
Read from multiple event streams and consolidate their data to build the read model. No magic here, the Read side needs to know which aggregates are involved in a particular projection. You could also query other Read Models if you know they are up-to-date and will give you just the data you need.
See CQRS events do not contain details needed for updating read model
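A sketch of the second option - the event store interface and event types here are hypothetical; the only point is that the read side pulls the streams it knows about and consolidates them by some ordering key (a timestamp here, a global sequence if you have one):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

// Hypothetical shapes, just to show the consolidation step.
public record StoredEvent(string StreamId, long Sequence, DateTime Timestamp, object Payload);
public record TitleChanged(string Title);
public record ParticipantAdded(Guid UserId);
public record ActionAdded(Guid ActionId);

public interface IEventStore
{
    Task<IReadOnlyList<StoredEvent>> ReadStreamAsync(string streamId);
}

public class WorkItemDetails
{
    public string Title { get; set; }
    public List<Guid> Participants { get; } = new();
    public List<Guid> Actions { get; } = new();
}

public static class WorkItemDetailsRebuilder
{
    // The read side knows which aggregate streams feed this projection.
    public static async Task<WorkItemDetails> RebuildAsync(IEventStore store, IEnumerable<string> streamIds)
    {
        var events = new List<StoredEvent>();
        foreach (string streamId in streamIds)
            events.AddRange(await store.ReadStreamAsync(streamId));

        var details = new WorkItemDetails();
        // Consolidate across streams: ordered by timestamp, then per-stream sequence
        // (assumes clocks are good enough; a global sequence would be safer).
        foreach (StoredEvent ev in events.OrderBy(x => x.Timestamp).ThenBy(x => x.Sequence))
            Apply(details, ev.Payload);
        return details;
    }

    private static void Apply(WorkItemDetails details, object payload)
    {
        switch (payload)
        {
            case TitleChanged tc: details.Title = tc.Title; break;
            case ParticipantAdded pa: if (!details.Participants.Contains(pa.UserId)) details.Participants.Add(pa.UserId); break;
            case ActionAdded aa: if (!details.Actions.Contains(aa.ActionId)) details.Actions.Add(aa.ActionId); break;
        }
    }
}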

Update 40+ million entities in Azure table with many instances - how to handle concurrency issues

So here is the problem. I need to update about 40 million entities in an azure table. Doing this with a single instance (select -> delete original -> insert with new partitionkey) will take until about Christmas.
My thought is to use an Azure worker role with many instances running. The problem here is that the query grabs the top 1000 records. That's fine with one instance, but with 20 running their selects will obviously overlap... a lot. This would result in a lot of wasted compute trying to delete records that were already deleted by another instance and updating records that have already been updated.
I've run through a few ideas, but the best option I have is to have the roles fill up a queue with partition and row keys and then have the workers dequeue and do the actual processing.
Any better ideas?
Very interesting question!!! Extending @Brian Reischl's answer (and a lot of it is thinking out loud, so please bear with me :))
Assumptions:
Your entities are serializable in some shape or form. I would assume that you'll get raw data in XML format.
You have one separate worker role which is doing all the reading of entities.
You know how many worker roles would be needed to write modified entities. For the sake of argument, let's assume it is 20 as you mentioned.
Possible Solution:
First you will create 20 blob containers. Let's name them container-00, container-01, ... container-19.
Then you start reading entities - 1000 at a time. Since you're getting raw data in XML format out of table storage, you create an XML file and store those 1000 entities in container-00. You fetch the next set of entities and save them in XML format in container-01, and so on and so forth until you hit container-19. Then the next set of entities goes into container-00. This way you're evenly distributing your entities across all 20 containers.
Once all the entities are written, your worker role for processing these entities comes into the picture. Since we know that instances in Windows Azure are sequentially ordered, you get instance names like WorkerRole_IN_0, WorkerRole_IN_1, and so on.
What you would do is take the instance name and get the number "0", "1", etc. Based on this you would determine which worker role instance will read from which blob container (a small sketch of this mapping follows below): WorkerRole_IN_0 will read files from container-00, WorkerRole_IN_1 will read files from container-01, and so on.
Now your individual worker role instance will read an XML file, create the entities from that XML file, update those entities and save them back into table storage. Once this is done, you delete the XML file and move on to the next file in that container. Once all files are read and processed, you can just delete the container.
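In code, the instance-to-container mapping could be as simple as this (sketch only, assuming the classic RoleEnvironment API and the usual ...WorkerRole_IN_n instance id format):

using Microsoft.WindowsAzure.ServiceRuntime;

// Sketch: derive "my" blob container from the worker role instance name,
// e.g. WorkerRole_IN_7 -> container-07. The instance id format is an assumption.
string instanceId = RoleEnvironment.CurrentRoleInstance.Id;               // "...WorkerRole_IN_7"
int instanceNumber = int.Parse(instanceId.Substring(instanceId.LastIndexOf('_') + 1));
string containerName = $"container-{instanceNumber:D2}";                 // "container-07"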
As I said earlier, this is a lot of "thinking out loud" kind of solution, and some things must be considered, like what happens when the "reader" worker role goes down, among other things.
If your PartitionKeys and/or RowKeys fall into a known range, you could attempt to divide them into disjoint sets of roughly equal size for each worker to handle. E.g., Worker1 handles keys starting with 'A' through 'C', Worker2 handles keys starting with 'D' through 'F', etc.
If that's not feasible, then your queuing solution would probably work. But again, I would suggest that each queue message represent a range of keys if possible. E.g., a single queue message specifies deleting everything in the range 'A' through 'C', or something like that.
In any case, if you have multiple entities in the same PartitionKey then use batch transactions to your advantage for both inserting and deleting. That could cut down the number of transactions by almost a factor of ten in the best case. You should also use parallelism within each worker role. Ideally use the async methods (either Begin/End or *Async) to do the writing, and run several transactions (12 is probably a good number) in parallel. You can also run multiple threads, but that's somewhat less efficient. In either case, a single worker can push a lot of transactions with table storage.
As a side note, your process should go "Select -> Insert New -> Delete Old". Going "Select -> Delete Old -> Insert New" could result in permanent data loss if a failure occurs between steps 2 & 3.
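To make the batching concrete, a sketch using the classic WindowsAzure.Storage table SDK (the re-keying delegate and names are placeholders):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage.Table;

// Sketch only: batch the inserts per new PartitionKey and the deletes per old
// PartitionKey; a batch holds at most 100 operations within a single partition.
public static class Migrator
{
    public static async Task MigrateAsync(CloudTable table,
                                          IEnumerable<DynamicTableEntity> oldEntities,
                                          Func<DynamicTableEntity, DynamicTableEntity> rekey)
    {
        var olds = oldEntities.ToList();
        var news = olds.Select(rekey).ToList();

        // Insert the re-keyed copies first, then delete the originals, so a
        // failure in between never loses data.
        foreach (var group in news.GroupBy(e => e.PartitionKey))
            foreach (var chunk in Chunk(group, 100))
            {
                var batch = new TableBatchOperation();
                foreach (var entity in chunk) batch.InsertOrReplace(entity);
                await table.ExecuteBatchAsync(batch);
            }

        foreach (var group in olds.GroupBy(e => e.PartitionKey))
            foreach (var chunk in Chunk(group, 100))
            {
                var batch = new TableBatchOperation();
                foreach (var entity in chunk) { entity.ETag = "*"; batch.Delete(entity); }
                await table.ExecuteBatchAsync(batch);
            }
    }

    private static IEnumerable<List<T>> Chunk<T>(IEnumerable<T> source, int size)
    {
        var list = new List<T>(size);
        foreach (var item in source)
        {
            list.Add(item);
            if (list.Count == size) { yield return list; list = new List<T>(size); }
        }
        if (list.Count > 0) yield return list;
    }
}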
I think you should mark your question as the answer ;) I can't think of a better solution since I don't know what your partition and row keys look like. But to enhance your solution, you may choose to pump multiple partition/row keys into each queue message to save on transaction cost. Also, when consuming from the queue, get the messages in batches of 32 and process them asynchronously. I was able to transfer 170 million records from SQL Server (Azure) to Table storage in less than a day.
