Implementing a transactional outbox event architecture in Azure

I have a large message submitted to a REST service; it could be 100 KB or 50 MB. I need to process it asynchronously, and it looks like the transactional outbox event pattern is suitable for my needs.
Effectively, my service would commit the data to a database, along with an event record, in a single transaction. A process would poll for events and push each event to a message queue. The event contains a reference to the data in the db, usually through a unique identifier of some sort. The consumer of the queue would query the data from the database, do whatever it needs to do, and then remove the event record from the database.
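For reference, the single-transaction write described above might look roughly like this in .NET against SQL Server (table and column names are illustrative, not part of any real schema):

```csharp
// Minimal sketch of the outbox write: payload and event record committed together.
// The Payload and Outbox tables and their columns are assumed names.
using System;
using Microsoft.Data.SqlClient;

public static class OutboxWriter
{
    public static void SaveWithEvent(string connectionString, Guid payloadId, byte[] body)
    {
        using var connection = new SqlConnection(connectionString);
        connection.Open();
        using var transaction = connection.BeginTransaction();

        // 1. Persist the large payload itself.
        using (var insertPayload = new SqlCommand(
            "INSERT INTO Payload (Id, Body) VALUES (@id, @body)", connection, transaction))
        {
            insertPayload.Parameters.AddWithValue("@id", payloadId);
            insertPayload.Parameters.AddWithValue("@body", body);
            insertPayload.ExecuteNonQuery();
        }

        // 2. Persist the event record that references the payload by its identifier.
        using (var insertEvent = new SqlCommand(
            "INSERT INTO Outbox (EventId, PayloadId, CreatedUtc) VALUES (NEWID(), @id, SYSUTCDATETIME())",
            connection, transaction))
        {
            insertEvent.Parameters.AddWithValue("@id", payloadId);
            insertEvent.ExecuteNonQuery();
        }

        // Both inserts commit or roll back together, so the poller only ever sees committed events.
        transaction.Commit();
    }
}
```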
This pattern is well documented. Here and here are two places.
I have a reasonable understanding of how one could implement this on-premises, in a .NET/SQL Server environment that we are familiar with. In Azure, what would this look like? Are there other ways I can transactionally write to the database and a queue that do not require the outbox pattern? Or, following the outbox pattern, what would be the mechanism that polls for events in the db, and what would provide the queue service?

Usually, if you want to use the transactional outbox event pattern in Azure, you can use a Logic App or an Azure Function to read the events from the database and send them to the queue.
Doing that works especially well if you use the Cosmos DB change feed, so that your architecture is also reactive and performs well with less resource consumption.
To avoid this pattern altogether you would need a queue in Azure that can take part in a transaction with your database, and as far as I know that is not possible unless you use a third-party queue.
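As an illustration of the Azure Function option, a timer-triggered relay over a SQL outbox table might look roughly like this (in-process function model; the table, queue, and connection-setting names are assumptions for the sketch):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;
using Microsoft.Azure.WebJobs;
using Microsoft.Data.SqlClient;
using Microsoft.Extensions.Logging;

public static class OutboxRelay
{
    private static readonly ServiceBusClient Client =
        new ServiceBusClient(Environment.GetEnvironmentVariable("ServiceBusConnection"));

    [FunctionName("OutboxRelay")]
    public static async Task Run([TimerTrigger("0 */1 * * * *")] TimerInfo timer, ILogger log)
    {
        ServiceBusSender sender = Client.CreateSender("outbox-events");

        using var connection = new SqlConnection(Environment.GetEnvironmentVariable("SqlConnection"));
        await connection.OpenAsync();

        // Read a batch of unpublished events first, then send and mark them one by one.
        var pending = new List<(Guid EventId, Guid PayloadId)>();
        using (var select = new SqlCommand(
            "SELECT TOP (100) EventId, PayloadId FROM Outbox WHERE PublishedUtc IS NULL", connection))
        using (var reader = await select.ExecuteReaderAsync())
        {
            while (await reader.ReadAsync())
                pending.Add((reader.GetGuid(0), reader.GetGuid(1)));
        }

        foreach (var (eventId, payloadId) in pending)
        {
            // The message only carries a reference; the consumer loads the payload from the database.
            await sender.SendMessageAsync(
                new ServiceBusMessage(payloadId.ToString()) { MessageId = eventId.ToString() });

            // Mark the event only after a successful send; a crash in between causes a re-send,
            // which an idempotent consumer can tolerate.
            using var update = new SqlCommand(
                "UPDATE Outbox SET PublishedUtc = SYSUTCDATETIME() WHERE EventId = @id", connection);
            update.Parameters.AddWithValue("@id", eventId);
            await update.ExecuteNonQueryAsync();
        }

        log.LogInformation("Relayed {Count} outbox events", pending.Count);
    }
}
```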

Related

Selecting one producer for multiple consumers

In a Producer-Consumer case with multiple app instances, I know I am supposed to have some type of queue for the distribution of events to the consumers. But how do I deal with the producer?
I must query a database for objects with an expired deadline every minute. That will push work to a message queue, so distribution is not a problem. My concern is that if I have multiple instances of the app, I have to make sure that only one is producing work.
Am I supposed to solve this by electing a cluster leader? Is there a common algorithm or library in NodeJS for this? My guess is that I will have to reach for some magic Redis command and make my instances aware of each other.
There are always many different ways to achieve things, but my suggestion is to create an idempotent outbox table in your database, where multiple producers write the records to be published to the message queue.
Then, you can deploy a tool like Debezium that does transaction log tailing (reads the database transaction log) and pushes the messages to whatever message queue technology you're using.
Please note that it's also good practice to implement an idempotency check on your consumers to make sure they don't process the same message twice.
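As a rough sketch of that consumer-side idempotency check (shown against SQL Server purely for illustration; the ProcessedMessages table is an assumed name):

```csharp
using Microsoft.Data.SqlClient;

public static class IdempotentConsumer
{
    // Returns true if this message id has not been seen before and was recorded now.
    // Relies on a unique/primary key on ProcessedMessages.MessageId as the final guard
    // against two consumers racing on the same message.
    public static bool TryMarkProcessed(SqlConnection connection, SqlTransaction tx, string messageId)
    {
        using var command = new SqlCommand(
            @"IF NOT EXISTS (SELECT 1 FROM ProcessedMessages WHERE MessageId = @id)
              BEGIN
                  INSERT INTO ProcessedMessages (MessageId, ProcessedUtc) VALUES (@id, SYSUTCDATETIME());
                  SELECT 1;
              END
              ELSE SELECT 0;", connection, tx);
        command.Parameters.AddWithValue("@id", messageId);
        return (int)command.ExecuteScalar() == 1;
    }
}
```

The check runs in the same transaction as the consumer's own writes, so a redelivered message is detected and can be acknowledged without repeating its side effects.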
Wix - How We Implemented Idempotency in a Billing System at Scale

How to send a message to Microsoft Event Hubs with a DB transaction?

I want to send an event to Microsoft Event Hubs within a DB transaction:
Explanation:
1. The user hits an order-creation endpoint.
2. OrderService accepts the order and puts it into the db.
3. Now OrderService wants to send that orderId as an event to other services using Event Hubs.
How can I achieve transactional behaviour for steps 2 and 3?
I know these solutions:
Outbox pattern: where I put the message in another table within the order-creation transaction, and there is a cron/scheduler that takes the messages from the table and marks them as delivered; the next time, the cron takes only the messages that have not yet been delivered.
Use the database audit log and a library that takes care of these things. The library binds the database table to Event Hubs, and on every update it sends that change to Event Hubs.
I wanted to know: is there any built-in transactional feature in Event Hubs?
Or
Is there any better way to handle this?
There is no concept of transactions within Event Hubs at present. I'm not sure, given the limited context that was shared, that Event Hubs is the best fit for your scenario. Azure Service Bus has transaction support and may be a more natural fit for your intended flow.
In this kind of distributed scenario, regardless of which message broker you decide on, I would advise embracing eventual consistency and considering a pattern similar to:
Your order creation endpoint receives a request
The order creation endpoint assigns a unique identifier to the request and emits the event to Event Hubs; if the send was successful, it returns a 202 (Accepted) to the caller along with a Retry-After header indicating how long the caller should wait before checking the status of that order's creation (sketched in the code after these steps).
Some process is responsible for reading events from the Event Hub and creating that order within the database. Depending on your ecosystem's tolerance, this may be a dedicated process or could be something like an Azure Function with an Event Hubs trigger.
Other event consumers interested in orders will also see the creation request and will call into your order service or database for the details using the unique identifier that was assigned by the order creation endpoint; this may or may not be the official order number within the system.
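As a rough sketch of the first two steps, assuming ASP.NET Core minimal APIs and the Azure.Messaging.EventHubs producer client (the hub name, route, and OrderRequest type are made up for the example):

```csharp
using System.Text.Json;
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Producer;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Producer for the (assumed) "orders-hub" Event Hub.
var producer = new EventHubProducerClient(
    builder.Configuration["EventHubs:ConnectionString"], "orders-hub");

app.MapPost("/orders", async (OrderRequest request, HttpResponse response) =>
{
    var orderId = Guid.NewGuid();                       // unique identifier assigned up front
    var payload = JsonSerializer.SerializeToUtf8Bytes(new { orderId, request });

    using EventDataBatch batch = await producer.CreateBatchAsync();
    if (!batch.TryAdd(new EventData(payload)))
        throw new InvalidOperationException("Event too large for a single batch.");
    await producer.SendAsync(batch);                    // only report success if the send succeeded

    response.Headers["Retry-After"] = "5";              // hint: poll the status after ~5 seconds
    return Results.Accepted($"/orders/{orderId}/status", new { orderId });
});

app.Run();

record OrderRequest(string CustomerId, decimal Amount);
```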

Architecture issue - Azure Service Bus and message order guarantee

OK, so I'm relatively new to Service Bus. I'm working on a project where we use Azure Service Bus for queueing messages. Our architecture roughly looks like the following:
So the idea is that in our SourceSystem all kinds of stuff happens, which leads to messages being put on the Service Bus topics. Now our responsibility is syncing these events to the external client so they are aware of what we are doing.
Now the issue is that currently we don't use Service Bus sessions, so message order isn't guaranteed. Also consider the following scenario:
OrderCreated
OrderUpdate 1
OrderUpdate 2
OrderClosed
What happens now is that if the external client's API is down for, say, OrderUpdate 1 and OrderUpdate 2, we could potentially send the messages in the order: OrderCreated, OrderClosed, OrderUpdate 1, OrderUpdate 2.
Currently we just retry a message a few times and then it moves into the dead-letter queue for manual reprocessing.
What steps should we take to better guarantee message order? I feel like, within the scope of an order, message order needs to be guaranteed.
Should we force the source system to put all messages for an order in a Service Bus session? But how can we handle this with multiple topics? And what do we do if message 1 from a session ends up in the dead-letter queue?
There are a lot of considerations here. Should we use a single topic so it's easier to manage the sessions? But does that open up other problems, with different message structures being in a single topic?
I'd love to hear your opinions on this.
Have a look at Durable Functions in Azure. You can use the 'Async HTTP API' pattern or one of the other patterns to achieve the orchestration you need.
NServiceBus' Sagas might also be a good option; here is an article that does a very good comparison between NServiceBus and Durable Functions.
If the external client has to receive all those events and order matters, sending those messages to multiple topics where a topic is per message type will make your mission extremely hard to accomplish. For ordered messaging first you need to use a single entity (queue or topic) with Sessions enabled. That way you can guarantee ordered message processing. In case you have multiple external clients, you'd need to have a session-enabled entity (topic) per external client.
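As a rough illustration, here is a minimal sketch using the Azure.Messaging.ServiceBus client with a session-enabled queue; the entity name ("order-events") and the use of the order id as the SessionId are assumptions for the example:

```csharp
using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

public static class OrderedMessaging
{
    public static async Task SendAsync(ServiceBusClient client, string orderId, BinaryData body)
    {
        ServiceBusSender sender = client.CreateSender("order-events");

        // All messages for the same order share a SessionId, so they are delivered in order
        // to whichever consumer currently owns that session.
        await sender.SendMessageAsync(new ServiceBusMessage(body) { SessionId = orderId });
    }

    public static async Task ConsumeAsync(ServiceBusClient client)
    {
        ServiceBusSessionProcessor processor = client.CreateSessionProcessor(
            "order-events",
            new ServiceBusSessionProcessorOptions { MaxConcurrentSessions = 8 });

        processor.ProcessMessageAsync += async args =>
        {
            // Within a session, messages are handled one at a time, in enqueue order.
            Console.WriteLine($"Session {args.Message.SessionId}: {args.Message.Body}");
            await args.CompleteMessageAsync(args.Message);
        };
        processor.ProcessErrorAsync += args =>
        {
            Console.WriteLine(args.Exception);
            return Task.CompletedTask;
        };

        await processor.StartProcessingAsync();
    }
}
```

For a topic you would create the session processor against a subscription instead of a queue; the ordering guarantee per SessionId is the same.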
Another option is to implement a pattern known as Process Manager. The process manager would be responsible for making decisions about the incoming messages and for concluding when the work for a given order is completed or not.
There are also libraries (MassTransit, NServiceBus, etc) that can help you. NServiceBus implements Process Manager via a feature called Saga (tutorial) and MassTransit has it as well (documentation).

How to reliably store an event to Azure Cosmos DB and dispatch it to Event Grid exactly once

I'm experimenting with the event sourcing / CQRS pattern using a serverless architecture in Azure.
I've chosen the Cosmos DB document database for the Event Store and Azure Event Grid for dispatching events to denormalizers.
How do I ensure that events are reliably delivered to Event Grid exactly once, when the event is stored in Cosmos DB? I mean, if delivery to Event Grid fails, the event shouldn't be stored in the Event Store, should it?
Look into the Cosmos DB change feed. It is a built-in event raiser/queue for each change in the database. You can register one or many listeners/handlers, e.g. Azure Functions.
This might be exactly what you are asking for.
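For illustration, a change-feed listener of that kind might look roughly like the following, assuming the in-process Azure Functions Cosmos DB trigger (v3-style attribute properties, which differ in newer extension versions) and the Azure.Messaging.EventGrid client; all database, container, and setting names are made up for the example:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Azure;
using Azure.Messaging.EventGrid;
using Microsoft.Azure.Documents;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class EventStoreDispatcher
{
    private static readonly EventGridPublisherClient Publisher = new EventGridPublisherClient(
        new Uri(Environment.GetEnvironmentVariable("EventGridTopicEndpoint")),
        new AzureKeyCredential(Environment.GetEnvironmentVariable("EventGridTopicKey")));

    [FunctionName("EventStoreDispatcher")]
    public static async Task Run(
        [CosmosDBTrigger(
            databaseName: "appdb",
            collectionName: "events",
            ConnectionStringSetting = "CosmosConnection",
            LeaseCollectionName = "leases",
            CreateLeaseCollectionIfNotExists = true)] IReadOnlyList<Document> changes,
        ILogger log)
    {
        foreach (Document doc in changes)
        {
            // The change feed is at-least-once, so downstream handlers must be idempotent;
            // the document id can double as a de-duplication key.
            var gridEvent = new EventGridEvent(
                $"events/{doc.Id}",                  // subject
                "EventStore.EventAppended",          // event type
                "1.0",                               // data version
                BinaryData.FromString(doc.ToString()));

            await Publisher.SendEventAsync(gridEvent);
        }

        log.LogInformation("Dispatched {Count} events", changes.Count);
    }
}
```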
Some suggest you can go directly to Cosmos DB and attach Event Grid to the back side of the change feed.
You cannot, but you shouldn't do it anyway. Maybe there are some very complicated methods using distributed transactions, but they are not scalable. You cannot atomically store and publish events, because you are writing to two different persistence stores with different transactional boundaries. You can have a synchronous CQRS monolith, but only if you are using the same technology for the events persistence and the read-model persistence.
In CQRS the application is split into Write/Command and Read/Query sides (this long video may help). You are trying to unify the two parts into a single one, a downgrade if you will. Instead you should treat them separately, with different models (see Domain-driven design).
The Write side should not depend on the outcome from the Read side. This means that after the Event store persists the events, the Write side is done. Also, the Write side should contain all the data it needs to do its job: the emitting of events based on the business rules.
If you have different technologies in the Write and Read parts, then your Read side should be decoupled from the Write side, that is, it should run in a separate thread/process.
One way to do this is to have a thread/process that listens for appends to the Event store, fetches the new events, and then publishes them to the Event Grid. If this process fails or is restarted, it should resume from where it left off. I don't know if Cosmos DB supports this, but MongoDB (also a document database) has the oplog that you can tail to get the new events within a few milliseconds.

How to control idempotency of messages in an event-driven architecture?

I'm working on a project where DynamoDB is being used as database and every use case of the application is triggered by a message published after an item has been created/updated in DB. Currently the code follows this approach:
repository.save(entity);
messagePublisher.publish(event);
Udi Dahan has a video called Reliable Messaging Without Distributed Transactions where he talks about a solution to situations where a system can fail right after saving to the DB but before publishing the message, as messages are not part of a transaction. But in his solution I think he assumes a SQL database, as the process involves saving, as part of the transaction, the correlationId of the message being processed, the entity modification, and the messages that are to be published. Using a NoSQL DB I cannot think of a clean way to store the information about the messages.
A solution would be using DynamoDB streams and subscribing to the events published, either using a Lambda or another service, to transform them into domain-specific events. My problem with this is that I wouldn't be able to send the messages from the domain logic; the logic would be spread across the service processing the message and the Lambda/service reacting to changes, and the solution would be platform-specific.
Is there any other way to handle this?
I can't give a specific solution based on DynamoDB since I've never used that engine. But I've built an event-driven system on top of MongoDB, so I can share my learnings, which you might find useful for your case.
You can have different approaches:
1) Based on an event sourcing approach, you can just save the events/messages your use case produces within a transaction. In Mongo, when you are just inserting/appending new items to the same collection, you can ensure atomicity. Anyway, even if the engine does not provide that capability, the write operation is so centralized that you reduce the possibility of an error to a minimum.
Once all the events are stored, you can consume them, project them to a given state, and then persist the updated state in another transaction.
Here you have to deal with eventual consistency, as data will be stale in your read model until you have projected the events.
2) Another approach is applying the UnitOfWork pattern, where you cache all the write operations (insert/update/delete) needed to save both the events and the state. Once your use case finishes, you execute all the cached operations against the database (flush). This way, although the operations are not atomic, you are again centralizing them enough to minimize errors.
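A bare-bones illustration of that Unit of Work idea (the interfaces here are hypothetical, not from any specific library):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

public interface IPersistenceOperation
{
    Task ExecuteAsync();   // e.g. one insert/update/delete against the database
}

public sealed class UnitOfWork
{
    private readonly List<IPersistenceOperation> _pending = new();

    // The use case registers state changes and outbox events here instead of
    // writing to the database immediately.
    public void Register(IPersistenceOperation operation) => _pending.Add(operation);

    // At the end of the use case all cached operations are executed back to back.
    // This is not atomic, but it concentrates the writes into one small window,
    // which is the trade-off described above.
    public async Task FlushAsync()
    {
        foreach (var operation in _pending)
            await operation.ExecuteAsync();
        _pending.Clear();
    }
}
```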
Of course, the best option is to use an ACID database if you require that capability; any other approach will be a workaround that gets close to it.
Regarding publishing the events, I don't know if you mean they are published to a messaging transport such as RabbitMQ, Kafka, etc. But that should be a background process in which you fetch the events from the DB and publish them, in order to avoid a two-phase commit within the same transaction.
