Notification Service in microservices architecture [closed] - azure

We have a microservices architecture supporting a large application. All the services communicate using Azure Service Bus as the medium. Currently, we send notifications (immediate and scheduled) from different services on a per-need basis. Hence the need for a separate notification service that could take on the load and responsibility of formatting and sending notifications (email, text, etc.).
What I have thought:
The notification service will have its own database holding notification-related data (setup, templates, schedules, etc.) and some master data (copied from other sources). I don't want to copy all the transactional data to this DB (for obvious reasons), but we might need transactional and historic data to compose a notification. I am planning to subscribe to service bus events (published by other services); the onus of sending the data needed to format the notification will be on the service raising the event. The notification service will rely on that data to fill in the template (stored in its own DB) and then send the notification.
In short, the job of the notification service will be to listen to service bus events, fill in the template from the data in the event, and send the notification.
Questions:
What if the data received by the notification service from a service bus event does not contain everything the notification template needs? How do I query/get the missing data from the other service?
Suppose a service publishes 100 events for a single operation and we need to send a single notification for that whole operation. How does the notification service manage that, given that it will receive 100 separate messages?
Since the notification trigger depends on data sent from other sources (service bus events), what happens when a notification is scheduled (say, 6 AM every day)? How do we get the data needed for the notification (since that data is not in the notification DB)?
I am looking for advice from experience and some material to refer to. Thanks in advance.

You might have to implement notification as a service; imagine you are exposing your application as a plugin within Azure itself. A few points here:
1. Your notification service should only accept a notification when the information is valid.
2. Have a caching system on both the front end (state management) and the back end / microservices (Redis or any caching system).
3. Capture an EventId on each operation. It is good practice to track complex operations this way, and it lets you deduplicate notifications. Where possible, avoid flooding the user with such notifications, or send one notification summarizing a group of them in a single message.
4. Put circuit-breaker logic here to handle invalid notifications: place them in a retry queue (with a 30-minute delay, perhaps) and republish the event.
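The retry-queue idea above can be sketched like this. It is an in-memory illustration only; with Service Bus you would more likely use scheduled messages or the dead-letter queue, and the message shape here is hypothetical:

```python
import heapq
import itertools

class RetryQueue:
    """Holds failed notifications and releases them again after a
    fixed delay. In-memory sketch; a real system would lean on the
    broker's dead-letter queue or scheduled-message feature."""

    def __init__(self, delay_seconds=30 * 60):
        self.delay = delay_seconds
        self._heap = []
        self._order = itertools.count()  # tie-breaker for equal ready times

    def push(self, message, now):
        # Schedule the message to become visible again after the delay.
        heapq.heappush(self._heap, (now + self.delay, next(self._order), message))

    def pop_ready(self, now):
        # Release every message whose delay has elapsed.
        ready = []
        while self._heap and self._heap[0][0] <= now:
            ready.append(heapq.heappop(self._heap)[2])
        return ready
```

A consumer would call `pop_ready` on a timer and republish whatever comes back.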
References
https://www.rabbitmq.com/dlx.html
https://microservices.io/patterns/reliability/circuit-breaker.html
https://redis.io/topics/introduction
Happy coding :)

In microservices and domain-driven design it is sometimes hard to work out when to start splitting services. Having each service be responsible for constructing and sending its own notifications is perfectly valid.
It is when additional decisions need to be made that are not related to the 'origin' service that things become more tricky.
Example 1
You have an order microservice that sends an email to the sales team and the user when an order is placed.
Then the payment service updates sales and the user with an sms message when the payment is processed.
You could then decide to let the user manage their notification preferences. They can now decide whether they want SMS / email / push messages, and which messages they would like to receive.
We now have a problem. These notification preferences would need to be understood by every service sending messages. Any new team or service that starts sending messages must also remember to implement these preferences.
You may also want the user to view all historic messages they have been sent. Again you get into a problem where there is no single source for that information.
Example 2
We now have a notification service; it listens for order created, order updated, order completed and payment processed events.
It is listening for:
Order Created
Order Updated
only to make sure it has the information it needs to construct the messages. It is common, and in many cases a requirement, to have system-wide redundancy of data when using microservices. You need to imagine that each service is an island; so while it feels wasteful to store that information again, if it is required by that service to perform its work, then it is valid.
Note: don't store the data wholesale, store only what is relevant for that service.
We can then use the:
Order Complete
Payment Processed
events as triggers to actually start constructing and sending the messages.
Problems:
Understanding if the service has all the required data
This is up to the service to determine. If the Order Complete event comes through but the service has not yet received an Order Created event, then it should store the Order Complete event and try to process it again later, once all the information is available.
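One way to implement that buffering, sketched under the assumption that each event carries an order id, a type field, and its payload (field names are illustrative, not from any particular SDK), is to hold events per order until every required type has arrived:

```python
class NotificationAssembler:
    """Buffers events per order until both required event types have
    arrived, then produces the data needed for one notification.
    Sketch only; event and field names are assumptions."""

    REQUIRED = {"order_created", "order_complete"}

    def __init__(self):
        self._pending = {}  # order_id -> {event_type: event}

    def handle(self, event):
        events = self._pending.setdefault(event["order_id"], {})
        events[event["type"]] = event
        if self.REQUIRED <= events.keys():
            # Everything we need is here: assemble and clear the buffer.
            data = self._pending.pop(event["order_id"])
            return {"order_id": event["order_id"],
                    "email": data["order_created"]["email"],
                    "total": data["order_complete"]["total"]}
        return None  # still waiting on a missing event
```

In production the pending buffer would live in the service's own database rather than in memory, so a restart does not lose half-assembled notifications.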
100 events resulting in a notification
Data aggregation is also an important microservice concept, and there are many ways to ensure completeness that will come down to your specific use case.
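For the 100-events case, one common approach, assuming the publisher can stamp each event with a shared correlation id and the expected batch size (both hypothetical field names), is to collect events per correlation id and emit a single summary when the batch is complete:

```python
class BatchAggregator:
    """Collects the N events of one logical operation and emits one
    summary when all have arrived. The correlation_id and expected
    fields are assumed to be set by the publishing service."""

    def __init__(self):
        self._batches = {}  # correlation_id -> list of events

    def handle(self, event):
        batch = self._batches.setdefault(event["correlation_id"], [])
        batch.append(event)
        if len(batch) == event["expected"]:
            # Batch complete: emit one notification for the whole operation.
            del self._batches[event["correlation_id"]]
            return {"correlation_id": event["correlation_id"],
                    "count": len(batch)}
        return None  # batch still incomplete
```

If the publisher cannot know the batch size up front, a timeout-based flush (emit whatever has arrived after N seconds of silence) is the usual alternative.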


What Azure Messaging Service to use? [closed]

I have a specific case that I would like to solve using an Azure Messaging Service, but I'm not sure which one to use. There are 7 to choose from and I think I have narrowed it down to 2 options.
Azure Service Bus Topic
Azure Event Hubs
I will try to explain my needs using the following diagram. Please bear in mind that this is just a fictional scenario to illustrate what I'm after.
A user updates a Product. An HTTP request is sent to an MVC application.
The MVC application puts an UpdateProductCommand on the bus (which Azure bus?).
Inside Azure, whether it is an Azure Function or something else, the command must be processed.
Inside the command handler I want to publish an event telling all listening parties that a certain task has been processed.
So a ProductUpdatedEvent gets published.
There will be more than one application interested in consuming this event. These apps can live inside Azure as Azure Functions, but it must also be possible for applications hosted on an external server (inside my own IIS server, for example) to consume the events.
Requirements:
When an application/event handler that consumes an event goes down for some time and later comes back up, it should be able to process all the events it missed.
An event can be consumed by more than one event handler. All event handlers should process the event.
An event must be able to carry data, like on the right in the screenshot. I want to be able to send data about the product that was updated.
Which Azure Messaging Service technology would best fit this description?
I would recommend Azure Service Bus.
Clients can be responsible for creating their own subscriptions, and each subscription gets its own instance of the message you send. So in the final part of your diagram, the Product Microservice and Inventory Microservice would each be able to process the message at their own rate; if one went down it wouldn't affect the other, and both would read their own copy. The message gets sent once and read twice.
You can have a look at the tiers of Service Bus to see if the cost/storage meets your needs, but you should be able to store millions of messages (up to 80 GB) on the topic, depending on what you put in each message. Each message can be up to 1 MB of text, so the exact count will differ depending on what you're doing. When a microservice comes back online, it can work through the backlog.
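The fan-out semantics described above can be illustrated with a small in-memory model. This is a sketch of the topic/subscription behaviour, not the Azure SDK; each subscription holds its own copy and drains it independently:

```python
from collections import deque

class Topic:
    """In-memory sketch of Service Bus topic semantics: every
    subscription receives its own copy of each message and
    consumes at its own pace."""

    def __init__(self):
        self._subscriptions = {}  # name -> queue of pending messages

    def subscribe(self, name):
        self._subscriptions[name] = deque()

    def send(self, message):
        # One copy lands in every subscription's queue.
        for queue in self._subscriptions.values():
            queue.append(message)

    def receive(self, name):
        # Each subscriber drains only its own queue.
        queue = self._subscriptions[name]
        return queue.popleft() if queue else None
```

If one subscriber is offline, its queue simply accumulates a backlog while the others keep reading, which is the durability property the requirements ask for.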
My 2 cents on this, as both can be used to achieve the same goal, and some architects would decide for one, others for the other:
I believe you have read the documentation, which starts from the distinction between a message and an event, and the philosophy behind it.
"The publisher of the message has an expectation about how the consumer handles the message"
"The publisher of the event has no expectation about how the event is handled"
From this, and based on your requirements, I understand that your publisher does not care how the handler will handle the event; for the publisher it carries no value, as handlers should not send any response or confirmation. This leans more towards the Event Hubs 'spirit'.
Now, you said that you want to carry data. By definition, again from the documentation:
"An event is a lightweight notification of a condition or a state change"
"The message contains the data that triggered the message pipeline."
So 'carrying data' makes it more a message than an event, since a message contains the information while an event conveys the fact that state has changed. This leans more towards the Service Bus topic 'spirit' and philosophy, as a message contains high-value transactional data that should not be lost.
Another requirement, and in my opinion the most important one, says that you would like to have more than one event handler.
Now, the way to have more than one event handler with Event Hubs is to create a separate Consumer Group for each of them.
The way to have more than one event handler with a Service Bus topic is to simply subscribe.
So finally, my 2 cents: if you want more flexibility with handlers, I would go with Service Bus topics, as you can add as many subscribers (event handlers) at runtime as you want, without any adjustment to the topic itself.
If you think your solution will move towards a finite number of handlers/consumers, and you might have concurrent event publishers, I would choose Event Hubs and create a Consumer Group for each of my handlers; but that is not what I understood from your initial requirements.

How to send message to Microsoft EventHub with Db Transaction?

I want to send an event to Microsoft Event Hubs within a DB transaction:
Explanation:
A user hits an order-creation endpoint.
The OrderService accepts the order and puts it into the DB.
Now the order service wants to send that orderId as an event to other services using Event Hubs.
How can I achieve transactional behaviour for steps 2 and 3?
I know these solutions:
Outbox pattern: put the message in another table within the order-creation transaction. A cron/scheduler then takes the messages from that table, publishes them, and marks them delivered; on the next run, the cron picks up only undelivered messages.
Use the database audit log and a library that takes care of this. The library binds the database table to Event Hubs; on every update it sends the change to Event Hubs.
I wanted to know: is there any built-in transactional feature in Event Hubs?
Or
Is there any better way to handle this thing?
There is no concept of transactions within Event Hubs at present. I'm not sure, given the limited context that was shared, that Event Hubs is the best fit for your scenario. Azure Service Bus has transaction support and may be a more natural fit for your intended flow.
In this kind of distributed scenario, regardless of which message broker you decide on, I would advise embracing eventual consistency and considering a pattern similar to:
Your order creation endpoint receives a request
The order creation endpoint assigns a unique identifier to the request and emits the event to Event Hubs; if the send was successful, it returns 202 (Accepted) to the caller along with a Retry-After header indicating how long the caller should wait before checking the status of the order's creation.
Some process is responsible for reading events from the Event Hub and creating that order within the database. Depending on your ecosystem's tolerance, this may be a dedicated process or could be something like an Azure Function with an Event Hubs trigger.
Other event consumers interested in orders will also see the creation request and will call into your order service or database for the details, using the unique identifier that was assigned by the order creation endpoint; this may or may not be the official order number within the system.
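The accept-then-process step of that flow might look roughly like this; the event shape, function name, and Retry-After value are assumptions for illustration, not a prescribed API:

```python
import uuid

def create_order_endpoint(request_body, emit_event):
    """Assign a request id, emit the event, and tell the caller to
    poll again later; the order itself is created asynchronously by
    whatever process consumes the event."""
    request_id = str(uuid.uuid4())
    emit_event({"type": "OrderRequested",
                "requestId": request_id,
                "body": request_body})
    return {"status": 202,                    # Accepted, not yet created
            "headers": {"Retry-After": "5"},  # caller polls in ~5 seconds
            "body": {"requestId": request_id}}
```

The caller then polls a status endpoint with that request id until the consumer has written the order to the database.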

Can domain events be deleted?

In order to make domain event handling consistent, I want to persist the domain events to the database while saving the AggregateRoot, and later react to them using an event processor; for example, say I want to send them to an event bus as integration events. I wonder whether the event is allowed to be deleted from the database after passing it through the bus.
So the events would never be loaded with the AggregateRoot anymore.
In other words: is the reactor allowed to remove the event from the DB after the reaction?
You'll probably want to review Reliable Messaging Without Distributed Transactions, by Udi Dahan; also Pat Helland's paper Life Beyond Distributed Transactions.
In event-sourced systems, where the history of domain events is the persisted history of the aggregate, you will almost never delete events.
In a system where the log of domain events is simply a journal of messages to be communicated to other "partners", the domain events are fundamentally messages that describe information to be copied from one part of the system to another. So when we get an acknowledgement that the message has been copied successfully, we can remove the copy stored "here".
In a system where you can't be sure that all of the consumers have received the domain event (because, perhaps, the list of consumers is not explicit), then you probably can't delete the domain events.
You may be able to move them instead: rather than having implicit subscriptions to the aggregate, you could have an explicit subscription from an event history to the aggregate, and then implicit subscriptions to the history.
You might be able to treat the record of domain events as a cache: if the partners aren't done consuming the message within 7 days of it becoming available, then maybe delivery of the message isn't the biggest problem in the system.
How many nines of delivery guarantee do you need?
Domain events are things that have happened in the past. You can't delete the past, assuming you're not Marty McFly :)
Domain events shouldn't be deleted from the event store. If you want to know whether you have already processed an event, you can add a flag for it.
UPDATE: Description of the event management process
I follow the approach of IDDD (Red Book by Vaughn Vernon, see picture on page 287) this way:
1) The aggregate publishes the event locally within the BC (lightweight publisher).
2) In the BC, a lightweight subscriber stores every event published by the BC in an "event store" (a table in the same database as the BC).
3) A batch process (worker) reads the event store and publishes the events to a message queue (or an event bus, as you say).
4) Other BCs interested in the event (or even the same BC) subscribe to the message queue (or event bus) to listen and react to the event.
Anyway, even after the worker has successfully sent the event to the message queue, you shouldn't delete the domain event from the event store. Instead, simply don't send it again; events are things that have happened, and you cannot (should not) delete things that occurred in the past.
A message queue or event bus is just a mechanism to send/receive events; the events should remain stored in the BC in which they were created and published.

How/Where to store temporary data without using claim check pattern?

I have a use case that requires our application to send a notification to an external system when a particular event occurs. The notification to the external system happens by putting a message into a JMS queue.
The transactional requirements are not that strict. Hence, instead of using JTA for such a trivial use case, I decided to use a JMS local transaction, as Spring understands how to synchronize a JMS local transaction with any managed transaction (e.g. a database transaction) to achieve best-effort 1PC.
The problem I am facing is that the notification has to be enriched with some data before it is sent. This extra information has no relevance to my business domain, which is responsible for generating the event. So, I am not sure where to temporarily store that extra data so I can reclaim it before sending the notification. Perhaps the illustration below helps in understanding the problem.
HTTP Request ---> Rest API ---> Application Domain ---> Event Generation ---> Notification
As per the above illustration, I do not want to pollute my domain layer by passing that extra data (which is part of the REST API request payload) through it just to send the notification.
One solution I thought of is to use a thread-scoped queue channel to reclaim the data before sending the notification. This way, the REST API initiates the process by putting the extra data into the queue, and before sending the notification I pull it from the queue to enrich the message.
The part I am unable to work out in this solution is how to pull the message from the queue when I receive the event somewhere in the application (between the event-generation and notification phases).
If my approach does not make sense, then please suggest any solution that does not use the claim-check pattern.
Why not simply store the information in a header (or headers)? The domain layer doesn't need to know it's there.
Or, for your solution, create a new QueueChannel for each request, store a reference to it in a header, and receive() from it on the back end; but it's easier to just use a header directly.
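Language aside (the question is about Spring Integration; this sketch is plain Python with hypothetical names), the header idea reduces to keeping the extra data next to, but outside of, the domain payload:

```python
def build_message(domain_payload, extra):
    """The REST layer attaches the extra data as a header; the domain
    layer only ever touches the payload and never sees the header."""
    return {"headers": {"notification-extra": extra},
            "payload": domain_payload}

def enrich_notification(message, notification):
    """Pre-send step: merge the header data back into the outgoing
    notification, after the domain layer has done its work."""
    return {**notification, **message["headers"]["notification-extra"]}
```

The domain layer processes `message["payload"]` untouched, and only the outbound adapter reads the header, which is exactly the separation the answer suggests.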

CQRS and DDD boundaries

I have a couple of questions to which I am not finding any exact answers. I've used CQRS before, but probably I was not using it properly.
Say there are 5 services in the domain: Gateway, Sales, Payments, Credit and Warehouse. During the process of a user registering with the application, the front end submits a few commands; once the user is registered, the same front end sends a few more commands to create an order and apply for credit.
Now, what I usually do is create a gateway, which receives all public commands, validates them and, if valid, transforms them into domain commands. I only use events to store data, and if one service needs some action performed by another service, a domain command is sent directly from one service to the other. But I've seen other systems in which event handlers are used for more than storing data. So my questions are: what are the limits to what event handlers can do? And is it correct to send commands between services when one service requires another to perform an action, or is it more correct to have the initial service raise an event and let the handler in the other service perform that action? I ask because I've seen events like INeedCreditAproved, when I was hoping to see a domain command like ApprovedCredit.
Any input is welcome.
You're missing an important concept here: Sagas (Process Managers). You have a long-running workflow, and it's better expressed centrally.
Sagas listen to events and emit commands. So an OrderAccepted event will start a Saga, which then emits ApproveCredit and ReserveStock commands, to be sent to the Credit and Warehouse services respectively. The Saga can then listen to command success/failure events and compensate appropriately, such as by emitting a SendEmail command or whatever else.
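A minimal sketch of such a process manager, with the command bus reduced to a plain callable; OrderAccepted, ApproveCredit and ReserveStock come from the answer above, while the failure event (CreditApprovalFailed) and compensation command (ReleaseStock) are my assumptions:

```python
class OrderSaga:
    """Minimal process manager: listens to events, emits commands.
    The command bus is just a callable here; in a real system it
    would route each command to the owning service."""

    def __init__(self, send_command):
        self.send_command = send_command

    def handle(self, event):
        if event["type"] == "OrderAccepted":
            # Kick off the workflow across the Credit and Warehouse services.
            self.send_command({"type": "ApproveCredit",
                               "orderId": event["orderId"]})
            self.send_command({"type": "ReserveStock",
                               "orderId": event["orderId"]})
        elif event["type"] == "CreditApprovalFailed":
            # Compensating action: undo the stock reservation.
            self.send_command({"type": "ReleaseStock",
                               "orderId": event["orderId"]})
```

The key design point is that the Credit and Warehouse services never need to know about each other; only the Saga holds the workflow.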
A year ago I was doing what you describe, sending commands between services from event handlers when one service required another to perform an action, but then I made the (in hindsight, foolish) decision to switch to the other approach, having the initial service raise an event and letting the handler in the other service perform the action, and it worked at first. Now I am switching back to sending commands from event handlers.
You can see that other people, like Rinat, do similar things with event ports/receptors, and it is working for them, I think:
http://abdullin.com/journal/2012/7/22/bounded-context-is-a-team-working-together.html
http://abdullin.com/journal/2012/3/31/anatomy-of-distributed-system-a-la-lokad.html
Good luck
