Microservices for job/cron tasks - cron

For example, I want a microservice that sends notifications (email, SMS, push). Everything is fine with a few users, but after some time our application has a lot of users, the microservice can't keep up, and emails are sent an hour late.
So how do I handle this situation? I could deploy another instance of the microservice, but then how do I ensure that only one instance processes each email, so users don't receive duplicate emails?

You need to set up messaging for that.
It’s common to use a persistent queue such as RabbitMQ. The microservice responsible for sending emails then consumes messages from the queue and handles them appropriately.
If you run into the problem of a single instance of the email microservice not being enough, you can simply start another instance and deploy it immediately. This works because once a message is consumed from the queue it’s gone, unless you explicitly return it (requeue it). In other words, each successfully sent email consumes its message, so the request to send that email is no longer in the system.
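A minimal in-process sketch of this competing-consumers idea, using Python's `queue.Queue` and threads in place of a real broker (the names and thread setup are illustrative, not the RabbitMQ API):

```python
import queue
import threading

# In-process stand-in for a broker queue: each get() delivers a message
# to exactly one consumer, which is why no email is sent twice.
email_queue = queue.Queue()
sent = []
sent_lock = threading.Lock()

def email_worker(worker_id):
    while True:
        msg = email_queue.get()
        if msg is None:                       # poison pill: stop this worker
            email_queue.task_done()
            break
        with sent_lock:
            sent.append((worker_id, msg))     # "send" the email
        email_queue.task_done()

# Scale out: two competing consumers on the same queue.
workers = [threading.Thread(target=email_worker, args=(i,)) for i in range(2)]
for w in workers:
    w.start()

for i in range(10):
    email_queue.put(f"email-{i}")
for _ in workers:
    email_queue.put(None)
for w in workers:
    w.join()

# Every email was sent exactly once, regardless of which worker took it.
assert sorted(msg for _, msg in sent) == sorted(f"email-{i}" for i in range(10))
```

Adding capacity is just adding another `email_worker` thread (or, with a real broker, another process); the queue's delivery semantics keep the instances from duplicating work.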

1) You can create a coordinating service that schedules tasks for the senders using persistent storage, such as a database table. This service adds send-job records to the table; the sender services scan the table in a loop, take a job, and mark it as "processing" so other instances don't pick up the same job.
2) You can use a queue such as Azure Service Bus to dispatch jobs from the coordinating service.
Also, if you are using microservices, I suggest separating the sending services by transport so you can scale them independently.
I can see the following structure:
NotificationSenderService - the send coordinator; you usually need only one instance of this. Its responsibility is to receive a send-notification request and create a job via the queue or database.
EmailNotificationService, SMSNotificationService, PushNotificationService - the actual senders. You can run as many instances of each as you need. They need access to the database or queue of NotificationSenderService.
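Option 1) can be sketched with an in-memory SQLite table; the schema and column names here are hypothetical:

```python
import sqlite3

# Hypothetical job table for option 1); names are illustrative.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE send_jobs (id INTEGER PRIMARY KEY, payload TEXT, status TEXT)")
db.executemany("INSERT INTO send_jobs (payload, status) VALUES (?, 'pending')",
               [("welcome-email",), ("reset-email",)])
db.commit()

def claim_next_job(conn):
    """Atomically claim one pending job: the conditional UPDATE acts as a
    compare-and-set, so two sender instances can never claim the same row."""
    while True:
        row = conn.execute(
            "SELECT id, payload FROM send_jobs WHERE status = 'pending' LIMIT 1"
        ).fetchone()
        if row is None:
            return None                       # nothing left to do
        cur = conn.execute(
            "UPDATE send_jobs SET status = 'processing' "
            "WHERE id = ? AND status = 'pending'", (row[0],))
        conn.commit()
        if cur.rowcount == 1:                 # we won the race
            return row
        # Another instance claimed it first; loop and try the next row.

job1 = claim_next_job(db)
job2 = claim_next_job(db)
assert job1[0] != job2[0]                     # two claims, two distinct jobs
assert claim_next_job(db) is None             # table drained
```

The key point is the `AND status = 'pending'` guard on the UPDATE: the row-count tells each instance whether it actually won the claim.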

Related

Azure Service Bus Queues vs Topics for one to many(unique)

I have an online service hosted on Azure that asynchronously sends data to on-premises clients.
Each client is identified by a unique code.
Currently there is a single topic, with a subscription for each client that filters on the unique code, which is sent as a property in the message. No message is ever broadcast to all the clients.
I feel that using a topic this way is wrong.
The alternative that comes to mind is a dedicated queue for each client, created on first contact.
Could this be a better approach?
Thanks
In my opinion using Topics and Subscriptions is the right way to go. Here's the reason why:
Currently the routing logic (which message goes to which subscription) is handled by Azure Service Bus based on the rules you have configured. If you go with queues, that routing logic moves into your hosted service: you'll need to ensure the queue exists before sending each message, which increases complexity at the service level.
Furthermore, topics and subscriptions let you build audit-trail functionality (not sure if you're looking for that). You can create a separate subscription with a rule that delivers all messages to it (a true SQL rule), alongside the client-specific subscriptions.
Creating a separate queue for each client is not advisable; this is exactly the problem topics solve.
If you have a separate queue per client, the server has to send messages to multiple queues, which becomes tedious as the number of clients increases.
A single topic with multiple subscriptions is easier to manage, since the server sends each message to only a single topic.
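A toy in-process model (not the Service Bus SDK) of why topic-side routing keeps the sender simple: the topic evaluates each subscription's filter, including an always-true audit rule, so the sender only ever targets one destination:

```python
class Topic:
    """Toy stand-in for a Service Bus topic: the topic, not the sender,
    routes each message to every subscription whose filter matches."""
    def __init__(self):
        self.subscriptions = {}               # name -> (filter rule, inbox)

    def subscribe(self, name, rule):
        self.subscriptions[name] = (rule, [])

    def publish(self, message):
        for rule, inbox in self.subscriptions.values():
            if rule(message):
                inbox.append(message)

topic = Topic()
# One subscription per client, filtering on the client-code property.
topic.subscribe("client-A", lambda m: m["client_code"] == "A")
topic.subscribe("client-B", lambda m: m["client_code"] == "B")
# Audit subscription with an always-true rule (like a true SQL rule).
topic.subscribe("audit", lambda m: True)

topic.publish({"client_code": "A", "body": "hello A"})
topic.publish({"client_code": "B", "body": "hello B"})

assert [m["body"] for m in topic.subscriptions["client-A"][1]] == ["hello A"]
assert len(topic.subscriptions["audit"][1]) == 2   # audit sees everything
```

With per-client queues, the `publish` logic above (pick the right destination, create it if missing) would live in your service instead of in the bus.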

Scalable Request Response pattern using Azure Service Bus

We are evaluating Azure Service Bus for a request/response pattern between the web server and the app server. We are planning to have two queues:
Request Queue
Response Queue
The web server will push a message to the request queue and subscribe to the response queue.
By comparing the MessageId and CorrelationId, it can match the response, which can then be sent back to the browser.
But in the cloud, with elastic scaling, web server (and app server) instances can be added or removed at any time.
We are wondering whether this pattern will still work optimally.
To make this work, we would have to have one request queue and multiple topics (one for each web server instance).
This has two downsides:
Along with adding/removing web server instances, we would have to create/delete topics as well.
Every message would be pushed to all the topics, so every message would be processed by all the web servers, which is not efficient.
Please share your thoughts.
Thanks In Advance
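The MessageId/CorrelationId matching the question describes can be sketched with plain in-memory lists standing in for the two queues (illustrative only, not the Service Bus API):

```python
import uuid

request_queue = []     # stands in for the request queue
response_queue = []    # stands in for the response queue

def send_request(body):
    """Web tier: push a request and remember its MessageId."""
    message_id = str(uuid.uuid4())
    request_queue.append({"message_id": message_id, "body": body})
    return message_id

def app_server_process():
    """App tier: consume one request and reply, copying its MessageId
    into CorrelationId so the caller can match the response."""
    req = request_queue.pop(0)
    response_queue.append({"correlation_id": req["message_id"],
                           "body": req["body"].upper()})

def receive_response(message_id):
    """Web tier: find the reply whose CorrelationId matches our MessageId."""
    for i, resp in enumerate(response_queue):
        if resp["correlation_id"] == message_id:
            return response_queue.pop(i)
    return None

mid = send_request("ping")
app_server_process()
resp = receive_response(mid)
assert resp is not None and resp["body"] == "PING"
```

Note that this only needs instance affinity if the originating web server holds state that the reply handler requires; otherwise any instance can pick up any response, which is the point of the answers below.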
When you scale out your endpoint, you don't want instance affinity. You want to rely on competing consumers and not care which instance of your endpoint processes a message.
For example, if you receive a response and write it to a database, you most likely don't care which instance of the endpoint wrote the data. But if you have some in-memory state, or any other information available only to the endpoint that originated the request, and processing reply messages requires that information, then you have instance affinity and need to either remove it or use technology that addresses it, for example something like SignalR with a backplane to communicate a reply message to all your web endpoint instances.
Note that ideally you should avoid instance affinity as much as you can.
I know this is old, but thought I should comment to complete this thread.
I agree with Sean.
In principle, do not design with instance affinity in mind.
Any design should work irrespective of number of instances and whichever instance runs the code.
Microsoft does recommend the same when designing application architecture for running in the cloud.
In your case, I do not think you should plan to have one topic for each instance.
You should just put the request messages into one topic, with a subscription to allow your receiving app service to process those request messages.
When your receiving app service scales out, that's where your design needs to allow reading messages from the subscription from multiple receivers (multiple instances), which is described in the Competing consumers pattern.
https://learn.microsoft.com/en-us/azure/architecture/patterns/competing-consumers
Please post what you have finally implemented.

notification in saas application over azure

We are working on a SaaS application (built on Azure). The web server and app server are shared among all tenants, but their databases are separate (SQL Azure).
Now we need to implement a notification service that can generate notifications based on event subscriptions. The system can generate different kinds of events (such as "account locked" and many others), and users can configure notification rules on these events. Notifications can take the form of email and SMS.
We are planning to implement a queue for events. An event notifier will push an event onto this queue, and a notification engine will subscribe to it. Whenever the engine receives a new event, it will check whether a notification rule is configured for that event type. If so, it will create a notification, which results in emails/SMS. These emails/SMS can be stored in a database or pushed to another queue; a separate background process (worker role) can then process them.
Here are my queries.
Should we keep one single queue (for events) for all tenants or create a separate queue per tenant? If we keep a single queue, we can have a shared subscriber service that subscribes to it, and we can easily scale that service in and out.
Since we have a separate database per tenant, we could store their emails in their respective databases and have a service poll the databases and send emails at a defined interval. But I am not sure how we would share the subscriber code in this case.
We could store mails in a NoSQL database (like Azure table storage). A subscriber (Windows service/worker role) could poll this table and send mails at a defined interval. Again, scaling can be a challenge here too.
We could store emails in a queue (RabbitMQ, for instance). A worker role can subscribe to this queue. Scaling the worker role should not be an issue if we keep a single queue for all tenants.
Please provide your inputs on these points.
Thanks In Advance
I would separate queues not by tenant but by function, so that each queue handler is specific to the type of message it processes.
E.g.: an order-processing queue, an account-setup queue, etc.
Creating queues per tenant is a headache to manage when you want to scale based on them, and you would presumably need to sync/add/remove them as customers come and go. So I would avoid this scenario.
Ultimately, scaling based on multiple queues is harder without an auto-scaling service such as CloudMonix (a commercial product I helped build).
HTH
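The event → rule → notification flow described in the question can be sketched as follows; the rule store and channel names are hypothetical:

```python
# Hypothetical rule store: event type -> list of notification channels
# the user has configured for that event.
rules = {
    "account_locked": ["email", "sms"],
    "invoice_ready": ["email"],
}

outgoing = []   # stands in for the email/SMS queue or table

def handle_event(event):
    """Notification engine: on each event from the queue, check whether a
    rule is configured and, if so, emit one notification per channel."""
    for channel in rules.get(event["type"], []):
        outgoing.append({"channel": channel,
                         "tenant": event["tenant"],
                         "event": event["type"]})

handle_event({"type": "account_locked", "tenant": "t1"})
handle_event({"type": "password_changed", "tenant": "t1"})  # no rule: ignored

assert [n["channel"] for n in outgoing] == ["email", "sms"]
```

Because every notification carries its tenant, a single shared subscriber service (the single-queue option) can still route per-tenant work to the right database.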

Azure Service Bus Queue grouped messages

I have a web API application that performs different types of actions on a domain entity called Application. Actions like "Copy", "Update", "Recycle", "Restore", etc.
These actions need to be executed, per Application, in first-in-first-out order, not randomly or simultaneously. However, two actions can be processed simultaneously as long as they are for two separate Applications.
What I need is a kind of queue, but not one big queue for all requests: a queue of actions for each Application in the database.
Knowing this, I think an Azure Service Bus queue is a good fit for this scenario.
However, the only solution I can think of right now is to programmatically create a queue for each Application in the database and start listening to each one.
Is it possible to get messages from a queue based on a filter (keeping the FIFO principle), so that I only have to subscribe to one queue (instead of one per Application, which is very hard to maintain)?
What you want is Azure Service Bus Topics/Subscriptions.
Subscriptions allow you to filter messages that are published to a topic using a SqlFilter on the message headers.
The article linked above should provide enough examples to meet your needs.
I think you can solve this by using sessions.
I just came across this very clear article: https://dev.to/azure/ordered-queue-processing-in-azure-functions-4h6c which explains in detail how Azure Service Bus queue sessions work.
In short: by setting a SessionId on the messages you can enforce ordered processing within a session; the downside is that messages within a session will not be parallelized across multiple consumers of the queue.
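A small in-process sketch of the session idea (not the actual Service Bus sessions API): messages sharing a SessionId stay in FIFO order, while different sessions can proceed in parallel:

```python
from collections import defaultdict
import threading

messages = [  # SessionId groups the per-Application actions
    {"session": "app-1", "action": "Copy"},
    {"session": "app-2", "action": "Update"},
    {"session": "app-1", "action": "Recycle"},
    {"session": "app-1", "action": "Restore"},
]

# Group by session: within a session, arrival order is preserved;
# different sessions can be handled by different consumers in parallel.
sessions = defaultdict(list)
for m in messages:
    sessions[m["session"]].append(m["action"])

processed = {}

def process_session(session_id, actions):
    processed[session_id] = list(actions)   # strictly FIFO per session

threads = [threading.Thread(target=process_session, args=(sid, acts))
           for sid, acts in sessions.items()]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert processed["app-1"] == ["Copy", "Recycle", "Restore"]
assert processed["app-2"] == ["Update"]
```

Mapping each Application's ID to the SessionId gives exactly the "one logical queue per Application" behavior the question asks for, with only one physical queue.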

Azure Loosely Coupled / Scalable

I have been struggling with this concept for a while. I am attempting to come up with a loosely coupled Azure component design that is completely scalable using queues and worker roles, which dequeue and process the items. I can scale the worker roles at will, and publishing to the queue is never an issue. So far so good. But it seems the only real-world model this works for is fire-and-forget. It would work fantastically for logging and other one-way operations, but let's say I want to upload a file using queues/worker roles, save it to blob storage, and then get a response back once it is complete. Or should this type of model not be used for online apps? What is the best way to send a notification back once an operation completes? Do I create a response queue and then (somehow) retrieve the associated response? Any help is greatly appreciated!
I usually do a polling model.
Client (usually a browser) sends a request to do some work.
Front-end (web role) enqueues the work and replies with an ID.
Back-end (worker role) processes the queue and stores the result in a blob or table entity keyed by that ID.
Client polls ("Is it done yet?") at some interval.
Front-end checks to see if the blob or table entity is there and replies accordingly.
See http://blog.smarx.com/posts/web-page-image-capture-in-windows-azure for one example of this pattern.
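The polling model above can be sketched in a few lines; the job store here is a plain dict standing in for the queue plus blob/table storage:

```python
import uuid

jobs = {}   # stands in for the queue plus the blob/table result store

def enqueue_work(payload):
    """Front end: enqueue the work and reply with an ID immediately."""
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"status": "pending", "payload": payload, "result": None}
    return job_id

def worker_run_once():
    """Back end (worker role): process one pending job, store the result."""
    for job in jobs.values():
        if job["status"] == "pending":
            job["result"] = job["payload"][::-1]   # placeholder "work"
            job["status"] = "done"
            return

def poll(job_id):
    """Front end answering the client's "is it done yet?" request."""
    job = jobs[job_id]
    return job["result"] if job["status"] == "done" else None

job_id = enqueue_work("upload.bin")
assert poll(job_id) is None          # not done yet: client keeps polling
worker_run_once()
assert poll(job_id) == "upload.bin"[::-1]
```

The client never blocks on the worker; it only ever sees "pending" or the finished result, which is what keeps the front end and back end loosely coupled.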
You could also look into the Service Bus (AppFabric) instead of plain storage queues. With the Service Bus you can send messages, use queues, etc., and you could switch to publish/subscribe instead of polling.