Algorithm to trigger bulk events by schedule - node.js

I'd like to create a web app that allows users to do email outreach, but I'm having trouble finding a good solution.
I'd like each user to be able to send 100 emails per day, spread over a configurable window, e.g. 6 am to 10 am. I'm able to determine a delivery schedule per user (based on the times they configure), but because users can change their email schedules at any time, I'd have to reconfigure the order of processing.
Is there a queue type in Redis (for instance) that triggers by time?
Or a way to trigger events on a schedule in nodejs that's scalable?

There is a Redis feature called keyspace notifications, which allows clients to subscribe to Pub/Sub channels in order to receive Redis events (an example is the "key expiry" event).
Documentation: http://redis.io/topics/notifications
For your use case, you could use the "key expiry" event.
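A minimal sketch of that approach, assuming the ioredis client (any client with Pub/Sub support works) and that you encode the recipient in the key name:

```js
// Requires expired-key events to be enabled on the Redis server.
const Redis = require('ioredis');

const pub = new Redis();
const sub = new Redis(); // a connection in subscriber mode can only subscribe

async function main() {
  // Enable expired-key events (can also be set in redis.conf:
  // notify-keyspace-events Ex)
  await pub.config('SET', 'notify-keyspace-events', 'Ex');
  await sub.subscribe('__keyevent@0__:expired'); // expiry events, database 0

  sub.on('message', (_channel, key) => {
    // The key name carries the payload, e.g. "send-email:42"
    if (key.startsWith('send-email:')) {
      sendEmailForUser(key.split(':')[1]); // hypothetical sender
    }
  });

  // Schedule a send 30 seconds from now by letting a key expire
  await pub.set('send-email:42', '1', 'EX', 30);
}

main().catch(console.error);
```

One caveat from that documentation page: expired events fire when Redis actually deletes the key, not necessarily the instant the TTL elapses, and Pub/Sub delivery is fire-and-forget, so a subscriber that is down misses events.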

Related

The most effective way to implement event scheduling using AWS and Serverless

Use case:
A user creates a meeting appointment and should be notified 24 hours / 1 hour / 5 minutes before the appointment.
Current implementation:
1. When an appointment is created, it is saved in DynamoDB with a TTL of (appointment time - 24 hours).
2. When the TTL expires, DynamoDB removes the item.
3. A Lambda listens to the DynamoDB stream and is triggered by that removal. The item carries three additional boolean flags: 24hours, 1hour, 5minutes. When the item is removed at the 24-hour mark, this Lambda sets the 24hours flag to true (in order to know in the next step that the 24-hour push notification has already been sent), saves the item again with a new TTL of (appointment time - 1 hour), and the push notification is sent.
4. Same as steps 2 and 3: the TTL expires, the Lambda sets the 1hour flag to true, sets a new TTL of (appointment time - 5 minutes), saves the item again, and the push notification is sent.
5. Again: the TTL expires and the final push notification is sent.
Concern: DynamoDB does not guarantee that an item will be removed exactly when its TTL expires.
Are there any solutions more efficient than mine?
Current stack: AWS/Serverless Framework/NodeJS.
Recently (10-Nov-2022) AWS launched a new service called EventBridge Scheduler. I hope this will fulfill all of your requirements, and it's serverless too. What you have to do is create three one-time schedules in EventBridge Scheduler and set the target as your current AWS Lambda function. Then you don't need Amazon DynamoDB anymore. I hope this answers your problem.
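A minimal sketch of creating those one-time schedules with the AWS SDK for JavaScript v3 (@aws-sdk/client-scheduler); the Lambda ARN and the role EventBridge Scheduler assumes are placeholders for your own resources:

```js
const {
  SchedulerClient,
  CreateScheduleCommand,
} = require('@aws-sdk/client-scheduler');

const client = new SchedulerClient({});

async function scheduleReminder(appointmentId, fireAt, label) {
  // One-time schedule expression: at(yyyy-mm-ddThh:mm:ss), UTC by default
  const expression = `at(${fireAt.toISOString().slice(0, 19)})`;
  await client.send(new CreateScheduleCommand({
    Name: `appointment-${appointmentId}-${label}`,
    ScheduleExpression: expression,
    FlexibleTimeWindow: { Mode: 'OFF' },
    Target: {
      Arn: process.env.NOTIFY_LAMBDA_ARN,      // your existing Lambda
      RoleArn: process.env.SCHEDULER_ROLE_ARN, // role allowed to invoke it
      Input: JSON.stringify({ appointmentId, label }),
    },
  }));
}

// The three reminders, relative to the appointment time
async function scheduleAllReminders(appointmentId, appointmentTime) {
  const ms = appointmentTime.getTime();
  await scheduleReminder(appointmentId, new Date(ms - 24 * 3600 * 1000), '24h');
  await scheduleReminder(appointmentId, new Date(ms - 3600 * 1000), '1h');
  await scheduleReminder(appointmentId, new Date(ms - 5 * 60 * 1000), '5m');
}
```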
If you just want to use the Serverless stack then this is one of the best ways to do it. But if you want something more functional, you can create a Bull queue server (https://www.npmjs.com/package/bull) and host it inside an EC2 instance. It provides delayed jobs and queues.
One suggestion I would like to give is to not use Lambda directly to send notifications; instead, create a queue and send notifications via that, because if your system scales, your Lambdas will start getting throttled when they have to send thousands of notifications at a single point in time.
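A sketch of that suggestion using Amazon SQS via @aws-sdk/client-sqs; the queue URL is a placeholder, and the consumer side would be a Lambda with an SQS trigger whose batch size and concurrency bound the send rate:

```js
const { SQSClient, SendMessageCommand } = require('@aws-sdk/client-sqs');

const sqs = new SQSClient({});

// Instead of sending the push directly, the scheduler-triggered Lambda only
// enqueues a job; a separate consumer drains the queue at a bounded rate.
async function enqueueNotification(appointmentId, label) {
  await sqs.send(new SendMessageCommand({
    QueueUrl: process.env.NOTIFICATION_QUEUE_URL, // placeholder
    MessageBody: JSON.stringify({ appointmentId, label }),
  }));
}
```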

Notification Service in microservices architecture [closed]

We have a microservices architecture supporting a big application. All the services communicate using Azure Service Bus as a medium. Currently, we send notifications (immediate/scheduled) from different services on a per-need basis. Hence the need for a separate notification service that could take on the load and the responsibility of formatting and sending notifications (email, text, etc.).
What I have thought:
The notification service will have its own database holding data related to notifications (setup, templates, schedules, etc.) and also some master data (copied from other sources). I don't want to copy all the transactional data to this DB (for obvious reasons), but we might need transactional and historic data to form a notification. I am planning to subscribe to service bus events (published by other services), and the onus of sending the data needed to format the notification will be on the service raising the service bus event. The notification service will rely on that data to fill in the template (stored in its own DB) and then send the notification.
In short, the job of the notification service will be to listen to service bus events, fill in the template from the data in the event, and then send the notification.
Questions:
1. What if the data the notification service receives from a service bus event does not include everything the notification template needs? How do I query/get the missing data from the other service?
2. Suppose a service publishes 100 events for a single operation and we need to send a single notification for that whole operation. How does the notification service manage that, since it will get 100 different messages separately?
3. Since the notification trigger depends on data sent from other sources (service bus events), what happens when a notification is scheduled (let's say 6 am every day)? How do we get the data needed for the notification (since the data is not in the notification DB)?
I am looking for some experienced advice and some material to refer to. Thanks in advance.
You might have to implement notifications as a service, which means imagining you are exporting your application as a plugin in Azure itself. A few points here:
1. Only accept a notification when the information in it is valid.
2. Have a caching system on both the front end (state management) and the backend microservices (Redis or any caching system).
3. Capture an event ID for each operation; it's good practice to track the complex operations of your application this way, and it lets you deal with duplicate notifications. Where possible, avoid sending such notifications to the user at all, or send one notification covering a group of notifications in one message.
4. Put circuit breaker logic in place to handle invalid notifications: put this type of notification in a retry queue (of 30 minutes, maybe?) and republish the event again; a minimal sketch of such a retry queue follows.
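A sketch of point 4 with amqplib, using the RabbitMQ dead-letter mechanism from the first reference below; the exchange/queue names are assumptions, and the 30-minute TTL matches the suggestion above:

```js
const amqp = require('amqplib');

async function setup() {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();

  await ch.assertExchange('notifications', 'direct', { durable: true });
  await ch.assertExchange('notifications.retry', 'direct', { durable: true });

  // Main queue: rejected messages dead-letter into the retry exchange
  await ch.assertQueue('notifications.main', {
    durable: true,
    arguments: { 'x-dead-letter-exchange': 'notifications.retry' },
  });
  await ch.bindQueue('notifications.main', 'notifications', 'send');

  // Retry queue: no consumer; after 30 minutes messages are dead-lettered
  // back to the main exchange (they keep their original routing key)
  await ch.assertQueue('notifications.retry', {
    durable: true,
    arguments: {
      'x-message-ttl': 30 * 60 * 1000,
      'x-dead-letter-exchange': 'notifications',
    },
  });
  await ch.bindQueue('notifications.retry', 'notifications.retry', 'send');

  // In the consumer, ch.nack(msg, false, false) sends a failed message
  // into this 30-minute retry loop instead of dropping it.
  return ch;
}
```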
References
https://www.rabbitmq.com/dlx.html
https://microservices.io/patterns/reliability/circuit-breaker.html
https://redis.io/topics/introduction
Happy coding :)
In microservices and domain-driven design it's sometimes hard to work out when to start splitting services. Having each service be responsible for constructing and sending its own notifications is perfectly valid.
It is when additional decisions need to be made that are not related to the 'origin' service that things become more tricky.
Example 1
You have an order microservice that sends an email to the sales team and the user when an order is placed.
Then the payment service updates sales and the user with an SMS message when the payment is processed.
You could then decide to let the user manage their notification preferences. They can now decide if they want SMS / email / push messages, and which messages they would like to receive.
We now have a problem. These notification preferences would need to be understood by every service sending messages. Any new team or service that starts sending messages also needs to remember to implement these preferences.
You may also want the user to view all the historic messages they have been sent. Again you get into a problem where there is no single source for that information.
Example 2
We now have a notification service; it is listening for order created, order updated, order completed and payment processed events.
It is listening for:
Order Created
Order Updated
only to make sure it has the information it needs to construct the messages. It is common, and in a lot of cases required, to have system-wide redundancy of data when using microservices. You need to imagine that each service is an island, so while it feels wasteful to store that information again, if it is required for that service to perform its work then it is valid.
Note: don't store the data wholesale, store only what is relevant for that service.
We can then use the:
Order Complete
Payment Processed
events as triggers to actually start constructing and sending the messages.
Problems:
Understanding if the service has all the required data
This is up to the service to determine. If the Order Complete event comes through but the service has not yet received an Order Created event, it should store the Order Complete event and try to process it again in the future, when all the information is available.
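A rough sketch of that buffering idea; notificationStore and sendOrderCompleteNotification are hypothetical wrappers around the notification service's own database and sender:

```js
async function onOrderComplete(event) {
  const order = await notificationStore.getOrder(event.orderId);
  if (!order) {
    // The Order Created event hasn't arrived yet: park this event
    await notificationStore.savePendingEvent(event);
    return;
  }
  await sendOrderCompleteNotification(order);
}

// A periodic sweep re-processes parked events once their data has arrived
async function retryPendingEvents() {
  for (const event of await notificationStore.listPendingEvents()) {
    await onOrderComplete(event);
  }
}
```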
100 events resulting in a notification
Data aggregation is also an important microservice concept, and there are many ways to ensure completeness that will come down to your specific use case.

How to send a message to Microsoft Event Hub with a DB transaction?

I want to send an event to Microsoft Event Hub within a DB transaction:
Explanation:
1. A user hits an order-creation endpoint.
2. OrderService accepts the order and puts it into the DB.
3. OrderService now wants to send the orderId as an event to other services using Event Hub.
How can I achieve transactional behaviour for steps 2 and 3?
I know these solutions:
1. Outbox pattern: I put the message in another table within the order-creation transaction. A cron/scheduler takes messages from the table, sends them, and marks them delivered; the next run picks up only the undelivered messages (a minimal relay sketch follows after this list).
2. Use a database audit log and a library that takes care of these things. The library binds the database table to Event Hub; then, on every update, the library sends the change to Event Hub.
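A minimal sketch of the outbox relay from option 1, assuming a Postgres database via pg (connection taken from the PG* environment variables) and the @azure/event-hubs client; table and column names are made up for illustration:

```js
const { Pool } = require('pg');
const { EventHubProducerClient } = require('@azure/event-hubs');

const pool = new Pool();
const producer = new EventHubProducerClient(
  process.env.EVENTHUB_CONNECTION_STRING, 'orders');

// Called by the order endpoint: order row and outbox row commit atomically
async function createOrder(order) {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    const { rows } = await client.query(
      'INSERT INTO orders (payload) VALUES ($1) RETURNING id', [order]);
    await client.query(
      'INSERT INTO outbox (order_id, delivered) VALUES ($1, false)',
      [rows[0].id]);
    await client.query('COMMIT');
  } catch (err) {
    await client.query('ROLLBACK');
    throw err;
  } finally {
    client.release();
  }
}

// Cron/scheduler: relay undelivered outbox rows to Event Hubs, then mark them
async function relayOutbox() {
  const { rows } = await pool.query(
    'SELECT order_id FROM outbox WHERE delivered = false LIMIT 100');
  if (rows.length === 0) return;
  await producer.sendBatch(rows.map((r) => ({ body: { orderId: r.order_id } })));
  await pool.query(
    'UPDATE outbox SET delivered = true WHERE order_id = ANY($1)',
    [rows.map((r) => r.order_id)]);
}
```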
I wanted to know: is there any built-in transactional feature in Event Hubs?
Or is there any better way to handle this?
There is no concept of transactions within Event Hubs at present. I'm not sure, given the limited context that was shared, that Event Hubs is the best fit for your scenario. Azure Service Bus has transaction support and may be a more natural fit for your intended flow.
In this kind of distributed scenario, regardless of which message broker you decide on, I would advise embracing eventual consistency and considering a pattern similar to:
Your order creation endpoint receives a request
The order creation endpoint assigns a unique identifier to the request and emits the event to Event Hubs; if the send was successful, it returns a 202 (Accepted) to the caller along with a Retry-After header to indicate that the caller should wait that period of time before checking the status of the order's creation.
Some process is responsible for reading events from the Event Hub and creating that order within the database. Depending on your ecosystem's tolerance, this may be a dedicated process or could be something like an Azure Function with an Event Hubs trigger.
Other event consumers interested in orders will also see the creation request and will call into your order service or database for the details using the unique identifier that was assigned by the order creation endpoint; this may or may not be the official order number within the system.
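A minimal sketch of the first two steps with Express and the @azure/event-hubs client; the connection string, hub name, and status-polling contract are assumptions:

```js
const express = require('express');
const crypto = require('crypto');
const { EventHubProducerClient } = require('@azure/event-hubs');

const producer = new EventHubProducerClient(
  process.env.EVENTHUB_CONNECTION_STRING,
  'orders'
);

const app = express();
app.use(express.json());

app.post('/orders', async (req, res) => {
  const requestId = crypto.randomUUID(); // the unique identifier
  try {
    await producer.sendBatch([{ body: { requestId, order: req.body } }]);
    // Accepted, not created: the caller polls a status endpoint for this id
    res.status(202).set('Retry-After', '5').json({ requestId });
  } catch (err) {
    res.status(503).json({ error: 'could not enqueue order creation' });
  }
});

app.listen(3000);
```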

Schedule Nodemailer email based on info in database

I am creating an application that stores events and sends reminder emails, 1 hour before the event, to the people who signed up (the time of each event is stored in the database). At first I was thinking about using cron jobs to schedule these emails, but now I am not sure that will work. Is there another Node module that would let me implement the reminder email functionality?
If you have Redis available to back it, you might look at something like bull.
From the readme:
Minimal CPU usage due to a polling-free design.
Robust design based on Redis.
Delayed jobs.
Schedule and repeat jobs according to a cron specification.
Rate limiter for jobs.
Retries.
Priority.
Concurrency.
Pause/resume—globally or locally.
Multiple job types per queue.
Threaded (sandboxed) processing functions.
Automatic recovery from process crashes.
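A minimal sketch of a delayed reminder job with bull, assuming a local Redis; the event shape and the sendReminderEmail helper (your Nodemailer wrapper) are placeholders:

```js
const Queue = require('bull');

const reminderQueue = new Queue('event-reminders', 'redis://127.0.0.1:6379');

// Producer: call this when an event is created
async function scheduleReminder(event) {
  const delay =
    new Date(event.startsAt).getTime() - Date.now() - 60 * 60 * 1000;
  await reminderQueue.add(
    { eventId: event.id, email: event.attendeeEmail },
    // jobId makes the reminder idempotent per event
    { delay: Math.max(delay, 0), jobId: `reminder:${event.id}` }
  );
}

// Consumer: sends the email once the delay elapses
reminderQueue.process(async (job) => {
  await sendReminderEmail(job.data.email, job.data.eventId);
});
```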
You can give node-schedule a try. It uses cron underneath.
At a regular interval, you can check whether there is an upcoming event, and send the reminder to the appropriate people.
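A minimal sketch of that polling approach with node-schedule; the 5-minute interval and the two helpers around your database and Nodemailer transport are assumptions:

```js
const schedule = require('node-schedule');

// Every 5 minutes, find events starting roughly one hour from now
schedule.scheduleJob('*/5 * * * *', async () => {
  const from = new Date(Date.now() + 55 * 60 * 1000);
  const to = new Date(Date.now() + 65 * 60 * 1000);
  const upcoming = await findEventsStartingBetween(from, to); // hypothetical
  for (const event of upcoming) {
    await sendReminderEmail(event.attendeeEmail, event.id); // hypothetical
  }
});
```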

How to make multiple API calls with rate limits per user using RabbitMQ?

In my app I am getting data on behalf of different users via one API which has a rate limit of 1 API call every 2 seconds per user.
Currently I am storing all the calls I need to make in a single message queue. I am using RabbitMQ for this.
There is currently one consumer who is taking one message at a time, doing the call, processing the result and then start with the next message.
The queue is filling up faster than this single consumer can make the API calls (1 call every 2 seconds as I don't know which user comes next and I don't want to hit API limits).
My problem now is that I don't know how to add more consumers, which in theory would be possible since the queue holds jobs for different users and the API rate limit is per user; e.g. I could do 2 API calls every 2 seconds if they are for different users.
However, I have no information about the messages in the queue. They could be from a single user or from many different users.
The only solution I see right now is to create a separate queue for each user. But I have many different users (say 1,000) and would rather stay with one queue.
If possible I would stick with RabbitMQ as I use this for other similar tasks as well. But if I need to change my stack I would be willing to do so.
App is using the MEAN stack.
You will need to maintain state somewhere. I had a similar application, and what I did was maintain state in Redis: before every call, check whether the user has made a request in the last 2 seconds, e.g.:
Redis key:
user:<user_id> // value is an epoch timestamp
Update Redis once the request is made.
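A sketch of that gate with ioredis: SET with the NX and PX options succeeds only if the user has not called in the last 2 seconds, so a consumer can requeue messages for users that are still limited:

```js
const Redis = require('ioredis');
const redis = new Redis();

// Returns true if the call is allowed, false if the user is still limited
async function tryAcquire(userId) {
  // NX: only set if absent; PX 2000: auto-expire after 2 seconds
  const result = await redis.set(`user:${userId}`, Date.now(), 'PX', 2000, 'NX');
  return result === 'OK'; // null means the key already existed
}

// In the RabbitMQ consumer (amqplib channel):
async function handleMessage(channel, msg) {
  const { userId } = JSON.parse(msg.content.toString());
  if (await tryAcquire(userId)) {
    await makeApiCall(userId); // hypothetical API wrapper
    channel.ack(msg);
  } else {
    channel.nack(msg, false, true); // requeue and try again later
  }
}
```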
Reference:
redis
