Which Amazon service should I use to implement a time-based queue dispatcher (serverless application)? - node.js

The user submits a CSV file that contains a time (interval) along with a message. I want to submit each message to a chat API at the time specified with it. I am using DynamoDB to store the messages and a Lambda function that reads them from DynamoDB and, one at a time, uses setTimeout to publish each message to the chat. I am using Node.js to implement this functionality, and I also created an Amazon API to trigger that Lambda function.
But this approach is not working. Can anyone suggest which other service I should use to do the same? Is there an Amazon queue service for that?

From the context of your question, what I understand is that you basically need a future timer: a system that can notify you at some point in the future, with some metadata, so you can take an action.
If this is the case, off the top of my head I think you can use the solutions below to achieve your goal.
Pre-requisites: I assume you are already using DynamoDB (aka DDB) as the primary store, so all CSV data is persisted in DynamoDB, and you are using a DynamoDB stream to read inserted and updated records and trigger your Lambda function (let's call this Lambda function Proxy_Lambda).
Create another Lambda function that processes records and sends the message to your chat system (let's call this Lambda function Processor_Lambda).
Option 1: AWS SQS
Proxy_Lambda reads records from the DDB stream and, based on the future timestamp attribute present in the record, publishes a message to an AWS SQS queue with an initial visibility timeout derived from that timestamp. Sample example: Link. Remember, these messages will not be visible to any consumer until the visibility timeout elapses.
Add a trigger for Processor_Lambda to start polling from this SQS queue.
Once a message becomes visible in the queue (after the initial timeout), Processor_Lambda consumes it and sends the chat event.
Result: you get a future timer using the SQS visibility timeout feature. The con here is that you will not be able to view the in-flight SQS message content until the visibility timeout of the message expires.
Note: the maximum visibility timeout is 12 hours. So if your use case demands a timer longer than 12 hours, you need to add logic in Processor_Lambda to send the message back to the queue with a new visibility timeout.
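As an illustration of that last point, here is a minimal sketch of what Processor_Lambda's re-delay logic could look like. This assumes the SQS event source mapping has "Report batch item failures" enabled; the queue URL, the payload fields (fireAt, chatMessage), and the sendToChat helper are placeholders of my own, not something from the original answer.

```javascript
// Sketch of Processor_Lambda (Node.js, AWS SDK v3): if a message is not due yet,
// push its visibility further into the future instead of processing it.
const { SQSClient, ChangeMessageVisibilityCommand } = require("@aws-sdk/client-sqs");

const sqs = new SQSClient({});
const QUEUE_URL = process.env.QUEUE_URL;   // hypothetical env var
const MAX_VISIBILITY = 12 * 60 * 60;       // SQS cap: 12 hours

async function sendToChat(message) { /* call your chat API here */ }

exports.handler = async (event) => {
  const batchItemFailures = [];
  for (const record of event.Records) {
    const { fireAt, chatMessage } = JSON.parse(record.body);   // assumed payload shape
    const secondsLeft = Math.ceil((new Date(fireAt).getTime() - Date.now()) / 1000);

    if (secondsLeft > 0) {
      // Not due yet: extend the visibility timeout (capped at 12 h) and keep the message queued.
      await sqs.send(new ChangeMessageVisibilityCommand({
        QueueUrl: QUEUE_URL,
        ReceiptHandle: record.receiptHandle,
        VisibilityTimeout: Math.min(secondsLeft, MAX_VISIBILITY),
      }));
      // Report it as "failed" so Lambda does not delete it; note that each re-delay
      // counts against the queue's redrive (DLQ) policy, if one is configured.
      batchItemFailures.push({ itemIdentifier: record.messageId });
    } else {
      await sendToChat(chatMessage);
    }
  }
  return { batchItemFailures };
};
```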
Option 2: AWS Step Functions (my preferred approach ;) )
Create a state machine in AWS Step Functions to generate task timers (let's call it Timer_Function). These task timers keep looping through a Wait state until the timer expires; the timer window is provided as input to the state machine.
Link Timer_Function to trigger Processor_Lambda once the task timer expires. Basically, that will be the step after the Wait step.
Connect Proxy_Lambda with Timer_Function, i.e. Proxy_Lambda reads records from the DDB stream and invokes Timer_Function with the message interval attribute present in the DynamoDB record and the necessary payload.
Result: a Timer_Function that waits until the time window (message interval) expires, which in turn gives you a mechanism to trigger Processor_Lambda in the future (i.e. at the end of the timer window).
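A minimal sketch of the Proxy_Lambda side of this, assuming the state machine's first state is a Wait state with TimestampPath: "$.fireAt" followed by a Task state that invokes Processor_Lambda (the ARN env var, field names, and stream handling are illustrative assumptions):

```javascript
// Sketch of Proxy_Lambda: for each new DDB stream record, start a Step Functions
// execution that waits until the message is due and then invokes Processor_Lambda.
const { SFNClient, StartExecutionCommand } = require("@aws-sdk/client-sfn");
const { unmarshall } = require("@aws-sdk/util-dynamodb");

const sfn = new SFNClient({});
const STATE_MACHINE_ARN = process.env.TIMER_STATE_MACHINE_ARN; // hypothetical env var

exports.handler = async (event) => {
  for (const record of event.Records) {
    if (record.eventName !== "INSERT") continue;
    const item = unmarshall(record.dynamodb.NewImage);          // { id, fireAt, chatMessage } assumed
    await sfn.send(new StartExecutionCommand({
      stateMachineArn: STATE_MACHINE_ARN,
      name: `timer-${item.id}`,                                  // execution names must be unique
      input: JSON.stringify({ fireAt: item.fireAt, chatMessage: item.chatMessage }),
    }));
  }
};
```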
Having said that, I will now leave it up to you to choose the right solution based on your use case and business requirements.

Related

The most effective way to implement event scheduling using AWS and Serverless

Use case:
A user creates a meeting appointment and should be notified 24 hours / 1 hour / 5 minutes before the appointment.
Current implementation:
1. When an appointment is created, it is saved in DynamoDB with a TTL of (appointment time - 24 hours).
2. When the TTL expires, DynamoDB removes the item.
3. A Lambda listens to the DynamoDB stream and is triggered by that removal. The item carries 3 additional boolean flags: 24hours, 1hour, 5minutes. When the item is removed at the 24-hour mark, this Lambda sets the 24hours flag to true (so the next step knows the 24-hour push notification has already been sent), saves the item again with a new TTL of (appointment time - 1 hour), and the push notification is sent.
4. Same as in 2 and 3: the TTL expires, the Lambda sets the 1hour flag to true, sets a new TTL of (appointment time - 5 minutes), saves the item again, and the push notification is sent.
5. Again: the TTL expires and the push notification is sent.
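Roughly, the stream-handler step of the flow above could look like the following sketch (Node.js, AWS SDK v3; the table name, attribute names, and sendPush helper are my own illustrative assumptions, not taken from the question):

```javascript
const { DynamoDBClient } = require("@aws-sdk/client-dynamodb");
const { DynamoDBDocumentClient, PutCommand } = require("@aws-sdk/lib-dynamodb");
const { unmarshall } = require("@aws-sdk/util-dynamodb");

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));
const TABLE = process.env.APPOINTMENTS_TABLE;           // hypothetical table name

const toEpoch = (date) => Math.floor(date.getTime() / 1000);
async function sendPush(appointment) { /* call your push provider here */ }

exports.handler = async (event) => {
  for (const record of event.Records) {
    // TTL deletions arrive as REMOVE events on the stream
    if (record.eventName !== "REMOVE") continue;
    const item = unmarshall(record.dynamodb.OldImage);
    const appointmentTime = new Date(item.appointmentTime);

    await sendPush(item);

    if (!item["24hours"]) {
      // 24h notification just went out; re-insert with a TTL 1 hour before the appointment
      await ddb.send(new PutCommand({
        TableName: TABLE,
        Item: { ...item, "24hours": true, ttl: toEpoch(appointmentTime) - 60 * 60 },
      }));
    } else if (!item["1hour"]) {
      // 1h notification just went out; re-insert with a TTL 5 minutes before the appointment
      await ddb.send(new PutCommand({
        TableName: TABLE,
        Item: { ...item, "1hour": true, ttl: toEpoch(appointmentTime) - 5 * 60 },
      }));
    }
    // else: the 5-minute notification was the last one, nothing to re-insert
  }
};
```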
Concern: DynamoDB does not guarantee that the item will be removed exactly when the TTL expires.
Are there any other solutions that are more efficient than mine?
Current stack: AWS/Serverless Framework/NodeJS.
Recently (10-Nov-2022) AWS launched a new service called EventBridge Scheduler. I hope this will fulfill all of your requirements, and it's serverless too. What you have to do is create 3 one-time schedules in EventBridge Scheduler and set the target to your current AWS Lambda function. Then you don't need Amazon DynamoDB anymore. I hope this answers your problem.
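A minimal sketch of creating one of those one-time schedules from Node.js with the AWS SDK v3 (@aws-sdk/client-scheduler); the naming scheme, role ARN, and payload are assumptions of mine:

```javascript
const { SchedulerClient, CreateScheduleCommand } = require("@aws-sdk/client-scheduler");

const scheduler = new SchedulerClient({});

// One-time schedule that invokes the notification Lambda at a specific UTC time.
// "at(...)" expects a timestamp like at(2024-05-01T09:00:00), with no timezone suffix.
async function scheduleReminder(appointmentId, fireAtIso, payload) {
  await scheduler.send(new CreateScheduleCommand({
    Name: `reminder-${appointmentId}-24h`,                 // hypothetical naming scheme
    ScheduleExpression: `at(${fireAtIso})`,
    FlexibleTimeWindow: { Mode: "OFF" },
    Target: {
      Arn: process.env.NOTIFY_LAMBDA_ARN,                  // your existing Lambda function
      RoleArn: process.env.SCHEDULER_ROLE_ARN,             // role allowing Scheduler to invoke it
      Input: JSON.stringify(payload),
    },
  }));
}
```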
If you just want to use the serverless stack then this is one of the best ways to do it. But if you want something more functional then you can create a bull queue server (https://www.npmjs.com/package/bull) and host it inside an EC2 instance. It provides delayed jobs and queues.
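For reference, a delayed job with bull looks roughly like this (a sketch assuming a reachable Redis instance; queue and field names are up to you):

```javascript
const Queue = require("bull");

// Requires a running Redis instance
const notificationQueue = new Queue("appointment-notifications", "redis://127.0.0.1:6379");

async function sendPushNotification(data) { /* deliver the push here */ }

// Consumer: runs once the delay has elapsed
notificationQueue.process(async (job) => {
  await sendPushNotification(job.data);
});

// Producer: enqueue a notification to be delivered in 24 hours
async function scheduleNotification(appointment) {
  await notificationQueue.add(
    { appointmentId: appointment.id, text: "Your meeting starts in 24 hours" },
    { delay: 24 * 60 * 60 * 1000 } // milliseconds
  );
}
```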
One suggestion I would like to give is to not send notifications directly from the Lambda; instead, create a queue and send notifications via that, because if your system scales, your Lambdas will start getting throttled when they have to send thousands of notifications at a single point in time.

Combine SQS messages that arrive within milliseconds of each other

I am faced with a situation that I am not quite sure how to solve. Basically my system receives data from a third-party source via API gateway, publishes this data to an SNS topic which triggers a lambda function. Based on the message parameters, the lambda function pushes the message to one of three different SQS queues. These queues trigger one of three lambda functions which perform one of three possible actions - create, update or delete items in that order in another third-party system through their API endpoints.
The usual flow would be to first create an entity in the destination system, and then each subsequent action should update or delete this entity. The problem is that sometimes I receive data for the same entity from the source within milliseconds, so my system is unable to create the entity on the destination in time, because their API needs at least 300-400 ms to do so. So when my system tries to update the entity, it doesn't exist yet, and my system creates it. But since a create action is already in the process of executing, this creates a duplicate entry on my destination.
So my question is, what is the best practice to consolidate messages for the same entity that arrive within less than a second of each other?
My Thoughts so far:
I am thinking of using Redis to consolidate messages for the same entity before pushing them to the SNS topic, but I was hoping there would be a more straightforward approach, as I don't want to introduce another layer of logic.
Any help would be much appreciated. Thank you.
The best option would be to use an Amazon SQS FIFO queue, with each message using a Message Group ID that is set to the unique ID of the item that is being created.
In a FIFO queue, SQS will ensure that messages are processed in-order, and will only allow one message per Message Group ID to be received at a time. Thus, any subsequent messages for the same Message Group ID will wait until an existing message has been fully processed.
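A sketch of what publishing to such a FIFO queue could look like (Node.js, AWS SDK v3; the queue URL and entity fields are assumptions):

```javascript
const { SQSClient, SendMessageCommand } = require("@aws-sdk/client-sqs");

const sqs = new SQSClient({});

// Publish an action for an entity; all messages for the same entity share a MessageGroupId,
// so SQS delivers them strictly in order and one at a time per entity.
async function publishEntityAction(entityId, action, payload) {
  await sqs.send(new SendMessageCommand({
    QueueUrl: process.env.ENTITY_QUEUE_URL,            // must be a .fifo queue
    MessageBody: JSON.stringify({ entityId, action, payload }),
    MessageGroupId: entityId,                          // serializes processing per entity
    MessageDeduplicationId: `${entityId}-${action}-${Date.now()}`, // or enable content-based dedup
  }));
}
```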
If this is not acceptable, then AWS Lambda now supports batch windows of up to 5 minutes for functions with Amazon SQS as an event source:
AWS Lambda now allows customers using Amazon Simple Queue Service (Amazon SQS) as an event source to define a wait period, called MaximumBatchingWindowInSeconds, to allow messages to accumulate in their SQS queue before invoking a Lambda function. In addition to Batch Size, this is a second option to send records in batches, to reduce the number of Lambda invokes. This option is ideal for workloads that are not time-sensitive, and can choose to wait to optimize cost.
Previously, Lambda functions polling from an SQS queue would send messages in batches of up to 10 before invoking the function. Now, customers can also define a time window that Lambda should wait to poll messages from their SQS queue before invoking their function. Lambda will wait for up to 300 seconds to poll messages from the SQS queue. When a batch window is defined, Lambda will also allow customers to define a batch size of up to 10,000 messages.
To get started, when creating a new Lambda function or updating an existing function with SQS as an event source, customers can set the MaximumBatchingWindowInSeconds field to any value between 0 and 300 seconds on the AWS Management Console, the AWS CLI, AWS SAM or AWS SDK for Lambda. This feature is available in all AWS Regions where AWS Lambda and Amazon SQS are available, and requires no additional charge to use.
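For example, the batch window can be set on an existing SQS event source mapping like this (Node.js, AWS SDK v3; the mapping UUID is a placeholder):

```javascript
const { LambdaClient, UpdateEventSourceMappingCommand } = require("@aws-sdk/client-lambda");

const lambda = new LambdaClient({});

async function enableBatchWindow(mappingUuid) {
  // Let messages accumulate for up to 60 seconds (or until the batch size is reached)
  // before the SQS-triggered function is invoked.
  await lambda.send(new UpdateEventSourceMappingCommand({
    UUID: mappingUuid,                      // UUID of the existing SQS event source mapping
    MaximumBatchingWindowInSeconds: 60,
    BatchSize: 100,
  }));
}
```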
the lambda function pushes the message to one of three different SQS queues
...
So when my system tries to update the entity, it's not existing yet, thus my system creates it. But since I have a create action in the process of executing, it creates a duplicate entry on my destination
By using multiple queues you have created a race condition yourself, and now you are trying to patch it.
Based on the provided information and context - as already answered - a single FIFO queue with a group ID could be more appropriate (do you really need 3 queues?).
If latency is critical, then streaming could be a solution as well.
As you describe the issue, I think you don't need to combine the messages (indeed you could use Redis, AWS Kinesis Analytics, DynamoDB, ...), but rather avoid creating the issue in the first place.
Options
having a single FIFO queue
having an idempotent and thread-safe backend service able to handle concurrent updates (transactions, atomic updates, ...)
Also, if you can create "duplicate" entries, it means that unique indexes are not enforced; they exist exactly for that reason.
You did not specify the backend service (RDBMS, DynamoDB, MongoDB, other?) each has an option to handle the problem somehow.
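As an illustration of the idempotent-backend option, if the destination store were DynamoDB (an assumption; the question does not say), a conditional write makes concurrent "create" calls safe:

```javascript
const { DynamoDBClient } = require("@aws-sdk/client-dynamodb");
const { DynamoDBDocumentClient, PutCommand, UpdateCommand } = require("@aws-sdk/lib-dynamodb");

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

// Create only if the entity does not exist yet; a concurrent duplicate "create" fails cleanly.
async function createEntity(entity) {
  try {
    await ddb.send(new PutCommand({
      TableName: "entities",                              // hypothetical table
      Item: entity,
      ConditionExpression: "attribute_not_exists(id)",
    }));
  } catch (err) {
    if (err.name !== "ConditionalCheckFailedException") throw err;
    // Entity already exists: treat the create as an update instead of inserting a duplicate.
    await ddb.send(new UpdateCommand({
      TableName: "entities",
      Key: { id: entity.id },
      UpdateExpression: "SET #d = :d",
      ExpressionAttributeNames: { "#d": "data" },
      ExpressionAttributeValues: { ":d": entity.data },
    }));
  }
}
```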

How to set intervals between multiple requests AWS Lambda API

I have created an API using an AWS Lambda function (written in Python). My React code hits this API whenever an event fires, so the user can call the API as many times as events are fired. The problem is that we are not getting the responses from the Lambda API sequentially; sometimes we get the response to the last request before the response to the previous one.
So we need to handle the requests in the Lambda function sequentially, maybe by adding some delay between two requests or maybe by implementing throttling. How can I do that?
Did you check the concurrency setting on Lambda? You can throttle the lambda there.
But if you throttle the Lambda and the incoming requests are rejected, the application sending them might receive an error unless you are storing the requests somewhere on AWS to be processed later.
I think putting SQS in front of the Lambda might help. You would hit API Gateway, the requests get sent to SQS, the Lambda polls the requests (you can control the concurrency) and then sends the response back.
You can use SQS FIFO Queue as a trigger on the Lambda function, set Batch size to 1, and the Reserved Concurrency on the Function to 1. The messages will always be processed in order and will not concurrently poll the next message until the previous one is complete.
SQS triggers do not support a Batch Window, which would 'wait' before polling the next message; that is a feature of stream-based Lambda triggers (Kinesis and DynamoDB Streams).
If you want a more streamlined process, Step Functions will let you manage states using state machines and supports automatic retries based on the outputs of individual states.
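A sketch of wiring up the FIFO trigger with batch size 1 and reserved concurrency 1 as described above (Node.js, AWS SDK v3; the ARN and function name are placeholders):

```javascript
const {
  LambdaClient,
  CreateEventSourceMappingCommand,
  PutFunctionConcurrencyCommand,
} = require("@aws-sdk/client-lambda");

const lambda = new LambdaClient({});

async function wireUpSequentialProcessing() {
  // One message per invocation, pulled from the FIFO queue in order
  await lambda.send(new CreateEventSourceMappingCommand({
    EventSourceArn: "arn:aws:sqs:us-east-1:123456789012:requests.fifo", // placeholder ARN
    FunctionName: "my-handler",                                          // placeholder name
    BatchSize: 1,
  }));

  // At most one concurrent execution, so messages are processed strictly one after another
  await lambda.send(new PutFunctionConcurrencyCommand({
    FunctionName: "my-handler",
    ReservedConcurrentExecutions: 1,
  }));
}
```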
As a previous response said, potentially what could help is to put an SQS in front of the Lambda - if order of processing is important, you could also look at setting the SQS queue up as a FIFO queue, which preserves order:
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues.html
As the other comment said, the other option is to limit concurrency, but even then you're probably best off putting SQS in front as you're then limiting your throughput.

Azure EventGrid Webhook timeout

I learned from the documentation that the timeout for a webhook is 60 secs. If that's the case, are we expecting developers to do asynchronous operations? I mean, what if the work I want to do as part of the webhook takes more than 60 secs? But if we make that operation asynchronous and the work fails, how do we recover from that situation, given we already responded to Event Grid with 200 OK? In that case - would we lose the event?
In a scenario like yours, where the event handler takes more than 60 seconds, the following can be implemented, based on a retry and dead-lettering technique:
Use a primary event subscription with a retry policy and dead-lettering. This subscriber (a function) with a binding to a storage table handles the state of the long-running (max 24 hrs) event processing and also forwards the first event message to a storage queue to kick off the long-running process. The response from this primary subscriber depends on the state reported by the QueueTrigger function.
Every retried event message checks the state of the long-running process, and based on that, a response code (for instance OK (200) or ServiceUnavailable (503)) is sent back to the Event Grid.
In the above scenario, the retry mechanism acts as a "watchdog timer" watching the long-running event message processing. The second function, the QueueTrigger function, decouples the Event Grid from the long-running process.
In summary, your scenario will require the following:
EventSubscriber with retry policy and dead-lettering for Webhook (EventGridTrigger or HttpTrigger function)
EventGridTrigger or HttpTrigger function
Storage Table
QueueTrigger Function
If anything unusual happens during the watchdog timer, the event is dead-lettered to your storage container with a deadLetterReason.
Note that if your long-running process takes more than 5-10 minutes, the QueueTrigger function needs to run on an App Service plan or as your own custom worker process.
Update:
The above describes a "long running subscriber" solution with a watchdog timer.
Alternatively, a Storage Queue can be used directly as the Event Grid event handler to hand off the long-running process, but in this case the function has more responsibilities, such as retrying, notification, dead-lettering, etc.

Requeue or delete messages in Azure Storage Queues via WebJobs

I was hoping if someone can clarify a few things regarding Azure Storage Queues and their interaction with WebJobs:
To perform recurring background tasks (i.e. add to the queue once, then repeat at set intervals), is there a way to update the same message delivered to the QueueTrigger function so that its lease (visibility) can be extended, as a way to requeue it and avoid expiry?
With the above-mentioned pattern for recurring background jobs, I'm also trying to figure out a way to delete/expire a job 'on demand'. Since this doesn't seem possible outside the context of WebJobs, I was thinking of storing the messageId and popReceipt of the message(s) to be deleted in Table storage as a persistent cache, and then, upon delivery of a message in the QueueTrigger function, doing a Table lookup and performing a DeleteMessage, so that the message is not repeated any more.
Any suggestions or tips are appreciated. Cheers :)
Azure Storage Queues are used to store messages that may be consumed by your Azure Webjob, WorkerRole, etc. The Azure Webjobs SDK provides an easy way to interact with Azure Storage (that includes Queues, Table Storage, Blobs, and Service Bus). That being said, you can also have an Azure Webjob that does not use the Webjobs SDK and does not interact with Azure Storage. In fact, I do run a Webjob that interacts with a SQL Azure database.
I'll briefly explain how the Webjobs SDK interacts with Azure Queues. Once a message arrives in a queue (or is made 'visible', more on this later), the function in the Webjob is triggered (assuming you're running in continuous mode). If that function returns with no error, the message is deleted. If something goes wrong, the message goes back to the queue to be processed again. You can handle the failed message accordingly. Here is an example of how to do this.
The SDK will call a function up to 5 times to process a queue message. If the fifth try fails, the message is moved to a poison queue. The maximum number of retries is configurable.
Regarding visibility: when you add a message to the queue, there is a visibility timeout property. By default it is zero. Therefore, if you want to process a message in the future, you can do so (up to 7 days in the future) by setting this property to the desired value.
Optional. If specified, the request must be made using an x-ms-version of 2011-08-18 or newer. If not specified, the default value is 0. Specifies the new visibility timeout value, in seconds, relative to server time. The new value must be larger than or equal to 0, and cannot be larger than 7 days. The visibility timeout of a message cannot be set to a value later than the expiry time. visibilitytimeout should be set to a value smaller than the time-to-live value.
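As an illustration (in Node.js with the @azure/storage-queue package, an assumption of mine; the original WebJobs scenario would typically use the .NET storage SDK), setting that property when enqueuing looks like this:

```javascript
const { QueueServiceClient } = require("@azure/storage-queue");

const queueClient = QueueServiceClient
  .fromConnectionString(process.env.AZURE_STORAGE_CONNECTION_STRING)
  .getQueueClient("background-tasks"); // hypothetical queue name

async function enqueueForLater(task) {
  // The message stays invisible for 24 hours, so the WebJob's queue trigger
  // only picks it up after that (visibilityTimeout is in seconds, max 7 days).
  await queueClient.sendMessage(JSON.stringify(task), {
    visibilityTimeout: 24 * 60 * 60,
  });
}
```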
Now the suggestions for your app.
I would just add a message to the queue for every task that you want to accomplish. The message will obviously have the pertinent information for processing. If you need to schedule several tasks, you can run a Scheduled Webjob (on a schedule of your choice) that adds messages to the queue. Then your continuous Webjob will pick up that message and process it.
Add a GUID to each message that goes to the queue. Store that GUID in some other domain of your application (a database). So when you dequeue the message for processing, the first thing you do is check against your database if the message needs to be processed. If you need to cancel the execution of a message, instead of deleting it from the queue, just update the GUID in your database.
There's more info here.
Hope this helps,
As for the first part of the question, you can use the Update Message operation to extend the visibility timeout of a message.
The Update Message operation can be used to continually extend the invisibility of a queue message. This functionality can be useful if you want a worker role to "lease" a queue message. For example, if a worker role calls Get Messages and recognizes that it needs more time to process a message, it can continually extend the message's invisibility until it is processed. If the worker role were to fail during processing, eventually the message would become visible again and another worker role could process it.
You can check the REST API documentation here: https://msdn.microsoft.com/en-us/library/azure/hh452234.aspx
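With the Node.js @azure/storage-queue SDK (again an assumption about the client library; the REST operation above is what it calls under the hood), extending the lease looks roughly like this:

```javascript
const { QueueServiceClient } = require("@azure/storage-queue");

const queueClient = QueueServiceClient
  .fromConnectionString(process.env.AZURE_STORAGE_CONNECTION_STRING)
  .getQueueClient("background-tasks"); // hypothetical queue name

async function processWithLease() {
  const { receivedMessageItems } = await queueClient.receiveMessages({ visibilityTimeout: 60 });

  for (const msg of receivedMessageItems) {
    // Still working on it? Push the message's invisibility out by another 5 minutes.
    // updateMessage returns a new popReceipt that must be used for later update/delete calls.
    const updated = await queueClient.updateMessage(
      msg.messageId,
      msg.popReceipt,
      msg.messageText,   // keep the body unchanged
      5 * 60             // new visibility timeout in seconds
    );

    // ...finish the work, then delete using the latest popReceipt
    await queueClient.deleteMessage(msg.messageId, updated.popReceipt);
  }
}
```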
For the second part of your question, there are really multiple ways, and your method of storing the messageId/popReceipt as a lookup is a possible option. You can also have a Web Job dedicated to receiving messages on a different queue (e.g. plz-delete-msg); you send it a message containing the messageId, and this Web Job can use the Get Message operation and then delete the target message. (You can make the job generic by passing the queue name!)
https://msdn.microsoft.com/en-us/library/azure/dd179474.aspx
https://msdn.microsoft.com/en-us/library/azure/dd179347.aspx
