Use case:
A user creates a meeting appointment and should be notified 24 hours / 1 hour / 5 minutes before the appointment.
Current implementation:
When an appointment is created, it is saved in DynamoDB with a TTL of (appointment time - 24 hours).
When the TTL expires, DynamoDB removes the item.
A Lambda listens to the DynamoDB stream and is triggered by that removal. The item carries 3 additional boolean flags: 24hours, 1hour, 5minutes. When the item is removed at the 24-hour mark, this Lambda sets the 24hours flag to true (in order to know in the next step that the 24-hour push notification has already been sent), saves the item again with a new TTL (appointment time - 1 hour), and the push notification is sent.
Same as above: the TTL expires, the Lambda sets the 1hour flag to true, sets a new TTL (appointment time - 5 minutes), saves the item again, and the push notification is sent.
Again: the TTL expires and the final push notification is sent.
Concern: DynamoDB does not guarantee that the item will be removed exactly when the TTL expires (deletion typically happens within 48 hours of expiry).
Are there any other solutions more efficient than mine?
Current stack: AWS / Serverless Framework / Node.js.
Recently (10-Nov-2022) AWS launched a new service called EventBridge Scheduler. I hope this will fulfill all of your requirements, and it's serverless too. What you have to do is create three one-time schedules in EventBridge Scheduler and set the target of each to your current AWS Lambda function. Then you don't need Amazon DynamoDB anymore. I hope this answers your problem.
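For reference, here is a minimal sketch of creating those one-time schedules with the AWS SDK for JavaScript v3; the function ARN, role ARN, and naming scheme are placeholders, not part of the original answer:

```javascript
// Minimal sketch, AWS SDK for JavaScript v3 (@aws-sdk/client-scheduler).
// ARNs and names below are placeholders.
const { SchedulerClient, CreateScheduleCommand } = require("@aws-sdk/client-scheduler");

const scheduler = new SchedulerClient({});

// One-time schedules fire once at the given UTC time: at(yyyy-mm-ddThh:mm:ss)
async function scheduleReminder(appointmentId, fireAt, label) {
  await scheduler.send(new CreateScheduleCommand({
    Name: `${appointmentId}-${label}`,
    ScheduleExpression: `at(${fireAt.toISOString().slice(0, 19)})`,
    FlexibleTimeWindow: { Mode: "OFF" },
    Target: {
      Arn: "arn:aws:lambda:us-east-1:123456789012:function:sendPush",  // placeholder
      RoleArn: "arn:aws:iam::123456789012:role/scheduler-invoke-role", // placeholder
      Input: JSON.stringify({ appointmentId, label }),
    },
  }));
}

// Three reminders relative to the appointment time (a Date object).
async function scheduleAppointmentReminders(appointmentId, appointmentTime) {
  const t = appointmentTime.getTime();
  await scheduleReminder(appointmentId, new Date(t - 24 * 60 * 60 * 1000), "24h");
  await scheduleReminder(appointmentId, new Date(t - 60 * 60 * 1000), "1h");
  await scheduleReminder(appointmentId, new Date(t - 5 * 60 * 1000), "5min");
}
```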
If you just want to use the serverless stack, then this is one of the best ways to do it. But if you want something more functional, you can create a Bull queue server (https://www.npmjs.com/package/bull) and host it inside an EC2 instance. It provides delayed jobs and queues.
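A rough sketch of the Bull approach; the queue name, Redis URL, and the sendPushNotification helper are assumptions for illustration:

```javascript
// Hedged sketch of Bull's delayed jobs; Redis URL and names are placeholders.
const Queue = require("bull");

const reminders = new Queue("appointment-reminders", "redis://127.0.0.1:6379");

// Enqueue one job per reminder; Bull delivers it after `delay` milliseconds.
async function enqueueReminder(appointment, offsetMs, label) {
  const delay = appointment.time.getTime() - offsetMs - Date.now();
  await reminders.add(
    { appointmentId: appointment.id, label },
    { delay: Math.max(0, delay) }
  );
}

// Worker: runs when the delay elapses.
reminders.process(async (job) => {
  const { appointmentId, label } = job.data;
  await sendPushNotification(appointmentId, label); // hypothetical notifier
});
```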
One suggestion I would like to give: do not invoke Lambda directly to send notifications; instead, push them through a queue and send notifications from there, because if your system scales, your Lambdas will start getting throttled when they have to send thousands of notifications at a single point in time.
Related
I'd like to create a web app that allows users to do email outreach, but I'm having trouble finding a good solution.
I'd like each user to be able to send 100 emails per day, configurable to go out during certain times, e.g. 6 am to 10 am. I'm able to determine a delivery schedule per user (based on the times they configure), but because users can change their email schedules at any time, I'd have to keep reconfiguring the order of processing.
Is there a queue type in Redis (for instance) that triggers by time?
Or a way to trigger events on a schedule in Node.js that's scalable?
There is a Redis feature, keyspace notifications, which allows clients to subscribe to Pub/Sub channels in order to receive Redis events (an example is the "key expiry" event).
Documentation: http://redis.io/topics/notifications
For your use case, you can use the "key expiry" event.
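A hedged sketch of that idea using ioredis (the client choice, key naming, and the sendOutreachEmail helper are assumptions). Note that keyspace notifications must be enabled on the server, and an expired key's value is gone by the time the event fires, so store the payload elsewhere:

```javascript
// Requires keyspace notifications on the server: CONFIG SET notify-keyspace-events Ex
const Redis = require("ioredis");

const redis = new Redis(); // sets the timer keys
const sub = new Redis();   // dedicated connection for the subscription

// Schedule a send: create a key that expires when the email should go out.
async function scheduleSend(userId, sendAtMs) {
  const ttlSeconds = Math.max(1, Math.round((sendAtMs - Date.now()) / 1000));
  await redis.set(`send:${userId}`, "1", "EX", ttlSeconds);
}

// Listen for expiry events on DB 0 and trigger the send.
sub.psubscribe("__keyevent@0__:expired");
sub.on("pmessage", (_pattern, _channel, key) => {
  if (key.startsWith("send:")) {
    const userId = key.split(":")[1];
    sendOutreachEmail(userId); // hypothetical mailer; payload must live elsewhere
  }
});
```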
I am faced with a situation that I am not quite sure how to solve. Basically, my system receives data from a third-party source via API Gateway and publishes this data to an SNS topic, which triggers a Lambda function. Based on the message parameters, the Lambda function pushes the message to one of three different SQS queues. These queues trigger one of three Lambda functions, which create, update, or delete items, respectively, in another third-party system through its API endpoints.
The usual flow is to first create an entity on the destination system; each subsequent action should then update or delete this entity. The problem is, sometimes I receive data for the same entity from the source within milliseconds, and my system cannot create the entity on the destination in time because their API requires at least 300-400ms to do so. So when my system tries to update the entity, it doesn't exist yet, and my system creates it. But since a create action is already executing, this produces a duplicate entry on my destination.
So my question is: what is the best practice for consolidating messages for the same entity that arrive within less than a second of each other?
My thoughts so far:
I am thinking of using Redis to consolidate messages for the same entity before pushing them to the SNS topic, but I was hoping there would be a more straightforward approach, as I don't want to introduce another layer of logic.
Any help would be much appreciated. Thank you.
The best option would be to use an Amazon SQS FIFO queue, with each message using a Message Group ID that is set to the unique ID of the item that is being created.
In a FIFO queue, SQS will ensure that messages are processed in-order, and will only allow one message per Message Group ID to be received at a time. Thus, any subsequent messages for the same Message Group ID will wait until an existing message has been fully processed.
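A minimal sketch of publishing this way with the AWS SDK for JavaScript v3 (the queue URL is a placeholder; a FIFO queue name must end in .fifo):

```javascript
const { SQSClient, SendMessageCommand } = require("@aws-sdk/client-sqs");

const sqs = new SQSClient({});

// All actions for one entity share a MessageGroupId, so SQS serializes them.
async function publishAction(entityId, action, payload) {
  await sqs.send(new SendMessageCommand({
    QueueUrl: "https://sqs.us-east-1.amazonaws.com/123456789012/entity-actions.fifo", // placeholder
    MessageBody: JSON.stringify({ entityId, action, payload }),
    MessageGroupId: entityId,
    MessageDeduplicationId: `${entityId}-${action}-${Date.now()}`, // or enable content-based deduplication
  }));
}
```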
If this is not acceptable, then AWS Lambda now supports batch windows of up to 5 minutes for functions with Amazon SQS as an event source:
AWS Lambda now allows customers using Amazon Simple Queue Service (Amazon SQS) as an event source to define a wait period, called MaximumBatchingWindowInSeconds, to allow messages to accumulate in their SQS queue before invoking a Lambda function. In addition to Batch Size, this is a second option to send records in batches, to reduce the number of Lambda invokes. This option is ideal for workloads that are not time-sensitive, and can choose to wait to optimize cost.
Previously, Lambda functions polling from an SQS queue would send messages in batches of up to 10 before invoking the function. Now, customers can also define a time window that Lambda should wait to poll messages from their SQS queue before invoking their function. Lambda will wait for up to 300 seconds to poll messages from the SQS queue. When a batch window is defined, Lambda will also allow customers to define a batch size of up to 10,000 messages.
To get started, when creating a new Lambda function or updating an existing function with SQS as an event source, customers can set the MaximumBatchingWindowInSeconds field to any value between 0 and 300 seconds on the AWS Management Console, the AWS CLI, AWS SAM or AWS SDK for Lambda. This feature is available in all AWS Regions where AWS Lambda and Amazon SQS are available, and requires no additional charge to use.
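For illustration only, the batch window can be set on an existing SQS event source mapping via the SDK; the mapping UUID below is a placeholder:

```javascript
const { LambdaClient, UpdateEventSourceMappingCommand } = require("@aws-sdk/client-lambda");

const lambda = new LambdaClient({});

async function setBatchWindow(mappingUuid) {
  await lambda.send(new UpdateEventSourceMappingCommand({
    UUID: mappingUuid,                   // e.g. "14e0db71-..." (placeholder)
    MaximumBatchingWindowInSeconds: 60,  // wait up to 60s for messages to accumulate
    BatchSize: 100,                      // batch sizes above 10 require a non-zero window
  }));
}
```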
the Lambda function pushes the message to one of three different SQS queues
...
So when my system tries to update the entity, it doesn't exist yet, and my system creates it. But since a create action is already executing, this produces a duplicate entry on my destination
By using multiple queues you have created a race condition, and now you are trying to patch it.
Based on the provided information and context - as already answered - a single FIFO queue with a message group ID would be more appropriate (do you really need three queues?).
If latency is critical, then streaming could be a solution as well.
As you describe the issue, I think you don't need to combine the messages (though you could, using Redis, AWS Kinesis Analytics, DynamoDB, ...); rather, you need to not create the issue in the first place.
Options:
having a single FIFO queue
having an idempotent and thread-safe backend service able to handle concurrent updates (transactions, atomic updates, ...)
Also, if you can create "duplicate" entries, it means the unique indexes are not enforced; they exist for exactly this reason.
You did not specify the backend service (RDBMS, DynamoDB, MongoDB, other?); each has some way to handle this problem.
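As a hedged illustration of the idempotent-create option, here is a sketch using a DynamoDB conditional write; the table name and key schema are assumptions, and your actual backend may offer a different equivalent (unique index, upsert, transaction):

```javascript
const { DynamoDBClient, PutItemCommand } = require("@aws-sdk/client-dynamodb");

const ddb = new DynamoDBClient({});

// `attributes` is assumed to already be in DynamoDB AttributeValue format.
async function createEntityOnce(entityId, attributes) {
  try {
    await ddb.send(new PutItemCommand({
      TableName: "entities",                           // placeholder
      Item: { pk: { S: entityId }, ...attributes },
      ConditionExpression: "attribute_not_exists(pk)", // atomic: fails if it already exists
    }));
    return "created";
  } catch (err) {
    if (err.name === "ConditionalCheckFailedException") {
      return "already-exists"; // fall through to the update path instead of duplicating
    }
    throw err;
  }
}
```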
At the moment, we are calling cloudfront.listDistributions() every minute to identify a change in the status of the distribution we are deploying. This causes the Lambda to time out, because CloudFront never deploys in under 30 minutes (while Lambda times out after 15 minutes).
I would like to notify a Lambda function after a CloudFront distribution is successfully created. This would allow us to execute the post-creation actions while saving valuable Lambda execution time.
Creating a rule in CloudWatch does not offer the option to choose CloudFront. Nevertheless, it seems to accept a custom event pattern with the source aws.cloudfront.
Considering options:
Trigger a Lambda every 5 minutes to list distributions and compare their states with the previous states stored in DynamoDB.
Anybody with an idea to overcome this missing feature in AWS?
If you want and have time, there's a trickier and a bit more complex solution for doing that, leveraging CloudTrail.
Disclaimer
CloudTrail is not a real-time log system, but it ensures that all API calls are reported within 15 minutes (as stated in the CloudTrail FAQs). Because of this, what follows makes sense only for long-running tasks like creating a CloudFront distribution, spinning up an Aurora DB, and so on.
You can create a CloudWatch event-based rule (let's call it CW-r1) on a specific pattern like CreateDistribution or UpdateDistribution.
CW-r1 triggers a Lambda (LM-1) which enables another CloudWatch event-based rule (CW-r2).
CW-r2, on a schedule, triggers a Lambda (LM-2) which requests the state of the specific distribution via the API. Once the distribution is "Deployed", LM-2 can send a notification via SNS, for example (you can send email, SMS, push notifications, whatever is supported on SNS).
Once everything is finished, LM-2 can disable the CW-r2 rule in order to stop processing.
In this way you can have an automatic notification system based on which API call you desire.
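A hedged sketch of what LM-2 could look like in Node.js (the distribution ID, topic ARN, and rule name are placeholders):

```javascript
const { CloudFrontClient, GetDistributionCommand } = require("@aws-sdk/client-cloudfront");
const { SNSClient, PublishCommand } = require("@aws-sdk/client-sns");
const { EventBridgeClient, DisableRuleCommand } = require("@aws-sdk/client-eventbridge");

const cloudfront = new CloudFrontClient({});
const sns = new SNSClient({});
const events = new EventBridgeClient({});

exports.handler = async () => {
  const { Distribution } = await cloudfront.send(
    new GetDistributionCommand({ Id: "E1ABCDEF234567" }) // placeholder
  );

  if (Distribution.Status === "Deployed") {
    // Notify subscribers (email, SMS, push, ...).
    await sns.send(new PublishCommand({
      TopicArn: "arn:aws:sns:us-east-1:123456789012:cf-deployed", // placeholder
      Message: `Distribution ${Distribution.Id} is deployed`,
    }));
    // Stop the scheduled polling rule (CW-r2).
    await events.send(new DisableRuleCommand({ Name: "CW-r2" })); // placeholder rule name
  }
};
```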
Users submit a CSV file which contains a time (interval) along with a message. I want to submit each message to a chat API at the specified time. I am using DynamoDB to store the messages and a Lambda function which reads the messages from DynamoDB and, one at a time, uses setTimeout to publish each message to the chat. I am using Node.js to implement this functionality. I also created an Amazon API Gateway endpoint to trigger that Lambda function.
But this approach is not working. Can anyone suggest which other service I should use to do the same? Is there an Amazon queue service for that?
From the context of your question, what I understand is that you basically need to create a futuristic timer: a system that can notify you at some time in the future, with some metadata, so you can take an action.
If this is the case, off the top of my head I think you can use one of the below solutions to achieve your goal:
Prerequisites: I assume you are already using DynamoDB (aka DDB) as the primary store, so all CSV data is persisted in Dynamo, and you are using a DynamoDB stream to read the inserted and updated records in order to trigger a Lambda function (let's call this Lambda function Proxy_Lambda).
Create another Lambda function that processes records and sends messages to your chat system (let's call this Lambda function Processor_Lambda).
Option 1: AWS SQS
Proxy_Lambda reads records from the DDB stream and, based on the future timestamp attribute on the record, publishes a message to an SQS queue with a delay so that it stays invisible until the target time (using DelaySeconds at send time, or an extended visibility timeout after a first receive). Sample example: Link. Remember, these messages will not be visible to any consumer until the delay elapses.
Add a trigger for Processor_Lambda to start polling this SQS queue.
Once a message becomes visible in the queue (after the initial delay), Processor_Lambda consumes it and sends the chat event.
Result: you get a futuristic timer out of SQS's delay/visibility mechanics. The con is that you will not be able to view an in-flight message's content until it becomes visible.
Note: the maximum SendMessage delay is 15 minutes, and a visibility timeout can be extended up to 12 hours. So if your use case demands a longer timer, add logic in Processor_Lambda to send the message back to the queue with a new delay, as in the sketch below.
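A minimal sketch of the enqueue side under those constraints (the queue URL is a placeholder; the 15-minute DelaySeconds cap is what forces the re-enqueue loop described in the note above):

```javascript
const { SQSClient, SendMessageCommand } = require("@aws-sdk/client-sqs");

const sqs = new SQSClient({});

// deliverAtMs: epoch milliseconds at which the chat message should go out.
async function enqueueChatMessage(message, deliverAtMs) {
  const remaining = Math.max(0, Math.round((deliverAtMs - Date.now()) / 1000));
  await sqs.send(new SendMessageCommand({
    QueueUrl: "https://sqs.us-east-1.amazonaws.com/123456789012/chat-timer", // placeholder
    MessageBody: JSON.stringify({ message, deliverAtMs }),
    DelaySeconds: Math.min(remaining, 900), // 900s (15 min) is the SQS maximum
  }));
}

// In Processor_Lambda: if deliverAtMs is still in the future when the message
// arrives, call enqueueChatMessage again instead of sending the chat event.
```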
Option 2: AWS Step Functions (my preferred approach ;) )
Create a state machine in AWS Step Functions to generate task timers (let's call it Timer_Function). These task timers keep looping through a Wait state until the timer expires. The timer window is provided as input to the step function.
Link Timer_Function to trigger Processor_Lambda once the task timer expires; basically, that will be the next step after the Wait state.
Connect Proxy_Lambda with Timer_Function, i.e. Proxy_Lambda reads records from the DDB stream and invokes Timer_Function with the message-interval attribute present in the DynamoDB record, plus the necessary payload.
Result: a Timer_Function that keeps looping until the time window (message interval) expires, which in turn gives you a mechanism to trigger Processor_Lambda in the future (i.e. at the end of the timer window).
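A hedged sketch of the Proxy_Lambda side, assuming the state machine's Wait state reads its duration from the input (e.g. "SecondsPath": "$.waitSeconds"); the state machine ARN and the DynamoDB attribute names are assumptions:

```javascript
const { SFNClient, StartExecutionCommand } = require("@aws-sdk/client-sfn");

const sfn = new SFNClient({});

// Triggered by the DDB stream; starts one timer execution per inserted record.
exports.handler = async (event) => {
  for (const record of event.Records) {
    if (record.eventName !== "INSERT") continue;
    const item = record.dynamodb.NewImage;

    await sfn.send(new StartExecutionCommand({
      stateMachineArn: "arn:aws:states:us-east-1:123456789012:stateMachine:Timer_Function", // placeholder
      input: JSON.stringify({
        waitSeconds: Number(item.intervalSeconds.N), // assumed attribute name
        message: item.message.S,                     // assumed attribute name
      }),
    }));
  }
};
```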
Having said that, I will now leave it up to you to choose the right solution based on your use case and business requirements.
I was hoping someone could clarify a few things regarding Azure Storage Queues and their interaction with WebJobs:
To perform recurring background tasks (i.e. add to the queue once, then repeat at set intervals), is there a way to update the same message delivered to the QueueTrigger function so that its lease (visibility) can be extended, as a way to requeue it and avoid expiry?
With the above-mentioned pattern for recurring background jobs, I'm also trying to figure out a way to delete/expire a job 'on demand'. Since this doesn't seem possible outside the context of WebJobs, I was thinking of storing the messageId and popReceipt for the message(s) to be deleted in Table storage as a persistent cache, and then, upon delivery of a message in the QueueTrigger function, doing a Table lookup and calling DeleteMessage so that the message is not repeated any more.
Any suggestions or tips are appreciated. Cheers :)
Azure Storage Queues are used to store messages that may be consumed by your Azure WebJob, WorkerRole, etc. The Azure WebJobs SDK provides an easy way to interact with Azure Storage (that includes Queues, Table Storage, Blobs, and Service Bus). That being said, you can also have an Azure WebJob that does not use the WebJobs SDK and does not interact with Azure Storage; in fact, I run a WebJob that interacts with a SQL Azure database.
I'll briefly explain how the WebJobs SDK interacts with Azure Queues. Once a message arrives in a queue (or is made 'visible', more on this later), the function in the WebJob is triggered (assuming you're running in continuous mode). If that function returns with no error, the message is deleted. If something goes wrong, the message goes back to the queue to be processed again. You can handle the failed message accordingly. Here is an example of how to do this.
The SDK will call a function up to 5 times to process a queue message. If the fifth try fails, the message is moved to a poison queue. The maximum number of retries is configurable.
Regarding visibility: when you add a message to the queue, there is a visibility timeout property, which defaults to zero. Therefore, if you want to process a message in the future, you can do so (up to 7 days in the future) by setting this property to the desired value.
Optional. If specified, the request must be made using an x-ms-version of 2011-08-18 or newer. If not specified, the default value is 0. Specifies the new visibility timeout value, in seconds, relative to server time. The new value must be larger than or equal to 0, and cannot be larger than 7 days. The visibility timeout of a message cannot be set to a value later than the expiry time. visibilitytimeout should be set to a value smaller than the time-to-live value.
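For illustration, a hedged sketch of doing this from Node.js with @azure/storage-queue (the connection string and queue name are placeholders):

```javascript
const { QueueClient } = require("@azure/storage-queue");

const queue = new QueueClient(process.env.AZURE_STORAGE_CONNECTION_STRING, "tasks");

// runAt: a Date up to 7 days in the future.
async function enqueueFutureTask(task, runAt) {
  const visibilityTimeout = Math.max(0, Math.round((runAt.getTime() - Date.now()) / 1000));
  await queue.sendMessage(JSON.stringify(task), {
    visibilityTimeout, // seconds; the message stays invisible until then
  });
}
```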
Now the suggestions for your app.
I would just add a message to the queue for every task that you want to accomplish. The message will obviously have the pertinent information for processing. If you need to schedule several tasks, you can run a Scheduled Webjob (on a schedule of your choice) that adds messages to the queue. Then your continuous Webjob will pick up that message and process it.
Add a GUID to each message that goes to the queue. Store that GUID in some other domain of your application (a database). When you dequeue a message for processing, the first thing you do is check against your database whether the message still needs to be processed. If you need to cancel the execution of a message, instead of deleting it from the queue, just update the GUID in your database.
There's more info here.
Hope this helps,
As for the first part of the question, you can use the Update Message operation to extend the visibility timeout of a message.
The Update Message operation can be used to continually extend the invisibility of a queue message. This functionality can be useful if you want a worker role to “lease” a queue message. For example, if a worker role calls Get Messages and recognizes that it needs more time to process a message, it can continually extend the message’s invisibility until it is processed. If the worker role were to fail during processing, eventually the message would become visible again and another worker role could process it.
You can check the REST API documentation here: https://msdn.microsoft.com/en-us/library/azure/hh452234.aspx
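A hedged sketch of that lease pattern with @azure/storage-queue (the queue name is a placeholder; note that every update returns a fresh popReceipt that must be used for the next call):

```javascript
const { QueueClient } = require("@azure/storage-queue");

const queue = new QueueClient(process.env.AZURE_STORAGE_CONNECTION_STRING, "tasks");

// Extend a received message's invisibility by `extraSeconds`, relative to now.
async function extendLease(messageId, popReceipt, extraSeconds) {
  const { popReceipt: newReceipt } = await queue.updateMessage(
    messageId,
    popReceipt,
    undefined,    // leave the message text unchanged
    extraSeconds  // new visibility timeout
  );
  return newReceipt; // required for subsequent update/delete calls
}
```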
For the second part of your question, there are really multiple ways, and your method of storing the messageId/popReceipt as a lookup is a possible option. You could also have a WebJob dedicated to receiving messages on a different queue (e.g. plz-delete-msg): you send it a message containing the messageId, and this WebJob uses the Get Messages operation and then deletes the target message. (You can make the job generic by passing the queue name!)
https://msdn.microsoft.com/en-us/library/azure/dd179474.aspx
https://msdn.microsoft.com/en-us/library/azure/dd179347.aspx
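A hedged sketch of such a dedicated delete worker in Node.js (the queue names and matching convention are assumptions):

```javascript
const { QueueClient } = require("@azure/storage-queue");

// Delete the message with `targetMessageId` from `queueName`, if present in
// the next batch; other messages are made visible again immediately.
async function deleteById(queueName, targetMessageId) {
  const queue = new QueueClient(process.env.AZURE_STORAGE_CONNECTION_STRING, queueName);
  const { receivedMessageItems } = await queue.receiveMessages({ numberOfMessages: 32 });

  for (const msg of receivedMessageItems) {
    if (msg.messageId === targetMessageId) {
      await queue.deleteMessage(msg.messageId, msg.popReceipt);
      return true;
    }
    await queue.updateMessage(msg.messageId, msg.popReceipt, undefined, 0); // release it
  }
  return false;
}
```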