AWS Lambda Function Keep It Alive - node.js

I have a Node.js function on AWS Lambda that runs multiple setTimeouts within Async Parallel. Some fire instantly and some could fire 30+ minutes from now. The problem I am running into is that it never gets to the 30-minute timeout because the function goes idle and then dies. Is there any way to keep the Lambda function alive while it is waiting to fire the remaining timeout functions?

The lifetime of a Lambda function is a maximum of 300 seconds.
See: AWS Lambda Limits
There is no way to increase it beyond 300 seconds. When Lambda was introduced, the maximum execution time was 60 seconds; it was later increased to 300 seconds.
You need to revisit your design and check whether Lambda is the correct solution. Running an on-demand EC2 instance that matches the Lambda's specifications could be one option. Or state your problem, and we can propose a solution.

In addition to the fact that you can't do that (see hellov's answer), I would say this is an incorrect design choice anyway. If you need a long-lived service, using an EC2 instance directly would be a better choice.
If you just need to do something once, 30 minutes later, then I would look at generating an AWS Lambda event at that time outside of the Lambda code itself. In other words, Lambda is meant for pure computation; waiting for anything inside it seems like the wrong approach.
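For example, one way to generate that later event, sketched below in Node.js with the aws-sdk v2 client (the rule name, target ARN, and payload are placeholders, and the target function would also need a resource-based permission allowing events.amazonaws.com to invoke it), is to have the first function create a one-off CloudWatch Events rule instead of sleeping:

```js
// Hypothetical sketch: schedule a CloudWatch Events (EventBridge) rule that
// invokes a second Lambda roughly 30 minutes from now, instead of waiting
// inside the currently running function.
const AWS = require('aws-sdk');
const events = new AWS.CloudWatchEvents();

exports.handler = async () => {
  // Build a one-shot cron expression for a point in time ~30 minutes ahead (UTC).
  const at = new Date(Date.now() + 30 * 60 * 1000);
  const cron = `cron(${at.getUTCMinutes()} ${at.getUTCHours()} ${at.getUTCDate()} ` +
               `${at.getUTCMonth() + 1} ? ${at.getUTCFullYear()})`;

  await events.putRule({
    Name: 'fire-in-30-minutes',          // placeholder rule name
    ScheduleExpression: cron,
    State: 'ENABLED',
  }).promise();

  await events.putTargets({
    Rule: 'fire-in-30-minutes',
    Targets: [{
      Id: 'delayed-worker',
      Arn: 'arn:aws:lambda:us-east-1:123456789012:function:delayed-worker', // placeholder ARN
      Input: JSON.stringify({ jobId: 'example' }),
    }],
  }).promise();
};
```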

As others have mentioned, there is a hard limit of 300 seconds on the maximum execution time of a Lambda function. Based on a quick overview of your problem, I don't think Lambda is the correct solution.
If you need to handle these long-running asynchronous tasks, you will need to add some type of "connector" between the different tasks. One possible solution is to use SQS queues.
Component A --> SQS 1 --> Component B
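As a rough sketch (Node.js with the aws-sdk v2 client; the queue URL and message fields are placeholders), Component A finishes its own share of the work and hands the rest off through the queue, where a separate Lambda (Component B) picks it up:

```js
// Hypothetical sketch: Component A publishes a message to "SQS 1" when its part
// of the processing is done; Component B is a separate function triggered by the queue.
const AWS = require('aws-sdk');
const sqs = new AWS.SQS();

exports.handler = async (event) => {
  // ... do Component A's share of the processing ...

  await sqs.sendMessage({
    QueueUrl: 'https://sqs.us-east-1.amazonaws.com/123456789012/sqs-1', // placeholder
    MessageBody: JSON.stringify({ taskId: event.taskId, step: 'componentB' }),
  }).promise();
};
```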

Your Lambda function does some parallel tasks. The best way to do this in Lambda is to split each task into a separate Lambda function and then coordinate those tasks in whatever way best suits your application.
This can be done in several different ways (the best approach depends on your application); a Step Functions sketch of the original 30-minute wait follows the list:
Step Functions
AWS Lambda + SNS
AWS Lambda + SNS/SQS
AWS Lambda + Kinesis
AWS Lambda + DynamoDB Streams
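For the 30-minute delay in the original question, for example, Step Functions can do the waiting outside of any Lambda execution. A minimal sketch (Node.js, aws-sdk v2; the state machine name, role ARN, and function ARNs are placeholders) that creates such a state machine might look like this:

```js
// Hypothetical sketch: a state machine that runs the immediate work, waits
// 30 minutes (no Lambda is running during the wait), then runs the delayed work.
const AWS = require('aws-sdk');
const stepfunctions = new AWS.StepFunctions();

const definition = {
  StartAt: 'ImmediateTask',
  States: {
    ImmediateTask: {
      Type: 'Task',
      Resource: 'arn:aws:lambda:us-east-1:123456789012:function:immediate-task', // placeholder
      Next: 'Wait30Minutes',
    },
    Wait30Minutes: { Type: 'Wait', Seconds: 1800, Next: 'DelayedTask' },
    DelayedTask: {
      Type: 'Task',
      Resource: 'arn:aws:lambda:us-east-1:123456789012:function:delayed-task', // placeholder
      End: true,
    },
  },
};

async function createMachine() {
  await stepfunctions.createStateMachine({
    name: 'delayed-task-machine',                                   // placeholder
    roleArn: 'arn:aws:iam::123456789012:role/step-functions-role',  // placeholder
    definition: JSON.stringify(definition),
  }).promise();
}

createMachine().catch(console.error);
```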

Related

Could Durable Functions reduce my execution time?

I can execute a process "x" in parallel using Azure Durable Functions fan-out/fan-in.
If I divide my single process "x" into multiple processes using this concept, can I reduce the execution time of the function?
In general, Azure Functions Premium allows for higher timeout values. So, if you don't want to deal with the issue, just upgrade ;-)
Azure Durable Functions might or might not reduce your total runtime.
BUT every activity call is a new function execution with its own timeout.
Either fanning out or even calling activities in series will avoid the timeout issue, as long as no single activity exceeds the function timeout period.
If, however, you have an activity that runs for an extended period, you will need Premium Functions anyway. But your solution with batch processing looks quite promising for avoiding that.
By making use of the fan-out/fan-in approach, you run the tasks in parallel instead of sequentially, so the total execution time is roughly the duration of your longest single task. It's the best approach when the requests do not need information from each other in order to be processed (a small orchestrator sketch follows).
You could also make use of Task Asynchronous Programming (TAP) to build tasks, call the relevant methods, and wait for all tasks to finish if you don't want to use Durable Functions.
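Since the other questions here are about Node.js, here is a hedged fan-out/fan-in sketch using the JavaScript durable-functions library; "ProcessBatch" is a hypothetical activity function name and the batches are assumed to come from the orchestration input:

```js
const df = require("durable-functions");

module.exports = df.orchestrator(function* (context) {
  const batches = context.df.getInput(); // e.g. an array of work items

  // Fan out: start one activity per batch; nothing blocks yet.
  const tasks = batches.map((batch) => context.df.callActivity("ProcessBatch", batch));

  // Fan in: wait for all activities; total time is roughly the slowest single batch,
  // and each activity gets its own timeout rather than sharing one.
  const results = yield context.df.Task.all(tasks);

  return results;
});
```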

Firebase Cloud Functions: how many users can the same (onCall) function serve, and with how many resources?

Hi everyone. I have only one onCall Firebase function, but it is called many times by all users. The documentation says a function can be called 3,000 times per second. Since mine needs 5 seconds per call, does that mean I have 600 calls per second available?
My function manipulates images and needs to store them in the tmp folder of the server (or virtual machine). At one point I got an error message telling me I had exceeded the memory allowed for the function. I reworked the function to use less memory and now it works. My question is this: assuming the memory available to the function is X, and a call requires X/2 memory, does that mean I can only have two simultaneous calls (so as not to run out of X memory)?
Also, I'm not sure how much this X is. Thanks in advance.
Cloud Functions auto-scales up and down to meet the load.
On Cloud Functions, only a single invocation of your function will ever run at a time in a given container. So your code does indeed have the full resources of that container each time it runs.
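If the image work needs more headroom, the memory allocation can be raised per function. A minimal sketch with the firebase-functions v1 API (the values and function name are illustrative, not a recommendation; note that in Cloud Functions the tmp directory is an in-memory filesystem, so files written there count against the same memory limit):

```js
const functions = require("firebase-functions");

// Ask for a larger container for this image-heavy callable.
exports.processImage = functions
  .runWith({ memory: "1GB", timeoutSeconds: 300 })
  .https.onCall(async (data, context) => {
    // ... download the image, manipulate it under /tmp, upload the result ...
    return { ok: true };
  });
```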

Is there a way to set concurrency on a Linux EC2 instance?

I currently have a script on a Linux EC2 instance that processes some documents. The script gets called from AWS Lambda using the SSM send_command. It works fine when it processes one or two documents, but beyond that I get empty responses. I'm assuming the system bottlenecks, as there is essentially no limit on the number of calls I can send to the instance. So is there a way to set a concurrency level on the instance so that it only processes, say, two commands at a time?
I know I can set the concurrency level on the Lambdas, but their execution time is usually less than 200 ms, while the processing time on the instance is about 5 to 15 seconds.
Ultimately, I could have the Lambdas wait for the job to complete, but that would be expensive as I need to process thousands of documents.
Thank you!

AWS lambda throttle concurrent invocations from a specific event source

I've created dynamic SQS standard queues which are used as an event source for my Lambda function. Whenever a message is pushed into one of the queues, the Lambda function is invoked. Now, I want to add some throttling to my Lambda function so that a single event source can have only one active invocation of the Lambda at a time. There are some existing answers, but they only cover throttling the overall Lambda concurrency.
From Managing Concurrency for a Lambda Function:
When a function has reserved concurrency, no other function can use that concurrency. Reserved concurrency also limits the maximum concurrency for the function, and applies to the function as a whole, including versions and aliases.
Therefore, reserved concurrency can be used to limit the number of concurrent executions of a specific AWS Lambda function.
AWS Lambda functions can also be triggered from an Amazon SQS FIFO queue.
From New for AWS Lambda – SQS FIFO as an event source | AWS Compute Blog:
In SQS FIFO queues, using more than one MessageGroupId enables Lambda to scale up and process more items in the queue using a greater concurrency limit. Total concurrency is equal to or less than the number of unique MessageGroupIds in the SQS FIFO queue.
So, it seems that if you specify all messages with the same MessageGroupId and a batch size of 1, then it will only process one message at a time.
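As a rough Node.js sketch (aws-sdk v2; the queue URL and identifiers are placeholders), the producer would tag every message from a given source with the same MessageGroupId on a FIFO queue:

```js
// Hypothetical sketch: force serial processing per event source by giving every
// message from that source the same MessageGroupId.
const AWS = require('aws-sdk');
const sqs = new AWS.SQS();

async function enqueue(sourceId, payload) {
  await sqs.sendMessage({
    QueueUrl: 'https://sqs.us-east-1.amazonaws.com/123456789012/my-queue.fifo', // placeholder
    MessageBody: JSON.stringify(payload),
    MessageGroupId: sourceId,                             // same group => one at a time, in order
    MessageDeduplicationId: `${sourceId}-${Date.now()}`,  // or enable content-based deduplication
  }).promise();
}
```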
The short answer is yes, it can be done, but only in a roundabout way.
When you have a Lambda function set as a triggered function on an SQS queue, the Lambda service polls the queue and handles receiving and deleting messages from the queue. The only controls you have over how many messages the Lambda service reads, and how many instances of your function it invokes, are (a) the batch size and (b) the function concurrency.
Neither of these helps when applied directly to your function. Setting the batch size to a small number (e.g. 1) results in more instances being started (it takes longer to process one message at a time), and setting it to a high number may not be desirable in your case; even if it is, it still won't help if the number of messages is higher than the batch size or if messages arrive frequently while your function is still busy processing the previous batch. And you already said function concurrency is a no-go, because you only want to limit the concurrency from one source, not overall.
So here is a way it can be accomplished: create another function with a concurrency limit of 1 and set it as the triggered function instead of your function. That function receives the messages and in turn invokes your function with said message(s), waiting for your function to return before returning itself. Only when the new function returns can it receive another message/batch from the Lambda service and invoke your function again. So your "real" function has no overall concurrency limit, but only one instance is ever invoked/running at a time from your SQS source (via the new function).
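A hedged sketch of that proxy function in Node.js (aws-sdk v2; the worker function name is a placeholder, and the proxy itself would be configured with a reserved concurrency of 1 and set as the SQS trigger):

```js
// Hypothetical sketch: the proxy receives the SQS batch and synchronously
// invokes the real worker, one record at a time.
const AWS = require('aws-sdk');
const lambda = new AWS.Lambda();

exports.handler = async (event) => {
  for (const record of event.Records) {
    // RequestResponse waits for the worker to finish before moving on,
    // so at most one worker invocation is in flight per proxy instance.
    await lambda.invoke({
      FunctionName: 'real-worker-function',   // placeholder name
      InvocationType: 'RequestResponse',
      Payload: record.body,
    }).promise();
  }
};
```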

Why are concurrent lambda requests being kicked off late?

I'm running load tests on AWS Lambda with Charles Proxy, but I am confused by the timeline chart it produces. I've set up a test with 100 concurrent connections and expect varying degrees of latency, but I expect all 100 requests to be kicked off at the same time (hence the concurrent setting in Charles Proxy's repeat-advanced feature). However, I'm seeing some requests start a bit late, if I understand the chart correctly.
With only 100 invocations, I should be well within the concurrency maximum set by AWS Lambda, so why are these requests being kicked off late (see requests 55-62 in the attached image)?
Lambda can take from a few hundred milliseconds to 1-2 seconds to start up when it's in a "cold state". Cold means it needs to download your package, unpack it, load it into memory, and then start executing your code. After execution, the container is kept alive for roughly 5 to 30 minutes (the "warm state"). If you make another request while it's warm, container startup is much faster.
You probably had a few containers already warm when you started your test; those started up faster. Since the other requests came in concurrently, Lambda needed to start more containers, and those started from a cold state, hence the time differences you see in the chart.

Resources