I need to run my code every 12 hours. I wrote the following code and deployed it to AWS Lambda, expecting it to run on that schedule. However, I see that the code does not run every 12 hours. Could you help me with this?
const nodeCron = require("node-cron");

// Intended to fire at minute 0 of every 12th hour
nodeCron.schedule("0 */12 * * *", async () => {
  let ids = ["5292865", "2676271", "5315840"];
  let filternames = ["Sales", "Engineering", ""];
  await initiateProcess(ids[0], filternames[0]);
  await initiateProcess(ids[1], filternames[1]);
  await initiateProcess(ids[2], filternames[2]);
});
AWS Lambda is an event-driven service: it runs your code only in response to events, and the Node.js process does not keep running between invocations, so an in-process scheduler like node-cron never gets a chance to fire. A Lambda function can also be configured to run for at most 15 minutes per execution; it cannot run longer than that.
I would suggest you use Amazon EventBridge to trigger your Lambda function to run on a 12-hour schedule:
Create a Rule with a schedule to run every 12 hours
In this Rule, create a Target, which is your Lambda function.
Then, you will be able to see if it was executed properly or not, and Lambda logs will be available in CloudWatch Logs.
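If you go that route, the Lambda code itself no longer needs node-cron at all; EventBridge supplies the 12-hour schedule and each invocation just does the work. A minimal sketch of what the handler could look like (initiateProcess is assumed to be your existing function, and the IDs are taken from your snippet):

// Hypothetical handler once scheduling is moved out to EventBridge.
// initiateProcess is assumed to be defined elsewhere in your code.
exports.handler = async (event) => {
  const ids = ["5292865", "2676271", "5315840"];
  const filternames = ["Sales", "Engineering", ""];
  for (let i = 0; i < ids.length; i++) {
    await initiateProcess(ids[i], filternames[i]);
  }
  return "done";
};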
I have to create a Node.js script that performs an S3 bucket-to-bucket sync. I don't want it to run every time a file is uploaded to the master S3 bucket, so I think a Lambda trigger is not an option. I need to run the task once a day at a particular time.
How can I achieve this S3 bucket sync in Node.js with the aws-sdk?
Cron can be used for the scheduling part. I have only found aws-sdk code that copies objects from one S3 bucket to another. Is there code available to sync two S3 buckets?
AWS S3 bucket synchronization from Node.js can be performed with the s3sync package. If you use it together with node-cron, you can implement scheduled S3 bucket synchronization in Node.js.
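If you would rather stick to the aws-sdk you already found, a rough sketch of a daily bucket-to-bucket copy scheduled with node-cron could look like this (the bucket names are placeholders, and this naive version re-copies every object and assumes keys without special characters):

const AWS = require("aws-sdk");
const cron = require("node-cron");

const s3 = new AWS.S3();
const SOURCE_BUCKET = "bucket-name-1"; // placeholder
const TARGET_BUCKET = "bucket-name-2"; // placeholder

async function syncBuckets() {
  let ContinuationToken;
  do {
    // List one page of objects from the source bucket (up to 1000 keys per call)
    const page = await s3
      .listObjectsV2({ Bucket: SOURCE_BUCKET, ContinuationToken })
      .promise();
    for (const obj of page.Contents || []) {
      // Copy each object into the target bucket under the same key
      await s3
        .copyObject({
          Bucket: TARGET_BUCKET,
          Key: obj.Key,
          CopySource: `${SOURCE_BUCKET}/${obj.Key}`,
        })
        .promise();
    }
    ContinuationToken = page.IsTruncated ? page.NextContinuationToken : undefined;
  } while (ContinuationToken);
}

// Run once a day at midnight
cron.schedule("0 0 * * *", () => {
  syncBuckets().catch(console.error);
});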
I don't know if it will help, but if cron and the aws-cli are available, this can be achieved without Node.js at all.
Simply add the line below to the crontab:
0 0 * * * aws s3 sync s3://bucket-name-1 s3://bucket-name-2
You will need a cron job, and Node.js provides a library for this named node-cron:
let cron = require('node-cron');

cron.schedule('* * * * *', () => {
  // TODO: work to run on every tick goes here
});
For a daily cron you can use something like
0 0 * * *
The first 0 specifies the minute and the second the hour, so this cron expression will run every day at midnight.
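Applied to the snippet above, a daily midnight job would look roughly like this:

// Runs every day at 00:00 (server local time unless a timezone option is passed)
cron.schedule('0 0 * * *', () => {
  // TODO: daily task goes here
});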
I am working on a bot using the Bot Framework for Node.js, hosted on Azure.
I am using four cron jobs in my bot to run different functions at different points of the day. The only issue is that the jobs are not running automatically. They run normally if I compile/run the code, but not if I haven't compiled/run it within the last 12 hours; i.e., if I run the code manually this morning, the jobs execute fine for today, but tomorrow they won't unless I run it again.
I have tried changing modules: first node-schedule, then node-cron. I am now using cron.
const { CronJob } = require('cron');

var Name = new CronJob({
  cronTime: '0 15 11 * * 0-6',   // 11:15:00 every day of the week
  onTick: function () {
    // function call / another js file call
  }
});
Name.start();
I have a Python 3.6 Flask application deployed onto AWS Lambda using Zappa, in which I have an asynchronous task execution function defined using #Task as discussed here.
However, I find that the function call still times out at 30 seconds, as opposed to the 5-minute timeout that AWS Lambda enforces for non-API calls. I even checked the timeout in my Lambda settings and it is set to 5 minutes.
I discovered this when the Lambda's debug output started repeating without a new request, something that happens because the Lambda is retried two more times after an error or timeout (as per the AWS Lambda documentation).
Can anyone help me with getting this resolved?
[EDIT: The Lambda function is not part of any VPC and is set to be accessible from the internet.]
Here are the logs below. Basically, the countdown is a sleep timer counting to 20 seconds, followed by a #task call to application.reviv_assign_responder, but as we can see, there is no output past 'NEAREST RESPONDER' and the countdown starts again, indicating that the function timed out and was invoked again by AWS's retry behaviour.
Log output in Pastebin: https://pastebin.com/VEbdCALg
Second incident: https://pastebin.com/ScNhbMcn
As we can see in the second log, it clearly states:
[1515842321866] wait_one_and_notify : 30 : 26
[1515842322867] wait_one_and_notify : 30 : 27
[1515842323868] wait_one_and_notify : 30 : 28
[1515842324865] 2018-01-13T11:18:44.865Z 72a8d34a-f853-11e7-ac2f-dd12a3d35bcb Task timed out after 30.03 seconds
You can check the default settings that Zappa applies to all your Lambda functions here; you will see that by default timeout_seconds is set to 30 seconds. This overrides the default Lambda setting in the AWS Console, which is 3 seconds (you can check this limit in the AWS Lambda FAQ).
For your #Task you must increase timeout_seconds in your zappa_settings.(json|yaml) file and redeploy. You can set it to 5 minutes (5*60 == 300 seconds), but the increase will apply to all the functions deployed with Zappa from that virtualenv.
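As a rough illustration, the relevant part of zappa_settings.json could look like this (the stage name and app_function value are placeholders for your own project):

{
    "dev": {
        "app_function": "application.app",
        "timeout_seconds": 300
    }
}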
You can check more details exposed in this issue in Zappa repo.
The timeout_seconds parameter in Zappa is misleading. That is, it does limit the timeout of the Lambda function, but the requests are served through CloudFront, which has a default timeout of 30 seconds. To verify that, try lowering timeout_seconds to 20: it will correctly time out in 20 seconds. Past 30, however, it has no effect because of the CloudFront limitation.
The default timeout is 30 seconds. You can change the value to be from 4 to 60 seconds. If you need a timeout value outside that range, request a change to the limit.
In other words, there is nothing you can do in either Zappa or Lambda to fix this, because the problem lies elsewhere (CloudFront).
I haven't tried it myself, but you might be able to raise the limit by creating your own CloudFront distribution in front of the Lambda, though it seems you are still limited to a maximum of 60 seconds (unless you request more through AWS support, as indicated in the previous link).
I'm using serverless-warmup-plugin to run a cron that invokes a Lambda function every 10 minutes. The code for the Lambda function looks like this:
exports.lambda = (event, context, callback) => {
  if (event.source === 'serverless-plugin-warmup') {
    console.log('Thawing lambda...')
    callback(null, 'Lambda is warm!')
  } else {
    // ... logic for the lambda function
  }
}
This works on paper but in practice the cron doesn't keep the Lambda function warm even though it successfully invokes it every 10 minutes.
When the Lambda is invoked via a different event source (other than the cron) it takes around 2-3 seconds for the code to execute. Once it's executed this way, Lambda actually warms up and starts responding under 400ms. And it stays warm for a while.
What am I missing here?
As the official documentation states:
Note
When you write your Lambda function code, do not assume that AWS Lambda always reuses the container because AWS Lambda may choose not to reuse the container. Depending on various other factors, AWS Lambda may simply create a new container instead of reusing an existing container.
It seems like a "bad architecture design" to try to keep a Lambda Container up, but, apparently it's a normal scenario your warmed container not being used when a different event source triggers a new container.
I have a NodeJS endpoint that receives requests to gather data from a reporting engine.
To keep the request endpoint light, and because some of the generated reports have a few steps (gather data -> assemble report -> convert to PDF -> email to relevant person), I want to separate the inbound request from the job itself.
Using AWS.SQS I can accept the request, put the variables into SQS and then respond with a 200 / 201.
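For context, a rough sketch of that inbound endpoint might look like this (the Express app, queue URL and payload shape are illustrative assumptions, not my real code):

const AWS = require('aws-sdk');
const express = require('express');

const sqs = new AWS.SQS();
const app = express();
app.use(express.json());

// Hypothetical queue URL used only for illustration
const QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/report-jobs';

app.post('/reports', async (req, res) => {
  // Push the report parameters onto the queue and return immediately
  await sqs.sendMessage({
    QueueUrl: QUEUE_URL,
    MessageBody: JSON.stringify(req.body),
  }).promise();

  res.status(201).json({ status: 'queued' });
});

app.listen(3000);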
What are some of the better practices around picking this job up on the other end?
If I were to trigger a Lambda function, would I have to wait for that function to complete before the 200 / 201 can be sent? Or can I do:
Accept Request ->
Job to SQS ->
Invoke Lambda function ->
200 Response.
Alternatively, what other options are available to decouple the inbound request from the processing itself?
Here are a few options:
1. Insert the request into your SQS queue and return a 200 response immediately. Have a process on an EC2 server poll the SQS queue and perform the work when it gets a message out of SQS.
2. Invoke a Lambda function asynchronously, passing it the properties needed to perform the work, and return a 200 response immediately. Since you invoked the Lambda function asynchronously, your NodeJS code that invoked it doesn't wait for the function to complete (see the sketch after this answer).
3. An alternative to #2 is to send the request to an SNS topic, and have the SNS topic configured to invoke the Lambda function. This is probably the best method if you are using Lambda, because SNS will retry if the Lambda function fails for some reason.
I don't recommend combining SQS with Lambda because those two services don't integrate very well. SNS on the other hand does integrate very well with Lambda.
Also, you need to make sure your Lambda function invocations can be completed in under 5 minutes since that's currently the maximum time a Lambda function can execute. If you need individual steps to run for longer than 5 minutes you will need to use EC2 or ECS.
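For option 2, a rough sketch of an asynchronous invocation from Node.js could look like this (the function name and payload are placeholders):

const AWS = require('aws-sdk');
const lambda = new AWS.Lambda();

// InvocationType 'Event' makes the call asynchronous:
// it returns as soon as Lambda has accepted the event,
// without waiting for the function to finish.
async function queueReportJob(params) {
  await lambda.invoke({
    FunctionName: 'generate-report',   // placeholder function name
    InvocationType: 'Event',
    Payload: JSON.stringify(params),
  }).promise();
}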
I think AWS Step Functions may be a good fit for your use case.