aws lambda node js not starting ec2 instance - node.js

I am writing an EC2 scheduler to start and stop EC2 instances.
The Lambda works for stopping instances; however, the start function is not initiating the EC2 start.
The logic is to filter on the tags and status of the EC2 instances, then start or stop them based on their current status.
Below is the code snippet that starts the EC2 instances, but it isn't starting them.
The filtering happens correctly and pushes the instances into the "stopParams" object.
The same code works if I change the logic to ec2.stopInstances and filter on instances in the running state. The role has permissions to start and stop.
Any ideas why it's not triggering the start?
if (instances.length > 0) {
    // Note: the variable is named "stopParams" but it is passed to startInstances
    var stopParams = { InstanceIds: instances };
    ec2.startInstances(stopParams, function (err, data) {
        if (err) {
            console.log(err, err.stack);
        } else {
            console.log(data);
        }
        context.done(err, data);
    });
}

Finally got this working. There were no issues with the Node.js Lambda code. Even though it was able to stop instances, it never invoked the start method. I found that all of the volumes are encrypted.
To start an instance through an API call, the role used by the Lambda must have permission to use the KMS key that encrypts the volumes. After adding the Lambda role ARN to the principal section of the KMS key policy, the Lambda was able to start instances. The key permission is not necessary for stopping the instances. Hope this helps
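For anyone hitting the same issue, this is roughly the kind of statement I had to add to the key policy, shown here as a boto3 sketch. The key ARN, role ARN, statement Sid and exact action list are illustrative placeholders, not the exact policy from my account, so adapt them to your setup:

import json
import boto3

kms = boto3.client('kms')

key_id = 'arn:aws:kms:eu-west-1:111122223333:key/EXAMPLE-KEY-ID'      # placeholder
lambda_role = 'arn:aws:iam::111122223333:role/ec2-scheduler-lambda'   # placeholder

# Fetch the current key policy, append a statement for the Lambda role, and
# write it back. These are the KMS actions EC2 uses on your behalf when it
# attaches encrypted EBS volumes during instance start.
policy = json.loads(kms.get_key_policy(KeyId=key_id, PolicyName='default')['Policy'])
policy['Statement'].append({
    'Sid': 'AllowEc2SchedulerLambdaToUseKey',
    'Effect': 'Allow',
    'Principal': {'AWS': lambda_role},
    'Action': [
        'kms:CreateGrant',
        'kms:Decrypt',
        'kms:DescribeKey',
        'kms:GenerateDataKeyWithoutPlaintext',
        'kms:ReEncrypt*'
    ],
    'Resource': '*'
})
kms.put_key_policy(KeyId=key_id, PolicyName='default', Policy=json.dumps(policy))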

Related

How to start an ec2 instance using sqs and trigger a python script inside the instance

I have a Python script which takes a video and converts it to a series of small panoramas. There's an S3 bucket where a video (mp4) will be uploaded. I need this file to be sent to the EC2 instance whenever it is uploaded.
This is the flow:
Upload video file to S3.
This should trigger EC2 instance to start.
Once it is running, I want the file to be copied to a particular directory inside the instance.
After this, I want the Python file (panorama.py) to run, read the video file from that directory, process it, and generate the output images.
These output images need to be uploaded to a new bucket or the same bucket which was initially used.
Instance should terminate after this.
What I have done so far: I have created a Lambda function that is triggered whenever an object is added to that bucket; it stores the name of the file and its path. I had read that I now need to use an SQS queue, pass this name and path metadata to the queue, and use SQS to trigger the instance. Then I need to run a script in the instance which pulls the metadata from the SQS queue and uses it to copy the file (mp4) from the bucket to the instance.
How do I do this?
I am new to AWS and hence do not know much about SQS, how to transfer metadata, how to automatically trigger an instance, etc.
Your wording is a bit confusing. It says that you want to "start" an instance (which suggests that the instance already exists), but then it says that you want to "terminate" the instance (which would permanently remove it). I am going to assume that you actually intend to "stop" the instance so that it can be used again.
You can put a shell script in the /var/lib/cloud/scripts/per-boot/ directory. This script will then be executed every time the instance starts.
When the instance has finished processing, it can call sudo shutdown -h now to turn off the instance. (Alternatively, it can tell EC2 to stop the instance, but using shutdown is easier.)
For details, see: Auto-Stop EC2 instances when they finish a task - DEV Community
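To make that concrete, here is a minimal sketch of what such a per-boot script could look like, written in Python (it must be made executable; the bucket name, paths and "process everything in the bucket" logic are placeholder assumptions):

#!/usr/bin/env python3
# Example: /var/lib/cloud/scripts/per-boot/process-videos.py (must be executable)
import os
import subprocess
import boto3

BUCKET = 'my-video-bucket'        # placeholder
WORK_DIR = '/home/ubuntu/work'    # placeholder

os.makedirs(WORK_DIR, exist_ok=True)
s3 = boto3.client('s3')

# Process whatever is waiting in the bucket. A real script would track which
# objects it has already handled (for example via an SQS queue of pending jobs).
for obj in s3.list_objects_v2(Bucket=BUCKET).get('Contents', []):
    key = obj['Key']
    local_path = os.path.join(WORK_DIR, os.path.basename(key))
    s3.download_file(BUCKET, key, local_path)
    subprocess.run(['python3', '/home/ubuntu/panorama.py', local_path], check=True)

# Finished: turn the instance off (per-boot scripts run as root, so no sudo needed).
subprocess.run(['shutdown', '-h', 'now'], check=True)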
I have tried to answer in the most minimalist way; many of the points below can be improved further. Since you mentioned you are new to AWS, I think this is still quite a lot to take in.
Using AWS Lambda with Amazon S3
Amazon S3 can send an event to a Lambda function when an object is created or deleted. You configure notification settings on a bucket, and grant Amazon S3 permission to invoke a function on the function's resource-based permissions policy.
When the object is uploaded, it will trigger the Lambda function, which creates the instance with EC2 user data (see Run commands on your Linux instance at launch).
For the EC2 instance, make sure you provide the necessary permissions for downloading and uploading the objects via an instance profile (see Using instance profiles).
The user data contains a script that does the rest of the work you need for your workflow:
Download the S3 object; you can pass the object name and bucket name into the same script.
Once the download has finished, start panorama.py, which processes the video.
In the next step you can upload the output objects to the S3 bucket.
Eventually, terminating the instance is a bit tricky; you can achieve it by changing the instance-initiated shutdown behavior (Change the instance initiated shutdown behavior)
OR
you can use the method below to terminate the instance, but in that case your EC2 instance profile must have permission to terminate the instance.
aws ec2 terminate-instances --instance-ids $(curl -s http://169.254.169.254/latest/meta-data/instance-id)
You can wrap the above steps into a shell script inside the user data.
Lambda code to launch the EC2 instance:
def launch_instance(EC2, config, user_data):
    # EC2 is a boto3 EC2 client; config and tag_specs are defined elsewhere in the script.
    ec2_response = EC2.run_instances(
        ImageId=config['ami'],                     # e.g. ami-0123b531fc646552f
        InstanceType=config['instance_type'],
        KeyName=config['ssh_key_name'],
        MinCount=1,
        MaxCount=1,
        SecurityGroupIds=config['security_group_ids'],
        TagSpecifications=tag_specs,
        # UserData=base64.b64encode(user_data).decode("ascii")
        UserData=user_data
    )
    new_instance_resp = ec2_response['Instances'][0]
    instance_id = new_instance_resp['InstanceId']
    print(f"[DEBUG] Full ec2 instance response data for '{instance_id}': {new_instance_resp}")
    return (instance_id, new_instance_resp)
Upload file to S3 -> Launch EC2 instance
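To connect the pieces, here is a rough sketch of a Lambda handler that pulls the bucket and key out of the S3 event, builds the user data script, and calls the launch_instance function above (it assumes tag_specs and the config values exist as in that snippet; the AMI, key name, security group, paths and output prefix are all placeholders, and the instance is assumed to have an instance profile with S3 access):

import boto3

EC2 = boto3.client('ec2')

# Placeholder configuration - substitute your own values.
CONFIG = {
    'ami': 'ami-0123b531fc646552f',
    'instance_type': 't3.medium',
    'ssh_key_name': 'my-key',
    'security_group_ids': ['sg-0123456789abcdef0'],
}

def lambda_handler(event, context):
    # An S3 put event delivers one record per uploaded object.
    record = event['Records'][0]
    bucket = record['s3']['bucket']['name']
    key = record['s3']['object']['key']

    # Shell script passed as user data; it runs once when the instance launches.
    user_data = f"""#!/bin/bash
aws s3 cp s3://{bucket}/{key} /home/ubuntu/input.mp4
python3 /home/ubuntu/panorama.py /home/ubuntu/input.mp4
aws s3 cp /home/ubuntu/output/ s3://{bucket}/output/ --recursive
shutdown -h now
"""

    instance_id, _ = launch_instance(EC2, CONFIG, user_data)
    return {'launched': instance_id, 'object': f's3://{bucket}/{key}'}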

How to use Amazon SQS as Celery broker, without creating / listing queues?

For a project, my organisation is planning to shift the Celery broker from Redis to SQS. Can somebody please guide me on how to tweak the Celery settings so that I can use a predefined SQS queue, without Celery trying to create or list queues (since I do not have those permissions)?
I have tried the settings below:
CELERY_BROKER_URL = 'sqs://'
CELERY_BROKER_TRANSPORT_OPTIONS = {
    'predefined_queues': {
        'MyQueue': {
            'url': '<SQS Queue URL>',
        }
    }
}
CELERY_TASK_DEFAULT_QUEUE = 'MyQueue'
CELERY_ROUTES = {
    'tasks.*': {
        'queue': 'MyQueue'
    }
}
After applying these settings I still get the following error whenever I try to send a message to the SQS queue through Celery: An error occurred (AccessDenied) when calling the CreateQueue operation: Access to the resource https://queue.amazonaws.com/ is denied.
Why is Celery still attempting to create a queue even though I have passed the predefined_queues setting (https://docs.celeryproject.org/en/stable/getting-started/brokers/sqs.html#predefined-queues)?
Thanks in advance!
The Celery worker(s) need to be associated with an IAM role that allows the sqs:CreateQueue action. If your Celery workers run on EC2 instances, the simplest thing to do is to use an instance profile and let the instance role execute CreateQueue actions.
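As an illustration, an inline policy for the worker's role could look something like this boto3 sketch (the role name, queue ARN, region and account ID are placeholders; trim the action list to what your setup actually needs):

import json
import boto3

iam = boto3.client('iam')

policy = {
    'Version': '2012-10-17',
    'Statement': [{
        'Effect': 'Allow',
        'Action': [
            'sqs:CreateQueue',        # what the AccessDenied error above complains about
            'sqs:GetQueueUrl',
            'sqs:GetQueueAttributes',
            'sqs:SendMessage',
            'sqs:ReceiveMessage',
            'sqs:DeleteMessage'
        ],
        'Resource': 'arn:aws:sqs:us-east-1:111122223333:MyQueue'   # placeholder
    }]
}

iam.put_role_policy(
    RoleName='celery-worker-role',      # placeholder
    PolicyName='celery-sqs-access',
    PolicyDocument=json.dumps(policy),
)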
Even though my company is a heavy user of AWS (literally everything is on AWS), I suggest you think twice whenever you decide to use an AWS service you can't run on your workstation, and SQS is one such service.

Cron cluster with one Redis instance and multiple server roles

I am trying to create a clustered cron job using cron-cluster and Redis on Node.js. We are running multiple servers with different roles (production, staging, test) and the same codebase. They are all connected to the same Redis instance.
How do I make cron-cluster run only once on each role?
Currently, it picks only one server across the whole fleet (production, staging, test) and runs everything there.
I had the same issue with cron-cluster library.
To solve it, you should pass a unique key when initializing an instance. For example:
const ClusterCronJob = require('cron-cluster')(redisClient, { key: leaderKey }).CronJob;
Where { key: leaderKey } can be set as follows:
const leaderKey = process.env.NODE_ENV;

Can't sync s3 with ec2 folder from aws lambda

I am trying to automate data processing using AWS. I have set up an AWS Lambda function in Python that:
Gets triggered by an S3 PUT event
SSHs into an EC2 instance using a paramiko layer
Copies the new objects from the bucket into a folder in the instance, unzips the file inside the instance, and runs a Python script that cleans the CSV files.
The problem is that the AWS CLI call to sync the S3 bucket with the EC2 folder is not working, but when I manually SSH into the EC2 instance and run the command, it works. My AWS CLI is configured with my access keys and the EC2 instance has an S3 role that allows it full access.
import boto3
import time
import paramiko

def lambda_handler(event, context):
    # create a low level client representing s3
    s3 = boto3.client('s3')
    ec2 = boto3.resource('ec2', region_name='eu-west-a')
    instance_id = 'i-058456c79fjcde676'
    instance = ec2.Instance(instance_id)

    # ------------------------------------------------------
    # start instance
    instance.start()

    # allow some time for the instance to start
    time.sleep(30)

    # Print a few details of the instance
    print("Instance id - ", instance.id)
    print("Instance public IP - ", instance.public_ip_address)
    print("Instance private IP - ", instance.private_ip_address)
    print("Public dns name - ", instance.public_dns_name)
    print("----------------------------------------------------")

    print('Downloading pem file')
    s3.download_file('some_bucket', 'some_pem_file.pem', '/tmp/some_pem_file.pem')

    # Allow a few seconds for the download to complete
    print('waiting for instance to start')
    time.sleep(30)

    print('sshing to instance')
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    privkey = paramiko.RSAKey.from_private_key_file('/tmp/some_pem_file.pem')

    # username is most likely 'ec2-user' or 'root' or 'ubuntu'
    # depending upon your ec2 AMI
    # s3_path = "s3://some_bucket/" + object_name
    ssh.connect(instance.public_dns_name, username='ubuntu', pkey=privkey)

    print('inside machine...running commands')
    stdin, stdout, stderr = ssh.exec_command(
        'aws s3 sync s3://some_bucket/ ~/ec2_folder;'
        'bash ~/ec2_folder/unzip.sh; python3 ~/ec2_folder/process.py;')
    stdin.flush()
    data = stdout.read().splitlines()
    for line in data:
        print(line)

    print('done, closing ssh session')
    ssh.close()

    # Stop the instance
    instance.stop()

    return 'Triggered'
The use of an SSH tool is somewhat unusual.
Here are a few more 'cloud-friendly' options you might consider.
Systems Manager Run Command
The AWS Systems Manager Run Command allows you to execute a script on an Amazon EC2 instance (and, in fact, on any computer that is running the Systems Manager agent). It can even run the command on many (hundreds!) of instances/computers at the same time, keeping track of the success of each execution.
This means that, instead of connecting to the instance via SSH, the Lambda function could call the Run Command via an API call and Systems Manager would run the code on the instance.
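For example, the Lambda function could replace the whole paramiko section with a single Systems Manager call along these lines (the instance ID, bucket and paths are placeholders, and the instance must be running the SSM agent with an instance profile that allows Systems Manager):

import boto3

ssm = boto3.client('ssm')

def run_processing(instance_id, bucket):
    response = ssm.send_command(
        InstanceIds=[instance_id],
        DocumentName='AWS-RunShellScript',   # built-in SSM document for shell commands
        Parameters={
            'commands': [
                f'aws s3 sync s3://{bucket}/ /home/ubuntu/ec2_folder',
                'bash /home/ubuntu/ec2_folder/unzip.sh',
                'python3 /home/ubuntu/ec2_folder/process.py',
            ]
        },
    )
    return response['Command']['CommandId']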
Pull, Don't Push
Rather than 'pushing' the work to the instance, the instance could 'pull' the work (a minimal polling sketch follows the list):
Configure the Amazon S3 event to push a message into an Amazon SQS queue
Code on the instance could be regularly polling the SQS queue
When it finds a message on the queue, it runs a script that downloads the file (the bucket and key are passed in the message) and then runs the processing script
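A minimal version of the polling code on the instance might look like this (the queue URL, paths and message format are assumptions, not a fixed convention):

import json
import subprocess
import boto3

QUEUE_URL = 'https://sqs.eu-west-1.amazonaws.com/111122223333/video-jobs'   # placeholder

sqs = boto3.client('sqs')
s3 = boto3.client('s3')

while True:
    # Long polling: wait up to 20 seconds for a message.
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=20)
    for msg in resp.get('Messages', []):
        body = json.loads(msg['Body'])          # assumed to contain bucket and key
        bucket, key = body['bucket'], body['key']

        local_path = '/home/ubuntu/input.mp4'
        s3.download_file(bucket, key, local_path)
        subprocess.run(['python3', '/home/ubuntu/panorama.py', local_path], check=True)

        # Remove the message only after successful processing.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg['ReceiptHandle'])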
Trigger via HTTP
The instance could run a web server, listening for a message (a sketch follows the list).
Configure the Amazon S3 event to push a message into an Amazon SNS topic
Add the instance's URL as an HTTP subscription to the SNS topic
When a message is sent to SNS, it forwards it to the instance's URL
Code in the web server then triggers your script
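A sketch of such a web server on the instance, here using Flask (the endpoint path, port and script names are assumptions; the URL you subscribe to SNS must point at this endpoint):

import json
import subprocess
import urllib.request

from flask import Flask, request

app = Flask(__name__)

@app.route('/sns', methods=['POST'])
def sns_endpoint():
    body = json.loads(request.data)

    # SNS first sends a SubscriptionConfirmation message; visiting the
    # SubscribeURL confirms the subscription.
    if body.get('Type') == 'SubscriptionConfirmation':
        urllib.request.urlopen(body['SubscribeURL'])
        return '', 200

    # Subsequent messages are Notifications; the S3 event is in 'Message'.
    if body.get('Type') == 'Notification':
        s3_event = json.loads(body['Message'])
        record = s3_event['Records'][0]
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']
        subprocess.run(['python3', '/home/ubuntu/process.py', bucket, key], check=True)

    return '', 200

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080)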
This answer is based on the additional information that you wish to shutdown the EC2 instance between executions.
I would recommend:
Amazon S3 Event triggers Lambda function
Lambda function starts the instance, passing filename information via the User Data field (it can be used to pass data, not just scripts). The Lambda function can then immediately exit (which is more cost-effective than waiting for the job to complete)
Put your processing script in the /var/lib/cloud/scripts/per-boot/ directory, which will cause it to run every time the instance is started (every time, not just the first time)
The script can extract the User Data passed from the Lambda function by retrieving http://169.254.169.254/latest/user-data/ (for example with curl), so that it knows the filename from S3 (see the sketch after this list)
The script then processes the file
The script then runs sudo shutdown -h now to stop the instance
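Putting those steps together, the per-boot script could be a small Python program along these lines (it assumes the Lambda passed plain "bucket/key" text as the User Data, and the paths are placeholders):

#!/usr/bin/env python3
# Example: /var/lib/cloud/scripts/per-boot/process-from-user-data.py (must be executable)
import subprocess
import urllib.request
import boto3

# The Lambda function is assumed to have passed "bucket/key" as plain User Data.
user_data = urllib.request.urlopen(
    'http://169.254.169.254/latest/user-data/').read().decode().strip()
bucket, key = user_data.split('/', 1)

local_path = '/home/ubuntu/input.mp4'
boto3.client('s3').download_file(bucket, key, local_path)

# Process the file, then stop the instance (per-boot scripts run as root).
subprocess.run(['python3', '/home/ubuntu/panorama.py', local_path], check=True)
subprocess.run(['shutdown', '-h', 'now'], check=True)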
If there is a chance that another file might come while the instance is already processing a file, then I would slightly change the process:
Rather than passing the filename via User Data, put it into an Amazon SQS queue
When the instance is started, it should retrieve the details from the SQS queue
After the file is processed, it should check the queue again to see if another message has been sent
If yes, process the file and repeat
If not, shut itself down
By the way, things can sometimes go wrong, so it's worth putting a 'circuit breaker' in the script so that it does not shut down the instance when you want to debug things. This could be a matter of passing a flag, or even adding a tag to the instance, which is checked before calling the shutdown command.

Thawing Lambda functions doesn't decrease latency

I'm using serverless-warmup-plugin to run a cron that invokes a Lambda function every 10 minutes. The code for the Lambda function looks like this:
exports.lambda = (event, context, callback) => {
    if (event.source === 'serverless-plugin-warmup') {
        console.log('Thawing lambda...')
        callback(null, 'Lambda is warm!')
    } else {
        // ... logic for the lambda function
    }
}
This works on paper but in practice the cron doesn't keep the Lambda function warm even though it successfully invokes it every 10 minutes.
When the Lambda is invoked via a different event source (other than the cron) it takes around 2-3 seconds for the code to execute. Once it's executed this way, Lambda actually warms up and starts responding under 400ms. And it stays warm for a while.
What am I missing here?
As the official documentation states:
Note
When you write your Lambda function code, do not assume that AWS Lambda always reuses the container because AWS Lambda may choose not to reuse the container. Depending on various other factors, AWS Lambda may simply create a new container instead of reusing an existing container.
Trying to keep a Lambda container up may seem like a questionable architecture design, but apparently it is a normal scenario for your warmed container not to be used when a different event source triggers a new container.
