How can I run the aws-cli in an AWS Lambda Python 3.6 environment?

I would like to call the aws s3 sync command from within an AWS Lambda function with a runtime version of Python 3.6. How can I do this?
Why don't you just use the included boto3 SDK?
boto3 does not have an equivalent to the sync command
boto3 does not automatically find MIME types ("If you do not provide anything for ContentType to ExtraArgs, the end content type will always be binary/octet-stream.")
aws cli does automatically find MIME types ("By default the mime type of a file is guessed when it is uploaded")
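A minimal sketch of approximating the CLI's MIME detection with boto3, using Python's standard mimetypes module (the bucket and file names below are placeholders):
import mimetypes
import boto3

s3 = boto3.client("s3")

def upload_with_mime(local_path, bucket, key):
    # Guess the content type from the file extension, falling back to
    # the generic binary type when the extension is unknown.
    content_type = mimetypes.guess_type(local_path)[0] or "binary/octet-stream"
    s3.upload_file(local_path, bucket, key,
                   ExtraArgs={"ContentType": content_type})

upload_with_mime("public/index.html", "my-bucket", "index.html")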
Architecturally this doesn't make sense!
For my use case I think it makes sense architecturally and financially, but I'm open to alternatives. My Lambda function:
downloads Git and Hugo
downloads my repository
runs Hugo to generate my small (<100 pages) website
uploads the generated files to S3
Right now, I'm able to do all of the above on a 1536 MB (the most powerful) Lambda function in around 1-2 seconds. This function is only triggered when I commit changes to my website, so it's inexpensive to run.
Maybe it is already installed in the Lambda environment?
As of the time of this writing, it is not.

From Running aws-cli Commands Inside An AWS Lambda Function:
import subprocess

# source_dir and to_bucket are defined elsewhere; "./aws" is the CLI
# executable bundled into the deployment package.
command = ["./aws", "s3", "sync", "--acl", "public-read", "--delete",
           source_dir + "/", "s3://" + to_bucket + "/"]
print(subprocess.check_output(command, stderr=subprocess.STDOUT))
The AWS CLI isn't installed by default on Lambda, so you have to include it in your deployment. Despite running in a Python 3.6 Lambda environment, Python 2.7 is still available in the environment, so the approach outlined in the article will continue to work.
To experiment on Lambda systems, take a look at lambdash.

Related

How can I add a file to my AWS SAM Lambda function runtime?

While working with AWS I need to load a WSDL file in order to set up a SOAP service. The problem I now encounter, however, is that I don't know how I can add a file to the Docker container running my Lambda function, so that I can just read the file inside my Lambda like in the code snippet below.
const files = readdirSync(__dirname + pathToWsdl);
files.forEach(file => {
    log.info(file);
});
Any suggestions on how I can do this are greatly appreciated!
Here are a few options:
If the files are static and small, you can bundle them in the Lambda package.
If the files are static or change infrequently, you can store them in S3 and pull them from there on Lambda cold start (see the sketch after this list).
If the files need to be accessed and modified by multiple Lambda functions concurrently, or if you have a large volume of data with low-latency access requirements, then use EFS.
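For the second option, a minimal cold-start caching sketch (the bucket, key, and file names are placeholders):
import os
import boto3

BUCKET = "my-config-bucket"       # placeholder bucket name
KEY = "wsdl/service.wsdl"         # placeholder object key
LOCAL_PATH = "/tmp/service.wsdl"  # /tmp/ is the only writable path in Lambda

s3 = boto3.client("s3")

def handler(event, context):
    # Download once per container; warm invocations reuse the cached copy.
    if not os.path.exists(LOCAL_PATH):
        s3.download_file(BUCKET, KEY, LOCAL_PATH)
    with open(LOCAL_PATH) as f:
        return {"wsdl_bytes": len(f.read())}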
EFS is overkill for a small, static file. I would just package the file with the Lambda function.

Using the SSM send_command in Boto3

I'm trying to create a Lambda function that will shut down systemd services running on an EC2 instance. I think using the SSM client from the boto3 module is probably the best choice, and the specific method I was considering is send_command(). Ideally I would like to use Ansible to shut down the systemd service, so I'm trying to use the "AWS-ApplyAnsiblePlaybooks" document.
It's here that I get stuck: the boto3 SSM client wants some parameters, and I've tried following the boto3 documentation here, but it really isn't clear on how it wants me to present them. I found the parameters it's looking for inside the "AWS-ApplyAnsiblePlaybooks" document, but when I include them in my code, it tells me that the parameters are invalid. I also tried going to AWS's GitHub repository because I know they sometimes have code examples, but they didn't have anything for send_command(). I've uploaded a gist in case people are interested in what I've written so far. I would definitely be interested in understanding how others have gotten their Ansible playbooks to run using SSM via boto3 Python scripts.
As far as I can see from the documentation for that SSM document and the code you shared in the gist, you need to add "SourceType": ["S3"], and you need to have a path in the SourceInfo, like:
{
    "path": "https://s3.amazonaws.com/path_to_directory_or_playbook_to_download"
}
So you need to adjust your global variable S3_DEVOPS_ANSIBLE_PLAYBOOKS.
Take a look at the CLI example from the doc link; it should give you ideas on how to re-structure your Parameters:
aws ssm create-association --name "AWS-ApplyAnsiblePlaybooks" \
--targets Key=tag:TagKey,Values=TagValue \
--parameters '{"SourceType":["S3"],"SourceInfo":["{\"path\":\"https://s3.amazonaws.com/path_to_Zip_file,_directory,_or_playbook_to_download\"}"],"InstallDependencies":["True_or_False"],"PlaybookFile":["file_name.yml"],"ExtraVariables":["key/value_pairs_separated_by_a_space"],"Check":["True_or_False"],"Verbose":["-v,-vv,-vvv, or -vvvv"]}' \
--association-name "name" --schedule-expression "cron_or_rate_expression"
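Translated to boto3, a minimal send_command() sketch might look like this (the instance ID, S3 path, and playbook name are placeholders; the parameter structure mirrors the CLI example above):
import boto3

ssm = boto3.client("ssm")

response = ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],  # placeholder instance ID
    DocumentName="AWS-ApplyAnsiblePlaybooks",
    Parameters={
        # Every value must be a list of strings, even single values.
        "SourceType": ["S3"],
        "SourceInfo": ['{"path": "https://s3.amazonaws.com/my-bucket/playbooks/"}'],
        "InstallDependencies": ["True"],
        "PlaybookFile": ["stop-service.yml"],  # placeholder playbook name
        "ExtraVariables": ["SSM=True"],
        "Check": ["False"],
        "Verbose": ["-v"],
    },
)
print(response["Command"]["CommandId"])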

How to upload a downloaded file to an S3 bucket using a Lambda function

I saw different questions/answers, but I could not find one that worked for me. Since I am really new to AWS, I need your help. I am trying to download a gzip file, load it into a JSON file, and then upload it to an S3 bucket using a Lambda function. I wrote the code to download the file and convert it to JSON, but I'm having a problem uploading it to the S3 bucket. Assume that the file is ready as x.json. What should I do then?
I know it is a really basic question, but help is still needed :)
This code will upload to Amazon S3:
import boto3

s3_client = boto3.client('s3', region_name='us-west-2')  # Change as appropriate
s3_client.upload_file('/tmp/foo.json', 'my-bucket', 'folder/foo.json')
Some tips:
In Lambda functions you can only write to /tmp/
There is a 512 MB storage limit in /tmp/
At the end of your function, delete the files (zip, json, etc.) because the container can be reused and you don't want to run out of disk space
If your Lambda function has the proper permissions to write a file to S3, then simply use the boto3 package, which is the AWS SDK for Python:
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html
Be aware that if the Lambda function is located inside a VPC, it cannot access the public internet, including the AWS API endpoints that boto3 calls. In that case you may require a NAT gateway to route the Lambda function's traffic to the public internet.
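Putting the tips together, a minimal end-to-end sketch (the source URL, bucket, and key are placeholders):
import gzip
import json
import os
import urllib.request

import boto3

s3_client = boto3.client('s3')

def handler(event, context):
    local_path = '/tmp/x.json'
    # Placeholder source URL; replace with the real gzip file location.
    with urllib.request.urlopen('https://example.com/data.json.gz') as resp:
        data = json.loads(gzip.decompress(resp.read()))
    with open(local_path, 'w') as f:
        json.dump(data, f)
    s3_client.upload_file(local_path, 'my-bucket', 'x.json')
    os.remove(local_path)  # free /tmp/ space, since containers are reused
    return {'status': 'uploaded'}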

AWS Lambda spins up EC2 and triggers a user data script

I need to spin up an instance using Lambda on an S3 trigger. The Lambda function has to spin up an EC2 instance and trigger a user data script.
I have an AWS CLI command, something like: aws --region us-east-1 s3 cp s3://mybucket/test.txt /file/
I'm looking for a Python boto3 implementation. Since Lambda is new to me, can someone share if it's doable?
One way is for Lambda to run a CloudFormation template with the UserData as part of it, but I think there should be an easier way to achieve this.
Just include the UserData parameter in your boto3 call.
You should use code like this:
ec2.create_instances(
    ImageId='<ami-image-id>',
    InstanceType='t1.micro',
    UserData='string',
    ....
If you are using the low-level client rather than the EC2 resource, run_instances accepts UserData the same way:
ec2_client.run_instances(
    ...
    UserData='string',
    ...
You can see all the arguments that create_instances and run_instances support at:
http://boto3.readthedocs.io/en/latest/reference/services/ec2.html#EC2.Subnet.create_instances
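A fuller sketch of such a Lambda handler (the AMI ID, instance profile name, and user data script are placeholders):
import boto3

ec2 = boto3.resource('ec2', region_name='us-east-1')  # adjust the region

# Placeholder user data script; it runs once at first boot.
USER_DATA = """#!/bin/bash
aws --region us-east-1 s3 cp s3://mybucket/test.txt /file/
"""

def handler(event, context):
    instances = ec2.create_instances(
        ImageId='ami-0123456789abcdef0',  # placeholder AMI ID
        InstanceType='t2.micro',
        MinCount=1,
        MaxCount=1,
        UserData=USER_DATA,
        # The instance needs an instance profile with S3 read access for
        # the user data's "aws s3 cp" call (profile name is a placeholder).
        IamInstanceProfile={'Name': 'my-s3-read-profile'},
    )
    return {'instance_id': instances[0].id}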

AWS Lambda function to connect to a Postgresql database

Does anyone know how I can connect to a PostgreSQL database through an AWS Lambda function? I searched online but couldn't find anything about it.
If you can find something wrong with my code (Node.js) below, that would be great; otherwise, can you tell me how to go about it?
exports.handler = (event, context, callback) => {
    "use strict";
    const pg = require('pg');
    // Note: '@' (not '#') separates the credentials from the host.
    const connectionStr = "postgres://username:password@host:port/db_name";
    const client = new pg.Client(connectionStr);
    context.callbackWaitsForEmptyEventLoop = false;
    client.connect(function(err) {
        if (err) {
            // Return here so the success callback below isn't also fired.
            return callback(err);
        }
        callback(null, 'Connection established');
    });
};
The code throws an error:
cannot find module 'pg'
I wrote it directly on AWS Lambda and didn't upload anything if that makes a difference.
Yes, this makes the difference! Lambda doesn't provide third-party libraries out of the box. As soon as you have a dependency on a third-party library, you need to zip and upload your Lambda code manually or with the use of the API.
For more information: Lambda Execution Environment and Available Libraries
You need to refer to Creating a Deployment Package (Node.js):
Simple scenario – If your custom code requires only the AWS SDK library, then you can use the inline editor in the AWS Lambda console. Using the console, you can edit and upload your code to AWS Lambda. The console will zip up your code with the relevant configuration information into a deployment package that the Lambda service can run.
and
Advanced scenario – If you are writing code that uses other resources, such as a graphics library for image processing, or you want to use the AWS CLI instead of the console, you need to first create the Lambda function deployment package, and then use the console or the CLI to upload the package.
Your case, like mine, falls under the advanced scenario, so we need to create a deployment package and then upload it. Here is what I did:
mkdir deployment
cd deployment
vi index.js
Write your Lambda code in this file. Make sure your handler name is index.handler when you create the function.
npm install pg
You should see a node_modules directory created in the deployment directory, with multiple modules in it.
Package the deployment directory into a zip file (e.g. zip -r deployment.zip index.js node_modules) and upload it to Lambda.
You should be good then.
NOTE: npm install will install node modules in the same directory, under a node_modules directory, unless it sees a node_modules directory in a parent directory. To be safe, first do npm init followed by npm install to ensure the modules are installed in the same directory for deployment.
