I need to spin up an EC2 instance from a Lambda function that is triggered by an S3 event. The Lambda function has to launch the instance and run a user data script.
I have an AWS CLI command, something like aws --region us-east-1 s3 cp s3://mybucket/test.txt /file/
I'm looking for a Python Boto3 implementation. Since Lambda is new to me, can someone share whether it's doable?
One way is for Lambda to run a CloudFormation template with UserData as part of it, but I think there should be an easier way to achieve this.
Just include the UserData parameter in your Boto3 call.
You should use code like this:
ec2.create_instances(
    ImageId='<ami-image-id>',
    InstanceType='t1.micro',
    UserData='string',
    ...
)
If you don't need to create instances through the resource, but just want to run them through the client, you should use:
ec2_client = boto3.client('ec2')
ec2_client.run_instances(
    ...
    UserData='string',
    ...
)
You can see all the arguments that create_instances and run_instances support here:
http://boto3.readthedocs.io/en/latest/reference/services/ec2.html#EC2.Subnet.create_instances
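For the original ask (an S3-triggered Lambda that launches an instance with a user data script), a minimal handler sketch might look like the following. The region, AMI ID, and user-data script are placeholders to replace with your own values, and it assumes the function's execution role is allowed to call ec2:RunInstances:

import boto3

# Placeholder region and AMI - adjust for your account
ec2 = boto3.resource('ec2', region_name='us-east-1')

USER_DATA = """#!/bin/bash
aws --region us-east-1 s3 cp s3://mybucket/test.txt /file/
"""

def lambda_handler(event, context):
    # 'event' is the S3 notification that triggered the function
    instances = ec2.create_instances(
        ImageId='ami-xxxxxxxx',   # placeholder AMI ID
        InstanceType='t2.micro',
        MinCount=1,
        MaxCount=1,
        UserData=USER_DATA,       # runs on first boot
    )
    return {'InstanceId': instances[0].id}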
I'm trying to create a Lambda function that will shut down systemd services running on an EC2 instance. I think using the SSM client from the boto3 module is probably the best choice, and the specific command I was considering is send_command(). Ideally I would like to use Ansible to shut down the systemd service, so I'm trying to use the "AWS-ApplyAnsiblePlaybooks" document.
It's here that I get stuck: the boto3 SSM client wants some parameters, and I've tried following the boto3 documentation, but it really isn't clear on how it wants me to present them. I found the parameters it's looking for inside the "AWS-ApplyAnsiblePlaybooks" document, but when I include them in my code it tells me that the parameters are invalid. I also tried going to AWS's GitHub repository, because I know they sometimes have code examples, but they didn't have anything for send_command(). I've uploaded a gist in case people are interested in what I've written so far. I would definitely be interested in understanding how others have gotten their Ansible playbooks to run via SSM from boto3 Python scripts.
As far as I can see from the documentation for that SSM document and the code you shared in the gist, you need to add "SourceType":["S3"], and you need to have a path in SourceInfo like:
{
"path":"https://s3.amazonaws.com/path_to_directory_or_playbook_to_download"
}
So you need to adjust your global variable S3_DEVOPS_ANSIBLE_PLAYBOOKS accordingly.
Take a look at the CLI example from the doc link; it should give you an idea of how to restructure your Parameters:
aws ssm create-association --name "AWS-ApplyAnsiblePlaybooks" \
--targets Key=tag:TagKey,Values=TagValue \
--parameters '{"SourceType":["S3"],"SourceInfo":["{\"path\":\"https://s3.amazonaws.com/path_to_Zip_file,_directory,_or_playbook_to_download\"}"],"InstallDependencies":["True_or_False"],"PlaybookFile":["file_name.yml"],"ExtraVariables":["key/value_pairs_separated_by_a_space"],"Check":["True_or_False"],"Verbose":["-v,-vv,-vvv, or -vvvv"]}' \
--association-name "name" --schedule-expression "cron_or_rate_expression"
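In boto3 the same parameters go to send_command() as a dict of string lists. A rough sketch, assuming the target instances are managed by SSM and the S3 path, playbook name, and extra variables below are replaced with your real values:

import boto3

ssm = boto3.client('ssm')

response = ssm.send_command(
    # Either Targets (by tag) or InstanceIds=['i-...'] works here
    Targets=[{'Key': 'tag:TagKey', 'Values': ['TagValue']}],
    DocumentName='AWS-ApplyAnsiblePlaybooks',
    Parameters={
        'SourceType': ['S3'],
        # SourceInfo is a JSON string wrapped in a single-element list
        'SourceInfo': ['{"path": "https://s3.amazonaws.com/path_to_directory_or_playbook_to_download"}'],
        'InstallDependencies': ['True'],
        'PlaybookFile': ['playbook.yml'],   # placeholder playbook name
        'ExtraVariables': ['SSM=True'],     # placeholder key/value pairs
        'Check': ['False'],
        'Verbose': ['-v'],
    },
)
print(response['Command']['CommandId'])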
I am using Lambda as an ETL tool to process raw files coming into the S3 bucket.
As time passes, the functionality of the Lambda function will grow.
Each month I will change the Lambda function, so I want to publish versions 1, 2, 3, and so on.
How do I make the S3 bucket trigger a particular version of the Lambda for the files?
How do I test the production vs. test functionality in this case?
From the AWS Lambda function aliases documentation:
When you use a resource-based policy to give a service, resource, or account access to your function, the scope of that permission depends on whether you applied it to an alias, to a version, or to the function. If you use an alias name (such as helloworld:PROD), the permission is valid only for invoking the helloworld function using the alias ARN. You get a permission error if you use a version ARN or the function ARN. This includes the version ARN that the alias points to.
For example, the following AWS CLI command grants Amazon S3 permissions to invoke the PROD alias of the helloworld Lambda function. Note that the --qualifier parameter specifies the alias name.
$ aws lambda add-permission --function-name helloworld \
--qualifier PROD --statement-id 1 --principal s3.amazonaws.com --action lambda:InvokeFunction \
--source-arn arn:aws:s3:::examplebucket --source-account 123456789012
In this case, Amazon S3 is now able to invoke the PROD alias. Lambda can then execute the helloworld Lambda function version that the PROD alias references. For this to work correctly, you must use the PROD alias ARN in the S3 bucket's notification configuration.
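If you are wiring this up from Python instead of the CLI, the boto3 equivalent is a one-to-one mapping of the same flags (the function name, bucket, and account ID below are the same placeholders as in the CLI example):

import boto3

lambda_client = boto3.client('lambda')

lambda_client.add_permission(
    FunctionName='helloworld',
    Qualifier='PROD',              # the alias name, not a version number
    StatementId='1',
    Principal='s3.amazonaws.com',
    Action='lambda:InvokeFunction',
    SourceArn='arn:aws:s3:::examplebucket',
    SourceAccount='123456789012',
)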
How do I make the S3 bucket trigger a particular version of the Lambda for the files?
Best practice is not to point to Lambda versions directly, but to use a Lambda alias that points to the version you configure. You can just append the alias name to the ARN of the Lambda:
arn:aws:lambda:<region>:<account-id>:function:<functionName>:<aliasName>
How do I test the production vs. test functionality in this case?
You can trigger the same event multiple times with different Lambda aliases (like a production version and a testing one).
Example of multiple event notifications
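As a sketch of how the bucket side could point at the alias (the bucket, account ID, and function name are placeholders), you can set the notification's LambdaFunctionArn to the alias ARN with boto3; a second notification on a test prefix could point at a TEST alias in the same way:

import boto3

s3 = boto3.client('s3')

s3.put_bucket_notification_configuration(
    Bucket='examplebucket',
    NotificationConfiguration={
        'LambdaFunctionConfigurations': [
            {
                # Alias ARN, so the trigger follows whatever version PROD points to
                'LambdaFunctionArn': 'arn:aws:lambda:us-east-1:123456789012:function:helloworld:PROD',
                'Events': ['s3:ObjectCreated:*'],
            },
        ]
    },
)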
I am deploying Node.js code to AWS Lambda and I'd like to know how I can check whether it is running in Lambda, because I need the code to do something different between Lambda and local.
AWS Lambda sets various runtime environment variables that you can leverage. You can use the following in Node.js, for example:
const isLambda = !!process.env.LAMBDA_TASK_ROOT;
console.log("Running on Lambda:", isLambda);
Note that the double bang !! converts a truthy/falsy value to a boolean (true/false).
I'd advise using a Lambda environment variable that you set yourself, rather than attempting to check against any runtime internals of the executing Lambda.
By doing this you can ensure that any infrastructure changes on the AWS side of Lambda will not affect your code.
It also allows you to test locally if you are trying to reproduce a scenario, without the need to hardcode logic.
I am attempting to set a NodeJS Lambda function to be triggered when an image is uploaded to an Amazon S3 bucket. I have seen multiple tutorials and have the yml file set up as shown. Below is the YML config file:
functions:
  image-read:
    handler: handler.imageRead
    events:
      - s3:
          bucket: <bucket-name-here>
          event: s3:ObjectCreated:*
Is there something I am missing for the configuration? Is there something I need to do in an IAM role to set this up properly?
The YAML that you have here looks good but there may be some other problems.
Just to get you started:
- are you deploying the function using the right credentials? (I've seen it many times that people deploy to some other account than they think - verify in the web console that the function is there)
- can you invoke the function in some other way? (from the Serverless command line, using an HTTP trigger, etc.)
- do you see anything in the logs of that function? (add console.log statements to see if anything is being run)
- do you see the trigger installed in the web console?
- can you add the trigger manually in the web console?
Try adding a simple function that only prints some logs when it runs, and add a trigger for that function manually. If that works, then try doing the same with the Serverless command line, again starting with a simple function with just one log statement, and if that works, go from there.
See also this post for more hints - S3 trigger is not registered after deployment:
https://forum.serverless.com/t/s3-trigger-is-not-registered-after-deployment/1858
If I want to call AWS SES from AWS Lambda, I normally write the following when instantiating the AWS Helper Class:
var ses = new aws.SES({apiVersion: '2010-12-01', region: 'eu-west-1'});
I'm wondering: do I actually need to specify the AWS region, or will the AWS SES helper class just run in the region where the AWS Lambda function is running?
What is the best practice here? Might I encounter problems later if I omit this?
I have always specified the region for the sake of being explicit. I went and changed one of my Node.js Lambda functions that uses SNS to use an empty constructor instead of providing a region, deployed it, and it appears to still work. It looks like the service will try to run in the region of the Lambda function it is being called from. I imagine the IAM role for the Lambda function would play a part as well. As far as best practice, I think it is best to be explicit when possible, assuming it isn't creating a ton of overhead or hassle. The problem you risk running into in the future is the use of a resource that isn't available in certain regions.