I've been looking for ways to create a Lambda function that can directly access the contents of an EC2 Linux instance. My goal is to be able to call a script residing in the home directory AND pass in a variable that the script will use for processing.
I've looked at different ways of doing it, but I can't seem to find a concise solution.
Thank you in advance!
If your instance has the SSM Agent running on it, then your Lambda can call send-command.
Using the AWS-RunShellScript document, your Lambda can execute a number of commands on your EC2 instance for you.
You could also create your own SSM document, passing in an argument for each of the variables.
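For example, here is a minimal boto3 sketch of that approach; the instance ID, script path, and argument value are placeholders for illustration, not details from the question:

    import boto3
    import time

    ssm = boto3.client('ssm')

    # Run a script from the home directory on the instance, passing a value
    # as a shell argument (instance ID and script path are placeholders).
    response = ssm.send_command(
        InstanceIds=['i-0123456789abcdef0'],
        DocumentName='AWS-RunShellScript',
        Parameters={'commands': ['/home/ec2-user/process.sh "some-value"']},
    )

    command_id = response['Command']['CommandId']
    time.sleep(5)  # crude wait for illustration; poll properly in real code

    # Fetch the command's status and output.
    output = ssm.get_command_invocation(
        CommandId=command_id,
        InstanceId='i-0123456789abcdef0',
    )
    print(output['Status'], output.get('StandardOutputContent', ''))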
I am trying to execute a Lambda function locally using the SAM CLI, without Docker.
So my question is: is it possible to run a Lambda function locally without an AWS account and without Docker?
No, Docker is a required prerequisite for invoking a SAM function locally. Even if you use LocalStack, you still need Docker.
We are taking over an application from another company. They built the entire deployment pipeline, but we still don't have access to it. What we do know is that a Lambda function runs in response to certain SNS messages, all the code is in Node.js, and development is done in VS Code. We also have issues debugging it locally, but the bigger problem is that we need to debug it remotely.
Since I am new to AWS services, I'd really appreciate it if somebody could help me with this.
Is it necessary to open a port? How is it possible to connect to a Lambda? Do we need to set up Serverless? There are many unresolved questions.
I don't think there is a way to debug a Lambda function remotely. Your best bet is to download the code to your local machine, set the same environment variables that are configured on your Lambda function, and take it from there.
Remember, at the end of the day a Lambda is just a container running the code for you, and AWS doesn't allow SSH or any other connection to those containers. In your case you should be able to debug locally as long as you have the same environment variables. There are other Lambda-specific factors as well, but since you have the running code, you should be able to find the issue.
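For example (the question's code is Node.js, but the pattern is the same; the handler module, variable name, and SNS payload here are invented for illustration):

    import json
    import os

    # Mirror the environment variables configured on the Lambda
    # (the name and value here are hypothetical).
    os.environ['TABLE_NAME'] = 'my-table'

    from handler import lambda_handler  # the downloaded Lambda code

    # A minimal SNS-shaped test event; real records carry more fields.
    event = {'Records': [{'Sns': {'Message': json.dumps({'id': 123})}}]}
    print(lambda_handler(event, None))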
Hope it makes sense.
Thundra (https://www.thundra.io/aws-lambda-debugger) has live/remote debugging support for AWS Lambda through its native IDE plugins (VSCode and IntelliJ IDEA).
The way AWS has you 'remote' debug is to execute the Lambda locally through Docker, using the AWS Toolkit, which proxies requests to the cloud for you. You have a Lambda running on your local computer via Docker that can access resources in the cloud, such as databases, APIs, etc. You can step through and debug it using editors like VS Code.
I use SAM with a template.yaml. This way, I can pass event data to the handler, reference dependency layers (shared code libraries), and have a deployment manifest for creating a CloudFormation stack (a release instance with history and resource management).
Debugging can be a bit slow, as it compiles, deploys to Docker, and invokes, but it allows step-through debugging and variable inspection.
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-using-debugging.html
While far from ideal, any console output is likely logged to CloudWatch, which you can then check to go through the printed data.
For local debugging, there are many GitHub projects with Dockerfiles from which you can build a Docker container locally, just like AWS does when your Lambda is invoked.
I have an AWS Windows Server 2016 VM. This VM has a bunch of libraries/software installed (dependencies).
Using python3, I'd like to launch and deploy multiple clones of this instance, so that I can use them almost like batch compute nodes in Azure.
I am not very familiar with AWS, but I did find this tutorial.
Unfortunately, it shows how to launch an instance from the store, not an existing configured one.
How would I do what I want to achieve? Should I create an AMI from my configured VM and then just launch that?
Any up-to-date links and/or advice would be appreciated.
Yes, you can create an AMI from the running instance and then launch N instances from that AMI. You can do both through the AWS console, or by calling boto3's create_image() and run_instances(). Alternatively, look at Packer for creating AMIs.
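For example, a minimal boto3 sketch of that flow (the instance ID, AMI name, instance type, and count are placeholders):

    import boto3

    ec2 = boto3.client('ec2')

    # Create an AMI from the configured instance (placeholder ID).
    image = ec2.create_image(
        InstanceId='i-0123456789abcdef0',
        Name='my-configured-windows-ami',
    )

    # Wait for the AMI to become available, then launch N clones from it.
    ec2.get_waiter('image_available').wait(ImageIds=[image['ImageId']])
    ec2.run_instances(
        ImageId=image['ImageId'],
        InstanceType='t3.large',
        MinCount=5,
        MaxCount=5,
    )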
You don't strictly need to create an AMI, though. You could simply bootstrap each instance as it launches, via a user data script or some form of configuration management like Ansible.
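A sketch of that user data variant with boto3 (the AMI ID and install command are placeholders; note that on Windows instances, user data runs as PowerShell inside <powershell> tags):

    import boto3

    ec2 = boto3.client('ec2')

    # On Windows instances, user data wrapped in <powershell> tags is
    # executed at first boot (the install command is a placeholder).
    user_data = """<powershell>
    choco install -y some-dependency
    </powershell>"""

    ec2.run_instances(
        ImageId='ami-0abcdef1234567890',  # placeholder stock Windows AMI
        InstanceType='t3.large',
        MinCount=5,
        MaxCount=5,
        UserData=user_data,
    )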
I have an EC2 instance running and have set it up to accept SFTP writes (unfortunately I have to use SFTP, so I am aware of better solutions but I can't use them). I have an S3 bucket mounted, but I ran into an issue with allowing SFTP writes directly into the bucket. My workaround is to run
aws s3 sync <directory> s3://<s3-bucket-name>/
And this works. My problem is that I don't know how to run this command automatically. I would prefer to run it whenever there is a write to a specified directory, but I will settle for running it at regular intervals.
So essentially my question is: how do I fire a script automatically on an AWS EC2 instance running Linux?
Thanks.
Use inotifywait as a file watcher, or use a cron job to kick off your S3 sync script at regular intervals.
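For the interval option, a minimal Python sketch that shells out to the same sync command (the directory and bucket are hypothetical stand-ins for the placeholders in your command; a plain cron entry running aws s3 sync would work just as well):

    import subprocess
    import time

    # Re-run the sync every 60 seconds; adjust the paths to your setup.
    while True:
        subprocess.run(
            ['aws', 's3', 'sync', '/home/sftp-user/uploads', 's3://my-bucket/'],
            check=False,  # keep looping even if one sync fails
        )
        time.sleep(60)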
I was wondering if I could use a shell script on a remote server to create an Amazon EC2 instance from an existing saved snapshot, and also delete that instance.
I was sure it was possible, but I haven't been able to get any example code working, so I'm starting to doubt it now.
So, can anyone tell me how this is done please?
There is a quite detailed description in the Amazon documentation of how to do this:
http://docs.aws.amazon.com/AWSEC2/latest/CommandLineReference/Welcome.html
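Equivalently, in Python with boto3 rather than a shell script, the lifecycle is a couple of API calls; a minimal sketch, assuming the snapshot has already been registered as an AMI (the IDs are placeholders):

    import boto3

    ec2 = boto3.client('ec2')

    # Launch an instance from the AMI registered from the snapshot (placeholder ID).
    reservation = ec2.run_instances(
        ImageId='ami-0abcdef1234567890',
        InstanceType='t2.micro',
        MinCount=1,
        MaxCount=1,
    )
    instance_id = reservation['Instances'][0]['InstanceId']

    # ...do the work, then tear the instance down.
    ec2.terminate_instances(InstanceIds=[instance_id])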
At which point are you struggling?