I have a Node + Express application running on an EC2 server and I'm trying to add a new search feature to it. I'm thinking about using a Lambda function and Elasticsearch: when the client fires a request that updates a table in DynamoDB, the Lambda function will react to this event and update the Elasticsearch index.
I know Lambda runs serverless, whereas my original application runs within a server. Can anybody give me some hints about how to do this, or let me know if it's even possible?
The link between a DynamoDB update and a Lambda is "DynamoDB Streams".
The documentation says, in part,
Amazon DynamoDB is integrated with AWS Lambda so that you can create triggers—pieces of code that automatically respond to events in DynamoDB Streams. With triggers, you can build applications that react to data modifications in DynamoDB tables.
If you enable DynamoDB Streams on a table, you can associate the stream Amazon Resource Name (ARN) with an AWS Lambda function that you write. Immediately after an item in the table is modified, a new record appears in the table's stream. AWS Lambda polls the stream and invokes your Lambda function synchronously when it detects new stream records.
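To make this concrete, here is a minimal sketch of a stream-triggered handler that mirrors each change into an Elasticsearch index. It assumes the stream is configured to include new images, an id partition key, an Elasticsearch endpoint reachable from the Lambda, and the requests library packaged with the function; all names are illustrative.

import requests  # not in the Lambda runtime by default; package it with the function

# Hypothetical endpoint and index; replace with your Elasticsearch domain
ES_URL = "https://my-es-domain.example.com/products/_doc"

def handler(event, context):
    for record in event["Records"]:
        if record["eventName"] in ("INSERT", "MODIFY"):
            # NewImage is present when the stream view type includes new images
            new_image = record["dynamodb"]["NewImage"]
            doc_id = new_image["id"]["S"]
            # Flatten simple string/number attributes (illustrative only)
            doc = {k: v.get("S", v.get("N")) for k, v in new_image.items()}
            requests.put(f"{ES_URL}/{doc_id}", json=doc)
        elif record["eventName"] == "REMOVE":
            doc_id = record["dynamodb"]["Keys"]["id"]["S"]
            requests.delete(f"{ES_URL}/{doc_id}")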
For an object segmentation process with Flask, I am using the pixellib library on an AWS EC2 instance (instance type t2.xlarge), but the process hangs at semantic_segmentation(). Can anyone please advise how to solve this issue, or which server I need to use?
I have a requirement to check, from one Lambda, whether another specific Lambda is currently executing.
I went through the boto3 documentation and found that the get_function() method returns the State of the function. But this isn't the execution state; it only indicates whether the function's creation is Pending/Inactive/Active, etc.
Is there a way I can find out the execution state of a Lambda from another Lambda?
Any help or guidance is appreciated.
Documentation Link
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/lambda.html#Lambda.Client.get_function
You can use CloudWatch metrics of your lambda function to determine whether it's currently being invoked. https://docs.aws.amazon.com/lambda/latest/dg/monitoring-metrics.html#monitoring-metrics-types
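As a sketch, you could query the ConcurrentExecutions metric for the function with boto3. Note that CloudWatch metrics lag behind real time by a minute or more, so this is an approximation rather than an exact "is it running right now" check; the function name is a placeholder.

import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch")

def is_function_busy(function_name):
    # Look at the last five minutes of the ConcurrentExecutions metric
    now = datetime.now(timezone.utc)
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/Lambda",
        MetricName="ConcurrentExecutions",
        Dimensions=[{"Name": "FunctionName", "Value": function_name}],
        StartTime=now - timedelta(minutes=5),
        EndTime=now,
        Period=60,
        Statistics=["Maximum"],
    )
    # Any nonzero datapoint means the function had executions in flight
    return any(dp["Maximum"] > 0 for dp in resp.get("Datapoints", []))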
I have a Python script which takes a video and converts it to a series of small panoramas. There's an S3 bucket where a video (mp4) will be uploaded, and I need this file to be sent to an EC2 instance whenever it is uploaded.
This is the flow:
1. Upload the video file to S3.
2. This should trigger the EC2 instance to start.
3. Once it is running, the file should be copied to a particular directory inside the instance.
4. After this, the py file (panorama.py) should start running, read the video file from that directory, process it, and generate the output images.
5. These output images need to be uploaded to a new bucket, or the same bucket that was initially used.
6. The instance should terminate after this.
What I have done so far is create a Lambda function that is triggered whenever an object is added to that bucket; it stores the name of the file and the path. I had read that I now need to use an SQS queue, pass this name and path metadata to the queue, and use SQS to trigger the instance. Then I need to run a script in the instance which pulls the metadata from the SQS queue and uses it to copy the file (mp4) from the bucket to the instance.
How do I do this?
I am new to AWS and hence do not know much about SQS, how to transfer metadata, how to automatically trigger an instance, etc.
Your wording is a bit confusing: you say you want to "start" an instance (which suggests that the instance already exists), but then that you want to "terminate" it (which would permanently remove it). I am going to assume that you actually intend to "stop" the instance so that it can be used again.
You can put a shell script in the /var/lib/cloud/scripts/per-boot/ directory. This script will then be executed every time the instance starts.
When the instance has finished processing, it can call sudo shutdown -h now to turn off the instance. (Alternatively, it can tell EC2 to stop the instance, but using shutdown is easier.)
For details, see: Auto-Stop EC2 instances when they finish a task - DEV Community
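A minimal sketch of such a per-boot script, following the SQS approach described in the question; the queue URL, message format, and paths are hypothetical, and it assumes boto3 is installed on the instance:

#!/usr/bin/env python3
# Saved as e.g. /var/lib/cloud/scripts/per-boot/process-video.py (made executable)
import subprocess
import boto3

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/video-jobs"  # placeholder

sqs = boto3.client("sqs")
s3 = boto3.client("s3")

resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=10)
for msg in resp.get("Messages", []):
    bucket, key = msg["Body"].split("/", 1)  # assumes the message body is "bucket/key"
    s3.download_file(bucket, key, "/home/ec2-user/input.mp4")
    subprocess.run(["python3", "/home/ec2-user/panorama.py", "/home/ec2-user/input.mp4"], check=True)
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])

# Per-boot scripts run as root, so no sudo is needed to power off
subprocess.run(["shutdown", "-h", "now"])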
I have tried to answer in the most minimalist way; many of the points below can be improved further. Even so, I think this is still quite a lot to take in, since you mentioned you are new to AWS.
Using AWS Lambda with Amazon S3
Amazon S3 can send an event to a Lambda function when an object is created or deleted. You configure notification settings on a bucket, and grant Amazon S3 permission to invoke a function on the function's resource-based permissions policy.
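For example, the notification can be wired up once with boto3; the bucket name and function ARN below are placeholders, and as the quoted docs note, S3 must also be granted permission to invoke the function:

import boto3

s3 = boto3.client("s3")

# One-time setup: invoke the Lambda whenever an object is created in the bucket
s3.put_bucket_notification_configuration(
    Bucket="my-video-bucket",
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [{
            "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:start-processor",
            "Events": ["s3:ObjectCreated:*"],
        }]
    },
)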
When the object is uploaded, it will trigger the Lambda function, which creates the instance with EC2 user data (see Run commands on your Linux instance at launch).
For the EC2 instance, make sure you provide the necessary permissions for downloading and uploading the objects via an instance profile (see Using instance profiles).
The user data has a script that does the rest of the work you need for your workflow:
1. Download the S3 object; you can pass the object name and S3 bucket name in the same script.
2. Once #1 has finished, start panorama.py, which processes the videos.
3. In the next step, upload the output objects to the S3 bucket.
Finally, terminating the instance will be a bit tricky; you can achieve it by changing the instance-initiated shutdown behavior (see Change the instance initiated shutdown behavior).
OR
you can use the method below for terminating the instance, but in that case your EC2 instance profile must have permission to terminate the instance.
aws ec2 terminate-instances --instance-ids $(curl -s http://169.254.169.254/latest/meta-data/instance-id)
You can wrap the above steps into a shell script inside the userdata.
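A sketch of what that user data could look like, built as a Python helper the Lambda can call; the paths and panorama.py's command-line interface are assumptions, and the AMI is assumed to have the AWS CLI installed:

def build_user_data(bucket, key):
    # The returned shell script runs once at launch via EC2 user data
    return f"""#!/bin/bash
aws s3 cp "s3://{bucket}/{key}" /home/ec2-user/input.mp4
python3 /home/ec2-user/panorama.py /home/ec2-user/input.mp4 /home/ec2-user/output/
aws s3 cp /home/ec2-user/output/ "s3://{bucket}/output/" --recursive
# Becomes a terminate if the instance-initiated shutdown behavior is set accordingly
shutdown -h now
"""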
Lambda code to launch the EC2 instance:
def launch_instance(EC2, config, user_data):
    # Minimal tag specification; adjust the tags to your needs
    tag_specs = [{
        'ResourceType': 'instance',
        'Tags': [{'Key': 'Name', 'Value': config.get('name', 'video-processor')}],
    }]
    ec2_response = EC2.run_instances(
        ImageId=config['ami'],  # e.g. ami-0123b531fc646552f
        InstanceType=config['instance_type'],
        KeyName=config['ssh_key_name'],
        MinCount=1,
        MaxCount=1,
        SecurityGroupIds=config['security_group_ids'],
        TagSpecifications=tag_specs,
        # boto3 base64-encodes UserData for you, so pass the plain string:
        UserData=user_data
    )
    new_instance_resp = ec2_response['Instances'][0]
    instance_id = new_instance_resp['InstanceId']
    print(f"[DEBUG] Full EC2 instance response data for '{instance_id}': {new_instance_resp}")
    return (instance_id, new_instance_resp)
Upload file to S3 -> Launch EC2 instance
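Putting it together, a hedged sketch of the Lambda handler that reads the bucket and key from the S3 event and calls the two helpers above; the config values are placeholders:

import boto3

def lambda_handler(event, context):
    EC2 = boto3.client("ec2")
    # The S3 event carries the bucket and object key of the uploaded video
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    config = {
        "ami": "ami-0123b531fc646552f",  # placeholder values
        "instance_type": "t2.xlarge",
        "ssh_key_name": "my-key",
        "security_group_ids": ["sg-0123456789abcdef0"],
    }
    instance_id, _ = launch_instance(EC2, config, build_user_data(bucket, key))
    print(f"Launched {instance_id} for s3://{bucket}/{key}")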
I created a simple Node.js script to connect to an RDS instance, but sadly it always fails with a timeout error. It is strange because it works perfectly from my machine.
The instance is publicly accessible, and I have set context.callbackWaitsForEmptyEventLoop = false;
Do you have any ideas?
Lambda has a timeout property. Have you checked this configuration for the Lambda via the console (Basic settings) or in your deployment template file?
I had the same error; I fixed it by increasing the timeout.
In the new AWS UI:
Lambda function -> Configuration -> General configuration -> Edit -> Timeout
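If you prefer to set it programmatically, a small boto3 sketch (the function name is a placeholder):

import boto3

lambda_client = boto3.client("lambda")

# Raise the timeout to 30 seconds (the default is 3; the maximum is 900)
lambda_client.update_function_configuration(
    FunctionName="my-rds-function",
    Timeout=30,
)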