For an object segmentation process with Flask, I am using the pixellib library on an AWS EC2 instance (instance type t2.xlarge), but the process hangs at semantic_segmentation(). Can anyone advise how to solve this issue, or which server I should use instead?
I'm trying to create a file in my EC2 instance using the InitFile construct in CDK. Below is the code I'm using to create my EC2 instance, into which I'm trying to create a file textfile.txt containing the text 'welcome', going by the https://docs.aws.amazon.com/cdk/api/v1/python/aws_cdk.aws_ec2/InitFile.html reference.
During CDK initialisation:
init_data = ec2.CloudFormationInit.from_elements(
    ec2.InitFile.from_string("/home/ubuntu/textfile.txt", "welcome")
)
self.ec2_instance = ec2.Instance(self,
    id='pytenv-instance',
    vpc=self.vpc,
    instance_type=ec2.InstanceType.of(ec2.InstanceClass.BURSTABLE2, ec2.InstanceSize.NANO),
    machine_image=ec2.MachineImage.generic_linux(
        {'us-east-1': 'ami-083654bd07b5da81d'}
    ),
    key_name="demokeyyt18",
    security_group=self.sg,
    vpc_subnets=ec2.SubnetSelection(
        subnet_type=ec2.SubnetType.PUBLIC
    ),
    init=init_data,
)
From the EC2 configuration it is evident that the machine image here is Ubuntu. I am getting this error: Failed to receive 1 resource signal(s) within the specified duration.
Am I missing something? Any inputs?
UPDATE: The same code works when the EC2 machine image is Amazon Linux, but not for Ubuntu. Am I doing something wrong?
CloudFormation init requires the presence of the cfn-init helper script on the instance. Ubuntu does not come with it, so you have to set it up yourself.
Here's the AWS guide that contains links to the installation scripts for Ubuntu 16.04/18.04/20.04. You need to add these to the user_data prop of your instance; then CloudFormation init will work, as in the sketch below.
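As a minimal sketch (untested, assuming the pip-based helper-script installation from that guide, and that CDK appends its own cfn-init invocation after your commands), you could install the helpers through user_data before passing init to the instance:
user_data = ec2.UserData.for_linux()
user_data.add_commands(
    # install the aws-cfn-bootstrap helper scripts (URL taken from the AWS guide)
    "apt-get update -y",
    "apt-get install -y python3-pip",
    "pip3 install https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-py3-latest.tar.gz",
)
# then pass user_data=user_data alongside init=init_data to ec2.Instance(...)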
If you just want to create a file when the instance starts, though, you don't have to use cfn-init at all - you could just supply the command that creates your file to the user_data prop directly:
self.ec2_instance.user_data.add_commands("echo welcome > /home/ubuntu/textfile.txt")
I have a Python script which takes a video and converts it to a series of small panoramas. Now, there's an S3 bucket where a video will be uploaded (mp4). I need this file to be sent to the EC2 instance whenever it is uploaded.
This is the flow:
Upload video file to S3.
This should trigger the EC2 instance to start.
Once it is running, I want the file to be copied to a particular directory inside the instance.
After this, I want the Python file (panorama.py) to start running, read the video file from the directory, process it, and generate output images.
These output images need to be uploaded to a new bucket, or the same bucket which was initially used.
The instance should terminate after this.
What I have done so far: I have created a Lambda function that is triggered whenever an object is added to that bucket, and it stores the name and path of the file. I have read that I now need to use an SQS queue, pass this name and path metadata to the queue, and use SQS to trigger the instance. Then I need to run a script on the instance which pulls the metadata from the SQS queue and uses it to copy the file (mp4) from the bucket to the instance.
How do I do this?
I am new to AWS and hence do not know much about SQS, or how to transfer metadata and automatically trigger an instance.
Your wording is a bit confusing. It says that you want to "start" an instance (which suggests that the instance already exists), but then it says that you want to "terminate" the instance (which would permanently remove it). I am going to assume that you actually intend to "stop" the instance so that it can be used again.
You can put a shell script in the /var/lib/cloud/scripts/per-boot/ directory. This script will then be executed every time the instance starts.
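Cloud-init runs any executable file in that directory, so a Python script with a shebang works just as well as a shell script. A minimal sketch (untested; the queue URL, all paths, and the "bucket,key" message format are placeholders based on your description):
#!/usr/bin/env python3
# Example per-boot job, e.g. /var/lib/cloud/scripts/per-boot/process_video.py
# (must be marked executable). QUEUE_URL and all paths are placeholders.
import subprocess
import boto3

QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/video-jobs'  # placeholder

sqs = boto3.client('sqs')
s3 = boto3.client('s3')

resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1)
for msg in resp.get('Messages', []):
    bucket, key = msg['Body'].split(',')  # assumes the Lambda queued "bucket,key"
    s3.download_file(bucket, key, '/home/ubuntu/input.mp4')
    # run the processing script on the downloaded video
    subprocess.run(['python3', '/home/ubuntu/panorama.py', '/home/ubuntu/input.mp4'], check=True)
    s3.upload_file('/home/ubuntu/output.jpg', bucket, f'output/{key}.jpg')  # placeholder output path
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg['ReceiptHandle'])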
When the instance has finished processing, it can call sudo shutdown -h now to turn off the instance. (Alternatively, it can tell EC2 to stop the instance, but using shutdown is easier.)
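A minimal sketch of that alternative (assuming the instance profile allows ec2:StopInstances, and that IMDSv1 is available; with IMDSv2 enforced you would need to fetch a session token first):
import boto3
import urllib.request

# discover this instance's own ID via the instance metadata service
instance_id = urllib.request.urlopen(
    'http://169.254.169.254/latest/meta-data/instance-id'
).read().decode()

boto3.client('ec2').stop_instances(InstanceIds=[instance_id])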
For details, see: Auto-Stop EC2 instances when they finish a task - DEV Community
I tried to answer in the most minimalist way; there are many points below that can be further improved. Even so, I think it is still quite a lot, as you mentioned you are new to AWS.
Using AWS Lambda with Amazon S3
Amazon S3 can send an event to a Lambda function when an object is created or deleted. You configure notification settings on a bucket, and grant Amazon S3 permission to invoke a function on the function's resource-based permissions policy.
When the object is uploaded, it will trigger the Lambda function, which creates the instance with EC2 user data (see Run commands on your Linux instance at launch).
For the EC2 instance, make sure you provide the necessary permissions for downloading and uploading the objects via Using instance profiles.
The user data has a script that does the rest of the work needed for your workflow:
Download the S3 object; you can pass the object name and S3 bucket name in the same script.
Once #1 is finished, start panorama.py, which processes the video.
In the next step you can start uploading the output images to the S3 bucket.
Terminating the instance at the end will be a bit tricky; you can achieve it via Change the instance initiated shutdown behavior.
OR
you can use the method below for terminating the instance, but in that case your EC2 instance profile must have permission to terminate instances:
aws ec2 terminate-instances --instance-ids $(curl -s http://169.254.169.254/latest/meta-data/instance-id)
You can wrap the above steps into a shell script inside the user data.
Lambda code to launch the EC2 instance:
def launch_instance(EC2, config, user_data):
    # EC2 is a boto3 EC2 client; config is a dict of launch parameters
    tag_specs = [{
        'ResourceType': 'instance',
        'Tags': [{'Key': 'Name', 'Value': config['name']}],  # assumes config carries a name
    }]
    ec2_response = EC2.run_instances(
        ImageId=config['ami'],  # e.g. ami-0123b531fc646552f
        InstanceType=config['instance_type'],
        KeyName=config['ssh_key_name'],
        MinCount=1,
        MaxCount=1,
        SecurityGroupIds=config['security_group_ids'],
        TagSpecifications=tag_specs,
        UserData=user_data  # boto3 base64-encodes the user data string for you
    )
    new_instance_resp = ec2_response['Instances'][0]
    instance_id = new_instance_resp['InstanceId']
    print(f"[DEBUG] Full ec2 instance response data for '{instance_id}': {new_instance_resp}")
    return (instance_id, new_instance_resp)
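To tie the pieces together, here is a minimal sketch (untested) of a Lambda handler that reads the uploaded object's location from the S3 event and feeds a user-data script to launch_instance. The AMI, key pair, security group, and output paths are placeholders, and it assumes the AMI already has the AWS CLI and panorama.py installed:
import boto3

EC2 = boto3.client('ec2')

def lambda_handler(event, context):
    # pull the bucket and key of the uploaded video from the S3 event
    record = event['Records'][0]
    bucket = record['s3']['bucket']['name']
    key = record['s3']['object']['key']
    # user data: download the video, process it, upload the results, power off
    user_data = f"""#!/bin/bash
aws s3 cp s3://{bucket}/{key} /home/ubuntu/input.mp4
python3 /home/ubuntu/panorama.py /home/ubuntu/input.mp4
aws s3 cp /home/ubuntu/output/ s3://{bucket}/output/ --recursive
shutdown -h now
"""
    config = {
        'ami': 'ami-0123b531fc646552f',                  # placeholder AMI
        'instance_type': 't2.micro',
        'ssh_key_name': 'ec2-keypair',                   # placeholder key pair
        'security_group_ids': ['sg-0123456789abcdef0'],  # placeholder security group
        'name': 'panorama-worker',
    }
    return launch_instance(EC2, config, user_data)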
I recently started learning about AWS EC2 services. Yesterday I launched an EC2 instance through the console, describing my launch configuration (attached: Review Launch Configurations), and I am able to SSH in successfully with my pem file. But I also wanted to launch an EC2 instance using boto3 in Python, which I did. However, I can't SSH into the EC2 instance launched through the script below.
#!/usr/bin/python
import boto3
client = boto3.client('ec2')
response = client.run_instances(
    BlockDeviceMappings=[
        {
            'DeviceName': '/dev/xvda',
            'Ebs': {
                'DeleteOnTermination': True,
            },
        },
    ],
    ImageId='ami-04590e7389a6e577c',
    InstanceType='t2.micro',
    KeyName='ec2-keypair',
    MaxCount=1,
    MinCount=1,
    Monitoring={
        'Enabled': False
    },
)
for instance in response['Instances']:
    print(instance['InstanceId'])
The above script is able to launch an EC2 instance, but I am unable to log in from the Ubuntu subsystem.
Troubleshooting done so far: I grabbed the details of the instance I launched from the AWS page using client.describe_instances(), and defined the client.run_instances() call above accordingly.
Please educate me: why am I unable to SSH when I launch an EC2 instance through the above script, whereas I can SSH into an EC2 instance launched from the AWS page?
I really appreciate your expertise on this.
Your run_instances() call does not specify a Security Group to attach, so the instance falls back to the VPC's default Security Group and no inbound SSH access will be available.
When launching an Amazon EC2 instance through the management console, there is a default "Launch Wizard" Security Group provided that includes SSH (port 22) inbound access, as shown in your question.
If you want to permit this for the instance launched via code, you will need to reference an existing Security Group that has the desired ports open.
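For example, a minimal sketch of the same call with an existing Security Group attached (the sg- ID below is a placeholder for a group that allows inbound TCP 22):
response = client.run_instances(
    ImageId='ami-04590e7389a6e577c',
    InstanceType='t2.micro',
    KeyName='ec2-keypair',
    MaxCount=1,
    MinCount=1,
    SecurityGroupIds=['sg-0123456789abcdef0'],  # placeholder: a group with port 22 open
)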
I'm using an RDS instance and I want to know the status of the instance from an AWS Lambda function written in Node.js.
You can use the describeDBInstances() method of the AWS SDK for Node.js. List of instance statuses can be found here: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.DBInstance.Status.html
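Since the other examples in this thread are in Python, here is the boto3 equivalent as a minimal sketch (the instance identifier is a placeholder; the underlying DescribeDBInstances API is the same one the Node.js SDK calls):
import boto3

rds = boto3.client('rds')
response = rds.describe_db_instances(DBInstanceIdentifier='mydbinstance')  # placeholder identifier
print(response['DBInstances'][0]['DBInstanceStatus'])  # e.g. 'available', 'stopped'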
I looked in the AWS documentation but can't find whether a stopped instance will launch with an IP once I start it. I'm scripting something with boto3 and Python, but neither boto3.resource nor boto3.client has given me the information.
The public IP is released when you stop the instance. For a stopped instance:
import boto3
session = boto3.Session(profile_name='your_profile')
ec2 = session.resource('ec2')
instance = ec2.Instance('i-09f00f00f00')
print(instance.public_ip_address)
Returns: None
To get the private IP from your instance ID (running or stopped instance):
print(instance.network_interfaces_attribute[0]['PrivateIpAddress'])
Returns: 10.0.0.200
Reference: Boto / EC2 / Instance / network_interfaces_attribute