Determine instance IP or DNS for Stopped instance with no EIP assigned - python-3.x

I looked in the AWS documentation but can't find whether a stopped instance will get an IP address once I start it. I'm scripting this with boto3 and Python, but neither boto3.resource nor boto3.client has given me the information I need.

The public IP is released when you stop the instance. For a stopped instance:
import boto3
session = boto3.Session(profile_name='your_profile')
ec2 = session.resource('ec2')
instance = ec2.Instance('i-09f00f00f00')
print(instance.public_ip_address)
Returns: None
To get the private IP from your instance ID (running or stopped instance):
print(instance.network_interfaces_attribute[0]['PrivateIpAddress'])
Returns: 10.0.0.200
Reference: Boto / EC2 / Instance / network_interfaces_attribute
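If you are working with the low-level client instead of the resource API, a roughly equivalent lookup is sketched below (the profile name is a placeholder, and the instance ID is the one from the answer above):
import boto3

# The profile name and instance ID below are placeholders.
session = boto3.Session(profile_name='your_profile')
client = session.client('ec2')

resp = client.describe_instances(InstanceIds=['i-09f00f00f00'])
instance_data = resp['Reservations'][0]['Instances'][0]

# 'PublicIpAddress' is absent while the instance is stopped (and no EIP is attached),
# so use .get() to avoid a KeyError.
print(instance_data.get('PublicIpAddress'))  # None for a stopped instance
print(instance_data['PrivateIpAddress'])     # the private IP persists across stop/start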

Related

pixellib is hanging on aws ec2 instance

For an object segmentation process with Flask I am using the pixellib library on an AWS EC2 instance (instance type t2.xlarge), but the process hangs at semantic_segmentation(). Can anyone advise how to solve this issue, or which server I need to use?

How to create user-data in Windows EC2 Instance using Boto3?

I have created a Boto3 script to launch a Windows EC2 instance with User Data (a batch script). When I run my boto3 script, it launches the instance successfully, but the user data is not applied to my Windows EC2 instance. I have checked Stack Overflow for solutions, but everything is about user data with Linux-based EC2 instances. I have attached my boto3 script.
I didn't find a solution, which is why I created a new question.
import boto3

ec2Resource = boto3.resource('ec2', region_name='us-west-2')

dryRun = False  # set to True to validate permissions without actually launching
windata = '''<script>net user /add Latchu ABC#2020</script>'''

# Create the instance
instanceDict = ec2Resource.create_instances(
    DryRun=dryRun,
    ImageId="ami-xxxxxxxxx",
    KeyName="ZabbixServerPrivateKey",
    InstanceType="t2.micro",
    SecurityGroupIds=["sg-xxxxx"],
    MinCount=1,
    MaxCount=1,
    UserData=windata
)
I created the question above to gather solutions, but it turned out to be a simple issue that I have just found: the boto3 code is exactly right.
Why was the user data not applied? Because the password did not allow the simple user-creation command to run. When I used a simple password rather than a complex one (an 8-character password instead of a 22-character one), the user data was applied.

Unable to SSH into launched EC2 instance when launched through the below script, but able to SSH when launched from the AWS page

I recently started learning AWS EC2 services. Yesterday I launched an EC2 instance through the console, describing my launch configuration (Review Launch Configuration attached), and I am able to successfully SSH into it with my .pem file. I also wanted to launch an EC2 instance using boto3 and Python, which I did, but I can't SSH into the EC2 instance launched through the script below.
#!/usr/bin/python
import boto3

client = boto3.client('ec2')

response = client.run_instances(
    BlockDeviceMappings=[
        {
            'DeviceName': '/dev/xvda',
            'Ebs': {
                'DeleteOnTermination': True,
            },
        },
    ],
    ImageId='ami-04590e7389a6e577c',
    InstanceType='t2.micro',
    KeyName='ec2-keypair',
    MaxCount=1,
    MinCount=1,
    Monitoring={
        'Enabled': False
    },
)

for instance in response['Instances']:
    print(instance['InstanceId'])
The above script is able to launch an EC2 instance, but I am unable to log in from the Ubuntu subsystem.
Troubleshooting done so far: I grabbed the details of the EC2 instance I launched from the AWS page using client.describe_instances() and used them to define the client.run_instances() call above.
Please explain why I am unable to SSH when I launch the EC2 instance through the above script, whereas I can SSH into an EC2 instance launched from the AWS page.
I really appreciate your expertise on this.
Your run_instances() call does not specify a Security Group to attach. Therefore, no inbound access will be available.
When launching an Amazon EC2 instance through the management console, there is a default "Launch Wizard" Security Group provided that includes SSH (port 22) inbound access, as shown in your question.
If you want to permit this for the instance launched via code, you will need to reference an existing Security Group that has the desired ports open.
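As a sketch, the only change to the run_instances() call in the question would be to add the SecurityGroupIds parameter (the group ID below is a placeholder for an existing group that already allows inbound port 22):
response = client.run_instances(
    ImageId='ami-04590e7389a6e577c',
    InstanceType='t2.micro',
    KeyName='ec2-keypair',
    MaxCount=1,
    MinCount=1,
    # Reference an existing security group that permits inbound SSH (port 22).
    # 'sg-0123456789abcdef0' is a placeholder for your own group ID.
    SecurityGroupIds=['sg-0123456789abcdef0'],
)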

Can't sync s3 with ec2 folder from aws lambda

I am trying to automate data processing using AWS. I have set up an AWS Lambda function in Python that:
Gets triggered by an S3 PUT event
SSHs into an EC2 instance using a paramiko layer
Copies the new objects from the bucket into a folder in the instance, unzips the file inside the instance, and runs a Python script that cleans the CSV files.
The problem is that the aws cli call to sync the S3 bucket with the EC2 folder is not working, but when I manually SSH into the EC2 instance and run the command, it works. My aws-cli is configured with my access keys, and the EC2 instance has an S3 role that allows it full access.
import boto3
import time
import paramiko
def lambda_handler(event, context):
    # Create a low-level client representing S3
    s3 = boto3.client('s3')

    ec2 = boto3.resource('ec2', region_name='eu-west-a')
    instance_id = 'i-058456c79fjcde676'
    instance = ec2.Instance(instance_id)
    # ------------------------------------------------------

    # Start the instance
    instance.start()

    # Allow some time for the instance to start
    time.sleep(30)

    # Print a few details of the instance
    print("Instance id - ", instance.id)
    print("Instance public IP - ", instance.public_ip_address)
    print("Instance private IP - ", instance.private_ip_address)
    print("Public dns name - ", instance.public_dns_name)
    print("----------------------------------------------------")

    print('Downloading pem file')
    s3.download_file('some_bucket', 'some_pem_file.pem', '/tmp/some_pem_file.pem')

    # Allowing a few seconds for the download to complete
    print('waiting for instance to start')
    time.sleep(30)

    print('sshing to instance')
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    privkey = paramiko.RSAKey.from_private_key_file('/tmp/some_pem_file.pem')
    # username is most likely 'ec2-user' or 'root' or 'ubuntu'
    # depending upon your ec2 AMI
    # s3_path = "s3://some_bucket/" + object_name
    ssh.connect(instance.public_dns_name, username='ubuntu', pkey=privkey)

    print('inside machine...running commands')
    stdin, stdout, stderr = ssh.exec_command('aws s3 sync s3://some_bucket/ ~/ec2_folder;\
        bash ~/ec2_folder/unzip.sh; python3 ~/ec2_folder/process.py;')
    stdin.flush()
    data = stdout.read().splitlines()
    for line in data:
        print(line)

    print('done, closing ssh session')
    ssh.close()

    # Stop the instance
    instance.stop()

    return('Triggered')
The use of an SSH tool is somewhat unusual.
Here are a few more 'cloud-friendly' options you might consider.
Systems Manager Run Command
The AWS Systems Manager Run Command allows you to execute a script on an Amazon EC2 instance (and, in fact, on any computer that is running the Systems Manager agent). It can even run the command on many (hundreds!) of instances/computers at the same time, keeping track of the success of each execution.
This means that, instead of connecting to the instance via SSH, the Lambda function could call the Run Command via an API call and Systems Manager would run the code on the instance.
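As an illustration, a minimal sketch of such a call from the Lambda function might look like the following (the instance ID and commands are placeholders, and the instance must be running the SSM agent with an instance profile that permits Systems Manager):
import boto3

ssm = boto3.client('ssm')

# Ask Systems Manager to run a shell script on the instance.
# 'i-0123456789abcdef0' is a placeholder; the commands mirror the ones
# from the question and are also placeholders.
response = ssm.send_command(
    InstanceIds=['i-0123456789abcdef0'],
    DocumentName='AWS-RunShellScript',
    Parameters={
        'commands': [
            'aws s3 sync s3://some_bucket/ /home/ubuntu/ec2_folder',
            'bash /home/ubuntu/ec2_folder/unzip.sh',
            'python3 /home/ubuntu/ec2_folder/process.py',
        ]
    },
)
print(response['Command']['CommandId'])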
Pull, Don't Push
Rather than 'pushing' the work to the instance, the instance could 'pull the work':
Configure the Amazon S3 event to push a message into an Amazon SQS queue
Code on the instance could regularly poll the SQS queue (a minimal sketch of the polling loop follows this list)
When it finds a message on the queue, it runs a script that downloads the file (the bucket and key are passed in the message) and then runs the processing script
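A minimal sketch of that polling loop, as it might run on the instance (the queue URL and script path are placeholders):
import subprocess
import boto3

sqs = boto3.client('sqs')
queue_url = 'https://sqs.eu-west-1.amazonaws.com/123456789012/processing-queue'  # placeholder

while True:
    # Long-poll the queue for up to 20 seconds
    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=20)
    for message in resp.get('Messages', []):
        # The S3 event (bucket and key) arrives in the message body
        print('Processing message:', message['Body'])
        subprocess.run(['python3', '/home/ubuntu/ec2_folder/process.py'], check=False)  # placeholder script
        # Delete the message so it is not processed again
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message['ReceiptHandle'])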
Trigger via HTTP
The instance could run a web server, listening for a message.
Configure the Amazon S3 event to push a message into an Amazon SNS topic
Add the instance's URL as an HTTP subscription to the SNS topic
When a message is sent to SNS, it forwards it to the instance's URL
Code in the web server then triggers your script
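A very rough sketch of such a listener, using only the Python standard library (the port and script path are placeholders, and a production endpoint should also verify the SNS message signature):
import json
import subprocess
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class SnsHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers['Content-Length']))
        message = json.loads(body)

        if message.get('Type') == 'SubscriptionConfirmation':
            # Confirm the SNS subscription by visiting the provided URL
            urllib.request.urlopen(message['SubscribeURL'])
        elif message.get('Type') == 'Notification':
            # The S3 event details are in the 'Message' field; trigger the processing script
            print('Received notification:', message['Message'])
            subprocess.run(['python3', '/home/ubuntu/ec2_folder/process.py'], check=False)  # placeholder

        self.send_response(200)
        self.end_headers()

HTTPServer(('0.0.0.0', 8080), SnsHandler).serve_forever()  # placeholder port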
This answer is based on the additional information that you wish to shutdown the EC2 instance between executions.
I would recommend:
Amazon S3 Event triggers Lambda function
Lambda function starts the instance, passing filename information via the User Data field (it can be used to pass data, not just scripts); a sketch of such a function appears after this list. The Lambda function can then immediately exit (which is more cost-effective than waiting for the job to complete)
Put your processing script in the /var/lib/cloud/scripts/per-boot/ directory, which will cause it to run every time the instance is started (every time, not just the first time)
The script can extract the User Data passed from the Lambda function by calling curl http://169.254.169.254/latest/user-data/, so that it knows the filename from S3
The script then processes the file
The script then runs sudo shutdown now -h to stop the instance
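A sketch of what the Lambda function from the first step might look like (the instance ID is a placeholder; note that User Data can only be modified while the instance is stopped, and setting it this way overwrites whatever was there before):
import json
import boto3

ec2 = boto3.client('ec2')
INSTANCE_ID = 'i-0123456789abcdef0'  # placeholder

def lambda_handler(event, context):
    # Extract bucket and key from the S3 event record
    record = event['Records'][0]
    bucket = record['s3']['bucket']['name']
    key = record['s3']['object']['key']

    # Pass the filename to the instance via the User Data field.
    # This only works while the instance is stopped, and it replaces
    # any existing User Data.
    ec2.modify_instance_attribute(
        InstanceId=INSTANCE_ID,
        UserData={'Value': json.dumps({'bucket': bucket, 'key': key}).encode()},
    )

    # Start the instance and exit immediately; the per-boot script on the
    # instance does the processing and shuts the instance down afterwards.
    ec2.start_instances(InstanceIds=[INSTANCE_ID])
    return 'Started'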
If there is a chance that another file might come while the instance is already processing a file, then I would slightly change the process:
Rather than passing the filename via User Data, put it into an Amazon SQS queue
When the instance is started, it should retrieve the details from the SQS queue
After the file is processed, it should check the queue again to see if another message has been sent
If yes, process the file and repeat
If no, shut itself down
By the way, things can sometimes go wrong, so it's worth putting a 'circuit breaker' in the script so that it does not shut down the instance if you want to debug things. This could be a matter of passing a flag, or even adding a tag to the instance, which is checked before calling the shutdown command.
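For example, a sketch of a tag-based circuit breaker (the 'debug' tag name is just an illustration), run on the instance before shutting down:
import subprocess
import urllib.request
import boto3

# Discover this instance's own ID from the instance metadata service
instance_id = urllib.request.urlopen(
    'http://169.254.169.254/latest/meta-data/instance-id').read().decode()

ec2 = boto3.resource('ec2')
tags = {t['Key']: t['Value'] for t in (ec2.Instance(instance_id).tags or [])}

# Only shut down if the (illustrative) 'debug' tag is not set on the instance
if tags.get('debug') != 'true':
    subprocess.run(['sudo', 'shutdown', '-h', 'now'], check=False)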

Start VM from powered off state using pyvmomi

So, I am trying to make a Python script using pyvmomi to control the state of a virtual machine I'm running on my ESXi server. Basically, I tried using connection.content.searchIndex.FindByIp(ip="the ip of the VM", vmSearch=True) to grab my VM and then power it on, but of course I cannot get the IP of the VM when it's off. So, I was wondering if there was any way I could get the VM, maybe by name or its ID? I searched around quite a bit but couldn't really find a solution. Either way, here's my code so far:
from pyVim import connect
# Connect to ESXi host
connection = connect.Connect("192.168.182.130", 443, "root", "password")
# Get a searchIndex object
searcher = connection.content.searchIndex
# Find a VM
vm = searcher.FindByIp(ip="192.168.182.134", vmSearch=True)
# Print out vm name
print (vm.config.name)
# Disconnect from cluster or host
connect.Disconnect(connection)
The SearchIndex doesn't have any method to do a 'find by name', so you'll probably have to resort to pulling back all of the VMs and filtering through them client-side.
Here's an example of returning all the VMs: https://github.com/vmware/pyvmomi-community-samples/blob/master/samples/getallvms.py
Another option, if you're using vCenter 6.5+, there's the vSphere Automation SDK for Python where you can interact with the REST APIs to do a server side filter. More info: https://github.com/vmware/vsphere-automation-sdk-python
This code might prove helpful:
from pyVim.connect import SmartConnect
from pyVmomi import vim
import ssl
s = ssl.SSLContext(ssl.PROTOCOL_TLSv1)
s.verify_mode = ssl.CERT_NONE
si = SmartConnect(host="192.168.100.10", user="admin", pwd="admin123", sslContext=s)
content = si.content

def get_all_objs(content, vimtype):
    obj = {}
    container = content.viewManager.CreateContainerView(content.rootFolder, vimtype, True)
    for managed_object_ref in container.view:
        obj.update({managed_object_ref: managed_object_ref.name})
    return obj

vmToScan = [vm for vm in get_all_objs(content, [vim.VirtualMachine]) if "ubuntu-16.04.4" == vm.name]
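From there, powering on the matched machine is a matter of calling the power-on task on the object; the following is a rough sketch (PowerOn() is asynchronous and returns a vSphere task you can wait on if needed):
# Power on the first VM whose name matched, if it is currently powered off
if vmToScan:
    vm = vmToScan[0]
    if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOff:
        task = vm.PowerOn()
        print("Power-on task started:", task)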
