I am trying to make a Python script using pyVmomi to control the power state of a virtual machine running on my ESXi server. I tried using connection.content.searchIndex.FindByIp(ip="the ip of the VM", vmSearch=True) to grab my VM and then power it on, but of course I cannot get the IP of the VM while it's off. Is there any way I could get the VM by its name or ID instead? I searched around quite a bit but couldn't find a solution. Either way, here's my code so far:
from pyVim import connect
# Connect to ESXi host
connection = connect.Connect("192.168.182.130", 443, "root", "password")
# Get a searchIndex object
searcher = connection.content.searchIndex
# Find a VM
vm = searcher.FindByIp(ip="192.168.182.134", vmSearch=True)
# Print out vm name
print (vm.config.name)
# Disconnect from cluster or host
connect.Disconnect(connection)
The SearchIndex doesn't have a FindByName method, so you'll probably have to resort to pulling back all of the VMs and filtering through them client-side.
Here's an example of returning all the VMs: https://github.com/vmware/pyvmomi-community-samples/blob/master/samples/getallvms.py
Another option: if you're using vCenter 6.5+, there's the vSphere Automation SDK for Python, where you can interact with the REST APIs to do a server-side filter. More info: https://github.com/vmware/vsphere-automation-sdk-python
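For illustration, here is a minimal sketch of that server-side filter, modeled on the SDK's list_vms sample; the host, credentials, and VM name are placeholders taken from this thread:
import requests
import urllib3
from vmware.vapi.vsphere.client import create_vsphere_client
from com.vmware.vcenter_client import VM

# Lab setup: skip certificate verification
session = requests.session()
session.verify = False
urllib3.disable_warnings()

client = create_vsphere_client(server="192.168.182.130", username="root",
                               password="password", session=session)

# Server-side filter: only VMs whose name matches are returned
vms = client.vcenter.VM.list(VM.FilterSpec(names={"ubuntu-16.04.4"}))
print(vms)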
This code might prove helpful:
from pyVim.connect import SmartConnect
from pyVmomi import vim
import ssl

s = ssl.SSLContext(ssl.PROTOCOL_TLSv1)
s.verify_mode = ssl.CERT_NONE
si = SmartConnect(host="192.168.100.10", user="admin", pwd="admin123", sslContext=s)
content = si.content

def get_all_objs(content, vimtype):
    obj = {}
    container = content.viewManager.CreateContainerView(content.rootFolder, vimtype, True)
    for managed_object_ref in container.view:
        obj.update({managed_object_ref: managed_object_ref.name})
    return obj

vmToScan = [vm for vm in get_all_objs(content, [vim.VirtualMachine]) if "ubuntu-16.04.4" == vm.name]
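Since the original goal was to power the machine on, a short continuation might look like this (a sketch: PowerOn() is pyVmomi's alias for PowerOnVM_Task and returns a task object):
from pyVim.task import WaitForTask

if vmToScan:
    vm = vmToScan[0]
    # Only power on if the VM is currently off
    if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOff:
        WaitForTask(vm.PowerOn())
    print(vm.name, vm.runtime.powerState)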
I am trying to make a connection from a server running Ubuntu to a Beckhoff PLC with TwinCAT 3. With Windows everything works fine, but with the same server on Linux I can't get a connection.
The Linux server has a static IP, and in the route manager on the PLC I can find the route and see the server. I have tried adding the route through the route manager on the PLC and with "add_route_to_plc", but both ways my connection is refused. I have already turned off all firewalls. Does anyone have an idea what is going wrong here? In the attachment I have added some pictures of my settings and the code I am trying to run.
Python error: "connection closed by remote"
Python code:
import pyads

# AMS net ID of this (Linux) machine and connection details for the PLC
SENDER_AMS = '192.168.1.180.1.1'
PLC_IP = '192.168.1.100'
PLC_USERNAME = 'Administrator'
PLC_PASSWORD = '1'
ROUTE_NAME = 'GID_TEST_ROUTE'
HOSTNAME = 'Grid-stabilizer'

# Register a route to the PLC, then connect and read its state
pyads.open_port()
pyads.set_local_address(SENDER_AMS)
pyads.add_route_to_plc(SENDER_AMS, HOSTNAME, PLC_IP, PLC_USERNAME, PLC_PASSWORD, route_name=ROUTE_NAME)
pyads.close_port()

plc = pyads.Connection('192.168.1.100.1.1', pyads.PORT_TC3PLC1)
plc.open()
plc.read_state()
If you are running Python on Linux and the PLC on Windows, try
plc = pyads.Connection('192.168.1.100.1.1', pyads.PORT_TC3PLC1, PLC_IP)
This will create a route on the Linux system; in your code the PLC's IP address is missing, so a proper route cannot be created.
Also check the ADS port of your PLC: it should be 851 (which is what pyads.PORT_TC3PLC1 resolves to).
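Putting it together, a minimal sketch of the fixed flow (addresses are the ones from the question):
import pyads

# The third argument (the PLC's IP) lets pyads create the route on Linux
plc = pyads.Connection('192.168.1.100.1.1', pyads.PORT_TC3PLC1, '192.168.1.100')
with plc:
    # read_state() returns a (ads_state, device_state) tuple
    print(plc.read_state())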
I have my RDS (PostgreSQL) database in a private subnet.
I want to query this DB using a Python program.
Is this possible?
I have a bastion running SSM, and I can easily connect to the bastion without any keys and then connect to the DB.
Is there a way of doing port forwarding in a Python program?
Thanks
Actually, it's very simple if you use this article: https://aws.amazon.com/blogs/aws/new-port-forwarding-using-aws-system-manager-sessions-manager/
Just run
aws ssm start-session --target $INSTANCE_ID --document-name AWS-StartPortForwardingSession --parameters '{"portNumber":["5432"], "localPortNumber":["5432"]}'
This will create a port-forwarding session through the EC2 bastion. After this you can run any Python program by using psycopg2:
import psycopg2

connection = psycopg2.connect(user="joe",
                              password="joe",
                              host="######",
                              port="5432",
                              database="stackdb")
Just putting this here as it might help someone.
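To answer the "port forwarding in a Python program" part: one hedged option is to drive the same CLI session from Python with subprocess. This sketch assumes the AWS CLI and the Session Manager plugin are installed; the instance ID and RDS endpoint are placeholders, and AWS-StartPortForwardingSessionToRemoteHost is the SSM document that forwards through the bastion to another host:
import subprocess
import time
import psycopg2

# Start the SSM port-forwarding session in the background
tunnel = subprocess.Popen([
    "aws", "ssm", "start-session",
    "--target", "i-0123456789abcdef0",  # bastion instance ID (placeholder)
    "--document-name", "AWS-StartPortForwardingSessionToRemoteHost",
    "--parameters",
    '{"host":["mydb.xxxxxx.us-east-1.rds.amazonaws.com"],'
    '"portNumber":["5432"],"localPortNumber":["5432"]}',
])
time.sleep(5)  # crude wait for the tunnel to come up

try:
    # Connect through the forwarded local port
    connection = psycopg2.connect(user="joe", password="joe",
                                  host="127.0.0.1", port="5432",
                                  database="stackdb")
    # ... run queries ...
    connection.close()
finally:
    tunnel.terminate()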
I would like to use an Azure Machine Learning compute cluster as a compute target, but I do not want it to containerize my project. Is there a way to deactivate this "feature"?
The main reasons behind this request are:
I have already set up a docker-compose file that specifies 3 containers for Apache Airflow and want to avoid a Docker-in-Docker situation, especially since I already tried that and have failed so far (here's the link to my other related SO question).
I prefer not to use a compute instance, as it is tied to an Azure account, which is not ideal for automation purposes.
Thanks in advance!
Use the provisioning_configuration method of the AmlCompute class to specify configuration parameters.
In the following example, a persistent compute target provisioned by AmlCompute is created. The provisioning_configuration parameter in this example is of type AmlComputeProvisioningConfiguration, which is a child class of ComputeTargetProvisioningConfiguration.
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException

# Choose a name for your CPU cluster
cpu_cluster_name = "cpu-cluster"

# Verify that the cluster does not already exist
# (ws is an existing azureml.core.Workspace object)
try:
    cpu_cluster = ComputeTarget(workspace=ws, name=cpu_cluster_name)
    print('Found existing cluster, use it.')
except ComputeTargetException:
    compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
                                                           max_nodes=4)
    cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, compute_config)

cpu_cluster.wait_for_completion(show_output=True)
Refer - https://learn.microsoft.com/en-us/python/api/azureml-core/azureml.core.compute.amlcompute(class)?view=azure-ml-py
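Once the cluster exists, a brief sketch of using it as a compute target (the workspace object ws, the experiment name, and train.py are placeholders):
from azureml.core import Experiment, ScriptRunConfig

# Run a local script on the cluster created above
src = ScriptRunConfig(source_directory='.', script='train.py',
                      compute_target=cpu_cluster)
run = Experiment(workspace=ws, name='demo-experiment').submit(src)
run.wait_for_completion(show_output=True)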
I looked in the AWS documentation but can't find out whether a stopped instance will get a public IP once I start it. I'm scripting this with boto3 and Python, but neither boto3.resource nor boto3.client has given me the information I need.
The public IP is released when you stop the instance. For a stopped instance:
import boto3
session = boto3.Session(profile_name='your_profile')
ec2 = session.resource('ec2')
instance = ec2.Instance('i-09f00f00f00')
print(instance.public_ip_address)
Returns: None
To get the private IP from your instance ID (running or stopped instance):
print(instance.network_interfaces_attribute[0]['PrivateIpAddress'])
Returns: 10.0.0.200
Reference: Boto / EC2 / Instance / network_interfaces_attribute
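To see the public IP the instance receives after starting it (assuming the subnet or instance is configured to assign one), a short sketch continuing the snippet above:
# Start the stopped instance, wait until it is running, then refresh
# the cached attributes before reading the newly assigned public IP
instance.start()
instance.wait_until_running()
instance.reload()
print(instance.public_ip_address)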
I am trying to access a remote ArangoDb install (on a windows server).
I've tried changing the endpoint in arangod.conf as mentioned in another post here, but as soon as I do, the database stops responding both remotely and locally.
I would like to be able to do the following remotely:
Connect to the server in my application code (during development).
Connect to the server from a local arangosh shell.
Connect to the Arango server dashboard (http://127.0.0.1:8529/_db/_system/_admin/aardvark/standalone.html)
It's been a long time since I came back to this. Thanks to the previous comments I was able to sort this out.
The file to edit is arangod.conf. On a Windows machine it is located at:
C:\Program Files\ArangoDB 2.6.9\etc\arangodb\arangod.conf
The comments under the [server] section helped. I changed the endpoint to be the IP address of my server (the bottom line of the block below):
[server]
# Specify the endpoint for HTTP requests by clients.
# tcp://ipv4-address:port
# tcp://[ipv6-address]:port
# ssl://ipv4-address:port
# ssl://[ipv6-address]:port
# unix:///path/to/socket
#
# Examples:
# endpoint = tcp://0.0.0.0:8529
# endpoint = tcp://127.0.0.1:8529
# endpoint = tcp://localhost:8529
# endpoint = tcp://myserver.arangodb.com:8529
# endpoint = tcp://[::]:8529
# endpoint = tcp://[fe80::21a:5df1:aede:98cf]:8529
#
endpoint = tcp://192.168.0.14:8529
Now I am able to access the server from my client using the above address.
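To verify the new endpoint from a client machine, a quick sketch using plain HTTP against ArangoDB's version endpoint (credentials are placeholders; adjust to your setup):
import requests

# GET /_api/version returns the server version if the endpoint is reachable
response = requests.get("http://192.168.0.14:8529/_api/version",
                        auth=("root", ""))
print(response.json())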
Please have a look at the managing endpoints documentation. It explains how to bind an endpoint and how to check whether it worked.