I am using Python 3.7 and pyvmomi 6.7.
I am connecting to an ESXi host (version 6.7, free license) and trying to deploy a VM with my Python script. In one of the steps I'm trying to create a directory (to store the ISO and VMDK) in the datastore.
This is the code snippet that creates the directory:
fmgr = host['content'].fileManager   # vim.FileManager from the service content
dco = vm['storage']['root']['dc']    # vim.Datacenter managed object
# dso (the vim.Datastore managed object) is resolved elsewhere in the script
dirname = '[' + dso.info.name + '] ' + vm['name']
logger.info('Creating Directory {} on {}'.format(dirname, dso.info.name))
try:
    fmgr.MakeDirectory(name=dirname, datacenter=dco,
                       createParentDirectories=False)
except vim.fault.FileAlreadyExists as e:
    logger.info('Directory {} already exists on {} - {}'.format(
        dirname, dso.info.name, str(e)))
    return True
except vim.fault.InvalidDatastore as e:
    logger.error('Invalid datastore: {} - {}'.format(
        dso.info.name, str(e)))
    return False
except vim.fault.RuntimeFault as e:
    logger.error('Runtime error while creating directory {} on {} - {}'.format(
        dirname, dso.info.name, str(e)))
    return False
except Exception as e:
    logger.error('Failed to create top directory {}. - {}'.format(
        dirname, str(e)))
    return False
return True
I am getting this error when it tries to create the directory:
pyVmomi.VmomiSupport.RestrictedVersion: (vim.fault.RestrictedVersion) {
dynamicType = <unset>,
dynamicProperty = (vmodl.DynamicProperty) [],
msg = 'Current license or ESXi version prohibits execution of the requested operation.',
faultCause = <unset>,
faultMessage = (vmodl.LocalizableMessage) []
}
The same code is able to create the directory on ESXi version 6.5 (free license).
According to the compatibility policy section of https://github.com/vmware/pyvmomi, ESXi 6.7 should be supported.
Are there any functionality restrictions tied to particular versions?
Is there any other way of creating a top-level directory in the datastore?
Are there any other Python libraries for managing VMs in VMware (which support ESXi 6.0 and later)?
The key thing is the type of license the ESXi host has. If it has a free license, the API allows read-only operations, and all other operations are blocked with the 'Current license or ESXi version prohibits execution of the requested operation' message.
Quoting from one of the VMware blogs:
Access to the vSphere API is governed by the various vSphere Editions, which provide both read and write access to the API. If you are using vSphere Hypervisor (the free edition of ESXi), the vSphere API will only be available as read-only.
I tried the same code on a licensed version of ESXi and, sure enough, it executed successfully and created the VMs.
Found an answer on Server Fault stating the same.
The problem is that neither the official SDK documentation, the Python API docs, nor the community samples say anything about this limitation.
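For completeness, here is a minimal sketch of how you can detect the free license up front with pyvmomi before attempting write calls. The host and credentials are placeholders, and the all-zeros license key is what the free hypervisor reports as far as I know:
from pyVim.connect import SmartConnect, Disconnect
import ssl

# Sketch: detect a free (read-only API) license before attempting writes.
ctx = ssl._create_unverified_context()
si = SmartConnect(host='esxi.example.com', user='root', pwd='secret', sslContext=ctx)
try:
    for lic in si.content.licenseManager.licenses:
        print(lic.name, lic.editionKey)
        # The free hypervisor reports an all-zeros key in my experience.
        if lic.licenseKey == '00000-00000-00000-00000-00000':
            print('Free license: the API is read-only; MakeDirectory '
                  'and other write operations will raise RestrictedVersion.')
finally:
    Disconnect(si)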
As per the document (https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/composing_a_customized_rhel_system_image/managing-repositories_composing-a-customized-rhel-system-image), I tried to override the system repository with a custom base URL. But blueprint depsolve shows the error below:
# composer-cli blueprints depsolve Test1-blueprint
2022-06-09 08:06:58,841: Test1-blueprint: This system does not have any valid subscriptions. Subscribe it before specifying rhsm: true in sources.
And after the next service restart, osbuild-composer does not start:
ERROR: Info Error: Get "http://localhost/api/v1/projects/source/info/appstream": dial unix /run/weldr/api.socket: connect: connection refused
Am I missing something here?
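For context, the override I placed under /etc/osbuild-composer/repositories/ looks roughly like this. This is a sketch modelled on the files shipped under /usr/share/osbuild-composer/repositories; the URLs are placeholders, and the depsolve error suggests a source still carries "rhsm": true, which requires a subscribed system:
{
    "x86_64": [
        {
            "name": "baseos",
            "baseurl": "http://repo.example.com/rhel8/baseos/os/",
            "check_gpg": false,
            "rhsm": false
        },
        {
            "name": "appstream",
            "baseurl": "http://repo.example.com/rhel8/appstream/os/",
            "check_gpg": false,
            "rhsm": false
        }
    ]
}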
Having all manner of issues with this myself. A trawl of my /var/log/messages file suggests that, for me at least, osbuild-composer is failing to start due to the non-existence of /etc/osbuild-composer/osbuild-composer.toml. The actual error is permission denied, but the file doesn't exist.
This is on RHEL 8.5; I updated to 8.6 this morning, and the problem is the same.
Edit: I've removed everything and reverted to using the lorax backend, as per chapter 2.2 in the doc linked (the same one I was following). My 'composer-cli compose types' command now at least works. Fingers crossed.
I am trying to deploy a machine learning model through an ACI (Azure Container Instances) service. I am working in Python and followed the code below (from the official documentation: https://learn.microsoft.com/en-us/azure/machine-learning/how-to-deploy-and-where?tabs=azcli).
The entry script file is the following (score.py):
import os
import json
import dill
import joblib

def init():
    global model
    # Get the path where the deployed model can be found
    model_path = os.getenv('AZUREML_MODEL_DIR')
    # Load the existing model from that directory
    model = joblib.load(os.path.join(model_path, 'model.pkl'))

# Handle request to the service
def run(data):
    try:
        # Pick out the text property of the JSON request
        # Expected JSON details {"text": "some text to evaluate"}
        data = json.loads(data)
        prediction = model.predict(data['text'])
        return prediction
    except Exception as e:
        error = str(e)
        return error
And the model deployment workflow is as follows:
from azureml.core import Workspace

# Connect to workspace
ws = Workspace(subscription_id="my-subscription-id",
               resource_group="my-resource-group-name",
               workspace_name="my-workspace-name")

from azureml.core.model import Model

# Register the model
model = Model.register(workspace=ws,
                       model_path='model.pkl',
                       model_name='my-model',
                       description='my-description')

from azureml.core.environment import Environment

# Name environment and call requirements file
# requirements: numpy, tensorflow
myenv = Environment.from_pip_requirements(name='myenv', file_path='requirements.txt')

from azureml.core.model import InferenceConfig

# Create inference configuration
inference_config = InferenceConfig(environment=myenv, entry_script='score.py')

from azureml.core.webservice import AciWebservice  # AksWebservice

# Set the virtual machine capabilities
deployment_config = AciWebservice.deploy_configuration(cpu_cores=0.5, memory_gb=3)

# Deploy ML model (Azure Container Instances)
service = Model.deploy(workspace=ws,
                       name='my-service-name',
                       models=[model],
                       inference_config=inference_config,
                       deployment_config=deployment_config)
service.wait_for_deployment(show_output=True)
I succeeded once with the previous code. I noticed that during the deployment, Model.deploy created a container registry with a specific name (6e07ce2cc4ac4838b42d35cda8d38616).
The problem:
The API was working well and I wanted to deploy another model from scratch. I deleted the API service and the model from Azure ML Studio, and the container registry from Azure resources.
Unfortunately, I am not able to deploy anything again.
Everything goes fine until the last step (the Model.deploy step), where I get the following error message:
Service deployment polling reached non-successful terminal state, current service state: Unhealthy
Operation ID: 46243f9b-3833-4650-8d47-3ac54a39dc5e
More information can be found here: https://machinelearnin2812599115.blob.core.windows.net/azureml/ImageLogs/46245f8b-3833-4659-8d47-3ac54a39dc5e/build.log?sv=2019-07-07&sr=b&sig=45kgNS4sbSZrQH%2Fp29Rhxzb7qC5Nf1hJ%2BLbRDpXJolk%3D&st=2021-10-25T17%3A20%3A49Z&se=2021-10-27T01%3A24%3A49Z&sp=r
Error:
{
"code": "AciDeploymentFailed",
"statusCode": 404,
"message": "No definition exists for Environment with Name: myenv Version: Autosave_2021-10-25T17:24:43Z_b1d066bf Reason: Container > registry 6e07ce2cc4ac4838b42d35cda8d38616.azurecr.io not found. If private link is enabled in workspace, please verify ACR is part of private > link and retry..",
"details": []
}
I do not understand why a new container registry was created the first time, while now the old one is looked up (the message says the container registry named 6e07ce2cc4ac4838b42d35cda8d38616 is missing). I never found where I can force the creation of a new container registry resource in Python, nor where I can specify a name for it in AciWebservice.deploy_configuration or Model.deploy.
Could anyone help me move on with this? The best solution would be, I think, to totally remove the reference to this 6e07ce2cc4ac4838b42d35cda8d38616 container registry, but I can't find where that reference is set, so Model.deploy always fails to find it.
Another solution would be to force Model.deploy to generate a new container registry, but I could not find how to do that.
It's been two days that I am on this and I really need your help!
PS: I am not at all a DevOps/MLOps guy; I do data science and good models, but infrastructure and deployment are not really my thing, so please be gentle on this part! :-)
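In case it helps, this is how I understand one can inspect which registry the workspace currently references (a sketch using the v1 SDK's Workspace.get_details(); that Model.deploy reads the registry from here is my assumption):
from azureml.core import Workspace

# Sketch: inspect which container registry the workspace is bound to.
# If this still returns the deleted 6e07... registry, that would explain
# why Model.deploy keeps looking for it.
ws = Workspace(subscription_id="my-subscription-id",
               resource_group="my-resource-group-name",
               workspace_name="my-workspace-name")
print(ws.get_details().get('containerRegistry'))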
What I tried
Creating the container registry with the same name
I tried to create the container registry by hand, but this time it is the container itself that cannot be created. The Python output of Model.deploy is the following:
Tips: You can try get_logs(): https://aka.ms/debugimage#dockerlog or local deployment: https://aka.ms/debugimage#debug-locally to debug if deployment takes longer than 10 minutes.
Running
2021-10-25 19:25:10+02:00 Creating Container Registry if not exists.
2021-10-25 19:25:10+02:00 Registering the environment.
2021-10-25 19:25:13+02:00 Building image..
2021-10-25 19:30:45+02:00 Generating deployment configuration.
2021-10-25 19:30:46+02:00 Submitting deployment to compute.
Failed
Service deployment polling reached non-successful terminal state, current service state: Unhealthy
Operation ID: 93780de6-7662-40d8-ab9e-4e1556ef880f
Current sub-operation type not known, more logs unavailable.
Error:
{
"code": "InaccessibleImage",
"statusCode": 400,
"message": "ACI Service request failed. Reason: The image '6e07ce2cc4ac4838b42d35cda8d38616.azurecr.io/azureml/azureml_684133370d8916c87f6230d213976ca5' in container group 'my-service-name-LM4HbqzEBEi0LTXNqNOGFQ' is not accessible. Please check the image and registry credential.. Refer to https://learn.microsoft.com/azure/container-registry/container-registry-authentication#admin-account and make sure Admin user is enabled for your container registry."
}
Setting admin user enabled
I tried to follow the recommendation in the last message to enable the Admin user for the container registry. All I saw in the Azure interface is that a username and password appeared once the admin user was enabled.
Unfortunately, the same error message appears again when I relaunch my code, and I am stuck here...
Changing the name of the environment and model
This does not produce any change. Same errors.
As you say, your first attempt worked, and after deleting the API service and the model from Azure ML Studio and the container registry from Azure resources, you are not able to redeploy.
My assumption is that on your first attempt you had already registered the model and environment. So when you try to re-register using the same model name while deploying, it gives you the error.
Thanks @anders swanson, your solution worked for me.
If you have already registered your env, myenv, and none of the details of your environment have changed, there is no need to re-register it with myenv.register(). You can simply get the already-registered env using Environment.get(), like so:
myenv = Environment.get(ws, name='myenv', version=11)
My suggestion is to give your environment a new name, e.g. "model_scoring_env". Register it once, then pass it to the InferenceConfig, as in the sketch below.
Refer here
I am in the process of setting up an AWS Lambda function to connect to a MS SQL Server database using pyodbc to extract records from a table.
I am receiving this error message:
('01000', "[01000] [unixODBC][Driver Manager]Can't open lib 'ODBC Driver 13 for SQL Server' : file not found (0) (SQLDriverConnect))
I have built a deployment package on a Linux EC2 instance using the process detailed in the following post:
https://gist.github.com/carlochess/658a98589709f46dbb3d20502e48556b
I have read extensively on this and have changed the path in the odbcinst.ini file to match the directory structure of the Lambda layer, but with no luck.
I have also directly referenced the location of the driver file (libmsodbcsql-13.1.so.9.2).
The error message then changes slightly to state that it cannot find the driver file at that location (even though the file does exist).
If you're using pyodbc in a layer, Lambda will look for the ODBC driver under /opt instead of /var/task. That's probably why you get a file-not-found error.
Take a look at the following link on how to get pyodbc as a Lambda layer.
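As an illustration, here is a minimal handler sketch under those assumptions. The server, database, and credentials are placeholders; the environment variables point unixODBC at the layer's files, which land under /opt:
import os
import pyodbc

# Point unixODBC at the layer's config files under /opt.
# These must be set before the first connect() call.
os.environ['ODBCSYSINI'] = '/opt'        # directory containing odbcinst.ini
os.environ['ODBCINI'] = '/opt/odbc.ini'

def lambda_handler(event, context):
    conn = pyodbc.connect(
        'DRIVER={ODBC Driver 13 for SQL Server};'
        'SERVER=my-server.example.com;DATABASE=mydb;'
        'UID=my-user;PWD=my-password'
    )
    cur = conn.cursor()
    cur.execute('SELECT @@VERSION')
    row = cur.fetchone()
    conn.close()
    return row[0]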
So, I am trying to make a Python script using pyvmomi to control the state of a virtual machine I'm running on my ESXi server. Basically, I tried using connection.content.searchIndex.FindByIp(ip="the ip of the VM", vmSearch=True) to grab my VM and then power it on, but of course I cannot get the IP of the VM when it's off. So, I was wondering if there was any way I could get the VM, maybe by name or its ID? I searched around quite a bit but couldn't really find a solution. Either way, here's my code so far:
from pyVim import connect
# Connect to ESXi host
connection = connect.Connect("192.168.182.130", 443, "root", "password")
# Get a searchIndex object
searcher = connection.content.searchIndex
# Find a VM
vm = searcher.FindByIp(ip="192.168.182.134", vmSearch=True)
# Print out vm name
print(vm.config.name)
# Disconnect from cluster or host
connect.Disconnect(connection)
The SearchIndex doesn't have any method to do a 'find by name', so you'll probably have to resort to pulling back all of the VMs and filtering through them client-side.
Here's an example of returning all the VMs: https://github.com/vmware/pyvmomi-community-samples/blob/master/samples/getallvms.py
Another option: if you're using vCenter 6.5+, there's the vSphere Automation SDK for Python, where you can interact with the REST APIs to do a server-side filter. More info: https://github.com/vmware/vsphere-automation-sdk-python
This code might prove helpful:
from pyVim.connect import SmartConnect
from pyVmomi import vim
import ssl

s = ssl.SSLContext(ssl.PROTOCOL_TLSv1)
s.verify_mode = ssl.CERT_NONE
si = SmartConnect(host="192.168.100.10", user="admin", pwd="admin123", sslContext=s)
content = si.content

def get_all_objs(content, vimtype):
    # Walk a container view and map each managed object to its name
    obj = {}
    container = content.viewManager.CreateContainerView(content.rootFolder, vimtype, True)
    for managed_object_ref in container.view:
        obj.update({managed_object_ref: managed_object_ref.name})
    return obj

vmToScan = [vm for vm in get_all_objs(content, [vim.VirtualMachine]) if "ubuntu-16.04.4" == vm.name]
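Since the original goal was to control power state, here is a short follow-up sketch on the match, assuming the vmToScan list from above (WaitForTask comes from pyVim.task):
from pyVim.task import WaitForTask

# Power on the first VM whose name matched the filter above.
if vmToScan:
    target = vmToScan[0]
    if target.runtime.powerState != vim.VirtualMachinePowerState.poweredOn:
        WaitForTask(target.PowerOnVM_Task())
    print(target.name, target.runtime.powerState)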
I know that this particular issue appears when you run Domino as a service under a local account or a different user. However, here I'm running it with user rights as a normal application, and then starting the agent from the server console: tell amgr run etc.
I'm trying to enumerate the available drives in two ways: as filesystem roots using Java functionality, and using the Windows-specific wmic. The relevant code is:
System.out.println("os:"+System.getProperty("os.name") + " user:" + System.getProperty("user.name"));
File[] roots = File.listRoots();
for (File root : roots) {
if (root.canWrite()) {
System.out.println("[rw] " + root.getPath());
} else {
System.out.println("[ro] " + root.getPath());
}
}
Process process = Runtime.getRuntime().exec(
new String[] { "wmic", "logicaldisk", "get",
"deviceid,volumename,volumeserialnumber" });
process.getOutputStream().close();
BufferedReader reader = new BufferedReader(
new InputStreamReader(process.getInputStream()));
String line = null;
while ((line = reader.readLine()) != null) {
System.out.println(line);
}
reader.close();
Running it from the client (or simply from a standalone Java application), I get all the drives:
os:Windows 7 user:normunds
[rw] C:\
[ro] D:\
[rw] N:\
[rw] W:\
DeviceID VolumeName VolumeSerialNumber
C: Acer 12857911
D:
N: video EE1C7944
W: DB_70 18389143
Here N: 'video' is a mapped share on a network drive.
However, when I run it on the server (same PC), I get only the local drives, not the remote SMB drive:
19.11.2013 23:00:42 Agent Manager: Agent printing: os:Windows 7 user:normunds
19.11.2013 23:00:42 Agent Manager: Agent printing: [rw] C:\
19.11.2013 23:00:42 Agent Manager: Agent printing: [ro] D:\
19.11.2013 23:00:42 Agent Manager: Agent printing: [rw] W:\
19.11.2013 23:00:42 Agent Manager: Agent printing: DeviceID VolumeName VolumeSerialNumber
19.11.2013 23:00:42 Agent Manager: Agent printing: C: Acer 12857911
19.11.2013 23:00:42 Agent Manager: Agent printing: D:
19.11.2013 23:00:42 Agent Manager: Agent printing: W: DB_70 18389143
Notice the user name: the code is running under my account, at least that's what Java thinks. I cannot figure out what causes the problem.
I even tried to write another version of this by calling a Windows API function from LotusScript code:
Declare Private Function GetLogicalDriveStrings Lib "kernel32" Alias "GetLogicalDriveStringsA" (Byval nBufferLength As Long, Byval lpBuffer As String) As Long
with the same result: from the client it works and returns all drives; on the server the mapped drive is missing. I guess one more step would be to ask the Windows API for the user name. Update: just did it, and it also returns "normunds".
Any ideas of how to approach this issue?
Edited: What I think is happening is that Domino runs server tasks as separate processes, impersonating the user the server was started with. In this way it closes down access to remote resources that would have been available if it ran server tasks with the delegation level of impersonation.
Now, can this situation be changed by modifying some security policy or registry setting? As far as I understand, network access in this situation happens as a null session (anonymous user), so I assume one solution would be to enable anonymous share access on the remote end and allow null-session access to this share locally. Edited: that does not seem to help either :-/
Another, pretty wild, solution would be to log in from the agent using the LogonUser Windows API, logging in the same user again but with full rights. I am not sure this is feasible, and even if it is, it means storing the username/password somewhere :-) And yes, it would limit us to the LotusScript solution, unless we want to install a JNI wrapper; all this should actually sit in an XPage (the agent is just an example of the problem).
A third solution would be to use UNC paths instead of mapped drives and access the path with an appropriate username/password (or anonymously, allowing null-session access), but that rather defeats my purpose of discovering the mapped drives and acting depending on which ones are available.
The problem is due to the way a network drive is mapped on Windows.
A background service cannot access a mapped network drive by definition, since the mapping is created at the moment of interactive logon.
Official Microsoft documentation:
http://support.microsoft.com/kb/180362
More information can be found here:
Map a network drive to be used by a service
I have developed several Windows services in the past, and this was a constraint to be kept under control.
I hope this helps in some way.
Cheers
Maurizio