Creating a VMware HA cluster using pyvmomi throws exception - python-3.x

I get an error while creating a VMware HA cluster using the pyvmomi library. I'm following the official pyvmomi documentation (https://vdc-download.vmware.com/vmwb-repository/dcr-public/6b586ed2-655c-49d9-9029-bc416323cb22/fa0b429a-a695-4c11-b7d2-2cbc284049dc/doc/index.html) for it.
I'm able to create a normal cluster without HA enabled if I don't set the ha_spec, i.e. if I comment out lines 2-5 in the following code.
Here is my code:
cluster_spec = vim.cluster.ConfigSpecEx()
ha_spec = vim.cluster.DasConfigInfo()
ha_spec.enabled = True
ha_spec.hostMonitoring = vim.cluster.DasConfigInfo.ServiceState.enabled
cluster_spec.dasConfig = ha_spec
cluster = host_folder.CreateClusterEx(name=cluster_name, spec=cluster_spec)
The error it throws is:
InvalidArgument: (vmodl.fault.InvalidArgument) {
dynamicType = <unset>,
dynamicProperty = (vmodl.DynamicProperty) [],
msg = 'A specified parameter was not correct: ',
faultCause = <unset>,
faultMessage = (vmodl.LocalizableMessage) [],
invalidProperty = <unset>
}
I'm using Python 3.7, pyvmomi 6.7.3 and ESXi 6.5.
Does anybody know if that's the right way of doing it?
Thank you.

I had the same issue. This is my solution:
cluster_spec = vim.cluster.ConfigSpecEx()
ha_spec = vim.cluster.DasConfigInfo()
ha_spec.enabled = True
ha_spec.hostMonitoring = vim.cluster.DasConfigInfo.ServiceState.enabled
# the docs mark this field as optional, but it is actually required
ha_spec.failoverLevel = 1
cluster_spec.dasConfig = ha_spec
cluster = host_folder.CreateClusterEx(name=cluster_name, spec=cluster_spec)
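For context, a minimal sketch of how host_folder can be obtained before running the code above (the vCenter address, credentials and datacenter lookup are placeholders, not part of the original answer):
from pyVim.connect import SmartConnectNoSSL, Disconnect

si = SmartConnectNoSSL(host='vcenter.example.com', user='administrator@vsphere.local', pwd='secret')  # placeholder connection details
datacenter = si.content.rootFolder.childEntity[0]  # assumes the first child entity is the target vim.Datacenter
host_folder = datacenter.hostFolder
# ... build cluster_spec as shown above, then:
cluster = host_folder.CreateClusterEx(name=cluster_name, spec=cluster_spec)
Disconnect(si)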

Related

MongoDB Atlas Provider - Terraform

I am not able to figure out the following in Terraform (>= 0.13) using the MongoDB Atlas provider (version >= 0.9.1):
how to set the two properties below. I did a lot of Google searching with no luck.
As per the documentation here:
https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/cluster
I would like to set these two properties:
providerSettings.autoScaling.compute.maxInstanceSize
providerSettings.autoScaling.compute.minInstanceSize
I have not tried the keys above as they contain a . in them.
I tried the following with no luck:
providerAutoScalingComputeMaxInstanceSize = "M20"
providerAutoScalingComputeMinInstanceSize = "M10"
provider_autoScaling_compute_maxInstanceSize = "M20"
provider_autoScaling_compute_minInstanceSize = "M10"
On terraform plan, I see the error:
Error: Unsupported argument
on .terraform/modules/mongodb_test_b/main.tf line 10, in resource "mongodbatlas_cluster" "mongodbatlas_cluster":
10: providerAutoScalingComputeMaxInstanceSize = var.providerAutoScalingComputeMaxInstanceSize
An argument named "providerAutoScalingComputeMaxInstanceSize" is not expected
here.
Error: Unsupported argument
on .terraform/modules/mongodb_test_b/main.tf line 12, in resource "mongodbatlas_cluster" "mongodbatlas_cluster":
12: providerAutoScalingComputeMinInstanceSize = var.providerAutoScalingComputeMinInstanceSize
An argument named "providerAutoScalingComputeMinInstanceSize" is not expected
here.
Code snippet
resource "mongodbatlas_cluster" "mongodbatlas_cluster" {
project_id = var.project_id
provider_name = var.provider_name
name = var.name
provider_instance_size_name = var.provider_instance_size_name
provider_disk_type_name = var.provider_disk_type_name
auto_scaling_compute_enabled = var.auto_scaling_compute_enabled
providerAutoScalingComputeMaxInstanceSize = var.providerAutoScalingComputeMaxInstanceSize
auto_scaling_compute_scale_down_enabled = var.auto_scaling_compute_scale_down_enabled
providerAutoScalingComputeMinInstanceSize = var.providerAutoScalingComputeMinInstanceSize
pit_enabled = var.pit_enabled
cluster_type = var.cluster_type
replication_specs {
num_shards = var.replication_specs_num_shards
regions_config {
region_name = var.region_name
electable_nodes = var.replication_specs_regions_config_electable_nodes
priority = var.replication_specs_regions_config_priority
read_only_nodes = var.replication_specs_regions_config_read_only_nodes
analytics_nodes = var.analytics_nodes
}
}
mongo_db_major_version = var.mongo_db_major_version
provider_backup_enabled = var.provider_backup_enabled
auto_scaling_disk_gb_enabled = var.auto_scaling_disk_gb_enabled
}
Any assistance much appreciated.
You are using the wrong argument names; you need these two:
provider_auto_scaling_compute_min_instance_size [1]
provider_auto_scaling_compute_max_instance_size [2]
Your code should look like this:
provider_auto_scaling_compute_max_instance_size = var.providerAutoScalingComputeMaxInstanceSize
provider_auto_scaling_compute_min_instance_size = var.providerAutoScalingComputeMinInstanceSize
You might also consider naming your variables differently, i.e. using the same names as the argument names, as that helps with mapping between an argument and the value it will have.
[1] https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/cluster#provider_auto_scaling_compute_min_instance_size
[2] https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/cluster#provider_auto_scaling_compute_max_instance_size
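For example, a sketch of the corrected block with variables named after the arguments (the variable names and the M10/M20 defaults are only taken from the question; adjust as needed):
variable "provider_auto_scaling_compute_max_instance_size" {
  type    = string
  default = "M20"
}

variable "provider_auto_scaling_compute_min_instance_size" {
  type    = string
  default = "M10"
}

resource "mongodbatlas_cluster" "mongodbatlas_cluster" {
  # ... other arguments as in the snippet above ...
  provider_auto_scaling_compute_max_instance_size = var.provider_auto_scaling_compute_max_instance_size
  provider_auto_scaling_compute_min_instance_size = var.provider_auto_scaling_compute_min_instance_size
}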

H2OTypeError: 'training_frame' must be a valid H2OFrame

"After running the following Code…"
gbm = h2o.get_model(sorted_final_grid.sorted_metric_table()['model_ids'][0])
params = gbm.params
new_params = {"nfolds": 5, "model_id": None}
for key in new_params.keys():
    params[key]['actual'] = new_params[key]
gbm_best = H2OGradientBoostingEstimator()
for key in params.keys():
    if key in dir(gbm_best) and getattr(gbm_best, key) != params[key]['actual']:
        setattr(gbm_best, key, params[key]['actual'])
"I get the following error…H2OTypeError: 'training_frame' must be a valid H2OFrame!
It is a valid H2OFrame as I have not only imported using the import_file but also ran successfully all the
GBM hyperparameter tuning code until I ran into this error.
I am using Python 3.6. I have been following this particular notebook https://github.com/h2oai/h2o-3/blob/master/h2o-docs/src/product/tutorials/gbm/gbmTuning.ipynb "
You will need to set training_frame and validation_frame to None in new_params. Try using the code below and see if that helps.
gbm = h2o.get_model(sorted_final_grid.sorted_metric_table()['model_ids'][0])
params = gbm.params
new_params = {"nfolds": 5, "model_id": None, "training_frame": None, "validation_frame": None,
              "response_column": None, "ignored_columns": None}
for key in new_params.keys():
    params[key]['actual'] = new_params[key]
gbm_best = H2OGradientBoostingEstimator()
for key in params.keys():
    if key in dir(gbm_best) and getattr(gbm_best, key) != params[key]['actual']:
        setattr(gbm_best, key, params[key]['actual'])
I will get the tutorial you referred to updated.
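Once the parameters have been copied over, re-training the cloned estimator might look something like this (a sketch only; x, y and train are assumed to be the predictor names, response column and training frame from the tutorial):
# nfolds was already copied in via new_params, so this trains with 5-fold cross-validation
gbm_best.train(x=x, y=y, training_frame=train)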

Creating a pool in Azure with python SDK

I'm trying to create a pool based on a standard marketplace Ubuntu image. I'm using azure 4.0.0; the image reference, VM configuration reference and other settings are written based on learn.microsoft.com.
Here's my code:
import azure.batch as batch
from azure.batch import BatchServiceClient
from azure.batch.batch_auth import SharedKeyCredentials
from azure.batch import models
import sys
account = 'mybatch'
key = 'Acj1hh7vMR6DSodYgYEghjce7mHmfgfdgodYgYEghjce7mHmfgodYgYEghjce7mHmfgCj/7f3Zs1rHdfgPsdlA=='
batch_url = 'https://mybatch.westeurope.batch.azure.com'
creds = SharedKeyCredentials(account, key)
batch_client = BatchServiceClient(creds, base_url = batch_url)
pool_id = 'mypool3'
if batch_client.pool.exists(pool_id):
    print('pool exists')
    sys.exit()
vmc = models.VirtualMachineConfiguration(
    image_reference = models.ImageReference(
        offer = 'UbuntuServer',
        publisher = 'Canonical',
        sku = '16.04.0-LTS',
        version = 'latest',
        virtual_machine_image_id = None
    ),
    node_agent_sku_id = 'batch.node.ubuntu 16.04'
)
pool_config = models.CloudServiceConfiguration(os_family = '5')
new_pool = models.PoolAddParameter(
    id = pool_id,
    vm_size = 'small',
    cloud_service_configuration = pool_config,
    target_dedicated_nodes = 1,
    virtual_machine_configuration = vmc
)
batch_client.pool.add(new_pool)
Here are some image values I took from the Azure portal (Add pool JSON editor):
"imageReference": {
"publisher": "Canonical",
"offer": "UbuntuServer",
"sku": "16.04.0-LTS"
},
But when I run the code I get an error:
Traceback (most recent call last):
File "a.py", line 80, in <module>
batch_client.pool.add(new_pool)
File "/root/miniconda/lib/python3.6/site-packages/azure/batch/operations/pool_operations.py", line 310, in add
raise models.BatchErrorException(self._deserialize, response)
azure.batch.models.batch_error_py3.BatchErrorException: {'additional_properties': {}, 'lang': 'en-US', 'value': 'The value provided for one of the properties in the request body is invalid.\nRequestId:d8a1f7fa-6f40-4e4e-8f41-7958egas6efa\nTime:2018-12-05T16:18:44.5453610Z'}
Which image values are wrong? Is it possible to get more information on this error using the RequestId?
UPDATE
I found a newer example here which uses the helper select_latest_verified_vm_image_with_node_agent_sku to get the image reference. Same error: The value provided for one of the properties in the request body is invalid.
I did a test with your code and got the same error. Then I did some research and changed a few things in the code. The problem is caused by two things.
First:
pool_config = models.CloudServiceConfiguration(os_family = '5')
You can take a look at the description of the models.CloudServiceConfiguration:
os_family: The Azure Guest OS family to be installed on the virtual machines in the pool. Possible values are: 2 - OS Family 2, equivalent to Windows Server 2008 R2 SP1. 3 - OS Family 3, equivalent to Windows Server 2012. 4 - OS Family 4, equivalent to Windows Server 2012 R2. 5 - OS Family 5, equivalent to Windows Server 2016. For more information, see Azure Guest OS Releases.
This property is meant for Windows Cloud Services pools, so you can remove this configuration.
Second:
vm_size = 'small',
You should set vm_size to a real VM size, for example Standard_A1. See "Choose a VM size for compute nodes in an Azure Batch pool".
Hope this helps. If you need more help, please leave me a message.
I think there are a lot of confusing examples on the net, or they simply match older versions of the SDK.
Digging deeper into the docs I found this.
cloud_service_configuration CloudServiceConfiguration The cloud service configuration for the pool. This property and virtualMachineConfiguration are mutually exclusive and one of the properties must be specified. This property cannot be specified if the Batch account was created with its poolAllocationMode property set to 'UserSubscription'.
In my case I could use only cloud_service_configuration = pool_config or virtual_machine_configuration = vmc, but not both at the same time.
This is the working code:
new_pool = models.PoolAddParameter(
    id = pool_id,
    vm_size = 'BASIC_A1',
    target_dedicated_nodes = 1,
    virtual_machine_configuration = vmc
)
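After the pool has been created, waiting for allocation to finish could look like this (a sketch only; the polling interval is arbitrary, and batch_client, models and pool_id are the objects from the question):
import time

# poll until the pool has finished allocating nodes
while batch_client.pool.get(pool_id).allocation_state != models.AllocationState.steady:
    time.sleep(10)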

Executemany command on a Linux box

import pyodbc
from sys import platform  # assuming the platform check below uses sys.platform

# pick the ODBC driver name depending on the operating system
if platform[0:3] == 'lin':
    oracledriver = '{Oracle}'
elif platform[0:3] == 'win':
    oracledriver = 'Oracle in OraClient12home2'
oracledbq = 'uat:1521/uat'
oracleuid = 'user'
oraclepwd = 'pwd'
oracleConn = pyodbc.connect(DRIVER=oracledriver, UID=oracleuid, PWD=oraclepwd, DBQ=oracledbq)
cursor = oracleConn.cursor()
cursor.fast_executemany = True
cursor.executemany("INSERT INTO matrix_new (A,B,C,D,E,F,G,H) values (?,?,?,?,?,?,?,?)", tuples)
pyodbc.Error: ('HY000', 'The driver did not supply an error!')
I am trying to batch insert about 30,000 rows. I even tried inserting in chunks of 100, but that still failed.
The code works fine on a Windows machine. Not exactly sure what is missing.
Currently using pyodbc with an Oracle server.
Any ideas?
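For reference, the chunked variant mentioned above might look something like this (a sketch; chunk_size and the tuples list mirror what the question describes):
chunk_size = 100  # the chunk size mentioned above; adjust as needed
for start in range(0, len(tuples), chunk_size):
    cursor.executemany(
        "INSERT INTO matrix_new (A,B,C,D,E,F,G,H) values (?,?,?,?,?,?,?,?)",
        tuples[start:start + chunk_size])
oracleConn.commit()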

How to use dcmtk/dcmprscp in Windows

How can I use dcmprscp to receive a DICOM file from a Print SCU and save it? I'm using DCMTK 3.6 and I have some trouble using it with the default help. This is what I'm doing in CMD:
dcmprscp.exe --config dcmpstat.cfg --printer PRINT2FILE
Each time I receive this message, but database\index.dat doesn't exist in Windows:
W: $dcmtk: dcmprscp v3.6.0 2011-01-06 $
W: 2016-02-21 00:08:09
W: started
E: database\index.dat: No such file or directory
F: Unable to access database 'database'
I tried to follow some tips, but got the same result:
http://www.programmershare.com/2468333/
http://www.programmershare.com/3020601/
and this is my PRINT2FILE printer config:
[PRINT2FILE]
hostname = localhost
type = LOCALPRINTER
description = PRINT2FILE
port = 20006
aetitle = PRINT2FILE
DisableNewVRs = true
FilmDestination = MAGAZINE\PROCESSOR\BIN_1\BIN_2
SupportsPresentationLUT = true
PresentationLUTinFilmSession = true
PresentationLUTMatchRequired = true
PresentationLUTPreferSCPRendering = false
SupportsImageSize = true
SmoothingType = 0\1\2\3\4\5\6\7\8\9\10\11\12\13\14\15
BorderDensity = BLACK\WHITE\150
EmptyImageDensity = BLACK\WHITE\150
MaxDensity = 320\310\300\290\280\270
MinDensity = 20\25\30\35\40\45\50
Annotation = 2\ANNOTATION
Configuration_1 = PERCEPTION_LUT=OEM001
Configuration_2 = PERCEPTION_LUT=KANAMORI
Configuration_3 = ANNOTATION1=FILE1
Configuration_4 = ANNOTATION1=PATID
Configuration_5 = WINDOW_WIDTH=256\WINDOW_CENTER=128
Supports12Bit = true
SupportsDecimateCrop = false
SupportsTrim = true
DisplayFormat=1,1\2,1\1,2\2,2\3,2\2,3\3,3\4,3\5,3\3,4\4,4\5,4\6,4\3,5\4,5\5,5\6,5\4,6\5,6
FilmSizeID = 8INX10IN\11INX14IN\14INX14IN\14INX17IN
MediumType = PAPER\CLEAR FILM\BLUE FILM
MagnificationType = REPLICATE\BILINEAR\CUBIC
The documentation of the "dcmprscp" tool says:
The dcmprscp utility implements the DICOM Basic Grayscale Print Management Service Class as SCP. It also supports the optional Presentation LUT SOP Class. The utility is intended for use within the DICOMscope viewer.
That means it is usually not run from the command line (as most of the other DCMTK tools are) but started automatically in the background by DICOMscope.
Anyway, I think the error message is clear:
E: database\index.dat: No such file or directory
F: Unable to access database 'database'
Did you check whether there is a subdirectory "database" and whether the "index.dat" file exists in that directory? If you ask why there is a need for a "database", then please read the next paragraph of the documentation:
The dcmprscp utility accepts print jobs from a remote Print SCU. It does not create real hardcopies but stores print jobs in the local DICOMscope database as a set of Stored Print objects (one per page) and Hardcopy Grayscale images (one per film box N-SET).
