I'm trying to create a pool based on a standard marketplace Ubuntu image. I'm using Azure SDK 4.0.0; the image reference, VM config reference and other settings are written based on learn.microsoft.com.
Here's my code:
import azure.batch as batch
from azure.batch import BatchServiceClient
from azure.batch.batch_auth import SharedKeyCredentials
from azure.batch import models
import sys
account = 'mybatch'
key = 'Acj1hh7vMR6DSodYgYEghjce7mHmfgfdgodYgYEghjce7mHmfgodYgYEghjce7mHmfgCj/7f3Zs1rHdfgPsdlA=='
batch_url = 'https://mybatch.westeurope.batch.azure.com'
creds = SharedKeyCredentials(account, key)
batch_client = BatchServiceClient(creds, base_url = batch_url)
pool_id = 'mypool3'
if batch_client.pool.exists( pool_id ):
    print( 'pool exists' )
    sys.exit()
vmc = models.VirtualMachineConfiguration(
    image_reference = models.ImageReference(
        offer = 'UbuntuServer',
        publisher = 'Canonical',
        sku = '16.04.0-LTS',
        version = 'latest',
        virtual_machine_image_id = None
    ),
    node_agent_sku_id = 'batch.node.ubuntu 16.04'
)
pool_config = models.CloudServiceConfiguration(os_family = '5')
new_pool = models.PoolAddParameter(
    id = pool_id,
    vm_size = 'small',
    cloud_service_configuration = pool_config,
    target_dedicated_nodes = 1,
    virtual_machine_configuration = vmc
)
batch_client.pool.add(new_pool)
Here are some image values I took from the azure portal ( Add pool JSON Editor ):
"imageReference": {
    "publisher": "Canonical",
    "offer": "UbuntuServer",
    "sku": "16.04.0-LTS"
},
But when I run the code I get an error:
Traceback (most recent call last):
File "a.py", line 80, in <module>
batch_client.pool.add(new_pool)
File "/root/miniconda/lib/python3.6/site-packages/azure/batch/operations/pool_operations.py", line 310, in add
raise models.BatchErrorException(self._deserialize, response)
azure.batch.models.batch_error_py3.BatchErrorException: {'additional_properties': {}, 'lang': 'en-US', 'value': 'The value provided for one of the properties in the request body is invalid.\nRequestId:d8a1f7fa-6f40-4e4e-8f41-7958egas6efa\nTime:2018-12-05T16:18:44.5453610Z'}
Which image values are wrong? Is it possible to get more information on this error using the RequestId?
UPDATE
I found a newer example here which uses the helper select_latest_verified_vm_image_with_node_agent_sku to get the image reference. Same error: The value provided for one of the properties in the request body is invalid.
I tested with your code and got the same error. Then I researched and changed some things in the code. The problem was caused by two things.
First:
pool_config = models.CloudServiceConfiguration(os_family = '5')
You can take a look at the description of the models.CloudServiceConfiguration:
os_family: The Azure Guest OS family to be installed on the virtual
machines in the pool. Possible values are: 2 - OS Family 2, equivalent to
Windows Server 2008 R2 SP1. 3 - OS Family 3, equivalent to Windows Server
2012. 4 - OS Family 4, equivalent to Windows Server 2012 R2. 5 - OS Family
5, equivalent to Windows Server 2016. For more information, see Azure
Guest OS Releases
This property seems to be intended for Windows pools. You can remove this configuration.
Second:
vm_size = 'small',
You should set vm_size to a real VM size, for example Standard_A1. See Choose a VM size for compute nodes in an Azure Batch pool.
Hope this helps. If you need more help, please leave me a message.
I think there are a lot of confusing examples on the net, or they simply match an older version of the SDK.
Digging deeper into the docs I found this.
cloud_service_configuration CloudServiceConfiguration The cloud
service configuration for the pool. This property and
virtualMachineConfiguration are mutually exclusive and one of the
properties must be specified. This property cannot be specified if the
Batch account was created with its poolAllocationMode property set to
'UserSubscription'.
In my case I could use only
cloud_service_configuration = pool_config or virtual_machine_configuration = vmc, but not both at the same time.
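The Batch service enforces this rule server-side; as a plain-Python illustration (not the SDK's actual validation), the constraint amounts to:

```python
def check_pool_config(cloud_service_configuration=None,
                      virtual_machine_configuration=None):
    """Mimic the Batch rule: exactly one of the two configurations must be set."""
    if cloud_service_configuration and virtual_machine_configuration:
        raise ValueError("cloudServiceConfiguration and "
                         "virtualMachineConfiguration are mutually exclusive")
    if not (cloud_service_configuration or virtual_machine_configuration):
        raise ValueError("one of the two configurations must be specified")

# OK: only virtual_machine_configuration is given
check_pool_config(virtual_machine_configuration={"image": "UbuntuServer"})
```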
This is the working code:
new_pool = models.PoolAddParameter(
    id = pool_id,
    vm_size = 'BASIC_A1',
    target_dedicated_nodes = 1,
    virtual_machine_configuration = vmc
)
Related
In a Google Cloud Function (Python 3.7), I need to fetch the compliance state of all VMs in a given location in a project.
From available google documentation here I could see the REST API format:
https://cloud.google.com/compute/docs/os-configuration-management/view-compliance#view_compliance_state
On searching the client library here, I found this:
class google.cloud.osconfig_v1alpha.types.ListInstanceOSPoliciesCompliancesRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
A request message for listing OS policies compliance data for all Compute Engine VMs in the given location.
parent (str)
    Required. The parent resource name. Format: projects/{project}/locations/{location}. For {project}, either the Compute Engine project number or project ID can be provided.
page_size (int)
    The maximum number of results to return.
page_token (str)
    A pagination token returned from a previous call to ListInstanceOSPoliciesCompliances that indicates where this listing should continue from.
filter (str)
    If provided, this field specifies the criteria that must be met by an InstanceOSPoliciesCompliance API resource to be included in the response.
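Only parent is required. It is just a formatted string, so it can be built and checked locally before any API call (the project and location values below are placeholders):

```python
def build_parent(project: str, location: str) -> str:
    # Format required by the osconfig API: projects/{project}/locations/{location}
    return f"projects/{project}/locations/{location}"

# With placeholder values:
parent = build_parent("my-project", "us-central1-a")
# parent == "projects/my-project/locations/us-central1-a"
```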
And the response class as:
class google.cloud.osconfig_v1alpha.types.ListInstanceOSPoliciesCompliancesResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]
Bases: proto.message.Message
A response message for listing OS policies compliance data for all Compute Engine VMs in the given location.
instance_os_policies_compliances (Sequence[google.cloud.osconfig_v1alpha.types.InstanceOSPoliciesCompliance])
    List of instance OS policies compliance objects.
next_page_token (str)
    The pagination token to retrieve the next page of instance OS policies compliance objects.
raw_page (property)
But I am not sure how to use this information in the python code.
I have written this but not sure if this is correct:
from google.cloud.osconfig_v1alpha.services.os_config_zonal_service import client
from google.cloud.osconfig_v1alpha.types import ListInstanceOSPoliciesCompliancesRequest
import logging
logger = logging.getLogger(__name__)
import os

def handler():
    try:
        project_id = os.environ["PROJECT_ID"]
        location = os.environ["ZONE"]
        # list compliance state
        request = ListInstanceOSPoliciesCompliancesRequest(
            parent=f"projects/{project_id}/locations/{location}")
        response = client.instance_os_policies_compliance(request)
        return response
    except Exception as e:
        logger.error("Unable to get compliance - %s " % str(e))
I could not find any usage example for the client library methods anywhere.
Could someone please help me here?
EDIT:
This is what I am using now:
from googleapiclient.discovery import build

def list_policy_compliance():
    projectId = "my_project"
    zone = "my_zone"
    try:
        service = build('osconfig', 'v1alpha', cache_discovery=False)
        compliance_response = service.projects().locations(
            ).instanceOsPoliciesCompliances().list(
                parent='projects/%s/locations/%s' % (
                    projectId, zone)).execute()
        return compliance_response
    except Exception as e:
        raise Exception(e)
Something like this should work:
from google.cloud import os_config_v1alpha as osc

def handler():
    client = osc.OsConfigZonalService()
    project_id = "my_project"
    location = "my_gcp_zone"
    parent = f"projects/{project_id}/locations/{location}"
    response = client.list_instance_os_policies_compliances(
        parent=parent
    )
    # response is an iterable yielding
    # InstanceOSPoliciesCompliance objects
    for result in response:
        # do something with result
        ...
You can also construct the request like this:
response = client.list_instance_os_policies_compliances(
    request={
        "parent": parent
    }
)
Answering my own question here , this is what I used:
from googleapiclient.discovery import build

def list_policy_compliance():
    projectId = "my_project"
    zone = "my_zone"
    try:
        service = build('osconfig', 'v1alpha', cache_discovery=False)
        compliance_response = service.projects().locations(
            ).instanceOsPoliciesCompliances().list(
                parent='projects/%s/locations/%s' % (
                    projectId, zone)).execute()
        return compliance_response
    except Exception as e:
        raise Exception(e)
I have this block of code that basically translates text from one language to another using the cloud translate API. The problem is that this code always throws the error: "Caller's project doesn't match parent project". What could be the problem?
translation_separator = "translated_text: "
language_separator = "detected_language_code: "
translate_client = translate.TranslationServiceClient()
# parent = translate_client.location_path(
#     self.translate_project_id, self.translate_location
# )
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = (
    os.getcwd()
    + "/translator_credentials.json"
)
# Text can also be a sequence of strings, in which case this method
# will return a sequence of results for each text.
try:
    result = str(
        translate_client.translate_text(
            request={
                "contents": [text],
                "target_language_code": self.target_language_code,
                "parent": f'projects/{self.translate_project_id}/'
                          f'locations/{self.translate_location}',
                "model": self.translate_model
            }
        )
    )
    print(result)
except Exception as e:
    print("error here>>>>>", e)
Your issue seems to be related to the authentication method that you are using in your application; please follow the guide for authentication methods with the Translate API. If you are trying to pass the credentials in code, you can explicitly point to your service account file with:
def explicit():
    from google.cloud import storage

    # Explicitly use service account credentials by specifying the private key
    # file.
    storage_client = storage.Client.from_service_account_json(
        'service_account.json')
Also, there is a codelab for getting started with the Translation API in Python; it is a great step-by-step guide for running the Translate API with Python.
If the issue persists, you can try opening an issue in the Public Issue Tracker for Google support.
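Since that error usually means the project in the parent string differs from the project the credentials belong to, a quick local check can compare the two before calling the API (hypothetical helper; it only assumes the standard service-account JSON layout with a project_id field):

```python
import json

def parent_matches_credentials(parent: str, credentials_path: str) -> bool:
    """Return True if the project in `parent` equals the service account's project_id."""
    with open(credentials_path) as f:
        creds_project = json.load(f)["project_id"]
    # parent has the form: projects/{project}/locations/{location}
    parent_project = parent.split("/")[1]
    return parent_project == creds_project
```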
I've got a problem with an Azure Stream Analytics (ASA) job that should call an Azure ML Service function to score the provided input data.
The query was developed and tested in Visual Studio (VS) 2019 with the "Azure Data Lake and Stream Analytics Tools" extension.
As input the job uses an Azure IoT Hub, and as output the VS local output for testing purposes (and later Blob Storage as well).
Within this environment everything works fine; the call to the ML Service function is successful and it returns the desired response.
Using the same query, user-defined functions and aggregates as in VS in the cloud job, no output events are generated (with neither Blob Storage nor Power BI as output).
In the ML web service it can be seen that ASA successfully calls the function, but somehow no response data is returned.
Deleting the ML function call from the query results in a successful run of the job with output events.
For the deployment of the ML Webservice I tried the following (working for VS, no output in cloud):
ACI (1 CPU, 1 GB RAM)
AKS dev/test (Standard_B2s VM)
AKS production (Standard_D3_v2 VM)
The inference script function schema:
input: array
output: record
Inference script input schema looks like:
@input_schema('data', NumpyParameterType(input_sample, enforce_shape=False))
@output_schema(NumpyParameterType(output_sample))  # other parameter type for record caused an error in ASA
def run(data):
    response = {'score1': 0,
                'score2': 0,
                'score3': 0,
                'score4': 0,
                'score5': 0,
                'highest_score': None}
And the return value:
return [response]
The ASA job subquery with ML function call:
with raw_scores as (
    select
        time, udf.HMMscore(udf.numpyfySeq(Sequence)) as score
    from Sequence
)
and the UDF "numpyfySeq" like:
// creates a N x 18 size array
function numpyfySeq(Sequence) {
    'use strict';
    var transpose = m => m[0].map((x, i) => m.map(x => x[i]));
    var array = [];
    for (var feature in Sequence) {
        if (feature != "time") {
            array.push(Sequence[feature]);
        }
    }
    return transpose(array);
}
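The transpose one-liner in the UDF is easy to verify outside ASA; an equivalent in Python (with made-up sample data) is:

```python
def transpose(matrix):
    # Same as the JavaScript one-liner: m[0].map((x, i) => m.map(x => x[i]))
    return [list(row) for row in zip(*matrix)]

# Two features with three time steps each become three rows of two features:
transpose([[1, 2, 3], [4, 5, 6]])  # [[1, 4], [2, 5], [3, 6]]
```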
"Sequence" is a subquery that aggregates the data into sequences (arrays) with an user-defined aggregate.
In VS the data comes from the IoT-Hub (cloud input selected).
The "function signature" is recognized correctly in the portal as seen in the image: Function signature
I hope the provided information is sufficient and you can help me.
Edit:
The authentication for the Azure ML webservice is key-based.
In ASA, when selecting to use an "Azure ML Service" function, it will automatically detect and use the keys from the deployed ML model within the subscription and ML workspace.
Deployment code used (in this example for ACI, but looks nearly the same for AKS deployment):
from azureml.core import Workspace
from azureml.core.model import InferenceConfig, Model
from azureml.core.environment import Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.core.webservice import AciWebservice

ws = Workspace.from_config()
env = Environment(name='scoring_env')
deps = CondaDependencies(conda_dependencies_file_path='./deps')
env.python.conda_dependencies = deps
inference_config = InferenceConfig(source_directory='./prediction/',
                                   entry_script='score.py',
                                   environment=env)
deployment_config = AciWebservice.deploy_configuration(auth_enabled=True, cpu_cores=1,
                                                       memory_gb=1)
model = Model(ws, 'HMM')
service = Model.deploy(ws, 'hmm-scoring', [model],
                       inference_config,
                       deployment_config,
                       overwrite=True)
service.wait_for_deployment(show_output=True)
and the conda dependencies file (./deps):
name: project_environment
dependencies:
  # The python interpreter version.
  # Currently Azure ML only supports 3.5.2 and later.
  - python=3.7.5
  - pip:
    - sklearn
    - azureml-core
    - azureml-defaults
    - inference-schema[numpy-support]
    - hmmlearn
    - numpy
  - pip
channels:
  - anaconda
  - conda-forge
The code used in the score.py is just a regular score operation with the loaded models and formatting like so:
score1 = model1.score(data)
score2 = model2.score(data)
score3 = model3.score(data)
# Same scoring with model4 and model5
# scaling of the scores to a defined interval and determination of model that delivered highest score
response['score1'] = score1
response['score2'] = score2
# and so on
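The "highest score" determination described in those comments can be sketched in plain Python (the score names and values here are made up; the real models and scaling are not shown in the question):

```python
def build_response(scores):
    """scores: dict mapping a score name to its numeric value."""
    response = dict(scores)
    # pick the name of the model that delivered the highest score
    response["highest_score"] = max(scores, key=scores.get)
    return response

build_response({"score1": 0.2, "score2": 0.9, "score3": 0.5})
# {'score1': 0.2, 'score2': 0.9, 'score3': 0.5, 'highest_score': 'score2'}
```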
I've managed to establish a connection to a public queue with the older pymqi version for Python 2 using the following Python code:
import logging
import pymqi
logging.basicConfig(level=logging.INFO)
queue_manager = 'QM1'
channel = 'BZU.UAT.CHNL'
host = '245.274.46.56'
port = '1416'
queue_name = 'BZU.UAT.QUEUE'
conn_info = '%s(%s)' % (host, port)
ssl_cipher_spec = 'TLS_RSA_WITH_3DES_EDE_CBC_SHA'
key_repo_location = 'D:\\App\\BZU\\keydb\\key'
message = 'Hello from Python!'
cd = pymqi.CD()
cd.ChannelName = channel
cd.ConnectionName = conn_info
cd.ChannelType = pymqi.CMQC.MQCHT_CLNTCONN
cd.TransportType = pymqi.CMQC.MQXPT_TCP
cd.SSLCipherSpec = ssl_cipher_spec
cd.UserIdentifier = 'BZU'
cd.Password = ''
sco = pymqi.SCO()
sco.KeyRepository = key_repo_location
qmgr = pymqi.QueueManager(None)
qmgr.connect_with_options(queue_manager, cd, sco)
put_queue = pymqi.Queue(qmgr, queue_name)
put_queue.put(message)
get_queue = pymqi.Queue(qmgr, queue_name)
logging.info('Here is the message again: [%s]' % get_queue.get())
put_queue.close()
get_queue.close()
qmgr.disconnect()
Unfortunately, this code doesn’t work with pymqi version 1.9.3 for Python 3.
In this case, I get the following error message:
Traceback (most recent call last):
File ".\mq_conn_with_ssl.py", line 33, in <module>
qmgr.connect_with_options(queue_manager, cd, sco)
File "D:\App\BZU\arn-basis-common\py\pymqi\__init__.py", line 1347, in connect_with_options
raise MQMIError(rv[1], rv[2])
pymqi.MQMIError: MQI Error. Comp: 2, Reason 2393: FAILED: MQRC_SSL_INITIALIZATION_ERROR
I had to convert all strings in this code to bytes, since the library demands all strings as bytes.
Example:
queue_manager = b'QM1'
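Since every MQ string parameter needs the same treatment, a small helper can keep the conversions in one place (illustrative; this is not part of pymqi):

```python
def to_bytes(value, encoding="utf-8"):
    # pymqi for Python 3 expects bytes for MQ string fields
    return value if isinstance(value, bytes) else value.encode(encoding)

channel = to_bytes('BZU.UAT.CHNL')                      # b'BZU.UAT.CHNL'
conn_info = to_bytes('%s(%s)' % ('245.274.46.56', '1416'))
queue_manager = to_bytes('QM1')                         # b'QM1'
```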
In the comments you stated you found the following error in the AMQERR01.LOG file:
AMQ9716: Remote SSL certificate revocation status check failed for channel 'BZU.UAT.CHNL'.
Compare the mqclient.ini file on your working server and on the non-working server for differences in the SSL: stanza that would account for the OCSP check failing.
The location of the mqclient.ini file can be found in the IBM MQ Knowledge Center page IBM MQ>Configuring>Configuring connections between the server and clients>Configuring a client using a configuration file>Location of the client configuration file. A summary is below:
The location specified by the environment variable MQCLNTCF.
A file called mqclient.ini in the present working directory of the application.
A file called mqclient.ini in the IBM MQ data directory for Windows, UNIX and Linux systems.
A file called mqclient.ini in a standard directory appropriate to the platform, and accessible to users:
The documentation on the SSL stanza of the mqclient.ini can be found in the IBM MQ Knowledge Center page IBM MQ>Configuring>Configuring connections between the server and clients>Configuring a client using a configuration file>SSL stanza of the client configuration file. A summary is below:
OCSPAuthentication = OPTIONAL | REQUIRED | WARN
OCSPCheckExtensions = YES | NO
SSLHTTPProxyName = string
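For example, if the OCSP check should only warn instead of fail, the SSL stanza in mqclient.ini might look like this (illustrative values; confirm against your security policy before relaxing revocation checking):

```ini
SSL:
  OCSPAuthentication=WARN
  OCSPCheckExtensions=YES
```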
def get_instance_id_from_pip(self, pip):
    subscription_id = "69ff3a41-a66a-4d31-8c7d-9a1ef44595c3"
    compute_client = ComputeManagementClient(self.credentials, subscription_id)
    network_client = NetworkManagementClient(self.credentials, subscription_id)
    print("Get all public IP")
    for public_ip in network_client.public_ip_addresses.list_all():
        if public_ip.ip_address == pip:
            print(public_ip)
            # Get id
            pip_id = public_ip.id.split('/')
            print("pip id : {}".format(pip_id))
            rg_from_pip = pip_id[4].lower()
            print("RG : {}".format(rg_from_pip))
            pip_name = pip_id[-1]
            print("pip name : {}".format(pip_name))
            for vm in compute_client.virtual_machines.list_all():
                vm_id = vm.id.split('/')
                # print("vm ref id: {}".format(vm_id))
                rg_from_vm = vm_id[4].lower()
                if rg_from_pip == rg_from_vm:
                    # this is the VM in the same rg as pip
                    for ni_reference in vm.network_profile.network_interfaces:
                        ni_reference = ni_reference.id.split('/')
                        ni_name = ni_reference[8]
                        print("ni reference: {}".format(ni_reference))
                        net_interface = network_client.network_interfaces.get(rg_from_pip, ni_name)
                        print("net interface ref {}".format(net_interface))
                        public_ip_reference = net_interface.ip_configurations[0].public_ip_address
                        if public_ip_reference:
                            public_ip_reference = public_ip_reference.id.split('/')
                            ip_group = public_ip_reference[4]
                            ip_name = public_ip_reference[8]
                            print("IP group {}, IP name {}".format(ip_group, ip_name))
                            if ip_name == pip_name:
                                print("Thank god. Finally !!!!")
                                print("VM ID :-> {}".format(vm.id))
                                return vm.id
I have the above code to get the VM instance ID from a public IP, but it's not working. What is really surprising is that for all instances, I am getting a None value for x.public_ip_address.ip_address.
I had multiple Read the Docs references for the Azure Python SDK, but somehow none of the links work. Good job Azure :)
Some of them:
https://azure-sdk-for-python.readthedocs.io/en/v1.0.3/resourcemanagementcomputenetwork.html
https://azure-storage.readthedocs.io/en/latest/ref/azure.storage.file.fileservice.html
Second edit:
I got the answer to this, and the above code will return the VM ID given the public IP address. Though, as you can see, it is not an absolutely optimized answer, so better answers are welcome. Thanks!
Docs have moved here:
https://learn.microsoft.com/python/azure/
We set up some redirection, but unfortunately it's not possible to do global redirection on RTD, so some pages are 404 :/
About your trouble, I would try to use the public IP address operations group directly:
https://learn.microsoft.com/en-us/python/api/azure.mgmt.network.v2017_11_01.operations.publicipaddressesoperations?view=azure-python
You get this one from the Network client, as client.public_ip_addresses.
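As an aside, the index arithmetic in the question's code (pip_id[4], ni_reference[8]) can be made more readable by naming the resource ID segments; a small sketch (the sample resource ID below is made up):

```python
def parse_resource_id(resource_id):
    """Split an Azure resource ID into a {segment_name: value} dict.

    IDs look like:
    /subscriptions/<sub>/resourceGroups/<rg>/providers/<namespace>/<type>/<name>
    """
    parts = resource_id.strip("/").split("/")
    # pair up path segments: subscriptions -> <sub>, resourceGroups -> <rg>, ...
    return dict(zip(parts[::2], parts[1::2]))

pip_id = ("/subscriptions/xxxx/resourceGroups/my-rg/providers/"
          "Microsoft.Network/publicIPAddresses/my-ip")
parsed = parse_resource_id(pip_id)
# parsed["resourceGroups"] == "my-rg", parsed["publicIPAddresses"] == "my-ip"
```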