Accessing Azure webhook data with python - python-3.x

How do you access the WebhookData object of an Azure Automation webhook using Python? I read the documentation on this, but it is written for PowerShell and doesn't help in my case. My Azure webhook URL endpoint is successfully receiving data from a custom external application, and I would like to read that data and run logic driven by it. As shown in the screenshot below, the data is arriving in Azure.
This is the error message I am getting when I attempt to access the WEBHOOKDATA input parameter:
Traceback (most recent call last):
  File "C:\Temp\rh0xijl1.ayb\3b9ba51c-73e7-44ba-af36-3c910e659c71", line 7, in <module>
    received_data = WEBHOOKDATA
NameError: name 'WEBHOOKDATA' is not defined
This is the code producing the error message:
#!/usr/bin/env python3
import json
# Here is where my question is. How do I get this in Python?
# Surely, I should be able to access this easily. But how.
# Powershell does have a concept of param in the documentation - but I want to do this in Python.
received_data = WEBHOOKDATA
# serialize the received object to a JSON string
received_as_text = json.dumps(received_data)
print(received_as_text)

You access runbook input parameters with sys.argv. See Tutorial: Create a Python 3 runbook
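A minimal sketch of reading the webhook payload in a Python runbook, assuming the webhook body is delivered as the first runbook argument (the exact shape of the argument depends on how the webhook is configured, so treat the parsing step as an assumption):

import sys
import json

# sys.argv[0] is the script path; runbook input parameters follow it.
if len(sys.argv) > 1:
    received_data = json.loads(sys.argv[1])  # parse the JSON payload
    print(json.dumps(received_data, indent=2))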

Related

GitlabParsingError when accessing a project with GitLab API with Python

I'm trying to access a project on a private GitLab instance (please see the screenshot for the version numbers) using the python-gitlab module.
I created an access token with all permissions via the web UI of the GitLab instance and copied the token into my Python code:
import gitlab
gl = gitlab.Gitlab("https://MyGit.com/.../MyProject", "k1WMD-fE5nY5V-RWFb-G")
print(gl.projects.list())
Python throws the following error during execution:
Exception: GitlabParsingError (note: full exception trace is shown but execution is paused at: <module>)
Failed to parse the server message
During handling of the above exception, another exception occurred:
The above exception was the direct cause of the following exception:
File ".../app.py", line 226, in <module> (Current frame)
print(gl.projects.list())
The first argument to gitlab.Gitlab() is the base URL of the instance, not the full path to your project, e.g. https://gitlab.example.com. You should also pass the token with the private_token keyword argument.
So, unless your instance lives at a relative path, you should have:
gl = gitlab.Gitlab('https://MyGit.com', private_token='your API key')
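For example, a short sketch under the same assumptions (the instance URL and the project path 'some-group/MyProject' are placeholders):

import gitlab

# Connect to the instance root, not to a specific project URL.
gl = gitlab.Gitlab('https://MyGit.com', private_token='your API key')

# Either list the projects visible to the token...
print(gl.projects.list())

# ...or fetch one project by its namespace/path (placeholder value).
project = gl.projects.get('some-group/MyProject')
print(project.name)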

How to provide serializer, deserializer, and config arguments when instantiating Databricks in Azure Python SDK? [duplicate]

[Previously in this post I asked how to provision a Databricks service without any workspace. Now I'm asking how to provision the service with a workspace, as the first scenario seems unfeasible.]
As a cloud admin I'm asked to write a script using the Azure Python SDK which will provision a Databricks service for one of our big data dev teams.
I can't find much online about Databricks within the Azure Python SDK other than https://azuresdkdocs.blob.core.windows.net/$web/python/azure-mgmt-databricks/0.1.0/azure.mgmt.databricks.operations.html
and
https://azuresdkdocs.blob.core.windows.net/$web/python/azure-mgmt-databricks/0.1.0/azure.mgmt.databricks.html
These appear to offer some help provisioning a workspace, but I am not quite there yet.
What am I missing?
EDITS:
Thanks to @Laurent Mazuel and @Jim Xu for their help.
Here's the code I'm running now, and the error I'm receiving:
client = DatabricksClient(credentials, subscription_id)
workspace_obj = client.workspaces.get("example_rg_name", "example_databricks_workspace_name")
WorkspacesOperations.create_or_update(
    workspace_obj,
    "example_rg_name",
    "example_databricks_workspace_name",
    custom_headers=None,
    raw=False,
    polling=True
)
error:
TypeError: create_or_update() missing 1 required positional argument: 'workspace_name'
I'm a bit puzzled by that error as I've provided the workspace name as the third parameter, and according to this documentation, that's just what this method requires.
I also tried the following code:
client = DatabricksClient(credentials, subscription_id)
workspace_obj = client.workspaces.get("example_rg_name", "example_databricks_workspace_name")
client.workspaces.create_or_update(
    workspace_obj,
    "example_rg_name",
    "example_databricks_workspace_name"
)
Which results in:
Traceback (most recent call last):
  File "./build_azure_visibility_core.py", line 112, in <module>
    ca_databricks.create_or_update_databricks(SUB_PREFIX)
  File "/home/gitlab-runner/builds/XrbbggWj/0/SA-Cloud/azure-visibility-core/expd_az_databricks.py", line 34, in create_or_update_databricks
    self.databricks_workspace_name
  File "/home/gitlab-runner/builds/XrbbggWj/0/SA-Cloud/azure-visibility-core/azure-visibility-core/lib64/python3.6/site-packages/azure/mgmt/databricks/operations/workspaces_operations.py", line 264, in create_or_update
    **operation_config
  File "/home/gitlab-runner/builds/XrbbggWj/0/SA-Cloud/azure-visibility-core/azure-visibility-core/lib64/python3.6/site-packages/azure/mgmt/databricks/operations/workspaces_operations.py", line 210, in _create_or_update_initial
    body_content = self._serialize.body(parameters, 'Workspace')
  File "/home/gitlab-runner/builds/XrbbggWj/0/SA-Cloud/azure-visibility-core/azure-visibility-core/lib64/python3.6/site-packages/msrest/serialization.py", line 589, in body
    raise ValidationError("required", "body", True)
msrest.exceptions.ValidationError: Parameter 'body' can not be None.
ERROR: Job failed: exit status 1
So the error is raised at line 589 in serialization.py, but I don't see what in my code is causing it. Thanks to all who have been generous enough to assist!
You need to create a Databricks client; workspaces will be attached to it:
client = DatabricksClient(credentials, subscription_id)
workspace = client.workspaces.get(resource_group_name, workspace_name)
I don't think creating the service without a workspace is even possible; if you try to create a Databricks service in the portal, you will see that a workspace name is required as well.
So, using the SDK, I would look at the doc for client.workspaces.create_or_update.
(I work at MS in the SDK team)
With help from @Laurent Mazuel and support engineers at Microsoft, I have a solution:
managed_resource_group_ID = ("/subscriptions/" + sub_id + "/resourceGroups/" + managed_rg_name)
client = DatabricksClient(credentials, subscription_id)
workspace_obj = client.workspaces.get(rg_name, databricks_workspace_name)
client.workspaces.create_or_update(
    {
        "managedResourceGroupId": managed_resource_group_ID,
        "sku": {"name": "premium"},
        "location": location
    },
    rg_name,
    databricks_workspace_name
).wait()
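For readability, the same call can be written with keyword arguments; this is a sketch that assumes the azure-mgmt-databricks 0.1.0 signature of (parameters, resource_group_name, workspace_name), with the same placeholder variables as above:

poller = client.workspaces.create_or_update(
    parameters={
        "managedResourceGroupId": managed_resource_group_ID,
        "sku": {"name": "premium"},
        "location": location,
    },
    resource_group_name=rg_name,
    workspace_name=databricks_workspace_name,
)
workspace = poller.result()  # block until provisioning completes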

Error: 503 DNS resolution failed in Google Translate API but ONLY when executing the Python file via terminal

I am running into a strange issue when using the Google Translate API with a JSON authorization key. I can run the file without any issue directly from my editor of choice (VSCode). No errors.
But when I run the same file via the terminal, it executes up until just after loading the Google Translate credentials from disk. The call then throws the error message below.
Since this only ever happens when running the file from the terminal, I find this bug hard to tackle.
The goal is to have this file collect data, translate some of the fields using Google services, and then store the data in a database.
Below is the error message:
Error: 503 DNS resolution failed
Traceback (most recent call last):
File "/home/MY-USER-NAME/anaconda3/lib/python3.6/site-packages/google/api_core/grpc_helpers.py", line 57, in error_remapped_callable
File "/home/MY-USER-NAME/anaconda3/lib/python3.6/site-packages/grpc/_channel.py", line 690, in __call__
File "/home/MY-USER-NAME/anaconda3/lib/python3.6/site-packages/grpc/_channel.py", line 592, in _end_unary_response_blocking
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "DNS resolution failed"
debug_error_string = "{"created":"@1584629013.091398712","description":"Failed to pick subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":3934,"referenced_errors":[{"created":"@1584629013.091395769","description":"Resolver transient failure","file":"src/core/ext/filters/client_channel/resolving_lb_policy.cc","file_line":262,"referenced_errors":[{"created":"@1584629013.091394954","description":"DNS resolution failed","file":"src/core/ext/filters/client_channel/resolver/dns/c_ares/dns_resolver_ares.cc","file_line":370,"grpc_status":14,"referenced_errors":[{"created":"@1584629013.091389655","description":"C-ares status is not ARES_SUCCESS: Could not contact DNS servers","file":"src/core/ext/filters/client_channel/resolver/dns/c_ares/grpc_ares_wrapper.cc","file_line":244,"referenced_errors":[{"created":"@1584629013.091380513","description":"C-ares status is not ARES_SUCCESS: Could not contact DNS servers","file":"src/core/ext/filters/client_channel/resolver/dns/c_ares/grpc_ares_wrapper.cc","file_line":244}]}]}]}]}"
>
This is the relevant part of my code:
Dedicated CRON file (I use this two-step setup in many other projects without any issue)
#! anaconda3/bin/python
import os
os.chdir(r'/home/MY-USER-NAME/path-to-project')
import code_file  # placeholder name for the script below
Code file (simple .py script)
[...]
from google.oauth2 import service_account
from google.cloud import translate
key_path = 'path-to-file/credentials.json'
credentials = service_account.Credentials.from_service_account_file(
    key_path, scopes=['https://www.googleapis.com/auth/cloud-platform'])
def translate_to_en(contents, credentials=None, verbose=False, use_pinyin=False):
    [...]
    client = translate.TranslationServiceClient(credentials=credentials)
    parent = client.location_path(credentials.project_id, 'global')
    [...]
    response = client.translate_text(
        parent=parent,
        contents=[contents],
        mime_type='text/plain',  # mime types: text/plain, text/html
        target_language_code='en-US')
    [...]
[...]
for c in trans_cols:
    df[f'{c}__trans'] = df[c].apply(lambda x: translate_to_en(contents=x, credentials=credentials, verbose=False))
[...]
If there is anyone with a good idea to solve this, your help is greatly appreciated.
It seems the issue is related to a grpc bug reported on GitHub. For gRPC name resolution, you can follow this doc on GitHub.
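One workaround that is sometimes suggested for c-ares resolution failures (an assumption on my part, not something confirmed in that issue) is to switch gRPC to the native DNS resolver before any client is created, e.g. at the top of the CRON file:

import os

# Must be set before grpc opens its first channel.
os.environ['GRPC_DNS_RESOLVER'] = 'native'  # use the OS resolver instead of c-ares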

How do I query GCP log viewer and obtain json results in Python 3.x (like gcloud logging read)

I'm building a tool to download GCP logs, save the logs to disk as single line json entries, then perform processing against those logs. The program needs to support both logs exported to cloud storage and logs currently in stackdriver (to partially support environments where exports to cloud storage hasn't been pre-configured). The cloud storage piece is done, but I'm having difficulty with downloading logs from stackdriver.
I would like to implement functionality similar to the gcloud command 'gcloud logging read' in Python. Yes, I could shell out to gcloud, but I would like to build everything into one tool.
I currently have this sample code to print the result of hits, however I can't get the full log entry in JSON format:
def downloadStackdriver():
    client = logging.Client()
    FILTER = "resource.type=project"
    for entry in client.list_entries(filter_=FILTER):
        a = entry.payload.value
        print(a)
How can I obtain the full JSON output of matching logs, the way gcloud logging read does?
Based on other Stack Overflow pages, I've attempted to use MessageToDict and MessageToJson, however I receive the error:
AttributeError: 'ProtobufEntry' object has no attribute 'DESCRIPTOR'
You can use the to_api_repr function on the LogEntry class from the google-cloud-logging package to do this:
from google.cloud import logging

client = logging.Client()
logger = client.logger('log_name')
for entry in logger.list_entries():
    print(entry.to_api_repr())
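Since to_api_repr() returns a plain dict, it can also be dumped straight to the single-line JSON entries the question describes; a small sketch building on the snippet above:

import json

from google.cloud import logging

client = logging.Client()
logger = client.logger('log_name')
for entry in logger.list_entries():
    print(json.dumps(entry.to_api_repr()))  # one JSON object per line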

Proton connection to Azure IoT Hub AMQPS endpoints

I'm trying to use Python 2.7 qpid proton (0.17.0) on Debian 4.8.4-1 on Google Cloud to connect to the AMQPS 1.0 service endpoints of an Azure IoT Hub.
I'm using a shared access policy and a SAS token pair (which work successfully in AMQP.Net Lite under C#) for SASL PLAIN. For Azure IoT Hub, you reference the shared access policy as the username in the format policyName@sas.root.IoTHubName
When I use these (obfuscated) credentials with qpid proton as follows
import sys
from proton import Messenger, Message
base = "amqps://iothubowner#sas.root.HubName:SharedAccessSignature sr=HubName.azure-devices.net%2fmessages&sig=PERCENT_ENCODED_SIG&se=1493454429&st=1490861529&skn=iothubowner#HubName.azure-devices.net:5671"
entityName = "/messages/servicebound/feedback"
messenger = Messenger()
messenger.subscribe("%s%s" % ( base, entityName))
messenger.start()
I get the following error
(debug print of connection string being passed) amqps://iothubowner@sas.root.HubName:SharedAccessSignature sr=HubName.azure-devices.net%2fmessages&sig=PERCENT_ENCODED_SIG&se=1493454429&st=1490861529&skn=iothubowner@HubName.azure-devices.net:5671
Traceback (most recent call last):
  File "readProton2.py", line 27, in <module>
    messenger.subscribe("%s%s" % ( base, entityName))
  File "/usr/lib/python2.7/dist-packages/proton/__init__.py", line 496, in subscribe
    self._check(pn_error_code(pn_messenger_error(self._mng)))
  File "/usr/lib/python2.7/dist-packages/proton/__init__.py", line 300, in _check
    raise exc("[%s]: %s" % (err, pn_error_text(pn_messenger_error(self._mng))))
proton.MessengerException: [-2]: CONNECTION ERROR (sas.root.HubName:SharedAccessSignature sr=HubName.azure-devices.net%2fmessages&sig=PERCENT_ENCODED_SIG&se=1493454429&st=1490861529&skn=iothubowner@HubName.azure-devices.net:5671): getaddrinfo(sas.root.HubName, SharedAccessSignature sr=HubName.azure-devices.net%2fmessages&sig=PERCENT_ENCODED_SIG&se=1493454429&st=1490861529&skn=iothubowner@HubName.azure-devices.net:5671): Servname not supported for ai_socktype
The final error message suggests the connection string is being mangled during parsing (on a very quick look, pni_default_rewrite in qpid-proton/messenger.c appears to split the connection string on the first occurrence of @, which could be the problem).
However, I'm new to AMQP and proton, so before I raise a bug I want to check whether others have successfully used proton with IoT Hub, or whether I've missed something.
You need to percent-encode the AMQP username and password.
The username
iothubowner@sas.root.HubName
becomes
iothubowner%40sas.root.HubName
and the password (the SAS token)
SharedAccessSignature sr=HubName.azure-devices.net%2fmessages&sig=PERCENT_ENCODED_SIG&se=1493454429&st=1490861529&skn=iothubowner
becomes
SharedAccessSignature%20sr%3DHubName.azure-devices.net%252fmessages%26sig%3DPERCENT_ENCODED_SIG%26se%3D1493454429%26st%3D1490861529%26skn%3Diothubowner
The server name should be HubName.azure-devices.net:5671, but as you can see in the error message, the server name was parsed as sas.root.HubName, SharedAccessSignature sr=HubName.azure-devices.net%2fmessages&sig=PERCENT_ENCODED_SIG&se=1493454429&st=1490861529&skn=iothubowner@HubName.azure-devices.net:5671
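A short sketch of building the encoded URL in Python 2.7 with urllib.quote (the hub name, policy name, and token below are the same placeholders used in the question):

import urllib

# Encode username and password so '@', spaces, '=', '&' and '%' survive URL parsing.
username = urllib.quote("iothubowner@sas.root.HubName", safe="")
password = urllib.quote(
    "SharedAccessSignature sr=HubName.azure-devices.net%2fmessages"
    "&sig=PERCENT_ENCODED_SIG&se=1493454429&st=1490861529&skn=iothubowner",
    safe="")
base = "amqps://%s:%s@HubName.azure-devices.net:5671" % (username, password)
entityName = "/messages/servicebound/feedback"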
