Unable to perform Azure ARM deployment because of 'ClientSecretCredential' issues

I want to perform an Azure ARM template deployment using Python. I am using 'ClientSecretCredential' from azure-identity to construct my Azure credentials because, according to this post, the way we construct credentials has been enhanced from azure.common (ServicePrincipalCredentials) to azure-identity (ClientSecretCredential). While performing the deployment, I am getting the error:
Message='ClientSecretCredential' object has no attribute 'signed_session'
Source=C:\Users\manjug\Desktop\SQLServer_armtemplate\testFunction.py
StackTrace:
File "C:\Users\manjug\Desktop\SQLServer_armtemplate\testFunction.py", line 109, in executeArmDeployment
Deployment(properties=deployment_properties)
File "C:\Users\manjug\Desktop\SQLServer_armtemplate\testFunction.py", line 119, in <module> (Current frame)
executeArmDeployment('coe-extollo-apis-dev', resourceManagerClient)

I was able to solve this by:
- Updating the 'azure-mgmt-resource' module to 16.0.0
- Changing the 'deployments.create_or_update' function to 'deployments.begin_create_or_update'
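For reference, here is a minimal sketch of the updated call pattern with azure-identity and azure-mgmt-resource 16.0.0. The tenant/client IDs, subscription ID, template path, parameter name, and deployment name are placeholders (not from the original post); only the resource group name is taken from the traceback.
import json

from azure.identity import ClientSecretCredential
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.resource.resources.models import Deployment, DeploymentProperties

# Build credentials with azure-identity (placeholder IDs).
credential = ClientSecretCredential(
    tenant_id="<tenant-id>",
    client_id="<client-id>",
    client_secret="<client-secret>",
)
client = ResourceManagementClient(credential, "<subscription-id>")

# Load the ARM template; parameters are wrapped as {"name": {"value": ...}}.
with open("template.json") as f:
    template_body = json.load(f)

deployment_properties = DeploymentProperties(
    mode="Incremental",
    template=template_body,
    parameters={"administratorLogin": {"value": "sqladmin"}},  # hypothetical parameter
)

# azure-mgmt-resource >= 15.0 exposes long-running operations as begin_*.
poller = client.deployments.begin_create_or_update(
    "coe-extollo-apis-dev",     # resource group from the post
    "sql-server-deployment",    # hypothetical deployment name
    Deployment(properties=deployment_properties),
)
poller.result()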

Related

How to connect to GCP BigTable using Python

I am connecting to my GCP BigTable instance using Python (the google-cloud-bigtable library) after setting up "GOOGLE_APPLICATION_CREDENTIALS" in my environment variables. I am successful at doing this.
However, my requirement is to pass the credentials at run time while creating the BigTable Client object, as shown below:
client = bigtable.Client(credentials='82309281204023049', project='xjkejfkx')
I have followed the GCP BigTable Client Documentation to connect to GCP BigTable, but I am getting this error:
Traceback (most recent call last):
File "D:/testingonlyinformix/bigtable.py", line 14, in <module>
client = bigtable.Client(credentials="9876543467898765", project="xjksjkdn", admin=True)
File "D:\testingonlyinformix\new_venv\lib\site-packages\google\cloud\bigtable\client.py", line 196, in __init__
project=project, credentials=credentials, client_options=client_options,
File "D:\testingonlyinformix\new_venv\lib\site-packages\google\cloud\client\__init__.py", line 320, in __init__
self, credentials=credentials, client_options=client_options, _http=_http
File "D:\testingonlyinformix\new_venv\lib\site-packages\google\cloud\client\__init__.py", line 167, in __init__
raise ValueError(_GOOGLE_AUTH_CREDENTIALS_HELP)
ValueError: This library only supports credentials from google-auth-library-python. See https://google-auth.readthedocs.io/en/latest/ for help on authentication with this library.
Can someone please suggest what fields/attributes the Client object expects at run time when making a connection to GCP BigTable?
Thanks
After 2 hours of searching I finally landed on these pages; please check them out in order:
BigTable Authentication
Using end-user authentication
OAuth Scopes for BigTable
from google_auth_oauthlib import flow

# Run the installed-app OAuth flow to obtain end-user credentials
# with the Bigtable admin scope.
appflow = flow.InstalledAppFlow.from_client_secrets_file(
    "client_secrets.json",
    scopes=["https://www.googleapis.com/auth/bigtable.admin"])
appflow.run_console()
credentials = appflow.credentials
The credentials in the previous step will need to be provided to the BigTable client object:
from google.cloud import bigtable

client = bigtable.Client(credentials=credentials, project='xjkejfkx')
This solution worked for me; if anyone has any other suggestions, please do pitch in.
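For anyone who has a service-account key instead of an end-user account, a hedged alternative (not from the original answer; the key file path is a placeholder, and only the project ID is reused from the question) is to build the credentials object with google-auth and pass it the same way:
from google.cloud import bigtable
from google.oauth2 import service_account

# Build a google-auth credentials object from a service-account key file.
credentials = service_account.Credentials.from_service_account_file(
    "service-account-key.json",  # hypothetical path to the key file
    scopes=["https://www.googleapis.com/auth/bigtable.admin"],
)
client = bigtable.Client(credentials=credentials, project="xjkejfkx", admin=True)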

GitlabParsingError when accessing a project with GitLab API with Python

I am trying to access a project on a private GitLab instance using the python-gitlab module.
I have created an access token with all permissions via the GitLab web UI and copied this token into my Python code:
import gitlab
gl = gitlab.Gitlab("https://MyGit.com/.../MyProject", "k1WMD-fE5nY5V-RWFb-G")
print(gl.projects.list())
Python throws the following error during execution:
Exception: GitlabParsingError (note: full exception trace is shown but execution is paused at: <module>)
Failed to parse the server message
During handling of the above exception, another exception occurred:
The above exception was the direct cause of the following exception:
File ".../app.py", line 226, in <module> (Current frame)
print(gl.projects.list())
The first argument to gitlab.Gitlab() is the base URL of the instance, not the full path to your project, e.g. https://gitlab.example.com. You should also pass the token with the private_token keyword argument.
So, unless your instance lives at a relative path, you should have:
gl = gitlab.Gitlab('https://MyGit.com', private_token='your API key')
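Putting it together, a minimal sketch (the project path namespace/MyProject is a placeholder, not taken from the original question):
import gitlab

# Connect to the instance base URL with a personal/project access token.
gl = gitlab.Gitlab("https://MyGit.com", private_token="<your-access-token>")

# List projects visible to the token, or fetch one by its full path.
print(gl.projects.list())
project = gl.projects.get("namespace/MyProject")  # hypothetical project path
print(project.name)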

How to provide serializer, deserializer, and config arguments when instantiating Databricks in Azure Python SDK? [duplicate]

[Previously in this post I asked how to provision a Databricks service without any workspace. Now I'm asking how to provision a service with a workspace, as the first scenario seems infeasible.]
As a cloud admin I'm asked to write a script using the Azure Python SDK which will provision a Databricks service for one of our big data dev teams.
I can't find much online about Databricks within the Azure Python SDK other than https://azuresdkdocs.blob.core.windows.net/$web/python/azure-mgmt-databricks/0.1.0/azure.mgmt.databricks.operations.html
and
https://azuresdkdocs.blob.core.windows.net/$web/python/azure-mgmt-databricks/0.1.0/azure.mgmt.databricks.html
These appear to offer some help provisioning a workspace, but I am not quite there yet.
What am I missing?
EDITS:
Thanks to @Laurent Mazuel and @Jim Xu for their help.
Here's the code I'm running now, and the error I'm receiving:
client = DatabricksClient(credentials, subscription_id)
workspace_obj = client.workspaces.get("example_rg_name", "example_databricks_workspace_name")
WorkspacesOperations.create_or_update(
    workspace_obj,
    "example_rg_name",
    "example_databricks_workspace_name",
    custom_headers=None,
    raw=False,
    polling=True
)
error:
TypeError: create_or_update() missing 1 required positional argument: 'workspace_name'
I'm a bit puzzled by that error as I've provided the workspace name as the third parameter, and according to this documentation, that's just what this method requires.
I also tried the following code:
client = DatabricksClient(credentials, subscription_id)
workspace_obj = client.workspaces.get("example_rg_name", "example_databricks_workspace_name")
client.workspaces.create_or_update(
    workspace_obj,
    "example_rg_name",
    "example_databricks_workspace_name"
)
Which results in:
Traceback (most recent call last):
File "./build_azure_visibility_core.py", line 112, in <module>
ca_databricks.create_or_update_databricks(SUB_PREFIX)
File "/home/gitlab-runner/builds/XrbbggWj/0/SA-Cloud/azure-visibility-core/expd_az_databricks.py", line 34, in create_or_update_databricks
self.databricks_workspace_name
File "/home/gitlab-runner/builds/XrbbggWj/0/SA-Cloud/azure-visibility-core/azure-visibility-core/lib64/python3.6/site-packages/azure/mgmt/databricks/operations/workspaces_operations.py", line 264, in create_or_update
**operation_config
File "/home/gitlab-runner/builds/XrbbggWj/0/SA-Cloud/azure-visibility-core/azure-visibility-core/lib64/python3.6/site-packages/azure/mgmt/databricks/operations/workspaces_operations.py", line 210, in _create_or_update_initial
body_content = self._serialize.body(parameters, 'Workspace')
File "/home/gitlab-runner/builds/XrbbggWj/0/SA-Cloud/azure-visibility-core/azure-visibility-core/lib64/python3.6/site-packages/msrest/serialization.py", line 589, in body
raise ValidationError("required", "body", True)
msrest.exceptions.ValidationError: Parameter 'body' can not be None.
ERROR: Job failed: exit status 1
So the error is raised at line 589 in serialization.py, but I don't see what in my code is causing it. Thanks to all who have been generous enough to assist!
You need to create a Databricks client, and the workspace operations will be attached to it:
client = DatabricksClient(credentials, subscription_id)
workspace = client.workspaces.get(resource_group_name, workspace_name)
I don't think creating the service without a workspace is even possible; if you try to create a Databricks service in the portal, you will see that a workspace name is required as well.
So, using the SDK, I would look at the docs for client.workspaces.create_or_update.
(I work at MS in the SDK team)
With help from @Laurent Mazuel and support engineers at Microsoft, I have a solution:
managed_resource_group_ID = ("/subscriptions/" + sub_id + "/resourceGroups/" + managed_rg_name)

client = DatabricksClient(credentials, subscription_id)
workspace_obj = client.workspaces.get(rg_name, databricks_workspace_name)
client.workspaces.create_or_update(
    {
        "managedResourceGroupId": managed_resource_group_ID,
        "sku": {"name": "premium"},
        "location": location
    },
    rg_name,
    databricks_workspace_name
).wait()
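For completeness, a hedged sketch of the same call with the imports and result handling spelled out (assuming azure-mgmt-databricks 0.1.0; the subscription, resource group, location, and workspace names below are placeholders, not values from the original post):
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.databricks import DatabricksClient

# Service principal credentials (placeholder IDs).
credentials = ServicePrincipalCredentials(
    client_id="<client-id>",
    secret="<client-secret>",
    tenant="<tenant-id>",
)
client = DatabricksClient(credentials, "<subscription-id>")

# create_or_update returns a poller for the long-running operation;
# result() blocks until provisioning finishes and returns the workspace.
poller = client.workspaces.create_or_update(
    {
        "managedResourceGroupId": "/subscriptions/<subscription-id>/resourceGroups/<managed-rg>",
        "sku": {"name": "premium"},
        "location": "westus2",
    },
    "<resource-group-name>",
    "<workspace-name>",
)
workspace = poller.result()
print(workspace.provisioning_state)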

How to solve a Cloud Foundry deployment error on Azure

I encountered an error trying to deploy Cloud Foundry to Azure. Below is the stack trace. Any ideas how to resolve it?
Deploying
---------
Director task 7
Deprecation: Ignoring cloud config. Manifest contains 'networks' section.
Started preparing deployment > Preparing deployment. Done (00:00:01)
Error 100: Unable to render instance groups for deployment. Errors are:
- Unable to render jobs for instance group 'cf_z1'. Errors are:
- Unable to render templates for job 'cloud_controller_ng'. Errors are:
- Error filling in template 'cloud_controller_api.yml.erb' (line 131: undefined method `empty?' for 123456:Fixnum)
It seems BOSH is expecting a string for a value in your manifest, and you have supplied a number. I'm not sure what version of Cloud Foundry you are deploying, but looking at cloud_controller_api.yml.erb at line 131, I think you should start by looking at the value for router.route_services_secret in your manifest.

Azure .Net SDK Error : FsOpenStream failed with error 0x83090aa2

We are trying to download a file present in Data Lake Store. I have been following the tutorial below, which uses the .NET Azure SDK.
https://azure.microsoft.com/en-us/documentation/articles/data-lake-analytics-get-started-net-sdk/
As we already have the file present in Azure Data Lake Store, I just added the code to download the file:
FileCreateOpenAndAppendResponse beginOpenResponse = _dataLakeStoreFileSystemClient.FileSystem.BeginOpen("/XXXX/XXXX/test.csv", DataLakeStoreAccountName, new FileOpenParameters());
FileOpenResponse openResponse = _dataLakeStoreFileSystemClient.FileSystem.Open(beginOpenResponse.Location);
But it fails with the below error:
{"RemoteException":{"exception":"RuntimeException","message":"FsOpenStream
failed with error 0x83090aa2 ().
[83271af3c3a14973ad7814e7d9d201f6]","javaClassName":"java.lang.RuntimeException"}}
While debugging, we inspected the beginOpenResponse.Location used in the second line of code. It seems to be the correct value, as shown below:
https://XXXXXXXX.azuredatalakestore.net/webhdfs/v1/XXXX/XXX/test.csv?op=OPEN&api-version=2015-10-01-preview&read=true
The error does not provide much information to track down the problem.
I agree that the store errors are currently not very informative. We are working on improving this.
According to my store developer, 0x83090aa2 means an access check failed. Can you please check whether you have access to the storage account and that the path is correct?
