CI/CD Authentication using SPN - Azure

I am creating a CI/CD pipeline to move code between dev and test instances of Databricks. I am able to achieve this using my personal token. Now I am trying to do the same thing using an SPN, and when I do, I get the following error.
HTTP ERROR 403
Problem accessing /api/2.0/workspace/mkdirs. Reason:
User not authorized.
Can any of you help me resolve this error or provide any links that talk about how to use an SPN to authenticate from DevOps to Databricks?

Does your SPN have the Contributor role on either the Databricks resource or the Azure resource group? If not, it could throw a very similar error.
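If it doesn't, a rough sketch of the pieces the pipeline needs is below. The resource group, workspace URL and secret values are placeholders; the GUID is the well-known application ID of the Azure Databricks resource used when requesting AAD tokens:
# One-off, run by someone allowed to assign roles:
az role assignment create --assignee "<spn-app-id>" --role Contributor --scope "/subscriptions/<sub-id>/resourceGroups/<databricks-rg>"
# In the pipeline, sign in as the SPN and fetch an AAD token for the Databricks resource:
az login --service-principal -u "<spn-app-id>" -p "<spn-client-secret>" --tenant "<tenant-id>"
TOKEN=$(az account get-access-token --resource 2ff814a6-3304-4ab8-85cb-cd0e6f879c1d --query accessToken -o tsv)
# Call the workspace API with that token instead of a personal access token:
curl -X POST "https://<workspace-url>/api/2.0/workspace/mkdirs" -H "Authorization: Bearer $TOKEN" -d '{"path": "/cicd-test"}'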

Related

Use DefaultAzureCredential to authenticate Service Bus in a Docker container

I'm trying to use DefaultAzureCredential to authenticate my Azure Function against Azure Service Bus. In my Azure Function azure-func-service-bus, I call Azure Service Bus:
from azure.identity import DefaultAzureCredential
from azure.servicebus import ServiceBusClient

servicebus_client = ServiceBusClient(
    fully_qualified_namespace=MY_SERVICE_BUS_NAMESPACE_NAME + ".servicebus.windows.net",
    credential=DefaultAzureCredential(additionally_allowed_tenants=['*'])
)
I created and pushed the Docker container to ACR. When I run the container locally for testing outside of Azure, it does not know what credentials to use.
az acr login --name acr01
docker push acr01.azurecr.io/azure-func-service-bus:v1
docker pull acr01.azurecr.io/azure-func-service-bus:v1
docker run -it --rm -p 8080:80 acr01.azurecr.io/azure-func-service-bus:v1
but I got the following error:
DefaultAzureCredential failed to retrieve a token from the included credentials.
Attempted credentials:
EnvironmentCredential: EnvironmentCredential authentication unavailable. Environment variables are not fully configured.
Visit https://aka.ms/azsdk/python/identity/environmentcredential/troubleshoot to troubleshoot.this issue.
ManagedIdentityCredential: ManagedIdentityCredential authentication unavailable, no response from the IMDS endpoint.
SharedTokenCacheCredential: SharedTokenCacheCredential authentication unavailable. No accounts were found in the cache.
VisualStudioCodeCredential: Failed to get Azure user details from Visual Studio Code.
AzureCliCredential: Azure CLI not found on path
AzurePowerShellCredential: PowerShell is not installed
To mitigate this issue, please refer to the troubleshooting guidelines here at https://aka.ms/azsdk/python/identity/defaultazurecredential/troubleshoot.
Unexpected error occurred (ClientAuthenticationError('DefaultAzureCredential failed to retrieve a token from the included credentials. ...')). Handler shutting down.
I'm missing a key piece of the puzzle. How can I handle this?
When the Azure Function runs in Azure, it's configured to support ManagedIdentityCredential. For your case, I'd recommend configuring EnvironmentCredential to test locally.
You can find the details in the link, but the short version is:
Create a service principal (Docs) and give it the needed access
Run the container with extra Environment Variables:
AZURE_TENANT_ID: service principal's Tenant ID
AZURE_CLIENT_ID: service principal's AppId
AZURE_CLIENT_SECRET: service principal's password
I'd recommend using a .env file to make this easier, but be sure it doesn't get checked in anywhere.
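For example, a minimal sketch (all values are placeholders) of such a .env file and the matching docker run call might look like this:
# .env - keep this file out of source control
AZURE_TENANT_ID=<tenant-id>
AZURE_CLIENT_ID=<service-principal-app-id>
AZURE_CLIENT_SECRET=<service-principal-secret>

docker run -it --rm -p 8080:80 --env-file .env acr01.azurecr.io/azure-func-service-bus:v1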
FYI: if your account doesn't use MFA, you can instead use the variables AZURE_USERNAME and AZURE_PASSWORD. But then you've put your username and password in a file or your terminal history, which is concerning. Admittedly the service principal has the same problem, but you can more easily mitigate that by minimizing its access and regularly rolling the secret.
P.S. If you're using Visual Studio to build your Azure Function, you should be able to use something like EnvironmentCredentialExample to automate setting up and using the needed .env file.

Azure Terraform storage account permission

I want to learn more about Azure OpenVPN configurations and how they work. Looking around, I found an open-source project on GitHub at the following link:
https://github.com/terraform-azurerm-examples/example-hub.git (Thank you for your code)
I set all the variables I wanted and removed the version from the azurerm provider,
but when I run terraform apply, I get an error on the Azure storage account.
The error is this one:
Error: reading queue properties for AzureRM Storage Account "examplehubw6sr1wyncn": queues.Client#GetServiceProperties: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error. Status=403 Code="AuthorizationPermissionMismatch" Message="This request is not authorized to perform this operation using this permission.\nRequestId:cce5a313-b003-005c-2bb2-9d8a2f000000\nTime:2021-08-30T15:19:07.9036073Z"
As far as I understand, the error is due to secret permissions, which I updated to grant Get, List and Set, but the error keeps showing up.
I am using Terraform version 0.14.5 and azurerm version 2.74.0.
I have never had this type of error; on my subscription I have the administrator role.
Did anyone get this error and know how to solve it? I would really appreciate your help.
The error is probably because your user does not have data-plane permissions on your storage account, which is where Terraform wants to put the state file. Give your user the Storage Blob Data Contributor role: https://learn.microsoft.com/en-us/azure/storage/blobs/assign-azure-role-data-access?tabs=portal
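If you prefer the CLI over the portal, a minimal sketch of that assignment (the assignee and scope are placeholders) would be:
az role assignment create --assignee "<your-user-or-service-principal-object-id>" --role "Storage Blob Data Contributor" --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/examplehubw6sr1wyncn"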

Set up deployment to App Service using a personal access token

I've been given a personal access token (full access) which allows me to connect to a private Azure Git repo within an Azure DevOps account from another subscription. Connecting to that repo locally using Git is working fine.
I would like to set this up as a CI/CD deployment source for my App Service but have been unable to find out how to do this. I tried the Azure CLI:
az webapp deployment source config ... --repo-url https://anything:{pat}@dev.azure.com/Company/Project/_git/Reponame
This fails with a 500 error.
So I tried calling the REST API directly, but that also fails with a 500 error, so it's not an Azure CLI issue.
Hoping someone can point me in the right direction,
Thanks for the help, much appreciated

Generate Azure Databricks token using a PowerShell script

I need to generate an Azure Databricks token using a PowerShell script.
I am done with the creation of Azure Databricks using an ARM template; now I am looking to generate a Databricks token using a PowerShell script.
Kindly let me know how to create a Databricks token using a PowerShell script.
The only way to generate a new token is via the API, which requires you to have a token in the first place.
Or use the web UI manually.
There are no official PowerShell commands for Databricks; there are some unofficial ones, but they still require you to generate a token manually first.
https://github.com/DataThirstLtd/azure.databricks.cicd.tools
Disclaimer: I'm the author of these.
UPDATE: these PowerShell commands can now authenticate using a service principal instead of a bearer token (or can generate a bearer token for you).
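For reference, the token-creation call that tools like these wrap is roughly the following; the workspace URL and token values are placeholders, and you still need an existing bearer token (personal access token or AAD token) to make it:
curl -X POST "https://<workspace-url>/api/2.0/token/create" -H "Authorization: Bearer <existing-token>" -d '{"lifetime_seconds": 86400, "comment": "ci-cd token"}'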
So right now there is no way to use the API directly after deploying an Azure Databricks workspace. I assume that you want to use it as part of a CI/CD pipeline, right? The reason is that you first need to manually create an API token, which you can then use for all subsequent API requests.
But I will investigate and keep you updated here!
Another option is to create it via Terraform:
https://registry.terraform.io/providers/databrickslabs/databricks/latest/docs/resources/token
Mind you, it creates the token as whoever you ran az login as. So if you az login as yourself (when it spawns a browser asking who to log in as), that's who the token will be created as, assuming that user has permissions in the Databricks workspace and Contributor permissions (or a custom read role; the Reader role doesn't grant the right permissions) on the resource group that houses the workspace.
You can always use az login -u username@email.com -p to log in as someone else (assuming that user doesn't have MFA), then run terraform init/plan/apply. Mind you, if you have backend storage, that user also has to have permissions on that backend storage so Terraform can create/update any tfstate files stored there.
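Put together, the flow looks roughly like this (user, password and working directory are placeholders):
az login -u <username@email.com> -p <password>
cd <terraform-config-containing-the-databricks_token-resource>
terraform init
terraform plan
terraform apply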

"insufficient authentication scopes" from Google API when calling from K8S cluster

I'm trying to report Node.js errors to Google Error Reporting from one of our Kubernetes deployments running on a GCP/GKE cluster with RBAC (i.e. permissions defined in a service account associated with the cluster).
const googleCloud = require('@google-cloud/error-reporting');
const googleCloudErrorReporting = new googleCloud.ErrorReporting();
googleCloudErrorReporting.report('[test] dummy error message');
This works only in certain environments:
it works when run on my laptop, using a service account that has the "Errors Writer" role
it works when running in my cluster as a K8S job, after having added the "Errors Writer" role to that cluster's service account
it causes the following error when called from my Node.js application running in one of my K8S deployments:
ERROR:@google-cloud/error-reporting: Encountered an error while attempting to transmit an error to the Stackdriver Error Reporting API.
Error: Request had insufficient authentication scopes.
It feels like the job did pick up the permission changes of the cluster's service account, whereas my deployment did not.
I did try to re-create the deployment to make it refresh its auth token, but the error is still happening...
Any ideas?
UPDATE: I ended up following Jérémie Girault's suggestion: create a service account and bind it to my deployment. It works!
The error message has to do with the access scopes set on the cluster when using the default service account. You must enable access to the appropriate API.
As you mentioned, creating a separate service account, granting it the appropriate IAM permissions, and linking it to your cluster or workload will bypass this error as well.
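For anyone hitting the same thing, a minimal sketch of that setup with Workload Identity (project, namespace and account names are all placeholders, and it assumes Workload Identity is enabled on the cluster) could look like:
# Create a dedicated Google service account and give it the Error Reporting writer role
gcloud iam service-accounts create error-reporter
gcloud projects add-iam-policy-binding <my-project> --member "serviceAccount:error-reporter@<my-project>.iam.gserviceaccount.com" --role "roles/errorreporting.writer"
# Let the Kubernetes service account used by the deployment impersonate it
gcloud iam service-accounts add-iam-policy-binding error-reporter@<my-project>.iam.gserviceaccount.com --role "roles/iam.workloadIdentityUser" --member "serviceAccount:<my-project>.svc.id.goog[<namespace>/<ksa-name>]"
kubectl annotate serviceaccount <ksa-name> --namespace <namespace> iam.gke.io/gcp-service-account=error-reporter@<my-project>.iam.gserviceaccount.com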
