Cannot create Repo with Databricks CLI

I am using Azure DevOps and Databricks. I created a simplified CI/CD Pipeline which triggers the following Python script:
import json
import time
from datetime import datetime

from databricks_cli.configure.config import _get_api_client
from databricks_cli.configure.provider import EnvironmentVariableConfigProvider
from databricks_cli.sdk import JobsService, ReposService

existing_cluster_id = 'XXX'
notebook_path = './'
repo_path = '/Repos/abc@def.at/DevOpsProject'
git_url = 'https://dev.azure.com/XXX/DDD/'

# Host and token are read from the DATABRICKS_HOST and DATABRICKS_TOKEN
# environment variables set as pipeline variables
config = EnvironmentVariableConfigProvider().get_config()
api_client = _get_api_client(config, command_name="cicdtemplates-")

repos_service = ReposService(api_client)
repo = repos_service.create_repo(url=git_url, provider="azureDevOpsServices", path=repo_path + "_new")
When I run the pipeline I always get an error (from the last line):
2022-12-07T23:09:23.5318746Z raise requests.exceptions.HTTPError(message, response=e.response)
2022-12-07T23:09:23.5320017Z requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://adb-XXX.azuredatabricks.net/api/2.0/repos
2022-12-07T23:09:23.5321095Z Response from server:
2022-12-07T23:09:23.5321811Z { 'error_code': 'BAD_REQUEST',
2022-12-07T23:09:23.5322485Z 'message': 'Remote repo not found. Please ensure that:\n'
2022-12-07T23:09:23.5323156Z '1. Your remote Git repo URL is valid.\n'
2022-12-07T23:09:23.5323853Z '2. Your personal access token or app password has the correct '
2022-12-07T23:09:23.5324513Z 'repo access.'}
In Databricks, I connected my repo with Azure DevOps: in Azure DevOps I created a full-access token, added it to Databricks' Git integration, and I am able to pull and push in Databricks.
For my CI/CD pipeline, I created variables containing my Databricks host address and my token. When I change the token, I get a different error message (HTTP 403), so the token itself seems to be fine.
Here is a screenshot of my variables (image not included).
I really have no clue what I am doing wrong. I tried to run a simplified version of the official Databricks code here.

I tried to reproduce the error with the Databricks CLI. It turned out that the _git segment was simply missing from the Git repo URL.
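For illustration, the fixed URL looks like this (the trailing repository name is a hypothetical placeholder, since the original was redacted):

# Azure DevOps repo URLs contain a _git segment before the repository name
git_url = 'https://dev.azure.com/XXX/DDD/_git/<repo-name>'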

Related

Error accessing remote module registry in Terraform

We have been given a remote Terraform Private registry to utilise. Along with that came a credentials name and token.
Once we configured the general details in the Terraform script, we created a .terraformrc file in the same directory (on a Mac) with the following:
credentials "my.remote.registry" {
  token = "tokenvaluegoeshere"
}
When we run terraform init we get the following error (for all modules):
Error: Error Accessing remote module registry
failed to retrieve available versions for module "x" from external.location - failed to request discovery document: 401 Unauthorised
It feels like I haven't set something up correctly in Terraform (even though it looks fine).
I have tried running Terraform from different locations on my Mac and also created new .terraformrc files, but it still doesn't work.
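One thing worth checking (an assumption on my part, not something stated in the question): Terraform does not pick up a .terraformrc from the working directory; on a Mac it reads ~/.terraformrc, or the file named in the TF_CLI_CONFIG_FILE environment variable. A minimal sketch:

# Point Terraform at the .terraformrc that lives in the project directory
export TF_CLI_CONFIG_FILE="$PWD/.terraformrc"
terraform init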

Can't import a new Agent in Dialogflow ES

I'm trying to import a Dialogflow ES agent as part of a deployment script. I'm using the gcloud alpha dialogflow agent import command described here:
gcloud alpha dialogflow agent import --source="path/to/archive.zip" --replace-all
If the agent is already in place, the command succeeds in updating/replacing it with the definition from the zip file. If the agent has not already been created, I get this error:
ERROR: (gcloud.alpha.dialogflow.agent.import) Projects instance [*redacted_project_name*] not found: com.google.apps.framework.request.NotFoundException: No DesignTimeAgent found for project '*redacted_project_name*'.
Is there a command I'm missing in order to be able to use the agent import command?
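One possible approach (an assumption, not confirmed in this thread): as far as I know gcloud has no command to create the design-time agent, but it can be created via the Dialogflow v2 REST API's setAgent method, after which the import should find it. A sketch, with PROJECT_ID and the agent fields as placeholders:

# Create the design-time agent first, then re-run the import
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{"displayName": "my-agent", "defaultLanguageCode": "en", "timeZone": "Europe/Paris"}' \
  "https://dialogflow.googleapis.com/v2/projects/PROJECT_ID/agent"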

I am trying to connect to Databricks through the CLI and want to replicate the same in Azure DevOps

On my local system I run these commands:
pip install databricks-cli
databricks configure --token
and then enter the workspace URL and the token when prompted.
Now the thing is, in Azure DevOps I am using a CLI task, and there I have to provide everything in code: locally the configure command prompts me for the token and workspace while it runs, but in Azure DevOps it has to be given in code only.
So is there any way to do this? I have written this code, but it's failing:
Here is a screenshot from Azure DevOps (image not included).
Instead of configure, we tend to write the configuration straight to ~/.databrickscfg:
echo "
[DEFAULT]
host = ...
token = ...
" > ~/.databrickscfg

How to add a remote Azure repo for Terraform modules so the Terraform code works on Azure Pipelines

The source definition given below works for Terraform modules, BUT it contains a PAT token. It works fine on a local VM as well as on Azure Pipelines. This question is about how to write the source definition for Terraform modules without hard-coding a PAT token.
Working copy of code:
source = "git::https://<PAT TOKEN>#<AZURE DEVOPS URL>/DefaultCollection/<Project Name>y/_git/terraform-modules//<sub directory>"
I tried the below:
git::https://<AZURE DEVOPS URL>/DefaultCollection/<Project Name>/_git/terraform-modules.git//<sub directory>
That gave me an error like below:
"git::https://<AZURE DEVOPS URL>/DefaultCollection/<Project Name>/_git/terraform-modules":
error downloading
'https://<AZURE DEVOPS URL>/DefaultCollection/<Project Name>/_git/terraform-modules':
/usr/bin/git exited with 128: Cloning into
'.terraform/modules/resource_group'...
fatal: could not read Username for 'https://<AZURE DEVOPS URL>':
terminal prompts disabled
I added my username without the domain part, like below:
source = "git::https://<USERNAME>@<AZURE DEVOPS URL>/DefaultCollection/<PROJECT NAME>/_git/terraform-modules.git//compute"
Error below:
"git::https://<USERNAME>#<AZURE DEVOPS>/DefaultCollection/<PROJECT>/_git/terraform-modules.git":
error downloading
'https://<USERNAME>#<AZURE DEVOPS>/DefaultCollection/<PROJECT>/_git/terraform-modules.git':
/usr/bin/git exited with 128: Cloning into '.terraform/modules/sql_vms'...
fatal: could not read Password for
'https://<USERNAME>#<AZURE DEVOPS>': terminal prompts disabled
When the build pipeline can do a checkout even without specifying a username and password, why do we have to provide them in the Terraform code?
The Azure Pipelines agent has Git credentials. I'm not sure if this is going to work at all without a PAT token.
Have a look at this - Is it possible to authenticate to a remote Git repository using the default windows credentials non interactively?
So, in our case we discovered that just running git config --global http.emptyAuth true before Terraform resolves the problem. The ':@' business (an empty username/password pair in the URL) is not needed unless your Terraform module repository is an LFS repo; that was not our case, so we did not need it.
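A minimal sketch of the resulting sequence on the build agent:

# Run once before Terraform clones the module sources
git config --global http.emptyAuth true
terraform init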

Providing Terraform with credentials in terraform files instead of env variable

I have set up a Terraform project with a remote back-end on GCP. Now when I want to deploy the infrastructure, I run into issues with credentials. I have a credentials file at
/home/mike/.config/gcloud/credentials.json
In my Terraform project I have the following data block referring to the remote state:
data "terraform_remote_state" "project_id" {
backend = "gcs"
workspace = "${terraform.workspace}"
config {
bucket = "${var.bucket_name}"
prefix = "${var.prefix_project}"
}
}
and I specify the cloud provider with the details of my credentials file:
provider "google" {
version = "~> 1.16"
project = "${data.terraform_remote_state.project_id.project_id}"
region = "${var.region}"
credentials = "${file(var.credentials)}"
}
However, this runs into
data.terraform_remote_state.project_id: data.terraform_remote_state.project_id:
error initializing backend:
storage.NewClient() failed: dialing: google: could not find default
credentials.
If I add
export GOOGLE_APPLICATION_CREDENTIALS=/home/mike/.config/gcloud/credentials.json
I do get it to run as desired. My issue is that I would like to specify the credentials in the Terraform files, because I am running the Terraform commands in an automated way from a Python script where I cannot set the environment variables. How can I let Terraform know where the credentials are without setting the env variable?
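As an aside (my sketch, not from the answers below): when Terraform is driven from a Python script, the variable can be set for just that subprocess, without touching the global environment:

import os
import subprocess

# Copy the current environment and add the credentials path for this run only
env = os.environ.copy()
env["GOOGLE_APPLICATION_CREDENTIALS"] = "/home/mike/.config/gcloud/credentials.json"

# Terraform inherits the per-process environment
subprocess.run(["terraform", "init"], env=env, check=True)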
I was facing the same error when trying to run terraform (version 1.1.5) commands in spite of having successfully authenticated via gcloud auth login.
Error message in my case:
Error: storage.NewClient() failed: dialing: google: could not find default credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
It turned out that I had to also authenticate via gcloud auth application-default login and was able to run terraform commands thereafter.
I figured this out in the end: the data block also needs the credentials. E.g.:
data "terraform_remote_state" "project_id" {
backend = "gcs"
workspace = "${terraform.workspace}"
config = {
bucket = "${var.bucket_name}"
prefix = "${var.prefix_project}"
credentials = "${var.credentials}" <- added
}
}
