I want to create a database and a collection in MongoDB with an Azure pipeline.
I created pipeline commands like this, but the use TESTDB command runs without waiting for the mongosh command to finish.
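A rough sketch of the kind of single mongosh invocation I think I need, so that the database switch and collection creation happen inside mongosh instead of as separate shell commands (the connection string and names below are placeholders, not my real pipeline values):

# Run everything in one mongosh call; "use TESTDB" is a mongosh command, not a shell command.
mongosh "mongodb://<host>:27017" --eval '
  const testDb = db.getSiblingDB("TESTDB");   // switch to the target database
  testDb.createCollection("testCollection");  // create the collection
'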
I use Azure DevOps pipelines. You can run a script at the database level with Bicep; that is clearly listed in the documentation. But I want to run a script at the cluster level to update the workload_group policy and increase the allowed number of concurrent queries. When I run the query as part of the Bicep deployment (via the database script property) to alter this, it results in the following error:
Reason: Not a database-scope command
How can I run this query (which should indeed be run at the cluster level) as part of the Bicep deployment? I use the following query, which does work when I run it in the query window in the Azure portal.
.create-or-alter workload_group ['default'] ```
<<workgroupConfig>>
```.
I also know there are Azure DevOps tasks for running scripts against the database, but I would not like to use those, since Data Explorer is in a private network and not accessible publicly.
I am struggling a bit to find the best approach to achieve this.
I am trying to achieve a fully automated process to spin up a Microsoft SQL Server and a SQL Database using Terraform and ultimately, when I have all the infra in place, to release a *.dacpac against the SQL Database to create tables and columns and finally seed some data into the database.
So far I am using an Azure pipeline to achieve this, and this is my workflow:
terraform init
terraform validate
terraform apply
database script (to create the tables and columns)
The above steps work just fine and everything falls into place perfectly. Now I would like to implement another step to seed the database from a CSV or Excel file.
I did some research on Google and read the Microsoft documentation; apparently there are different ways of doing this, but all those approaches, from BULK INSERT to bcp and sqlcmd, are documented against a local server and not a cloud server.
Can anyone please advise me on how to achieve this task using an Azure pipeline and a cloud SQL Database? Thank you so much for any hint.
UPDATE:
If I create a cmd script task in the release pipeline, I get the following error:
2021-11-12T17:38:39.1887476Z Generating script.
2021-11-12T17:38:39.2119542Z Script contents:
2021-11-12T17:38:39.2146058Z bcp Company in "./Company.csv" -c -t -S XXXXX -d XXXX -U usertest -P ***
2021-11-12T17:38:39.2885488Z ========================== Starting Command Output ===========================
2021-11-12T17:38:39.3505412Z ##[command]"C:\Windows\system32\cmd.exe" /D /E:ON /V:OFF /S /C "CALL "D:\a\_temp\33c62204-f40c-4662-afb8-862bbd4c42b5.cmd""
2021-11-12T17:38:39.7265213Z SQLState = S1000, NativeError = 0
2021-11-12T17:38:39.7266751Z Error = [Microsoft][ODBC Driver 17 for SQL Server]Unable to open BCP host data-file
It should actually be quite simple:
The following example imports data using an Azure AD username and password, where the user and password are AAD credentials. It imports data from the file c:\last\data1.dat into the table bcptest in the database testdb on the Azure server aadserver.database.windows.net:
bcp bcptest in "c:\last\data1.dat" -c -t -S aadserver.database.windows.net -d testdb -G -U alice@aadtest.onmicrosoft.com -P xxxxx
If you do not plan to use an AAD account, please remove the -G flag.
Ideally, you would put your admin account name and password into Key Vault directly from Terraform. Then you can fetch those values via the Key Vault task in the pipeline and use them to run the above-mentioned script.
It looks like bcp is already installed on the Windows agents.
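As a rough sketch (the server, database, CSV path and pipeline variable names below are placeholders, not taken from your setup), a seeding step in the pipeline could look like this; note that bcp needs a path it can actually resolve on the agent, otherwise it fails with "Unable to open BCP host data-file":

# Sketch of a pipeline script step. $(System.DefaultWorkingDirectory), $(sqlUser) and
# $(sqlPassword) are Azure DevOps pipeline variables (e.g. fetched from Key Vault) and
# are expanded by the pipeline before the script runs.
bcp dbo.Company in "$(System.DefaultWorkingDirectory)/data/Company.csv" \
  -c -t"," \
  -S yourserver.database.windows.net -d yourdb \
  -U "$(sqlUser)" -P "$(sqlPassword)"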
I tried to get the run IDs using databricks runs list on the CLI, but I didn't get the run IDs of all the jobs that run every day; I got only the top 20 run IDs. I then got the job IDs of all jobs using databricks jobs list --output json. Now I want to get the run IDs of all jobs using the job IDs. Please help me with this; I'm new to Databricks.
Unfortunately, the Databricks CLI doesn't provide that run ID information directly.
Note: Only jobs started by the Databricks executor display using the job ID specified in the stage. The job ID is the same for all instances of the job.
You can find the run ID for a particular instance in the Data Collector log.
The Databricks executor also writes the run ID of the job to the event record. To keep a record of all run IDs, enable event generation for the stage.
There are different methods to get the RunId for any given job:
Azure Databricks portal (user interface): by clicking on the Jobs tab, you can view all the jobs you have created.
Select any job to see the detailed run ID for each run.
Azure portal (user interface) using Kusto Query Language: if you have configured diagnostic log delivery, you can use KQL queries to get the JobId and RunId.
Databricks REST API: you can use the REST API command below to get the list of jobs and runs.
curl "https://centralus.azuredatabricks.net/api/2.0/jobs/runs/list" -X GET -H "Authorization: Bearer dapiXXXXXXXXXXXXXXXXXXXXXXXXXXXXX4a"
I have created a pipeline using Azure DevOps for an Azure PostgreSQL database.
What does the pipeline actually do?
Connect to PostgreSQL;
Remove the database db_test from PostgreSQL using the Azure CLI:
az postgres db delete -g my_group -s database_here -n db_test --yes
However, I cannot do this due to the following error:
An unexpected error occured while processing the request.
Then I tried to remove the database using psql, but with no luck due to existing connections to the database.
From my point of view, the Azure CLI should handle such issues and either remove the database or return a correct error message. For example, it would be great if a --force parameter were implemented.
I removed all connections to the database using the following syntax in a bash script:
psql "host=database_here port=5432 dbname=postgres user=postgres#database_here password=ReallyStrongPassword sslmode=require" -c "REVOKE CONNECT ON DATABASE db_test FROM PUBLIC; SELECT pg_terminate_backend(pg_stat_activity.pid) FROM pg_stat_activity WHERE pg_stat_activity.datname = 'db_test';"
and additionally added a DROP DATABASE action to my pipeline:
psql "host=database_here port=5432 dbname=postgres user=postgres#database_here password=ReallyStrongPassword sslmode=require" -c "DROP database db_test;"
But I didn't remove the az CLI database removal step from the pipeline, and it failed with the following output:
Operation failed with status: 200. Details: Resource state Failed
I think at this step the az CLI should return something like "Database does not exist." as an informative message.
How do I properly handle such situations on the Azure side?
Does az postgres db show display the database details? Please check whether this is happening due to that behaviour, albeit the discussion in the quoted thread is in the context of a SQL database.
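For example, a guard step in the pipeline could look roughly like this (a sketch only, reusing the resource names from your command), so the delete is skipped when the database is not found:

# Only attempt the delete when the database actually exists.
if az postgres db show -g my_group -s database_here -n db_test > /dev/null 2>&1; then
  az postgres db delete -g my_group -s database_here -n db_test --yes
else
  echo "Database db_test does not exist - skipping delete."
fi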
If the issue still persists, please feel free to drop your feedback here or open an issue in our GitHub repo for our internal team to take a look at.
Thanks for the feedback!
I am using Node.js, GraphQL, Prisma, Docker and PostgreSQL.
Whenever I change the schema I have to deploy it, but it gives the following error:
ERROR: You can not deploy to a service stage while there is a deployment in progress or a pending deployment scheduled already. Please try again after the deployment finished
"code": 4008,
"status": 200
Then I wait for a few minutes and try again, but the result is the same; I have tried many times with the same result.
This happens when there is an unapplied migration in the management schema.
To resolve this, follow these steps:
Connect to your database using a GUI (like tableplus.io)
Change your database schema to the management schema
Go to the migration table
Delete the last row
Then try to redeploy your service.
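If you prefer not to use a GUI, the same steps can be done with psql. The sketch below is only an illustration: the connection details are placeholders, and the management."Migration" table and "revision" column names are assumptions based on Prisma 1's management schema, so inspect the schema yourself before deleting anything.

# Find the most recent migration rows in the management schema (verify table/column names first).
psql "host=your_host port=5432 dbname=your_management_db user=your_user password=xxx sslmode=require" \
  -c 'SELECT * FROM management."Migration" ORDER BY "revision" DESC LIMIT 5;'
# Delete only the stuck (last) row once you have identified its revision:
psql "host=your_host port=5432 dbname=your_management_db user=your_user password=xxx sslmode=require" \
  -c 'DELETE FROM management."Migration" WHERE "revision" = <stuck_revision>;'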