I have created a pipeline in Azure DevOps for an Azure PostgreSQL database.
What does the pipeline actually do?
Connect to PostgreSQL;
Remove the database db_test from PostgreSQL using the Azure CLI:
az postgres db delete -g my_group -s database_here -n db_test --yes
However, I cannot do this due to the following error:
An unexpected error occured while processing the request.
Then I tried to remove the database using psql, but with no luck due to existing connections to the database.
From my point of view, the Azure CLI should handle such issues and either remove the database or return a clear error message. For example, it would be great if a --force parameter were implemented.
I removed all connections to the database using the following command in a bash script:
psql "host=database_here port=5432 dbname=postgres user=postgres@database_here password=ReallyStrongPassword sslmode=require" -c "REVOKE CONNECT ON DATABASE db_test FROM PUBLIC; SELECT pg_terminate_backend(pg_stat_activity.pid) FROM pg_stat_activity WHERE pg_stat_activity.datname = 'db_test';"
and additionally added a DROP DATABASE step to my pipeline:
psql "host=database_here port=5432 dbname=postgres user=postgres@database_here password=ReallyStrongPassword sslmode=require" -c "DROP DATABASE db_test;"
But I didn't remove the AZ CLI database removal step from the pipeline, and it failed with the following output:
Operation failed with status: 200. Details: Resource state Failed
I think at this step the AZ CLI should return something informative like "Database does not exist."
How can such situations be handled properly on the Azure side?
Does az postgres db show display the database details? Please check whether this is happening due to this behaviour, although the discussion in the quoted thread is in the context of a SQL database.
If the issue still persists, please feel free to open an issue on our GitHub repo for our internal team to take a look at.
Thanks for the feedback!
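In the meantime, here is a minimal sketch of an idempotent delete step for the pipeline, reusing the resource names from the question; guarding with az postgres db show is an assumption about how you might structure the step, not documented Azure behaviour:
# Only attempt the delete when the database still exists, so a re-run of the
# pipeline does not fail on an already-removed database.
if az postgres db show -g my_group -s database_here -n db_test &>/dev/null; then
  az postgres db delete -g my_group -s database_here -n db_test --yes
else
  echo "Database db_test does not exist; skipping delete."
fi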
I'm trying to create an Azure Synapse Link for Azure SQL Database, using the steps from here:
https://learn.microsoft.com/en-us/azure/synapse-analytics/synapse-link/connect-synapse-link-sql-database
After I create the link connection, when I try to start it I receive the following error:
The connection to the sink database is failed. Detailed error message is: Login failed for user ''.
(screenshots: ConnectionToAzureDB, LinkConnection)
Also, I have configured the Azure SQL database to use AAD authentication. The connection to the Azure database seems to be working.
My user (the one used to create the Synapse workspace) is a Subscription Owner.
The user is also the owner of the storage account.
I added the SQL managed identity as a Storage Blob Data Contributor.
Did anyone else get this error and manage to fix it?
There are certain limitations when connecting a SQL Database to Synapse Link, as per the documentation:
When setting up your workspace, users must select "Disable Managed Virtual Network" and "Allow connections from any IP addresses."
A link connection cannot be enabled by Azure Synapse Link for SQL if the database owner does not have a mapped login; this is what causes the error. The ALTER AUTHORIZATION command can be used to work around this problem by changing the database owner to a user with a valid login (see the sketch after this list).
With fewer than 100 DTUs, the Free, Basic, or Standard tiers do not allow Azure Synapse Link for SQL.
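For that ownership limitation, a minimal sketch of the ALTER AUTHORIZATION workaround; the server, database, and login names below are placeholders:
# Hypothetical names throughout; substitute your own server, database, and admin login.
# Reassigns database ownership to a principal with a mapped login so that
# Synapse Link can enable the link connection.
sqlcmd -S yourserver.database.windows.net -d YourDatabase -U sqladmin -P '<password>' \
  -Q "ALTER AUTHORIZATION ON DATABASE::[YourDatabase] TO [sqladmin];"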
With these limitations in mind, I tried to connect the SQL Database to Synapse Link and was able to connect without error:
I was trying to create a Synapse Link service with an on-premises SQL Server and was getting the following error:
Failed to enable Synapse Link on the source due to 'Failed to enable the source database: Some internal error happened due to 'Calling internal service failed: Failed to execute non query on change publisher with status code 400 and error Fail to non-query change publisher with error: 'sqlErrorCode - 22301; exceptionCode - TransferServiceUnknowError; error - A database operation failed with the following error: 'Could not update the metadata. The failure occurred when executing the command '(null)'. The error/state returned was 15517/1: 'Cannot execute as the database principal because the principal "dbo" does not exist, this type of principal cannot be impersonated, or you do not have permission.'. Use the action and error to determine the cause of the failure and resubmit the request.'; detailedError - A database operation failed with the following error: 'Could not update the metadata. The failure occurred when executing the command '(null)'. The error/state returned was 15517/1: 'Cannot execute as the database principal because the principal "dbo" does not exist, this type of principal cannot be impersonated, or you do not have permission.'. Use the action and error to determine the cause of the failure and resubmit the request.'
I resolved it by changing the corresponding database owner to 'sa', and it works:
use [YourCorrespondingDatabase]
GO
EXEC sp_changedbowner 'sa'
I am struggling a bit to find the best approach to achieve this.
I am trying to build a fully automated process that spins up a Microsoft SQL server and a SQL database using Terraform and then, once all the infrastructure is in place, releases a *.dacpac against the SQL database to create the tables and columns and ultimately seed some data into the database.
So far I am using Azure Pipelines to achieve this, and this is my workflow:
terraform init
terraform validate
terraform apply
database script (to create the tables and columns)
The above steps work just fine and everything falls into place perfectly. Now I would like to implement another step to seed the database from a CSV or Excel file.
I did some research on Google and read the Microsoft documentation, but while there are apparently different ways of doing this, all of those approaches, from BULK INSERT to bcp and sqlcmd, are documented against a local server rather than a cloud server.
Can anyone please advise me on how to achieve this task using Azure Pipelines and a cloud SQL database? Thank you so much for any hint.
UPDATE:
If I create a task in the release pipeline for a cmd script, I get the following error:
2021-11-12T17:38:39.1887476Z Generating script.
2021-11-12T17:38:39.2119542Z Script contents:
2021-11-12T17:38:39.2146058Z bcp Company in "./Company.csv" -c -t -S XXXXX -d XXXX -U usertest -P ***
2021-11-12T17:38:39.2885488Z ========================== Starting Command Output ===========================
2021-11-12T17:38:39.3505412Z ##[command]"C:\Windows\system32\cmd.exe" /D /E:ON /V:OFF /S /C "CALL "D:\a\_temp\33c62204-f40c-4662-afb8-862bbd4c42b5.cmd""
2021-11-12T17:38:39.7265213Z SQLState = S1000, NativeError = 0
2021-11-12T17:38:39.7266751Z Error = [Microsoft][ODBC Driver 17 for SQL Server]Unable to open BCP host data-file
It should actually be quite simple.
The following example imports data using an Azure AD username and password, where the user and password are AAD credentials. The example imports data from the file c:\last\data1.dat into the table bcptest in the database testdb on the Azure server aadserver.database.windows.net using an Azure AD user/password:
bcp bcptest in "c:\last\data1.dat" -c -t -S aadserver.database.windows.net -d testdb -G -U alice@aadtest.onmicrosoft.com -P xxxxx
If you do not plan to use an AAD account, remove the -G flag and use a SQL login instead.
Ideally, you would write your admin account name and password into Key Vault directly from Terraform. You can then fetch those values via the Azure Key Vault task in the pipeline and use them to run the above-mentioned script.
It looks like bcp is already installed on the Windows agents:
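As for the "Unable to open BCP host data-file" error in the update above, the usual cause is that the relative path ./Company.csv does not resolve from the agent's working directory. A minimal sketch of a script step, assuming the CSV sits in the default working directory and that sqlUser/sqlPassword are pipeline secrets (all names here are placeholders):
# An absolute path built from a pipeline variable avoids the host data-file error;
# $(System.DefaultWorkingDirectory) is where the agent places sources and artifacts.
# -t"," sets the field terminator to a comma for CSV input.
bcp Company in "$(System.DefaultWorkingDirectory)/Company.csv" -c -t"," -S yourserver.database.windows.net -d yourdatabase -U "$(sqlUser)" -P "$(sqlPassword)"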
I am trying to create a SQL database using Cloud Shell.
Note: I am able to create the SQL database in the same resource group without any issues.
When I execute the command from Cloud Shell, I get the following error message:
PS /home/xxx> az sql db create -g akshandsonlab -s aksdatabase -n mhcdb --service-objective S0
ResourceNotFoundError: The Resource 'Microsoft.Sql/servers/aksdatabase' under resource group 'akshandsonlab' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix
I have followed the link above but I am reaching a dead end.
Can anyone shed some light on this?
Regards
Sudlo
You could check whether you have selected the correct subscription for creating the SQL database via az account show.
If not, you could list your subscriptions (az account list) and then set the subscription (az account set -s <subscriptionID>) in which you want to create the resource.
If that does not help, double-check the resource name and resource group name.
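Concretely, the check-and-switch sequence might look like this (the subscription ID below is a placeholder):
# Show the subscription the CLI is currently targeting
az account show --query "{name:name, id:id}" -o table
# List all subscriptions available to your account
az account list --query "[].{name:name, id:id}" -o table
# Switch to the subscription that contains your SQL server
az account set -s 00000000-0000-0000-0000-000000000000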
For more information, please refer to https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/error-not-found
Note: I am able to create the sql database in the same resource group without any issues.
It looks like you want to create your database in a different resource group than that of your SQL server, which is not possible as of today. The SQL server and DB must exist in the same resource group.
This is most likely the reason you're seeing this error. Instead, run the command passing the resource group where your SQL server exists.
az sql db create -g <sql-server-resource-group> -s aksdatabase -n mhcdb --service-objective S0
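If you are unsure which resource group the server actually lives in, here is a quick way to look it up (assuming the server is really named aksdatabase, as in the command above):
# Prints the resource group that contains the SQL server named aksdatabase
az sql server list --query "[?name=='aksdatabase'].resourceGroup" -o tsv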
I currently have an Azure SQL Data Warehouse and I'd like to enable result set caching so that intensive queries run faster in the database, using the following code:
ALTER DATABASE [myDB]
SET RESULT_SET_CACHING ON;
However, no matter how I try to run this query, I get the following error:
Msg 5058, Level 16, State 12, Line 3
Option 'RESULT_SET_CACHING' cannot be set in database 'myDB'.
I am running the query based on Azure's documentation here: https://learn.microsoft.com/en-us/sql/t-sql/statements/alter-database-transact-sql-set-options?view=azure-sqldw-latest
I have tried running this query both in the master database and in the underlying one, called myDB. I have also tried commands such as:
USE master
GO
to no avail. Has anyone had success enabling caching on Azure? Please let me know!
Screenshot of error and command below:
https://i.stack.imgur.com/mEJIy.png
I tested this, and the command works well in my Azure SQL Data Warehouse dwleon; see the screenshot below:
Please make sure you:
Log in to your Azure SQL Data Warehouse with the SQL Server admin account.
Run this command in the master database.
Summary of the documentation:
To set the RESULT_SET_CACHING option, a user needs the server-level principal login (the one created by the provisioning process) or must be a member of the dbmanager database role.
Enable result set caching for a database:
--Run this command when connecting to the MASTER database
ALTER DATABASE [database_name]
SET RESULT_SET_CACHING ON;
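To confirm the setting took effect, here is a quick check against sys.databases; the sqlcmd invocation and server name are illustrative, and any client connected as the admin works:
# is_result_set_caching_on = 1 means result set caching is enabled for the database.
sqlcmd -S yourserver.database.windows.net -d master -U sqladmin -P '<password>' \
  -Q "SELECT name, is_result_set_caching_on FROM sys.databases WHERE name = 'myDB';"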
Hope this helps.
I am using Node.js, GraphQL, Prisma, Docker, and PostgreSQL.
Whenever I change the schema I have to deploy it, but the deploy fails with the following error:
ERROR: You can not deploy to a service stage while there is a deployment in progress or a pending deployment scheduled already. Please try again after the deployment finished
"code": 4008,
"status": 200
Then I wait a few minutes and try again, but the result is the same; I have tried many times with the same outcome.
This happens when there is an unapplied migration in the management schema.
To resolve this, follow these steps (a command-line sketch of the same fix follows the list):
Connect to your database using a GUI (like tableplus.io).
Switch to the management schema.
Go to the migration table.
Delete the last row.
Then try to redeploy your service.
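If you prefer to do the same from the command line, here is a sketch with psql; the management schema and "Migration" table names follow Prisma 1's management database, and the connection values are placeholders, so inspect the rows before deleting anything:
# Point this at the database Prisma uses for its management schema.
psql "host=localhost port=5432 dbname=prisma user=prisma password=<password>" \
  -c "SELECT * FROM management.\"Migration\" ORDER BY \"revision\" DESC LIMIT 5;"
# Once you have identified the stuck (last) migration row, delete it and redeploy:
psql "host=localhost port=5432 dbname=prisma user=prisma password=<password>" \
  -c "DELETE FROM management.\"Migration\" WHERE \"revision\" = (SELECT MAX(\"revision\") FROM management.\"Migration\");"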