I am struggling a bit to find the best approach to achieve this.
I am trying to build a fully automated process that spins up a Microsoft SQL Server and a SQL database using Terraform and, once all the infrastructure is in place, releases a *.dacpac against the SQL database to create the tables and columns and ultimately seed some data into this database.
So far I am using Azure Pipelines to achieve this, and this is my workflow:
terraform init
terraform validate
terraform apply
database script (to create the tables and columns)
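In shell terms, those stages might look roughly like the sketch below. This is only a sketch: it assumes SqlPackage is used for the dacpac release, and every server, database, and file name is a placeholder rather than something from the actual pipeline.
# sketch only -- SqlPackage is an assumption; names and credentials are placeholders
terraform init
terraform validate
terraform apply -auto-approve   # -auto-approve keeps the apply non-interactive in a pipeline
sqlpackage /Action:Publish /SourceFile:"Database.dacpac" \
  /TargetServerName:"yourserver.database.windows.net" /TargetDatabaseName:"yourdb" \
  /TargetUser:"sqladmin" /TargetPassword:"$SQL_PASSWORD"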
These steps work just fine and everything falls into place perfectly. Now I would like to implement another step to seed the database from a CSV or Excel file.
I did some research on Google and read the Microsoft documentation, but apparently there are different ways of doing this, and all of those approaches, from BULK INSERT to bcp and sqlcmd, are documented against a local server rather than a cloud server.
Can anyone please advise me on how to achieve this task using Azure Pipelines and a cloud SQL database? Thank you so much for any hint.
UPDATE:
If I create a cmd script task in the release pipeline, I get the following error:
2021-11-12T17:38:39.1887476Z Generating script.
2021-11-12T17:38:39.2119542Z Script contents:
2021-11-12T17:38:39.2146058Z bcp Company in "./Company.csv" -c -t -S XXXXX -d XXXX -U usertest -P ***
2021-11-12T17:38:39.2885488Z ========================== Starting Command Output ===========================
2021-11-12T17:38:39.3505412Z ##[command]"C:\Windows\system32\cmd.exe" /D /E:ON /V:OFF /S /C "CALL "D:\a\_temp\33c62204-f40c-4662-afb8-862bbd4c42b5.cmd""
2021-11-12T17:38:39.7265213Z SQLState = S1000, NativeError = 0
2021-11-12T17:38:39.7266751Z Error = [Microsoft][ODBC Driver 17 for SQL Server]Unable to open BCP host data-file
It should actually be quite simple:
The following example imports data using an Azure AD username and password, where the user and password are an AAD credential. It imports data from the file c:\last\data1.dat into the table bcptest in the database testdb on the Azure server aadserver.database.windows.net:
bcp bcptest in "c:\last\data1.dat" -c -t -S aadserver.database.windows.net -d testdb -G -U alice@aadtest.onmicrosoft.com -P xxxxx
If you do not plan to use an AAD account, remove the -G flag.
Ideally you would put your admin account name and password into Key Vault directly from Terraform. Then you can fetch those values via the Azure Key Vault task in the pipeline and use them to run the above-mentioned script.
It looks like bcp is already available on the Windows-hosted agents.
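Putting the pieces together, a hedged sketch of the bcp step follows. It assumes the Key Vault task has already exposed the secrets as pipeline variables, and that Company.csv sits in the pipeline's default working directory; the relative path "./Company.csv" in the failing command may simply not resolve from the cmd task's working directory, which is one common cause of "Unable to open BCP host data-file". All server and database names below are placeholders.
# sketch only -- names, variables and paths are illustrative, not taken from the original pipeline
# (secret pipeline variables usually have to be mapped to environment variables explicitly)
bcp Company in "$SYSTEM_DEFAULTWORKINGDIRECTORY/Company.csv" -c -t \
  -S yourserver.database.windows.net -d yourdb \
  -U "$SQLADMINUSER" -P "$SQLADMINPASSWORD"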
I am trying to create a SQL database using Cloud Shell.
Note: I am able to create the SQL database in the same resource group without any issues.
When I execute the command from the Cloud Shell, I get the following error message.
PS /home/xxx> az sql db create -g akshandsonlab -s aksdatabase -n mhcdb --service-objective S0
ResourceNotFoundError: The Resource 'Microsoft.Sql/servers/aksdatabase' under resource group 'akshandsonlab' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix
I have followed the above link but I am reaching a dead end.
Can anyone shed some light on this?
Regards
Sudlo
You could check whether you have selected the correct subscription when creating the SQL database, via az account show.
If not, you could list the subscriptions (az account list) and then set the subscription (az account set -s <subscriptionID>) in which you want to create the resource, as sketched below.
You could also double-check the resource name and the resource group name.
For more information, please refer to https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/error-not-found
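Putting those subscription checks together, a minimal sketch (the --query expressions are just one way to trim the output):
az account show --query name -o tsv                       # which subscription is currently active?
az account list --query "[].{Name:name, Id:id}" -o table   # list all available subscriptions
az account set -s <subscriptionID>                          # switch to the one that holds your resource group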
Note: I am able to create the SQL database in the same resource group without any issues.
It looks like you want to create your database in a different resource group than that of your SQL server, which is not possible as of today. The SQL server and DB must exist in the same resource group.
This is most likely the reason you're seeing this error. Instead, run the command passing the resource group where your SQL server exists.
az sql db create -g <sql-server-resource-group> -s aksdatabase -n mhcdb --service-objective S0
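If you are not sure which resource group the server lives in, a lookup like the following might help (assuming, from the original command, that the server is named aksdatabase):
# list the resource group of the SQL server named aksdatabase
az sql server list --query "[?name=='aksdatabase'].resourceGroup" -o tsv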
I am trying to back up an SAP HANA database that runs in an Azure VM by using the Recovery Services vault. While running the "msawb-plugin-config-com-sap-hana.sh" script file I am getting the error:
Failed to determine SYSTEM_KEY_NAME: Please specify with the '--system-key' option.
Need a valid system key to create the backup key.
Please help me to resolve this error.
According to the prerequisites at https://learn.microsoft.com/en-us/azure/backup/tutorial-backup-sap-hana-db#prerequisites, you have to create a key in the default hdbuserstore.
You can create it by logging in as ndbadm:
su - ndbadm
and add the key:
/hana/shared/NDB/hdbclient/hdbuserstore set BACKUP YOUR_HOSTNAME:30013 SYSTEM YOUR_PASSWORD
Then, as root, run the script.
After running the script, you can check again as the ndbadm user whether the key AZUREWLBACKUPHANAUSER is there:
/hana/shared/NDB/hdbclient/hdbuserstore list
and delete your previously created key:
/hana/shared/NDB/hdbclient/hdbuserstore delete BACKUP
The script uses the runuser command (in my case for ndbadm). When hdbuserstore is executed under the ndbadm profile, no keys are returned. You can copy the files SSFS_HDB.DAT and SSFS_HDB.KEY into the path returned by hdbuserstore LIST from a profile that has valid files, as sketched below.
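A rough sketch of that copy; the paths here are purely illustrative, and the real source and destination are whatever DATA FILE and KEY FILE locations hdbuserstore LIST reports under each profile:
# illustrative paths only -- take the real ones from `hdbuserstore LIST` for each profile
cp /usr/sap/NDB/home/.hdb/validhost/SSFS_HDB.DAT /usr/sap/NDB/home/.hdb/targethost/SSFS_HDB.DAT
cp /usr/sap/NDB/home/.hdb/validhost/SSFS_HDB.KEY /usr/sap/NDB/home/.hdb/targethost/SSFS_HDB.KEY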
Refer to SAP Note 2853601 - Why is Nameserver Port Used in HDBUSERSTORE for SAP Application Installation.
In an MDC setup, the nameserver port (e.g. 30013) is used in hdbuserstore for a tenant DB, instead of the indexserver port (e.g. 30015).
I currently have an Azure SQL Data Warehouse and I'd like to enable result set caching so that intensive queries run faster, using the following code:
ALTER DATABASE [myDB]
SET RESULT_SET_CACHING ON;
However, no matter how I try to run this query, I get the following error:
Msg 5058, Level 16, State 12, Line 3
Option 'RESULT_SET_CACHING' cannot be set in database 'myDB'.
I am running the query based on Azure's documentation here: https://learn.microsoft.com/en-us/sql/t-sql/statements/alter-database-transact-sql-set-options?view=azure-sqldw-latest
I have tried running this query both in the master database and in the underlying one called myDB. I have also tried using commands such as:
USE master
GO
To no avail. Has anyone had success enabling caching on Azure? Please let me know!
Screenshot of error and command below:
https://i.stack.imgur.com/mEJIy.png
I tested this, and the command works well in my Azure SQL Data Warehouse dwleon.
Please make sure you:
log in to your Azure SQL Data Warehouse with the SQL Server admin account;
run the command while connected to the master database.
Summary of the document:
To set the RESULT_SET_CACHING option, a user needs the server-level principal login (the one created by the provisioning process) or be a member of the dbmanager database role.
Enable result set caching for a database:
--Run this command when connecting to the MASTER database
ALTER DATABASE [database_name]
SET RESULT_SET_CACHING ON;
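To double-check whether the option actually took effect, one possible query against sys.databases (run via sqlcmd here; the server name and credentials are placeholders) is:
sqlcmd -S yourserver.database.windows.net -d master -U sqladmin -P "$SQL_PASSWORD" \
  -Q "SELECT name, is_result_set_caching_on FROM sys.databases WHERE name = 'myDB';"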
Hope this helps.
I have created a pipeline using Azure DevOps for an Azure PostgreSQL database.
What does the pipeline actually do?
Connects to PostgreSQL;
Removes the database db_test from PostgreSQL using the Azure CLI:
az postgres db delete -g my_group -s database_here -n db_test --yes
However, I cannot do this due to the following error:
An unexpected error occured while processing the request.
Then I tried to remove the database using psql, but with no luck, due to existing connections to the database.
From my point of view, the Azure CLI should handle such issues and either remove the database or pass a correct error message back to me. For example, it would be great if a --force parameter were implemented.
I have removed all connections to the database using the following syntax in a bash script:
psql "host=database_here port=5432 dbname=postgres user=postgres#database_here password=ReallyStrongPassword sslmode=require" -c "REVOKE CONNECT ON DATABASE db_test FROM PUBLIC; SELECT pg_terminate_backend(pg_stat_activity.pid) FROM pg_stat_activity WHERE pg_stat_activity.datname = 'db_test';"
and added a DROP DATABASE step additionally to my pipeline:
psql "host=database_here port=5432 dbname=postgres user=postgres#database_here password=ReallyStrongPassword sslmode=require" -c "DROP database db_test;"
But I didn't remove the az CLI database removal step from the pipeline, and it failed with the following output:
Operation failed with status: 200. Details: Resource state Failed
I think at this step the az CLI should return something like "Database does not exist." as an informative message.
How do I properly handle such situations on the Azure side?
Does az postgres db show still show the database details? Please check whether this is happening due to that behaviour, albeit the discussion in the quoted thread is in the context of a SQL database.
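One possible way to make the pipeline tolerant of an already-deleted database is to guard the delete with an existence check; a rough sketch, reusing the names from the question:
# skip the delete when the database is already gone
if az postgres db show -g my_group -s database_here -n db_test >/dev/null 2>&1; then
  az postgres db delete -g my_group -s database_here -n db_test --yes
else
  echo "Database db_test does not exist; skipping delete."
fi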
If the issue still persists, please feel free to open an issue on our GitHub repo so that our internal team can take a look.
Thanks for the feedback!
When I run the code below to import data from a CSV file stored in an Azure storage account, I get the following error:
syntax error at or near "CREDENTIALS"
COPY ccsm.vital_signs FROM
'https://abc.blob.core.windows.net/dta/abc.csv'
CREDENTIALS ''
DELIMITER '|'
CSV HEADER;
I found that CREDENTIALS is only used in Amazon Redshift (which is based on, but different from, PostgreSQL); it is not valid in PostgreSQL according to its documentation.
I didn't find any similar operation supported by Azure Database for PostgreSQL either.
So I recommend downloading the file first and then importing it.
Note that we don't have superuser privileges to use the server-side COPY method in Azure Database for PostgreSQL (see this thread), so we have to use the local psql tool with the \copy meta-command.
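A rough sketch of that approach, reusing the account, container, table, and delimiter from the question; the authentication for az storage blob download and the connection details for psql are placeholders:
# download the blob locally (add --account-key / --sas-token or sign in as appropriate)
az storage blob download --account-name abc --container-name dta --name abc.csv --file ./abc.csv
# load it with the client-side \copy meta-command
psql "host=yourserver.postgres.database.azure.com port=5432 dbname=yourdb user=youruser@yourserver sslmode=require" \
  -c "\copy ccsm.vital_signs FROM './abc.csv' WITH (FORMAT csv, HEADER true, DELIMITER '|')"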