SqlPackage.exe: deploy DACPAC with Always Encrypted and Key Vault - Azure

I am trying to deploy a DACPAC using an Azure release pipeline.
These are the methods I have tried:
Using the Azure SQL DACPAC task - with this approach, the following parameters are passed as additional properties for the deployment:
/AzureKeyVaultAuthMethod:ClientIdSecret /ClientId:'$(SERVICEPRINCIPALID)' /Secret:'$(SERVICEPRINCIPALKEY)'
On enabling the diagnostics log, I get the following error:
SqlPackage build version - 16.0.6161.0
Using a PowerShell script - when using a PowerShell script to do the DACPAC deployment, I get the following error:
Failed to decrypt a column encryption key. Invalid key store provider name: 'AZURE_KEY_VAULT'. A key store provider name must denote either a system key store provider or a registered custom key store provider. Valid system key store provider names are: 'MSSQL_CERTIFICATE_STORE', 'MSSQL_CNG_STORE', 'MSSQL_CSP_PROVIDER'. Valid (currently registered) custom key store provider names are: . Please verify key store provider information in column master key definitions in the database, and verify all custom key store providers used in your application are registered properly.
Logs attached here
SqlPackage build version - 15.0.5472.2
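For reference, the SqlPackage invocation in such a PowerShell script is roughly of this shape (a minimal sketch only; the path, server, database, and environment variable names are placeholders, not the actual values used, and it assumes the pipeline variables are exposed as environment variables):

# Minimal sketch of the SqlPackage call (placeholder paths and names).
# The /AzureKeyVaultAuthMethod, /ClientId and /Secret parameters let SqlPackage
# authenticate to Azure Key Vault for the Always Encrypted column master keys
# referenced by the DACPAC.
$sqlPackage = 'C:\Program Files\Microsoft SQL Server\160\DAC\bin\SqlPackage.exe'

& $sqlPackage `
    '/Action:Publish' `
    '/SourceFile:MyDatabase.dacpac' `
    '/TargetServerName:myserver.database.windows.net' `
    '/TargetDatabaseName:MyDatabase' `
    "/TargetUser:$env:SQL_ADMIN_USER" `
    "/TargetPassword:$env:SQL_ADMIN_PASSWORD" `
    '/AzureKeyVaultAuthMethod:ClientIdSecret' `
    "/ClientId:$env:SERVICEPRINCIPALID" `
    "/Secret:$env:SERVICEPRINCIPALKEY"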
As a pre-requisite before deploying the DACPAC, I create the keys using a PowerShell script and then insert them into the database. The contents of the script are below.
All the deployments happen through a service principal, which has admin-level access to all the resources in Azure AD.
Am I missing any steps before deploying the DACPAC through the release pipeline?
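For context, a key-provisioning step of the kind described above typically looks roughly like the following sketch (illustrative only, not the original script; it assumes the SqlServer PowerShell module, and all names, URLs, and credentials are placeholders):

# Illustrative sketch only; all names, URLs, and credentials are placeholders.
Import-Module SqlServer

# Authenticate the session to Azure so the cmdlets can reach Key Vault
# (shown here for a service principal).
Add-SqlAzureAuthenticationContext -ClientID $env:SERVICEPRINCIPALID `
    -Secret $env:SERVICEPRINCIPALKEY -Tenant $env:TENANTID

# Connect to the target database.
$connStr = "Server=myserver.database.windows.net;Database=MyDatabase;User ID=sqladmin;Password=$env:SQL_ADMIN_PASSWORD"
$database = Get-SqlDatabase -ConnectionString $connStr

# Register the Key Vault key as a column master key (CMK) in the database.
$cmkSettings = New-SqlAzureKeyVaultColumnMasterKeySettings `
    -KeyUrl 'https://my-keyvault.vault.azure.net/keys/AlwaysEncryptedCMK/<keyVersion>'
New-SqlColumnMasterKey -Name 'CMK_Auto1' -InputObject $database `
    -ColumnMasterKeySettings $cmkSettings

# A column encryption key (CEK) encrypted by that CMK is then created with
# New-SqlColumnEncryptionKey before the DACPAC deployment runs.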
Thanks,
Nandan

Related

Empty error while executing SSIS package in Azure Data Factory

I have created a simple SSIS project, and in this project I have a package that deletes a particular file in the Downloads folder.
I deployed this project to Azure, and when I try to execute this package using Azure Data Factory, the pipeline fails with an empty error (screenshot attached).
What I have done to try to fix this error:
I added a self-hosted IR to the Azure-SSIS IR as the proxy to access the on-premises data.
Set ConnectByProxy to True.
Converted the project to the Project Deployment Model.
Please help me fix this error; if you need more details, just leave a comment.
Windows Authentication:
To access data stores such as SQL Server or file shares on-premises, or Azure Files, check the Windows authentication check box.
If this check box is selected, fill in the Domain, Username, and Password fields with your package execution credentials. For example, to access Azure Files the domain is Azure, the username is <storage account name>, and the password is <storage account key>.
Using the secrets stored in your Azure Key Vault
Alternatively, you can use secrets stored in your Azure Key Vault as values. To do so, select the AZURE KEY VAULT check box next to them. Create a new Key Vault linked service, or choose or edit an existing one, then select the secret name and version for your value. When creating or editing your Key Vault linked service, you can pick or edit an existing key vault or create a new one. If you haven't previously done so, grant the Data Factory managed identity access to your key vault. You may also enter your secret directly in the format <key vault linked service name>/<secret name>/<secret version>.
Note: If you are using Windows authentication, there are four methods to access data stores and file shares with Windows authentication from SSIS packages running on your Azure-SSIS IR; see Access data stores and file shares with Windows authentication from SSIS packages in Azure | Docs.
Make sure your scenario falls under one of those methods, otherwise it could fail at run time. One of them, running cmdkey inside the package, is sketched below.
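To illustrate that method (a sketch, not taken verbatim from the linked article; the server, user, and credential names are placeholders): the package can cache the file-share credentials with cmdkey before any file operations run, typically from an Execute Process Task at the start of the package.

# Sketch: cache Windows credentials for the file server before the package
# touches the share. All names are placeholders; in a real package this is
# usually invoked via an Execute Process Task rather than PowerShell.
cmdkey /add:myfileserver.contoso.com /user:CONTOSO\ssisrunner "/pass:$env:FILE_SHARE_PASSWORD"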

Creating Multiple Environment Parameters for Azure Data Factory Linked Services

I have a requirement where I need to point our DEV Azure Data Factory to a Production Azure SQL database and also have the ability to switch the data source back to the Dev database should we need to.
I've been looking at creating parameters against the linked services, but I'm unsure of the best approach.
Should I create parameters as follows and choose the relevant parameters depending on the environment I want to pull data from?
DevFullyQualifiedDomainName
ProdFullyQualifiedDomainName
DevDatabaseName
ProdDatabaseName
DevUserName
ProdUserName
Thanks
Any sort of trigger can also have parameters attached to it. Consider the following example, assuming you have a custom event trigger and SQL Server as a source:
Create a string parameter for the database name field when setting up the SQL Server linked service used by the dataset.
Create a new parameter in the dataset and assign it to that same linked service parameter; this dataset parameter will carry the trigger data.
A custom event trigger can parse and deliver a custom data payload to your pipeline. You define the pipeline parameters and then populate the values on the Parameters page. To parse the data payload and pass values to the pipeline parameters, use the format @triggerBody().event.data.keyName.
The relevant Microsoft documentation:
Reference trigger metadata in pipelines
System variables in custom event trigger
When you use the dataset in an activity's source, it will prompt you for the dataset parameter. There, use dynamic content and choose the pipeline parameter containing the trigger data.
I would suggest using Azure Key Vault for that.
Create an Azure Key Vault for each environment (dev, prod, etc.)
Create secrets inside both key vaults with the same name but different values (see the sketch after these steps).
For example, for the database server, create the same secret "database-server" in both the dev and prod key vaults, each holding the connection string of the respective server, in the following format:
integrated security=False;encrypt=True;connection timeout=30;data source=<serverName>.database.windows.net;initial catalog=<databaseName>;user id=<userName>;password=<loginPassword>
In your Azure Data Factory, create a Key Vault linked service pointing to your key vault.
In your Azure Data Factory, create a new Azure SQL Database linked service, selecting the Key Vault linked service created in the previous step and the secret created in step 2.
Now you can easily switch between dev and prod by simply adjusting your Key Vault linked service to point to the desired environment.
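To illustrate step 2, the per-environment secrets can be seeded with Azure PowerShell roughly like this (a sketch assuming the Az.KeyVault module; vault names, server names, and credentials are placeholders):

# Sketch: create the same secret name in the dev and prod vaults, each holding
# that environment's connection string. All names and values are placeholders.
$secretName = 'database-server'

$devValue  = 'integrated security=False;encrypt=True;connection timeout=30;data source=dev-server.database.windows.net;initial catalog=MyDb;user id=devUser;password=<devPassword>'
$prodValue = 'integrated security=False;encrypt=True;connection timeout=30;data source=prod-server.database.windows.net;initial catalog=MyDb;user id=prodUser;password=<prodPassword>'

Set-AzKeyVaultSecret -VaultName 'kv-myproject-dev' -Name $secretName `
    -SecretValue (ConvertTo-SecureString $devValue -AsPlainText -Force)
Set-AzKeyVaultSecret -VaultName 'kv-myproject-prod' -Name $secretName `
    -SecretValue (ConvertTo-SecureString $prodValue -AsPlainText -Force)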
Have fun ;)
Reference:
https://learn.microsoft.com/en-us/azure/data-factory/store-credentials-in-key-vault

Azure ADF using Azure Batch throws Shared Access Signature generation error

I am working on a simple Azure Data Factory pipeline where I have simply added a Batch Service activity and specified the Batch account in it (which I created through a linked service, and the connection tests as working). In the command I am just running a simple "ls", and when I do a debug run I get this error: "Cannot create Shared Access Signature unless Account Key credentials are used." I have the following linked services: "Azure Batch", "Azure Blob Storage", and Key Vault (where we store the access key). All of the linked service connections work properly.
Any help on how to fix this error: "Cannot create Shared Access Signature unless Account Key credentials are used."
Azure Batch Linked service:
Azure Storage Linked service:
Azure Data factory pipeline:
The issue happens because you use "Managed Identity" to connect ADF to the storage. The connection test on the linked service will report success, but when this storage is used for Batch it needs to use the "Account Key" authentication type (see here).
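If you prefer to make that change from a script rather than the portal, a sketch of redeploying the storage linked service with account-key authentication could look like this (it assumes the Az.DataFactory module; the factory, linked service, and storage names, and the key, are placeholders):

# Sketch: switch the Blob Storage linked service to account-key (connection
# string) authentication. All names and the key value are placeholders.
$definition = @'
{
    "name": "AzureBlobStorageLS",
    "properties": {
        "type": "AzureBlobStorage",
        "typeProperties": {
            "connectionString": "DefaultEndpointsProtocol=https;AccountName=<storageAccountName>;AccountKey=<storageAccountKey>;EndpointSuffix=core.windows.net"
        }
    }
}
'@
Set-Content -Path .\AzureBlobStorageLS.json -Value $definition

Set-AzDataFactoryV2LinkedService -ResourceGroupName 'my-rg' -DataFactoryName 'my-adf' `
    -Name 'AzureBlobStorageLS' -DefinitionFile '.\AzureBlobStorageLS.json' -Force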

Retrieve COSMOS Connection String or Primary Key in javascript running in azure pipeline

I have created an Azure pipeline using the classic editor, and it executes a test.js file. I need to retrieve the Azure Cosmos DB key so it can be used in the JS file.
I tried installing the Cosmos DB Key Retriever extension, but it doesn't show an ADD option in the pipeline.
How can this be resolved? How can the Cosmos key be fetched within the JS file?
We strongly suggest using a config.js file to set your app's configuration, including the PRIMARY KEY of Azure Cosmos DB; see the related official documents.
Since it seems you want to avoid writing the key directly in code, you can consider the following:
1. Copy the primary key from the Keys page of your Cosmos DB account in the Azure portal, and then create a variable group in Azure DevOps Pipelines to store that value (change the variable type to secret!).
Alternatively, you can store that value in Azure Key Vault and then link secrets from the key vault into the variable group (if you don't want to keep the value in the variable group directly).
2. Link the variable group to your classic pipeline.
3. Use the Replace Tokens task to insert the value of your primary key into config.js (or whichever .js file you use). Run your other tasks after this task so that the key is available in your JS file.
Assuming you have the variable TheKeyOfCosmos storing the primary key, specify this format in the config.js file:
key: "#{TheKeyOfCosmos}#",
After that task runs, that line in config.js becomes key: "xxxxxxxxxxxxxxxx", with the real key value.
After the above steps you can test/run your app with the primary key.
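If you'd rather not copy the key from the portal at all, it can also be fetched during the pipeline with Azure PowerShell and exposed as a secret variable for the Replace Tokens task, roughly like this (a sketch assuming the Az.CosmosDB module; the resource names are placeholders):

# Sketch: read the Cosmos DB primary key and surface it as a secret pipeline
# variable named TheKeyOfCosmos for later tasks. Names are placeholders.
$keys = Get-AzCosmosDBAccountKey -ResourceGroupName 'my-rg' -Name 'my-cosmos-account' -Type 'Keys'

# Azure DevOps logging command: makes the value available (as a secret) to
# subsequent tasks in the job.
Write-Host "##vso[task.setvariable variable=TheKeyOfCosmos;issecret=true]$($keys.PrimaryMasterKey)"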

Separating ConnectionString from source control and being able to perform integration testing in an Azure CI/CD pipeline

I have a Web API project and I'm using an Azure CI/CD pipeline to deploy it to Azure. The project contains unit tests and integration tests, and the integration tests need access to a database. But since I don't want to check my connection string into source control, the build pipeline always fails.
So, the question is: what solutions, features, or workarounds exist that can help me accomplish this scenario?
You can use the Replace Tokens task to feed your config file with the connection string. For that you need to install the Replace Tokens extension and add the task to your pipeline.
With this configuration you need an appsettings.json like this:
{
  "ConnectionStrings": {
    "BloggingDatabase": "#{ConnectionString}#"
  }
}
and in your pipeline, define the variable ConnectionString with the real connection string.
You can also use variable groups with Azure Key Vault. For that approach please check this blog post.
Azure Key Vault is a good place to securely store secrets such as db server credentials; this keeps them out of source control.
The general approach is:
in advance, save the DB server password as a Key Vault secret
in the pipeline, get the DB server password using the Azure Key Vault task; it is then available as a secret variable in the pipeline (a scripted alternative is sketched below)
use the DB server password in subsequent tasks, either directly or by substituting it into app settings as described in Krzysztof Madej's answer
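The same retrieval can also be done from an Azure PowerShell task instead of the built-in Azure Key Vault task; a minimal sketch (the vault, secret, and variable names are placeholders, and it assumes a recent Az.KeyVault module):

# Sketch: fetch the DB password from Key Vault and expose it as a secret
# pipeline variable for the integration test tasks. Names are placeholders.
$password = Get-AzKeyVaultSecret -VaultName 'kv-myproject-ci' -Name 'IntegrationTestDbPassword' -AsPlainText
Write-Host "##vso[task.setvariable variable=DbPassword;issecret=true]$password"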
