Azure DevOps CI/CD pipeline issues with an Azure SQL Always Encrypted database

While setting up Azure DevOps CI/CD pipelines for an Azure SQL database that uses Always Encrypted, I observed the following:
Example: Table1 consists of 5 columns, of which Column1 and Column2 are encrypted.
Always Encrypted is enabled in the connection string (Column Encryption Setting=Enabled).
The dacpac file is created successfully without any issues and I am able to view Table1.
The issue appears while inserting data into Table1 using transaction data.
Error message: Encryption scheme mismatch for columns/variables
The same code works fine if I execute the dacpac manually in SSMS.
The error only appears when the dacpac is deployed through SSDT or the CI/CD pipeline.
Please let me know your thoughts about this issue.

Getting a CI/CD pipeline and a dacpac to work together is complicated when Always Encrypted is enabled. Please check whether the points below can narrow down the issue.
Usually the certificate for the Column Master Key is stored on the client machine, not on the SQL Server machine. If that is the case, you will not be able to insert data into a table with an Always Encrypted column; configure the Column Master Key accordingly.
(You probably already know this, but just for your information: the mismatch error in SSMS can be solved this way.)
According to permissions-for-publishing-a-dac-package-if-always-encrypted:
To publish a DAC package if Always Encrypted is set up in the DACPAC and/or in the target database, you might need some or all of the permissions below, depending on the differences between the schema in the DACPAC and the target database schema:
ALTER ANY COLUMN MASTER KEY, ALTER ANY COLUMN ENCRYPTION KEY, VIEW ANY COLUMN MASTER KEY DEFINITION, VIEW ANY COLUMN ENCRYPTION KEY DEFINITION
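For example, if the pipeline connects with a dedicated deployment user, the grants could look like this (deploy_user is a hypothetical name, and you may only need a subset of these):

-- Grant the Always Encrypted related permissions to the hypothetical deployment user
GRANT ALTER ANY COLUMN MASTER KEY TO [deploy_user];
GRANT ALTER ANY COLUMN ENCRYPTION KEY TO [deploy_user];
GRANT VIEW ANY COLUMN MASTER KEY DEFINITION TO [deploy_user];
GRANT VIEW ANY COLUMN ENCRYPTION KEY DEFINITION TO [deploy_user];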
Also note that Azure SQL is a PaaS service, which means it receives updates transparently and relatively often, sometimes with a new compatibility level. Try updating your SSDT version. Always Encrypted is supported in all editions of Azure SQL Database (V12).
Always Encrypted uses two types of cryptographic keys: column encryption keys (CEKs) and column master keys (CMKs). See developing databases using always encrypted.
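For reference, the key metadata lives in the database and looks roughly like this; the key names, the Key Vault URL, and the ENCRYPTED_VALUE blob are placeholders (normally SSMS or the Always Encrypted PowerShell cmdlets generate these statements for you):

-- Column master key metadata: points to the actual key in the key store (Azure Key Vault in this sketch)
CREATE COLUMN MASTER KEY [CMK_Demo]
WITH (
    KEY_STORE_PROVIDER_NAME = N'AZURE_KEY_VAULT',
    KEY_PATH = N'https://myvault.vault.azure.net/keys/CMK_Demo/0000000000000000'
);

-- Column encryption key: its value is the CEK encrypted by the CMK
CREATE COLUMN ENCRYPTION KEY [CEK_Demo]
WITH VALUES (
    COLUMN_MASTER_KEY = [CMK_Demo],
    ALGORITHM = 'RSA_OAEP',
    ENCRYPTED_VALUE = 0x016E000001630075 -- placeholder; use the tool-generated value
);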
Make sure that variable declaration and value assignment are performed on the same line.
Example:
DECLARE @OPERATION_ID int = 4
DECLARE @PARAMETER_NAME varchar(100) = 'xyz'
Alternatively, store the value to be inserted in a variable or result set in the application, and then insert the data from that result set into SQL Server; see the sketch below.
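A minimal sketch of that pattern, assuming the Table1 from the question with Column1 and Column2 as the encrypted columns (data types are assumptions). When the client driver, or SSMS with Parameterization for Always Encrypted enabled, sees a variable declared and initialized in one statement, it can convert it into a parameter and encrypt the value before sending it:

-- Declare and assign in one statement so the values can be parameterized and encrypted
DECLARE @Column1 nvarchar(100) = N'sensitive value 1';
DECLARE @Column2 nvarchar(100) = N'sensitive value 2';

-- Insert through the variables instead of inline literals
-- (remaining columns of Table1 omitted and assumed nullable)
INSERT INTO dbo.Table1 (Column1, Column2)
VALUES (@Column1, @Column2);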
Also see
azure-encryption-server-side-client-side-azure-key-vault
create-and-store-column-master-keys-always-encrypted
ci-with-a-sql-always-encrypted-column

Related

Creating Multiple Environment Parameters for Azure Data Factory Linked Services

I have a requirement where I need to point our DEV Azure Data Factory to a Production Azure SQL database and also have the ability to switch the data source back to the Dev database should we need to.
I've been looking at creating parameters against the linked services but I'm unsure of the best approach.
Should I create parameters as follows and choose the relevant parameters depending on the environment I want to pull data from?
DevFullyQualifiedDomainName
ProdFullyQualifiedDomainName
DevDatabaseName
ProdDatabaseName
DevUserName
ProdUserName
Thanks
Any sort of trigger can also have parameters attached to it. Check out the following example, assuming you have a custom event trigger and SQL Server as a source:
While creating the SQL Server linked service for the dataset, create a string parameter for the database name field.
Create a new parameter in the dataset and assign it to that linked service parameter; this dataset parameter is what will carry the trigger data.
A custom event trigger can parse and deliver a custom data payload to your pipeline. You define the pipeline parameters and then populate the values on the Parameters page. To parse the data payload and pass values to the pipeline parameters, use the format @triggerBody().event.data.keyName.
Refer to the Microsoft documentation:
Reference trigger metadata in pipelines
System variables in custom event trigger
When you use the dataset in a pipeline activity as a source, it will prompt you for the dataset parameter. There, use dynamic content and choose the parameter containing the trigger data.
I would suggest using Azure Key Vault for that.
Create an Azure Key Vault for each environment (dev, prod, etc.)
Create secrets inside both key vaults with the same name but different values.
For example, for the database server name, create the same secret "database-server" in both dev and prod key vaults but with the correct value representing the connection string of the dev and prod server respectively, in the following format:
integrated security=False;encrypt=True;connection timeout=30;data source=<serverName>.database.windows.net;initial catalog=<databaseName>;user id=<userName>;password=<loginPassword>
In your Azure Data Factory, create a Key Vault linked service pointing to your key vault.
In your Azure Data Factory, create a new Azure SQL Database linked service selecting the Key Vault created in step 1 and the secret created in step 2.
Now you can easily switch between dev and prod by simply adjusting your Key Vault linked service to point to the desired environment.
Have fun ;)
Reference:
https://learn.microsoft.com/en-us/azure/data-factory/store-credentials-in-key-vault

Decryption issue with Azure SQL Managed Instance CLE on secondary instance of a failover group

We have an Azure SQL Managed Instance Failover Group setup with a primary and secondary instance – the issue I’m hitting is that we use cell (column) level encryption (CLE) for some of our database table columns. My limited understanding is that the decryption of these depends on the service master key. I think the issue is that the database master key gets encrypted with the service master key and then the databases get synchronised between instances but synchronisation won’t do the server (instance) level data i.e. Service Master Key… so on the primary instance the data can be decrypted but on the failover instance it can’t. Hence you get an error like this:
Please create a master key in the database or open the master key in the session before performing this operation.
If I run the SQL below on my user database, it fixes the issue until I fail over, at which point I'll need to run it again. So it's not ideal from a failover perspective, and it also means I can't use the secondary instance as a read-only instance.
OPEN MASTER KEY DECRYPTION BY PASSWORD = 'XXX'
ALTER MASTER KEY DROP ENCRYPTION BY SERVICE MASTER KEY
OPEN MASTER KEY DECRYPTION BY PASSWORD = 'XXX'
ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY
Below is the only article I could find describing the problem (scroll towards the end, where it says "Decrypt data in the new primary replica"). It solves the problem by backing up the service master key from the primary instance and restoring it to the secondary instance, but that's an on-premises setup versus our Azure setup, and the issue is I don't know how (or whether it's even possible) to back up and restore the service master key in Azure.
https://www.sqlshack.com/column-level-sql-server-encryption-with-sql-server-always-on-availability-groups/
I did try to back up the service master key from the primary instance so I could restore it to the secondary instance, but I could not see a way to do this export in an Azure SQL Managed Instance - https://learn.microsoft.com/en-us/sql/t-sql/statements/backup-service-master-key-transact-sql?view=sql-server-ver15 … I tried giving it a blob storage location, which was a bit of a stretch, and it didn't like it:
BACKUP SERVICE MASTER KEY TO FILE = 'https://ourstorage.blob.core.windows.net/database-backups/service_master_key.key' ENCRYPTION BY PASSWORD = 'YYYY';
Msg 3078, Level 16, State 2, Line 69
The file name "https://pptefsaaseprd.blob.core.windows.net/database-backups/ase_prod_service_master_key" is invalid as a backup device name for the specified device type. Reissue the BACKUP statement with a valid file name and device type.
I’ve heard mention of perhaps using Azure Key Vault instead but couldn’t find any examples and ideally don’t want to cause any breaking changes to code/sql.
To give some more context our current stored procedures do something like the following:
OPEN SYMMETRIC KEY SSN_Key_Surname
DECRYPTION BY CERTIFICATE Surname;
/* SQL making use of the decrypted column */
CLOSE SYMMETRIC KEY SSN_Key_Surname;
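For illustration only, the commented section between OPEN and CLOSE would typically call DECRYPTBYKEY on the encrypted column; the table and column names below are hypothetical:

OPEN SYMMETRIC KEY SSN_Key_Surname
DECRYPTION BY CERTIFICATE Surname;

-- Decrypt the varbinary column back to its original type
SELECT PersonId,
       CONVERT(nvarchar(200), DECRYPTBYKEY(EncryptedSurname)) AS Surname
FROM dbo.Person;

CLOSE SYMMETRIC KEY SSN_Key_Surname;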
So that’s where I’m at. Hopefully I’m just missing a simple step – surely this is not an uncommon scenario? i.e. if you have Azure SQL Managed Instances in a failover group, with column level encryption where the database master key is encrypted by the service master key, how do you configure things so data can be decrypted on both primary and secondary instance?
I'd imagine for this to work you'd need to be able to backup the service master key from the primary instance and restore it to the secondary instance - is this possible in Azure?
As expected I was just missing a simple step as described here https://stackoverflow.com/a/58228431/1450351
The Database Master Key (DMK) is encrypted with the Service Master Key (SMK), which is unique to each SQL Server instance, and you want it this way.
SQL Server has an alternate way of decrypting the DMK. If the DMK cannot be decrypted with the SMK, it searches the credential store for a password that matches the same family GUID. If it finds a family GUID that matches your database, it will attempt to decrypt the DMK with the stored password. If that succeeds, it will use the DMK to encrypt or decrypt credentials or keys within the database.
So using sp_control_dbmasterkey_password will store, in the master database, a family GUID based on the database name together with the password that decrypts the DMK.
To ensure that the DMK works when an AG fails over from the primary to a secondary, run sp_control_dbmasterkey_password on the secondary as part of your process for joining a database to an AG.
So on the secondary instance I had to run this on the master DB
EXEC sp_control_dbmasterkey_password @db_name = N'MyDatabaseWithCLE',
    @password = N'XX MY MASTER KEY PASSWORD XX', @action = N'add';
GO

Retrieve Cosmos DB connection string or primary key in JavaScript running in an Azure pipeline

I have created an Azure pipeline using the classic editor, and it executes a test.js file. I need to retrieve the Azure Cosmos DB key so it can be used in the js file.
I tried installing the Cosmos DB Key Retriever extension, but it doesn't show an ADD option in the pipeline.
How can this be resolved? How can the Cosmos key be fetched within the js file?
We strongly suggest using a config.js file to set your app's configurations, including the PRIMARY KEY of Azure Cosmos DB. Check related official documents here: #1, #2, #3.
It seems that you want to avoid writing the key directly in code; in that case you can consider the following:
1. Copy the primary key from this page in the Azure portal, and then create a variable group in Azure DevOps Pipelines to store that value. (Change the variable type to secret!)
Alternatively, you can keep that value in Azure Key Vault and link secrets from the key vault into the variable group, if you don't want to store the value in the variable group directly.
2. Link the variable group to your current classic pipeline.
3. Then use the Replace Tokens task to insert the value of your primary key into config.js (or any other .js file). Run your other tasks after this one so that the key is available in your js file.
Assuming I have the variable TheKeyOfCosmos to store the primary key, specify this format in the config.js file:
key: "#{TheKeyOfCosmos}#",
After running that task, the content in config.js becomes the real key, e.g. key: "xxxxxxxxxxxxxxxx",.
After above steps you can test/run your app with primary key.

Deleting entities from Azure Table Storage with an Azure Logic App

I need help with the following problem.
I have set up an Azure Data Factory (DF) process which copies data from a Storage Table into an Azure SQL Database. Now I need to delete the data from the Storage Table after a successful copy to SQL. I'm trying to do it via a Web activity in DF, where I call an Azure Logic App with a Delete Entity step.
Everything works well when I send debug values for the Partition Key and Row Key - the entity is correctly deleted. But I can't find a way to send all partition/row keys from the source Table Storage to the Logic App for deletion. :-/
I tried some dynamic content settings in the DF pipeline, but without success.
BTW, I was inspired by this article, but it doesn't completely describe the solution to my problem: https://kromerbigdata.com/2018/03/15/azure-data-factory-delete-from-azure-blob-storage-and-table-storage/
If you know the source table name and you want to delete all entities, you can get all the entities first, then use a For each action to delete them all. Below is my test flow.
The output of the Get entities action is the table entities: @body('Get_entities')?['value'], and the partition key and row key are @{encodeURIComponent(items('For_each')?['PartitionKey'])} and @{encodeURIComponent(items('For_each')?['RowKey'])}. You can also pick them from the Dynamic content list, as shown in the picture below.

Trigger from local SQL database to Azure SQL DB

I have a local and an Azure database.
When a row is inserted in the local db, I would like it to insert it in the Azure one. I have created a trigger on the local db for that task:
USE [db_local]
GO
-- Forward newly inserted rows to the Azure database over the linked server
CREATE TRIGGER [dbo].[trigInverse] ON [dbo].[db_local]
AFTER INSERT
AS
BEGIN
    INSERT INTO [serverazure.DATABASE.WINDOWS.NET].[db_azure].[Access].[Companies] ([CompanyName])
    SELECT NAME
    FROM inserted;
END
However, when I try to insert a row, the error shown in picture1 appears.
From that error message I cannot see what the parameter is, nor how to set a correct value for it.
I did some tests to find the cause: I set up the same trigger between two local databases and tried adding rows to the source database, and it worked.
In the linked server node, these are the settings
You can use Azure SQL Data Sync: download and install the SQL Azure Data Sync Agent on your local server.
Then set up your Azure database as described here:
Getting Started with Azure SQL Data Sync
It will sync your databases every 5 minutes.
