Failed to encrypt sub-resource payload and the error is: Failed to encrypted linked service credentials on self-hosted IR - Azure

I am developing an ETL solution using .NET and Azure Data Factory to move data from an on-premises SQL Server to Azure. The self-hosted IR is set up correctly and shows as running in the Azure portal, but when I run the code I get this exception:
Failed to encrypt sub-resource payload and error is: Failed to encrypted linked service credentials on self-hosted IR reason is: InternalServerError, error message is: Internal Server Error..
The connection string for the on-premises SQL Server is in plain text, not encrypted.
How can I fix this problem?

I found a solution myself, and it works; I hope it helps anyone who runs into the same problem.
You need to set the EncryptedCredential property and leave the Password property unset.
Then set up the linked service and connection string like this:
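Here is a minimal sketch with the ADF v2 .NET SDK (Microsoft.Azure.Management.DataFactory). The resource names are placeholders, client is assumed to be an authenticated DataFactoryManagementClient, and the encrypted credential value is the one produced on the IR machine (for example with the New-AzDataFactoryV2LinkedServiceEncryptedCredential PowerShell cmdlet):

using Microsoft.Azure.Management.DataFactory;
using Microsoft.Azure.Management.DataFactory.Models;

// Credential encrypted by the self-hosted IR; placeholder value here.
string encryptedCredential = "<value encrypted on the self-hosted IR>";

var linkedService = new LinkedServiceResource(
    new SqlServerLinkedService
    {
        // Plain connection string WITHOUT a password...
        ConnectionString = "Data Source=MyOnPremServer;Initial Catalog=MyDb;" +
                           "Integrated Security=False;User ID=MyUser;",
        // ...the credential goes in EncryptedCredential, not Password.
        EncryptedCredential = encryptedCredential,
        // Route the connection through the self-hosted IR.
        ConnectVia = new IntegrationRuntimeReference("MySelfHostedIR")
    });

client.LinkedServices.CreateOrUpdate("MyResourceGroup", "MyDataFactory",
    "OnPremSqlServerLS", linkedService);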

Related

Backup local SQL Server DB to Azure blob storage fails with (400) Bad Request

I keep getting the following error when trying to back up a local database to Azure blob storage.
Msg 3271, Level 16, State 1, Line 8
A nonrecoverable I/O error occurred on file
"https://mystorageaccount.blob.core.windows.net/mycontainer/AdventureWorks2016.bak:"
Backup to URL received an exception from the remote endpoint. Exception Message:
The remote server returned an error: (400) Bad Request..
Msg 3013, Level 16, State 1, Line 8
BACKUP DATABASE is terminating abnormally.
I followed the prescribed approach:
[1] Create the credential:
CREATE CREDENTIAL [testcred] WITH IDENTITY = 'mystorageaccount'
,SECRET = 'storage account key';
[2] Back up the database to the storage account URL:
BACKUP DATABASE AdventureWorks2014
TO URL = 'https://mystorageaccount.blob.core.windows.net/mycontainer/AdventureWorks2016.bak'
WITH CREDENTIAL = 'testcred'
,COMPRESSION
,STATS = 5;
GO
I have tried general-purpose storage v2 and v1, yet the error persists.
I have tried using a SAS token too, but it gave the same error.
I tried SQL Server 2014 Express and 2017 Express. Same error.
While trying to connect directly to the Azure storage account from the "Connect" option in SQL Server Management Studio, I finally saw a clue to the problem.
I kept changing the TLS version in the Storage Account settings until I found the option that matches the version of SQL Server I am running (2017 Express); for me it is "Version 1.0". Be careful with this, though, since it governs how your data is encrypted as it moves between Azure and your machine. See the Microsoft docs on this.
Come on Microsoft, simply sending "The remote server returned an error: (400) Bad Request" hardly helps anyone, especially averagely technical people.

Azure Blob Source Error: The remote server returned an error: (400) Bad Request

I have created a Data Flow Task in SSIS and configured the Blob storage container. My requirement is to process data from Azure Blob to SQL Server. I am getting a "The remote server returned an error: (400) Bad Request" exception while executing the SSIS package.
I have verified the connectivity and access in Azure.
execution error
No debug error
@Meena
You need to add two keys to the Windows registry.
To use TLS 1.2, add a REG_DWORD value named SchUseStrongCrypto with data 1 under the following two registry keys:
HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Microsoft\.NETFramework\v4.0.30319
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v4.0.30319
This works.
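If you prefer to script the change, here is a small C# sketch of the same two writes (my addition, not part of the original answer); it must run elevated, and the SSIS/.NET process needs a restart afterwards:

using Microsoft.Win32;

// The two paths from the answer above; SchUseStrongCrypto = 1 makes
// .NET Framework 4.x use strong crypto (TLS 1.2) by default.
string[] paths =
{
    @"SOFTWARE\Microsoft\.NETFramework\v4.0.30319",
    @"SOFTWARE\WOW6432Node\Microsoft\.NETFramework\v4.0.30319"
};
foreach (var path in paths)
{
    using (var key = Registry.LocalMachine.CreateSubKey(path))
    {
        key.SetValue("SchUseStrongCrypto", 1, RegistryValueKind.DWord);
    }
}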
Judging from your second screenshot, you get the error in the progress log but the data flow works well and the data has been loaded to SQL Server successfully, am I right?
I tried the same SSIS package (load data from Blob to SQL Server) but got no error. Please try deleting and re-creating the Azure Blob Source.
Test the connection when we create the source:
Azure Blob Source editor:
Progress:
Judging from your progress error alone, we cannot tell what caused the 400 Bad Request error, but we can try re-creating the source to check whether it happens again. Most of all, the data flow task works well and all the data loaded into SQL Server correctly.

"Failed to connect to: we.frontend.clouddatahub.net" error while registering Integration Runtime of Azure Data Factory

This is what I followed to set up the IR.
In the final step of registering the Azure Data Factory self-hosted integration runtime, we need to provide the authentication key; the installation then makes a call to the internet. Isn't this strange, since the VM could be in a private network?
If the VM is not connected to the internet and it gets this error, then what should I do? "Failed to connect to: we.frontend.clouddatahub.net"
This is the error I get:
Failed to execute the command ' -Key xxx'. Error message: Microsoft.DataTransfer.DIAgentClient.HostServiceException: Failed to get service token from ADF service with key xxxx and time cost is: 3.0786307 seconds, the error code is: UnexpectedFault, activityId is: xxx and detailed error message is An error occurred while sending the request.
The underlying connection was closed: An unexpected error occurred on a send.
Authentication failed because the remote party has closed the transport stream.
The issue seems to be that remote access is disabled. How can I enable it? Dmgcmd -era 8060 is not working.
I have also logged a related issue, since another VM works and this one fails.
Even if you have a private network where traffic can flow without restriction between your data sources and your integration runtime, the integration runtime application still needs to be able to communicate with the Azure Data Factory service. Try whitelisting the IPs for your region in the networking settings of your Azure VM or in your firewall, according to this:
https://learn.microsoft.com/sv-se/azure/data-factory/azure-integration-runtime-ip-addresses
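To narrow down whether the firewall is the blocker, a quick hypothetical probe like the following can help (the host name is the one from the error above; everything else is illustrative):

using System;
using System.Net.Sockets;
using System.Threading.Tasks;

// If the TCP connect fails, the firewall/whitelist is the problem;
// if it succeeds but registration still fails with a transport error,
// look at TLS/proxy settings on the VM instead.
class Probe
{
    static async Task Main()
    {
        using (var tcp = new TcpClient())
        {
            try
            {
                await tcp.ConnectAsync("we.frontend.clouddatahub.net", 443);
                Console.WriteLine("TCP 443 reachable.");
            }
            catch (SocketException ex)
            {
                Console.WriteLine("Blocked: " + ex.Message);
            }
        }
    }
}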

Configure self-hosted integration runtime for ADF v1

I have installed a self-hosted IR on my PC and am trying to use it in my ADF (SQL Server to Azure SQL DB) pipeline. When I run the pipeline, it fails with the error below.
InvalidParameter,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=The value of the property 'msiAuthenticator' is invalid: 'The required property is not specified. Parameter name:
I think you can try the Copy Data tool UI and set it up again.
Did you use an encrypted credential for your linked service? What authentication type did you use? I need more information to understand your scenario.

'Failed to encrypt sub-resource payload' error when attempting CI/CD

We are trying to set up CI/CD with Azure DevOps using the documentation provided here: https://learn.microsoft.com/en-us/azure/data-factory/continuous-integration-deployment. We are using a shared IR that was set up in the target environment prior to deployment.
The release succeeds if the deployment mode setting is set to validation only, but fails when incremental or complete is selected. We get the following error when using override template parameters:
2018-09-21T17:07:43.2936188Z ##[error]BadRequest: {
"error": {
"code": "BadRequest",
"message": "Failed to encrypt sub-resource payload
Please make sure your shared IR is online when doing the deployment; otherwise you may hit this problem, because the self-hosted IR is used to encrypt your payload.
If you have confirmed the above and still get this error, please share the request activity ID with us and we can investigate further.
Make sure that you've entered the right connection string in your parameters JSON for any linked services you are using. This fixed the error for me, although I don't have a full CI/CD environment with an IR established.
I solved it using Azure Key Vault.
I added the connection string as a secret.
In the connection string I also included the authentication data (username and password).
The limitation of this approach is that you lose the ability to pass parameters,
for example dynamic values such as the database name or the user.
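In .NET SDK terms, the approach looks roughly like the sketch below ("AzureKeyVaultLS" and "SqlConnectionString" are placeholder names for an existing Key Vault linked service and the secret holding the full connection string):

using Microsoft.Azure.Management.DataFactory.Models;

// The linked service resolves its connection string (credentials included)
// from the Key Vault secret at runtime, so nothing has to be encrypted
// against the self-hosted IR during deployment.
var sqlLinkedService = new SqlServerLinkedService
{
    ConnectionString = new AzureKeyVaultSecretReference(
        new LinkedServiceReference("AzureKeyVaultLS"),
        "SqlConnectionString")
};

As noted above, the trade-off is that the secret is a fixed string, so per-environment values need separate secrets rather than linked-service parameters.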
I would ask you to look into the connection string for the linked service to which you attached the IR. For my Azure SQL based linked service I had to use something like the following; a simple server name would not suffice, and you will get "Failed to encrypt sub-resource payload":
"typeProperties": {
"connectionString": "Integrated Security=False;Encrypt=True;Connection Timeout=30;Data Source=axxx-xxx-xx-xxxx.database.windows.net;Initial Catalog=\"#{split(linkedService().LS_ASQL_SERVERDB,';')[1]}\""
}
I overrode the parameters because the connection string was secure. Use dummy values for the username, password, and connection string if you don't have the original ones, and then deploy.
Requiring the IR to already be running doesn't make sense when doing a full deployment of an ADF instance. The IR key is generated within the instance of ADF you deploy, meaning you've created circular logic: you cannot deploy the IR until the deployment of ADF is complete, but you can't complete the deployment of ADF until the IR is deployed.
So far our answer has been to let the ARM template fail at this point, which is after the IR registration in the template, so the IR key is then generated. We use that key to deploy the IR, then re-run the template and it succeeds. It's stupid and hacky; there has to be a saner way to do this than intentional failure and retry.
