The gateway did not receive a response from 'Microsoft.Sql' within the specified time period - azure

I am running Terraform via an Azure DevOps pipeline in order to create an Azure MSSQL server along with Blob Auditing Policies. However, when I run the pipeline, I get the following error after it has been running for a while. Can someone please help me identify the root cause of this issue?
Error: failure in issuing create/update request for SQL Database "Identity" Blob Auditing Policies(SQL Server ""/ Resource Group ""): sql.ExtendedDatabaseBlobAuditingPoliciesClient#CreateOrUpdate: Failure responding to request: StatusCode=504 -- Original Error: autorest/azure: Service returned an error. Status=504 Code="GatewayTimeout" Message="The gateway did not receive a response from 'Microsoft.Sql' within the specified time period."
on azure-sql-server.tf line 92, in resource "azurerm_mssql_database" "sqlserver":
92: resource "azurerm_mssql_database" "sqlserver" {

failure in issuing create/update request for SQL Database "Identity" Blob Auditing Policies(SQL Server ""/ Resource Group ""): sql.ExtendedDatabaseBlobAuditingPoliciesClient#CreateOrUpdate: Failure responding to request: StatusCode=504 -- Original Error:
autorest/azure: Service returned an error. Status=504
Code="GatewayTimeout" Message="The gateway did not receive a response from 'Microsoft.Sql' within the specified time period.
To resolve the above error, please try the following:
Try removing the azurerm_mssql_database_extended_auditing_policy resource and replacing it with the old extended_auditing_policy block within azurerm_mssql_database (see the sketch after this list).
Using a storage account for auditing requires enabling 'Allow trusted Microsoft services to access this storage account' on the storage account.
Make sure you have the Storage Blob Data Contributor role on the storage account created from Terraform.
Enable a system-assigned managed identity on the existing SQL Server.
As a workaround, try editing the state file to remove the "status": "tainted" line from the "azurerm_mssql_server" resource.
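For illustration, a minimal Terraform sketch of the inline auditing block, the trusted-services bypass, and the system-assigned identity (resource names, location, SKU, and the placeholder password are assumptions, not values from the pipeline in question; the inline auditing block is the old-style one from azurerm 2.x):

resource "azurerm_mssql_server" "example" {
  name                         = "example-sqlserver"
  resource_group_name          = "example-rg"
  location                     = "westeurope"
  version                      = "12.0"
  administrator_login          = "sqladmin"
  administrator_login_password = "ChangeMe123!" # placeholder only, use a variable or Key Vault

  identity {
    type = "SystemAssigned" # system-assigned managed identity on the SQL Server
  }
}

resource "azurerm_storage_account" "audit" {
  name                     = "exampleauditsa"
  resource_group_name      = "example-rg"
  location                 = "westeurope"
  account_tier             = "Standard"
  account_replication_type = "LRS"

  network_rules {
    default_action = "Deny"
    bypass         = ["AzureServices"] # "Allow trusted Microsoft services to access this storage account"
  }
}

resource "azurerm_mssql_database" "sqlserver" {
  name      = "Identity"
  server_id = azurerm_mssql_server.example.id
  sku_name  = "S0"

  # old-style inline block instead of the separate
  # azurerm_mssql_database_extended_auditing_policy resource
  extended_auditing_policy {
    storage_endpoint           = azurerm_storage_account.audit.primary_blob_endpoint
    storage_account_access_key = azurerm_storage_account.audit.primary_access_key
    retention_in_days          = 30
  }
}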
For more detail, please refer to the links below:
azure - Creating SQL Server vulnerability assessment resource using a private Storage Account fails - Stack Overflow.
mssql_server: breaking change in the azure api · Issue #8915 · hashicorp/terraform-provider-azurerm · GitHub.
Export database fails with "The gateway did not receive a response from 'Microsoft.Sql'" - Microsoft Q&A.

Related

Azure blob storage - SAS - Data Factory

I was able to test the blob connection successfully, but when I attempt to browse the storage path it shows this error (screenshot).
Full error:
Failed to load
Blob operation failed for: Blob Storage on container '' and path '/' get failed with 'The remote server returned an error: (403) Forbidden.'. Possible root causes: (1). Grant service principal or managed identity appropriate permissions to do copy. For source, at least the “Storage Blob Data Reader” role. For sink, at least the “Storage Blob Data Contributor” role. For more information, see https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-blob-storage?tabs=data-factory#service-principal-authentication. (2). It's possible because some IP address ranges of Azure Data Factory are not allowed by your Azure Storage firewall settings. Azure Data Factory IP ranges please refer https://docs.microsoft.com/en-us/azure/data-factory/azure-integration-runtime-ip-addresses. If you allow trusted Microsoft services to access this storage account option in firewall, you must use https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-blob-storage?tabs=data-factory#managed-identity. For more information on Azure Storage firewalls settings, see https://docs.microsoft.com/en-us/azure/storage/common/storage-network-security?tabs=azure-portal.. The remote server returned an error: (403) Forbidden.StorageExtendedMessage=Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
Context: I'm trying to copy data from a SQL DB to Snowflake, and I am using Azure Data Factory for that. Since the pipeline doesn't publish without it, I enabled staged copy and connected blob storage.
I already checked the networking settings and access is set to all networks. I'm not sure what I'm missing here, because I found a YouTube video where this works, but it doesn't show an issue related or similar to this one: https://www.youtube.com/watch?v=5rLbBpu1f6E.
I also tried leaving the storage path empty, but the trigger for the copy data pipeline doesn't succeed either.
Full error from trigger:
Operation on target Copy Contacts failed: Failure happened on 'Sink' side. ErrorCode=FileForbidden,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Error occurred when trying to upload a blob, detailed message: dbo.vw_Contacts.txt,Source=Microsoft.DataTransfer.ClientLibrary,''Type=Microsoft.WindowsAzure.Storage.StorageException,Message=The remote server returned an error: (403) Forbidden.,Source=Microsoft.WindowsAzure.Storage,StorageExtendedMessage=Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
I created blob storage and generated a SAS token for it. I then created a blob storage linked service using the SAS URI, and it was created successfully.
Image for reference:
When I tried to retrieve the path, I got the error below.
I changed the networking settings of the storage account to "Enabled from all networks".
Image for reference:
I tried to retrieve the path again in Data Factory, and this time it worked; I was able to retrieve the path.
Image for reference:
Another way to resolve this issue is by whitelisting the IP addresses in the storage account firewall, as sketched below.
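A minimal Terraform sketch of that whitelisting, assuming a recent azurerm provider and a storage account defined elsewhere in the configuration (the IP range shown is illustrative, not a real Data Factory range):

resource "azurerm_storage_account_network_rules" "example" {
  storage_account_id = azurerm_storage_account.example.id

  default_action = "Deny"
  ip_rules       = ["203.0.113.0/24"] # illustrative; use the ADF IP ranges for your region
  bypass         = ["AzureServices"]  # or rely on trusted Microsoft services plus managed identity instead
}

"Enabled from all networks" corresponds to default_action = "Allow" with no ip_rules.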
From the error message:
'The remote server returned an error: (403) Forbidden.'
It's likely the authentication method you're using doesn't have enough permissions on the blob storage to list the paths. I would recommend using the Managed Identity of the Data Factory to do this data transfer.
Take the name of the Data Factory
Assign the Blob Data Contributor role in the context of the container or the blob storage to the ADF Managed Identity (step 1).
On your blob linked service inside of Data Factory, choose the managed identity authentication method.
Also, if you stage your data transfer on blob storage, you have to make sure the identity can write to the blob storage, and that it has the necessary bulk permissions on SQL Server. A Terraform sketch of the role assignment follows.
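For steps 1 and 2, a minimal Terraform sketch (the Data Factory name, resource group, and the staging storage account reference are assumptions; the same role assignment can be done in the portal):

resource "azurerm_data_factory" "example" {
  name                = "example-adf"
  location            = "westeurope"
  resource_group_name = "example-rg"

  identity {
    type = "SystemAssigned" # the managed identity used for the data transfer
  }
}

# Step 2: grant the Data Factory managed identity write access on the staging blob storage
resource "azurerm_role_assignment" "adf_blob_contributor" {
  scope                = azurerm_storage_account.staging.id
  role_definition_name = "Storage Blob Data Contributor"
  principal_id         = azurerm_data_factory.example.identity[0].principal_id
}

On the blob linked service (step 3), the authentication method is then switched from SAS to managed identity.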

The connection to the sink database is failed. Detailed error message is: Login failed for user '<token-identified principal>' - Azure Synapse Link for SQL

I'm trying to create an Azure Synapse Link for Azure SQL Database, using the steps from here:
https://learn.microsoft.com/en-us/azure/synapse-analytics/synapse-link/connect-synapse-link-sql-database
After I create the link connection and try to start it, I receive the following error:
The connection to the sink database is failed. Detailed error message is: Login failed for user ''.
Screenshots: ConnectionToAzureDB, LinkConnection
Also, I have configured the Azure SQL database to use AAD auth. The connection to the Azure database seems to be working.
My user (used to create the Synapse workspace) is a Subscription Owner.
The user is also Owner of the storage account.
I added the SQL Managed Identity as Storage Blob Data Contributor.
Did anyone else get this error and manage to fix it?
There are certain limitations when connecting a SQL Database to Synapse Link, as per the documentation:
When setting up your workspace, users must select "Disable Managed Virtual Network" and "Allow connections from any IP addresses" (see the sketch after this list).
Azure Synapse Link for SQL cannot enable a link connection if the database owner does not have a mapped login; this is what causes the error. (The ALTER AUTHORIZATION command can be used to work around this problem by changing the database owner to a user that has a login.)
Azure Synapse Link for SQL is not supported on the Free, Basic, or Standard tiers with fewer than 100 DTUs.
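For the first limitation, a minimal Terraform sketch of a workspace created that way (names, the Data Lake filesystem reference, and the placeholder password are assumptions):

resource "azurerm_synapse_workspace" "example" {
  name                                 = "example-synapse"
  resource_group_name                  = "example-rg"
  location                             = "westeurope"
  storage_data_lake_gen2_filesystem_id = azurerm_storage_data_lake_gen2_filesystem.example.id
  sql_administrator_login              = "sqladmin"
  sql_administrator_login_password     = "ChangeMe123!" # placeholder only
  managed_virtual_network_enabled      = false          # "Disable Managed Virtual Network"
}

# "Allow connections from any IP addresses"
resource "azurerm_synapse_firewall_rule" "allow_all" {
  name                 = "AllowAll"
  synapse_workspace_id = azurerm_synapse_workspace.example.id
  start_ip_address     = "0.0.0.0"
  end_ip_address       = "255.255.255.255"
}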
With these limitations in mind, I tried to connect the SQL Database to Synapse Link and was able to connect without error:
I was trying to create a Synapse Link service with an on-premises SQL Server and got the following error:
Failed to enable Synapse Link on the source due to 'Failed to enable the source database: Some internal error happened due to 'Calling internal service failed: Failed to execute non query on change publisher with status code 400 and error Fail to non-query change publisher with error: 'sqlErrorCode - 22301; exceptionCode - TransferServiceUnknowError; error - A database operation failed with the following error: 'Could not update the metadata. The failure occurred when executing the command '(null)'. The error/state returned was 15517/1: 'Cannot execute as the database principal because the principal "dbo" does not exist, this type of principal cannot be impersonated, or you do not have permission.'. Use the action and error to determine the cause of the failure and resubmit the request.'; detailedError - A database operation failed with the following error: 'Could not update the metadata. The failure occurred when executing the command '(null)'. The error/state returned was 15517/1: 'Cannot execute as the database principal because the principal "dbo" does not exist, this type of principal cannot be impersonated, or you do not have permission.'. Use the action and error to determine the cause of the failure and resubmit the request.'
I resolved it by changing the corresponding database owner to 'sa', and it works:
USE [YourCorrespondingDatabase]
EXEC sp_changedbowner 'sa'

Azure terraform storage account permission

I want to learn more about Azure OpenVPN configurations and how they work. So, looking around, I found an open-source project on GitHub at the following link:
https://github.com/terraform-azurerm-examples/example-hub.git (Thank you for your code)
I set all the variables I wanted and removed the version constraint from the Azure provider.
But when I run terraform apply, I get an error on the Azure storage account.
The error is this one:
Error: reading queue properties for AzureRM Storage Account "examplehubw6sr1wyncn": queues.Client#GetServiceProperties: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error. Status=403 Code="AuthorizationPermissionMismatch" Message="This request is not authorized to perform this operation using this permission.\nRequestId:cce5a313-b003-005c-2bb2-9d8a2f000000\nTime:2021-08-30T15:19:07.9036073Z"
As far as I understand, the error is due to the secret permissions I set, which I updated to grant Get, List, and Set, but the error keeps showing up.
I am using Terraform version 0.14.5, and my azurerm provider version is 2.74.0.
I have never had this type of error; on my subscription I have the administrator role.
Did anyone get this error and know how to solve it? I would really appreciate your help.
The error is probably because your user does not have data plane permissions on your storage account, which is where Terraform wants to put the state file. Give your user the Storage Blob Data Contributor role: https://learn.microsoft.com/en-us/azure/storage/blobs/assign-azure-role-data-access?tabs=portal
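A minimal Terraform sketch of that role assignment for the identity running Terraform (the storage account reference is an assumption; the role is the one recommended above):

# Identity currently running Terraform (user or service principal)
data "azurerm_client_config" "current" {}

resource "azurerm_role_assignment" "tf_blob_contributor" {
  scope                = azurerm_storage_account.example.id
  role_definition_name = "Storage Blob Data Contributor"
  principal_id         = data.azurerm_client_config.current.object_id
}

Note that role assignments can take a few minutes to propagate before the 403 stops appearing.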

Azure DevOps Pipeline agent fails while running Terraform Plan with UnAuthorized error while connecting to a Storage Account

I have a storage account which has
a) Microsoft network routing selected.
b) "Publish route-specific endpoints" enabled only for Microsoft network routing.
I have an Azure DevOps pipeline agent running terraform plan. Before running a plan, I get the public IP of the VM (using curl) and run a bash script to add this public IP to the network ACL of the storage account.
However, the plan fails with a not-authorized error.
As soon as I also select "Publish Internet routing", the plan starts working.
Can anyone shed light on why this is happening?
PS: Attaching the error details from the pipeline:
Error: Error retrieving Container "bootdiag" (Account "xxxxxxxxx" / Resource Group "xx-dev-xx-xxx-001"): containers.Client#GetProperties: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error. Status=403 Code="AuthorizationFailure" Message="This request is not authorized to perform this operation.\nRequestId:f01c457e-d01e-0036-38b5-f25ba0000000\nTime:2021-01-25T00:57:41.2404471Z"
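For reference, a minimal Terraform sketch of the two storage account settings being toggled above (the account name, resource group, and agent IP are illustrative assumptions; the question itself adjusts an existing account with a bash script):

resource "azurerm_storage_account" "example" {
  name                     = "examplestorage"
  resource_group_name      = "example-rg"
  location                 = "westeurope"
  account_tier             = "Standard"
  account_replication_type = "LRS"

  routing {
    choice                      = "MicrosoftRouting" # a) Microsoft network routing
    publish_microsoft_endpoints = true               # b) publish route-specific (Microsoft) endpoints
    publish_internet_endpoints  = false              # flipping this to true is what made the plan work
  }

  network_rules {
    default_action = "Deny"
    ip_rules       = ["203.0.113.10"] # the agent's public IP added before terraform plan (illustrative)
  }
}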

ResourceMoveProviderValidationFailed Error

While moving a VM from one resource group to another, this error was encountered. There is no SQL VM associated with the VM, yet I'm still getting this error:
{
  "code": "ResourceMoveProviderValidationFailed",
  "message": "Resource move validation failed. Please see details. Diagnostic information: timestamp '20200908T142742Z', subscription id 'xxx-xxx-xxxx', tracking id 'xxxxxxx-414a-xxxxx-adb4-xxxxxx', request correlation id 'xxxxxxxxxxxx'.",
  "details": [
    {
      "code": "MissingMoveResources",
      "target": "Microsoft.SqlVirtualMachine/SqlVirtualMachines",
      "message": "Cannot move resource(s) because following resources /subscriptions/xxxxxxxxx/resourceGroups/myrgroup/providers/Microsoft.SqlVirtualMachine/sqlVirtualMachines/xxxxx0020 need to be included in move request to target resource group as well. Please include these and try again."
    }
  ]
}
The error code 409 MissingMoveResources is documented in the Azure SQL VM REST API documentation as:
409 MissingMoveResources - Cannot move resources(s) because some resources are missing in the request.
So, going by the error details posted above, it does mean that the Virtual Machine you're looking at is linked to a SQL Virtual Machine. The easiest way would be to verify it from the Portal itself:
As seen in the screenshot above:
Presence of the SQL Server Configuration tab under the Settings blade, and
Publisher being MicrosoftSQLServer
confirm the same.
Therefore, you'd have to know the associated SQL Virtual Machine and include that as well in your request to complete the move operation successfully. You can get to the SQL VM by accessing the SQL Server configuration tab.
