KMS access issue with Copying RDS Snapshot - amazon-rds

I am able to migrate an RDS snapshot from account A to account B.
But when doing the reverse, i.e. account B to account A, I am unable to restore the RDS instance.
It gives the error:
The source snapshot KMS key [kms arn] does not exist, is not enabled or you do not have permissions to access it.
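The usual cause is that the snapshot in account B is encrypted with a KMS key that account A is not allowed to use. A minimal sketch of the copy step with boto3, assuming the snapshot has already been shared with account A and that account B's key policy grants account A at least kms:DescribeKey, kms:Decrypt, and kms:CreateGrant (all identifiers below are placeholders):

```python
import boto3

# Run in the destination account (account A), in the same region as the shared snapshot.
rds = boto3.client("rds", region_name="us-east-1")

rds.copy_db_snapshot(
    # ARN of the encrypted snapshot shared from account B (placeholder).
    SourceDBSnapshotIdentifier="arn:aws:rds:us-east-1:222222222222:snapshot:shared-snapshot",
    TargetDBSnapshotIdentifier="copied-from-account-b",
    # Re-encrypt the copy with a KMS key owned by account A; the restore then uses this key.
    KmsKeyId="arn:aws:kms:us-east-1:111111111111:key/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
)
```

If account B's key policy (or a grant) does not list account A as a principal, the copy or restore fails with exactly the error quoted above. Note also that snapshots encrypted with the default aws/rds key cannot be shared across accounts at all, so a customer-managed key is required on the source side.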

Related

AzureML to use geo-replication / secondary Blob Storage container for Datastore

For the sake of safety I wish to use a geo-replicated / secondary Blob Storage container as a data source for an AzureML Datastore. So I do the following:
New Datastore
Enter name + Azure Blob Storage + Enter manually
For the URL I paste the "Secondary Blob Service Endpoint" value from "Storage account endpoints" and add the container name at the end, e.g. https://somedata-secondary.blob.core.windows.net/container-name
Select the subscription ID
Select the resource group in which somedata is hosted
Add the account key taken from the "Access keys" section; I also tried with a SAS token
After finalizing, the new datastore appears in the list, but it is impossible to Browse (preview) it; it throws the error "Invalid host".
What is the correct way of doing this?
Is it possible at all to access this geo-replication / secondary Blob Storage as datastore?
Please check the below points:
First, check whether the Shared Access Signature (SAS) token is outdated or expired.
Please note that both the primary and geo-secondary are required to have the same service tier, and it is strongly recommended that the geo-secondary is configured with the same backup storage redundancy and compute size as the primary.
Note: You can only access your storage account by its primary name. In the event of failover, that name will be mapped to the alternate datacenter.
There are two disadvantages of GRS redundancy:
Replication between regions is asynchronous, so data is propagated with a small delay.
The second region cannot be accessed or read until the storage account fails over.
Active geo-replication - Azure SQL Database | Microsoft Docs
The replicated endpoint will be https://account-secondary.blob.core.windows.net. Note that this DNS entry won't even be registered unless read-access geo-redundant storage (RA-GRS) is enabled.
The access keys for your storage account are the same for both the primary and secondary endpoints; you can use the same primary (or secondary) access key for the secondary too.
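Given the "Invalid host" error on the secondary URL, one workaround is to register the datastore against the primary account name, since (as noted above) the access keys are the same for both endpoints. A minimal sketch with the azureml-core SDK, assuming an existing workspace config and the placeholder account/container names from the question:

```python
from azureml.core import Workspace, Datastore

# Load the workspace from a local config.json (placeholder setup).
ws = Workspace.from_config()

# Register against the primary account name; the account key is valid for both
# the primary and the secondary endpoint.
datastore = Datastore.register_azure_blob_container(
    workspace=ws,
    datastore_name="somedata_blob",
    container_name="container-name",
    account_name="somedata",            # primary name, not "somedata-secondary"
    account_key="<storage-account-key>",
)
```

This only avoids the "Invalid host" error; whether reads can actually be served from the secondary endpoint still depends on RA-GRS being enabled, as noted above.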

Not able to delete RDS instance when RDS is associated with continuous backup +Terraform

I have created an RDS instance, point-in-time recovery, a backup plan, and continuous backup using Terraform.
Now when I try to delete the RDS instance I get the errors below:
1. Error deleting Database Instance: invalid parameter combination; the RDS instance is associated with an AWS Backup resource, so "no delete automated backups" must be specified.
Then I deleted the continuous backup in the backup vault, which I believe should not be necessary; the RDS instance should be deletable without deleting the continuous backup. Anyway, once I deleted it, I got the error below:
2. Error deleting Database Instance: DB snapshot already exists with the same name.
After that I deleted the snapshot and was finally able to delete the RDS instance using Terraform.
When you delete an RDS instance, it asks you two things:
1: Retain automated backups?
2: Create final snapshot?
In your case, you faced two issues, one related to RDS itself and the other related to AWS Backup.
You had option 2 (Create final snapshot?) enabled and you already had a snapshot named $yourDBname-final-snapshot, which is why you got the second error.
As for error #1, it clearly says to specify "no delete automated backups". Also, backup vaults cannot be deleted while they still contain recovery points.
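The two prompts map directly onto parameters of the delete call. A hedged boto3 sketch of the combination that avoids both errors (the instance and snapshot identifiers are placeholders; in Terraform the corresponding aws_db_instance arguments are skip_final_snapshot, final_snapshot_identifier, and delete_automated_backups):

```python
import boto3

rds = boto3.client("rds")

rds.delete_db_instance(
    DBInstanceIdentifier="my-rds-instance",  # placeholder
    # Either skip the final snapshot, or give it a name that does not already exist.
    SkipFinalSnapshot=False,
    FinalDBSnapshotIdentifier="my-rds-instance-final-snapshot-20240101",
    # While the instance is associated with an AWS Backup plan, automated backups
    # cannot be deleted as part of the instance deletion.
    DeleteAutomatedBackups=False,
)
```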

Empty error while executing SSIS package in Azure Data Factory

I have created a simple SSIS project, and in this project I have a package that deletes a particular file in the Downloads folder.
I deployed this project to Azure. When I try to execute this package using Azure Data Factory, the pipeline fails with an empty error (I am attaching the screenshot here).
What I have done to try to fix this error:
I added a self-hosted IR to the Azure-SSIS IR as a proxy to access the on-premises data.
Set ConnectByProxy to True.
Converted the project to the Project Deployment Model.
Please help me fix this error; if you need more details, just leave a comment.
Windows Authentication:
To access data stores such as SQL Server / file shares on-premises or Azure Files, check the Windows authentication check box.
If this check box is selected, fill in the Domain, Username, and Password fields with the values for your package execution credentials. For example, to access Azure Files, the domain is Azure, the username is <storage account name>, and the password is <storage account key>.
Using the secrets stored in your Azure Key Vault
As an alternative, you can use secrets stored in your Azure Key Vault as these values. To do so, select the AZURE KEY VAULT check box next to them. Create a new Key Vault linked service or select or edit an existing one, then choose the secret name and version for your value. If you haven't already done so, grant the Data Factory managed identity access to your key vault. You can also enter your secret directly in the format <key vault linked service name>/<secret name>/<secret version>.
Note: If you are using Windows Authentication, there are four methods to access data stores with Windows authentication from SSIS packages running on your Azure-SSIS IR: Access data stores and file shares with Windows authentication from SSIS packages in Azure | Docs
Make sure your scenario falls under one of those methods; otherwise the package can fail at run time.
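If you go the Key Vault route, it is worth confirming that the identity running the package (or your own account, for a quick local test) can actually read the secret the activity references. A minimal sketch with the azure-identity and azure-keyvault-secrets packages; the vault URL and secret name are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Placeholder vault URL and secret name; in ADF the same secret would be referenced
# as <key vault linked service name>/<secret name>/<secret version>.
vault_url = "https://my-keyvault.vault.azure.net"
client = SecretClient(vault_url=vault_url, credential=DefaultAzureCredential())

secret = client.get_secret("storage-account-key")
print(secret.name, secret.properties.version)
```

If this fails with a 403, the Data Factory managed identity (or your account) is missing a Key Vault access policy or RBAC role, which is one possible cause of an otherwise opaque failure.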

AWS S3 Cross-account file transfer via Spark: Getting access denied on the transferred objects in the destination bucket

I have a use-case where I want to leverage Spark to transfer files between S3 Buckets in 2 different AWS Accounts.
I have Spark running in a different AWS Account (say Account A). I do not have access to this AWS Account.
I have AWS Account B, which holds the source S3 bucket (S3_SOURCE_BUCKET), and AWS Account C, which holds the destination S3 bucket (S3_DESTINATION_BUCKET).
I have created an IAM role in Account C (say: CrossAccountRoleC) to read and write from the destination S3 bucket.
I have set up the primary IAM role in Account B (say: CrossAccountRoleB):
Added Account A's Spark IAM role as a trusted entity
Added read/write permissions to the S3 buckets in both Account B and Account C
Added an inline policy to assume CrossAccountRoleC
I added CrossAccountRoleB as a trusted entity in CrossAccountRoleC.
I also added CrossAccountRoleB to the bucket policy on the S3_DESTINATION_BUCKET.
I am using Hadoop's FileUtil.copy to transfer files between the source and destination S3 buckets. While the transfer succeeds, I get 403 Access Denied on the copied objects.
When I specify hadoopConfiguration.set("fs.s3.canned.acl", "BucketOwnerFullControl"), I get an error that says "The requester is not authorized to perform action [ s3:GetObject, s3:PutObject, or kms:Decrypt ] on resource [ s3 Source or Sink ]". From the logs, it seems that the operation fails while writing to the destination bucket.
What am I missing?
You are better off using s3a per-bucket settings and just using a different set of credentials for the different buckets. Not as "pure" as IAM role games, but since nobody understands IAM roles or knows how to debug them, it's more likely to work.
(Do not take the fact that the IAM roles aren't working as a personal skill failing. Everyone fears support issues related to them.)
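A minimal sketch of what s3a per-bucket settings look like from PySpark, assuming the bucket names from the question and placeholder credentials; fs.s3a.acl.default is the s3a counterpart of the canned-ACL setting quoted in the question:

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("cross-account-copy")
    # Credentials that can read the source bucket (placeholders).
    .config("spark.hadoop.fs.s3a.bucket.S3_SOURCE_BUCKET.access.key", "<account-b-access-key>")
    .config("spark.hadoop.fs.s3a.bucket.S3_SOURCE_BUCKET.secret.key", "<account-b-secret-key>")
    # Credentials that can write to the destination bucket (placeholders).
    .config("spark.hadoop.fs.s3a.bucket.S3_DESTINATION_BUCKET.access.key", "<account-c-access-key>")
    .config("spark.hadoop.fs.s3a.bucket.S3_DESTINATION_BUCKET.secret.key", "<account-c-secret-key>")
    # Make the bucket owner the owner of objects written across accounts.
    .config("spark.hadoop.fs.s3a.acl.default", "BucketOwnerFullControl")
    .getOrCreate()
)

# Both paths must use the s3a:// scheme for the per-bucket settings to apply.
df = spark.read.parquet("s3a://S3_SOURCE_BUCKET/input/")
df.write.parquet("s3a://S3_DESTINATION_BUCKET/output/")
```

The same per-bucket keys are picked up by FileUtil.copy as long as the FileSystem objects are created from s3a:// URIs using this configuration.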

How to force refresh secret used to mount ADLS Gen2? Azure Databricks mounts using Azure KeyVault-backed scope -- SP secret update

Issue:
Mounted ADLS gen2 container using service principal secret as secret from Azure Key Vault-backed secret scope. All good, can access the data.
Deleted secret from service principal in AAD, added new, updated Azure Key Vault secret (added the new version, disabled the old secret). All was still good, could access the data.
Restarted cluster. Unable to access mount point, error: “AADToken: HTTP connection failed for getting token from AzureAD. Http response: 401 Unauthorized”
Unmount/mount using the same config helped.
Is there a way to refresh the secret used for mount point that I could add to init scripts to avoid this issue? I would rather avoid unmounting/mounting all mount points in init scripts and was hoping that there is something like dbutils.fs.refreshMounts() that would help (refreshMounts didn't help with this particular issue).
I mounted ADLS Gen2 using service principal, oauth2.0, and azure key vault-backed secret scope, following this documentation: https://learn.microsoft.com/en-us/azure/databricks/data/data-sources/azure/azure-datalake-gen2#mount-azure-data-lake-gen2
Also - out of curiosity: does anybody know how long a token to mount to ADLS Gen2 lives? As long as the cluster did not restart, I was able to access my mnt even though the secret was deleted and updated (i.e., secret was updated in AAD and Key Vault; no failures until restarting the cluster - which was more than 12 hours after the update).
This is a known limitation. Whenever you create a mount point using credentials coming from an Azure Key Vault-backed secret scope, the credentials are stored with the mount point and are never refreshed.
Reading the secret is a one-time activity at mount creation time, so each time you rotate the credentials in Azure Key Vault you need to re-create the mount point to pick up the new ones.
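A minimal sketch of re-creating such a mount after a secret rotation, following the OAuth mount pattern from the documentation linked in the question; the secret scope, key, container, storage account, and tenant values are placeholders:

```python
# Placeholder names for the Key Vault-backed secret scope, service principal, and storage account.
configs = {
    "fs.azure.account.auth.type": "OAuth",
    "fs.azure.account.oauth.provider.type":
        "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
    "fs.azure.account.oauth2.client.id": "<application-id>",
    "fs.azure.account.oauth2.client.secret":
        dbutils.secrets.get(scope="kv-backed-scope", key="sp-client-secret"),
    "fs.azure.account.oauth2.client.endpoint":
        "https://login.microsoftonline.com/<tenant-id>/oauth2/token",
}

mount_point = "/mnt/mydata"

# Unmount if present, then mount again so the freshly rotated secret is read.
if any(m.mountPoint == mount_point for m in dbutils.fs.mounts()):
    dbutils.fs.unmount(mount_point)

dbutils.fs.mount(
    source="abfss://container-name@storageaccount.dfs.core.windows.net/",
    mount_point=mount_point,
    extra_configs=configs,
)
```

Note that dbutils is only available in notebooks and jobs, not in cluster init scripts, so this would have to run as a notebook or job step rather than the init-script hook asked about.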
I would suggest you provide feedback on the same:
Azure Databricks - Feedback
All of the feedback you share in these forums will be monitored and reviewed by the Microsoft engineering teams responsible for building Azure.
