I am working through this tutorial from the Microsoft Azure team to implement access provisioning by a data owner to Azure Storage datasets. As shown in the image below, the data owner policy is supposed to allow Grady Archie Read permission on an Azure Data Lake Storage Gen2 account called acct4dlsgen2. But for some reason, when Grady Archie logs into the Azure portal on the same network, he is unable to access the acct4dlsgen2 storage account.
Question: What may I be doing wrong, and how can I fix the issue?
Remarks:
I have satisfied all the prerequisites in the article mentioned above.
I have also given Grady Archie Read permission on the Purview collection where this storage account is registered.
When I give Grady Archie Read permission directly on that storage account via the Azure portal, he can access the storage after he logs in. But this defeats the purpose of implementing data access through Purview as described here by the Microsoft team.
One of the prerequisites you completed is configuring the subscription for Purview policies using a PowerShell script.
However, this configuration is only applied to newly created storage accounts, and your storage account may already have existed when you configured the subscription for Purview policies.
If you create a new storage account inside that subscription, I believe your Purview policies will work on that account.
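If you want to verify that quickly, here is a minimal sketch with Azure PowerShell, assuming the subscription has already been configured per the tutorial's prerequisite script; all names below are placeholders:

```powershell
# Assumption: the subscription was already configured for Purview policies
# per the tutorial's prerequisite script. All names below are placeholders.
Connect-AzAccount
Set-AzContext -Subscription "<subscription-id>"

# Create a new ADLS Gen2 (hierarchical namespace) account to test against
New-AzStorageAccount `
    -ResourceGroupName "rg-purview-test" `
    -Name "acct4dlsgen2test" `
    -Location "eastus" `
    -SkuName Standard_LRS `
    -Kind StorageV2 `
    -EnableHierarchicalNamespace $true

# Register the new account in the same Purview collection, publish the
# data owner policy against it, and have Grady Archie retry access.
```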
I added a policy in my test subscription and it works as expected.
The same policy in my PROD subscription does not do anything; it should move blobs from the hot access tier to cool.
On my test sub I have Owner and Storage Blob Data Contributor rights.
On my PROD sub I have Storage Account Contributor and Storage Blob Data Owner. Should I also add Storage Blob Data Contributor rights? Wouldn't that be included in Storage Account Contributor?
To work with Azure Storage lifecycle management policies, you need a role that includes the Microsoft.Storage/storageAccounts/managementPolicies/write permission.
The built-in roles that allow you to work with lifecycle management policies are:
Owner - grants full access to manage all resources, including assigning roles.
Contributor - grants full access to manage all resources, but does not allow you to assign roles.
Storage Account Contributor - grants full access to manage storage accounts (only).
Since your test subscription has Owner rights, it allowed you to manage lifecycle management policies.
To confirm this, open the role definition and check for the storage management-policy permissions.
There is no need to assign the Storage Blob Data Contributor role on the PROD subscription, as the Storage Account Contributor role already includes the Microsoft.Storage/storageAccounts/* actions, which cover managementPolicies/write.
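You can confirm this with a quick check in Azure PowerShell, assuming the Az module is installed and you are signed in:

```powershell
# Lists every action granted by the built-in Storage Account Contributor role;
# "Microsoft.Storage/storageAccounts/*" covers managementPolicies/write.
(Get-AzRoleDefinition -Name "Storage Account Contributor").Actions
```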
I tried to reproduce the same in my environment by assigning the Storage Account Contributor role and got the results below.
I created a lifecycle management policy to move blobs from the hot access tier to cool, like below:
Go to Azure portal -> Storage accounts -> Your account -> Lifecycle management -> Add a rule
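If you would rather script the rule than click through the portal, a rough equivalent with the Az.Storage management-policy cmdlets looks like the sketch below; the resource names and the 30-day threshold are placeholders:

```powershell
# Placeholder names; adjust the resource group, account name and threshold.
$action = Add-AzStorageAccountManagementPolicyAction `
    -BaseBlobAction TierToCool `
    -DaysAfterModificationGreaterThan 30

$filter = New-AzStorageAccountManagementPolicyFilter -BlobType blockBlob

$rule = New-AzStorageAccountManagementPolicyRule `
    -Name "move-hot-to-cool" `
    -Action $action `
    -Filter $filter

# Apply the rule to the storage account's lifecycle management policy
Set-AzStorageAccountManagementPolicy `
    -ResourceGroupName "my-rg" `
    -StorageAccountName "mystorageaccount" `
    -Rule $rule
```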
When I checked the blobs, they were still in the Hot access tier, like below:
As I created the policy recently, it may take up to 48 hours to take effect, as mentioned below:
If that's your case, please wait for the intended time period and check again after a couple of days.
When I checked after a few days, the blobs had moved from the hot access tier to cool, like below:
UPDATE:
Please check the note below from this Microsoft doc, which confirms that management policies are blocked if firewall rules are enabled on your storage account.
You need to select the exception that allows access from trusted Azure services.
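If you prefer to script it, that exception corresponds to the AzureServices bypass on the account's network rule set; a minimal sketch with placeholder names:

```powershell
# Allow trusted Azure services through the storage firewall, as the note
# above describes. Placeholder resource group and account names.
Update-AzStorageAccountNetworkRuleSet `
    -ResourceGroupName "my-rg" `
    -Name "mystorageaccount" `
    -Bypass AzureServices
```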
References:
Grant access to trusted Azure services | Microsoft Docs
Managing the lifecycle policies - Azure Storage | Microsoft Docs
Recently, I created a second linked Azure Data Lake Storage Gen2 account within the Synapse workspace using the workspace's managed identity, and added the managed identity (together with the people who need to analyze the data) as a Storage Blob Data Reader.
I do not have access to the actual resource, but I am able to see the new linked Azure Data Lake Storage Gen2 resource in the workspace after linking it. However, two users who also have Synapse Administrator rights within the workspace (and have read rights on the actual resource) cannot even see the newly linked Data Lake in the workspace. They both have Reader rights on the workspace resource itself. I have Contributor rights on the workspace and can see the linked Data Lake even after removing myself from the firewall whitelist.
Any ideas what could cause this behavior?
Grant Synapse administrators or users the Azure Contributor role on the workspace.
If the workspace creator isn't the owner of the ADLS Gen2 storage account, then Azure Synapse doesn't assign the Storage Blob Data Contributor role to the managed identity.
Verify that the Storage Blob Data Contributor role is assigned to the managed identity.
Check the role assignments on the workspace's storage account using IAM (in your case, on the second linked ADLS Gen2 account).
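A hedged sketch of those checks with Azure PowerShell; the object ID, user name, and scopes below are placeholders:

```powershell
# Placeholder names/IDs. The scope is the second linked ADLS Gen2 account.
$storageId = (Get-AzStorageAccount -ResourceGroupName "my-rg" `
    -Name "seconddatalake").Id

# 1) Verify (or grant) Storage Blob Data Contributor for the workspace's
#    managed identity on the linked storage account.
Get-AzRoleAssignment -Scope $storageId |
    Where-Object RoleDefinitionName -eq "Storage Blob Data Contributor"

New-AzRoleAssignment -ObjectId "<workspace-managed-identity-object-id>" `
    -RoleDefinitionName "Storage Blob Data Contributor" `
    -Scope $storageId

# 2) Grant the two users the Contributor role on the workspace resource.
New-AzRoleAssignment -SignInName "user@contoso.com" `
    -RoleDefinitionName "Contributor" `
    -Scope "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Synapse/workspaces/my-workspace"
```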
Refer: Grant Synapse administrators the Azure Contributor role on the workspace
I have an Azure Storage Account and want to grant read access to a colleague. All identities are in the same Azure Active Directory so it was easy to add him to the "Reader" role in the Access Control blade of the Azure portal.
When he opens Microsoft Azure Storage Explorer, the subscription and storage account are visible, but the node for Blob Containers can't be expanded. The exception says:
Could not obtain keys for Storage Account. Please check that you have the correct permissions.
This is expected behavior. Essentially, to list storage keys, the user must be in a role that allows the listKeys operation. The built-in Reader role does not have permission to perform the listKeys operation.
The rationale (a bit convoluted, though) behind this decision is that a user in the Reader role should only be able to read and not perform any inserts, updates, or deletes. Since anyone who has the account key for a storage account can perform those operations, a user in the Reader role is not granted permission to list the account keys.
What you could do is create a Shared Access Signature (SAS) with read/list permissions and share that SAS URL with your colleague. Then they will be able to access the data in that storage account but won't be able to perform any create/update/delete operations.
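For example, with the Az.Storage cmdlets you could generate a read/list container SAS signed with the account key and share the resulting URL; the names and the seven-day expiry below are placeholders:

```powershell
# Placeholder account/container names; the SAS is signed with the account key.
$ctx = New-AzStorageContext -StorageAccountName "mystorageaccount" `
    -StorageAccountKey "<account-key>"

# Read + list permissions only, full URI returned for sharing
New-AzStorageContainerSASToken -Name "mycontainer" `
    -Permission rl `
    -ExpiryTime (Get-Date).AddDays(7) `
    -Context $ctx `
    -FullUri
```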
It looks like this is now possible (in preview at the time of writing): your AD users can be given the "Storage Blob Data Reader" role.
https://azure.microsoft.com/en-us/blog/announcing-the-preview-of-aad-authentication-for-storage/
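If you go that route, the role can be assigned at the storage-account (or container) scope, for example (placeholder user and scope):

```powershell
# Give the colleague data-plane read access without exposing account keys.
New-AzRoleAssignment -SignInName "colleague@contoso.com" `
    -RoleDefinitionName "Storage Blob Data Reader" `
    -Scope "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Storage/storageAccounts/mystorageaccount"
```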
I have a Service Principal that has been granted the Contributor role on a storage account.
When I attempt to create a container within that account, I receive the following error message:
One-time registration of Microsoft.Storage failed - The client 'd38eaaca-1429-44ef-8ce2-3c63a62849c9' with object id 'd38eaaca-1429-44ef-8ce2-3c63a62849c9' does not have authorization to perform action 'Microsoft.Storage/register/action' over scope '/subscriptions/********'
My goal is to allow a Service Principal READ-ONLY access to the blobs contained within a given storage account and to create containers within that storage account. What are the steps needed to configure my principal to do that?
Regarding your error, please see this thread: In Azure as a Resource Group contributor why can't I create Storage Accounts and what should be done to prevent this situation?.
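In short, the Microsoft.Storage resource provider needs to be registered on the subscription once by an identity that does have the Microsoft.Storage/register/action permission (for example a subscription Owner or Contributor); a minimal sketch with the current Az module:

```powershell
# Run once per subscription with an identity that has rights at the
# subscription scope (the service principal itself does not need them).
Set-AzContext -Subscription "<subscription-id>"
Register-AzResourceProvider -ProviderNamespace Microsoft.Storage

# Check the registration state afterwards
Get-AzResourceProvider -ProviderNamespace Microsoft.Storage |
    Select-Object ProviderNamespace, RegistrationState
```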
My goal is to allow a Service Principal READ-ONLY access to the blobs contained within a given storage account and to create containers within that storage account. What are the steps needed to configure my principal to do that?
As of today, it is not possible to do so, simply because RBAC only applies to the control plane of the API. Using RBAC, you can control who can create, update, or delete a storage account. Access to the data inside a storage account is still controlled by the account key: anyone who has access to the account key has complete control over that storage account.
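To make the distinction concrete: at the time of that answer, data-plane operations such as creating a container or listing blobs were done with the account key (or a SAS), not with an RBAC role. A minimal sketch with placeholder names:

```powershell
# Anyone holding the account key has full data-plane access, which is why
# the answer above recommends guarding the key itself.
$ctx = New-AzStorageContext -StorageAccountName "mystorageaccount" `
    -StorageAccountKey "<account-key>"

New-AzStorageContainer -Name "newcontainer" -Context $ctx
Get-AzStorageBlob -Container "newcontainer" -Context $ctx
```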
How do I build a rich storage ACL policy system with Azure storage?
I want to have a blob container that has the following users:
Public - read-only against some set of blobs.
Uploader - read-write against some subset of blob names; these keys are shared out to semi-trusted build machines.
Shared admin - full capabilities against this blob subset.
Ideally these users are accounts driven through Azure AD, so I can use the full directory service power with them... :)
My understanding of shared access keys is that they are (1) time-limited and (2) have to be created with hand-tooled code. My desire is that I can do something similar to AWS IAM policies on S3... :-)
Things like AWS IAM policies for S3 do not exist for Azure Blob Storage today. Azure recently introduced Role-Based Access Control (RBAC), and it is available for Azure Storage, but it is limited to management activities only, such as creating storage accounts. It is not yet available for data-management activities such as uploading blobs.
You may want to look at Azure Rights Management Service (Azure RMS) and see if it is the right solution for your needs. If you search for Azure RMS Blob, you will find that one of the search results links to a PDF file that talks about securing blob storage with this service (the link directly downloads the PDF file, and hence I could not include it here).
If you're looking for a third-party service to do this, take a look at the recently released "Team Edition" of Cloud Portam (a service I am currently building). In short, Cloud Portam is a browser-based Azure explorer that supports managing Azure Storage, Search Service, and DocumentDB accounts. The Team Edition makes use of your Azure AD for user authentication, and you can grant permissions (None, Read-Only, Read-Write, and Read-Write-Delete) on the Azure resources you manage through the application.
Paul,
While Gaurav is correct in that Azure Storage does not have AD integration today, I wanted to point out a couple of things about shared access signatures from your post:
My understanding of shared access keys is that they are (1) time-limited and (2) have to be created with hand-tooled code
1) A SAS token/URI does not need to have an expiry date on it (it's an optional field), so in that sense it is not time-limited and need not be regenerated unless you change the shared key with which you generated the token.
2) You can use PowerShell cmdlets to do this, for example: https://msdn.microsoft.com/en-us/library/dn806416.aspx. Some storage explorers also support creating SAS tokens/URIs without you having to write code.
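On point 1, the usual way to get a SAS without an embedded expiry is to tie it to a stored access policy on the container, so the lifetime and permissions live on the policy and can be revoked later; a hedged sketch with the current Az cmdlets (placeholder names):

```powershell
# Placeholder names. The policy carries the permissions; editing or deleting
# the policy invalidates every SAS issued against it.
$ctx = New-AzStorageContext -StorageAccountName "mystorageaccount" `
    -StorageAccountKey "<account-key>"

New-AzStorageContainerStoredAccessPolicy -Container "builds" `
    -Policy "uploader-policy" -Permission rwl -Context $ctx

# SAS token that references the policy instead of embedding its own expiry
New-AzStorageContainerSASToken -Name "builds" `
    -Policy "uploader-policy" -Context $ctx -FullUri
```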