How to Access an Azure Storage NFS File Share from Another Subscription

I have a storage account in a subscription that is set up to use a VNet in that subscription. This works well: the Kubernetes cluster in that subscription, which is attached to that VNet, can reach the storage account over NFS without issue.
But we have a secondary subscription for failover in a paired region (East US and West US), and I'd like the k8s cluster in that subscription to also be able to mount the NFS share.
I've tried creating a peering and adding the secondary subscription's VNet (which doesn't overlap) to the storage account, but the k8s cluster in the secondary subscription times out connecting to the share.
I didn't configure any routing options when creating the peering, but I would have assumed this would just work.
Does anyone have instructions on how to get this working so that the secondary cluster can access the NFS share?
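For reference, the peering was created along these lines (a minimal sketch assuming the azure-mgmt-network Python SDK; subscription IDs, resource groups, and VNet names are placeholders):

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

credential = DefaultAzureCredential()
network = NetworkManagementClient(credential, "<primary-subscription-id>")

# Peer the primary VNet to the secondary subscription's VNet; a mirror
# peering must also be created from the secondary side before the
# peering state shows as Connected
poller = network.virtual_network_peerings.begin_create_or_update(
    "<primary-resource-group>",
    "<primary-vnet>",
    "primary-to-secondary",
    {
        "remote_virtual_network": {
            "id": "/subscriptions/<secondary-subscription-id>"
                  "/resourceGroups/<secondary-resource-group>"
                  "/providers/Microsoft.Network/virtualNetworks/<secondary-vnet>"
        },
        "allow_virtual_network_access": True,
        "allow_forwarded_traffic": True,
    },
)
print(poller.result().peering_state)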

The storage sync service and/or storage account can be moved to a different resource group, subscription, or Azure AD tenant. After the storage sync service or storage account is moved, you need to give the Microsoft.StorageSync application access to the storage account.
Click Access control (IAM) on the left-hand table of contents.
Click the Role assignments tab to list the users and applications (service principals) that have access to your storage account.
Verify Microsoft.StorageSync or Hybrid File Sync Service (old application name) appears in the list with the Reader and Data Access role.
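To script that check, here is a minimal sketch using the azure-mgmt-authorization package (the subscription ID, resource group, and account name are placeholders):

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

credential = DefaultAzureCredential()
auth_client = AuthorizationManagementClient(credential, "<subscription-id>")

scope = ("/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
         "/providers/Microsoft.Storage/storageAccounts/<storage-account>")

# list_for_scope returns every role assignment visible at this scope;
# look for the Microsoft.StorageSync service principal in the output
for assignment in auth_client.role_assignments.list_for_scope(scope):
    print(assignment.principal_id, assignment.role_definition_id)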
This GitHub document on Azure file shares can give you better insight.

Related

Unable to link storage account to Log analytics workspace

We are using Fluent Bit to ship application logs to an Azure Log Analytics workspace. The application logs do appear in the workspace as a table under the Logs blade, Custom Logs category. So far so good.
Because the maximum retention period of a Log Analytics workspace is limited to 730 days, I thought linking a storage account of type Custom logs & IIS logs under Linked storage accounts would solve the problem for me. My understanding is that once a storage account is linked for Custom logs & IIS logs, all Custom Logs will be written into the nominated storage account instead of the default storage account that comes with the creation of the Log Analytics workspace. Is this understanding correct?
Secondly, after clicking the Custom logs & IIS logs item and selecting a storage account from the pop-up blade, the Azure Portal reported the message "Successfully linked storage account". However, the Linked storage accounts view still reports "No linked storage accounts".
Browsing the target storage account, no logs seem to be written to it.
Update 1
Storage account network configuration (screenshot omitted).
Update 2
The answer is accepted as it is technically correct. However, a few steps/details are missing from the documentation. In order to map a customer-owned storage account to a LA workspace, one must build the following resources (originally shown as a diagram):
Create an AMPLS (Azure Monitor Private Link Scope) resource.
Link the AMPLS resource to your LA workspace.
Create a private endpoint on the target VNet for the AMPLS resource.
Create the storage account.
Create private endpoints (blob type) on the target VNet for the storage account.
Link the storage account to the LA workspace.
We need to follow a few prerequisites before linking the storage account to the workspace.
The storage account should be in the same region as the Log Analytics workspace.
We need to grant permissions for other services to access the storage account.
Allow Azure Monitor to access the storage account. If you chose to allow only selected networks to access your storage account, select the exception "Allow trusted Microsoft services to access this storage account".
For the rest of the configuration information, refer to MS Docs.
By following the above documentation, I can link the storage account successfully.
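For completeness, the same link can be made programmatically; a minimal sketch assuming the azure-mgmt-loganalytics package (the "CustomLogs" data source type string and all resource names are assumptions to verify against your SDK version):

from azure.identity import DefaultAzureCredential
from azure.mgmt.loganalytics import LogAnalyticsManagementClient

credential = DefaultAzureCredential()
la_client = LogAnalyticsManagementClient(credential, "<subscription-id>")

storage_id = ("/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
              "/providers/Microsoft.Storage/storageAccounts/<storage-account>")

# "CustomLogs" corresponds to the "Custom logs & IIS logs" link type
# shown in the portal
la_client.linked_storage_accounts.create_or_update(
    "<resource-group>",
    "<workspace-name>",
    "CustomLogs",
    {"storage_account_ids": [storage_id]},
)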

Azure Databricks cluster doesn't have access to mounted ADLS2

I followed the documentation azure-datalake-gen2-sp-access and mounted ADLS2 storage in Databricks, but when I try to view the data from the GUI I get the following error:
Cluster easy-matches-cluster-001 does not have the proper credentials to view the content. Please select another cluster.
I can't find any documentation on this, only something about premium Databricks, so can I only access it with a premium Databricks resource?
Edit 1: I can see the mounted storage with dbutils.
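For context, the mount in that documentation is created roughly like this (a sketch of the documented service-principal mount; the tenant ID, application ID, and secret scope/key names are placeholders):

# OAuth configs for a service principal, per the ADLS Gen2 mount docs
configs = {
    "fs.azure.account.auth.type": "OAuth",
    "fs.azure.account.oauth.provider.type":
        "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
    "fs.azure.account.oauth2.client.id": "<application-id>",
    "fs.azure.account.oauth2.client.secret":
        dbutils.secrets.get(scope="<scope>", key="<service-credential-key>"),
    "fs.azure.account.oauth2.client.endpoint":
        "https://login.microsoftonline.com/<tenant-id>/oauth2/token",
}

dbutils.fs.mount(
    source="abfss://<container>@<storage-account>.dfs.core.windows.net/",
    mount_point="/mnt/<mount-point>",
    extra_configs=configs,
)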
After mounting the storage account, please run this command to check whether you have data access permissions on the mount point created.
dbutils.fs.ls("/mnt/<mount-point>")
If you have data access, you will see the files inside the storage account.
In case you don't have data access, you will get this error: "This request is not authorized to perform this operation using this permission" (403).
If you are able to mount the storage but unable to access, check if the ADLS2 account has the necessary roles assigned.
I was able to repro the same. Since you are using an Azure Active Directory application, you have to assign the "Storage Blob Data Contributor" role to the Azure Active Directory application too.
Below are the steps for granting the Storage Blob Data Contributor role to the registered application:
1. Select your ADLS account. Navigate to Access Control (IAM). Select Add role assignment.
2. Select the role Storage Blob Data Contributor, then search for and select your registered Azure Active Directory application and assign the role.
Back in the Access Control (IAM) tab, search for your AAD app and check its access.
3. Run dbutils.fs.ls("/mnt/<mount-point>") to confirm access.
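The same assignment can be scripted; a minimal sketch assuming the azure-mgmt-authorization package (the role definition GUID below is the built-in ID for Storage Blob Data Contributor; all other names are placeholders):

import uuid
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

credential = DefaultAzureCredential()
auth_client = AuthorizationManagementClient(credential, "<subscription-id>")

scope = ("/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
         "/providers/Microsoft.Storage/storageAccounts/<adls-account>")

# Built-in role definition for "Storage Blob Data Contributor"
role_definition_id = (
    "/subscriptions/<subscription-id>/providers/Microsoft.Authorization"
    "/roleDefinitions/ba92f5b4-2d11-453d-a403-e96b0029c9fe"
)

auth_client.role_assignments.create(
    scope,
    str(uuid.uuid4()),  # role assignment names must be unique GUIDs
    {
        "role_definition_id": role_definition_id,
        "principal_id": "<service-principal-object-id>",
        "principal_type": "ServicePrincipal",
    },
)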
Solved by unmounting, remounting, and restarting the cluster. I followed this doc: https://learn.microsoft.com/en-us/azure/databricks/kb/dbfs/remount-storage-after-rotate-access-key
If you still encounter the same issue even though Access Control is set up correctly, do the following:
Use dbutils.fs.unmount() to unmount all storage accounts.
Restart the cluster.
Remount
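As a sketch, the unmount step can be done in one pass over all mount points (the /mnt prefix is an assumption about how your mounts are named):

# Unmount every mount point under /mnt, then remount after the restart
for m in dbutils.fs.mounts():
    if m.mountPoint.startswith("/mnt/"):
        dbutils.fs.unmount(m.mountPoint)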

How to know whether a Storage Account is associated with an Azure VM or HDInsight cluster

I have created more than 3 storage accounts, 3 VMs, and 3 clusters.
Storage Accounts:
Storage Account 1
Storage Account 2
Storage Account 3
I want to know how many VMs and clusters Storage Account 1 is associated with. How can I find this via the Azure Portal?
A storage account isn't an "owned" or "dedicated" resource. That is, even if you use a storage account for a given app or service, there's no tight coupling between the two. Any service / app that has your account credentials (or a SAS link to a specific container/queue/table within your storage account) will be able to use that storage account.
However, if you look at the settings for a given app or service (in your case, your VM or HDInsight), you can see which storage accounts it's using, with a bit of digging. For example, your VM might have both OS and Data disks, with each disk potentially using a different storage account - you'd need to enumerate the OS and attached disks to see which storage accounts are in use for each.
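A minimal sketch of that enumeration, assuming the azure-mgmt-compute package and classic unmanaged (storage-account-backed) disks; managed disks don't live in a storage account you own:

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

credential = DefaultAzureCredential()
compute = ComputeManagementClient(credential, "<subscription-id>")

# For unmanaged disks the VHD URI embeds the storage account name,
# e.g. https://<account>.blob.core.windows.net/vhds/<disk>.vhd
for vm in compute.virtual_machines.list_all():
    os_disk = vm.storage_profile.os_disk
    if os_disk.vhd:  # set only for unmanaged disks
        print(vm.name, os_disk.vhd.uri)
    for disk in vm.storage_profile.data_disks:
        if disk.vhd:
            print(vm.name, disk.vhd.uri)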
Further, if you create all resources at once (again, imagine creating a new VM with new storage), all of your resources will be bundled together within the same Resource Group.
Via the new Azure portal you can find the Azure Storage Account; in the storage account you will find its containers. The vhds container is used for Azure VMs by default; select vhds and you will find the VMs' VHD files there. For HDInsight, the default container name is the HDInsight cluster name, so you can find the association manually.

Azure RBAC based access to Storage Account

I have a Service Principal that has been granted the Contributor role on a storage account.
When I attempt to create a container within that account, I receive the following error message:
One-time registration of Microsoft.Storage failed - The client 'd38eaaca-1429-44ef-8ce2-3c63a62849c9' with object id 'd38eaaca-1429-44ef-8ce2-3c63a62849c9' does not have authorization to perform action 'Microsoft.Storage/register/action' over scope '/subscriptions/********'
My goal is to allow a Service Principal READ-ONLY access to the blobs contained within a given storage account and to create containers within that storage account. What are the steps needed to configure my principal to do that?
Regarding your error, please see this thread: In Azure as a Resource Group contributor why can't I create Storage Accounts and what should be done to prevent this situation?.
My goal is to allow a Service Principal READ-ONLY access to the blobs contained within a given storage account and to create containers within that storage account. What are the steps needed to configure my principal to do that?
As of today, it is not possible to do so, simply because RBAC only applies to the control plane of the API. Using RBAC, you can control who can create/update/delete a storage account, but access to the data inside a storage account is still controlled by the account key. Anyone who has access to the account key has complete control over that storage account.
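A minimal sketch of the account-key data access the answer describes, assuming the azure-storage-blob package (account name, key, and container name are placeholders):

from azure.storage.blob import BlobServiceClient

# With the account key, the caller has full data-plane control,
# including creating containers and reading blobs
service = BlobServiceClient(
    account_url="https://<account-name>.blob.core.windows.net",
    credential="<account-key>",
)
service.create_container("<new-container>")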

Azure Subscription ID vs Account ID

I'm working through comparing Azure subscription IDs and account IDs. Is it really as simple as: the subscription ID relates to the storage name and is unique for each storage container, and the account ID relates to your Azure account? Why do you need them both?
So I think there are 4 concepts here:
Azure Account - either a Microsoft Account (like xx@outlook.com, xx@hotmail.com) or an Organizational Account (created by Azure AD; if you don't know this, you don't need to care). This is what you use to log in to the Azure Portal and use the services. Globally unique.
Azure Subscription - more like a billing unit for your Azure services, including VMs, Storage, etc. Its identity is a GUID, and its name is just for display; no uniqueness required.
Azure Storage Account - used for authentication to Azure Storage with a pair of storage name + storage key. The name is an identity and must be globally unique. You can have multiple storage accounts in a subscription.
There are various reasons why Azure Storage has its own authentication rather than using subscription certificates or the Azure Account. One of them is that Azure Storage is more likely to be accessed programmatically by applications, which have different requirements than the portal, so name/key pairs or SAS tokens are used for authentication here.
Azure Storage Container - like a directory in an Azure Storage Account, used to group data. Its name must be unique within one account.
An Azure subscription may have many storage accounts.
A storage account may have many containers.
In order to access the contents of a container, you'll need your corresponding storage account and key. You will not need your subscription credentials to access storage account contents directly.
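To illustrate, a minimal sketch of container-level data access with a SAS token instead of the account key (assuming the azure-storage-blob package; all names are placeholders):

from azure.storage.blob import ContainerClient

# A SAS token grants scoped, time-limited data access without
# exposing the account key
container = ContainerClient(
    account_url="https://<storage-account-name>.blob.core.windows.net",
    container_name="<container>",
    credential="<sas-token>",
)
for blob in container.list_blobs():
    print(blob.name)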
