I have an Azure Storage Account and am currently using File Shares with a folder structure. I need to enable geo-replication, but it looks like Azure Storage Accounts do not support geo-replication for File Shares. Do they? If not, how do I handle geo-replication?
You only need to enable geo-replication on the storage account itself; it applies to File Shares as well.
Please refer to Initiate a storage account failover.
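The redundancy SKUs below are the standard Azure Storage options; as a quick sanity check, this sketch (the helper name is my own) flags which SKUs replicate to a secondary region and can therefore be failed over:

```python
# Standard Azure Storage redundancy SKUs and whether they maintain a
# geo-replicated secondary region (which is what account failover needs).
GEO_REPLICATED_SKUS = {"Standard_GRS", "Standard_RAGRS",
                       "Standard_GZRS", "Standard_RAGZRS"}
SINGLE_REGION_SKUS = {"Standard_LRS", "Standard_ZRS", "Premium_LRS"}

def supports_geo_replication(sku: str) -> bool:
    """Hypothetical helper: True if the SKU keeps a secondary region."""
    if sku not in GEO_REPLICATED_SKUS | SINGLE_REGION_SKUS:
        raise ValueError(f"unknown SKU: {sku}")
    return sku in GEO_REPLICATED_SKUS
```

Note that premium file shares only come in locally or zone redundant flavors, so geo-replication via the account SKU applies to standard file shares.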
I have implemented Azure SFTP Gateway.
I looked everywhere and I couldn't find anything in the documentation.
During the configuration, you set a storage account where the user can deposit their files, and you can set different containers for different users if required, but I couldn't find anywhere whether it is possible to use multiple storage accounts.
In the SFTP Gateway web page, under Settings, I have the option to set only one storage account. Is this a limitation of the service?
Do I need an SFTP Gateway VM for each storage account?
Thank you very much for your help and any clarification.
For SFTP Gateway version 2 on Azure, only one Blob Storage Account can be configured per VM. This is a product limitation: the Blob Storage Account connection settings are global to the VM. You can point SFTP users to different containers within the same Blob Storage Account, but pointing to separate Storage Accounts would require a separate VM.
For SFTP Gateway version 3 on Azure, you can configure "Cloud Connections". These let you point SFTP users to different Blob Storage Accounts, so you only need to use one VM.
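To make the version 2 constraint concrete, here is a hypothetical routing table (the user names, account name, and helper are illustrative, not part of the product): every user maps to a container, but all containers live under the single configured account.

```python
# Hypothetical SFTP-user-to-container mapping, SFTP Gateway v2 style:
# one storage account per VM, one container per user within it.
USER_CONTAINERS = {
    "alice": "alice-uploads",
    "bob": "bob-uploads",
}

def blob_url_for(user: str, filename: str,
                 account: str = "mystorageacct") -> str:
    """Return the blob URL a user's upload would land at (illustrative)."""
    container = USER_CONTAINERS[user]
    return f"https://{account}.blob.core.windows.net/{container}/{filename}"
```

In version 3, the account would effectively become a per-user setting (a "Cloud Connection") rather than a single VM-wide default.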
As abatishchev mentioned, contacting the product owner's support email is a good approach.
Can Azure Media Services re-use an existing Storage Account, for example one we use for Table storage? Or is this Storage Account dedicated only for AMS?
Thanks !
You should be able to use an existing storage account as long as its redundancy type is Locally Redundant, Geo-Redundant, or Read-Access Geo-Redundant.
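As a sketch of that constraint (the function name is my own), you could check the account's SKU before attaching it; the three accepted redundancy types correspond to the SKU names below:

```python
# SKU names for the redundancy types the answer lists as compatible:
# Locally Redundant, Geo-Redundant, Read-Access Geo-Redundant.
AMS_COMPATIBLE_SKUS = {"Standard_LRS", "Standard_GRS", "Standard_RAGRS"}

def usable_for_media_services(sku: str) -> bool:
    """Hypothetical check: can this storage SKU back a Media Services account?"""
    return sku in AMS_COMPATIBLE_SKUS
```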
Azure Databricks allows mounting storage objects, so I can easily mount Azure Storage (Blob, Data Lake), and I know Azure Storage uses 256-bit AES encryption.
But my question is: when I store my data in the default Databricks file system, the DBFS root (not a mount point), does it use any kind of encryption or not?
Any help appreciated; thanks in advance.
Data in Azure Storage is encrypted and decrypted transparently using 256-bit AES encryption. This includes the DBFS root: Azure Databricks stores DBFS in Blob storage inside the managed resource group that is created along with the workspace.
Azure Databricks File System (DBFS) is an abstraction layer on top of that Azure Blob Storage which lets you access data as if it were a local file system.
By default, deploying Databricks creates an Azure Blob Storage account that is used for storage and can be accessed via DBFS. When you mount storage to DBFS, you are essentially mounting an Azure Blob Storage/ADLS Gen1/Gen2 container to a path on DBFS.
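As an aside on the abstraction: inside a cluster, the same object is reachable both through a dbfs:/ URI and through the local /dbfs FUSE mount on the driver and workers. A small sketch of that path correspondence (the helper is my own):

```python
def dbfs_to_local(path: str) -> str:
    """Translate a dbfs:/ URI to the /dbfs FUSE path used on cluster nodes."""
    prefix = "dbfs:/"
    if not path.startswith(prefix):
        raise ValueError(f"not a DBFS URI: {path}")
    return "/dbfs/" + path[len(prefix):]
```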
Hope this helps. Do let us know if you have any further queries.
Our software uses Azure blob & Azure table storage.
I would like developers to be able to look through our production data with Microsoft Azure Storage Explorer, but not be allowed to accidentally edit its data.
I don't want to allow anonymous access to the data (read only) as suggested here.
What would be a good way to achieve this?
Make use of the Shared Access Signature (SAS) option to connect to Azure Blob Storage from Storage Explorer; a SAS can be scoped to read-only permissions.
Find more details about SAS here.
Find more details about SAS in Storage Explorer here.
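In practice you would generate the token with the azure-storage-blob SDK or Storage Explorer itself, but to illustrate what a read-only account SAS actually is, here is a stdlib-only sketch of the documented string-to-sign scheme (the account name and key are placeholders, and the field order follows the account-SAS spec for pre-encryption-scope service versions):

```python
import base64
import hashlib
import hmac
from urllib.parse import urlencode

def make_readonly_account_sas(account_name: str, account_key_b64: str,
                              start: str = "2024-01-01T00:00:00Z",
                              expiry: str = "2024-01-02T00:00:00Z",
                              version: str = "2018-03-28") -> str:
    """Sketch: build an account SAS granting only read (r) and list (l)."""
    permissions, services, resource_types = "rl", "bt", "sco"
    # String-to-sign per the account SAS spec: newline-separated fields,
    # with a trailing newline (the final empty element produces it).
    string_to_sign = "\n".join([
        account_name, permissions, services, resource_types,
        start, expiry,
        "",        # signed IP range (unused here)
        "https",   # signed protocol
        version,
        "",        # trailing newline required by the spec
    ])
    key = base64.b64decode(account_key_b64)
    sig = base64.b64encode(
        hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    ).decode()
    return urlencode({
        "sv": version, "ss": services, "srt": resource_types,
        "sp": permissions, "st": start, "se": expiry,
        "spr": "https", "sig": sig,
    })
```

Because `sp=rl` omits write and delete, a developer pasting this token into Storage Explorer could browse and download blobs but any edit attempt would be rejected by the service.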
When working with a VHD hosted within an Azure Storage account, are there any operations one can perform to access the Storage account directly?
I.e., if I create a VM and store its VHD in a blob in account A, are there any local/efficient ways to work with data in account A from the VM?
See if the Azure Files service will work for you. You can attach your storage as a file share and communicate with it directly using traditional file APIs.
Apart from that, you can use the cross-platform Azure Storage Explorer to work with other Storage subservices such as Blobs.