Configure Azure SFTP Gateway with multiple storage accounts

I have implemented Azure SFTP Gateway.
I looked everywhere and I couldn't find anything in the documentation.
During the configuration, you set a storage account where the user can upload their files, and you can set different blob containers for different users if required, but I couldn't find anywhere whether it's possible to have multiple storage accounts.
In the SFTP Gateway web page, under Settings, I have the option to set only one storage account. Is this a limitation of the service?
Do I need an SFTP Gateway VM for each storage account?
Thank you very much for your help and any clarification.

For SFTP Gateway version 2 on Azure, only one Blob Storage Account can be configured per VM. This is because of a product limitation -- the Blob Storage Account connection settings are global to the VM. You can point SFTP users to different containers within the same Blob Storage Account. But if you wanted to point to separate Storage Accounts, that would require a separate VM.
For SFTP Gateway version 3 on Azure, you can configure "Cloud Connections". These let you point SFTP users to different Blob Storage Accounts, so you only need to use one VM.
As abatishchev mentioned, contacting the product owner's support email is a good approach.

Related

Copy blob to another storage account through the Microsoft Backbone? Private link/endpoint?

Scenario:
I need to copy blobs from a storage account container into an Azure File Share in another storage account.
The storage accounts are in different spoke subscriptions in a hub-spoke architecture.
It will be done with an Azure Runbook.
This can be done with azcopy like so:
azcopy copy "https://source.blob.core.windows.net/blobs/myBlob?<SAS>" "https://destination.file.core.windows.net/fileshare/myBlob.txt.bak?<SAS>"
However, this traffic goes over the public internet, which is not suitable for this scenario: the data is very sensitive.
I was thinking of enabling private endpoints, but as I understand it, those are for scenarios where something inside the VNet needs private access to the storage account (for example a VM), not for storage-account-to-storage-account connectivity. AzCopy would still use the internet.
So the question is: is there any way to copy files from one storage account to another using the VNets in Azure, or over the Microsoft backbone network? Copying from VM to VM is not desired.
EDIT:
Maybe it's possible to specify the private URL with azcopy and thereby use the DNS of the private endpoint? However, since you have to specify whether a private endpoint is for blob OR file (the storage subresource), it might not work. I'm going to try this option tomorrow.
source.privatelink.blob.core.windows.net
destination.privatelink.file.core.windows.net
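A minimal sketch of what that attempt might look like, assuming private DNS resolves those hostnames to the private endpoints and valid SAS tokens are supplied (untested; the account, container and share names are the placeholders from above):

# untested sketch: run from a host whose DNS resolves the privatelink names to the private endpoint IPs
azcopy copy "https://source.privatelink.blob.core.windows.net/blobs/myBlob?<SAS>" "https://destination.privatelink.file.core.windows.net/fileshare/myBlob.txt.bak?<SAS>"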

Transferring files from one blob to another through vnet in azure

I have a requirement to transfer files from one blob to another through VNets deployed in different geographies and connected to each other. As I am very new to the Azure platform, I tried researching on the web but could not find a proper solution. I got a suggestion that I could achieve this by programming an App Service. Please let me know how I can achieve this.
Depending on your scenario, here are some options:
To replicate the storage account across different regions, you can just set the replication parameter (while creating a new storage account) to one of these values:
Geo-redundant storage
Read-access geo-redundant storage
Another article on HA applications:
Designing Highly Available Applications using RA-GRS
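As a sketch, creating such an account with the Azure CLI could look like this (the resource group, account name and location are placeholders):

# placeholders: myResourceGroup / mygrsaccount / westeurope
az storage account create --resource-group myResourceGroup --name mygrsaccount --location westeurope --sku Standard_GRS
# use --sku Standard_RAGRS for read-access geo-redundant storage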
If you want to copy files from one storage account to another yourself, you can use Azure Storage events: an event is pushed to Event Grid every time a blob is created.
Reacting to Blob storage events
You can then use a Logic App or a Function App to copy blobs to another storage account.
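A hedged sketch of wiring the Blob Created event to such an endpoint with the Azure CLI (the event subscription name, source storage account resource ID and webhook URL are placeholders):

# placeholders: event subscription name, storage account resource ID, Function/Logic App webhook URL
az eventgrid event-subscription create --name copy-on-blob-created --source-resource-id "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<sourceaccount>" --endpoint "<function-or-logic-app-webhook-url>" --endpoint-type webhook --included-event-types Microsoft.Storage.BlobCreated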

How to know Storage Account is associated with Azure VM or HDInsight Cluster

I have created more than 3 storage accounts, 3 VMs, and 3 clusters.
Storage Accounts:
Storage Account 1
Storage Account 2
Storage Account 3
I want to know how many VMs and clusters Storage Account 1 is associated with. How can I find this via the Azure portal?
A storage account isn't an "owned" or "dedicated" resource. That is, even if you use a storage account for a given app or service, there's no tight coupling between the two. Any service / app that has your account credentials (or a SAS link to a specific container/queue/table within your storage account) will be able to use that storage account.
However, if you look at the settings for a given app or service (in your case, your VM or HDInsight), you can see which storage accounts it's using, with a bit of digging. For example, your VM might have both OS and Data disks, with each disk using potentially a different storage account - you'd need to enumerate the OS+attached disks to see which storage accounts are in use for each.
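If the VM uses unmanaged (VHD-based) disks, a hedged sketch of that digging with the Azure CLI might look like this (resource group and VM name are placeholders); the storage account name is the host portion of each returned VHD URI:

# placeholders: myResourceGroup / myVM; only unmanaged disks expose vhd.uri
az vm show --resource-group myResourceGroup --name myVM --query "{os:storageProfile.osDisk.vhd.uri, data:storageProfile.dataDisks[].vhd.uri}"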
Further, if you create all resources at once (again, imagine creating a new VM with new storage), all of your resources will be bundled together within the same Resource Group.
You can use the new Azure portal to find the Azure storage account; within the storage account you will find its containers. The vhds container is used for Azure VMs by default; select it and you will find the VMs' VHD files there. For HDInsight, the default container name is the HDInsight cluster name, so you can find the association manually.
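As a rough cross-check, you could also list each storage account's containers with the Azure CLI and look for vhds or a container named after the HDInsight cluster (the account name and key are placeholders):

# placeholders: storage account name and key
az storage container list --account-name mystorageaccount1 --account-key <key> --output table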

Azure Cloud Service(classic) does not autoscale with new Storage Account

I deployed a worker role to Azure Cloud Service (classic) in the new portal. Along with it, I also created an Azure Storage account for a queue.
When I try to add an autoscale rule, the storage account is not listed. I tried selecting Other Resource and entering the resource identifier of the storage account, but no metric name is listed.
Is it by design that a classic Cloud Service and a new storage account do not work together?
Storage account data (e.g. blobs, queues, containers, tables) are accessible simply with account name + key. Any app can work with them.
However, to manage/enumerate available storage accounts, there are Classic-created and ARM-created accounts, each with different APIs.
The original Azure Service Management (ASM) API doesn't know anything about ARM resources. There's a fairly good chance that, since you're deploying to a Classic cloud service, it's using ASM only and will not be able to enumerate ARM-created storage accounts.
If you create a Classic storage account (which has zero difference in functionality), you should be able to see it as an option for auto-scale.
I have a bit more details on the differences in this answer.
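As an illustrative sketch, you can see which of your accounts are Classic versus ARM from the Azure CLI by filtering on resource type (this only lists them; it does not change what the classic autoscale blade can see):

# ARM-created storage accounts
az resource list --resource-type Microsoft.Storage/storageAccounts --output table
# Classic-created storage accounts
az resource list --resource-type Microsoft.ClassicStorage/storageAccounts --output table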
At this time, it is not possible to autoscale anything based on a new "v2" storage account. It has nothing to do with the fact that you are using the classic Azure Cloud Service. I am having the same issue with using Azure App Services. In the end, I just created a classic storage account to use for the autoscaling. There is no difference in how you interact with the different types of storage accounts.

Why do we link an azure storage account to a cloud service?

Why do we link an azure storage account to a cloud service? How does it help? What happens if I do not link them?
Two reasons:
Easier management - you have better idea of what is your overall configuration for a particular deployment
Easier management - upon deleting a resource you are being asked whether you want to delete the linked resources also
By the way, you can also link a Windows Azure SQL Database to a Cloud Service.
The whole idea is to help you better manage the services. There is no other reason, and nothing will happen if you do not link them. But think a bit: if you manage 3 subscriptions, with 2 cloud service deployments each and 2 storage accounts per deployment, that is 6 cloud services and 12 storage accounts. Can you easily tell which service is using which account?
The cloud service depends on the storage account. When deploying the cloud service it will create a container called vsdeploy with a block blob that is used for the VMs it creates.
It also stores crash dump files there, under the container wad-crashdumps. The folder structure is WAD/{GUID}/{worker role}/{instance}, and all the .dmp files are stored there as block blobs.
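To peek at those containers, a minimal sketch with the Azure CLI (the account name and key are placeholders):

# placeholders: storage account name and key
az storage blob list --account-name mystorageaccount --account-key <key> --container-name wad-crashdumps --output table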
