Limit Azure Blob Access to WebApp - azure

Situation:
We have a web app on Azure and a blob storage account. Via our web app we write data into blob storage, and we currently read that data back out and return it as responses from the web app.
What we're trying to do:
We're trying to find a way to restrict access to the blob storage so that only our web app can access it. Setting up an IP address in the firewall settings works fine when we have a static IP (we often test by running the web app locally from our office, and that lets us read/write to the blob storage just fine). However, when we use the IP address of our web app (as read from the web app's cross-domain page), we do not get the same access and get errors when trying to read/write to the blob storage.
Question:
Is there a way to restrict access to the blob storage to the web app without having to set up a VPN on Azure (too expensive)? I've seen people talk about using SAS to generate time-limited links to blob content, and that makes sense for only allowing users to access content via our web app (which would then deliver them the link), but it doesn't solve the problem of our web app not being able to write to the blob storage when it isn't publicly accessible.
Are we just misusing blobs, or is this a valid way to use them that simply requires the VPN approach?

Another option would be to use Azure AD authentication combined with a managed identity on your App Service.
At the time of writing, this feature is still in preview though.
I wrote an article on how to do this: https://joonasw.net/view/azure-ad-authentication-with-azure-storage-and-managed-service-identity.
The key parts:
Enable Managed Identity
Grant the generated service principal the necessary role on the storage account/blob container
Change your code to use AAD access tokens acquired with the managed identity instead of an access key/SAS token
Acquiring the token using https://www.nuget.org/packages/Microsoft.Azure.Services.AppAuthentication/1.1.0-preview:
// Requires: using Microsoft.Azure.Services.AppAuthentication;
private async Task<string> GetAccessTokenAsync()
{
    // Acquire an AAD access token for Azure Storage using the app's managed identity
    var tokenProvider = new AzureServiceTokenProvider();
    return await tokenProvider.GetAccessTokenAsync("https://storage.azure.com/");
}
Reading a blob using the token:
// Requires: using Microsoft.WindowsAzure.Storage.Auth; using Microsoft.WindowsAzure.Storage.Blob;
private async Task<Stream> GetBlobWithSdk(string accessToken)
{
    // Wrap the AAD access token so the storage SDK can use it for authorization
    var tokenCredential = new TokenCredential(accessToken);
    var storageCredentials = new StorageCredentials(tokenCredential);
    // Define the blob to read (StorageAccountName, ContainerName and FileName are fields/constants)
    var blob = new CloudBlockBlob(
        new Uri($"https://{StorageAccountName}.blob.core.windows.net/{ContainerName}/{FileName}"),
        storageCredentials);
    // Open a data stream to the blob
    return await blob.OpenReadAsync();
}

SAS tokens are the correct way to secure and grant access to your blob storage. Contrary to your assumption, this will work with a private container. Here's a resource you may find helpful:
http://www.siddharthpandey.net/use-shared-access-signature-to-share-private-blob-in-azure/
Please also review Microsoft's guidelines on securing your blob storage. This addresses many of the concerns you outline and is a must-read for any Azure PaaS developer:
https://learn.microsoft.com/en-us/azure/storage/common/storage-security-guide
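For the write path in particular, the web app can keep the account key (or a write-enabled SAS) on the server side while the container itself stays private. A minimal sketch using the classic .NET storage client (the method name and lifetimes are illustrative):
// Generate a short-lived SAS that lets its holder write blobs into a private container.
private static string GetWriteSasForContainer(CloudBlobContainer container)
{
    var policy = new SharedAccessBlobPolicy
    {
        Permissions = SharedAccessBlobPermissions.Write,
        SharedAccessStartTime = DateTime.UtcNow.AddMinutes(-5),
        SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(15)
    };
    // The container stays private; the SAS alone grants the (time-limited) access.
    return container.Uri + container.GetSharedAccessSignature(policy);
}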

Related

How does Azure BlobStorage connection data have to be stored to support all available addressing modes?

I am using libraries Microsoft.Azure.Storage.Blob 11.2.3.0 and Microsoft.Azure.Storage.Common 11.2.3.0 to connect to an Azure BlobStorage from a .NET Core 3.1 application.
Users of my application are supposed to supply connection information to an Azure BlobStorage to/from where the application will deposit/retrieve data.
Initially, I had assumed allowing users to specify a connection string and a custom blob container name (as an optional override of the default) would be sufficient. I could simply stuff that connection string into the CloudStorageAccount.Parse method and get back a storage account instance to call CreateCloudBlobClient on.
Now that I'm trying to use this method to connect using a container-specific SAS (also see my other question about that), it appears that the connection string might not be the most universal way to go.
Instead, it now seems a blob container URL, plus a SAS token or an account key (and possibly an account name, though that seems to be included in the blob container URL already), are more versatile. However, I am concerned that the next way of pointing to a blob storage that I need to support (whichever that may be) might require yet another kind of information - hence my question:
What set of "fields" do I need to support in the configuration files of my application to make sure my users can point to their BlobStorage whichever way they want, as long as they have a BlobStorage?
(Is there maybe even a standard solution or best practice recommendation by Microsoft?)
Please note that I am exclusively concerned with what to store. An arbitrarily long string? A complex object of sorts? If so, with what fields?
I am not asking how to store that configuration once I know what it must comprise. For example, this is not about securely encrypting credentials etc.
As a workaround: to access the storage account using a SAS token, you need to pass the account name along with the SAS token, plus the blob name if you are trying to upload, and the SAS token needs to be granted the appropriate permissions.
Microsoft recommends using Azure Active Directory (Azure AD) to authorize requests against blob and queue data if possible, instead of Shared Key. Azure AD provides superior security and ease of use over Shared Key. For more information about authorizing access to data with Azure AD, see Authorize access to Azure blobs and queues using Azure Active Directory.
Note: based on my tests, you need to pass the storage account name and SAS token, plus the container name and blob name.
Example: I tried uploading a file to a container using a container-level SAS token and was able to upload the file successfully.
const string sasToken = "SAS Token";           // container-level SAS token (placeholder)
StorageCredentials storageCredentials = new StorageCredentials(sasToken);
const string accountName = "teststorage65";    // account name
const string blobContainerName = "test";
const string blobName = "test.txt";
const string myFileLocation = @"Local Path";   // path of the local file to upload (placeholder)
var storageAccount = new CloudStorageAccount(storageCredentials, accountName, null, true);
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
CloudBlobContainer blobContainer = blobClient.GetContainerReference(blobContainerName);
//blobContainer.CreateIfNotExists();
CloudBlockBlob cloudBlob = blobContainer.GetBlockBlobReference(blobName);
cloudBlob.UploadFromFile(myFileLocation);
As you already know, you can use the storage connection string to connect to storage.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse("Connection string");
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
CloudBlobContainer container = blobClient.GetContainerReference("test");
Your application needs to access the connection string at runtime to authorize requests made to Azure Storage.
You have several options for storing your connection string or SAS token:
1) You can store your connection string in an environment variable (see the sketch after this list).
2) An application running on the desktop or on a device can store the connection string in an app.config or web.config file. Add the connection string to the appSettings section in these files.
3) An application running in an Azure cloud service can store the connection string in the Azure service configuration schema (.cscfg) file. Add the connection string to the ConfigurationSettings section of the service configuration file.
Reference: https://learn.microsoft.com/en-us/azure/storage/common/storage-configure-connection-string
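As a small illustration of option 1, the value can be read at startup and handed straight to the client (the variable name is just an example):
// Read the connection string from an environment variable and build the blob client.
string connectionString = Environment.GetEnvironmentVariable("AZURE_STORAGE_CONNECTION_STRING");
CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
CloudBlobClient client = account.CreateCloudBlobClient();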

Azure Shared Access Signature for whole storage account

I am using an API (Node.js) to generate a read-only shared access signature for an iOS app using Azure Mobile Services. The API generates the SAS using the following code...
var azure = require('azure-storage');
var blobService = azure.createBlobService(accountName, accountKey);
var sas = blobService.generateSharedAccessSignature("containerName", null, sharedAccessPolicy);
This works great when I want a SAS for access to one container. But I really need access to all containers in the storage account. I could obviously do this with a separate API call for each container but this would require hundreds of extra calls.
I have looked everywhere for a solution but I can't get anything to work. I would very much appreciate knowing if there is a way to generate a SAS for all containers in a storage account.
You can construct an account-level SAS, where you get to specify:
services to include (blob, table, queue, file)
resource access (e.g. container create & delete)
permissions (e.g. read, write, list)
protocol (e.g. https only, vs http+https)
Just like a service-specific SAS, you get to specify expiry date (and optionally start date).
Given your use case, you can tailor your account SAS to be just for blobs; there's no need to include the services you don't use (in your case, tables/queues/files).
A minimal sketch follows below; more specifics are in Microsoft's account SAS documentation.
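Sketched here with the classic .NET storage client used elsewhere on this page; the services, resource types, permissions and lifetime are illustrative and should be tailored to your scenario:
// storageAccount is a CloudStorageAccount built from the account name and key.
var policy = new SharedAccessAccountPolicy
{
    Services = SharedAccessAccountServices.Blob,                      // blobs only
    ResourceTypes = SharedAccessAccountResourceTypes.Container
                  | SharedAccessAccountResourceTypes.Object,          // containers and blobs
    Permissions = SharedAccessAccountPermissions.Read
                | SharedAccessAccountPermissions.List,                // read-only
    Protocols = SharedAccessProtocol.HttpsOnly,
    SharedAccessExpiryTime = DateTime.UtcNow.AddHours(1)
};
string accountSas = storageAccount.GetSharedAccessSignature(policy);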

Accessing Azure Storage services from a different subscription

We are looking to deploy our Azure cloud services to multiple subscriptions but want to be able to access the same storage accounts for storing blobs and tables. We wanted to know if it is possible to access storage accounts across different subscriptions using just the storage account name and key.
Our data connection takes the form
When we try to use the above, it always tries to find the endpoint for the given account name within the current subscription.
If I understood your question ("able to access the same Storage accounts"):
Via the Azure Panel (Management Portal): you can access a storage account only from within its own subscription.
Via Visual Studio: you can attach a storage account outside your current login account (Visual Studio <-> Azure) with the account name and key, and manage it.
Via code: you can access a storage account (blob, queue, table) from all your apps with a storage connection string (don't put it in code).
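In other words, the connection only needs the account name and key (or a connection string built from them); the subscription the calling app lives in does not matter. A minimal sketch (the name and key are placeholders):
// The account name and key identify the storage account; the caller's subscription is irrelevant.
var credentials = new StorageCredentials("accountname", "accountkey");
var storageAccount = new CloudStorageAccount(credentials, useHttps: true);
var blobClient = storageAccount.CreateCloudBlobClient();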
If you want, you can restrict blob access with CORS settings. Something like this :
// Requires: using Microsoft.WindowsAzure.Storage.Shared.Protocol;
private static void InitializeCors()
{
    // blobClient is assumed to be an existing CloudBlobClient field
    ServiceProperties blobServiceProperties = blobClient.GetServiceProperties();
    // Enable and configure CORS
    ConfigureCors(blobServiceProperties);
    // Apply the updated service properties
    blobClient.SetServiceProperties(blobServiceProperties);
}
private static void ConfigureCors(ServiceProperties prop)
{
    var cors = new CorsRule();
    // Each allowed origin must be added as a separate entry
    cors.AllowedOrigins.Add("www.domain1.net");
    cors.AllowedOrigins.Add("www.domain2.it");
    cors.AllowedMethods = CorsHttpMethods.Get;
    cors.MaxAgeInSeconds = 3600;
    prop.Cors.CorsRules.Add(cors);
}

Azure CDN per Blob SAS

As far as I know, in Azure Storage we can delegate access to our storage to a single person using a SAS on a per-CONTAINER basis.
I need to delegate access on a per-BLOB basis to prevent hotlinking.
We are using ASP.NET MVC. Sorry for my English :)
Edit: And how can a new Azure user create a CDN?
So you can create a SAS on a blob. The approach is similar to the way you create a SAS on a blob container. Since you're using ASP.Net MVC, I'm assuming you would want to use .Net Storage Client API to create SAS on a blob. To create a SAS on a blob, just call GetSharedAccessSignature method on the blob object you have created.
For example, the code below would give you a SAS URL where user has permission to download a blob:
// blob is a CloudBlockBlob (or CloudBlob) reference you have already obtained
var sas = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy()
{
    Permissions = SharedAccessBlobPermissions.Read,
    SharedAccessStartTime = DateTime.UtcNow.AddMinutes(-5),
    SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(15),
});
return string.Format(CultureInfo.InvariantCulture, "{0}{1}", blob.Uri, sas);
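To tie this back to the hotlinking scenario, here is a hedged sketch of how an ASP.NET MVC action might hand out that short-lived link (the action name and the _container field are made-up names):
// Hypothetical MVC action: build a per-blob SAS URL and redirect the caller to it.
public ActionResult Download(string fileName)
{
    CloudBlockBlob blob = _container.GetBlockBlobReference(fileName); // _container: a CloudBlobContainer field
    string sasUrl = blob.Uri + blob.GetSharedAccessSignature(new SharedAccessBlobPolicy
    {
        Permissions = SharedAccessBlobPermissions.Read,
        SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(15)
    });
    return Redirect(sasUrl);
}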
I wrote a blog post some time ago which describes SAS functionality on blobs and containers in more details: http://gauravmantri.com/2013/02/13/revisiting-windows-azure-shared-access-signature/
Regarding your question about the CDN, I believe the functionality to create CDN nodes was taken away from the Windows Azure Portal when the new portal was announced. I guess you would need to wait for the functionality to come back to the portal.

ACL access abilities for Azure Containers and Blobs

I am looking at using Azure containers and blobs to store images and videos for my website. I found http://msdn.microsoft.com/en-us/library/windowsazure/dd179354.aspx which talks about the different ACL settings, but it did not answer one of my questions. If a container/blob is set to "No public read access", the page says that only the account owner can read the data. Would this mean that people could not access it by the URL, but my MVC web app hosted on an Azure VM would still be able to access it via the URL?
Please bear with me if the answer sounds a bit preachy and unnecessarily lengthy :)
Essentially each resource (blob container, blob) in Windows Azure has a unique URL and is accessible via the REST API (thus accessible over the http/https protocol). With ACLs, you basically tell the storage service whether or not to honor the request sent to serve the resource. To read more about the authentication mechanism, you may find this link useful: http://msdn.microsoft.com/en-us/library/windowsazure/dd179428.aspx.
When you set the ACL to No public read access, you're instructing the storage service not to honor any anonymous requests; only authenticated requests will be honored. To create an authenticated request, you would require your account name and key and create an authorization header which gets passed along with the request to access the resource. If this authorization header is not present in your request, the request will be rejected. (A short example of such an authenticated request follows the ACL illustration below.)
So long story short, to answer your question even your MVC application won't be able to access the blob via URL unless that authorization header is included in the request. One possibility would be to explore Shared Access Signature (SAS) functionality in blob storage. This would give time-bound restricted permissions to blobs in your storage. So what you would do is create a SAS URL for your blob in your MVC app using your account name and key and use that SAS URL in the application.
To further explain the concept of ACLs, let's say you have a blob container called mycontainer and it has a blob called myblob.txt in a storage account named myaccount. For listing blobs in the container, the container URL would be http://myaccount.blob.core.windows.net/mycontainer?restype=container&comp=list and the blob URL would be http://myaccount.blob.core.windows.net/mycontainer/myblob.txt. The following will be the behavior when you try to access these URLs directly through the browser with different ACLs:
No public read access
Container URL - Error
Blob URL - Error
Public read access for blobs only
Container URL - Error
Blob URL - Success (will download the blob)
Full public read access
Container URL - Success (will show an XML document containing information about all blobs in the container)
Blob URL - Success (will download the blob)
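Putting the pieces together for the myaccount/mycontainer example above, a minimal sketch of an authenticated read from server-side code (the account key is a placeholder):
// The SDK signs each request with the account name and key, so this private blob is readable
// from the MVC app even though the same URL returns an error in a browser.
var credentials = new StorageCredentials("myaccount", "account-key-goes-here");
var account = new CloudStorageAccount(credentials, useHttps: true);
var blob = account.CreateCloudBlobClient()
                  .GetContainerReference("mycontainer")
                  .GetBlockBlobReference("myblob.txt");
string contents = blob.DownloadText();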
