Azure blob storage shared access policies apply/remove - azure

I have a container in Azure blob storage with the Access Policy set to "Blob". The container has existing blobs that I would like to protect with a Shared Access Policy.
I noticed if I create a container shared access policy...
var sharedPolicy = new SharedAccessBlobPolicy()
{
    SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(120),
    Permissions = SharedAccessBlobPermissions.Read | SharedAccessBlobPermissions.Write
};
var permissions = _blobContainer.GetPermissions();
permissions.SharedAccessPolicies.Add(Guid.NewGuid().ToString(), sharedPolicy);
_blobContainer.SetPermissions(permissions);
I am still able to read the existing blobs. I expected the existing blobs to be "protected" by the newly created SAP.
How can I apply a SAP to an existing blob?
Can all SAPs be removed from a container to expose all the blobs publicly again (assuming you know the URL)? Or does removing the SAP make the blobs inaccessible somehow?
If I am using SAPs to protect the blobs, can I set the container's Access Policy to "Private" and have it still work?
Thanks!

I tinkered with this for a while. Here is what I observed...
Access Policy = Blob
Anyone who knows the URI of the blob can read it (regardless of SAPs).
Access Policy = Private
To read you must...
connect with the storage account key,
OR have a valid Shared Access Signature,
OR have a SAS that references a valid Stored Access Policy (on the container).
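To actually put existing blobs behind a Stored Access Policy, the container has to be "Private" and the SAS you hand out must reference the policy by name. A minimal sketch, assuming the classic WindowsAzure.Storage SDK and the _blobContainer from the question (names and times are placeholders):
// Switch the container to Private and register a named stored access policy.
var permissions = _blobContainer.GetPermissions();
permissions.PublicAccess = BlobContainerPublicAccessType.Off;   // "Private"
permissions.SharedAccessPolicies.Clear();
permissions.SharedAccessPolicies.Add("read-only-2h", new SharedAccessBlobPolicy()
{
    Permissions = SharedAccessBlobPermissions.Read,
    SharedAccessExpiryTime = DateTime.UtcNow.AddHours(2)
});
_blobContainer.SetPermissions(permissions);

// An existing blob is now only reachable via a SAS that references the policy.
var blob = _blobContainer.GetBlockBlobReference("existing-blob.txt");
string sas = blob.GetSharedAccessSignature(null, "read-only-2h");
string readUrl = blob.Uri.AbsoluteUri + sas;
Removing the policy (or clearing SharedAccessPolicies) invalidates any SAS that references it; the blobs themselves are untouched and only become publicly readable again if the container access level is set back to "Blob".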

Related

Get shared access signature URI for ALL blobs in an azure blob container?

I am generating shared access signatures for blobs inside an azure blob container like so:
string sasToken = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy()
{
    Permissions = SharedAccessBlobPermissions.Read,
    SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddMinutes(30)
});
return new Uri(blob.Uri, sasToken).AbsoluteUri;
This returns a URI string that we can ping to download an individual blob. Great, it works.
However, I need to potentially generate hundreds of these shared access signatures, for many different blobs inside the container. It seems very inefficient to loop through each blob and make this call individually each time.
I know that I can call:
container.GetSharedAccessSignature()
in a similar manner, but how would I use the container's SAS token to distribute SAS tokens for each individual blob inside the container?
Yes, you can.
After you generate the container SAS token, it also works for every blob inside that container.
You just need to append the blob name to the URL, like below:
https://xxxxx/container/blob?container_sastoken
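As a rough sketch (assuming the classic WindowsAzure.Storage SDK, an existing CloudBlobContainer named container, and a blobNames collection you already have), the SAS is generated once and reused for every blob URL:
// One container-level SAS call...
string containerSas = container.GetSharedAccessSignature(new SharedAccessBlobPolicy()
{
    Permissions = SharedAccessBlobPermissions.Read,
    SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddMinutes(30)
});

// ...reused for every blob; no per-blob GetSharedAccessSignature call needed.
var urls = blobNames.Select(name => $"{container.Uri}/{name}{containerSas}").ToList();
The returned token already starts with "?", so simple concatenation produces a valid blob URL.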

Azure blob storage networking rules (Ip) for Azure data warehouse

I need to load external data (in blob storage) into my Azure Data Warehouse using PolyBase. It was working fine when I was using classic Azure Storage.
Recently, I had to migrate our storage to ARM, and I could not figure out how to set up the firewall rule on the ARM storage account for my Azure Data Warehouse. If I set the firewall to "All networks", everything works seamlessly. However, I cannot leave the blob storage wide open.
I tried using nslookup to find the outbound IP of our Azure Data Warehouse and added that value to the storage firewall, but I got a "This request is not authorized to perform this operation." error.
Is there a way I can find the IP address of an Azure Data Warehouse? Or should I use a different approach to make it work?
Any suggestions are appreciated.
Kevin
Under section "1.1 Create a credential" of the loading tutorial, it states:
Don't skip this step if you are using this tutorial as a template for loading your own data. To access data through a credential, use the following script to create a database-scoped credential, and then use it when defining the location of the data source.
-- A: Create a master key.
-- Only necessary if one does not already exist.
-- Required to encrypt the credential secret in the next step.
CREATE MASTER KEY;
-- B: Create a database scoped credential
-- IDENTITY: Provide any string, it is not used for authentication to Azure storage.
-- SECRET: Provide your Azure storage account key.
CREATE DATABASE SCOPED CREDENTIAL AzureStorageCredential
WITH
IDENTITY = 'user',
SECRET = '<azure_storage_account_key>'
;
-- C: Create an external data source
-- TYPE: HADOOP - PolyBase uses Hadoop APIs to access data in Azure blob storage.
-- LOCATION: Provide Azure storage account name and blob container name.
-- CREDENTIAL: Provide the credential created in the previous step.
CREATE EXTERNAL DATA SOURCE AzureStorage
WITH (
TYPE = HADOOP,
LOCATION = 'wasbs://<blob_container_name>@<azure_storage_account_name>.blob.core.windows.net',
CREDENTIAL = AzureStorageCredential
);
Edit: (additional way to access Blobs from ADW through the use of SAS):
You can also create a Storage linked service by using a shared access signature. It provides the data factory with restricted/time-bound access to all or specific resources (blob/container) in the storage account.
A shared access signature provides delegated access to resources in your storage account. You can use a shared access signature to grant a client limited permissions to objects in your storage account for a specified time. You don't have to share your account access keys. The shared access signature is a URI that encompasses in its query parameters all the information necessary for authenticated access to a storage resource. To access storage resources with the shared access signature, the client only needs to pass in the shared access signature to the appropriate constructor or method. For more information about shared access signatures, see Shared access signatures: Understand the shared access signature model.
Full document can be found here
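As an illustrative sketch of the "pass the shared access signature to the appropriate constructor" part (classic WindowsAzure.Storage SDK; the URI and token below are placeholders, not values from this question):
// A blob URI with a SAS token in its query string is all the client needs.
var sasUri = new Uri("https://<account>.blob.core.windows.net/<container>/<blob>?<sas_token>");
var blob = new CloudBlockBlob(sasUri);      // no account key involved
string content = blob.DownloadText();       // allowed only within the SAS permissions and expiry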

Unable to access a blob within a container with access type "Blob" from my HDInsight cluster

I am trying to access a blob within a container with access type "Blob" from my HDInsight cluster. But when I run:
hadoop fs -text wasb://mycontainer@***.blob.core.windows.net/file.csv
I get the following exception:
org.apache.hadoop.fs.azure.AzureException: Container ** in account **.blob.core.windows.net not found, and we can't create it using anonymous credentials, and no credentials found for them in the configuration.
So is this expected behavior, and I cannot access it with access type "Blob"? It works if the access type is "Container". Please note my storage account is not linked with the cluster, i.e. it is not configured as a default or additional storage account in the cluster.
This is a permissions issue. You need to add this storage account as an additional storage account to the cluster.
So is this expected behavior, and I cannot access it with access type "Blob"?
It is not the expected behavior if you just read data from a blob in a container with access type "Blob".
You have read-only permission to the blobs in a container whose access type is "Blob". We can get more info from the Azure tutorial:
Containers in the storage accounts that are connected to a cluster: Because the account name and key are associated with the cluster during creation, you have full access to the blobs in those containers.
Public containers or public blobs in storage accounts that are NOT connected to a cluster: You have read-only permission to the blobs in the containers.
Note
Public containers (Container) allow you to get a list of all blobs that are available in that container and get container metadata.
Public blobs (Blob) allow you to access the blobs only if you know the exact URL. For more information, see Restrict access to containers and blobs.
According to the exception you mentioned, I assume that you may be doing other operations against the container, such as listing the blobs in it or getting container metadata. These operations are not allowed with container access type "Blob", but they are allowed with access type "Container".
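The difference is easy to see in code. A minimal sketch (classic WindowsAzure.Storage SDK, placeholder URLs): anonymously reading a blob by its exact URL works under access type "Blob", while listing the container does not:
// Anonymous read by exact URL: works with container access type "Blob".
var blob = new CloudBlockBlob(new Uri("https://<account>.blob.core.windows.net/<container>/file.csv"));
string csv = blob.DownloadText();

// Anonymous listing requires access type "Container"; under "Blob" this call is denied,
// which is the kind of operation HDInsight runs into here.
var container = new CloudBlobContainer(new Uri("https://<account>.blob.core.windows.net/<container>"));
var blobs = container.ListBlobs().ToList();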

View SharedAccessBlobPolicy created programmatically - in Azure portal

I'm creating a container and then a Shared Access Signature for that container in code as so:
SharedAccessBlobPolicy policy = new SharedAccessBlobPolicy()
{
    Permissions = SharedAccessBlobPermissions.Write,
    SharedAccessExpiryTime = DateTime.UtcNow.AddHours(36)
};
var sas = container.GetSharedAccessSignature(policy, $"{id}-{DateTime.Now}");
That works fine.
However when I go into Azure portal I can't see a list of policies that have been created.
Does anyone know if this is possible and if so where/how?
Azure Portal offers very limited functionality for managing Storage Accounts. As of today, this functionality doesn't exist there.
What you could do is use any Storage Explorer available in the market (including Microsoft's own Storage Explorer - http://storageexplorer.com) and view access policies there.
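Stored access policies (as opposed to ad-hoc SAS tokens like the one above, which are never persisted by the service and therefore cannot be listed anywhere) can also be read back in code. A small sketch, assuming the same classic SDK and container object:
// Stored access policies live on the container and can be enumerated.
BlobContainerPermissions permissions = container.GetPermissions();
foreach (var policy in permissions.SharedAccessPolicies)
{
    Console.WriteLine($"{policy.Key}: {policy.Value.Permissions}, expires {policy.Value.SharedAccessExpiryTime}");
}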

Avoid over-writing blobs AZURE on the server

I have a .NET app which uses the WebClient and the SAS token to upload a blob to the container. The default behaviour is that a blob with the same name is replaced/overwritten.
Is there a way to change this on the server, i.e. prevent it from replacing an already existing blob?
I've seen "Avoid over-writing blobs AZURE", but it is about the client side.
My goal is to secure the server against overwriting blobs.
AFAIK the file is uploaded directly to the container, without a chance to intercept the request and check e.g. the existence of the blob.
Edited
Let me clarify: My client app receives a SAS token to upload a new blob. However, an evil hacker can intercept the token and upload a blob with an existing name. Because of the default behavior, the new blob will replace the existing one (effectively deleting the good one).
I am aware of different approaches to deal with the replacement on the client. However, I need to do it on the server, somehow even against the client (which could be compromised by the hacker).
You can issue the SAS token with "create" permissions, and without "write" permissions. This will allow the user to upload blobs up to 64 MB in size (the maximum allowed Put Blob) as long as they are creating a new blob and not overwriting an existing blob. See the explanation of SAS permissions for more information.
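A sketch of issuing such a token, assuming the classic WindowsAzure.Storage SDK and an existing CloudBlobContainer named container:
// Create-only SAS: new blobs can be uploaded, existing blobs cannot be overwritten.
string uploadOnlySas = container.GetSharedAccessSignature(new SharedAccessBlobPolicy()
{
    Permissions = SharedAccessBlobPermissions.Create,   // deliberately no Write
    SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddMinutes(15)
});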
There is no configuration on the server side, but you can implement the check in code using the storage client SDK:
// Retrieve reference to a previously created container.
var container = blobClient.GetContainerReference(containerName);
// Retrieve reference to a blob.
var blobReference = container.GetBlockBlobReference(blobName);
// If the blob already exists, do nothing; otherwise upload it.
// (Note: this check-then-upload is not atomic.)
if (!blobReference.Exists())
{
    blobReference.UploadFromFile(filePath);
}
You could do something similar using the REST API:
https://learn.microsoft.com/en-us/rest/api/storageservices/fileservices/blob-service-rest-api
Get Blob Properties will return 404 if the blob does not exist.
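For illustration only, that REST-level existence check can be approximated with a plain HEAD request (the URL-with-SAS is a placeholder; this is a sketch, not production code):
// HEAD on a blob URL maps to Get Blob Properties; 404 means the blob does not exist.
static bool BlobExists(string blobUrlWithSas)
{
    var request = System.Net.WebRequest.Create(blobUrlWithSas);
    request.Method = "HEAD";
    try
    {
        using (request.GetResponse())
            return true;    // 2xx => the blob exists
    }
    catch (System.Net.WebException ex)
        when ((ex.Response as System.Net.HttpWebResponse)?.StatusCode == System.Net.HttpStatusCode.NotFound)
    {
        return false;       // 404 => not found
    }
}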
Is there a way to change it on the server, i.e. prevents from replacing the already existing blob?
Azure Storage Services expose the Blob Service REST API for you to do operations against Blobs. For upload/update a Blob(file), you need invoke Put Blob REST API which states as follows:
The Put Blob operation creates a new block, page, or append blob, or updates the content of an existing block blob. Updating an existing block blob overwrites any existing metadata on the blob. Partial updates are not supported with Put Blob; the content of the existing blob is overwritten with the content of the new blob.
In order to avoid over-writing existing Blobs, you need to explicitly specify the Conditional Headers for your Blob Operations. For a simple way, you could leverage Azure Storage SDK for .NET (which is essentially a wrapper over Azure Storage REST API) to upload your Blob(file) as follows to avoid over-writing Blobs:
try
{
    var container = new CloudBlobContainer(new Uri($"https://{storageName}.blob.core.windows.net/{containerName}{containerSasToken}"));
    var blob = container.GetBlockBlobReference("{blobName}");
    //bool isExist = blob.Exists();
    blob.UploadFromFile("{filepath}", accessCondition: AccessCondition.GenerateIfNotExistsCondition());
}
catch (StorageException se)
{
    var requestResult = se.RequestInformation;
    if (requestResult != null)
        // 409: The specified blob already exists.
        Console.WriteLine($"HttpStatusCode:{requestResult.HttpStatusCode},HttpStatusMessage:{requestResult.HttpStatusMessage}");
}
Also, you could combine your blob name with the MD5 hash of the blob file before uploading it to Azure Blob Storage.
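A small sketch of that naming idea (the helper name is made up for illustration): derive part of the blob name from the file's MD5 hash, so uploads of different content never collide with an existing blob name:
// Builds a name like "report-9E107D9D372BB6826BD81D3542A419D6.pdf".
static string BlobNameFor(string filePath)
{
    using (var md5 = System.Security.Cryptography.MD5.Create())
    using (var stream = System.IO.File.OpenRead(filePath))
    {
        string hash = BitConverter.ToString(md5.ComputeHash(stream)).Replace("-", "");
        return $"{System.IO.Path.GetFileNameWithoutExtension(filePath)}-{hash}{System.IO.Path.GetExtension(filePath)}";
    }
}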
As far as I know, there is no configuration in the Azure Portal or the storage tools that achieves this on the server side. You could post your feedback to the Azure Storage team.
