SAS for CloudBlobDirectory or path within a container - azure

Is there a way to generate a SAS token or policy for a virtual path within a blob container?
For example, I have a blob container called mycontainer. Inside it I have the following blobs:
FolderA/PathA/file.pdf
FolderA/PathA/file2.mpg
FolderA/PathC/file.doc
FolderB/PathA/file.pdf
I want to generate a SAS token such that the client/application can perform operations only inside FolderA within the container mycontainer.
Is that possible?
An alternative approach would be to either
a) create a list of SAS tokens, one for each file (i.e. block blob) within FolderA, or
b) re-design so that FolderA is a blob container instead.

I want to generate a SAS token such that the client/application can
perform operations only inside FolderA within the container mycontainer.
Is that possible?
No, it is not possible, because a folder inside a blob container is a virtual entity. Azure Blob Storage supports only a two-level hierarchy: container and blob. A folder is simply a prefix in a blob's name.
Both of the solutions you mentioned are good alternatives, and you can use either of them depending on your use case. My recommendation would be approach (b), as it gives nice isolation for individual users, in the sense that each user gets his or her own container where they can save their own files.
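If you go with approach (a), a minimal sketch along these lines should work with the classic Microsoft.WindowsAzure.Storage SDK. The "FolderA/" prefix, read-only permission, and 30-minute expiry below are illustrative assumptions, and container is taken to be an existing CloudBlobContainer reference:
// List every blob under the virtual folder "FolderA/" and issue a read-only SAS per blob.
var sasUris = new List<string>();
foreach (IListBlobItem item in container.ListBlobs("FolderA/", useFlatBlobListing: true))
{
    if (item is CloudBlockBlob blob)
    {
        string sasToken = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Read,
            SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddMinutes(30) // assumed expiry
        });
        sasUris.Add(blob.Uri.AbsoluteUri + sasToken); // the SAS token already starts with '?'
    }
}
Each URI in sasUris then grants read access to exactly one file under FolderA until the expiry time.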

Related

Get shared access signature URI for ALL blobs in an azure blob container?

I am generating shared access signatures for blobs inside an azure blob container like so:
// Create a read-only SAS token for this blob, valid for 30 minutes.
string sasToken = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy()
{
    Permissions = SharedAccessBlobPermissions.Read,
    SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddMinutes(30)
});

// Append the SAS token (which starts with '?') to the blob URI.
return new Uri(blob.Uri, sasToken).AbsoluteUri;
This returns a URI string that we can ping to download an individual blob. Great, it works.
However, I need to potentially generate hundreds of these shared access signatures, for many different blobs inside the container. It seems very inefficient to loop through each blob and make this call individually each time.
I know that I can call:
container.GetSharedAccessSignature()
in a similar manner, but how would I use the container's SAS token to distribute SAS tokens for each individual blob inside the container?
Yes, you can.
Once you generate the container SAS token, it also works for each blob inside that container.
You just need to add the blob name to the URL, like below:
https://xxxxx/container/blob?container_sastoken
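As a rough sketch (classic SDK assumed; the read permission, 30-minute expiry, and the blobNames collection are placeholders, not values from the question), you could generate the container SAS once and append it to each blob's URI:
// One SAS token generated at the container level.
string containerSas = container.GetSharedAccessSignature(new SharedAccessBlobPolicy
{
    Permissions = SharedAccessBlobPermissions.Read,
    SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddMinutes(30)
});

// The same token authorizes every blob in the container, so just append it per blob.
var sasUris = new List<string>();
foreach (string blobName in blobNames) // blobNames: however you already enumerate your blobs
{
    CloudBlockBlob blob = container.GetBlockBlobReference(blobName);
    sasUris.Add(blob.Uri.AbsoluteUri + containerSas); // the token already starts with '?'
}
This way only one signature is computed, instead of hundreds of per-blob policies.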

Azure - Blob Storage - Mixing Custom Domain with SAS

I have an Azure Storage account that hosts a static web site as explained here. This means the static web site "lives" in a storage container named $web. This web site is accessible via a custom domain. This is currently working as desired. However, there is one file that I want to restrict access to.
There is one file in the $web storage container that I only want individuals to access if a) they have a key and b) it's during a specific time window. My thinking was that I could accomplish this with a Shared Access Signature (SAS). However, while testing this approach, it doesn't seem to work. It seems that everything in the $web storage container is publicly visible whether a SAS has been generated or not. Is this correct?
Is there a way to require that a file in the $web storage container have an SAS? Or, do I need to "host" the file in a separate storage container (thus removing it from my custom domain)?
Thank you.
When the files stored in the $web container are visited via the primary static website endpoint (for example, https://contosoblobaccount.z22.web.core.windows.net/index.html), they are always accessible, whether the container's access level is public or private, so it doesn't matter whether a SAS token is specified or not.
The SAS token only takes effect if the $web container has private access and people visit it via the primary blob service endpoint (for example, https://contosoblobaccount.blob.core.windows.net/$web/index.html).
Please refer to this official doc for more details.
So for your purpose, you should put the file in a separate container with private access.
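For example, a hedged sketch with the classic SDK (blobClient is assumed to be an existing CloudBlobClient, and the container name, blob name, and time window below are made up for illustration): keep the file in a private container and hand out a URL whose SAS is valid only for the window you want.
// The restricted file lives in a private container, reachable only with a SAS.
CloudBlobContainer privateContainer = blobClient.GetContainerReference("restricted-files"); // assumed name
CloudBlockBlob restrictedBlob = privateContainer.GetBlockBlobReference("report.pdf");       // assumed name

string sas = restrictedBlob.GetSharedAccessSignature(new SharedAccessBlobPolicy
{
    Permissions = SharedAccessBlobPermissions.Read,
    SharedAccessStartTime = new DateTimeOffset(2024, 6, 1, 9, 0, 0, TimeSpan.Zero),  // example window start
    SharedAccessExpiryTime = new DateTimeOffset(2024, 6, 1, 17, 0, 0, TimeSpan.Zero) // example window end
});

// Only people holding this URL (the "key") can read the file, and only during the window.
string downloadUrl = restrictedBlob.Uri.AbsoluteUri + sas;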

Store Item Level Blobs (Images) in Azure Blob Storage

I need to store multiple images linked to thousands of items in SQL Server. For each record there can be multiple images to store.
I will be using a single account for storing all documents, and within an account Blob Storage allows segregation only by containers.
Should I create thousands of containers, one per item, to separate the images? Or is the '/' notation recommended in this scenario (link below with details on using a forward slash to achieve hierarchy)? If images are stored with the '/' notation, can the name of the image be preserved while rendering, or when users access it, without the '/' in the name?
Creating an Azure Blob Hierarchy
Here is an example scenario:

ItemID   Images
1        A1.jpg; A2.jpg; A32018.jpg
2        B1.jpg; B2.jpg; A32018.jpg
3        C1.jpg; C2.jpg; A32018.jpg
As specified here, multiple items can have images with the same names, but they should be stored separately.
Azure Blob Storage has the hierarchy account/container/blob(s).
There is one account and one container (which you could think of as a folder), but a container can hold multiple blobs (folders or files). This means that whatever comes after the container, at whatever depth, is called a blob.
I still don't fully understand how you want to store the images, but maybe this will help.
If I have two images, image1.png and image2.png, they can be stored in the following ways.
Suppose the account is imagedatastore
and the container is imagescontainer.
You can keep both images in imagescontainer, such as
imagescontainer/image1.png
imagescontainer/image2.png
or you can create separate containers for the two images.
This is totally dependent on your segregation logic.
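One possible pattern, sketched with the classic SDK (the itemId, fileName, localPath, and container variables are assumed placeholders): prefix each blob name with the item ID to get the virtual hierarchy, and set Content-Disposition so users still see the plain file name when they download the image.
// Store the image under a virtual folder named after the item, e.g. "1/A32018.jpg".
CloudBlockBlob blob = container.GetBlockBlobReference($"{itemId}/{fileName}");

using (var stream = System.IO.File.OpenRead(localPath))
{
    blob.UploadFromStream(stream);
}

// Preserve the original file name (without the "itemId/" prefix) when the blob is served.
blob.Properties.ContentDisposition = $"attachment; filename=\"{fileName}\"";
blob.SetProperties();
This way two items can both have an A32018.jpg without colliding, since the full blob names differ.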

Creating a folder using Azure Storage Rest API without creating a default blob file

I want to create following folder structure on Azure:
mycontainer
-images
--2007
---img001.jpg
---img002.jpg
Now, one way is to use a PUT Blob request and upload img001.jpg, specifying the whole path as
PUT "mycontainer/images/2007/img001.jpg"
But I want to first create the folders images and 2007, and then upload the blob img001.jpg in a separate request.
Right now, when I try to do this using a PUT Blob request:
StringToSign:
PUT
x-ms-blob-type:BlockBlob
x-ms-date:Tue, 07 Feb 2017 23:35:12 GMT
x-ms-version:2016-05-31
/account/mycontainer/images/
HTTP URL
sun.net.www.protocol.http.HttpURLConnection:http://account.blob.core.windows.net/mycontainer/images/
It is creating a folder, but it's not empty: by default it creates an empty blob file without a name.
Now, a lot of people say we can't create an empty folder. But then how come we can do it using the Azure portal? The browser must be sending some kind of REST request to create the folder.
I think it has something to do with the Content-Type, i.e. x-ms-blob-content-type, which should be specified in order to tell Azure that it's a folder, not a blob.
But I am confused.
I want to first create the folders images and 2007 and then in a different request upload the blob img001.jpg
I agree with Brendan Green; currently, Azure Blob Storage only lets us create a virtual directory structure by naming blobs with path information in their names.
I think it has something to do with the Content-Type, i.e. x-ms-blob-content-type, which should be specified in order to tell Azure that it's a folder, not a blob. But I am confused.
You can check the description of the request headers that can be set for the Put Blob operation, and you will find that none of them supports creating an empty folder.
Besides, as Gaurav Mantri said, if you really want to create an empty folder structure without content, you could try Azure File storage, which can also be accessed through a REST API. The Create Directory operation can be used to create a new directory under the specified share or parent directory.
PUT https://myaccount.file.core.windows.net/myshare/myparentdirectorypath/mydirectory?restype=directory
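If it helps, here is a rough HttpClient sketch of that call. The account, share, and directory names come from the URL above; sasToken is an assumed share-level SAS with create permission, and the x-ms-version value is the one from your request:
// Azure Files "Create Directory": a PUT with restype=directory and an empty body.
string url = "https://myaccount.file.core.windows.net/myshare/myparentdirectorypath/mydirectory"
             + "?restype=directory&" + sasToken.TrimStart('?'); // sasToken is assumed

var request = new HttpRequestMessage(HttpMethod.Put, url);
request.Headers.Add("x-ms-version", "2016-05-31");
request.Content = new ByteArrayContent(new byte[0]); // Content-Length: 0

HttpResponseMessage response = await new HttpClient().SendAsync(request);
// 201 Created means the directory now exists, and it really is empty.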
This is not possible - the folder structure is virtual only.
See Get started with Azure Blob storage using .NET. You can only create a container, and everything else held in that container is a blob.
Excerpt:
As shown above, you can name blobs with path information in their
names. This creates a virtual directory structure that you can
organize and traverse as you would a traditional file system. Note
that the directory structure is virtual only - the only resources
available in Blob storage are containers and blobs.
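To illustrate that excerpt with the classic SDK: simply uploading a blob whose name contains the path is enough for the folders to appear; nothing has to be created first (container is an existing CloudBlobContainer, and the local file path is made up):
// Uploading "images/2007/img001.jpg" implicitly creates the virtual folders
// "images" and "2007"; there is no separate create-folder call for blobs.
CloudBlockBlob blob = container.GetBlockBlobReference("images/2007/img001.jpg");
using (var stream = System.IO.File.OpenRead(@"C:\photos\img001.jpg")) // assumed local path
{
    blob.UploadFromStream(stream);
}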

Azure - is one 'block blob' seen as one file?

Question background:
This may be a simple question, but I can't find an answer to it. I've just started using Azure Storage (for storing images) and want to know whether one 'blob' holds a maximum of one file.
This is my container called fmfcpics:
Within the container I have a block blob named myBlob and within this I have one image:
Through the following code, if I upload another image file to the myBlob block blob, it overwrites the image already in there:
CloudBlockBlob blockBlob = container.GetBlockBlobReference("myblob");
using (var fileStream = System.IO.File.OpenRead(@"C:\Users\Me\Pictures\Image1.jpg"))
{
    // Uploading to the same blob name replaces whatever content the blob already holds.
    blockBlob.UploadFromStream(fileStream);
}
Is this overwriting behavior correct? Or should I be able to store multiple files in myBlob?
Each blob is a completely separate entity, directly addressable via a URI:
http(s)://storageaccountname.blob.core.windows.net/containername/blobname
If you want to manage multiple entities (such as image JPGs in your case), you would upload each one to a separate blob name (and you're free to store as many as you want within a single container, and you may have as many containers as you want).
Note: These are block blobs. There are also page blobs, which have random-access capability; this is the basis for VHD storage (and in that case, the VHD would have a formatted file system within it, with multiple files).
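For example (a sketch with the classic SDK; the blob names and local paths are illustrative), giving each image its own blob name means neither upload overwrites the other:
// Two separate blobs in the same container, one per image.
CloudBlockBlob first = container.GetBlockBlobReference("Image1.jpg");
using (var stream = System.IO.File.OpenRead(@"C:\Users\Me\Pictures\Image1.jpg"))
{
    first.UploadFromStream(stream);
}

CloudBlockBlob second = container.GetBlockBlobReference("Image2.jpg");
using (var stream = System.IO.File.OpenRead(@"C:\Users\Me\Pictures\Image2.jpg"))
{
    second.UploadFromStream(stream);
}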
The Azure Blob storage documentation explains how the Blob service works and the concepts behind this storage service.
In a few minutes you can be using the service easily.
