Is there any way to assign custom domains to individual Azure blob containers, rather than to the whole storage account?
For instance, I have two containers: /images and /downloads
I want to put a custom domain for each one like:
images.mydomain.com -> blob.core.windows.net/images
downloads.mydomain.com -> blob.core.windows.net/downloads
I searched but did not find any results about this. Thanks!
Based on the documentation here: http://www.windowsazure.com/en-us/manage/services/storage/custom-dns-storage/, while it is possible to map a custom domain to a storage account, I don't think it is possible to map a custom domain to a blob container.
What you could do is use a subdomain like assets.yourdomain.com, map it to the storage account, and then create two blob containers called "images" and "downloads" in it, so that the blob containers work like virtual directories.
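For illustration, here is a minimal sketch (assuming a hypothetical assets.yourdomain.com CNAME pointing at the storage account's blob endpoint) of how the two containers end up addressed as path segments under the single custom domain:

```python
# Sketch: one custom domain maps to the whole storage account; containers
# then appear as the first path segment, acting like virtual directories.
# "assets.yourdomain.com" is a hypothetical CNAME to the account's
# blob.core.windows.net endpoint.

CUSTOM_DOMAIN = "assets.yourdomain.com"

def blob_url(container: str, blob_name: str) -> str:
    """Build the public URL for a blob under the custom domain."""
    return f"https://{CUSTOM_DOMAIN}/{container}/{blob_name}"

print(blob_url("images", "logo.png"))      # https://assets.yourdomain.com/images/logo.png
print(blob_url("downloads", "setup.zip"))  # https://assets.yourdomain.com/downloads/setup.zip
```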
I have a quick question regarding filter prefixes for lifecycle management on an Azure Storage account (GPv2).
The scenario is this: I have a blob container whose subdirectories are created dynamically by a function that pushes blobs depending on certain conditions, so the directory structure depends on that logic.
The problem I want to solve is that I want to delete the blobs after 7 days.
In the documentation for lifecycle management it says that I can set a filter prefix for which container I want to apply the "retention rule" for, so to speak.
So the question related to what I'm trying to do is the following:
When setting the filter prefix for a blob container to "containerName/", as the documentation says to do, will it also look in the subfolders?
In the Microsoft documentation it says:
"A prefix match string like container1/ applies to all blobs in the
container named container1."
Does that also include all the blobs in all the subfolders automatically, or do I have to specify each subfolder after the slash, as it says further down in the same part of the documentation?
I would like to include all blobs in that first container regardless if they are in subfolders or not as the subfolders are created dynamically as mentioned before.
Does that also include all the blobs in all the subfolders
automatically, or do I have to specify each subfolder after the slash,
as it says further down in the same part of the documentation?
Yes, when you set the prefix to the container name, all blobs (including those in subfolders) will be considered, so you don't need to specify subfolders explicitly.
You would specify a subfolder in the prefix only when you want lifecycle management to manage blobs inside that specific subfolder.
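As a sketch, assuming a placeholder container name, a lifecycle rule for the 7-day deletion could look like the JSON below; the short check underneath illustrates why dynamically created subfolders are covered: the prefix is a plain string match against the full blob path.

```python
import json

# Sketch of a lifecycle rule deleting blobs 7 days after last modification.
# "containerName" is a placeholder for the real container.
policy = {
    "rules": [
        {
            "enabled": True,
            "name": "delete-after-7-days",
            "type": "Lifecycle",
            "definition": {
                "filters": {
                    "blobTypes": ["blockBlob"],
                    "prefixMatch": ["containerName/"],
                },
                "actions": {
                    "baseBlob": {"delete": {"daysAfterModificationGreaterThan": 7}}
                },
            },
        }
    ]
}
print(json.dumps(policy, indent=2))

# The prefix is matched against the full blob path (container + blob name),
# so blobs in dynamically created subfolders match automatically.
prefix = "containerName/"
blobs = [
    "containerName/2023-01-01/run1/output.json",  # hypothetical blob paths
    "containerName/sub/deeper/file.bin",
    "otherContainer/file.txt",
]
matched = [b for b in blobs if b.startswith(prefix)]
print(matched)
```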
I have an Azure Storage account on which I'm enabling lifecycle management.
Is it possible to set filter at container level?
eg Container/folder1/x.txt
Container/folder2/y.txt
I want the lifecycle management filter to be applied at the container level instead of filtering by blob name prefix.
The only container-level filtering works like this: suppose you have three containers, container-1, container-2, and container3. If you set the filter to container-, the policy is applied only to container-1 and container-2.
So in your case, you can directly set the filter to the container name, e.g. Container/.
In other words, to apply filtering at the container level, simply specify the container name as the blob prefix.
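A small sketch of the matching behavior described above (container and blob names are illustrative):

```python
# A lifecycle prefix is a plain string match on the start of the full blob
# path, so "container-" covers container-1 and container-2 but not container3.
prefix = "container-"
blob_paths = [
    "container-1/folder1/x.txt",
    "container-2/folder2/y.txt",
    "container3/z.txt",
]
covered = [p for p in blob_paths if p.startswith(prefix)]
print(covered)

# To scope a rule to exactly one container, use the container name itself:
exact = [p for p in blob_paths if p.startswith("container-1/")]
print(exact)
```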
I have an Azure Storage account that hosts a static web site as explained here. This means the static web site "lives" in a storage container named $web. This web site is accessible via a custom domain. This is currently working as desired. However, there is one file that I want to restrict access to.
There is one file in the $web storage container that I only want individuals to access if a) they have a key and b) it's during a specific time window. My thinking was that I could accomplish this with a Shared Access Signature (SAS). However, while testing this approach, it doesn't seem to work. It seems that everything in the $web storage container is publicly visible whether a SAS has been generated or not. Is this correct?
Is there a way to require that a file in the $web storage container have an SAS? Or, do I need to "host" the file in a separate storage container (thus removing it from my custom domain)?
Thank you.
When you access files stored in the $web container via the primary static website endpoint (for example, https://contosoblobaccount.z22.web.core.windows.net/index.html), the files are always accessible whether the container is public or private, so it doesn't matter whether a SAS token is specified.
The SAS token only takes effect if the $web container has private access and the files are visited via the primary blob service endpoint (for example, https://contosoblobaccount.blob.core.windows.net/$web/index.html).
Please refer to this official doc for more details.
So for your purpose, you should put the file in another container with private access.
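To sketch that alternative: the file goes in a private container and is reached through the blob endpoint with a SAS token appended. The account and container names and the token below are made up; a real token is generated and signed by Azure tooling (the portal, Azure CLI, or the azure-storage-blob SDK). The helper just shows how the st (start) and se (expiry) parameters carried in the token bound the access window:

```python
from datetime import datetime, timezone
from urllib.parse import parse_qs

ACCOUNT = "contosoblobaccount"   # hypothetical account name
CONTAINER = "private-downloads"  # private container holding the restricted file
BLOB = "report.pdf"
# Made-up, unsigned example token; real ones carry a valid sig.
SAS = "st=2024-01-01T00:00:00Z&se=2024-01-08T00:00:00Z&sp=r&sig=FAKE"

def sas_url() -> str:
    """Blob service endpoint URL with the SAS appended."""
    return f"https://{ACCOUNT}.blob.core.windows.net/{CONTAINER}/{BLOB}?{SAS}"

def within_window(token: str, now: datetime) -> bool:
    """Check the st (start) / se (expiry) window carried in the token."""
    q = parse_qs(token)
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    start = datetime.strptime(q["st"][0], fmt).replace(tzinfo=timezone.utc)
    end = datetime.strptime(q["se"][0], fmt).replace(tzinfo=timezone.utc)
    return start <= now <= end

print(sas_url())
print(within_window(SAS, datetime(2024, 1, 3, tzinfo=timezone.utc)))  # inside the window
print(within_window(SAS, datetime(2024, 2, 1, tzinfo=timezone.utc)))  # after expiry
```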
Is there a way to generate a SAS token or policy for a virtual path within blob container ?
E.g. I have a blob container called mycontainer, which contains the following blobs:
FolderA/PathA/file.pdf
FolderA/PathA/file2.mpg
FolderA/PathC/file.doc
FolderB/PathA/file.pdf
I want to generate SAS token such that the client/application can perform operations inside of FolderA only within container mycontainer
Is that possible ?
Alternate approach is either to
a) Create a list of SAS tokens for each file (i.e. blockblob) within FolderA
b) Re-design such that FolderA is a blob container instead
I want to generate SAS token such that the client/application can
perform operations inside of FolderA only within container mycontainer
Is that possible ?
No, it is not possible, because a folder inside a blob container is a virtual entity. Azure Blob Storage supports only a two-level hierarchy: containers and blobs. A folder is simply a prefix in a blob's name.
Both of the solutions you mentioned are good alternatives, and you can use either depending on your use case. My recommendation would be approach (b), as it provides nice isolation: each user gets their own container where they can save their own files.
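To make the "folder is just a prefix" point concrete, and to sketch alternative (a): enumerating the blobs under FolderA/ gives the set that would each need an individual SAS (blob names taken from the question):

```python
# A "folder" is only a prefix in the blob name, so selecting FolderA means
# filtering the flat blob list by the "FolderA/" prefix.
blobs = [
    "FolderA/PathA/file.pdf",
    "FolderA/PathA/file2.mpg",
    "FolderA/PathC/file.doc",
    "FolderB/PathA/file.pdf",
]
folder_a = [b for b in blobs if b.startswith("FolderA/")]
# Under approach (a), each of these blobs would get its own SAS token,
# e.g. via azure.storage.blob.generate_blob_sas (not called here).
print(folder_a)
```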
I would like to allow multiple custom defined host headers to access my Azure Blob Storage. Is there a way to do so?
(Case-1: Success, http-200 responsed)
host header (storage.mydomain.com) --> abc.blog.blob.core.windows.net (custom domain: storage.mydomain.com)
(Case-2: Failed, http-400 responsed)
host header (newstorage.mydomain.com) --> abc.blog.blob.core.windows.net (custom domain: storage.mydomain.com)
What should I do if I need to cater "Case-2"?
p.s. I found no way to add more than one custom domain to a blob storage in Azure Management Portal.
I don't believe this is possible. You can map only one CNAME to your storage account, and even then only over HTTP. Keep in mind that mapping a CNAME to a CNAME won't work either (i.e. foo.domain.com -> mystorage.domain.com -> mystorage.blob.core.windows.net). If you were to do that, storage would see the first one ('foo') and would not be able to resolve which storage account was being accessed.