I would like to allow multiple custom-defined host headers to access my Azure Blob Storage. Is there a way to do so?
(Case 1: Success, HTTP 200 response)
host header (storage.mydomain.com) --> abc.blog.blob.core.windows.net (custom domain: storage.mydomain.com)
(Case 2: Failed, HTTP 400 response)
host header (newstorage.mydomain.com) --> abc.blog.blob.core.windows.net (custom domain: storage.mydomain.com)
What should I do if I need to cater for Case 2?
P.S. I found no way to add more than one custom domain to a blob storage account in the Azure Management Portal.
I don't believe this is possible. You can map only one CNAME to your storage account, and even then only over HTTP. Keep in mind that mapping a CNAME to a CNAME won't work either (i.e. foo.domain.com -> mystorage.domain.com -> mystorage.blob.core.windows.net). If you were to do that, storage would see the first one ('foo') and would not be able to resolve which storage account was being accessed.
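For illustration, here is a minimal sketch of the two cases using Python's requests library; the container and blob names are placeholders, and the host names come from the question. Only the Host header that matches the registered custom domain resolves to the account:

import requests

# Placeholder container/blob path; host names taken from the question above.
blob_url = "http://abc.blog.blob.core.windows.net/mycontainer/image.png"

# Case 1: Host header matches the single custom domain registered on the account.
case1 = requests.get(blob_url, headers={"Host": "storage.mydomain.com"})

# Case 2: Host header is not registered on the account, so storage rejects it.
case2 = requests.get(blob_url, headers={"Host": "newstorage.mydomain.com"})

print(case1.status_code)  # expected 200 (assuming the blob is publicly readable)
print(case2.status_code)  # expected 400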
I have an Azure Storage account that hosts a static web site as explained here. This means the static web site "lives" in a storage container named $web. This web site is accessible via a custom domain. This is currently working as desired. However, there is one file that I want to restrict access to.
There is one file in the $web storage container that I only want individuals to access if a) they have a key and b) it's during a specific time window. My thinking was that I could accomplish this with a Shared Access Signature (SAS). However, while testing this approach, it doesn't seem to work. It seems that everything in the $web storage container is publicly visible whether a SAS has been generated or not. Is this correct?
Is there a way to require that a file in the $web storage container have an SAS? Or, do I need to "host" the file in a separate storage container (thus removing it from my custom domain)?
Thank you.
When you visit files stored in the $web container via the primary static website endpoint (for example, https://contosoblobaccount.z22.web.core.windows.net/index.html), the files are always accessible whether the container is public or private. So it doesn't matter whether a SAS token is specified or not.
The SAS token only takes effect if the $web container has private access and people visit it via the primary blob service endpoint (for example, https://contosoblobaccount.blob.core.windows.net/$web/index.html).
Please refer to this official doc for more details.
So for your purpose, you should put the file in another container with private access.
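For reference, a minimal sketch of generating such a SAS with the azure-storage-blob Python SDK (the account, key, container, and file names below are placeholders) against a private container via the blob service endpoint:

from datetime import datetime, timedelta
from azure.storage.blob import generate_blob_sas, BlobSasPermissions

# Placeholder values; replace with your own account, key, container, and blob.
account_name = "contosoblobaccount"
account_key = "<storage-account-key>"
container_name = "private-downloads"   # a private container, not $web
blob_name = "restricted-file.pdf"

sas_token = generate_blob_sas(
    account_name=account_name,
    container_name=container_name,
    blob_name=blob_name,
    account_key=account_key,
    permission=BlobSasPermissions(read=True),
    start=datetime.utcnow(),
    expiry=datetime.utcnow() + timedelta(hours=1),  # your specific time window
)

# Only this URL (blob service endpoint + valid SAS) grants access to the file.
print(f"https://{account_name}.blob.core.windows.net/{container_name}/{blob_name}?{sas_token}")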
I will try to keep this simple.
1) I have a VM: an NGINX server serving a web page (Azure VM).
2) I have a storage account with a blob container called web-images (Azure Storage).
This storage account is fully locked down to "Selected networks" only.
Question:
How can a page on the NGINX server point to the blob storage and get the file?
Example:
<img src="https://xxxxxx.blob.core.windows.net/web-images/demoImage-1.jpg" >
Thanks for the help.
The simple answer is that you can't. Because the storage account is restricted to selected networks, when a user accesses a resource in that account through a browser they are accessing it from their own network, and thus they will not be able to reach the file.
One option is to download the file from the storage account onto your server and then link to your server in your HTML pages, but that defeats the purpose of having the files in storage in the first place.
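If you do go that route, here is a minimal sketch of the server-side download with the azure-storage-blob Python SDK (the connection string and local path are placeholders; it works because the VM's network is one of the allowed "Selected networks"):

from azure.storage.blob import BlobServiceClient

# Placeholder connection string; the VM's VNet/IP must be allowed on the storage account.
service = BlobServiceClient.from_connection_string("<storage-connection-string>")
blob_client = service.get_blob_client(container="web-images", blob="demoImage-1.jpg")

# Save the blob to a directory that NGINX serves, e.g. /var/www/html/web-images/.
with open("/var/www/html/web-images/demoImage-1.jpg", "wb") as f:
    f.write(blob_client.download_blob().readall())

The page would then reference the local copy, e.g. <img src="/web-images/demoImage-1.jpg">, instead of the blob URL.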
I create the Storage account, CDN profile and CDN endpoint from PowerShell. But adding images to the storage account is a manual process after creating all the Azure components. Now we have the issue that images are not showing up on the page. When I try to access the CDN image URL directly, I get this error:
The requested URI does not represent any resource on the server
But I can access the content directly by using the blob storage URL, which confirms the content exists. I tried changing the caching rules, but nothing is working. I have a Standard Verizon CDN profile.
Any suggestions?
Update 1: When I delete the endpoint and recreate it with all the images already loaded in the storage account, everything works fine. Any idea what the expected behaviour is?
This error happens when you're using a "/" with the root container where the blob is present (sub-folders). For now "/" is not supported there; you can get around it by referencing the root container explicitly in the link, e.g.:
GET https://myaccount.blob.core.windows.net/$root/myphoto
When using the CDN, the format should look like the following:
http://<EndpointName>.azureedge.net/<myPublicContainer>/<BlobName>
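As a small illustration (the account, endpoint, container, and blob names are placeholders), the CDN URL keeps the container and blob path and only swaps the host:

# Placeholder names for illustration only.
account, endpoint = "myaccount", "myendpoint"
container, blob_name = "web-images", "demoImage-1.jpg"

origin_url = f"https://{account}.blob.core.windows.net/{container}/{blob_name}"
cdn_url = f"https://{endpoint}.azureedge.net/{container}/{blob_name}"

print(origin_url)  # served directly from Blob Storage
print(cdn_url)     # the same blob served through the CDN endpoint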
There is also a cool tutorial on how to host static sites via blobs and CDN worth checking out: https://blog.lifeishao.com/2017/05/24/serving-your-static-sites-with-azure-blob-and-cdn
Documentation:
You can get more info from these links: https://learn.microsoft.com/en-us/rest/api/storageservices/Working-with-the-Root-Container?redirectedfrom=MSDN
https://learn.microsoft.com/en-us/azure/cdn/cdn-create-a-storage-account-with-cdn
I need to load external data (in blob storage) to my Azure data warehouse using Polybase. I had it working fine when I was using Classic Azure Storage.
Recently, I had to update our storage to ARM, and I could not figure out how to set up the firewall rule on the ARM storage account for my Azure data warehouse. If I set the firewall to "All networks" everything works seamlessly. However, I cannot leave the blob wide open.
I tried using nslookup to find the outbound IP for our Azure data warehouse and put that value into the storage account's firewall; I got a "This request is not authorized to perform this operation." error.
Is there a way I can find the IP address of an Azure data warehouse? Or should I use a different approach to make it work?
Any Suggestions are appreciated.
Kevin
Under the section 1.1 Create a Credential, it states:
Don't skip this step if you are using this tutorial as a template for loading your own data. To access data through a credential, use the following script to create a database-scoped credential, and then use it when defining the location of the data source.
-- A: Create a master key.
-- Only necessary if one does not already exist.
-- Required to encrypt the credential secret in the next step.
CREATE MASTER KEY;
-- B: Create a database scoped credential
-- IDENTITY: Provide any string, it is not used for authentication to Azure storage.
-- SECRET: Provide your Azure storage account key.
CREATE DATABASE SCOPED CREDENTIAL AzureStorageCredential
WITH
IDENTITY = 'user',
SECRET = '<azure_storage_account_key>'
;
-- C: Create an external data source
-- TYPE: HADOOP - PolyBase uses Hadoop APIs to access data in Azure blob storage.
-- LOCATION: Provide Azure storage account name and blob container name.
-- CREDENTIAL: Provide the credential created in the previous step.
CREATE EXTERNAL DATA SOURCE AzureStorage
WITH (
TYPE = HADOOP,
LOCATION = 'wasbs://<blob_container_name>@<azure_storage_account_name>.blob.core.windows.net',
CREDENTIAL = AzureStorageCredential
);
Edit (an additional way to access blobs from ADW through the use of SAS):
You can also create a Storage linked service by using a shared access signature. It provides the data factory with restricted, time-bound access to all or specific resources (blob/container) in the storage account.
A shared access signature provides delegated access to resources in your storage account. You can use a shared access signature to grant a client limited permissions to objects in your storage account for a specified time. You don't have to share your account access keys. The shared access signature is a URI that encompasses in its query parameters all the information necessary for authenticated access to a storage resource. To access storage resources with the shared access signature, the client only needs to pass in the shared access signature to the appropriate constructor or method. For more information about shared access signatures, see Shared access signatures: Understand the shared access signature model.
The full document can be found here.
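As a sketch of that approach using the azure-storage-blob Python SDK (the account, key, and container names are placeholders), a read/list SAS scoped to the container holding the files to load can be generated like this and then supplied to the linked service or credential as described in the document above:

from datetime import datetime, timedelta
from azure.storage.blob import generate_container_sas, ContainerSasPermissions

# Placeholder values; replace with your own account, key, and container.
sas_token = generate_container_sas(
    account_name="<azure_storage_account_name>",
    container_name="<blob_container_name>",
    account_key="<azure_storage_account_key>",
    permission=ContainerSasPermissions(read=True, list=True),
    expiry=datetime.utcnow() + timedelta(hours=8),  # time-bound access
)
print(sas_token)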
Is there any way to assign custom domains to an Azure container, but not to the whole account?
For instance, I have two containers: /images and /downloads
I want to put a custom domain for each one like:
images.mydomain.com -> blob.core.windows.net/images
downloads.mydomain.com -> blob.core.windows.net/downloads
I searched but did not find any results about this. Thanks!
Based on the documentation here: http://www.windowsazure.com/en-us/manage/services/storage/custom-dns-storage/, while it is possible to map a custom domain to a storage account, I don't think it is possible to map a custom domain to a blob container.
What you could do is have a subdomain like assets.yourdomain.com, map it to the storage account, and then have two blob containers called "images" and "downloads" in there so that the blob containers work like virtual directories.
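As an illustration of that layout (the subdomain and file names are placeholders), each container then simply becomes the first path segment under the single mapped custom domain:

# Placeholder subdomain mapped via CNAME to the storage account.
custom_domain = "assets.yourdomain.com"

def blob_url(container: str, blob_name: str) -> str:
    # Custom storage domains were HTTP-only at the time, hence http://.
    return f"http://{custom_domain}/{container}/{blob_name}"

print(blob_url("images", "logo.png"))       # http://assets.yourdomain.com/images/logo.png
print(blob_url("downloads", "setup.zip"))   # http://assets.yourdomain.com/downloads/setup.zip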