How to set up Azure CDN

How can I best use Azure CDN to enhance the reliability of access to Azure Storage?
Hi,
In our current system implementation, we use an Azure storage account to store some essential information for the system. The storage account has therefore become a single point of failure. To enhance the reliability of this mechanism, I am considering putting Azure CDN in front of the storage account.
Since I am new to this product, I am wondering what the best practice is here. I also have a few questions.
I understand Azure CDN can provide a cache in front of the storage, but what if I update the blobs while the cached content has not yet expired? How do I force the CDN to bypass the cache and fetch the up-to-date content? Could I set up the CDN to detect updates in the storage and cache the new content?
I also need to confirm: if I create the content delivery network (CDN) in a resource group located in East US, the CDN is not only available in that region, which means that if Azure East US went down, we could still access the endpoint, served by an edge node in a different region.
There is also another approach I can think of: Azure Storage geo-redundant storage (GRS). In this case, I could simply add a try/catch to the code; whenever I get a failure from the original storage endpoint, I route to the GRS endpoint and issue the same GET.
Which way do you think is better?

The ideal solution would be to use the CDN with a read-access geo-redundant (RA-GRS) storage account. The CDN is a global resource deployed at each of the edge locations, providing far better caching and acceleration.
You can purge the CDN cache every time you update a storage blob. Normally this is done from your CI/CD pipeline at deployment time.
Azure DevOps already has a built-in task for this, or you can search for various other ways to purge the cache.
Using the Azure portal:
https://learn.microsoft.com/en-us/azure/cdn/cdn-purge-endpoint
Script:
https://gallery.technet.microsoft.com/scriptcenter/Purge-Azure-CDN-endpoint-042fb00d
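For completeness, here is a minimal sketch of issuing the same purge from code with the Azure SDK for Python. It assumes the azure-identity and azure-mgmt-cdn packages (and the begin_purge_content call of the latter); the subscription, resource group, profile, and endpoint names are placeholders:

```python
# Minimal sketch: purge cached CDN paths after updating the underlying blobs.
# Assumes azure-identity and azure-mgmt-cdn; all names below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.cdn import CdnManagementClient

credential = DefaultAzureCredential()
client = CdnManagementClient(credential, subscription_id="<subscription-id>")

# Purge everything under /assets; "/*" would purge the entire endpoint.
poller = client.endpoints.begin_purge_content(
    resource_group_name="my-rg",
    profile_name="my-cdn-profile",
    endpoint_name="my-endpoint",
    content_file_paths={"content_paths": ["/assets/*"]},
)
poller.wait()  # the purge runs asynchronously; block until it completes
```

The Azure DevOps task and the linked script ultimately drive this same purge operation.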

Related

Azure Storage Total Requests High

I signed up for Azure Storage the other day. I noticed today, when I went into the Azure portal, that there are about 500 requests per hour to Table Storage. The strange thing is that I'm not using Table Storage and my site isn't live at the moment. So what could possibly be making all these requests? Any ideas?
Azure Storage has a feature called Storage Analytics which performs logging and provides metrics data for a storage account. This data gets stored in the same storage account under special tables (starting with $, e.g. $MetricsCapacityBlob). By default some analytics data is collected, and this is why you're seeing these requests.
One way to check the transactions is by exploring the contents of the $logs blob container. It will tell you in detail where the requests to your storage account originate from.
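If you want to script that check, here is a minimal sketch using the Azure SDK for Python (assuming the azure-storage-blob package; the account name and key are placeholders):

```python
# Minimal sketch: list and peek at the analytics logs in the $logs container.
# Assumes azure-storage-blob; account name and key are placeholders.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient(
    account_url="https://<account>.blob.core.windows.net",
    credential="<account-key>",
)

logs = service.get_container_client("$logs")
for blob in logs.list_blobs():  # log blobs are organized by service and date
    print(blob.name)
    text = logs.download_blob(blob.name).readall().decode("utf-8")
    print(text[:500])  # each line records one request, including its origin
```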
OK, mystery solved. It turns out it's the actual Azure portal that is generating the traffic. I originally thought it was the SDK somehow making the calls, but then I had the website turned off and the portal open, and it continued making requests. When I closed the portal for a while, there were no requests.

Can you use Azure CDN without having to upload the files to Azure storage?

I have a website where I would like to cache the few images/stylesheets/javascript-files I have. Is it possible to have Azure CDN point directly on the files on my server, and then cache them, instead of having to upload them to an Azure storage?
It's not possible. Azure will not allow you to configure an arbitrary domain as the origin domain for origin content pull. The only available targets are an existing Azure website, cloud service, or storage account.
Let us discuss your desired end goal.
If you want to improve your caching with CDN-related functionality under the same domain name, take a look at Cloudflare.
However, if you were going to separate your content into a CDN domain and an application domain, you could look at expanding the following MSDN sample. The idea is that, as a deployment step, you upload all the CDN-related content to the Azure storage account (a sketch of that step follows the link below).
https://code.msdn.microsoft.com/windowsazure/Synchronizing-Files-to-a14ecf57
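The gist of that deployment step is easy to reproduce; here is a minimal sketch with the Azure SDK for Python (assuming the azure-storage-blob package; the connection string, container name, and local folder are placeholders):

```python
# Minimal sketch: upload local static assets to the blob container that
# backs the CDN, as a deployment step. All names below are placeholders.
import os
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")
container = service.get_container_client("cdn-content")

local_root = "static"
for dirpath, _dirs, files in os.walk(local_root):
    for name in files:
        path = os.path.join(dirpath, name)
        blob_name = os.path.relpath(path, local_root).replace(os.sep, "/")
        with open(path, "rb") as data:
            # overwrite=True so redeployments replace stale assets
            container.upload_blob(blob_name, data, overwrite=True)
```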

Azure storage locality

I am somewhat confused by Azure storage accounts. I do not understand why a storage account can't span multiple geo-locations, and why a request can't automatically be handled by a geo-local Azure storage instance.
To make it clear, consider the setup below:
I have two data centers, West US and East Europe, each with web servers and blob storage; the web servers are stateless.
For example:
Region West US: webserver1, Blob1
Region East Europe: webserver2, Blob2
I want my East Europe webserver2 to access Blob2 in East Europe and my West US webserver1 to access Blob1 in West US, due to geo-locality.
I do not want webserver1 to access Blob2 because of the extra latency, unless Blob1 is inaccessible.
But Blob1 and Blob2 are in different regions, so they have different URLs and access keys, and I do not see an easy way to achieve what I want.
I know there is Azure Traffic Manager, but it looks like it only supports "Cloud Services" and "Websites", not to mention there is also the access key problem.
So, my question, am I doing something wrong?
Thanks in advance!
Blobs are accessible via REST APIs, so it should not matter where your web server is; you can reference the dependent blobs using the appropriate blob's URI. One thing you do of course have to do is ensure the blob is actually publicly accessible.
Of course they will have different URLs and access keys, and you will need separate configuration in webserver1 and webserver2 to access the two blobs accordingly.
A completely different thing is Azure CDN. I mention it because you were referring to a Traffic Manager-like mechanism for Azure Storage. The CDN is not exactly that, but it comes to mind as something that might be relevant for you.
You can make these blobs the origin for the CDN, and the CDN will cache their contents at different edge servers. In your web application, instead of directly accessing the blob URL, you access the CDN URL, and the CDN decides which edge server the requested content (blob) is served from (see the sketch after the link below).
Take a look at https://azure.microsoft.com/en-in/documentation/articles/cdn-serve-content-from-cdn-in-your-web-application/
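In the application itself, the switch is typically just a matter of which base URL you build asset links from. A minimal sketch (the endpoint and account names are placeholders; azureedge.net is the default CDN endpoint domain):

```python
# Minimal sketch: build asset URLs against the CDN endpoint instead of the
# blob origin. Endpoint and account names are placeholders.
CDN_BASE = "https://myendpoint.azureedge.net"            # CDN endpoint
ORIGIN_BASE = "https://myaccount.blob.core.windows.net"  # blob origin

def asset_url(container: str, blob_name: str, use_cdn: bool = True) -> str:
    """Build the public URL for a blob, preferring the CDN edge."""
    base = CDN_BASE if use_cdn else ORIGIN_BASE
    return f"{base}/{container}/{blob_name}"

print(asset_url("images", "logo.png"))
# -> https://myendpoint.azureedge.net/images/logo.png
```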

Azure - Multiple Cloud Services, Single Storage Account

I want to create a couple of cloud services - Int, QA, and Prod. Each of these will connect to a separate DB.
Do these cloud services require "storage accounts"? Conceptually the cloud services have executables and they must be physically located somewhere.
Note: I do not use any blobs/queues/tables.
If so, must I create three separate storage accounts, or can I link them all to one?
Storage accounts are more like storage namespaces: each has a URL and a set of access keys. You can use storage from anywhere, whether from the cloud or not, from one cloud service or many.
As #sharptooth pointed out, you need storage for diagnostics with cloud services. You also need it for attached disks (Azure Drives for cloud services) and for deployments themselves (storing the cloud service package and configuration).
Storage accounts themselves are free: create a bunch, and you still only pay for consumption.
There are some objective reasons why you'd go with separate storage accounts:
You feel that you could exceed the advertised limit of 20,000 transactions/second for a single storage account (remember that storage diagnostics consume some of this transaction rate, depending on how aggressively you log).
You are concerned about security/isolation. You may want your dev and QA folks using an entirely different subscription altogether, with their own storage accounts, to avoid any risk of damaging a production deployment.
You feel that you'll exceed 500 TB (the limit of a single storage account).
Azure Diagnostics uses Azure Table Storage under the hood (it's more convenient to use one storage account per service, but it's not required). Other dependencies your service has might also use some of the Azure Storage services. If you're sure that you don't need Azure Storage (and so you don't need persistent storage of data dumped through Azure Diagnostics), then you can go without it.
The service package of your service will be stored and managed by Azure infrastructure - that part doesn't require a storage account.

Can I use Azure Storage geo-replication as source?

I know Azure will geo-replicate a copy of the current storage account to another location.
My question is: can I access that other location programmatically, even just read-only?
I ask because this would allow me to build another deployment in a different geo-location for performance and disaster resilience, much like Azure itself does. With the current setup, if I use the same source storage from a different geo-location, I have to pay extra bandwidth costs.
You can only access your storage account by its primary name. In the event of failover, that name will be mapped to the alternate datacenter. You cannot access the failover storage directly, nor can you choose when to trigger a failover. For a multi-site setup as you described, you'd need to duplicate your data (which would then add the cost of storage in datacenter #2). This does give you ultimate flexibility in your DR and performance planning, but at an added cost of storage and bandwidth (egress-only).
Last week the storage team announced read-only access to the failover storage: Windows Azure Storage Redundancy Options and Read Access Geo Redundant Storage.
This means you can now deploy your application in a different datacenter, which can be used for "full" failover (meaning the storage will also be available there). Even though the secondary is read-only, your application will still be online - simply in "degraded" mode.
The steps on how you can implement this with traffic manager are described here: http://fabriccontroller.net/blog/posts/adding-failover-to-your-application-with-read-access-geo-redundant-storage-and-the-windows-azure-traffic-manager/
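A minimal sketch of that read fallback with the Azure SDK for Python (assuming the azure-storage-blob package; the account, container, and blob names are placeholders, and the "-secondary" hostname suffix is the documented RA-GRS convention):

```python
# Minimal sketch: read a blob from the RA-GRS secondary endpoint when the
# primary is unreachable. All names below are placeholders.
from azure.core.exceptions import AzureError
from azure.storage.blob import BlobClient

PRIMARY = "https://myaccount.blob.core.windows.net"
SECONDARY = "https://myaccount-secondary.blob.core.windows.net"

def read_blob(container: str, name: str, credential: str) -> bytes:
    for base in (PRIMARY, SECONDARY):
        try:
            blob = BlobClient(account_url=base, container_name=container,
                              blob_name=name, credential=credential)
            return blob.download_blob().readall()
        except AzureError:
            continue  # primary failed; fall back to the read-only secondary
    raise RuntimeError("blob unavailable from both primary and secondary")
```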
