Azure storage locality

I am somewhat confused by Azure storage accounts. I do not understand why a storage account can't have multiple geo-locations, and why a request can't be automatically handled by a geo-local Azure storage endpoint.
To make it clear, consider the setup below:
I have two data centers, West-US and East-Europe, each with web servers and blob storage; the web servers are stateless.
For example:
Region West-US: webserver1, Blob1
Region East-Europe: webserver2, Blob2
I want my East-Europe webserver2 to access "Region East-Europe Blob2" and my West-US webserver1 to access "Region West-US Blob1", due to geo-locality.
I do not want webserver1 to access Blob2, because of the extra latency, unless Blob1 is inaccessible.
But Blob1 and Blob2 are in different regions, so they have different URLs and access keys, and I do not see an easy way to achieve what I want.
I know there is Azure Traffic Manager, but it looks like it only supports "Cloud Services" and "Web Sites", not to mention there is also the access key issue.
So, my question, am I doing something wrong?
Thanks in advance!

Blobs are accessible via REST APIs, so it should not matter where your web server is: you can reference the dependent blobs using the appropriate blob's URI. One thing you do of course have to do is ensure the blob is actually publicly accessible.
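As a rough sketch of the geo-local-with-fallback pattern from the question (in Python; the account names, container, and the use of the requests library are illustrative assumptions, not anything Azure provides out of the box):

```python
import requests

# Hypothetical per-region endpoints; each web server lists its own
# region's storage account first and the other region as a fallback.
BLOB_ENDPOINTS = [
    "https://mywestus.blob.core.windows.net",      # local to webserver1
    "https://myeasteurope.blob.core.windows.net",  # remote fallback
]

def fetch_blob(container: str, blob_name: str) -> bytes:
    """Try the geo-local endpoint first; fall back only if it fails."""
    last_error = None
    for endpoint in BLOB_ENDPOINTS:
        try:
            resp = requests.get(f"{endpoint}/{container}/{blob_name}", timeout=5)
            resp.raise_for_status()
            return resp.content
        except requests.RequestException as err:
            last_error = err  # endpoint unreachable or blob missing; try next
    raise last_error
```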

Of course they will have different URLs and access keys, and you would use a separate code base in web server 1 and web server 2 to access these two blobs differently.
A completely different thing is Azure CDN. I mention it because you were referring to a traffic-manager-like mechanism for Azure storage. CDN is not exactly that, but it comes to mind as it might be relevant for you.
You can make these blobs the origin for the CDN, and the CDN will cache their contents at different edge servers. In your web application, instead of directly accessing the blob URL, you access the CDN URL, and the CDN will decide which edge server the requested content (blob) should be served from.
Take a look at https://azure.microsoft.com/en-in/documentation/articles/cdn-serve-content-from-cdn-in-your-web-application/
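The switch in the web application can be as simple as rewriting the origin URL to the CDN endpoint. A sketch, where "myaccount" and "myendpoint" are placeholder names:

```python
# Placeholder names: "myaccount" storage account, "myendpoint" CDN endpoint.
ORIGIN = "https://myaccount.blob.core.windows.net"
CDN = "https://myendpoint.azureedge.net"

def cdn_url(blob_url: str) -> str:
    """Rewrite a direct blob URL into its CDN-served equivalent."""
    return blob_url.replace(ORIGIN, CDN, 1)

print(cdn_url(f"{ORIGIN}/images/logo.png"))
# -> https://myendpoint.azureedge.net/images/logo.png
```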

Related

Uploading and accessing images with Azure

I want to upload some static images that I will later access via some mobile apps. I have an Azure Account that I rarely use so I thought that was the best place and therefore I uploaded them to a "File Share" within Azure Storage.
I naively thought I could then just access those files via a simple web request URL
https://myplace.file.core.windows.net/app/images/bnb/shop/bugle_200_2.jpg
All this gets me is a BadRequest error. I realize that I could create a Shared Access Signature (SAS) for every file, but that seems like total overkill.
Is there a better Azure feature to use? I do not want to have to use the Azure APIs to get at these files
Adding a few more points to @CtrlDot's excellent answer.
I completely agree that you should use Blob Storage for storing static content.
On the container permissions, I would actually recommend setting the permission (ACL) to Blob so that users can only view the blob they have the URL for and cannot enumerate all blobs in the container (setting the container ACL to Container will let users list the blobs in a container, which may not be the behaviour you want).
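For illustration, here is how that ACL looks with the current azure-storage-blob Python SDK (which postdates this answer); the connection string and container name are placeholders:

```python
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")

# public_access="blob": anonymous users can read a blob if they know its
# exact URL, but cannot list the other blobs in the container.
service.create_container("static-images", public_access="blob")
```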
Other than these, there are two distinct advantages of using Blob Storage:
Custom domain: You can map blob storage to a custom domain (e.g. staticcontent.mywebsite.com) and use that to serve the content instead of using the Azure Blob Storage standard endpoint (youraccount.blob.core.windows.net).
CDN: You can also CDN-enable your blob storage endpoint. The content will then be replicated across many CDN nodes spread throughout the globe and will be served from a node near your user, thus improving the user experience.
I think the service you should be looking to use is blob storage, not file storage. File storage, as per the documentation, is meant more for SMB shares.
When you set up Azure blob storage, you have a couple of different options. If there is nothing sensitive/secure about these static images, you could consider making a public container and simply accessing the files like that.
If you require authentication, then you need to use either Azure storage access keys or Azure storage shared access signatures (SAS). Of the two, SAS tokens are by far the more secure.
You wouldn't need to create a SAS token for each file; rather, you can grant read permission at the container level, as in the sketch below. Once again, you will have to tailor this to the security/sensitivity needs of your application.
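A sketch of that container-level grant, assuming the current azure-storage-blob Python SDK (account name, key, and container are placeholders):

```python
from datetime import datetime, timedelta, timezone
from azure.storage.blob import generate_container_sas, ContainerSasPermissions

# One read-only token for the whole container instead of one SAS per file.
sas = generate_container_sas(
    account_name="myaccount",
    container_name="images",
    account_key="<account-key>",
    permission=ContainerSasPermissions(read=True),
    expiry=datetime.now(timezone.utc) + timedelta(hours=24),
)

# The same token is appended to any blob URL in the container:
url = f"https://myaccount.blob.core.windows.net/images/bugle_200_2.jpg?{sas}"
```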

Can you use Azure CDN without having to upload the files to Azure storage?

I have a website where I would like to cache the few images/stylesheets/javascript-files I have. Is it possible to have Azure CDN point directly at the files on my server, and then cache them, instead of having to upload them to Azure storage?
It's not possible. Azure will not allow you to configure an arbitrary domain as the origin domain for origin content pull. The only available targets are an existing Azure Web Site, Cloud Service, or storage account.
Let us discuss your desired end goal.
If you want to improve your caching with CDN-like functionality on the same domain name, take a look at Cloudflare.
However, if you were going to separate your content into a CDN domain and an application domain, you could look at expanding the following MSDN sample. The idea of this sample is that, as a deployment step, you upload all the CDN-related content to the Azure storage account.
https://code.msdn.microsoft.com/windowsazure/Synchronizing-Files-to-a14ecf57
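The sample itself is .NET, but the deployment step boils down to walking a local folder and pushing every file into a container. A sketch of the equivalent in Python with the current SDK (paths and names are placeholders, and the container is assumed to exist):

```python
import os
from azure.storage.blob import ContainerClient

container = ContainerClient.from_connection_string(
    "<connection-string>", container_name="cdn-content"
)

# Mirror the local "static" folder into the container, preserving paths.
for root, _dirs, files in os.walk("static"):
    for name in files:
        path = os.path.join(root, name)
        blob_name = os.path.relpath(path, "static").replace(os.sep, "/")
        with open(path, "rb") as data:
            container.upload_blob(blob_name, data, overwrite=True)
```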

Strategy to minimize Azure storage outbound data costs

I am building a web site that (among other things) allows the user to upload photos via web api. The user images will be stored in Azure blob storage to be displayed in user albums and shared with social media. The site will be hosted as an Azure web site. I am eager to minimize data transfer costs. I understand that data transfer between an Azure web site and table/blob storage incurs no data transfer charge (as it is not considered "outbound"), while data requested from outside the Azure web site does. In response to this, I have 2 strategies for exposing the images to the browser:
1.) Via the URI to the image blob in azure storage e.g. with local storage account http://ipv4.fiddler:10000/devstoreaccount1/bcb2ad7581.jpg
2.) Via web api that downloads the image bytes from storage and returns them. e.g. with local host http://localhost:58559/api/image/bcb2ad7581.jpg
These are my assumptions. Direct-to-storage access (method 1 above) is more efficient. Accessing the images via web api (method 2 above) must incur overheads that direct access doesn't, right? Each web api request must consume an ASP.NET thread plus CPU cycles. Each web api image request being processed means one less thread available for the other web api resources on the site, which must be queued. On the other hand, any external site the image is shared with would add a data transfer cost (among other costs) for each image request if accessed via method 1.
So my strategy is to access the images within the site via a direct link to storage (method 1), e.g. when the user opens an album, all <img> tags have the Azure blob URI in their src attribute. However, when the user clicks the Facebook icon to share, I will provide a link to the image via web api (method 2). I realise the user can bypass all of that with plugins like the "PinIt" button etc., but that's OK.
I am only learning this stuff, so I could be way off.
Am I wrong about outbound transfer costs not being applied to Azure web sites? I don't think I am, but the whole pricing model is confusing, to say the least.
Is accessing blob storage from a browser HTML page with an <img> tag and src attribute considered outbound data transfer, even if the HTML page comes from an Azure website domain? I mean, is it only free when the server-side code accesses the storage, not the HTML client?
Is any data transfer cost saved via method 2 (if indeed there is one) simply cancelled out by a different cost associated with the web api method (like bandwidth cost)?
Am I wrong about the performance benefit of direct access to the blob storage, or possibly wrong about the overhead of the web api requests?
It is early days in the design, so I can dump Azure if I have to. I would rather not, though, as I think it is what I'm looking for. I don't want something for nothing and am happy to pay for the services I consume. Naturally, though, I don't want my ignorance to cost me.
I could do with your advice on this, and I truly appreciate your help.
To answer your questions:
Am I wrong about outbound transfer costs not being applied to Azure web sites?
Sadly, Yes :) Any data that goes out of an Azure Datacenter (DC) incurs an outbound transfer cost and that includes data served through your websites.
Is accessing blob storage from a browser HTML page with an <img> tag and src attribute considered outbound data transfer, even if the HTML page comes from an Azure website domain? I mean, is it only free when the server-side code accesses the storage, not the HTML client?
Yes. Remember, the browser consuming the data sits outside the Azure DC.
Is any data transfer cost saved via method 2 (if indeed there is one) simply cancelled out by a different cost associated with the web api method (like bandwidth cost)?
No. Because data eventually flows out of Azure DC (doesn't matter if it is via storage directly or via web api).
Am I wrong about the performance benefit of direct access to the blob storage, or possibly wrong about the overhead of the web api requests?
You will certainly get better performance by providing direct access to blob storage than by transferring data through the web api; going through the web api adds latency as well.
Solution Recommendation
For your application, may I recommend that you look at Shared Access Signature functionality offered by Azure Blob Storage. I believe this will significantly improve the performance of your application.
For uploads, you could create a SAS URL with upload permission and have your web application upload files directly to blob storage. That way the upload data won't be routed through your servers. I wrote some blog posts on the same topic which you may find useful:
http://gauravmantri.com/2013/02/16/uploading-large-files-in-windows-azure-blob-storage-using-shared-access-signature-html-and-javascript/
http://gauravmantri.com/2013/12/01/windows-azure-storage-and-cors-lets-have-some-fun/
For downloading images, again have your Web API return a SAS URL instead of reading the image data from blob storage and streaming it back to the client browser. A rough sketch of both directions follows.
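This sketch assumes the current azure-storage-blob Python SDK (which postdates this answer); the account, key, container, and helper name are illustrative:

```python
from datetime import datetime, timedelta, timezone
from azure.storage.blob import generate_blob_sas, BlobSasPermissions

def make_sas_url(blob_name: str, *, write: bool = False) -> str:
    """Return a short-lived SAS URL the browser can use directly."""
    sas = generate_blob_sas(
        account_name="myaccount",
        container_name="photos",
        blob_name=blob_name,
        account_key="<account-key>",
        # read-only for downloads; write/create for direct uploads
        permission=BlobSasPermissions(read=not write, write=write, create=write),
        expiry=datetime.now(timezone.utc) + timedelta(minutes=15),
    )
    return f"https://myaccount.blob.core.windows.net/photos/{blob_name}?{sas}"

upload_url = make_sas_url("bcb2ad7581.jpg", write=True)  # client PUTs bytes here
download_url = make_sas_url("bcb2ad7581.jpg")            # client GETs image here
```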

Windows azure: how to setup front-end and back-end with shared image folder

I'm trying to find the best setup for my website on Windows Azure.
I have a front-end and a back-end website made in ASP.NET MVC4.
Both websites must share the same images: the front-end for displaying, the back-end for CRUD actions. The image files are stored in a folder in the front-end web application, and the URLs to those images are stored in a MySQL database.
Currently I have 2 Windows Azure websites, but I can't access the images from the back-end website because they are stored in a folder on the front-end application.
What's the best setup and cheapest for this type of application?
2 websites with shared BLOB storage ?
A cloud service containing 2 webroles (front- and back-end) ?
... ?
Thanks
First, you should not use the web application's folder except for temporary operations. Since Azure means a multi-machine environment, a resource (image) stored locally won't be available to the requester if you use more than one instance (machine).
I would go with 2 blob containers (not 2 blob storage accounts).
We do not have IP-based restriction on blobs yet, so as long as you don't share those addresses you will be fine. If you really need restrictions, you can use a stored access policy (a sketch follows); you can find more details in "Use a Stored Access Policy", and you should also review "Restrict Access to Containers and Blobs".
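A sketch of a stored access policy with the current azure-storage-blob Python SDK (which postdates this answer); the connection string, container, and policy id are placeholders:

```python
from datetime import datetime, timedelta, timezone
from azure.storage.blob import AccessPolicy, ContainerClient, ContainerSasPermissions

container = ContainerClient.from_connection_string(
    "<connection-string>", container_name="images"
)

# Tokens issued against this policy can later be revoked by deleting or
# editing the policy on the container, without rotating account keys.
policy = AccessPolicy(
    permission=ContainerSasPermissions(read=True),
    expiry=datetime.now(timezone.utc) + timedelta(days=7),
)
container.set_container_access_policy(signed_identifiers={"read-only": policy})
```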
I think that using a shared blob storage account is the right direction.
Using a local folder is not a good idea: on web sites and cloud services local storage is not persistent and you may lose your files. Either way, this is not a scalable solution; if you add additional instances in the future, they will not have access to the files.
Using blob storage will give you a location that is accessible from both locations and indeed from the client's browser directly.
You do not specify whether the images need to be accessed securely from the front end or not; if not, blob storage is particularly useful, as the images can be served from a public container on Azure storage directly.

What is the best strategy for using Windows Azure as a file storage system - with http download capabilities

I need to store multiple files that users upload, and then provide these users with the capability of accessing their files via http. There are two key considerations:
- Storage (which is my primary concern here)
- Security (which let's leave aside for now)
The question is:
What is the most cost-efficient and performant way of storing all these files and giving access to them later? I believe the answer is:
- Store the files within an Azure storage account, and have a key that references them in a SQL Azure database.
Am I correct on this?
Also, is blob storage flat? Or can I create something like folders inside it to better organize my files?
The idea of using SQL Azure to store metadata for your blobs is a pretty common scenario, which allows you to take advantage of SQL for searching, and blobs for storage.
Blobs are organized by container. So you'd have something like:
http://mystorage.blob.core.windows.net/mycontainer/myfile.doc
You can also simulate a hierarchy using a delimiter, but in reality there's just container plus blob.
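For example, with the current azure-storage-blob Python SDK (names are placeholders), blob names containing "/" behave like folders when listed with a delimiter:

```python
from azure.storage.blob import ContainerClient

container = ContainerClient.from_connection_string(
    "<connection-string>", container_name="mycontainer"
)

# Blob names like "reports/2024/summary.doc" simulate a hierarchy;
# walk_blobs groups them by the "/" delimiter as if folders existed.
for item in container.walk_blobs(delimiter="/"):
    print(item.name)  # top-level blobs plus "reports/"-style prefixes
```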
If you keep the container or blob private, the user would either have to go through your web front end (or web service), or you'd have to provide them with a special URL with a Shared Access Signature appended, which is a time-limited URL.
I would recommend you take a look at the BlobShare sample, which is a simple file-sharing application that demonstrates the storage services of the Windows Azure platform, together with the authentication and authorization capabilities of Access Control Service (ACS). The full sample code is located at the following link:
http://blobshare.codeplex.com/
You can use this sample code immediately, just by adding the proper references and your Windows Azure account credentials. The best thing about this sample is that you can provide blob access directly through Access Control Service. You can also modify the code to add SAS support as well as blob download from public containers. Once you have it working and understand the concept, you can tweak it to work the way you want.
