We're developing a web application on Azure, and the question is: how do we upload large images into blob storage directly from the browser, and do it efficiently, securely, and reliably?
We're probably seeing poor performance because we're in Russia and currently on a trial Azure subscription. Maybe the problem will go away with a full subscription?
Anyway, my concern is that our application has to pass each image through the following path:
WebBrowser > (image.jpg) > Azure WebRole [store name in DB] > (image.jpg) > Azure BLOB
So there is overhead involving the WebRole. What I'd like to do is upload my large file to the BLOB directly and send the image name to the WebRole in parallel:
WebBrowser > (image.jpg) > Azure BLOB
WebBrowser > WebRole [store name in DB]
The problem here is security. I'm talking about uploading user pictures, and I don't want attackers to be able to write into someone else's container.
Is it reasonable at all?
Silverlight is an option, using Shared Access Signatures (special URLs that allow write access on a time-limited basis). See my series of blog posts: http://blog.smarx.com/posts/uploading-windows-azure-blobs-from-silverlight-part-1-shared-access-signatures
+1 for @smarx's suggestion of uploading via a Shared Access Signature - that gives you a time-limited URL that lets you access a private blob as if it were public. Someone would need to be running a network sniffer to discover a SAS URL, and even then it would only be valid for a short period of time.
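To make the SAS approach concrete, here is a minimal server-side sketch using the WindowsAzure.Storage .NET client (this is my own illustration, not code from the linked series; the container name, blob name, and 15-minute window are assumptions):

```csharp
// Minimal sketch: generate a write-only, time-limited SAS URL on the server
// (WebRole). The browser/Silverlight client then uploads the image straight
// to blob storage with that URL. Names and the 15-minute window are illustrative.
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class UploadSasFactory
{
    public static string GetUploadUrl(string connectionString, string blobName)
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
        CloudBlobClient client = account.CreateCloudBlobClient();
        CloudBlobContainer container = client.GetContainerReference("user-images");
        CloudBlockBlob blob = container.GetBlockBlobReference(blobName);

        // Allow writes only, and only for the next 15 minutes.
        var policy = new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Write,
            SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddMinutes(15)
        };

        string sasToken = blob.GetSharedAccessSignature(policy);
        return blob.Uri + sasToken; // e.g. https://acct.blob.core.windows.net/user-images/img.jpg?sv=...
    }
}
```

The browser (or Silverlight) client then issues an HTTP PUT against that URL with the x-ms-blob-type: BlockBlob header, so the image bytes never pass through the WebRole.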
Just wanted to add that a trial subscription is no different from a paid subscription when it comes to performance. That's just a billing thing and has nothing to do with resource allocation.
Related
I think I've thought through every way I can of sharing an image in Azure, and they all leave me exposed to someone abusing the download and running up my bandwidth costs.
The goal is an AMI-like experience, but that seems to be right out, so I'll settle for a solution that forces the user to copy the image to their own subscription first and then create a Shared Image Gallery from it. But again, without exposing a raw download to the Internet, or allowing cross-region intra-Azure pulls that would also cost money.
Public blob in an Azure StorageV2 account - exposes you to a bandwidth attack
Public blob in an Azure StorageV2 account with the firewall enabled - the Microsoft trusted services that are allowed by default don't seem to include the image service, though I didn't test this myself. If they did, this might work, as the image service blocks cross-location replication from blob storage by default, IIRC.
Shared Image Gallery - cross-tenant sharing is clunky and not at all feasible for AMI-like scenarios
???
I do not want to go through the process of becoming a Marketplace-certified image, which, as far as I can tell, is the only publicly available route to making a truly public image without incurring costs.
Why not just put it in a storage account and use Shared Access Signatures?
It's still possible to download over the internet if you have the SAS, it's easy to withdraw the SAS, and you can limit it both in time and by IP if needed.
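As a rough, hedged sketch of how "withdrawing" a SAS can work: issue the SAS against a stored access policy on the container, and delete the policy later to revoke every token derived from it (WindowsAzure.Storage client; the policy name and 7-day window are illustrative assumptions):

```csharp
// Sketch of a revocable SAS via a stored access policy on the container.
// Removing the policy invalidates every SAS issued against it.
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class RevocableSas
{
    public static string IssueReadSas(CloudBlobContainer container, string blobName)
    {
        // Create (or overwrite) a named policy on the container.
        BlobContainerPermissions permissions = container.GetPermissions();
        permissions.SharedAccessPolicies["image-readers"] = new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Read,
            SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddDays(7)
        };
        container.SetPermissions(permissions);

        // Issue a SAS that references the stored policy instead of embedding
        // the expiry/permissions in the token itself.
        CloudBlockBlob blob = container.GetBlockBlobReference(blobName);
        return blob.Uri + blob.GetSharedAccessSignature(null, "image-readers");
    }

    public static void Revoke(CloudBlobContainer container)
    {
        // Dropping the policy kills all SAS tokens that reference it.
        BlobContainerPermissions permissions = container.GetPermissions();
        permissions.SharedAccessPolicies.Remove("image-readers");
        container.SetPermissions(permissions);
    }
}
```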
I have a very specific requirement:
Two web roles accessing a local shared file location.
I am aware of the "Local Storage" role settings, but those are only accessible within each role's scope.
Does anyone know another option to accomplish this?
------- EDIT --------
As suggested I will explain more clearly what I'm trying to achieve here.
I'm implementing Only Office, which is a web editor for office files. Their product requires a file to be saved on the file system before it can be opened in the editor.
I don't want to mix their ASP.NET MVC open source project with my code, which is why I want to deploy their website as a separate web role.
-------- END EDIT ------------
Thanks
In your question, you state that (my emphasis):
I'm implementing Only Office, which is a web editor for office files. Their product requires a file to be saved on the file system before it can be opened in the editor.
If Only Office's requirement is just temporary file storage used while the document is being edited, you may be able to get away with this in a Cloud Service Web Role. This is assuming your users wouldn't be too angry if the temporary working document was 'lost' during a role restart.
Web (and Worker) Roles are non-durable, and the Azure Service Fabric might bounce them if it needs to patch the underlying host, or they might just crash due to a fault (which is usually why you deploy them in pairs - fault tolerance, etc.). If you save something to the file system on a Web Role, you are not guaranteed that it will be there if the role is bounced.
If, however, you need durability, you will need to implement something based on Azure Blob Storage, possibly using blob leases. However, I imagine that Only Office doesn't have an implementation for Azure...
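For reference, a hedged sketch of what a lease-based write might look like with the WindowsAzure.Storage client (the blob, the local path, and the 60-second duration are my own assumptions, not anything from Only Office):

```csharp
// Sketch: coordinate writes to a shared document blob with a lease. Only the
// role instance holding the lease can overwrite the blob.
using System;
using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class DocumentLock
{
    public static void SaveWithLease(CloudBlockBlob blob, string localPath)
    {
        // Acquire an exclusive lease for 60 seconds (15-60s or infinite is allowed).
        string leaseId = blob.AcquireLease(TimeSpan.FromSeconds(60), proposedLeaseId: null);
        try
        {
            byte[] bytes = File.ReadAllBytes(localPath);
            // The write only succeeds while we hold the lease.
            blob.UploadFromByteArray(bytes, 0, bytes.Length,
                AccessCondition.GenerateLeaseCondition(leaseId));
        }
        finally
        {
            blob.ReleaseLease(AccessCondition.GenerateLeaseCondition(leaseId));
        }
    }
}
```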
Failing that, you could try running on Azure App Service (Web Apps); however, I imagine you would have the same issue with backing storage and would need to implement something on Blob Storage.
So, finally, if you want complete control and something akin to running on-premises, take a look at using an IaaS virtual machine, where you have the whole file system to play with as you please.
==UPDATE==
Taking a look at the Only Office website, there is a SaaS offering, Only Office SaaS Hosting, which is probably cheaper to run for a year than the time it took me to write this answer!
Failing that, if you look at the requirements for Only Office Document Server, there is no way you're going to run that on a Web Role. Go with Azure IaaS VMs.
You basically have two options here, both mentioned in the comments. You can use blob storage, or you can use an SMB share via Azure Files, which I believe is still in preview. We have used Azure Files to mount an SMB share on several Linux boxes. One thing we have noticed is that it is not particularly fast; it is also built on top of blob storage. Here is a link to Azure Files: https://azure.microsoft.com/en-us/documentation/articles/storage-dotnet-how-to-use-files/.
If you choose to use blob storage, you will need to consider concurrency:
https://azure.microsoft.com/en-us/blog/managing-concurrency-in-microsoft-azure-storage-2/
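As an illustration of the optimistic-concurrency pattern that post describes, here is a hedged sketch using ETags with the WindowsAzure.Storage client (the method and blob names are mine):

```csharp
// Sketch of optimistic concurrency on a blob: fetch the ETag, modify locally,
// then write back only if nobody else has changed the blob in the meantime.
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class OptimisticUpdate
{
    public static bool TryUpdate(CloudBlockBlob blob, string newContent)
    {
        blob.FetchAttributes();                 // populates blob.Properties.ETag
        string etag = blob.Properties.ETag;

        try
        {
            // Conditional write: fails with HTTP 412 if the ETag no longer matches.
            blob.UploadText(newContent,
                accessCondition: AccessCondition.GenerateIfMatchCondition(etag));
            return true;
        }
        catch (StorageException ex) when (ex.RequestInformation.HttpStatusCode == 412)
        {
            // Someone else modified the blob first; re-read and retry.
            return false;
        }
    }
}
```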
I would suggest using Azure File Services; it gives you a share-like URI you can use.
Take a look at this:
https://azure.microsoft.com/en-us/documentation/articles/storage-dotnet-how-to-use-files/
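For instance, a hedged sketch of reading and writing a shared file through the .NET client (the share and file names are illustrative assumptions; the same share can also be mounted over SMB from each role instance):

```csharp
// Sketch: write and read a file on an Azure Files share with the
// WindowsAzure.Storage client, so both web roles see the same data.
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.File;

public static class SharedFiles
{
    public static void RoundTrip(string connectionString)
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
        CloudFileClient client = account.CreateCloudFileClient();

        CloudFileShare share = client.GetShareReference("onlyoffice-workdir");
        share.CreateIfNotExists();

        CloudFileDirectory root = share.GetRootDirectoryReference();
        CloudFile file = root.GetFileReference("document.docx.tmp");

        file.UploadText("contents visible to every role instance");
        string contents = file.DownloadText();
    }
}
```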
I have a local storage folder, called TempStore, set up on my Web Role instances.
Is it possible to expose files from my local storage via a URI?
E.g.:
http://myapplication.cloudapp.net/TempStore/helloworld.jpg
I understand that I could use blobs for this, but I would prefer to use local storage in this case.
There is a way. However, I really do not understand the reason for doing this; the only reason I can see is some misunderstanding, or not fully understanding the capabilities of the Windows Azure platform services (Storage, Cloud Services / Web Roles).
You have to know that local storage is not synced between role instances. Also, if a hardware failure happens, the role-healing process will instantiate an entirely new VM with a fresh image from your cloud service package, which leaves you with a completely empty local storage resource. The Windows Azure load balancer (the thing that sits in front of your web and worker roles) uses a round-robin algorithm, meaning that even if a user uploads a file to your web role in one request, the next request (which will probably want to show a preview) might go to another instance that has no idea what the user uploaded.
If, after knowing all these facts, you still want to "shoot yourself in the foot", here is the solution (a rough sketch follows the list):
Implement a VirtualPathProvider
Register it for the desired public URL path
Use the RoleEnvironment.GetLocalResource method in your VPP to obtain the full path to the local storage resource
Don't blame anyone else when you realize this was a mistake ;)
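A rough sketch of what steps 1-3 might look like (class names are mine; "TempStore" matches the local storage resource named in the question):

```csharp
// Sketch: a VirtualPathProvider that serves files under /TempStore/ out of the
// role's local storage resource.
using System;
using System.IO;
using System.Web;
using System.Web.Hosting;
using Microsoft.WindowsAzure.ServiceRuntime;

public class LocalStorageVirtualFile : VirtualFile
{
    private readonly string _physicalPath;

    public LocalStorageVirtualFile(string virtualPath, string physicalPath)
        : base(virtualPath)
    {
        _physicalPath = physicalPath;
    }

    public override Stream Open() => File.OpenRead(_physicalPath);
}

public class LocalStorageVirtualPathProvider : VirtualPathProvider
{
    private readonly string _rootPath =
        RoleEnvironment.GetLocalResource("TempStore").RootPath;

    private static bool IsTempStorePath(string virtualPath) =>
        VirtualPathUtility.ToAppRelative(virtualPath)
            .StartsWith("~/TempStore/", StringComparison.OrdinalIgnoreCase);

    private string MapToPhysical(string virtualPath) =>
        Path.Combine(_rootPath, VirtualPathUtility.GetFileName(virtualPath));

    public override bool FileExists(string virtualPath) =>
        IsTempStorePath(virtualPath)
            ? File.Exists(MapToPhysical(virtualPath))
            : Previous.FileExists(virtualPath);

    public override VirtualFile GetFile(string virtualPath) =>
        IsTempStorePath(virtualPath)
            ? new LocalStorageVirtualFile(virtualPath, MapToPhysical(virtualPath))
            : Previous.GetFile(virtualPath);
}

// In Global.asax Application_Start:
//   HostingEnvironment.RegisterVirtualPathProvider(new LocalStorageVirtualPathProvider());
```

Note that for static extensions such as .jpg you will likely also need to route the request through ASP.NET (for example with a managed handler mapping or runAllManagedModulesForAllRequests); otherwise IIS's native static-file handler will bypass the provider.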
I see nothing documented on this, so does anyone know if it is possible to restrict the domains that can access a resource placed in blob storage? When you create a container, your only choices are public or private.
That's right. Currently there is no way to restrict access based on domain or IP. Your only option for managing security on blob storage is to work with Shared Access Signatures (SAS).
The signature is generated server-side and appended to the blob's URL. The signature can be limited in time (making it valid for only 5 minutes, for example).
This could be done in a web application to display images or videos, for example. Even if someone 'steals' your content, the URL would be invalid after a few minutes. Not exactly the same as limiting by IP or domain, but still very effective.
I would like to create a Metro application that allows a group of people to interact. One person would create data and serve as the owner, and multiple others would be invited in and be allowed to modify that data. I heard in Build talks that each Metro application will get per-user Azure storage, but will it be possible to share that data between multiple users? Does anyone have a link they could share where I could research this?
I think that you are confusing SkyDrive with Azure Blob Storage.
SkyDrive
Personal to a Live ID
Not really meant as a base for collaborative work
Azure Blob Storage
You can have public files that anyone can read (writing still requires your keys or a SAS)
You can take a lease on a file so that only the lease holder can modify it
Since you own the Azure account you also control the content
You can learn the basics here
If you want to share private app data between users, the best way to do so would be via a shared server of some sort. You should have a server (running on Azure, Amazon EC2, or anything really) that exposes a RESTful web service that each application connects to. The shared state then lives on that server.
This is better than trying to use SkyDrive or some file-based system for storing shared data. With a file on SkyDrive and multiple users trying to access it, you would run into concurrency issues when more than one person tries to write to it.
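As a sketch of the kind of endpoint being described (my own illustration, using ASP.NET Web API with an in-memory store standing in for a real database):

```csharp
// Sketch: a shared RESTful service that owns the group's data, so every client
// reads and writes through it instead of through a shared file.
using System.Collections.Concurrent;
using System.Web.Http;

public class GroupDataController : ApiController
{
    // In-memory store for illustration; a real service would put a database or
    // Azure Table/Blob storage behind the same endpoints.
    private static readonly ConcurrentDictionary<string, string> Store =
        new ConcurrentDictionary<string, string>();

    // GET api/groupdata/{id}
    public IHttpActionResult Get(string id) =>
        Store.TryGetValue(id, out var value) ? (IHttpActionResult)Ok(value) : NotFound();

    // PUT api/groupdata/{id}
    public IHttpActionResult Put(string id, [FromBody] string value)
    {
        Store[id] = value;   // last write wins; the server is the single arbiter
        return Ok();
    }
}
```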
You don't get Azure with Metro.
With Live you get free SkyDrive, which is personal cloud storage (around 10 GB). You can share files, but only by sending an email link; it is not file storage that would readily support a server-type application managing that sharing.
Azure is a cloud platform for file and data sharing. Azure is not free, but storage costs only $0.125/GB per month, so 10 GB = $1.25/month. By using SkyDrive as shared storage you are giving up a lot of the developer and hosting tools that come with Azure to save $1.25/month.
It looks like there is a more formal definition of this in the updated help that is now available. They were referring to roaming application data. I found the following links that provide guidance:
http://msdn.microsoft.com/en-us/library/windows/apps/hh464917.aspx
http://msdn.microsoft.com/en-us/library/windows/apps/hh465094.aspx
The general guidance is that a small amount of temporary application data is provided on a per-app, per-user basis. The actual size you get is not detailed, but the guidance is pretty clear: app settings only, no large data sets, and don't use it for instant synchronization. Given this guidance, my plan is not a good one and will change.
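For what the guidance does allow, here is a hedged sketch of storing a small per-user setting in the WinRT roaming container (the key name is an illustrative assumption):

```csharp
// Sketch: small per-app, per-user settings in ApplicationData roaming settings.
// Roaming storage is quota-limited and not meant for large data sets or
// instant synchronization.
using Windows.Storage;

public static class UserPrefs
{
    public static void SaveTheme(string theme)
    {
        ApplicationData.Current.RoamingSettings.Values["preferredTheme"] = theme;
    }

    public static string LoadTheme()
    {
        object value;
        ApplicationData.Current.RoamingSettings.Values.TryGetValue("preferredTheme", out value);
        return value as string;
    }
}
```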