I am trying to deploy an app that has a frontend and a backend worker. The worker runs a CPU-intensive process. My requirements are to run the web app on an Azure A0 instance while the CPU-intensive process runs on a D2 instance. Both instances must be able to share files. I have read posts in places that spoke of SMB.
I tried creating the Linux VMs in the same cloud service but couldn't figure out how to SSH into them separately, since they share the same cloud service URL. I followed http://azure.microsoft.com/en-us/documentation/articles/cloud-services-connect-virtual-machine/
to create the 2nd VM.
Can anyone suggest how to achieve this setup? Also, if possible, how do I check whether the disks are available to both instances?
Azure docs aren't as helpful as AWS's. :(
If the two VMs just want to share files and you don't want to go to the extra effort of coding for blob storage, then consider Azure Files, which exposes an SMB share against a blob storage back end. This lets you do standard file I/O operations instead of writing custom blob storage code. See http://blogs.msdn.com/b/windowsazurestorage/archive/2014/05/12/introducing-microsoft-azure-file-service.aspx which shows how to create the file share from Windows and Linux VMs.
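For instance, creating the share from C# with the storage client library is only a few lines. This is a rough sketch, not verbatim from the linked post; the account name, key, and share name are placeholders:

```csharp
using Microsoft.WindowsAzure.Storage;       // classic storage client library
using Microsoft.WindowsAzure.Storage.File;  // Azure Files types

class CreateShareDemo
{
    static void Main()
    {
        // Placeholder connection string: substitute your own account and key.
        CloudStorageAccount account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=mykey");

        // Create the SMB share that both VMs will mount (share name is made up).
        CloudFileClient fileClient = account.CreateCloudFileClient();
        CloudFileShare share = fileClient.GetShareReference("appshare");
        share.CreateIfNotExists();
    }
}
```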
[Probably easier to give an answer here]
Blob storage is a general-purpose storage service that can effectively act as the common drive you are looking for. Access to a blob storage container is made over HTTP/HTTPS, either through a blob storage client library or over REST, and gives you functions to upload, download, list objects, etc.
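To make that concrete, here is a minimal C# sketch of the upload/download round trip (the container name, blob name, and local paths are all made up):

```csharp
using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class BlobRoundTrip
{
    static void Main()
    {
        // Placeholder connection string: substitute your own account and key.
        CloudStorageAccount account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=mykey");

        CloudBlobClient client = account.CreateCloudBlobClient();
        CloudBlobContainer container = client.GetContainerReference("shared-files");
        container.CreateIfNotExists();

        // The worker uploads its output as a block blob...
        CloudBlockBlob blob = container.GetBlockBlobReference("result.dat");
        using (FileStream input = File.OpenRead(@"C:\work\result.dat"))
            blob.UploadFromStream(input);

        // ...and the web app downloads it on the other instance.
        using (FileStream output = File.Create(@"C:\work\result-copy.dat"))
            blob.DownloadToStream(output);
    }
}
```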
For Python, you'll hopefully find this article sufficient, although I have no experience with Python on Azure to comment further; if you choose REST and plain HTTP requests instead, that should work fine too.
HTH
I am relatively new to the cloud, so please guide me through the complete process.
I have an application that will be hosted in containers in a cloud environment. I want some temporary storage on the container or in the cloud environment, and to access it via my web application (written in C#), meaning I will generate a file and keep it there. First of all, is this possible without costing me extra? Secondly, if it is possible, how can I access that area from C# code? And even if it costs extra, will I have any access issues? Also, please let me know the limitations of that free space in terms of storage, accessibility, and cost.
Using an App Service, you can store temporary files inside the %TMP% folder, which is mapped to %SYSTEMDRIVE%\local\Temp, at no extra cost.
Depending on your App Service plan, you will have from 10 GB to 140 GB of free space to store files. Beware: your files will disappear if the App Service is restarted.
Refer to this link:
https://github.com/projectkudu/kudu/wiki/Understanding-the-Azure-App-Service-file-system
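As a small sketch (no special SDK needed), writing and reading a temporary file from C# could look like this; the file name is made up:

```csharp
using System.IO;

class TempFileDemo
{
    static void Main()
    {
        // On App Service, Path.GetTempPath() resolves to the sandboxed
        // %TMP% location (no extra cost, but wiped on restart).
        string filePath = Path.Combine(Path.GetTempPath(), "report.tmp");

        // Generate the file...
        File.WriteAllText(filePath, "generated content");

        // ...and read it back later during the same instance lifetime.
        string content = File.ReadAllText(filePath);
    }
}
```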
I have heard a lot that content uploaded to Azure Web Applications, such as images or any other files, should be stored in the Azure Storage service, not in the app's file system.
But I would like to keep the solution simple and store those files in the local file system.
Does storing images or other files in the application's local file system somehow hamper deployment with more than one instance?
After researching a lot, I understand that, unlike Amazon Elastic Beanstalk, all instances of a Web App share the same file storage: even though each instance runs on a different VM, the file system is the same.
The best way to think about it is that all the instances of your app map to the same network drive share that holds your files.
When you spin up two or more instances, they use that same storage-based file system; the files are exactly the same ones you had when running a single instance.
You can see this easily by dropping a file in (e.g. via FTP) and seeing it reflected instantly in all instances.
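As a tiny illustration, an instance can write under the shared %HOME% area and every other instance immediately sees the same file (the path below is made up):

```csharp
using System;
using System.IO;

class SharedWriteDemo
{
    static void Main()
    {
        // %HOME% points at the shared, network-backed home directory
        // (typically D:\home) that all instances of the Web App map to.
        string home = Environment.GetEnvironmentVariable("HOME");
        string path = Path.Combine(home, "site", "wwwroot", "uploads", "note.txt");

        Directory.CreateDirectory(Path.GetDirectoryName(path));
        File.WriteAllText(path, "visible to every instance");
    }
}
```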
Sources: Microsoft, this question, and this question
By storing files on the Web application, you're limiting your ability to scale.
The web server/app should do one thing: process your request and output the HTML. Everything else should be handled elsewhere if you truly want to take advantage of cloud computing. So in this case you should store any files off the web app.
Now, if you want to keep this as simple as possible and you're not overly concerned with achieving scale, there's really nothing wrong with your approach. It's my understanding that Web Apps don't actually spin up new virtual machines for you, or if they do, they replicate exactly what is on the VM. Think about it this way: if you had all your files stored on one VM and you spun up two new ones, you'd have to copy all those files over to the other two, and then you'd have to create a way to sync all the uploaded files among all your VMs.
I don't actually think you'd run into this problem with Azure Web Apps, but it is a problem that can arise if you're handling the VMs yourself or through an auto-scaling policy. You'll definitely run into the issue if you decide to spin up new web apps in different regions (say, the Ireland region so your EU customers get better performance): you'd now have two different locations where files could be uploaded, and you'd need to sync them, as opposed to uploading them to Azure Storage all along and keeping them centrally located.
I'm thinking about setting up 2 web VMs behind a load balancer in an availability set, and another VM for SQL Server (not sure if I can set up an availability set for SQL Server as well: SQL Server Express / Standard?)
My main problem is how to keep both web servers in sync (I'd prefer not to use DFS) without having the files in more than one location...
Another issue is user-uploaded content that I want available on both web servers (I also wonder if I can direct cache objects to be saved on a specific storage disk).
So I was thinking of setting up a storage account and attaching it to both web VMs for user-uploaded content and images, while each server still serves its own separate web application with shared access to the content files...
Is that a good idea? I understand that Azure Storage provides virtual disks that are supposed to be highly available and fast; is that true?
Would I take a major performance hit by using the same storage disk from 3 different VMs (is that even possible)?
UPDATE:
I found out that because I'm using the BizSpark program I can't really connect more than one server and share resources between them (unless I pay extra for it), so this became irrelevant for now.
Also, I'm talking about ASP.NET, but this shouldn't matter.
Azure Files enables you to run multiple IIS instances against a single file share, so you don't have to worry about replicating files across multiple shares; this is definitely an option. See Getting Started with File Storage for more information.
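To give a feel for it, once the share is mounted (or otherwise authorized) on both web VMs, the app just does ordinary file I/O against the share's UNC path. A minimal sketch, with the account, share, and file names all made up:

```csharp
using System.IO;

class SharedContentDemo
{
    static void Main()
    {
        // UNC path of a hypothetical Azure Files share reachable from both
        // web VMs (the share must be mounted/authorized on each machine).
        const string shareRoot = @"\\myaccount.file.core.windows.net\webcontent";

        string uploadPath = Path.Combine(shareRoot, "uploads", "avatar.jpg");
        if (File.Exists(uploadPath))
        {
            // Either IIS server sees the same user-uploaded file.
            byte[] image = File.ReadAllBytes(uploadPath);
        }
    }
}
```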
We plan to migrate an existing website to Windows Azure, and I have been told that we need to store files in blob storage.
My question is:
If we want to use blob storage, that means I need to rewrite the file storage function (we use the file system for now) to call the blob service API to store files. That seems very strange to me: just because we want to use Windows Azure, we have to change the code? What if in the future we want to use Amazon EC2 or another cloud platform? They might have their own way to store files, and then I may need to rewrite the file storage function again. In my opinion, the implementation of a project should not depend on the cloud platform (or cloud server)! Can anybody correct me? Thanks!
I won't address the commentary about whether an app should have a dependency on a particular cloud environment (or specific ways to deal with that issue), as that's subjective and a nice debate to have somewhere else. What I will address is the actual storage in Azure, as your information is a bit out of date.
One reason to use blob storage directly (and possibly the reason you were told to use blob storage) is that it provides access from multiple instances of your app. Also, blob storage provides 500TB of storage per storage account, and it's triple-replicated within the deployed region (and optionally geo-replicated). With attached storage (either with local disk or blob-backed Azure Disk), the access is specific to a particular instance of your app. Shifting from file system access to blob storage access does require app modification.
If you choose not to modify your app's file I/O operations, then you can also consider the new Azure File Service, which provides SMB access to storage (backed by blob storage). Using File Service, your app would (hopefully) not need to be modified, although you might need to change your root path.
More information on Azure File Service may be found here.
Why does it seem strange? You need to store your files somewhere, and the cloud is as good a place as any IF it suits your needs. The obvious advantages are redundancy and geo-replication, sharing files across multiple projects and servers; the list goes on. It's difficult to advise on whether it would be a good idea without hearing some specifics.
You could use Windows Azure storage with Amazon in the future if you wanted to (you'd just need to set up access for it), obviously with slightly longer delay. Then again, that slight performance drop may be significant, and you may end up rewriting it.
Most importantly, swapping over from one cloud provider to another is not trivial, depending on just how much you use it and how much data you have in it, so I would strongly suggest looking closely at the advantages and disadvantages of each platform before throwing in your lot with either one, and then fully learning that platform.
Personally, I went for Azure cloud services + storage etc. even though it was slightly more expensive at the time, because I'm a Microsoft person (not that I didn't do my research). It was annoying in the early days when key features were missing, but it has really matured now, and I like the pace at which it's improving.
It's cheap to test; why not try both and see which one suits you? A small price to pay when you have big decisions to make.
Disclaimer: I don't know the current state of Amazon web services.
Nice question. We are in the middle of migrating an old PHP/MySQL/local-share ERP application to Web Role/SQL Azure/Azure Storage. We faced the same problem and the same decision. Let me write down some thoughts on the issue:
It is a good option to be able to just switch the storage provider, but is it reasonable? You can always build the abstraction, but ask yourself:
- Do you have a plan for the actual change of storage provider (migration/sync while in production)?
- What kind of argument would actually drive the transition to another storage provider?
- How many users and how much data do you have?
- Do you plan to shard/rebalance the storage in the future?
- How reliable must the system be during the storage provider switch?
- Do you want to move the data completely when you switch, or just shard it so that you start using the different provider for new data?
- Does the cost of developing these (reliable) storage layers, plus the cost of developing reliable transitions (or bi-directional syncs), outweigh the price difference between any two storage providers?
Just switching the storage mechanism from Azure blob storage to Amazon will incur a heavy latency penalty if your other services are on Azure: when you create storage and services on Azure, you set affinity groups by region so that you minimize network latency.
These are only a few of the questions to answer before doing all the heavy lifting. We abstracted the file repository (blob) because we planned to move from a local NFS share to blob storage transparently and gradually, and it answers our needs.
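For what it's worth, the abstraction we built is conceptually similar to this sketch (the names are illustrative, not our production code):

```csharp
using System.IO;

// Minimal file-repository abstraction so the storage provider can be
// swapped later without touching the callers.
public interface IFileRepository
{
    void Save(string key, Stream content);
    Stream Open(string key);
}

// Local/NFS-share implementation (the "before" provider).
public class LocalFileRepository : IFileRepository
{
    private readonly string _root;

    public LocalFileRepository(string root)
    {
        _root = root;
    }

    public void Save(string key, Stream content)
    {
        using (FileStream target = File.Create(Path.Combine(_root, key)))
            content.CopyTo(target);
    }

    public Stream Open(string key)
    {
        return File.OpenRead(Path.Combine(_root, key));
    }
}

// A BlobFileRepository : IFileRepository would wrap the blob storage
// client the same way, keeping the callers provider-agnostic.
```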
I have a local storage folder, called TempStore, set up on my Web Role instances.
Is it possible to expose files from my local storage as URIs?
E.g:
http://myapplication.cloudapp.net/TempStore/helloworld.jpg
I understand that I could use blobs for this, but I would prefer to use local storage in this case.
There is a way. However, I really do not understand the reason for doing this; the only reason I see is some misunderstanding, or not fully understanding, the capabilities of the Windows Azure platform services (Storage, Cloud Services / Web Roles).
You have to know that local storage is not synced between role instances. Also, if a hardware failure happens, the role-healing process will instantiate an entirely new VM from the fresh image in your cloud service package, which leaves you with a completely empty local storage resource. In addition, the Windows Azure load balancer (the thing that sits in front of your web and worker roles; more here) uses a round-robin algorithm. That means that even if one request uploads a user's file to your web role, the next request (where you will probably want to show a preview) might go to another instance that has no idea about the upload.
If, after knowing all these facts, you still want to "shoot yourself in the foot", here is the solution (a code sketch follows the steps):
1. Implement a VirtualPathProvider,
2. register it for the desired public URL path,
3. use the RoleEnvironment.GetLocalResource method in your VPP to obtain the full path to the local storage resource, and
4. don't blame anyone else when you realize this was a mistake ;)
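A bare-bones sketch of those steps might look like this (the resource name "TempStore" comes from the question; everything else is illustrative and untested):

```csharp
using System.IO;
using System.Web;
using System.Web.Hosting;
using Microsoft.WindowsAzure.ServiceRuntime;

// A VirtualFile that streams a file from the role's local storage resource.
public class LocalStorageVirtualFile : VirtualFile
{
    private readonly string _physicalPath;

    public LocalStorageVirtualFile(string virtualPath, string physicalPath)
        : base(virtualPath)
    {
        _physicalPath = physicalPath;
    }

    public override Stream Open()
    {
        return File.OpenRead(_physicalPath);
    }
}

// Serves /TempStore/* requests from the "TempStore" local resource.
public class LocalStorageVpp : VirtualPathProvider
{
    private const string Prefix = "~/TempStore/";

    private static bool IsLocal(string virtualPath)
    {
        return VirtualPathUtility.ToAppRelative(virtualPath)
            .StartsWith(Prefix, System.StringComparison.OrdinalIgnoreCase);
    }

    private static string MapToLocalResource(string virtualPath)
    {
        string fileName = VirtualPathUtility.GetFileName(virtualPath);
        string root = RoleEnvironment.GetLocalResource("TempStore").RootPath;
        return Path.Combine(root, fileName);
    }

    public override bool FileExists(string virtualPath)
    {
        return IsLocal(virtualPath)
            ? File.Exists(MapToLocalResource(virtualPath))
            : base.FileExists(virtualPath);
    }

    public override VirtualFile GetFile(string virtualPath)
    {
        return IsLocal(virtualPath)
            ? new LocalStorageVirtualFile(virtualPath, MapToLocalResource(virtualPath))
            : base.GetFile(virtualPath);
    }
}

// Register it once, e.g. in Global.asax Application_Start:
//   HostingEnvironment.RegisterVirtualPathProvider(new LocalStorageVpp());
```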