Azure configuration advice

I have an ASP.NET v2 website which stores uploaded content on the file system and in a SQL Server database (using Full-Text Search).
I'm trying to work out what the best configuration option on Azure would be for me.
I would like the site to be scalable, but if I do this, how can I ensure that the uploaded content is shared across all the instances?
Also, SQL Azure does not support Full-Text Search, so does this mean I should set up a Virtual Machine and host it myself?

For your database, you'll want to run SQL Server in a Virtual Machine, as you'll then get the full functionality of SQL Server, including FTS. It's very simple to get up and running with SQL Server VMs, as there's a gallery image with SQL Server preinstalled.
Regarding your file system storage: This won't scale to multiple instances. You'll need another mechanism for storage. Typically this would be Blob Storage, but... it depends on what you're doing with the files. If you're just serving / storing content (you mentioned uploaded content), this works great, and it's accessible across many instances. If, on the other hand, it's some type of file-based database or index, that won't really work well.
If you need to do some type of local processing on the files (e.g. photo or movie rendering), you can easily copy a blob's contents to a local VM disk, process the file with typical drive paths, then upload the results to another blob.
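For illustration, here's a minimal sketch of that copy-down/process/upload-back pattern using the .NET storage client library; the container, blob names, and local paths are made-up examples, not anything from the original setup:

    using System.IO;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;

    class BlobLocalProcessing
    {
        static void Main()
        {
            var account = CloudStorageAccount.Parse("<your storage connection string>");
            var container = account.CreateCloudBlobClient().GetContainerReference("uploads");

            // Copy the blob's contents down to a local VM disk...
            var source = container.GetBlockBlobReference("photos/original.jpg");
            using (var file = File.OpenWrite(@"D:\work\original.jpg"))
            {
                source.DownloadToStream(file);
            }

            // ...process the file with ordinary drive paths (placeholder step)...
            // RenderThumbnail(@"D:\work\original.jpg", @"D:\work\thumb.jpg");

            // ...then upload the result to another blob.
            var result = container.GetBlockBlobReference("photos/thumb.jpg");
            using (var file = File.OpenRead(@"D:\work\thumb.jpg"))
            {
                result.UploadFromStream(file);
            }
        }
    }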

Related

Do I need Azure blob storage or just a simple web server on a VM?

I have a VM on Azure that runs my content management system, built with Node.js and MongoDB.
One of the things the CMS does is provide a social-sharing function where HTML pages are created and users are given the URL to each page.
I expect a large volume of users (probably 5000 at a given time) to access these pages. I do not want this load to be on the same server as my CMS.
So I was thinking about moving the HTML pages to another server. My question is: do I need to look at Azure Blob storage to do this, or should I just use another VM and put the files there?
The files are very small and minified. I want to keep my costs down, while at the same time, if I get more than 5000 requests, the server should auto-scale.
The question itself is somewhat subjective/opinion-soliciting. And how you solve this problem is really up to you.
But from an objective perspective:
Blobs themselves are not the same as local file storage. If you're going to store content in them, either your CMS needs to support them natively or you're going to need to build that support into it (if that's even possible). Since they have their own REST API (and related SDKs), you cannot simply do file I/O operations against them. They are, however, accessible via URI (which may be made private or public); see the sketch below these options.
Azure VMs store their disks (VHDs) in page blobs (so, technically speaking, you're already using blob storage). And each VM may have attached disks (1TB each), also in page blobs, two disks per core (so a dual-core VM supports 4 attached 1TB disks). Just like your OS disk, these attached disks are durable, in blob storage. A CMS may access an attached disk once it's formatted and given a drive letter (Windows) or mounted (Linux). EDIT: forgot to mention that if you go with the attached-disk approach, you need to consider the fact that these disks are per-VM; that is, they are not shared across multiple VMs (in the event you scale your CMS to multiple instances).
Azure File Service is an SMB share sitting atop Azure Blob Storage. Again, durable storage, and drive-mappable. EDIT: unlike attached disks, Azure File Service SMB shares are accessible across multiple VMs.
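Picking up the first point: here's a rough sketch of what "native" blob access looks like through the .NET storage SDK (container and blob names are illustrative). Note there's no local file path involved anywhere, and the blob's URI is what you'd hand out to users:

    using System;
    using System.IO;
    using System.Text;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;

    class BlobAccessSketch
    {
        static void Main()
        {
            var account = CloudStorageAccount.Parse("<connection string>");
            var container = account.CreateCloudBlobClient().GetContainerReference("pages");
            container.CreateIfNotExists();

            // Content goes in through the SDK/REST API, not through file I/O.
            var blob = container.GetBlockBlobReference("share/article-123.html");
            blob.Properties.ContentType = "text/html";
            byte[] bytes = Encoding.UTF8.GetBytes("<html>...</html>");
            using (var stream = new MemoryStream(bytes))
            {
                blob.UploadFromStream(stream);
            }

            // Each blob has its own URI, which can be made public or kept private.
            Console.WriteLine(blob.Uri);
        }
    }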

Azure Architecture Design

I'm new to Azure, and a little confused about blob storage. I need clients to be able to access the system via FTP/SFTP to push and pull files (XML, CSV, EDI, etc.). The pushed files are read in by a .NET application and written to a database. As I understand it, we would use a VM role to create an FTP/SFTP server, a worker role to execute the .NET code, SQL storage for the DB, and Blob storage for the files.
First, am I correct in this assumption? And second, can a VM role attach a storage blob for writing and reading files, and can a worker role attach to the same storage blob to read and write files as well?
Sample:
A client pushes an XML file to the VM via FTP. The VM writes the XML file to storage. A worker role reads the file, processes it, and writes the contents to the DB.
Is my thinking correct, or am I missing the boat?
Thanks
Azure has an array of services, so you have a few options. One important thing to keep in mind with Azure is that your worker roles, which are simply Windows Server 2008 VMs without IIS installed, are very flexible, so there is a lot you can do with them – this includes writing your own FTP server and hosting it via worker role VMs. The FTP to Azure Blob Storage Bridge (on CodePlex) solution is an example of this.
In addition, you could use a web role (which is the same as a worker role but with IIS enabled) to do the same - so rather than rolling your own FTP server you can use IIS. A visual guide to setting IIS up to run as an FTP server in Azure can be found on ITQ.
I’d recommend doing some further reading to determine which is the better option of the two. Also have a think about your requirements, as these may influence your approach, e.g. scaling, bandwidth, costs, your preferred deployment model, etc.
As far as storing the files goes, you can certainly use Blob Storage. If you have no need for a relational database in your system, then you could skip using SQL Azure altogether (in which case the web role solution referenced above won’t be of much use) – but again, that comes down to your particular requirements.
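If you do go the Blob Storage route, the worker-role side might look something like the sketch below: poll the container the FTP server drops files into, process each XML file, and remove it when done. The container handling, polling interval, and processing step are all assumptions for illustration:

    using System;
    using System.IO;
    using System.Threading;
    using Microsoft.WindowsAzure.Storage.Blob;

    public class InboundFileProcessor
    {
        public void Run(CloudBlobContainer inbound)
        {
            while (true)
            {
                foreach (var item in inbound.ListBlobs(useFlatBlobListing: true))
                {
                    var blob = item as CloudBlockBlob;
                    if (blob == null) continue;

                    using (var stream = new MemoryStream())
                    {
                        blob.DownloadToStream(stream);
                        stream.Position = 0;
                        // ParseAndSaveToDatabase(stream);  // your .NET code writes to the DB here
                    }
                    blob.Delete();  // or copy to an "archive" container first
                }
                Thread.Sleep(TimeSpan.FromSeconds(30));
            }
        }
    }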
The official Windows Azure website is a good source of knowledge, especially if you’re getting started, so do take the time to look through some of the pertinent documentation.

Backup Azure Virtual Machine local folders to blob storage?

I've just set up an extra small VM instance in Windows Azure to run a help console for our company. The help files can be updated and published through a simple .NET interface. Obviously the flat HTML files are getting deployed to the local drive on the VM and exposed publicly through IIS. I'm just wondering how stable this is? If the VM suffers a hardware failure, presumably there's no automatic failover and any edits we've made to the help system will be lost?
Can anyone recommend a way I can shuttle the source files out of the VM into blob storage? I could write an application to do this; I'm just wondering if there is an out-of-the-box solution out there.
Additional information:
The VM instance is running Server 2008 R2 SP1 (as a Virtual Machine, not a web role)
A backup needs to be created once every 24 hours
Aged backups (3+ days old) need to be automatically cleared from the blob container
The help system we use is called HelpConsole 2012
New pages are added at a rate of maybe 2-3 per week
The answer depends on whether you are running this in a Windows Azure Virtual Machine or in a Windows Azure Web role.
If you are running this on a Windows Azure Virtual Machine, then the VHD is stored in BLOB storage and, if the site is running off the C: drive and not on a data disk, the system has some host caching turned on for both reads and writes. In this scenario it is possible (depending on the methods you use to write your files out) that the data is not pushed back to the VHD in BLOB storage before a failure occurs. You can either ensure that your writing methods do a write-through operation, or turn off the write caching. Better yet, attach a data disk for your web site files. By default, data disks have both read and write caching off (you could turn on read caching). Since the VHDs are persisted, you don't have to worry about the edits getting lost. You can script out taking a snapshot of the files and moving them to BLOB storage separately, or even push them somewhere else (a sketch of such a script is below). Another thing to think about with this option is that you have to care for the VM instances and keep them patched and up to date.
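As a hedged sketch of that "script it yourself" idea, something like the console app below could run once a day under Windows Task Scheduler: it copies the help files into a date-prefixed folder in a blob container, then deletes backups older than three days (matching the requirements above). The paths, container name, and connection string are placeholders:

    using System;
    using System.IO;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;

    class DailyBackup
    {
        static void Main()
        {
            var account = CloudStorageAccount.Parse("<connection string>");
            var container = account.CreateCloudBlobClient()
                                   .GetContainerReference("helpconsole-backups");
            container.CreateIfNotExists();

            // 1. Upload today's snapshot of the help files.
            string prefix = DateTime.UtcNow.ToString("yyyy-MM-dd") + "/";
            string sourceDir = @"C:\inetpub\wwwroot\help";
            foreach (var path in Directory.GetFiles(sourceDir, "*", SearchOption.AllDirectories))
            {
                string name = prefix + path.Substring(sourceDir.Length + 1).Replace('\\', '/');
                using (var file = File.OpenRead(path))
                {
                    container.GetBlockBlobReference(name).UploadFromStream(file);
                }
            }

            // 2. Prune backups older than three days.
            foreach (var item in container.ListBlobs(useFlatBlobListing: true))
            {
                var blob = item as CloudBlockBlob;
                if (blob != null && blob.Properties.LastModified < DateTimeOffset.UtcNow.AddDays(-3))
                {
                    blob.Delete();
                }
            }
        }
    }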
If you are running a Web role, then yes, if a failure occurs and the VM goes through self-healing, it will indeed redeploy with the older files. In this case I'd recommend changing the code in the web role so that when it writes the updates to the local file it also puts a copy of the file into BLOB storage. In addition, in the web role's OnStart you could reach out to BLOB storage and pull down all the new content locally. BE VERY CAREFUL with this approach though, because it only really works well for ONE instance, not multiple. If you plan on running multiple instances of the server (and you will have to if you want the SLA for uptime), then your code will need to be a little more robust: do the writes out to BLOB storage and then alert all instances of the role that there is a new file to pull down locally.
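For the OnStart part of that idea, a single-instance sketch might look like the following; the container name and local target path are assumptions, and, as noted above, a multi-instance deployment needs more coordination than this:

    using System.IO;
    using Microsoft.WindowsAzure.ServiceRuntime;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;

    public class WebRole : RoleEntryPoint
    {
        public override bool OnStart()
        {
            var account = CloudStorageAccount.Parse("<connection string>");
            var container = account.CreateCloudBlobClient()
                                   .GetContainerReference("help-content");

            // Pull the latest content down to wherever the site serves from.
            string target = @"E:\approot\help";  // illustrative path
            foreach (var item in container.ListBlobs(useFlatBlobListing: true))
            {
                var blob = item as CloudBlockBlob;
                if (blob == null) continue;

                string path = Path.Combine(target, blob.Name.Replace('/', '\\'));
                Directory.CreateDirectory(Path.GetDirectoryName(path));
                using (var file = File.Create(path))
                {
                    blob.DownloadToStream(file);
                }
            }
            return base.OnStart();
        }
    }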
Another option for web roles is to write a handler for the content, so that requests come in and are mapped to a file in BLOB storage directly. Updates then go straight to the file in BLOB storage. This offloads the serving of the flat files from your compute nodes to BLOB storage, and you could even implement some caching and stream the content back through the handler rather than having clients hit BLOB storage directly, if you wanted to.
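A rough sketch of such a handler (the container name is assumed, the content type is hard-coded for brevity, and a real one would add caching and proper headers):

    using System.Web;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;

    public class BlobContentHandler : IHttpHandler
    {
        public bool IsReusable { get { return true; } }

        public void ProcessRequest(HttpContext context)
        {
            var account = CloudStorageAccount.Parse("<connection string>");
            var container = account.CreateCloudBlobClient()
                                   .GetContainerReference("help-content");

            // e.g. a request to help.ashx/pages/intro.html maps to blob "pages/intro.html"
            string name = context.Request.PathInfo.TrimStart('/');
            var blob = container.GetBlockBlobReference(name);

            context.Response.ContentType = "text/html";
            blob.DownloadToStream(context.Response.OutputStream);
        }
    }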
Now, another option is to use Windows Azure Web Sites for this. The underlying storage of the web site files in Windows Azure Web Sites is a shared location, so updating the files there is immediately reflected on all instances. Also, the content for the site is stored in BLOB storage and can be updated via FTP, source control, or directly from code. Lots of options here. You may end up moving to reserved instances to help keep away from some of the quotas that Web Sites have. Web Sites may not be an option for you currently, depending on your other requirements (consider how much control you need over the environment, since you don't get a lot of control with Web Sites).

SQLite on Azure website

I've been trying to get SQLite to work on an Azure website. I have deployed everything successfully but I need to point it to a file name for the database. I have looked at creating Blob storage but I'm unsure how to convert this into a file name that SQLite will accept.
I'm sure this has been done as I can see references to other issues related to SQLite on Azure.
I have read http://www.sqlite.org/whentouse.html.
Based on my experience, if you want to use SQLite with Azure Websites, you can keep the database file within your deployment package so it stays on the same server as your website. Azure Websites provide 1GB of application storage, which is plenty for a database file. Your website content persists, and access to the SQLite DB will be fast. This is super easy, and you can do it with an ASP.NET web application or any other.
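As a small sketch of that approach (the App_Data location and file name are just conventional choices, and this assumes the System.Data.SQLite provider):

    using System.Data.SQLite;
    using System.Web.Hosting;

    public static class SiteDatabase
    {
        public static object CountPages()
        {
            // Resolve the database file relative to the deployed site.
            string dbPath = HostingEnvironment.MapPath("~/App_Data/site.db");
            using (var connection = new SQLiteConnection("Data Source=" + dbPath))
            {
                connection.Open();
                using (var command = new SQLiteCommand("SELECT COUNT(*) FROM pages", connection))
                {
                    return command.ExecuteScalar();
                }
            }
        }
    }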
The problem with choosing Azure Blob storage is that if the database file is stored in Azure Blob storage, there is no API through which SQLite can write to that file. So one option is to write locally first and then sync to Azure Blob storage back and forth, though others on SO may have other options. If you want to back up your database file to Azure Blob storage for any reason, you can certainly do that separately; however, I think if you choose SQLite, the best approach would be to keep the database file with the website to make it simple.

How to write to a tmp/temp directory in Windows Azure website

How would I write to a tmp/temp directory in a Windows Azure website? I can write to a blob, but I'm using an npm package that requires me to give it file names so that it can write directly to those files.
Are you using Cloud Services (PaaS) or Virtual Machines (IaaS)?
If PaaS, look at Windows Azure Local Storage. This option gives you up to 250 GB of disk space per core. It's a great location for temporary storage of information in a way that traditional apps will be familiar with. However, it's not persistent, so anything you put there that needs to survive the VM instance being repaved should be copied to Blob storage. Also, this storage is specific to a given role instance, so if you have two instances of the same role, they each have their own local storage buckets.
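A short sketch of using Local Storage from role code; this assumes a local resource named "TempFiles" has been declared in the service definition (ServiceDefinition.csdef):

    using System.IO;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public static class ScratchSpace
    {
        public static void WriteScratchFile()
        {
            // Resolve the per-instance local storage root declared in the service definition.
            LocalResource temp = RoleEnvironment.GetLocalResource("TempFiles");
            string path = Path.Combine(temp.RootPath, "scratch.txt");

            // Ordinary file I/O works here, but this disk is per-instance and
            // not persistent: copy anything you need to keep into blob storage.
            File.WriteAllText(path, "temporary working data");
        }
    }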
Alternatively, you can use Azure Drive, which allows you to keep the information persisted, but still doesn't allow multiple parallel writes.
If IaaS, then you can just mount a data disk to the VM and write to it directly. Data disks are already persisted to blob storage so there's little risk of data loss.
This is just from my understanding, so please correct me if anything is wrong.
In Windows Azure Web Sites, the content of your website is stored in blob storage and mounted as a drive, which is shared across all instances your web site is using. And since it's in blob storage, it's persistent. So if you need the local file system, I think you can use the folders under your web site root path. But I don't think you can use the system tmp or temp folder.
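The question is about a Node package, but the idea is runtime-agnostic: hand the file-path-based component a path under the site root rather than a system temp folder. For illustration, here's what that resolution looks like from a .NET app on the same platform (the folder and file names are made up):

    using System.IO;
    using System.Web.Hosting;

    public static class SiteTemp
    {
        public static string CreateWorkFile()
        {
            // A writable, persisted folder under the site root (not the system temp dir).
            string tempDir = HostingEnvironment.MapPath("~/App_Data/tmp");
            Directory.CreateDirectory(tempDir);

            string workFile = Path.Combine(tempDir, "work-item.tmp");
            File.WriteAllText(workFile, "scratch data for a file-path-based library");
            return workFile;
        }
    }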
