Azure Architecture Design

I'm new to Azure, and a little confused about blob storage. I need clients to connect via FTP/SFTP to push and pull files (XML, CSV, EDI, etc.). The pushed files are read by a .NET application and written to a database. As I understand it, we would use a VM role to run an FTP/SFTP server, a worker role to execute the .NET code, SQL Azure for the database, and Blob storage for the files.
Am I correct in that assumption? And second, can a VM role attach to a storage blob for writing and reading files, and can a worker role attach to the same blob storage to read and write files as well?
Sample:
A client pushes an XML file to the VM via FTP. The VM writes the XML file to blob storage. A worker role reads the file, processes it, and writes the contents to the database.
Is my thinking correct or am I missing the boat?
Thanks

Azure has an array of services, so you have a few options. One important thing to keep in mind with Azure is that your worker roles, which are simply Windows Server 2008 without IIS installed, are very flexible, so there is a lot you can do with them – this includes writing your own FTP server and hosting it on worker role VMs. The FTP to Azure Blob Storage Bridge (on CodePlex) is an example of this.
In addition, you could use a web role (which is the same as a worker role but with IIS enabled) to do the same - so rather than rolling your own FTP server you can use IIS. A visual guide to setting IIS up to run as an FTP server in Azure can be found on ITQ.
I’d recommend doing some further reading to determine which of the two is the better option. Also think about your requirements, as these may influence your approach: scaling, bandwidth, costs, your preferred deployment model, etc.
As far as storing the files goes you can certainly use Blob Storage. If you have no need for a relational database in your system then you could skip using SQL Azure altogether (in which case the web role solution referenced above won’t be of much use) – but again that comes down to your particular requirements.
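To make the push/process flow concrete, here is a minimal sketch using the Azure Storage SDK for Python (the container name, connection string, and `process_and_write_to_db` helper are all illustrative assumptions; a .NET worker role would use the equivalent storage client calls):

```python
# Minimal sketch (assumed names/connection string): the FTP-facing VM
# writes uploaded files to a blob container, and the worker role reads
# them back for processing. Requires the azure-storage-blob package.
from azure.storage.blob import BlobServiceClient

CONN_STR = "<storage-account-connection-string>"  # assumption: set via config
service = BlobServiceClient.from_connection_string(CONN_STR)
container = service.get_container_client("incoming")  # assumed container name

# VM side: persist an uploaded file to blob storage.
with open("order-123.xml", "rb") as f:
    container.upload_blob(name="order-123.xml", data=f, overwrite=True)

# Worker side: read the same blob back and hand it to the processing code.
xml_bytes = container.get_blob_client("order-123.xml").download_blob().readall()
process_and_write_to_db(xml_bytes)  # hypothetical processing function
```

Both roles can attach to the same container this way; blob storage is shared, so there is no exclusive "attachment" to worry about.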
The official Windows Azure website is a good source of knowledge, especially if you’re getting started, so do take the time to look through some of the pertinent documentation.

Related

Azure Background Services for File Processing

We currently have a Windows service to process inbound/outbound files.
For inbound files, we read the data, perform some calculations, and store the results in a database.
For outbound files, we generate data from the database.
We want to migrate to Azure now. I have the following questions:
1) What is the best way to store files in Azure (Blob storage or an Azure file share)? We only have .pdf, .txt, and .xlsx formats, no videos.
2) Which service is better for processing the files: WebJobs, a virtual machine with a Windows service installed, Azure Batch, Azure Kubernetes Service, or Service Fabric?
Can someone please help me with this?
Thanks
How are you receiving the files: API, FTP, or some other way? There are a ton of details needed to really answer this, but here are my thoughts.
Blob storage would be more cost effective. You only need to use a file share if you want to be able to map a network drive from a VM.
If processing one file completes in less than 10 minutes, I would look at Azure Functions for that. If you're processing thousands of files per day, Azure Functions would get expensive, so I would look at running the processing on an App Service or on VMs, or moving to Service Fabric.
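As a rough sketch of the Azure Functions option, here is a minimal blob-triggered function using the Python programming model (the `inbound` container name and the default storage connection are assumptions):

```python
# Minimal sketch of a blob-triggered Azure Function (Python v2 model).
# Assumptions: an "inbound" container and the default AzureWebJobsStorage
# connection; the calculation/database logic is represented by a stub.
import azure.functions as func

app = func.FunctionApp()

@app.blob_trigger(arg_name="inbound_file",
                  path="inbound/{name}",           # assumed container name
                  connection="AzureWebJobsStorage")
def process_inbound(inbound_file: func.InputStream):
    data = inbound_file.read()
    # ... perform the calculations and store the results in the database ...
```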
If you have a web site that's used to upload the files and you're already using Azure App Service, then you could use WebJobs.

Access Azure Files Service from Azure WebSites

As the title says, I'm looking for a way to access an Azure Files share (in preview) directly from an Azure Website. I cannot use any REST API or anything like that, and I was looking into the possibility of mounting an SMB share directly into the website (through the new portal or any other way).
I found the following links, from which I understand that this is still under review (http://feedback.azure.com/forums/169385-web-apps-formerly-websites/suggestions/6084609-allow-map-azure-file-share-microsoft-azure-file-s) and also a SO question (Can the new Azure File Service be used from Azure WebSites?) that doesn't answer my question.
To be honest, and for the sake of giving more details, my scenario is pretty simple: I have some websites and also some virtual machines that should access files from the Azure Files service. For the VMs the approach is straightforward and easy, but for the Websites I can't find any way to do this at the moment.
On the other hand, regardless of the answer to the above question, does it make sense to (or do I have the possibility to) enable CDN over an Azure Files Share?
Thank you very much.
As of today, no single technology will serve your purpose. You can't use the File Service, as you don't have the capability to mount a share in an Azure Website, and it is not suited for streaming purposes (all access to files there must be authorized, and there is no concept of a Shared Access Signature in the File Service today).
I guess you would have to pick one of the two technologies (Blob Service or File Service) and make some compromises to make it work in both Websites and Virtual Machines.
Assuming you go with the File Service, you can mount the share in the Virtual Machine and do the processing on the files there. On the website front, you would need to use the storage client library to download the relevant files into a folder in your website and stream those files from there.
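As an illustration of that download step (the original answer refers to the .NET storage client; this is a minimal sketch with the Python file-share SDK, and the share name and file path are assumptions):

```python
# Minimal sketch (assumed names): pull a file from an Azure Files share
# into the website's local folder so it can be streamed from there.
# Requires the azure-storage-file-share package.
from azure.storage.fileshare import ShareFileClient

file_client = ShareFileClient.from_connection_string(
    conn_str="<storage-account-connection-string>",  # assumption
    share_name="helpfiles",                          # assumed share name
    file_path="docs/guide.pdf")                      # assumed file path

with open("local-cache/guide.pdf", "wb") as local_file:
    local_file.write(file_client.download_file().readall())
```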
Assuming you go with the Blob Service, you can simply stream them in your website directly from blob storage (no need to keep those files in your website). In the Virtual Machine, when you need to process those files (blobs), you would simply download them to your VM for processing and then re-upload them to blob storage.
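A minimal sketch of that streaming approach, with Flask standing in for the website and the container name assumed:

```python
# Minimal sketch (assumed framework/names): stream a blob back to the
# browser in chunks instead of materializing it on the website's disk.
from azure.storage.blob import BlobServiceClient
from flask import Flask, Response

app = Flask(__name__)
service = BlobServiceClient.from_connection_string("<connection-string>")

@app.route("/files/<name>")
def serve_file(name):
    blob = service.get_blob_client(container="documents", blob=name)  # assumed container
    downloader = blob.download_blob()
    # chunks() yields the blob a piece at a time, so large files are not
    # buffered whole in the web tier.
    return Response(downloader.chunks(), mimetype="application/octet-stream")
```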
Does it make sense to (or do I have the possibility to) enable CDN over an Azure Files Share?
Currently it is not possible to serve Azure File Service files via CDN.

Converting FTP data sync to Azure services

I have an old legacy application built on .NET Remoting that transfers data as XML files via FTP.
Essentially, a CRM system sends XML files to a directory on the web server, where a Windows service uses a FileSystemWatcher to process each incoming XML file and update the database.
Similarly, changes in the web application are serialized to an XML file in an "out" folder, which the CRM polls via FTP every 5 minutes.
I'm trying to work out which Azure services map best to this.
You could use Azure Blobs or Azure Files for this.
Azure Blobs: This is the lowest cost option, while still providing high throughput. However, note that Azure Blobs do not have File Watcher functionality, so you would have to poll the directory every few minutes to check for a new file. If you delete files after processing them, then this is really easy - all you have to do is list and see if there are any files. If you want to retain the files, then you might have to do more, since the file list will get big over time. Let me know if this is the case and I can suggest some options.
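A minimal sketch of that poll/list/delete loop (the container name, polling interval, and `handle_file` helper are assumptions):

```python
# Minimal sketch (assumed names): poll a container for new blobs,
# process each one, then delete it so the listing stays small.
import time
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")
container = service.get_container_client("inbound")  # assumed container name

while True:
    for blob in container.list_blobs():
        data = container.get_blob_client(blob.name).download_blob().readall()
        handle_file(blob.name, data)      # hypothetical processing function
        container.delete_blob(blob.name)  # delete after processing
    time.sleep(300)  # poll every few minutes, as suggested above
```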
Azure Files: This is an SMB share that you can mount from a VM in the same region. This will map pretty closely to your existing filesystem-based code, including FileSystemWatcher. However, note that Azure Files can only be mounted by a VM in the same region.

How to Back Up a Windows Azure Server

I have a workgroup server on Windows Azure. I have used Rackspace before, where I could simply image the server to back it up, but that's not so easy on Azure, since imaging the server deletes it!
My Azure server runs an application that uses a SQL database. I back the DB up off site, but I need to ensure I have a strategy for server downtime. I have looked into roles and instances but am fuzzy on it and getting lost in the many articles. See below what I have so far. I don't want the cost of two servers running for one application, so **does anyone know how to ensure availability of an Azure server and back up its contents in the event of a crash, without FTPing everything off site?**
Azure is geo-redundant, but you have to set up your server to take advantage of this feature.
Our current Azure setup is that we set up workgroup servers and license them, but I am fuzzy on where to go from here.
This is where it gets tricky
The number of per-role instances in a Windows Azure application is controlled by the Instances setting in the configuration (cscfg) file.
Windows Azure Service Configuration Schema http://msdn.microsoft.com/en-us/library/windowsazure/ee758710.aspx
How to Configure the Roles for a Windows Azure Application with Visual Studio http://msdn.microsoft.com/en-us/library/windowsazure/hh369931.aspx
Change the Number of Instances
To improve the performance of your application, you can change the number of instances of a role that are running, based on the number of users or the load expected for a particular role. A separate virtual machine is created for each instance of a role when the application runs in Windows Azure. This will affect the billing for the deployment of this application. For more information about billing, see Windows Azure Billing Basics.
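For reference, the relevant fragment of a service configuration (.cscfg) file looks roughly like this (the service and role names are assumptions):

```xml
<!-- Sketch of a .cscfg fragment; "WorkerRole1" is an assumed role name. -->
<ServiceConfiguration serviceName="MyService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WorkerRole1">
    <!-- One VM is created per instance; raising the count adds VMs (and cost). -->
    <Instances count="2" />
  </Role>
</ServiceConfiguration>
```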
• I will continue to research, but if any of you know the answer (how can I easily back up my Azure server docs and data without FTPing off site?), please feel free to weigh in!
If all you want is to back up the server, then you could use Recovery Services vaults. This feature allows you to back up any Azure VM; the backup is a snapshot of the entire server.
You can test your contingency plan by restoring the backup to a new VM.
It depends on what you are trying to back up, and on scale. A proper cloud architecture should not store or persist data on local Azure servers, since that does not scale. You should be persisting data to Azure Table storage, Blob storage, or SQL DB, and backing the data up from there. Then you can use the APIs to back up anything from a central location.
If you are running something like SQL Server or SharePoint, then there are some files persisted on the local VMs that you will need to back up. Luckily, those VHD drives are stored in blob storage and can be backed up as well, in addition to the geo-redundant backup.
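To illustrate "use the APIs to back up from a central location", here is a minimal sketch that copies every blob in a container to a backup container using a server-side copy (all names are assumptions):

```python
# Minimal sketch (assumed names): server-side copy of each blob in a
# container into a backup container, driven from one central script.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")
source = service.get_container_client("appdata")      # assumed source container
backup = service.get_container_client("appdata-bak")  # assumed backup container

for blob in source.list_blobs():
    src_url = source.get_blob_client(blob.name).url
    # start_copy_from_url runs asynchronously on the service side; within
    # the same account the plain URL works, cross-account needs a SAS.
    backup.get_blob_client(blob.name).start_copy_from_url(src_url)
```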

Backup Azure Virtual Machine local folders to blob storage?

I've just set up an extra small VM instance in Windows Azure to run a help console for our company. The help files can be updated and published through a simple .NET interface. Obviously the flat HTML files are deployed to the local drive on the VM and exposed publicly through IIS. I'm just wondering how stable this is? If the VM suffers a hardware failure, presumably there's no automatic failover, and any edits we've made to the help system will be lost?
Can anyone recommend a way to shuttle the source files out of the VM into blob storage? I could write an application to do this; I'm just wondering if there is an out-of-the-box solution out there.
Additional information:
• The VM instance is running Server 2008 R2 SP1 (as a virtual machine, not a web role)
• A backup needs to be created once every 24 hours
• Aged backups (3+ days old) need to be automatically cleared from the blob container
• The help system we use is called HelpConsole 2012
• New pages are added at a rate of maybe 2-3 per week
The answer depends on whether you are running this in a Windows Azure Virtual Machine or in a Windows Azure Web role.
If you are running this on a Windows Azure Virtual Machine, then the VHD is stored in BLOB storage, and if the site is running off the C: drive and not on a data disk, then the system has host caching turned on for both reads and writes. In this scenario it is possible (depending on the methods you use to write your files out) that the data is not pushed back to the VHD in BLOB storage before a failure occurs. You can either ensure that your writing methods do a write-through operation, or turn off the write caching. Better yet, attach a data disk for your web site files; by default, data disks have both read and write caching off (you could turn on read caching). Since the VHDs are persisted, you don't have to worry about the edits getting lost. You can script out taking a snapshot of the files and moving them to BLOB storage separately, or even push them somewhere else. Another thing to think about with this option is that you have to care for the VM instances and keep them patched and up to date.
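A minimal sketch of that "script a snapshot of the files out to BLOB storage" idea, which also covers the once-every-24-hours and 3-day-retention requirements from the question (the paths and container name are assumptions; it would run from a daily scheduled task on the VM):

```python
# Minimal sketch (assumed paths/names): upload the help-site files to a
# date-prefixed "virtual folder" in a blob container, then prune backups
# older than three days. Intended to run from a daily scheduled task.
import os
from datetime import datetime, timedelta
from azure.storage.blob import BlobServiceClient

SITE_ROOT = r"C:\inetpub\helpconsole"  # assumed path to the help files
service = BlobServiceClient.from_connection_string("<connection-string>")
container = service.get_container_client("help-backups")  # assumed container

# Upload today's snapshot of every file under the site root.
today = datetime.utcnow().strftime("%Y-%m-%d")
for root, _, files in os.walk(SITE_ROOT):
    for name in files:
        path = os.path.join(root, name)
        rel = os.path.relpath(path, SITE_ROOT).replace(os.sep, "/")
        with open(path, "rb") as f:
            container.upload_blob(name=f"{today}/{rel}", data=f, overwrite=True)

# Prune backups whose date prefix is older than three days.
cutoff = datetime.utcnow() - timedelta(days=3)
for blob in container.list_blobs():
    prefix = blob.name.split("/", 1)[0]
    try:
        if datetime.strptime(prefix, "%Y-%m-%d") < cutoff:
            container.delete_blob(blob.name)
    except ValueError:
        pass  # skip blobs without a date prefix
```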
If you are running a Web Role, then yes, if a failure occurs and the VM goes through self-healing, it will indeed redeploy with the older files. In this case I'd recommend changing the code in the web role so that when it writes updates to the local file, it also puts a copy of the file into BLOB storage. In addition, in the web role's OnStart you could reach out to BLOB storage and pull down all the new content locally. BE VERY CAREFUL with this approach though, because it only really works well for ONE instance, not multiple. If you plan on running multiple instances of the server (and you will have to if you want the SLA for uptime), then your code will need to be a little more robust: do the writes out to BLOB storage and then alert all instances of the role that there is a new file to pull down locally.
Another option for web roles is to write a handler for the content, so that requests come in and are mapped directly to a file in BLOB storage. Updates then go straight to the file in BLOB storage. This offloads the serving of the flat files from your compute nodes to BLOB storage, and you could even implement some caching and stream the content back through the handler, rather than having requests hit BLOB storage directly, if you wanted to.
Now, another option, is to use Windows Azure Web Sites for this. The underlying storage of the web site files in Windows Azure Web Sites is a shared location and thus updating the files in it will immediately be reflected for all instances. Also, the content for the site is stored in BLOB storage and can be updated via FTP, source control, or directly from code. Lots of options here. You may end up moving to reserved instances to help keep away from some of the quotas that Web Sites have. Web Sites may not be an option for you currently depending on other requirements (as in how much control do you need over the environment since you don't get a lot of control for Web Sites).
