I have a Windows VPS, not on Azure, and I'm looking into the Azure Backup services.
Ideally I'd like to back up the whole VPS to Azure. Let's say my current VPS dies; I could then use the Azure backup to create a new VPS with all programs, settings, files, databases, everything.
I'm not sure which of the Azure options to pick for this.
Does anyone know any good resources, know what each of the options means, or have any suggestions? I've read a lot on the Azure website, but it's not particularly clear.
Apologies if this is basic stuff or I've missed an obvious resource; I'm new to servers.
Many thanks,
Phil.
If you want a full backup of your machine, choosing 'files and folders' and 'system state' is your best option:
Files and folders will allow you to recover individual files and folders on your machine. Imagine a user accidentally deletes a file; you can recover it from the backup.
System state will allow you to recover the system state (the configuration of your machine) if your machine becomes corrupted.
The other items in there allow you to recover from specific sources (Hyper-V or VMware) or to take application-consistent backups.
To recover a full machine, I would enable both files/folders and system state backup. With Azure Backup you can restore either to a VM in Azure or to the source server.
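As a rough sketch of what enabling the files-and-folders part looks like through the Azure Backup (MARS) agent's PowerShell module, assuming the agent is already installed and registered to your vault (system state is enabled separately through the agent); the volumes, schedule, and retention below are placeholders:

```powershell
# Sketch: schedule a files-and-folders backup with the MARS agent's
# MSOnlineBackup module. Volumes, schedule, and retention are placeholders.
Import-Module MSOnlineBackup

$policy    = New-OBPolicy
$files     = New-OBFileSpec -FileSpec @("C:\", "D:\")    # what to protect
$schedule  = New-OBSchedule -DaysOfWeek Monday, Wednesday, Friday -TimesOfDay 02:00
$retention = New-OBRetentionPolicy -RetentionDays 30

Add-OBFileSpec -Policy $policy -FileSpec $files
Set-OBSchedule -Policy $policy -Schedule $schedule
Set-OBRetentionPolicy -Policy $policy -RetentionPolicy $retention
Set-OBPolicy -Policy $policy -Confirm:$false             # activate the policy
```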
Make sure to also have a look at Azure Site Recovery. With Azure Site Recovery you can 'mirror' a machine to Azure, which allows you to very quickly bring the machine up in Azure in case of corruption. Note that if your source is a VPS, Site Recovery would only let you fail over to Azure, not back to the VPS.
Our web server and database run on Azure VMs, with MySQL installed on the VM. Recently the database became corrupted, and we asked Azure to restore a backup from an older date when everything was working fine. Azure takes a backup of the whole machine daily, and the old backup was restored to a separate machine. We assumed the database would be fine there, because the backup predated the corruption.
But the issue is still the same.
So, my questions are:
How exactly does the VM backup capture the whole machine?
Does it reference the existing machine when restoring to a new VM?
How can I get correctly restored database files?
Note: MySQL logs are also attached.
A whole-VM backup means that a point-in-time snapshot is taken. With a running database like MySQL, this can leave the database files in an inconsistent state at the time of backup. Extra configuration on the virtual machine, in the form of pre- and post-snapshot scripts, is needed to produce an application-consistent backup; Microsoft details how to do this in its documentation.
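As a crude illustration of the pre-/post-script idea (the hook names and service name here are assumptions, not Azure's documented mechanism): stop the database before the snapshot so its files are closed and consistent on disk, and restart it afterwards.

```powershell
# pre-backup.ps1 (hypothetical hook): stop MySQL so its data files are
# closed and consistent on disk before the snapshot is taken.
Stop-Service -Name 'MySQL57'    # service name is an assumption

# post-backup.ps1 (hypothetical hook): bring MySQL back up once the
# snapshot has completed.
Start-Service -Name 'MySQL57'
```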
That, however, is of little use in the situation you are in at the moment. As stated in the InnoDB recovery documentation, a good option would be to force InnoDB recovery manually; the documentation for manual recovery can be found here.
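A minimal sketch of that forced-recovery path on a Windows VM, assuming a default MySQL 5.7 install path and service name (both are assumptions; adjust for your machine), starting at the lowest recovery level:

```powershell
# Add the forced-recovery flag (make sure it lands in the [mysqld] section;
# the path and service name are assumptions).
$ini = 'C:\ProgramData\MySQL\MySQL Server 5.7\my.ini'
Add-Content -Path $ini -Value "innodb_force_recovery = 1"
Restart-Service -Name 'MySQL57'

# With the server up in recovery mode, dump everything out.
& 'C:\Program Files\MySQL\MySQL Server 5.7\bin\mysqldump.exe' `
    -u root -p --all-databases "--result-file=C:\recovery\all-databases.sql"

# Afterwards: remove the innodb_force_recovery line, restart MySQL, and
# reimport the dump into a clean instance. Raise the level cautiously;
# levels above 4 can permanently corrupt data.
```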
I have just moved my web site to an Azure Virtual Machine and have been up and running since last weekend. So far I'm very happy with the results and looking forward to taking advantage of Azure further in due course.
I do have what would seem to be a pretty common scenario, and, to my surprise, I can't find an obvious solution. I have a couple of VMs: one is my primary server, and the other will be suspended, ready to kick in (manually is fine) if the first one has an issue. I back up my web site to Azure Storage (my backup utility supports saving to an Azure blob). That's the good news.
I had assumed that I could somehow mount the storage blob as a drive, effectively giving me shared storage across the two VMs. However, to my surprise, I haven't found an obvious way to do that. I have found a third-party utility (Gladinet Cloud Desktop), but it seems painfully slow. I admit I just assumed this would be an easy thing to do.
So, stepping back, what is the most straightforward way to access a storage blob from multiple VMs? I really don't want to set up a private network and then network file sharing; that seems so old school :) and places a dependency on one specific VM.
Any suggestions?
Thanks.
This is now not just possible, but very easy, and it looks just like a filesystem. Check out the new Azure File Service (in preview as of this writing).
http://blogs.msdn.com/b/windowsazurestorage/archive/2014/05/12/introducing-microsoft-azure-file-service.aspx
Quoting from the announcement:
"The Azure File service exposes file shares using the standard SMB 2.1 protocol. Applications running in Azure can now easily share files between VMs using standard and familiar file system APIs like ReadFile and WriteFile."
It is better than just an SMB drive, as the announcement goes on to mention:
"In addition, the files can also be accessed at the same time via a REST interface, which opens a variety of hybrid scenarios. Finally, Azure Files is built on the same technology as the Blob, Table, and Queue Services, which means Azure Files is able to leverage the existing availability, durability, scalability, and geo redundancy that is built into our platform."
In an Azure Resource Manager storage account, you can create a network file share that can be mounted as a drive on multiple VMs, or on computers and devices outside Azure, for Unix, Linux, and Windows.
In general: go to your storage account ➡ Files ➡ Create file share ➡ name the share and set the disk space quota ➡ click Connect to obtain the mount command for Windows or Linux. Note this only works for locally redundant storage, not zone- or geo-redundant storage.
https://www.youtube.com/watch?v=SGPJZMaSlis
The video tutorial above shows step by step how to do this. The only restriction is OS support for the SMB 3.0 protocol, which Windows 8 and above and Windows Server 2012 and above provide. Firewall port 445 must be open.
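If you prefer scripting over the portal clicks, a minimal sketch of creating the same share with the Az.Storage PowerShell module (account name, key, share name, and quota are placeholders):

```powershell
# Sketch: create the share and set its quota from PowerShell (Az.Storage
# module); account name, key, share name, and quota are placeholders.
$ctx = New-AzStorageContext -StorageAccountName "mystorageacct" `
                            -StorageAccountKey "<storage-account-key>"
New-AzStorageShare -Name "myshare" -Context $ctx
Set-AzStorageShareQuota -ShareName "myshare" -Quota 100 -Context $ctx   # GiB
```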
You can access blobs from multiple VMs. This is a very common pattern. What you can't do is mount a drive (stored in a blob) on multiple VMs simultaneously. That is, if you decide to create a VHD disk and attach it to a VM (whether Linux or Windows - doesn't matter), then the blob-backed disk is locked to a VM and that VM can then work with the vhd like it would a local file system.
If, on the other hand, you deal with blobs discretely as single objects, you can easily work with these blobs across any number of VMs.
If you're looking to do something like network sharing (e.g. SMB), you'd either need to use the Azure File Service or stage your own SMB server VM.
In the case where you absolutely must have a mounted file system, yet want to use it in a primary/backup fashion, you could always use the API to unmount the disk from one VM and remount it on another. This can be done via PowerShell (Windows only) or via the cross-platform command-line interface on Linux/Mac/Windows. You'd do this if your primary VM failed for some reason.
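As a sketch of that unmount/remount step using the current Az PowerShell module (the resource group, VM, and disk names are placeholders; the classic-era cmdlets this answer had in mind differ):

```powershell
# Sketch: move a data disk from a failed primary VM to a standby VM.
# Resource group, VM, and disk names are placeholders.
$vm1 = Get-AzVM -ResourceGroupName "myRG" -Name "primary-vm"
Remove-AzVMDataDisk -VM $vm1 -DataDiskNames "shared-data"
Update-AzVM -ResourceGroupName "myRG" -VM $vm1

$vm2  = Get-AzVM -ResourceGroupName "myRG" -Name "standby-vm"
$disk = Get-AzDisk -ResourceGroupName "myRG" -DiskName "shared-data"
Add-AzVMDataDisk -VM $vm2 -Name "shared-data" -ManagedDiskId $disk.Id `
    -Lun 0 -CreateOption Attach
Update-AzVM -ResourceGroupName "myRG" -VM $vm2
```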
These are good articles; I am also looking into this and hoping to find the right solution.
I hope you share your experience here with your choice.
Deciding when to use Azure Blobs, Azure Files, or Azure Disks
https://learn.microsoft.com/en-us/azure/storage/common/storage-decide-blobs-files-disks
There are also premium disks:
https://azure.microsoft.com/en-us/pricing/details/managed-disks/
Manually create and use a volume with Azure disks in Azure Kubernetes Service (AKS)
https://learn.microsoft.com/en-us/azure/aks/azure-disk-volume
Note: An Azure disk can only be mounted to a single pod at a time. If you need to share a persistent volume across multiple pods, use Azure Files.
Performance guidelines for SQL Server in Azure Virtual Machines
https://learn.microsoft.com/en-us/azure/virtual-machines/windows/sql/virtual-machines-windows-sql-performance
Deploy a SQL Server container in Kubernetes with Azure Kubernetes Services (AKS)
https://learn.microsoft.com/en-us/sql/linux/tutorial-sql-server-containers-kubernetes?view=sql-server-2017
I have a workgroup server on Windows Azure. I have used Rackspace before and simply imaged the server to back it up, but that's not so easy on Azure, as capturing an image of the server deletes it!
My Azure server runs an application that uses a SQL database. I back up the DB off site, but I need a strategy for server downtime. I have looked into roles and instances but am fuzzy on them and getting lost in the many articles. See below for what I have so far. I don't want the cost of two servers running for one application, so does anyone know how to ensure availability of an Azure server and back up its contents in the event of a crash, without FTPing everything off site?
Azure is geo-redundant, but you have to set up your server to take advantage of this feature.
Our current Azure setup is that we set up workgroup servers and license them, but I am fuzzy on where to go from here.
This is where it gets tricky.
The number of per-role instances in a Windows Azure application is controlled by the Instances setting in the configuration (cscfg) file.
Windows Azure Service Configuration Schema http://msdn.microsoft.com/en-us/library/windowsazure/ee758710.aspx
How to Configure the Roles for a Windows Azure Application with Visual Studio http://msdn.microsoft.com/en-us/library/windowsazure/hh369931.aspx
Change the Number of Instances
To improve the performance of your application, you can change the number of instances of a role that are running, based on the number of users or the load expected for a particular role. A separate virtual machine is created for each instance of a role when the application runs in Windows Azure. This will affect the billing for the deployment of this application. For more information about billing, see Windows Azure Billing Basics.
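If the service is already deployed, the same setting can also be changed at runtime with the classic Service Management cmdlet, sketched here with placeholder service and role names:

```powershell
# Sketch: scale the "WebRole1" role of a deployed cloud service to two
# instances (service and role names are placeholders).
Set-AzureRole -ServiceName "myCloudService" -Slot "Production" `
    -RoleName "WebRole1" -Count 2
```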
I will continue to research, but if any of you know the answer (how can I easily back up my Azure server docs and data without FTPing off site?), please feel free to weigh in!
If all you want is to back up the server, then you could use a Recovery Services vault. This feature allows you to back up any Azure VM; the backup is a snapshot of the entire server.
You can test your contingency plan by restoring the backup to a new VM.
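A minimal sketch of enabling that protection with the current Az PowerShell modules (the vault, resource group, VM, and policy names are placeholders):

```powershell
# Sketch: enable backup for a VM using an existing vault and the default
# policy (vault, resource group, VM, and policy names are placeholders).
$vault = Get-AzRecoveryServicesVault -ResourceGroupName "myRG" -Name "myVault"
Set-AzRecoveryServicesVaultContext -Vault $vault
$policy = Get-AzRecoveryServicesBackupProtectionPolicy -Name "DefaultPolicy"
Enable-AzRecoveryServicesBackupProtection -Policy $policy `
    -Name "myVM" -ResourceGroupName "myRG"
```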
It depends on what you are trying to back up and at what scale. A proper cloud architecture should not store or persist data on local Azure servers, since that does not scale. You should persist data to Azure Table storage, blob storage, or SQL Database, and back up the data from there. Then you can use the APIs to back up everything from a central location.
If you are running something like SQL Server or SharePoint, then there are some files persisted on the local VMs that you will need to back up. Luckily, those VHD drives are stored in blob storage and can be backed up as well, in addition to the geo-redundant replication.
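For instance, a central backup script might copy everything from a data container into a dated backup container, sketched here with the current Az.Storage module (the account, key, and container names are placeholders):

```powershell
# Sketch: copy every blob in the "appdata" container into a dated backup
# container (account, key, and container names are placeholders).
$ctx    = New-AzStorageContext -StorageAccountName "mystorageacct" `
                               -StorageAccountKey "<storage-account-key>"
$backup = "backup-$(Get-Date -Format yyyyMMdd)"
New-AzStorageContainer -Name $backup -Context $ctx
Get-AzStorageBlob -Container "appdata" -Context $ctx | ForEach-Object {
    Start-AzStorageBlobCopy -SrcContainer "appdata" -SrcBlob $_.Name `
        -DestContainer $backup -DestBlob $_.Name -Context $ctx
}
```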
I've just set up an extra small VM instance in Windows Azure to run a help console for our company. The help files can be updated and published through a simple .NET interface. Obviously the flat HTML files are deployed to the local drive on the VM and exposed publicly through IIS. I'm just wondering how stable this is. If the VM suffers a hardware failure, presumably there's no automatic failover, and any edits we've made to the help system will be lost?
Can anyone recommend a way to shuttle the source files out of the VM into blob storage? I could write an application to do this; I'm just wondering if there is an out-of-the-box solution out there.
Additional information:
The VM instance is running Server 2008 R2 SP1 (as a virtual machine, not a web role)
A backup needs to be created once every 24 hours
Aged backups (3+ days old) need to be automatically cleared from the blob container
The help system we use is called HelpConsole 2012
New pages are added at a rate of maybe 2-3 per week
The answer depends on whether you are running this in a Windows Azure virtual machine or in a Windows Azure web role.
If you are running this on a Windows Azure virtual machine, then the VHD is stored in blob storage, and if the site is running off the C: drive rather than a data disk, the system has host caching turned on for both reads and writes. In this scenario it is possible (depending on the methods you use to write your files out) that the data is not pushed back to the VHD in blob storage before a failure occurs. You can either ensure that your write methods perform write-through operations, or turn off the write caching. Better yet, attach a data disk for your web site files; by default, data disks have both read and write caching off (you could turn read caching on). Since the VHDs are persisted, you don't have to worry about the edits getting lost. You can also script out taking a snapshot of the files and moving them to blob storage separately, or even push them somewhere else. One more thing to consider with this option: you have to care for the VM instances yourself and keep them patched and up to date.
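A sketch of such a snapshot-and-move script with the current Az.Storage module (the account, key, container, and local path are placeholders), which also matches the 24-hour and 3-day requirements in the question; run it daily from Task Scheduler:

```powershell
# Sketch: upload the help files under a dated prefix, then prune copies
# older than three days. Account, key, container, and path are placeholders.
$ctx   = New-AzStorageContext -StorageAccountName "mystorageacct" `
                              -StorageAccountKey "<storage-account-key>"
$root  = "C:\inetpub\wwwroot\help"          # local help files (assumption)
$stamp = Get-Date -Format "yyyy-MM-dd"
Get-ChildItem $root -Recurse -File | ForEach-Object {
    $rel = $_.FullName.Substring($root.Length + 1).Replace('\', '/')
    Set-AzStorageBlobContent -File $_.FullName -Container "help-backups" `
        -Blob "$stamp/$rel" -Context $ctx -Force
}
# Prune aged backups (3+ days old) from the container.
Get-AzStorageBlob -Container "help-backups" -Context $ctx |
    Where-Object { $_.LastModified -lt (Get-Date).AddDays(-3) } |
    Remove-AzStorageBlob
```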
If you are running a web role, then yes, if a failure occurs and the VM goes through self-healing, it will indeed redeploy with the older files. In this case I'd recommend changing the code in the web role so that when it writes updates to the local file, it also puts a copy of the file into blob storage. In addition, in the web role's OnStart you could reach out to blob storage and pull down all the new content locally. Be very careful with this approach, though, because it only really works well for one instance, not multiple. If you plan on running multiple instances of the server (and you will have to if you want the SLA for uptime), then your code will need to be a little more robust: do the writes out to blob storage and then alert all instances of the role that there is a new file to pull down locally.
Another option for web roles is to write a handler for the content so that incoming requests are mapped directly to files in blob storage. Updates then go directly to the file in blob storage. This offloads serving the flat files from your compute nodes to blob storage, and you could even implement some caching and stream the content back through the handler rather than having clients hit blob storage directly.
Now, another option is to use Windows Azure Web Sites for this. The underlying storage of the site files in Windows Azure Web Sites is a shared location, so updating the files is immediately reflected across all instances. The content for the site is stored in blob storage and can be updated via FTP, source control, or directly from code. Lots of options here. You may end up moving to reserved instances to keep away from some of the quotas Web Sites have. Web Sites may not be an option for you currently, depending on other requirements (such as how much control you need over the environment, since you don't get much control with Web Sites).
I want to move an existing Server 2008 instance from Rackspace/Hostway to Azure. Can I do a full OS/data backup, copy the backup file to the Azure server, and then restore from the backup file? How do you suggest I migrate this server to Azure? Hostway will not let me get a copy of the VMDK file.
Have you tried contacting their support about providing you with either a VMDK or a VHD file? Why are you so sure they won't give it to you?
If they don't, you could do a full system backup with either Windows Backup or any imaging software. Get that backup locally, restore it to a local virtual machine, and run sysprep on that local VM. Then get the VHD, upload it to Azure, and finally create a Windows Azure VM from that VHD.
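A sketch of the upload-and-create step with the classic Service Management cmdlets from that era (the storage account, URLs, and names are placeholders):

```powershell
# Sketch with the classic Service Management cmdlets; storage account,
# container, and names are placeholders.
Add-AzureVhd -LocalFilePath "C:\vhds\server2008.vhd" `
    -Destination "https://mystorageacct.blob.core.windows.net/vhds/server2008.vhd"
# Register the uploaded VHD as a bootable OS disk...
Add-AzureDisk -DiskName "server2008-os" -OS Windows `
    -MediaLocation "https://mystorageacct.blob.core.windows.net/vhds/server2008.vhd"
# ...and create a VM from it.
New-AzureVMConfig -Name "migrated-vm" -InstanceSize Small -DiskName "server2008-os" |
    New-AzureVM -ServiceName "migrated-svc" -Location "West Europe"
```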
You can provision a SQL VM in the Azure cloud and then sync your database to it directly, or copy the database files directly to it. This is also a vast performance improvement over the SQL Azure service; for a large business app with a lot of DB access, we found it is almost a requirement rather than an option. (The DB should be on your C: drive so that it runs on the local disk.) SQL Azure is slow because of how data is replicated; a local VM running the service is highly advised. We have an 11 GB DB, and this was the only way to get reasonable performance out of it.
See https://stackoverflow.com/questions/2711868/azure-performance/13091125#13091125 for our benchmarks. I've done testing on SQL in VMs, and it's on par with an on-premises solution.