Azure Service Fabric backups (non-persistent data) - azure

I have 3 applications deployed to Azure Service Fabric via an ARM template. The only items identified as needing backup are a few supporting resources: a central blob storage account, about 5 SQL databases, and the key vault. The cluster and apps can be redeployed right away via the template.
Searching for backup solutions, I'm seeing a lot of info on backing up services, but not specific resources like the ones I have here. Can anyone point me in the right direction, or to sample code, on how to do this - or is it even an option?

Ok so, blob storage cannot be backed up with a built-in Azure service; you have to create a program/script that copies the data yourself.
For the Key Vault you can use the PowerShell backup cmdlets.
For SQL there are a bunch of ways to do it, but you may be able to settle for the built-in automated backups, which happen automatically and let you restore to any point in the last 7-35 days (depending on your tier).
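As a rough illustration of all three points, here is a minimal PowerShell sketch. It assumes the Az.Storage, Az.KeyVault and Az.Sql modules; every resource name and key shown (myappstorage, myappbackup, my-vault, my-server, mydb, my-rg) is a placeholder, not something from the question.

```powershell
# --- 1. Blob storage: server-side copy of every blob into a second account ---
$srcKey  = '<source-storage-key>'       # placeholder
$destKey = '<backup-storage-key>'       # placeholder
$src  = New-AzStorageContext -StorageAccountName 'myappstorage' -StorageAccountKey $srcKey
$dest = New-AzStorageContext -StorageAccountName 'myappbackup'  -StorageAccountKey $destKey

foreach ($container in Get-AzStorageContainer -Context $src) {
    New-AzStorageContainer -Name $container.Name -Context $dest -ErrorAction SilentlyContinue
    Get-AzStorageBlob -Container $container.Name -Context $src | ForEach-Object {
        Start-AzStorageBlobCopy -SrcContainer $container.Name -SrcBlob $_.Name -Context $src `
            -DestContainer $container.Name -DestBlob $_.Name -DestContext $dest
    }
}

# --- 2. Key Vault: dump every secret to a backup file ---
foreach ($secret in Get-AzKeyVaultSecret -VaultName 'my-vault') {
    Backup-AzKeyVaultSecret -VaultName 'my-vault' -Name $secret.Name `
        -OutputFile "C:\backups\$($secret.Name).blob"
}

# --- 3. SQL: the automated backups already exist; a point-in-time restore looks like this ---
$db = Get-AzSqlDatabase -ResourceGroupName 'my-rg' -ServerName 'my-server' -DatabaseName 'mydb'
Restore-AzSqlDatabase -FromPointInTimeBackup -PointInTime (Get-Date).AddHours(-6) `
    -ResourceGroupName 'my-rg' -ServerName 'my-server' -ResourceId $db.ResourceId `
    -TargetDatabaseName 'mydb-restored'
```

The blob copy is asynchronous on the service side, so a real script would also poll the copy status before declaring the backup complete.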

Related

Where does Azure keep non-VM logs? Can they be downloaded programmatically?

Azure keeps a bunch of VM (and cloud service) related logs in WAD* tables. The question is about actions which do not necessarily affect VMs. Say someone deleted a storage table. Does Azure keep a log record of that? If yes, where? And how can those records be fetched with a program/script?
The Service Management REST API can be used to retrieve the operation logs programmatically - see the List Subscription Operations operation:
https://msdn.microsoft.com/library/azure/gg715318.aspx
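A minimal sketch of calling that endpoint from PowerShell, assuming a classic management certificate is installed for the subscription. The subscription ID and certificate thumbprint are placeholders, and the x-ms-version value may need to match the version given in the linked article:

```powershell
# Query the classic "List Subscription Operations" log for the last 7 days.
$subscriptionId = '<subscription-id>'                                   # placeholder
$cert  = Get-Item 'Cert:\CurrentUser\My\<certificate-thumbprint>'       # placeholder

$start = (Get-Date).AddDays(-7).ToString('yyyy-MM-ddTHH:mm:ssZ')
$end   = (Get-Date).ToString('yyyy-MM-ddTHH:mm:ssZ')
$uri   = "https://management.core.windows.net/$subscriptionId/operations?StartTime=$start&EndTime=$end"

# The response is an XML SubscriptionOperationCollection describing management
# operations (including deletes of storage resources) across the subscription.
$log = Invoke-RestMethod -Uri $uri -Certificate $cert -Headers @{ 'x-ms-version' = '2014-10-01' }
$log.SubscriptionOperationCollection.SubscriptionOperations.SubscriptionOperation
```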

Azure - Multiple Cloud Services, Single Storage Account

I want to create a few cloud services - Int, QA, and Prod. Each of these will connect to a separate DB.
Do these cloud services require "storage accounts"? Conceptually the cloud services have executables and they must be physically located somewhere.
Note: I do not use any blobs/queues/tables.
If so, must I create 3 separate storage accounts or link them up to one?
Storage accounts are more like storage namespaces - each has a URL and a set of access keys. You can use storage from anywhere, whether from the cloud or not, from one cloud service or many.
As @sharptooth pointed out, you need storage for diagnostics with Cloud Services, and also for attached disks (Azure Drives for cloud services) and for the deployments themselves (storing the cloud service package and configuration).
Storage accounts themselves are free: create a bunch, and you still only pay for consumption.
There are some objective reasons why you'd go with separate storage accounts:
You feel that you could exceed the 20,000 transactions/second advertised limit of a single storage account (remember that storage diagnostics consume some of this transaction rate, depending on how aggressive your logging is).
You are concerned about security/isolation. You may want your dev and QA folks using an entirely different subscription altogether, with their own storage accounts, to avoid any risk of damaging a production deployment.
You feel that you'll exceed 500 TB (the limit of a single storage account).
Azure Diagnostics uses Azure Table Storage under the hood (and it's more convenient to use one storage account for every service, but it's not required). Other dependencies your service has might also use some of the Azure Storage services. If you're sure that you don't need Azure Storage (and so you don't need persistent storage of data dumped through Azure Diagnostics) - okay, you can go without it.
The service package of your service will be stored and managed by Azure infrastructure - that part doesn't require a storage account.
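If you do decide on a separate storage account per environment, creating them is cheap and scriptable. A minimal sketch using the current Az module; the resource group, account names, location and SKU are placeholders:

```powershell
# One storage account per environment, all in one resource group.
$rg = 'my-cloudservice-rg'

foreach ($stage in 'int', 'qa', 'prod') {
    New-AzStorageAccount -ResourceGroupName $rg `
        -Name "myapp${stage}storage" `
        -Location 'westeurope' `
        -SkuName 'Standard_LRS'
}

# Later, grab the keys to build the diagnostics connection string for each environment.
$keys = Get-AzStorageAccountKey -ResourceGroupName $rg -Name 'myappprodstorage'
"DefaultEndpointsProtocol=https;AccountName=myappprodstorage;AccountKey=$($keys[0].Value)"
```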

How to Backup Windows Azure Server

I have a workgroup server on Windows Azure. I have used Rackspace before and simply imaged the server to back it up, BUT that's not so easy on Azure, as imaging the server deletes it!
My Azure server is used to run an application that uses a SQL database. I back up the DB off-site, BUT I need to ensure I have a strategy for downtime of the server. I have looked into roles and instances but am fuzzy on it and getting lost in the many articles. See below what I have so far, BUT I don't want the cost of two servers running for one application, so **does anyone know how to ensure availability of an Azure server and back up its contents in the event of a crash without FTPing everything off-site?**
Azure is geo-redundant, BUT you have to set your server up to take advantage of this feature.
The current Azure setup is that we set up workgroup servers and license them, BUT I am fuzzy on where to go from here.
This is where it gets tricky
The number of per-role instances in a Windows Azure application is controlled by the Instances setting in the configuration (cscfg) file.
Windows Azure Service Configuration Schema http://msdn.microsoft.com/en-us/library/windowsazure/ee758710.aspx
How to Configure the Roles for a Windows Azure Application with Visual Studio http://msdn.microsoft.com/en-us/library/windowsazure/hh369931.aspx
Change the Number of Instances
To improve the performance of your application, you can change the number of instances of a role that are running, based on the number of users or the load expected for a particular role. A separate virtual machine is created for each instance of a role when the application runs in Windows Azure. This will affect the billing for the deployment of this application. For more information about billing, see Windows Azure Billing Basics.
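For reference, here is roughly what that Instances setting looks like inside a cscfg file. The service name, role name and diagnostics connection string below are placeholders, not values from the question:

```xml
<ServiceConfiguration serviceName="MyAzureService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="MyWebRole">
    <!-- One VM is created per instance; raise the count for availability/scale. -->
    <Instances count="2" />
    <ConfigurationSettings>
      <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString"
               value="DefaultEndpointsProtocol=https;AccountName=...;AccountKey=..." />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>
```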
• I will continue to research, but if any of you know the answer (how can I easily back up my Azure server docs and data without FTPing off-site?) please feel free to weigh in!
If all you want is to back up the server, then you could use Recovery Services vaults. This feature allows you to back up any Azure VM; the backup is a snapshot of the entire server.
You can test your contingency plan by restoring the backup to a new VM.
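A minimal PowerShell sketch of that approach, assuming the Az.RecoveryServices module and an existing vault; the vault, resource group and VM names are placeholders:

```powershell
# Enable Azure Backup for an existing VM and trigger an ad-hoc backup.
$vault = Get-AzRecoveryServicesVault -ResourceGroupName 'my-rg' -Name 'my-backup-vault'
Set-AzRecoveryServicesVaultContext -Vault $vault

# Protect the VM with the vault's default policy (scheduled daily backups).
$policy = Get-AzRecoveryServicesBackupProtectionPolicy -Name 'DefaultPolicy'
Enable-AzRecoveryServicesBackupProtection -Policy $policy -Name 'my-server-vm' -ResourceGroupName 'my-rg'

# Kick off an on-demand backup right away.
$container = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVM -FriendlyName 'my-server-vm'
$item = Get-AzRecoveryServicesBackupItem -Container $container -WorkloadType AzureVM
Backup-AzRecoveryServicesBackupItem -Item $item
```

Restoring the snapshot to a new VM is how you would test the contingency plan mentioned above.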
It depends on what you are trying to back up, and on scale. A proper cloud architecture should not store or persist data on local Azure servers, since that does not scale. You should be persisting data to Azure Table storage, blob storage, or SQL Database, and backing it up from there. Then you can use the APIs to back up everything from a central location.
If you are running something like SQL Server or SharePoint, then there are some files persisted on the local VMs that you will need to back up. Luckily, those VHD drives are stored in blob storage and can be backed up as well, in addition to being geo-redundantly replicated.
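As one concrete example of backing up from a central location, an on-demand export of an Azure SQL database to blob storage can be scripted with the Az.Sql module. The server, database, storage account and credential values below are placeholders:

```powershell
# Export a database as a .bacpac into blob storage.
$storageKey = (Get-AzStorageAccountKey -ResourceGroupName 'my-rg' -Name 'myappbackup')[0].Value
$adminPwd   = Read-Host 'SQL admin password' -AsSecureString

New-AzSqlDatabaseExport -ResourceGroupName 'my-rg' -ServerName 'my-server' -DatabaseName 'mydb' `
    -StorageKeyType 'StorageAccessKey' -StorageKey $storageKey `
    -StorageUri 'https://myappbackup.blob.core.windows.net/sql-backups/mydb.bacpac' `
    -AdministratorLogin 'sqladmin' -AdministratorLoginPassword $adminPwd

# The returned object carries an OperationStatusLink you can poll with
# Get-AzSqlDatabaseImportExportStatus to see when the export completes.
```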

Why do we link an azure storage account to a cloud service?

Why do we link an azure storage account to a cloud service? How does it help? What happens if I do not link them?
Two reasons:
Easier management - you have a better idea of what your overall configuration is for a particular deployment.
Easier management - when deleting a resource, you are asked whether you want to delete the linked resources as well.
By the way, you can also link a Windows Azure SQL Database to a Cloud Service.
The whole idea is to help you better manage the services. There is no other reason, and nothing will happen if you do not link. But think a bit - if you manage 3 subscriptions, with 2 cloud service deployments each and 2 storage accounts per deployment, that is 6 cloud services and 12 storage accounts. Can you easily tell which service is using which account?
The cloud service depends on the storage account. When deploying the cloud service, it will create a container called vsdeploy with a block blob that is used for the VMs it creates.
It also stores crash dump files there, under the wad-crashdumps container. The folder structure is WAD/{GUID}/{worker role}/{instance}, and all the .dmp files are stored as block blobs.
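If you want to see this for yourself, a quick PowerShell sketch for inspecting the linked storage account; the account name and key are placeholders:

```powershell
# Peek at what the cloud service has written to its linked storage account.
$key = '<storage-key>'   # placeholder
$ctx = New-AzStorageContext -StorageAccountName 'mylinkedstorage' -StorageAccountKey $key

# The deployment package lands under vsdeploy; crash dumps under wad-crashdumps.
Get-AzStorageContainer -Context $ctx | Select-Object Name

Get-AzStorageBlob -Container 'wad-crashdumps' -Context $ctx |
    Select-Object Name, Length, LastModified
```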

Getting Started with Azure Question

I'm trying to get up and going with Windows Azure. I understand that I need to create a "Storage Account". However, what I'm confused about is how I should set it up. For instance, my Azure subscription is set to my company name. I intend to have multiple ASP.NET web applications (web roles) associated with my subscription. Each web application will have its own database.
My question is, should each web application have its own storage account? Or should only one storage account be used for all of my projects?
Thank you!
There's no one way to answer this, but here are some thoughts to help your decision:
Each storage account is limited to 100TB. If you feel that you will push the limits of this across multiple websites, then create multiple storage accounts for sure.
To make billing easier, I'd suggest separate storage accounts
Storage accounts have a scalability target of a few thousand transactions per second across the entire storage account. For performance purposes, it's probably better to have separate storage accounts.
Consider putting your diagnostic data in a separate storage account. This way, you can safely give your Storage Account key to a 3rd-party like ParaLeap (creators of AzureWatch) for monitoring your app, while not giving away the key to real customer data, for instance.
If you need more than 5 storage accounts, you'll need to contact Customer Support to increase this number.
Windows Azure Storage is for simple blob storage - it's for when your app needs a file store. Any application, not just Azure web roles, can target the storage service. It's kind of like Amazon S3, if you're familiar with that.
Storage services are not required to run Azure applications. You just need a "compute" instance.
