Azure application backup configuration error: Failed to save Backup Configuration

I am trying to configure a backup for an application with scheduled backup set to false; we do manual backups. I get this error:
"Failed to save Backup Configuration. Error Details: Requested backup frequency exceeds maximum allowed for the plan."
As far as I can see, since scheduled backup is set to false, there is no requested backup frequency.
What am I missing here?
Backup storage is set to a container and blob
Backup schedule is off
Include database backup is off
The application uses the Standard tier, and the container and the application are all under the same Azure subscription.

The Backup and Restore feature requires the App Service plan to be in the Standard or Premium tier. For this error message, you could upgrade the App Service pricing tier to a higher tier; for example, you can scale up to the P1 tier.
Please note that there are App Service limits. The following App Service limits apply to Web Apps, Mobile Apps, and API Apps.
For the Standard tier:
Scheduled backups every 2 hours, a maximum of 12 backups per day
(manual + scheduled)
For more information, see the requirements and restrictions for backing up your app in Azure.
Edit
As confirmed by Azure PG,
There is no restriction on number of apps under one subscription that
can be backed up. If the customer has 4 apps in Standard SKU, each can
be backed up for 12 times / day.
Also, this is a known Bug 5913246: Enabling Manual Backup submits
settings for Hourly Backup
Workaround for this is to Create a Daily Schedule first, save this,
then remove the schedule afterward.
So, to fix this, you can create the backup with scheduled backup set to on; after the backup succeeds, go back and set scheduled backup off again.
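The workaround above can also be scripted with the Azure CLI, which avoids the portal form that triggers the bug. This is a sketch only: the resource group, app name, and SAS URL are hypothetical placeholders, and turning the schedule back off afterward is still done as described above.

```shell
# Hypothetical names (MyResourceGroup, my-app); replace the SAS URL with your own.
# 1. Save a daily schedule first, so no hourly frequency is ever submitted.
az webapp config backup update \
  --resource-group MyResourceGroup \
  --webapp-name my-app \
  --container-url "<storage-container-SAS-URL>" \
  --frequency 1d \
  --retain-one true \
  --retention 30

# 2. Trigger the manual backup you actually want.
az webapp config backup create \
  --resource-group MyResourceGroup \
  --webapp-name my-app \
  --container-url "<storage-container-SAS-URL>"
```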


Azure app service scale up or scale out for function app

I have a function app which processes XML files from Azure Blob storage and puts the data into an Azure SQL DB. This works fine when the file size is in KB (we have told the sender to send files up to 100 KB).
The problem comes when the file size increases to 2 MB to 3 MB: processing gets stuck in the middle, and since the job runs every 2 hours and the blob receives files every 2 hours, everything gets stuck (the files currently being processed as well as the new ones).
I can't change the schedule from 2 hours to longer. Considering this, is there any way to scale up or scale out the App Service plan so that it can process the larger files within the 2-hour window? Will a code or configuration change be required for this? Also, if yes, what is the cost of this?
Or is there any other way to handle such situations?
Please note, the current App Service plan is S2:2 and all deployment slots are in the same App Service plan.
Thank you Roman Kiss and Anupam Chand. Posting your suggestions as an answer to help other community members.
The hosting plan you choose dictates the following behaviors:
How your function app is scaled.
The resources available to each function app instance.
Support for advanced functionality, such as Azure Virtual Network connectivity.
Check Hosting Options for further information.
Here is the pricing calculator for moving from P1V2 to P1V3.
The image below will help you in scaling up your App Service plan.
If your app depends on other services like SQL and Storage, you need to scale up those resources separately.
In the resource group, go to the summary and select the resource to scale up.
For further information, see Scale up in Azure App Service.
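As a sketch of what scaling up (and scaling out) looks like outside the portal, assuming hypothetical plan and group names: both operations go through `az appservice plan update`.

```shell
# Scale UP: move the plan from S2 to a larger SKU (e.g. Premium v3).
az appservice plan update \
  --name MyAppServicePlan \
  --resource-group MyResourceGroup \
  --sku P1V3

# Scale OUT: keep the SKU but run more instances of the plan.
az appservice plan update \
  --name MyAppServicePlan \
  --resource-group MyResourceGroup \
  --number-of-workers 3
```

Scaling up or out requires no code change for the function app itself, though a larger file may still need a streaming rather than in-memory processing approach.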

How do you export or retain logs for up to 5 years from Azure?

I have a requirement to retain logs from Azure for more than the 30–90 days that are set automatically (the log retention is set by MSFT).
The logs I need to retain are the approval logs from Azure AD Identity Governance.
I have thought about using a PowerShell script that writes its output to a storage account or OneDrive via runbooks, but that seems to open up a lot of complexity.
Have you come across any better solutions or ideas than my current thoughts?
You need to archive the logs to a storage account using Azure Monitor. There is a pricing calculator that shows how much it will cost per year of storage. Unless archiving to a storage account is enabled, it is not possible to retain sign-in logs for more than the default (7 days for Azure AD Free or 30 days for Azure AD Premium).
You can use audit log retention policies to set how long you want to keep the logs.
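Once the Azure AD diagnostic settings are exporting logs to a storage account (configured in the Azure AD blade), the 5-year retention itself can be enforced with a storage lifecycle management rule. This is a sketch with a hypothetical account name; the `insights-logs-` prefix matches the containers Azure Monitor typically creates for diagnostic logs.

```shell
# Lifecycle rule: delete archived log blobs 5 years (1825 days) after last modification.
cat > policy.json <<'EOF'
{
  "rules": [
    {
      "enabled": true,
      "name": "retain-logs-5y",
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": [ "appendBlob", "blockBlob" ],
          "prefixMatch": [ "insights-logs-" ]
        },
        "actions": {
          "baseBlob": { "delete": { "daysAfterModificationGreaterThan": 1825 } }
        }
      }
    }
  ]
}
EOF

# Hypothetical storage account holding the archived logs.
az storage account management-policy create \
  --account-name mylogarchive \
  --resource-group MyResourceGroup \
  --policy @policy.json
```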

Azure SQL Basic plan switched to vCore General Purpose without notice

About 10 days ago I created my first Azure SQL Database. I chose the Basic plan (€4.21/month). This database is used only for testing purposes. Today I received an email from Microsoft Azure.
Subject of the mail : Your services were disabled because you reached your spending limit
Body of the mail : Keep building in Azure by adjusting your spending limit. Your services were disabled on May 7, 2020 because you’ve reached the monthly Azure spending limit provided by your Visual Studio subscription benefit. To keep using Azure, either:
1. Wait for your monthly spending limit to reset at the start of next month, or
2. Adjust your monthly limit for a specific month or for the life of your subscription—you only pay for the extra amount you use each month.
Why did Azure change the pricing plan of my database without notifying me? Can some action cause this?
I know that I did an Export Data-tier Application from Microsoft SQL Server Management Studio while connected to my Azure database (I made a backup from there), but I doubt this explains it.
UPDATE
As suggested by NillsF, I checked the deployment history and I can confirm I chose the Basic plan when I created the database (see below). So I still have no clue what's happening to my database.
You can check the activity log on your subscription to see who initiated the switch from Basic to vCore. It seems strange that MSFT would have done this on your behalf.
You can also check the deployment history on your resource group to verify the tier you picked when you created the resource itself:
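The activity-log check can be done from the CLI as well. A sketch, assuming a hypothetical resource group name: it lists who performed SQL database operations over the last 30 days.

```shell
# Show caller, operation, and timestamp for recent SQL database changes.
az monitor activity-log list \
  --resource-group MyResourceGroup \
  --offset 30d \
  --query "[?contains(operationName.value, 'Microsoft.Sql/servers/databases')].{caller:caller, operation:operationName.value, when:eventTimestamp}" \
  --output table
```

If the caller column shows your own account, some tool acting on your behalf (such as an SSMS deployment wizard) may have re-provisioned the database at a different tier.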

Azure account suspended due to credit limit means cloud service instance deployments are automatically deleted - mitigation?

Problem
We deploy mixed SaaS, PaaS, and IaaS solutions on Microsoft Azure. Recently our account was suspended due to a Microsoft credit limit.
1) The account billing and technical contacts received no warning of the approaching credit limit. When the account was suspended, alerts were raised instantly. In response I simply lifted the credit limit and the account was accessible again.
2) All VMs could then be started again within seconds, and third-party add-ons were operational automatically.
3) Cloud Services were displayed, but all the web/worker role instances in each were stopped. On attempting to start them, it was clear the deployments had been deleted!
Questions
Does anyone know or understand why the deployment packages are removed when an Azure subscription has been disabled?
VMs, storage accounts, and add-ons persist, so why delete the cloud service instances / deployment packages?
Is there any way to mitigate this issue?
The result is 60 minutes of downtime to upload and deploy packages from source control. We are examining enterprise accounts and invoicing.
Thank you for any advice.
Scott
Currently, subscriptions that have monthly credits, such as MSDN, MPN, and BizSpark Plus, have a feature called a spending limit. This feature is enabled by default to prevent any charges on your credit card. When the spending limit is triggered, the subscription is disabled for the remainder of the billing cycle and is automatically re-enabled when the credit resets at the start of the new billing cycle.
When the subscription is disabled, Cloud Services (web and worker role) deployments are deleted, since only the deployment file is uploaded to Azure and the source file is still available to the developer. Virtual machines, by contrast, are created within the Azure platform, so VMs are stopped (deallocated) when the subscription is disabled. Cloud service deployments are dealt with differently, i.e. they are deleted; it's a legacy of how the platform was built and scaled.
The Azure portal shows the credit utilized and the remaining balance for the subscription; notifying the credit status over email is still not available. However, when the subscription is disabled, a notification is sent to the account owner.
Possible mitigation involves:
moving to standard payment terms, away from a pay-as-you-go account
removing the credit limit
possibly a continuous deployment strategy via Team Foundation Server or the like could automate redeployment (no doubt there are other automation methods too).
Unfortunately, if the Azure subscription is suspended, service deployments are deleted and must be uploaded again. If you have multiple large deployment packages, this could take many hours.
Hope that helps someone.
Additionally, if you have shared websites, they will get suspended. There is no way to resume them until the credit period is reset, so you need to delete and recreate them.
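Since the core complaint is receiving no warning before the limit hit, a cost budget on the subscription is one way to get ahead of it. A sketch using the (preview) `az consumption budget` command, with hypothetical name, amount, and dates; alert notification thresholds can then be attached to the budget in the portal under Cost Management.

```shell
# Hypothetical monthly budget slightly below the credit cap, so a breach
# of the budget warns you before the spending limit suspends the account.
az consumption budget create \
  --budget-name monthly-credit-warning \
  --amount 150 \
  --category cost \
  --time-grain monthly \
  --start-date 2024-01-01 \
  --end-date 2025-01-01
```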

Azure quota is exceeded

I'm trying to understand the correct way to host a web service using Windows Azure.
After reading some of the available documentation, I came across these lines:
Windows Azure takes the following actions if a subscription's resource usage quotas are exceeded in a quota interval (24 hours):
Data Out - when this quota is exceeded, Windows Azure stops all web sites for a subscription which are configured to run in Shared mode for the remainder of the current quota interval. Windows Azure will start the web sites at the beginning of the next quota interval.
CPU Time - when this quota is exceeded, Windows Azure stops all web sites for a subscription which are configured to run in Shared mode for the remainder of the current quota interval. Windows Azure will start the web sites at the beginning of the next quota interval.
I was always under the impression that using a cloud solution would prevent such events, since I really don't know ahead of time what needs my web service will have, and the cloud would provide the resources as needed (and of course I would be charged for them).
Is that assumption wrong?
EDIT
I found this great post that really explains Azure perfectly
Scott Hanselman - my own Q&A about Azure Websites and Pricing
If you are hosting the Windows Azure Website in the Shared mode, although you are paying, there are certain quotas that are in place because in the background you are basically sharing the resources with other websites which are hosted on the same Virtual Machine.
If you are hosting using the Standard mode, then you no longer have quotas and you will not experience this issue. As an added bonus, you can now setup Autoscale to automatically scale out your website under load.
Azure provides different scalability levels according to the hosting method you pick. For example, if you host your web service on an Azure Web Site you can't scale to thousands of servers; if you host it in a cloud service, you can scale much further.
In Azure, scalability does not always happen transparently. In the case of a web service, your choices are Azure Web Sites, Azure Mobile Services, and Azure Cloud Services. None of these provides transparent scalability: you need to define how you want scaling to be handled by Azure. Most of the time you can do this in the Azure management portal by defining auto-scaling rules based on pre-defined metrics, such as total memory used or compute power used. Azure helps you gather metrics from a distributed environment, define scaling rules, and scale without worrying about the underlying infrastructure, but you need to glue these pieces together, as this also determines how much you get billed.
Hope this makes sense.
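To make the "glue these pieces together" step concrete, here is a sketch of CPU-based autoscale rules for a Standard-mode App Service plan via the CLI. The plan and group names are hypothetical placeholders.

```shell
# Create an autoscale setting on the App Service plan: 1-3 instances.
az monitor autoscale create \
  --resource-group MyResourceGroup \
  --resource MyAppServicePlan \
  --resource-type Microsoft.Web/serverfarms \
  --name cpu-autoscale \
  --min-count 1 --max-count 3 --count 1

# Scale out by one instance when average CPU exceeds 70% over 10 minutes.
az monitor autoscale rule create \
  --resource-group MyResourceGroup \
  --autoscale-name cpu-autoscale \
  --condition "CpuPercentage > 70 avg 10m" \
  --scale out 1

# Scale back in when average CPU drops below 30% over 10 minutes.
az monitor autoscale rule create \
  --resource-group MyResourceGroup \
  --autoscale-name cpu-autoscale \
  --condition "CpuPercentage < 30 avg 10m" \
  --scale in 1
```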