Is there any way to set up an alert on low disk space for Azure App Service

I'm running an Azure App Service on a Standard App Service Plan, which allows a maximum of 50 GB of file storage. The application uses quite a lot of disk space for its image cache. Consumption currently sits at around 15 GB, but if the cache cleaning policy fails for some reason it will reach the limit very fast.
Vertical autoscaling (scaling up) is not a common practice as it often requires some service downtime according to this Microsoft article:
https://learn.microsoft.com/en-us/azure/architecture/best-practices/auto-scaling
So the question is:
Is there any way to set up an alert on low disk space for Azure App Service?
I can't find anything related to disk space in the options available under Alerts tab.

Is there any way to set up an alert on low disk space for Azure App Service?
I can't find anything related to disk space in the options available under Alerts tab.
As far as I know, the Alerts tab doesn't offer the web app's file system quota as a selectable metric, so I suggest you write your own logic to set up a low disk space alert for Azure App Service.
You could use an Azure WebJob to run a background task that checks your web app's usage.
I suggest using a WebJobs TimerTrigger (you need to install the WebJobs extensions package from NuGet) to run a scheduled job. The job can send a REST request to the Azure management API to get the web app's current usage, and you can then send an e-mail or take some other action based on that value.
For more details, you can refer to the code sample below:
Notice: To use the REST API to get the current web app usage, you first need to create an Azure Active Directory application and service principal. After you generate the service principal, you can get the application id, access key and tenant id. For more details, refer to this article.
Code:
// Runs once every 5 minutes
public static void CronJob([TimerTrigger("0 */5 * * * *", UseMonitor = true)] TimerInfo timer, TextWriter log)
{
    if (GetCurrentUsage() > 25)
    {
        // Here you could write your own code to do something when usage exceeds 25 GB
        log.WriteLine("fired");
    }
}
private static double GetCurrentUsage()
{
    double currentusage = 0;
    string tenantId = "yourtenantid";
    string clientId = "yourapplicationid";
    string clientSecret = "yourkey";
    string subscription = "yoursubscriptionid";
    string resourcegroup = "yourresourcegroupname";
    string webapp = "yourwebappname";
    string apiversion = "2015-08-01";

    // Acquire a token for the Azure management API
    // (requires the Microsoft.IdentityModel.Clients.ActiveDirectory NuGet package)
    string authContextURL = "https://login.windows.net/" + tenantId;
    var authenticationContext = new AuthenticationContext(authContextURL);
    var credential = new ClientCredential(clientId, clientSecret);
    var result = authenticationContext.AcquireTokenAsync(resource: "https://management.azure.com/", clientCredential: credential).Result;
    if (result == null)
    {
        throw new InvalidOperationException("Failed to obtain the JWT token");
    }
    string token = result.AccessToken;

    // Call the web app's usages endpoint
    HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(string.Format("https://management.azure.com/subscriptions/{0}/resourceGroups/{1}/providers/Microsoft.Web/sites/{2}/usages?api-version={3}", subscription, resourcegroup, webapp, apiversion));
    request.Method = "GET";
    request.Headers["Authorization"] = "Bearer " + token;
    request.ContentType = "application/json";

    // Get the response and read the FileSystemStorage usage (reported in bytes)
    var httpResponse = (HttpWebResponse)request.GetResponse();
    using (var streamReader = new StreamReader(httpResponse.GetResponseStream()))
    {
        string jsonResponse = streamReader.ReadToEnd();
        dynamic ob = JsonConvert.DeserializeObject(jsonResponse);
        dynamic re = ob.value.Children();
        foreach (var item in re)
        {
            if (item.name.value == "FileSystemStorage")
            {
                currentusage = (double)item.currentValue / 1024 / 1024 / 1024;
            }
        }
    }
    return currentusage;
}
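For the "do something" branch in the CronJob above, a minimal sketch of an e-mail alert could look like the following. This helper is not part of the original answer; the SMTP host, credentials and addresses are placeholders you would replace with your own.
// Hypothetical helper for the alert branch above: sends a notification mail via SMTP.
// Host, credentials and addresses below are placeholders, not real values.
private static void SendLowDiskSpaceMail(double usageInGb)
{
    using (var smtp = new System.Net.Mail.SmtpClient("smtp.example.com"))
    {
        smtp.Credentials = new System.Net.NetworkCredential("alert-user", "alert-password");
        var message = new System.Net.Mail.MailMessage(
            "alerts@example.com",
            "ops@example.com",
            "App Service low disk space",
            string.Format("File system usage is {0:F1} GB and is approaching the 50 GB plan limit.", usageInGb));
        smtp.Send(message);
    }
}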
Result:

Related

Azure Vaults Secrets Getting too many hits

My Azure Key Vault secret contains my connection string. My app only connects to Azure Key Vault during the app's Startup.cs and then saves the connection string in a static class for use elsewhere.
I noticed a few days ago that the Key Vault metrics were showing 2.45k hits during a 6 minute interval.
Here is the Startup.cs code:
public class Startup
{
    // A constructor taking IConfiguration is assumed here; the original snippet omitted the surrounding member.
    public Startup(IConfiguration config)
    {
        var SecretUri = config["SecretUri"];
        var TenantId = config["TenantId"];
        var ClientSecret = config["ClientSecret"];
        var ClientId = config["ClientId"];
        var secretclient = new SecretClient(new Uri(SecretUri), new ClientSecretCredential(TenantId, ClientId, ClientSecret));
        KeyVaultSecret keyv = secretclient.GetSecret("{myconnection_secretname}");
        SecretServices.ConnectionString = keyv.Value;
    }
}
From this point I use the SecretServices.ConnectionString anywhere else in the app where I need the connection string.
My question is: is there any way my app can hit the vault 2000 times in a few minutes, or is something else happening?
Here is the graph from Azure Vaults Metrics Total API Hits:
This graph shows the sudden jump in the number of hits to the API Service.
It's possible that your app is hitting the Key Vault 2000 times in a few minutes. To confirm this, you can add logging to your code to see how often SecretServices.ConnectionString is being accessed.
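A minimal sketch of that kind of logging, assuming your SecretServices class looks roughly like the one below (the names come from your snippet, the counter and trace call are additions):
// Hypothetical version of the static class from the question, instrumented so every
// read of the connection string is counted and traced.
public static class SecretServices
{
    private static string _connectionString;
    private static long _reads;

    public static string ConnectionString
    {
        get
        {
            long count = System.Threading.Interlocked.Increment(ref _reads);
            System.Diagnostics.Trace.WriteLine($"ConnectionString read #{count}");
            return _connectionString;
        }
        set { _connectionString = value; }
    }
}
If the trace counter stays low while the Key Vault metric keeps climbing, the extra hits are coming from somewhere other than this property.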
Thanks @Tore Nestenius for the comment.
If the Key Vault Metrics show 2.45k hits, you may want to investigate other areas of your system to see if other components are accessing the Key Vault. For example, it's possible that some background jobs or scheduled tasks are hitting the Key Vault repeatedly.
You might also want to consider reducing the frequency with which your app accesses the Key Vault by caching the connection string in memory after retrieving it on startup. That way, your app would only need to retrieve the connection string from the Key Vault once, which would reduce the number of hits to the Key Vault.
If you are making too many calls, you can use a memory cache and set the cache expiration to a certain time.
Memory Cache:
MemoryCache memoryCache = MemoryCache.Default;
string mCachekey = VaultUrl + "_" + key;
if (!memoryCache.Contains(mCachekey))
{
    try
    {
        using (var client = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(GetAccessTokenAsync), new HttpClient()))
        {
            // Cache the secret's string value (not the whole SecretBundle) with a one-hour sliding expiration
            var secret = await client.GetSecretAsync(VaultUrl, key);
            memoryCache.Add(mCachekey, secret.Value, new CacheItemPolicy() { SlidingExpiration = TimeSpan.FromHours(1) });
        }
    }
    catch (Exception ex)
    {
        throw new ApplicationException($"Failed to read secret {key} from Key Vault", ex);
    }
}
// Return outside the if-block so previously cached values are returned as well
return memoryCache[mCachekey] as string;
For more information, please see the MS Docs below.
Azure Key Vault service limits
Azure Key Vault throttling

Using Azure Managed Service Identities to scale App Service Plan, but none listed

I am attempting to set up an Azure Function that will scale our App Service Plan to be larger during business hours, and smaller outside of them. I drew from the answer to this question.
public static class ScaleUpWeekdayMornings
{
[FunctionName(nameof(ScaleUpWeekdayMornings))]
public static async Task Run([TimerTrigger("0 0 13 * * 1-5")]TimerInfo myTimer, ILogger log) // Uses GMT timezone
{
var resourceGroup = "DesignServiceResourceGroup";
var appServicePlanName = "DesignService";
var webSiteManagementClient = await Utilities.GetWebSiteManagementClient();
var appServicePlanRequest = await webSiteManagementClient.AppServicePlans.ListByResourceGroupWithHttpMessagesAsync(resourceGroup);
appServicePlanRequest.Body.ToList().ForEach(x => log.LogInformation($">>>{x.Name}"));
var appServicePlan = appServicePlanRequest.Body.Where(x => x.Name.Equals(appServicePlanName)).FirstOrDefault();
if (appServicePlan == null)
{
log.LogError("Could not find app service plan.");
return;
}
appServicePlan.Sku.Family = "P";
appServicePlan.Sku.Name = "P2V2";
appServicePlan.Sku.Size = "P2V2";
appServicePlan.Sku.Tier = "Premium";
appServicePlan.Sku.Capacity = 1;
var updateResult = await webSiteManagementClient.AppServicePlans.CreateOrUpdateWithHttpMessagesAsync(resourceGroup, appServicePlanName, appServicePlan);
}
}
public static class Utilities
{
public static async Task<WebSiteManagementClient> GetWebSiteManagementClient()
{
var azureServiceTokenProvider = new AzureServiceTokenProvider();
var accessToken = await azureServiceTokenProvider.GetAccessTokenAsync("https://management.azure.com/");
var tokenCredentials = new TokenCredentials(accessToken);
return new WebSiteManagementClient(tokenCredentials)
{
SubscriptionId = "<realSubscriptionIdHere>",
};
}
}
When I run this locally, it works and actually performs the scale-up on our Azure App Service Plan (I believe it authenticates via the Azure CLI). However, when this Azure Function is deployed to Azure, it does not find any App Service Plans. It hits the appServicePlan == null block and returns. I have turned on the Managed Service Identity for the deployed Azure Function and granted it Contributor permissions on the App Service I want to scale.
Am I missing something? Why does
await webSiteManagementClient.AppServicePlans.ListByResourceGroupWithHttpMessagesAsync(resourceGroup);
not return anything when run as part of a published Azure Function?
The problem was that the permission was set on the App Service (which was an app on the App Service Plan). It needed to be set on the Resource Group for the function to be able to read the App Service Plans.

Upload Azure Batch Job Application Package programmatically

I have found how to upload/manage Azure Batch job Application Packages through the UI:
https://learn.microsoft.com/en-us/azure/batch/batch-application-packages
And how to upload and manage Resource Packages programmatically:
https://github.com/Azure/azure-batch-samples/tree/master/CSharp/GettingStarted/02_PoolsAndResourceFiles
But I can't quite seem to put 2 and 2 together on how to manage Application Packages programmatically. Is there an API endpoint we can call to upload/manage an Application Package when setting up a batch job?
Since this is not quite straightforward, I'll write down my findings.
These are the steps to programmatically upload Application Packages from an unattended application - no user input (e.g. entering Azure credentials) is needed.
In Azure Portal:
Create the Azure Batch application
Create a new Azure AD application (as Application Type use Web app / API)
Follow these steps to create the secret key and assign the role to the Azure Batch account
Note down the following credentials/ids:
Azure AD application id
Azure AD application secret key
Azure AD tenant id
Subscription id
Batch account name
Batch account resource group name
In your code:
Install NuGet packages Microsoft.Azure.Management.Batch, WindowsAzure.Storage and Microsoft.IdentityModel.Clients.ActiveDirectory
Get the access token and create the BatchManagementClient
Call the ApplicationPackageOperationsExtensions.CreateAsync method, which should return an ApplicationPackage
ApplicationPackage contains the StorageUrl which can now be used to upload the Application Package via the storage API
After you have uploaded the ApplicationPackage you have to activate it via ApplicationPackageOperationsExtensions.ActivateAsync
Put together, the whole code looks something like this:
private const string ResourceUri = "https://management.core.windows.net/";
private const string AuthUri = "https://login.microsoftonline.com/" + "{TenantId}";
private const string ApplicationId = "{ApplicationId}";
private const string ApplicationSecretKey = "{ApplicationSecretKey}";
private const string SubscriptionId = "{SubscriptionId}";
private const string ResourceGroupName = "{ResourceGroupName}";
private const string BatchAccountName = "{BatchAccountName}";

private async Task UploadApplicationPackageAsync()
{
    // get the access token
    var authContext = new AuthenticationContext(AuthUri);
    var authResult = await authContext.AcquireTokenAsync(ResourceUri, new ClientCredential(ApplicationId, ApplicationSecretKey)).ConfigureAwait(false);

    // create the BatchManagementClient and set the subscription id
    var bmc = new BatchManagementClient(new TokenCredentials(authResult.AccessToken))
    {
        SubscriptionId = SubscriptionId
    };

    // create the application package
    var createResult = await bmc.ApplicationPackage.CreateWithHttpMessagesAsync(ResourceGroupName, BatchAccountName, "MyPackage", "1.0").ConfigureAwait(false);

    // upload the package to the blob storage
    var cloudBlockBlob = new CloudBlockBlob(new Uri(createResult.Body.StorageUrl));
    cloudBlockBlob.Properties.ContentType = "application/x-zip-compressed";
    await cloudBlockBlob.UploadFromFileAsync("myZip.zip").ConfigureAwait(false);

    // activate the application package
    var activateResult = await bmc.ApplicationPackage.ActivateWithHttpMessagesAsync(ResourceGroupName, BatchAccountName, "MyPackage", "1.0", "zip").ConfigureAwait(false);
}
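As a follow-up that is not part of the original answer: once the package is activated, a pool created with the data-plane Microsoft.Azure.Batch client can reference it by id and version. A rough sketch, where the pool id, VM size, OS family and the account URL/key variables are placeholders:
// Hypothetical usage of the uploaded package when setting up a pool (data-plane SDK).
using (var batchClient = BatchClient.Open(new BatchSharedKeyCredentials(batchAccountUrl, BatchAccountName, batchAccountKey)))
{
    var pool = batchClient.PoolOperations.CreatePool(
        poolId: "my-pool",
        virtualMachineSize: "standard_d1_v2",
        cloudServiceConfiguration: new CloudServiceConfiguration(osFamily: "5"),
        targetDedicatedComputeNodes: 1);
    pool.ApplicationPackageReferences = new List<ApplicationPackageReference>
    {
        new ApplicationPackageReference { ApplicationId = "MyPackage", Version = "1.0" }
    };
    await pool.CommitAsync();
}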
Azure Batch Application Packages management operations occur on the management plane. The MSDN docs for this namespace are here:
https://learn.microsoft.com/en-us/dotnet/api/microsoft.azure.management.batch
The nuget package for Microsoft.Azure.Management.Batch is here:
https://www.nuget.org/packages/Microsoft.Azure.Management.Batch/
And the following sample shows management plane operations in C#, although it is for non-application package operations:
https://github.com/Azure/azure-batch-samples/tree/master/CSharp/AccountManagement

Change Azure website app settings from code

Is it possible to change the app settings for a website from the app itself?
This is not meant to be an everyday operation, but a self-service reconfiguration option. A non-developer can change a specific setting, which should cause a restart, just like I can do manually on the website configuration page (app setting section)
You can also use the Azure Fluent API.
using Microsoft.Azure.Management.Fluent;
using Microsoft.Azure.Management.ResourceManager.Fluent;
using Microsoft.Azure.Management.ResourceManager.Fluent.Authentication;
using Microsoft.Azure.Management.ResourceManager.Fluent.Core;
...
public void UpdateSetting(string key, string value)
{
string tenantId = "a5fd91ad-....-....-....-............";
string clientSecret = "8a9mSPas....................................=";
string clientId = "3030efa6-....-....-....-............";
string subscriptionId = "a4a5aff6-....-....-....-............";
var azureCredentials = new AzureCredentials(new
ServicePrincipalLoginInformation
{
ClientId = clientId,
ClientSecret = clientSecret
}, tenantId, AzureEnvironment.AzureGlobalCloud);
var _azure = Azure
.Configure()
.WithLogLevel(HttpLoggingDelegatingHandler.Level.Basic)
.Authenticate(azureCredentials)
.WithSubscription(subscriptionId);
var appResourceId = "/subscriptions/xxx/resourceGroups/xxx/providers/Microsoft.Web/sites/xxx"; //Get From WebApp -> Properties -> Resource ID
var webapp = _azure.WebApps.GetById(appResourceId);
//Set App Setting Key and Value
webapp.Update()
.WithAppSetting(key, value)
.Apply();
}
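A short usage note, not in the original answer: the self-service page could then call the method with whatever setting the non-developer is allowed to change. The key and value here are only illustrative.
// Illustrative call only; "CacheCleaningEnabled" is a made-up setting name.
UpdateSetting("CacheCleaningEnabled", "true");
Changing an app setting this way restarts the site, just as saving it in the portal does.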
It wasn't that hard once I found the right lib to do it, Microsoft Azure Web Sites Management Library.
var credentials = GetCredentials(/*using certificate*/);
using (var client = new WebSiteManagementClient(credentials))
{
var currentConfig = await client.WebSites.GetConfigurationAsync(webSpaceName,
webSiteName);
var newConfig = new WebSiteUpdateConfigurationParameters
{
ConnectionStrings = null,
DefaultDocuments = null,
HandlerMappings = null,
Metadata = null,
AppSettings = currentConfig.AppSettings
};
newConfig.AppSettings[mySetting] = newValue;
await client.WebSites.UpdateConfigurationAsync(webSpaceName, webSiteName,
newConfig);
}
Have you read into the Service Management REST API? The documentation mentions that it allows you to perform most of the actions that are available via the Management Portal programmatically.
In addition to Diego's answer, to use the Azure Management Libraries within a Web App (Web Sites and/or WebJobs), you need to configure SSL, which is a little bit tricky:
Using Azure Management Libraries from Azure Web Jobs

Staging or Production Instance?

Is there anywhere in the service runtime that would tell me if I'm currently running on 'Staging' or 'Production'? Manually modifying the config to and from production seems a bit cumbersome.
You should really not change your configuration based on whether you're in Prod or Staging. The staging area is not designed to be a "QA" environment but only a holding area before production is deployed.
When you upload a new deployment, the deployment slot you upload your package to is destroyed and is down for 10-15 minutes while the upload happens and the VMs start. If you upload straight into production, that's 15 minutes of production downtime. Thus, the staging area was invented: you upload to staging, test the stuff, then click the "Swap" button and your Staging environment magically becomes Production (a virtual IP swap).
Thus, your staging should really be 100% the same as your production.
What I think you're looking for is a QA/testing environment. You should set up a separate service for the testing environment, with its own Prod/Staging slots. In that case, you will want to maintain multiple configuration file sets, one set per deployment environment (Production, Testing, etc.).
There are many ways to manage the configuration hell that occurs, especially with Azure, which has its own *.cscfg files on top of the .config files. The way I prefer to do it with an Azure project is as follows:
Set up a small Config project and create folders there that match the deployment types. Inside each folder, set up the *.config & *.cscfg files that match that particular deployment environment: Debug, Test, Release... These are set up in Visual Studio as well, as build target types. A small xcopy command that runs on every compile of the Config project copies all the files from the build target's folder of the Config project into the root folder of the Config project.
Then every other project in the solution LINKS to the .config or .cscfg files from the root folder of the Config project.
Voila, my configs magically adapt to every build configuration automatically. I also use .config transformations to manage debugging information for Release vs. non-Release build targets.
If you've read all this and still want to get at the Production vs. Staging status at runtime, then:
Get deploymentId from RoleEnvironment.DeploymentId
Then use the Management API with a proper X509 certificate to get at the Azure structure of your service and call the GetDeployments method (it's a REST API but there is an abstraction library).
Hope this helps
Edit: blog post as requested about the setup of configuration strings and switching between environments at http://blog.paraleap.com/blog/post/Managing-environments-in-a-distributed-Azure-or-other-cloud-based-NET-solution
Sometimes I wish people would just answer the question.. not explain ethics or best practices...
Microsoft has posted a code sample doing exactly this here: https://code.msdn.microsoft.com/windowsazure/CSAzureDeploymentSlot-1ce0e3b5
protected void Page_Load(object sender, EventArgs e)
{
// Your basic information about the deployment of the Azure application.
string deploymentId = RoleEnvironment.DeploymentId;
string subscriptionID = "<Your subscription ID>";
string thumbprint = "<Your certificate thumbprint>";
string hostedServiceName = "<Your hosted service name>";
string productionString = string.Format(
"https://management.core.windows.net/{0}/services/hostedservices/{1}/deploymentslots/{2}",
subscriptionID, hostedServiceName, "Production");
Uri requestUri = new Uri(productionString);
// Add client certificate.
X509Store store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
store.Open(OpenFlags.OpenExistingOnly);
X509Certificate2Collection collection = store.Certificates.Find(
X509FindType.FindByThumbprint, thumbprint, false);
store.Close();
if (collection.Count != 0)
{
X509Certificate2 certificate = collection[0];
HttpWebRequest httpRequest = (HttpWebRequest)HttpWebRequest.Create(requestUri);
httpRequest.ClientCertificates.Add(certificate);
httpRequest.Headers.Add("x-ms-version", "2011-10-01");
httpRequest.KeepAlive = false;
HttpWebResponse httpResponse = httpRequest.GetResponse() as HttpWebResponse;
// Get response stream from Management API.
Stream stream = httpResponse.GetResponseStream();
string result = string.Empty;
using (StreamReader reader = new StreamReader(stream))
{
result = reader.ReadToEnd();
}
if (result == null || result.Trim() == string.Empty)
{
return;
}
XDocument document = XDocument.Parse(result);
string serverID = string.Empty;
var list = from item
in document.Descendants(XName.Get("PrivateID",
"http://schemas.microsoft.com/windowsazure"))
select item;
serverID = list.First().Value;
Response.Write("Check Production: ");
Response.Write("DeploymentID : " + deploymentId
+ " ServerID :" + serverID);
if (deploymentId.Equals(serverID))
lbStatus.Text = "Production";
else
{
// If the application not in Production slot, try to check Staging slot.
string stagingString = string.Format(
"https://management.core.windows.net/{0}/services/hostedservices/{1}/deploymentslots/{2}",
subscriptionID, hostedServiceName, "Staging");
Uri stagingUri = new Uri(stagingString);
httpRequest = (HttpWebRequest)HttpWebRequest.Create(stagingUri);
httpRequest.ClientCertificates.Add(certificate);
httpRequest.Headers.Add("x-ms-version", "2011-10-01");
httpRequest.KeepAlive = false;
httpResponse = httpRequest.GetResponse() as HttpWebResponse;
stream = httpResponse.GetResponseStream();
result = string.Empty;
using (StreamReader reader = new StreamReader(stream))
{
result = reader.ReadToEnd();
}
if (result == null || result.Trim() == string.Empty)
{
return;
}
document = XDocument.Parse(result);
serverID = string.Empty;
list = from item
in document.Descendants(XName.Get("PrivateID",
"http://schemas.microsoft.com/windowsazure"))
select item;
serverID = list.First().Value;
Response.Write(" Check Staging:");
Response.Write(" DeploymentID : " + deploymentId
+ " ServerID :" + serverID);
if (deploymentId.Equals(serverID))
{
lbStatus.Text = "Staging";
}
else
{
lbStatus.Text = "Do not find this id";
}
}
httpResponse.Close();
stream.Close();
}
}
Staging is a temporary deployment slot used mainly for no-downtime upgrades and ability to roll back an upgrade.
It is advised not to couple your system (either in code or in config) with such Azure specifics.
Since the Windows Azure Management Libraries were released, and thanks to @GauravMantri's answer to another question, you can do it like this:
using System;
using System.Linq;
using System.Security.Cryptography.X509Certificates;
using Microsoft.Azure;
using Microsoft.WindowsAzure.Management.Compute;
using Microsoft.WindowsAzure.Management.Compute.Models;
namespace Configuration
{
public class DeploymentSlotTypeHelper
{
static string subscriptionId = "<subscription-id>";
static string managementCertContents = "<Base64 Encoded Management Certificate String from Publish Setting File>";// copy-paste it
static string cloudServiceName = "<your cloud service name>"; // lowercase
static string ns = "http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration";
public DeploymentSlot GetSlotType()
{
var managementCertificate = new X509Certificate2(Convert.FromBase64String(managementCertContents));
var credentials = new CertificateCloudCredentials(subscriptionId, managementCertificate);
var computeManagementClient = new ComputeManagementClient(credentials);
var response = computeManagementClient.HostedServices.GetDetailed(cloudServiceName);
return response.Deployments.FirstOrDefault(d => d.DeploymentSlot == DeploymentSlot.Production) == null ? DeploymentSlot.Staging : DeploymentSlot.Production;
}
}
}
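A usage sketch, not part of the original answer, showing how the helper above might be called at startup:
// Branch on the detected slot; e.g. keep background processing off while in Staging.
var slotType = new DeploymentSlotTypeHelper().GetSlotType();
bool isProduction = slotType == DeploymentSlot.Production;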
An easy way to solve this problem is to set a key on your instances that identifies which environment they are running in.
1) On your production slot:
Go to Settings >> Application settings >> App settings
and create a key named SLOT_NAME with the value "production". IMPORTANT: check "Slot setting".
2) On your staging slot:
Go to Settings >> Application settings >> App settings
and create a key named SLOT_NAME with the value "staging". IMPORTANT: check "Slot setting".
From your application, read the variable to identify which environment the application is running in. In Java you can access it like this:
String slotName = System.getenv("APPSETTING_SLOT_NAME");
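In a .NET app the same slot-sticky value can be read in a similar way; a small sketch, assuming the SLOT_NAME key created above (App Service also exposes app settings as environment variables with an APPSETTING_ prefix and injects them into the AppSettings collection for .NET Framework apps):
// Reads the slot marker created in the steps above.
string slotName = Environment.GetEnvironmentVariable("APPSETTING_SLOT_NAME")
                  ?? System.Configuration.ConfigurationManager.AppSettings["SLOT_NAME"];
bool isProduction = string.Equals(slotName, "production", StringComparison.OrdinalIgnoreCase);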
Here are 4 points to consider
VIP swap only makes sense when your service faces the outside world. AKA, when it exposes an API and reacts to requests.
If all your service does is pull messages from a queue and process them, then your service is proactive and VIP swap is not a good solution for you.
If your service is both reactive and proactive, you may want to reconsider your design. Perhaps split the service into 2 different services.
Eric's suggestion of modifying the cscfg files pre- and post-VIP swap is good if the proactive part of your service can take a short downtime (because you first configure both Staging and Production to not pull messages, then perform the VIP swap, and then update Production's configuration to start pulling messages).
