How to measure memory consumption in Azure web roles

The Azure management dashboard lets you monitor metrics such as CPU utilization, network in/out and response time, among others.
But how can you measure the consumption/availability of memory? I am running a web app that is memory-intensive, and it is hard for me to gauge which instance types (or how many instances) I should provision without an understanding of memory usage over time.
Yes, my service is a web role on Azure Cloud Services; I am not talking about VMs (IaaS) here.
Thanks

In your Azure project, in the Roles folder you'll find a folder for each of your roles. If you use the latest version of the SDK, you'll find a file called diagnostics.wadcfg. This is where you can configure Performance Counters, like \Memory\Available Bytes. This file also lets you configure the sample rate (e.g. every 30 seconds) and the scheduled transfer period (how frequently the logs are transferred to your Storage Account).
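For illustration, a minimal diagnostics.wadcfg sketch along these lines; the quota values and intervals are placeholders to adjust for your role:

```xml
<?xml version="1.0" encoding="utf-8"?>
<DiagnosticMonitorConfiguration
    xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration"
    configurationChangePollInterval="PT1M"
    overallQuotaInMB="4096">
  <!-- Sample available memory every 30 seconds; push the samples to the
       diagnostics storage account every minute. -->
  <PerformanceCounters bufferQuotaInMB="512" scheduledTransferPeriod="PT1M">
    <PerformanceCounterConfiguration counterSpecifier="\Memory\Available Bytes" sampleRate="PT30S" />
  </PerformanceCounters>
</DiagnosticMonitorConfiguration>
```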
Then you can use a tool like the Azure Diagnostics Manager to view memory consumption over time.
More information: Using performance counters in Windows Azure

A way to do this from the Management Portal:
On the Configure tab for your web role, in the monitoring section, change level to Verbose.
On the Monitor tab at the bottom, click Add Metrics.
With monitoring set to Verbose, the available metrics should include Memory Available.

Related

Monitoring CPU, IOPS and memory on Azure

I want to monitor the CPU/IOPS/memory usage for my application in an Azure VM. I need to run it at different times on different VM families to choose the best fit for my app. Should I use Azure Monitor or a CLI tool inside my VM? What are the best tools to do this?
I would go with Azure Monitor Log Analytics, as it has rich metrics for monitoring VMs; here is the list of supported metrics. For more granular data that is not collected by default by Log Analytics, I would leverage custom logs, performance counters, the Data Collector API, etc.
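As an illustration, a Kusto sketch against the standard Perf table (assuming the Log Analytics agent is collecting the memory counter) that charts available memory per VM:

```kusto
// Average available memory per VM over the last day, in 15-minute buckets.
// Assumes the "\Memory\Available MBytes" performance counter is being collected.
Perf
| where TimeGenerated > ago(1d)
| where ObjectName == "Memory" and CounterName == "Available MBytes"
| summarize AvgAvailableMB = avg(CounterValue) by bin(TimeGenerated, 15m), Computer
| order by TimeGenerated asc
```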

Is it possible to see when my Azure Resources are idling?

I want to see when my resources are idling (e.g. certain resources might only be used during business hours and not used for any other background process). I'd like to do that preferably through an API call.
It all depends on the type of resource and what you are trying to do. You could use the Azure Monitor API or the Azure Data Explorer API with Kusto to query specific metrics for your different services. Depending on the type of data, this may require you to have additional analytics enabled.
Here are some examples based on types of services.
Azure App Service - You could query for CPU, Memory, HTTP Requests, etc. This would give you an idea of activity. These same metrics tie into the auto-scaling.
Azure VMs - CPU, Memory, Disk IO, etc. You could determine your baseline, and then you would know when the VM is idle.
Azure Storage - Transactions, Ingress, Egress, Requests, etc. You could use that to determine if there is activity in your storage account.
As you can see, it all depends on how you want to define idling. If the goal is to reduce costs, that will be difficult with many of these services. You could scale your App Services up and down with some scripts, or scale in/out based on metrics. The same can be done with your Azure VMs, or by stopping and starting them. Storage cannot be scaled in that way, but you are only charged for storage and egress, so that cost is already dictated by activity.
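As a sketch of the Azure Monitor/Kusto angle: if you route resource metrics to a Log Analytics workspace via diagnostic settings, a query along these lines shows hour-by-hour activity per resource (the 5% CPU threshold is just an illustrative definition of idle):

```kusto
// Hour-by-hour average CPU per resource over the last 7 days.
// The AzureMetrics table is populated when diagnostic settings send metrics to the workspace.
AzureMetrics
| where TimeGenerated > ago(7d)
| where MetricName == "Percentage CPU"
| summarize AvgCpu = avg(Average) by bin(TimeGenerated, 1h), Resource
| extend LooksIdle = AvgCpu < 5   // illustrative threshold, tune per workload
```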
Hope this helps.
No, this is not possible out of the box. How do you define "idling"? How would Azure know whether your service is doing anything or not? Besides, most PaaS resources cannot be stopped, so what would be the use of that?
You can use Azure Advisor to get cost optimization advice, or Azure Monitor directly to gather performance data and then analyze it, but it's not going to be trivial.

How can I manage Azure RAM?

I have a B3 service plan with 7 GB of RAM and 3 app services in a resource group. One service requires over 4 GB, but it throws an out-of-memory exception after about 2.4 GB of usage. It looks like the available memory is divided evenly between the 3 services. How can I manage RAM usage?
One way I can think of would be to put the apps on separate app service plans. One of your apps requires quite a lot, so why not give it its own plan?
Then you would have one Basic 3 service plan for that app, and maybe a Basic 1 plan for the other 2 if that is enough for them?
Yes, you are correct. The RAM listed for each instance size, based on the pricing tier you choose, is for the underlying VM. So this includes the OS and various infrastructure processes, in addition to IIS and your Web Apps. It is also shared across all the App Services deployed in the same App Service plan.
This issue could also be related to your web application itself; you may try the following:
View the memory utilization in the new Azure portal (http://portal.azure.com).
Use the Kudu Diagnostic Dump to see detailed memory consumption information.
Move the Web App to another App Service plan that has only one Web App in it.
I would suggest you refer to the blog post How to view the memory utilization of your Azure Web Site for more details.
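If you also want to see, from inside the application itself, how much memory the worker process is actually using (for example from a logging statement or a health-check endpoint), here is a minimal C# sketch:

```csharp
using System;
using System.Diagnostics;

class MemoryProbe
{
    static void Main()
    {
        // Report what this worker process is using, to compare against what the plan instance provides.
        using (var process = Process.GetCurrentProcess())
        {
            Console.WriteLine($"Working set:   {process.WorkingSet64 / (1024 * 1024)} MB");
            Console.WriteLine($"Private bytes: {process.PrivateMemorySize64 / (1024 * 1024)} MB");
            Console.WriteLine($"GC heap:       {GC.GetTotalMemory(false) / (1024 * 1024)} MB");
        }
    }
}
```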

Azure quota is exceeded

I'm trying to understand the correct way to host a web service using Windows Azure.
After reading some of the available documentation, I came across these lines:
Windows Azure takes the following actions if a subscription's resource usage quotas are exceeded in a quota interval (24 hours):
Data Out - when this quota is exceeded, Windows Azure stops all web sites for a subscription which are configured to run in Shared mode for the remainder of the current quota interval. Windows Azure will start the web sites at the beginning of the next quota interval.
CPU Time - when this quota is exceeded, Windows Azure stops all web sites for a subscription which are configured to run in Shared mode for the remainder of the current quota interval. Windows Azure will start the web sites at the beginning of the next quota interval.
I was always under the impression that using a cloud solution would prevent such events, as I really don't know ahead of time what the needs of my web service will be, and that the cloud would provide the resources as needed (and of course I would be charged for them).
Is that assumption wrong?
EDIT
I found this great post that explains this perfectly:
Scott Hanselman - my own Q&A about Azure Websites and Pricing
If you are hosting the Windows Azure Web Site in Shared mode, then although you are paying, certain quotas are in place, because in the background you are sharing resources with other websites hosted on the same virtual machine.
If you are hosting in Standard mode, then you no longer have these quotas and you will not experience this issue. As an added bonus, you can now set up Autoscale to automatically scale out your website under load.
Azure provides different levels of scalability depending on the hosting method you pick. For example, if you host your web service on an Azure Web Site, you can't scale to thousands of servers. If you host your web service in a cloud service, you can scale much further.
In Azure, scalability does not always happen transparently. In the case of a web service, your choices are Azure Web Sites, Azure Mobile Services and Azure Cloud Services. None of these provides transparent scalability. You will need to define how you want scaling to be handled by Azure. Most of the time you can do this in the Azure Management Portal by defining Auto-Scaling rules based on pre-defined metrics such as "total amount of memory used" or "compute power used". Azure helps you gather metrics from a distributed environment, define scaling rules and scale without worrying about the underlying infrastructure, but you will need to glue these pieces together, as that also determines how much you will get billed.
Hope this makes sense.

Windows Azure and dynamic elasticity

Is there a way to do dynamic elasticity in Windows Azure? If my workers begin to get overloaded, or queues start to get too full, or too many workers have no work to do, is there a way to dynamically add or remove workers through code, or is that just done manually (requiring human intervention) right now? Does anyone know of any plans to add that if it's not currently available?
Microsoft shipped the Autoscaling Application Block (Wasabi) to provide dynamic scaling. Some of the supported scenarios:
Autoscaling both web and worker roles in Windows Azure by dynamically changing instance counts or performing application throttling.
Autoscaling Windows Azure roles based on timetables.
Autoscaling Windows Azure roles based on metrics collected from the application and/or Windows Azure but constrained by upper and lower bounds on the instance count per role.
Preventing fast oscillations in the number of role instances with the stabilizer. The stabilizer can also help to optimize costs by limiting scaling up operations to the beginning of the hour and scaling down operations to the end of the hour.
Monitoring and logging autoscaling activity.
Sending notifications to preview any scaling operations before they take place.
Encrypting the rules and other configuration in Windows Azure blob storage or in local file storage.
Managing the autoscaler configuration by using Windows PowerShell.
A comprehensive sample application (Tailspin Surveys) showcasing all these features is provided (installation instructions are available here). Also, check out the Developer's Guide and the Channel9 video walkthrough.
The block is available as a standalone download of binaries or source, or via NuGet.
Here are a couple of talks/demos showing Wasabi in action:
CloudCover Episode on autoscaling
p&p symposium talk "Windows Azure app scaling to need"
There's a Service Management API, and you can use that to scale your application (from code running in Windows Azure or from code running outside of Windows Azure).
http://msdn.microsoft.com/en-us/library/ee460799.aspx and http://code.msdn.microsoft.com/Release/ProjectReleases.aspx?ProjectName=windowsazuresamples&ReleaseId=3233.
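As a rough sketch (not the exact code from the linked samples) of how a scale operation through the Service Management API works: you take your .cscfg, set the <Instances count="..." /> value for the role to the desired number, and POST it as a Change Deployment Configuration request authenticated with a management certificate. All names and paths below are placeholders:

```csharp
using System;
using System.IO;
using System.Net;
using System.Security.Cryptography.X509Certificates;
using System.Text;

class ScaleViaServiceManagementApi
{
    static void Main()
    {
        // Placeholders: substitute your own subscription, cloud service, deployment and certificate.
        string subscriptionId = "<subscription-id>";
        string serviceName = "<cloud-service-name>";
        string deploymentName = "<deployment-name>";
        var managementCert = new X509Certificate2("management.pfx", "<pfx-password>");

        // A .cscfg in which <Instances count="N" /> has been set to the desired instance count.
        string newConfiguration = File.ReadAllText("ServiceConfiguration.Cloud.cscfg");

        string uri = "https://management.core.windows.net/" + subscriptionId +
                     "/services/hostedservices/" + serviceName +
                     "/deployments/" + deploymentName + "/?comp=config";

        var request = (HttpWebRequest)WebRequest.Create(uri);
        request.Method = "POST";
        request.ClientCertificates.Add(managementCert);
        request.Headers.Add("x-ms-version", "2012-03-01");
        request.ContentType = "application/xml";

        string body =
            "<ChangeConfiguration xmlns=\"http://schemas.microsoft.com/windowsazure\">" +
            "<Configuration>" +
            Convert.ToBase64String(Encoding.UTF8.GetBytes(newConfiguration)) +
            "</Configuration></ChangeConfiguration>";

        byte[] bytes = Encoding.UTF8.GetBytes(body);
        using (Stream stream = request.GetRequestStream())
        {
            stream.Write(bytes, 0, bytes.Length);
        }

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            // 202 Accepted means the configuration change (and the scale operation) was queued.
            Console.WriteLine(response.StatusCode);
        }
    }
}
```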
Windows Azure has just added the autoscaling feature built into the platform. Now it's trivially easy to configure your autoscaling rules right in the management portal:
See the announcement and the demo. I've also written a post comparing Windows Azure Autoscale to Wasabi and outlining the path forward.
Create a queue named autoscale.[your_role_name].instance_count
In the Management Portal, set the autoscale to Queue.
Set the Target Count field to 1.
Now you can use standard enqueue and dequeue operations on that queue to control the number of worker role instances. You've got 7 days to process a message before it expires, so you might want to create a worker role that can ensure that the number of messages in the queue is tracking your target instance count.
If you're after dynamic elasticity, you've probably got a worker-role-based controller in mind already, so that's probably not a problem.
The Lokad.Cloud open source project for Windows Azure contains a distributed executor framework. Among other things, it provides auto-scaling with a VM provisioning feature.
