Umbraco 7.2 on Azure Websites takes 1.5 GB memory

I have a simple Umbraco 7.2 website (no plugins/custom code/etc.) on a Shared Azure Website. Azure has suspended the website twice in the last week because of over-quota memory usage. I scaled it up to 6 instances for now. Looking at the dashboard right now, it shows the site using 1.5 GB. I went into the Kudu interface (.scm.azurewebsites.net) and its Process Explorer shows that the process is only taking ~150 MB of both private memory and working set. It also says virtual memory is ~750 MB.
Why is Azure saying it's taking up so much memory? Does increasing the instance count actually mean more memory for my app, or does it just mean more instances running the same app... so it's basically 200 MB * 6 instances = 1200 MB?
Thanks!

When you scale out the number of instances for an Azure Website, it changes the number of VMs your website is running on - so it means more instances running the same app.
Similarly, when you log into the Kudu interface it connects to the process running on one particular instance - so it won't show you the total memory being used by all instances at once, just what's used by the instance you're connected to.

Related

Enabling NUMA on IIS when migrating to Azure VMs

So I'm trying to migrate a legacy website from an AWS VM to an Azure VM, and we're trying to get the same level of performance. The problem is I'm pretty new to setting up sites on IIS.
The authors of the application are long gone and we struggle with the application for many reasons. One of the problems with the site is that when it's "warming up" it pulls back a ton of data to store in memory for the entire day. This involves executing long-running stored procs and in-memory processing, which means the first load of certain pages takes up to 7 minutes. It then uses a combination of in-memory data and output caching to deliver the pages.
Sessions do seem to be in use. The site can recover session data from the database, but only via some relatively long-running database operations, so it's better to stick with sessions where possible, which is why I'm avoiding a web garden.
That's a little bit of background; my question is really about upping the performance on IIS. When I went through their settings on the AWS box they had something called NUMA enabled, with what appears to be the default settings, and the maximum worker processes set to 0, which seems to enable NUMA. I don't know why they enabled NUMA or if it was necessary, but I am trying to get as close to a like-for-like transition as possible, and if it gives extra performance in this application we'll probably need it!
On the Azure box I can see options to set the maximum worker processes to 0, but no NUMA options. My question is whether NUMA is enabled with those default options, or whether there is something further I need to do to enable NUMA.
Both are production-sized VMs, but the one on Azure I'm working with is a Standard D16s_v3 with 16 vCores and 64 GB RAM. We are load balancing across a few of them.
If you don't see the option in the Azure VM, it's because the server is using symmetric multiprocessing and isn't NUMA-aware.
Now to optimize your loading a bit:
HUGE CAVEAT - if you have memory-leak-type issues, don't do this! To make sure you don't, put a private bytes limit on the app pool of roughly 70% of the memory on the server. If you see that limit get hit and trigger an IIS recycle (that event is logged by default), then you may want to skip the remaining steps. Either that or mess around with perfmon (or, more easily, iteratively check peak private bytes in Task Manager, where you'll have to add that column in the Details pane).
Change your app pool startup mode to AlwaysRunning.
Change your web app to preloadEnabled="true".
Set an initialization page in your web.config (so that preloading knows what to load).
Edit: I forgot some steps. Make sure your idle timeout is cleared, or set it to midnight.
Make sure you don't have the default recycle time enabled; clear that out. (A configuration sketch covering these settings follows the link below.)
If you want to get fancy you can add a loading page and set an HTTP refresh, or do the further customizations described here:
https://learn.microsoft.com/en-us/iis/get-started/whats-new-in-iis-8/iis-80-application-initialization
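To make those settings concrete, here's a rough sketch of where the knobs live. The element and attribute names are standard IIS 8+ configuration, but the pool name, site name, warm-up path, and memory limit are placeholders you'd swap for your own (the privateMemory value is in KB and works out to ~45 GB, roughly 70% of a 64 GB box):

<!-- applicationHost.config (under system.applicationHost): keep the pool alive and preload the app -->
<applicationPools>
  <add name="MyAppPool" startMode="AlwaysRunning">
    <processModel idleTimeout="00:00:00" />
    <recycling>
      <!-- clear the default periodic recycle; optional private bytes cap (KB) -->
      <periodicRestart time="00:00:00" privateMemory="47185920" />
    </recycling>
  </add>
</applicationPools>
<sites>
  <site name="MySite">
    <application path="/" applicationPool="MyAppPool" preloadEnabled="true" />
  </site>
</sites>

<!-- web.config: tell Application Initialization which page to hit when preloading -->
<system.webServer>
  <applicationInitialization doAppInitAfterRestart="true">
    <add initializationPage="/warmup" />
  </applicationInitialization>
</system.webServer>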

Stopping / Killing Azure Functions Running Instances on Consumption Plan

How do you kill Azure Function running instances (executions) on a Consumption Plan (previously known as the Dynamic Plan)?
I am running the Azure Function on runtime version 1.0.
A few executions (some not shown in the log) were running past the five-minute functionTimeout threshold.
There were, however, a few instances that did get killed as expected when they reached the five-minute threshold.
What I tried:
As suggested in this SO question, Stop/Kill a running Azure Function, I restarted the website hosting the Azure Function.
I even stopped/started the website just to be sure.
I killed the processes from the Kudu interface, but the logs still keep showing there was a rogue instance.
Process explorer showed 32 Threads but all of them were in WAITING status. Nothing was running from what I could observe.
Finally
I deleted the website and moved over to an App Service Plan based function, since that seems to be the only option for Azure Functions that need flexible timeouts.
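For context, the five-minute threshold above comes from the functionTimeout setting in host.json; a minimal sketch of that setting (the value shown is just the Consumption-plan default on the 1.x runtime):

{
  "functionTimeout": "00:05:00"
}

On the Consumption plan this can only be raised so far (to 10 minutes on the 1.x runtime, as far as I know), which is why an App Service plan ends up being the answer for functions that genuinely need longer.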
This is a monitoring bug, and although it looks confusing, it has no impact on the runtime behavior.
I have opened an issue to track this here and it will be updated as we make progress.
Thank you for your patience with this and for reporting the problem!

Why does Azure Cache use more than 30% memory?

I'm running an Azure Cloud Service that uses the "new Azure Cache".
I've configured the Cloud Service to use 30% of memory (the default), but the CacheService keeps eating memory to the point where the server starts swapping memory out to disk. The server has 3.5 GB RAM (Medium), and the CacheService was using 2 GB after running for 3 days (and it keeps growing).
We don't even use the cache, so this makes me rather nervous.
Another weird thing is that the other server in the same deployment does not have this problem.
Can anyone tell me if this is normal, whether I should be worried, or whether there's a setting somewhere that I'm missing?

How to reduce memory consumption for Orchard CMS site hosted on Windows Azure Websites

I have an Orchard CMS website currently hosted on Windows Azure Websites.
It's a pretty standard blog where images are hosted on SkyDrive and linked to, so the blog itself only serves HTML.
I've set it in Shared mode, running 1 instance.
But I keep hitting the quota, and it seems like my site is always maxing out the memory (the limit is 512 MB per hour), and I can't understand why.
I've tried increasing to 3 instances, but it doesn't increase the maximum memory I can use.
Update:
The maximum usage limits for websites under Shared mode are:
CPU time: 4 hours per day, 2.5 minutes per 5-minute interval
File system: 1024 MB
Memory usage: 512 MB per hour
Database: 1024 MB (web instance)
Update2:
I've tried re-creating my website in different regions. Currently my site is hosted in US West, which has the above limits, but other regions have slightly different limits; East Asia, for example, has a 1024 MB per hour memory usage limit! I haven't been able to dig up any documentation on this, which is puzzling.
Update3:
In Update2 I mentioned that different regions have different "memory usage per hour" limits. This is actually not true. I had set up a new site under the "Free" setting with 1024 MB per hour, but when I switched it to "Shared" the memory usage limit came down to 512 MB per hour.
I have not been able to reproduce this issue in any of my other sites despite them running the same source code, which leads me to believe it's something weird with my particular Azure website setup. Possibly something to do with the dashboard, as mentioned by @Vinblad.
I'm planning to set up a new Azure website in a different region, and while I'm at it, upgrade to Orchard 1.6.
I had a similar issue on Azure with Orchard. It was due to the error log files continually growing and taking up space. I'm manually deleting files at the moment but will have to look into a more automated solution.
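For what it's worth, the kind of automated cleanup I have in mind is roughly this (a sketch only; the log path and the one-week retention are placeholders, not values I've verified against a default Orchard install):

using System;
using System.IO;

class LogCleanup
{
    static void Main()
    {
        // Placeholder path: point this at wherever your Orchard error logs end up.
        var logDir = @"D:\home\site\wwwroot\App_Data\Logs";
        if (!Directory.Exists(logDir)) return;

        var cutoff = DateTime.UtcNow.AddDays(-7); // keep roughly a week of logs

        foreach (var file in Directory.GetFiles(logDir, "*.log"))
        {
            // Delete anything that hasn't been written to since the cutoff.
            if (File.GetLastWriteTimeUtc(file) < cutoff)
                File.Delete(file);
        }
    }
}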
512 MB per hour doesn't make any sense at all; I agree with Steve. 512 MB (not per hour) is more than enough to host Orchard, however. Try to measure memory on your local copy of the site. If you do get abnormal memory consumption, try to profile it and find the module that's responsible. If not, then contact Azure support and ask why the same application would take more memory on Azure than on your local machine.
Another thing to investigate would be caching: do you have output caching enabled?
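If you don't want to fire up a full profiler just to get a first read on local memory use, something like this is enough to compare the local worker process against what the Azure dashboard reports (a sketch; use "w3wp" under full IIS or "iisexpress" when running from Visual Studio):

using System;
using System.Diagnostics;

class MemoryCheck
{
    static void Main()
    {
        // Dump private bytes and working set for each IIS worker process.
        foreach (var p in Process.GetProcessesByName("w3wp"))
        {
            Console.WriteLine("PID {0}: private bytes {1:N0} MB, working set {2:N0} MB",
                p.Id,
                p.PrivateMemorySize64 / (1024 * 1024),
                p.WorkingSet64 / (1024 * 1024));
        }
    }
}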
I saw this post on the Azure forums where they recommend disabling the dynamic module loader. We gave it a try, but it caused problems with the images, so we had to revert.

Is a Windows Azure worker role instance a whole VM?

When I run a worker role instance on Azure, is it a complete VM running in a shared host (like EC2)? Or is it running in a shared system (like Heroku)?
For example, what happens if my application starts requesting 100 GB of memory? Will it get killed off automatically for violation of limits (à la Google App Engine), or will it just exhaust the VM, so that the Azure fabric restarts it?
Do two roles ever run in the same system?
It's a whole VM, and the resources allocated are based directly on the size of VM you choose, from 1.75GB (Small) to 14GB (XL), with 1-8 cores. There's also an Extra Small instance with 768MB RAM and shared core. Full VM size details are here.
With Windows Azure, your VM is allocated on a physical server, and it's the fabric's responsibility to find servers that can properly host all of your web or worker role instances. If you have multiple instances, this means allocating these VMs across fault domains.
With your VM, you don't have to worry about being killed off if you try to allocate too much in the resource department: it's just like having a machine, and you can't go beyond what's there.
As far as two roles running on the same system: Each role has instances, and with multiple instances, as I mentioned above, your instances are divided into fault domains. If, say, you had 4 instances and 2 fault domains, it's possible that you may have two instances on the same rack (or maybe same server).
I ran a quick test to check this. I'm using a "small" instance that has about 1.75 GB of memory. My code uses an ArrayList to store references to large arrays so that those arrays are not garbage collected. Each array is one billion bytes, and once it is allocated I run a loop that sets each element to zero, then another loop that checks each element is zero, to ensure the memory is actually allocated by the operating system (not sure if it matters in C#, but it definitely mattered in C++). Once the array is created, written to, and read from, it is added to the ArrayList.
My code successfully allocated five such arrays, and the attempt to allocate the sixth one resulted in a System.OutOfMemoryException. Since 5 billion bytes plus overhead is definitely more than the 1.75 GB of physical memory allocated to the machine, I believe this proves that the page file is enabled on the VM, and the behavior is the same as on a regular Windows Server 2008 machine, subject to the limits of the machine it's running on.
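Roughly, the test looked like the sketch below (a reconstruction of what's described above, not the exact code I ran; in C# the arrays are zero-initialized anyway, so the write loop mostly serves to touch every page):

using System;
using System.Collections;

class MemoryProbe
{
    static void Main()
    {
        var keepAlive = new ArrayList(); // hold references so the GC can't reclaim the arrays

        try
        {
            for (int n = 1; ; n++)
            {
                var block = new byte[1000000000]; // one billion bytes

                // Write every element so the OS actually commits the pages...
                for (int i = 0; i < block.Length; i++)
                    block[i] = 0;

                // ...and read them back to be sure.
                for (int i = 0; i < block.Length; i++)
                    if (block[i] != 0) throw new Exception("unexpected value");

                keepAlive.Add(block);
                Console.WriteLine("Array #{0} allocated, written and read", n);
            }
        }
        catch (OutOfMemoryException)
        {
            // On the Small (1.75 GB) instance this fired while allocating the sixth array.
            Console.WriteLine("Out of memory after {0} arrays", keepAlive.Count);
        }
    }
}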
