I am trying to figure out the reason why an Azure WebJob sporadically fails with OutOfMemoryException.
There are 3 App Services assigned to the same Standard S3 App Service plan (4 cores, 7 GB RAM). One of them consists of a set of static HTML, CSS and JS files. Another is an ASP.NET MVC application. The last one is a set of WebJobs (1 continuous and 3 triggered).
One of the triggered WebJobs is an import which runs at night, when there is no traffic from users. Sometimes the import fails with an OutOfMemoryException. According to the Azure metrics charts, RAM utilization of the service plan never exceeds 50%, with an average of about 40%. The App Service with the job uses up to 1.3 GB of RAM.
My question is: how can an OutOfMemoryException occur when there is no corresponding evidence of critical RAM usage on the charts?
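One way to narrow this down (a sketch of my own, not part of the original job): log the process-level memory counters from inside the import itself, so that what the job's own process sees can be compared with the plan-level chart after a failed run. The MemoryProbe helper and its call site below are illustrative names.

```csharp
using System;
using System.Diagnostics;

// Illustrative helper (not part of the original WebJob): snapshots the
// memory counters of the current process so they can be compared with
// the plan-level memory chart after a failed run.
static class MemoryProbe
{
    public static void Log(string stage)
    {
        using var proc = Process.GetCurrentProcess();
        Console.WriteLine(
            $"{DateTime.UtcNow:O} [{stage}] " +
            $"64-bit={Environment.Is64BitProcess}, " +
            $"WorkingSet={proc.WorkingSet64 / (1024 * 1024)} MB, " +
            $"PrivateBytes={proc.PrivateMemorySize64 / (1024 * 1024)} MB, " +
            $"GCHeap={GC.GetTotalMemory(forceFullCollection: false) / (1024 * 1024)} MB");
    }
}

// Hypothetical usage inside the import, e.g. once per processed batch:
// MemoryProbe.Log("after batch 42");
```

Among other things, such a log shows whether the job runs as a 32-bit process; a 32-bit process has only 2-4 GB of usable address space regardless of the plan's 7 GB, so it can hit an OutOfMemoryException while the plan-level chart still looks healthy.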
I'm experiencing a really strange problem.
My API is taking a long time to respond and I can't see why from Application Insights.
Attached is the end-to-end transaction view. It shows that the API call took 16.7 seconds. It's clear that a dependency call took 3.8 seconds and some DB operations took several milliseconds. But what's causing the long delay? What happened in the red rectangle marked with a question mark?
The web app is hosted with a P1V2 app service plan. The plan is shared by 3 API apps. Could this be a problem?
No, that by itself shouldn't be the problem; P1V2 is a Premium-tier plan. Combining multiple apps into a single App Service plan is a common way to reduce cost, and you can add apps to an existing plan as long as the plan has enough resources.
Here are a few observations:
The P1V2 service plan has 3.5 GB of RAM and 1 vCPU. Cross-check the utilization logs first.
The API is calling the token method Get/msi/token, and from there the request is forwarded to the PUT call on the portal; that is where close to 16 seconds are spent.
The call marked in the red box is not an issue, because the SQL executes in only 16 milliseconds.
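To make the span behind the question mark show up explicitly in the end-to-end transaction view, the suspect section (the token acquisition and the forwarded PUT) could be wrapped in a custom tracked operation. This is a minimal sketch assuming the Microsoft.ApplicationInsights SDK is already configured; the class, method and operation names are placeholders, not the asker's code:

```csharp
using System.Threading.Tasks;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;

public class TokenForwardingService
{
    private readonly TelemetryClient _telemetry;

    public TokenForwardingService(TelemetryClient telemetry) => _telemetry = telemetry;

    public async Task ForwardAsync()
    {
        // Wrap the suspect section so it shows up as its own dependency
        // in the end-to-end transaction view instead of an unexplained gap.
        using var op = _telemetry.StartOperation<DependencyTelemetry>("AcquireTokenAndForwardPut");
        op.Telemetry.Type = "InProc";
        try
        {
            await AcquireTokenAsync();   // placeholder for the Get/msi/token call
            await ForwardPutAsync();     // placeholder for the downstream PUT
        }
        catch
        {
            op.Telemetry.Success = false;
            throw;
        }
    }

    private Task AcquireTokenAsync() => Task.CompletedTask; // placeholder
    private Task ForwardPutAsync() => Task.CompletedTask;   // placeholder
}
```

With this in place, the 16-second stretch should appear as its own dependency node rather than an unexplained gap.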
I'm trying to understand memory reporting in Azure App Service. I have an "S1" App Service plan, which includes 1.75 GB of RAM.
When I look in the Kudu process explorer and add up all of the "private memory" of the various processes, my app is using ~990 MB. I don't have any other processes or deployment slots running: one single App Service, one deployment slot.
However, the dashboard says my memory percentage is 82% (very stable between 80-85%, by the way). 82% of 1.75 GB is about 1.4 GB.
So I'm trying to figure out where the other ~400 MB is going, or whether the dashboard is incorrect. Are there other processes running which aren't included in the process explorer? The process explorer details are:
w3wp.exe (<- main app service) ~765 MB
snapshotuploader64.exe ~33 MB
snapshotuploader64.exe ~33 MB
w3wp.exe (scm) ~126 MB
cmd.exe ~4 MB
DaasRunner.exe ~30 MB
In Kudu -> Process Explorer, only the memory used by the scm site and your web instance is shown.
In fact, memory is also used by the hosting environment (the OS and other background tasks), which is not reported in Process Explorer. Even if you create an empty Azure web app, the dashboard will still show memory usage of around 50%.
There is some feedback and there are issues about this; see here and here.
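For reference, here is the arithmetic behind the gap, restated from the figures listed in the question above (no new measurements, just the same numbers):

```csharp
using System;
using System.Linq;

// The per-process figures listed in the question, in MB.
var processesMb = new (string Name, double Mb)[]
{
    ("w3wp.exe (main)", 765),
    ("snapshotuploader64.exe", 33),
    ("snapshotuploader64.exe", 33),
    ("w3wp.exe (scm)", 126),
    ("cmd.exe", 4),
    ("DaasRunner.exe", 30),
};

double visibleMb   = processesMb.Sum(p => p.Mb);   // ~991 MB visible in Process Explorer
double planMb      = 1.75 * 1024;                  // S1 instance = 1.75 GB
double dashboardMb = 0.82 * planMb;                // ~1469 MB reported at 82%
double gapMb       = dashboardMb - visibleMb;      // ~478 MB not attributable to listed processes

Console.WriteLine($"Visible: {visibleMb:F0} MB, dashboard: {dashboardMb:F0} MB, gap: {gapMb:F0} MB");
```

That remaining ~0.4-0.5 GB is the overhead the answer above attributes to the hosting environment.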
I am using an Azure Web App for hosting background jobs. I have a scale-out rule to increase the instance count by 1 if CPU or memory utilization goes above 85%, and to scale in when it drops below 65%. The interesting part is that when my web app is on S2 it goes up to 10 instances, while if I scale it up to S3 the instance count stays at 1.
I am confused, as it doesn't add up. If I keep my web app on S3, CPU and memory utilization remain below 50%.
Additional details
The job is a subscriber to Azure Service Bus; it watches a Service Bus queue (see the sketch below)
Azure SDK - latest for .NET Core
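For context, a minimal sketch of the kind of queue subscriber described above, assuming the current Azure.Messaging.ServiceBus package; the connection string, queue name and concurrency value are illustrative:

```csharp
using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

// Minimal sketch of a queue subscriber; values below are placeholders.
await using var client = new ServiceBusClient("<service-bus-connection-string>");
await using var processor = client.CreateProcessor("jobs-queue", new ServiceBusProcessorOptions
{
    MaxConcurrentCalls = 4,          // how many messages one instance handles in parallel
    AutoCompleteMessages = false
});

processor.ProcessMessageAsync += async args =>
{
    // ... do the background work for the message ...
    await args.CompleteMessageAsync(args.Message);
};

processor.ProcessErrorAsync += args =>
{
    Console.Error.WriteLine(args.Exception);
    return Task.CompletedTask;
};

await processor.StartProcessingAsync();
Console.ReadLine();                  // keep the job alive while processing
await processor.StopProcessingAsync();
```

How much CPU and memory each instance uses depends heavily on MaxConcurrentCalls and on the work done per message, which is worth checking when comparing how S2 and S3 instances behave under the same autoscale rules.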
I have an App Service plan consisting of 2 S3 instances (each with 4 cores and 7 GB RAM). In the portal, on the service plan blade, I see this:
What exactly do these percentages mean? Is it 44.41% of 7 + 7 GB RAM? The plan hosts 7 web apps, and I get an alert that one of the apps exceeds the 85% memory limit. How is that possible? 85% of what? Does that mean each app gets 7/7 = 1 GB on each instance? If I open a specific web app blade, I see the following:
Is it ~1 GB on each of the 2 instances or in total? How do I understand the memory consumption of each web app per service plan instance?
Is there any good tutorial on these metrics as the official documentation is not very clear?
For the first graph, the memory percentage shown is indeed the memory used by the resources in the App Service plan, so it is 44.41% of 7 + 7 GB (about 6.2 GB across the two instances). If the resources use more than 85% of it, an alert is raised to the user by default. In that case, scale up to instances with more memory and cores, or scale out to more instances, so that the app's performance improves and you no longer receive the alert.
For the other chart, Data In is the average incoming bandwidth used across all instances of the plan, and Memory Working Set refers to the physical memory (RAM) used by the app's processes on the instances.
Please refer to this document for more on monitoring App Service.
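If the portal charts remain ambiguous, the raw numbers can be pulled per instance with the Azure Monitor metrics API. A minimal sketch using the Azure.Monitor.Query and Azure.Identity packages; the resource ID is a placeholder, and the MemoryWorkingSet metric with its Instance dimension is my assumption about the relevant App Service metric names:

```csharp
using System;
using Azure.Identity;
using Azure.Monitor.Query;
using Azure.Monitor.Query.Models;

// Sketch: pull the per-instance MemoryWorkingSet series for one web app
// so the numbers behind the portal chart can be inspected directly.
var client = new MetricsQueryClient(new DefaultAzureCredential());

string webAppResourceId =
    "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Web/sites/<app-name>";

var options = new MetricsQueryOptions
{
    TimeRange = new QueryTimeRange(TimeSpan.FromHours(6)),
    Granularity = TimeSpan.FromMinutes(5),
    Filter = "Instance eq '*'"          // split the series per plan instance
};
options.Aggregations.Add(MetricAggregationType.Average);

var result = await client.QueryResourceAsync(webAppResourceId, new[] { "MemoryWorkingSet" }, options);

foreach (var metric in result.Value.Metrics)
{
    foreach (var series in metric.TimeSeries)
    {
        // The dimension values (e.g. the instance name) arrive as metadata.
        series.Metadata.TryGetValue("Instance", out var instance);
        foreach (var point in series.Values)
        {
            Console.WriteLine($"{point.TimeStamp:u} {instance}: {(point.Average ?? 0) / (1024 * 1024):F0} MB");
        }
    }
}
```

Querying MemoryPercentage against the App Service plan resource in the same way should give the plan-level figure shown in the first chart.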
My app is deployed in Azure on the Basic tier, which has 10 GB of storage. Now it is showing a disk usage warning on the server, so I want to change the scale from Basic to Standard. Which instance size should I choose (Small - 1 core, Medium - 2 cores, Large - 4 cores)? Also, while saving, the following notifications are shown:
In Standard mode, if a web app is stopped, billing continues, and changing the scaling for an app affects other apps. Are you sure you want to continue?
This will scale the following web apps in the East US 2 region. This can take several minutes to complete. Your web apps will keep running during the process.
Please help.
To answer your question, here is a table with App Service sizes, in which you can see that the Standard tier has 50 GB and the Premium tier has 500 GB of disk space.
To answer your other questions:
The reality is that you pay for the App Service Plan, and each plan can host dozens of Apps. Think of it as a platform that runs all the time and hosts your Apps: if you stop one App, the platform is still running (because you might have other Apps running on it), and thus you are still charged for it.
Like I said, because what you pay for is the App Service Plan, scaling the Plan will automatically scale all the Apps contained in it; that's the reason for the second message.
Think of the App Service Plan as a server on which you run your Apps: as long as the Plan exists it keeps charging, whether the Apps in it are running or stopped, and the billing only stops once you delete the Plan itself.