I migrated from Heroku to Microsoft Azure, and the speed is really slow. My App Service (Linux) has the following specs:
P1V2
210 total ACU
3.5 GB memory
Dv2-Series compute equivalent
As for my Azure Database for PostgreSQL flexible server, these are its specs:
General Purpose (2-64 vCores) - Balanced configuration for most common workloads
Even with a Redis cache, my response time is around 15 seconds, and sometimes it goes up to 30 seconds or beyond:
I'm sure all these specs are higher than the default specs Heroku used to give, so why is my Django project so slow when it comes to API response times?
ADDITION:
I am using a container registry that connects to the App Service with auto-deployment.
I also fixed the N+1 query issue on the endpoints.
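For reference, the fix looked roughly like this (Order/customer/items are illustrative names, not my actual models):

    # Before: serializing each order triggered extra queries for its
    # customer and items (the N+1 pattern). After: related rows are
    # fetched up front.
    from rest_framework import viewsets

    from .models import Order
    from .serializers import OrderSerializer

    class OrderViewSet(viewsets.ModelViewSet):
        serializer_class = OrderSerializer

        def get_queryset(self):
            return (
                Order.objects
                .select_related("customer")    # FK: joined into the main query
                .prefetch_related("items")     # reverse FK/M2M: one extra query total
            )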
Always On is enabled, and I have read several posts like this one.
UPDATE:
I ran ps and top via Bash in Kudu, but I don't see any zombie processes; I also filtered with S=Z after pressing 'o', but found none. Below is the output:
top - 16:31:58 up 1 day, 1:47, 1 user, load average: 0.36, 0.62, 0.48
Tasks: 7 total, 1 running, 6 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.9 us, 4.6 sy, 2.2 ni, 89.5 id, 2.4 wa, 0.0 hi, 0.5 si, 0.0 st
MiB Mem : 13993.7 total, 2266.7 free, 1967.4 used, 9759.6 buff/cache
MiB Swap: 2048.0 total, 2032.2 free, 15.8 used. 11719.2 avail Mem
Just to highlight: an App Service app always runs in an App Service plan. When you create an App Service plan in a region, a set of compute resources is created for that plan in that region.
Whatever apps you put into this App Service plan run on those compute resources, as defined by the plan. Each App Service plan defines:
Operating System (Windows, Linux)
Region (West US, East US, etc.)
Number of VM instances
Size of VM instances (Small, Medium, Large)
Pricing tier (Free, Shared, Basic, Standard, Premium, PremiumV2, PremiumV3, Isolated, IsolatedV2)
Per the diagnostic tool, it is reporting "Too many active containers running per host" and a high load average, and it recommends moving some of your apps to another App Service plan and scaling out to reduce load.
I suggest you refer to the detailed step-by-step guide Move an app to another App Service plan.
Please note that you can move an app to another App Service plan, as long as the source plan and the target plan are in the same resource group and geographical region.
For scaling out, follow the detailed steps in Scale instance count manually or automatically; you can choose to run your application on more than one instance.
Scaling out not only provides you with more processing capability, but also gives you some amount of fault tolerance. If the process goes down on one instance, the other instances continue to serve requests. You can set the scaling to be Manual or Automatic.
Further, you may also consider scaling up, as the new PremiumV3 pricing tier gives you faster processors, SSD storage, and quadruple the memory-to-core ratio of the existing pricing tiers (double that of the PremiumV2 tier). With the performance advantage, you could save money by running your apps on fewer instances.
Check this article to learn how to create an app in the PremiumV3 tier or scale up an app to the PremiumV3 tier.
More details:
Azure App Service plan overview
Update:
I also suggest you go to App Service Diagnostics and check as below:
If Linux zombie processes are detected, this may affect performance and make the application slow. A zombie (or defunct) process is one that has completed execution but still exists in the system process table, i.e. the parent process has not yet read the child process's exit status.
Zombie processes can be detected by looking at top or ps output.
Recommended action if a Linux zombie process is detected:
SSH into your app container by going to https://sitename.scm.azurewebsites.net.
Use ps to check for any <defunct> processes. Sample below.
ps -aux | grep -w defunct
root 3300 0.0 0.0 0 0 pts/24 ZN+ 18:51 0:00 [newzombie]
Use top to show any processes in a 'Z' state. Sample below (press 'o' and filter using 'S=Z')
top - 19:02:22 up 28 days, 13:35, 26 users, load average: 0.39, 0.65, 0.86
Tasks: 66 total, 1 running, 64 sleeping, 0 stopped, 1 zombie
%Cpu(s): 2.7 us, 2.0 sy, 1.0 ni, 93.9 id, 0.0 wa, 0.0 hi, 0.3 si, 0.0 st
KiB Mem : 1975528 total, 123776 free, 1049580 used, 802172 buff/cache
KiB Swap: 1910780 total, 769432 free, 1141348 used. 658264 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3317 root 30 10 0 0 0 Z 0.0 0.0 0:00.00 newzombie
Once the process is identified, try restarting the process or consider restarting your site.
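If you prefer scripting the check over eyeballing top, here is a minimal sketch using psutil (assuming the package is installed in your container):

    # List any processes sitting in the Z (zombie/defunct) state.
    import psutil

    for proc in psutil.process_iter(attrs=["pid", "ppid", "name", "status"]):
        if proc.info["status"] == psutil.STATUS_ZOMBIE:
            print("zombie pid=%(pid)s ppid=%(ppid)s name=%(name)s" % proc.info)

The parent (ppid) is the process worth restarting, since it is the one failing to reap its children.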
Also look for HTTP server errors: an HTTP 500.0 error typically indicates an application code issue, such as an unhandled exception or another error in the application.
There are a number of issues which can impact performance, like:
Network requests taking a long time
Application code or database queries being inefficient
Application using high memory/CPU
Application crashing due to an exception
To isolate the issue, you may try the troubleshooting steps below:
Observe and monitor application behavior
Collect data
Mitigate the issue
I would suggest you navigate to your web app in the Azure portal, select the 'Diagnose and solve problems' blade, and click 'Linux web app Slow' under popular troubleshooting tools; the information provided there should help with fixing this.
Further, to speed up DRF, try removing unneeded apps from INSTALLED_APPS and MIDDLEWARE; this may help boost your Django REST Framework performance.
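For example, in settings.py (which entries are safe to remove depends entirely on your project, so treat this purely as a sketch):

    # Every entry in MIDDLEWARE runs on every request, so each removal
    # shaves a little latency; INSTALLED_APPS mainly affects startup
    # and per-request app overhead.
    INSTALLED_APPS = [
        "django.contrib.contenttypes",
        "django.contrib.auth",
        "rest_framework",
        "myapp",  # hypothetical project app
        # "django.contrib.admin",  # drop if this host serves no admin pages
    ]

    MIDDLEWARE = [
        "django.middleware.security.SecurityMiddleware",
        "django.middleware.common.CommonMiddleware",
        # "django.contrib.sessions.middleware.SessionMiddleware",  # drop if sessions are unused
    ]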
There could be several causes of high response time. To isolate the issue, kindly try these steps:
If it’s not done already, turn on Always On feature. By default, web apps are unloaded if they are idle for some period of time. This lets the system conserve resources. In Basic or Standard mode, you can enable Always On to keep the app loaded all the time.
On the App Service app, in the left navigation, click Diagnose and solve problems, and check out the tiles for "Diagnostic Tools" > "Availability and Performance" and "Best Practices".
As a test, set the CPU utilization threshold to 75% for the scale-out condition and 25% for the scale-in condition and see if that makes any difference (this avoids a flapping condition; I understand you have already analyzed CPU usage).
Isolate/avoid the outbound TCP limits: these are easier to address, as the limits are set by the size of your worker. You can see the limits in Sandbox Cross VM Numerical Limits - TCP Connections. To avoid outbound TCP limits, you can either increase the size of your workers or scale out horizontally (see the connection-reuse sketch at the end of this answer).
See Troubleshooting intermittent outbound connection errors in Azure App Service (to isolate port exhaustion).
If there are multiple apps under a single App Service plan, distribute the apps across multiple App Service plans to obtain additional compute (to isolate the issue further; I shared more details on this in the 'comment' section below).
Review the logs to fetch more details on this issue.
Note: Linux is currently the recommended option for running Python apps in App Service and I believe you’re leveraging the App Service Linux flavor.
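On the outbound TCP point above: in a Python/Django app, a common mitigation for port exhaustion is reusing outbound connections rather than opening a new one per request, e.g. a module-level requests.Session (a sketch; the downstream URL is illustrative):

    # http_client.py -- one shared Session means pooled, reused TCP
    # connections instead of a fresh connection (and outbound port) per call.
    import requests

    session = requests.Session()

    def fetch_profile(user_id):
        # hypothetical downstream endpoint, for illustration only
        resp = session.get(
            "https://internal-api.example.com/users/%d" % user_id, timeout=5
        )
        resp.raise_for_status()
        return resp.json()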
After an engagement with the Microsoft team, the issue was that my Azure flexible server and App Service were in different regions: one was in South Africa North and the other was in East US. After ensuring everything was in the same region, the issue was resolved.
Secondly, I had a field that held both text and Base64 images. I was using django-summernote, which provides a WYSIWYG experience, so by default it stores all the images and text together in the same field. I optimized it, and now the speed is very fast.
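Roughly, the optimization looked like this (model and field names are illustrative, not my actual code): images are stored as files, and the text field keeps only text.

    # Pull data-URI images out of the rich-text field and store them as
    # files, so API responses no longer ship megabytes of Base64.
    import base64

    from django.core.files.base import ContentFile
    from django.db import models

    class Article(models.Model):       # illustrative model
        body = models.TextField()      # text only, no embedded images
        image = models.ImageField(upload_to="articles/", blank=True)

        def attach_base64_image(self, data_uri, filename):
            # data_uri looks like "data:image/png;base64,iVBORw0..."
            _, b64_data = data_uri.split(",", 1)
            self.image.save(
                filename, ContentFile(base64.b64decode(b64_data)), save=True
            )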
I'm planning to make a NodeJS app with Express and an SQL database and upload it all to Heroku. I am going to get the Postgres Hobby Basic plan.
On the Heroku website it says my database is limited to 10,000,000 rows, but I don't know if there are any memory limits, for example whether I can't store more than 0.5 GB of data in my database. I would be grateful if someone could tell me whether my database is limited only by the 10,000,000-row limit, or whether there is a memory limit as well.
Storage (disk) and memory (RAM) are different things. Dynos have a memory limit, e.g.
free, hobby and standard-1x have 512 MB
Heroku Postgres plans have different types of limits by tier. Hobby tier limits are based on row count. Standard and above tiers have no row limits, but they do have storage limits. For example, Standard-0 plans have a storage limit of 64 GB.
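If you want to see how close an existing database is to the row limit, you can query PostgreSQL's own statistics; a sketch in Python with psycopg2 (any Postgres client works, and note this is an estimate, as is Heroku's own count):

    import os
    import psycopg2

    # n_live_tup is PostgreSQL's per-table estimate of live rows.
    conn = psycopg2.connect(os.environ["DATABASE_URL"])
    with conn.cursor() as cur:
        cur.execute("SELECT COALESCE(SUM(n_live_tup), 0) FROM pg_stat_user_tables;")
        print("estimated rows:", cur.fetchone()[0])
    conn.close()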
I ran into a situation where out-of-memory exceptions were generated in our Azure App Service for a .NET Core Web API, even though memory utilization never topped 50% in the App Service plan (P2V2: 7 GB RAM).
I have looked at this SO article to check private bytes and other things, but I still don't see where the memory exhaustion comes from. I see a max usage of 1.5 GB on the memory working set, which is well below the 7 GB.
Nothing shows up under Support + Troubleshooting -> Resource Health or App Service Advisor.
I am not sure where to look next and any help would be appreciated.
Azure App Services caps memory usage at 1.5 GB by default, but you can change this behaviour with this application setting (to be added under Configuration):
WEBSITE_MEMORY_LIMIT_MB = 3072
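App settings surface as environment variables inside the container, so you can confirm from your own code that the new limit took effect; a trivial sketch:

    import os

    # App Service exposes app settings to the process environment.
    print("memory limit (MB):", os.environ.get("WEBSITE_MEMORY_LIMIT_MB", "not set"))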
See also my answer here:
Is there way to determine why Azure App Service restarted?
The Metrics view on the portal can only go down to a 1-minute granularity level.
(The default is 5 minutes)
This means that each metric point is an average value over a 60-second interval.
It may be spiking up and down over 60 seconds, so you need a more real-time view.
Try the SCM console (Advanced Tools > Go), and check the Process Explorer to see the actual memory consumption.
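If you want something closer to real time than the portal's 1-minute averages, here is a small sketch with psutil, run from the SCM console (assuming psutil is installed):

    # Print the top resident-memory processes every few seconds.
    import time
    import psutil

    while True:
        procs = [p for p in psutil.process_iter(attrs=["pid", "name", "memory_info"])
                 if p.info["memory_info"] is not None]
        procs.sort(key=lambda p: p.info["memory_info"].rss, reverse=True)
        for proc in procs[:5]:
            rss_mb = proc.info["memory_info"].rss / 1024.0 / 1024.0
            print("%6d %-20s %8.1f MB" % (proc.info["pid"], proc.info["name"], rss_mb))
        print("-" * 40)
        time.sleep(5)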
I have installed OIM [11gR2 PS2] and OAM [R2 PS2] on my PC, but the system hangs with 12 GB of RAM.
I have an i3 5th-generation processor along with 12 GB of RAM. I use Windows 10 as my base OS; however, for installing the Oracle products I use VMs where I have installed Windows 7 [Ultimate edition].
Per the Oracle prerequisite chart, 8 GB of RAM is enough to run a single instance of OIM/OAM, and I have allocated almost 10.5 GB of RAM to the VMs running OIM/OAM. But each time, after the Admin Server starts, whenever I try to start any of the Managed Servers, CPU consumption reaches 100% and everything hangs, and I have to shut down my VM.
Though the question is a basic one, I have not found an exact answer anywhere. Looking for help/suggestions.
The memory requirement of 8 GB is the bare minimum; 16 GB is recommended. See these 11gR2 memory requirements and 11gR2 requirements. Also refer to section 3.1, Minimum Memory Requirements for Oracle Identity and Access Management, and section 3.3, Examples: Determining Memory Requirements for an Oracle Identity and Access Management Production Environment. (Even though it says Production, it is valid for your instance, since you have one VM hosting all the components, including the WebLogic server, OIM server, SOA server, and OAM server.)
Here is the RAM estimate from the Oracle 11gR2 reference above:
To estimate the suggested memory requirements, you could use the following formula:
4 GB for the operating system and other software
+ 4 GB for the Administration Server
+ 8 GB for the two Managed Servers (OIM Server and SOA Server)
-----------------------------------------------------------
16 GB
With 4 GB for the OS and 4 GB for the Admin Server, 8 GB of RAM is consumed already. Starting one Managed Server brings the total to 12 GB, which the VM does not have. Hence, as soon as you start a Managed Server, all the RAM is consumed, which makes your VM hang.
As you can see, Oracle recommends 16 GB, and that is without the OAM server (which you have also installed on the same VM). So you are definitely constrained with your current 10.5 GB. Since your PC maxes out at 12 GB, I suggest you install only OIM on one VM on the current PC, and OAM on a different VM on a separate PC if possible. Yes, Oracle IAM software is definitely a memory hog.
BTW, I have two suggestions: first, if you want to install an 11gR2 version, go for PS3 (11.1.2.3), or better, go with 12c, which is the latest; 11.1.2.2 is considered old now. Here is the link for the PS3 download. Second, consider Oracle's free downloadable pre-built VMs here, although the pre-built VMs are Linux-based.
I'm running an Azure Cloud Service that uses the "new Azure Cache".
I've configured the Cloud Service to use 30% of memory (the default), but the CacheService keeps eating memory to the point where the server starts to swap memory out to disk. The server has 3.5 GB RAM (Medium), and the CacheService used 2 GB after running for 3 days (and keeps growing). See the attached picture.
We don't even use the cache, so this makes me rather nervous.
Another weird thing is that the other server in the same deployment does not have this problem.
Can anyone tell me whether this is normal, whether I should be worried, or whether there is a setting somewhere that I'm missing?