I am testing a node-webrtc project on a 16-core CPU with 32 GB RAM.
I started the process with pm2, and after some time the Node process stops responding.
The URL becomes unreachable and video streaming stops.
What I noticed:
1) At first it always stopped at around 3.5 GB of memory and 900% CPU. I tried increasing the old-space size to 24 GB, after which it failed at random points, e.g. after reaching 9 GB of memory and 1100% CPU.
2) In the pm2 logs I found
"(node:3397) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 newBroadcast listeners added. Use emitter.setMaxListeners() to increase limit" but the process keeps running after this warning.
A) I am not sure whether this is a memory leak issue.
B) CPU consumption is 900% out of 1600%. As far as I know, Node runs JavaScript on a single thread, so is it possible that the threads backing the main Node process (e.g. the libuv thread pool or native addon threads) have hit their limit?
Any suggestions on how I can debug this?
Concurrent users at the time were around 110-120.
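To check A) directly and to find where the newBroadcast listeners are being added, one option is a minimal sketch using only built-in Node APIs (v8.writeHeapSnapshot needs Node 11.13+; the signal choice is just a convention):

```js
// Capture warnings such as MaxListenersExceededWarning with a stack trace
// (equivalent to starting node with --trace-warnings).
process.on('warning', (w) => console.warn(w.name, w.message, '\n', w.stack));

// Dump a heap snapshot on demand with `kill -USR2 <pid>`, then open the
// .heapsnapshot file in Chrome DevTools > Memory and diff two snapshots
// taken a few minutes apart to see which objects are accumulating.
const v8 = require('v8');
process.on('SIGUSR2', () => {
  console.log('Heap snapshot written to', v8.writeHeapSnapshot());
});
```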
The issue was the server's outbound bandwidth.
The server has a maximum uplink speed of 128 MB/s (~1 Gbps); the streams were consuming all of the allowed bandwidth, and beyond that point connections to the server became unreachable.
It was fixed by switching our server to a 500 MB/s uplink.
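As a rough sanity check (the per-viewer bitrate below is an assumption for illustration, not a measured value), the reported user count lines up with a saturated 1 Gbps link:

```js
// Back-of-the-envelope: aggregate outbound bitrate vs. uplink capacity.
const viewers = 120;        // concurrent users reported above
const mbpsPerViewer = 8;    // assumed video bitrate per viewer (~1 MB/s)
const uplinkMbps = 1000;    // ~1 Gbps, i.e. the 128 MB/s uplink above

console.log(`needed: ${viewers * mbpsPerViewer} Mbps, available: ${uplinkMbps} Mbps`);
// 120 * 8 = 960 Mbps, which already saturates the 1 Gbps uplink
```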
Related
I have a backend server with 1 GB of RAM running both my HTTP server and MariaDB.
I noticed the database keeps getting killed by the OOM killer once or twice a day. Most of the time the OOM is triggered by the HTTP server, but not always.
I have tried lowering innodb_buffer_pool_size several times (it is at 64M at the moment), but the process still takes 40% to 60% of the server's memory.
How do I find the cause of this memory usage? It looks like some kind of memory leak, because usage keeps increasing throughout the day.
The database usually starts out consuming about 7% to 9% of memory.
MariaDB version 10.5
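A starting point (these are standard MariaDB status/variable queries, not a diagnosis; with only 1 GB of RAM, per-connection buffers multiplied by max_connections are a common culprit alongside the buffer pool) is to compare what the server thinks it has allocated with what the OS reports:

```sql
-- MariaDB-specific: total memory the server believes it has allocated
SHOW GLOBAL STATUS LIKE 'Memory_used';

-- Confirm the buffer pool setting actually in effect
SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';

-- Per-connection buffers scale with max_connections, so a high connection
-- limit can dwarf a 64M buffer pool
SHOW GLOBAL VARIABLES WHERE Variable_name IN
  ('max_connections', 'sort_buffer_size', 'join_buffer_size',
   'read_buffer_size', 'read_rnd_buffer_size', 'tmp_table_size');
```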
I am using swagger-express-mw for my REST API application with Express. However, I am observing a continuous memory increase in my production environment.
My servers are Linux, 4 cores and 16 GB RAM each, behind an Application Load Balancer (ALB). Currently there are 2 servers under the ALB, and memory usage has increased from 2% to 6%. I am not sure whether GC has run on it yet or not.
Below is a sample snapshot of memory. The app process is using approximately 100 MB, but the buffers keep increasing. Is this a memory leak?
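If the growth is in the Linux buffers/cache figures rather than in the Node process itself, it is normally the kernel page cache, which is reclaimed on demand and is not an application leak. A minimal sketch (built-in modules only) for watching the process's own footprint next to system memory over time:

```js
// If rss/heapUsed stay flat while system free memory drops, the difference is
// typically page cache ("buffers/cache"), not memory held by the application.
const os = require('os');
const mb = (n) => Math.round(n / 1024 / 1024);

setInterval(() => {
  const { rss, heapUsed, external } = process.memoryUsage();
  console.log(
    `process rss=${mb(rss)}MB heapUsed=${mb(heapUsed)}MB external=${mb(external)}MB | ` +
    `system free=${mb(os.freemem())}MB of ${mb(os.totalmem())}MB`
  );
}, 60 * 1000);
```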
I am trying to better understand scaling a Node.js server on Heroku. I have an app that handles large amounts of data and have been running into some memory issues.
If a Node.js server is upgraded to a 2x dyno, does this mean my application will automatically be able to use up to 1.024 GB of RAM on a single thread? My understanding is that a single Node process has a default heap limit of ~1.5 GB, which is above the limit of a 2x dyno.
Now let's say I upgrade to a performance-M dyno (2.5 GB of memory). Would I need to use clustering to take full advantage of the 2.5 GB of memory?
Also, if a single request to my Node.js app asks for a large amount of data, and processing it exceeds the memory allocated to that cluster worker, will the process use some of the memory allocated to another worker, or will it just throw an error?
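For context, a common pattern (a sketch, not Heroku-specific guidance; the entry point name is hypothetical) is to fork one worker per CPU with the cluster module. Workers are separate OS processes with their own V8 heaps, so a worker that exhausts its heap crashes with an out-of-memory error instead of borrowing memory from a sibling:

```js
// cluster.js - one worker per CPU; each worker is a separate process with its
// own V8 heap, so memory is never shared or "borrowed" between workers.
const cluster = require('cluster');
const os = require('os');

if (cluster.isMaster) {
  for (let i = 0; i < os.cpus().length; i++) cluster.fork();
  cluster.on('exit', (worker, code) => {
    console.log(`worker ${worker.process.pid} exited (${code}), restarting`);
    cluster.fork(); // e.g. after a worker dies on an out-of-memory error
  });
} else {
  require('./server'); // hypothetical app entry point
}
```

If you stay single-process, the default heap cap can be raised with `node --max-old-space-size=2048 server.js` (value in MB), but a workload that exceeds whatever heap is available still ends in an out-of-memory crash rather than spilling over into another process.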
An IIS-hosted WCF service is consuming a large amount of memory, around 18 GB, and the server has slowed down.
I analyzed a minidump file and it shows only about 1 GB of live objects. I understand the GC is not reclaiming the memory, and that the GC must be running in server mode on a 64-bit system. Any idea why the whole machine is stalling and the app is taking so much memory?
The GC was running in server mode; it had been configured that way for better performance. My understanding is that server-mode GC improves throughput because collections are triggered less often while plenty of memory is available, and the process is allowed to grow much larger before a collection runs. The problem here was that when that high limit was reached, the CLR triggered a collection that tried to reclaim the huge 18 GB in one shot, using about 90% of system resources, so the other applications on the machine lagged.
We tried restarting, but it was taking forever, so we had to kill the process. Now, with workstation-mode GC, collection is smooth and clean. The only difference is that response time has some delay due to GC once allocation passes about 1.5 GB.
One more note: .NET 4.5 includes GC changes in this area (background server GC) that address this issue.
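For reference, the switch is a configuration setting rather than code. A minimal sketch for .NET Framework 4.x (note: for IIS-hosted applications these `<runtime>` settings are typically read from the framework's Aspnet.config rather than the site's web.config, so confirm where your host picks them up):

```xml
<configuration>
  <runtime>
    <!-- false = workstation GC (the mode used for the fix above); true = server GC -->
    <gcServer enabled="false" />
    <!-- concurrent/background GC, enabled by default -->
    <gcConcurrent enabled="true" />
  </runtime>
</configuration>
```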
I am working on an application that receives high traffic. Each request takes around 100-500 ms.
There is no memory leak.
When I enable GC logging, I can see when GC happens with an 8 GB heap.
The log clearly shows how much memory is reclaimed, but the GC triggers stop-the-world pauses.
When I compare an 8 GB heap with a 4 GB heap, I find that with 4 GB the GC runs more frequently but each collection takes less time than with 8 GB, which is expected.
But application response time is variable; for some requests it is much higher.
I just want to know the best deployment for an application that receives high traffic.
Is running two Tomcats with 4 GB each better, or one Tomcat with 8 GB?
Is there any other way to make sure Tomcat always responds within a time limit?
I have searched a lot but couldn't find a way to control the stop-the-world pauses so that response time is not affected for some of the requests.
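One commonly used lever (a sketch, assuming a HotSpot JVM on Java 8+ and Tomcat's standard setenv.sh mechanism; the heap size, pause target, and log path are illustrative, not tuned values) is to give the collector an explicit pause-time goal with G1 rather than only changing the heap size:

```sh
# $CATALINA_BASE/bin/setenv.sh (create the file if it does not exist)
# Fixed heap plus G1 with a pause-time goal: G1 aims for many short
# stop-the-world pauses instead of occasional long full collections.
CATALINA_OPTS="$CATALINA_OPTS -Xms4g -Xmx4g"
CATALINA_OPTS="$CATALINA_OPTS -XX:+UseG1GC -XX:MaxGCPauseMillis=200"
# GC logging: Java 8 flags shown; on Java 9+ use -Xlog:gc* instead
CATALINA_OPTS="$CATALINA_OPTS -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:$CATALINA_BASE/logs/gc.log"
export CATALINA_OPTS
```

On two 4 GB instances versus one 8 GB instance: with a pause-goal collector the pause length is driven more by the pause target and the live data set than by the raw heap size, so it is worth measuring both layouts with the same GC settings before splitting the deployment.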