php-fpm memory consumption - memory-leaks

I have php5-fpm (with APC) and nginx installed on an Ubuntu-powered VPS (2000 MHz / 512 MB).
My hosting provider reported abnormal memory consumption on the server.
top shows that there are some php-fpm processes using up to 1 GB of memory.
I tried to adjust pm.max_requests and pm.max_children,
but the issue remains.
Any advice would be greatly appreciated.
Here are my configs:
php.ini
php-conf
nginx.conf:

You have too many servers (child processes) configured in php-fpm.
You can size your php-fpm pool using the calculation below.
First, decide how many processes the server can afford to spin up:
Total Max Processes = (Total RAM - (Used RAM + Buffer)) / (Memory per php-fpm process)
This server has about 512 MB of RAM. Say PHP uses about 30 MB of RAM per request.
We can start with 512 / 30 ≈ 17, so go with 17 max servers.
If each request consumes 100 MB, then 512 / 100 = 5.12, so configure 4-5 max servers for the php-fpm pool.
Since nginx and the OS also take some RAM, I suggest setting only about 10 servers.
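The sizing formula above can be sketched as a small script. The 80 MB reserve for nginx and the OS is an assumed illustrative figure, not a measurement from this server:

```python
# Rough php-fpm pool sizing based on the formula above.
# All figures are illustrative assumptions, not measurements.

def max_children(total_ram_mb, reserved_mb, per_process_mb):
    """(Total RAM - reserve for OS/nginx) / memory per php-fpm process."""
    return (total_ram_mb - reserved_mb) // per_process_mb

# 512 MB VPS, ~80 MB reserved for nginx + OS, ~30 MB per PHP request:
print(max_children(512, 80, 30))   # 14
# If each request instead consumes ~100 MB:
print(max_children(512, 80, 100))  # 4
```

The resulting number maps to pm.max_children in the pool config (e.g. /etc/php5/fpm/pool.d/www.conf); pm.max_requests then only controls how often a child is recycled, which is what contains leaks over time.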

Related

Node process often loses CPU time on Linux VM which increases latencies for client requests

Problem - Increase in latencies (p90 > 30 s) for a simple WebSocket server hosted on a VM.
Repro
Run a simple websocket server on a single VM. The server simply receives a request and upgrades it to a websocket without any further logic. The client continuously sends 50 parallel requests for a period of 5 minutes (so approximately 3000 requests).
Issue
Most requests have latencies in the range 100 ms-2 s. However, for 300-500 requests we observe high latencies (10-40 s, with p90 greater than 30 s), while some hit TCP timeouts (the Linux default of 127 s).
When analyzing the VM processes, we observe that when requests are taking a long time, the node process loses its CPU share in favor of some processes started by the VM.
Further Debugging
Increasing process priority (renice) and I/O priority (ionice) did not solve the problem.
Increasing resources to 8 cores and 32 GiB of memory did not solve the problem.
Edit - 1
Repro Code ( clustering enabled ) - https://gist.github.com/Sid200026/3b506a9f77cfce3fa4efdd1ec9dd29bc
When monitoring active processes via htop, we find that the processes started by the following commands are causing the issue:
python3 -u bin/WALinuxAgent-2.9.0.4-py2.7.egg -run-exthandlers
/usr/lib/linux-tools/5.15.0-1031-azure/hv_kvp_daemon -n
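One option worth testing (a sketch, not a verified fix) is capping how much CPU the Azure guest agent can claim via a systemd drop-in. The unit name walinuxagent.service and the 20% figure are assumptions; confirm the actual unit name with systemctl status on the VM:

```ini
# /etc/systemd/system/walinuxagent.service.d/cpu.conf
# Hypothetical drop-in: cap the agent at ~20% of one CPU so its periodic
# extension-handler runs cannot starve the node process.
[Service]
CPUQuota=20%
```

Apply with systemctl daemon-reload && systemctl restart walinuxagent, then re-run the latency test to see whether the p90 spikes disappear.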

Socket.io hangs after 2k concurrent connections

I purchased a VPS with 2 vCPU cores and 4 GB of RAM and deployed a Node.js Socket.io server. It works fine without any issue up to 2k concurrent connections, but that limit seems very small to me. When the connection count reaches 3k, the Socket.io server hangs and stops working.
Normal memory usage is 300 MB, but after 3k connections memory usage climbs to 2.5 GB; the server stops emitting packets for several seconds, then works for a few seconds, and then hangs again.
My server is not that small for this number of connections.
Are there any optimization suggestions for increasing concurrent connections without the server hanging after a few thousand clients connect simultaneously? With few clients it works fine.
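One thing to rule out first (a sketch under the assumption that distro defaults are still in place) is the per-process file-descriptor limit, since each WebSocket holds one descriptor and many distros default to 1024 or 4096. The user name "node" is a placeholder for whatever account runs the server:

```ini
# /etc/security/limits.conf - raise the open-file limit for the node user
node  soft  nofile  65536
node  hard  nofile  65536

# /etc/sysctl.conf - allow more system-wide open files and pending accepts
fs.file-max = 100000
net.core.somaxconn = 4096
```

Check the running process's actual limit with cat /proc/&lt;pid&gt;/limits. If descriptors are not the bottleneck, the growth from 300 MB to 2.5 GB points at per-socket send buffers backing up, which is an application-level problem rather than a kernel limit.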

Spike in Tomcat Response times

I have been facing intermittent spikes in the response times of a REST API hosted on Tomcat. The general response time is within 2 ms, but there are occasional spikes where one request takes more than 1.5 seconds; these requests cause timeouts at the client end, as the client is configured with a very low connection timeout. The spike occurs every 1 to 1.5 hours. There is no spike in CPU or memory. The application fetches data from Redis, and there are no spikes on the Redis machines either. The number of requests processed per second is 500, and the thread pool is always underutilized. Following is the Tomcat configuration.
<Connector port="8080"
connectionTimeout="60000"
maxThreads="500"
minSpareThreads="50"
acceptCount="2000"
protocol="org.apache.coyote.http11.Http11NioProtocol"
useSendfile="false"
compression="force"
enableLookups="false"
redirectPort="8443" />
The machine has 8 GB of RAM and the JVM is configured with Xms and Xmx of 4 GB. I am not using any explicit GC arguments. (Tomcat 9.0.26, Java 11, 4 cores, 8 GB RAM)
I suspect GC might be causing the issue, but as I don't see a spike in either CPU or memory, I have no clue why this is happening. Can anyone help by throwing out some ideas for resolving this issue?
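Before tuning anything, it helps to confirm or rule out GC pauses. A minimal sketch using Java 11's unified logging (the -Xlog flag is a standard JDK 9+ option; the setenv.sh hook and log path are assumptions about this deployment):

```shell
# Append to Tomcat's bin/setenv.sh: enable Java 11 unified GC + safepoint logging
CATALINA_OPTS="$CATALINA_OPTS -Xlog:gc*,safepoint:file=/var/log/tomcat/gc.log:time,uptime,level,tags:filecount=5,filesize=20M"
```

If the 1.5 s spikes every 1-1.5 hours line up with full-GC or long safepoint entries in gc.log, a 4 GB heap filling slowly and collecting on roughly that period would explain them; if the log shows only short pauses, GC can be ruled out and attention moves to the network or client side.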

Why does node.js + mongodb not give 100 req/sec throughput for 100 requests sent in a second?

I kept the node.js server on one machine and the mongodb server on another. Requests were a mixture of 70% reads and 30% writes. It is observed that at 100 requests per second the throughput is 60 req/sec, and at 200 requests per second the throughput is 130 req/sec. CPU and memory usage is the same in both cases. If the application can serve 130 req/sec, why did it not serve 100 req/sec in the first case, given that CPU and memory utilization are the same? The machines are running Ubuntu Server 14.04.
Create user threads in JMeter, enable "Loop Forever" with a duration of 300 seconds, and then collect the throughput values.
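The gap between offered load and measured throughput usually comes down to Little's law (in-flight concurrency = throughput × latency): a load generator that does not keep enough requests in flight cannot saturate the server. A back-of-the-envelope check, where the ~1 s latency figure is an assumed illustration, not a measurement from this setup:

```python
# Little's law: L = X * W, so X = L / W
# L = average requests in flight, X = throughput (req/s), W = avg latency (s)

def throughput(concurrency, avg_latency_s):
    """Max sustainable throughput for a given in-flight concurrency and latency."""
    return concurrency / avg_latency_s

# If the client only keeps ~60 requests in flight and each takes ~1 s end to end,
# the measured throughput cannot exceed 60 req/s regardless of offered load:
print(throughput(60, 1.0))   # 60.0
# A higher offered load sustains more in-flight requests, so throughput rises:
print(throughput(130, 1.0))  # 130.0
```

This is why the 200 req/s run measured higher throughput than the 100 req/s run at identical CPU/memory: neither run saturated the server, so throughput tracked the client's concurrency rather than a server limit.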

Chat app using node.js: AWS NetworkOut increasing dramatically

I have a chat app built with node.js, but the NetworkOut metric on AWS is increasing dramatically. Here is a graph:
http://www.pictureshoster.com/files/jillq3urhlq7cgqwoyt.png
When NetworkOut increased, the number of accesses was low.
My server is AWS Amazon:
Type: m1.small
ECUs: 1
vCPUs: 1
Memory: 1.7 GB
Network Performance: Low
More graphs from Nodetime.com:
Garbage Collection / Full GCs per minute
Free Memory Server
Process / Node RSS (MB)
URL: http://www.pictureshoster.com/files/70lhz327kvm9xvuicx9.jpg
Note that the graphs increase dramatically at 12:00, the moment the server became slow.
The chat has 250 sockets.
What is a basic configuration to host a chat which receives about 250 requests per minute, where each server request is sent to approximately 200 sockets, generating approximately 250 × 200 = 50,000 messages between server and clients?
Users refresh the page at will, constantly creating and removing sockets.
Help me.
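The fan-out above translates directly into NetworkOut. A rough estimate, where the 2 KB payload is an assumed figure (measure your real message size):

```python
# Estimate outbound bandwidth for the chat fan-out described above.

def network_out_mb_per_min(requests_per_min, sockets_per_request, payload_kb):
    """Messages per minute times payload size, converted KB -> MB."""
    messages = requests_per_min * sockets_per_request   # 250 * 200 = 50000
    return messages * payload_kb / 1024                 # MB per minute

# 250 requests/min fanned out to ~200 sockets at ~2 KB per message:
print(round(network_out_mb_per_min(250, 200, 2), 1))  # 97.7 MB/min
```

That works out to roughly 13 Mbit/s sustained, which could plausibly exhaust an m1.small's "Low" network allowance; if the real payload is larger, NetworkOut grows proportionally, matching the dramatic increase in the graph.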
