High CPU usage when using parse-server with clusters - node.js

I am using the latest version of parse-server (2.3.8) for my slots casino app, and the app communicates with the server frequently, just like a real-time game.
My server configuration is:
DB Server - MongoDB Atlas (M30 - 8GB RAM, 40GB storage), MongoDB 3.4
Parse Server - Rackspace (15GB RAM, 50GB storage, 8 cores), Node.js 6.10.2
For multi-process support I am using clustering via pm2 (latest version), using 6 of the 8 cores.
Currently there are at most 10-20 concurrent users. Initially, when I start up the parse server using clustering -
pm2 start main.js -i 6
The CPU usage is at most 2% - 5%. After 1 day the CPU usage climbs to 70% - 90% while the concurrent users are still 10-20 or fewer.
I am sharing an image of the process list from the output of the "$ top" command.
https://i.stack.imgur.com/BY0fV.png
Please help me figure out how to minimize the CPU usage and where I am going wrong.
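For reference, the same cluster-mode setup can be expressed as a pm2 ecosystem file; this is only a sketch, and the file name and memory limit are illustrative:

// ecosystem.config.js - sketch of the same cluster setup; the memory limit is an assumption
module.exports = {
  apps: [{
    name: 'parse-server',
    script: 'main.js',           // the entry point started above
    instances: 6,                // same as `pm2 start main.js -i 6`
    exec_mode: 'cluster',        // pm2 cluster mode: one Node process per instance
    max_memory_restart: '1G'     // recycle a worker that grows past 1 GB (illustrative value)
  }]
};

This would be started with pm2 start ecosystem.config.js. Note that max_memory_restart only mitigates a leaking worker; it does not explain the CPU growth itself.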

Related

Scaling Elasticsearch for read-heavy applications

We have Node.js (NestJS/Express) based application services on GCP.
We are using Elasticsearch to support full-text search on a blog/news website.
Our requirement is to support at least 2000 reads per second.
While performing load testing, we observed that up to a concurrency of 300, Elasticsearch performs well and response times are acceptable.
CPU usage also spikes under this load. But when the load is increased to 500 or 1000, CPU usage drops and response times increase drastically.
What we don't understand is why our CPU usage is 80% at a load of 300 but only 30-40% when the load increases. Shouldn't CPU pressure increase with load?
What is the right way to tune Elasticsearch for read-heavy usage? (Our write frequency is just 1 document every 2-3 hours.)
We have a single index with approximately 2 million documents. The index size is just 6GB.
The Elasticsearch cluster is deployed on Kubernetes using Helm charts with:
- 1 dedicated master node
- 1 dedicated coordinating node
- 5 dedicated data nodes
Considering the small data size, the index is not sharded and the number of read replicas is set to 4.
The index refresh interval is set to 30 seconds.
RAM allocated to each data node is 2GB and the heap size is 1GB.
CPU allocated to each data node is 1 vCPU.
We tried increasing the search thread pool size to 20 and the queue_size to 10000, but that didn't help much either.
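For context, the replica count and refresh interval described above correspond to index settings that could be applied with the official Node.js Elasticsearch client; this is only a sketch, the index name 'blog' is a placeholder, and the exact request shape differs between client versions:

// sketch: apply read-heavy index settings with @elastic/elasticsearch (7.x-style API)
const { Client } = require('@elastic/elasticsearch');
const client = new Client({ node: 'http://localhost:9200' });

async function applyReadHeavySettings() {
  await client.indices.putSettings({
    index: 'blog',                // placeholder index name
    body: {
      index: {
        number_of_replicas: 4,    // with 5 data nodes, 1 primary + 4 replicas puts a full copy on every node
        refresh_interval: '30s'   // matches the 30 second refresh interval above
      }
    }
  });
}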

Node web app running in Fargate crashes under load with memory and CPU relatively untaxed

We are running a Koa web app in 5 Fargate containers. They are pretty straightforward CRUD/REST APIs built with Koa over MongoDB Atlas. We started doing capacity testing and noticed that the Node servers started to slow down significantly with plenty of headroom left on CPU (sitting at 30%), memory (sitting at or below 20%), and Mongo (still returning in < 10ms).
To further test this, we removed the Mongo operations and just hammered our health-check endpoints. We did see a lot of throughput, but significant degradation occurred at 25% CPU and Node actually crashed at 40% CPU.
Our Fargate tasks (containers) are configured with CPU 2048 (2 "virtual CPUs") and memory 4096 (4 GB).
We raised our ulimit nofile to 64000 and also set --max-old-space-size to 3.5 GB. This didn't result in a significant difference.
We also don't see significant latency in our load balancer.
My expectation is that CPU or memory would climb much higher before the system began experiencing issues.
Any ideas where a bottleneck might exist?
The main issue here was that we were running containers with 2 CPUs. Since Node only effectively uses 1 CPU, there was always a certain amount of CPU allocation that was never used. The ancillary overhead never got the container to 100%, so Node would be overwhelmed on its 1 CPU while the other was basically idle. This resulted in our autoscaling alarms never getting triggered.
So we adjusted to 1-CPU containers with more horizontal scale-out (i.e., more instances).
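An alternative (not what was done above) would have been to keep the 2-vCPU tasks and fork one Node worker per vCPU with the built-in cluster module; a rough sketch, where the Koa app path is a placeholder:

// rough sketch: one Node worker per vCPU using the built-in cluster module
const cluster = require('cluster');
const os = require('os');

if (cluster.isMaster) {
  const workers = os.cpus().length;          // typically 2 on a 2048-CPU Fargate task
  for (let i = 0; i < workers; i++) cluster.fork();
  cluster.on('exit', () => cluster.fork());  // replace a worker that crashes
} else {
  const app = require('./app');              // placeholder: the Koa app
  app.listen(3000);
}

Whether that or the 1-vCPU horizontal scale-out is preferable mostly comes down to how the autoscaling metrics are defined.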

Cassandra keeps using 100% of CPU and not utilizing memory?

We have set up a single-node Cassandra 3.11 cluster with JDK 1.8 on EC2, instance type t2.large, which has 2 CPUs and 7 GB of RAM.
We are facing the issue that Cassandra keeps reaching 100% CPU even though we do not have that much load.
We have 7 GB of RAM, but Cassandra is not utilizing that memory; it only uses 1.7-1.8 GB of RAM.
What configuration needs to change to reduce CPU utilization so it does not reach 100%?
What is the best configuration to get better performance out of Cassandra?
Right now we are able to get only about 100-120 read and 50-100 write operations per second.
Please, someone help us understand the issue and what ways there are to improve the performance configuration.

Scaling Node.js App on Heroku Using Dynos

I am trying to better understand scaling a Node.js server on Heroku. I have an app that handles large amounts of data and have been running into some memory issues.
If a Node.js server is upgraded to a 2x dyno, does this automatically mean that my application will be able to use up to 1.024 GB of RAM on a single thread? My understanding is that a single Node thread has a memory limit of ~1.5 GB, which is above the limit of a 2x dyno.
Now, let's say that I upgrade to a performance-M dyno (2.5 GB of memory); would I need to use clustering to take full advantage of the 2.5 GB of memory?
Also, if a single request is made to my Node.js app for a large amount of data, and processing it exceeds the amount of memory allocated to that cluster worker, will the process then use some of the memory allocated to another worker, or will it just throw an error?
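For the clustering part of the question: each cluster worker is a separate OS process with its own V8 heap, so one worker cannot borrow memory from another; exceeding a worker's heap limit ends in an out-of-memory crash of that process. A minimal sketch of the usual Heroku pattern, assuming the worker count comes from the WEB_CONCURRENCY environment variable and using an illustrative per-worker heap cap:

// minimal sketch: one worker per WEB_CONCURRENCY, each with its own V8 heap
const cluster = require('cluster');

const workers = parseInt(process.env.WEB_CONCURRENCY || '1', 10);

if (cluster.isMaster) {
  // cap each worker's old space so it fits inside the dyno (512 MB is an illustrative value)
  cluster.setupMaster({ execArgv: ['--max-old-space-size=512'] });
  for (let i = 0; i < workers; i++) cluster.fork();
} else {
  require('./server');                       // placeholder for the actual app entry point
}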

Why is my client CPU utilization so high when I use a Cassandra cluster?

This is a follow-on question to Why is my cassandra throughput not improving when I add nodes?. I have configured my client and nodes as closely as I could to what is recommended here: http://docs.datastax.com/en/cassandra/2.1/cassandra/install/installRecommendSettings.html. The whole setup is not exactly world class (the client is on a laptop with 32G of RAM and a modern-ish processor, for example). I am more interested in developing an intuition for the Cassandra infrastructure at this point.
I notice that if I shut down all but one of the nodes in the cluster and run my test client against it, I get a throughput of ~120-140 inserts/s and a CPU utilization of ~30-40%. When I crank up all 6 nodes and run this one client against them, I see a throughput of ~110-120 inserts/s and my CPU utilization gets to between ~80-100%.
All my tests are run with a clean DB (I completely delete all DB files and restart from scratch) and I insert 30M rows.
My test client is multi-threaded and each thread writes exclusively to one partition using unlogged batches, as recommended by various sources for a schema like mine (e.g. https://lostechies.com/ryansvihla/2014/08/28/cassandra-batch-loading-without-the-batch-keyword/).
Is this CPU spike expected behavior?
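For illustration only (the actual test client is not shown in the question), a single-partition unlogged batch of the kind described above looks like the following with the DataStax Node.js cassandra-driver; the keyspace, table, and columns are placeholders:

// illustrative only: single-partition unlogged batch with the cassandra-driver package
const cassandra = require('cassandra-driver');
const client = new cassandra.Client({
  contactPoints: ['127.0.0.1'],
  localDataCenter: 'datacenter1',
  keyspace: 'demo'                            // placeholder keyspace
});

async function insertPartition(partitionKey, rows) {
  // every statement shares the same partition key, so the batch targets a single replica set
  const queries = rows.map(r => ({
    query: 'INSERT INTO events (pk, ts, payload) VALUES (?, ?, ?)',   // placeholder table
    params: [partitionKey, r.ts, r.payload]
  }));
  await client.batch(queries, { prepared: true, logged: false });     // logged: false => unlogged batch
}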
