EC2 Ubuntu server disk usage increasing day by day - Node.js

I have an EC2 Ubuntu t2.micro server running a Node.js app with a MongoDB database that is installed on the same server. I'm observing that the disk usage is increasing day by day. I'm trying to find the cause, but I can't see any process or file that is responsible.
I have installed Nginx and PM2 and tried to turn off their logs as well, but no luck.
The output of df -h:
The output of sudo du -sh /*:
Can someone please help me figure out whether any hidden process is consuming the storage?
Thanks
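A minimal sketch of the commands that usually narrow this down. The MongoDB and Nginx paths below are the Ubuntu package defaults and are assumptions; adjust them to your layout:

    # largest top-level directories, then drill down one level at a time
    sudo du -xh --max-depth=1 / | sort -rh | head -20
    sudo du -xh --max-depth=1 /var | sort -rh | head -20

    # usual growth points on a stack like this (paths are assumptions)
    sudo du -sh /var/lib/mongodb /var/log/mongodb /var/log/nginx ~/.pm2/logs

    # the systemd journal can also grow unbounded
    journalctl --disk-usage

On a setup like this the usual culprits are PM2 logs, Nginx access logs, the MongoDB data and journal files, and the systemd journal.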

Related

"/home/nodejs/.pm2/logs" is taking 16GB of disk storage

I have a DigitalOcean droplet running a Node.js app. Everything was fine until the system stopped because it ran out of storage. "/home/nodejs/.pm2/logs" takes up 16GB of the storage.
I have run "pm2 flush", but the logs folder still exists. The logs should have been deleted.
Any ideas on how to delete these logs?
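A sketch of one common way to handle this, assuming the standard pm2-logrotate module; the log path is the one from the question:

    # flush truncates the current log files; rotated/old files may remain and can be removed by hand
    pm2 flush
    rm -f /home/nodejs/.pm2/logs/*.log
    pm2 reloadLogs    # make PM2 reopen fresh log files

    # install the log-rotation module so the folder stops growing
    pm2 install pm2-logrotate
    pm2 set pm2-logrotate:max_size 10M
    pm2 set pm2-logrotate:retain 7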

How do I check the trend of disk utilization in Linux (CentOS specifically)?

I have a CentOS server running MySQL, Kafka, and other services, with a separate LVM disk mounted for each of these services.
How do I get the trend of disk utilization for these services? Is there a specific Linux command I can use to check?
I want to make sure I will not run out of disk space in the coming days.
Thanks.
The df command will output the info you need. You can create a periodic script that checks it; see the cron sketch below.
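A minimal sketch of such a periodic check via cron; the mount points are assumptions based on the question and should be replaced with your actual LVM mount paths:

    # crontab -e: append a timestamped df snapshot every hour
    0 * * * * (date; df -h /var/lib/mysql /var/lib/kafka) >> /var/log/disk-usage-trend.log 2>&1

Grepping or plotting that log over a few days gives the growth trend per mount.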

Node process suddenly hits 100% CPU in an EC2 instance

I am running a Node application (Socket.IO) on AWS EC2 (2 CPUs, 4 GB RAM) with a load balancer and Auto Scaling setup. I've set up a systemd unit for the Node service, and it runs fine for a day at only 1% CPU; the next day it stops working.
When I check the details with top, the Node process's CPU utilization is 100%, but AWS monitoring shows only 50% usage, and it stays there.
From various references, I have not been able to find the area of high consumption or the root cause.
Please help me figure out why Node maxes out only one CPU and what is causing the high consumption.
From cat /proc/PID/status:
Thanks in advance!
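A sketch of the kind of inspection that helps here. Note that top reports CPU per core, so 100% for a single-threaded Node process on a 2-core instance is consistent with the 50% shown by AWS monitoring. <PID> is the Node process ID:

    # per-thread CPU usage of the Node process
    top -H -p <PID>

    # thread and context-switch counters, as in the question
    grep -Ei 'threads|ctxt' /proc/<PID>/status

    # one option for finding the hot code path: enable the inspector on the
    # already-running process, then attach Chrome DevTools for a CPU profile
    kill -USR1 <PID>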

Docker containers freezing

I'm currently trying to deploy a Node.js app in Docker containers. I need to deploy 30 of them, but at some point they start behaving strangely: some of them freeze.
I am currently running Docker for Windows 18.03.0-ce, build 0520e24302. My computer specs (CPU and memory):
i5 4670K
24 GB of RAM
My Docker default machine resource allocation is the following:
Allocated RAM: 10 GB
Allocated vCPUs: 4
My Node application runs on Alpine 3.8 and Node.js 11.4 and mostly makes HTTP requests every 2-3 seconds.
When I deploy 20 containers, everything runs like a charm: my application does its job, and I can see activity on every one of my containers through the logs and activity stats.
The problem comes when I try to deploy more than 20 containers: I notice that some of the previously deployed containers stop their activity (0% CPU usage, logs frozen). When everything is deployed (30 containers), Docker starts blocking the activity of some of them and at some point unblocks them to block some others (the blocking/unblocking appears random). It seems to be sequential. I waited to see what happened, and the result is that some of the containers are able to continue their activity while others are stuck forever (still running but with no more activity).
It's important to note that I used the following resource restrictions on each of my containers (mapped to docker run flags in the sketch below):
MemoryReservation: 160 MB
Memory soft limit: 160 MB
NanoCPUs: 250000000 (0.25 CPUs)
I had to increase my Docker default machine resource allocation and decrease each container's resource allocation because Docker was using almost 100% of my CPU; maybe I made a mistake in my configuration. I tried to tweak those values, but without success: I still have some containers freezing.
I'm kind of lost right now.
Any help would be appreciated, even a little; thank you in advance!
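For reference, a sketch of how the restrictions described above map onto docker run flags (the image and container names are placeholders): a NanoCPUs value of 250000000 corresponds to --cpus 0.25, and the 160 MB soft limit is --memory-reservation.

    docker run -d \
      --name app-1 \
      --memory-reservation 160m \
      --cpus 0.25 \
      my-node-app:latest

    # live CPU/memory per container, to see which ones actually stall
    docker stats

Also worth noting: 30 containers allowed 0.25 CPU each can demand up to 7.5 CPUs of work while the Docker machine is allocated only 4 vCPUs, so CPU contention alone may contribute to the stalls at that density.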

ELK stack performance tuning

I am new to the ELK stack. I just installed it to give it a test drive for our production systems' log management and started pushing logs (IIS and Event logs) from 10 Windows VMs using nxlog.
After the installation, I am receiving around 25K hits per 15 minutes according to my Kibana dashboard. The size of /var/lib/elasticsearch/ has grown to around 15 GB in just 4 days.
I am facing serious performance issues: the Elasticsearch process is eating up all my CPU and around 90% of memory.
The Elasticsearch service got stuck previously, and /etc/init.d/elasticsearch stop/start/restart wasn't even working. The process kept running even after I tried to kill it with the kill command. A system reboot also brought the machine back to the same condition. I just deleted all the indices with a curl command, and now I am able to restart Elasticsearch.
I am using a standard A3 Azure instance (7 GB RAM, 4 cores) for this ELK setup.
Please guide me on tuning my ELK stack to achieve good performance. Thanks.
You are using 7 GB of RAM, so your JVM heap size for Elasticsearch should be less than 3.5 GB.
For more information you can read the Elasticsearch heap sizing documentation.
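A sketch of where that heap limit is typically set. The jvm.options path assumes a recent Elasticsearch package install; older releases use the ES_HEAP_SIZE variable in /etc/default/elasticsearch (or /etc/sysconfig/elasticsearch) instead:

    # /etc/elasticsearch/jvm.options -- keep min and max equal, at or below half of RAM
    -Xms3g
    -Xmx3g

    # then restart the service
    sudo /etc/init.d/elasticsearch restart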
