I'm facing an issue in a React/Next.js/Node app: when I run pm2 reload it takes a long time before the port is registered again. My guess is that it scans all files in the project before restarting, and because my public/media directory is 10 TB in size it hangs. I tried the --ignore-watch option as well, but it didn't work; I'm not sure if I implemented it wrong.
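If the delay really is pm2's watcher scanning the tree, the usual fix is to disable watch, or to exclude the huge directory in the ecosystem file rather than on the CLI. A minimal sketch; the app name, script, and paths are assumptions for your setup:

module.exports = {
  apps: [{
    name: 'next-app',   // hypothetical app name
    script: 'npm',
    args: 'start',
    watch: false,       // pm2 only scans files when watch is enabled
    // if you do need watch, exclude the huge directory instead:
    // watch: true,
    // ignore_watch: ['public/media', 'node_modules'],
  }],
};

Then start with pm2 start ecosystem.config.js. Note that --ignore-watch on the CLI only takes effect when --watch is also passed; with watch disabled there is nothing to scan.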
I'm building a website with React. Whenever I start the React app using npm start, it takes far too long to execute (almost 30-40 minutes), and after the app starts my laptop becomes much slower. How can I fix this problem?
Note: I have tried yarn start with almost the same result.
So I have a Node (v8.9.4) server running on an AWS EC2 instance, started with the forever package. The server has worked without any issues for years, but now that it's grown and more people are using it, it suddenly starts to time out all requests at seemingly random times, after working for a few hours.
I've found that running forever restart on the server gets all requests working again, so I've set up a temporary cron job to restart it every hour, but this is not good design and I would much rather have the server running without any issues.
I've gone through my server logs and found this which may be significant:
error: Forever detected script was killed by signal: SIGKILL
error: Script restart attempt #131
Warning: connect.session() MemoryStore is not designed for a production environment, as it will leak memory, and will not scale past a single process.
Another thing that may be important: the server process stays up while this issue occurs, so any check on the server status through UptimeRobot (or any other status checker) returns a success.
Considering the server will run fine for a few hours, and also starts up again with no issues after a restart, I'm thinking it is not an issue with the code but something else I am not aware of. My current hypothesis is that requests start timing out when the server runs out of CPU, but I would like to explore more options before making the final call on the issue. If anyone has any insight into this, I would be super grateful! :)
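For what it's worth, the MemoryStore warning in those logs means express-session is keeping every session in process memory, which grows without bound and can eventually get the process SIGKILLed by the kernel's OOM killer. The common remedy is an external session store. A minimal sketch, assuming express-session with connect-redis v6 and the redis v3 client; the exact API differs between package versions:

const session = require('express-session');
const RedisStore = require('connect-redis')(session); // connect-redis v6 API
const redis = require('redis');

const client = redis.createClient(); // assumes Redis running locally on 6379

app.use(session({
  store: new RedisStore({ client }),          // sessions now live in Redis, not process memory
  secret: process.env.SESSION_SECRET,          // hypothetical secret source
  resave: false,
  saveUninitialized: false,
}));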
I am running a MEAN application with separate client and server folders, managed with pm2. When I do pm2 restart all the application works fine and uses little memory, but after some time memory usage increases and the application slows down.
I have attached two screenshots taken at a 10-hour interval to illustrate the problem:
[Screenshot: memory usage right after restarting PM2]
[Screenshot: memory usage after 10 hours of running]
Nothing can be said without looking at your code, but it sounds like you are storing data in memory that grows over time. Check whether you can clear such variables and keep their size to a minimum.
The best solution is more efficient code that avoids holding on to redundant variables, but that is easier said than done. If you are using PM2, you can set max_memory_restart to give the application a memory limit; the app will restart every time it goes over, which keeps your system clear.
You can enable this by editing your ecosystem.config.js:
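A minimal sketch; the name, script path, and 300M threshold are placeholders to adapt:

module.exports = {
  apps: [{
    name: 'api',                 // hypothetical app name
    script: './server.js',       // hypothetical entry point
    max_memory_restart: '300M',  // pm2 restarts the process when it exceeds this
  }],
};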
You can read more about it in their docs: https://pm2.keymetrics.io/docs/usage/application-declaration/
I have a Parse Server, a Node.js + Express wrapper, backing a mobile app (about 100 simultaneous users every day), hosted on DigitalOcean. The app server communicates with MongoDB, which is hosted on another DigitalOcean droplet. I'm using pm2 as a process manager, along with its web-based monitoring tool. On the same process we run LiveQuery, a WebSocket server made by the Parse community.
The thing is, I've been having some performance issues with the server. Everything works smoothly, until the Active handles rise up uncontrollably! (see the image below) It's like after one point the server says "I'm done! Now I rest!"
Usually the active handles stay between 30 to 70. The moment I restart the process with pm2 restart everything goes back to normal!
I've been having this issue for quite some time now and I haven’t been able to figure out what’s causing it! Any help will be greatly appreciated!
EDIT: I did a stress test where I created 200 LiveQuery sockets for 1 user, instead of the 2 that a user normally has, and there was a spike of 300 active handles for about 5 seconds! The moment the sockets were all created, everything went back to normal!
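One way to see what those handles actually are is to log them from inside the process. A rough diagnostic sketch; process._getActiveHandles() and process._getActiveRequests() are undocumented Node internals and may change between versions:

setInterval(() => {
  // undocumented internals: return arrays of live handle/request objects
  const handles = process._getActiveHandles();
  console.log('active handles:', handles.length);
  console.log('handle types:', handles.map((h) => h.constructor.name).join(', '));
  console.log('active requests:', process._getActiveRequests().length);
}, 60 * 1000); // log once a minute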
I usually restart based on memory usage:
pm2 start filename.js --max-memory-restart 160M --exp-backoff-restart-delay=100
(Note the unit suffix on the memory limit: without K, M, or G, pm2 interprets the value in bytes.)
pm2 also has a built-in cron restart, and an autostart script setup in case the server ever reboots; see https://pm2.keymetrics.io/docs/usage/restart-strategies/
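For example (the cron expression and filename are placeholders):

pm2 start filename.js --cron-restart="0 */6 * * *"   # restart on a schedule, here every 6 hours
pm2 startup   # generate an init script so pm2 itself comes back after a reboot
pm2 save      # persist the current process list so it is resurrected on boot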
It would be cool if pm2 provided restart options based on active connections or heap memory.
I'm trying to figure out why my Node.js app becomes unresponsive after 11 h 20 min. It happens every time, no matter whether I run it on Amazon Linux or Red Hat.
My Stack:
nodejs (v. 6.9.4)
mongodb (3.2)
pm2 process manager
AWS EC2 instance T2 medium
Every time I run the app, it eventually becomes unresponsive with this error returned to the browser:
net::ERR_CONNECTION_RESET
PM2 doesn't restart the app, so I suspect it has nothing to do with Node.js itself. I also analysed the app and it doesn't have memory leaks, and the DB logs look fine as well.
The only constant factor is that the app crashes after it runs for 11 h 20 min.
I'm handling all possible errors in the Node.js app, but no errors appear in the log files, so I suspect it has to be something else.
I also checked /var/log/messages and /home/centos/messages, but there's nothing related to the crash of the app there.
/var/log/mongodb/mongo.log doesn't show anything specific either.
What would be the best way to approach the problem?
Any clues on how I can debug it, or what the reason could be?
Thanks
Copied from the comment since it apparently led to the solution:
You're leaking something other than memory is my guess, maybe file descriptors. Try using netstat or lsof to see if there are a lot more open connections or files than you expect.
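A concrete way to check, assuming the app runs as a single node process on Linux (adjust the pgrep pattern for your setup):

# count open file descriptors held by the node process
lsof -p $(pgrep -n node) | wc -l

# count established TCP connections owned by node (root may be needed to see other users' processes)
netstat -anp 2>/dev/null | grep node | grep ESTABLISHED | wc -l

# compare against the per-process file descriptor limit
cat /proc/$(pgrep -n node)/limits | grep 'open files'

If the first number climbs steadily toward the limit over those 11-plus hours, you have found the leak.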