I am using Nuxt for server-side rendering. I finished this project, but at the last stage, when I deploy to production, Nuxt or SSR (I'm not sure which) uses 100% CPU on the system side, and the CentOS machine stops responding.
Do you have any suggestions about this problem? What should I look at?
I solved this problem. All I had to do was:
pm2 start npm --name server -i max -- run start
When you use:
pm2 start npm --name server -- run start
Node runs on just one core, but when you use '-i max', PM2 starts the app in cluster mode with one worker per CPU core, so Node can use the full capacity of your server.
Maybe this info is useful for someone else...
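The same '-i max' setup can also be kept in a PM2 ecosystem file. This is a minimal sketch, not a complete config: the name 'server' matches the --name flag above, and the remaining fields are assumed defaults.

```javascript
// ecosystem.config.js -- sketch equivalent of:
//   pm2 start npm --name server -i max -- run start
const config = {
  apps: [
    {
      name: 'server',        // matches --name server
      script: 'npm',
      args: 'run start',
      instances: 'max',      // same as -i max: one worker per CPU core
      exec_mode: 'cluster',  // cluster mode load-balances across the workers
    },
  ],
};

module.exports = config;
```

You would then start everything with pm2 start ecosystem.config.js, which is easier to keep in version control than a long command line.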
Related
I have an EC2 instance that is running a Node application. I am thinking of doing a container implementation using Docker. PM2 is running two applications: the actual Node application (Express and Pug) and a cron job using Agenda. Is it a good idea to put my applications in one container?
I am not yet familiar with the pros and cons of this, and I read that Docker is already a process manager. How will PM2 fit into all of this once I implement it? Or should I just ditch Docker and run the applications natively on the Linux of my EC2?
You have a couple of questions; I'll try to answer them below:
1. Is it a good idea to put my applications in one container?
It depends; there are many cases where you would want the same container to do multiple things. But it really depends on the CPU/RAM usage of the job, and how often it runs.
In any case, from experience I can say that if I run a cron job from the same container, I always use a worker approach, using either Node's worker_threads or cluster module, because you do not want a cron job to impact the behavior of the main thread. I have an example of running 2 applications on multiple threads in the following repo.
2. Should I just ditch Docker and run the applications natively on my EC2?
Docker and PM2 are two really different things. Docker basically containerizes your entire Node app so it is much easier to ship. PM2 is a process manager for Node: it makes sure your app stays up after a crash and comes with nice metrics and a log UI on PM2 metrics. You can definitely use the two together.
However, if you use PM2 inside Docker, you have to use pm2-runtime. Example Dockerfile:
FROM node:16.9.0
WORKDIR /home/usr/app
COPY . .
RUN npm ci && npm run build
# default command is starting the server
CMD ["npx", "pm2-runtime", "npm", "--", "start"]
I would like to have some guidance about what is supposed to be the best development workflow with JHipster.
What I did expect:
With one docker-compose command, I could bring up everything the project needs (in this case, MongoDB, Kafka, the backend, etc.);
When modifying the front end, saving the modified files would trigger livesync (ng serve --watch?).
What I did find:
The one-command option that I found (docker-compose -f src/main/docker/app.yml up -d), which I guess depends on running ./mvnw package -Pprod verify jib:dockerBuild first, does not livesync, and it seems incompatible with running the front end individually with npm run start: an application started this way points at different ports for the backend's modules (?).
I have experience with Angular and MongoDB (and a little with Docker), but I'm completely new to JHipster and am trying to understand what I am doing wrong.
Thanks in advance!
For the development workflow, you should start the dependencies individually. app.yml starts the app's Docker image with the prod profile, which is useful for testing locally before deploying.
Start Containers for Mongo and Kafka
docker-compose -f src/main/docker/mongodb.yml up -d
docker-compose -f src/main/docker/kafka.yml up -d
Start the backend
./mvnw
Start frontend live-reload
npm start
If Docker is not accessible on localhost, you may need to configure application-dev.yml to point to the Docker host's IP.
This is my first time with EC2, so keep that in mind. I spun up an EC2 instance and put a really basic Node.js/Express app on it. I connected to the EC2 server via the terminal on my personal computer and ran node app.js to start the app, and everything is running fine. The part I am confused about is how long this will run for. Ideally, I just want to leave it alone and have it run for, hopefully, years. Will it do this? If not, what do I need to do? What if the server restarts for some reason? What is the common practice here?
Go to the root directory of your project and run these commands to keep the server running permanently:
sudo npm install forever -g
forever start -c "node app.js" ./
This blog may be helpful for setting up Node in production environments.
I have built a Node app inside Docker, and it builds and runs perfectly on my local machine (Mint 18). But when I upload the same to DigitalOcean's Docker droplet (Ubuntu 16.04), it hangs midway through building and eventually throws an error. This occurs at exactly the same place each time.
Here are the last line and the error message I see when building:
npm info lifecycle app#0.0.1~preinstall: app#0.0.1
Killed
The command '/bin/sh -c npm install' returned a non-zero code: 137
PS: I am new to Docker and have only been using it a few days, so this might be something very obvious.
If you look at issue 1554, it could be a resource issue: either low memory or low disk space would cause such an error message (exit code 137 typically means the process was killed, often by the out-of-memory killer).
This DigitalOcean tutorial mentions that the basic droplet has only 512 MB of memory. Maybe the combined images of your Dockerfile project are too large.
Details
I tried to deploy a NodeJS app via docker-compose to a DigitalOcean droplet. My app hung every time on the build step, but when I executed docker-compose up --build locally, I had no problems.
P.S. I have 1 GB of RAM on my DO droplet.
Solution
So I just added a .dockerignore (source) to the NodeJS project:
# Logs
logs
*.log
# Runtime data
pids
*.pid
*.seed
# Directory for instrumented libs generated by jscoverage/JSCover
lib-cov
# Coverage directory used by tools like istanbul
coverage
# Grunt intermediate storage (http://gruntjs.com/creating-plugins#storing-task-files)
.grunt
# node-waf configuration
.lock-wscript
# Compiled binary addons (http://nodejs.org/api/addons.html)
build/Release
# Dependency directory
# https://www.npmjs.org/doc/misc/npm-faq.html#should-i-check-my-node_modules-folder-into-git
node_modules
server/*.spec.js
kubernetes
You are probably lacking some swap space!
I use Docker's node image to build React applications, and the server requirements after the build are pretty low: 512 MB or 1 GB is enough for testing environments and even for some small production environments.
However, Node requires much more memory during build time, and DigitalOcean droplets come with no swap space by default, but that is easy to work around.
I have 2 or more Node apps that have to run forever. If I reboot my PC, I don't want to have to start the servers manually; they should all start automatically. I used an /etc/init.d node-app file and made some changes, and it works, but only for one Node app, and I have many apps on one server. Can anyone please help me?
Here is what you need to do this:
https://github.com/nodejitsu/forever