Docker nodejs build works locally but hangs on server - node.js

I have built a Node app inside Docker, and it builds and runs perfectly on my local machine (Mint 18). But when I upload the same project to Digital Ocean's Docker droplet (Ubuntu 16.04), it hangs midway through the build and eventually throws an error. This occurs at exactly the same place each time.
Here are the last line and the error message I see when building:
npm info lifecycle app#0.0.1~preinstall: app#0.0.1
Killed
The command '/bin/sh -c npm install' returned a non-zero code: 137
PS: I am new to Docker and have only been using it for a few days, so this might be something very obvious.

If you look at issue 1554, it could be a resource issue: exit code 137 means the process was killed with SIGKILL (128 + 9), typically by the kernel's OOM killer.
Either low memory or low disk space would cause such an error message.
This Digital Ocean tutorial mentions that the basic droplet has only 512 MB of memory. Maybe the combined images of your Dockerfile project are simply too large for it.
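A quick way to confirm that on the droplet, while the build runs, is to check free memory and disk and then look for OOM-killer traces (standard Linux and Docker commands, nothing project-specific):
free -m              # available RAM and swap
df -h                # free disk space
docker system df     # space used by images, containers and volumes
dmesg | grep -i -E 'killed process|out of memory'   # evidence of the OOM killer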

Details
I tried to deploy a NodeJS app via docker-compose to a Digital Ocean droplet. My app hung every time at the build step. But when I executed docker-compose up --build locally, I had no problems.
P.S. I have 1 GB of RAM on my DO droplet.
Solution
So, I just added a .dockerignore (source) to the NodeJS project, so that node_modules and other unneeded files are no longer sent to the Docker daemon as part of the build context:
# Logs
logs
*.log
# Runtime data
pids
*.pid
*.seed
# Directory for instrumented libs generated by jscoverage/JSCover
lib-cov
# Coverage directory used by tools like istanbul
coverage
# Grunt intermediate storage (http://gruntjs.com/creating-plugins#storing-task-files)
.grunt
# node-waf configuration
.lock-wscript
# Compiled binary addons (http://nodejs.org/api/addons.html)
build/Release
# Dependency directory
# https://www.npmjs.org/doc/misc/npm-faq.html#should-i-check-my-node_modules-folder-into-git
node_modules
server/*.spec.js
kubernetes

You are probably lacking swap space!
I use Docker's node image to build React applications, and the server requirements after the build are pretty low; 512 MB or 1 GB is enough for a testing environment and even for some small production environments.
However, Node requires much more memory at build time, and Digital Ocean droplets come with no swap space by default, but that is easy to work around.
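As a minimal sketch (the 1G size is a placeholder; size it to your droplet), adding a swap file on an Ubuntu droplet looks roughly like this:
sudo fallocate -l 1G /swapfile    # create a 1 GB swap file
sudo chmod 600 /swapfile          # restrict permissions
sudo mkswap /swapfile             # format it as swap
sudo swapon /swapfile             # enable it immediately
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab   # keep it after reboots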

Related

Issue with running Node application in AWS ECS container

I have created a Docker image and pushed it to the AWS ECR repository
I'm creating a task with 3 containers included: one for Redis, one for PostgreSQL, and another for the given image, which is my Node project.
In the Dockerfile, I have added a CMD to run the app with the node command; here is the Dockerfile content:
# Build stage: install all dependencies and compile the project
FROM node:16-alpine as build
WORKDIR /usr/token-manager/app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# Production stage: install only production dependencies and copy the compiled output
FROM node:16-alpine as production
ARG ENV_ARG=production
ENV NODE_ENV=${ENV_ARG}
WORKDIR /usr/token-manager/app
COPY package*.json ./
RUN npm install --production
COPY --from=build /usr/token-manager/app/dist ./dist
CMD ["node", "./dist/index.js"]
This image works in docker-compose locally without any issue.
The issue is that when I run the task in the ECS cluster, it does not run the Node project; it seems that it is not running the CMD command.
I tried to override that CMD command by adding a new command to the task definition.
When I run the task with this command, there is nothing in the CloudWatch log and obviously the Node app is not running; you can see that there is no log for the api-container.
When I change the command to something else, for example "ls", it gets executed and I can see the result in the CloudWatch log.
Or when I change it to a wrong command, I get an error in the log.
But when I change it to the right command, which should run the app, nothing happens; it's not even showing anything in the log as an error.
I have added inbound rules to allow the port number needed for connecting to the app, but it seems it's not running at all!
What should I do? How can I check to see what the issue is?
UPDATE: I changed the app container configuration to make it Essential, meaning that the whole task will fail and stop if this container exits with an error. Then I started the task again and it got stopped, so now I'm sure that the app container is crashing and exiting somehow, but there is nothing in the log!
First: Make sure your Docker image is deployed to ECR (you can use CodePipeline), because that is where ECS will look for the Docker image.
Second: Specify your launch type; in the case of EC2, make sure you are using the latest Node image when adding the container.
Here you can find the latest Docker image for Node: https://hub.docker.com/_/node
Third: Create the task definition and run the task, then navigate to the cluster and check whether the task is running and what its status is.
Fourth: Make sure you allow all inbound traffic in the security group and open HTTP for 0.0.0.0/0.
You can test using curl, e.g. http://ec2-52-38-113-251.us-west-2.compute.amazonaws.com
If that fails, I would recommend deploying a simple Node app first, getting that running, and then deploying your project. Thank you.
I found the issue; I'll post it here, as it may help someone else.
If you go to the cluster details screen > Tasks tab > Stopped > Task ID, you can see a brief status message regarding each container in the Containers list.
It says that the container was killed due to a memory issue. We can fix it by increasing the memory we specify for containers when adding a new task definition.
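For reference, the same stopped reason can be read from the AWS CLI, which helps when nothing reaches CloudWatch (the cluster name and task ID are placeholders):
aws ecs list-tasks --cluster my-cluster --desired-status STOPPED
aws ecs describe-tasks --cluster my-cluster --tasks <task-id> \
    --query 'tasks[].{stopped:stoppedReason,containers:containers[].reason}'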
This is the total amount of memory you want to give to the whole task, which will be shared between all containers.
When you are adding a new container, there is a place for specifying the memory limit:
Hard limit: if you specify a hard limit, your container will be killed when it attempts to exceed that amount of memory.
Soft limit: if you specify a soft limit, ECS will reserve that memory for your container, but the container can request more memory, up to the hard limit.
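As a rough sketch of where these limits live, the container definition inside the task definition JSON carries them as memory (hard limit) and memoryReservation (soft limit), both in MiB. The container name matches the question's api-container; the image, port, and numbers are placeholders:
{
  "name": "api-container",
  "image": "<account>.dkr.ecr.<region>.amazonaws.com/my-app:latest",
  "essential": true,
  "memory": 1024,
  "memoryReservation": 512,
  "portMappings": [{ "containerPort": 3000 }]
}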
So the main point here is: when there is some kind of startup issue in the container, there won't be any log in CloudWatch. When there is an issue but nothing shows up in the log, we should check possibilities like memory, or anything else that prevents the container from starting.

Nuxt SSR project use 100% CPU

I am using Nuxt for server-side rendering. I finished this project, but at the last stage, when I deploy to production, Nuxt or SSR (I don't know which) uses 100% CPU on the system side, and because of this the CentOS machine stops responding.
Do you have any suggestions about this problem? What should I look at?
I solved this problem. All I had to do was:
pm2 start npm --name server -i max -- run start
When you use:
pm2 start npm --name server -- run start
Node works on just one core, but when you use '-i max', PM2 runs the app across the full capacity of your server's cores.
Maybe this info is useful for someone...
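For completeness, the same setup can be kept in a PM2 ecosystem file instead of CLI flags; this is a minimal sketch based on the command above (the app name and npm start script are taken from it, everything else is a standard PM2 option):
// ecosystem.config.js
module.exports = {
  apps: [
    {
      name: 'server',        // same name as in the pm2 command above
      script: 'npm',
      args: 'run start',
      instances: 'max',      // one process per available CPU core
      exec_mode: 'cluster'   // cluster mode load-balances across the instances
    }
  ]
};
Start it with: pm2 start ecosystem.config.js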

JHipster + Angular + MongoDB + Docker: beginner question

I would like to have some guidance about what is supposed to be the best development workflow with JHipster.
What I did expect:
With one docker-compose command, I could bring up and run everything the project needs (in this case, MongoDB, Kafka, the backend, etc.);
When modifying the front-end, saving the modified files would trigger livesync (ng serve --watch?).
What I did find:
The one-command option that I found (docker-compose -f src/main/docker/app.yml up -d), which I guess depends on running ./mvnw package -Pprod verify jib:dockerBuild beforehand, does not livesync, and it seems incompatible with running the front-end individually via npm run start; an application started that way points to different backend module ports (?).
I have experience with Angular and MongoDB (and a little with Docker), but I'm super new to JHipster and am trying to understand what I am doing wrong.
Thanks in advance!
For development workflow, you should start the dependencies individually. The app.yml will start the app's Docker image with the prod profile, useful for testing locally before deploying.
Start Containers for Mongo and Kafka
docker-compose -f src/main/docker/mongodb.yml up -d
docker-compose -f src/main/docker/kafka.yml up -d
Start the backend
./mvnw
Start frontend live-reload
npm start
If Docker is not accessible on localhost, you may need to configure application-dev.yml to point to the Docker IP.
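If it does, pointing the dev profile at the Docker host is usually just a matter of changing the MongoDB URI in src/main/resources/config/application-dev.yml; a rough sketch (the IP, port, and database name are placeholders, and the exact keys can differ between JHipster versions):
spring:
  data:
    mongodb:
      uri: mongodb://192.168.99.100:27017/myapp   # Docker host IP instead of localhost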

Nodemon Doesn't Restart in Windows Docker Environment

My goal is to set up a Docker container that automatically restarts a NodeJS server when file changes are detected from the host machine.
I have chosen nodemon to watch the files for changes.
On Linux and Mac environments, nodemon and docker are working flawlessly.
However, when I am in a Windows environment, nodemon doesn't restart the server.
The files are updated on the host machine, and are linked using the volumes parameter in my docker-compose.yml file.
I can see the files have changed when I run docker exec <container-name> cat /path/to/fileChanged.js. This way I know the files are being linked correctly and have been modified in the container.
Is there any reason why nodemon doesn't restart the server for Windows?
Use nodemon --legacy-watch to poll for file changes instead of listening to file system events.
VirtualBox doesn't pass file system events over the vboxfs share to your Linux VM. If you're using Docker for Windows, it would appear Hyper-V doesn't propagate file system events either.
As a 2021 side note, Docker for Mac/Windows' new gRPC FUSE file system for mounting local files into the VM should now send file system events across.
2022 note: it looks like Windows/WSL Docker doesn't share FS events with the Linux VM (see the comments by Mohamed Mirghani and Ryan Wheale, and the GitHub issue).
It is simple, according to the doc you must change:
nodemon server.js
to:
nodemon --legacy-watch server.js
As mentioned by others, using nodemon --legacy-watch will work; however, the default polling rate is quite taxing on your CPU. In my case, it was consuming 30% of my CPU just by looping through all the files in my project. I would advise you to specify the polling interval, as mentioned by Sandokan El Cojo.
You can do so by either adding "pollingInterval": 4000 (4 seconds in this example) to your nodemon.json file or specifying it with the -P or --polling-interval flag in the command.
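Putting both suggestions together, a nodemon.json along these lines should enable polling with a slower interval (a sketch using the options named above; the watched directory is a placeholder):
{
  "legacyWatch": true,
  "pollingInterval": 4000,
  "watch": ["server"],
  "ext": "js,json"
}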
This was an issue in Docker for Windows. It has now been fixed:
https://www.docker.com/blog/new-filesharing-implementation-in-docker-desktop-windows/

How can I run my MEANjs app on Digital Ocean permanently?

I've just successfully created a new droplet on Digital Ocean using their MEAN on Ubuntu 14.04 image. I can run my app from the terminal using 'grunt serve' and then view it in the browser at "ip_address:3000". But I still don't understand how I can serve it permanently, by which I mean keep the app running even after I close my terminal. I've heard of the tool "Forever", but I don't really understand it. Do I even need it, or is there another simpler way?
On the command line do:
$ export NODE_ENV=production
This will set up the production environment.
$ grunt build
This will create the necessary .min.js and .min.css files.
$ forever start server.js
This will start the server with forever, a package that makes sure the Node server restarts after an error and keeps logs.
I don't know Digital Ocean at all, but I can tell you that you are looking for a web server such as nginx.
The way you are running your server is really just for development purposes. That's why the application stops executing when you close your terminal.
Setting up servers can be its own large task. This is a Node.js + nginx example: Node.js + Nginx - What now?
You may have to Google for some more specific examples or tutorials on how to do it with digital ocean.
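For orientation, the reverse-proxy part usually boils down to an nginx server block like this (the domain is a placeholder and the app is assumed to listen on port 3000, as in the question):
server {
    listen 80;
    server_name example.com;                     # placeholder domain
    location / {
        proxy_pass http://127.0.0.1:3000;        # forward requests to the Node app
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;  # allow websocket upgrades
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
    }
}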
EDIT: you can also run a background process that will not stop executing when you exit the shell session. http://linuxtidbits.wordpress.com/2008/02/01/background-a-process/
