How can I run two different Node.js apps in one Docker image?
Two different directives, CMD [ "node", "app.js"] and CMD [ "node", "otherapp.js"], won't work, because only the last CMD in a Dockerfile takes effect.
I recommend using pm2 as the entrypoint process; it will manage all your Node.js applications within the Docker image. The advantage of this is that pm2 can behave as a proper process manager, which is essential in Docker. Other helpful features are load balancing, restarting applications that consume too much memory or just die for whatever reason, and log management.
Here's a Dockerfile I've been using for some time now:
# A lightweight Node image
FROM mhart/alpine-node:6.5.0
# PM2 will be used as the PID 1 process
RUN npm install -g pm2@1.1.3
# Copy package json files for services
COPY app1/package.json /var/www/app1/package.json
COPY app2/package.json /var/www/app2/package.json
# Set up working dir
WORKDIR /var/www
# Install packages
RUN npm config set loglevel warn \
# To mitigate issues with npm saturating the network interface we limit the number of concurrent connections
&& npm config set maxsockets 5 \
&& npm config set only production \
&& npm config set progress false \
&& cd ./app1 \
&& npm i \
&& cd ../app2 \
&& npm i
# Copy source files
COPY . ./
# Expose ports
EXPOSE 3000
EXPOSE 3001
# Start PM2 as PID 1 process
ENTRYPOINT ["pm2", "--no-daemon", "start"]
# Actual script to start can be overridden from `docker run`
CMD ["process.json"]
The process.json file passed to the CMD is described in the pm2 documentation.
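For illustration, a minimal process.json that starts both apps could look like this (the app names and script paths follow the Dockerfile above and the file names from the question):
{
  "apps": [
    { "name": "app1", "script": "/var/www/app1/app.js" },
    { "name": "app2", "script": "/var/www/app2/otherapp.js" }
  ]
}
Because the file name comes from CMD, you can also point pm2 at a different file at runtime, e.g. docker run <image> other-process.json.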
Related
I'm trying to get my project up on a Virtual Private Server. I've installed Docker and Portainer and I can start the project, but it isn't reachable on any port. I set it to run on port 3000, but when I enter IP_Of_My_VPS:3000 in the browser nothing happens. I'm new to Docker, and every configuration I did was based on my searches.
This screenshot shows that the image is not bound to any port.
This other screenshot shows that my application is running (but I don't know how to access it).
My docker config:
FROM node:12-alpine
RUN apk --no-cache add curl
RUN apk --no-cache add git
RUN git --version
WORKDIR /app
COPY package*.json ./
RUN npm set progress=false && npm config set depth 0 && npm cache clean --force
RUN npm ci
COPY . .
RUN npm run build && rm -rf src
HEALTHCHECK --interval=30s --timeout=3s --start-period=30s \
CMD curl -f http://localhost:3000/health || exit 1
EXPOSE 3000
CMD ["node", "./dist/main.js"]
When you start the container, you have to publish the port.
For example:
docker run -p <your_forwarding_port>:3000 ...
or in docker-compose.yaml:
ports:
  - "<your_forwarding_port>:3000"
For reference, see the Docker documentation on container ports and the Docker Compose documentation on ports.
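Putting that together, a minimal docker-compose.yaml for the image above might look like this (the service name web and the build context are placeholders):
# docker-compose.yaml
version: "3"
services:
  web:
    build: .
    ports:
      - "3000:3000"
With that ports: entry (or -p 3000:3000 on docker run), IP_Of_My_VPS:3000 should reach the app.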
I am trying to run an Angular application in development mode inside a Docker container. When I run docker-compose build it completes correctly, but when I try to bring the container up I get the error below:
ERROR: for sypgod Cannot start service sypgod: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"npm\": executable file not found in $PATH
The real problem is that it doesn't recognize the npm serve command, but why?
The setup is as follows:
Docker container (Nginx Reverse proxy -> Angular running in port 4000)
I know there are better ways of deploying this, but at the moment I need this setup for personal reasons.
Dockerfile:
FROM node:10.9
COPY package.json package-lock.json ./
RUN npm ci && mkdir /angular && mv ./node_modules ./angular
WORKDIR /angular
RUN npm install -g @angular/cli
COPY . .
FROM nginx:alpine
COPY toborFront.conf /etc/nginx/conf.d/
EXPOSE 8080
CMD ["nginx", "-g", "daemon off;"]
CMD ["npm", "serve", "--port 4000"]
Nginx server config:
server {
    listen 80;
    server_name sypgod;

    location / {
        proxy_read_timeout 5m;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://localhost:4000/;
    }
}
Docker Compose file (the important part, where I have the problem):
sypgod:                    # The name of the service
    container_name: sypgod # Container name
    build:
        context: ../angular
        dockerfile: Dockerfile # Location of our Dockerfile
The image that's finally getting run is this:
FROM nginx:alpine
COPY toborFront.conf /etc/nginx/conf.d/
EXPOSE 8080
CMD ["npm", "serve", "--port 4000"]
The first stage doesn't have any effect (you could COPY --from=... files out of it), and if there are multiple CMDs, only the last one has an effect. Since you're running this in a plain nginx image, there's no npm command, leading to the error you see.
I'd recommend using Node on the host for a live development environment. When you've built and tested your application and are looking to deploy it, then use Docker if that's appropriate. In your Dockerfile, run ng build in the first stage to compile the application to static files, add a COPY --from=... in the second stage to get the built application into the Nginx image, and delete all the CMD lines (nginx has an appropriate default CMD). @VikramJakhar's answer has a more complete Dockerfile showing this.
It looks like you might be trying to run both Nginx and the Angular development server in Docker. If that's your goal, you need to run these in two separate containers (see the sketch after this list). To do this:
Split this Dockerfile into two. Put the CMD ["npm", "serve"] line at the end of the first (Angular-only) Dockerfile.
Add a second block in the docker-compose.yml file to run the second container. The backend npm serve container doesn't need to publish ports:.
Change the host name of the backend server in the Nginx config from localhost to the Docker Compose name of the other container.
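A rough sketch of that layout, reusing names from your setup (the proxy service name and the volume mount are assumptions):
# docker-compose.yml (sketch)
services:
  sypgod:
    build:
      context: ../angular
      dockerfile: Dockerfile   # single-stage, ends with the npm serve CMD
  proxy:
    image: nginx:alpine
    volumes:
      - ./toborFront.conf:/etc/nginx/conf.d/toborFront.conf
    ports:
      - "80:80"
and in toborFront.conf the upstream changes from localhost to the service name:
proxy_pass http://sypgod:4000/;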
It would appear that npm can't be accessed from the container.
Try specifying the image and working directory it should execute from:
docker run -v "$PWD":/usr/src/app -w /usr/src/app node:10.9 npm serve --port 4000
source: https://gist.github.com/ArtemGordinsky/b79ea473e8bc6f67943b
Also make sure that npm is installed on the computer running the docker container.
You can do something like below
### STAGE 1: Build ###
# We label our stage as ‘builder’
FROM node:alpine as builder
RUN apk --no-cache --virtual build-dependencies add \
git \
python \
make \
g++
RUN mkdir -p /ng-app/dist
WORKDIR /ng-app
COPY package.json package-lock.json ./
## Storing node modules on a separate layer will prevent unnecessary npm installs at each build
RUN npm install
COPY . .
## Build the angular app in production mode and store the artifacts in dist folder
RUN npm run ng build -- --prod --output-path=dist
### STAGE 2: Setup ###
FROM nginx:1.14.1-alpine
## Copy our default nginx config
COPY toborFront.conf /etc/nginx/conf.d/
## Remove default nginx website
RUN rm -rf /usr/share/nginx/html/*
## From ‘builder’ stage copy over the artifacts in dist folder to default nginx public folder
COPY --from=builder /ng-app/dist /usr/share/nginx/html
CMD ["nginx", "-g", "daemon off;"]
If you have Portainer.io installed for managing your Docker setup, you can open a console for a particular container from the browser.
This is useful if you want to run a reference command like "npm list" to show what versions of dependencies have been loaded.
I found this useful for diagnosing an issue where an update to a dependency had broken something: it worked fine in a test environment, but the Docker build had installed newer minor versions which broke the application.
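If you're not using Portainer, the equivalent from a terminal is docker exec, with a container name taken from docker ps, for example:
docker exec -it <container_name> npm list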
I have Docker installed on an Ubuntu 16.04 VM and I'm working on a personal project using Node.js; the Docker image is built from my Dockerfile.
The container runs, but when I try to access it with the VM's public IP it isn't reachable.
I tried to curl it and, after a very long time, I get curl: (52) Empty reply from server.
The port is mapped correctly and there are no firewall issues either.
Here is my Dockerfile:
FROM node:10.13-alpine
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*", "./"]
RUN apk update && apk upgrade \
    && apk add --no-cache git \
    && apk --no-cache add --virtual builds-deps build-base python \
    && npm install -g nodemon cross-env eslint npm-run-all node-gyp node-pre-gyp \
    && npm install \
    && npm rebuild bcrypt --build-from-source
RUN npm install --production --silent && mv node_modules ../
COPY . .
RUN pwd
EXPOSE 3001
CMD npm start
docker ps
CONTAINER ID   IMAGE    COMMAND                  CREATED      STATUS      PORTS                    NAMES
8588419b40c4   xxx:v1   "/bin/sh -c 'npm sta…"   2 days ago   Up 2 days   0.0.0.0:3000->3001/tcp   youthful_roentgen
Let xxx:v1 be the image name built by the Dockerfile you provided.
If you want to access your app via your host (curl localhost:3001), then you should run:
docker run -p 3001:3000 xxx:v1
This command binds port 3000 in your container to port 3001 on your host (IIRC, 3000 is the default port used by npm start).
You should then be able to access localhost:3001 from your host with curl.
Note that the EXPOSE directive in the Dockerfile does not automatically publish a port when you run docker run. It's just an indication that your container listens on the port you EXPOSEd. Here, your EXPOSE directive is wrong; you should have written:
EXPOSE 3000
because port 3000 is the one your app listens on inside the container. Which port you bind on the host, if any, is specified at runtime only.
If you don't want to access your app via localhost, but only via the container's IP, there is no need to bind the port (no -p). You only need to do curl <container_ip>:3000 from your host.
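To verify what is actually published at runtime, docker port prints the live mapping (container name taken from the docker ps output above):
docker port youthful_roentgen
# prints 3000/tcp -> 0.0.0.0:3001 after docker run -p 3001:3000 xxx:v1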
Here's the Dockerfile:
FROM nginx:stable-alpine
COPY ./mailservice /var/www/backend
COPY ./dist /usr/share/nginx/html
COPY ./docker/nginx_config/default.conf /etc/nginx/conf.d/default.conf
COPY ./docker/nginx_config/.htpasswd /etc/nginx
RUN chown -R nginx:nginx /usr/share/nginx/html/ \
&& chown -R nginx:nginx /etc/nginx/.htpasswd \
&& apk add --update nodejs nodejs-npm
WORKDIR /var/www/backend
RUN npm run start
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
But my RUN npm run start doesn't work; I have to manually attach a shell to the container and run it myself. What's the correct way to launch npm run start after the container has started?
UPDATE
CMD ["nginx", "-g", "daemon off;"]
ENTRYPOINT ["node", "server.js"]
Would this work?
Best practice says you shouldn't run more than one process per container, unless your application is made in a way that starts multiple processes from a single entrypoint.
But there are workarounds you can use. Check this question: Docker multiple entrypoints
Solved this way:
Dockerfile
FROM nginx:stable-alpine
COPY ./mailservice /var/www/backend
COPY ./dist /usr/share/nginx/html
COPY ./docker/nginx_config/default.conf /etc/nginx/conf.d/default.conf
COPY ./docker/nginx_config/.htpasswd /etc/nginx
RUN chown -R nginx:nginx /usr/share/nginx/html/ \
&& chown -R nginx:nginx /etc/nginx/.htpasswd \
&& apk add --update nodejs nodejs-npm
ADD ./docker/docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod 755 /docker-entrypoint.sh
EXPOSE 80
WORKDIR /
CMD ["/docker-entrypoint.sh"]
docker-entrypoint.sh
#!/usr/bin/env sh
# start the Node backend in the background, logging to a file
node /var/www/backend/server.js > /var/log/node-server.log 2>&1 &
# exec nginx so it becomes PID 1 and stays in the foreground
exec /usr/sbin/nginx -g "daemon off;"
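One caveat with this script: nginx becomes PID 1 while the Node process runs unsupervised in the background, so if server.js crashes the container keeps running without the backend. A process manager, as in the supervisord answer below, avoids that.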
You're confusing build time (basically RUN instructions) with runtime (ENTRYPOINT or CMD), and beyond that you're breaking the rule of one container, one process, even if it's not a sacred one.
My suggestion is to use supervisord with this configuration:
[unix_http_server]
file=/tmp/supervisor.sock ; path to your socket file
[supervisord]
logfile=/var/log/supervisord/supervisord.log ; supervisord log file
loglevel=error ; info, debug, warn, trace
pidfile=/var/run/supervisord.pid ; pidfile location
nodaemon=true ; run supervisord in the foreground (required when it is PID 1 in Docker)
minfds=1024 ; number of startup file descriptors
minprocs=200 ; number of process descriptors
user=root ; default user
childlogdir=/var/log/supervisord/ ; where child log files will live
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL for a unix socket
[program:npm]
command=npm run --prefix /path/to/app start
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0 ; required when logging to a non-seekable device
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0

[program:nginx]
command=nginx -g "daemon off;"
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
With this configuration you will have logs redirected to standard output, which is better practice than writing files inside the container, since container filesystems are ephemeral; you will also have a PID 1 process responsible for handling the child processes and restarting them according to specific rules.
You could try to achieve the same with a shell script, but it could be tricky.
Another good solution would be separate containers sharing a network namespace, so that NGINX forwards requests to the npm upstream... but without Kubernetes that could be hard to maintain, even though it's not impossible with plain Docker :)
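As a sketch of how this could be wired into the image from the question (the config path is an assumption; the package names follow the Alpine convention already used above):
FROM nginx:stable-alpine
# nodejs/npm for the backend, supervisor as the PID 1 process manager
RUN apk add --update nodejs nodejs-npm supervisor
COPY ./supervisord.conf /etc/supervisord.conf
EXPOSE 80
# run supervisord in the foreground so the container stays up
CMD ["supervisord", "--nodaemon", "-c", "/etc/supervisord.conf"]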
Your current approach is fundamentally wrong by design; it's a clear anti-pattern for containers. Instead (see the sketch after this list):
Create a Dockerfile for your app.
Create a separate Dockerfile for nginx.
Use docker-compose to build the stack, or compose it your own way.
Always run the app and the proxy in separate containers.
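A minimal sketch of such a stack (the directory names and the app port are assumptions):
# docker-compose.yml
version: "3"
services:
  app:
    build: ./app     # Dockerfile for the Node.js app, e.g. CMD ["node", "server.js"]
  proxy:
    build: ./nginx   # Dockerfile based on nginx, with proxy_pass http://app:3000/
    ports:
      - "80:80"
    depends_on:
      - app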
Good morning. I am trying to run the Dockerfile below to start my mock API and my UI.
When I run them in individual terminals I can see the UI up and running, but when I run them inside a Docker container the API doesn't start for some reason.
Can you help me with this?
# My Docker file.
FROM node:11
# Set working directory for API
RUN mkdir /usr/src/api
WORKDIR /usr/src/api
COPY ./YYY/. /usr/src/api/.
RUN npm install
RUN npm start &
# set working directory for UI
RUN mkdir /usr/src/app/
WORKDIR /usr/src/app/
COPY ./ZZZ/. /usr/src/app/.
ENV PATH /usr/src/app/node_modules/.bin:$PATH
EXPOSE 3000
RUN npm install
RUN npm start
Thanks,
Ranjith
The command npm start starts a web server that only listens on the loopback interface of the container. To fix this, add --host 0.0.0.0 to the start script in package.json. This will allow you to access the app in your browser using the container IP.
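For example, if the UI uses webpack-dev-server (an assumption; the exact flag depends on your dev server), the start script in package.json would become:
"scripts": {
  "start": "webpack-dev-server --host 0.0.0.0 --port 3000"
}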