Dockerfile - Build a Docker image for multiple Node.js apps

How do I run all of my Node.js files in a single container?
app1.js running on port 1001
app2.js running on port 1002
app3.js running on port 1003
app4.js running on port 1004
Dockerfile
FROM node:latest
WORKDIR /rootfolder
COPY package.json ./
RUN npm install
COPY . .
RUN chmod +x /script.sh
RUN /script.sh
script.sh
#!/bin/sh
node ./app1.js
node ./app2.js
node ./app3.js
node ./app4.js

You would almost always run these in separate containers. You're allowed to run multiple containers from the same image, you can override the default command for an image when you start it up, and you can remap the ports an application uses when you start it.
In your Dockerfile, delete the RUN /script.sh line at the end. (That will try to start the servers during the image build, which you don't want.) Now you can build and run containers:
docker build -t myapp .       # build the image
docker network create mynet   # create a Docker network

# Run the first container in the background, on that network,
# with a known name, publishing a port, from this image,
# running this command:
docker run \
  -d \
  --net mynet \
  --name app1 \
  -p 1001:3000 \
  myapp \
  node ./app1.js

# And the second container the same way:
docker run \
  -d \
  --net mynet \
  --name app2 \
  -p 1002:3000 \
  myapp \
  node ./app2.js
(I've assumed all of the scripts listen on the default Express port 3000, which is the second port number in the -p options.)
Docker Compose is a useful tool for running multiple containers together and can replicate this functionality. A docker-compose.yml file matching this setup would look like:
version: '3.8'
services:
  app1:
    build: .
    ports:
      - 1001:3000
    command: node ./app1.js
  app2:
    build: .
    ports:
      - 1002:3000
    command: node ./app2.js
Compose will create a Docker network on its own, and take responsibility for naming the images and containers. docker-compose up will start all of the services in parallel.

You need to expose the ports first using:
EXPOSE 1001
...
EXPOSE 1004
in your Dockerfile, and later run the container with the -p parameter, e.g. -p 1501:1001,
to map, for example, port 1501 of the host to port 1001 of the container.
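For example, a sketch that publishes all four ports at once (the host port numbers and the myapp image tag are placeholders, not something from the question):
docker run -d \
  -p 1501:1001 \
  -p 1502:1002 \
  -p 1503:1003 \
  -p 1504:1004 \
  myapp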
ref: https://docs.docker.com/engine/reference/commandline/run/
However, it is recommended to minimize the number of programs run from a single Docker container, so you might prefer to have one container for each of your js scripts.
Still, nothing stops you from using:
docker exec -it yourDockerMachineName bash
several times, running each of your node commands in a separate shell.

What you are trying to achieve is considered to be an anti-pattern.
Conversely, keeping the single-responsibility principle in mind when building your app's stack will give you better leverage to manage, monitor, and change your apps.
This article from the official documentation explains when you might want to do this.
If you want to manage multiple containers as a whole, one Dockerfile per js app, combined with a docker-compose file that brings up all the containers at once on different ports, might answer your question. Here is a minimal example:
docker-compose.yml
version: '3.7'
services:
  app1:
    image: your-js-app-1-image
    container_name: app-1
    ports:
      - '1001:3000'
  app2:
    image: your-js-app-2-image
    container_name: app-2
    ports:
      - '1002:3000'

Ideally you should run each app in a separate container if your applications are different. If they are identical and you want to run multiple instances on different ports,
docker run -p <your_public_tcp_port_number>:3000 <image_name>
or a good docker-compose.yaml would suffice.
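For example, a sketch running two instances of the same image on different host ports (the my-node-app image name and the internal port 3000 are assumptions):
docker run -d -p 1001:3000 my-node-app
docker run -d -p 1002:3000 my-node-app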
In general you may want to run each different application in its own container, and run multiple instances of the same application, so that each app is easy to version as its own independent image. That allows you to independently stop, deploy, and start your apps in the production environment.

Related

Docker: Running two services with one Dockerfile

I am working on a Telegram group for premium members, in which I have two services: one monitors everyone who joins, and the other checks whether any member's premium plan has expired so that it can kick that user out of the channel. I am very, very new to Docker and deployment. I am confused about how to run two processes simultaneously with one Dockerfile. I have tried this.
Here is the file structure:
start.sh
#!/bin/bash
cd TelegramChannelMonitor
pm2 start services/kickNonPremium.js --name KICK_NONPREMIUM
pm2 start services/monitorTelegramJoinee.js --name MONITOR_JOINEE
Dockerfile
FROM node:12-alpine
WORKDIR ./TelegramChannelMonitor
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
ENTRYPOINT ["/start.sh"]
What should I do to achieve this?
A Docker container only runs one process. On the other hand, you can run arbitrarily many containers off of a single image, each with a different command. So the approach I'd take here is to build a single image; as you've shown it, except without the ENTRYPOINT line.
FROM node:12-alpine
# Note that the Dockerfile already puts us in the right directory
WORKDIR /TelegramChannelMonitor
...
# Important: no ENTRYPOINT
# (There can be a default CMD if you think one path is more likely)
Then when you want to run this application, run two containers, and in each, make the main container command run a different script.
docker build -t telegram-channel-monitor .
docker run -d -p 8080:8080 --name kick-non-premium \
telegram-channel-monitor \
node services/kickNonPremium.js
docker run -d -p 8081:8080 --name monitor-joined \
telegram-channel-monitor \
node services/monitorTelegramJoinee.js
You can have a similar setup using Docker Compose. Set all of the containers to build: ., but set a different command: for each.
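For example, a minimal sketch of that Compose file (the service names are illustrative; the commands come from the question):
version: '3.8'
services:
  kick-non-premium:
    build: .
    command: node services/kickNonPremium.js
  monitor-joinee:
    build: .
    command: node services/monitorTelegramJoinee.js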
(The reason to avoid ENTRYPOINT here is because the syntax to override the command gets very clumsy: you need --entrypoint node before the image name, but then the rest of the arguments after it. I've also used plain node instead of pm2 since a Docker container provides most of the functionality of a process supervisor; see also what is the point of using pm2 and docker together?.)
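For comparison, a sketch of what the override would look like if the image kept its ENTRYPOINT (image and container names as above):
docker run -d --name kick-non-premium \
  --entrypoint node \
  telegram-channel-monitor \
  services/kickNonPremium.js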
Try a pm2 ecosystem file for app (i.e. service) declarations, and run pm2 in non-background mode or use pm2-runtime:
https://pm2.keymetrics.io/docs/usage/application-declaration/
https://pm2.keymetrics.io/docs/usage/docker-pm2-nodejs/
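A minimal sketch of such an ecosystem file, reusing the script names from the question (the exact layout is an assumption):
// ecosystem.config.js
module.exports = {
  apps: [
    // Each entry becomes one pm2-managed process inside the container
    { name: 'KICK_NONPREMIUM', script: 'services/kickNonPremium.js' },
    { name: 'MONITOR_JOINEE', script: 'services/monitorTelegramJoinee.js' },
  ],
};
The Dockerfile would then end with something like CMD ["pm2-runtime", "ecosystem.config.js"], so pm2 stays in the foreground as the container's main process.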

How to make containers for multiple servers in one code base to deploy a Golang app with Docker?

I have a repo with multiple servers running. The structure is like this:
// Golang Apps
- account = port 4001
- event = port 4002
- place = port 4003
// Node js
- gateway = port 4000
I usually run it locally using a script like this:
// script.sh here:
#!/bin/bash
EnvAPP="${ENV_APP:-dev}"
function cleanup {
kill "$ACCOUNTS_PID"
kill "$EVENTS_PID"
kill "$PLACES_PID"
}
trap cleanup EXIT
go build -tags $EnvAPP -o ./tmp/srv-accounts ./cmd/server/accounts
go build -tags $EnvAPP -o ./tmp/srv-events ./cmd/server/events
go build -tags $EnvAPP -o ./tmp/srv-places ./cmd/server/places
./tmp/srv-accounts &
ACCOUNTS_PID=$!
./tmp/srv-events &
EVENTS_PID=$!
./tmp/srv-places &
PLACES_PID=$!
sleep 1
node ./cmd/gateway/index.js
Is it possible to create one Dockerfile for this case for production? Should I run script.sh in the Dockerfile? And which base image should I use in the Dockerfile? I have no idea how to approach this with Docker, because there are multiple servers running from one code base, and each server runs on its own port.
Maybe one of you has had this case before? It would be great to know how to solve this problem.
I am using GraphQL Federation (Go) for this, so I have multiple services and a gateway (Node.js).
I want to deploy this to production.
You need four separate Dockerfiles for this, to launch four separate containers with four different programs. The Go component Dockerfiles can be fairly straightforward:
# Dockerfile.accounts
FROM golang:1.16 AS build
WORKDIR /app
COPY . .
ARG ENV_APP=dev
RUN go build -tags "$ENV_APP" -o /accounts ./cmd/server/accounts
FROM ubuntu:20.04
COPY --from=build /accounts /usr/local/bin
CMD accounts
(If the three images are really identical aside from the specific command directory being built, you could pass that in as an ARG as well. I'm assuming the ./cmd/server/* packages require packages elsewhere in your source directory like a ./pkg/support or whatever, which would require the Dockerfiles to be at the top level.)
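For instance, a sketch of one shared, parameterized Dockerfile along those lines (the SERVER build argument is a name I've made up, not something from the question):
# Dockerfile.server (shared by accounts, events, places)
FROM golang:1.16 AS build
WORKDIR /app
COPY . .
ARG ENV_APP=dev
ARG SERVER=accounts
# Build whichever server the SERVER build argument names
RUN go build -tags "$ENV_APP" -o /server ./cmd/server/"$SERVER"

FROM ubuntu:20.04
COPY --from=build /server /usr/local/bin/server
CMD ["server"]
Each image would then be built with, e.g., docker build --build-arg SERVER=events -f Dockerfile.server ., or with a per-service args: entry under build: in Compose.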
Since your script is just running the four programs, I'd generally recommend using Docker Compose as a way to launch the four containers together. "Launch some containers with known options" is the only thing Compose does, but it would do everything your script does.
# docker-compose.yml
version: '3.8'
services:
  accounts:
    build:
      context: .
      dockerfile: Dockerfile.accounts
  events:
    build:
      context: .
      dockerfile: Dockerfile.events
  places:
    build:
      context: .
      dockerfile: Dockerfile.places
  gateway:
    build:
      context: .
      dockerfile: Dockerfile.gateway
      # (Since a Node app can't reuse Go code, this could also
      # reasonably be `build: cmd/gateway` using a
      # `cmd/gateway/Dockerfile`)
    ports:
      - 3000:3000
Just running docker-compose up will start all four containers in the foreground; once it's up, pressing Ctrl+C will stop them all. You can configure the gateway to use the other container names accounts, events, places as host names; http://accounts/graphql for example.
You could also adapt your launcher script as-is. Run docker build instead of go build to build images, docker run to start a container (probably with fixed --names), and docker stop && docker rm to stop them. You should docker network create a network and docker run --net all of the containers on it so they can communicate in the same way as the Compose setup; a rough sketch follows.
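A sketch of such a script, keeping the shape of the original (the image and container names are placeholders, and it assumes the four Dockerfiles above):
#!/bin/bash
set -e

function cleanup {
  # Remove all four containers when the script exits
  docker rm -f accounts events places gateway 2>/dev/null || true
}
trap cleanup EXIT

# One shared network so the containers can reach each other by name
docker network create mynet 2>/dev/null || true

docker build -t srv-accounts -f Dockerfile.accounts .
docker build -t srv-events -f Dockerfile.events .
docker build -t srv-places -f Dockerfile.places .
docker build -t srv-gateway -f Dockerfile.gateway .

docker run -d --net mynet --name accounts srv-accounts
docker run -d --net mynet --name events srv-events
docker run -d --net mynet --name places srv-places

# Run the gateway in the foreground, like the original script;
# stopping it triggers the cleanup trap above.
docker run --net mynet --name gateway -p 3000:3000 srv-gateway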

How to stabilize the port used by docker-compose?

I have a Node.js application that I want to run with docker-compose. Inside the container it listens on port 4321, set by an environment variable.
This port is also exposed by my Dockerfile and I specify it like so in my docker-compose.yml:
version: '3.4'
services:
  previewcrawler:
    image: previewcrawler
    build:
      context: .
      dockerfile: ./Dockerfile
    environment:
      NODE_ENV: development
    ports:
      - 4321:4321
      - 9229:9229
    command: ['node', '--inspect=0.0.0.0:9229', 'dist/index.js']
I run the app with a VSCode task, which executes this:
docker run -dt -P --name "previewcrawler-dev" -e "DEBUG=*" -e "NODE_ENV=development" --label "com.microsoft.created-by=visual-studio-code" -p "9229:9229" "previewcrawler:latest" node --inspect-brk=0.0.0.0:9229 .
When I open the application in my browser, it is on some random port like 49171, which also changes every time I start my container.
How can I make this port stable, so that it is 4321 every time, as I specified in my docker-compose.yml?
docker run -P (with a capital P) tells Docker to pick a host port for anything the Dockerfile EXPOSEs. You have no control over which host port or interfaces the port uses.
docker run -p 4321:4321 (with a lowercase p) lets you explicitly pick which ports get published, and on which host port. It is exactly equivalent to the Compose ports: option.
This is further detailed in the Docker run reference.
(That link is more specifically to a section entitled "expose incoming ports". However, "expose" as a verb means almost nothing in modern Docker. Functionally, it does only two things: if you use docker run -P then all exposed ports get published; and if you don't have a -p or -P option at all, the port will be listed in the docker ps output anyways. Exposed ports aren't automatically published, and there's not really any reason to use the docker run --expose or Compose expose: options.)
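For example, with the image from the question (a sketch; the port numbers come from the compose file above):
# -P: every EXPOSEd port is published on a random high host port
docker run -d -P previewcrawler:latest
# -p: you pick the host ports, same as the Compose ports: entries
docker run -d -p 4321:4321 -p 9229:9229 previewcrawler:latest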
Apparently I started my app with the wrong command. I now use
docker-compose -f "docker-compose.debug.yml" up -d --build
which works great. The port is also correct then.

Run node Docker without port mapping

I am very new to Docker, so please pardon me if this is a very silly question. Googling hasn't really produced anything I am looking for. I have a very simple Dockerfile which looks like the following:
FROM node:9.6.1
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
ENV PATH /usr/src/app/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /usr/src/app/package.json
RUN npm install --silent
COPY . /usr/src/app
CMD ["npm", "start"]
EXPOSE 8000
In the container the app is running on port 8000. Is it possible to access port 8000 without the -p 8000:8000? I just want to be able to do
docker run imageName
and access the app on my browser on localhost:8000
By default, when you create a container, it does not publish any of its ports to the outside world. To make a port available to services outside of Docker, or to Docker containers which are not connected to the container's network, use the --publish or -p flag. This creates a firewall rule which maps a container port to a port on the Docker host.
Read more: Container networking - Published ports
But you can use docker-compose to set up this configuration and run your Docker images easily.
First, install docker-compose: Install Docker Compose.
Second, create a docker-compose.yml beside the Dockerfile and copy this code into it:
version: '3'
services:
  web:
    build: .
    ports:
      - "8000:8000"
Now you can start your container with this command:
docker-compose up
If you want to run your services in the background, you can pass the -d flag (for "detached" mode) to docker-compose up and use docker-compose ps to see what is currently running.
Docker Compose Tutorial
Old question but someone might find it useful:
First get the IP of the docker container by running
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
Then connect to it from the browser or with curl, using that IP and the exposed port.
Note that you will not be able to access the container on 0.0.0.0/localhost because the port is not mapped.
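For example, a small sketch of that (my-container is a placeholder name; 8000 is the port from the question):
CONTAINER_IP=$(docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' my-container)
curl "http://$CONTAINER_IP:8000"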

What is the proper way to write Dockerfiles for the following app structure?

I have 3 Node.js microservices. One of them runs on a separate subdomain and the other 2 are routed based on path. My Dockerfile is as below:
FROM node:latest
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
COPY . /usr/src/app
EXPOSE 9000
CMD [ "npm", "start" ]
The port is different for each image. In front of this I have nginx running on a bare-metal server with all the reverse-proxy configuration. I know that this is not the best way to go about it. How can I have 3 separate instances run and listen on the same port?
Also, for database linking I am using the --link flag, but that is shown as deprecated in the docs. What is the right way to go about that?
Instead of nginx, use Traefik: it will adapt its reverse-proxy rules depending on the containers it discovers, for instance through Consul.
See "Traefik Swarm cluster" in order to set up a cluster.
You can then declare your database so that it always runs on the same node, using service constraints.
See for instance "Running a MongoDB Replica Set on Docker 1.12 Swarm Mode: Step by Step":
The basic plan is to define each member of the replica set as a separate service, and use constraints to prevent swarm orchestration moving them away from their data volumes
For instance:
docker#manager1:~$ docker node update --label-add mongo.replica=1 $(docker node ls -q -f name=manager1)
docker service create --replicas 1 --network mongo \
--mount type=volume,source=mongodata1,target=/data/db \
--mount type=volume,source=mongoconfig1,target=/data/configdb \
--constraint 'node.labels.mongo.replica == 1' \
--name mongo1 mongo:3.2 mongod --replSet example
