I am working on a Telegram group for premium members, for which I have two services: one monitors everyone who joins, and the other checks whether any member's premium plan has expired and, if so, kicks that user out of the channel. I am very new to Docker and deployment, so I am confused about how to run two processes simultaneously with one Dockerfile. I have tried it like this.
here is the file structure:
start.sh
#!/bin/bash
cd TelegramChannelMonitor
pm2 start services/kickNonPremium.js --name KICK_NONPREMIUM
pm2 start services/monitorTelegramJoinee.js --name MONITOR_JOINEE
Dockerfile
FROM node:12-alpine
WORKDIR ./TelegramChannelMonitor
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
ENTRYPOINT ["/start.sh"]
What should I do to achieve this?
A Docker container only runs one process. On the other hand, you can run arbitrarily many containers off of a single image, each with a different command. So the approach I'd take here is to build a single image, essentially as you've shown it, but without the ENTRYPOINT line.
FROM node:12-alpine
# Note that the Dockerfile already puts us in the right directory
WORKDIR /TelegramChannelMonitor
...
# Important: no ENTRYPOINT
# (There can be a default CMD if you think one path is more likely)
Then when you want to run this application, run two containers, and in each, make the main container command run a different script.
docker build -t telegram-channel-monitor .
docker run -d -p 8080:8080 --name kick-non-premium \
telegram-channel-monitor \
node services/kickNonPremium.js
docker run -d -p 8081:8080 --name monitor-joined \
telegram-channel-monitor \
node services/monitorTelegramJoinee.js
You can have a similar setup using Docker Compose. Set all of the containers to build: ., but set a different command: for each.
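As a minimal sketch (the service names here are just illustrative), a docker-compose.yml along those lines might look like:
version: '3.8'
services:
  kick-non-premium:
    build: .
    command: node services/kickNonPremium.js
  monitor-joinee:
    build: .
    command: node services/monitorTelegramJoinee.js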
(The reason to avoid ENTRYPOINT here is that the syntax to override the command gets very clumsy: you need --entrypoint node before the image name, but then the rest of the arguments after it. I've also used plain node instead of pm2, since a Docker container provides most of the functionality of a process supervisor; see also "what is the point of using pm2 and docker together?".)
Try a pm2 ecosystem file for declaring your apps (i.e. services), and run pm2 in non-background mode, or use pm2-runtime:
https://pm2.keymetrics.io/docs/usage/application-declaration/
https://pm2.keymetrics.io/docs/usage/docker-pm2-nodejs/
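As a rough, untested sketch (script paths assume the layout from the question), an ecosystem file declaring both services could look like:
// ecosystem.config.js
module.exports = {
  apps: [
    { name: "KICK_NONPREMIUM", script: "./services/kickNonPremium.js" },
    { name: "MONITOR_JOINEE",  script: "./services/monitorTelegramJoinee.js" }
  ]
};
In the Dockerfile you would then install pm2 (RUN npm install -g pm2) and make it the foreground process, e.g. CMD ["pm2-runtime", "ecosystem.config.js"], instead of calling pm2 start from a shell script.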
Related
So I have this Dockerfile and I want to run feed-consumers and consumers multiple times, and I tried to do so. We have a Node.js application for feed-consumers and consumer, and we pass user_levels to it.
I just want to ask: is this the right approach?
FROM ubuntu:18.04
# Set Apt to noninteractive mode
ENV DEBIAN_FRONTEND noninteractive
# Install Helper Commands
ADD scripts/bin/* /usr/local/bin/
RUN chmod +x /usr/local/bin/*
RUN apt-install-and-clean curl \
build-essential \
git >> /dev/null 2>&1
RUN install-node-12.16.1
RUN mkdir -p /usr/src/app
COPY . /usr/src/app
WORKDIR /usr/src/app
#RUN yarn init-cache
#RUN yarn init-temp
#RUN yarn init-user
RUN yarn install
RUN yarn build
RUN node ./feedsconsumer/consumer.js user_level=0
RUN for i in {1..10}; do node ./feedsconsumer/consumer.js user_level=1; done
RUN for i in {1..20}; do node ./feedsconsumer/consumer.js user_level=2; done
RUN for i in {1..20}; do node ./feedsconsumer/consumer.js user_level=3; done
RUN for i in {1..30}; do node ./feedsconsumer/consumer.js user_level=4; done
RUN for i in {1..40}; do node ./feedsconsumer/consumer.js user_level=5; done
RUN for i in {1..10}; do node ./consumer/consumer.js; done
ENTRYPOINT ["tail", "-f", "/dev/null"]
Or is there any other way around?
Thanks
A container runs exactly one process. Your container's is
ENTRYPOINT ["tail", "-f", "/dev/null"]
This translates to "do absolutely nothing, in a way that's hard to override". I typically recommend using CMD over ENTRYPOINT, and the main container command shouldn't ever be an artificial "do nothing but keep the container running" command.
Before that, you're trying to RUN the process(es) that are the main container process. The RUN only happens during the image build phase, the running process(es) aren't persisted in the image, the build will block until these processes complete, and they can't connect to other containers or data stores. These are the lines you want to be the CMD.
A container only runs one process, but you can run multiple containers off the same image. It's somewhat easier to add parameters by setting environment variables than by adjusting the command line (you have to replace the whole thing), so in your code look for process.env.USER_LEVEL. Also make sure the process stays in the foreground and doesn't use a package to daemonize itself.
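For instance, a tiny sketch of that parameterization inside the consumer might look like this (the default of 0 is just an assumption, not something from the question):
// near the top of feedsconsumer/consumer.js
const userLevel = Number(process.env.USER_LEVEL || 0);
console.log(`consumer starting with user_level=${userLevel}`);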
Then the final part of the Dockerfile just needs to set a default CMD that launches one copy of your application:
...
COPY package.json yarn.lock ./
RUN yarn install
COPY . .
RUN yarn build
CMD node ./feedsconsumer/consumer.js
Now you can start a single container running this process
docker build -t my/consumer .
docker run -d --name consumer my/consumer
And you can start multiple containers to run the whole set of them
for user_level in `seq 5`; do
for i in `seq 10`; do
docker run -d \
--name "feed-consumer-$user_level-$i" \
-e "USER_LEVEL=$user_level" \
my/consumer
done
done
for i in `seq 10`; do
docker run -d --name "consumer-$i" \
my/consumer \
node ./consumer/consumer.js
done
Notice this last invocation overrides the CMD to run the alternate script; this becomes a more contorted invocation if it needs to override ENTRYPOINT instead. (docker run --entrypoint node my/consumer ./consumer/consumer.js)
If you're looking forward to cluster environments like Kubernetes, it's often straightforward to run multiple identical copies of a container, which is what you're trying to do here. A Kubernetes Deployment object has a replicas: count, and you can kubectl scale deployment feed-consumer-5 --replicas=40 to change what's in the question, or potentially configure a HorizontalPodAutoscaler to set it dynamically based on the topic length (this last is involved, but possible and rewarding).
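A minimal sketch of such a Deployment, reusing the image and names from the examples above (replica count and labels are just illustrative), might be:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: feed-consumer-5
spec:
  replicas: 40
  selector:
    matchLabels:
      app: feed-consumer-5
  template:
    metadata:
      labels:
        app: feed-consumer-5
    spec:
      containers:
        - name: consumer
          image: my/consumer
          env:
            - name: USER_LEVEL
              value: "5"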
How do I run all my Node.js files in a single container?
app1.js running on port 1001
app2.js running on port 1002
app3.js running on port 1003
app4.js running on port 1004
Dockerfile
FROM node:latest
WORKDIR /rootfolder
COPY package.json ./
RUN npm install
COPY . .
RUN chmod +x /script.sh
RUN /script.sh
script.sh
#!/bin/sh
node ./app1.js
node ./app2.js
node ./app3.js
node ./app4.js
You would almost always run these in separate containers. You're allowed to run multiple containers from the same image, you can override the default command for an image when you start it up, and you can remap the ports an application uses when you start it.
In your Dockerfile, delete the RUN /script.sh line at the end. (That will try to start the servers during the image build, which you don't want.) Now you can build and run containers:
docker build -t myapp . # build the image
docker network create mynet # create a Docker network
docker run \ # run the first container...
-d \ # in the background
--net mynet \ # on that network
--name app1 \ # with a known name
-p 1001:3000 \ # publishing a port
myapp \ # from this image
node ./app1.js # running this command
docker run \
-d \
--net mynet \
--name app2 \
-p 1002:3000 \
myapp \
node ./app2.js
(I've assumed all of the scripts listen on the default Express port 3000, which is the second port number in the -p options.)
Docker Compose is a useful tool for running multiple containers together and can replicate this functionality. A docker-compose.yml file matching this setup would look like:
version: '3.8'
services:
app1:
build: .
ports:
- 1001:3000
command: node ./app1.js
app2:
build: .
ports:
- 1002:3000
command: node ./app2.js
Compose will create a Docker network on its own, and take responsibility for naming the images and containers. docker-compose up will start all of the services in parallel.
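Assuming that file is saved as docker-compose.yml next to the Dockerfile, the day-to-day commands are then along these lines:
docker-compose up --build -d   # build the image and start all of the services in the background
docker-compose logs -f app1    # follow the logs of one service
docker-compose down            # stop and remove the containers and the network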
You need to expose the ports first using:
EXPOSE 1001
...
EXPOSE 1004
in your Dockerfile, and then run the container with the -p parameter, e.g. -p 1501:1001,
to map, for example, port 1501 of the host to port 1001 of the container.
ref: https://docs.docker.com/engine/reference/commandline/run/
However, it is suggested to minimize the number of programs run from a single Docker container, so you might prefer to have a container for each of your js scripts.
That said, nothing stops you from running
docker exec -it yourDockerMachineName bash
several times and starting each of your node commands in a separate shell.
What you are trying to achieve is considered an anti-pattern.
Instead, keeping the single-responsibility principle in mind when building up your app's stack gives you better leverage to manage, monitor, change your apps, etc.
This article from the official documentation explains when you might want to do this.
If you want to manage multiple containers as a whole, having one Dockerfile for each js, combined with a docker-compose file to bring up all the containers at once on different ports might answer your question. Here is a minimal example:
docker-compose.yml
version: '3.7'
services:
app1:
image: your-js-app-1-image
container_name: app-1
ports:
- '1001:3000'
app2:
image: your-js-app-2-image
container_name: app-2
ports:
- '1002:3000'
Ideally you should run each app in a separate container if your applications are different. If they are the same application and you want to run multiple instances on different ports, then
docker run -p <your_public_tcp_port_number>:3000 <image_name>
or a good docker-compose.yaml would suffice.
Technically you may want to run each different application in its own container, and run multiple instances of the same application, to make it easy to version each of your apps as a newer independent image. This allows you to independently stop, deploy and start your apps in the production environment.
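As a rough sketch of that Compose variant (the image name and host ports are placeholders), two instances of the same image on different host ports could be declared as:
version: '3.7'
services:
  app-instance-1:
    image: your-app-image
    ports:
      - '1501:3000'
  app-instance-2:
    image: your-app-image
    ports:
      - '1502:3000'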
When I am not exposing any ports when writing my Dockerfile, nor am I binding any ports when running docker run, I am still able to interact with applications running inside the container. Why?
I am writing my Dockerfile for my Node application. It's pretty simple and looks like this:
FROM node:8
COPY . .
RUN yarn
RUN yarn run build
ARG PORT=80
EXPOSE $PORT
CMD yarn run serve
Using this Dockerfile, I was able to build the image using docker build
$ cd ~/project/dir/
$ docker build . --build-arg PORT=8080
And run it using docker run
$ docker run -p 8080 <image-id>
I then accessed the application, running inside the Docker container, on an IP address like http://172.17.0.12:8080/ and it works.
However, when I removed the EXPOSE instruction from the Dockerfile and removed the -p option from docker run, the application still works! It's like Docker is automatically binding my ports.
Additional Notes:
It appears that another user has experienced the same issue
I have tried rebuilding my image using --no-cache after I removed the EXPOSE instructions, but this problem still exists.
Using docker inspect, I see no entries for Config.ExposedPorts
The EXPOSE instruction in a Dockerfile really doesn't do much by itself, and I think it is mostly there so that people reading the Dockerfile know what ports/services run inside the container. However, EXPOSE is useful when you start the container with the capital -P argument (-P, --publish-all: publish all exposed ports to random ports):
docker run -P my_image
but if you are using the lower-case -p you have to specify the source:destination port... See this thread.
If you don't write EXPOSE in the Dockerfile it doesn't have any influence on the app inside the container; it only matters for the capital -P argument. The reason you can still reach the application without EXPOSE or -p is that you are connecting to the container's own IP address (http://172.17.0.12:8080) on the Docker bridge network, which the host can reach directly on any port; EXPOSE and -p/-P only affect publishing ports onto the host's own interfaces.
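A quick way to see the difference (image and container names here are placeholders):
docker run -d -P --name web my_image                      # -P publishes every EXPOSEd port to a random host port
docker port web                                           # shows which host ports were picked
docker inspect -f '{{.NetworkSettings.IPAddress}}' web    # the bridge IP you were hitting directly, no publishing needed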
I've deployed some docker containers with golang apps. One of them I need to start by this command:
docker run --restart unless-stopped -it myapp /bin/bash
As the next step, I enter the container and edit some config files, then I run
go build main.go
and ./main
After that I press ctrl+q and detach from the container.
Everything works perfectly and all my containers restart after the server reboots. But there is one issue: when the myapp container restarts, the golang application doesn't run even though the container itself is running. I have to enter it again and run ./main. How can I fix this?
Dockerfile
FROM golang:1.8
WORKDIR /go/src/app
COPY . .
RUN go-wrapper download # "go get -d -v ./..."
RUN go-wrapper install # "go install -v ./..."
RUN ["apt-get","update"]
RUN ["apt-get","install","-y","vim"]
EXPOSE 3000
CMD ["app"]
When you create a container and pass in /bin/bash as the command, that's as far as Docker cares. When the container restarts, it will start up another instance of /bin/bash.
Docker doesn't watch your shell session and see what things you do after it starts the command. If you want to actually run ./main as the command of the container, then you'll need to pass in /go/src/app/main as the command instead of /bin/bash.
Additionally, compiling code is something better done during the image build phase instead of at container runtime.
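A rough sketch of what that looks like in practice (assuming the image is tagged myapp and the config files are baked into the image or mounted with -v, rather than edited by hand inside the container):
docker build -t myapp .
docker run -d --restart unless-stopped --name myapp myapp
# with no command override, the container runs the image's CMD ["app"],
# so the Go binary starts again automatically whenever the container restarts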
I have 3 Node.js microservices. One of them runs on a separate subdomain and the other 2 are routed based on path. My Dockerfile is as below:
FROM node:latest
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
COPY . /usr/src/app
EXPOSE 9000
CMD [ "npm", "start" ]
The port is different for each image. After this I have nginx running on a bare-metal server with all the configuration for the reverse proxy. I know that this is not the best way to go about it. How can I have 3 separate instances running and listening on the same port?
Also, for database linking I am using the --link flag, but that is shown as deprecated in the docs. What is the right way to go about that?
Instead of NGINX, use Traefik: it will adapt its reverse-proxy rules depending on the containers it discovers through Consul.
See "Traefik Swarm cluster" in order to setup a cluster.
You can then declare your database so that it always runs on the same node, using service constraints.
See for instance "Running a MongoDB Replica Set on Docker 1.12 Swarm Mode: Step by Step":
The basic plan is to define each member of the replica set as a separate service, and use constraints to prevent swarm orchestration moving them away from their data volumes
For instance:
docker#manager1:~$ docker node update --label-add mongo.replica=1 $(docker node ls -q -f name=manager1)
docker service create --replicas 1 --network mongo \
--mount type=volume,source=mongodata1,target=/data/db \
--mount type=volume,source=mongoconfig1,target=/data/configdb \
--constraint 'node.labels.mongo.replica == 1' \
--name mongo1 mongo:3.2 mongod --replSet example