Why doesn't an app in a docker container restart? - linux

I've deployed some docker containers with golang apps. One of them I need to start with this command:
docker run --restart unless-stopped -it myapp /bin/bash
As the next step I enter the container and edit some config files, then I run
go build main.go
and ./main
After that I press ctrl+q and leave the container.
Everything works and all my containers come back up after restarting the server. But there is one issue: when the myapp container restarts, the golang application isn't running even though the container itself is up. I have to enter the container again and run ./main. How can I fix it?
Dockerfile
FROM golang:1.8
WORKDIR /go/src/app
COPY . .
RUN go-wrapper download # "go get -d -v ./..."
RUN go-wrapper install # "go install -v ./..."
RUN ["apt-get","update"]
RUN ["apt-get","install","-y","vim"]
EXPOSE 3000
CMD ["app"]

When you create a container and pass in /bin/bash as the command, that's as far as Docker cares. When the container restarts, it will start up another instance of /bin/bash.
Docker doesn't watch your shell session and see what things you do after it starts the command. If you want to actually run ./main as the command of the container, then you'll need to pass in /go/src/app/main as the command instead of /bin/bash.
Additionally, compiling code is something better done during the image build phase instead of at container runtime.
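For example, a minimal sketch of that workflow, assuming your config edits are baked into the image (or mounted as a volume) rather than made by hand inside a running container:
docker build -t myapp .
docker run -d --restart unless-stopped myapp
The build step compiles the app, and docker run starts the container without overriding the Dockerfile's CMD ["app"] with /bin/bash, so the application comes back automatically after a restart.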

Related

Docker container doesn't start

Hi, I've got a problem with docker. I'm using it on s390x Debian. Everything was working fine, but now I can't start my containers. Old containers are working, but when I create a new container using, for example, docker run ubuntu and then try docker start [CONTAINER], my container doesn't start. When I use docker ps -a I get all of my containers, but when I use docker ps I can't see my new container. As you can see in the screenshot, I created a container with the name practical_spence and ID 3e8562694e9f, but when I use docker start, it's not starting. Please help.
As you do not specify a CMD or entrypoint to run, the default is used which is set to "bash". But you are not running the container in interactive terminal mode, so the bash just exits. Run:
docker run -it ubuntu:latest
to attach the running container to your terminal, or specify the command you want to run in the container.
Your container did start but exited instantly, as it had nothing to do. You can start it like this: docker run -d ubuntu sleep infinity. Then use docker ps to see the running container. You can of course exec into it to do something: docker exec -it <container> bash. You can stop it with docker stop <container> and re-start it with docker start <container>. Finally, delete it (once stopped) when you don't need it anymore with docker container rm <container>.
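The same sequence as shell commands (the container name test is just a placeholder):
docker run -d --name test ubuntu sleep infinity
docker ps
docker exec -it test bash
docker stop test
docker start test
docker stop test && docker container rm test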

Docker: Running two services with one Dockerfile

I am working on a telegram group for premium members, for which I have two services: one monitors all the joinees, and the other one monitors whether any member's premium plan has expired so it can kick that user out of the channel. I am very new to Docker and deployment, so I am confused about how to run two processes simultaneously with one Dockerfile. I have tried it like this.
Here is the file structure:
start.sh
#!/bin/bash
cd TelegramChannelMonitor
pm2 start services/kickNonPremium.js --name KICK_NONPREMIUM
pm2 start services/monitorTelegramJoinee.js --name MONITOR_JOINEE
Dockerfile
FROM node:12-alpine
WORKDIR ./TelegramChannelMonitor
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
ENTRYPOINT ["/start.sh"]
What should I do to achieve this?
A Docker container only runs one process. On the other hand, you can run arbitrarily many containers off of a single image, each with a different command. So the approach I'd take here is to build a single image, as you've shown it, except without the ENTRYPOINT line.
FROM node:12-alpine
# Note that the Dockerfile already puts us in the right directory
WORKDIR /TelegramChannelMonitor
...
# Important: no ENTRYPOINT
# (There can be a default CMD if you think one path is more likely)
Then when you want to run this application, run two containers, and in each, make the main container command run a different script.
docker build -t telegram-channel-monitor .
docker run -d -p 8080:8080 --name kick-non-premium \
  telegram-channel-monitor \
  node services/kickNonPremium.js
docker run -d -p 8081:8080 --name monitor-joined \
  telegram-channel-monitor \
  node services/monitorTelegramJoinee.js
You can have a similar setup using Docker Compose. Set all of the containers to build: ., but set a different command: for each.
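A minimal sketch of that Compose setup (service names here are illustrative, and the image is assumed to build from the Dockerfile above):
version: '3'
services:
  kick-non-premium:
    build: .
    command: node services/kickNonPremium.js
  monitor-joinee:
    build: .
    command: node services/monitorTelegramJoinee.js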
(The reason to avoid ENTRYPOINT here is that the syntax to override the command gets very clumsy: you need --entrypoint node before the image name, but then the rest of the arguments after it. I've also used plain node instead of pm2 since a Docker container provides most of the functionality of a process supervisor; see also what is the point of using pm2 and docker together?.)
Try a pm2 ecosystem file for app (i.e. service) declaration, and run pm2 in non-background mode or use pm2-runtime:
https://pm2.keymetrics.io/docs/usage/application-declaration/
https://pm2.keymetrics.io/docs/usage/docker-pm2-nodejs/

NPM start script runs from local shell but fails inside Docker container command

I have a Node app which consists of three separate Node servers, each run by pm2 start. I use concurrently to run the three servers, as a start-all script in package.json:
"scripts": {
...
"start-all": "concurrently \" pm2 start ./dist/foo.js \" \"pm2 start ./dist/bar.js \" \"pm2 start ./dist/baz.js\"",
"stop-all": "pm2 stop all",
"reload-all": "pm2 reload all",
...
}
This all runs fine when run from the command line on localhost, but when I run it as a docker-compose command - or as a RUN command in my Dockerfile - only one of the server scripts (a random one each time I try it!) launches, but then immediately exits. In my --verbose docker-compose output I can see the pm2 panel (listing name, version, mode, pid, etc.), but then this error message:
pm2 start ./dist/foo.js exited with code 0.
N.B: This is all with Docker running locally (on a Mac Mini with 16GB of RAM), not on a remote server.
If I docker exec -it <container_name> /bin/bash into the container and then run npm run start-all manually from the top level of the src directory (which I COPY over in my Dockerfile), everything works. Here is my Dockerfile:
FROM node:latest
# Create the workdir
RUN mkdir /myapp
WORKDIR /myapp
# Install packages
COPY package*.json ./
RUN npm install
# Install pm2 and concurrently globally.
RUN npm install -g pm2
RUN npm install -g concurrently
# Copy source code to the container
COPY . ./
In my docker-compose file I simply list npm run start-all as a command for the Node service. But it makes no difference if I add it to the Dockerfile like this:
RUN npm run start-all
What could possibly be going on? The pm2 logs report nothing other than that the app has started.
The first reason is that pm2 start app.js starts the application in the background, which is why your container stops as soon as it runs pm2 start.
You need to start the application with pm2-runtime, which starts it in the foreground. You also do not need concurrently; a pm2 process.yml will do this job.
Docker Integration
Using Containers? We got your back. Start today using pm2-runtime, a
perfect companion to get the most out of Node.js in production
environment.
The goal of pm2-runtime is to wrap your applications into a proper
Node.js production environment. It solves major issues when running
Node.js applications inside a container like:
- Second Process Fallback for High Application Reliability
- Process Flow Control
- Automatic Application Monitoring to keep it always sane and high performing
- Automatic Source Map Discovery and Resolving Support
docker-pm2-nodejs
The second important thing: you should put all your applications in a pm2 config file, as Docker only runs the one process given in CMD.
Ecosystem File
PM2 empowers your process management workflow. It allows you to
fine-tune the behavior, options, environment variables, log files of
each application via a process file. It’s particularly useful for
micro-service based applications.
pm2 config application-declaration
Create a file process.yml:
apps:
  - script: ./dist/bar.js
    name: 'bar'
  - script: ./dist/foo.js
    name: 'worker'
    env:
      NODE_ENV: development
Then add the CMD in the Dockerfile:
CMD ["pm2-runtime", "process.yml"]
and remove the command from docker-compose.
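Putting it together, the question's Dockerfile would end up roughly like this (a sketch; the existing copy/install steps are assumed to stay the same, and concurrently is no longer needed):
FROM node:latest
RUN mkdir /myapp
WORKDIR /myapp
COPY package*.json ./
RUN npm install
# the pm2 package provides the pm2-runtime binary
RUN npm install -g pm2
COPY . ./
CMD ["pm2-runtime", "process.yml"]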
Docker and pm2 provide overlapping functionality: both have the ability to restart processes and manage logs, for example. In Docker it's generally considered a best practice to only run one process inside a container, and if you do that, you don't necessarily need pm2; what is the point of using pm2 and docker together? discusses this in more detail.
When you run your image you can specify the command to run, and you can start multiple containers off of the same image. Given the Dockerfile you show initially you can launch these as
docker run --name foo myimage node ./dist/foo.js
docker run --name bar myimage node ./dist/bar.js
docker run --name baz myimage node ./dist/baz.js
This will let you do things like restart only one of the containers when its code changes while leaving the rest untouched.
You hint at Docker Compose; its command: directive sets the same property.
version: '3'
services:
  foo:
    build: .
    command: node ./dist/foo.js
  bar:
    build: .
    command: node ./dist/bar.js
  baz:
    build: .
    command: node ./dist/baz.js

How to open remote shell to node.js container under docker-compose (Alpine linux)

I have a docker-compose.yml configuration file with several containers, and one of the containers is a node.js docker instance.
For some reason the docker instance returns an error during start. As a result it's not possible to connect to the node.js container and investigate the issue.
What is the simplest way to connect to the broken node.js container under Alpine linux?
Usually in my docker-compose.yml I just replace the command or entrypoint with:
command: watch ps
It's a bit hackish, but that keeps the container up.
Alternatively, once the image has been built, you can run it using docker run. But then you have to replicate in your command what your docker-compose.yml file did, like mounting volumes and opening ports manually.
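For example (the image name, port, and mount here are assumptions; mirror whatever your compose service declares):
docker run --rm -it \
  -p 3000:3000 \
  -v "$(pwd)":/usr/src/app \
  my-node-image \
  sh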
FOR DOCKER-COMPOSE
In case you use docker-compose, the simplest way is to add the following command line to your docker-compose.yml file:
services:
  api:
    build: api/.
    command: ["/bin/sh", "-c", "while sleep 3600; do :; done"]
    depends_on:
      - db
      - redis
  ...
You may also need to comment out lines, from the bottom up, inside the node.js Dockerfile until the container is able to start.
Once the node.js container is able to start, you can easily connect to it via
docker exec -it [container] sh
FOR DOCKER
You can simply add the following line at the end of the Dockerfile:
CMD echo "^D for exit" && wc -
and comment out lines (from the bottom up) above this line until the container is able to start.
You can docker-compose run an alternate command. This requires no changes in your Dockerfile or docker-compose.yml. For example,
docker-compose run --rm web /bin/sh
This creates a new container which is configured identically to what is requested in the docker-compose.yml (with environment variables and mounted volumes), except that ports: aren't published by default. It is essentially identical to docker run with the same options, except it defaults to -i -t being on.
If your Dockerfile uses ENTRYPOINT instead of CMD to declare the main container command, you need the same --entrypoint option. For example, to get a listing of the files in the image's default working directory, you could
docker-compose run --rm --entrypoint /bin/ls web -l
(If your ENTRYPOINT is a wrapper script that ultimately runs exec "$@", you don't need this.)
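A typical wrapper script of that sort looks roughly like this (a sketch; the filename and setup steps are placeholders):
#!/bin/sh
# entrypoint.sh: do one-time setup, then hand control to whatever command was given
set -e
# ... setup steps (migrations, config templating, etc.) ...
exec "$@"
Because it ends with exec "$@", the command you pass to docker-compose run (or the image's CMD) replaces the wrapper as the main process, so overriding the command still works without --entrypoint.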

Docker automatically binds port

When I am not exposing any ports when writing my Dockerfile, nor am I binding any ports when running docker run, I am still able to interact with applications running inside the container. Why?
I am writing my Dockerfile for my Node application. It's pretty simple and looks like this:
FROM node:8
COPY . .
RUN yarn
RUN yarn run build
ARG PORT=80
EXPOSE $PORT
CMD yarn run serve
Using this Dockerfile, I was able to build the image using docker build
$ cd ~/project/dir/
$ docker build . --build-arg PORT=8080
And run it using docker run
$ docker run -p 8080 <image-id>
I then accessed the application, running inside the Docker container, on an IP address like http://172.17.0.12:8080/ and it works.
However, when I removed the EXPOSE instruction from the Dockerfile and removed the -p option from docker run, the application still works! It's like Docker is automatically binding my ports.
Additional Notes:
It appears that another user has experienced the same issue
I have tried rebuilding my image using --no-cache after I removed the EXPOSE instructions, but this problem still exists.
Using docker inspect, I see no entries for Config.ExposedPorts
The EXPOSE instruction in a Dockerfile really doesn't do much; it is mostly there so that people reading the Dockerfile know what ports/services run inside the container. However, EXPOSE is useful when you start the container with the capital -P argument (-P, --publish-all: publish all exposed ports to random ports):
docker run -P my_image
But if you are using the lower-case -p, you have to specify the source:destination port. See this thread.
If you don't write EXPOSE in the Dockerfile it doesn't have any influence on the app inside the container; it only matters for the capital -P argument. That is also why you could still reach the app at the container's own IP (http://172.17.0.12:8080/): connecting directly to a container on the Docker bridge network doesn't require any port to be published, and EXPOSE and -p/-P only control publishing ports on the host.
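To illustrate the difference (my_image is just a placeholder name):
docker run -d -P my_image
docker run -d -p 8080:3000 my_image
docker port <container>
The first form publishes every EXPOSEd port on a random high host port, the second publishes container port 3000 on host port 8080 explicitly, and docker port shows what actually got bound.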
