Docker: keep restarting a container until a file is created in a mounted volume - Linux

I am trying to create a script that restarts a microservice (in my case node-red).
Here is my docker compose file:
docker-compose.yml
version: '2.1'
services:
  wifi-connect:
    build: ./wifi-connect
    restart: always
    network_mode: host
    privileged: true
  google-iot:
    build: ./google-iot
    volumes:
      - 'app-data:/data'
    restart: always
    network_mode: host
    depends_on:
      - "wifi-connect"
    ports:
      - "8883:8883"
  node-red:
    build: ./node-red/node-red
    volumes:
      - 'app-data:/data'
    restart: always
    privileged: true
    network_mode: host
    depends_on:
      - "google-iot"
volumes:
  app-data:
I am using wait-for-it.sh in order to check whether the previous container is up.
Here is an extract from the Dockerfile of the node-red microservice.
RUN chmod +x ./wait-for-it/wait-for-it.sh
# server.js will run when container starts up on the device
CMD ["bash", "/usr/src/app/start.sh", "bash", "/usr/src/app/wait-for-it/wait-for-it.sh google-iot:8883 -- echo Google IoT Service is up and running"]
I have looked at inotify.
Basically, all I want is to restart the node-red container after a file has been created within the app-data volume, which is also mounted into the node-red container under the /data folder path; the file path would be, for example, /data/myfile.txt.
Please note that this file gets generated automatically by the google-iot microservice, but the node-red container needs that file, and quite often the node-red container starts while /data/myfile.txt is not yet present.

It sounds like you're trying to delay one container's startup until another has produced the file you're looking for, or exit if it's not available.
You can write that logic into a shell script fairly straightforwardly. For example:
#!/bin/sh
# entrypoint.sh

# Wait for the server to be available
./wait-for-it/wait-for-it.sh google-iot:8883
if [ $? -ne 0 ]; then
  echo 'google-iot container did not become available' >&2
  exit 1
fi

# Wait for the file to be present
seconds=30
while [ $seconds -gt 0 ]; do
  if [ -f /data/myfile.txt ]; then
    break
  fi
  sleep 1
  seconds=$(($seconds-1))
done
if [ $seconds -eq 0 ]; then
  echo '/data/myfile.txt was not created' >&2
  exit 1
fi

# Run the command passed to us as arguments
exec "$@"
In your Dockerfile, make this script be the ENTRYPOINT. You must use JSON-array syntax in the ENTRYPOINT line. Your CMD can use any valid syntax. Note that we're running the wait-for-it script in the entrypoint wrapper, so you don't need to include that in the CMD. (And since the script is executable and begins with a "shebang" line #!/bin/sh, we do not need to explicitly name an interpreter to run it.)
# Dockerfile
RUN chmod +x entrypoint.sh wait-for-it/wait-for-it.sh
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]
CMD ["/usr/src/app/start.sh"]
The entrypoint wrapper has two checks: first, that the google-iot container eventually accepts TCP connections on port 8883, and second, that the file is created. If either check fails, the script exits with status 1 before it runs the CMD. This causes the container as a whole to exit with that status code (a restart: on-failure will still restart it).
I also might consider whether some other approach to get the file might work, like using curl to make an HTTP request to the other container. There are several practical issues with sharing Docker volumes (particularly around ownership, but also if an old copy of the file is still around from a previous run) and sharing files works especially badly in a clustered environment like Kubernetes.
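For illustration, a rough sketch of that alternative, assuming the google-iot service exposed the file over HTTP on a hypothetical port 8080 (it does not do that today, so this would mean adding such an endpoint to it):

# entrypoint.sh (variant: hypothetical HTTP endpoint instead of a shared volume)
if ! curl -fsS -o /data/myfile.txt http://google-iot:8080/myfile.txt; then
  echo 'could not fetch myfile.txt from google-iot' >&2
  exit 1
fi
exec "$@"

That removes the shared volume entirely, at the cost of adding an HTTP server to the producing container.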

You can fix the race condition by using the long syntax of depends_on, where you can specify a health check. This will guarantee that your file is present when your node-red service runs.
node-red:
  build: ./node-red/node-red
  volumes:
    - 'app-data:/data'
  restart: always
  privileged: true
  network_mode: host
  depends_on:
    google-iot:
      condition: service_healthy
Then you can define a health check (see the docs here) to see if your file is present in the volume. You can add the following to the service definition of the google-iot service:
healthcheck:
  test: ["CMD", "cat", "/data/myfile.txt"]
  interval: 1m30s
  timeout: 10s
  retries: 3
  start_period: 40s
Feel free to tune the duration values as needed.
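A minor variation (my own suggestion, not part of the original answer) is to use test -f instead of cat, so the check only verifies that the file exists and does not print its contents into the health log:

healthcheck:
  test: ["CMD", "test", "-f", "/data/myfile.txt"]
  interval: 10s
  timeout: 5s
  retries: 30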
Does this fix your problem?

Related

I am trying to keep a container running using a docker compose file. I added tty: true and std_in: true but it still exits with code 0

Here is my docker compose file.
version: '3'
services:
  web:
    image: emarcs/nginx-git
    ports:
      - 8081:80
    container_name: Avida
    working_dir: /usr/share/nginx/html
    command: bash -c "git clone https://github.com/raju/temp.git && echo "cloned successfully" && mv Avida-ED-Eco /usr/share/nginx/html && echo "Successfully moved the file""
    volumes:
      - srv:/srv/git
      - logs:/var/log/nginx
    environment:
      GIT_POSTBUFFER: 1048576
    stdin_open: true
    tty: true
  firefox:
    image: jlesage/firefox
    ports:
      - 5800:5800
    volumes:
      - my-vol:/data/db
    depends_on:
      - web
volumes:
  my-vol:
    driver: local
  srv:
    driver: local
  logs:
    driver: local
What I am doing is using a Docker nginx image with git installed on it, cloning a repository with that image, and moving the cloned directory to the nginx HTML path so it can be served. But after cloning, the container exits and the code goes away. How can I keep the container running without it exiting with code 0? I tried options such as tty: true and stdin_open: true, but nothing works.
container exit with code 0
So keep it running.
command:
  - bash
  - -ec
  - |
    git clone https://github.com/raju/temp.git
    echo "cloned successfully"
    mv -v Avida-ED-Eco /usr/share/nginx/html
    echo "Successfully moved the file"
    # keep it running
    sleep infinity
But you would rather create a Dockerfile, in which you would prepare the image.
I changed the format; you can read about | in the YAML documentation. I replaced && with set -e (the -e in bash -ec); you can read about it at https://mywiki.wooledge.org/BashFAQ/105 .
If you want to do some modification and then start nginx, you should:
docker inspect the Docker image and get the command it uses, or inspect the Dockerfile the image was built with;
invoke the shell as you do and do the stuff that you want to do;
and then call the command the image was previously calling, as in the sketch below.
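For example, something like this (a sketch only; it assumes the image's original command is the stock nginx -g 'daemon off;', which you should confirm with docker inspect):

command:
  - bash
  - -ec
  - |
    git clone https://github.com/raju/temp.git
    mv -v Avida-ED-Eco /usr/share/nginx/html
    # hand control back to the image's normal foreground process
    exec nginx -g 'daemon off;'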
You're trying to set an individual container's command: to copy some code into the container, every time it starts up. It'd be better to write a Dockerfile to do this work, only once. You can start a Dockerfile FROM any image you want and do any setup work you need to do in the container. It will inherit the ENTRYPOINT and/or CMD from the base image if you don't override them.
# Dockerfile
FROM emarcs/nginx-git
RUN mkdir /x \
 && git clone https://github.com/raju/temp.git /x/temp \
 && mv /x/temp/Avida-ED-Eco /usr/share/nginx/html \
 && rm -rf /x
# (Can you `COPY ./ /usr/share/nginx/html/` instead?)
Then in your Compose file, use build: to build that image instead of pulling one from Docker Hub with image:; do not override command:; and do not overwrite parts of the image with named volumes.
version: '3.8'
services:
  web:
    build: .
    ports:
      - 8081:80
# this is all that's required

"Cannot find module errors" when using docker compose

I'm trying to learn how node containers work within a docker-compose environment.
I've built a simple docker container using a nodejs alpine base container. My container has a simple hello world express server. This container has been pushed to the public docker hub repo, zipzit/node-web-app
Container works great from the command line, via docker run -p 49160:8080 -d zipzit/node-web-app
Function verified via browser at http://localhost:49160
After the command line above is run, I can shell into the running container via $ docker exec -it <container_id> sh
I can look at the directories within the running docker container, and see my server.js, package.json files, etc...
From what I can tell, from the command line testing, everything seems to be working just fine.
Not sure why, but this is a total fail when I try to use this in a docker-compose test.
version: "3.8"
services:
nginx-proxy:
# http://blog.florianlopes.io/host-multiple-websites-on-single-host-docker
image: jwilder/nginx-proxy
container_name: nginx-proxy
ports:
- "80:80"
- "443:443"
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
restart: always
mongo_db:
image: mongo
ports:
- "27017:27017"
volumes:
- ./data:/data/db
restart: on-failure:8
environment:
MONGO_INITDB_ROOT_USERNAME: root
MONGO_INITDB_ROOT_PASSWORD: mongo_password
react_frontend:
image: "zipzit/node-web-app"
## working_dir: /usr/src/app ## remarks on / off here testing all the combinations
## environment:
## - NODE_ENV=develop
ports:
- "3000:3000"
volumes:
- ./frontend:/usr/src/app
backend_server:
## image: "node:current-alpine3.10"
image: "zipzit/node-web-app"
user: "node"
working_dir: /usr/src/app
environment:
- NODE_ENV=develop
ports:
- "8080:8080"
volumes:
- ./backend:/usr/src/app
- ./server_error_log:/usr/src/app/error.log
links:
- mongo_db
depends_on:
- mongo_db
volumes:
frontend: {}
backend: {}
data: {}
server_error_log: {}
When I run docker-compose up -d, and then let things settle, the two containers based on zipzit/node-web-app start to run and immediately shut down.
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
92710df9aa89 zipzit/node-web-app "docker-entrypoint.s…" 6 seconds ago Exited (1) 5 seconds ago root_react_frontend_1
48a8abdf02ca zipzit/node-web-app "docker-entrypoint.s…" 8 minutes ago Exited (1) 5 seconds ago root_backend_server_1
27afed70afa0 jwilder/nginx-proxy "/app/docker-entrypo…" 20 minutes ago Up 20 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp nginx-proxy
b44344f31e52 mongo "docker-entrypoint.s…" 20 minutes ago Up 20 minutes 0.0.0.0:27017->27017/tcp root_mongo_db_1
When I go to docker logs <container_id> I see:
node:internal/modules/cjs/loader:903
throw err;
^
Error: Cannot find module '/usr/src/app/server.js'
at Function.Module._resolveFilename (node:internal/modules/cjs/loader:900:15)
at Function.Module._load (node:internal/modules/cjs/loader:745:27)
at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:72:12)
at node:internal/main/run_main_module:17:47 {
code: 'MODULE_NOT_FOUND',
requireStack: []
}
When I check out the frontend volume, it's totally blank. No server.js file, nothing.
Note: I'm running the docker-compose stuff on a Virtual Private Server (VPS) Ubuntu server. I built the container and did my command line testing on a Mac laptop with docker. Edit: I went back and did complete docker-compose up -d testing on the laptop with exactly the same errors as observed in the Virtual Private Server.
I don't understand what's happening here. Why does that container work from the command line but not via docker-compose?
You are binding volumes at ./frontend and ./backend with the same image, which runs the following command:
CMD ["node", "server.js"]
Are you sure this is the right command? Are you sure there is a server.js in each folder mounted on your host?
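A quick way to check that on the host (my own suggestion, not part of the original answer):
ls ./frontend ./backend   # should list server.js, package.json, etc.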
Hope this helps.
Looks like you're trying to mount local paths into your containers, but you're then declaring those paths as volumes in the compose file.
The simple solution is to remove the volumes section of your compose file.
Here's the reference. It's not very clear from the docs, but basically when you mount a volume, docker-compose first looks for that volume in the top-level volumes section, and if it is present it uses that volume. The name of the volume is not treated as a local path in this scenario; it is the ID of a volume that is then created on the fly (and will be empty). If, however, the volume is not declared in volumes, it is treated as a local path and mounted as such.

How to open remote shell to node.js container under docker-compose (Alpine linux)

I have a docker-compose.yml configuration file with several containers, and one of the containers is a node.js Docker instance.
For some reason the Docker instance returns an error during start. As a result it's not possible to connect to the node.js container and investigate the issue.
What is the simplest way to connect to the broken node.js container under Alpine Linux?
Usually, in my docker-compose.yml, I just replace the command or entrypoint with:
command: watch ps
It's a bit hackish, but that keeps the container up.
Alternatively, once the image has been built, you can run it using docker. But then you have to replicate in your docker run command what your docker-compose.yml file did, like mounting volumes and opening ports manually.
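For example, something along these lines (a sketch only; the image name, path and port are placeholders to be replaced with what your docker-compose.yml actually declares):

docker run --rm -it \
  -v "$PWD/app:/usr/src/app" \
  -p 8080:8080 \
  my-node-image \
  sh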
FOR DOCKER-COMPOSE
In case you use docker-compose, the simplest way is to add the following command line to your docker-compose.yml file.
services:
  api:
    build: api/.
    command: ["/bin/sh", "-c", "while sleep 3600; do :; done"]
    depends_on:
      - db
      - redis
    ...
Also, you may need to comment out lines (from the bottom up) inside the node.js Dockerfile until the container is able to start.
Once the node.js container is able to start, you can easily connect to your container via
docker exec -it [container] sh
FOR DOCKER
You can simply add the following line at the end of the Dockerfile:
CMD echo "^D for exit" && wc -
and comment out lines (from the bottom up) above this line until the container is able to start.
You can docker-compose run an alternate command. This requires no changes in your Dockerfile or docker-compose.yml. For example,
docker-compose run --rm web /bin/sh
This creates a new container which is configured identically to what is requested in the docker-compose.yml (with environment variables and mounted volumes), except that ports: aren't published by default. It is essentially identical to docker run with the same options, except it defaults to -i -t being on.
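One related flag worth knowing (not mentioned in the original answer): if you do want the ports from the compose file published for this one-off container, pass --service-ports, for example:
docker-compose run --rm --service-ports web /bin/sh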
If your Dockerfile uses ENTRYPOINT instead of CMD to declare the main container command, you need the same --entrypoint option. For example, to get a listing of the files in the image's default working directory, you could
docker-compose run --rm --entrypoint /bin/ls web -l
(If your ENTRYPOINT is a wrapper script that ultimately runs exec "$@", you don't need this.)

Docker wait for web app to start (dependency)

Short question. I have the following Dockerfile:
FROM node:6.9.2
RUN apt-get update
RUN useradd --user-group --create-home --shell /bin/false app
ENV HOME=/home/app
RUN mkdir -p $HOME
RUN chown -R app:app $HOME
USER app
WORKDIR $HOME/
VOLUME ["/home/app/uploads"]
EXPOSE 5001
CMD [ "npm", "run", "test-integration" ]
and the corresponding docker-compose:
version: '2.0'
services:
  komed-test-integration:
    image: borntraegermarc/komed-test-integration
    container_name: komed-test-integration
    build: .
    depends_on:
      - komed-app
      - komed-mongo
    volumes:
      - .:/home/app
    environment:
      - HOST_URL=komed-app
      - HOST_PORT=5001
      - MONGO_HOST=komed-mongo
      - MONGO_DATABASE=komed-health
I have the dependency on komed-app in my compose file and that works fine. But how do I make these integration tests wait to start until the web server (komed-app) is actually running? I tried CMD [ "./wait-for-it.sh", "komed-app-test:5001", "--", "npm", "run", "test-integration" ] but it didn't work; I always get the error exited with code 127.
Any ideas?
The best practice is described in Controlling startup order in Compose.
You can either use depends_on or (better) vishnubob/wait-for-it (which you did).
127 is mentioned in this issue, when the timeout fails.
The Docker node:6.9.2 image is based on jessie, not Alpine, so it should not be affected by issue 6. So try to debug the script by adding -x to the first line:
#!/usr/bin/env bash -x
You will see which exit $RESULT produced that 127 error code.
The OP Marc Borni confirmed in the comments the origin of the issue: Windows file encoding (line endings) in a Unix script.
Sometimes, using dos2unix directly in a Dockerfile can help.
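For example, one way to apply that inside the image (a sketch; it assumes a Debian-based image such as node:6.9.2 and that wait-for-it.sh has already been copied into the image):

RUN apt-get update \
 && apt-get install -y dos2unix \
 && dos2unix wait-for-it.sh \
 && chmod +x wait-for-it.sh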

Init script for Cassandra with docker-compose

I would like to create keyspaces and column-families at the start of my Cassandra container.
I tried the following in a docker-compose.yml file:
# shortened for clarity
cassandra:
  hostname: my-cassandra
  image: my/cassandra:latest
  command: "cqlsh -f init-database.cql"
The image my/cassandra:latest contains init-database.cql in /. But this does not seem to work.
Is there a way to make this happen ?
I was also searching for a solution to this question, and here is how I accomplished it.
Here a second Cassandra instance has a volume with the schema.cql and runs the cqlsh command.
My version uses a healthcheck so we can get rid of the sleep command:
version: '2.2'
services:
  cassandra:
    image: cassandra:3.11.2
    container_name: cassandra
    ports:
      - "9042:9042"
    environment:
      - "MAX_HEAP_SIZE=256M"
      - "HEAP_NEWSIZE=128M"
    restart: always
    volumes:
      - ./out/cassandra_data:/var/lib/cassandra
    healthcheck:
      test: ["CMD", "cqlsh", "-u cassandra", "-p cassandra", "-e describe keyspaces"]
      interval: 15s
      timeout: 10s
      retries: 10
  cassandra-load-keyspace:
    container_name: cassandra-load-keyspace
    image: cassandra:3.11.2
    depends_on:
      cassandra:
        condition: service_healthy
    volumes:
      - ./src/main/resources/cassandra_schema.cql:/schema.cql
    command: /bin/bash -c "echo loading cassandra keyspace && cqlsh cassandra -f /schema.cql"
NetFlix Version using sleep
version: '3.5'
services:
  cassandra:
    image: cassandra:latest
    container_name: cassandra
    ports:
      - "9042:9042"
    environment:
      - "MAX_HEAP_SIZE=256M"
      - "HEAP_NEWSIZE=128M"
    restart: always
    volumes:
      - ./out/cassandra_data:/var/lib/cassandra
  cassandra-load-keyspace:
    container_name: cassandra-load-keyspace
    image: cassandra:latest
    depends_on:
      - cassandra
    volumes:
      - ./src/main/resources/cassandra_schema.cql:/schema.cql
    command: /bin/bash -c "sleep 60 && echo loading cassandra keyspace && cqlsh cassandra -f /schema.cql"
P.S. I found this approach in one of the Netflix repos.
We recently tried to solve a similar problem in KillrVideo, a reference application for Cassandra. We are using Docker Compose to spin up the environment needed by the application which includes a DataStax Enterprise (i.e. Cassandra) node. We wanted that node to do some bootstrapping the first time it was started to install the CQL schema (using cqlsh to run the statements in a .cql file just like you're trying to do). Basically the approach we took was to write a shell script for our Docker entrypoint that:
1. Starts the node normally but in the background.
2. Waits until port 9042 is available (this is where clients connect to run CQL statements).
3. Uses cqlsh -f to run some CQL statements and init the schema.
4. Stops the node that's running in the background.
5. Continues on to the usual entrypoint for our Docker image that starts up the node normally (in the foreground like Docker expects).
We just use the existence of a file to indicate whether the node has already been bootstrapped and check that on startup to determine whether we need to do that logic above or can just start it normally. You can see the results in the killrvideo-dse-docker repository on GitHub.
There is one caveat to this approach. This worked great for us because in our reference application, we're only spinning up a single node (i.e. we aren't creating a cluster with more than one node). If you're running multiple nodes, you'll probably want to make sure that only one of the nodes does the bootstrapping to create the schema because multiple clients modifying the schema simultaneously can cause some issues with your cluster. (This is a known issue and will hopefully be fixed at some point.)
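A condensed sketch of that bootstrap entrypoint (not the exact KillrVideo script; it assumes the base image's normal entrypoint is docker-entrypoint.sh and that the schema file is mounted at /schema.cql):

#!/bin/bash
set -e

MARKER=/var/lib/cassandra/.bootstrapped

if [ ! -f "$MARKER" ]; then
  # 1. start the node normally, but in the background
  /docker-entrypoint.sh cassandra -f &
  pid=$!
  # 2. wait until the CQL port (9042) accepts connections
  until cqlsh -e 'describe cluster' >/dev/null 2>&1; do sleep 5; done
  # 3. install the schema
  cqlsh -f /schema.cql
  touch "$MARKER"
  # 4. stop the background node again
  nodetool drain
  kill "$pid"
  wait "$pid" || true
fi

# 5. hand off to the usual entrypoint, in the foreground as Docker expects
exec /docker-entrypoint.sh cassandra -f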
I solved this problem by patching cassandra's docker-entrypoint.sh so it will execute sh and cql files located in /docker-entrypoint-initdb.d on startup. This is similar to how MySQL docker containers work.
Basically, I add a small script at the end of the docker-entrypoint.sh (right before the last line, exec "$@"), which will run the cql scripts once cassandra is up. A simplified version is:
INIT_DIR=docker-entrypoint-initdb.d

# this whole block will execute in the background
(
  cd $INIT_DIR
  # wait for cassandra to be ready
  while ! cqlsh -e 'describe cluster' > /dev/null 2>&1; do sleep 6; done
  echo "$0: Cassandra cluster ready: executing cql scripts found in $INIT_DIR"
  # find and execute cql scripts, in name order
  for f in $(find . -type f -name "*.cql" -print | sort); do
    echo "$0: running $f"
    cqlsh -f "$f"
    echo "$0: $f executed"
  done
) &
This solution works for all cassandra versions (at least up to 3.11, as of the time of writing).
Hence, you only have to build and use this patched Cassandra image, and then add the proper initialization scripts to the container using docker-compose volumes, for example as sketched below.
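For example (a sketch only; the build context path and the mounted script names are placeholders):

cassandra:
  build: ./cassandra            # image containing the patched docker-entrypoint.sh
  volumes:
    - ./cql:/docker-entrypoint-initdb.d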
A complete gist with a more robust entrypoint patch (and example) is available here.
