I'm new to Docker and I'm facing a problem with my custom Dockerfile that I need some help with. Everything works fine until I add some code to run a cron job inside the Docker container.
This is my Dockerfile:
FROM php:7.2-fpm-alpine
COPY cronjobs /etc/crontabs/root
# old commands
ENTRYPOINT ["crond", "-f", "-d", "8"]
This is the cronjobs file:
* * * * * cd /var/www/html && php artisan schedule:run >> /dev/null 2>&1
This is the docker-compose.yml file:
version: '3'
networks:
  laravel:
services:
  nginx:
    image: nginx:stable-alpine
    container_name: nginx_ctrade
    ports:
      - "8081:80"
    volumes:
      - ./app:/var/www/html
      - ./config/nginx/default.conf:/etc/nginx/conf.d/default.conf
      - ./config/certs:/etc/nginx/certs
      - ./log/nginx:/var/log/nginx
    depends_on:
      - php
      - mysql
    networks:
      - laravel
    working_dir: /var/www/html
  php:
    build:
      context: ./build
      dockerfile: php.dockerfile
    container_name: php_ctrade
    volumes:
      - ./app:/var/www/html
      - ./config/php/php.ini:/usr/local/etc/php/php.ini
    networks:
      - laravel
  mysql:
    image: mysql:latest
    container_name: mysql_ctrade
    tty: true
    volumes:
      - ./data:/var/lib/mysql
      - ./config/mysql/my.cnf:/etc/mysql/my.cnf
    environment:
      - MYSQL_ROOT_PASSWORD=secret
      - MYSQL_USER=admin
      - MYSQL_DATABASE=laravel
      - MYSQL_PASSWORD=secret
    networks:
      - laravel
I rebuilt the Docker images and ran them. The cron job works fine, but when I access the site at localhost:8081 it doesn't work anymore. The page shows 502 Bad Gateway, so I checked the Nginx error log. This is the error Nginx shows me:
2020/04/10 13:33:36 [error] 8#8: *28 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.224.1, server: localhost, request: "GET /trades HTTP/1.1", upstream: "fastcgi://192.168.224.3:9000", host: "localhost:8081", referrer: "http://localhost:8081/home"
All the containers are still running after the update.
CONTAINER ID   IMAGE                 COMMAND                  CREATED          STATUS          PORTS                  NAMES
a2403ece8509   nginx:stable-alpine   "nginx -g 'daemon of…"   18 seconds ago   Up 17 seconds   0.0.0.0:8081->80/tcp   nginx_ctrade
69032097b7e4   ctrade_php            "docker-php-entrypoi…"   19 seconds ago   Up 18 seconds   9000/tcp               php_ctrade
592b483305d5   mysql:latest          "docker-entrypoint.s…"   3 hours ago      Up 18 seconds   3306/tcp, 33060/tcp    mysql_ctrade
Has anyone run into this issue before? Any help would be appreciated. Thanks so much!
First, about the 502 itself: with ENTRYPOINT ["crond", "-f", "-d", "8"], crond becomes the container's main process instead of php-fpm, so nothing is listening on port 9000 anymore and Nginx gets "connection refused" from its upstream. More generally, according to the documentation, running two (or more) services inside a single Docker container breaks its philosophy of single responsibility.
It is generally recommended that you separate areas of concern by
using one service per container. That service may fork into multiple
processes (for example, Apache web server starts multiple worker
processes). It’s ok to have multiple processes, but to get the most
benefit out of Docker, avoid one container being responsible for
multiple aspects of your overall application. [...]
If you choose to follow this recommendation, you will end up with two options:
Option 1. Create a separate container that will handle the scheduled tasks.
Example:
# File: Dockerfile
FROM php:7.4.8-fpm-alpine
COPY ./cron.d/tasks /cron-tasks
RUN touch /var/log/cron.log
RUN chown www-data:www-data /var/log/cron.log
RUN /usr/bin/crontab -u www-data /cron-tasks
CMD ["crond", "-f", "-l", "8"]
# File: cron.d/tasks
* * * * * echo "Cron is working :D" >> /var/log/cron.log 2>&1
# File: docker-compose.yml
services:
  [...]
  scheduling:
    build:
      context: ./build
      dockerfile: cron.dockerfile
    [...]
Option 2. Use the host's own crontab to execute the scheduled tasks inside the containers (as advocated in this post).
Example:
# File on host: /etc/cron.d/my-laravel-apps
* * * * * root docker exec -t laravel-container-A php artisan schedule:run >> /dev/null 2>&1
* * * * * root docker exec -t laravel-container-B php artisan schedule:run >> /dev/null 2>&1
* * * * * root docker exec -t laravel-container-C php artisan schedule:run >> /dev/null 2>&1
PS: In your case, replace laravel-container-* with php_ctrade.
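For example, with the container name from this question, that host crontab entry would be:
* * * * * root docker exec -t php_ctrade php artisan schedule:run >> /dev/null 2>&1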
Option 3: Use supervisord
On the other hand, if you really want just a single container, you may still use supervisord as your main process and configure it to start (and supervise) both php-fpm and crond.
Note that this is a moderately heavy-weight approach and requires you
to package supervisord and its configuration in your image (or base
your image on one that includes supervisord), along with the different
applications it manages.
You will find an example of how to do it here.
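As a rough sketch of that approach (untested; it assumes the Alpine supervisor package, and the file names are illustrative):
# File: php.dockerfile
FROM php:7.2-fpm-alpine
RUN apk add --no-cache supervisor
COPY cronjobs /etc/crontabs/root
COPY supervisord.conf /etc/supervisord.conf
# supervisord becomes PID 1 and starts both php-fpm and crond
CMD ["supervisord", "-c", "/etc/supervisord.conf"]
# File: supervisord.conf
[supervisord]
nodaemon=true
[program:php-fpm]
command=php-fpm -F
autorestart=true
[program:crond]
command=crond -f -l 8
autorestart=true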
References:
https://docs.docker.com/config/containers/multi-service_container/
Recommended reading:
https://devops.stackexchange.com/questions/447/why-it-is-recommended-to-run-only-one-process-in-a-container
I am trying to create a script that would restart a microservice (in my case it's node-red).
Here is my docker compose file:
docker-compose.yml
version: '2.1'
services:
  wifi-connect:
    build: ./wifi-connect
    restart: always
    network_mode: host
    privileged: true
  google-iot:
    build: ./google-iot
    volumes:
      - 'app-data:/data'
    restart: always
    network_mode: host
    depends_on:
      - "wifi-connect"
    ports:
      - "8883:8883"
  node-red:
    build: ./node-red/node-red
    volumes:
      - 'app-data:/data'
    restart: always
    privileged: true
    network_mode: host
    depends_on:
      - "google-iot"
volumes:
  app-data:
I am using wait-for-it.sh in order to check whether the previous container is up.
Here is an extract from the Dockerfile of the node-red microservice.
RUN chmod +x ./wait-for-it/wait-for-it.sh
# server.js will run when container starts up on the device
CMD ["bash", "/usr/src/app/start.sh", "bash", "/usr/src/app/wait-for-it/wait-for-it.sh google-iot:8883 -- echo Google IoT Service is up and running"]
I have looked at inotify.
Basically, all I want is to restart the node-red container after a file has been created in the app-data volume, which is mounted into the node-red container under /data; the file path would be, for example, /data/myfile.txt.
Please note that this file gets generated automatically by the google-iot microservice, but the node-red container needs it, and quite often the node-red container starts while /data/myfile.txt is not yet present.
It sounds like you're trying to delay one container's startup until another has produced the file you're looking for, or exit if it's not available.
You can write that logic into a shell script fairly straightforwardly. For example:
#!/bin/sh
# entrypoint.sh

# Wait for the server to be available
./wait-for-it/wait-for-it.sh google-iot:8883
if [ $? -ne 0 ]; then
  echo 'google-iot container did not become available' >&2
  exit 1
fi

# Wait for the file to be present
seconds=30
while [ $seconds -gt 0 ]; do
  if [ -f /data/myfile.txt ]; then
    break
  fi
  sleep 1
  seconds=$(($seconds-1))
done
if [ $seconds -eq 0 ]; then
  echo '/data/myfile.txt was not created' >&2
  exit 1
fi

# Run the command passed to us as arguments
exec "$@"
In your Dockerfile, make this script be the ENTRYPOINT. You must use JSON-array syntax in the ENTRYPOINT line. Your CMD can use any valid syntax. Note that we're running the wait-for-it script in the entrypoint wrapper, so you don't need to include that in the CMD. (And since the script is executable and begins with a "shebang" line #!/bin/sh, we do not need to explicitly name an interpreter to run it.)
# Dockerfile
RUN chmod +x entrypoint.sh wait-for-it/wait-for-it.sh
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]
CMD ["/usr/src/app/start.sh"]
The entrypoint wrapper performs two checks: first, that the google-iot container eventually accepts TCP connections on port 8883, and second, that the file is created. If either of these checks fails, the script exits with status 1 before it ever runs the CMD. This causes the container as a whole to exit with that status code (a restart: on-failure policy will still restart it).
I might also consider whether some other approach to getting the file would work, like using curl to make an HTTP request to the other container. There are several practical issues with sharing Docker volumes (particularly around ownership, but also when an old copy of the file is still around from a previous run), and sharing files works especially badly in a clustered environment like Kubernetes.
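As a hypothetical sketch of that alternative (the port and URL path are assumptions; the google-iot service would have to actually serve the file over HTTP), the file-wait loop above could be replaced with something like:
# Hypothetical: fetch the file over HTTP instead of relying on a shared volume.
# Assumes google-iot serves it at http://google-iot:8080/myfile.txt -- adjust as needed.
seconds=30
while [ $seconds -gt 0 ]; do
  if curl -fsS -o /data/myfile.txt http://google-iot:8080/myfile.txt; then
    break
  fi
  sleep 1
  seconds=$(($seconds-1))
done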
You can fix the race condition by using the long syntax of depends_on, where you can specify a condition on the dependency's health check. This will guarantee that your file is present before your node-red service starts.
node-red:
  build: ./node-red/node-red
  volumes:
    - 'app-data:/data'
  restart: always
  privileged: true
  network_mode: host
  depends_on:
    google-iot:
      condition: service_healthy
Then you can define a health check (see the docs here) that tests whether your file is present in the volume. You can add the following to the service definition of the google-iot service:
healthcheck:
  test: ["CMD", "cat", "/data/myfile.txt"]
  interval: 1m30s
  timeout: 10s
  retries: 3
  start_period: 40s
Feel free to tune the duration values as needed.
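Putting the two pieces together, the google-iot service from your compose file would look roughly like this (note that start_period requires compose file format 2.3 or 3.4+, so you may need to bump your version line or drop it):
google-iot:
  build: ./google-iot
  volumes:
    - 'app-data:/data'
  restart: always
  network_mode: host
  depends_on:
    - "wifi-connect"
  ports:
    - "8883:8883"
  # Healthy once the expected file exists in the shared volume
  healthcheck:
    test: ["CMD", "cat", "/data/myfile.txt"]
    interval: 1m30s
    timeout: 10s
    retries: 3
    start_period: 40s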
Does this fix your problem?
I'm trying to set up a lab using Docker containers with a CentOS 7 base image and docker-compose.
Here is my docker-compose.yaml file
version: "3"
services:
base:
image: centos_base
build:
context: base
master:
links:
- base
build:
context: master
image: centos_master
container_name: master01
hostname: master01
volumes:
- ansible_vol:/var/ans
networks:
- net
host01:
links:
- base
- master
build:
context: host
image: centos_host
container_name: host01
hostname: host01
command: ["/var/run.sh"]
volumes:
- ansible_vol:/var/ans
networks:
- net
networks:
net:
volumes:
ansible_vol:
My Dockerfiles are below.
Base image Dockerfile:
# For centos7.0
FROM centos:7
RUN yum install -y net-tools man vim initscripts openssh-server
RUN echo "12345" | passwd root --stdin
RUN mkdir /root/.ssh
Master Dockerfile:
FROM centos_base:latest
# install ansible package
RUN yum install -y epel-release
RUN yum install -y ansible openssh-clients
RUN mkdir /var/ans
# change working directory
WORKDIR /var/ans
RUN ssh-keygen -t rsa -N 12345 -C "master key" -f master_key
CMD /usr/sbin/sshd -D
Host Image Dockerfile:
FROM centos_base:latest
RUN mkdir /var/ans
COPY run.sh /var/
RUN chmod 755 /var/run.sh
My run.sh file:
#!/bin/bash
cat /var/ans/master_key.pub >> /root/.ssh/authorized_keys
# start SSH server
/usr/sbin/sshd -D
My Problems are:
If I run docker-compose up -d --build, I see no containers coming up; they all get created but exit.
Successfully tagged centos_host:latest
Creating working_base_1 ... done
Creating master01 ... done
Creating host01 ... done
CONTAINER ID   IMAGE           COMMAND                  CREATED          STATUS                      PORTS   NAMES
433baf2dd0d8   centos_host     "/var/run.sh"            12 minutes ago   Exited (1) 12 minutes ago           host01
a2a57e480635   centos_master   "/bin/sh -c '/usr/sb…"   13 minutes ago   Exited (1) 12 minutes ago           master01
a4acf6fb3e7b   centos_base     "/bin/bash"              13 minutes ago   Exited (0) 13 minutes ago           working_base_1
The SSH keys generated in the 'centos_master' image are not available in the centos_host container, even though I have added the volume mapping 'ansible_vol:/var/ans' in the docker-compose file.
My intention is that the SSH key files generated in the master should be available in the host containers, so that the run.sh script can copy them into the authorized_keys file of the host containers.
Any help is greatly appreciated.
Try putting this in base/Dockerfile:
RUN echo "12345" | passwd root --stdin; \
ssh-keygen -f /etc/ssh/ssh_host_rsa_key -N '' -b 4096 -t rsa
and rerun docker-compose build
/etc/ssh/ssh_host_rsa_key is the host key used by sshd (the SSH daemon); sshd will not start without it, so generating it lets the containers start properly.
The key you generated and copied into authorized_keys is what allows an SSH client to connect to the container via ssh.
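For clarity, the base/Dockerfile with that change folded in would look roughly like this:
# For centos7.0
FROM centos:7
RUN yum install -y net-tools man vim initscripts openssh-server
# Set the root password and generate the host key that sshd needs at startup
RUN echo "12345" | passwd root --stdin; \
    ssh-keygen -f /etc/ssh/ssh_host_rsa_key -N '' -b 4096 -t rsa
RUN mkdir /root/.ssh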
Try using external: false, so that it does not attempt to create the volume again and override the previous data at creation:
version: "3"
services:
base:
image: centos_base
build:
context: base
master:
links:
- base
build:
context: master
image: centos_master
container_name: master01
hostname: master01
volumes:
- ansible_vol:/var/ans
networks:
- net
host01:
links:
- base
- master
build:
context: host
image: centos_host
container_name: host01
hostname: host01
command: ["/var/run.sh"]
volumes:
- ansible_vol:/var/ans
networks:
- net
networks:
net:
volumes:
ansible_vol:
external: false
I have a Docker container for my application with a REST API.
I want to check what happened via docker logs, but it prints output in a strange format:
docker logs -f 2b46ac8629f5
* Running on http://0.0.0.0:9002/ (Press CTRL+C to quit)
172.18.0.5 - - [03/Mar/2019 13:53:38] code 400, message Bad HTTP/0.9 request type
("\x16\x03\x01\x00«\x01\x00\x00§\x03\x03\x0euçÍ'ïá\x98\x12\\W5¥Ä\x01\x08")
172.18.0.5 - - [03/Mar/2019 13:53:38] "«§uçÍ'ïá\W5¥µuìz«Ôw48À,À0̨̩̪À+À/À$À(kÀ#À'gÀ" HTTPStatus.BAD_REQUEST -
Is it possible to fix these strings somehow?
UPDATE:
Part of my docker-compose file looks like:
api:
  container_name: api
  restart: always
  build: ./web
  ports:
    - "9002:9002"
  volumes:
    - ./myapp.co.cert:/usr/src/app/myapp.co.cert
    - /usr/src/app/myapp/static
    - ./myapp.co.key:/usr/src/appmyapp.co.key
  depends_on:
    - postgres
The Dockerfile of the web container looks like:
cat web/Dockerfile
# For better understanding of what is going on please follow this link:
# https://github.com/docker-library/python/blob/f12c2df135aef8c3f645d90aae582b2c65dbc3b5/3.6/jessie/onbuild/Dockerfile
FROM python:3.6.4-onbuild
# Start myapp API.
CMD ["python", "api.py"]
api.py starts the Flask app using Python 3.6.
I’m running a Flask application with Celery for submitting sub-processes using docker-compose.
However, I cannot make Celery work when trying to run it in a different container.
If I run Celery in the same container as the Flask app it works, but it feels like the wrong way, since I'm coupling two different things in one container. I do it by adding this to the startup script before the Flask app runs:
nohup celery worker -A app.controller.engine.celery -l info &
However, if I add Celery as a new container in my docker-compose.yml, it doesn't work. This is my config:
(..)
engine:
  image: engine:latest
  container_name: engine
  ports:
    - 5000:5000
  volumes:
    - $HOME/data/engine-import:/app/import
  depends_on:
    - mongo
    - redis
  environment:
    - HOST=localhost
celery:
  image: engine:latest
  environment:
    - C_FORCE_ROOT=true
  command: ["/bin/bash", "-c", "./start-celery.sh"]
  user: nobody
  depends_on:
    - redis
(..)
And this is the start-celery.sh:
#!/bin/bash
source ./env/bin/activate
cd ..
celery worker -A app.controller.engine.celery -l info
Its logs:
INFO:engineio:Server initialized for eventlet.
INFO:engineio:Server initialized for threading.
[2018-09-12 09:43:19,649: INFO/MainProcess] Connected to redis://redis:6379//
[2018-09-12 09:43:19,664: INFO/MainProcess] mingle: searching for neighbors
[2018-09-12 09:43:20,697: INFO/MainProcess] mingle: all alone
[2018-09-12 09:43:20,714: INFO/MainProcess] celery#8729618bd4bc ready.
And that's all; tasks are not submitted to it.
What could be missing?
I've found that it works only if I add this to the docker-compose definition of the celery service:
environment:
  - C_FORCE_ROOT=true
I wonder though why I didn't get any error otherwise.
I have a simple but curious question. I have based my image on the Node.js image and installed Redis on top of it; now I want both Redis and the Node.js app running in the container when I do docker-compose up. However, I can only get one working, and node always gives me an error. Does anyone have any idea about the following?
How to start the Node.js application on docker-compose up?
How to have Redis running in the background in the same image/container?
My Dockerfile is below.
# Set the base image to node
FROM node:0.12.13
# Update the repository and install Redis Server
RUN apt-get update && apt-get install -y redis-server libssl-dev wget curl gcc
# Expose Redis port 6379
EXPOSE 6379
# Bundle app source
COPY ./redis.conf /etc/redis.conf
EXPOSE 8400
WORKDIR /root/chat/
CMD ["node","/root/www/helloworld.js"]
ENTRYPOINT ["/usr/bin/redis-server"]
The error I get from the console logs is:
chat_1 | [1] 18 Apr 02:27:48.003 # Fatal error, can't open config file 'node'
My docker-compose.yml is like below:
chat:
  build: ./.config/etc/chat/
  volumes:
    - ./chat:/root/chat
  expose:
    - 8400
  ports:
    - 6379:6379
    - 8400:8400
  environment:
    CODE_ENV: debug
    MYSQL_DATABASE: xyz
    MYSQL_USER: xyz
    MYSQL_PASSWORD: xyz
  links:
    - mysql
  #command: "true"
A Dockerfile can only have one main process. When both ENTRYPOINT and CMD are set, the CMD is passed as arguments to the ENTRYPOINT, which is why redis-server is complaining that it can't open the config file 'node'. That said, you can run multiple processes in a single Docker image using a process manager such as supervisord. There are countless recipes for doing this all over the internet. You might use this Docker image as a base:
https://github.com/million12/docker-centos-supervisor
However, I don't see why you wouldn't use docker compose to spin up a separate redis container, just like you seem to want to do with mysql. BTW where is the mysql definition in the docker-compose file you posted?
Here's an example of a compose file I use to build a node image in the current directory and spin up redis as well.
web:
  build: .
  ports:
    - "3000:3000"
    - "8001:8001"
  environment:
    NODE_ENV: production
    REDIS_HOST: redis://db:6379
  links:
    - "db"
db:
  image: docker.io/redis:2.8
It should work with a Dockerfile looking like the one you have, minus trying to start up Redis.
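As a rough sketch of what that stripped-down Dockerfile could look like (based on the one in the question, with the Redis parts removed):
# Set the base image to node
FROM node:0.12.13
# Only the build tools the app itself needs; Redis now runs in its own container
RUN apt-get update && apt-get install -y libssl-dev wget curl gcc
EXPOSE 8400
WORKDIR /root/chat/
# Start the Node.js app as the container's single main process
CMD ["node", "/root/www/helloworld.js"]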