I’m running a Flask application, with Celery for submitting background jobs, using docker-compose.
However, I cannot make Celery work when I try to run it in a separate container.
If I run Celery in the same container as the Flask app, it works, but that feels like the wrong way to do it: I’m coupling two different things in one container. I do it by adding this to the startup script before the Flask app runs:
nohup celery worker -A app.controller.engine.celery -l info &
However, if I add Celery as a new service in my docker-compose.yml, it doesn’t work. This is my config:
(..)
engine:
  image: engine:latest
  container_name: engine
  ports:
    - 5000:5000
  volumes:
    - $HOME/data/engine-import:/app/import
  depends_on:
    - mongo
    - redis
  environment:
    - HOST=localhost

celery:
  image: engine:latest
  environment:
    - C_FORCE_ROOT=true
  command: ["/bin/bash", "-c", "./start-celery.sh"]
  user: nobody
  depends_on:
    - redis
(..)
And this is the start-celery.sh:
#!/bin/bash
source ./env/bin/activate
cd ..
celery worker -A app.controller.engine.celery -l info
Its logs:
INFO:engineio:Server initialized for eventlet.
INFO:engineio:Server initialized for threading.
[2018-09-12 09:43:19,649: INFO/MainProcess] Connected to redis://redis:6379//
[2018-09-12 09:43:19,664: INFO/MainProcess] mingle: searching for neighbors
[2018-09-12 09:43:20,697: INFO/MainProcess] mingle: all alone
[2018-09-12 09:43:20,714: INFO/MainProcess] celery@8729618bd4bc ready.
And that’s all; processes are never submitted to it.
What can be missing?
I've found that it works only if I add this to the docker-compose definition of the celery service:
environment:
  - C_FORCE_ROOT=true
I wonder, though, why I didn't get any error otherwise.
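For reference, C_FORCE_ROOT is the environment variable Celery checks before it will let a worker run as root. A minimal sketch of the celery service with the fix applied, mirroring the config above:

celery:
  image: engine:latest
  environment:
    - C_FORCE_ROOT=true  # allow the worker to run as root inside the container
  command: ["/bin/bash", "-c", "./start-celery.sh"]
  depends_on:
    - redis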
Running docker-compose up locally builds and brings up the services, but when I do the same on Azure Container Instances I get the error below:
containerinstance.ContainerGroupsClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InaccessibleImage" Message="The image 'docker/aci-hostnames-sidecar:1.0' in container group 'djangodeploy' is not accessible. Please check the image and registry credential."
Also, what is the purpose of the image docker/aci-hostnames-sidecar?
The ACI deployment was working fine, and now it suddenly doesn't work anymore.
The docker-compose.yml contents are provided below:
version: '3.7'
services:
  django-gunicorn:
    image: oasisdatacr.azurecr.io/oasis_django:latest
    env_file:
      - .env
    build:
      context: .
    command: >
      sh -c "
      python3 manage.py makemigrations &&
      python3 manage.py migrate &&
      python3 manage.py wait_for_db &&
      python3 manage.py runserver 0:8000"
    ports:
      - "8000:8000"
  celery-worker:
    image: oasisdatacr.azurecr.io/oasis_django_celery_worker:latest
    restart: always
    build:
      context: .
    command: celery -A oasis worker -l INFO
    env_file:
      - .env
    depends_on:
      - django-gunicorn
  celery-beat:
    image: oasisdatacr.azurecr.io/oasis_django_celery_beat:latest
    build:
      context: .
    command: celery -A oasis beat -l INFO
    env_file:
      - .env
    depends_on:
      - django-gunicorn
      - celery-worker
UPDATE: There might have been some issue on Azure's end, as I was later able to deploy the containers as I usually do, without any changes whatsoever.
When you use docker-compose to deploy multiple containers to ACI, you first need to build the images locally and then push them to your ACR with the command docker-compose push; of course, you need to log in to your ACR first. See the example here.
And if you have already pushed the images to your ACR, then make sure you are logged in to your ACR with the right credentials and that the image name and tag are exactly right.
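A minimal sketch of that workflow, assuming the registry from the compose file above (oasisdatacr) and that the Azure CLI is installed:

# log in to the registry, then build and push the images named in docker-compose.yml
az acr login --name oasisdatacr
docker-compose build
docker-compose push

docker-compose push decides where to push from each service's image name, which is why the names above carry the full oasisdatacr.azurecr.io/... prefix.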
I have a Docker container for my application with a REST API.
I want to check what happened via docker logs, but it prints in a strange format:
docker logs -f 2b46ac8629f5
* Running on http://0.0.0.0:9002/ (Press CTRL+C to quit)
172.18.0.5 - - [03/Mar/2019 13:53:38] code 400, message Bad HTTP/0.9 request type
("\x16\x03\x01\x00«\x01\x00\x00§\x03\x03\x0euçÍ'ïá\x98\x12\\W5¥Ä\x01\x08")
172.18.0.5 - - [03/Mar/2019 13:53:38] "«§uçÍ'ïá\W5¥µuìz«Ôw48À,À0̨̩̪À+À/À$À(kÀ#À'gÀ" HTTPStatus.BAD_REQUEST -
Is it possible to fix these strings somehow?
UPDATE:
Part of my docker-compose file looks like this:
api:
  container_name: api
  restart: always
  build: ./web
  ports:
    - "9002:9002"
  volumes:
    - ./myapp.co.cert:/usr/src/app/myapp.co.cert
    - /usr/src/app/myapp/static
    - ./myapp.co.key:/usr/src/app/myapp.co.key
  depends_on:
    - postgres
The Dockerfile of the web container looks like this:
cat web/Dockerfile
# For better understanding of what is going on please follow this link:
# https://github.com/docker-library/python/blob/f12c2df135aef8c3f645d90aae582b2c65dbc3b5/3.6/jessie/onbuild/Dockerfile
FROM python:3.6.4-onbuild
# Start myapp API.
CMD ["python", "api.py"]
api.py starts the Flask app using Python 3.6.
Consider a simple docker-compose.yml that looks something like this:
version: "3"
services:
api:
image: my-container:latest
command: ["gunicorn", "--bind", "0.0.0.0:8000", "wsgi:app"]
volumes:
- ./api:/api
The api service is an nginx-based Python Flask web app that runs gunicorn. Occasionally I break the Flask app, and gunicorn exits with a non-zero exit code and stops running. I then rebuild all my containers. I have tried the following to restart the container upon failure, to no avail:
version: "3"
services:
api:
image: my-container:latest
command: ["gunicorn", "--bind", "0.0.0.0:8000", "wsgi:app"]
volumes:
- ./api:/api
deploy:
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 5
window: 60s
With this configuration, Compose ignores the deploy option and prints the following warning: WARNING: Some services (api) use the 'deploy' key, which will be ignored. Compose does not support 'deploy' configuration - use docker stack deploy to deploy to a swarm. I'm not deploying to a swarm.
How can I automatically restart my container upon failing with a non-zero exit code?
The deploy section only takes effect when deploying to a swarm with docker stack deploy, and is ignored by docker-compose up and docker-compose run.
The restart section only takes effect when using docker-compose up and docker-compose run.
version: "3"
services:
api:
image: my-container:latest
command: ["gunicorn", "--bind", "0.0.0.0:8000", "wsgi:app"]
volumes:
- ./api:/api
restart: no|always|on-failure|unless-stopped
See docs: https://docs.docker.com/compose/compose-file/#restart
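For this case, on-failure is the policy you want. One thing to watch: if you ever set the policy to no, quote it ("no"), since bare no is parsed as a YAML boolean. A minimal sketch:

version: "3"
services:
  api:
    image: my-container:latest
    command: ["gunicorn", "--bind", "0.0.0.0:8000", "wsgi:app"]
    volumes:
      - ./api:/api
    restart: on-failure  # restart whenever the container exits with a non-zero code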
I am using Docker to create multiple containers, one of which contains a RabbitMQ instance and another of which contains the Node.js service that should respond to queue activity. Looking through the docker-compose logs, I see a lot of ECONNREFUSED errors before the line indicating that RabbitMQ has started in its container. This indicates that RabbitMQ is starting after the service that needs it.
As a sidebar, just to eliminate any other possible causes, here is the connection string node.js uses to connect to RabbitMQ:
amqp://rabbitmq:5672
and here is the entry for RabbitMQ in the docker-compose.yaml file:
rabbitmq:
  container_name: "myapp_rabbitmq"
  tty: true
  image: rabbitmq:management
  ports:
    - 15672:15672
    - 15671:15671
    - 5672:5672
  volumes:
    - /rabbitmq/lib:/var/lib/rabbitmq
    - /rabbitmq/log:/var/log/rabbitmq
    - /rabbitmq/conf:/etc/rabbitmq/

service1:
  container_name: "service1"
  build:
    context: .
    dockerfile: ./service1.dockerfile
  links:
    - mongo
    - rabbitmq
  depends_on:
    - mongo
    - rabbitmq

service2:
  container_name: "service2"
  build:
    context: .
    dockerfile: ./service2/dockerfile
  links:
    - mongo
    - rabbitmq
  depends_on:
    - mongo
    - rabbitmq
What is the fix for this timing issue?
How could I get RabbitMQ to start before the consuming container starts?
Might this not be a timing issue, but a configuration issue in the docker-compose.yml entry I have listed?
It doesn't look like you have included a complete docker-compose file; I would expect to also see your node container in it. I think the problem is that you need a
depends_on:
  - "rabbitmq"
in the node container's section of your docker-compose file.
More info on Compose dependencies here: https://docs.docker.com/compose/startup-order/
Note: as this page suggests, you should do this in conjunction with making your app resilient to outages of external services.
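If you would rather keep the wait logic in the compose file itself, newer versions of Compose can gate startup on a healthcheck. A sketch (it assumes the rabbitmq image is recent enough to ship rabbitmq-diagnostics):

rabbitmq:
  image: rabbitmq:management
  healthcheck:
    test: ["CMD", "rabbitmq-diagnostics", "-q", "ping"]
    interval: 10s
    timeout: 5s
    retries: 5
service1:
  build:
    context: .
    dockerfile: ./service1.dockerfile
  depends_on:
    rabbitmq:
      condition: service_healthy  # wait for the healthcheck to pass, not just for the container to start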
You need to control the boot-up order of your dependent containers. The page below documents exactly that:
https://docs.docker.com/compose/startup-order/
I usually use the wait-for-it.sh script from the project below:
https://github.com/vishnubob/wait-for-it
So I would have a command like the one below in my service1:
wait-for-it.sh rabbitmq:5672 -t 90 -- command with args to launch service1
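Wired into the compose file from the question, that might look like the sketch below. Note that wait-for-it.sh has to be copied into the service1 image, and "node index.js" is a placeholder for whatever command service1 actually runs:

service1:
  build:
    context: .
    dockerfile: ./service1.dockerfile
  depends_on:
    - rabbitmq
  # block for up to 90 seconds until rabbitmq accepts TCP connections, then start the app
  command: ["./wait-for-it.sh", "rabbitmq:5672", "-t", "90", "--", "node", "index.js"]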
I have a simple but curious question. I have based my image on the nodejs image and installed Redis on top of it; now I want both Redis and the Node.js app running in the container when I do docker-compose up. However, I can only get one of them working; node always gives me an error. Does anyone have any idea:
How to start the Node.js application on docker-compose up?
How to start Redis running in the background in the same image/container?
My Dockerfile is below.
# Set the base image to node
FROM node:0.12.13
# Update the repository and install Redis Server
RUN apt-get update && apt-get install -y redis-server libssl-dev wget curl gcc
# Expose Redis port 6379
EXPOSE 6379
# Bundle app source
COPY ./redis.conf /etc/redis.conf
EXPOSE 8400
WORKDIR /root/chat/
CMD ["node","/root/www/helloworld.js"]
ENTRYPOINT ["/usr/bin/redis-server"]
The error I get from the console logs is:
chat_1 | [1] 18 Apr 02:27:48.003 # Fatal error, can't open config file 'node'
My docker-compose.yml is like below:
chat:
  build: ./.config/etc/chat/
  volumes:
    - ./chat:/root/chat
  expose:
    - 8400
  ports:
    - 6379:6379
    - 8400:8400
  environment:
    CODE_ENV: debug
    MYSQL_DATABASE: xyz
    MYSQL_USER: xyz
    MYSQL_PASSWORD: xyz
  links:
    - mysql
  #command: "true"
A container runs a single startup command: if a Dockerfile sets both ENTRYPOINT and CMD, the CMD is passed as arguments to the ENTRYPOINT, which is why redis-server is trying (and failing) to open 'node' as a config file. You can, however, run multiple processes in a single docker image using a process manager like supervisord. There are countless recipes for doing this all over the internet. You might use this docker image as a base:
https://github.com/million12/docker-centos-supervisor
However, I don't see why you wouldn't use docker-compose to spin up a separate redis container, just like you seem to want to do with mysql. BTW, where is the mysql definition in the docker-compose file you posted?
Here's an example of a compose file I use to build a node image in the current directory and spin up redis as well.
web:
  build: .
  ports:
    - "3000:3000"
    - "8001:8001"
  environment:
    NODE_ENV: production
    REDIS_HOST: redis://db:6379
  links:
    - "db"

db:
  image: docker.io/redis:2.8
It should work with a Dockerfile like the one you have, minus trying to start up Redis.
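For instance, the question's Dockerfile stripped down to just the node process might look like this (a sketch; the helloworld.js path is carried over from the question):

# Set the base image to node
FROM node:0.12.13
# No Redis here: the db service above provides it
EXPOSE 8400
WORKDIR /root/chat/
# One process per container: just the node app
CMD ["node", "/root/www/helloworld.js"]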