Consider a simple docker-compose.yml that looks something like this:
version: "3"
services:
api:
image: my-container:latest
command: ["gunicorn", "--bind", "0.0.0.0:8000", "wsgi:app"]
volumes:
- ./api:/api
The api service is an nginx-based Python Flask web app that runs gunicorn. Occasionally I break the Flask app, gunicorn exits with a non-zero exit code, and the container stops running. I then rebuild all my containers. I have tried the following to restart the container upon failure, to no avail:
version: "3"
services:
api:
image: my-container:latest
command: ["gunicorn", "--bind", "0.0.0.0:8000", "wsgi:app"]
volumes:
- ./api:/api
deploy:
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 5
window: 60s
With this configuration, Compose ignores the deploy option and prints the following warning: WARNING: Some services (api) use the 'deploy' key, which will be ignored. Compose does not support 'deploy' configuration - use docker stack deploy to deploy to a swarm. I'm not deploying to a swarm.
How can I automatically restart my container upon failing with a non-zero exit code?
The deploy section only takes effect when deploying to a swarm with docker stack deploy, and is ignored by docker-compose up and docker-compose run.
The restart section only takes effect when using docker-compose up and docker-compose run.
version: "3"
services:
api:
image: my-container:latest
command: ["gunicorn", "--bind", "0.0.0.0:8000", "wsgi:app"]
volumes:
- ./api:/api
restart: no|always|on-failure|unless-stopped
See docs: https://docs.docker.com/compose/compose-file/#restart
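For this particular case (restart the container only after it exits with a non-zero code), on-failure is the value to use. A minimal sketch of the service from above with that option set:

version: "3"
services:
  api:
    image: my-container:latest
    command: ["gunicorn", "--bind", "0.0.0.0:8000", "wsgi:app"]
    volumes:
      - ./api:/api
    # restart only after a non-zero exit code;
    # always or unless-stopped would also restart after clean exits
    restart: on-failure

Note that always and unless-stopped also cover the crash case, but they will additionally restart the container after it exits cleanly.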
I am trying to access a Docker container which exposes an Express API (using Docker Compose services) in GitLab CI in order to run a number of tests against it.
I set up and instantiate the necessary Docker services as one task, then attempt to access the API via axios requests in my tests. I have set 0.0.0.0 as the endpoint base.
However, I keep receiving the error:
[Error: connect ECONNREFUSED 0.0.0.0:3000]
My docker-compose.yml:
version: "3"
services:
st-sample:
container_name: st-sample
image: sample
restart: always
build: .
expose:
- "3000"
ports:
- "3000:3000"
links:
- mongo
mongo:
container_name: mongo
image: mongo
volumes:
- /sampledb
expose:
- "27017"
ports:
- "27017:27017"
My gitlab-ci.yml:
image: docker:latest

services:
  - node
  - mongo
  - docker:dind

stages:
  - prepare_image
  - setup_application
  - test
  - teardown_application

prepare_image:
  stage: prepare_image
  script:
    - docker build -t sample .

setup_application:
  stage: setup_application
  script:
    - docker-compose -f docker-compose.yml up -d

test:
  image: node:latest
  stage: test
  allow_failure: true
  before_script:
    - npm install
  script:
    - npm test

teardown_application:
  stage: teardown_application
  script:
    - docker-compose -f docker-compose.yml stop
Note that I have also registered the runner on my machine and given it privileged permissions.
Locally everything works as expected - docker containers are initiated and are accessed for the tests.
However I am unable to do this via GitLab CI. The Docker containers build and get set up normally, however I am unable to access the exposed API.
I have tried many things, like setting the hostname for accessing the container, setting a static IP, using the container name, etc., but with no success; I just keep receiving ECONNREFUSED.
I understand that GitLab CI runners have their own network isolation strategy for security reasons, but I am simply unable to expose the docker service so it can be tested.
Can you give an insight to this please? Thank you.
I finally figured this out, following 4 days of reading, searching and lots of trial and error. The job running the tests was in a different container from the ones that exposed the API and the database.
I resolved this by creating a docker network in the device the runner was on:
sudo docker network create mynetwork
Following that, I declared the network in the docker-compose.yml file as external and attached both services to it:
st-sample:
  # ....
  networks:
    - mynetwork

mongo:
  # ....
  networks:
    - mynetwork

networks:
  mynetwork:
    external: true
Also, I created a custom docker image that includes the tests (named test), and in gitlab-ci.yml I set up the test job to run it within mynetwork:
docker run --network=mynetwork test
Following that, the containers/services were accessible by their names along each other, so I was able to run tests against http://st-sample.
It was a long journey to figure it all out, but it was well-worth it - I learned a lot!
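For reference, here is a sketch of what that test image could look like; the base image, file layout, environment variable name, and npm test entrypoint are assumptions for illustration, not the exact Dockerfile that was used:

# Dockerfile for the hypothetical "test" image (built on the runner with: docker build -t test .)
FROM node:latest
WORKDIR /app

# install dependencies and copy the test suite into the image
COPY package.json package-lock.json ./
RUN npm install
COPY . .

# point the tests at the API by its compose service name instead of 0.0.0.0
# (API_BASE_URL is an illustrative variable name, read by the test code)
ENV API_BASE_URL=http://st-sample:3000

CMD ["npm", "test"]

With the image built on the runner, the docker run --network=mynetwork test command from the CI job executes the suite, and axios can use http://st-sample:3000 as its base URL.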
I am trying to deploy a Django application on Azure App Service using docker-compose. I have already pushed two images to ACR, and when I ran the docker-compose.yml on my computer everything worked fine, but when I deploy the app and visit the URL it only shows the following message:
":( Application Error
If you are the application administrator, you can access the diagnostic resources."
I also searched the Deployment Center logs for any kind of info, but there is only this message:
"Failed to load container logs: Resource containerlog of type text not found"
Any idea how I can fix it?
This is my docker-compose.yml:
services:
  web:
    image: zaito.azurecr.io/backgroundtasks:latest
    command: uwsgi --http "0.0.0.0:8000" --protocol uwsgi --module zaitoTasksApi.wsgi --master --processes 4 --threads 2
    volumes:
      - static_volume:/usr/src/app/static
      - media_volume:/usr/src/app/media
    expose:
      - "8000"

  nginx:
    image: zaito.azurecr.io/nginx:django
    restart: always
    volumes:
      - static_volume:/usr/src/app/static
      - media_volume:/usr/src/app/media
    ports:
      - "80:80"
    depends_on:
      - web

volumes:
  media_volume:
  static_volume:
I have a dockerized Node.js service which, unfortunately, due to the 3rd-party modules it uses, keeps crashing randomly, roughly once or twice per day. The problem is that I need this service to stay up at all times. I tried the restart option in my compose file, but it doesn't do anything.
Is there anything I am doing wrong, or have I missed something, to get the docker container to restart when the node process crashes?
Here is my compose snippet:
version: "3"
services:
api:
env_file:
- .env
build:
context: .
ports:
- '5000:5000'
depends_on:
- postgres
networks:
- postgres
restart: always
docker-compose up locally is able to build and bring up the services, but when doing the same on Azure Container Instances (ACI) I get the error below:
containerinstance.ContainerGroupsClient#CreateOrUpdate: Failure
sending request: StatusCode=400 -- Original Error:
Code="InaccessibleImage" Message="The image
'docker/aci-hostnames-sidecar:1.0' in container group 'djangodeploy'
is not accessible. Please check the image and registry credential."
Also, what is the purpose of the image docker/aci-hostnames-sidecar?
The ACI deployment was working fine, and now it suddenly doesn't work anymore.
The docker-compose.yml contents are provided below:
version: '3.7'
services:
  django-gunicorn:
    image: oasisdatacr.azurecr.io/oasis_django:latest
    env_file:
      - .env
    build:
      context: .
    command: >
      sh -c "
      python3 manage.py makemigrations &&
      python3 manage.py migrate &&
      python3 manage.py wait_for_db &&
      python3 manage.py runserver 0:8000"
    ports:
      - "8000:8000"

  celery-worker:
    image: oasisdatacr.azurecr.io/oasis_django_celery_worker:latest
    restart: always
    build:
      context: .
    command: celery -A oasis worker -l INFO
    env_file:
      - .env
    depends_on:
      - django-gunicorn

  celery-beat:
    image: oasisdatacr.azurecr.io/oasis_django_celery_beat:latest
    build:
      context: .
    command: celery -A oasis beat -l INFO
    env_file:
      - .env
    depends_on:
      - django-gunicorn
      - celery-worker
UPDATE: There might have been some issue on Azure's end, as I was later able to deploy the containers as I usually do, without any changes whatsoever.
When you use docker-compose to deploy multiple containers to ACI, you first need to build the images locally and then push them to ACR with the command docker-compose push; of course, you need to log in to your ACR first. See the example here.
And if you have already pushed the images to your ACR, then make sure you are logging in to your ACR with the right credentials and that the image names and tags are exactly right.
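In practice that sequence looks something like the following sketch, assuming the Azure CLI is installed and the registry is the oasisdatacr one referenced in the compose file above (a plain docker login with the ACR credentials works as well):

# log in to the registry referenced by the image names in docker-compose.yml
az acr login --name oasisdatacr

# build all images defined in the compose file, then push them to ACR
docker-compose build
docker-compose push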
I do not understand how to run WebdriverIO e2e tests against my Node.js application.
As you can see below, my Node.js application also runs as a docker container.
But now I am stuck on some very basic things:
Where do I have to put the test files which I want to run? Do I have to copy them into the webdriverio container? If yes, into which folder?
How do I run the tests then?
This is my docker-compose setup for all the needed docker containers:
services:
  webdriverio:
    image: huli/webdriverio:latest
    depends_on:
      - chrome
      - hub
    environment:
      - HUB_PORT_4444_TCP_ADDR=hub
      - HUB_PORT_4444_TCP_PORT=4444

  hub:
    image: selenium/hub
    ports:
      - 4444:4444

  chrome:
    image: selenium/node-chrome
    ports:
      - 5900
    environment:
      - HUB_PORT_4444_TCP_ADDR=hub
      - HUB_PORT_4444_TCP_PORT=4444
    depends_on:
      - hub

  myApp:
    container_name: myApp
    image: 'registry.example.com/project/app:latest'
    restart: always
    links:
      - 'mongodb'
    environment:
      - ROOT_URL=https://example.com
      - MONGO_URL=mongodb://mongodb/db

  mongodb:
    container_name: mongodb
    image: 'mongo:3.4'
    restart: 'always'
    volumes:
      - '/opt/mongo/db/live:/data/db'
I have a complete minimal example in this link: Github
Q. So where do I have to put the test files which I want to run?
Put them in the container that will run Webdriver.io.
Q. Do I have to copy them into Webdriverio container? If yes, in which folder?
Yes. Put them wherever you want. Then you can run them sequentially, from a command: script if you like.
Q. How do I run the tests then?
With docker-compose up the test container will start and begin to run them all.
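As a concrete illustration, one way to wire this up is to mount the spec files and wdio config into the webdriverio service and let its command run them. The mount path, config file name, and wdio invocation below are assumptions for the sketch, not something mandated by the huli/webdriverio image:

services:
  webdriverio:
    image: huli/webdriverio:latest
    depends_on:
      - chrome
      - hub
    environment:
      - HUB_PORT_4444_TCP_ADDR=hub
      - HUB_PORT_4444_TCP_PORT=4444
    # mount the spec files and wdio config from the host into the container...
    volumes:
      - ./test:/opt/tests
    # ...and run them when the container starts
    command: wdio /opt/tests/wdio.conf.js

With that in place, docker-compose up brings up the hub, the chrome node, the application, and the test run itself in one go.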