I have an application with 10 different services, including nginx, celery, and postgresql. Is it possible to deploy this using Azure Container Instances?
I have tried a few times, including pulling the image from ACR, but I cannot get this to work: I only see one container instead of all 10. I thought docker compose would automatically create all the instances, but I am struggling to understand the exact issue.
Here is the sample docker-compose file. Any guidance would be really helpful.
services:
  app: &app
    image: registry....../app
    build:
      context: .
    env_file: variables.env
    volumes:
      # - ./app/:/usr/src/app/
      - static:/usr/src/app/static
      - media:/usr/src/app/media
    command: /bin/true
  web:
    <<: *app
    command: uwsgi --ini /usr/src/app/uwsgi.ini
    expose:
      - 8000
  channelserver:
    <<: *app
    command: daphne --bind 0.0.0.0 --port 5000 -v 1 APP.asgi:application
    expose:
      - 5000
  db:
    image: postgres:10.5-alpine
    volumes:
      - postgres:/var/lib/postgresql/data/
    env_file: variables.env
  redis:
    image: redis:3.0-alpine
  nginx:
    image: registry...../nginx
    build: ./nginx
    environment:
      - SERVERNAME=${DOMAIN:-localhost}
    volumes:
      - type: bind
        source: $PWD/nginx/nginx.conf
        target: /etc/nginx/conf.d/nginx.conf
      - static:/usr/src/app/static
      - media:/usr/src/app/media
    ports:
      - 8080:80
  celery-main:
    <<: *app
    command: "celery worker -l INFO -E -A [APP] -Ofair --prefetch-multiplier 1 -Q default -c ${CELERY_MAIN_CONC:-10} --max-tasks-per-child=10"
  celery-low-priority:
    <<: *app
    command: "celery worker -l INFO -E -A [APP] -Ofair --prefetch-multiplier 1 -Q low-priority -c ${CELERY_LOW_CONC:-10} --max-tasks-per-child=10"
  celery-gpu: &celery-gpu
    <<: *app
    environment:
      - KRAKEN_TRAINING_DEVICE=cpu
    command: "celery worker -l INFO -E -A [APP] -Ofair --prefetch-multiplier 1 -Q gpu -c 1 --max-tasks-per-child=1"
    shm_size: '3gb'
  flower:
    image: mher/flower
    command: ["flower", "--broker=redis://redis:6379/0", "--port=5555"]
    ports:
      - 5555:5555
  mail:
    build: ./exim
    image: registry....../mail
    expose:
      - 25
    environment:
      - PRIMARY_HOST=${DOMAIN:-localhost}
      - ALLOWED_HOSTS=web ; celery-main ; celery-low-priority; docker0
volumes:
  static:
  media:
  postgres:
  esdata:
To deploy multiple containers in Azure Container Instances, you can use a YAML file or an Azure Resource Manager template; both deploy the containers directly to ACI as a single container group. In addition, you can also use a docker-compose file, following this document.
If you have more questions about this issue, please provide details and I'm willing to help.
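For reference, a multi-container ACI YAML file might look like the sketch below; the resource name, location, and images are placeholders, not taken from the question above. Each entry under `containers` becomes one container in the same container group:

```yaml
# Hypothetical container group with two services; deploy with:
#   az container create --resource-group myResourceGroup --file deploy-aci.yaml
apiVersion: 2019-12-01
location: eastus
name: myContainerGroup
properties:
  containers:
  - name: web
    properties:
      image: myregistry.azurecr.io/app:latest   # placeholder image
      resources:
        requests:
          cpu: 1
          memoryInGB: 1.5
  - name: redis
    properties:
      image: redis:alpine
      resources:
        requests:
          cpu: 1
          memoryInGB: 1
  osType: Linux
type: Microsoft.ContainerInstance/containerGroups
```

Keep in mind that all containers in a group share the group's CPU and memory limits, so a 10-service stack may need to be trimmed or split across groups.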
I have a project that uses redis.
This is my .gitlab-ci.yml:
image: node:latest

stages:
  - build
  - deploy

build:
  image: docker:stable
  services:
    - docker:dind
  stage: build
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build --pull -t "$DOCKER_IMAGE" .
    - docker push "$DOCKER_IMAGE"

deploy:
  stage: deploy
  services:
    - redis:latest
  script:
    - ssh $SSH_PRIVATE_KEY $USER_NAME@$HOST_NAME docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
    - ssh $SSH_PRIVATE_KEY $USER_NAME@$HOST_NAME
      docker run --name=$CI_PROJECT_NAME --restart=always --network="host" -v "/app/storage:/src/storage" -d $DOCKER_IMAGE
    - ssh -o StrictHostKeyChecking=no -i $SSH_PRIVATE_KEY $USER_NAME@$HOST_NAME docker compose up
And this is my docker-compose.yml:
version: "3.2"
services:
  redis:
    image: redis:latest
    volumes:
      - redisdata:/data/
    command: --port 6379
    ports:
      - "6379:6379"
    expose:
      - "6379"
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
volumes:
  redisdata:
Everything is OK, but the last line does not work for me:
- ssh -o StrictHostKeyChecking=no -i $SSH_PRIVATE_KEY $USER_NAME@$HOST_NAME docker compose up
So my redis image is not running on that host.
Note: I want one deployment running 2 images: my project and redis.
I have a docker compose file
version: "3.5"
services:
  service1:
    image: serv1image
    container_name: service1
    networks:
      - local
    depends_on:
      - service2
      - service3
  service2:
    image: serv2image
    container_name: service2
    networks:
      - local
    ports:
      - "27017:27017"
  service3:
    image: serv3image
    container_name: service3
    networks:
      - local
    ports:
      - "2000:2000"
  service4:
    image: serv4image
    container_name: service4
    networks:
      - local
    ports:
      - "8090:8090"
    depends_on:
      - service2
      - service3
networks:
  local-network:
    driver: bridge
    name: local
I initially started service1 using
docker-compose -f docker-compose.yaml up --no-recreate -d service1
As expected, it started service1 and also booted up service2 and service3, as they are its dependencies.
Later, I had to start service4. I started it using the following command:
docker-compose -f docker-compose.yaml up --no-recreate -d service4
But unfortunately this also tried to recreate service2 and service3, as they are its dependencies, despite the --no-recreate flag, and I got a conflict error stating:
Cannot create container for service service4: Conflict. The container name "service2" is already in use by container
Why is this happening, and is there any way to avoid this behaviour?
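Not part of the original question, but for context: docker-compose also has a --no-deps flag that skips starting or recreating a service's dependencies entirely, which is a common way to sidestep this kind of name conflict. Sketched against the file above:

```shell
# --no-deps: don't start linked services, so service2/service3 are left alone
docker-compose -f docker-compose.yaml up --no-recreate --no-deps -d service4
```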
I am currently facing a problem with docker, docker-compose, and postgres that is driving me insane. I have updated my docker-compose with a new postgres password and updated my sqlalchemy create_all method with a new table model, but none of these changes are taking effect.
When I log in to the database container, it is still using the old password, and the table columns have not been updated. I have run all the docker commands I can think of, to no avail:
docker-compose down --volumes
docker rmi $(docker images -a -q)
docker system prune -a
docker-compose build --no-cache
After running these commands, I verified that the docker image is gone. I have no images or containers on my machine, but the new postgres container is still always created with the previous password. Below is my docker-compose file (I am aware that passwords in docker-compose files are a bad idea; this is a personal project and I intend to change it to pull a secret from KMS down the road):
services:
  api:
    # container_name: rebindme-api
    build:
      context: api/
    restart: always
    container_name: rebindme_api
    environment:
      - API_DEBUG=1
      - PYTHONUNBUFFERED=1
      - DATABASE_URL=postgresql://rebindme:password@db:5432/rebindme
    # context: .
    # dockerfile: api/Dockerfile
    ports:
      - "8443:8443"
    volumes:
      - "./api:/opt/rebindme/api:ro"
    depends_on:
      db:
        condition: service_healthy
    image: rebindme_api
    networks:
      web-app:
        aliases:
          - rebindme-api
  db:
    image: postgres
    container_name: rebindme_db
    # build:
    #   context: ./postgres
    #   dockerfile: db.Dockerfile
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
      # - ./sql/create_tables.sql:/docker-entrypoint-initdb.d/create_tables.sql
    environment:
      POSTGRES_USER: rebindme
      POSTGRES_PASSWORD: password
      POSTGRES_DB: rebindme
      #03c72130-a807-491e-86aa-d4af52c2cdda
    healthcheck:
      test: ["CMD", "psql", "postgresql://rebindme:password@db:5432/rebindme"]
      interval: 10s
      timeout: 5s
      retries: 5
    restart: always
    networks:
      web-app:
        aliases:
          - postgres-network
  client:
    container_name: rebindme_client
    build:
      context: client/
    volumes:
      - "./client:/opt/rebindme/client"
      # - nodemodules:/node_modules
    # ports:
    #   - "80:80"
    image: rebindme-client
    networks:
      web-app:
        aliases:
          - rebindme-client
  nginx:
    depends_on:
      - client
      - api
    image: nginx
    ports:
      - "80:80"
    volumes:
      - "./nginx/default.conf:/etc/nginx/conf.d/default.conf"
      - "./nginx/ssl:/etc/nginx/ssl"
    networks:
      web-app:
        aliases:
          - rebindme-proxy
# volumes:
#   database_data:
#     driver: local
#   nodemodules:
#     driver: local
networks:
  web-app:
    # name: db_network
    # driver: bridge
The password commented out under POSTGRES_DB: rebindme is the one that it is still using somehow. I can post more code or whatever else is needed, just let me know. Thanks in advance for your help!
The answer ended up being that the images still existed. The command below did not actually remove all images, just unused ones:
docker system prune -a
I did go ahead and delete the postgres data as Pooya recommended, though I am not sure that was necessary, as I had already done that (which I forgot to mention). The real solution for me was:
docker image ls
docker rmi rebindme-client:latest
docker rmi rebindme-api:latest
Then, finally, the new postgres config took effect.
Because you mounted the volume manually (a host volume), docker-compose down --volumes does not actually remove it. If you no longer need the volume and want to remove it, you have to delete the folder yourself (its location depends on the operating system) and then rerun docker-compose.
The command docker-compose down -v removes only the following volume types:
Named volumes
Anonymous volumes
# Linux operating system
rm -rf ./postgres-data
docker-compose build --no-cache
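The distinction is visible in the compose file itself. As a sketch (service and volume names are illustrative, not from the question above): a named volume is declared under the top-level volumes: key and is removed by docker-compose down -v, while a host path mount is never removed by compose:

```yaml
services:
  db:
    image: postgres
    volumes:
      - pgdata:/var/lib/postgresql/data            # named volume: removed by `down -v`
      # - ./postgres-data:/var/lib/postgresql/data # host volume: must be deleted manually
volumes:
  pgdata:
```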
This is my first time using docker. I have downloaded a docker-compose.yml from https://github.com/wodby/docker4wordpress, which is a repo with Docker images for WordPress. I am planning to use WordPress as a backend and consume its blog posts via its API in a Gatsby site. My problem is that, since I don't have experience with docker, I am unable to run my Gatsby site locally using the node image the above link provides inside my docker-compose.yml. I can run WordPress successfully, however, as my boss helped me configure the file before I tried to use Gatsby and Node.
My project structure :
Wordpress_Project
|
+-- numerous wordpress folders
|
+-- gatsby_site folder
|
+-- docker-compose.yml
My docker-compose.yml
version: "3"
services:
  mariadb:
    image: wodby/mariadb:$MARIADB_TAG
    container_name: "${PROJECT_NAME}_mariadb"
    stop_grace_period: 30s
    environment:
      MYSQL_ROOT_PASSWORD: $DB_ROOT_PASSWORD
      MYSQL_DATABASE: $DB_NAME
      MYSQL_USER: $DB_USER
      MYSQL_PASSWORD: $DB_PASSWORD
    volumes:
      # - ./mariadb-init:/docker-entrypoint-initdb.d # Place init .sql file(s) here.
      - bedrock_dbdata:/var/lib/mysql # I want to manage volumes manually.
  php:
    image: wodby/wordpress-php:$PHP_TAG
    container_name: "${PROJECT_NAME}_php"
    environment:
      PHP_SENDMAIL_PATH: /usr/sbin/sendmail -t -i -S mailhog:1025
      DB_HOST: $DB_HOST
      DB_USER: $DB_USER
      DB_PASSWORD: $DB_PASSWORD
      DB_NAME: $DB_NAME
      PHP_FPM_USER: www-data
      PHP_FPM_GROUP: www-data
      # # Read instructions at https://wodby.com/docs/stacks/wordpress/local#xdebug
      # PHP_XDEBUG: 1
      # PHP_XDEBUG_MODE: debug
      # PHP_IDE_CONFIG: serverName=my-ide
      # PHP_XDEBUG_IDEKEY: "my-ide"
      # PHP_XDEBUG_CLIENT_HOST: 172.17.0.1 # Linux
      # PHP_XDEBUG_CLIENT_HOST: host.docker.internal # Docker 18.03+ Mac/Win
      # PHP_XDEBUG_CLIENT_HOST: 10.0.75.1 # Windows
      # PHP_XDEBUG_LOG: /tmp/php-xdebug.log
    volumes:
      - ./:${CODEBASE}:cached
      ## Alternative for macOS users: Mutagen https://wodby.com/docs/stacks/wordpress/local#docker-for-mac
      # - mutagen:/var/www/html
      # # For XHProf and Xdebug profiler traces
      # - files:/mnt/files
  crond:
    image: wodby/wordpress-php:$PHP_TAG
    container_name: "${PROJECT_NAME}_crond"
    environment:
      CRONTAB: "0 * * * * wp cron event run --due-now --path=${CODEBASE}/web/wp"
    command: sudo -E LD_PRELOAD=/usr/lib/preloadable_libiconv.so crond -f -d 0
    volumes:
      - ./:${CODEBASE}:cached
  mailhog:
    image: mailhog/mailhog
    container_name: "${PROJECT_NAME}_mailhog"
    labels:
      - "traefik.http.services.${PROJECT_NAME}_mailhog.loadbalancer.server.port=8025"
      - "traefik.http.routers.${PROJECT_NAME}_mailhog.rule=Host(`mailhog.${PROJECT_BASE_URL}`)"
  nginx:
    image: wodby/nginx:$NGINX_TAG
    container_name: "${PROJECT_NAME}_nginx"
    depends_on:
      - php
    environment:
      NGINX_STATIC_OPEN_FILE_CACHE: "off"
      NGINX_ERROR_LOG_LEVEL: debug
      NGINX_BACKEND_HOST: php
      NGINX_VHOST_PRESET: wordpress
      NGINX_SERVER_NAME: ${PROJECT_BASE_URL}
      NGINX_SERVER_ROOT: ${CODEBASE}/web
    volumes:
      - ./:${CODEBASE}:cached
      ## Alternative for macOS users: Mutagen https://wodby.com/docs/stacks/wordpress/local#docker-for-mac
      # - mutagen:/var/www/html
    labels:
      - "traefik.http.routers.${PROJECT_NAME}_nginx.rule=Host(`${PROJECT_BASE_URL}`)"
  portainer:
    image: portainer/portainer-ce
    container_name: "${PROJECT_NAME}_portainer"
    command: -H unix:///var/run/docker.sock
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    labels:
      - "traefik.http.services.${PROJECT_NAME}_portainer.loadbalancer.server.port=9000"
      - "traefik.http.routers.${PROJECT_NAME}_portainer.rule=Host(`portainer.${PROJECT_BASE_URL}`)"
  traefik:
    image: traefik:v2.4
    container_name: "${PROJECT_NAME}_traefik"
    command: --api.insecure=true --providers.docker
    ports:
      - '80:80'
      # - '8080:8080' # Dashboard
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  # this is the node image where I want to run gatsby locally
  node:
    image: wodby/node:$NODE_TAG
    container_name: "${PROJECT_NAME}_node"
    working_dir: /app
    labels:
      - "traefik.http.services.${PROJECT_NAME}_node.loadbalancer.server.port=3000"
      - "traefik.http.routers.${PROJECT_NAME}_node.rule=Host(`node.${PROJECT_BASE_URL}`)"
    expose:
      - "3000"
    volumes:
      - ./gatsby_site:/app
    command: sh -c 'npm install && npm run start'
volumes:
  bedrock_dbdata:
    external: true
I am trying to configure the above node image, but I have been unsuccessful so far: I try accessing the node URL on localhost, only to get a 404 error.
I would appreciate your help.
docker-compose.yaml
version: '3'
services:
  cassandra-seed:
    image: cassandra:latest
    deploy:
      replicas: 1
    ports:
      - "9042"
      - "7199"
      - "9160"
      - "7000"
      - "7001"
    networks:
      default:
    volumes:
      - ./data:/var/lib/cassandra/data
  cassandra-node-1:
    image: cassandra:latest
    deploy:
      replicas: 1
    command: /bin/bash -c "echo 'Waiting for seed node' && sleep 120 && /docker-entrypoint.sh cassandra -f"
    environment:
      - "CASSANDRA_SEEDS=cassandra-seed"
    ports:
      - "9042"
      - "7199"
      - "9160"
      - "7000"
      - "7001"
    networks:
      default:
    depends_on:
      - "cassandra-seed"
  cassandra-node-2:
    image: cassandra:latest
    deploy:
      replicas: 1
    command: /bin/bash -c "echo 'Waiting for seed node' && sleep 120 && /docker-entrypoint.sh cassandra -f"
    environment:
      - "CASSANDRA_SEEDS=cassandra-seed"
    depends_on:
      - "cassandra-seed"
    ports:
      - "9042"
      - "7199"
      - "9160"
      - "7000"
      - "7001"
    networks:
      default:
networks:
  default:
    external:
      name: cassandra-net
docker network create --scope swarm cassandra-net
Add all the nodes to the swarm
docker stack deploy --compose-file docker-compose.yml cassandra-cluster
WARN [main] 2018-02-01 21:32:07,965 SimpleSeedProvider.java:60 - Seed provider couldn't lookup host cassandra-seed
- "CASSANDRA_SEEDS=cassandra-seed"
This is where you set the seed. The cassandra Docker image's entrypoint expects this value to be a comma-separated list of IP addresses, so you will have to find the IPs somehow. I would recommend reading about service discovery. Then create your own Docker image with a custom entrypoint that sets CASSANDRA_SEEDS by resolving the DNS name with the host command. You could also create a custom seed provider for this purpose.
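As a rough sketch of that idea (a hypothetical wrapper entrypoint, not something shipped with the Cassandra image), the service name could be resolved to an IP at startup before delegating to the stock entrypoint; here getent stands in for the host command:

```shell
#!/bin/sh
# Resolve the seed service's DNS name to an IP, since SimpleSeedProvider
# expects CASSANDRA_SEEDS to be a comma-separated list of IP addresses.
SEED_IP=$(getent hosts cassandra-seed | awk '{ print $1; exit }')
if [ -z "$SEED_IP" ]; then
  echo "could not resolve cassandra-seed yet" >&2
  exit 1
fi
export CASSANDRA_SEEDS="$SEED_IP"
exec /docker-entrypoint.sh cassandra -f
```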
I think you shouldn't write
- "CASSANDRA_SEEDS=cassandra-seed"
Instead, try this:
CASSANDRA_SEEDS: "cassandra-seed"
Also, if you want to make a cluster, you have to set "CASSANDRA_BROADCAST_ADDRESS" as well.
For example:
environment:
  CASSANDRA_BROADCAST_ADDRESS: cassandra-1
References:
https://dzone.com/articles/swarmweek-part-1-multi-host-cassandra-cluster-with
https://forums.docker.com/t/cassandra-on-docker-swarm/27923/3