docker-compose does not run in gitlab CICD after pull image - node.js

I have a project and I use redis in it.
This is my .gitlab-ci.yml:
image: node:latest

stages:
  - build
  - deploy

build:
  image: docker:stable
  services:
    - docker:dind
  stage: build
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build --pull -t "$DOCKER_IMAGE" .
    - docker push "$DOCKER_IMAGE"

deploy:
  stage: deploy
  services:
    - redis:latest
  script:
    - ssh -i $SSH_PRIVATE_KEY $USER_NAME@$HOST_NAME docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
    - ssh -i $SSH_PRIVATE_KEY $USER_NAME@$HOST_NAME docker run --name=$CI_PROJECT_NAME --restart=always --network="host" -v "/app/storage:/src/storage" -d $DOCKER_IMAGE
    - ssh -o StrictHostKeyChecking=no -i $SSH_PRIVATE_KEY $USER_NAME@$HOST_NAME docker compose up
And this is my docker-compose.yml:
version: "3.2"
services:
redis:
image: redis:latest
volumes:
- redisdata:/data/
command: --port 6379
ports:
- "6379:6379"
expose:
- "6379"
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
timeout: 5s
retries: 5
volumes:
redisdata:
Everything is OK, but the last line does not work for me:
- ssh -o StrictHostKeyChecking=no -i $SSH_PRIVATE_KEY $USER_NAME@$HOST_NAME docker compose up
So my redis image is not running in this container.
Note: I want a single deployment with two images: my project and redis.
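For reference, a consistent version of the deploy script might look like the sketch below. This is an assumption-laden sketch, not a confirmed fix from the thread: it assumes $SSH_PRIVATE_KEY is a file-type variable usable with -i, that docker-compose.yml must first be copied to the remote host (docker compose reads it there, not on the runner), and that compose should run detached, since docker compose up without -d blocks the CI job until the containers stop:

deploy:
  stage: deploy
  script:
    # sketch: same ssh flags on every call, and copy the compose file over first
    - ssh -o StrictHostKeyChecking=no -i $SSH_PRIVATE_KEY $USER_NAME@$HOST_NAME "docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY"
    - scp -o StrictHostKeyChecking=no -i $SSH_PRIVATE_KEY docker-compose.yml $USER_NAME@$HOST_NAME:~/
    - ssh -o StrictHostKeyChecking=no -i $SSH_PRIVATE_KEY $USER_NAME@$HOST_NAME "docker compose -f ~/docker-compose.yml up -d"

With the app added as a second service in docker-compose.yml, the separate docker run line would no longer be needed.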

Related

Azure Containers - with 10 different services

I have an application which has 10 different services, including nginx, celery and postgresql. Is it possible to deploy this using Azure Container Instances?
I tried a few times, including pulling the image from the ACR, but I am not able to get this to work. I only see one container instead of all 10. I thought docker compose would automatically create all the instances, but I am struggling to understand the exact issue.
Here is the sample docker-compose file. Any guidance would be really helpful.
services:
  app: &app
    image: registry....../app
    build:
      context: .
    env_file: variables.env
    volumes:
      # - ./app/:/usr/src/app/
      - static:/usr/src/app/static
      - media:/usr/src/app/media
    command: /bin/true
  web:
    <<: *app
    command: uwsgi --ini /usr/src/app/uwsgi.ini
    expose:
      - 8000
  channelserver:
    <<: *app
    command: daphne --bind 0.0.0.0 --port 5000 -v 1 APP.asgi:application
    expose:
      - 5000
  db:
    image: postgres:10.5-alpine
    volumes:
      - postgres:/var/lib/postgresql/data/
    env_file: variables.env
  redis:
    image: redis:3.0-alpine
  nginx:
    image: registry...../nginx
    build: ./nginx
    environment:
      - SERVERNAME=${DOMAIN:-localhost}
    volumes:
      - type: bind
        source: $PWD/nginx/nginx.conf
        target: /etc/nginx/conf.d/nginx.conf
      - static:/usr/src/app/static
      - media:/usr/src/app/media
    ports:
      - 8080:80
  celery-main:
    <<: *app
    command: "celery worker -l INFO -E -A [APP] -Ofair --prefetch-multiplier 1 -Q default -c ${CELERY_MAIN_CONC:-10} --max-tasks-per-child=10"
  celery-low-priority:
    <<: *app
    command: "celery worker -l INFO -E -A [APP] -Ofair --prefetch-multiplier 1 -Q low-priority -c ${CELERY_LOW_CONC:-10} --max-tasks-per-child=10"
  celery-gpu: &celery-gpu
    <<: *app
    environment:
      - KRAKEN_TRAINING_DEVICE=cpu
    command: "celery worker -l INFO -E -A [APP] -Ofair --prefetch-multiplier 1 -Q gpu -c 1 --max-tasks-per-child=1"
    shm_size: '3gb'
  flower:
    image: mher/flower
    command: ["flower", "--broker=redis://redis:6379/0", "--port=5555"]
    ports:
      - 5555:5555
  mail:
    build: ./exim
    image: registry....../mail
    expose:
      - 25
    environment:
      - PRIMARY_HOST=${DOMAIN:-localhost}
      - ALLOWED_HOSTS=web ; celery-main ; celery-low-priority; docker0
volumes:
  static:
  media:
  postgres:
  esdata:
To deploy multiple containers in Azure Container Instances, you can use a YAML file or an Azure template; both deploy the containers directly to ACI. In addition, you can also use a docker-compose file, following this document.
And if you have more questions about this issue, you can provide details and I'm willing to help you.
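For context, here is a minimal sketch of the container-group YAML that the first option refers to. The location, group name, and images are placeholders, not the asker's real services; the full schema is in the linked documentation. Each entry under containers becomes one container in a single ACI container group:

apiVersion: 2019-12-01
location: westeurope
name: app-group
properties:
  osType: Linux
  containers:
    # one entry per container in the group; placeholder names and images
    - name: web
      properties:
        image: registry.example.com/app:latest
        resources:
          requests:
            cpu: 1.0
            memoryInGB: 1.5
    - name: redis
      properties:
        image: redis:3.0-alpine
        resources:
          requests:
            cpu: 0.5
            memoryInGB: 0.5
type: Microsoft.ContainerInstance/containerGroups

It would be deployed with az container create --resource-group <group> --file deploy.yaml.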

Setting up PostGIS on Gitlab CI fails: psql could not connect to server: No such file or directory

I'm trying to create a PostGIS extension in the testing job of GitLab CI, as some of the tests require that extension on the PostgreSQL database to pass. My .gitlab-ci.yml looks like this:
image: docker:stable

stages:
  - build
  - test

variables:
  IMAGE: ${CI_REGISTRY}/${CI_PROJECT_NAMESPACE}/${CI_PROJECT_NAME}

build:
  stage: build
  services:
    - docker:dind
  variables:
    DOCKER_DRIVER: overlay2
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker pull $IMAGE:latest || true
    - docker build
      --cache-from $IMAGE:latest
      --tag $IMAGE:latest
      --file ./Dockerfile.prod
      "."
    - docker push $IMAGE:latest

test:
  stage: test
  image: $IMAGE:latest
  services:
    - postgis/postgis:latest
  variables:
    POSTGRES_DB: users
    POSTGRES_USER: runner
    POSTGRES_PASSWORD: runner
    DATABASE_TEST_URL: postgis://runner:runner@postgres:5432/users
  script:
    - psql -h "postgres" -U "$POSTGRES_USER" -d "$POSTGRES_DB" -c "CREATE EXTENSION \"postgis\";"
    - pytest "src/tests" -p no:warnings
The build job passes, but test fails with psql: could not connect to server: No such file or directory on the psql -h "postgres" -U "$POSTGRES_USER" -d "$POSTGRES_DB" -c "CREATE EXTENSION \"postgis\";" line. Why?
Turns out using an alias for the PostGIS service like so:

services:
  - name: postgis/postgis:latest
    alias: postgres

and using the following commands:

script:
  - export PGPASSWORD=$POSTGRES_PASSWORD
  - psql --username $POSTGRES_USER --host postgres -d $POSTGRES_DB -c "CREATE EXTENSION IF NOT EXISTS postgis;"

did the trick!
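Putting the two changes together, the test job from the question becomes (everything else unchanged from the original file):

test:
  stage: test
  image: $IMAGE:latest
  services:
    - name: postgis/postgis:latest
      alias: postgres
  variables:
    POSTGRES_DB: users
    POSTGRES_USER: runner
    POSTGRES_PASSWORD: runner
    DATABASE_TEST_URL: postgis://runner:runner@postgres:5432/users
  script:
    - export PGPASSWORD=$POSTGRES_PASSWORD
    - psql --username $POSTGRES_USER --host postgres -d $POSTGRES_DB -c "CREATE EXTENSION IF NOT EXISTS postgis;"
    - pytest "src/tests" -p no:warnings

The alias matters because GitLab derives the default service hostname from the image name (postgis/postgis is reachable as postgis-postgis, not postgres), while the job connects to postgres.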

docker container can not see the data from a shared volume

I'm trying to set up a lab using docker containers with a centos7 base image and docker-compose.
Here is my docker-compose.yaml file
version: "3"
services:
base:
image: centos_base
build:
context: base
master:
links:
- base
build:
context: master
image: centos_master
container_name: master01
hostname: master01
volumes:
- ansible_vol:/var/ans
networks:
- net
host01:
links:
- base
- master
build:
context: host
image: centos_host
container_name: host01
hostname: host01
command: ["/var/run.sh"]
volumes:
- ansible_vol:/var/ans
networks:
- net
networks:
net:
volumes:
ansible_vol:
My Dockerfiles are as below.
Base image Dockerfile:
# For centos7.0
FROM centos:7
RUN yum install -y net-tools man vim initscripts openssh-server
RUN echo "12345" | passwd root --stdin
RUN mkdir /root/.ssh
Master Dockerfile:
FROM centos_base:latest
# install ansible package
RUN yum install -y epel-release
RUN yum install -y ansible openssh-clients
RUN mkdir /var/ans
# change working directory
WORKDIR /var/ans
RUN ssh-keygen -t rsa -N 12345 -C "master key" -f master_key
CMD /usr/sbin/sshd -D
Host Image Dockerfile:
FROM centos_base:latest
RUN mkdir /var/ans
COPY run.sh /var/
RUN chmod 755 /var/run.sh
My run.sh file:
#!/bin/bash
cat /var/ans/master_key.pub >> /root/.ssh/authorized_keys
# start SSH server
/usr/sbin/sshd -D
My problems are:
If I run docker-compose up -d --build, I see no containers coming up; they all get created but exit.
Successfully tagged centos_host:latest
Creating working_base_1 ... done
Creating master01 ... done
Creating host01 ... done
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
433baf2dd0d8 centos_host "/var/run.sh" 12 minutes ago Exited (1) 12 minutes ago host01
a2a57e480635 centos_master "/bin/sh -c '/usr/sb…" 13 minutes ago Exited (1) 12 minutes ago master01
a4acf6fb3e7b centos_base "/bin/bash" 13 minutes ago Exited (0) 13 minutes ago working_base_1
The ssh keys generated in the 'centos_master' image are not available in the centos_host container, even though I have added the volume mapping 'ansible_vol:/var/ans' in the docker-compose file.
My intention is that the ssh key files generated in master should be available in the host containers, so that the run.sh script can copy them into the authorized_keys of the host containers.
Any help is greatly appreciated.
Try putting this in base/Dockerfile:
RUN echo "12345" | passwd root --stdin; \
ssh-keygen -f /etc/ssh/ssh_host_rsa_key -N '' -b 4096 -t rsa
and rerun docker-compose build.
/etc/ssh/ssh_host_rsa_key is a key used by sshd (the ssh daemon); generating it lets the containers start properly.
The key you generated and copied into authorized_keys will be used to allow an ssh client to connect to the container via ssh.
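Once both changes are in, the key flow can be checked by hand. A quick usage example based on the names in the compose file above (the key was generated with passphrase 12345, so ssh will prompt for it):

# from the docker host: open a shell in master01 and ssh to host01 with the generated key
docker exec -it master01 ssh -i /var/ans/master_key root@host01 hostname
# expected output: host01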
Try using external: false on the volume, so that it does not attempt to create the volume and override the previous data at creation:
version: "3"
services:
base:
image: centos_base
build:
context: base
master:
links:
- base
build:
context: master
image: centos_master
container_name: master01
hostname: master01
volumes:
- ansible_vol:/var/ans
networks:
- net
host01:
links:
- base
- master
build:
context: host
image: centos_host
container_name: host01
hostname: host01
command: ["/var/run.sh"]
volumes:
- ansible_vol:/var/ans
networks:
- net
networks:
net:
volumes:
ansible_vol:
external: false
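One caveat with named volumes either way: a volume that already exists is reused as-is, and image content is only copied into an empty volume on first mount, so data from earlier experiments can linger. A sketch of a clean rebuild (down -v removes the project's named volumes, including ansible_vol):

docker-compose down -v
docker-compose up -d --build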

Cassandra with docker swarm, "couldn't lookup host cassandra-seed"

docker-compose.yaml
version: '3'
services:
  cassandra-seed:
    image: cassandra:latest
    deploy:
      replicas: 1
    ports:
      - "9042"
      - "7199"
      - "9160"
      - "7000"
      - "7001"
    networks:
      default:
    volumes:
      - ./data:/var/lib/cassandra/data
  cassandra-node-1:
    image: cassandra:latest
    deploy:
      replicas: 1
    command: /bin/bash -c "echo 'Waiting for seed node' && sleep 120 && /docker-entrypoint.sh cassandra -f"
    environment:
      - "CASSANDRA_SEEDS=cassandra-seed"
    ports:
      - "9042"
      - "7199"
      - "9160"
      - "7000"
      - "7001"
    networks:
      default:
    depends_on:
      - "cassandra-seed"
  cassandra-node-2:
    image: cassandra:latest
    deploy:
      replicas: 1
    command: /bin/bash -c "echo 'Waiting for seed node' && sleep 120 && /docker-entrypoint.sh cassandra -f"
    environment:
      - "CASSANDRA_SEEDS=cassandra-seed"
    depends_on:
      - "cassandra-seed"
    ports:
      - "9042"
      - "7199"
      - "9160"
      - "7000"
      - "7001"
    networks:
      default:
networks:
  default:
    external:
      name: cassandra-net
docker network create --scope swarm cassandra-net
Add all the nodes to the swarm, then deploy the stack:
docker stack deploy --compose-file docker-compose.yml cassandra-cluster
WARN [main] 2018-02-01 21:32:07,965 SimpleSeedProvider.java:60 - Seed provider couldn't lookup host cassandra-seed
- "CASSANDRA_SEEDS=cassandra-seed"
This is where you set the seed. The cassandra docker image entrypoint is expecting this value to be a comma-seperated list with IP addresses. You will have to find the ips somehow. I would recommend reading about service discovery. Then create your own docker image with a custom entrypoint where you set CASSANDRA_SEEDS by resolving the dns with
the host command. You could also create a custom seed provider for this purpose.
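As a sketch of that approach, here is a hypothetical wrapper entrypoint that resolves the seed IPs before starting Cassandra. It relies on swarm's built-in tasks.<service-name> DNS entry, which resolves to the task IPs of a service on the overlay network; getent is assumed to be available in the image:

#!/bin/bash
# resolve the seed service's task IPs and join them with commas
SEEDS=$(getent hosts tasks.cassandra-seed | awk '{print $1}' | paste -s -d, -)
export CASSANDRA_SEEDS="$SEEDS"
# hand off to the stock cassandra entrypoint
exec /docker-entrypoint.sh cassandra -f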
I think you shouldn't write
- "CASSANDRA_SEEDS=cassandra-seed"
Instead, try this:
CASSANDRA_SEEDS: "cassandra-seed"
Also, if you want to make a cluster, you have to use CASSANDRA_BROADCAST_ADDRESS as well.
For example:

environment:
  CASSANDRA_BROADCAST_ADDRESS: cassandra-1
References:
https://dzone.com/articles/swarmweek-part-1-multi-host-cassandra-cluster-with
https://forums.docker.com/t/cassandra-on-docker-swarm/27923/3

docker-compose run --rm doesn't exit container after command is run

docker-compose version 1.8.0. The container is linked to another container; I'm not sure if that has anything to do with why the container isn't exiting.
mongodb:
  image: 'mongo:3.2'
  ports:
    - '27017:27017'
  environment:
    - AUTH=no
test:
  image: node:6.3.1
  command: node tests/ref-api.js
  working_dir: /opt/runapp
  volumes:
    - ./:/opt/runapp
    - ../webapp/server:/opt/runapp/node_modules/server
    - ../webapp/package/node_modules:/opt/runapp/node_modules/server/node_modules
  links:
    - mongodb
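No answer was recorded for this one, but a common workaround for one-shot test containers with linked services is to drive the run through up and tear everything down afterwards. A sketch, assuming a Compose version newer than the 1.8.0 in the question (--exit-code-from was added in 1.12):

# run everything, stop all services when 'test' exits, and propagate its exit code
docker-compose up --abort-on-container-exit --exit-code-from test
docker-compose down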
