I'm trying to set up a lab using Docker containers with a CentOS 7 base image and docker-compose.
Here is my docker-compose.yaml file:
version: "3"
services:
  base:
    image: centos_base
    build:
      context: base
  master:
    links:
      - base
    build:
      context: master
    image: centos_master
    container_name: master01
    hostname: master01
    volumes:
      - ansible_vol:/var/ans
    networks:
      - net
  host01:
    links:
      - base
      - master
    build:
      context: host
    image: centos_host
    container_name: host01
    hostname: host01
    command: ["/var/run.sh"]
    volumes:
      - ansible_vol:/var/ans
    networks:
      - net
networks:
  net:
volumes:
  ansible_vol:
My Dockerfiles are as below.
Base image Dockerfile:
# For centos7.0
FROM centos:7
RUN yum install -y net-tools man vim initscripts openssh-server
RUN echo "12345" | passwd root --stdin
RUN mkdir /root/.ssh
Master Dockerfile:
FROM centos_base:latest
# install ansible package
RUN yum install -y epel-release
RUN yum install -y ansible openssh-clients
RUN mkdir /var/ans
# change working directory
WORKDIR /var/ans
RUN ssh-keygen -t rsa -N 12345 -C "master key" -f master_key
CMD /usr/sbin/sshd -D
Host image Dockerfile:
FROM centos_base:latest
RUN mkdir /var/ans
COPY run.sh /var/
RUN chmod 755 /var/run.sh
My run.sh file:
#!/bin/bash
cat /var/ans/master_key.pub >> /root/.ssh/authorized_keys
# start SSH server
/usr/sbin/sshd -D
My problems are:
If I run docker-compose up -d --build, I see no containers coming up; they all get created but then exit.
Successfully tagged centos_host:latest
Creating working_base_1 ... done
Creating master01 ... done
Creating host01 ... done
CONTAINER ID   IMAGE           COMMAND                  CREATED          STATUS                      PORTS   NAMES
433baf2dd0d8   centos_host     "/var/run.sh"            12 minutes ago   Exited (1) 12 minutes ago           host01
a2a57e480635   centos_master   "/bin/sh -c '/usr/sb…"   13 minutes ago   Exited (1) 12 minutes ago           master01
a4acf6fb3e7b   centos_base     "/bin/bash"              13 minutes ago   Exited (0) 13 minutes ago           working_base_1
The SSH keys generated in the 'centos_master' image are not available in the centos_host container, even though I have added a volume mapping 'ansible_vol:/var/ans' in the docker-compose file.
My intention is that the SSH key files generated in master should be available in the host containers, so the run.sh script can copy them into the authorized_keys of the host containers.
Any help is greatly appreciated.
Try putting this in base/Dockerfile:
RUN echo "12345" | passwd root --stdin; \
    ssh-keygen -f /etc/ssh/ssh_host_rsa_key -N '' -b 4096 -t rsa
and rerun docker-compose build
/etc/ssh/ssh_host_rsa_key is a host key used by sshd (the SSH daemon); generating it lets the containers start properly.
The key you generated and copied into authorized_keys is what allows an SSH client to connect to the container via ssh.
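Putting it together, a minimal sketch of the full base/Dockerfile with that change applied (everything else as in the question):
# For centos7.0
FROM centos:7
RUN yum install -y net-tools man vim initscripts openssh-server
# set the root password and generate the host key sshd needs to start
RUN echo "12345" | passwd root --stdin; \
    ssh-keygen -f /etc/ssh/ssh_host_rsa_key -N '' -b 4096 -t rsa
RUN mkdir /root/.ssh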
Also try using external: false, so that it does not attempt to recreate the volume and override the previous data at creation:
version: "3"
services:
  base:
    image: centos_base
    build:
      context: base
  master:
    links:
      - base
    build:
      context: master
    image: centos_master
    container_name: master01
    hostname: master01
    volumes:
      - ansible_vol:/var/ans
    networks:
      - net
  host01:
    links:
      - base
      - master
    build:
      context: host
    image: centos_host
    container_name: host01
    hostname: host01
    command: ["/var/run.sh"]
    volumes:
      - ansible_vol:/var/ans
    networks:
      - net
networks:
  net:
volumes:
  ansible_vol:
    external: false
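Once the stack is rebuilt and running, a quick way to confirm the shared volume actually carries the keys (a sketch using the container names from the compose file above; docker exec only works while the containers are up):
docker-compose up -d --build
docker exec master01 ls -l /var/ans    # master_key and master_key.pub should be listed
docker exec host01 ls -l /var/ans      # same files, seen through the shared ansible_vol volume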
Related
I have a project and I use redis in this project.
This is my .gitlab-ci.yml:
image: node:latest
stages:
  - build
  - deploy
build:
  image: docker:stable
  services:
    - docker:dind
  stage: build
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build --pull -t "$DOCKER_IMAGE" .
    - docker push "$DOCKER_IMAGE"
deploy:
  stage: deploy
  services:
    - redis:latest
  script:
    - ssh $SSH_PRIVATE_KEY $USER_NAME@$HOST_NAME docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
    - ssh $SSH_PRIVATE_KEY $USER_NAME@$HOST_NAME
      docker run --name=$CI_PROJECT_NAME --restart=always --network="host" -v "/app/storage:/src/storage" -d $DOCKER_IMAGE
    - ssh -o StrictHostKeyChecking=no -i $SSH_PRIVATE_KEY $USER_NAME@$HOST_NAME docker compose up
And this is my docker-compose.yml:
version: "3.2"
services:
redis:
image: redis:latest
volumes:
- redisdata:/data/
command: --port 6379
ports:
- "6379:6379"
expose:
- "6379"
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
timeout: 5s
retries: 5
volumes:
redisdata:
Everything is OK, but the last line does not work for me:
- ssh -o StrictHostKeyChecking=no -i $SSH_PRIVATE_KEY $USER_NAME@$HOST_NAME docker compose up
So my redis image is not running in this container.
Note: I want one container with two images, containing my project and redis.
I have this docker-compose:
version: "3.9"
services:
myserver:
image: <some image>
restart: always
volumes:
- ./res:/tmp/configs
- ./myfolder:/tmp/logs
Should I expect to see the files that are in /tmp/logs inside the container, in the host folder 'myfolder'?
'myfolder' is empty and I want it to be updated with the contents of the /tmp/logs folder in the container.
For formatting purposes, I post this as an answer instead of comment.
Can you put the following in test.sh on the host?
#!/usr/bin/env bash
testdir=/tmp/docker-compose
rm -rf "$testdir"; mkdir -p "$testdir"/{myfolder,res}
cd "$testdir"
cat << EOF > docker-compose.yml
version: "3.9"
services:
  myserver:
    image: debian
    restart: always
    volumes:
      - ./res:/tmp/configs
      - ./myfolder:/tmp/logs
    command: sleep inf
EOF
docker-compose up
Then run bash test.sh on the host and see if it works. The host dir is /tmp/docker-compose/myfolder.
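If the bind mount works, a file created inside the container should show up on the host. A quick check (a sketch; run from a second terminal while docker-compose up is still running, and probe.txt is just an arbitrary test file name):
cd /tmp/docker-compose
docker-compose exec myserver sh -c 'echo hello > /tmp/logs/probe.txt'
cat myfolder/probe.txt    # should print "hello"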
What worked for me is using the full path for the host path, for example C:\myfolder.
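For illustration, the volume entry with an absolute Windows host path might look like this (the path is hypothetical; forward slashes and quotes avoid ambiguity with the drive-letter colon):
version: "3.9"
services:
  myserver:
    image: <some image>
    volumes:
      - "C:/myfolder:/tmp/logs"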
I have a docker-compose.yml for my Node app and MongoDB.
I have 2 NFS volumes mounted in the compose file. The problem is that when I run the containers, the app container's logs are not saved in the logs volume. The mongo-data volume works fine; it does persist data.
Inside the container:
[admin@ip-10-x-x-x bot-app]$ docker exec -it bot-app bash
root@b78428d61861:/bot# cd logs
root@b78428d61861:/bot/logs# ls -l
total 32
-rw-r--r--. 1 root root 23328 Jul 8 21:08 access-bot.2021-07-08.log
-rw-r--r--. 1 root root 8145 Jul 8 21:05 bot.2021-07-08.log
-rw-r--r--. 1 root root 0 Jul 8 20:59 text.txt
root@b78428d61861:/bot/logs#
From the host:
[admin@ip-10-x-x-x logs]$ ls -l
total 0
[admin@ip-10-x-x-x logs]$ pwd
/mnt/chatbot-efs/logs
docker-compose file:
version: '3.7'
services:
  db:
    image: mongo:4.2
    container_name: db
    environment:
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=password
    ports:
      - "27017:27017"
    volumes:
      - ./mongo-entrypoint/:/docker-entrypoint-initdb.d/
      - type: volume
        source: mongo_data
        target: /data/db
        volume:
          nocopy: true
    command: mongod
  nodejs:
    build:
      context: .
      dockerfile: Dockerfile
    image: bot
    container_name: bot-app
    restart: unless-stopped
    env_file: .env
    ports:
      - "9090:9090"
      - "9093:9093"
      - "9092:9092"
    volumes:
      - type: volume
        source: logs
        target: /logs
    depends_on:
      - "db"
    command: ["./wait-for.sh","db.bot-app_default:27017","--","node","bot.js"]
volumes:
  mongo_data:
    driver_opts:
      type: "nfs"
      o: "addr=10.10.152.15,rw,nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2"
      device: ":/mongodata"
  logs:
    driver_opts:
      type: "nfs"
      o: "addr=10.10.152.15,rw,nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2"
      device: ":/logs"
Here is the Dockerfile for the app:
FROM node:14
# Code to install Oracle
RUN apt-get update && apt-get -y upgrade && apt-get -y dist-upgrade && apt-get install -y alien libaio1
RUN wget https://yum.oracle.com/repo/OracleLinux/OL7/oracle/instantclient/x86_64/getPackage/oracle-instantclient19.3-basiclite-19.3.0.0.0-1.x86_64.rpm
RUN alien -i --scripts oracle-instantclient*.rpm
RUN rm -f oracle-instantclient19.3*.rpm && apt-get -y autoremove && apt-get -y clean
# Create app directory
WORKDIR /bot
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 9090 4040
RUN mkdir /logs && chmod 777 /logs
CMD [ "node","bot.js" ]
I checked docker inspect on both containers and I don't see any major difference in how the volumes are mounted. What am I missing here?
Ok, I give up! I spent far too much time on this:
I want my app inside a Docker container to talk to my Postgres database, which is inside another container.
Docker-compose.yml
version: "3.8"
services:
foodbudget-db:
container_name: foodbudget-db
image: postgres:12.4
restart: always
environment:
POSTGRES_USER: user
POSTGRES_PASSWORD: pass
POSTGRES_DB: foodbudget
PGDATA: /var/lib/postgresql/data
volumes:
- ./pgdata:/var/lib/postgresql/data
ports:
- 5433:5432
web:
image: node:14.10.1
env_file:
- .env
depends_on:
- foodbudget-db
ports:
- 8080:8080
build:
context: .
dockerfile: Dockerfile
Dockerfile
FROM node:14.10.1
WORKDIR /src/app
ADD https://github.com/palfrey/wait-for-db/releases/download/v1.0.0/wait-for-db-linux-x86 /src/app/wait-for-db
RUN chmod +x /src/app/wait-for-db
RUN ./wait-for-db -m postgres -c postgresql://user:pass@foodbudget-db:5433 -t 1000000
EXPOSE 8080
But I keep getting this error when I build the Dockerfile, even though the database is up and running when I run docker ps. I tried connecting to the Postgres database from my host machine, and it connected successfully...
Temporary error (pausing for 3 seconds): PostgresError { error: Error(Io(Custom { kind: Other, error: "failed to lookup address information: Name does not resolve" })) }
Has anyone created an app that talks to a database in another Docker container before?
This line is the issue:
RUN ./wait-for-db -m postgres -c postgresql://user:pass@foodbudget-db:5433 -t 1000000
Within a Docker network you must use the container's internal port (5432) instead of the published host port (5433):
RUN ./wait-for-db -m postgres -c postgresql://user:pass@foodbudget-db:5432 -t 1000000
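As a sanity check, you can test that address from inside the compose network (a sketch; replace <project>_default with the actual network name shown by docker network ls):
docker run --rm --network <project>_default postgres:12.4 pg_isready -h foodbudget-db -p 5432
Keep in mind that RUN executes at image build time, when the compose network (and the foodbudget-db name) may not yet be resolvable, which can produce the same lookup error.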
How do I create another directory, like /tmp for example, in the same container and give it r/w permissions?
docker-compose.yml:
nginx:
  image: nginx
  ports:
    - 80:80
  volumes:
    - ./volumes/nginx/conf.d:/etc/nginx/conf.d
  command: nginx -g "daemon off;"
  networks:
    - network
You can create a directory or perform any other action by defining it in a Dockerfile. In the same directory as your docker-compose.yml, create a Dockerfile:
touch Dockerfile
Add the following lines to your Dockerfile:
RUN mkdir /tmp2
RUN chmod 755 /tmp2
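Note that a Dockerfile also needs a base image to build from; a minimal complete sketch, assuming the stock nginx image, would be:
FROM nginx
RUN mkdir /tmp2
RUN chmod 755 /tmp2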
Then add the build information to the docker-compose.yaml:
nginx:
  image: nginx
  build: .
  ports:
    - 80:80
  volumes:
    - ./volumes/nginx/conf.d:/etc/nginx/conf.d
  command: nginx -g "daemon off;"
  networks:
    - network
If you are using only docker-compose without a Dockerfile, it can be done this way:
You can get into the container like this:
docker exec -ti $(docker ps --filter name='nginx' --format "{{ .ID }}") bash
Then, inside the container, you can run:
mkdir /tmp2
chmod 755 /tmp2
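Equivalently, without opening an interactive shell (a sketch assuming the container name contains 'nginx', as above):
docker exec $(docker ps --filter name='nginx' --format "{{ .ID }}") sh -c 'mkdir -p /tmp2 && chmod 755 /tmp2'
Note that changes made this way exist only in the running container and are lost when it is recreated.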
Alternatively, simply add a volume with the directory you wish to create, and it will be created automatically during startup:
nginx:
  image: nginx
  ports:
    - 80:80
  volumes:
    - ./volumes/nginx/conf.d:/etc/nginx/conf.d
    - ./volumes/hosted-directory/hosted-sub-directory:/etc/created-directory/created-sub-directory/
  command: nginx -g "daemon off;"
  networks:
    - network