docker compose - container is not persisting data in mounted volume - node.js

I have a docker-compose.yml for my Node app and MongoDB.
Two NFS volumes are mounted in the compose file. The problem is that when I run the containers, the app container's logs are not saved in the logs volume folder. The mongo-data volume works fine; it does persist data.
Inside Container:
[admin@ip-10-x-x-x bot-app]$ docker exec -it bot-app bash
root@b78428d61861:/bot# cd logs
root@b78428d61861:/bot/logs# ls -l
total 32
-rw-r--r--. 1 root root 23328 Jul 8 21:08 access-bot.2021-07-08.log
-rw-r--r--. 1 root root 8145 Jul 8 21:05 bot.2021-07-08.log
-rw-r--r--. 1 root root 0 Jul 8 20:59 text.txt
root@b78428d61861:/bot/logs#
from host:
[admin@ip-10-x-x-x logs]$ ls -l
total 0
[admin@ip-10-x-x-x logs]$ pwd
/mnt/chatbot-efs/logs
docker-compose file
version: '3.7'
services:
  db:
    image: mongo:4.2
    container_name: db
    environment:
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=password
    ports:
      - "27017:27017"
    volumes:
      - ./mongo-entrypoint/:/docker-entrypoint-initdb.d/
      - type: volume
        source: mongo_data
        target: /data/db
        volume:
          nocopy: true
    command: mongod
  nodejs:
    build:
      context: .
      dockerfile: Dockerfile
    image: bot
    container_name: bot-app
    restart: unless-stopped
    env_file: .env
    ports:
      - "9090:9090"
      - "9093:9093"
      - "9092:9092"
    volumes:
      - type: volume
        source: logs
        target: /logs
    depends_on:
      - "db"
    command: ["./wait-for.sh", "db.bot-app_default:27017", "--", "node", "bot.js"]
volumes:
  mongo_data:
    driver_opts:
      type: "nfs"
      o: "addr=10.10.152.15,rw,nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2"
      device: ":/mongodata"
  logs:
    driver_opts:
      type: "nfs"
      o: "addr=10.10.152.15,rw,nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2"
      device: ":/logs"
Here is the Dockerfile for the app:
FROM node:14
# Code to install Oracle
RUN apt-get update && apt-get -y upgrade && apt-get -y dist-upgrade && apt-get install -y alien libaio1
RUN wget https://yum.oracle.com/repo/OracleLinux/OL7/oracle/instantclient/x86_64/getPackage/oracle-instantclient19.3-basiclite-19.3.0.0.0-1.x86_64.rpm
RUN alien -i --scripts oracle-instantclient*.rpm
RUN rm -f oracle-instantclient19.3*.rpm && apt-get -y autoremove && apt-get -y clean
# Create app directory
WORKDIR /bot
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 9090 4040
RUN mkdir /logs && chmod 777 /logs
CMD [ "node","bot.js" ]
I checked docker inspect on both containers and I don't see any major difference in how the volumes are mounted. What am I missing here?
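For what it's worth, one detail stands out from the output above: the named volume is mounted at /logs (and the Dockerfile creates /logs with chmod 777), but the application clearly writes its log files to /bot/logs, as the in-container listing shows (the WORKDIR is /bot). Assuming /bot/logs is indeed where the app writes, the mount target would need to match, along these lines:

    volumes:
      - type: volume
        source: logs
        target: /bot/logs   # assumption: the app writes here, per the ls output above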

Related

Running chown in Dockerfile does nothing

I'm having some trouble setting up a Nuxt and Rails container using Docker. The two containers are separate, but interact with each other.
Currently, I'm having trouble running the dev servers for both the Nuxt and the Rails containers due to insufficient permissions. Looking at the logs for both of the containers, it seems that Docker can't do actions such as mkdir.
EACCESS: Permission Denied: 'mkdir: /usr/src/app/.nuxt' # nuxt
EACCESS: Permission Denied: 'mkdir: /usr/src/app/tmp' # rails
My docker-compose.dev.yml file
version: 3
services:
  backend:
    privileged: true
    image: tablevibes-backend
    build:
      dockerfile: Dockerfile-dev
      context: tablevibes-backend
      args:
        UID: ${UID:-1001}
        BUNDLER_VERSION: 2.0.2
        PG_MAJOR: 10
        mode: development
    tty: true
    stdin_open: true
    volumes:
      - ./tablevibes-backend:/usr/src/app:Z
      - gem_data_api:/usr/local/bundle:cached
    ports:
      - "3000:3000"
    depends_on:
      - db
    user: rails
  client-ui:
    image: client-ui
    command: yarn run dev
    build:
      context: client-ui
      dockerfile: Dockerfile-dev
      args:
        UID: ${UID:-1001}
        PORT: 5000
        MODE: DEV
    restart: always
    volumes:
      - ./client-ui:/usr/src/app
      - client_ui_node_modules:/usr/src/app/node_modules:cached
    ports:
      - 5000:5000
    user: client-ui
The 2 Dockerfiles
The Rails Dockerfile-dev
FROM ruby:2.6.3
ARG PG_MAJOR
ARG BUNDLER_VERSION
ARG UID
ARG MODE
RUN adduser rails --uid $UID --disabled-password --gecos ""
# Add POSTGRESQL to the source list using the right version
RUN curl -sSL https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add - \
&& echo 'deb http://apt.postgresql.org/pub/repos/apt/ stretch-pgdg main' $PG_MAJOR > /etc/apt/sources.list.d/pgdg.list
ENV RAILS_ENV $MODE
RUN apt-get update -qq && apt-get install -y postgresql-client-$PG_MAJOR vim
RUN apt-get -y install sudo
WORKDIR /usr/src/app
CMD chown -R rails /usr/src/app
COPY Gemfile /usr/src/app/Gemfile
COPY Gemfile.lock /usr/src/app/Gemfile.lock
ENV BUNDLER_VERSION $BUNDLER_VERSION
RUN gem install bundler:$BUNDLER_VERSION
RUN bundle install
COPY . /usr/src/app
# Add a script to be executed every time the container starts.
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
EXPOSE 3000
# Start the main process.
CMD ["rails", "server", "-b", "0.0.0.0"]
The Nuxt Dockerfile-dev
FROM node:10
ARG UID
ARG MODE=DEV
ARG PORT
RUN adduser client-ui --uid $UID --disabled-password --gecos ""
RUN apt-get update
RUN apt-get -y install sudo
RUN mkdir /usr/src/app
RUN chown -R client-ui /usr/src/app
COPY package.json yarn.lock /usr/src/app/
RUN yarn install
COPY . /usr/src/app
ENV API_URL=http://localhost:3000/v1
ENV REVIEW_URL=http://localhost:8000
# expose 5000 on container
EXPOSE $PORT
# set app serving to permissive / assigned
ENV NUXT_HOST=0.0.0.0
# set app port
ENV NUXT_PORT=$PORT
My problem is that the lines where I run chown never seem to take effect. If I manually go into the containers with docker exec -u root -it backend bash and run chown -R rails . by hand, everything works as expected. Likewise, I tried running chmod 777 as a test, but that also had no effect on the permission-denied error I keep getting.
What might be causing Docker to ignore my chown command?
This Stack Overflow question seems relevant; however, it doesn't quite apply because I don't have any VOLUME mounts inside my Dockerfiles. A user in the comments of the accepted answer has my same issue, though unfortunately no solution.
Containers are like ogres, or onions, they have layers.
When using VOLUMEs or MOUNTs, the directory (or file) is not actually IN the container, but only appears to be in it.
Your Dockerfile uses a layer for /usr/src/app, which as you probably already know is your ./tablevibes-backend directory on your host computer.
services:
  backend:
    volumes:
      - ./tablevibes-backend:/usr/src/app:Z
When you use a volume or mount in this way, the only thing Docker can do is simple CRUD (create, read, update, delete) operations; it cannot (and should not) modify the metadata, because that would mean modifying your host drive, which could be a security issue.
Try this on the host, in the project directory:
sudo chown -R $USER:$USER .
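A related approach, since both Dockerfiles already take a UID build argument (defaulting to 1001): build the images with your real host UID so that the rails and client-ui users own the bind-mounted files in the first place. A minimal sketch, assuming bash (which defines UID but does not export it by default):

export UID
docker-compose -f docker-compose.dev.yml up --build

With matching UIDs, files created on either side of the bind mount belong to the same numeric owner, so no chown is needed at all.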

docker container can not see the data from a shared volume

I'm trying to set up a lab using docker containers with base image centos7 and docker-compose.
Here is my docker-compose.yaml file
version: "3"
services:
base:
image: centos_base
build:
context: base
master:
links:
- base
build:
context: master
image: centos_master
container_name: master01
hostname: master01
volumes:
- ansible_vol:/var/ans
networks:
- net
host01:
links:
- base
- master
build:
context: host
image: centos_host
container_name: host01
hostname: host01
command: ["/var/run.sh"]
volumes:
- ansible_vol:/var/ans
networks:
- net
networks:
net:
volumes:
ansible_vol:
My Dockerfiles are as below.
Base image Dockerfile:
# For centos7.0
FROM centos:7
RUN yum install -y net-tools man vim initscripts openssh-server
RUN echo "12345" | passwd root --stdin
RUN mkdir /root/.ssh
Master Dockerfile:
FROM centos_base:latest
# install ansible package
RUN yum install -y epel-release
RUN yum install -y ansible openssh-clients
RUN mkdir /var/ans
# change working directory
WORKDIR /var/ans
RUN ssh-keygen -t rsa -N 12345 -C "master key" -f master_key
CMD /usr/sbin/sshd -D
Host Image Dockerfile:
FROM centos_base:latest
RUN mkdir /var/ans
COPY run.sh /var/
RUN chmod 755 /var/run.sh
My run.sh file:
#!/bin/bash
cat /var/ans/master_key.pub >> /root/.ssh/authorized_keys
# start SSH server
/usr/sbin/sshd -D
My problems are:
If I run docker-compose up -d --build, I see no containers staying up; they all get created but exit immediately.
Successfully tagged centos_host:latest
Creating working_base_1 ... done
Creating master01 ... done
Creating host01 ... done
CONTAINER ID   IMAGE           COMMAND                  CREATED          STATUS                      PORTS   NAMES
433baf2dd0d8   centos_host     "/var/run.sh"            12 minutes ago   Exited (1) 12 minutes ago           host01
a2a57e480635   centos_master   "/bin/sh -c '/usr/sb…"   13 minutes ago   Exited (1) 12 minutes ago           master01
a4acf6fb3e7b   centos_base     "/bin/bash"              13 minutes ago   Exited (0) 13 minutes ago           working_base_1
The SSH keys generated in the centos_master image are not available in the centos_host container, even though I have added the volume mapping ansible_vol:/var/ans in the docker-compose file.
My intention is that the SSH key files generated in the master should be available in the host containers, so that the run.sh script can append them to the authorized_keys file of the host containers.
Any help is greatly appreciated.
Try putting this in base/Dockerfile:
RUN echo "12345" | passwd root --stdin; \
ssh-keygen -f /etc/ssh/ssh_host_rsa_key -N '' -b 4096 -t rsa
and re-run docker-compose build.
/etc/ssh/ssh_host_rsa_key is a host key used by sshd (the SSH daemon); it must exist for the containers' SSH daemons to start properly.
The key you generated and copied into authorized_keys is what allows an SSH client to connect to the container via SSH.
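A quick way to verify (a sketch, using the container names from the compose file above):

docker-compose build && docker-compose up -d
docker logs master01                     # sshd should now stay up instead of exiting
docker exec -it host01 ls -l /var/ans    # the shared key files should appear here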
Try using external: false, so that Compose does not attempt to recreate the volume and override the previous data at creation:
version: "3"
services:
base:
image: centos_base
build:
context: base
master:
links:
- base
build:
context: master
image: centos_master
container_name: master01
hostname: master01
volumes:
- ansible_vol:/var/ans
networks:
- net
host01:
links:
- base
- master
build:
context: host
image: centos_host
container_name: host01
hostname: host01
command: ["/var/run.sh"]
volumes:
- ansible_vol:/var/ans
networks:
- net
networks:
net:
volumes:
ansible_vol:
external: false
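One more thing worth keeping in mind: named volumes persist across docker-compose up/down, so if ansible_vol was created empty by an earlier run it will keep shadowing the image's /var/ans content. Removing the stale volume forces it to be repopulated on the next start (note that -v deletes the volume's data):

docker-compose down -v
docker-compose up -d --build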

Unable to authenticate to company LDAP using flask-ldap3-login in Docker container

I'm trying to connect to my company's LDAP server to authenticate users in my Flask web app. I'm constantly getting this error:
2020-06-22 09:55:07,459 ERROR flask_ldap3_login MainThread : no active server available in server pool after maximum number of tries
I also tried to telnet to the LDAP server from the web container, and no connection can be made. What do I need to do to allow my containers, which run on our network, to access LDAP?
I tried enabling SSL and added the certs, but still no success.
docker-compose file
# docker-compose.yml
version: '3'
services:
  db:
    build: ./application/db
    container_name: dqm_db
    restart: always
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  web:
    build: .
    container_name: dqm_web
    restart: always
    ports:
      - 5000:5000
      - 389:389
      - 636:636
    env_file:
      - .env
    depends_on:
      - db
    links:
      - redis
    volumes:
      - .:/data-quality-management
  nginx:
    build: ./nginx
    container_name: dqm_nginx
    restart: always
    ports:
      - 80:80
    depends_on:
      - web
  redis:
    container_name: dqm_redis
    env_file:
      - .env
    image: redis:latest
    restart: always
    command: redis-server
    ports:
      - 6379:6379
    volumes:
      - .:/data-quality-management
  worker:
    build: .
    hostname: worker
    container_name: dqm_worker
    entrypoint: celery
    command: -A application.run_celery:celery worker --loglevel=info
    links:
      - redis
      - web
    depends_on:
      - web
      - redis
    env_file:
      - .env
    volumes:
      - .:/data-quality-management
volumes:
  postgres_data:
Dockerfile:
FROM python:3.7-buster
RUN apt-get update
RUN apt-get install python-dev -y
RUN apt-get install libsasl2-dev -y
RUN apt-get install libldap2-dev -y
RUN apt-get install libssl-dev -y
RUN apt-get clean -y
WORKDIR /data-quality-management
ENV PYTHONUNBUFFERED 1
COPY requirements.txt .
EXPOSE 5000
EXPOSE 389
EXPOSE 636
COPY *.crt /etc/ssl/certs/
RUN update-ca-certificates
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
COPY . /data-quality-management
CMD gunicorn -w $WEB_CONCURRENCY -b $WEB_BIND wsgi:app
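One thing worth noting from the files above: publishing 389:389 and 636:636 on the web service only affects inbound connections to the container; outgoing connections to the company LDAP server need no port mappings at all. To narrow down whether this is a DNS or network issue, a quick check from inside the running container (a sketch; ldap.example.com is a placeholder for your actual LDAP host):

docker exec -it dqm_web getent hosts ldap.example.com
docker exec -it dqm_web python -c "import socket; socket.create_connection(('ldap.example.com', 636), timeout=5); print('ok')"

If name resolution or the TCP connection fails here, the problem is network-level (corporate DNS, VPN, firewall) rather than anything in flask-ldap3-login.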

Docker. Create directory in container after docker-compose up and give it r/w permissions

How do I create another directory in the same container, for example /tmp2, and give it r/w permissions?
docker-compose.yml:
nginx:
  image: nginx
  ports:
    - 80:80
  volumes:
    - ./volumes/nginx/conf.d:/etc/nginx/conf.d
  command: nginx -g "daemon off;"
  networks:
    - network
You can create a directory or perform any other action by defining it in a Dockerfile. In the same directory as your docker-compose.yml create a Dockerfile:
touch Dockerfile
Add the following lines to your Dockerfile (it needs a base image to build on, so start from the nginx image the compose file already uses):
FROM nginx
RUN mkdir /tmp2
RUN chmod 755 /tmp2
Then add the build information to the docker-compose.yaml:
nginx:
  image: nginx
  build: .
  ports:
    - 80:80
  volumes:
    - ./volumes/nginx/conf.d:/etc/nginx/conf.d
  command: nginx -g "daemon off;"
  networks:
    - network
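Then rebuild and restart so the new image layer takes effect:

docker-compose up -d --build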
If you are using only docker-compose without a Dockerfile, it can be done this way:
You can get into the container like this:
docker exec -ti $(docker ps --filter name='nginx' --format "{{ .ID }}") bash
Then, inside the container, you can run:
mkdir /tmp2
chmod 755 /tmp2
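Keep in mind that changes made with docker exec exist only in that running container instance; they are lost as soon as the container is removed or recreated (for example on the next docker-compose up --force-recreate).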
Alternatively, simply add a volume for the directory you wish to create and it will be created automatically during startup:
nginx:
  image: nginx
  ports:
    - 80:80
  volumes:
    - ./volumes/nginx/conf.d:/etc/nginx/conf.d
    - ./volumes/hosted-directory/hosted-sub-directory:/etc/created-directory/created-sub-directory/
  command: nginx -g "daemon off;"
  networks:
    - network
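Note that if the host-side directory does not already exist, the Docker daemon creates it for you, owned by root; a process running as a non-root user inside the container may therefore still need the permissions adjusted.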

How to fix PSQL connection error with Docker Compose

I'm trying to connect my Python-Flask app with a Postgres database in a docker environment. I am using a docker-compose file to build my web and db environment.
However, I am getting the following error:
psql: could not connect to server: No such file or directory
        Is the server running locally and accepting
        connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
Here is my Dockerfile:
FROM ubuntu:16.04 as base
RUN apt-get update -y && apt-get install -y python3-pip python3-dev postgresql libpq-dev libffi-dev jq
ENV LC_ALL=C.UTF-8 \
    LANG=C.UTF-8
ENV FLASK_APP=manage.py \
    FLASK_ENV=development \
    APP_SETTINGS=config.DevelopmentConfig \
    DATABASE_URL=postgresql://user:pw@postgres/database
COPY . /app
WORKDIR /app
RUN pip3 install -r requirements.txt
FROM base as development
EXPOSE 5000
CMD ["bash"]
Here is my Docker-compose file:
version: "3.6"
services:
development_default: &DEVELOPMENT_DEFAULT
build:
context: .
target: development
working_dir: /app
volumes:
- .:/app
environment:
- GOOGLE_CLIENT_ID=none
- GOOGLE_CLIENT_SECRET=none
web:
<<: *DEVELOPMENT_DEFAULT
ports:
- "5000:5000"
depends_on:
- db
command: flask run --host=0.0.0.0
db:
image: postgres:10.6
environment:
- POSTGRES_USER=user
- POSTGRES_PASSWORD=db
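For what it's worth, two details in the files above look inconsistent: DATABASE_URL points at the host postgres while the compose service is named db (the service name is what the compose network's DNS resolves), and the URL's password pw does not match POSTGRES_PASSWORD=db. Also, the Unix-socket error usually means psql was invoked without a host, so it never attempted a TCP connection; passing the full URL (psql $DATABASE_URL) or -h db forces TCP. A sketch of a consistent URL, assuming the compose file's credentials are the intended ones:

DATABASE_URL=postgresql://user:db@db/database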
