Flask App runs locally but errors out on DigitalOcean - python-3.x

I think the problem has to do with my docker-compose.yml file. The app builds and runs locally, but when I run "docker-compose up -d" on my DigitalOcean Droplet I get this error:
ERROR: Couldn't find env file: /root/.env
The following is my docker-compose.yml file.
version: '2'
services:
  postgres:
    image: 'postgres:9.5'
    container_name: postgress
    env_file:
      - '.env'
    volumes:
      - 'postgres:/var/lib/postgresql/data'
    ports:
      - '5432:5432'
    networks:
      - db_nw
  redis:
    image: 'redis:3.0-alpine'
    container_name: redis
    command: redis-server --requirepass pass123456word
    volumes:
      - 'redis:/var/lib/redis/data'
    ports:
      - '6379:6379'
  website:
    restart: always
    build: .
    container_name: website
    command: >
      gunicorn -c "python:config.gunicorn" --reload "app.app:create_app()"
    env_file:
      - '.env'
    volumes:
      - '.:/app'
    ports:
      - 8000:8000
    expose:
      - 8000
    networks:
      - db_nw
      - web_nw
    depends_on:
      - postgres
    links:
      - celery
      - redis
      - postgres
  celery:
    build: .
    container_name: celery
    command: celery worker -B -l info -A app.blueprints.contact.tasks
    env_file:
      - '.env'
    volumes:
      - '.:/app'
  nginx:
    restart: always
    build: ./nginx
    image: 'nginx:1.13'
    container_name: nginx
    volumes:
      - /www/static
      - .:/app
    ports:
      - 80:80
    networks:
      - web_nw
    links:
      - website
    depends_on:
      - website
networks:
  db_nw:
    driver: bridge
  web_nw:
    driver: bridge
volumes:
  postgres:
  redis:
My Dockerfile:
FROM python:3.7.5-slim-buster
RUN apt-get update \
    && apt-get install -qq -y \
    build-essential libpq-dev --no-install-recommends
ENV INSTALL_PATH /app
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH
COPY requirements.txt requirements.txt
RUN pip install --upgrade pip -r requirements.txt
COPY . .
RUN pip install --editable .
CMD gunicorn -c "python:config.gunicorn" "app.app:create_app()"
Is something wrong with the volumes in my docker-compose.yml file? Or am I doing something weird in my Dockerfile with the ENV variable, where it's hard-coded to my local machine rather than the "root" directory on DigitalOcean?
I'm new to hosting docker images so this is my first go at something like this. Thanks!

When you access a Droplet, you're generally running as root.
You appear to have copied the docker-compose.yml to the Droplet correctly, but you have not copied the .env file on which it depends to the Droplet's /root/.env.
If you copy the .env file to /root/.env on the Droplet, it should work.
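For example, assuming the .env file sits next to docker-compose.yml in your local project directory and 203.0.113.10 is a placeholder for your Droplet's IP, something like this should do it:
# Copy the env file from the local project directory to the Droplet,
# then bring the stack up again over SSH.
scp .env root@203.0.113.10:/root/.env
ssh root@203.0.113.10 'cd /root && docker-compose up -d'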

Related

ModuleNotFoundError: No module named 'api_yamdb.wsgi'

When deploying the project, the web container crashes with the error No module named 'api_yamdb.wsgi'. I've already spent a lot of time on this but couldn't figure out what exactly needs to be done to solve the problem. Please help!
My Dockerfile:
FROM python:3.7-slim
WORKDIR /app
COPY ./api_yamdb/requirements.txt .
RUN pip3 install -r requirements.txt --no-cache-dir
COPY . .
CMD ["gunicorn", "api_yamdb.wsgi:application", "--bind", "0:8000" ]
My docker-compose:
version: '3.8'
services:
  db:
    image: postgres:13.0-alpine
    volumes:
      - /var/lib/postgresql/data/
    env_file:
      - ./.env
  web:
    image: porter4ch/yamdb_final:latest
    restart: always
    volumes:
      - static_value:/app/static/
      - media_value:/app/media/
    depends_on:
      - db
    env_file:
      - ./.env
  nginx:
    image: nginx:1.21.3-alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
      - static_value:/var/html/static/
      - media_value:/var/html/media/
    depends_on:
      - web
volumes:
  static_value:
  media_value:
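A quick way to narrow this down (assuming the image name from the compose file above) is to check where the wsgi module actually lands inside the image:
# List the layout of /app inside the published image.
docker run --rm porter4ch/yamdb_final:latest ls -R /app | head -40
# If wsgi.py turns out to live at /app/api_yamdb/api_yamdb/wsgi.py (because
# COPY . . copies the repository root, not the api_yamdb directory), either
# COPY only ./api_yamdb into /app or adjust gunicorn's module path to match.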

Redirecting API call to a dockerized node-express server with nginx

I have a node-express server running inside a Docker container, exposing port 3001. My nginx is installed on the OS with sudo apt install nginx. What I want is that every time a call is made to app.domain.com:3001, that call is redirected to localhost:3001. I am new to nginx configuration, and I would prefer to do this with a *.conf file in nginx's conf.d folder. Also, the API responses should carry the same domain, .domain.com, so that I can set httpOnly cookies on an Angular app running on app.domain.com.
My node docker-compose file:
version: "3"
services:
container1:
image: node:18.12-alpine
working_dir: /usr/src/app
container_name: container1
depends_on:
- container_mongodb
restart: on-failure
env_file: .env
ports:
- "$APP_PORT:$APP_PORT"
volumes:
- .:/usr/src/app
networks:
- network1
command: ./wait-for.sh container_mongodb:$MONGO_PORT -- npm run dev
container_mongodb:
image: mongo:6.0.3
container_name: container_mongodb
restart: on-failure
env_file: .env
environment:
- MONGO_INITDB_ROOT_USERNAME=$MONGO_USERNAME
- MONGO_INITDB_ROOT_PASSWORD=$MONGO_PASSWORD
ports:
- "$MONGO_EXPOSE_PORT:$MONGO_PORT"
volumes:
- container_mongodb_data:/data/db
- ./src/config/auth/mongodb.key:/data/mongodb.key
networks:
- network1
entrypoint:
- bash
- -c
- |
cp /data/mongodb.key /data/mongodb.copy.key
chmod 400 /data/mongodb.copy.key
chown 999:999 /data/mongodb.copy.key
exec docker-entrypoint.sh $$#
command: ["mongod", "--replSet", "rs0", "--bind_ip_all", "--keyFile", "/data/mongodb.copy.key"]
networks:
network1:
external: true
volumes:
container_mongodb_data:
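A minimal sketch of the kind of conf.d file the question asks for, assuming app.domain.com resolves to this host and the container keeps publishing port 3001 (the file name and domain are placeholders):
# Write a reverse-proxy config for the host-installed nginx, then reload it.
sudo tee /etc/nginx/conf.d/app.conf > /dev/null <<'EOF'
server {
    listen 80;
    server_name app.domain.com;

    location / {
        # Forward to the published container port on this host.
        proxy_pass http://127.0.0.1:3001;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
EOF
sudo nginx -t && sudo systemctl reload nginx
With this in place, the Angular app can call app.domain.com on port 80 instead of app.domain.com:3001, and the cookies it receives stay on .domain.com.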

Docker-compose confuses building a frontend with a backend

Using Docker-compose I want to build 3 containers: backend (Node.js), frontend (React.js), and MySQL.
version: '3.8'
services:
  backend:
    container_name: syberiaquotes-restapi
    build: ./backend
    env_file:
      - .env
    command: "sh -c 'npm install && npm run start'"
    ports:
      - '3000:3000'
    volumes:
      - ./backend:/app
      - /app/node_modules
    depends_on:
      - db
  frontend:
    container_name: syberiaquotes-frontend
    build: ./frontend
    ports:
      - '5000:5000'
    volumes:
      - ./frontend/src:/app/src
    stdin_open: true
    tty: true
    depends_on:
      - backend
  db:
    image: mysql:latest
    container_name: syberiaquotes-sql
    env_file:
      - .env
    environment:
      - MYSQL_DATABASE=${SQL_DB}
      - MYSQL_USER=${SQL_USER}
      - MYSQL_ROOT_PASSWORD=${SQL_PASSWORD}
      - MYSQL_PASSWORD=${SQL_PASSWORD}
    volumes:
      - data:/var/lib/mysql
    restart: unless-stopped
    ports:
      - '3306:3306'
volumes:
  data:
My files structure:
Everything worked fine until I added a new 'frontend' container!
It seems that docker is treating my frontend container as a second backend, because it's trying to launch nodemon, which isn't even included in the frontend's dependencies!
Obviously I have two Dockerfiles, one for each service; they are almost identical.
Backend:
Frontend:
Do you have any ideas where the problem might be?
RESOLVED! I had to delete all images and volumes:
$ docker rm $(docker ps -a -q) -f
$ echo y | docker system prune --all
$ echo y | docker volume prune
and run it again:
$ docker-compose up
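A lighter-weight alternative, for what it's worth: a stale image cache like this can usually be cleared by forcing a rebuild instead of pruning everything.
$ docker-compose build --no-cache
$ docker-compose up --force-recreate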

Docker compose connection refused

I am getting a 'connection refused' error when trying to hit my NodeJS server running in a Docker container. If I try to cURL the server from the host machine, I get the error "curl: (56) Recv failure: Connection reset by peer".
When I run sudo docker-compose up -d it starts up my services and running sudo docker ps -a shows the following:
CONTAINER ID   IMAGE                COMMAND                  CREATED              STATUS              PORTS                    NAMES
e56b30b1de9c   mongo:4.1.8-xenial   "docker-entrypoint.s…"   About a minute ago   Up About a minute   27017/tcp                db
0d82b0a881e5   nodejs               "docker-entrypoint.s…"   About a minute ago   Up About a minute   0.0.0.0:3000->3000/tcp   nodejs
The containers are running on a server with IP '192.168.0.24' so I try to hit an endpoint in my 'nodejs' app via '192.168.0.24:3000/items' but this results in a connection refused error.
docker-compose.yml
version: '3'
services:
  nodejs:
    build:
      context: .
      dockerfile: Dockerfile
    image: nodejs
    container_name: nodejs
    restart: unless-stopped
    env_file: .env
    environment:
      - MONGO_USERNAME=$MONGO_USERNAME
      - MONGO_PASSWORD=$MONGO_PASSWORD
      - MONGO_HOSTNAME=db
      - MONGO_PORT=$MONGO_PORT
      - MONGO_DB=$MONGO_DB
    ports:
      - "3000:3000"
    volumes:
      - .:/home/node/app
      - /home/node/app/node_modules
    networks:
      - app-network
    command: ./wait-for.sh db:27017 -- /home/node/app/node_modules/.bin/nodemon app.js
  db:
    image: mongo:4.1.8-xenial
    container_name: db
    restart: unless-stopped
    env_file: .env
    environment:
      - MONGO_INITDB_ROOT_USERNAME=$MONGO_USERNAME
      - MONGO_INITDB_ROOT_PASSWORD=$MONGO_PASSWORD
    volumes:
      - dbdata:/data/db
    networks:
      - app-network
networks:
  app-network:
    driver: bridge
volumes:
  dbdata:
  node_modules:
Dockerfile
FROM node:10-alpine
RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app
WORKDIR /home/node/app
COPY package*.json ./
USER node
RUN npm install
COPY --chown=node:node . .
EXPOSE 3000
CMD [ "npm", "app.js" ]

Unable to authenticate to company LDAP using flask-ldap3-login in Docker container

I'm trying to connect to my company's LDAP server to authenticate users in my Flask web app. I'm constantly getting this error:
2020-06-22 09:55:07,459 ERROR flask_ldap3_login MainThread : no active server available in server pool after maximum number of tries
I also tried to telnet to the LDAP server from the web container, and no connection could be made. What do I need to do to allow my containers, running on our network, to access LDAP?
I tried enabling SSL and added the certs, but still no success.
docker-compose file
# docker-compose.yml
version: '3'
services:
  db:
    build: ./application/db
    container_name: dqm_db
    restart: always
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  web:
    build: .
    container_name: dqm_web
    restart: always
    ports:
      - 5000:5000
      - 389:389
      - 636:636
    env_file:
      - .env
    depends_on:
      - db
    links:
      - redis
    volumes:
      - .:/data-quality-management
  nginx:
    build: ./nginx
    container_name: dqm_nginx
    restart: always
    ports:
      - 80:80
    depends_on:
      - web
  redis:
    container_name: dqm_redis
    env_file:
      - .env
    image: redis:latest
    restart: always
    command: redis-server
    ports:
      - 6379:6379
    volumes:
      - .:/data-quality-management
  worker:
    build: .
    hostname: worker
    container_name: dqm_worker
    entrypoint: celery
    command: -A application.run_celery:celery worker --loglevel=info
    links:
      - redis
      - web
    depends_on:
      - web
      - redis
    env_file:
      - .env
    volumes:
      - .:/data-quality-management
volumes:
  postgres_data:
Dockerfile:
FROM python:3.7-buster
RUN apt-get update
RUN apt-get install python-dev -y
RUN apt-get install libsasl2-dev -y
RUN apt-get install libldap2-dev -y
RUN apt-get install libssl-dev -y
RUN apt-get clean -y
WORKDIR /data-quality-management
ENV PYTHONUNBUFFERED 1
COPY requirements.txt .
EXPOSE 5000
EXPOSE 389
EXPOSE 636
COPY *.crt /etc/ssl/certs/
RUN update-ca-certificates
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
COPY . /data-quality-management
CMD gunicorn -w $WEB_CONCURRENCY -b $WEB_BIND wsgi:app
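One way to separate a network problem from a flask-ldap3-login problem is to test raw TCP reachability from inside the web container (ldap.example.com is a placeholder for the real LDAP host):
# Open a TCP connection to the LDAP server from inside the container.
docker-compose exec web python -c "import socket; socket.create_connection(('ldap.example.com', 636), timeout=5); print('ok')"
# A timeout or 'connection refused' here means the container cannot reach the
# LDAP server at all (DNS, firewall, or a corporate proxy); no flask-ldap3-login
# setting will help until that is fixed.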
