When deploying the project, the web container crashes with the error No module named 'api_yamdb.wsgi'. I've already spent a lot of time on this but could not figure out what exactly needs to be done to solve the problem. Please help!
My Dockerfile:
FROM python:3.7-slim
WORKDIR /app
COPY ./api_yamdb/requirements.txt .
RUN pip3 install -r requirements.txt --no-cache-dir
COPY . .
CMD ["gunicorn", "api_yamdb.wsgi:application", "--bind", "0:8000" ]
My docker-compose:
version: '3.8'
services:
  db:
    image: postgres:13.0-alpine
    volumes:
      - /var/lib/postgresql/data/
    env_file:
      - ./.env
  web:
    image: porter4ch/yamdb_final:latest
    restart: always
    volumes:
      - static_value:/app/static/
      - media_value:/app/media/
    depends_on:
      - db
    env_file:
      - ./.env
  nginx:
    image: nginx:1.21.3-alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
      - static_value:/var/html/static/
      - media_value:/var/html/media/
    depends_on:
      - web
volumes:
  static_value:
  media_value:
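Note that the web service pulls the prebuilt image porter4ch/yamdb_final:latest instead of building from the Dockerfile above, so Dockerfile changes only take effect once the image is rebuilt and pushed (or a build: key is added). To see what actually ended up inside the image and why gunicorn exits, something like this can help (service name taken from the compose file; the exact paths depend on what the image contains):

docker-compose run --rm web ls /app    # check where manage.py / wsgi.py ended up
docker-compose logs web                # full gunicorn traceback from the crashed container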
Related
Here is my docker-compose.yml. When I comment out the volumes section of "khaothi-manager", my services work correctly, but when I uncomment it, my Node service throws an error saying it cannot connect to Mongo.
version: "3.8"
services:
mongo:
image: mongo
restart: always
env_file: ./.env
ports:
- $MONGO_LOCAL_PORT:$DB_PORT
volumes:
- ./data:/data/db
networks:
- hm_khaothi
khaothi-manager:
container_name: khaothi-manager
image: khaothi-manager
restart: always
volumes:
- ./admin:/app
build: ./admin
env_file: ./.env
links:
- mongo
- khaothi-resource
ports:
- $MANAGER_PORT:$MANAGER_PORT
environment:
- MANAGER_HOST=$MANAGER_HOST
- MANAGER_PORT=$MANAGER_PORT
- RESOURCE_HOST=khaothi-resource
- RESOURCE_PORT:$RESOURCE_PORT
- DB_HOST=mongo
- DB_NAME=$DB_NAME
- DB_PORT=$DB_PORT
networks:
- hm_khaothi
My Dockerfile:
# syntax=docker/dockerfile:1
FROM node:14-alpine
WORKDIR /app
COPY . .
RUN npm install
CMD ["npm", "start"]
This is the error
(node:37) UnhandledPromiseRejectionWarning: MongooseServerSelectionError: connection timed out
at NativeConnection.Connection.openUri (/app/node_modules/mongoose/lib/connection.js:807:32)
at /app/node_modules/mongoose/lib/index.js:342:10
...
It worked correctly when I added another volume, /app/node_modules:
  khaothi-manager:
    container_name: khaothi-manager
    image: khaothi-manager
    restart: always
    volumes:
      - ./admin:/app
      - /app/node_modules
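Why the extra anonymous volume makes a difference is not obvious from the error alone; a common explanation (an assumption here, not something stated in the question) is that the ./admin:/app bind mount hides the node_modules directory installed at image build time, so the container runs with whatever node_modules happens to exist in ./admin on the host, while the anonymous /app/node_modules volume preserves the image's copy. A Dockerfile variant that pairs well with this pattern (a sketch; assumes package.json sits at the root of ./admin):

# syntax=docker/dockerfile:1
FROM node:14-alpine
WORKDIR /app
# install dependencies first so they end up in the image's /app/node_modules
COPY package*.json ./
RUN npm install
# then copy the rest of the source
COPY . .
CMD ["npm", "start"]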
Using Docker Compose, I want to build 3 containers: backend (Node.js), frontend (React.js), and MySQL.
version: '3.8'
services:
  backend:
    container_name: syberiaquotes-restapi
    build: ./backend
    env_file:
      - .env
    command: "sh -c 'npm install && npm run start'"
    ports:
      - '3000:3000'
    volumes:
      - ./backend:/app
      - /app/node_modules
    depends_on:
      - db
  frontend:
    container_name: syberiaquotes-frontend
    build: ./frontend
    ports:
      - '5000:5000'
    volumes:
      - ./frontend/src:/app/src
    stdin_open: true
    tty: true
    depends_on:
      - backend
  db:
    image: mysql:latest
    container_name: syberiaquotes-sql
    env_file:
      - .env
    environment:
      - MYSQL_DATABASE=${SQL_DB}
      - MYSQL_USER=${SQL_USER}
      - MYSQL_ROOT_PASSWORD=${SQL_PASSWORD}
      - MYSQL_PASSWORD=${SQL_PASSWORD}
    volumes:
      - data:/var/lib/mysql
    restart: unless-stopped
    ports:
      - '3306:3306'
volumes:
  data:
My file structure:
Everything worked fine until I added a new 'frontend' container!
It seems that Docker is treating my frontend container as a second backend, because it's trying to launch nodemon, which isn't even included in the frontend dependencies:
I have two Dockerfiles, one for each service; they are almost identical.
Backend:
Frontend:
Do you have any idea where the problem might be?
RESOLVED! I had to delete all images and volumes:
$ docker rm $(docker ps -a -q) -f
$ echo y | docker system prune --all
$ echo y | docker volume prune
and run it again:
$ docker-compose up
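A less destructive alternative that often achieves the same thing, assuming the real culprit was a stale image or an anonymous volume left over from before the frontend service was added:

$ docker-compose build --no-cache                            # rebuild both images from scratch
$ docker-compose up --force-recreate --renew-anon-volumes    # recreate containers and anonymous volumes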
I am using Docker and Docker Compose for the first time and I am getting this error: ERROR: The Compose file is invalid because: Service volumes have neither an image nor a build context specified. At least one must be provided. But my docker-compose file does have a build context specified. I tried moving the build context to different places, but still no headway. Here is my docker-compose file:
version: '3.4'
services:
express:
container_name: express
image: node:12-alpine
volumes:
- type: bind
source: ./
target: /app
- type: volume
source: nodemodules
target: /app/node_modules
volume:
nocopy: true
build: .
working_dir: /app
command: npm run dev
ports:
- '4000:4000'
environment:
- NODE_ENV=development
- PORT=4000
volumes:
nodemodules:
links:
- mongo
mongo:
container_name: mongo
image: mongo
volumes:
- ./data:/data/db
ports:
- '27017:27017'
- '27018:27018'
- '27019:27019'
What exactly could I be doing wrong? Thank you.
As @DavidMaze wrote:
The lines called volumes: and nodemodules: are incorrectly indented. They are at the same level as other services, but are empty. You probably want to delete these lines.
Take a look: docker-compose-file.
I was able to fix it after some days of googling. Apparently, all I had to do was put
volumes:
  nodemodules:
exactly like that on the last lines of the file, and remove them from the services mapping. So my file ended up looking like this:
version: '3.4'
services:
  express:
    container_name: express
    image: node:12-alpine
    volumes:
      - type: bind
        source: ./
        target: /app
      - type: volume
        source: nodemodules
        target: /app/node_modules
        volume:
          nocopy: true
    build: .
    working_dir: /app
    command: npm run dev
    ports:
      - '4000:4000'
    environment:
      - NODE_ENV=development
      - PORT=4000
    links:
      - mongo
  mongo:
    container_name: mongo
    image: mongo
    volumes:
      - ./data:/data/db
    ports:
      - '27017:27017'
      - '27018:27018'
      - '27019:27019'
volumes:
  nodemodules:
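To confirm the indentation is what Compose actually sees, the file can be validated without starting anything:

$ docker-compose config

This prints the fully resolved file, or the same kind of error message as above if the structure is still wrong.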
I am getting a connection error with docker-compose when I try to access a Postgres database through an API in Node.
I'm using Sequelize as the ORM to access the database, but I don't know what is going wrong.
docker-compose.yml:
version: '3.5'
services:
  api-service:
    build:
      context: .
      dockerfile: ./api-docker.dockerfile
    image: api-service
    container_name: api-service
    restart: always
    env_file: .env
    environment:
      - NODE_ENV=$NODE_ENV
    ports:
      - ${PORT}:3000
    volumes:
      - .:/home/node/api
      - node_modules:/home/node/api/node_modules
    depends_on:
      - postgres-db
    networks:
      - api-network
    command: npm run start:dev
  postgres-db:
    expose:
      - ${PORT_SERVICE}
    ports:
      - ${PORT_SERVICE}:5432
    restart: always
    env_file: .env
    volumes:
      - pgReportData:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: ${USER_SERVICE}
      POSTGRES_PASSWORD: ${PASSWORD_SERVICE}
      POSTGRES_DB: ${DATABASE_SERVICE}
    networks:
      - api-network
    container_name: postgres-db
    image: postgres:10
networks:
  api-network:
    driver: bridge
volumes:
  pgReportData:
    driver: local
  node_modules:
.env:
NODE_ENV=development
PORT=30780
HOST_SERVICE=postgres-db
DATABASE_SERVICE=base
USER_SERVICE=user
PASSWORD_SERVICE=password
DIALECT=postgres
PORT_SERVICE=5444
api-docker.dockerfile:
FROM node:12
WORKDIR /src
COPY . .
COPY --chown=node:node . .
USER node
RUN npm install
EXPOSE $PORT
ENTRYPOINT ["npm", "run", "start:dev"]
And when I run:
docker-compose up
I'm getting this error:
Any idea?
Can someone help me?
If Node is the only app that connects to postgres-db, you can remove the networks and expose the database's running port (5432). To connect to the db you can simply use the container name as the host.
Connection string: "postgres://YourUserName:YourPassword@postgres-db:5432/YourDatabase";
version: '3.5'
services:
  api-service:
    build:
      context: .
      dockerfile: ./api-docker.dockerfile
    image: api-service
    container_name: api-service
    restart: always
    env_file: .env
    environment:
      - NODE_ENV=$NODE_ENV
    ports:
      - ${PORT}:3000
    volumes:
      - .:/home/node/api
      - node_modules:/home/node/api/node_modules
    depends_on:
      - postgres-db
    command: npm run start:dev
  postgres-db:
    expose:
      - 5432
    restart: always
    env_file: .env
    volumes:
      - pgReportData:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: ${USER_SERVICE}
      POSTGRES_PASSWORD: ${PASSWORD_SERVICE}
      POSTGRES_DB: ${DATABASE_SERVICE}
    container_name: postgres-db
    image: postgres:10
volumes:
  pgReportData:
    driver: local
  node_modules:
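A couple of quick checks, using the values from the .env file above: inside the Compose network the API should connect to host postgres-db on the container port 5432, not the ${PORT_SERVICE} host mapping, and the database can be probed directly:

docker-compose logs postgres-db                                        # did Postgres start cleanly?
docker-compose exec postgres-db psql -U user -d base -c 'SELECT 1;'    # user and base come from the .env above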
I think it has to do with my docker-compose.yml file. The app builds locally, but when I run "docker-compose up -d" on my DigitalOcean Droplet I get this error:
ERROR: Couldn't find env file: /root/.env
The following is my docker-compose.yml file.
version: '2'
services:
  postgres:
    image: 'postgres:9.5'
    container_name: postgress
    env_file:
      - '.env'
    volumes:
      - 'postgres:/var/lib/postgresql/data'
    ports:
      - '5432:5432'
    networks:
      - db_nw
  redis:
    image: 'redis:3.0-alpine'
    container_name: redis
    command: redis-server --requirepass pass123456word
    volumes:
      - 'redis:/var/lib/redis/data'
    ports:
      - '6379:6379'
  website:
    restart: always
    build: .
    container_name: website
    command: >
      gunicorn -c "python:config.gunicorn" --reload "app.app:create_app()"
    env_file:
      - '.env'
    volumes:
      - '.:/app'
    ports:
      - 8000:8000
    expose:
      - 8000
    networks:
      - db_nw
      - web_nw
    depends_on:
      - postgres
    links:
      - celery
      - redis
      - postgres
  celery:
    build: .
    container_name: celery
    command: celery worker -B -l info -A app.blueprints.contact.tasks
    env_file:
      - '.env'
    volumes:
      - '.:/app'
  nginx:
    restart: always
    build: ./nginx
    image: 'nginx:1.13'
    container_name: nginx
    volumes:
      - /www/static
      - .:/app
    ports:
      - 80:80
    networks:
      - web_nw
    links:
      - website
    depends_on:
      - website
networks:
  db_nw:
    driver: bridge
  web_nw:
    driver: bridge
volumes:
  postgres:
  redis:
My Dockerfile:
FROM python:3.7.5-slim-buster
RUN apt-get update \
&& apt-get install -qq -y \
build-essential libpq-dev --no-install-recommends
ENV INSTALL_PATH /app
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH
COPY requirements.txt requirements.txt
RUN pip install --upgrade pip -r requirements.txt
COPY . .
RUN pip install --editable .
CMD gunicorn -c "python:config.gunicorn" "app.app:create_app()"
Is something wrong with the volumes in my docker-compose.yml file? Or am I doing something odd in my Dockerfile with ENV, so that a path is hard-coded to my local machine rather than the "root" directory on DigitalOcean?
I'm new to hosting Docker images, so this is my first go at something like this. Thanks!
When you access a Droplet, you're generally running as root.
You appear to have copied the docker-compose.yml correctly to the Droplet but you have not copied the .env file on which it depends to the Droplet's /root/.env.
If you copy the .env file to /root/.env on the Droplet, it should work.
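For example (the Droplet address is a placeholder), from the project directory on the local machine:

scp .env root@your-droplet-ip:/root/.env
ssh root@your-droplet-ip 'cd /root && docker-compose up -d'

Keeping .env out of the repository is fine; it just needs to sit next to docker-compose.yml wherever the stack is deployed, because the env_file: '.env' entries are resolved relative to the compose file.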