How to fix PSQL connection error with Docker Compose - python-3.x

I'm trying to connect my Python Flask app to a Postgres database in a Docker environment. I am using a docker-compose file to build my web and db services.
However, I am getting the following error:
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
Here is my Dockerfile:
FROM ubuntu:16.04 as base
RUN apt-get update -y && apt-get install -y python3-pip python3-dev postgresql libpq-dev libffi-dev jq
ENV LC_ALL=C.UTF-8 \
    LANG=C.UTF-8
ENV FLASK_APP=manage.py \
    FLASK_ENV=development \
    APP_SETTINGS=config.DevelopmentConfig \
    DATABASE_URL=postgresql://user:pw#postgres/database
COPY . /app
WORKDIR /app
RUN pip3 install -r requirements.txt
FROM base as development
EXPOSE 5000
CMD ["bash"]
Here is my docker-compose file:
version: "3.6"
services:
development_default: &DEVELOPMENT_DEFAULT
build:
context: .
target: development
working_dir: /app
volumes:
- .:/app
environment:
- GOOGLE_CLIENT_ID=none
- GOOGLE_CLIENT_SECRET=none
web:
<<: *DEVELOPMENT_DEFAULT
ports:
- "5000:5000"
depends_on:
- db
command: flask run --host=0.0.0.0
db:
image: postgres:10.6
environment:
- POSTGRES_USER=user
- POSTGRES_PASSWORD=db
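For what it's worth, psql falls back to the local Unix socket when no host is given, which is exactly what the error above shows; inside the web container there is no local Postgres server, so the host has to be the compose service name (db here). A minimal connectivity check, assuming the database name from DATABASE_URL (adjust if yours differs; the password is the POSTGRES_PASSWORD from the compose file):
# run from the host; psql is available in the web image because the Dockerfile installs the postgresql package
docker-compose exec web psql -h db -U user -d database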

Related

Docker: Cannot find module after building

Dockerfile:
FROM node:lts-slim AS base
# Install dependencies
RUN apt-get update \
    && apt-get install --no-install-recommends -y openssl
# Create app directory
WORKDIR /usr/src
FROM base AS builder
# Files required by npm install
COPY package*.json ./
# Files required by prisma
COPY prisma ./prisma
# Install app dependencies
RUN npm ci
# Bundle app source
COPY . .
# Build app
RUN npm install -g prisma --force
RUN prisma generate
RUN npm run build \
    && npm prune --omit=dev
FROM base AS runner
# Copy from build image
COPY --from=builder /usr/src/node_modules ./node_modules
COPY --from=builder /usr/src/dist ./dist
COPY --from=builder /usr/src/package*.json ./
COPY prisma ./prisma
RUN apt-get update \
    && apt-get install --no-install-recommends -y procps openssl
RUN chown -R node /usr/src/node_modules
RUN chown -R node /usr/src/dist
RUN chown -R node /usr/src/package*.json
USER node
# Start the app
EXPOSE 80
CMD ["node", "dist/index.js"]
docker-compose.yml
version: '3'
services:
  mysql:
    image: mysql:latest
    container_name: mysql
    ports:
      - 3306:3306
  bot:
    container_name: bot
    build:
      context: .
    depends_on:
      - mysql
docker-compose.prod.yml
version: '3'
services:
  mysql:
    volumes:
      - ./mysql:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: '123123'
      MYSQL_DATABASE: 'test'
      MYSQL_USER: 'test'
      MYSQL_PASSWORD: '123123'
  bot:
    ports:
      - "3000:80"
    env_file:
      - docker-compose.prod.bot.env
volumes:
  mysql:
For some reason, after running these commands:
docker-compose -f docker-compose.yml -f docker-compose.prod.yml run bot npx prisma migrate deploy
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up
I'm getting an error when the bot container starts up, saying it can't find a node module.
I'm running Docker inside Ubuntu 20.04, and only this part is not working. When I run the build on a normal machine without Docker, the build works fine.
The only problem is with Docker.
error:
bot | node:internal/modules/cjs/loader:936
bot | throw err;
bot | ^
bot |
bot | Error: Cannot find module 'envalid'
bot | Require stack:
bot | - /usr/src/dist/config.js
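One way to narrow this down is to check whether the module actually made it into the image, independent of the runtime error. A small check using the same compose files (the path assumes the WORKDIR /usr/src from the Dockerfile above):
# Does envalid exist in the built image's node_modules?
docker-compose -f docker-compose.yml -f docker-compose.prod.yml run bot ls node_modules/envalid
# What did the runner stage actually copy into /usr/src?
docker-compose -f docker-compose.yml -f docker-compose.prod.yml run bot ls /usr/src
If the first command fails, envalid was pruned or never installed in the image (for example, if it is listed under devDependencies); if it succeeds, the problem is more likely in what the container sees at runtime.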

How to dockerize an ASP.NET Core application and PostgreSQL with Docker Compose

I am integrating the Dockerfile into my previous sample project so that everything is automated for easy code sharing and execution. I have a dockerization problem and have tried to solve it, but to no avail. I hope someone can help. Thank you. Here is my problem:
My repository: https://github.com/ThanhDeveloper/WebApplicationAspNetCoreTemplate
Branch for dockerize (my problem in macOS):
https://github.com/ThanhDeveloper/WebApplicationAspNetCoreTemplate/pull/1
Docker file:
# syntax=docker/dockerfile:1
FROM node:16.11.1
FROM mcr.microsoft.com/dotnet/sdk:5.0
RUN apt-get update && \
    apt-get install -y wget && \
    apt-get install -y gnupg2 && \
    wget -qO- https://deb.nodesource.com/setup_6.x | bash - && \
    apt-get install -y build-essential nodejs
COPY . /app
WORKDIR /app
RUN ["dotnet", "restore"]
RUN ["dotnet", "build"]
RUN dotnet tool restore
EXPOSE 80/tcp
RUN chmod +x ./entrypoint.sh
CMD /bin/bash ./entrypoint.sh
Docker compose:
version: "3.9"
services:
web:
container_name: backendnet5
build: .
ports:
- "5005:5000"
depends_on:
- database
database:
container_name: postgres
image: postgres:latest
ports:
- "5433:5433"
environment:
- POSTGRES_PASSWORD=admin
volumes:
- ./init.sql:/docker-entrypoint-initdb.d/init.sql
Commands:
docker-compose build
docker compose up
Problems:
I guess the problem is that I am not able to run the command dotnet ef database update for my migrations. Many thanks for any help.
In your appsettings.json file, you say that the database hostname is 'localhost'. In a container, localhost means the container itself.
Docker Compose creates a bridge network where you can address each container by its service name.
Your connection string is
User ID=postgres;Password=admin;Host=localhost;Port=5432;Database=sample_db;Pooling=true;
but should be
User ID=postgres;Password=admin;Host=database;Port=5432;Database=sample_db;Pooling=true;
You also map port 5433 on the database to the host, but postgres listens on port 5432. If you want to map it to port 5433 on the host, the mapping in the docker compose file should be 5433:5432. This is not what's causing your issue though. This just prevents you from connecting to the database from the host, if you need to do that.
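Putting both points together, a sketch of the database service with the corrected port mapping (the host-side mapping only matters if you want to reach Postgres from the host):
  database:
    container_name: postgres
    image: postgres:latest
    ports:
      - "5433:5432"   # host port 5433 -> container port 5432, where postgres actually listens
    environment:
      - POSTGRES_PASSWORD=admin
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
and in appsettings.json the Host should be the service name database, as in the corrected connection string above.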

Dockerized Node JS application not accessible from host machine

I have a Node.js application.
docker-compose.yml
version: '3'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    command: 'yarn nuxt'
    ports:
      - 3000:3000
    volumes:
      - '.:/app'
Dockerfile
FROM node:15
RUN apt-get update \
    && apt-get install -y curl
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - \
    && echo "deb https://dl.yarnpkg.com/debian/ stable main" > /etc/apt/sources.list.d/yarn.list \
    && apt-get update \
    && apt-get install -y yarn
WORKDIR /app
After running $ docker-compose up -d the application starts, and inside the container it is accessible:
$ docker-compose exec admin sh -c 'curl -i localhost:3000'
// 200 OK
But outside of the container it doesn't work. For example, in Chrome: ERR_SOCKET_NOT_CONNECTED.
Adding this to the app service in docker-compose.yml solves the problem:
    environment:
      HOST: 0.0.0.0
Thanks to Marc Mintel's article Development setup with Nuxt, Node and Docker.
Did you try to add
published: 3000
You can read more here - https://docs.docker.com/compose/compose-file/compose-file-v3/
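For reference, published belongs to the long-form port syntax (Compose file format 3.2+), so it would replace the short 3000:3000 mapping rather than being added to it; a sketch:
    ports:
      - target: 3000      # port inside the container
        published: 3000   # port exposed on the host
        protocol: tcp
Note that this only changes how the mapping is written; the app still has to listen on 0.0.0.0 (as in the accepted fix above) to be reachable from the host.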

Unable to authenticate to company LDAP using flask-ldap3-login in Docker container

I'm trying to connect to my company's LDAP server to authenticate users in my Flask web app. I'm constantly getting this error:
2020-06-22 09:55:07,459 ERROR flask_ldap3_login MainThread : no active server available in server pool after maximum number of tries
I also tried to telnet to the LDAP server from the web container, and no connection can be made. What do I need to do to allow my containers to reach the LDAP server on our network?
I tried enabling SSL and added the certs, but still no success.
docker-compose file
# docker-compose.yml
version: '3'
services:
  db:
    build: ./application/db
    container_name: dqm_db
    restart: always
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  web:
    build: .
    container_name: dqm_web
    restart: always
    ports:
      - 5000:5000
      - 389:389
      - 636:636
    env_file:
      - .env
    depends_on:
      - db
    links:
      - redis
    volumes:
      - .:/data-quality-management
  nginx:
    build: ./nginx
    container_name: dqm_nginx
    restart: always
    ports:
      - 80:80
    depends_on:
      - web
  redis:
    container_name: dqm_redis
    env_file:
      - .env
    image: redis:latest
    restart: always
    command: redis-server
    ports:
      - 6379:6379
    volumes:
      - .:/data-quality-management
  worker:
    build: .
    hostname: worker
    container_name: dqm_worker
    entrypoint: celery
    command: -A application.run_celery:celery worker --loglevel=info
    links:
      - redis
      - web
    depends_on:
      - web
      - redis
    env_file:
      - .env
    volumes:
      - .:/data-quality-management
volumes:
  postgres_data:
Dockerfile:
FROM python:3.7-buster
RUN apt-get update
RUN apt-get install python-dev -y
RUN apt-get install libsasl2-dev -y
RUN apt-get install libldap2-dev -y
RUN apt-get install libssl-dev -y
RUN apt-get clean -y
WORKDIR /data-quality-management
ENV PYTHONUNBUFFERED 1
COPY requirements.txt .
EXPOSE 5000
EXPOSE 389
EXPOSE 636
COPY *.crt /etc/ssl/certs/
RUN update-ca-certificates
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
COPY . /data-quality-management
CMD gunicorn -w $WEB_CONCURRENCY -b $WEB_BIND wsgi:app
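To separate Docker networking from the flask-ldap3-login configuration, it can help to try a plain bind from inside the web container using the underlying ldap3 library. A minimal sketch; the hostname and port are placeholders for your company's LDAP server:
# ldap_check.py - run with: docker-compose exec web python ldap_check.py
from ldap3 import Server, Connection, ALL

server = Server("ldap.example.com", port=389, get_info=ALL)  # placeholder host/port
conn = Connection(server)  # anonymous bind; add user/password if your server requires it
print("bind ok:", conn.bind())
print(conn.result)
If this fails the same way, the problem is network reachability from the container (DNS, firewall, or a corporate proxy) rather than the flask-ldap3-login settings. Also note that publishing ports 389/636 on the web service is not needed for outbound connections to LDAP.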

docker-compose with container mongo ECONNREFUSED

I'm new to Docker, so I tried to connect multiple containers:
- mongo
- my app
- redis
and I get this error in Chrome => code: "ECONNREFUSED", errno: "ECONNREFUSED", syscall: "connect", address: "127.0.0.1", port: 8080
Here is my docker-compose file:
version: "2"
services:
mongo:
image: "mongo"
restart: always
ports:
- "27017:27017"
networks:
- all
redis:
image: "redis:3.2.1"
networks:
- all
node:
image: "project"
links:
- mongo
ports:
- "8080:8080"
networks:
- all
backoffice:
image: "back"
links:
- node
- mongo
- redis
depends_on:
- mongo
- node
- redis
ports:
- "8181:8181"
networks:
- all
networks:
all:
driver: bridge
My different Dockerfiles:
For mongo:
FROM mongo:2.6
COPY ./data ./
EXPOSE 27017
CMD ["mongod"]
For the node service:
FROM node:4.4.7
WORKDIR /app
COPY /api ./
RUN npm install
RUN apt-get -q update && apt-get install -y -qq \
    git \
    curl
EXPOSE 8080
CMD ["node","index.js"]
For the back service:
FROM node:4.4.7
WORKDIR /api
COPY . ./
RUN npm install && npm install bower -g && npm install gulp -g
RUN bower install --allow-root && gulp build
RUN apt-get -q update && apt-get install -y -qq \
    git \
    curl
EXPOSE 8181
CMD ["node","index.js"]
Can you please help me figure this out?
Probably your port 8080 is already in use. Open your cmd and type netstat -a to check which ports are in use.
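On Linux (where the containers run here), a similar check can be done with ss or by listing the ports each container publishes; a small sketch:
# Is anything already listening on 8080 on the host?
ss -ltn | grep 8080        # or: netstat -a | grep 8080
# Which containers publish which ports?
docker ps --format '{{.Names}}\t{{.Ports}}'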
I solved my issue: I was using version 2 of docker-compose, but links are available only from version 3.
Just upgrade and it works fine.
