Knexfile not reading environment variables?

I'm building a node/express service using Docker, with Knex for database interaction. I have an env_file (defined in the docker-compose file) with some environment variables. The app is reading them correctly: a console.log(process.env.DATABASE_USER); logs the correct value.
I followed the knex documentation to set up a knexfile that looks like so:
module.exports = {
  development: {
    client: 'pg',
    connection: {
      host: process.env.DATABASE_HOST,
      port: process.env.DATABASE_PORT,
      user: process.env.DATABASE_USER,
      password: process.env.DATABASE_PASSWORD,
      database: process.env.DATABASE_NAME_DEV,
    },
    migrations: {
      directory: __dirname + '/db/migrations',
    },
    seeds: {
      directory: __dirname + '/db/seeds',
    },
  },
};
If I hardcode the values into the knexfile, all is well. I can connect to the database, run migrations, etc.
When I use my environment variables (like above), they return undefined. Why is that?
UPDATE:
My docker-compose file (api.env is just a basic .env file):
version: '3.3'
services:
  db:
    container_name: db
    build:
      context: ./services/api/src/db
      dockerfile: Dockerfile
    ports:
      - 5435:5432
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  nginx:
    container_name: nginx
    build: ./services/nginx
    restart: always
    ports:
      - 80:80
    depends_on:
      - api
    links:
      - api
  api:
    container_name: api
    build:
      context: ./services/api
      dockerfile: Dockerfile
    volumes:
      - './services/api:/usr/src/app'
      - './services/api/package.json:/usr/src/app/package.json'
    ports:
      - 8887:8888
    env_file: ./api.env
    depends_on:
      - db
    links:
      - db
  client:
    container_name: client
    build:
      context: ./services/client
      dockerfile: Dockerfile
    volumes:
      - './services/client:/usr/src/app'
    ports:
      - 3007:3000
    environment:
      - NODE_ENV=development
    depends_on:
      - api
    links:
      - api
Dockerfile for api service:
FROM node:latest
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
ENV PATH /usr/src/app/node_modules/.bin:$PATH
ADD package.json /usr/src/app/package.json
RUN npm install
CMD ["npm", "start"]

dotenv reads the vars from a .env file in the directory the process is run from, so if your knexfile is not on the same directory level as your .env, you may want to set the path explicitly. Something like this:
import dotenv from 'dotenv'
dotenv.config({ path: '../../.env' })
You can read the docs for more details: https://www.npmjs.com/package/dotenv
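In this particular setup the env file is named api.env rather than .env, so the path would need to be passed explicitly anyway. A minimal sketch of the top of the knexfile (the relative path is an assumption; adjust it to wherever api.env actually lives):
// top of knexfile.js — load api.env before process.env is read below
// (a sketch; the path is an assumption about the project layout)
const path = require('path');
require('dotenv').config({ path: path.resolve(__dirname, 'api.env') });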

You are not setting the environment variables correctly in your setup. Maybe you are referencing the env_file incorrectly, or you have a typo in your environment variable setup.
You need to give more information, e.g. how your docker-compose file looks. Your knexfile.js looks correct.
EDIT:
From your docker-compose file it looks like you are passing the database connection details only to your API container. Nothing shows how you are running knex migrations and how you are passing those variables to that call.
EDIT2:
bash -c "sleep 10 && npm run knex migrate:latest"
That command opens a new shell, which does not have the environment variables with the connection details set. You need to pass the environment variables with the connection details to that shell.
Probably the easiest fix for you would be to write a run-migrations.sh script which exports the variables before running the migrate, and call that script instead. Like:
for i in $(cat api.env); do
  export $i;
done;
sleep 10;
npm run knex migrate:latest;

Related

Sequelize CLI, Docker, ConnectionRefusedError [SequelizeConnectionRefusedError] [duplicate]

I have a CRUD app working on node, on my local machine. It is running on node, with postgres as the database, using knex.js as a query builder, etc.
I have created a Dockerfile and a docker-compose file, and the containers start, but the node container can't reach the postgres container. I suspect it has to do with the environment variables, but I am not sure. Here is my Dockerfile:
FROM node:12
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm ci --only=production
# Bundle app source
COPY . .
ENV PORT=8080
EXPOSE 8080
CMD [ "npm", "start" ]
This is the docker-compose file:
version: '2'
services:
  postgres:
    image: postgres:alpine
    environment:
      POSTGRES_PASSWORD: password
      POSTGRES_USER: app
      POSTGRES_DB: db
  app:
    build: .
    depends_on:
      - "postgres"
    links:
      - "postgres"
    environment:
      DB_PASSWORD: 'password'
      DB_USER: 'app'
      DB_NAME: 'db'
      DB_HOST: 'postgres'
      PORT: 8080
    ports:
      - '8080:8080'
    command: npm start
Also, here is the knex.js file at the root that handles the db connections based on the environment:
// Update with your config settings.
module.exports = {
  development: {
    client: 'pg',
    connection: 'postgres://localhost/db'
  },
  test: {
    client: 'pg',
    connection: 'postgres://localhost/test-db'
  }
};
Additionally, when I check the hosts file of the node app inside Docker, I don't see anything mentioning the link to the postgres container. Any help would be appreciated, thanks.
The reason your node application is not connecting is that it is trying to connect to itself, because you are referencing localhost. Your database is in a second container, which is not local, so you need to reference it by service name, which is postgres.
So assuming your application is handling authentication another way, your config would be something like this:
// Update with your config settings.
module.exports = {
  development: {
    client: 'pg',
    connection: 'postgres://postgres/db'
  },
  test: {
    client: 'pg',
    connection: 'postgres://postgres/test-db'
  }
};
If you can, you should use the environment variables you assigned to the app container.
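For example, a knexfile connection built from those variables might look like this (a minimal sketch; the DB_* names come from the compose file above, and the fallback to the postgres service name is an assumption):
// knexfile.js — connection built from the env vars assigned to the app container
// (a sketch; the host falls back to the compose service name, not localhost)
module.exports = {
  development: {
    client: 'pg',
    connection: {
      host: process.env.DB_HOST || 'postgres',
      user: process.env.DB_USER,
      password: process.env.DB_PASSWORD,
      database: process.env.DB_NAME,
    },
  },
};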
Docker-compose creates an internal network shared by the different containers it launches.
Since app and postgres are two separate containers, they are treated as two separate hosts. Pointing the app at localhost therefore makes it look for postgres inside its own container instead of in the postgres container.
You can solve this by simply replacing localhost with postgres in your knex.js file.

Docker - Redis connect ECONNREFUSED 127.0.0.1:6379

I know this is a common error, but I literally spent the entire day trying to get past it, trying everything I could find online. But I can't find anything that works for me.
I am very new to Docker and I am using it for my NodeJS + Express + Postgresql + Redis application.
Here is what I have for my docker-compose file:
version: "3.8"
services:
db:
image: postgres:14.1-alpine
restart: always
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=admin
ports:
- "5432:5432"
volumes:
- db:/var/lib/postgresql/data
- ./db/init.sql:/docker-entrypoint-initdb.d/create_tables.sql
cache:
image: redis:6.2-alpine
restart: always
ports:
- "6379:6379"
command: redis-server --save 20 1 --loglevel warning
volumes:
- cache:/data
api:
container_name: api
build:
context: .
# target: production
# image: api
depends_on:
- db
- cache
ports:
- 3000:3000
environment:
NODE_ENV: production
DB_HOST: db
DB_PORT: 5432
DB_USER: postgres
DB_PASSWORD: admin
DB_NAME: postgres
REDIS_HOST: cache
REDIS_PORT: 6379
links:
- db
- cache
volumes:
- ./:/src
volumes:
db:
driver: local
cache:
driver: local
Here is the upper part of my app.js:
const express = require('express')
const app = express()
const cors = require('cors')
const redis = require('redis')
const client = redis.createClient({
  host: 'cache',
  port: 6379,
  legacyMode: true // Also tried without this line, same behavior
})
client.connect()
client.on('connect', () => {
  console.log('Redis connected')
})
app.use(cors())
app.use(express.json())
And my Dockerfile:
FROM node:16.15-alpine3.14
WORKDIR ./
COPY package.json ./
RUN npm install
COPY ./ ./
EXPOSE 3000 6379
CMD [ "npm", "run", "serve" ]
npm run serve is nodemon ./app.js.
I also already tried to prune the system and network.
What am I missing? Help!
There are two things to keep in mind here.
First of all, the Docker network:
Containers are exposed to your localhost system, so from the host you can access each of them directly through the browser or the command line. But you can only access them that way because they are attached to a default network that is reachable from the host system, which you can inspect, by the way.
The deployed containers are not exposed to each other by default, so you need to define a virtual network and attach them to it so they can talk to each other through the ports or the hostname, which will be the container_name.
So you need to do two things:
Add a container name to the redis service in the compose file, just like you did on the API.
Create a network and bind all the services to it; one way of doing that would be:
version: "3.8"
Network:
my-network:
name: my-network
services:
....
cache:
container_name: cache
image: redis:6.2-alpine
restart: always
ports:
- "6379:6379"
command: redis-server --save 20 1 --loglevel warning
volumes:
- cache:/data
networks: # add it in all containers that communicate together
- my-network
Then, and only then, can you use the redis container name as the host, since the docker network creates a hostname for the service from the container name.
When you deploy the whole compose file, the containers will be created and joined to the network on startup, which allows your API app to reach the Redis container via the container name as hostname.
Refer to these resources for more details:
Networking on Docker Compose
Docker Network Overview
An unrelated side note:
I personally used redis from npm for some test projects, but I found that ioredis worked much better with TypeScript projects and was more predictable in its behavior.
To avoid further problems with Redis, make sure to set a password and use it to connect; sometimes Redis randomly considers the client a read-only client and fails to find a read replica, and adding the password solved that for me.
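One more thing worth checking in the client code itself: in node-redis v4, top-level host/port options are not used for the connection (they belong under socket, or in a single url string), which by itself can produce exactly this 127.0.0.1:6379 refusal. A minimal sketch, assuming node-redis v4 and the cache service name from the compose file:
const redis = require('redis')

// node-redis v4: connection details go under `socket` (or a `url` string);
// top-level host/port are ignored and the client defaults to 127.0.0.1:6379
const client = redis.createClient({
  socket: {
    host: process.env.REDIS_HOST || 'cache', // compose service name
    port: Number(process.env.REDIS_PORT) || 6379,
  },
})

client.on('error', (err) => console.error('Redis error:', err))
client.connect().then(() => console.log('Redis connected'))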

Nestjs Typeorm + Postgres Docker Compose doesn't seem to work

I have a Nestjs API + TypeORM with an active PostgreSQL connection. I am currently trying to dockerize the whole api.
Dockerfile:
ENV DIR=/home/node/app
RUN mkdir -p ${DIR}
WORKDIR ${DIR}
COPY package*.json ./
RUN npm install
RUN npm install -g @nestjs/cli
RUN npm run build
COPY . .
EXPOSE 3000
CMD ["node", "dist/main.js"]
Typeorm Config:
require('dotenv').config()
@Module({
  imports: [
    TypeOrmModule.forRoot({
      type: "postgres",
      host: process.env.DB_HOST,
      username: process.env.DB_USER,
      password: process.env.DB_PASSWORD,
      port: 5432,
      database: "postgres",
      entities: ["dist/**/*.entity{.ts,.js}"],
      synchronize: false }),
.env file:
DB_HOST=postgres
DB_PORT=5432
DB_USER=postgres
DB_PASSWORD=test
docker-compose.yaml file:
version: "3.9"
services:
middleware:
build: .
container_name: "middleware"
ports:
- "3000:3000"
depends_on:
- postgres
env_file:
- .env
postgres:
image: "postgres:12"
restart: always
container_name: "postgres"
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=${DB_PASSWORD}
- POSTGRES_DB=postgres
ports:
- "5433:5432"
volumes:
- api:/var/lib/postgresql/data
- ./src/migs/001_create.sql:/docker-entrypoint-initdb.d/001_create.sql
volumes:
api:
I also have a .sql file which creates the first tables and fills them with data, but I guess it is not relevant to my question. From what I have seen, the problem seems to be with the .env file, but I don't understand what I am doing wrong. I am accessing the postgres host, which is also the name of the running container, so I don't believe the problem lies there.
I can access the postgres container from outside via
winpty docker exec -it postgres psql -U postgres -h postgres postgres
The tables are created successfully, so it has to be something with the .env file.
The console output from the api is:
Unable to connect to the database. Retrying (8)...
error: no PostgreSQL user name specified in startup packet
Any help is welcomed! Thanks in advance!
PS: I found similar topics on Stack Overflow, but none seemed to fix my problem. If I have by any chance missed an already answered topic, please point me to it.
Putting all environment variables for postgres into the .env file
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
POSTGRES_DB=postgres
and then telling docker-compose to use the .env file for the postgres container solved the problem.
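A cheap way to catch this class of problem early is to fail fast at startup when an expected variable is missing; a minimal sketch (the variable names follow the .env above, and the file itself is hypothetical):
// env-check.js — hypothetical guard, run before bootstrapping the app (a sketch)
const required = ['DB_HOST', 'DB_PORT', 'DB_USER', 'DB_PASSWORD'];
const missing = required.filter((name) => !process.env[name]);
if (missing.length > 0) {
  throw new Error('Missing environment variables: ' + missing.join(', '));
}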
Clear your volume and run docker-compose again (the postgres image only applies the POSTGRES_* settings and init scripts when the data volume is empty).

Docker Prisma Error P1001: Can't reach database server at `postgres`:`5432`

After hours of searching, I must bow down and ask you for some advice on my problem:
My backend (express + prisma + postgresql) is dockerized and functioning, BUT I can't use npx prisma commands from my wsl2 zsh terminal.
Here is my .env
# Database settings
NODE_ENV=dev
DB_USER=user
DB_PASS=password
DATABASE_URL="postgresql://${DB_USER}:${DB_PASS}#postgres/chimere?schema=public"
Dockerfile :
FROM node:17-alpine3.14 as base
WORKDIR /user/src/app
COPY package*.json /user/src/app/
EXPOSE 5000
FROM base as dev
ENV NODE_ENV=development
RUN npm install -g nodemon && npm install
COPY . /user/src/app/
RUN npx prisma generate
CMD ["nodemon", "src/index.js"]
FROM base as production
ENV NODE_ENV=production
RUN npm ci
COPY . /user/src/app/
RUN npx prisma generate
CMD ["node", "src/index.js"]
docker-compose.yml :
version: '3.8'
services:
  postgres:
    image: postgres
    restart: always
    environment:
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASS}
    volumes:
      - postgres:/var/lib/postgresql/data
    ports:
      - '5432:5432'
  web:
    build:
      context: ./
      target: dev
    restart: always
    volumes:
      - .:/usr/src/app
      - uploaded-files:/usr/src/app/public/media/files
      - uploaded-pictures:/usr/src/app/public/media/pictures
    command: npm run start:dev
    ports:
      - "5000:5000"
    environment:
      NODE_ENV: development
      DEBUG: nodejs-docker-express:*
volumes:
  postgres:
  uploaded-files:
  uploaded-pictures:
and Prisma Schema :
generator client {
  provider      = "prisma-client-js"
  binaryTargets = ["native", "linux-musl"]
}

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}
As you can see, I'm pretty new to Docker and almost everything is an adjusted copy-paste from Google (:
How can I get my app to work AND get my commands to work as well?
Thanks!
I was facing the same issue, but with a mysql container. Basically I was having a problem with the DATABASE_URL in my .env file. It looked something like this:
DATABASE_URL="mysql://${DB_USER}:${DB_PASS}#localhost:3306/project_name"
The problem was the localhost. Apparently, when running inside a container, you have to use the container's name instead of localhost. I changed my docker-compose to specify the container's name:
version: '3.1'
services:
  db:
    image: mysql
    container_name: mysql
    ports:
      - 3306:3306
Notice the container_name property. After that, I changed my .env to:
DATABASE_URL="mysql://${DB_USER}:${DB_PASS}#mysql:3306/project_name"
I would suggest you try something similar. Maybe something along these lines:
version: '3.8'
services:
  postgres:
    image: postgres
    container_name: postgres
    restart: always
    environment:
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASS}
    volumes:
      - postgres:/var/lib/postgresql/data
    ports:
      - '5432:5432'
And for your .env, you can leave it the way it is now, unless you chose a different container name; in that case substitute the name you chose inside the curly braces (and remember to remove the curly braces):
DATABASE_URL="postgresql://${DB_USER}:${DB_PASS}#{container_name}/chimere?schema=public"
Check this GitHub issue for more information as well:
https://github.com/prisma/prisma/issues/1385
You need to act from inside the container.
First, create an interactive shell in the container using docker exec:
docker exec -it <name of your container> sh
Note: the -i flag keeps input open to the container, and the -t flag creates a pseudo-terminal that the shell can attach to.
Then, once inside the container, execute the commands you need:
npx prisma migrate dev --name <name of your migration>
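Once inside the container, you can also sanity-check that the database is reachable before migrating; a minimal sketch, assuming the Prisma client has already been generated with npx prisma generate (the file name is hypothetical):
// db-check.js — hypothetical connectivity test, run inside the web container
const { PrismaClient } = require('@prisma/client')

const prisma = new PrismaClient()

// $queryRaw sends a trivial query; failure here suggests the DATABASE_URL host is wrong
prisma.$queryRaw`SELECT 1`
  .then(() => console.log('database reachable'))
  .catch((err) => console.error('connection failed:', err.message))
  .finally(() => prisma.$disconnect())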

Docker NodeJS app cant connect to postgres database

I am fairly new to docker and docker-compose. I tried to spin up a few services using docker, consisting of a nodejs (Nest.js) api, a postgres db and pgadmin. Without the API (nodejs) app being dockerized I could connect to the docker database containers, but now that I have also dockerized the node app, it is not connecting anymore and I am clueless why. Is there anything wrong with the way I have set it up?
Here is my docker-compose file
version: "3"
services:
nftapi:
env_file:
- .env
build:
context: .
ports:
- '5000:5000'
depends_on:
- postgres
volumes:
- .:/app
- /app/node_modules
networks:
- postgres
postgres:
container_name: postgres
image: postgres:latest
ports:
- "5432:5432"
volumes:
- /data/postgres:/data/postgres
env_file:
- docker.env
networks:
- postgres
pgadmin:
links:
- postgres:postgres
container_name: pgadmin
image: dpage/pgadmin4
ports:
- "8080:80"
volumes:
- /data/pgadmin:/root/.pgadmin
env_file:
- docker.env
networks:
- postgres
networks:
postgres:
driver: bridge
This is the nodejs app Dockerfile. It builds successfully, and in the logs I see the app trying to connect to the database, but it can't (no specific error, it just doesn't find the db).
# Image source
FROM node:14-alpine
# Docker working directory
WORKDIR /app
# Copying file into APP directory of docker
COPY ./package.json /app/
RUN apk update && \
    apk add git
# Then install the NPM module
RUN yarn install
# Copy current directory to APP folder
COPY . /app/
EXPOSE 5000
CMD ["npm", "run", "start:dev"]
I have 2 env files in my project's root directory.
.env
docker.env
As mentioned above, when I remove the "nftapi" service from docker-compose and run the nodejs app with a simple npm start, it connects to the postgres container.
TypeOrmModule.forRoot({
  type: 'postgres',
  host: process.env.POSTGRES_HOST,
  port: Number(process.env.POSTGRES_PORT),
  username: process.env.POSTGRES_USER,
  password: process.env.POSTGRES_PASSWORD,
  database: process.env.POSTGRES_DB,
  synchronize: true,
  entities: ['dist/**/*.entity{.ts,.js}'],
}),
The host from the .env file that is used in the TypeORM module is localhost.
When using networks with docker-compose, you should use the name of the service as your hostname.
So in your case the hostname should be postgres, not localhost.
You can read more about it here:
https://docs.docker.com/compose/networking/
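Applied to the TypeORM config above, only the host needs to change; a sketch of the relevant part (the fallback value is illustrative, the real fix is setting POSTGRES_HOST=postgres in the .env):
TypeOrmModule.forRoot({
  type: 'postgres',
  // inside the compose network the hostname is the service name "postgres",
  // not localhost; the fallback here is illustrative only (a sketch)
  host: process.env.POSTGRES_HOST || 'postgres',
  port: Number(process.env.POSTGRES_PORT) || 5432,
  // ...rest of the options unchanged
}),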
