I've been working on a small project with Node.js/Express and Redis, with a basic frontend without any frameworks (the full code can be seen here). I've been testing my API with Postman (because my frontend is currently not working), and it worked like a charm.
So, after adding Redis to the project, I tried adding it to my docker-compose.yml file. Here it is, along with some other files:
docker-compose.yml
version: '3.8'
services:
  redis:
    container_name: redis-db
    image: redis
    networks:
      - webnet
  backend:
    container_name: api
    build: ./backend
    restart: always
    ports:
      - '8080:8080'
    networks:
      - webnet
    depends_on:
      - redis
  frontend:
    container_name: frontend
    build: ./frontend
    restart: always
    ports:
      - '5000:5000'
    networks:
      - webnet
    depends_on:
      - backend
networks:
  webnet:
config.yml
server:
  host: localhost
  port: 8080
app:
  # '-1' for turning expiry off; otherwise, should be at least '1' (minute)
  timeToLive: -1
db:
  type: redis # ['in-memory', 'redis']
  redis:
    # select 'redis' if using docker-compose
    host: redis # ['localhost', 'redis']
    port: 6379
redis.js
import Redis from 'redis';
import config from 'config';
import AppError from '../utils/appError.js';
import { StatusCodes } from 'http-status-codes';
import { promisify } from 'util';

export const redisClient = Redis.createClient({
  host: config.get('db.redis.host'),
  port: config.get('db.redis.port'),
});

redisClient.on('error', function (err) {
  throw new AppError(err, StatusCodes.INTERNAL_SERVER_ERROR);
});

redisClient.on('connect', () => {
  console.log('connected redis successfully');
});
But, unfortunately, after docker-compose up I'm no longer able to make Postman requests. Redis connected without any problems, everything in the console seems fine, and I can even access my frontend pages, but I cannot make requests to my backend anymore.
My frontend is running on localhost:5000, the backend is supposed to be running on localhost:8080, and Redis seems to be running on redis:6379.
I've tried my best searching for the answer, but there are hundreds of questions on how to connect Redis to an Express project with docker-compose, and none about trouble accessing the backend after connecting Redis.
EDIT_1: here is what Postman says:
POST http://127.0.0.1:8080/api/v1/queue
Error: socket hang up
Request Headers
Content-Type: application/json
User-Agent: PostmanRuntime/7.28.3
Accept: */*
Postman-Token: a61d8b9c-a2f5-4fe4-92e1-921a5680bb85
Host: 127.0.0.1:8080
Accept-Encoding: gzip, deflate, br
Here is docker ps output:
f7f6fbe12863 project1_frontend "docker-entrypoint.s…" 3 minutes ago Up 2 minutes 0.0.0.0:5000->5000/tcp, :::5000->5000/tcp frontend
9802680a1a3d project1_backend "docker-entrypoint.s…" 3 minutes ago Up 3 minutes 0.0.0.0:8080->8080/tcp, :::8080->8080/tcp api
66fde12be888 redis "docker-entrypoint.s…" 3 minutes ago Up 3 minutes 0.0.0.0:6379->6379/tcp, :::6379->6379/tcp redis-db
And here are my console logs after docker-compose up:
api |
api | > queue-resolution-api@0.1.0 start /app
api | > nodemon server.js
api |
api | [nodemon] 2.0.12
api | [nodemon] to restart at any time, enter `rs`
api | [nodemon] watching path(s): *.*
api | [nodemon] watching extensions: js
api | [nodemon] starting `node server.js`
api | NODE_ENV: development
api | Listening on port 8080...
api | connected redis successfully
redis-db | 1:C 16 Aug 2021 14:35:18.728 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis-db | 1:C 16 Aug 2021 14:35:18.728 # Redis version=6.2.5, bits=64, commit=00000000, modified=0, pid=1, just started
redis-db | 1:C 16 Aug 2021 14:35:18.728 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis-db | 1:M 16 Aug 2021 14:35:18.728 * monotonic clock: POSIX clock_gettime
redis-db | 1:M 16 Aug 2021 14:35:18.729 * Running mode=standalone, port=6379.
redis-db | 1:M 16 Aug 2021 14:35:18.729 # Server initialized
redis-db | 1:M 16 Aug 2021 14:35:18.729 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
redis-db | 1:M 16 Aug 2021 14:35:18.729 * Ready to accept connections
redis-db | 1:M 16 Aug 2021 14:35:20.587 * DB saved on disk
frontend |
frontend | > frontend@1.0.0 start /app
frontend | > nodemon server.js
frontend |
frontend | [nodemon] 2.0.12
frontend | [nodemon] to restart at any time, enter `rs`
frontend | [nodemon] watching path(s): *.*
frontend | [nodemon] watching extensions: js,mjs,json
frontend | [nodemon] starting `node server.js`
frontend | Listening on port 5000...
I've figured out the answer. In my server.js file I had been forcing the app to listen on host = localhost, and after deleting the host argument everything started working. Inside a container, binding to localhost means the server only accepts connections from the container's own loopback interface, so the port published by docker-compose never reaches it.
app.listen(port, host, () => {
  console.log('NODE_ENV: ' + config.util.getEnv('NODE_ENV'));
  console.log(`Listening on port ${port}...`);
});
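For illustration, here is a minimal sketch of the corrected call (my own example, reusing the port and config variables from the snippet above); with no host argument, Node binds to all interfaces, which is what you want inside a container:

// Omit the host argument so the server binds to all interfaces (0.0.0.0)
// and is reachable through the port published by docker-compose.
app.listen(port, () => {
  console.log('NODE_ENV: ' + config.util.getEnv('NODE_ENV'));
  console.log(`Listening on port ${port}...`);
});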
I am trying to build a Docker image from my project and run it in a container.
The project is a Keystone 6 project connecting to a Postgres database. Everything worked well when I ran the project normally and it connected successfully to the database.
Here is my dockerfile:
FROM node:18.13.0-alpine3.16
ENV NODE_VERSION 18.13.0
ENV NODE_ENV=development
LABEL Name="di-wrapp" Version="1"
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . .
RUN npm install
COPY .env .
EXPOSE 9999
CMD ["npm", "run", "dev"]
I am building an image using the command docker build -t di-wrapp:1.0 .
After that, I run a docker-compose file which contains the following:
version: "3.8"
services:
postgres:
image: postgres:15-alpine
container_name: localhost
restart: always
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
- POSTGRES_DB=di_wrapp
ports:
- "5432:5432"
volumes:
- postgres-data:/var/lib/postgresql/data
dashboard:
image: di-wrapp:1.0
container_name: di-wrapp-container
restart: always
environment:
- DB_CONNECTION=postgres
- DB_PORT=5432
- DB_HOST=localhost
- DB_USER=postgres
- DB_PASSWORD=postgres
- DB_NAME=di_wrapp
tty: true
depends_on:
- postgres
ports:
- 8273:9999
links:
- postgres
command: "npm run dev"
volumes:
- /usr/src/app
volumes:
postgres-data:
And this is the connection URI used to connect my project to postgres:
DATABASE_URL=postgresql://postgres:postgres@localhost:5432/di_wrapp
which I use to configure my db setting in the Keystone config file like this:
export const db: DatabaseConfig<BaseKeystoneTypeInfo> = {
  provider: "postgresql",
  url: String(DATABASE_URL!),
};
When I run the command docker-compose -f docker-compose.yaml up, this is what I receive:
localhost | 2023-02-03 13:43:35.034 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
localhost | 2023-02-03 13:43:35.034 UTC [1] LOG: listening on IPv6 address "::", port 5432
localhost | 2023-02-03 13:43:35.067 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
localhost | 2023-02-03 13:43:35.121 UTC [24] LOG: database system was shut down at 2023-02-03 13:43:08 UTC
localhost | 2023-02-03 13:43:35.155 UTC [1] LOG: database system is ready to accept connections
di-wrapp-container | > keystone-app#1.0.2 dev
di-wrapp-container | > keystone dev
di-wrapp-container |
di-wrapp-container | ✨ Starting Keystone
di-wrapp-container | ⭐️ Server listening on :8273 (http://localhost:8273/)
di-wrapp-container | ⭐️ GraphQL API available at /api/graphql
di-wrapp-container | ✨ Generating GraphQL and Prisma schemas
di-wrapp-container | Error: P1001: Can't reach database server at `localhost`:`5432`
di-wrapp-container |
di-wrapp-container | Please make sure your database server is running at `localhost`:`5432`.
di-wrapp-container | at Object.createDatabase (/usr/src/app/node_modules/@prisma/internals/dist/migrateEngineCommands.js:115:15)
di-wrapp-container | at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
di-wrapp-container | at async ensureDatabaseExists (/usr/src/app/node_modules/@keystone-6/core/dist/migrations-e3b5740b.cjs.dev.js:262:19)
di-wrapp-container | at async Object.pushPrismaSchemaToDatabase (/usr/src/app/node_modules/@keystone-6/core/dist/migrations-e3b5740b.cjs.dev.js:68:3)
di-wrapp-container | at async Promise.all (index 1)
di-wrapp-container | at async setupInitialKeystone (/usr/src/app/node_modules/@keystone-6/core/scripts/cli/dist/keystone-6-core-scripts-cli.cjs.dev.js:984:3)
di-wrapp-container | at async initKeystone (/usr/src/app/node_modules/@keystone-6/core/scripts/cli/dist/keystone-6-core-scripts-cli.cjs.dev.js:762:35)
di-wrapp-container exited with code 1
Even though the logs show that the database server is up on port 5432, my app container can't connect to it.
Any help is appreciated.
I am new to writing my own docker-compose.yml files because I previously had coworkers around to write them for me. Not the case now.
I am trying to connect an Express server with TypeORM to a Postgres database inside of Docker containers.
I see there are two separate containers running: One for the express server and another for postgres.
I expect the port 5432 to be sufficient to direct container A to connect to container B. I am wrong. It's not.
I read somewhere that 0.0.0.0 refers to the host machine's actual network, not the internal network of the containers. Hence I am trying to change 0.0.0.0 in the Postgres output to something else.
Here's what I have so far:
docker-compose.yml
version: "3.1"
services:
app:
container_name: express_be
build:
context: .
dockerfile: Dockerfile.dev
volumes:
- ./src:/app/src
depends_on:
- postgres_db
ports:
- "8000:8000"
networks:
- efinternal
postgres_db:
container_name: postgres_be
image: postgres:15.1
restart: always
environment:
- POSTGRES_USERNAME=postgres
- POSTGRES_PASSWORD=postgres
ports:
- "5432:5432"
volumes:
- postgresDB:/var/lib/postgresql/data
networks:
- efinternal
volumes:
postgresDB:
driver: local
networks:
efinternal:
driver: bridge
Here's my console log output
postgres_be | PostgreSQL Database directory appears to contain a database; Skipping initialization
postgres_be |
postgres_be | 2022-12-13 04:09:51.628 UTC [1] LOG: starting PostgreSQL 15.1 (Debian 15.1-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
postgres_be | 2022-12-13 04:09:51.628 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 // look here
postgres_be | 2022-12-13 04:09:51.628 UTC [1] LOG: listening on IPv6 address "::", port 5432
postgres_be | 2022-12-13 04:09:51.636 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres_be | 2022-12-13 04:09:51.644 UTC [28] LOG: database system was shut down at 2022-12-13 04:09:48 UTC
postgres_be | 2022-12-13 04:09:51.650 UTC [1] LOG: database system is ready to accept connections
express_be |
express_be | > efinternal#1.0.1 dev
express_be | > nodemon ./src/server.ts
express_be |
express_be | [nodemon] 2.0.20
express_be | [nodemon] starting `ts-node ./src/server.ts`
express_be | /task ... is running
express_be | /committee ... is running
express_be | App has started on port 8000
express_be | Database connection failed Failed to create connection with database
express_be | Error: getaddrinfo ENOTFOUND efinternal // look here
express_be | at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:107:26) {
express_be | errno: -3008,
express_be | code: 'ENOTFOUND',
express_be | syscall: 'getaddrinfo',
express_be | hostname: 'efinternal'
express_be | }
Not sure what to say about it. What I'm trying to do is change the line "listening on IPv4 address "0.0.0.0", port 5432" to say "listening on ... efinternal, port 5432" so that "Error: getaddrinfo ENOTFOUND efinternal" has somewhere to point to.
This link seemed to have the answer but I couldn't decipher it.
Same story here, I can't decipher what I'm doing wrong.
A helpful person told me about "external_links" which sounds like the wrong tool for the job: "Link to containers started outside this docker-compose.yml or even outside of Compose"
I suspect this is The XY Problem where I'm trying to Y (create an "efinternal" network in my containers) so I can X (connect express to postgres) when in fact Y is the wrong solution to X.
In case it matters, here's my TypeORM Postgres connection settings:
export const AppDataSource = new DataSource({
  type: "postgres",
  host: "efinternal",
  port: 5432,
  username: "postgres",
  password: "postgres",
  database: "postgres",
  synchronize: true,
  logging: true,
  entities: [User, Committee, Task, OnboardingStep, RefreshToken, PasswordToken],
  subscribers: [],
  migrations: [],
});
The special IPv4 address 0.0.0.0 means "all interfaces", in whatever context it's used. In a Docker container you almost always want to listen on 0.0.0.0, which listens on all interfaces of the isolated Docker container network environment (not the host network environment). So you don't need to change the PostgreSQL configuration here.
If you read through Networking in Compose in the Docker documentation, you'll find that each container is addressable by its Compose service name. The network name itself is not usable as a host name.
I'd strongly suggest making your database location configurable. Even in this setup, it will have a different hostname in your non-Docker development environment (localhost) vs. running in a container (postgres_db).
export const AppDataSource = new DataSource({
  host: process.env.PGHOST || 'localhost',
  ...
});
Then in your Compose setup, you can specify that environment-variable setting.
version: '3.8'
services:
  app:
    build: . # and name the Dockerfile just "Dockerfile"
    depends_on: [postgres_db]
    ports: ['8000:8000']
    environment: # add
      PGHOST: postgres_db
  postgres_db: {...}
Compose also provides a network named default, and for simplicity you can delete all of the networks: blocks in the entire Compose file. You also do not need to manually specify container_name:, and I'd avoid overwriting the image's code with volumes:.
This setup also supports using Docker for your database but plain Node for ordinary development. (Also see near-daily SO questions about live reloading not working in Docker.) You can start only the database, and since the code defaults the database location to a developer-friendly localhost, you can just develop as normal.
docker-compose up -d postgres_db
yarn dev
I have a docker-compose.yml that sets up two services: server and db. The Node.js server (the server service) uses pg to connect to the PostgreSQL database, and the db service is a PostgreSQL image.
On startup, the server tries to connect to the database but gets a timeout.
docker-compose.yml
version: '3.8'
services:
  server:
    image: myapi
    build: .
    container_name: server
    env_file: .env
    environment:
      - PORT=80
      - DATABASE_URL=postgres://postgres:postgres@db:15432/mydb
      - REDIS_URL=redis://redis
    ports:
      - 3000:80
    depends_on:
      - db
    command: node script.js
    restart: unless-stopped
  db:
    image: postgres
    container_name: db
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: mydb
    ports:
      - 15432:15432
    volumes:
      - db-data:/var/lib/postgresql/data
    command: -p 15432
    restart: unless-stopped
volumes:
  db-data:
Edit: code above changed to remove links and expose.
db service output:
db |
db | PostgreSQL Database directory appears to contain a database; Skipping initialization
db |
db | 2020-11-05 20:18:15.865 UTC [1] LOG: starting PostgreSQL 13.0 (Debian 13.0-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
db | 2020-11-05 20:18:15.865 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 15432
db | 2020-11-05 20:18:15.865 UTC [1] LOG: listening on IPv6 address "::", port 15432
db | 2020-11-05 20:18:15.873 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.15432"
db | 2020-11-05 20:18:15.880 UTC [25] LOG: database system was shut down at 2020-11-05 20:18:12 UTC
db | 2020-11-05 20:18:15.884 UTC [1] LOG: database system is ready to accept connections
script.js - used by the command from the server service.
const pg = require('pg');

console.log(process.env.DATABASE_URL);

const pool = new pg.Pool({
  connectionString: process.env.DATABASE_URL,
  connectionTimeoutMillis: 5000,
});

pool.connect((err, _, done) => {
  if (err) {
    console.error(err);
    done(err);
  }
  done();
});

pool.query('SELECT NOW()', (err, res) => {
  console.log(err, res);
  pool.end();
});

const client = new pg.Client({
  connectionString: process.env.DATABASE_URL,
  connectionTimeoutMillis: 5000,
});

client.connect(console.error);

client.query('SELECT NOW()', (err, res) => {
  console.log(err, res);
  client.end();
});
server service output:
NOTE: The first line is the output of the first console.log call from script.js.
NOTE: Since the server service is set up with restart: unless-stopped, it will repeat this output forever.
server | postgres://postgres:postgres@db:15432/mydb
server | Error: Connection terminated due to connection timeout
server | at Connection.<anonymous> (/home/node/app/node_modules/pg/lib/client.js:255:9)
server | at Object.onceWrapper (events.js:421:28)
server | at Connection.emit (events.js:315:20)
server | at Socket.<anonymous> (/home/node/app/node_modules/pg/lib/connection.js:78:10)
server | at Socket.emit (events.js:315:20)
server | at emitCloseNT (net.js:1659:8)
server | at processTicksAndRejections (internal/process/task_queues.js:79:21)
server | at runNextTicks (internal/process/task_queues.js:62:3)
server | at listOnTimeout (internal/timers.js:523:9)
server | at processTimers (internal/timers.js:497:7)
server | Error: Connection terminated due to connection timeout
server | at Connection.<anonymous> (/home/node/app/node_modules/pg/lib/client.js:255:9)
server | at Object.onceWrapper (events.js:421:28)
server | at Connection.emit (events.js:315:20)
server | at Socket.<anonymous> (/home/node/app/node_modules/pg/lib/connection.js:78:10)
server | at Socket.emit (events.js:315:20)
server | at emitCloseNT (net.js:1659:8)
server | at processTicksAndRejections (internal/process/task_queues.js:79:21)
server | at runNextTicks (internal/process/task_queues.js:62:3)
server | at listOnTimeout (internal/timers.js:523:9)
server | at processTimers (internal/timers.js:497:7) undefined
server | Error: timeout expired
server | at Timeout._onTimeout (/home/node/app/node_modules/pg/lib/client.js:95:26)
server | at listOnTimeout (internal/timers.js:554:17)
server | at processTimers (internal/timers.js:497:7)
server | Error: Connection terminated unexpectedly
server | at Connection.<anonymous> (/home/node/app/node_modules/pg/lib/client.js:255:9)
server | at Object.onceWrapper (events.js:421:28)
server | at Connection.emit (events.js:315:20)
server | at Socket.<anonymous> (/home/node/app/node_modules/pg/lib/connection.js:78:10)
server | at Socket.emit (events.js:315:20)
server | at emitCloseNT (net.js:1659:8)
server | at processTicksAndRejections (internal/process/task_queues.js:79:21) undefined
server | postgres://postgres:postgres@db:15432/mydb
...
From the host computer, I can reach the PostgreSQL database at the db service, connecting successfully, using the same script from the server service.
The output of the script running from the host computer:
$ node script.js
postgres://postgres:postgres@localhost:5432/mydb
null Client { ... }
undefined Result { ... }
null Result { ... }
This output means the connection succeeded.
In summary:
I can't reach the db container from the server container, getting timeouts on the connection, but I can reach the db container from the host computer, connecting successfully.
Considerations
First, thanks for the answer so far. Addressing some points raised:
Missing network:
It isn't required, because docker-compose provides a default network. I tested with a custom network, but it didn't work either.
Order of initialization:
I'm using depends_on to ensure the db container is started first, but I know it doesn't guarantee that the database is actually initialized before the server. That isn't the problem: the server exits when a timeout happens and, because it is set up with restart: unless-stopped, it keeps being restarted. So even if the database were still initializing on the first or second attempt to start the server, the server would keep retrying until the connection succeeded (which never happened).
UPDATE:
From the server container, I could reach the database at the db service using psql. I still can't connect from the Node.js app there.
The DATABASE_URL isn't the problem, because the URI I used in the psql command is the same one used by script.js and printed by its first console.log call.
Command-line used:
docker exec -it server psql postgres://postgres:postgres@db:15432/mydb
Edit: Improved the code by removing the dependency on Sequelize. Now it uses only pg and calls the script directly.
Thanks for providing the source to reproduce the issue.
There are no issues in the docker-compose file, as you have already ruled out.
The problem lies between your Dockerfile and the version of node-pg that you are using.
You are using node:14-alpine and pg 7.18.2.
It turns out there is a bug with Node 14 and earlier versions of node-pg.
The solution is either to downgrade to Node v12 or to use the latest version of node-pg, which is currently 8.4.2 (the fix went in with v8.0.3).
I have verified both of these solutions on the branch you provided, and they work.
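As a quick, hypothetical sanity check (not part of the original answer), you can print the versions actually installed inside the container to confirm which side of the fix you are on; this assumes pg does not restrict subpath requires with an exports map, which holds for the versions discussed here:

// check-versions.js: log the installed node-pg version and the Node version.
// pg below 8.0.3 combined with Node 14 hits the bug described above.
console.log('pg version:', require('pg/package.json').version);
console.log('node version:', process.version);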
This isn't a complete answer; I don't have your code handy, so I can't actually test the compose file. However, there are a few issues there I'd like to point out:
The links directive is deprecated.
links is a legacy option that was used before Docker introduced user-defined networks and automatic DNS support. You can just get rid of it; containers in a compose file can refer to each other by name without it.
The expose directive does nothing. It can be informative in, for example, a Dockerfile as a way of saying "this image will expose a service on this port", but it doesn't actually make anything happen. It's almost entirely useless in a compose file.
The depends_on directive is also less useful than you might think. It will indeed cause docker-compose to bring up the database container first, but the container is considered "up" as soon as its first process has started. It doesn't cause docker-compose to wait for the database to be ready to service requests, which means you'll still run into errors if your application tries to connect before the database is ready.
The best solution to this is to build database re-connection logic into your application, so that if the database ever goes down (e.g. you restart the postgres container to activate a new configuration or upgrade the postgres version), the app will retry connections until it succeeds.
An acceptable solution is to include code in your application startup that blocks until the database is responding to requests.
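As a rough illustration of that retry idea (a sketch under my own assumptions, not code from the original project; the attempt count and delay are arbitrary), something like this could run at startup before the app begins serving requests:

const pg = require('pg');

// Keep trying to open a connection until the database is ready.
async function connectWithRetry(connectionString, maxAttempts = 10, delayMs = 2000) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const client = new pg.Client({ connectionString });
    try {
      await client.connect();
      console.log(`Connected to the database on attempt ${attempt}`);
      return client;
    } catch (err) {
      console.error(`Attempt ${attempt} failed: ${err.message}`);
      await client.end().catch(() => {});
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw new Error('Could not connect to the database after several attempts');
}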
The problem has nothing to do with Docker. To test that, perform the following actions:
Using this docker-compose.yml file:
version: '3.8'
services:
  app:
    image: ubuntu
    container_name: app
    command: sleep 8h
  db:
    image: postgres
    container_name: db
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: mydb
    expose:
      - '15432'
    ports:
      - 15432:15432
    volumes:
      - db-data:/var/lib/postgresql/data
    command: -p 15432
    restart: unless-stopped
volumes:
  db-data:
Perform docker exec -it app bash to get into the app container, then install postgresql-client with apt install -y postgresql-client.
The command psql -h db -p 15432 -U postgres -W succeeded!
Check the pg configuration
You say that pg uses the environment variable DATABASE_URL to reach PostgreSQL. I'm not sure about that:
From https://node-postgres.com/features/connecting, we can find this example:
$ PGUSER=dbuser \
PGHOST=database.server.com \
PGPASSWORD=secretpassword \
PGDATABASE=mydb \
PGPORT=3211 \
node script.js
And this sentence:
node-postgres uses the same environment variables as libpq to connect to a PostgreSQL server.
The libpq documentation does not mention DATABASE_URL.
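For reference, a minimal sketch (my own illustration, not from the original answer) of the two configuration styles node-postgres supports, relying on the libpq-style PG* environment variables or passing a connection string explicitly:

const { Pool } = require('pg');

// Style 1: no options; pg reads PGHOST, PGPORT, PGUSER, PGPASSWORD
// and PGDATABASE from the environment (libpq convention).
const poolFromEnv = new Pool();

// Style 2: an explicit connection string; DATABASE_URL is only used
// because the application passes it in, pg does not read it by itself.
const poolFromUrl = new Pool({ connectionString: process.env.DATABASE_URL });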
To adapt the example provided in the pg documentation to your docker-compose.yml file, try the following file (I only changed the environment variables of the server service):
version: '3.8'
services:
  server:
    image: myapi
    build: .
    container_name: server
    env_file: .env
    environment:
      - PORT=80
      - PGUSER=postgres
      - PGPASSWORD=postgres
      - PGHOST=db
      - PGDATABASE=mydb
      - PGPORT=15432
      - REDIS_URL=redis://redis
    ports:
      - 3000:80
    depends_on:
      - db
    command: node script.js
    restart: unless-stopped
  db:
    image: postgres
    container_name: db
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: mydb
    ports:
      - 15432:15432
    volumes:
      - db-data:/var/lib/postgresql/data
    command: -p 15432
    restart: unless-stopped
volumes:
  db-data:
Today I've been trying to set up Docker for my GraphQL API that runs with Knex.js, PostgreSQL and Node.js. The problem I'm facing is that Knex.js times out when trying to access data in my database. I believe it's due to my incorrect way of trying to link them together.
I've really tried to do this on my own, but I can't figure it out. I'd like to walk through each file that plays a part in making this work, so (hopefully) someone can spot my mistake(s) and help me.
knexfile.js
In my knexfile I have a connection key which used to look like this:
connection: 'postgres://localhost/devblog'
This worked just fine, but it won't work if I want to use Docker. So I modified it a bit and ended up with this:
connection: {
  host: 'db' || 'localhost',
  port: process.env.DB_PORT || 5432,
  user: process.env.DB_USER || 'postgres',
  password: process.env.DB_PASSWORD || undefined,
  database: process.env.DATABASE // DATABASE = devblog
}
I've noticed that something is wrong with host, since it always times out when I have anything other than localhost there (in this case, db).
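For comparison, a connection block driven by environment variables, falling back to localhost outside Docker, might look like the sketch below (my own example; DB_HOST, DB_USER, DB_PASSWORD and DB_NAME are the variables set in the compose file further down, whereas the snippet above reads DATABASE):

// knexfile.js sketch: environment-driven connection settings.
module.exports = {
  client: 'pg',
  connection: {
    // Use the compose service name (via DB_HOST) inside Docker,
    // and fall back to localhost for development outside Docker.
    host: process.env.DB_HOST || 'localhost',
    port: process.env.DB_PORT || 5432,
    user: process.env.DB_USER || 'postgres',
    password: process.env.DB_PASSWORD || undefined,
    database: process.env.DB_NAME || 'devblog',
  },
};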
Dockerfile
My Dockerfile looks like this:
FROM node:9
WORKDIR /app
COPY package-lock.json /app
COPY package.json /app
RUN npm install
COPY dist /app
COPY wait-for-it.sh /app
CMD node index.js
EXPOSE 3010
docker-compose.yml
And this file looks like this:
version: "3"
services:
redis:
image: redis
networks:
- webnet
db:
image: postgres
networks:
- webnet
environment:
POSTGRES_PASSWORD: password
POSTGRES_USER: martinnord
POSTGRES_DB: devblog
web:
image: node:8-alpine
command: "node /app/dist/index.js"
volumes:
- ".:/app"
working_dir: "/app"
depends_on:
- "db"
ports:
- "3010:3010"
environment:
DB_PASSWORD: password
DB_USER: martinnord
DB_NAME: devblog
DB_HOST: db
REDIS_HOST: redis
networks:
webnet:
When I try to run this with docker-compose up I get the following output:
Starting backend_db_1 ... done
Starting backend_web_1 ... done
Attaching to backend_db_1, backend_redis_1, backend_web_1
redis_1 | 1:C 12 Feb 16:05:21.303 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_1 | 1:C 12 Feb 16:05:21.303 # Redis version=4.0.8, bits=64, commit=00000000, modified=0, pid=1, just started
db_1 | 2018-02-12 16:05:21.337 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
redis_1 | 1:C 12 Feb 16:05:21.303 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_1 | 1:M 12 Feb 16:05:21.311 * Running mode=standalone, port=6379.
db_1 | 2018-02-12 16:05:21.338 UTC [1] LOG: listening on IPv6 address "::", port 5432
redis_1 | 1:M 12 Feb 16:05:21.311 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
redis_1 | 1:M 12 Feb 16:05:21.314 # Server initialized
redis_1 | 1:M 12 Feb 16:05:21.315 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
db_1 | 2018-02-12 16:05:21.348 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2018-02-12 16:05:21.367 UTC [20] LOG: database system was shut down at 2018-02-12 16:01:17 UTC
redis_1 | 1:M 12 Feb 16:05:21.317 * DB loaded from disk: 0.002 seconds
redis_1 | 1:M 12 Feb 16:05:21.317 * Ready to accept connections
db_1 | 2018-02-12 16:05:21.374 UTC [1] LOG: database system is ready to accept connections
web_1 | DB_HOST db
web_1 |
web_1 | App listening on 3010
web_1 | Env: undefined
But when I try to make a query with GraphQL I get:
"message": "Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?"
I really don't know why this is not working for me and it's driving me nuts. If anyone could help me with this I would be delighted. I have also added a link to my project below.
Thanks a lot for reading! Cheers.
LINK TO PROJECT: https://github.com/Martinnord/DevBlog/tree/master/backend
Updated docker-compose file:
version: "3"
services:
redis:
image: redis
networks:
- webnet
db:
image: postgres
networks:
- webnet
environment:
POSTGRES_PASSWORD: postgres
POSTGRES_USER: martinnord
POSTGRES_DB: devblog
ports:
- "15432:5432"
web:
image: devblog-server
ports:
- "3010:3010"
networks:
- webnet
environment:
DB_PASSWORD: password
DB_USER: martinnord
DB_NAME: devblog
DB_HOST: db
REDIS_HOST: redis
command: ["./wait-for-it.sh", "db:5432", "--", "node", "index.js"]
networks:
webnet:
Maybe like this:
version: "3"
services:
redis:
image: redis
db:
image: postgres
environment:
POSTGRES_PASSWORD: password
POSTGRES_USER: martinnord
POSTGRES_DB: devblog
ports:
- "15432:5432"
web:
image: node:8-alpine
command: "node /app/dist/index.js"
volumes:
- ".:/app"
working_dir: "/app"
depends_on:
- "db"
ports:
- "3010:3010"
links:
- "db"
- "redis"
environment:
DB_PASSWORD: password
DB_USER: martinnord
DB_NAME: devblog
DB_HOST: db
REDIS_HOST: redis
And you should be able to connect from webapp to postgres with:
postgres://martinnord:password@db/devblog
or
connection: {
  host: process.env.DB_HOST,
  port: process.env.DB_PORT || 5432,
  user: process.env.DB_USER || 'postgres',
  password: process.env.DB_PASSWORD,
  database: process.env.DB_NAME || 'postgres'
}
I also added a line that exposes the Postgres instance running in Docker on port 15432, so you can first try to connect to it directly from your host machine with
psql postgres://martinnord:password@localhost:15432/devblog
I have nodemon running in a Docker container with a mounted volume on OS X. Nodemon detects the file change, but it doesn't restart the server.
Output:
Creating piclet_web_1...
Attaching to piclet_web_1
web_1 | 7 Sep 13:37:19 - [nodemon] v1.4.1
web_1 | 7 Sep 13:37:19 - [nodemon] to restart at any time, enter `rs`
web_1 | 7 Sep 13:37:19 - [nodemon] watching: *.*
web_1 | 7 Sep 13:37:19 - [nodemon] starting node ./index.js
web_1 | restify listening at http://[::]:80 //normal output of started node-app
//file-change, which gets detected
web_1 | 7 Sep 13:37:30 - [nodemon] restarting due to changes...
//no output of restarted server
Dockerfile:
FROM node:0.12.7-wheezy
EXPOSE 80
RUN npm install nodemon -g
ADD app /app
WORKDIR app
RUN npm install
docker-compose.yml
web:
  build: .
  ports:
    - "80:80"
  volumes:
    - ./app/lib:/app/lib
  links:
    - db
  command: "nodemon -L ./index.js"
db:
  image: mongo:3.1.7
I solved the issue by choosing another base image for the container: node:0.12.7 instead of node:0.12.7-wheezy.