Using two Meteor projects with Docker - ECONNREFUSED 127.0.0.1:8082 - node.js

I know it sounds silly, but I have to create two Meteor projects, one as a "server" on port 8081 and another as a "client" on port 8080, and run both with Docker.
The client must not create a MongoDB instance; it should connect to the server's.
I know that the server will create its MongoDB on port 8082 automatically.
Using export MONGO_URL=mongodb://127.0.0.1:8082/meteor works fine when launching everything separately, but with Docker the client can't connect to Mongo on port 8082.
What I want is to connect the client to the Mongo on 8082, or to connect both the client and the server to a full MongoDB on port 27017.
Here are the files:
Server's Dockerfile:
FROM node:10
ENV METEOR_ALLOW_SUPERUSER=true
ENV ROOT_URL="http://localhost:8081"
RUN curl "https://install.meteor.com/" | sh
COPY . /usr/src/server
WORKDIR /usr/src/server
#RUN chmod -R 700 /usr/src/app/.meteor/local
RUN meteor npm install
EXPOSE 8081
CMD ["npm", "start"]
Client's Dockerfile:
FROM node:10
ENV METEOR_ALLOW_SUPERUSER=true
ENV ROOT_URL="http://localhost:8080"
RUN curl "https://install.meteor.com/" | sh
COPY . /usr/src/client
WORKDIR /usr/src/client
#RUN chmod -R 700 /usr/src/app/.meteor/local
RUN meteor npm install
RUN export MONGO_URL=mongodb://127.0.0.1:8082/meteor
EXPOSE 8080
CMD ["npm", "start"]
docker-compose.yml:
version: "3.3"
services:
  server:
    build: ./Server
    ports:
      - "8081:8081"
    command: "meteor run -p 8081"
    links:
      - database
  client:
    build: ./Client
    ports:
      - "8080:8080"
    command: "meteor run -p 8080"
    environment:
      - MONGO_URL=mongodb://localhost:8082/meteor
    depends_on:
      - server
  database:
    image: mongo:3.6.4
Here is the error I get:
MongoNetworkError: failed to connect to server [localhost:8082] on first connect [MongoNetworkError: connect ECONNREFUSED 127.0.0.1:8082]
client_1_a42af00d59c0 | W20190311-15:01:13.496(0)? (STDERR) at Pool.<anonymous> (/root/.meteor/packages/npm-mongo/.3.1.1.v3rpzk.m5kk8++os+web.browser+web.browser.legacy+web.cordova/npm/node_modules/mongodb-core/lib/topologies/server.js:564:11)
client_1_a42af00d59c0 | W20190311-15:01:13.499(0)? (STDERR) at emitOne (events.js:116:13)
client_1_a42af00d59c0 | W20190311-15:01:13.501(0)? (STDERR) at Pool.emit (events.js:211:7)
client_1_a42af00d59c0 | W20190311-15:01:13.502(0)? (STDERR) at Connection.<anonymous> (/root/.meteor/packages/npm-mongo/.3.1.1.v3rpzk.m5kk8++os+web.browser+web.browser.legacy+web.cordova/npm/node_modules/mongodb-core/lib/connection/pool.js:317:12)
client_1_a42af00d59c0 | W20190311-15:01:13.503(0)? (STDERR) at Object.onceWrapper (events.js:317:30)
client_1_a42af00d59c0 | W20190311-15:01:13.504(0)? (STDERR) at emitTwo (events.js:126:13)
client_1_a42af00d59c0 | W20190311-15:01:13.505(0)? (STDERR) at Connection.emit (events.js:214:7)
client_1_a42af00d59c0 | W20190311-15:01:13.506(0)? (STDERR) at Socket.<anonymous> (/root/.meteor/packages/npm-mongo/.3.1.1.v3rpzk.m5kk8++os+web.browser+web.browser.legacy+web.cordova/npm/node_modules/mongodb-core/lib/connection/connection.js:246:50)
client_1_a42af00d59c0 | W20190311-15:01:13.507(0)? (STDERR) at Object.onceWrapper (events.js:315:30)
client_1_a42af00d59c0 | W20190311-15:01:13.508(0)? (STDERR) at emitOne (events.js:116:13)
client_1_a42af00d59c0 | W20190311-15:01:13.509(0)? (STDERR) at Socket.emit (events.js:211:7)
client_1_a42af00d59c0 | W20190311-15:01:13.509(0)? (STDERR) at emitErrorNT (internal/streams/destroy.js:64:8)
client_1_a42af00d59c0 | W20190311-15:01:13.511(0)? (STDERR) at _combinedTickCallback (internal/process/next_tick.js:138:11)
client_1_a42af00d59c0 | W20190311-15:01:13.511(0)? (STDERR) at process._tickDomainCallback (internal/process/next_tick.js:218:9)
Thanks.
EDIT: Alright, thanks for the help but I've managed to do it. Here are the files :
docker-compose.yml
version: "3.3"
services:
  client:
    build: ./Client
    depends_on:
      - server
    ports:
      - "8081:8081"
    command: "meteor run -p 8081"
    environment:
      - MONGO_URL=mongodb://database:27017/meteor
  server:
    build: ./Server
    ports:
      - "8080:8080"
    command: "meteor run -p 8080"
    depends_on:
      - api
    environment:
      - MONGO_URL=mongodb://database:27017/meteor
  mobile:
    build: ./application
    links:
      - database
    depends_on:
      - server
      - database
  api:
    build: ./Client/api
    ports:
      - "4000:4000"
    command: node apiLinks.js 4000 database
    links:
      - database
    depends_on:
      - database
  database:
    image: mongo:3.6.4
Client/Dockerfile
FROM node:10
ENV METEOR_ALLOW_SUPERUSER=true
ENV ROOT_URL="http://localhost:8081"
RUN curl "https://install.meteor.com/" | sh
COPY . /usr/src/client
WORKDIR /usr/src/client
#RUN chmod -R 700 /usr/src/app/.meteor/local
RUN meteor npm install
EXPOSE 8081
CMD ["npm", "start"]
Server/Dockerfile
FROM node:10
ENV METEOR_ALLOW_SUPERUSER=true
ENV ROOT_URL="http://localhost:8080"
RUN curl "https://install.meteor.com/" | sh
COPY . /usr/src/server
WORKDIR /usr/src/server
#RUN chmod -R 700 /usr/src/app/.meteor/local
RUN meteor npm install
EXPOSE 8080
CMD ["npm", "start"]
And in the code I've replaced all the URLs. For example, instead of http://127.0.0.1:8080 I now have http://server:8080.
You might notice that the server is now on port 8080 and the client on port 8081. I had to switch them; that was part of the assignment, which changed later.

A Meteor app does not expose MongoDB directly; it's only in development that Meteor starts one for you.
If you want to connect one Meteor app to another, you should use DDP.connect with the server's URL.
You can then subscribe to data from that server and call Meteor methods on it.
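As a minimal sketch of that approach (the server URL matches the compose service name from the question; the "tasks" publication and method names are made up for illustration):

```javascript
// In the client app: connect to the server app over DDP instead of
// sharing its Mongo instance.
import { DDP } from 'meteor/ddp-client';
import { Mongo } from 'meteor/mongo';

const remote = DDP.connect('http://server:8080');

// Subscribe to a publication defined by the server app
// ("tasks" is a hypothetical publication name).
remote.subscribe('tasks');

// Bind a collection to the remote connection so that Tasks.find()
// reads the documents published by the server app.
const Tasks = new Mongo.Collection('tasks', { connection: remote });

// Call a Meteor method defined on the server app.
remote.call('tasks.insert', { text: 'hello' }, (err) => {
  if (err) console.error(err);
});
```

This keeps the database private to the server app; the client only ever talks DDP.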

Related

Nodejs application not connecting Redis in docker-compose

I have a Nodejs application which connects to a Redis instance. I am using docker-compose for the setup, and running docker-compose up. Here is my docker-compose.yml file:
# Specify docker-compose version.
version: '3'
# Define the services/containers to be run.
services:
  express:
    build: .
    container_name: node-app
    ports:
      - '8000:8000'
    depends_on:
      - redis-cache
  redis-cache:
    image: redis
    ports:
      - 6379:6379
My Dockerfile:
FROM node:12-alpine
WORKDIR /app
COPY . .
RUN npm ci
EXPOSE 8000
CMD [ “npm”, “start” ] // Also tried CMD [ “node”, “index.js” ]
I am getting the following error:
redis-cache_1 | 1:M 19 Jan 2021 04:23:49.411 * Ready to accept connections
node-app | sh: “start”: unknown operand
node-app exited with code 2
So, I went inside the container and manually ran npm start. Node.js started successfully but gave the following error while connecting to Redis:
Error: Redis connection to redis-cache:6379 failed - getaddrinfo ENOTFOUND redis-cache
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:66:26) {
errno: 'ENOTFOUND',
code: 'ENOTFOUND',
syscall: 'getaddrinfo',
hostname: 'redis-cache'
}
I also tried node index.js but still got the same error. This is how I connect to my Redis instance in my Node.js application:
const client = redis.createClient({host: 'redis-cache', port: 6379});
I have tried various answers on Stack Overflow as well as various other sites, but none of them works for me. Please help!

Can't reach postgres container from app container

I have a docker-compose.yml that sets up two services: server and db. The Node.js server, which is the server service, uses pg to connect to the PostgreSQL database; the db service is a PostgreSQL image.
On the server startup, it tries to connect to the database but gets a timeout.
docker-compose.yml
version: '3.8'
services:
server:
image: myapi
build: .
container_name: server
env_file: .env
environment:
- PORT=80
- DATABASE_URL=postgres://postgres:postgres#db:15432/mydb
- REDIS_URL=redis://redis
ports:
- 3000:80
depends_on:
- db
command: node script.js
restart: unless-stopped
db:
image: postgres
container_name: db
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
POSTGRES_DB: mydb
ports:
- 15432:15432
volumes:
- db-data:/var/lib/postgresql/data
command: -p 15432
restart: unless-stopped
volumes:
db-data:
Edit: code above changed to remove links and expose.
db service output:
db |
db | PostgreSQL Database directory appears to contain a database; Skipping initialization
db |
db | 2020-11-05 20:18:15.865 UTC [1] LOG: starting PostgreSQL 13.0 (Debian 13.0-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
db | 2020-11-05 20:18:15.865 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 15432
db | 2020-11-05 20:18:15.865 UTC [1] LOG: listening on IPv6 address "::", port 15432
db | 2020-11-05 20:18:15.873 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.15432"
db | 2020-11-05 20:18:15.880 UTC [25] LOG: database system was shut down at 2020-11-05 20:18:12 UTC
db | 2020-11-05 20:18:15.884 UTC [1] LOG: database system is ready to accept connections
script.js - used by the command from the server service.
const pg = require('pg');

console.log(process.env.DATABASE_URL);

const pool = new pg.Pool({
  connectionString: process.env.DATABASE_URL,
  connectionTimeoutMillis: 5000,
});

pool.connect((err, _, done) => {
  if (err) {
    console.error(err);
    done(err);
  }
  done();
});

pool.query('SELECT NOW()', (err, res) => {
  console.log(err, res);
  pool.end();
});

const client = new pg.Client({
  connectionString: process.env.DATABASE_URL,
  connectionTimeoutMillis: 5000,
});

client.connect(console.error);

client.query('SELECT NOW()', (err, res) => {
  console.log(err, res);
  client.end();
});
server service output:
NOTE: The first line is the output of the first console.log call from script.js.
NOTE: Since the server service is set up with restart: unless-stopped, it will repeat this output forever.
server | postgres://postgres:postgres@db:15432/mydb
server | Error: Connection terminated due to connection timeout
server | at Connection.<anonymous> (/home/node/app/node_modules/pg/lib/client.js:255:9)
server | at Object.onceWrapper (events.js:421:28)
server | at Connection.emit (events.js:315:20)
server | at Socket.<anonymous> (/home/node/app/node_modules/pg/lib/connection.js:78:10)
server | at Socket.emit (events.js:315:20)
server | at emitCloseNT (net.js:1659:8)
server | at processTicksAndRejections (internal/process/task_queues.js:79:21)
server | at runNextTicks (internal/process/task_queues.js:62:3)
server | at listOnTimeout (internal/timers.js:523:9)
server | at processTimers (internal/timers.js:497:7)
server | Error: Connection terminated due to connection timeout
server | at Connection.<anonymous> (/home/node/app/node_modules/pg/lib/client.js:255:9)
server | at Object.onceWrapper (events.js:421:28)
server | at Connection.emit (events.js:315:20)
server | at Socket.<anonymous> (/home/node/app/node_modules/pg/lib/connection.js:78:10)
server | at Socket.emit (events.js:315:20)
server | at emitCloseNT (net.js:1659:8)
server | at processTicksAndRejections (internal/process/task_queues.js:79:21)
server | at runNextTicks (internal/process/task_queues.js:62:3)
server | at listOnTimeout (internal/timers.js:523:9)
server | at processTimers (internal/timers.js:497:7) undefined
server | Error: timeout expired
server | at Timeout._onTimeout (/home/node/app/node_modules/pg/lib/client.js:95:26)
server | at listOnTimeout (internal/timers.js:554:17)
server | at processTimers (internal/timers.js:497:7)
server | Error: Connection terminated unexpectedly
server | at Connection.<anonymous> (/home/node/app/node_modules/pg/lib/client.js:255:9)
server | at Object.onceWrapper (events.js:421:28)
server | at Connection.emit (events.js:315:20)
server | at Socket.<anonymous> (/home/node/app/node_modules/pg/lib/connection.js:78:10)
server | at Socket.emit (events.js:315:20)
server | at emitCloseNT (net.js:1659:8)
server | at processTicksAndRejections (internal/process/task_queues.js:79:21) undefined
server | postgres://postgres:postgres@db:15432/mydb
...
From the host computer, I can reach the PostgreSQL database at the db service, connecting successfully, using the same script from the server service.
The output of the script running from the host computer:
➜ node script.js
postgres://postgres:postgres@localhost:15432/mydb
null Client { ... }
undefined Result { ... }
null Result { ... }
This output means the connection succeeded.
In summary:
I can't reach the db container from the server container, getting timeouts on the connection, but I can reach the db container from the host computer, connecting successfully.
Considerations
First, thanks for the answer so far. Addressing some points raised:
Missing network:
It isn't required, because docker-compose provides a default network. I tested with a custom network, but it didn't work either.
Order of initialization:
I'm using depends_on to ensure the db container is started first, though I know it doesn't guarantee the database is actually initialized before the server. That isn't the problem: when a timeout happens the server exits and, being set up with restart: unless-stopped, is started again. So even if the database is still initializing on the first or second attempt, the server keeps being restarted until it connects successfully (which never happened).
UPDATE:
From the server container, I could reach the database at the db service using psql. I still can't connect from the Node.js app there.
The DATABASE_URL isn't the problem, because the URI I used in the psql command is the same one used by script.js and printed by its first console.log call.
Command-line used:
docker exec -it server psql postgres://postgres:postgres@db:15432/mydb
Edit: Improved code by removing the dependency for Sequelize. Now it uses only pg and calls the script directly.
Thanks for providing the source to reproduce the issue.
There are no issues in the docker-compose file, as you had already ruled out.
The problem lies between your Dockerfile and the version of node-pg you are using: node:14-alpine and pg 7.18.2.
It turns out there is a bug with Node 14 and earlier versions of node-pg.
The solution is either to downgrade to Node v12 or to use the latest version of node-pg, currently 8.4.2 (the fix went in in v8.0.3).
I have verified both of these solutions on the branch you provided, and they work.
This isn't a complete answer; I don't have your code handy so I can't actually test the compose file. However, there are a few issues there I'd like to point out:
The links directive is deprecated.
It is a legacy option that was used before Docker introduced user-defined networks and automatic DNS support. You can just get rid of it: containers in a compose file can refer to each other by name without it.
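A minimal sketch of that point (service names and the connection URL are made up for illustration): two services in the same compose file reach each other through Docker's built-in DNS, with no links directive at all.

```yaml
version: "3.8"
services:
  server:
    build: .
    environment:
      # "db" below is resolved by Docker's DNS to the db service's container
      - DATABASE_URL=postgres://postgres:postgres@db:5432/mydb
    depends_on:
      - db
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: postgres
```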
The expose directive does nothing. It can be informative in, for example, a Dockerfile, as a way of saying "this image will expose a service on this port", but it doesn't actually make anything happen. It's almost entirely useless in a compose file.
The depends_on directive is also less useful than you might think. It will indeed cause docker-compose to bring up the database container first, but the container is considered "up" as soon as its first process has started. It doesn't make docker-compose wait for the database to be ready to service requests, which means you'll still run into errors if your application tries to connect before the database is ready.
The best solution is to build database re-connection logic into your application, so that if the database ever goes down (e.g. you restart the postgres container to activate a new configuration, or upgrade the postgres version), the app retries connections until it succeeds.
An acceptable solution is to include code in your application startup that blocks until the database is responding to requests.
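A sketch of that re-connection logic (the helper name and the pg wiring shown in the trailing comment are illustrative, not from the question's code):

```javascript
// Minimal re-connection sketch: keep calling an async connect function
// until it succeeds or the attempt budget is exhausted.
async function connectWithRetry(connect, { retries = 10, delayMs = 1000 } = {}) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      return await connect();
    } catch (err) {
      // On the last attempt, give up and surface the error.
      if (attempt === retries) throw err;
      console.error(`connect attempt ${attempt} failed (${err.message}); retrying`);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

// Hypothetical wiring with pg, matching the compose file above:
//   const pool = new pg.Pool({ connectionString: process.env.DATABASE_URL });
//   const client = await connectWithRetry(() => pool.connect());
```

Run at startup, this also covers the depends_on gap: the app simply keeps retrying until the database is actually accepting connections.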
The problem has nothing to do with Docker. To verify that, use this docker-compose.yml file:
version: '3.8'
services:
  app:
    image: ubuntu
    container_name: app
    command: sleep 8h
  db:
    image: postgres
    container_name: db
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: mydb
    expose:
      - '15432'
    ports:
      - 15432:15432
    volumes:
      - db-data:/var/lib/postgresql/data
    command: -p 15432
    restart: unless-stopped
volumes:
  db-data:
Then perform a docker exec -it app bash to go into container app, and install postgresql-client with apt install -y postgresql-client.
The command psql -h db -p 15432 -U postgres -W succeeded!
Check pg configuration
You say that pg uses the environment variable DATABASE_URL to reach PostgreSQL. I'm not sure:
From https://node-postgres.com/features/connecting, we can find this example:
$ PGUSER=dbuser \
PGHOST=database.server.com \
PGPASSWORD=secretpassword \
PGDATABASE=mydb \
PGPORT=3211 \
node script.js
And this sentence :
node-postgres uses the same environment variables as libpq to connect to a PostgreSQL server.
In the libpq documentation, there is no DATABASE_URL.
To adapt the example from the pg documentation to your docker-compose.yml file, try the following (I only changed the environment variables of the server service):
version: '3.8'
services:
  server:
    image: myapi
    build: .
    container_name: server
    env_file: .env
    environment:
      - PORT=80
      - PGUSER=postgres
      - PGPASSWORD=postgres
      - PGHOST=db
      - PGDATABASE=mydb
      - PGPORT=15432
      - REDIS_URL=redis://redis
    ports:
      - 3000:80
    depends_on:
      - db
    command: node script.js
    restart: unless-stopped
  db:
    image: postgres
    container_name: db
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: mydb
    ports:
      - 15432:15432
    volumes:
      - db-data:/var/lib/postgresql/data
    command: -p 15432
    restart: unless-stopped
volumes:
  db-data:

Can not access replicaset mongodb(docker) from host

I have my Node.js code running on my host machine (macOS), and it is trying to connect to the MongoDB replica set running in Docker.
version: "3"
services:
  redis_master:
    image: redis:2.8
    volumes:
      - "/Users/hiteshbaldaniya/docker-redis/master:/data/"
    ports:
      - "6379:6379"
    networks:
      - database
  mongodb_primary:
    build:
      context: ./
      dockerfile: DockerfileDB
    command: mongod --replSet "hdbrs" --dbpath "/data/27017/" --port 27017
    ports:
      - "27017:27017"
    volumes:
      - "/Users/hiteshbaldaniya/docker-mongodb/:/data/"
    networks:
      - database
  mongodb_secondary1:
    build:
      context: ./
      dockerfile: DockerfileDB
    command: mongod --replSet "hdbrs" --dbpath "/data/27018/" --port 27018
    ports:
      - "27018:27018"
    volumes:
      - "/Users/hiteshbaldaniya/docker-mongodb/:/data/"
    networks:
      - database
  mongodb_secondary2:
    build:
      context: ./
      dockerfile: DockerfileDB
    command: mongod --replSet "hdbrs" --dbpath "/data/27019/" --port 27019
    ports:
      - "27019:27019"
    volumes:
      - "/Users/hiteshbaldaniya/docker-mongodb/:/data/"
    networks:
      - database
  hdb_nginx:
    build:
      context: ./nginx/
      dockerfile: DockerfileNginx.dev
    ports:
      - "8081:80"
    volumes:
      - "/Users/hiteshbaldaniya/logs/docker-nginx/:/var/log/nginx/"
    networks:
      - backend
networks:
  backend:
    driver: bridge
  database:
    driver: bridge
All three ports are open on my host machine; I tried telnet and was able to connect to each of them.
My Node.js application uses the mongodb Node driver with the following configuration.
module.exports = {
  servers: [
    { host: 'localhost', port: 27017 },
    { host: 'localhost', port: 27018 },
    { host: 'localhost', port: 27019 },
  ],
  database: 'mydatabase',
  options: {
    "raw": false,
    "poolSize": 5,
    "readPreference": "primaryPreferred",
    "w": 1,
    "wtimeout": 12000,
    "replicaSet": "hdbrs"
  }
};
While connecting to MongoDB, my application throws the following error. Can someone help me here?
{ MongoNetworkError: failed to connect to server [mongodb_primary:27017] on first connect [MongoNetworkError: getaddrinfo ENOTFOUND mongodb_primary mongodb_primary:27017]
at Pool.<anonymous> (/Users/hiteshbaldaniya/Applications/contentstack-migration/node_modules/mongodb-core/lib/topologies/server.js:505:11)
at Pool.emit (events.js:198:13)
at Connection.<anonymous> (/Users/hiteshbaldaniya/Applications/contentstack-migration/node_modules/mongodb-core/lib/connection/pool.js:329:12)
at Object.onceWrapper (events.js:286:20)
at Connection.emit (events.js:198:13)
at Socket.<anonymous> (/Users/hiteshbaldaniya/Applications/contentstack-migration/node_modules/mongodb-core/lib/connection/connection.js:245:50)
at Object.onceWrapper (events.js:286:20)
at Socket.emit (events.js:198:13)
at emitErrorNT (internal/streams/destroy.js:91:8)
at emitErrorAndCloseNT (internal/streams/destroy.js:59:3)
name: 'MongoNetworkError',
message:
'failed to connect to server [mongodb_primary:27017] on first connect [MongoNetworkError: getaddrinfo ENOTFOUND mongodb_primary mongodb_primary:27017]' }
Thanks.
I was able to figure out a solution: adding host entries to my /etc/hosts file.
127.0.0.1 mongodb_primary
127.0.0.1 mongodb_secondary1
127.0.0.1 mongodb_secondary2
I am still not sure why localhost doesn't work when connecting to MongoDB running inside Docker from the host machine; I'd still like to understand this behaviour.
Just for the note, I have also tried adding --bind_ip_all to the mongod command line.
Do let me know if someone knows. Thanks.

Node.js LoopBack framework, docker-compose.yml, MongoDB Connector - Error: MongoServerSelectionError: connect ECONNREFUSED

I can't connect MongoDB to my LoopBack framework.
I can connect to the database with Mongo Express and can create databases and collections, but it's running on localhost. From my app I need to connect with mongo:27017.
docker-compose.yml
version: "3.7"
services:
web:
build:
context: .
dockerfile: .docker/node/Dockerfile
volumes:
- .:/home/node/app
ports:
- 3000:3000
depends_on:
- mongo
links:
- mongo
mongo:
image: mongo:latest
restart: always
volumes:
- ./src/datasources/mongodb:/data/db
environment:
MONGO_INITDB_ROOT_USERNAME: root
MONGO_INITDB_ROOT_PASSWORD: example
ports:
- 27017:27017
mongo-express:
image: mongo-express
restart: always
ports:
- 8081:8081
environment:
ME_CONFIG_MONGODB_ADMINUSERNAME: root
ME_CONFIG_MONGODB_ADMINPASSWORD: example
Dockerfile
# Check out https://hub.docker.com/_/node to select a new base image
FROM node:10-slim
# Set to a non-root built-in user `node`
USER node
# Create app directory (with user `node`)
RUN mkdir -p /home/node/app
WORKDIR /home/node/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY --chown=node package*.json ./
RUN npm install
# Bundle app source code
COPY --chown=node . /home/node/app
RUN npm run build
# Bind to all network interfaces so that it can be mapped to the host OS
ENV HOST=0.0.0.0 PORT=3000
EXPOSE ${PORT}
CMD [ "node", "." ]
LoopBack datasources - mongo-db.datasource.config.json
{
"name": "MongoDB",
"connector": "mongodb",
"url": "",
"host": "mongo",
"port": 27017,
"user": "root",
"password": "sportee",
"database": "sportee",
"useNewUrlParser": true
}
docker error
MongoServerSelectionError: connect ECONNREFUSED 127.0.0.1:27017
at Timeout.waitQueueMember.timer.setTimeout [as _onTimeout] (/home/node/app/node_modules/mongodb/lib/core/sdam/topology.js:430:30)
at ontimeout (timers.js:436:11)
at tryOnTimeout (timers.js:300:5)
at listOnTimeout (timers.js:263:5)
at Timer.processTimers (timers.js:223:10)
Emitted 'error' event at:
at MongoDbDataSource.postInit (/home/node/app/node_modules/loopback-datasource-juggler/lib/datasource.js:502:16)
at onError (/home/node/app/node_modules/loopback-connector-mongodb/lib/mongodb.js:316:21)
at /home/node/app/node_modules/loopback-connector-mongodb/lib/mongodb.js:324:9
at /home/node/app/node_modules/mongodb/lib/utils.js:722:9
at err (/home/node/app/node_modules/mongodb/lib/mongo_client.js:216:23)
at connectCallback (/home/node/app/node_modules/mongodb/lib/operations/connect.js:350:5)
at topology.connect.err (/home/node/app/node_modules/mongodb/lib/operations/connect.js:583:14)
at Object.selectServer.err [as callback] (/home/node/app/node_modules/mongodb/lib/core/sdam/topology.js:285:11)
at Timeout.waitQueueMember.timer.setTimeout [as _onTimeout] (/home/node/app/node_modules/mongodb/lib/core/sdam/topology.js:435:25)
[... lines matching original stack trace ...]
Unhandled error in GET /users: 500 Error: Timeout in connecting after 5000 ms
at Timeout._onTimeout (/home/node/app/node_modules/loopback-datasource-juggler/lib/datasource.js:2640:10)
at /home/node/app/node_modules/loopback-datasource-juggler/lib/datasource.js:343:12
Can someone help please? :)
UPDATE:
I resolved my problem by adding networks to my docker-compose.yml:
networks:
  app-tier:
    driver: bridge
  me-tier:
    driver: bridge

ECONNREFUSED error in docker-compose with NodeJS and postgresql in google cloud

I have created my React app with Node.js and PostgreSQL and deployed it to Google Cloud. I created Docker images of Postgres and Node.js and uploaded them to Docker Hub. From gcloud I access those images.
This is my docker-compose-production.yml file.
version: '2.0'
services:
  postgres:
    image: mycompany/myapp:pglatest
    restart: always
    volumes:
      - ./backdata/databackups:/var/lib/postgresql/backdata
    ports:
      - "5433:5432"
  backend:
    image: mycompany/myapp:nodelatest7
    command: npm run start
    ports:
      - "5001:5000"
    depends_on:
      - postgres
    environment:
      POSTGRES_URL: postgresql://postgres:root@postgres:5432/db_mydb
      DEBUG: hms-backend:*
When I run the command
sudo docker-compose -f docker-compose-production.yml up --build -d
the two containers are created. After that, I check the backend logs:
sudo docker-compose -f docker-compose-production.yml logs -t backend
I'm getting this error:
backend_1 | 2018-09-15T09:12:18.926001351Z REST API listening on port 5000
backend_1 | 2018-09-15T09:12:18.937246598Z error { Error: connect ECONNREFUSED 192.168.80.2:5432
backend_1 | 2018-09-15T09:12:18.937268668Z at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1191:14)
backend_1 | 2018-09-15T09:12:18.937280934Z errno: 'ECONNREFUSED',
backend_1 | 2018-09-15T09:12:18.937283960Z code: 'ECONNREFUSED',
backend_1 | 2018-09-15T09:12:18.937286817Z syscall: 'connect',
backend_1 | 2018-09-15T09:12:18.937289488Z address: '192.168.80.2',
backend_1 | 2018-09-15T09:12:18.937292260Z port: 5432 }
How do I solve this problem?
For me your postgres URL is wrong: postgresql://postgres:root@postgres:5432/db_mydb
It should be postgresql://postgres:root@postgres:5433/db_mydb, since the postgres "exposed" port is 5433.
Hmm, but I think you should add "container_name" in the docker-compose:
services:
  postgres:
    container_name: my_postgres
and use this name for the "address" of your postgres:
postgresql://my_postgres:root@postgres:5433/db_mydb
