I have two apps:
MongoDB (started from the Bitnami MongoDB Docker image)
My custom node app
The two apps interact flawlessly when my node app is run natively. Now I've put it inside a Docker container, and when I start both together with docker compose up, the backend cannot connect to MongoDB.
This is an excerpt of the startup sequence:
mongodb_1 | 2018-11-10T22:22:52.481+0000 I NETWORK [initandlisten] waiting for connections on port 27017
[...]
backend_1 | 2018-11-10T22:23:48.119Z 'MongoNetworkError: failed to connect to server [localhost:27017] on first connect [MongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017]'
This is my docker-compose.yml:
version: '2'
services:
  mongodb:
    image: bitnami/mongodb:latest
    expose:
      - 27017
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
  backend:
    build: ./backend
    environment:
      API_HOST: http://localhost:3000/
      APP_SERVER_PORT: 3000
    expose:
      - 3000
    volumes:
      - ./backend:/app/backend
    links:
      - mongodb
    depends_on:
      - mongodb
This is my node call to the DB:
mongoose.connect('mongodb://localhost:27017/groceryList', {
  useNewUrlParser: true
});
I skimmed about 15 Stack Overflow questions asking the same thing and still can't find the cause:
It is not that MongoDB isn't ready when my node app tries to connect. I wrapped my connection call in an auto-reconnect function as described here, and the error repeats endlessly, so it is not just about the "first connect".
I can publish port 27017 of the MongoDB container and happily connect with Robo3T, so the DB is definitely working.
When I connect to mongodb://mongo:27017/groceryList instead, the same applies, only with ENOTFOUND instead of ECONNREFUSED.
What am I missing?
Docker 18.06.1-ce
docker-compose 1.22.0
Mongoose 5.3.6
MongoDB 4.0.3
Node 11.1.0
macOS 10.14.1
Your mongodb service is named mongodb, not mongo.
Try
mongoose.connect('mongodb://mongodb:27017/groceryList', {
  useNewUrlParser: true
});
The generic form is 'mongodb://mongoServiceName:27017/dbname'; this uses Docker's automatic DNS resolution for containers within the same network.
And as you may already know from other questions/answers, within a container a URL is resolved relative to that container itself. Since there is no MongoDB running inside the backend container, it can't connect to one there.
It's not possible to use localhost:27017 inside a container to communicate with another container, because "localhost" is self-referring: localhost:27017 looks for port 27017 inside the backend container, not in the mongodb container.
So you need to use either the service name (mongodb) or the IP of your machine.
Related
Good morning guys.
I'm having a problem connecting a Node.js application in a container to another container running a Redis server. On my local machine I can connect the application to the Redis container without any problem. However, when I try to run the application in a container as well, a timeout error is returned.
I'm new to Docker and I don't understand why I can connect to the Redis container from the application running locally on my machine, but that same connection doesn't work when I run the application in a container.
I tried using docker-compose, but from what I understand it would start the Redis server in yet another container, instead of using the Redis container that is already running in Docker.
To connect to redis I'm using the following code:
const { createClient } = require('redis');

const client = createClient({
  socket: {
    host: process.env.REDIS_HOST,
    port: Number(process.env.REDIS_PORT)
  }
});
Where REDIS_HOST is the address of my container running on the server and REDIS_PORT is the port this container listens on.
To run redis on docker I used the following guide: https://redis.io/docs/stack/get-started/install/docker/
I apologize if my problem was not very clear; I'm still studying Docker.
You mentioned you are using Docker Compose. Here's an example showing how to start Redis in a container, make your Node application wait for that container, and then use an environment variable in your Node application to specify the name of the host to connect to Redis on. In this example it connects to the container running Redis, which I've called "redis":
version: "3.9"
services:
redis:
container_name: redis_kaboom
image: "redislabs/redismod"
ports:
- 6379:6379
volumes:
- ./redisdata:/data
entrypoint:
redis-server
--loadmodule /usr/lib/redis/modules/rejson.so
--appendonly yes
deploy:
replicas: 1
restart_policy:
condition: on-failure
node:
container_name: node_kaboom
build: .
volumes:
- .:/app
- /app/node_modules
command: sh -c "npm run load && npm run dev"
depends_on:
- redis
ports:
- 8080:8080
environment:
- REDIS_HOST=redis
So in your Node code you'd then use the value of process.env.REDIS_HOST to connect to the right Redis host. Here I'm not using a password or a non-standard port; you could supply those as extra environment variables that match the configuration of the Redis container in Docker Compose too, if you needed to.
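On the Node side that could look like the following (a minimal sketch using the node-redis v4 createClient API, matching the call shape from the question above):

const { createClient } = require('redis');

const client = createClient({
  socket: {
    host: process.env.REDIS_HOST, // "redis" when started via the compose file above
    port: 6379
  }
});

client.on('error', (err) => console.error('Redis client error', err));
client.connect().then(() => console.log('Connected to Redis'));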
Disclosure: I work for Redis.
ISSUE: I have a Docker image running for Neo4j and one for Express.js. I can't get the Docker containers to communicate with each other.
I can run Neo4j Desktop and start a nodemon server, and they will communicate.
SETUP:
NEO4J official docker image
NEO4J_AUTH none
PORTS localhost:7474 localhost:7687
Version neo4j-community-4.3.3-unix.tar.gz
NODEJS Image
PORTS 0.0.0.0:3000 :::3000
Version 14.17.5
Express conf
DEV_DB_USER_NAME="neo4j"
DEV_DB_PASSWORD="test"
DEV_DB_URI="neo4j://localhost" // for the image; for local it's bolt://localhost:7687
DEV_DB_SECRET_KEY=""
let driver = neo4j.driver(
  envConf.dbUri,
  neo4j.auth.basic(envConf.dbUserName, envConf.dbPassword)
);
package.json
"#babel/node": "^7.13.10",
"neo4j-driver": "^4.2.3",
I can remote into the Neo4j image through http://localhost:7474/browser/, so it's running.
But I cannot use the server image to call the local Neo4j instance.
When I call the APIs in the server image I get these errors:
If I use the neo4j protocol:
Neo4jError: Could not perform discovery. No routing servers available. Known routing table: RoutingTable[database=default database, expirationTime=0, currentTime=1629484043610, routers=[], readers=[], writers=[]]
If I use the bolt protocol:
Neo4jError: Failed to connect to server. Please ensure that your database is listening on the correct host and port and that you have compatible encryption settings both on Neo4j server and driver. Note that the default encryption setting has changed in Neo4j 4.0. Caused by: connect ECONNREFUSED 127.0.0.1:7687
I've been scouring the documentation for a while; any ideas would be most welcome!
I was able to achieve the communication by using docker-compose. The problem was that the two containers were on separate networks and I could not find a way to let the server reach the database. Running docker-compose and building both containers within a single compose network allows communication using the service names.
Take note: YAML is indentation-sensitive!
docker-compose.yml
version: '3.7'
networks:
  lan:
# The different services that make up our "network" of containers
services:
  # Express is our first service
  express:
    container_name: exp_server
    networks:
      - lan
    # The location of the Dockerfile to build this service
    build: <location of dockerfile>
    # Command to run once the Dockerfile completes building
    command: npm run startdev
    # Volumes, mounting our files to parts of the container
    volumes:
      - .:/src
    # Ports to map, mapping our port 3000 to port 3000 on the container
    ports:
      - 3000:3000
    # designating a file with environment variables
    env_file:
      - ./.env.express
  ## Defining the Neo4j Database Service
  neo:
    container_name: neo4j_server
    networks:
      - lan
    # The image to use
    image: neo4j:latest
    # map the ports so we can check the db server is up
    ports:
      - "7687:7687"
      - "7474:7474"
    # mounting a named volume to the container to track db data
    volumes:
      - $HOME/neo4j/conf:/conf
      - $HOME/neo4j/data:/data
      - $HOME/neo4j/logs:/logs
      - $HOME/neo4j/plugins:/plugins
    env_file:
      - .env.neo4j
With this you can use Docker to run both the server and the database (and anything else you need) while still using change-detection rebuilds for development, and you can even build images for multiple environments at the same time. NEAT
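With the compose file above, the driver in the express service would then connect via the service name neo instead of localhost. A minimal sketch (credentials are the ones from the question; adjust to your env config):

const neo4j = require('neo4j-driver');

// "neo" is the database service name from the compose file above,
// resolved by Docker's DNS on the shared "lan" network
const driver = neo4j.driver(
  'bolt://neo:7687',
  neo4j.auth.basic('neo4j', 'test')
);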
I can't figure out how to connect to my redis service from my app service. Using Docker for Mac, version 18.03.1-ce, build 9ee9f40.
I've tried connecting the various ways I've found on similar questions:
const client = redis.createClient({ host: 'localhost', port: 6379});
const client = redis.createClient({ host: 'redis', port: 6379});
const client = redis.createClient('redis://redis:6379');
const client = redis.createClient('redis', 6379); // and reversed args
I always get some form of:
Error: Redis connection to localhost:6379 failed - connect ECONNREFUSED 127.0.0.1:6379
Error: Redis connection to redis:6379 failed - connect ECONNREFUSED 172.20.0.2:6379
Docker containers
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0fd798d58561 app_app "pm2-runtime start e…" 2 seconds ago Up 7 seconds app
65d148e498f7 app_redis "docker-entrypoint.s…" About a minute ago Up 8 seconds 0.0.0.0:6379->6379/tcp redis
Redis works:
$ docker exec -it redis /bin/bash
root@65d148e498f7:/data# redis-cli ping
PONG
Redis Dockerfile (pretty simple)
FROM redis:4.0.9
COPY redis.conf /usr/local/etc/redis/redis.conf
CMD ["redis-server", "/usr/local/etc/redis/redis.conf"]
app Dockerfile
FROM node:10.3.0-slim
RUN mkdir -p /app
COPY src/* /app/
CMD ["pm2-runtime", "start", "/app/ecosystem.config.js"]
docker-compose.yml
version: "3"
services:
redis:
build: ./redis/
container_name: redis
restart: unless-stopped
ports:
- "6379:6379"
expose:
- "6379"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
environment:
- 'API_PORT=6379'
- 'NODE_ENV=production'
app:
depends_on:
- redis
build: ./app/
container_name: app
restart: unless-stopped
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /app/node_modules
environment:
- 'NODE_ENV=production'
It looks like your redis image is configured to listen on 127.0.0.1 rather than on all interfaces. This is not an issue with the default Redis images, so either use the official image from Docker Hub, or correct your configuration to listen on 0.0.0.0.
You'll be able to verify this with netshoot:
docker run --rm --net container:app_redis nicolaka/netshoot netstat -ltn
In the Redis config, listening on all interfaces is done by commenting out the "bind" line in redis.conf.
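For example, in the redis.conf you COPY into the image (a sketch; the exact line may read slightly differently in your file):

# bind 127.0.0.1   <- comment this out, or change it to: bind 0.0.0.0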
Let me explain it in simple language. When you run docker-compose up, it runs redis and app in separate containers. Now your app needs to access the redis container (remember, Redis is not at your machine's localhost; it's inside a container and listens on its default port 6379 there). By default Docker keeps the app container and the redis container on the same network, and you can reach a container by its service name (which in your case is redis or app). So in order to access Redis from the app container, all you need is the default port 6379 and the service name as the host (in your case "redis").
For a Node application running in a container, you get access to Redis (also running in a container) with:
const redis = require("redis");
const client = redis.createClient(6379, "service-name-for-redis-container");
I solved this problem by changing the Redis host from 'localhost' to 'redis', for example:
REDIS_HOST=redis
REDIS_PORT=6379
After the change my Docker service started to communicate with Redis.
Original forum answer: https://forums.docker.com/t/connecting-redis-from-my-network-in-docker-net-core-application/92405
In my case the problem was that I was binding a different port on redis:
redis:
  image: redis
  ports:
    - 49155:6379
And I was trying to connect to port 49155, but I needed to connect through port 6379, since the connection comes from another service: container-to-container traffic uses the container port, not the published host port.
localhost from the app container's perspective won't leave the app container. So the best bet is to use redis or the host's IP address.
If you want to reach redis from the app container, you'll need to link them or put them into the same network. Please add a networks property to both services, using the same network name. Docker will then provide you with valid DNS lookups for the service names.
See the official docs at https://docs.docker.com/compose/compose-file/#networks (for the service: property) and https://docs.docker.com/compose/compose-file/#network-configuration-reference (for the top-level networks property).
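A minimal sketch of what that could look like in your compose file (the network name appnet is an arbitrary choice):

version: "3"
services:
  redis:
    build: ./redis/
    networks:
      - appnet
  app:
    build: ./app/
    networks:
      - appnet
networks:
  appnet: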
I'm pretty new to Docker, but I'm having some issues getting a Node app to connect to a Mongo database running in a separate container.
I'm using the official mongo image
I run it using:
docker run --name some-mongo --network-alias some-mongo -d mongo
It's running on port 27017 by default. I can connect to it using the mongo shell:
mongo --host mongodb://172.17.0.2:27017
But I can't connect to it by name:
mongo --host mongodb://some-mongo:27017
MongoDB shell version: 3.2.19
connecting to: mongodb://some-mongo:27017/test
2018-05-07T17:23:20.813-0400 I NETWORK [thread1] getaddrinfo("some-mongo") failed: Name or service not known
2018-05-07T17:23:20.813-0400 E QUERY [thread1] Error: couldn't initialize connection to host some-mongo, address is invalid :
connect#src/mongo/shell/mongo.js:223:14
#(connect):1:6
exception: connect failed
Instead, I get the error message above about not being able to connect to the mongo host.
I've tried some docker-compose tutorials, but either they're too simple or they don't seem to work for me. I just want to connect a custom node app (not the official Node image) to MongoDB and some other dependencies.
Your approach does not alter your host's system configuration, so the mongo service will not be available just like that. Agreeing with @unm4sk, you should compose your application's services into a single compose file like this:
version: '2'
services:
  mongo:
    image: mongo
    expose:
      - "27017"
    [...]
  service_utilizing_mongo:
    [...]
    links:
      - mongo:mongo
Then your service_utilizing_mongo would have a DNS entry that makes it capable of accessing your mongo service via the alias mongo on the default port 27017.
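From service_utilizing_mongo, a Node app would then connect by service name rather than by IP. A minimal sketch (assuming Mongoose and the groceryList database from the first question above, purely for illustration):

const mongoose = require('mongoose');

// "mongo" resolves via Docker's DNS to the mongo service above
mongoose.connect('mongodb://mongo:27017/groceryList', {
  useNewUrlParser: true
});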
You have to run your container with its port published to your host machine:
docker run -p 27017:27017 --name some-mongo --network-alias some-mongo -d mongo
Then you can connect to MongoDB from your host machine, for example (assuming the default port):
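mongo --host mongodb://localhost:27017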
If you don't want to do this, you can connect to mongo through the docker exec command:
docker exec -it some-mongo mongo
I am trying to connect a postgres service to a Node.js web service using Docker Compose.
My docker-compose.yml file
version: "3"
services:
web:
build: ./
ports:
- "40000:3000"
depends_on:
- postgres
postgres:
image: kartoza/postgis:9.6-2.4
restart: always
volumes:
- postgresdata:/data/db
environment:
- POSTGRES_PASS=password
- POSTGRES_DBNAME=sticki
- POSTGRES_USER=renga
- ALLOW_IP_RANGE=0.0.0.0/0
ports:
- "1000:5432"
volumes:
postgresdata:
So when I do docker-compose up in my root directory, both services are running: I can access the web service at localhost:40000 and the postgres service with Postico on localhost:1000.
But in the Node web service I have written the following code to access Postgres using Sequelize:
const Sequelize = require('sequelize');

const sequelize = new Sequelize('sticki', 'renga', 'password', {
  host: 'postgres',
  dialect: 'postgres',
});
But I get the following error
SequelizeConnectionRefusedError: connect ECONNREFUSED 172.18.0.2:1000
Why is the Postgres connection made to 172.18.0.2 instead of localhost (0.0.0.0)? What am I doing wrong?
For your web container, postgres is a DNS name defined in Compose as a service. The container fetches the IP address for the postgres DNS name via Docker's internal DNS and networking; that's why it resolves to 172.18.0.2. If you go into the web container and ping postgres, you will get the same IP.
As a fix, configure your Node service to connect to the host postgres on port 5432, since that is the container port. Port 1000 is the host machine port; if you want to use port 1000, configure the Node service to connect to MACHINE_IP:1000.
PS: localhost within a container means the container itself and nothing else.
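Concretely, the fix could look like this (a sketch of the corrected Sequelize call from the question):

const Sequelize = require('sequelize');

const sequelize = new Sequelize('sticki', 'renga', 'password', {
  host: 'postgres', // the compose service name, resolved by Docker's internal DNS
  port: 5432,       // the container port, not the published host port 1000
  dialect: 'postgres',
});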
The name is taken from container_name, which is fixed. In your case you do not have that, so the name is created from the folder where docker-compose.yml lives + _ + the service name + _1.
With this DNS name you can reach one service from the other on the default network that docker-compose creates.
Thanks