Can't reach MongoDB container - node.js

I'm trying to implement a dockerized Mongo/Node-based app following this article.
docker-compose.yml
version: '3'
services:
  gecar_app:
    build:
      context: .
      dockerfile: Dockerfile.gecars
    restart: always
    ports:
      - 3000:3000
    depends_on:
      - mongo
    command: /usr/app/initAndStartService.sh
  mongo:
    container_name: mongo
    image: mongo
    volumes:
      - ./data:/data/db
    ports:
      - "27017:27017"
Dockerfile.gecars
FROM node:latest
RUN mkdir -p /usr/app
WORKDIR /usr/app
COPY src/ /usr/app/src/
COPY package.json /usr/app/package.json
COPY webpack.config.js /usr/app/webpack.config.js
COPY tsconfig.json /usr/app/tsconfig.json
COPY initAndStartService.sh /usr/app/initAndStartService.sh
RUN yarn
RUN yarn build
RUN ["chmod", "+x", "/usr/app/initAndStartService.sh"]
initAndStartService.sh
#!/bin/sh
curl 127.0.0.1:27017
but when I run docker-compose up I get this:
mongo | 2019-11-19T06:13:21.682+0000 I NETWORK [initandlisten] Listening on 0.0.0.0
mongo | 2019-11-19T06:13:21.683+0000 I NETWORK [initandlisten] waiting for connections on port 27017
...................
gecar_app_1 | Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (7) Failed to connect to 127.0.0.1 port 27017: Connection refused
If I understand the situation correctly, the gecar_app_1 container can't reach the mongo container.

depends_on
Express dependency between services. Service dependencies cause the
following behaviors:
docker-compose up starts services in dependency order. In the following example, db and redis are started before web.
docker-compose up SERVICE automatically includes SERVICE’s dependencies. In the following example, docker-compose up web also
creates and starts db and redis.
docker-compose stop stops services in dependency order. In the following example, web is stopped before db and redis.
In other words, when you want to communicate with the mongo container, all you have to do is address it as mongo, based on your compose service name.
So curl mongo:27017 from the gecar_app container will do the trick.
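On the Node.js side the same idea applies; here is a minimal sketch (my own addition, assuming the official mongodb driver is installed) that connects via the service name instead of 127.0.0.1:

// minimal sketch, assuming the official 'mongodb' npm package
const { MongoClient } = require('mongodb');

// 'mongo' resolves to the mongo service on the compose network
MongoClient.connect('mongodb://mongo:27017')
  .then(client => {
    console.log('connected to mongo');
    return client.close();
  })
  .catch(err => console.error('connection failed:', err));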

127.0.0.1 is a loopback address, which means that when you run curl 127.0.0.1:27017 inside the gecar_app container, you are trying to connect to the container itself, but gecar_app has nothing listening on port 27017. What you need to connect to is the mongo service (I think so).
You use depends_on so that the mongo service is started before the gecar_app container (good point); mongo then becomes a hostname, and gecar_app can connect to it via the mongo "domain".
However, gecar_app may still start before the mongodb service is actually ready to accept connections, so you have to make sure mongo is ready first. You can use the healthcheck property to do that.
Finally,
docker-compose.yml
version: '3'
services:
  gecar_app:
    build:
      context: .
      dockerfile: Dockerfile.gecars
    restart: always
    ports:
      - 3000:3000
    depends_on:
      - mongo
    command: /usr/app/initAndStartService.sh
  mongo:
    container_name: mongo
    image: mongo
    volumes:
      - ./data:/data/db
    ports:
      - "27017:27017"
    healthcheck:
      test: echo 'db.runCommand("ping").ok' | mongo mongo:27017/test --quiet 1
      interval: 3s
      timeout: 5s
      retries: 5
and, initAndStartService.sh
#!/bin/sh
curl mongo:27017
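If the app still races the database at startup, another option (not part of the original answer, just a hedged sketch assuming the official mongodb driver) is to retry the connection from Node.js until Mongo is ready:

// sketch: retry the connection a few times before giving up
const { MongoClient } = require('mongodb');

async function connectWithRetry(url, attempts = 10, delayMs = 3000) {
  for (let i = 1; i <= attempts; i++) {
    try {
      return await MongoClient.connect(url);
    } catch (err) {
      console.log(`mongo not ready (attempt ${i}/${attempts}), retrying...`);
      await new Promise(resolve => setTimeout(resolve, delayMs));
    }
  }
  throw new Error('could not connect to mongo');
}

connectWithRetry('mongodb://mongo:27017')
  .then(() => console.log('connected'))
  .catch(err => console.error(err));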

Related

Connect mongo and sapper server with docker

I am working on a Sapper / Svelte project and I need to build the Sapper project and connect it to a MongoDB (I need to start the Mongo service from docker-compose.yml).
At the moment I am trying to connect to the local Mongo on localhost:27017, but it can't establish the connection. What should I do?
Here is my docker-compose:
version: "3.4"
services:
  myapp:
    image: my_image
    deploy:
      update_config:
        delay: 30s
        parallelism: 1
        failure_action: rollback
    ports:
      - "3000:3000"
and here is my Dockerfile:
FROM node:lts-alpine
WORKDIR /app
COPY static static
COPY emails emails
COPY package.json .
ENV NODE_ENV production
RUN npm install
COPY __sapper__/build __sapper__/build
EXPOSE 3000
CMD ["node", "__sapper__/build/index.js"]
Also, what should I do to start the Mongo deployment directly from compose? I have Mongo running in Docker, but I want to start both directly from compose.
I think a mongo service should be added to the services section of docker-compose.yml, for example:
services:
  mongodb:
    image: mongo
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example
Then the Node application can access MongoDB by the service name (e.g. mongodb:27017).
I think this URL will help.
https://hub.docker.com/_/mongo
version: "3.4"
services:
  app:
    image: yourimage
    ports:
      - "3000:3000"
    environment:
      - MONGODB_URL=mongodb://yourip/yourdb
  mongodb:
    image: mongo
    restart: always
    ports:
      - "yourportsdb:yourportsdb"
It is not necessary to authenticate to Mongo with a user and password; if needed, pass the environment variables as suggested by @Jihoon Yeo.
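For the application side, a minimal sketch (my own addition, assuming the official mongodb driver and the MONGODB_URL variable from the compose example above) would read the URL from the environment and connect by service name:

// sketch: use the URL injected by docker-compose; the fallback uses the
// 'mongodb' service name from the compose file above (adjust to your setup)
const { MongoClient } = require('mongodb');

const url = process.env.MONGODB_URL || 'mongodb://mongodb:27017/yourdb';

MongoClient.connect(url)
  .then(client => console.log('connected to', url))
  .catch(err => console.error('mongo connection failed:', err));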

Can't connect to Postgis running in Docker from Geoserver running in another Docker container

I used kartoza's docker images for Geoserver and Postgis and started them in two docker containers using the provided docker-compose.yml:
version: '2.1'

volumes:
  geoserver-data:
  geo-db-data:

services:
  db:
    image: kartoza/postgis:12.0
    volumes:
      - geo-db-data:/var/lib/postgresql
    ports:
      - "25434:5432"
    env_file:
      - docker-env/db.env
    restart: on-failure
    healthcheck:
      test: "exit 0"
  geoserver:
    image: kartoza/geoserver:2.17.0
    volumes:
      - geoserver-data:/opt/geoserver/data_dir
    ports:
      - "8600:8080"
    restart: on-failure
    env_file:
      - docker-env/geoserver.env
    depends_on:
      db:
        condition: service_healthy
    healthcheck:
      test: curl --fail -s http://localhost:8080/ || exit 1
      interval: 1m30s
      timeout: 10s
      retries: 3
The referenced .env files are:
db.env
POSTGRES_DB=gis,gwc
POSTGRES_USER=docker
POSTGRES_PASS=docker
ALLOW_IP_RANGE=0.0.0.0/0
geoserver.env
GEOSERVER_DATA_DIR=/opt/geoserver/data_dir
ENABLE_JSONP=true
MAX_FILTER_RULES=20
OPTIMIZE_LINE_WIDTH=false
FOOTPRINTS_DATA_DIR=/opt/footprints_dir
GEOWEBCACHE_CACHE_DIR=/opt/geoserver/data_dir/gwc
GEOSERVER_ADMIN_PASSWORD=myawesomegeoserver
INITIAL_MEMORY=2G
MAXIMUM_MEMORY=4G
XFRAME_OPTIONS='false'
STABLE_EXTENSIONS=''
SAMPLE_DATA=false
GEOSERVER_CSRF_DISABLED=true
docker-compose up brings both containers up and running with no errors giving them names backend_db_1 (Postgis) and backend_geoserver_1 (Geoserver). I can access Geoserver running in backend_geoserver_1 under http://localhost:8600/geoserver/ as expected. I can connect an external, AWS-based Postgis as a data store to my docker-based Geoserver instance without any problems. I can also access the Postgis running in the docker container backend_db_1 from PgAdmin, with psql from the command line and from the Webstorm IDE.
However, if I try to use my Postgis running in backend_db_1 as a data store for my Geoserver running in backend_geoserver_1, I get the following error:
> Error creating data store, check the parameters. Error message: Unable
> to obtain connection: Cannot create PoolableConnectionFactory
> (Connection to localhost:25434 refused. Check that the hostname and
> port are correct and that the postmaster is accepting TCP/IP
> connections.)
So, my Geoserver in backend_geoserver_1 can connect to Postgis on AWS, but not to the one running in another docker container on the same localhost. The Postgis in backend_db_1 in its turn can be accessed from many other local apps and tools, but not from Geoserver running in a docker container.
Any ideas what I am missing? Thanks!
Just add network_mode to both the db and geoserver services in the YAML and set it to host:
network_mode: host
Note that this will ignore the expose option and will use the host network as the containers' network.

How do I get my app in a docker instance to talk to my database in another docker instance inside the same network?

Ok I give up! I spent far too much time on this:
So I want my app inside a docker container to talk to my postgres which is inside another container.
Docker-compose.yml
version: "3.8"
services:
  foodbudget-db:
    container_name: foodbudget-db
    image: postgres:12.4
    restart: always
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
      POSTGRES_DB: foodbudget
      PGDATA: /var/lib/postgresql/data
    volumes:
      - ./pgdata:/var/lib/postgresql/data
    ports:
      - 5433:5432
  web:
    image: node:14.10.1
    env_file:
      - .env
    depends_on:
      - foodbudget-db
    ports:
      - 8080:8080
    build:
      context: .
      dockerfile: Dockerfile
Dockerfile
FROM node:14.10.1
WORKDIR /src/app
ADD https://github.com/palfrey/wait-for-db/releases/download/v1.0.0/wait-for-db-linux-x86 /src/app/wait-for-db
RUN chmod +x /src/app/wait-for-db
RUN ./wait-for-db -m postgres -c postgresql://user:pass@foodbudget-db:5433 -t 1000000
EXPOSE 8080
But I keep getting this error when I build the Dockerfile, even though the database is up and running when I run docker ps. I tried connecting to the Postgres database from my host machine, and it connected successfully...
Temporary error (pausing for 3 seconds): PostgresError { error: Error(Io(Custom { kind: Other, error: "failed to lookup address information: Name does not resolve" })) }
Has anyone created an app that talks to a DB in another Docker instance before?
This line is the issue:
RUN ./wait-for-db -m postgres -c postgresql://user:pass@foodbudget-db:5433 -t 1000000
Within the Docker network you must use the container's internal port (5432) instead of the published one:
RUN ./wait-for-db -m postgres -c postgresql://user:pass@foodbudget-db:5432 -t 1000000
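The same rule applies to the application's own connection. As a hedged sketch (not from the original answer, and assuming the node-postgres 'pg' package), from inside the compose network you connect to the service name and the internal port:

// sketch: inside the compose network the host is the service name
// 'foodbudget-db' and the port is the container-internal 5432,
// not the published 5433
const { Client } = require('pg');

const client = new Client({
  connectionString: 'postgresql://user:pass@foodbudget-db:5432/foodbudget',
});

client.connect()
  .then(() => console.log('connected to postgres'))
  .catch(err => console.error('connection failed:', err));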

Nodejs application docker unable to connect to mongodb docker container

I have a Node.js application which is dockerized and needs a replicated MongoDB database. I have built my replicated MongoDB with docker-compose and it is working just fine. If I run the command docker inspect MongoDB-primary | grep IPAddress it prints:
"IPAddress": "",
"IPAddress": "172.18.0.2",
Now in my application I use this IP in the Mongo connection string (with the protocol prefix, of course), but the application cannot connect to MongoDB and throws this error message (the application is also a docker container):
message: 'failed to connect to server [172.18.0.2:27017] on first connect [MongoNetworkError: connection 1 to 172.18.0.2:27017 timed out]',
Here is my MongoDB docker-compose file:
version: '2'

services:
  mongodb-primary:
    image: 'bitnami/mongodb:latest'
    environment:
      - MONGODB_REPLICA_SET_MODE=primary
    volumes:
      - 'mongodb_master_data:/bitnami'

  mongodb-secondary:
    image: 'bitnami/mongodb:latest'
    depends_on:
      - mongodb-primary
    environment:
      - MONGODB_REPLICA_SET_MODE=secondary
      - MONGODB_PRIMARY_HOST=mongodb-primary
      - MONGODB_PRIMARY_PORT_NUMBER=27017

  mongodb-arbiter:
    image: 'bitnami/mongodb:latest'
    depends_on:
      - mongodb-primary
    environment:
      - MONGODB_REPLICA_SET_MODE=arbiter
      - MONGODB_PRIMARY_HOST=mongodb-primary
      - MONGODB_PRIMARY_PORT_NUMBER=27017

volumes:
  mongodb_master_data:
    driver: local
and my Node.js application Dockerfile is:
FROM node:6.0
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm install --only=production
# Bundle app source
COPY . .
EXPOSE 3001
CMD [ "npm", "start" ]
How can I fix this?
Your docker-compose does not automatically expose TCP ports to the outside world, such as your host PC (I assume your Node.js app runs on the host and is not included in the docker-compose). This is the behavior of Docker bridge networks; you can read more at https://docs.docker.com/network/bridge/
You have to do one of the following:
Include your NodeJs container into docker-compose
or
Expose ports from docker-compose.yml
I had the same issue and adding the ports fixed it for me.
Make sure your connection url includes the image name:
mongodb://mongo:27017/ and not localhost.
mongo:
  image: mongo
  expose:
    - 27017
  ports:
    - "27017:27017"
  volumes:
    - ./data/db:/data/db
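If the app joins the same compose network as the replica set from the question, a connection-string sketch (my own addition, assuming the official mongodb driver; the replica set name is a placeholder, use whatever your MONGODB_REPLICA_SET_NAME is actually set to) would address the members by service name rather than container IP:

// sketch: connect by compose service names instead of 172.18.x.x IPs;
// 'replicaset' is a placeholder replica set name
const { MongoClient } = require('mongodb');

const url =
  'mongodb://mongodb-primary:27017,mongodb-secondary:27017/mydb?replicaSet=replicaset';

MongoClient.connect(url)
  .then(client => console.log('connected to the replica set'))
  .catch(err => console.error('connection failed:', err));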

Cannot connect to MongoDB via node.js in Docker

My node.js Express app cannot connect to MongoDB running in Docker. I'm not that familiar with Docker.
node.js connection:
import mongodb from 'mongodb';
...
mongodb.MongoClient.connect('mongodb://localhost:27017', ... );
Dockerfile:
FROM node:argon
RUN mkdir /app
WORKDIR /app
COPY package.json /app
RUN npm install
COPY . /app
EXPOSE 3000
CMD ["npm", "start"]
docker-compose.yml
version: “2”
services:
web:
build: .
volumes:
— ./:/app
ports:
— “3000:3000”
links:
— mongo
mongo:
image: mongo
ports:
— “27017:27017”
Build command: docker build -t NAME .
Run command: docker run -ti -p 3000:3000 NAME
Connection error:
[MongoError: failed to connect to server [localhost:27017] on first connect [MongoError: connect ECONNREFUSED 127.0.0.1:27017]]
name: 'MongoError',
message: 'failed to connect to server [localhost:27017] on first connect [MongoError: connect ECONNREFUSED 127.0.0.1:27017]'
Try:
mongodb.MongoClient.connect('mongodb://mongo:27017', ... );
Change your docker-compose.yml:
version: "2"
services:
  web:
    build: .
    volumes:
      - ./:/app
    ports:
      - "3000:3000"
    links:
      - mongo
  mongo:
    image: mongo
    ports:
      - "27017:27017"
And use some docker compose commands:
docker-compose down
docker-compose build
docker-compose up -d mongo
docker-compose up web
Try this.
When using linked docker containers you should use the name of the container. In this case, your connection to MongoDB should be mongodb.MongoClient.connect('mongodb://mongo:27017', ... ); instead of mongodb.MongoClient.connect('mongodb://localhost:27017', ... );. The reason for changing it to mongo is that you used the links attribute pointing to mongo in your docker-compose.yml. That results in a hostname of mongo in the /etc/hosts of the web docker container. Reference: linking-containers.
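To check that the linked hostname really is available inside the web container, a small sketch (my own addition) using Node's built-in dns module can confirm the resolution before you debug the driver itself:

// sketch: verify that 'mongo' resolves inside the web container
// (the links entry adds it to /etc/hosts)
const dns = require('dns');

dns.lookup('mongo', (err, address) => {
  if (err) {
    console.error('mongo does not resolve:', err.code);
  } else {
    console.log('mongo resolves to', address);
  }
});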
The docker-compose.yml also seems to be lacking indentation: the mongo attribute should be at the same level as web.
version: '2'
services:
  web:
    build: .
    volumes: ['./:/app']
    ports: [ '3000:3000' ]
    links: [ mongo ]
  mongo:
    image: mongo
    ports: [ '27017:27017' ]
I tried your configuration with my Docker; what I've done is update the docker-compose.yml, then run docker-compose build and docker-compose up. Logs of my local run.
I am not sure if you still have this question, but the datasources.json should be:
"host": "mongo"
rather than "localhost".
In my logs I see:
mongo | NETWORK [listener] connection accepted from 172.22.0.3:47880 #1 (1 connection now open)
As you can see, docker-compose puts mongo on another virtual LAN. The IP address 172.22.0.0 is an internal address used by the Docker daemon to route to the docker-compose containers, so localhost is no longer in the game.
At least, it works for me.
datasources.json
"mongoDS": {
"host": "mongo",
"port": 27017,
...
In my case it works like this: just link to the container from the command line. db is my running database container:
sudo docker run -it --link db:db1 --publish 4000-4006:4000-4006 --name backend backend:latest
