Unable to connect a dockerised keystone container with a dockerised mongo container - node.js

I am trying to get a dockerised KeystoneJS instance to talk to a dockerised MongoDB instance, and I am struggling to see where I am going wrong in linking them together. I have gone through the Docker docs and similar online examples of what I am trying to do, but I am still unable to get the two to talk to each other.
The main issues are that the app either cannot find 'localhost:27017', or it throws: Error: Invalid mongodb uri. Must begin with "mongodb://" Received: "mongodb://mongo:27017/"
Below are the relevant files:
Dockerfile for keystone
FROM node:6.9.1
RUN mkdir -p /docker
WORKDIR /docker
COPY . /docker
RUN npm install --no-optional
CMD ["node", "keystone.js"]
docker-compose.yml
version: '2'
services:
  keystone:
    image: keystone-test
    ports:
      - "3000:3000"
    depends_on:
      - mongo
    networks:
      - localnetwork
    environment:
      - MONGO_URI="mongodb://mongo:27017/"
  mongo:
    image: mongo:3
    command: mongod --smallfiles
    ports:
      - "27017:27017"
    networks:
      - localnetwork
networks:
  localnetwork:
keystone.js
var mongoose = require('mongoose');
mongoose.Promise = global.Promise;
mongoose.connect(process.env.MONGO_URI);
// ... and the usual keystone setup

It took me a long time to figure this out. In your docker-compose.yml, the MONGO_URI value shouldn't be wrapped in double quotes: with the list-style environment syntax, Compose treats the quotes as part of the value, so the URI literally begins with a " character rather than with mongodb://. Use MONGO_URI=mongodb://mongo:27017/ instead of MONGO_URI="mongodb://mongo:27017/".
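For reference, the corrected environment section of the keystone service would look like this (only the quotes change):

environment:
  - MONGO_URI=mongodb://mongo:27017/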

Related

Docker - Redis connect ECONNREFUSED 127.0.0.1:6379

I know this is a common error, but I have literally spent the entire day trying to get past it, trying everything I could find online, and I can't find anything that works for me.
I am very new to Docker and using it for my NodeJS + Express + Postgresql + Redis application.
Here is what I have for my docker-compose file:
version: "3.8"
services:
db:
image: postgres:14.1-alpine
restart: always
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=admin
ports:
- "5432:5432"
volumes:
- db:/var/lib/postgresql/data
- ./db/init.sql:/docker-entrypoint-initdb.d/create_tables.sql
cache:
image: redis:6.2-alpine
restart: always
ports:
- "6379:6379"
command: redis-server --save 20 1 --loglevel warning
volumes:
- cache:/data
api:
container_name: api
build:
context: .
# target: production
# image: api
depends_on:
- db
- cache
ports:
- 3000:3000
environment:
NODE_ENV: production
DB_HOST: db
DB_PORT: 5432
DB_USER: postgres
DB_PASSWORD: admin
DB_NAME: postgres
REDIS_HOST: cache
REDIS_PORT: 6379
links:
- db
- cache
volumes:
- ./:/src
volumes:
db:
driver: local
cache:
driver: local
Here is the upper part of my app.js:
const express = require('express')
const app = express()
const cors = require('cors')
const redis = require('redis')
const client = redis.createClient({
  host: 'cache',
  port: 6379,
  legacyMode: true // Also tried without this line, same behavior
})
client.connect()
client.on('connect', () => {
  log('Redis connected')
})
app.use(cors())
app.use(express.json())
And my Dockerfile:
FROM node:16.15-alpine3.14
WORKDIR ./
COPY package.json ./
RUN npm install
COPY ./ ./
EXPOSE 3000 6379
CMD [ "npm", "run", "serve" ]
npm run serve is nodemon ./app.js.
I also already tried to prune the system and network.
What am I missing? Help!
There are two things to keep in mind here.
First of all, the Docker network:
Containers are exposed to your localhost system, so as the host ("server") you can access each of them directly through the browser or the command line. But you can only do that because they are published to a default network that the host system can reach (which you can inspect, by the way, with docker network inspect).
The deployed containers are not exposed to each other by default, so you need to define a virtual network and attach them to it so they can talk to each other through their ports or hostnames, where the hostname will be the container_name.
So you need to do two things:
Add a container name to the redis service in the compose file, just like you did on the API
Create a network and bind all the services to it; one way of doing that is:
version: "3.8"
Network:
my-network:
name: my-network
services:
....
cache:
container_name: cache
image: redis:6.2-alpine
restart: always
ports:
- "6379:6379"
command: redis-server --save 20 1 --loglevel warning
volumes:
- cache:/data
networks: # add it in all containers that communicate together
- my-network
Then, and only then, can you use the Redis container name as the host, since the Docker network creates a hostname for each service from its container name.
When you deploy the whole compose file later, the containers will be created and all joined to the network by default on startup, and that will allow your API app to communicate with the Redis container via the container name as the hostname.
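As a sketch of the client side, assuming node-redis v4 (the version where client.connect() exists): in v4, top-level host/port options are silently ignored and must go under the socket option (or a url), which is itself a common cause of ECONNREFUSED 127.0.0.1:6379.

// Sketch, assuming node-redis v4: host/port belong under `socket`.
const redis = require('redis')

const client = redis.createClient({
  socket: {
    host: process.env.REDIS_HOST || 'cache', // the service/container name
    port: Number(process.env.REDIS_PORT) || 6379
  }
})

client.on('error', (err) => console.error('Redis error:', err))

client.connect()
  .then(() => console.log('Redis connected'))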
Refer to these resources for more details:
Networking on Docker Compose
Docker Network Overview
An unrelated side note:
I personally used redis from npm for some testing projects, but I found that ioredis was much better with TypeScript projects and more predictable in its behavior.
To avoid problems with Redis, make sure to create a password and use it to connect; sometimes Redis unexpectedly treats the client as a read-only client and fails to find a read replica, and adding the password solved that for me.
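A hedged sketch of that password setup (the password value and REDIS_URL variable here are hypothetical):

// Hypothetical example: the server was started with
//   redis-server --requirepass yourpassword
// so the client authenticates via the connection URL.
const redis = require('redis')

const client = redis.createClient({
  url: process.env.REDIS_URL || 'redis://:yourpassword@cache:6379'
})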

MongoDB Database data getting deleted while using Docker on Digital Ocean droplet

I am hosting my website http://apgiiit.com/ on the Digital Ocean cloud using Docker containers. The site is built using Express and MongoDB. But it seems that when I run the docker-compose down command, all of my database data gets wiped out somehow. I have no idea why this is happening. Any help would be greatly appreciated. Here are my docker-compose and Docker files for the project.
version: '3'
services:
  app:
    container_name: express_blog
    restart: always
    build: .
    ports:
      - '80:5000'
    links:
      - mongo
  mongo:
    container_name: mongo
    image: mongo
    ports:
      - '27017:27017'
    volumes:
      - ./mongodb:/data/db/
volumes:
  mongodb:
    external: true
Here's the Dockerfile used to run Express.
FROM node:12
WORKDIR /usr/src/app
COPY package.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "run", "dev"]
I am using an external volume for storing the MongoDB data. I've created a separate volume with the docker volume command and I'm referencing that volume in the docker-compose file. What am I doing wrong here?
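Note that, as written, the mongo service mounts the bind path ./mongodb rather than the declared external volume, so the external volume is never actually used. A minimal sketch of mounting the named volume instead (assuming the external volume was created as mongodb):

  mongo:
    container_name: mongo
    image: mongo
    ports:
      - '27017:27017'
    volumes:
      - mongodb:/data/db # the named volume, not the ./mongodb bind path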

Connect mongo and sapper server with docker

I am working in a sapper/svelte project and I need to build the sapper project and connect it to a MongoDB instance (I need to start mongo from the docker-compose.yml as well).
At the moment I am trying to connect the db to the local mongo on localhost:27017, but it can't establish the connection. What should I do?
Here is my docker-compose:
version: "3.4"
services:
  myapp:
    image: my_image
    deploy:
      update_config:
        delay: 30s
        parallelism: 1
        failure_action: rollback
    ports:
      - "3000:3000"
and here is my Dockerfile:
FROM node:lts-alpine
WORKDIR /app
COPY static static
COPY emails emails
COPY package.json .
ENV NODE_ENV production
RUN npm install
COPY __sapper__/build __sapper__/build
EXPOSE 3000
CMD ["node", "__sapper__/build/index.js"]
Also, what should I do to start the mongo deployment directly from compose? I have mongo in Docker, but I need to start both directly from compose.
I think a mongo service should be added to the services of docker-compose.yml, for example:
services:
  mongodb:
    image: mongo
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example
Then the node application can access mongodb by the service name (e.g. mongodb:27017).
I think this URL will help.
https://hub.docker.com/_/mongo
version: "3.4"
services:
app:
image: yourimage
ports:
- "3000:3000"
environment:
- MONGODB_URL=mongodb://yourip/yourdb
mongodb:
image: mongo
restart: always
ports:
- "yourportsdb:yourportsdb"
It is not necessary to authenticate mongo with a user and password; optionally, pass the environment variables as suggested by @Jihoon Yeo.
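A minimal app-side sketch (assuming mongoose, and the MONGODB_URL variable from the compose file above, with the service name mongodb as the host):

// Sketch: connect using the compose service name as the hostname,
// e.g. MONGODB_URL=mongodb://mongodb:27017/yourdb
const mongoose = require('mongoose');

mongoose.connect(process.env.MONGODB_URL)
  .then(() => console.log('MongoDB connected'))
  .catch((err) => console.error('MongoDB connection error:', err));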

Connect docker compose containers without links

https://docs.docker.com/compose/networking/
In the official Docker document above, I found this part:
version: "3"
services:
web:
build: .
ports:
- "8000:8000"
db:
image: postgres
ports:
- "8001:5432"
Each container can now look up the hostname web or db and get back the appropriate container’s IP address. For example, web’s application code could connect to the URL postgres://db:5432 and start using the Postgres database.
So I understood from this paragraph that I can connect Docker containers to each other without explicit links: or networks: entries, because the docker-compose.yml snippet above has neither a links: nor a networks: part, and the document says web's application code could connect to the URL postgres://db:5432.
So I tried to test a simple docker-compose setup with a nodejs express app and mongodb together, using the approach above. I thought I could connect to mongodb in the express app with just mongodb://mongo:27017/myapp, but I cannot connect to mongodb from the express container. I think I followed Docker's official manual, but I don't know why it's not working. Of course I can connect to mongodb using links: or networks:, but I heard links is deprecated, and I cannot find the proper way to use networks:.
I think I might have misunderstood something; please correct me.
Below is my docker-compose.yml
version: '3'
services:
  app:
    container_name: node
    restart: always
    build: .
    ports:
      - '3000:3000'
  mongo:
    image: mongo
    ports:
      - '27017:27017'
In the express app, I connect to mongodb with:
mongoose.connect('mongodb://mongo:27017/myapp', {
  useMongoClient: true
});
// also doesn't work with mongodb://mongo/myapp
Plus, my Dockerfile:
FROM node:10.17-alpine3.9
ENV NODE_ENV development
WORKDIR /usr/src/app
COPY ["package*.json", "npm-shrinkwrap.json*", "./"]
RUN rm -rf node_modules
RUN apk --no-cache --virtual build-dependencies add \
    python \
    make \
    g++ \
    && npm install \
    && apk del build-dependencies
COPY . .
EXPOSE 3000
CMD npm start
If you want to connect to a mongo instance running on the local host, you have to select host network mode. docker-compose.yml file content:
version: '2.1'
services:
  z2padmin_docker:
    image: z2padmin_docker
    build: .
    environment:
      NODE_ENV: production
    volumes: [/home/ankit/Z2PDATAHUB/uploads:/mnt/Z2PDATAHUB/uploads]
    ports:
      - 5000:5000
    network_mode: host
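With network_mode: host the container shares the host's network stack, so the app can reach a mongo running on the host machine as plain localhost. A minimal sketch (the database name mydb is a placeholder):

// Sketch: under network_mode: host, the host's mongo is reachable at localhost.
const mongoose = require('mongoose');
mongoose.connect('mongodb://localhost:27017/mydb'); // "mydb" is hypothetical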

docker-compose wait-for.sh fails for waiting mongodb

I am trying to set up a Docker network with simple nodejs and mongodb services by following this guide; however, when the nodejs service starts up it fails because it can't connect to mongodb.
docker-compose.yml
version: "3"
services:
nodejs:
container_name: nodejs # How the container will appear when listing containers from the CLI
image: node:10 # The <container-name>:<tag-version> of the container, in this case the tag version aligns with the version of node
user: node # The user to run as in the container
working_dir: "/app" # Where to container will assume it should run commands and where you will start out if you go inside the container
networks:
- app # Networking can get complex, but for all intents and purposes just know that containers on the same network can speak to each other
ports:
- "3000:3000" # <host-port>:<container-port> to listen to, so anything running on port 3000 of the container will map to port 3000 on our localhost
volumes:
- ./:/app # <host-directory>:<container-directory> this says map the current directory from your system to the /app directory in the docker container
command: # The command docker will execute when starting the container, this command is not allowed to exit, if it does your container will stop
- ./wait-for.sh
- --timeout=15
- mongodb:27017
- --
- bash
- -c
- npm install && npm start
env_file: ".env"
environment:
- MONGO_USERNAME=$MONGO_USERNAME
- MONGO_PASSWORD=$MONGO_PASSWORD
- MONGO_HOSTNAME=mongodb
- MONGO_PORT=$MONGO_PORT
- MONGO_DB=$MONGO_DB
depends_on:
- mongodb
mongodb:
image: mongo:4.1.8-xenial
container_name: mongodb
restart: unless-stopped
env_file: .env
environment:
- MONGO_INITDB_ROOT_USERNAME=$MONGO_USERNAME
- MONGO_INITDB_ROOT_PASSWORD=$MONGO_PASSWORD
volumes:
- dbdata:/data/db
networks:
- app
networks:
app:
driver: bridge
volumes:
dbdata:
app.js
const express = require('express');
var server = express();
var bodyParser = require('body-parser');

// getting-started.js
var mongoose = require('mongoose');
mongoose.connect('mongodb://simpleUser:123456@mongodb:27017/simpleDb', { useNewUrlParser: true });

server.listen(3000, function() {
  console.log('Example app listening on port 3000');
});
Here is the common wait-for.sh script that I was using. https://github.com/eficode/wait-for/blob/master/wait-for
docker logs -f nodejs gives:
Operation timed out
Thanks for your help!
In this case I believe the issue is that you are using the wait-for.sh script, which makes use of the netcat command (see https://github.com/eficode/wait-for/blob/master/wait-for#L24), but the node:10 image does not have netcat installed.
I would suggest either creating a custom image based on the node:10 image and adding netcat, or using a different approach (preferably a nodejs-based solution) for checking whether mongodb is accessible, as sketched below.
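A hedged sketch of such a nodejs-based check (the waitFor helper name and the retry counts are made up for illustration; it just retries a raw TCP connection, which is roughly what wait-for.sh does with netcat):

// Hypothetical wait-for replacement in plain Node.js, no netcat needed:
// retries a TCP connection to the given host/port before starting the app.
const net = require('net');

function waitFor(host, port, retries = 15, delayMs = 1000) {
  return new Promise((resolve, reject) => {
    const attempt = (left) => {
      const socket = net.connect(port, host, () => {
        socket.end();
        resolve();
      });
      socket.on('error', () => {
        socket.destroy();
        if (left <= 0) return reject(new Error('timed out waiting for ' + host + ':' + port));
        setTimeout(() => attempt(left - 1), delayMs);
      });
    };
    attempt(retries);
  });
}

waitFor('mongodb', 27017)
  .then(() => console.log('mongodb is reachable'))
  .catch((err) => { console.error(err.message); process.exit(1); });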
A sample Dockerfile for creating your own custom image would look something like this
FROM node:10
RUN apt update && apt install -y netcat
Then you can build this image by replacing image: node:10 with
build:
  dockerfile: Dockerfile
  context: .
and you should be fine
I found the problem: the node:10 image doesn't have the nc command installed, so the script was failing. I switched to the node:10-alpine image and it worked.
