I am trying to create a local dev environment with Node.js (node:latest) and MongoDB (mongo:latest) on Docker, and I use Mongoose to connect to MongoDB. I would like to set up a very simple environment.
My docker-compose.yml
version: '3'
services:
  server:
    build: .
    container_name: cms_webserver
    ports:
      - "4000:4000"
    volumes:
      - ./api:/app/api
      - ./node_modules:/app/node_modules
    links:
      - mongo
  mongo:
    container_name: cms_mongo
    image: mongo
    environment:
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=root
      - MONGO_INITDB_DATABASE=cms
    ports:
      - "27017:27017"
    volumes:
      - ./docker/mongod.conf:/etc/mongod.conf
      - ./data:/data/db
  adminmongo:
    image: mrvautin/adminmongo
    ports:
      - "1234:1234"
mongod.conf
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log
storage:
  dbPath: /var/lib/mongo
  journal:
    enabled: true
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongodb/mongod.pid  # location of pidfile
  timeZoneInfo: /usr/share/zoneinfo
security:
  authorization: disabled
net:
  port: 27017
  bindIp: 127.0.0.1  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.
Whenever I rebuild the containers I remove the contents of data/. Every time I can see that the new user is added successfully:
Successfully added user: {
    "user" : "root",
    "roles" : [
        {
            "role" : "root",
            "db" : "admin"
        }
    ]
}
When I try to connect to MongoDB in Node I use
mongoose.connect('mongodb://mongo')
and it connects successfully.
When I try to connect to the default DB cms with
mongoose.connect('mongodb://root:root@mongo/cms')
I get an error:
SCRAM-SHA-1 authentication failed for root on cms from client
172.18.0.4:59142 ; UserNotFound: Could not find user root@cms
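For context, the root user created through MONGO_INITDB_ROOT_USERNAME lives in the admin database, not in cms, so a connection string that authenticates against cms cannot find it. Below is a minimal sketch of pointing Mongoose at the admin auth database instead; the authSource option is an assumption about the fix, while the hostname and credentials come from the compose file above.

const mongoose = require('mongoose');

// Sketch: authenticate root against the admin database (authSource=admin)
// while still using cms as the working database.
mongoose.connect('mongodb://root:root@mongo:27017/cms?authSource=admin')
  .then(() => console.log('connected to cms as root'))
  .catch((err) => console.error('connection failed:', err.message));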
My first question is: how can I disable security? I found the option security: authorization: disabled, and even after setting it I still have the problem.
The second question is in the title.
I run docker ps and check which container belongs to mongo, then use docker exec -it XXX bash to log into it. With the simple command mongo I can log in to the Mongo shell. I can also open the shell with
mongo -u root -proot admin, but I cannot with mongo -u root -p admin (mongo does not ask me for a password).
As admin (root/root) I can create a new DB by running use cms; I can also add a record and check that the DB exists.
I can create a new user, let's say:
db.createUser({user:"test",pwd:"test",roles:[{role:"dbOwner",db:"cms"}]});
I exit and try to log in again to the DB cms I created, and get an error:
MongoDB shell version v3.6.3
connecting to: mongodb://127.0.0.1:27017/cms
MongoDB server version: 3.6.3
2018-04-03T12:13:41.428+0000 E QUERY [thread1] Error: Authentication failed. :
DB.prototype._authOrThrow@src/mongo/shell/db.js:1608:20
@(auth):6:1
@(auth):1:2
exception: login failed
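As a side note, a user created while use cms is active is stored in the cms database, so a client has to authenticate it against cms. A hedged sketch of what that would look like from Mongoose (test/test are the credentials created above; the explicit authSource is my addition):

const mongoose = require('mongoose');

// Sketch: authenticate the "test" user against the cms database where it was created.
mongoose.connect('mongodb://test:test@mongo:27017/cms?authSource=cms')
  .then(() => console.log('connected to cms as test'))
  .catch((err) => console.error('connection failed:', err.message));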
Is there anyone who knows what's going on and could explain what I am doing wrong, or even give me ready-to-use examples of how to set up MongoDB locally? I am not very strong in DevOps and I am totally lost.
Related
I am using Docker to connect Node and Mongo, and I am trying to insert data into a database. All the containers are up and running, and it runs perfectly on my local machine, but on the server I get the following error.
MongoServerError: not authorized on app to execute command { insert: "users", documents: [ { username: "riwaj", password: "$2a$12$C3hpChig42coIoMEbtegsepw7tJeflHqpW7x.0/jPseX6G5KUXWO.", _id: ObjectId('63d41dc11d038db2b950a744'), __v: 0 } ], ordered: true, lsid:....
This clearly states that the user riwaj is not allowed to perform the insert operation on the database. However, I have defined the necessary attributes required for the mongo container, as mentioned in the documentation, in my docker-compose file:
MONGO_INITDB_ROOT_USERNAME=riwaj
MONGO_INITDB_ROOT_PASSWORD=dummypasswordxx
The user is created as per the credentials; I checked it by going into the interactive shell of the container and executing the following command:
mongosh -u riwaj -password
However, even here, if I try to insert data into a database using the mongo insert() function, I get a similar error related to authorization.
For reference, here are my docker-compose files:
docker-compose.yml
version: "3"
services:
nginx:
image: nginx:stable-alpine
volumes:
- ./nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
node-app:
build: .
environment:
- PORT=3000
depends_on:
- mongo
#adding mango container
mongo: #service name for mongo
image: mongo
environment:
- MONGO_INITDB_ROOT_USERNAME=riwaj
- MONGO_INITDB_ROOT_PASSWORD=mypassword
volumes:
- mongo-db:/data/db #Named volume for data persistance
#adding redis container
redis:
image: redis
volumes:
mongo-db:
docker-compose.prod.yml
version: "3"
services:
nginx:
ports:
- "80:80"
node-app:
build:
context: .
args:
NODE_ENV: production
environment:
- NODE_ENV=production
- MONGO_USER=${MONGO_USER}
- MONGO_PASSWORD=${MONGO_PASSWORD}
- SESSION_SECRET=${SESSION_SECRET}
command: node index.js
mongo:
environment:
- MONGO_INITDB_ROOT_USERNAME=${MONGO_INITDB_ROOT_USERNAME}
- MONGO_INITDB_ROOT_PASSWORD=${MONGO_INITDB_ROOT_PASSWORD}
- MONGO_INITDB_DATABASE= app
The issue is clearly with authorization, but shouldn't the root user be authorized to do all read-write operations? Here is the link to the repo where the project is pushed: https://github.com/Riwajchalise/node-docker. The endpoint for the signup API is mentioned in the README file.
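One detail worth checking (my assumption, not confirmed by the repo): MONGO_INITDB_ROOT_USERNAME creates the root user in the admin database, so the application has to authenticate against admin even when it works with the app database; connecting without credentials, or with the wrong authSource, typically produces exactly this "not authorized" error. A hedged sketch using the MONGO_USER / MONGO_PASSWORD variables already passed to node-app in docker-compose.prod.yml:

const mongoose = require('mongoose');

const { MONGO_USER, MONGO_PASSWORD } = process.env;

// Sketch: authenticate the root user against the admin database while using
// the "app" database; "mongo" is the compose service name.
mongoose.connect(
  `mongodb://${MONGO_USER}:${MONGO_PASSWORD}@mongo:27017/app?authSource=admin`
);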
It would be really helpful if you could contribute in any way.
Dockerfile for my Node app:
FROM node:latest
COPY . .
RUN npm install
EXPOSE 5000
CMD ["npm", "start"]
docker-compose file:
version: '3'
services:
  pern-todo-backend:
    image: pern-todo-backend
    ports:
      - 5000:5000
    command: bash -c 'while !</dev/tcp/db/5432; do sleep 1; done; npm start'
    depends_on:
      - db
    environment:
      - DATABASE_URL=postgres://postgres:*****@db:5432/pern
      - PORT=5000
  db:
    image: postgres
    ports:
      - 5432:5432
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=****
      - POSTGRES_DB=pern
When I try to hit the endpoint from Postman, I get:
{
    "errno": -111,
    "code": "ECONNREFUSED",
    "syscall": "connect",
    "address": "172.26.0.3",
    "port": 5432
}
I also tried updating my pg Pool hostname to the name of my container:
const pool = new Pool({
  user: 'postgres',
  password: 'subh1994',
  host: 'localhost',
  port: 5432,
  database: 'pern'
})
I'm new to Docker, please help. Thanks.
Can you check the container name of the db service with docker ps?
It can be different from the service name (db); you can try replacing it in the DATABASE_URL connection string.
One odd thing is that db is getting resolved to 172.26.0.3; if db is the correct container name, you can check the logs with docker logs db to get details on what might be wrong.
On a side note, if you don't want the database exposed to the host, you can skip the ports mapping in the db service.
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
https://docs.docker.com/compose/networking/
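Building on the docs quote above, here is a minimal hedged sketch of the Pool pointed at the compose service instead of localhost; it reuses the DATABASE_URL environment variable that docker-compose already passes to pern-todo-backend:

const { Pool } = require('pg');

// DATABASE_URL is postgres://postgres:...@db:5432/pern in docker-compose,
// so the hostname "db" resolves to the Postgres container rather than the
// backend container itself.
const pool = new Pool({ connectionString: process.env.DATABASE_URL });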
I am currently working on an Angular app with a REST API (Express, Node.js) and PostgreSQL. Everything worked well when hosted on my local machine. After testing, I moved the images to an Ubuntu server so the app can be hosted on an external port. I am able to access the Angular frontend using https://serveripaddress:80, but when trying to log in, the API is not connecting to PostgreSQL. I am getting an error message: ERR_CONNECTION_REFUSED. Here is my docker-compose file:
version: '3.0'
services:
  db:
    image: postgres:9.6-alpine
    environment:
      POSTGRES_DB: myDatabase
      POSTGRES_PASSWORD: myPwd
      POSTGRES_PORT: 5432
      POSTGRES_HOST: db
    ports:
      - 5434:5432
    restart: always
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
  backend: # name of the second service
    image: myid/nodeapi
    ports:
      - 3000:3000
    environment:
      POSTGRES_DB: myDatabase
      POSTGRES_PASSWORD: myPwd
      POSTGRES_PORT: 5432
      POSTGRES_HOST: db
    depends_on:
      - db
    command: bash -c "sleep 20 && node server.js"
  myapp-portal:
    image: myId/angular-app
    ports:
      - "80:80"
    depends_on:
      - backend
volumes:
  postgres-data:
The code to connect to the database:
const { Client } = require('pg')
const client = new Client({
  database: process.env.POSTGRES_DB,
  user: 'postgres',
  password: process.env.POSTGRES_PASSWORD,
  host: process.env.POSTGRES_HOST,
  port: process.env.POSTGRES_PORT
})
client.connect()
  .then(() => {
    console.log("db connected");
  })
and the docker-compose log for backend:
backend_1 | db connected
When I exec into the database container and connect with psql, I see that my database is created (I used pg_dump manually) with all the tables and data. My guess is that Node.js is connecting to the default Postgres database created at installation time. I had the same issue on my local machine, but I resolved it by creating a new server group in pgAdmin 4 and creating a new DB on port 5434. I prefer not to do this on the server as it defeats the purpose of Docker. Another thought is that perhaps Node.js is attempting to connect to the database before it is up; that is the reason I added 'sleep 20', which worked on my local machine. Any thoughts on how I can fix this? TIA!
If you want to wait for the availability of a host and TCP port, you can use this script: https://github.com/vishnubob/wait-for-it
In your Dockerfile you can copy this file into the container and make it executable:
RUN chmod +x wait-for-it.sh
Then in your docker-compose file, run this script in the service that should wait:
entrypoint: bash -c "./wait-for-it.sh --timeout=0 service_name:service_port && node server.js"
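As an alternative to an external wait script (not part of the answer above, just a sketch), the same effect can be achieved by retrying the connection from Node itself, reusing the pg Client configuration from the question:

const { Client } = require('pg')

// Retry the connection a few times instead of a fixed "sleep 20".
async function connectWithRetry(retries = 10, delayMs = 3000) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    const client = new Client({
      database: process.env.POSTGRES_DB,
      user: 'postgres',
      password: process.env.POSTGRES_PASSWORD,
      host: process.env.POSTGRES_HOST,
      port: process.env.POSTGRES_PORT
    })
    try {
      await client.connect()
      console.log("db connected")
      return client
    } catch (err) {
      console.log(`connection attempt ${attempt} failed: ${err.message}`)
      await new Promise(resolve => setTimeout(resolve, delayMs))
    }
  }
  throw new Error('could not connect to Postgres')
}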
I am facing a weird issue connecting to MongoDB, which runs in a separate container from my Node.js container; it displays the following error while trying to connect to MongoDB.
My Dockerfile
FROM node:latest
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8000
CMD ["npm","start"]
docker-compose
version: '3'
services:
  web:
    build: .
    ports:
      - "8000:8000"
    links:
      - mongo
      - redis
  mongo:
    image: mongo
    ports:
      - "49155:49155"
  redis:
    image: "redis:alpine"
mongo config
mongoose: { // MongoDB
  // uri: mongodb://username:password@host:port/database?options
  uri: `mongodb://localhost:27017/${DB_NAME}`,
  options: {
  },
  seed: {
    path: '/api/models/seeds/',
    list: [
      {
        file: 'user.seed',
        schema: 'User',
        plant: 'once' // once - always - never
      },
      {
        file: 'example.seed',
        schema: 'Example',
        plant: 'once'
      }
    ]
  },
},
Issue
(screenshot of the MongoDB connection error omitted)
I have only just started studying Docker, please help me.
On creating a new container, Docker attaches it to a default internal bridge network.
See https://docs.docker.com/network/ for the available network drivers.
To make MongoDB reachable on localhost, you will have to set network_mode: "host" in the mongo section of docker-compose.
indrajeet's answer will work, but since you are venturing into docker-compose, it is better to treat each of your services/containers as a host. In your case, the error occurs because your web app is trying to connect to a mongo service running on its own (web) localhost.
Instead, you should connect to the mongo service/container; docker-compose conveniently lets your web container reach it via the hostname "mongo" (the service name you specified in the docker-compose YAML).
My answer is to change the uri line to
uri: `mongodb://mongo:27017/${DB_NAME}`,
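To make that concrete, a small hedged sketch of the connection from the web container; DB_NAME is a stand-in for whatever database name the config above uses:

const mongoose = require('mongoose');

const DB_NAME = process.env.DB_NAME || 'mydb'; // placeholder, adjust to your config

// "mongo" is the service name from docker-compose, so Docker's internal DNS
// resolves it to the mongo container from inside the "web" container.
mongoose.connect(`mongodb://mongo:27017/${DB_NAME}`);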
My problem is that I have a script that should scrape data and put it into a Postgres database; however, it has a problem reaching the Postgres container.
When I run docker-compose, here is the result:
Name Command State Ports
------------------------------------------------------------------------------------------
orcsearch_dev-db_1 docker-entrypoint.sh postgres Up 0.0.0.0:5432->5432/tcp
orcsearch_flask_1 gunicorn wsgi:application ... Up 0.0.0.0:80->80/tcp, 8000/tcp
We can clearly see that Postgres is on port 5432.
This is my Python script's database setting (of course I removed the password for obvious reasons):
class App():
    settings = {
        'db_host': 'db',
        'db_user': 'postgres',
        'db_pass': '',
        'db_db': 'orc',
    }
    db = None
    proxies = None
and this is my docker-compose.yml
version: '2'
services:
  flask:
    build:
      context: ./backend
      dockerfile: Dockerfile.dev
    volumes:
      - ./backend:/app
      - pip-cache:/root/.cache
    ports:
      - "80:80"
    links:
      - "dev-db:db"
    environment:
      - DATABASE_URL=postgresql://postgres@db:5432/postgres
    stdin_open: true
    command: gunicorn wsgi:application -w 1 --bind 0.0.0.0:80 --log-level debug --reload
    networks:
      app:
        aliases:
          - flask
  dev-db:
    image: postgres:9.5
    ports:
      - "5432:5432"
    networks:
      app:
        aliases:
          - dev-db
volumes:
  pip-cache:
    driver: local
networks:
  app:
When I exec into the flask container with bash and run the script command, I get this error:
psycopg2.OperationalError: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
Obviously Postgres is running on this port, and I can't figure out what I am doing wrong. Any help would be nice!
Probably you are using a DSN instead of a URI, and PostgreSQL thinks that "db" is not a host because it's hard to tell whether "db" is a host or a path to a socket. To fix it, use a URI instead of a DSN if you are on PostgreSQL >= 9.2.
Example of URI:
postgresql://[user[:password]@][netloc][:port][/dbname][?param1=value1&...]
https://www.postgresql.org/docs/9.2/static/libpq-connect.html#LIBPQ-CONNSTRING
In your App class it should be 'db_host': 'dev-db'. It seems that hostname is exposed, not db.
I think the problem is related to the fact that you're using the network and the link together. Try removing the link and changing the Postgres address to dev-db, or change the aliases to:
networks:
  app:
    aliases:
      - dev-db
      - db