How to set up replica sets for Docker containers using a custom MongoDB configuration file - node.js

I need your help.
I created three Docker MongoDB containers using a custom sample config file. Now I need to set up a replica set across these containers, but I can't get it working: the containers can't reach each other's IPs and ports.
db.yaml
storage:
  dbPath: /data/db
  journal:
    enabled: true
replication:
  replSetName: "my_replicaSet"
net:
  bindIp: 127.0.0.1
  port: 26017
db1.yaml
storage:
  dbPath: /data/db
  journal:
    enabled: true
replication:
  replSetName: "my_replicaSet"
net:
  bindIp: 127.0.0.1
  port: 28017
db2.yaml
storage:
  dbPath: /data/db
  journal:
    enabled: true
replication:
  replSetName: "my_replicaSet"
net:
  bindIp: 127.0.0.1
  port: 29017
First, I created the three Docker containers with the commands below.
Container name: DB
docker run --name DB -v /home/mahesh/Documents/Trishula/cortana/database:/etc/mongo --net my-mongo-cluster -d mongo --config /etc/mongo/db.yaml
Container name: DB1
docker run --name DB1 -v /home/mahesh/Documents/Trishula/cortana/database:/etc/mongo --net my-mongo-cluster -d mongo --config /etc/mongo/db1.yaml
Container name: DB2
docker run --name DB2 -v /home/mahesh/Documents/Trishula/cortana/database:/etc/mongo --net my-mongo-cluster -d mongo --config /etc/mongo/db2.yaml
Then I opened a shell in the DB container with mongo --port 26017, initiated the replica set with rs.initiate(), and tried to add another container as a member with rs.add("DB1"), where DB1 is the name of the second container. I got this error:
my_replicaSet:PRIMARY> rs.add("DB1")
{
    "operationTime" : Timestamp(1597812494, 1),
    "ok" : 0,
    "errmsg" : "Either all host names in a replica set configuration must be localhost references, or none must be; found 1 out of 2",
    "code" : 103,
    "codeName" : "NewReplicaSetConfigurationIncompatible",
    "$clusterTime" : {
        "clusterTime" : Timestamp(1597812494, 1),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}
I also tried with the container IP address and port defined in the YAML file:
my_replicaSet:PRIMARY> rs.add("127.0.0.1:28017")
{
    "operationTime" : Timestamp(1597812984, 1),
    "ok" : 0,
    "errmsg" : "Quorum check failed because not enough voting nodes responded; required 2 but only the following 1 voting nodes responded: 127.0.0.1:26017; the following nodes did not respond affirmatively: 127.0.0.1:28017 failed with Error connecting to 127.0.0.1:28017 :: caused by :: Connection refused",
    "code" : 74,
    "codeName" : "NodeNotFound",
    "$clusterTime" : {
        "clusterTime" : Timestamp(1597812984, 1),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}
I have implemented replica sets from the command line before, but I can't get them working with a custom MongoDB configuration YAML file and Docker MongoDB containers. Please help; I have been working on this for the past week...
Note: I didn't use a docker-compose YAML file...

If you are trying to set up a MongoDB replica set locally using Docker, refer to https://medium.com/@simone.pezzano/quick-docker-and-mongodb-replica-set-on-your-computer-5c2470012a41
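Both error messages trace back to the net.bindIp: 127.0.0.1 line in the three config files: each mongod listens only on its own container's loopback interface, so no other container can reach it (hence "Connection refused"), and a replica set config mixing a 127.0.0.1 member with a hostname member is rejected outright. A minimal sketch of the fix, assuming the three containers above stay on the my-mongo-cluster network; the 0.0.0.0 bind address and the rs.initiate() config-document form are standard MongoDB features, not something shown in the question:
net:
  bindIp: 0.0.0.0   # listen on all interfaces so other containers can connect
  port: 26017       # 28017 / 29017 in db1.yaml / db2.yaml
Then, from a shell in the DB container (mongo --port 26017), initiate the set with the container names, which Docker's DNS resolves on a user-defined network:
rs.initiate({
  _id: "my_replicaSet",
  members: [
    { _id: 0, host: "DB:26017" },
    { _id: 1, host: "DB1:28017" },
    { _id: 2, host: "DB2:29017" }
  ]
})
Passing the full member list to rs.initiate() also sidesteps the localhost-versus-hostname error, since the set is never configured with a 127.0.0.1 member in the first place.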

Related

Elasticsearch cluster isn't showing up

Hi, I installed Elasticsearch 6.6 with an Ansible playbook on a cluster with 3 nodes.
All nodes are on the same port.
When I run the query:
curl -u es_admin:<pass> -X GET 'https://<hostname1>:9201/_nodes/process?pretty' -k
I see only one node in the cluster:
{
    "_nodes" : {
        "total" : 1,
        "successful" : 1,
        "failed" : 0
    },
    "cluster_name" : "new_cluster",
    "nodes" : {
        "Qlqcbgs_QmWXpglNVoOApQ" : {
            "name" : "node1",
            "transport_address" : "<IP_address>:9301",
            "host" : "<hostname1>",
            "ip" : "<IP_address>",
            "version" : "6.6.0",
            "build_flavor" : "default",
            "build_type" : "rpm",
            "build_hash" : "<build_hash_number>",
            "roles" : [
                "master",
                "data",
                "ingest"
            ],
            "attributes" : {
                "ml.machine_memory" : "16653647872",
                "xpack.installed" : "true",
                "ml.max_open_jobs" : "20",
                "ml.enabled" : "true"
            },
            "process" : {
                "refresh_interval_in_millis" : 1000,
                "id" : 11674,
                "mlockall" : false
            }
        }
    }
}
I get the same output for each node separately:
curl -u es_admin:<pass> -X GET 'https://<hostname2>:9201/_nodes/process?pretty' -k
curl -u es_admin:<pass> -X GET 'https://<hostname3>:9201/_nodes/process?pretty' -k
Under elasticsearch.template.yml I do see the other nodes. For example, from node1 I can see the other two:
discovery.zen.ping.unicast.hosts:
- <hostname2>:9301
- <hostname3>:9301
Here is elasticsearch.yml:
node.name: node1
network.host: <hostname>
http.port: 9201
transport.tcp.port: 9301
node.master: true
node.data: true
node.ingest: true
search.remote.connect: true
#################################### Paths ####################################
# Path to directory containing configuration (this file and logging.yml):
path.data: /var/lib/elasticsearch/node1
path.logs: /var/log/elasticsearch/node1
discovery.zen.ping.unicast.hosts:
- <hostname2>:9301
- <hostname3>:9301
xpack.license.self_generated.type: trial
node.ml: true
xpack.ml.enabled: true
xpack.security.audit.enabled: true
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.http.ssl.enabled: true
xpack.ssl.keystore.path: **path**
xpack.ssl.keystore.password: *passwd*
xpack.ssl.truststore.path: **path**
xpack.ssl.truststore.password: *passwd*
What should be done in order to see all the nodes under the same cluster?
In 6.x you also need to set discovery.zen.minimum_master_nodes to tell your nodes the minimum number of master-eligible nodes required to form a cluster.
Since you didn't set it, each of your nodes thinks it is the master node, and they won't join any cluster.
Set discovery.zen.minimum_master_nodes: 2 in each elasticsearch.yml file and restart your nodes.
I think discovery.zen.ping.unicast.hosts must be the same on all nodes:
discovery.zen.ping.unicast.hosts:
- <hostname1>:9301
- <hostname2>:9301
- <hostname3>:9301
Please try this, or just:
discovery.zen.ping.unicast.hosts: ["hostname1:9301"]
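Putting the two answers together, each node's elasticsearch.yml would carry the same discovery block. A sketch using the placeholder hostnames from the question; the value 2 comes from the usual 6.x quorum formula (master_eligible_nodes / 2) + 1, which for three master-eligible nodes gives (3 / 2) + 1 = 2:
discovery.zen.minimum_master_nodes: 2
discovery.zen.ping.unicast.hosts:
  - <hostname1>:9301
  - <hostname2>:9301
  - <hostname3>:9301
After updating the file on all three nodes, restart them so they can rediscover each other and elect a single master.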

NestJS and TypeORM fail to connect my local Postgres database. Claims my database does not exist, even though it does

I have a NestJS application that uses TypeORM to connect to my local database. I create the database with a shell script:
#!/bin/bash
set -e
SERVER="my_database_server";
PW="mysecretpassword";
DB="my_database";
echo "echo stop & remove old docker [$SERVER] and starting new fresh instance of [$SERVER]"
(docker kill $SERVER || :) && \
(docker rm $SERVER || :) && \
docker run --name $SERVER -e POSTGRES_PASSWORD=$PW \
-e PGPASSWORD=$PW \
-p 5432:5432 \
-d postgres
# wait for pg to start
echo "sleep wait for pg-server [$SERVER] to start";
sleep 3;
# create the db
echo "CREATE DATABASE $DB ENCODING 'UTF-8';" | docker exec -i $SERVER psql -U postgres
echo "\l" | docker exec -i $SERVER psql -U postgres
After that, it logs the databases, and my_database is in the \l output.
Then I fire up my application and encounter the error "error: database "my_database" does not exist".
I use the following code to connect to the database:
static getDatabaseConnection(): TypeOrmModuleOptions {
  console.log(require('dotenv').config());
  return {
    type: 'postgres',
    host: '127.0.0.1',
    port: 5432,
    username: 'postgres',
    password: 'mysecretpassword',
    database: 'my_database',
    entities: ['dist/**/*.entity{.ts,.js}'],
    synchronize: true,
  };
}
Any ideas where do I go wrong?
When connecting from one Docker container to another, you should usually use the service name. In this case I guess it is my_database_server as the host parameter.
return {
  type: 'postgres',
  host: 'my_database_server',
  port: 5432,
  username: 'postgres',
  password: 'mysecretpassword',
  database: 'my_database',
  entities: ['dist/**/*.entity{.ts,.js}'],
  synchronize: true,
};
"localhost" isn't address of your docker container. Which address uses docker you can look running command:
$ docker inspect {your_container_name}
for me is: 172.17.0.2
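A sketch of extracting just the IP with the standard --format flag of docker inspect; the container name here is the one from the question's script:
$ docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' my_database_server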
Try enabling SSL by adding the following configuration lines:
ssl: true,
extra: { ssl: { rejectUnauthorized: false } }
Try using localhost instead of 127.0.0.1

Not able to connect to Elasticsearch from docker container (node.js client)

I have set up an Elasticsearch/Kibana Docker configuration and I want to connect to Elasticsearch from inside a Docker container using the @elastic/elasticsearch client for Node. However, the connection is "timing out".
The project takes inspiration from Patrick Triest: https://blog.patricktriest.com/text-search-docker-elasticsearch/
However, I have made some modifications in order to connect Kibana, use a newer ES image, and use the new Elasticsearch Node client.
I am using the following docker-compose file:
version: "3"
services:
api:
container_name: mp-backend
build: .
ports:
- "3000:3000"
- "9229:9229"
environment:
- NODE_ENV=local
- ES_HOST=elasticsearch
- PORT=3000
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
container_name: elasticsearch
environment:
- node.name=elasticsearch
- cluster.name=es-docker-cluster
- discovery.type=single-node
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- "http.cors.allow-origin=*"
- "http.cors.enabled=true"
- "http.cors.allow-headers=X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization"
- "http.cors.allow-credentials=true"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- data01:/usr/share/elasticsearch/data
ports:
- 9200:9200
networks:
- elastic
kibana:
image: docker.elastic.co/kibana/kibana:7.5.1
ports:
- "5601:5601"
links:
- elasticsearch
networks:
- elastic
depends_on:
- elasticsearch
volumes:
data01:
driver: local
networks:
elastic:
driver: bridge
When building/bringing the containers up, I am able to get a response from ES with curl -XGET "localhost:9200" ("You Know, for Search"...), and Kibana is running and able to connect to the index.
I have the following file located in the backend container (connection.js):
const { Client } = require("@elastic/elasticsearch");
const client = new Client({ node: "http://localhost:9200" });
/* Check the elasticsearch connection */
async function health() {
  let connected = false;
  while (!connected) {
    console.log("Connecting to Elasticsearch");
    try {
      const health = await client.cluster.health({});
      connected = true;
      console.log(health.body);
      return health;
    } catch (err) {
      console.log("ES Connection Failed", err);
    }
  }
}
health();
health();
If I run it outside of the container then I get the expected response:
node server/connection.js
Connecting to Elasticsearch
{
  cluster_name: 'es-docker-cluster',
  status: 'yellow',
  timed_out: false,
  number_of_nodes: 1,
  number_of_data_nodes: 1,
  active_primary_shards: 7,
  active_shards: 7,
  relocating_shards: 0,
  initializing_shards: 0,
  unassigned_shards: 3,
  delayed_unassigned_shards: 0,
  number_of_pending_tasks: 0,
  number_of_in_flight_fetch: 0,
  task_max_waiting_in_queue_millis: 0,
  active_shards_percent_as_number: 70
}
However, if I run it inside of the container:
docker exec mp-backend "node" "server/connection.js"
Then I get the following response:
Connecting to Elasticsearch
ES Connection Failed ConnectionError: connect ECONNREFUSED 127.0.0.1:9200
    at onResponse (/usr/src/app/node_modules/@elastic/elasticsearch/lib/Transport.js:214:13)
    at ClientRequest.<anonymous> (/usr/src/app/node_modules/@elastic/elasticsearch/lib/Connection.js:98:9)
    at ClientRequest.emit (events.js:223:5)
    at Socket.socketErrorListener (_http_client.js:415:9)
    at Socket.emit (events.js:223:5)
    at emitErrorNT (internal/streams/destroy.js:92:8)
    at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
    at processTicksAndRejections (internal/process/task_queues.js:81:21) {
  name: 'ConnectionError',
  meta: {
    body: null,
    statusCode: null,
    headers: null,
    warnings: null,
    meta: {
      context: null,
      request: [Object],
      name: 'elasticsearch-js',
      connection: [Object],
      attempts: 3,
      aborted: false
    }
  }
}
So, I tried changing the client connection to (I read somewhere that this might help):
const client = new Client({ node: "http://172.24.0.1:9200" });
Then I am just "stuck" waiting for a response, with only the one console.log of "Connecting to Elasticsearch".
I am using the following version:
"#elastic/elasticsearch": "7.5.1"
As you probably see, I do not have a full grasp of what is happening here... I have also tried to add:
links:
  - elasticsearch
networks:
  - elastic
To the api service, without any luck.
Does anyone know what I am doing wrong here? Thank you in advance :)
EDIT:
I did a "docker network inspect" on the network with *_elastic. There I see the following:
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.22.0.0/16",
"Gateway": "172.22.0.1"
}
]
},
Changing the client to connect to the "Gateway" IP:
const client = new Client({ node: "http://172.22.0.1:9200" });
Then it works! I am still wondering why, as this was just trial and error. Is there any way to obtain this IP without having to inspect the network?
In Docker, localhost (or the corresponding IPv4 address 127.0.0.1, or the corresponding IPv6 address ::1) generally means "this container"; you can't use that host name to access services running in another container.
In a Compose-based setup, the names of the services: blocks (api, elasticsearch, kibana) are usable as host names. The caveat is that all of the services have to be on the same Docker-internal network. Compose creates one for you and attaches containers to it by default. (In your example api is on the default network but the other two containers are on a separate elastic network.) Networking in Compose in the Docker documentation has some more details.
So to make this work, you need to tell your client code to honor the environment variable you're setting that points at Elasticsearch:
const esHost = process.env.ES_HOST || 'localhost';
const esUrl = 'http://' + esHost + ':9200';
const client = new Client({ node: esUrl });
In your docker-compose.yml file delete all of the networks: blocks to use the provided default network. (While you're there, links: is unnecessary and Compose provides reasonable container_name: for you; api can reasonably depends_on: [elasticsearch].)
Since we've provided a fallback for $ES_HOST, if you're working in a host development environment, it will default to using localhost; outside of Docker where it means "the current host" it will reach the published port of the Elasticsearch container.
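A sketch of the trimmed docker-compose.yml this answer describes, reusing the images and ports from the question (the CORS, memory-lock, and ulimits settings are elided here for brevity):
version: "3"
services:
  api:
    build: .
    ports:
      - "3000:3000"
      - "9229:9229"
    environment:
      - NODE_ENV=local
      - ES_HOST=elasticsearch
      - PORT=3000
    depends_on:
      - elasticsearch
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
    environment:
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
  kibana:
    image: docker.elastic.co/kibana/kibana:7.5.1
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
volumes:
  data01:
    driver: local
With all services on the Compose-provided default network, the api container reaches Elasticsearch at http://elasticsearch:9200 via $ES_HOST.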

Docker - SequelizeConnectionRefusedError: connect ECONNREFUSED 127.0.0.1:3306

I'm trying to get my Node.js application up and running using a Docker container. I have no clue what might be wrong. The credentials seem to be passed correctly when I log them to the console, and firing up Sequel Pro and connecting directly with the same username and password also works. When Node starts in the container I get the error message:
SequelizeConnectionRefusedError: connect ECONNREFUSED 127.0.0.1:3306
The application itself is loading correctly on port 3000, but no data is retrieved from the database. I have also tried adding the environment variables directly to the docker-compose file, but this doesn't seem to work either.
My project code is hosted over here: https://github.com/pietheinstrengholt/rssmonster
The following database.js configuration is used. When I add console.log(config) the correct credentials from the .env file are displayed.
require('dotenv').load();
const Sequelize = require('sequelize');
const fs = require('fs');
const path = require('path');
const env = process.env.NODE_ENV || 'development';
const config = require(path.join(__dirname + '/../config/config.js'))[env];
if (config.use_env_variable) {
  var sequelize = new Sequelize(process.env[config.use_env_variable], config);
} else {
  var sequelize = new Sequelize(config.database, config.username, config.password, config);
}
module.exports = sequelize;
When I do a console.log(config) inside the database.js I get the following output:
{
  username: 'rssmonster',
  password: 'password',
  database: 'rssmonster',
  host: 'localhost',
  dialect: 'mysql'
}
Following .env:
DB_HOSTNAME=localhost
DB_PORT=3306
DB_DATABASE=rssmonster
DB_USERNAME=rssmonster
DB_PASSWORD=password
And the following docker-compose.yml:
version: '2.3'
services:
  app:
    depends_on:
      mysql:
        condition: service_healthy
    build:
      context: ./
      dockerfile: app.dockerfile
    image: rssmonster/app
    ports:
      - 3000:3000
    environment:
      NODE_ENV: development
      PORT: 3000
      DB_USERNAME: rssmonster
      DB_PASSWORD: password
      DB_DATABASE: rssmonster
      DB_HOSTNAME: localhost
    working_dir: /usr/local/rssmonster/server
    env_file:
      - ./server/.env
    links:
      - mysql:mysql
  mysql:
    container_name: mysqldb
    image: mysql:5.7
    command: --default-authentication-plugin=mysql_native_password
    environment:
      MYSQL_RANDOM_ROOT_PASSWORD: "yes"
      MYSQL_DATABASE: "rssmonster"
      MYSQL_USER: "rssmonster"
      MYSQL_PASSWORD: "password"
    ports:
      - "3307:3306"
    volumes:
      - /var/lib/mysql
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      timeout: 5s
      retries: 10
volumes:
  dbdata:
Error output:
{ SequelizeConnectionRefusedError: connect ECONNREFUSED 127.0.0.1:3306
app_1 | at Promise.tap.then.catch.err (/usr/local/rssmonster/server/node_modules/sequelize/lib/dialects/mysql/connection-manager.js:128:19)
app_1 | From previous event:
app_1 | at ConnectionManager.connect (/usr/local/rssmonster/server/node_modules/sequelize/lib/dialects/mysql/connection-manager.js:125:13)
app_1 | at sequelize.runHooks.then (/usr/local/rssmonster/server/node_modules/sequelize/lib/dialects/abstract/connection-manager.js:306:50)
app_1 | From previous event:
app_1 | at ConnectionManager._connect (/usr/local/rssmonster/server/node_modules/sequelize/lib/dialects/abstract/connection-manager.js:306:8)
app_1 | at ConnectionManager.getConnection (/usr/local/rssmonster/server/node_modules/sequelize/lib/dialects/abstract/connection-manager.js:247:46)
app_1 | at Promise.try (/usr/local/rssmonster/server/node_modules/sequelize/lib/sequelize.js:564:34)
app_1 | From previous event:
app_1 | at Promise.resolve.retryParameters (/usr/local/rssmonster/server/node_modules/sequelize/lib/sequelize.js:464:64)
app_1 | at /usr/local/rssmonster/server/node_modules/retry-as-promised/index.js:60:21
app_1 | at new Promise (<anonymous>)
Instead of localhost, point to mysql, which is the service name (DNS name) that Node.js will resolve to the MySQL container:
DB_HOSTNAME: mysql
And
{
  ...
  host: 'mysql',
  ...
}
Inside of the container you should reference the container by the name you gave in your docker-compose.yml file.
In this case you should use
DB_HOSTNAME: mysql
After searching and digging through several Googling attempts, the culprit appeared: in this context, the database server is not on the same machine. In other words, the MySQL database server address is not localhost. So why does the MySQL database configuration above point to localhost by default? It turns out that if no host is defined, Sequelize connects to localhost by default; read the referenced article for more on the Sequelize syntax pattern.
So, to solve the problem, just modify the file with the right database configuration. The following is the corrected database configuration:
const sequelize = require("sequelize");
const db = new sequelize("db_master", "db_user", "password", {
  host: "10.0.2.2",
  dialect: "mysql"
});
db.sync({});
module.exports = db;
Actually, the NodeJS application is running in a virtual server, a guest machine run in a VirtualBox application. The MySQL database server, on the other hand, exists outside the guest machine: it is available on the host machine where the VirtualBox application is running. From the guest, the host machine's IP address is 10.0.2.2, so to connect to the MySQL database server on the host, use 10.0.2.2.
Use a connection string of the form:
mysql://username:password@mysql:(port_running_on_container or exposed_port)/db_name
Answers already exist, but to provide some further explanation:
You can't use 127.0.0.1 (localhost) to access other services/containers, since each container resolves that address to itself. When running docker-compose, all your services are placed on the same Docker network, and all services on the same Docker network can reach each other by service name.
Hence, as already stated in the previous answers: in your configuration, change the DB hostname from localhost to mysql.
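If you want the same code to work both inside Compose and on the host (where the published port 3307 applies), one pattern is to drive the host through the DB_HOSTNAME variable the question's .env already defines. A sketch reusing those DB_* names, not the project's actual database.js:
// database.js - hypothetical minimal variant driven by the question's .env names
const Sequelize = require('sequelize');

const sequelize = new Sequelize(
  process.env.DB_DATABASE,
  process.env.DB_USERNAME,
  process.env.DB_PASSWORD,
  {
    // inside Compose set DB_HOSTNAME=mysql; on the host it can stay localhost
    host: process.env.DB_HOSTNAME || 'localhost',
    // on the host, DB_PORT would be the published 3307 instead of 3306
    port: process.env.DB_PORT || 3306,
    dialect: 'mysql',
  }
);

module.exports = sequelize;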
Three things to check first:
make sure your service name is mysql,
configure DB_HOST as mysql as well,
and make your backend service depend on mysql in docker-compose.yml.
Here is my working code:
export const db = new Sequelize(
  process.env.DB_NAME,
  process.env.DB_USER,
  process.env.DB_PASSWORD,
  {
    port: process.env.DB_PORT,
    host: 'mysql',
    dialect: "mysql",
    logging: false,
    pool: {
      max: 5,
      min: 0,
      acquire: 30000,
      idle: 10000
    },
  }
);

Could not connect to mongod from mongo shell when auth enabled (on ubuntu)

System: Ubuntu 14.04
MongoDB 3.0.3 tarball downloaded from the MongoDB download center.
I connected to MongoDB without auth, then from the mongo shell created a user for the 'test' db. Following is the command:
db.createUser({
  user: "user1",
  pwd: "test123",
  roles: [ { role: "readWrite", db: "test" } ]
})
I verified the user details in the admin db. Following is the command and result:
> db.system.users.findOne({user:'user1'})
{
    "_id" : "testdb.user1",
    "user" : "user1",
    "db" : "testdb",
    "credentials" : {
        "SCRAM-SHA-1" : {
            "iterationCount" : 10000,
            "salt" : "kNfOd1vs+QT+ueH7SI6Vzw==",
            "storedKey" : "JCesIKSW1pb74ddo2Y19rEO1GVY=",
            "serverKey" : "d87Sb1htoD5K8zecAy73JPZyHdc="
        }
    },
    "roles" : [
        {
            "role" : "readWrite",
            "db" : "test"
        }
    ]
}
Then I exited the mongo shell and killed mongod.
I started MongoDB with auth using the following command:
$ ./mongod --auth
I connected with the mongo shell as usual; see below:
$ ./mongo
MongoDB shell version: 3.0.3
connecting to: test
> show collections
2016-05-11T22:33:46.302+0530 E QUERY    Error: listCollections failed: {
    "ok" : 0,
    "errmsg" : "not authorized on test to execute command { listCollections: 1.0 }",
    "code" : 13
}
    at Error (<anonymous>)
    at DB._getCollectionInfosCommand (src/mongo/shell/db.js:646:15)
    at DB.getCollectionInfos (src/mongo/shell/db.js:658:20)
    at DB.getCollectionNames (src/mongo/shell/db.js:669:17)
    at shellHelper.show (src/mongo/shell/utils.js:625:12)
    at shellHelper (src/mongo/shell/utils.js:524:36)
    at (shellhelp2):1:1 at src/mongo/shell/db.js:646
> db.auth({user:'user1', pwd:'test123'})
1
> use test
switched to db test
> db.collone.insert({name:'firstcollection'})
WriteResult({ "nInserted" : 1 })
> show collections
collone
system.indexes
> db.collone.find()
{ "_id" : ObjectId("5733669fb7d44cd444ebf028"), "name" : "firstcollection" }
> exit
bye
When I tried to authenticate while starting the mongo shell, I got an authentication failed error. See below:
$ ./mongo test -u 'user1' -p 'test123' --authenticationDatabase 'admin'
MongoDB shell version: 3.0.3
connecting to: test
2016-05-11T22:37:21.559+0530 E QUERY Error: 18 Authentication failed.
at DB._authOrThrow (src/mongo/shell/db.js:1266:32)
at (auth):6:8
at (auth):7:2 at src/mongo/shell/db.js:1266
exception: login failed
All this is just a POC that I'm trying to do.
Once it succeeds, my goal is to connect from a Mongoose client (in a Node.js app) to mongod.
The following command from a Stack Overflow post can help me to set up the connection from Mongoose to mongod with auth.
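One thing the transcript above suggests: db.auth() succeeded because the shell's default database was test, the database where the user was created, while the failing login authenticates against admin. A sketch under that assumption; the --authenticationDatabase value and the authSource URI option are standard MongoDB/Mongoose syntax, not anything shown in the question:
$ ./mongo test -u 'user1' -p 'test123' --authenticationDatabase 'test'
// Node.js / Mongoose: a hypothetical minimal connection for the same user
const mongoose = require('mongoose');
mongoose.connect('mongodb://user1:test123@localhost:27017/test?authSource=test');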
