Docker swarm secret not working correctly in node.js api (mongoDB)

I am trying to keep my mongo_username and mongo_password in a swarm secret, but for some reason they seem to get converted along the way. I get this error in the container log:
/usr/src/app/node_modules/saslprep/index.js:99
throw new Error(
^
Error: Prohibited character, see https://tools.ietf.org/html/rfc4013#section-2.3
at saslprep (/usr/src/app/node_modules/saslprep/index.js:99:11)
at continueScramConversation (/usr/src/app/node_modules/mongodb/lib/core/auth/scram.js:126:36)
at /usr/src/app/node_modules/mongodb/lib/core/auth/scram.js:111:5
at MessageStream.messageHandler (/usr/src/app/node_modules/mongodb/lib/cmap/connection.js:277:5)
at MessageStream.emit (events.js:315:20)
at processIncomingData (/usr/src/app/node_modules/mongodb/lib/cmap/message_stream.js:144:12)
at MessageStream._write (/usr/src/app/node_modules/mongodb/lib/cmap/message_stream.js:42:5)
at writeOrBuffer (_stream_writable.js:353:12)
at MessageStream.Writable.write (_stream_writable.js:303:12)
at Socket.ondata (_stream_readable.js:713:22)
at Socket.emit (events.js:315:20)
at addChunk (_stream_readable.js:302:12)
at readableAddChunk (_stream_readable.js:278:9)
at Socket.Readable.push (_stream_readable.js:217:10)
at TCP.onStreamRead (internal/stream_base_commons.js:186:23)
I added the secrets like this:
echo admin | docker secret create mongo_username -
echo totallySecurePassword23456789 | docker secret create mongo_password -
When I log those two secrets in my DB connect function like this:
const mongoose = require("mongoose"); // For connection to DB
require("colors"); // Needed for the .cyan.underline.bold string helpers used below

const {
  MONGO_USERNAME,
  MONGO_PASSWORD,
  MONGO_HOSTNAME,
  MONGO_PORT,
  MONGO_DATABASE_NAME,
} = process.env;

console.log(MONGO_USERNAME);
console.log(MONGO_PASSWORD);
console.log(MONGO_HOSTNAME);
console.log(MONGO_PORT);
console.log(MONGO_DATABASE_NAME);

const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DATABASE_NAME}?retryWrites=true&w=majority`;
console.log(url);

const connectDB = async () => {
  const conn = await mongoose.connect(url, {
    useNewUrlParser: true,
    useCreateIndex: true,
    useFindAndModify: false,
    useUnifiedTopology: true,
  });
  console.log(`MongoDB Connected: ${conn.connection.host}`.cyan.underline.bold);
};

module.exports = connectDB;
They show up correctly as:
admin
totallySecurePassword23456789
host.docker.internal
27020
mainDB
and yet the URL that gets built, which is supposed to look like mongodb://admin:totallySecurePassword23456789@host.docker.internal:27020/mainDB?retryWrites=true&w=majority,
shows up as @host.docker.internal:27020/mainDB?retryWrites=true&w=majority.
This of course makes the connection fail.
The hostname, port, and database name work fine because they are defined as normal environment variables.
Any help would be really appreciated, and if more info is needed please let me know!
Edit 1:
Here is the docker-compose file I use to run the docker stack:
version: "3.8"
services:
main:
image: main:5.0.0
environment:
- MONGO_USERNAME_FILE=/run/secrets/mongo_username
- MONGO_PASSWORD_FILE=/run/secrets/mongo_password
- MONGO_HOSTNAME=host.docker.internal
- MONGO_PORT=27020
- MONGO_DATABASE_NAME=mainDB
secrets:
- mongo_username
- mongo_password
networks:
- main-net
ports:
- "80:3001"
deploy:
replicas: 1
restart_policy:
condition: on-failure
delay: 10s
window: 60s
networks:
main-net:
driver: overlay
secrets:
mongo_username:
external: true
mongo_password:
external: true
Edit 2:
Fixed the compose file, which was missing the _FILE suffix on the username and password variables.

After lots of investigating I found out I was having this problem: https://github.com/aspnet/Configuration/issues/701. With no good fix found, I settled on editing the entrypoint I was using (https://github.com/BretFisher/node-docker-good-defaults/blob/main/docker-entrypoint.sh) so that it strips trailing whitespace and end-of-line characters before exporting the value as an environment variable:
export "$var"="${val//[$'\t\r\n ']}"

I assume you use GNU Bash. If you use the echo shell builtin without the -n option, it automatically appends a newline to the provided string.
Try these:
echo -n admin | docker secret create mongo_username -
echo -n totallySecurePassword23456789 | docker secret create mongo_password -
If you want to check whether your secret value includes a newline, execute the following command on the machine where your Docker CLI resides:
docker container exec <your_container_name or id> cat -e /run/secrets/<your_secret_name>
If this command returns your value followed by a $ symbol, then a newline character was appended while you were creating the secret.
Note: I presume your container is a Linux container. Also, make sure your container is in the running state and that its PATH environment variable includes the directory where the cat binary resides.
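As an aside, printf never appends a newline unless you explicitly ask for one, so it is a safer habit than remembering echo's -n flag:
printf '%s' admin | docker secret create mongo_username -
printf '%s' totallySecurePassword23456789 | docker secret create mongo_password -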

Related

Jest detects open redis client on travis-ci

I encountered some difficulties with redis testing on travis-ci.
Here is the redis setup code:
async function getClient() {
  const redisClient = createClient({
    socket: {
      url: redisConfig.connectionString,
      reconnectStrategy: (currentNumberOfRetries: number) => {
        if (currentNumberOfRetries > 1) {
          throw new Error("max retries reached");
        }
        return 1000;
      },
    },
  });
  try {
    await redisClient.connect();
  } catch (e) {
    console.log(e);
  }
  return redisClient;
}
Here is the Travis config; note that I run npm install redis because it is listed as a peer dependency.
language: node_js
node_js:
  - "14"
dist: focal # ubuntu 20.04
services:
  - postgresql
  - redis-server
addons:
  postgresql: "13"
  apt:
    packages:
      - postgresql-13
env:
  global:
    - PGUSER=postgres
    - PGPORT=5432 # for some reason, unlike what the documentation says, the port is 5432
  jobs:
    - NODE_ENV=ci
cache:
  directories:
    - node_modules
before_install:
  - sudo sed -i -e '/local.*peer/s/postgres/all/' -e 's/peer\|md5/trust/g' /etc/postgresql/*/main/pg_hba.conf
  - sudo service postgresql restart
  - sleep 1
  - postgres --version
  - pg_lsclusters # shows port of postgresql, ubuntu specific command
install:
  - npm i
  - npm i redis
before_script:
  - sudo psql -c 'create database orm_test;' -p 5432 -U postgres
script:
  - npm run test-detectopen
The first issue is this missing client.connect function, whereas connecting on my local machine with redis-server running works:
console.log
TypeError: redisClient.connect is not a function
at Object.getClient (/home/travis/build/sunjc826/mini-orm/src/connection/redis/index.ts:21:23)
at Function.init (/home/travis/build/sunjc826/mini-orm/src/data-mapper/index.ts:33:30)
at /home/travis/build/sunjc826/mini-orm/src/lib-test/tests/orm.test.ts:25:20
at Promise.then.completed (/home/travis/build/sunjc826/mini-orm/node_modules/jest-circus/build/utils.js:390:28)
at new Promise (<anonymous>)
at callAsyncCircusFn (/home/travis/build/sunjc826/mini-orm/node_modules/jest-circus/build/utils.js:315:10)
at _callCircusHook (/home/travis/build/sunjc826/mini-orm/node_modules/jest-circus/build/run.js:181:40)
at _runTestsForDescribeBlock (/home/travis/build/sunjc826/mini-orm/node_modules/jest-circus/build/run.js:47:7)
at run (/home/travis/build/sunjc826/mini-orm/node_modules/jest-circus/build/run.js:25:3)
at runAndTransformResultsToJestFormat (/home/travis/build/sunjc826/mini-orm/node_modules/jest-circus/build/legacy-code-todo-rewrite/jestAdapterInit.js:166:21)
The second is this open handle issue; on my local machine, even if the connection fails, Jest does not give such an error and exits cleanly.
Jest has detected the following 1 open handle potentially keeping Jest from exiting:
● TCPWRAP
      7 |
      8 | async function getClient() {
    > 9 |   const redisClient = createClient({
        |                       ^
      10 |     socket: {
      11 |       url: redisConfig.connectionString,
      12 |       reconnectStrategy: (currentNumberOfRetries: number) => {
at RedisClient.Object.<anonymous>.RedisClient.create_stream (node_modules/redis/index.js:196:31)
at new RedisClient (node_modules/redis/index.js:121:10)
at Object.<anonymous>.exports.createClient (node_modules/redis/index.js:1023:12)
at Object.getClient (src/connection/redis/index.ts:9:23)
at Function.init (src/data-mapper/index.ts:33:30)
at src/lib-test/tests/orm.test.ts:25:20
at TestScheduler.scheduleTests (node_modules/@jest/core/build/TestScheduler.js:333:13)
at runJest (node_modules/@jest/core/build/runJest.js:387:19)
at _run10000 (node_modules/@jest/core/build/cli/index.js:408:7)
at runCLI (node_modules/@jest/core/build/cli/index.js:261:3)
It turns out that this was likely caused by redis being a peer dependency.
Listing out node-redis versions, I'm guessing the version tagged latest (3.1.2 as of the time of writing) was installed instead of version 4+.
So, I moved redis to the regular dependencies instead.
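To make this kind of version mismatch fail loudly at startup, a cheap runtime guard is one option; a rough sketch (REDIS_URL and the error wording are my assumptions):
const { createClient } = require("redis");

// node-redis v4 exposes client.connect(); v3 connects implicitly and
// has no such method, which is exactly what the TypeError above shows.
const client = createClient({ url: process.env.REDIS_URL });
if (typeof client.connect !== "function") {
  throw new Error("node-redis v4+ expected; a v3-style client was resolved. Run: npm i redis@^4");
}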

Can't authenticate with mongoDB from docker-compose service

What I'm trying to do
I'm trying to set up a docker-compose definition, where I have a mongoDB container, and a nodeJS container that connects to it.
version: "3.9"
services:
events-db:
image: mongo
volumes:
- db-volume:/data/db
environment:
MONGO_INITDB_ROOT_USERNAME: $SANDBOX_DB_USER
MONGO_INITDB_ROOT_PASSWORD: $SANDBOX_DB_PASS
MONGO_INITDB_DATABASE: sandboxdb
app:
image: node:15.12.0
user: node
working_dir: /home/node/app
volumes:
- ./:/home/node/app:ro
environment:
MDB_CONNECTION: mongodb://$SANDBOX_DB_USER:$SANDBOX_DB_PASS#events-db:27017/sandboxdb
command: node myapp
depends_on:
- events-db
volumes:
db-volume:
Along with a .env file that declares the credentials (planning to use proper env variables when I deploy this to a production environment):
SANDBOX_DB_USER=myuser
SANDBOX_DB_PASS=myp4ss
Finally, my Node.js script, myapp.js, simply tries to connect, grab a reference to a collection, and insert a document:
require('dotenv').config()
const { MongoClient } = require('mongodb')

async function main () {
  console.log('Connecting')
  const client = new MongoClient(process.env.MDB_CONNECTION, {
    connectTimeoutMS: 10000,
    useUnifiedTopology: true,
  })
  await client.connect()
  const db = client.db()
  const events = db.collection('events')
  console.log('Inserting an event')
  await events.insertOne({
    type: 'foo',
    timestamp: new Date(),
  })
  console.log('Done.')
  process.exit(0)
}

if (require.main === module) {
  main()
}
Result
When I run docker-compose config I see the following output, so I would expect it to work:
$ docker-compose config
services:
  app:
    command: node myapp
    depends_on:
      events-db:
        condition: service_started
    environment:
      MDB_CONNECTION: mongodb://myuser:myp4ss@events-db:27017/sandboxdb
    image: node:15.12.0
    user: node
    volumes:
      - C:\workspace\dcsandbox:/home/node/app:ro
    working_dir: /home/node/app
  events-db:
    environment:
      MONGO_INITDB_DATABASE: sandboxdb
      MONGO_INITDB_ROOT_PASSWORD: myp4ss
      MONGO_INITDB_ROOT_USERNAME: myuser
    image: mongo
    volumes:
      - db-volume:/data/db:rw
version: '3.9'
volumes:
  db-volume: {}
However, when I run docker-compose up I see that my node container is unable to connect to the mongoDB to insert an event:
events-db_1 | {"t":{"$date":"2021-04-07T13:57:36.793+00:00"},"s":"I", "c":"NETWORK", "id":23016, "ctx":"listener","msg":"Waiting for connections","attr":{"port":27017,"ssl":"off"}}
app_1 | Connecting
events-db_1 | {"t":{"$date":"2021-04-07T13:57:38.811+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.27.0.3:34164","connectionId":1,"connectionCount":1}}
events-db_1 | {"t":{"$date":"2021-04-07T13:57:38.816+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn1","msg":"client metadata","attr":{"remote":"172.27.0.3:34164","client":"conn1","doc":{"driver":{"name":"nodejs","version":"3.6.6"},"os":{"type":"Linux","name":"linux","architecture":"x64","version":"4.19.128-microsoft-standard"},"platform":"'Node.js v15.12.0, LE (unified)"}}}
events-db_1 | {"t":{"$date":"2021-04-07T13:57:38.820+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.27.0.3:34166","connectionId":2,"connectionCount":2}}
events-db_1 | {"t":{"$date":"2021-04-07T13:57:38.822+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn2","msg":"client metadata","attr":{"remote":"172.27.0.3:34166","client":"conn2","doc":{"driver":{"name":"nodejs","version":"3.6.6"},"os":{"type":"Linux","name":"linux","architecture":"x64","version":"4.19.128-microsoft-standard"},"platform":"'Node.js v15.12.0, LE (unified)"}}}
events-db_1 | {"t":{"$date":"2021-04-07T13:57:38.822+00:00"},"s":"I", "c":"ACCESS", "id":20251, "ctx":"conn2","msg":"Supported SASL mechanisms requested for unknown user","attr":{"user":"myuser#sandboxdb"}}
events-db_1 | {"t":{"$date":"2021-04-07T13:57:38.823+00:00"},"s":"I", "c":"ACCESS", "id":20249, "ctx":"conn2","msg":"Authentication failed","attr":{"mechanism":"SCRAM-SHA-256","principalName":"myuser","authenticationDatabase":"sandboxdb","client":"172.27.0.3:34166","result":"UserNotFound: Could not find user \"myuser\" for db \"sandboxdb\""}}
events-db_1 | {"t":{"$date":"2021-04-07T13:57:38.824+00:00"},"s":"I", "c":"ACCESS", "id":20249, "ctx":"conn2","msg":"Authentication failed","attr":{"mechanism":"SCRAM-SHA-1","principalName":"myuser","authenticationDatabase":"sandboxdb","client":"172.27.0.3:34166","result":"UserNotFound: Could not find user \"myuser\" for db \"sandboxdb\""}}
events-db_1 | {"t":{"$date":"2021-04-07T13:57:38.826+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn1","msg":"Connection ended","attr":{"remote":"172.27.0.3:34164","connectionId":1,"connectionCount":1}}
app_1 | /home/node/app/node_modules/mongodb/lib/cmap/connection.js:268
app_1 | callback(new MongoError(document));
app_1 | ^
app_1 |
app_1 | MongoError: Authentication failed.
app_1 | at MessageStream.messageHandler (/home/node/app/node_modules/mongodb/lib/cmap/connection.js:268:20)
app_1 | at MessageStream.emit (node:events:369:20)
app_1 | at processIncomingData (/home/node/app/node_modules/mongodb/lib/cmap/message_stream.js:144:12)
app_1 | at MessageStream._write (/home/node/app/node_modules/mongodb/lib/cmap/message_stream.js:42:5)
app_1 | at writeOrBuffer (node:internal/streams/writable:395:12)
app_1 | at MessageStream.Writable.write (node:internal/streams/writable:340:10)
app_1 | at Socket.ondata (node:internal/streams/readable:750:22)
app_1 | at Socket.emit (node:events:369:20)
app_1 | at addChunk (node:internal/streams/readable:313:12)
app_1 | at readableAddChunk (node:internal/streams/readable:288:9) {
app_1 | ok: 0,
app_1 | code: 18,
app_1 | codeName: 'AuthenticationFailed'
app_1 | }
events-db_1 | {"t":{"$date":"2021-04-07T13:57:38.832+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn2","msg":"Connection ended","attr":{"remote":"172.27.0.3:34166","connectionId":2,"connectionCount":0}}
dcsandbox_app_1 exited with code 1
I've put the full output at https://pastebin.com/uNyJ6tiy
and the example code at this repo: https://github.com/akatechis/example-docker-compose-mongo-node-auth
After some more digging, I managed to figure it out. The issue is that the MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD variables simply set the root user's credentials, and the MONGO_INITDB_DATABASE simply sets the initial database for scripts in /docker-entrypoint-initdb.d.
By default, the root user is added to the admin database, so by removing the /sandboxdb part of the connection string, I was able to have my node app authenticate against the admin DB as the root user.
While this doesn't quite accomplish what I wanted initially (to create a separate, non-root user for my database, and use that to authenticate), I think this puts me on the right path to using an init script to set up the user accounts I want to have.
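As a sketch of that path, something like the following init script, mounted into /docker-entrypoint-initdb.d, would create a non-root user on first startup (the file name and role are my assumptions):
// init-sandbox-user.js (hypothetical), mounted into /docker-entrypoint-initdb.d/
// On first container start, `db` points at MONGO_INITDB_DATABASE (sandboxdb).
db.createUser({
  user: "myuser",
  pwd: "myp4ss",
  roles: [{ role: "readWrite", db: "sandboxdb" }],
});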

Not able to connect to Elasticsearch from docker container (node.js client)

I have set up an elasticsearch/kibana docker configuration and I want to connect to elasticsearch from inside of a docker container using the @elastic/elasticsearch client for node. However, the connection is "timing out".
The project is taken with inspiration from Patrick Triest : https://blog.patricktriest.com/text-search-docker-elasticsearch/
However, I have made some modifications in order to connect kibana, use a newer ES image, and use the new elasticsearch node client.
I am using the following docker-compose file:
version: "3"
services:
api:
container_name: mp-backend
build: .
ports:
- "3000:3000"
- "9229:9229"
environment:
- NODE_ENV=local
- ES_HOST=elasticsearch
- PORT=3000
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
container_name: elasticsearch
environment:
- node.name=elasticsearch
- cluster.name=es-docker-cluster
- discovery.type=single-node
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- "http.cors.allow-origin=*"
- "http.cors.enabled=true"
- "http.cors.allow-headers=X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization"
- "http.cors.allow-credentials=true"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- data01:/usr/share/elasticsearch/data
ports:
- 9200:9200
networks:
- elastic
kibana:
image: docker.elastic.co/kibana/kibana:7.5.1
ports:
- "5601:5601"
links:
- elasticsearch
networks:
- elastic
depends_on:
- elasticsearch
volumes:
data01:
driver: local
networks:
elastic:
driver: bridge
When building/bringing the containers up, I am able to get a response from ES with curl -XGET "localhost:9200" ("You Know, for Search")... and Kibana is running and able to connect to the index.
I have the following file located in the backend container (connection.js):
const { Client } = require("@elastic/elasticsearch");
const client = new Client({ node: "http://localhost:9200" });

/* Check the elasticsearch connection */
async function health() {
  let connected = false;
  while (!connected) {
    console.log("Connecting to Elasticsearch");
    try {
      const health = await client.cluster.health({});
      connected = true;
      console.log(health.body);
      return health;
    } catch (err) {
      console.log("ES Connection Failed", err);
    }
  }
}

health();
If I run it outside of the container then I get the expected response:
node server/connection.js
Connecting to Elasticsearch
{
  cluster_name: 'es-docker-cluster',
  status: 'yellow',
  timed_out: false,
  number_of_nodes: 1,
  number_of_data_nodes: 1,
  active_primary_shards: 7,
  active_shards: 7,
  relocating_shards: 0,
  initializing_shards: 0,
  unassigned_shards: 3,
  delayed_unassigned_shards: 0,
  number_of_pending_tasks: 0,
  number_of_in_flight_fetch: 0,
  task_max_waiting_in_queue_millis: 0,
  active_shards_percent_as_number: 70
}
However, if I run it inside of the container:
docker exec mp-backend "node" "server/connection.js"
Then I get the following response:
Connecting to Elasticsearch
ES Connection Failed ConnectionError: connect ECONNREFUSED 127.0.0.1:9200
at onResponse (/usr/src/app/node_modules/@elastic/elasticsearch/lib/Transport.js:214:13)
at ClientRequest.<anonymous> (/usr/src/app/node_modules/@elastic/elasticsearch/lib/Connection.js:98:9)
at ClientRequest.emit (events.js:223:5)
at Socket.socketErrorListener (_http_client.js:415:9)
at Socket.emit (events.js:223:5)
at emitErrorNT (internal/streams/destroy.js:92:8)
at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
at processTicksAndRejections (internal/process/task_queues.js:81:21) {
  name: 'ConnectionError',
  meta: {
    body: null,
    statusCode: null,
    headers: null,
    warnings: null,
    meta: {
      context: null,
      request: [Object],
      name: 'elasticsearch-js',
      connection: [Object],
      attempts: 3,
      aborted: false
    }
  }
}
So, I tried changing the client connection to (I read somewhere that this might help):
const client = new Client({ node: "http://172.24.0.1:9200" });
Then I am just "stuck" waiting for a response, with only one console.log of "Connecting to Elasticsearch".
I am using the following version:
"#elastic/elasticsearch": "7.5.1"
As you can probably see, I do not have a full grasp of what is happening here... I have also tried to add:
links:
  - elasticsearch
networks:
  - elastic
to the api service, without any luck.
Does anyone know what I am doing wrong here? Thank you in advance :)
EDIT:
I did a "docker network inspect" on the network with *_elastic. There I see the following:
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.22.0.0/16",
"Gateway": "172.22.0.1"
}
]
},
Changing the client to connect to the "Gateway" IP:
const client = new Client({ node: "http://172.22.0.1:9200" });
Then it works! I am still wondering why, as this was just trial and error. Is there any way to obtain this IP without having to inspect the network?
In Docker, localhost (or the corresponding IPv4 address 127.0.0.1, or the corresponding IPv6 address ::1) generally means "this container"; you can't use that host name to access services running in another container.
In a Compose-based setup, the names of the services: blocks (api, elasticsearch, kibana) are usable as host names. The caveat is that all of the services have to be on the same Docker-internal network. Compose creates one for you and attaches containers to it by default. (In your example api is on the default network but the other two containers are on a separate elastic network.) Networking in Compose in the Docker documentation has some more details.
So to make this work, you need to tell your client code to honor the environment variable you're setting that points at Elasticsearch:
const esHost = process.env.ES_HOST || 'localhost';
const esUrl = 'http://' + esHost + ':9200';
const client = new Client({ node: esUrl });
In your docker-compose.yml file delete all of the networks: blocks to use the provided default network. (While you're there, links: is unnecessary and Compose provides reasonable container_name: for you; api can reasonably depends_on: [elasticsearch].)
Since we've provided a fallback for $ES_HOST, if you're working in a host development environment, it will default to using localhost; outside of Docker where it means "the current host" it will reach the published port of the Elasticsearch container.
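Put together, a trimmed sketch of the same compose file with the networks: and links: blocks dropped, so everything shares the default network Compose creates (images and ports as in the question):
version: "3"
services:
  api:
    build: .
    ports:
      - "3000:3000"
      - "9229:9229"
    environment:
      - NODE_ENV=local
      - ES_HOST=elasticsearch
      - PORT=3000
    depends_on:
      - elasticsearch
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
    environment:
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
  kibana:
    image: docker.elastic.co/kibana/kibana:7.5.1
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
volumes:
  data01:
    driver: local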

Docker - SequelizeConnectionRefusedError: connect ECONNREFUSED 127.0.0.1:3306

I'm trying to get my Node.js application up and running using a Docker container. I have no clue what might be wrong. The credentials seem to be passed correctly when I debug them with the console. Also, firing up Sequel Pro and connecting directly with the same username and password works. When Node starts in the container I get the error message:
SequelizeConnectionRefusedError: connect ECONNREFUSED 127.0.0.1:3306
The application itself loads correctly on port 3000, but no data is retrieved from the database. I have also tried adding the environment variables directly to the docker-compose file, but this also doesn't seem to work.
My project code is hosted over here: https://github.com/pietheinstrengholt/rssmonster
The following database.js configuration is used. When I add console.log(config) the correct credentials from the .env file are displayed.
require('dotenv').load();
const Sequelize = require('sequelize');
const fs = require('fs');
const path = require('path');

const env = process.env.NODE_ENV || 'development';
const config = require(path.join(__dirname + '/../config/config.js'))[env];

if (config.use_env_variable) {
  var sequelize = new Sequelize(process.env[config.use_env_variable], config);
} else {
  var sequelize = new Sequelize(config.database, config.username, config.password, config);
}

module.exports = sequelize;
When I do a console.log(config) inside database.js I get the following output:
{
  username: 'rssmonster',
  password: 'password',
  database: 'rssmonster',
  host: 'localhost',
  dialect: 'mysql'
}
And the following .env:
DB_HOSTNAME=localhost
DB_PORT=3306
DB_DATABASE=rssmonster
DB_USERNAME=rssmonster
DB_PASSWORD=password
And the following docker-compose.yml:
version: '2.3'
services:
  app:
    depends_on:
      mysql:
        condition: service_healthy
    build:
      context: ./
      dockerfile: app.dockerfile
    image: rssmonster/app
    ports:
      - 3000:3000
    environment:
      NODE_ENV: development
      PORT: 3000
      DB_USERNAME: rssmonster
      DB_PASSWORD: password
      DB_DATABASE: rssmonster
      DB_HOSTNAME: localhost
    working_dir: /usr/local/rssmonster/server
    env_file:
      - ./server/.env
    links:
      - mysql:mysql
  mysql:
    container_name: mysqldb
    image: mysql:5.7
    command: --default-authentication-plugin=mysql_native_password
    environment:
      MYSQL_RANDOM_ROOT_PASSWORD: "yes"
      MYSQL_DATABASE: "rssmonster"
      MYSQL_USER: "rssmonster"
      MYSQL_PASSWORD: "password"
    ports:
      - "3307:3306"
    volumes:
      - /var/lib/mysql
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      timeout: 5s
      retries: 10
volumes:
  dbdata:
Error output:
{ SequelizeConnectionRefusedError: connect ECONNREFUSED 127.0.0.1:3306
app_1 | at Promise.tap.then.catch.err (/usr/local/rssmonster/server/node_modules/sequelize/lib/dialects/mysql/connection-manager.js:128:19)
app_1 | From previous event:
app_1 | at ConnectionManager.connect (/usr/local/rssmonster/server/node_modules/sequelize/lib/dialects/mysql/connection-manager.js:125:13)
app_1 | at sequelize.runHooks.then (/usr/local/rssmonster/server/node_modules/sequelize/lib/dialects/abstract/connection-manager.js:306:50)
app_1 | From previous event:
app_1 | at ConnectionManager._connect (/usr/local/rssmonster/server/node_modules/sequelize/lib/dialects/abstract/connection-manager.js:306:8)
app_1 | at ConnectionManager.getConnection (/usr/local/rssmonster/server/node_modules/sequelize/lib/dialects/abstract/connection-manager.js:247:46)
app_1 | at Promise.try (/usr/local/rssmonster/server/node_modules/sequelize/lib/sequelize.js:564:34)
app_1 | From previous event:
app_1 | at Promise.resolve.retryParameters (/usr/local/rssmonster/server/node_modules/sequelize/lib/sequelize.js:464:64)
app_1 | at /usr/local/rssmonster/server/node_modules/retry-as-promised/index.js:60:21
app_1 | at new Promise (<anonymous>)
Instead of localhost, point to mysql, which is the service name (DNS name) that Node.js will resolve to the MySQL container:
DB_HOSTNAME: mysql
And
{
  ...
  host: 'mysql',
  ...
}
Inside of the container you should reference the container by the name you gave in your docker-compose.yml file.
In this case you should use
DB_HOSTNAME: mysql
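For completeness, a sketch of where that value typically lands in a Sequelize config (the key names follow the .env above; the actual config/config.js in the repo may differ):
// config/config.js (sketch): keys match the config object logged above
module.exports = {
  development: {
    username: process.env.DB_USERNAME,
    password: process.env.DB_PASSWORD,
    database: process.env.DB_DATABASE,
    host: process.env.DB_HOSTNAME, // "mysql" when running under docker-compose
    dialect: 'mysql',
  },
};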
After searching and digging through several Googling attempts, the culprit of the problem soon appeared. In this context, the database server is not on the same machine; in other words, the MySQL database server address is not localhost. So why does the above MySQL database configuration point to the localhost address by default? It seems that if there is no further definition of the host address, Sequelize will connect to localhost by default. Read the article in this link for further reference about the Sequelize syntax pattern.
So, in order to solve the problem, just modify the file with the right database configuration. The following is the corrected configuration:
const sequelize = require("sequelize")

const db = new sequelize("db_master", "db_user", "password", {
  host: "10.0.2.2",
  dialect: "mysql"
});

db.sync({});
module.exports = db;
In this case, the Node.js application is running in a virtual server: a guest machine run in VirtualBox. The MySQL database server, on the other hand, exists outside the guest machine; it is available on the host machine where VirtualBox is running. From the guest, the host machine's IP address is 10.0.2.2, so that is the address to use in order to connect to the MySQL database server on the host.
Use your connection string as:
mysql://username:password@mysql:(port_running_on_container)or(exposed_port)/db_name
Answers already exist, but to provide some further explanation:
You can't use 127.0.0.1 (localhost) to access other services/containers, since each container views that address as inside itself. When running docker-compose, all your services are put on the same Docker network, and all services inside the same Docker network are able to reach each other by service name.
Hence, as already stated in the previous answers: in your configuration, change the DB hostname from localhost to mysql.
Three things to check first:
1. Make sure your service name is mysql
2. Configure DB_HOST as mysql as well
3. Make sure your backend service depends on mysql in docker-compose.yml
Here is my working code:
export const db = new Sequelize(
  process.env.DB_NAME,
  process.env.DB_USER,
  process.env.DB_PASSWORD,
  {
    port: process.env.DB_PORT,
    host: 'mysql',
    dialect: "mysql",
    logging: false,
    pool: {
      max: 5,
      min: 0,
      acquire: 30000,
      idle: 10000
    },
  }
);

Not able to connect a rabbitmq docker with appl docker

I have a simple app (simple.js), written in Node.js:
var amqp = require('amqplib/callback_api');

amqp.connect('amqp://localhost', function(err, conn) {
  conn.createChannel(function(err, ch) {
    var q = 'hello';

    ch.assertQueue(q, {durable: false});
    // Note: on Node 6 Buffer.from(msg) should be used
    ch.sendToQueue(q, new Buffer('Hello World!'));
    console.log(" [x] Sent 'Hello World!'");
  });
  setTimeout(function() { conn.close(); process.exit(0) }, 500);
});
And I have dockerized it (simple-app). Then I pulled the rabbitmq image from Docker Hub.
When I try to link these in the following manner:
docker run -d --name rabbitmq-server rabbitmq:latest
docker build -t simple-app .
docker run -d -P --name myapp --link rabbitmq-server:rabbitmq-server simple-app
docker logs a8789193af523b
I get the below-mentioned error:
/usr/src/server/simple.js:3
conn.createChannel(function(err, ch) {
^
TypeError: Cannot read property 'createChannel' of undefined
at /usr/src/server/simple.js:3:7
at /usr/src/server/node_modules/amqplib/callback_api.js:16:10
at Socket.<anonymous> (/usr/src/server/node_modules/amqplib/lib/connect.js:167:18)
at Socket.g (events.js:292:16)
at emitOne (events.js:96:13)
at Socket.emit (events.js:188:7)
at emitErrorNT (net.js:1277:8)
at _combinedTickCallback (internal/process/next_tick.js:80:11)
at process._tickCallback (internal/process/next_tick.js:104:9)
I have tried many ways to resolve this:
1. Exposed all ports with the -P option, using --link
2. Mapped specific ports with the -p option, using --link
3. Created a network with bridge as the driver and used that network to start all my containers
None of the above methods seem to work; I am continuously getting the same error.
I am trying this on AWS (EC2), Amazon Linux. Also, if I try to run simple.js directly on the host, it successfully connects to my rabbitmq server (which is dockerized).
Please help me out here!
