Mongo single node replicaset on Docker, MongooseServerSelectionError: connect ECONNREFUSED - node.js

I'm trying to set up a single-node MongoDB replica set on Docker and connect to it from my Node app, but the connection is refused. The connection works fine if I run Mongo as a standalone instance without a replica set.
This is how I connect to Mongo in my Node app:
mongoose.connect("mongodb://admin:secretpass@app_mongodb:27017/dbname?authSource=admin&replicaSet=rs0")
And this is the error I receive:
/var/www/worker/node_modules/mongoose/lib/connection.js:824
const serverSelectionError = new ServerSelectionError();
^
MongooseServerSelectionError: connect ECONNREFUSED 127.0.0.1:27017
Mongo logs show that the app tries to connect:
{"t":{"$date":"2023-01-27T10:22:46.410+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.18.0.10:41318","uuid":"2e1ccbc5-3162-4f64-80f3-8be580079ef6","connectionId":68,"connectionCount":11}}
{"t":{"$date":"2023-01-27T10:22:46.417+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn68","msg":"client metadata","attr":{"remote":"172.18.0.10:41318","client":"conn68","doc":{"driver":{"name":"nodejs|Mongoose","version":"4.11.0"},"os":{"type":"Linux","name":"linux","architecture":"x64","version":"5.10.0-12-amd64"},"platform":"Node.js v19.5.0, LE (unified)","version":"4.11.0|6.7.5"}}}
{"t":{"$date":"2023-01-27T10:22:46.425+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn68","msg":"Connection ended","attr":{"remote":"172.18.0.10:41318","uuid":"2e1ccbc5-3162-4f64-80f3-8be580079ef6","connectionId":68,"connectionCount":10}}
docker-compose.yml:
version: '3.8'
services:
  mongodb:
    container_name: app_mongodb
    build:
      dockerfile: ./Dockerfile
      x-bake:
        no-cache: true
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=secretpass
    volumes:
      - ./db:/data/db
    networks:
      - proxy
    restart: unless-stopped
  worker:
    container_name: app_worker
    image: "node:latest"
    command: "npm run dev"
    user: "node"
    working_dir: /var/www/worker
    environment:
      WAIT_HOSTS: app_mongodb:27017
    volumes:
      - ./worker:/var/www/worker
    networks:
      - proxy
    links:
      - mongodb
    depends_on:
      - mongodb
    restart: unless-stopped
networks:
  proxy:
    external: true
Dockerfile:
FROM mongo
# Initiate replica set
RUN echo "secretpasswordkey" > "/tmp/replica.key"
RUN chmod 600 /tmp/replica.key
RUN chown 999:999 /tmp/replica.key
CMD ["mongod", "--replSet", "rs0", "--bind_ip_all", "--keyFile", "/tmp/replica.key"]
I also run this command with mongosh after starting up the container (by the way, is there a way to add that to the Dockerfile instead?):
rs.initiate({_id: 'rs0', members: [{_id:1, 'host':'127.0.0.1:27017'}]})
A quick check shows the replica set is indeed initialized:
rs0 [direct: primary] admin> rs.status()
{
  set: 'rs0',
  date: ISODate("2023-01-27T10:35:44.062Z"),
  myState: 1,
  term: Long("1"),
  syncSourceHost: '',
  syncSourceId: -1,
  heartbeatIntervalMillis: Long("2000"),
  majorityVoteCount: 1,
  writeMajorityCount: 1,
  votingMembersCount: 1,
  writableVotingMembersCount: 1,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1674815739, i: 1 }), t: Long("1") },
    lastCommittedWallTime: ISODate("2023-01-27T10:35:39.161Z"),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1674815739, i: 1 }), t: Long("1") },
    appliedOpTime: { ts: Timestamp({ t: 1674815739, i: 1 }), t: Long("1") },
    durableOpTime: { ts: Timestamp({ t: 1674815739, i: 1 }), t: Long("1") },
    lastAppliedWallTime: ISODate("2023-01-27T10:35:39.161Z"),
    lastDurableWallTime: ISODate("2023-01-27T10:35:39.161Z")
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1674815679, i: 1 }),
  electionCandidateMetrics: {
    lastElectionReason: 'electionTimeout',
    lastElectionDate: ISODate("2023-01-27T10:25:49.015Z"),
    electionTerm: Long("1"),
    lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 1674815148, i: 1 }), t: Long("-1") },
    lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1674815148, i: 1 }), t: Long("-1") },
    numVotesNeeded: 1,
    priorityAtElection: 1,
    electionTimeoutMillis: Long("10000"),
    newTermStartDate: ISODate("2023-01-27T10:25:49.082Z"),
    wMajorityWriteAvailabilityDate: ISODate("2023-01-27T10:25:49.111Z")
  },
  members: [
    {
      _id: 1,
      name: '127.0.0.1:27017',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      uptime: 643,
      optime: { ts: Timestamp({ t: 1674815739, i: 1 }), t: Long("1") },
      optimeDate: ISODate("2023-01-27T10:35:39.000Z"),
      lastAppliedWallTime: ISODate("2023-01-27T10:35:39.161Z"),
      lastDurableWallTime: ISODate("2023-01-27T10:35:39.161Z"),
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      electionTime: Timestamp({ t: 1674815149, i: 1 }),
      electionDate: ISODate("2023-01-27T10:25:49.000Z"),
      configVersion: 1,
      configTerm: 1,
      self: true,
      lastHeartbeatMessage: ''
    }
  ],
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1674815739, i: 1 }),
    signature: {
      hash: Binary(Buffer.from("a211cfe3faa237bf8a30ccbc8ca929eea704f467", "hex"), 0),
      keyId: Long("7193276291800367109")
    }
  },
  operationTime: Timestamp({ t: 1674815739, i: 1 })
}
Any ideas on what I'm doing wrong? Thanks!

Thanks to @WernfriedDomscheit's comment pointing me in the right direction, I realized I have to use the mongodb container's hostname instead of 127.0.0.1 when initiating the replica set:
rs.initiate({_id: 'rs0', members: [{_id: 1, 'host': 'mongodb:27017'}]})
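With the member registered under the service hostname, the connection string from the question works unchanged: after the initial handshake the driver discovers the member as mongodb:27017, which should resolve on the shared proxy network. A minimal sketch of the working connection, assuming the same credentials as above:
// Sketch only: the URI is the one from the question. Server discovery now
// returns mongodb:27017 rather than 127.0.0.1:27017, so the worker container
// can reach the primary.
const mongoose = require("mongoose");

mongoose
  .connect("mongodb://admin:secretpass@app_mongodb:27017/dbname?authSource=admin&replicaSet=rs0")
  .then(() => console.log("Connected to replica set rs0"))
  .catch((err) => console.error("Connection failed:", err));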
I was able to automate the initialization by adding a healthcheck routine to my docker-compose.yml file, so the Dockerfile is no longer necessary.
The whole mongodb service definition is now:
mongodb:
  container_name: app_mongodb
  environment:
    - MONGO_INITDB_ROOT_USERNAME=${MONGODB_USER}
    - MONGO_INITDB_ROOT_PASSWORD=${MONGODB_PASS}
  volumes:
    - ./db:/data/db
  networks:
    - proxy
  restart: unless-stopped
  healthcheck:
    test: test $$(echo 'rs.initiate({_id':' "rs0", members':' [{_id':' 1, "host"':' "mongodb':'27017"}]}) || rs.status().ok' | mongosh -u $${MONGO_INITDB_ROOT_USERNAME} -p $${MONGO_INITDB_ROOT_PASSWORD} --quiet) -eq 1
    interval: 10s
    start_period: 30s
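The odd-looking quoting in the test line only exists to keep colon-space sequences out of the YAML value; once the shell concatenates the pieces, the script piped into mongosh is simply this one-liner (shown unescaped for readability):
// Initiate the replica set, falling back to the existing set's status if
// initiation does not return a truthy result.
rs.initiate({_id: "rs0", members: [{_id: 1, "host": "mongodb:27017"}]}) || rs.status().ok
The surrounding test $$(...) -eq 1 then compares whatever mongosh --quiet prints against 1 to decide whether the container is healthy.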
It's important to set the host in initiate() rather than calling it with no arguments: by default the replica set is set up using the container ID as the host, so if the container is removed and recreated, the new container ID will be different and the replica set will no longer work.
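If you want to double-check which host name the replica set advertises (that is the address the driver tries to reach after discovery, which is why the question's error pointed at 127.0.0.1:27017), a small sketch like the one below can be run from the worker container. It assumes the mongodb driver package is installed and reuses the credentials from the question:
// Hypothetical check script: prints the member hosts stored in the replica
// set config. With the fix above it should print [ 'mongodb:27017' ].
const { MongoClient } = require("mongodb");

async function main() {
  // directConnection=true talks to this node directly, skipping replica set discovery.
  const client = new MongoClient("mongodb://admin:secretpass@app_mongodb:27017/?authSource=admin&directConnection=true");
  await client.connect();
  const { config } = await client.db("admin").command({ replSetGetConfig: 1 });
  console.log(config.members.map((m) => m.host));
  await client.close();
}

main().catch(console.error);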

Related

Openshift Angular Application S2I Build ImagePullBackOff

Trying to complete an OpenShift S2I build using the NodeJS builder image, I'm running into the error: npm ERR! enoent ENOENT: no such file or directory, open '/opt/app-root/src/package.json'.
Here are the logs of the build:
Adding cluster TLS certificate authority to trust store
Cloning "https://dev.azure.com/westfieldgrp/PL/_git/rule_tool_frontend" ...
Commit: 620bcb6c63dd479ffb4c73f72bea0d71eeb4ba55 (deleted files that have been moved)
Author: D************ <D************@************.com>
Date: Fri Dec 16 09:39:09 2022 -0500
Adding cluster TLS certificate authority to trust store
Adding cluster TLS certificate authority to trust store
time="2022-12-16T14:40:04Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
I1216 14:40:04.659698 1 defaults.go:102] Defaulting to storage driver "overlay" with options [mountopt=metacopy=on].
Caching blobs under "/var/cache/blobs".
Trying to pull image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:ec4bda6a4daaea3a28591ea97afc0ea52d06d881a5d966e18269f9c0d0c87892...
Getting image source signatures
Copying blob sha256:600dbb68a707d0370701a1985b053a56c1b71c054179497b8809f0bbdcf72fda
Copying blob sha256:2cf6011ee4f717c20cb7060fe612341720080cd81c52bcd32f54edb15af12991
Copying blob sha256:417723e2b937d59afc1be1bee1ba70a3952be0d1bc922efd8160e0a7060ff7d4
Copying config sha256:f6dc2bbf0dea77c31c3c5d0435fee81c1f52ab70ecdeb0092102b2ae86b0a1ef
Writing manifest to image destination
Storing signatures
Generating dockerfile with builder image image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:ec4bda6a4daaea3a28591ea97afc0ea52d06d881a5d966e18269f9c0d0c87892
Adding transient rw bind mount for /run/secrets/rhsm
STEP 1/9: FROM image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:ec4bda6a4daaea3a28591ea97afc0ea52d06d881a5d966e18269f9c0d0c87892
STEP 2/9: LABEL "io.openshift.build.commit.date"="Fri Dec 16 09:39:09 2022 -0500" "io.openshift.build.commit.id"="620bcb6c63dd479ffb4c73f72bea0d71eeb4ba55" "io.openshift.build.commit.ref"="main" "io.openshift.build.commit.message"="deleted files that have been moved" "io.openshift.build.source-context-dir"="/" "io.openshift.build.image"="image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:ec4bda6a4daaea3a28591ea97afc0ea52d06d881a5d966e18269f9c0d0c87892" "io.openshift.build.commit.author"="DominicRomano <DominicRomano@westfieldgrp.com>"
STEP 3/9: ENV OPENSHIFT_BUILD_NAME="rule-tool-frontend2-3" OPENSHIFT_BUILD_NAMESPACE="rule-tool-webapp2" OPENSHIFT_BUILD_SOURCE="https://************@dev.azure.com/************/**/_git/rule_tool_frontend" OPENSHIFT_BUILD_COMMIT="620bcb6c63dd479ffb4c73f72bea0d71eeb4ba55"
STEP 4/9: USER root
STEP 5/9: COPY upload/src /tmp/src
STEP 6/9: RUN chown -R 1001:0 /tmp/src
STEP 7/9: USER 1001
STEP 8/9: RUN /usr/libexec/s2i/assemble
---> Installing application source ...
---> Installing all dependencies
npm ERR! code ENOENT
npm ERR! syscall open
npm ERR! path /opt/app-root/src/package.json
npm ERR! errno -2
npm ERR! enoent ENOENT: no such file or directory, open '/opt/app-root/src/package.json'
npm ERR! enoent This is related to npm not being able to find a file.
npm ERR! enoent
npm ERR! A complete log of this run can be found in:
npm ERR! /opt/app-root/src/.npm/_logs/2022-12-16T14_40_22_836Z-debug-0.log
error: build error: error building at STEP "RUN /usr/libexec/s2i/assemble": error while running runtime: exit status 254
Here is the YAML
kind: Pod
apiVersion: v1
metadata:
generateName: rule-tool-frontend2-f484544fb-
annotations:
k8s.ovn.org/pod-networks: >-
{"default":{"ip_addresses":["**.***.*.**/**"],"mac_address":"**:**:**:**:**:**","gateway_ips":["**.***.*.*"],"ip_address":"**.***.*.**/**","gateway_ip":"**.***.*.*"}}
k8s.v1.cni.cncf.io/network-status: |-
[{
"name": "ovn-kubernetes",
"interface": "eth0",
"ips": [
"**.***.*.*"
],
"mac": "**:**:**:**:**:**",
"default": true,
"dns": {}
}]
k8s.v1.cni.cncf.io/networks-status: |-
[{
"name": "ovn-kubernetes",
"interface": "eth0",
"ips": [
"**.***.*.*"
],
"mac": "**:**:**:**:**:**",
"default": true,
"dns": {}
}]
openshift.io/scc: restricted
resourceVersion: '186661887'
name: rule-tool-frontend2-f484544fb-sb24h
uid: faf4501f-417f-481a-a05f-d57b411188b7
creationTimestamp: '2022-12-16T13:54:49Z'
managedFields:
- manager: kube-controller-manager
operation: Update
apiVersion: v1
time: '2022-12-16T13:54:49Z'
fieldsType: FieldsV1
fieldsV1:
'f:metadata':
'f:generateName': {}
'f:labels':
.: {}
'f:app': {}
'f:deploymentconfig': {}
'f:pod-template-hash': {}
'f:ownerReferences':
.: {}
'k:{"uid":"19afb68c-c09d-4ced-97bf-a69cfeb3c05e"}': {}
'f:spec':
'f:containers':
'k:{"name":"rule-tool-frontend2"}':
.: {}
'f:image': {}
'f:imagePullPolicy': {}
'f:name': {}
'f:ports':
.: {}
'k:{"containerPort":8080,"protocol":"TCP"}':
.: {}
'f:containerPort': {}
'f:protocol': {}
'f:resources': {}
'f:terminationMessagePath': {}
'f:terminationMessagePolicy': {}
'f:dnsPolicy': {}
'f:enableServiceLinks': {}
'f:restartPolicy': {}
'f:schedulerName': {}
'f:securityContext': {}
'f:terminationGracePeriodSeconds': {}
- manager: svatwfldopnshft-2v2n5-master-0
operation: Update
apiVersion: v1
time: '2022-12-16T13:54:49Z'
fieldsType: FieldsV1
fieldsV1:
'f:metadata':
'f:annotations':
'f:k8s.ovn.org/pod-networks': {}
- manager: multus
operation: Update
apiVersion: v1
time: '2022-12-16T13:54:51Z'
fieldsType: FieldsV1
fieldsV1:
'f:metadata':
'f:annotations':
'f:k8s.v1.cni.cncf.io/network-status': {}
'f:k8s.v1.cni.cncf.io/networks-status': {}
subresource: status
- manager: Go-http-client
operation: Update
apiVersion: v1
time: '2022-12-16T13:54:52Z'
fieldsType: FieldsV1
fieldsV1:
'f:status':
'f:conditions':
'k:{"type":"ContainersReady"}':
.: {}
'f:lastProbeTime': {}
'f:lastTransitionTime': {}
'f:message': {}
'f:reason': {}
'f:status': {}
'f:type': {}
'k:{"type":"Initialized"}':
.: {}
'f:lastProbeTime': {}
'f:lastTransitionTime': {}
'f:status': {}
'f:type': {}
'k:{"type":"Ready"}':
.: {}
'f:lastProbeTime': {}
'f:lastTransitionTime': {}
'f:message': {}
'f:reason': {}
'f:status': {}
'f:type': {}
'f:containerStatuses': {}
'f:hostIP': {}
'f:podIP': {}
'f:podIPs':
.: {}
'k:{"ip":"**.***.*.*"}':
.: {}
'f:ip': {}
'f:startTime': {}
subresource: status
namespace: rule-tool-webapp2
ownerReferences:
- apiVersion: apps/v1
kind: ReplicaSet
name: rule-tool-frontend2-f484544fb
uid: 19afb68c-c09d-4ced-97bf-a69cfeb3c05e
controller: true
blockOwnerDeletion: true
labels:
app: rule-tool-frontend2
deploymentconfig: rule-tool-frontend2
pod-template-hash: f484544fb
spec:
restartPolicy: Always
serviceAccountName: default
imagePullSecrets:
- name: default-dockercfg-g9tqv
priority: 0
schedulerName: default-scheduler
enableServiceLinks: true
terminationGracePeriodSeconds: 30
preemptionPolicy: PreemptLowerPriority
nodeName: svatwfldopnshft-2v2n5-worker-kmk85
securityContext:
seLinuxOptions:
level: 's0:c28,c12'
fsGroup: 1000780000
containers:
- resources: {}
terminationMessagePath: /dev/termination-log
name: rule-tool-frontend2
securityContext:
capabilities:
drop:
- KILL
- MKNOD
- SETGID
- SETUID
runAsUser: 1000780000
ports:
- containerPort: 8080
protocol: TCP
imagePullPolicy: Always
volumeMounts:
- name: kube-api-access-k2tzb
readOnly: true
mountPath: /var/run/secrets/kubernetes.io/serviceaccount
terminationMessagePolicy: File
image: >-
image-registry.openshift-image-registry.svc:5000/rule-tool-webapp2/rule-tool-frontend2:latest
serviceAccount: default
volumes:
- name: kube-api-access-k2tzb
projected:
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
name: kube-root-ca.crt
items:
- key: ca.crt
path: ca.crt
- downwardAPI:
items:
- path: namespace
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- configMap:
name: openshift-service-ca.crt
items:
- key: service-ca.crt
path: service-ca.crt
defaultMode: 420
dnsPolicy: ClusterFirst
tolerations:
- key: node.kubernetes.io/not-ready
operator: Exists
effect: NoExecute
tolerationSeconds: 300
- key: node.kubernetes.io/unreachable
operator: Exists
effect: NoExecute
tolerationSeconds: 300
status:
phase: Pending
conditions:
- type: Initialized
status: 'True'
lastProbeTime: null
lastTransitionTime: '2022-12-16T13:54:49Z'
- type: Ready
status: 'False'
lastProbeTime: null
lastTransitionTime: '2022-12-16T13:54:49Z'
reason: ContainersNotReady
message: 'containers with unready status: [rule-tool-frontend2]'
- type: ContainersReady
status: 'False'
lastProbeTime: null
lastTransitionTime: '2022-12-16T13:54:49Z'
reason: ContainersNotReady
message: 'containers with unready status: [rule-tool-frontend2]'
- type: PodScheduled
status: 'True'
lastProbeTime: null
lastTransitionTime: '2022-12-16T13:54:49Z'
hostIP: **.***.*.*
podIP: **.***.*.*
podIPs:
- ip: **.***.*.*
startTime: '2022-12-16T13:54:49Z'
containerStatuses:
- name: rule-tool-frontend2
state:
waiting:
reason: ImagePullBackOff
message: >-
Back-off pulling image
"image-registry.openshift-image-registry.svc:5000/rule-tool-webapp2/rule-tool-frontend2:latest"
lastState: {}
ready: false
restartCount: 0
image: >-
image-registry.openshift-image-registry.svc:5000/rule-tool-webapp2/rule-tool-frontend2:latest
imageID: ''
started: false
qosClass: BestEffort
package.json is located in rule_tool_frontend/src. How do I change where npm looks for package.json? Is this something that should be edited in the build YAML?
Thank you for any help.
I tried to complete the OpenShift S2I build using the NodeJS builder image, expecting a successful build, but got the error described above instead.

I am getting an operation timeout error on the Sequelize connection. How do I fix this issue?

I am trying to run a Node.js server and Postgres inside Docker, using Sequelize for the DB connection. However, it seems my Node.js server is not able to communicate with the Postgres DB inside Docker.
Before someone marks this as a duplicate, please note that I have already checked the other answers and none of them worked for me.
I have already tried implementing a retry strategy for the Sequelize connection.
Here's my docker-compose file:
version: "3.8"
services:
rapi:
container_name: rapi
image: rapi/latest
build: .
ports:
- "3001:3001"
environment:
- EXTERNAL_PORT=3001
- PGUSER=rapiuser
- PGPASSWORD=12345
- PGDATABASE=postgres
- PGHOST=rapi_db # NAME OF THE SERVICE
depends_on:
- rapi_db
rapi_db:
container_name: rapi_db
image: "postgres:12"
ports:
- "5432:5432"
environment:
- POSTGRES_USER=rapiuser
- POSTGRES_PASSWORD=12345
- POSTGRES_DB=postgres
volumes:
- rapi_data:/var/lib/postgresql/data
volumes:
rapi_data: {}
Here's my Dockerfile:
FROM node:16
EXPOSE 3000
# Use latest version of npm
RUN npm i npm@latest -g
COPY package.json package-lock.json* ./
RUN npm install --no-optional && npm cache clean --force
# copy in our source code last, as it changes the most
WORKDIR /
COPY . .
CMD [ "node", "index.js" ]
My DB Credentials:
credentials = {
  PGUSER: process.env.PGUSER,
  PGDATABASE: process.env.PGNAME,
  PGPASSWORD: process.env.PGPASSWORD,
  PGHOST: process.env.PGHOST,
  PGPORT: process.env.PGPORT,
  PGNAME: 'postgres'
}
console.log("env Users: " + process.env.PGUSER + " env Database: " + process.env.PGDATABASE + " env PGHOST: " + process.env.PGHOST + " env PORT: " + process.env.EXTERNAL_PORT)
}
//else credentials = {}
module.exports = credentials;
Sequelize DB code:
const db = new Sequelize(credentials.PGDATABASE, credentials.PGUSER, credentials.PGPASSWORD, {
  host: credentials.PGHOST,
  dialect: credentials.PGNAME,
  port: credentials.PGPORT,
  protocol: credentials.PGNAME,
  dialectOptions: {},
  logging: false,
  define: {
    timestamps: false
  },
  pool: {
    max: 10,
    min: 0,
    acquire: 100000,
  },
  retry: {
    match: [/Deadlock/i, Sequelize.ConnectionError], // Retry on connection errors
    max: 3, // Maximum retry 3 times
    backoffBase: 3000, // Initial backoff duration in ms. Default: 100
    backoffExponent: 1.5, // Exponent to increase backoff each try. Default: 1.1
  },
});
module.exports = db;
Your process.env.PGPORT does not exist. Add an environment variable in the docker-compose file for the rapi service, or set it to 5432 in your credentials file.
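A minimal sketch of that second option, assuming the rest of the credentials file stays as in the question: fall back to Postgres' default port 5432 when PGPORT is not provided. (The sketch also reads PGDATABASE from process.env.PGDATABASE, since that is the variable actually set in docker-compose, whereas the question's file reads process.env.PGNAME.)
// Sketch: credentials with a sensible default for the missing PGPORT.
const credentials = {
  PGUSER: process.env.PGUSER,
  PGDATABASE: process.env.PGDATABASE,
  PGPASSWORD: process.env.PGPASSWORD,
  PGHOST: process.env.PGHOST,
  PGPORT: process.env.PGPORT || 5432, // default Postgres port when the env var is absent
  PGNAME: "postgres",
};

module.exports = credentials;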

MongoDB cluster timeout while connecting to Node-RED

I am having trouble connecting my MongoDB 3.4 cluster to Node-RED 2 using Docker Swarm.
My environment consists of one leader machine and two workers, with one Mongo node on each worker (mongo1 and mongo2) and the Node-RED container on one of the workers.
I successfully initiated the cluster with the command below:
rs.initiate({
  _id: "rs1",
  members: [
    { _id: 1, host: "mongo1:27017" },
    { _id: 2, host: "mongo2:27017" }
  ]
})
A connection with Mongo Express was successful on both the primary and secondary nodes of my cluster.
But when I tried to connect to the cluster from Node-RED using the node-red-node-mongodb module, I got the following error:
MongoNetworkError: failed to connect to server [mongo2:27017] on first connect [MongoNetworkTimeoutError: connection timed out
at connectionFailureError (/data/node_modules/mongodb/lib/core/connection/connect.js:362:14)
at Socket.<anonymous> (/data/node_modules/mongodb/lib/core/connection/connect.js:330:16)
at Object.onceWrapper (events.js:519:28)
at Socket.emit (events.js:400:28)
at Socket._onTimeout (net.js:495:8)
at listOnTimeout (internal/timers.js:557:17)
at processTimers (internal/timers.js:500:7)]
This is how the MongoDB node was configured:
Host: mongo1,mongo2
Connection topology: ReplicaSet/Cluster (mongodb://)
Connection options: replicaSet=rs1&tls=true&tlsAllowInvalidCertificates=true&wtimeoutMS=10000&slaveOk=true
And these are the relevant parts of the docker-compose.yml file:
version: '3.4'
services:
  NodeRed:
    user: root
    networks:
      - mynetwork
    volumes:
      - /home/ssmanager/nfsdata/nodered:/data
      - /home/ssmanager/nfsdata/records:/data/records
      - /home/ssmanager/nfsdata/cdr:/data/cdr
      - /home/ssmanager/nfsdata/html/decrypted_temp:/data/records/decrypted
    image: nodered/node-red:2
    deploy:
      placement:
        constraints:
          - "node.hostname!=ssmanager3"
      endpoint_mode: dnsrr
      mode: replicated
      replicas: 1
      update_config:
        delay: 10s
      restart_policy:
        condition: any
        max_attempts: 5
  mongo1:
    image: mongo:3.4
    command: mongod --replSet rs1 --noauth --oplogSize 3
    environment:
      TERM: xterm
    volumes:
      - /etc/localtime:/etc/localtime:ro
    networks:
      - mynetwork
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.labels.mongo.replica == 1
          - "node.hostname!=ssmanager3"
  mongo2:
    image: mongo:3.4
    command: mongod --replSet rs1 --noauth --oplogSize 3
    environment:
      TERM: xterm
    volumes:
      - /etc/localtime:/etc/localtime:ro
    networks:
      - mynetwork
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.labels.mongo.replica == 2
          - "node.hostname!=ssmanager3"
  express:
    container_name: express
    image: mongo-express:0.54.0
    environment:
      ME_CONFIG_BASICAUTH_USERNAME: admin
      ME_CONFIG_BASICAUTH_PASSWORD: password
      ME_CONFIG_MONGODB_ENABLE_ADMIN: "true"
      ME_CONFIG_MONGODB_PORT: 27017
      ME_CONFIG_MONGODB_SERVER: mongo1
      ME_CONFIG_MONGODB_URL: mongodb://mongo:27017
      ME_CONFIG_REQUEST_SIZE: 100Mb
    command:
      - "mongo-express"
    networks:
      - mynetwork
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - "node.hostname!=dcsynmgr01"
          - "node.hostname!=ssmanager3"
    ports:
      - target: 8081
        published: 8081
        protocol: tcp
        mode: host
networks:
  host_mode:
    external:
      name: 'host'
  mynetwork:
    attachable: true

How to configure trojan to make it fall back to the site correctly?

I use the jwilder/nginx-proxy image for automatic HTTPS, and I deploy the trojan-go service through the compose.yml file shown below. I can open the HTTPS website correctly by domain name, but trojan-go does not fall back to the website correctly, and the log shows
github.com/p4gefau1t/trojan-go/proxy.(*Node).BuildNext:stack.go:29 invalid redirect address. check your http server: trojan_web:80 | dial tcp 172.18.0.2:80: connect: connection refused. Where is the problem? Thank you very much!
version: '3'
services:
  trojan-go:
    image: teddysun/trojan-go:latest
    restart: always
    volumes:
      - ./config.json:/etc/trojan-go/config.json
      - /opt/trojan/nginx/certs/:/opt/crt/:ro
    environment:
      - "VIRTUAL_HOST=domain name"
      - "VIRTUAL_PORT=38232"
      - "LETSENCRYPT_HOST=domain name"
      - "LETSENCRYPT_EMAIL=xxx@gmail.com"
    expose:
      - "38232"
  web1:
    image: nginx:latest
    restart: always
    expose:
      - "80"
    volumes:
      - /opt/trojan/nginx/html:/usr/share/nginx/html:ro
    environment:
      - VIRTUAL_HOST=domain name
      - VIRTUAL_PORT=80
      - LETSENCRYPT_HOST=domain name
      - LETSENCRYPT_EMAIL=xxx@gmail.com
networks:
  default:
    external:
      name: proxy_nginx-proxy
The content of the trojan-go config file is shown below:
{
  "run_type": "server",
  "local_addr": "0.0.0.0",
  "local_port": 38232,
  "remote_addr": "trojan_web",
  "remote_port": 80,
  "log_level": 1,
  "password": [
    "mypasswd"
  ],
  "ssl": {
    "verify": true,
    "verify_hostname": true,
    "cert": "/opt/crt/domain name.crt",
    "key": "/opt/crt/domain name.key",
    "sni": "domain name"
  },
  "router": {
    "enabled": true,
    "block": [
      "geoip:private"
    ]
  }
}
(PS: I confirm that the trojan-go service and the web container are on the same internal network and can communicate with each other.)

Not able to connect to Elasticsearch from docker container (node.js client)

I have set up an Elasticsearch/Kibana Docker configuration and I want to connect to Elasticsearch from inside a Docker container using the @elastic/elasticsearch client for Node. However, the connection is "timing out".
The project is inspired by Patrick Triest: https://blog.patricktriest.com/text-search-docker-elasticsearch/
I have made some modifications, though, in order to connect Kibana, use a newer ES image, and use the new Elasticsearch Node client.
I am using the following docker-compose file:
version: "3"
services:
api:
container_name: mp-backend
build: .
ports:
- "3000:3000"
- "9229:9229"
environment:
- NODE_ENV=local
- ES_HOST=elasticsearch
- PORT=3000
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
container_name: elasticsearch
environment:
- node.name=elasticsearch
- cluster.name=es-docker-cluster
- discovery.type=single-node
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- "http.cors.allow-origin=*"
- "http.cors.enabled=true"
- "http.cors.allow-headers=X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization"
- "http.cors.allow-credentials=true"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- data01:/usr/share/elasticsearch/data
ports:
- 9200:9200
networks:
- elastic
kibana:
image: docker.elastic.co/kibana/kibana:7.5.1
ports:
- "5601:5601"
links:
- elasticsearch
networks:
- elastic
depends_on:
- elasticsearch
volumes:
data01:
driver: local
networks:
elastic:
driver: bridge
When building/bringing the containers up, I am able to get a response from ES with curl -XGET "localhost:9200" ("You Know, for Search"...), and Kibana is running and able to connect to the index.
I have the following file located in the backend container (connection.js):
const { Client } = require("@elastic/elasticsearch");
const client = new Client({ node: "http://localhost:9200" });

/* Check the elasticsearch connection */
async function health() {
  let connected = false;
  while (!connected) {
    console.log("Connecting to Elasticsearch");
    try {
      const health = await client.cluster.health({});
      connected = true;
      console.log(health.body);
      return health;
    } catch (err) {
      console.log("ES Connection Failed", err);
    }
  }
}

health();
If I run it outside of the container then I get the expected response:
node server/connection.js
Connecting to Elasticsearch
{
  cluster_name: 'es-docker-cluster',
  status: 'yellow',
  timed_out: false,
  number_of_nodes: 1,
  number_of_data_nodes: 1,
  active_primary_shards: 7,
  active_shards: 7,
  relocating_shards: 0,
  initializing_shards: 0,
  unassigned_shards: 3,
  delayed_unassigned_shards: 0,
  number_of_pending_tasks: 0,
  number_of_in_flight_fetch: 0,
  task_max_waiting_in_queue_millis: 0,
  active_shards_percent_as_number: 70
}
However, if I run it inside of the container:
docker exec mp-backend "node" "server/connection.js"
Then I get the following response:
Connecting to Elasticsearch
ES Connection Failed ConnectionError: connect ECONNREFUSED 127.0.0.1:9200
    at onResponse (/usr/src/app/node_modules/@elastic/elasticsearch/lib/Transport.js:214:13)
    at ClientRequest.<anonymous> (/usr/src/app/node_modules/@elastic/elasticsearch/lib/Connection.js:98:9)
    at ClientRequest.emit (events.js:223:5)
    at Socket.socketErrorListener (_http_client.js:415:9)
    at Socket.emit (events.js:223:5)
    at emitErrorNT (internal/streams/destroy.js:92:8)
    at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
    at processTicksAndRejections (internal/process/task_queues.js:81:21) {
  name: 'ConnectionError',
  meta: {
    body: null,
    statusCode: null,
    headers: null,
    warnings: null,
    meta: {
      context: null,
      request: [Object],
      name: 'elasticsearch-js',
      connection: [Object],
      attempts: 3,
      aborted: false
    }
  }
}
So, I tried changing the client connection to (I read somewhere that this might help):
const client = new Client({ node: "http://172.24.0.1:9200" });
Then I am just "stuck" waiting for a response. Only one console.log of "Connecting to Elasticsearch"
I am using the following version:
"#elastic/elasticsearch": "7.5.1"
As you probably see, I do not have a full grasp of what is happening here... I have also tried to add:
links:
  - elasticsearch
networks:
  - elastic
To the api service, without any luck.
Does anyone know what I am doing wrong here? Thank you in advance :)
EDIT:
I did a "docker network inspect" on the network with *_elastic. There I see the following:
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.22.0.0/16",
"Gateway": "172.22.0.1"
}
]
},
Changing the client to connect to the "Gateway" IP:
const client = new Client({ node: "http://172.22.0.1:9200" });
Then it works! I am still wondering why, as this was just trial and error. Is there any way to obtain this IP without having to inspect the network?
In Docker, localhost (or the corresponding IPv4 address 127.0.0.1, or the corresponding IPv6 address ::1) generally means "this container"; you can't use that host name to access services running in another container.
In a Compose-based setup, the names of the services: blocks (api, elasticsearch, kibana) are usable as host names. The caveat is that all of the services have to be on the same Docker-internal network. Compose creates one for you and attaches containers to it by default. (In your example api is on the default network but the other two containers are on a separate elastic network.) Networking in Compose in the Docker documentation has some more details.
So to make this work, you need to tell your client code to honor the environment variable you're setting that points at Elasticsearch:
const esHost = process.env.ES_HOST || 'localhost';
const esUrl = 'http://' + esHost + ':9200';
const client = new Client({ node: esUrl });
In your docker-compose.yml file delete all of the networks: blocks to use the provided default network. (While you're there, links: is unnecessary and Compose provides reasonable container_name: for you; api can reasonably depends_on: [elasticsearch].)
Since we've provided a fallback for $ES_HOST, if you're working in a host development environment, it will default to using localhost; outside of Docker where it means "the current host" it will reach the published port of the Elasticsearch container.
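Putting that together with the health() helper from the question, a connection.js along these lines should behave the same inside and outside Docker (a sketch, keeping the question's retry loop and client version):
const { Client } = require("@elastic/elasticsearch");

// Honor ES_HOST when docker-compose sets it; fall back to localhost for host-side development.
const esHost = process.env.ES_HOST || "localhost";
const client = new Client({ node: "http://" + esHost + ":9200" });

async function health() {
  // Keep retrying until the cluster answers, as in the original question.
  let connected = false;
  while (!connected) {
    console.log("Connecting to Elasticsearch");
    try {
      const health = await client.cluster.health({});
      connected = true;
      console.log(health.body);
      return health;
    } catch (err) {
      console.log("ES Connection Failed", err);
    }
  }
}

health();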
