Docker Node MongoDB AuthenticationFailed: SCRAM-SHA-1 authentication failed, storedKey mismatch - node.js

I am posting it here as I have run out of options.
I am trying to connect from my Node app to MongoDB.
I am getting AuthenticationFailed: SCRAM-SHA-1 authentication failed, storedKey mismatch.
My local environment works 100%. I dumped my local MongoDB (the app database and the admin database too) into the Docker container.
I created my docker-compose.yml as below:
version: "1.0"
services:
mongodb:
image: mongo:3.4.7
container_name: MongoDB2
restart: unless-stopped
ports:
- '27017:27017'
app:
links:
- mongodb
depends_on:
- mongodb
image: eamello/gsd:myCore
ports:
- '8087:8087'
stdin_open: true
tty: true
volumes:
db:
networks:
node-webapp-network:
driver: bridge
My config.json file, which has the database connection details:
"myCore": {
"database": {
"url": "mongodb://mongodb:27017/myCore",
"options": {
"db": {
"native_parser": true
},
"server": {
"poolSize": 100,
"socketOptions": {
"keepAlive": 1000,
"connectTimeoutMS": 30000
}
},
"replset": {},
"user": "myAdmin",
"pass": "/WnUU5Jqithypb9970AfIQ==",
"auth": {
"authdb": "admin"
},
"queryLevel":{
"common":{
"maxTimeMS": 15000
}
}
}
},
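For reference, the same credentials in plain connection-string form would look roughly like this (just a sketch using the standard Node driver; the password is the placeholder from config.json above and has to be percent-encoded in a URI because of the "/" and "=="):

// sketch only: the same user/pass/authdb expressed as a single connection string
const { MongoClient } = require('mongodb');

const uri = 'mongodb://myAdmin:'
  + encodeURIComponent('/WnUU5Jqithypb9970AfIQ==')   // '/' and '=' must be percent-encoded in a URI
  + '@mongodb:27017/myCore?authSource=admin';

MongoClient.connect(uri, (err, client) => {
  if (err) return console.error('auth failed:', err.message);
  console.log('authenticated against admin');
  client.close();
});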
I am 100% sure the user created in my admin database has the same password.
I checked and rechecked several times.
I also tried to add my user via a JavaScript init file... it looks like the script was never executed.
db.createUser(
  {
    user: "myAdmin",
    pwd: "/WnUU5Jqithypb9970AfIQ==",
    roles: [
      {
        role: "userAdmin", db: "myCore"
      },
      {
        role: "readWrite", db: "myCore"
      }
    ]
  }
);
As I can manage my MongoDB via Compass, I set this script aside.
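For reference, one way to double-check what the container actually stored (a sketch, run from the mongo shell inside the MongoDB2 container via docker exec) is to list the users in the admin database and recreate mine if it is missing or stale:

// run inside the container: docker exec -it MongoDB2 mongo admin
db.getSiblingDB("admin").getUsers();            // does myAdmin exist here at all?

// if it exists but the SCRAM credentials look stale, drop and recreate it
db.getSiblingDB("admin").dropUser("myAdmin");
db.getSiblingDB("admin").createUser({
  user: "myAdmin",
  pwd: "/WnUU5Jqithypb9970AfIQ==",
  roles: [
    { role: "userAdmin", db: "myCore" },
    { role: "readWrite", db: "myCore" }
  ]
});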
Does anyone have any clue why I am getting AuthenticationFailed: SCRAM-SHA-1 authentication failed, storedKey mismatch?
I changed some names above as this is a company issue. Thanks.

Related

Digital Ocean mongo DB docker compose throws "MongoServerError: Authentication failed."

I am wrapping up a Docker tutorial. When I run the containers on my laptop, it works fine with no MongoDB issue.
But ever since I put it on a Digital Ocean droplet (Ubuntu 20.04 LTS x64, 2 GB memory, 10 GB disk), the issue below occurred:
MongoServerError: Authentication failed.
at Connection.onMessage (/app/node_modules/mongodb/lib/cmap/connection.js:207:30)
at MessageStream.<anonymous> (/app/node_modules/mongodb/lib/cmap/connection.js:60:60)
at MessageStream.emit (node:events:513:28)
at processIncomingData (/app/node_modules/mongodb/lib/cmap/message_stream.js:132:20)
at MessageStream._write (/app/node_modules/mongodb/lib/cmap/message_stream.js:33:9)
at writeOrBuffer (node:internal/streams/writable:392:12)
at _write (node:internal/streams/writable:333:10)
at Writable.write (node:internal/streams/writable:337:10)
at Socket.ondata (node:internal/streams/readable:766:22)
at Socket.emit (node:events:513:28) {
ok: 0,
code: 18,
codeName: 'AuthenticationFailed',
connectionGeneration: 0,
[Symbol(errorLabels)]: Set(2) { 'HandshakeError', 'ResetPool' }
Code:
package.json
"dependencies": {
"bcryptjs": "^2.4.3",
"connect-redis": "^6.1.3",
"cors": "^2.8.5",
"express": "^4.18.2",
"express-session": "^1.17.3",
"mongoose": "^6.7.2",
"redis": "^4.5.0"
},
index.js
const express = require("express");
const mongoose = require("mongoose");
const redis = require("redis");
const cors = require("cors");

const {
  MONGO_USER,
  MONGO_PASSWORD,
  MONGO_IP,
  MONGO_PORT,
  REDIS_URL,
  REDIS_PORT,
  SESSION_SECRET,
} = require("./config/config");

let redisClient = redis.createClient({
  legacyMode: true,
  socket: {
    host: REDIS_URL,
    port: REDIS_PORT,
  },
});
redisClient
  .connect()
  .then(() => console.log("redis connected"))
  .catch((e) => console.error("redis error", e));

const postRouter = require("./routes/postRoutes");
const userRouter = require("./routes/userRoutes");

const app = express();

const connectWithRetry = () => {
  mongoose
    .connect(
      `mongodb://${MONGO_USER}:${MONGO_PASSWORD}@${MONGO_IP}:${MONGO_PORT}/?authSource=admin`
    )
    .then(() => console.log("successfully connected to DB"))
    .catch((e) => {
      console.error(e);
      setTimeout(connectWithRetry, 5000);
    });
};

connectWithRetry();

app.enable("trust proxy");
app.use(cors({}));
...
docker-compose.yml
version: "3"
services:
nginx:
image: nginx:stable-alpine
ports:
- "3000:80"
volumes:
- ./nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
depends_on:
- node-app
node-app:
build: .
environment:
- PORT=3000
depends_on:
- mongo
mongo:
image: mongo
environment:
- MONGO_INITDB_ROOT_USERNAME=snap
- MONGO_INITDB_ROOT_PASSWORD=mypassword
volumes:
- mongo-db:/data/db
redis:
image: redis
volumes:
mongo-db:
docker-compose.prod.yml
version: "3"
services:
nginx:
ports:
- "80:80"
node-app:
build:
context: .
args:
NODE_ENV: production
environment:
- NODE_ENV=production
- MONGO_USER=${MONGO_USER}
- MONGO_PASSWORD=${MONGO_PASSWORD}
- SESSION_SECRET=${SESSION_SECRET}
command: node index.js
mongo:
environment:
- MONGO_INITDB_ROOT_USERNAME=${MONGO_INITDB_ROOT_USERNAME}
- MONGO_INITDB_ROOT_PASSWORD=${MONGO_INITDB_ROOT_PASSWORD}
The environment variables are stored in a .env file located in the root directory of the Ubuntu droplet. I tried hard-coding the Mongo URI instead of using environment variables but got the same result, so I'm pretty sure that is not the cause.
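One thing I wanted to rule out when building the URI from environment variables (a sketch only, not necessarily the cause here): any special characters in the credentials have to be percent-encoded, otherwise the driver authenticates with a mangled password:

// sketch: build the mongoose URI with the credentials percent-encoded
const mongoose = require("mongoose");

const { MONGO_USER, MONGO_PASSWORD, MONGO_IP, MONGO_PORT } = process.env;

const uri =
  `mongodb://${encodeURIComponent(MONGO_USER)}:${encodeURIComponent(MONGO_PASSWORD)}` +
  `@${MONGO_IP}:${MONGO_PORT}/?authSource=admin`;

mongoose
  .connect(uri)
  .then(() => console.log("successfully connected to DB"))
  .catch((e) => console.error("mongo auth/connect error:", e.message));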
Here's a log of db users below:
use admin
switched to db admin
admin> db.system.users.find()
[
  {
    _id: 'admin.snap',
    userId: new UUID("4e79e245-894e-4d9a-9d76-802c970d9129"),
    user: 'snap',
    db: 'admin',
    ...
    },
    roles: [ { role: 'root', db: 'admin' } ]
  },
  {
    _id: 'admin.admin',
    userId: new UUID("0a8a471e-0e70-4e1c-b9ed-d99ede9ad0bd"),
    user: 'admin',
    db: 'admin',
    ...
    },
    roles: [ { role: 'root', db: 'admin' } ]
  },
  {
    _id: 'test.snap',
    userId: new UUID("4282b24f-9b15-4956-865f-ef9b206a499d"),
    user: 'snap',
    db: 'test',
    ...
    roles: [ { role: 'readWrite', db: 'test' } ]
  }
]
Thanks for your help.

MongoClient cannot connect to mongo in another Docker container

I am currently trying to deploy my app with Docker but I have a problem with MongoDB.
My app cannot connect to MongoDB running in another container.
Here is my docker-compose.yml file:
version: "3.7"
services:
mqtt_broker:
build:
context: .
dockerfile: Dockerfile
container_name: mqtt_broker
ports:
- "4040:4040"
links:
- mqtt_db:mqtt_db
depends_on:
- mqtt_db
mqtt_db:
build:
context: ./mongo
dockerfile: Dockerfile
container_name: mqtt_db
ports:
- "27017:27017"
Here is the Dockerfile for the mongo container:
FROM mongo:latest
COPY init_docker.js /docker-entrypoint-initdb.d/
CMD mongod --replSet "rs0" --bind_ip_all
Here is how I connect to mongo in the app container:
const client = new MongoClient(config.get("dbURL"));
const mongo = await client.connect();
const db = client.db(config.get("dbName"));
with the configuration file:
{
  "cors": {
    "origin": "*"
  },
  "clientPort": 4040,
  "mqttPort": 1883,
  "dbURL": "mongodb://mqtt_db:27017",
  "dbName": "mqtt_proxy"
}
Here is the error I get from the MongoDB Node client:
MongoServerSelectionError: Server selection timed out after 30000 ms
    at Timeout._onTimeout (/broker/node_modules/mongodb/lib/sdam/topology.js:305:38)
    at listOnTimeout (node:internal/timers:559:17)
    at processTimers (node:internal/timers:502:7) {
  reason: TopologyDescription {
    type: 'Unknown',
    servers: Map(1) {
      'mqtt_db:27017' => ServerDescription {
        _hostAddress: HostAddress { isIPv6: false, host: 'mqtt_db', port: 27017 },
        address: 'mqtt_db:27017',
        type: 'RSGhost',
        hosts: [],
        passives: [],
        arbiters: [],
        tags: {},
        minWireVersion: 0,
        maxWireVersion: 13,
        roundTripTime: 35.24000000000001,
        lastUpdateTime: 13896371,
        lastWriteDate: 0,
        topologyVersion: {
          processId: ObjectId { [Symbol(id)]: [Buffer [Uint8Array]] },
          counter: 0
        },
        logicalSessionTimeoutMinutes: 30
      }
    },
    stale: false,
    compatible: true,
    heartbeatFrequencyMS: 10000,
    localThresholdMS: 15,
    commonWireVersion: 13,
    logicalSessionTimeoutMinutes: undefined
  },
  code: undefined,
  [Symbol(errorLabels)]: Set(0) {}
}
I can actually ping mqtt_db and the IP is resolved, but I don't see it in /etc/hosts.
I also tried adding networks to my docker-compose, but then mqtt_db was translated to localhost and it still did not work.
Here is the repo of my project: https://gitlab.com/mqtt-broker-sniffer/sniffer
EDIT
I had to specify ?directConnection=true when connecting to the db.
Credit: https://stackoverflow.com/a/70204195/19127936
const client = new MongoClient(
  `${config.get("dbURL")}/${config.get("dbName")}?directConnection=true`
);
this.client = await client.connect();
this.db = client.db(config.get("dbName"));
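For anyone hitting the same 'RSGhost' state: it usually means mongod was started with --replSet but the replica set was never initiated, so normal server selection cannot find a primary. An alternative to directConnection=true (a sketch, assuming clients will always reach the database as mqtt_db) is to initiate the set once with the container hostname:

// run once against the mqtt_db container, e.g. from the mongo shell:
rs.initiate({
  _id: "rs0",                                   // must match the --replSet name in the Dockerfile
  members: [{ _id: 0, host: "mqtt_db:27017" }]  // advertise the compose service name, not localhost
});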

How to resolve this MongoError : command find requires authentication?

I installed a local MongoDB instance with password authentication using Docker and linked it to my Node.js backend. Everything works fine on my laptop. The problem is that when I put the Dockerized Mongo and my backend on a VPS, I get a weird error from the backend when testing the endpoints: MongoError: command find requires authentication
I tried to investigate and at first I thought there were some problems with the mongo config file, so I ran this command: db._adminCommand({ getCmdLineOpts: 1 }) and I got this output:
{
  "argv" : [
    "mongod",
    "--auth",
    "--bind_ip_all"
  ],
  "parsed" : {
    "net" : {
      "bindIp" : "*"
    },
    "security" : {
      "authorization" : "enabled"
    }
  },
  "ok" : 1
}
which shows that authorization is enabled.
Also, I got no errors while running my backend that would indicate a connection error. On the contrary, while running this server:
@Configuration({
  ...config,
  acceptMimes: ["application/json"],
  httpPort: process.env.PORT || 8083,
  httpsPort: false, // CHANGE
  mongoose: [
    {
      id: "mydb",
      url: "mongodb://127.0.0.1:27017/mydb",
      connectionOptions: {
        user: process.env.USER_MONGO_MYDB,
        pass: process.env.PASSWORD_MONGO_MYDB
      }
    }
  ],
  componentsScan: [
    `${rootDir}/protocols/*.ts` // scan protocols directory
  ],
  mount: {
    "/rest": [
      `${rootDir}/controllers/**/*.ts`
    ],
    "/": [IndexCtrl]
  },
  views: {
    root: `${rootDir}/../views`,
    viewEngine: "ejs"
  },
  exclude: [
    "**/*.spec.ts"
  ]
})
export class Server {
  @Inject()
  app: PlatformApplication;

  @Configuration()
  settings: Configuration;
}
I got a log telling me that the connection was successful:
[2021-06-18T16:07:42.391] [INFO ] [TSED] - Connect to mongo database: mydb
(node:18398) DeprecationWarning: current URL string parser is deprecated, and will be removed in a future version. To use the new parser, pass option { useNewUrlParser: true } to MongoClient.connect.
(node:18398) [MONGODB DRIVER] Warning: Current Server Discovery and Monitoring engine is deprecated, and will be removed in a future version. To use the new Server Discover and Monitoring engine, pass option { useUnifiedTopology: true } to the MongoClient constructor.
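(As an aside, the two deprecation warnings in that log can be silenced by passing the options they suggest; a sketch of the equivalent plain mongoose call, assuming the connection options are forwarded to the driver unchanged:)

// sketch: the same connection with the options the warnings ask for
const mongoose = require("mongoose");

mongoose.connect("mongodb://127.0.0.1:27017/mydb", {
  user: process.env.USER_MONGO_MYDB,
  pass: process.env.PASSWORD_MONGO_MYDB,
  useNewUrlParser: true,      // silences the URL string parser deprecation warning
  useUnifiedTopology: true    // silences the server discovery/monitoring deprecation warning
});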
I have no idea where to investigate or how to solve the problem. Does anyone have any recommendation?
I'll put here my Dockerfile, docker-compose, and other config files I used for the mongo instance, in case they are of any help:
docker-compose.yml :
version: "3"
services:
mongodb:
build: .
container_name: mongodb
environment:
MONGO_INITDB_DATABASE: mydb
MONGO_INITDB_ROOT_USERNAME: "${MONGO_INITDB_ROOT_USERNAME}"
MONGO_INITDB_ROOT_PASSWORD: "${MONGO_INITDB_ROOT_PASSWORD}"
volumes:
- ./database:/data/db
- ./log/:/var/log/mongodb/
- ./mongod.conf:/etc/mongod.conf
ports:
- 27017:27017
restart: unless-stopped
Dockerfile:
FROM mongo
COPY docker-entrypoint.sh /usr/bin/
COPY seed-data.js /docker-entrypoint-initdb.d/
COPY .env /docker-entrypoint-initdb.d/
COPY mongod.conf /etc/mongod.conf
mongod.conf
# mongod.conf
# for documentation of all options, see:
# http://docs.mongodb.org/manual/reference/configuration-options/

# Where and how to store data.
storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# network interfaces
net:
  port: 27017
  bindIp: 127.0.0.1

# how the process runs
processManagement:
  timeZoneInfo: /usr/share/zoneinfo

# security settings including user password protection
security:
  authorization: enabled

#operationProfiling:
#replication:
#sharding:

## Enterprise-Only Options:
#auditLog:
#snmp:
seed-data.js :
db.createUser(
  {
    user: "user",
    pwd: "userpassword",
    roles: [ "readWrite", "dbAdmin" ]
  }
)
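(For what it's worth, since the init scripts run against MONGO_INITDB_DATABASE, the bare role strings above bind to that database implicitly; a variant that spells the target database out, just as a sketch, would be:)

// sketch: same user, with the target database stated explicitly
db.getSiblingDB("mydb").createUser({
  user: "user",
  pwd: "userpassword",
  roles: [
    { role: "readWrite", db: "mydb" },
    { role: "dbAdmin", db: "mydb" }
  ]
});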
docker-entrypoint.sh
if [ "$MONGO_INITDB_ROOT_USERNAME" ] && [ "$MONGO_INITDB_ROOT_PASSWORD" ]; then
rootAuthDatabase='admin'
"${mongo[#]}" "$rootAuthDatabase" <<-EOJS
db.createUser({
user: $(_js_escape "$MONGO_INITDB_ROOT_USERNAME"),
pwd: $(_js_escape "$MONGO_INITDB_ROOT_PASSWORD"),
roles: [ { role: 'root', db: $(_js_escape "$rootAuthDatabase") } ]
})
EOJS
fi
I solved the problem; it was a really stupid mistake: I forgot to put back the .env file after I pulled the repo onto the VPS. The backend received undefined instead of process.env.USER_MONGO_MYDB and process.env.PASSWORD_MONGO_MYDB and did not throw any mongo connection error!
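A cheap guard against this class of mistake (a sketch, assuming the two variables above are the only required ones) is to fail fast at startup when a required variable is missing:

// sketch: crash loudly at boot instead of silently connecting without credentials
const required = ["USER_MONGO_MYDB", "PASSWORD_MONGO_MYDB"];
const missing = required.filter((name) => !process.env[name]);

if (missing.length > 0) {
  throw new Error(`Missing required environment variables: ${missing.join(", ")}`);
}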

docker-compose: Connect to mongodb using node

I'm looking for some help on how I can connect to MongoDB using Node across two different containers.
I have three services set up in my docker compose:
webserver (irrelevant to question)
nodeJs
mongo database
The nodejs container is essentially an api which I can use to communicate with mongodb:
require('dotenv').config();
const express = require('express');
const cors = require('cors');
const app = express();
var MongoClient = require('mongodb').MongoClient;
var mongodb = require('mongodb');

app.use(express.json());
app.use(cors());

app.post('/api/fetch-items', (req, res) => {
  if (req.headers.apikey !== process.env.API_KEY) return res.sendStatus(401);

  // URL is in the format: mongodb://user:pwd@database:27017
  MongoClient.connect(process.env.MONGODB_URL, function(err, db) {
    if (err) return res.status(500).send(err);
    var dbo = db.db("db");
    dbo.collection("col").find({}).toArray(function(err, result) {
      if (err) return res.status(500).send(err);
      db.close();
      return res.status(200).send(result);
    });
  });
});

app.listen(4000);
This all works perfectly fine if I run node as a standalone container (not using docker-compose) and use localhost in the URL.
However, when I use the image in docker-compose I receive the response:
{
"name": "MongoNetworkError"
}
when sending a request to the API.
I am currently using the hostname 'database' in the URL and this does not work. I have also tried using localhost.
There are also no errors as a result of the command node server.
If needed my Dockerfile for the node server is:
FROM node:10-alpine
RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app
WORKDIR /home/node/app
COPY package*.json ./
RUN chown node:node ./package*.json
USER node
RUN npm install
COPY --chown=node:node . .
EXPOSE 4000
CMD [ "node", "server" ]
My docker-compose.yml file:
version: "3.1"
services:
mongodb:
image: mongo
restart: always
container_name: database
ports:
- 27017:27017
environment:
MONGO_INITDB_ROOT_USERNAME: root
MONGO_INITDB_ROOT_PASSWORD: xxxxxxxx
# Web server stuff
node:
image: created-node-server
container_name: node
ports:
- 4000:4000
Finally, the output of docker network inspect:
[
    {
        "Name": "network_default",
        "Id": "3e51a90a23f2785cfc405243ad4c73991852f52826fd1cd0b14da5d4eaa180e4",
        "Created": "2021-01-12T01:07:42.656013002Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.23.0.0/16",
                    "Gateway": "172.23.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "418876a06c3f8fa430804ae77c66cca986a49dbc88374266346463f7f448baa7": {
                "Name": "database",
                "EndpointID": "ac08c5a439edd43e612723d269714e9dfbae29dbdb50790b61c66207287d70c8",
                "MacAddress": "02:42:ac:17:00:04",
                "IPv4Address": "172.23.0.4/16",
                "IPv6Address": ""
            },
            "7b6dcbb8f76618575c988a026ac0308075a116f79a2e58d8a146e33fb5d7674c": {
                "Name": "node",
                "EndpointID": "e6beb412a2fe97ae7d04d2484a7ca3634bfa37c82680becc412d1f44502da72f",
                "MacAddress": "02:42:ac:17:00:03",
                "IPv4Address": "172.23.0.3/16",
                "IPv6Address": ""
            },
            "f2ea250bccdb2c6a0c4d7818912ddbf29196eff072dad699e8dbcef466cd38a3": {
                "Name": "webserver",
                "EndpointID": "f6617aab4001032069e68300c5303fa730f3458e2fe0092ace45a9f67e16d7c5",
                "MacAddress": "02:42:ac:17:00:02",
                "IPv4Address": "172.23.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "default",
            "com.docker.compose.project": "proj",
            "com.docker.compose.version": "1.27.4"
        }
    }
]
Essentially, I am receiving the MongoNetworkError when trying to communicate with MongoDB through Node, both of which are Docker containers created using docker-compose.
I hope all the above makes sense, sorry if it is a bit wordy, I have tried to include as much info as possible. Comment if you need any more info
Thanks :)
You just need to include an environment variable under the node service: MONGODB_URL=mongodb://database:27017
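Since your compose file also sets MONGO_INITDB_ROOT_USERNAME/PASSWORD, that URL will usually need the credentials plus authSource=admin as well. A sketch of the node side, assuming the variable from above:

// sketch: connect using the compose hostname "database" with the root credentials
const { MongoClient } = require('mongodb');

const url = process.env.MONGODB_URL ||
  'mongodb://root:xxxxxxxx@database:27017/?authSource=admin';

MongoClient.connect(url, function (err, client) {
  if (err) return console.error('mongo connection failed:', err.message);
  console.log('connected to', url);
  client.close();
});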

【Hyperledger Fabric】Can't send invoke request to peer

My Fabric SDK server is running as a Docker container on a Mac.
The Fabric Docker containers (peers, orderers and CAs...) are also running on the Mac.
Now I'm trying to send an invoke request from the SDK container (ubuntu:18.04) to a peer.
I'm using the first-network from fabric-samples.
This is the peer config file (docker-compose-base.yaml):
version: '2'

services:

  orderer.example.com:
    container_name: orderer.example.com
    extends:
      file: peer-base.yaml
      service: orderer-base
    volumes:
      - ../channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
      - ../crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/msp:/var/hyperledger/orderer/msp
      - ../crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/:/var/hyperledger/orderer/tls
      - orderer.example.com:/var/hyperledger/production/orderer
    ports:
      - 7050:7050

  peer0.org1.example.com:
    container_name: peer0.org1.example.com
    extends:
      file: peer-base.yaml
      service: peer-base
    environment:
      - CORE_PEER_ID=peer0.org1.example.com
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      - CORE_PEER_LISTENADDRESS=0.0.0.0:7051
      - CORE_PEER_CHAINCODEADDRESS=peer0.org1.example.com:7052
      - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052
      - CORE_PEER_GOSSIP_BOOTSTRAP=peer1.org1.example.com:8051
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org1.example.com:7051
      - CORE_PEER_LOCALMSPID=Org1MSP
    volumes:
      - /var/run/:/host/var/run/
      - ../crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp:/etc/hyperledger/fabric/msp
      - ../crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls:/etc/hyperledger/fabric/tls
      - peer0.org1.example.com:/var/hyperledger/production
    ports:
      - 7051:7051

  peer1.org1.example.com:
    container_name: peer1.org1.example.com
    extends:
      file: peer-base.yaml
      service: peer-base
    environment:
      - CORE_PEER_ID=peer1.org1.example.com
      - CORE_PEER_ADDRESS=peer1.org1.example.com:8051
      - CORE_PEER_LISTENADDRESS=0.0.0.0:8051
      - CORE_PEER_CHAINCODEADDRESS=peer1.org1.example.com:8052
      - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:8052
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.org1.example.com:8051
      - CORE_PEER_GOSSIP_BOOTSTRAP=peer0.org1.example.com:7051
      - CORE_PEER_LOCALMSPID=Org1MSP
    volumes:
      - /var/run/:/host/var/run/
      - ../crypto-config/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/msp:/etc/hyperledger/fabric/msp
      - ../crypto-config/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/tls:/etc/hyperledger/fabric/tls
      - peer1.org1.example.com:/var/hyperledger/production
    ports:
      - 8051:8051

  peer0.org2.example.com:
    container_name: peer0.org2.example.com
    extends:
      file: peer-base.yaml
      service: peer-base
    environment:
      - CORE_PEER_ID=peer0.org2.example.com
      - CORE_PEER_ADDRESS=peer0.org2.example.com:9051
      - CORE_PEER_LISTENADDRESS=0.0.0.0:9051
      - CORE_PEER_CHAINCODEADDRESS=peer0.org2.example.com:9052
      - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:9052
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org2.example.com:9051
      - CORE_PEER_GOSSIP_BOOTSTRAP=peer1.org2.example.com:10051
      - CORE_PEER_LOCALMSPID=Org2MSP
    volumes:
      - /var/run/:/host/var/run/
      - ../crypto-config/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/msp:/etc/hyperledger/fabric/msp
      - ../crypto-config/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls:/etc/hyperledger/fabric/tls
      - peer0.org2.example.com:/var/hyperledger/production
    ports:
      - 9051:9051

  peer1.org2.example.com:
    container_name: peer1.org2.example.com
    extends:
      file: peer-base.yaml
      service: peer-base
    environment:
      - CORE_PEER_ID=peer1.org2.example.com
      - CORE_PEER_ADDRESS=peer1.org2.example.com:10051
      - CORE_PEER_LISTENADDRESS=0.0.0.0:10051
      - CORE_PEER_CHAINCODEADDRESS=peer1.org2.example.com:10052
      - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:10052
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.org2.example.com:10051
      - CORE_PEER_GOSSIP_BOOTSTRAP=peer0.org2.example.com:9051
      - CORE_PEER_LOCALMSPID=Org2MSP
    volumes:
      - /var/run/:/host/var/run/
      - ../crypto-config/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/msp:/etc/hyperledger/fabric/msp
      - ../crypto-config/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls:/etc/hyperledger/fabric/tls
      - peer1.org2.example.com:/var/hyperledger/production
    ports:
      - 10051:10051
This is the connection file (connection-org1.json):
{
"name": "first-network-org1",
"version": "1.0.0",
"client": {
"organization": "Org1",
"connection": {
"timeout": {
"peer": {
"endorser": "300"
}
}
}
},
"organizations": {
"Org1": {
"mspid": "Org1MSP",
"peers": [
"peer0.org1.example.com",
"peer1.org1.example.com"
],
"certificateAuthorities": [
"ca.org1.example.com"
]
}
},
"peers": {
"peer0.org1.example.com": {
"url": "grpcs://host.docker.internal:7051",
"tlsCACerts": {
"pem": "-----BEGIN CERTIFICATE-----\nMIICVzCCAf2gAwIBAgIQIrzVUkH/VhPQNk1YHCtj3jAKBggqhkjOPQQDAjB2MQsw\nCQYDVQQGEwJVUzETMBEGA1UECBMKQ2FsaWZvcm5pYTEWMBQGA1UEBxMNU2FuIEZy\nYW5jaXNjbzEZMBcGA1UEChMQb3JnMS5leGFtcGxlLmNvbTEfMB0GA1UEAxMWdGxz\nY2Eub3JnMS5leGFtcGxlLmNvbTAeFw0xOTEyMTIwODIwMDBaFw0yOTEyMDkwODIw\nMDBaMHYxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQH\nEw1TYW4gRnJhbmNpc2NvMRkwFwYDVQQKExBvcmcxLmV4YW1wbGUuY29tMR8wHQYD\nVQQDExZ0bHNjYS5vcmcxLmV4YW1wbGUuY29tMFkwEwYHKoZIzj0CAQYIKoZIzj0D\nAQcDQgAE8TqkZLAW+F2rYwnicTo2NTo1+2kYUvI28UKvgAdm1iavbwunNlBB+Gph\nLT4z/XVDjp2XP3VYdv4jmCRSmBkREKNtMGswDgYDVR0PAQH/BAQDAgGmMB0GA1Ud\nJQQWMBQGCCsGAQUFBwMCBggrBgEFBQcDATAPBgNVHRMBAf8EBTADAQH/MCkGA1Ud\nDgQiBCDW8LHjcUuWGmLSfNXZym6gDPb9twcHkByS3Yoa/rM4YTAKBggqhkjOPQQD\nAgNIADBFAiEAieGWQzmyWS2pUORmXczUM/XCaB4t33HNtizkr62YgWUCIB7HVwss\ny7l1k9ifxb0VN7q4pzIeHTFMeH6+e6Nl3p2C\n-----END CERTIFICATE-----\n"
},
"grpcOptions": {
"ssl-target-name-override": "peer0.org1.example.com"
}
},
"peer1.org1.example.com": {
"url": "grpcs://host.docker.internal:8051",
"tlsCACerts": {
"pem": "-----BEGIN CERTIFICATE-----\nMIICVzCCAf2gAwIBAgIQIrzVUkH/VhPQNk1YHCtj3jAKBggqhkjOPQQDAjB2MQsw\nCQYDVQQGEwJVUzETMBEGA1UECBMKQ2FsaWZvcm5pYTEWMBQGA1UEBxMNU2FuIEZy\nYW5jaXNjbzEZMBcGA1UEChMQb3JnMS5leGFtcGxlLmNvbTEfMB0GA1UEAxMWdGxz\nY2Eub3JnMS5leGFtcGxlLmNvbTAeFw0xOTEyMTIwODIwMDBaFw0yOTEyMDkwODIw\nMDBaMHYxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQH\nEw1TYW4gRnJhbmNpc2NvMRkwFwYDVQQKExBvcmcxLmV4YW1wbGUuY29tMR8wHQYD\nVQQDExZ0bHNjYS5vcmcxLmV4YW1wbGUuY29tMFkwEwYHKoZIzj0CAQYIKoZIzj0D\nAQcDQgAE8TqkZLAW+F2rYwnicTo2NTo1+2kYUvI28UKvgAdm1iavbwunNlBB+Gph\nLT4z/XVDjp2XP3VYdv4jmCRSmBkREKNtMGswDgYDVR0PAQH/BAQDAgGmMB0GA1Ud\nJQQWMBQGCCsGAQUFBwMCBggrBgEFBQcDATAPBgNVHRMBAf8EBTADAQH/MCkGA1Ud\nDgQiBCDW8LHjcUuWGmLSfNXZym6gDPb9twcHkByS3Yoa/rM4YTAKBggqhkjOPQQD\nAgNIADBFAiEAieGWQzmyWS2pUORmXczUM/XCaB4t33HNtizkr62YgWUCIB7HVwss\ny7l1k9ifxb0VN7q4pzIeHTFMeH6+e6Nl3p2C\n-----END CERTIFICATE-----\n"
}
}
},
"certificateAuthorities": {
"ca.org1.example.com": {
"url": "https://host.docker.internal:7054",
"caName": "ca-org1",
"tlsCACerts": {
"pem": "-----BEGIN CERTIFICATE-----\nMIICUTCCAfegAwIBAgIQeWwAs49jzhe2XsEmY4M0jDAKBggqhkjOPQQDAjBzMQsw\nCQYDVQQGEwJVUzETMBEGA1UECBMKQ2FsaWZvcm5pYTEWMBQGA1UEBxMNU2FuIEZy\nYW5jaXNjbzEZMBcGA1UEChMQb3JnMS5leGFtcGxlLmNvbTEcMBoGA1UEAxMTY2Eu\nb3JnMS5leGFtcGxlLmNvbTAeFw0xOTEyMTIwODIwMDBaFw0yOTEyMDkwODIwMDBa\nMHMxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQHEw1T\nYW4gRnJhbmNpc2NvMRkwFwYDVQQKExBvcmcxLmV4YW1wbGUuY29tMRwwGgYDVQQD\nExNjYS5vcmcxLmV4YW1wbGUuY29tMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE\nhvJll8VoZC+0seO0fKrpbxWWAABOt2UoCbyq540wY3YSM2GCKuD2XMTtCsiC8XEB\nbKaokdxo5WyWXOsamK1hEKNtMGswDgYDVR0PAQH/BAQDAgGmMB0GA1UdJQQWMBQG\nCCsGAQUFBwMCBggrBgEFBQcDATAPBgNVHRMBAf8EBTADAQH/MCkGA1UdDgQiBCC2\nwW1+TNe+qJeskHsq1AoNdYrmgKJ2Pf12KqootThXNDAKBggqhkjOPQQDAgNIADBF\nAiEA+2pAiOxL64KxOFWoqavs5NWeO+GLN0ArS14zCsBxS4MCIAQ7nBMVPYyOHYLS\nBy/zxDAzC+NsFq1iKxyWQxq3Yu9I\n-----END CERTIFICATE-----\n"
},
"httpOptions": {
"verify": false
}
}
}
}
And this is the invoke file (invoke.js):
'use strict';

const { Gateway, Wallets } = require('fabric-network');
const path = require('path');

const ccpPath = path.resolve(__dirname, 'connection-org1.json');

module.exports = async function (key, data) {
  try {
    // Create a new file system based wallet for managing identities.
    const walletPath = path.join(process.cwd(), 'wallet');
    const wallet = await Wallets.newFileSystemWallet(walletPath);
    console.log(`Wallet path: ${walletPath}`);

    // Check to see if we've already enrolled the user.
    const identity = await wallet.get('user1');
    if (!identity) {
      console.log('An identity for the user "user1" does not exist in the wallet');
      console.log('Run the registerUser.js application before retrying');
      return;
    }

    // Create a new gateway for connecting to our peer node.
    const gateway = new Gateway();
    await gateway.connect(ccpPath, { wallet, identity: 'user1', discovery: { enabled: true, asLocalhost: true } });

    // Get the network (channel) our contract is deployed to.
    const network = await gateway.getNetwork('mychannel');

    // Get the contract from the network.
    const contract = network.getContract('save-file-hash');

    // Submit the specified transaction.
    await contract.submitTransaction('registerHash', key, data.length.toString());
    for (var i = 0; i < data.length; i++) {
      await contract.submitTransaction('registerHash', key + i.toString(), data[i]);
    }
    console.log('Transactions has been submitted');

    // Disconnect from the gateway.
    await gateway.disconnect();
  } catch (error) {
    console.error(`Failed to submit transaction: ${error}`);
    process.exit(1);
  }
}
And this is error log.
# node main.js
Wallet path: /usr/src/app/wallet
2019-12-13T13:03:10.930Z - error: [Remote.js]: Error: Failed to connect before the deadline URL:grpcs://localhost:7051 timeout:3000
2019-12-13T13:03:10.935Z - warn: [DiscoveryEndorsementHandler]: _build_endorse_group_member >> G0:1 - endorsement failed - Error: Failed to connect before the deadline URL:grpcs://localhost:7051 timeout:3000
2019-12-13T13:03:10.937Z - error: [Remote.js]: Error: Failed to connect before the deadline URL:grpcs://localhost:9051 timeout:3000
2019-12-13T13:03:10.938Z - warn: [DiscoveryEndorsementHandler]: _build_endorse_group_member >> G1:0 - endorsement failed - Error: Failed to connect before the deadline URL:grpcs://localhost:9051 timeout:3000
2019-12-13T13:03:13.938Z - error: [Remote.js]: Error: Failed to connect before the deadline URL:grpcs://localhost:8051 timeout:3000
2019-12-13T13:03:13.941Z - warn: [DiscoveryEndorsementHandler]: _build_endorse_group_member >> G0:1 - endorsement failed - Error: Failed to connect before the deadline URL:grpcs://localhost:8051 timeout:3000
2019-12-13T13:03:13.944Z - error: [Remote.js]: Error: Failed to connect before the deadline URL:grpcs://localhost:10051 timeout:3000
2019-12-13T13:03:13.944Z - warn: [DiscoveryEndorsementHandler]: _build_endorse_group_member >> G1:0 - endorsement failed - Error: Failed to connect before the deadline URL:grpcs://localhost:10051 timeout:3000
2019-12-13T13:03:13.947Z - error: [DiscoveryEndorsementHandler]: _endorse - endorsement failed::Error: Endorsement has failed
at DiscoveryEndorsementHandler._endorse (/usr/src/app/node_modules/fabric-client/lib/impl/DiscoveryEndorsementHandler.js:185:19)
at process._tickCallback (internal/process/next_tick.js:68:7)
Failed to submit transaction: Error: Endorsement has failed
When the SDK server runs directly on the Mac (not as a Docker container), the invoke request succeeds.
When it runs as a Docker container, the invoke request fails.
Also, user1 is already enrolled with CA-org1.
I think it is a problem with the connection.
Please tell me how to send an invoke request from the SDK server container.
I think this is the line that is causing you a problem in the "SDK container":
await gateway.connect(ccpPath, { wallet, identity: 'user1', discovery: { enabled: true, asLocalhost: true } });
asLocalhost: true substitutes all the discovered addresses with localhost, so the requests just try to connect back into the SDK container itself, which is what causes the errors. When running directly on the Mac, the port forwarding into the Docker containers works with localhost.
After you set asLocalhost: false you may have to change the various URLs in the connection profile to peer1.org1.example.com etc.
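In other words, the connect call would look roughly like this (a sketch; it assumes the SDK container can resolve the peer and orderer hostnames, for example by joining the same Docker network as first-network):

// sketch: discovery without the localhost rewrite
await gateway.connect(ccpPath, {
  wallet,
  identity: 'user1',
  discovery: { enabled: true, asLocalhost: false },
});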
First: you can read the documentation:
https://hyperledger-fabric.readthedocs.io/en/latest/developapps/connectionprofile.html
There is a sample there, and your config is missing the channels: {} and orderers: {} sections (see the sketch after this answer).
Second: if you do not enable TLS, use grpc://XXX.XXX; if you do, use grpcs://XXX.XXX.
For example, if your orderer does not enable TLS you should use "url": "grpc://host.docker.internal:7050", and if your peers enable TLS you should use "url": "grpcs://host.docker.internal:8051".
Third: after the last two steps, query transactions should be OK. But invoke (some write transactions) may still fail. I am still stuck here...
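For reference, the missing sections might look something like this (a sketch only, in the same format as connection-org1.json; the channel name mychannel and the plain-gRPC orderer URL are assumptions based on the first-network defaults, so adjust them to your setup):

"channels": {
  "mychannel": {
    "orderers": ["orderer.example.com"],
    "peers": {
      "peer0.org1.example.com": {},
      "peer1.org1.example.com": {}
    }
  }
},
"orderers": {
  "orderer.example.com": {
    "url": "grpc://host.docker.internal:7050",
    "grpcOptions": {
      "ssl-target-name-override": "orderer.example.com"
    }
  }
}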

Resources