Error while sending query request from client: No peers available to query - hyperledger-fabric

I am getting the following error while sending a query request from my client:
FabricError: No peers available to query. Errors: ["Failed to connect before the deadline
URL:grpcs://localhost:12051","Failed to connect before the deadline
URL:grpcs://localhost:11051"].
Following is the relevant part of my connection-org3.json connection profile:
"organizations": {
"Org3": {
"mspid": "Org3MSP",
"peers": [
"peer0.org3.bc4scm.de",
"peer1.org3.bc4scm.de"
],
"certificateAuthorities": [
"ca.org3.bc4scm.de"
]
}
},
"peers": {
"peer0.org3.bc4scm.de": {
"url": "grpcs://localhost:11051",
"tlsCACerts": {
"path": "crypto-config/peerOrganizations/org3.bc4scm.de/tlsca/tlsca.org3.bc4scm.de-cert.pem"
},
"grpcOptions": {
"ssl-target-name-override": "peer0.org3.bc4scm.de"
}
},
"peer1.org3.bc4scm.de": {
"url": "grpcs://localhost:12051",
"tlsCACerts": {
"path": "crypto-config/peerOrganizations/supplier.bc4scm.de/tlsca/tlsca.org3.bc4scm.de-cert.pem"
},
"grpcOptions": {
"ssl-target-name-override": "peer1.org3.bc4scm.de"
}
}
},
"certificateAuthorities": {
"ca.org3.bc4scm.de": {
"url": "https://localhost:9054",
"caName": "ca-supplier",
"tlsCACerts": {
"path": "crypto-config/peerOrganizations/org3.bc4scm.de/tlsca/tlsca.org3.bc4scm.de-cert.pem"
},
"httpOptions": {
"verify": false
}
}
}
And the following is part of my docker compose file:
peer0.org3.bc4scm.de:
  container_name: peer0.org3.bc4scm.de
  extends:
    file: peer-base.yaml
    service: peer-base
  environment:
    - CORE_PEER_ID=peer0.org3.bc4scm.de
    - CORE_PEER_ADDRESS=peer0.org3.bc4scm.de:11051
    - CORE_PEER_LISTENADDRESS=0.0.0.0:11051
    - CORE_PEER_CHAINCODEADDRESS=peer0.org3.bc4scm.de:11052
    - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:11052
    - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org3.bc4scm.de:12051
    - CORE_PEER_GOSSIP_BOOTSTRAP=peer1.org3.bc4scm.de:11051
    - CORE_PEER_LOCALMSPID=Org3MSP
  volumes:
    - /var/run/:/host/var/run/
    - ../crypto-config/peerOrganizations/org3.bc4scm.de/peers/peer0.org3.bc4scm.de/msp:/etc/hyperledger/fabric/msp
    - ../crypto-config/peerOrganizations/org3.bc4scm.de/peers/peer0.org3.bc4scm.de/tls:/etc/hyperledger/fabric/tls
    - peer0.org3.bc4scm.de:/var/hyperledger/production
  ports:
    - 11051:11051
peer1.org3.bc4scm.de:
  container_name: peer1.org3.bc4scm.de
  extends:
    file: peer-base.yaml
    service: peer-base
  environment:
    - CORE_PEER_ID=peer1.org3.bc4scm.de
    - CORE_PEER_ADDRESS=peer1.org3.bc4scm.de:12051
    - CORE_PEER_LISTENADDRESS=0.0.0.0:12051
    - CORE_PEER_CHAINCODEADDRESS=peer1.org3.bc4scm.de:12052
    - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:12052
    - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.org3.bc4scm.de:11051
    - CORE_PEER_GOSSIP_BOOTSTRAP=peer0.org3.bc4scm.de:12051
    - CORE_PEER_LOCALMSPID=Org3MSP
  volumes:
    - /var/run/:/host/var/run/
    - ../crypto-config/peerOrganizations/org3.bc4scm.de/peers/peer1.org3.bc4scm.de/msp:/etc/hyperledger/fabric/msp
    - ../crypto-config/peerOrganizations/supplier.bc4scm.de/peers/peer1.org3.bc4scm.de/tls:/etc/hyperledger/fabric/tls
    - peer1.org3.bc4scm.de:/var/hyperledger/production
  ports:
    - 12051:12051
I took this code from the Fabcar sample and tried to query from a client in Org3 instead of Org1. I created an admin user and then a user in this organization successfully. According to my observations, the error comes from the following line:
const result = await contract.evaluateTransaction('queryAllProducts','123');
What is the possible reason for this issue? Appreciate your insights on this.
Updates:
I checked the open ports in peer0.org3.bc4scm.de:
root@e52992a76c3d:/opt/gopath/src/github.com/hyperledger/fabric/peer# netstat -tulpn | grep LISTEN
tcp 0 0 127.0.0.1:9443 0.0.0.0:* LISTEN 1/peer
tcp 0 0 127.0.0.11:46353 0.0.0.0:* LISTEN -
tcp6 0 0 :::11051 :::* LISTEN 1/peer
tcp6 0 0 :::6060 :::* LISTEN 1/peer
tcp6 0 0 :::11052 :::* LISTEN 1/peer
Here I can see ports 11051 and 11052 are open and listening.
Also, there is a container for the installed chaincode.
cd0b165e5186 dev-peer0.org3.bc4scm.de-scmlogic-1.0-9c7e776aa8a752e530f79d0b456f1bda28aac3f5db0af734be2f315d8d1a4f53 "/bin/sh -c 'cd /usr…" 48 seconds ago Up 47 seconds dev-peer0.org3.bc4scm.de-scmlogic-1.0
When I look at the logs of that peer (peer0.org3) I can see the following error logged continuously. It is complaining about the connection with org1:
2019-07-06 10:26:52.278 UTC [gossip.discovery] expireDeadMembers -> WARN 164 Exiting
2019-07-06 10:26:56.381 UTC [gossip.comm] func1 -> WARN 165 peer1.org1.bc4scm.de:8051, PKIid:42214b7584f3fabcdb84e5770c62e4cf0f7c00b2a9d0441d772925882d4457a7 isn't responsive: EOF
2019-07-06 10:26:56.381 UTC [gossip.discovery] expireDeadMembers -> WARN 166 Entering [42214b7584f3fabcdb84e5770c62e4cf0f7c00b2a9d0441d772925882d4457a7]
2019-07-06 10:26:56.381 UTC [gossip.discovery] expireDeadMembers -> WARN 167 Closing connection to Endpoint: peer1.org1.bc4scm.de:8051, InternalEndpoint: , PKI-ID: 42214b7584f3fabcdb84e5770c62e4cf0f7c00b2a

You could check whether the peer is accessible even from a browser (e.g. Firefox) by requesting localhost:11051. If you see a response, your peer is accessible; if not, the port is not open. In that case, go to the docker compose file, open the port, and bring the peer up again with docker compose. Do the same for every peer you want to access.
You can also check the logs of a peer with the following:
docker logs --follow peer0.org3.bc4scm.de
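If the browser check doesn't tell you much (gRPC ports won't speak HTTP), a plain TCP probe from the host works too. A sketch, assuming nc is installed; the ports are the ones from the question:
# a successful connect only proves the port is open; TLS/gRPC is a separate concern
nc -zv localhost 11051
nc -zv localhost 12051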
Update:
You could check CORE_PEER_GOSSIP_BOOTSTRAP and CORE_PEER_GOSSIP_EXTERNALENDPOINT for both peers:
CORE_PEER_GOSSIP_BOOTSTRAP=<a list of peer endpoints within the peer's org>
CORE_PEER_GOSSIP_EXTERNALENDPOINT=<the peer endpoint, as known outside the org>
For peer0.org3.bc4scm.de:
CORE_PEER_GOSSIP_BOOTSTRAP=peer1.org3.bc4scm.de:12051
CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org3.bc4scm.de:11051
For peer1.org3.bc4scm.de:
CORE_PEER_GOSSIP_BOOTSTRAP=peer0.org3.bc4scm.de:11051
CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.org3.bc4scm.de:12051
Check the ports against your peers accordingly and bring your docker compose file up again.
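Applied to the compose file in the question, only the two gossip lines change for each peer. A sketch for peer0 (everything else stays as it was):
peer0.org3.bc4scm.de:
  environment:
    # ... other settings unchanged ...
    - CORE_PEER_GOSSIP_BOOTSTRAP=peer1.org3.bc4scm.de:12051
    - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org3.bc4scm.de:11051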

This could be due to multiple reasons:
Your peers are not accessible, so first check whether these ports are open or not.
Confirm whether the chaincode is installed on these peers or not.
If neither of these is the case, then you must check the logs inside the docker containers of the chaincode and of these peers, and for that you can use:
docker exec -it [container-name] bash
Do tell me if you find something there and can't resolve it.
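A sketch of both checks from the host (the container and chaincode names come from the question; note that peer chaincode list may require an org admin MSP rather than the peer's own node MSP):
# is the peer container up and is its port published?
docker ps --filter name=peer0.org3.bc4scm.de
# list the chaincodes installed on that peer
docker exec -it peer0.org3.bc4scm.de peer chaincode list --installed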

I had this same problem and realized the issue was that I had set the "asLocalhost" property to false while trying to access peers at http://localhost/. Below is the working line with the property set correctly. (I pulled from an example using fabcar, which was great otherwise.)
await gateway.connect(ccpPath, { wallet, identity: 'user1', discovery: { enabled: true, asLocalhost: true } });
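For context, a minimal sketch of the whole query flow around that line, modeled on the fabcar sample; the wallet path and channel name are assumptions for illustration, while the identity, connection profile, and chaincode name come from the question above:
const { FileSystemWallet, Gateway } = require('fabric-network');

async function query() {
    const wallet = new FileSystemWallet('./wallet'); // wallet path assumed
    const gateway = new Gateway();
    await gateway.connect('./connection-org3.json', {
        wallet,
        identity: 'user1',
        // with discovery enabled, asLocalhost: true rewrites the discovered peer
        // addresses to localhost, matching the ports docker compose publishes
        discovery: { enabled: true, asLocalhost: true }
    });
    const network = await gateway.getNetwork('mychannel'); // channel name assumed
    const contract = network.getContract('scmlogic');      // chaincode name from the question
    const result = await contract.evaluateTransaction('queryAllProducts', '123');
    console.log(result.toString());
    gateway.disconnect();
}

query().catch(console.error);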

Related

Hyperledger fabric - No response when using sendTransactionProposal() API

This is a tracing network with one channel composed of 3 orgs, 1 anchor peer per organization, 1 MSP per org, and 1 CA for org3. I'm not using TLS (because I couldn't find a dependable sample with TLS on).
I'm trying to use fabric-sdk-node to build a web front end for it, based on the fabcar sample. When I use invoke.js (almost the same as the example), I find there is no response:
Store path:/root/hyperledger-fabric/test/webapp/hfc-key-store
Successfully loaded user1 from persistence
Assigning transaction_id: 8387c087b4b7b9210cdc68ff0ff7fda99c706bad052b9b5138c86df5463244be
Transaction proposal was bad
proposalResponses[0].response is bad
undefined
Failed to send Proposal or receive valid response. Response null or status is not 200. exiting...
Failed to invoke successfully :: Error: Failed to send Proposal or receive valid response. Response null or status is not 200. exiting...
And the code is:
if (proposalResponses && proposalResponses[0].response && proposalResponses[0].response.status === 200) {
    isProposalGood = true;
    console.log('Transaction proposal was good');
} else {
    console.error('Transaction proposal was bad');
    if (!proposalResponses) {
        console.log('proposalResponses is bad');
    }
    if (!proposalResponses[0].response) {
        console.log('proposalResponses[0].response is bad');
        //console.log(proposalResponses[0].response.status);
    }
}
When I check the docker logs (in ca, peer0, orderer), I find the only thing changed is in orderer.trace.com:
2021-05-04 07:41:41.136 UTC [comm.grpc.server] 1 -> INFO 007 streaming call completed {"grpc.start_time": "2021-05-04T07:40:41.77Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "172.21.0.8:39422", "error": "context finished before block retrieved: context canceled", "grpc.code": "Unknown", "grpc.call_duration": "59.36575566s"}
After several attempts, this error occurred on one occasion in peer0.sell.trace.com
2021-05-04 01:44:31.199 UTC [ConnProducer] NewConnection -> ERRO 034 Failed connecting to orderer.trace.com:7050 , error: context deadline exceeded
2021-05-04 01:44:31.200 UTC [deliveryClient] connect -> ERRO 035 Failed obtaining connection: Could not connect to any of the endpoints: [orderer.trace.com:7050]
2021-05-04 01:44:31.200 UTC [deliveryClient] try -> WARN 036 Got error: Could not connect to any of the endpoints: [orderer.trace.com:7050] , at 1 attempt. Retrying in 1s
I'm very much a newbie in both fabric and nodejs, so any kind of help would be great. Thanks in advance.
NEW EDIT
My peer yaml
peer0.sell.trace.com:
  container_name: peer0.sell.trace.com
  image: hyperledger/fabric-peer:latest
  environment:
    - CORE_PEER_ID=peer0.sell.trace.com
    - CORE_PEER_ADDRESS=peer0.sell.trace.com:7051
    - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.sell.trace.com:7051
    - CORE_PEER_GOSSIP_BOOTSTRAP=peer1.sell.trace.com:7051
    - CORE_PEER_LOCALMSPID=OrgSellMSP
    - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
    - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=test_default
    - FABRIC_LOGGING_SPEC=INFO
    #- FABRIC_LOGGING_SPEC=DEBUG
    - CORE_PEER_GOSSIP_USELEADERELECTION=true
    - CORE_PEER_GOSSIP_ORGLEADER=false
    - CORE_PEER_PROFILE_ENABLED=true
    ##TLS
    #- CORE_PEER_TLS_ENABLED=true
    #- CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
    #- CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
    #- CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
    - GODEBUG=netdns=go
  working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
  command: peer node start
  volumes:
    - /var/run/:/host/var/run/
    - ./crypto-config/peerOrganizations/sell.trace.com/peers/peer0.sell.trace.com/msp:/etc/hyperledger/fabric/msp
    - ./crypto-config/peerOrganizations/sell.trace.com/peers/peer0.sell.trace.com/tls:/etc/hyperledger/fabric/tls
    #- ./crypto-config/peerOrganizations/sell.trace.com/users/Admin@sell.trace.com/tls:/etc/hyperledger/client/tls
  ports:
    - 1151:7051
    - 1153:7053
  networks:
    default:
      aliases:
        - test
my connection.json
{
  "name": "first-network-org_sell",
  "version": "1.0.0",
  "client": {
    "organization": "org_sell",
    "connection": {
      "timeout": {
        "peer": {
          "endorser": "3000"
        }
      }
    }
  },
  "organizations": {
    "org_sell": {
      "mspid": "OrgSellMSP",
      "peers": [
        "peer0.sell.trace.com",
        "peer1.sell.trace.com"
      ]
    }
  },
  "peers": {
    "peer0.sell.trace.com": {
      "url": "grpc://localhost:7051"
    },
    "peer1.sell.trace.com": {
      "url": "grpc://localhost:7051"
    }
  }
}

What is needed to use 1 central certificate authority for all the organizations on Hyperledger Fabric v1.4?

Based on Hyperledger Fabric, a network is created on which there are: 1 orderer, 1 CA, 1 CouchDB, 1 CLI, 1 peer.
Afterwards, a new org is added with: 1 peer, 1 CouchDB and 1 CLI.
Up to this stage there is no error. All the containers are running. Then the CA admin is enrolled. Still no problem. The admin connects with no problem. I want to create an admin for the new organization.
enrollandregisterNewAdmin.js
const gateway = new Gateway();
await gateway.connect(ccpPath, { wallet, identity: 'admin', discovery: { enabled: true, asLocalhost: true } });
const ca = gateway.getClient().getCertificateAuthority();
const adminIdentity = gateway.getCurrentIdentity();
const secret = await ca.register({
    affiliation: 'org1.department1',
    enrollmentID: 'adminOrg3',
    role: 'client',
    attrs: [
        { "name": "hf.Registrar.Roles", "value": "client" },
        { "name": "hf.Registrar.DelegateRoles", "value": "client" },
        { "name": "hf.Revoker", "value": "true" },
        { "name": "hf.IntermediateCA", "value": "true" },
        { "name": "hf.GenCRL", "value": "true" },
        { "name": "hf.AffiliationMgr", "value": "true" },
        { "name": "hf.Registrar.Attributes", "value": "hf.Registrar.Roles,hf.Registrar.DelegateRoles,hf.Revoker,hf.IntermediateCA,hf.GenCRL,hf.Registrar.Attributes,hf.AffiliationMgr" }
    ]
}, adminIdentity);
const enrollment = await ca.enroll({ enrollmentID: 'adminOrg3', enrollmentSecret: secret });
const userIdentity = X509WalletMixin.createIdentity('Org3MSP', enrollment.certificate, enrollment.key.toBytes());
await wallet.import('adminOrg3', userIdentity);
Finally, the certificates of 'adminOrg3' are imported into the wallet with no error. But when I try to invoke/query with 'adminOrg3', I receive this error:
[Channel.js]: Channel:byfn received discovery error:access denied
[Channel.js]: Error: Channel:byfn Discovery error:access denied
error: [Network]: _initializeInternalChannel: Unable to initialize channel. Attempted to contact 1 Peers. Last error was Error: Channel:byfn Discovery error:access denied
This is a common error when the wallet exists from a previous deployment. But the wallet is deleted each time the network is restarted.
docker logs peer0.org3.example.com
2021-02-22 10:21:09.588 UTC [cauthdsl] deduplicate -> ERRO 082 Principal deserialization failure (the supplied identity is not valid: x509: certificate signed by unknown authority) for identity 0
My config file for new org
docker-compose-org3.yaml
version: '2'

volumes:
  peer0.org3.example.com:

networks:
  byfn:

services:
  peer0.org3.example.com:
    container_name: peer0.org3.example.com
    extends:
      file: base/peer-base.yaml
      service: peer-base
    environment:
      - CORE_PEER_ID=peer0.org3.example.com
      - CORE_PEER_ADDRESS=peer0.org3.example.com:11051
      - CORE_PEER_LISTENADDRESS=0.0.0.0:11051
      - CORE_PEER_CHAINCODEADDRESS=peer0.org3.example.com:11052
      - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:11052
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org3.example.com:11051
      - CORE_PEER_LOCALMSPID=Org3MSP
    volumes:
      - /var/run/:/host/var/run/
      - ./org3-artifacts/crypto-config/peerOrganizations/org3.example.com/peers/peer0.org3.example.com/msp:/etc/hyperledger/fabric/msp
      - ./org3-artifacts/crypto-config/peerOrganizations/org3.example.com/peers/peer0.org3.example.com/tls:/etc/hyperledger/fabric/tls
      - peer0.org3.example.com:/var/hyperledger/production
    ports:
      - 11051:11051
    networks:
      - byfn
  Org3cli:
    container_name: Org3cli
    image: hyperledger/fabric-tools:$IMAGE_TAG
    tty: true
    stdin_open: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - FABRIC_LOGGING_SPEC=INFO
      #- FABRIC_LOGGING_SPEC=DEBUG
      - CORE_PEER_ID=Org3cli
      - CORE_PEER_ADDRESS=peer0.org3.example.com:11051
      - CORE_PEER_LOCALMSPID=Org3MSP
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org3.example.com/peers/peer0.org3.example.com/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org3.example.com/peers/peer0.org3.example.com/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org3.example.com/peers/peer0.org3.example.com/tls/ca.crt
      - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org3.example.com/users/Admin@org3.example.com/msp
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: /bin/bash
    volumes:
      - /var/run/:/host/var/run/
      - ./../chaincode/:/opt/gopath/src/github.com/chaincode
      - ./org3-artifacts/crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
      - ./crypto-config/peerOrganizations/org1.example.com:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com
      - ./scripts:/opt/gopath/src/github.com/hyperledger/fabric/peer/scripts/
    depends_on:
      - peer0.org3.example.com
    networks:
      - byfn
Is it possible for different MSPs to exist under the same affiliation?
Is any change needed to the configuration files?
Just to clarify a few things ...
Did you add the new org to the channel before trying to connect with the new org user? (see the sketch below)
Are you running the peers in docker containers and using volumes for the peer file system mapping? It may happen that the peers still load the content of the old channels...
-Tsvetan
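One way to check the first point from the Org3cli container is to fetch the channel config and look for Org3MSP in it (a sketch; the channel name byfn comes from the error message above, and $ORDERER_CA is assumed to point at the orderer's TLS CA certificate):
# fetch the current config block for the channel
peer channel fetch config config_block.pb -o orderer.example.com:7050 -c byfn --tls --cafile $ORDERER_CA
# decode it and look for the new org's MSP ID
configtxlator proto_decode --input config_block.pb --type common.Block | grep Org3MSP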

Not able to connect to Elasticsearch from docker container (node.js client)

I have set up an elasticsearch/kibana docker configuration and I want to connect to elasticsearch from inside of a docker container using the @elastic/elasticsearch client for node. However, the connection is "timing out".
The project takes inspiration from Patrick Triest: https://blog.patricktriest.com/text-search-docker-elasticsearch/
However, I have made some modifications in order to connect kibana, use a newer ES image, and use the new elasticsearch node client.
I am using the following docker-compose file:
version: "3"
services:
api:
container_name: mp-backend
build: .
ports:
- "3000:3000"
- "9229:9229"
environment:
- NODE_ENV=local
- ES_HOST=elasticsearch
- PORT=3000
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
container_name: elasticsearch
environment:
- node.name=elasticsearch
- cluster.name=es-docker-cluster
- discovery.type=single-node
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- "http.cors.allow-origin=*"
- "http.cors.enabled=true"
- "http.cors.allow-headers=X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization"
- "http.cors.allow-credentials=true"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- data01:/usr/share/elasticsearch/data
ports:
- 9200:9200
networks:
- elastic
kibana:
image: docker.elastic.co/kibana/kibana:7.5.1
ports:
- "5601:5601"
links:
- elasticsearch
networks:
- elastic
depends_on:
- elasticsearch
volumes:
data01:
driver: local
networks:
elastic:
driver: bridge
When building/bringing the container up, I am able to get a response from ES: curl -XGET "localhost:9200" returns "you know, for search"... And kibana is running and able to connect to the index.
I have the following file located in the backend container (connection.js):
const { Client } = require("@elastic/elasticsearch");
const client = new Client({ node: "http://localhost:9200" });

/* Check the elasticsearch connection */
async function health() {
    let connected = false;
    while (!connected) {
        console.log("Connecting to Elasticsearch");
        try {
            const health = await client.cluster.health({});
            connected = true;
            console.log(health.body);
            return health;
        } catch (err) {
            console.log("ES Connection Failed", err);
        }
    }
}

health();
If I run it outside of the container then I get the expected response:
node server/connection.js
Connecting to Elasticsearch
{
  cluster_name: 'es-docker-cluster',
  status: 'yellow',
  timed_out: false,
  number_of_nodes: 1,
  number_of_data_nodes: 1,
  active_primary_shards: 7,
  active_shards: 7,
  relocating_shards: 0,
  initializing_shards: 0,
  unassigned_shards: 3,
  delayed_unassigned_shards: 0,
  number_of_pending_tasks: 0,
  number_of_in_flight_fetch: 0,
  task_max_waiting_in_queue_millis: 0,
  active_shards_percent_as_number: 70
}
However, if I run it inside of the container:
docker exec mp-backend "node" "server/connection.js"
Then I get the following response:
Connecting to Elasticsearch
ES Connection Failed ConnectionError: connect ECONNREFUSED 127.0.0.1:9200
at onResponse (/usr/src/app/node_modules/@elastic/elasticsearch/lib/Transport.js:214:13)
at ClientRequest.<anonymous> (/usr/src/app/node_modules/@elastic/elasticsearch/lib/Connection.js:98:9)
at ClientRequest.emit (events.js:223:5)
at Socket.socketErrorListener (_http_client.js:415:9)
at Socket.emit (events.js:223:5)
at emitErrorNT (internal/streams/destroy.js:92:8)
at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
at processTicksAndRejections (internal/process/task_queues.js:81:21) {
name: 'ConnectionError',
meta: {
body: null,
statusCode: null,
headers: null,
warnings: null,
meta: {
context: null,
request: [Object],
name: 'elasticsearch-js',
connection: [Object],
attempts: 3,
aborted: false
}
}
}
So, I tried changing the client connection to (I read somewhere that this might help):
const client = new Client({ node: "http://172.24.0.1:9200" });
Then I am just "stuck" waiting for a response. Only one console.log of "Connecting to Elasticsearch"
I am using the following version:
"#elastic/elasticsearch": "7.5.1"
As you probably see, I do not have a full grasp of what is happening here... I have also tried to add:
links:
  - elasticsearch
networks:
  - elastic
To the api service, without any luck.
Does anyone know what I am doing wrong here? Thank you in advance :)
EDIT:
I did a "docker network inspect" on the network with *_elastic. There I see the following:
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.22.0.0/16",
"Gateway": "172.22.0.1"
}
]
},
Changing the client to connect to the "Gateway" IP:
const client = new Client({ node: "http://172.22.0.1:9200" });
Then it works! I am still wondering why, as this was just trial and error. Is there any way to obtain this IP without having to inspect the network?
In Docker, localhost (or the corresponding IPv4 address 127.0.0.1, or the corresponding IPv6 address ::1) generally means "this container"; you can't use that host name to access services running in another container.
In a Compose-based setup, the names of the services: blocks (api, elasticsearch, kibana) are usable as host names. The caveat is that all of the services have to be on the same Docker-internal network. Compose creates one for you and attaches containers to it by default. (In your example api is on the default network but the other two containers are on a separate elastic network.) Networking in Compose in the Docker documentation has some more details.
So to make this work, you need to tell your client code to honor the environment variable you're setting that points at Elasticsearch:
const esHost = process.env.ES_HOST || 'localhost';
const esUrl = 'http://' + esHost + ':9200';
const client = new Client({ node: esUrl });
In your docker-compose.yml file delete all of the networks: blocks to use the provided default network. (While you're there, links: is unnecessary and Compose provides reasonable container_name: for you; api can reasonably depends_on: [elasticsearch].)
Since we've provided a fallback for $ES_HOST, if you're working in a host development environment, it will default to using localhost; outside of Docker where it means "the current host" it will reach the published port of the Elasticsearch container.
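Putting that together, a trimmed docker-compose.yml might look like the sketch below (service names and images are the ones from the question; with no networks: blocks, all three services share the default network Compose creates, so the api container reaches Elasticsearch at http://elasticsearch:9200):
version: "3"
services:
  api:
    build: .
    ports:
      - "3000:3000"
    environment:
      - ES_HOST=elasticsearch   # resolves via the default Compose network
    depends_on:
      - elasticsearch
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
    environment:
      - discovery.type=single-node
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
  kibana:
    image: docker.elastic.co/kibana/kibana:7.5.1
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
volumes:
  data01:
    driver: local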

hyperledger fabric sdk node orderer, node client Failed to connect before the deadline

I'm using the Fabric sample project 'basic network' as my environment to develop chaincode and a nodejs client app (REST API) based on the Fabric node client SDK; the node app resides on the same host as the fabric peer.
While all the docker containers (ca, orderer, peer, couchdb, client) were on one host, I succeeded in creating and joining the channel and in installing and instantiating the chaincode, so with the nodejs client the query and invoke functions performed successfully. The connection.json file was copied from the basic network sample.
When I moved the orderer container to another host and modified the docker-compose yaml file and the connection.json accordingly, the operations in the client container didn't change, they are all OK, and the nodejs client app's query operation still works, but invoke (insert and modify) fails. The log is:
2019-03-23T03:32:38.769Z - debug: [Remote.js]: getUrl::grpc://192.168.122.6:7050
2019-03-23T03:32:38.769Z - error: [Remote.js]: Error: Failed to connect before the deadline URL:grpc://192.168.122.6:7050
2019-03-23T03:32:38.770Z - error: [Remote.js]: Error: Failed to connect before the deadline URL:grpc://192.168.122.6:7050
2019-03-23T03:32:38.772Z - debug: [Remote.js]: getUrl::grpc://192.168.122.6:7050
2019-03-23T03:32:38.772Z - error: [Orderer.js]: Orderer grpc://192.168.122.6:7050 has an error Error: Failed to connect before the deadline URL:grpc://192.168.122.6:7050
2019-03-23T03:32:38.772Z - error: [Orderer.js]: Orderer grpc://192.168.122.6:7050 has an error Error: Failed to connect before the deadline URL:grpc://192.168.122.6:7050
Here, '192.168.122.6' is the host that the orderer container resides on. Below is the connection.json file used by the nodejs app; I've turned off TLS between orderer and peer:
{
  "name": "basic-network",
  "version": "1.0.0",
  "client": {
    "organization": "Org1",
    "connection": {
      "timeout": {
        "peer": {
          "endorser": "300"
        },
        "orderer": "300"
      }
    }
  },
  "channels": {
    "mychannel": {
      "orderers": [
        "orderer.example.com"
      ],
      "peers": {
        "peer0.org1.example.com": {}
      }
    }
  },
  "organizations": {
    "Org1": {
      "mspid": "Org1MSP",
      "peers": [
        "peer0.org1.example.com"
      ],
      "certificateAuthorities": [
        "ca.example.com"
      ]
    }
  },
  "orderers": {
    "orderer.example.com": {
      "url": "grpc://192.168.122.6:7050"
    }
  },
  "peers": {
    "peer0.org1.example.com": {
      "url": "grpc://127.0.0.1:7051"
    }
  },
  "certificateAuthorities": {
    "ca.example.com": {
      "url": "http://127.0.0.1:7054",
      "caName": "ca.example.com"
    }
  }
}
I guess there is something wrong in connection.json, but I don't know what it is.
Below is the content for the orderer and peer in docker-compose.yaml:
orderer.example.com:
  container_name: orderer.example.com
  image: hyperledger/fabric-orderer
  environment:
    - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=net_basic
    - ORDERER_GENERAL_LOGLEVEL=info
    - FABRIC_LOGGING_SPEC=info
    - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
    - ORDERER_GENERAL_GENESISMETHOD=file
    - ORDERER_GENERAL_GENESISFILE=/etc/hyperledger/configtx/genesis.block
    - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
    - ORDERER_GENERAL_LOCALMSPDIR=/etc/hyperledger/msp/orderer/msp
    - ORDERER_GENERAL_TLS_ENABLED=false
  working_dir: /opt/gopath/src/github.com/hyperledger/fabric/orderer
  command: orderer
  ports:
    - 7050:7050
  volumes:
    - ./config/:/etc/hyperledger/configtx
    - ./crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/:/etc/hyperledger/msp/orderer
    - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/:/etc/hyperledger/msp/peerOrg1
  networks:
    - basic
peer0.org1.example.com:
  container_name: peer0.org1.example.com
  image: hyperledger/fabric-peer
  environment:
    - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
    - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb:5984
    - CORE_PEER_NETWORKID=basic
    - CORE_PEER_ID=peer0.org1.example.com
    - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
    - CORE_PEER_CHAINCODEADDRESS=peer0.org1.example.com:7052
    - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052
    - CORE_PEER_GOSSIP_BOOTSTRAP=peer0.org1.zte.com:7051
    - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org1.zte.com:7051
    - CORE_PEER_LOCALMSPID=Org1MSP
    - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
    # the following setting starts chaincode containers on the same
    # bridge network as the peers
    # https://docs.docker.com/compose/networking/
    - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=net_basic
    - FABRIC_LOGGING_SPEC=debug
    - CORE_CHAINCODE_LOGGING_LEVEL=debug
    - CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/peer/
    # The CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME and CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD
    # provide the credentials for ledger to connect to CouchDB. The username and password must
    # match the username and password set for the associated CouchDB.
    - CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=
    - CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=
    - CORE_PEER_TLS_ENABLED=false
    - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true
    - CORE_PEER_GOSSIP_USELEADERELECTION=true
    - CORE_PEER_GOSSIP_ORGLEADER=false
    - CORE_PEER_PROFILE_ENABLED=false
  working_dir: /opt/gopath/src/github.com/hyperledger/fabric
  command: peer node start
  # command: peer node start --peer-chaincodedev=true
  ports:
    - 7051:7051
    - 7052:7052
    - 7053:7053
  volumes:
    - /var/run/:/host/var/run/
    - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp:/etc/hyperledger/msp/peer
    - ./crypto-config/peerOrganizations/org1.example.com/users:/etc/hyperledger/msp/users
    - ./config:/etc/hyperledger/configtx
  depends_on:
    # - orderer.example.com
    - couchdb
  networks:
    - basic
  extra_hosts:
    - "orderer.example.com:192.168.122.6"
    - "peer0.org1.example.com:127.0.0.1"

How to start kubernetes service on NodePort outside service-node-port-range default range?

I've been trying to start kubernetes-dashboard (and eventually other services) on a NodePort outside the default port range, with little success.
Here is my setup:
Cloud provider: Azure (Not azure container service)
OS: CentOS 7
Here is what I have tried:
Update the host
$ yum update
Install kubeadm
$ cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
$ setenforce 0
$ yum install -y docker kubelet kubeadm kubectl kubernetes-cni
$ systemctl enable docker && systemctl start docker
$ systemctl enable kubelet && systemctl start kubelet
Start the cluster with kubeadm
$ kubeadm init
Allow running containers on the master node, because we have a single-node cluster
$ kubectl taint nodes --all dedicated-
Install a pod network
$ kubectl apply -f https://git.io/weave-kube
Our kubernetes-dashboard Deployment (~/kubernetes-dashboard.yaml):
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Configuration to deploy release version of the Dashboard UI.
#
# Example usage: kubectl create -f <this_file>

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetes-dashboard
  template:
    metadata:
      labels:
        app: kubernetes-dashboard
      # Comment the following annotation if Dashboard must not be deployed on master
      annotations:
        scheduler.alpha.kubernetes.io/tolerations: |
          [
            {
              "key": "dedicated",
              "operator": "Equal",
              "value": "master",
              "effect": "NoSchedule"
            }
          ]
    spec:
      containers:
      - name: kubernetes-dashboard
        image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.1
        imagePullPolicy: Always
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 8880
    targetPort: 9090
    nodePort: 8880
  selector:
    app: kubernetes-dashboard
Create our Deployment
$ kubectl create -f ~/kubernetes-dashboard.yaml
deployment "kubernetes-dashboard" created
The Service "kubernetes-dashboard" is invalid: spec.ports[0].nodePort: Invalid value: 8880: provided port is not in the valid range. The range of valid ports is 30000-32767
I found out that to change the range of valid ports I could set the service-node-port-range option on kube-apiserver to allow a different port range, so I tried this:
$ kubectl get po --namespace=kube-system
NAME                                    READY     STATUS    RESTARTS   AGE
dummy-2088944543-lr2zb                  1/1       Running   0          31m
etcd-test2-highr                        1/1       Running   0          31m
kube-apiserver-test2-highr              1/1       Running   0          31m
kube-controller-manager-test2-highr     1/1       Running   2          31m
kube-discovery-1769846148-wmbhb         1/1       Running   0          31m
kube-dns-2924299975-8vwjm               4/4       Running   0          31m
kube-proxy-0ls9c                        1/1       Running   0          31m
kube-scheduler-test2-highr              1/1       Running   2          31m
kubernetes-dashboard-3203831700-qrvdn   1/1       Running   0          22s
weave-net-m9rxh                         2/2       Running   0          31m
Add "--service-node-port-range=8880-8880" to kube-apiserver-test2-highr
$ kubectl edit po kube-apiserver-test2-highr --namespace=kube-system
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "kube-apiserver",
    "namespace": "kube-system",
    "creationTimestamp": null,
    "labels": {
      "component": "kube-apiserver",
      "tier": "control-plane"
    }
  },
  "spec": {
    "volumes": [
      {
        "name": "k8s",
        "hostPath": {
          "path": "/etc/kubernetes"
        }
      },
      {
        "name": "certs",
        "hostPath": {
          "path": "/etc/ssl/certs"
        }
      },
      {
        "name": "pki",
        "hostPath": {
          "path": "/etc/pki"
        }
      }
    ],
    "containers": [
      {
        "name": "kube-apiserver",
        "image": "gcr.io/google_containers/kube-apiserver-amd64:v1.5.3",
        "command": [
          "kube-apiserver",
          "--insecure-bind-address=127.0.0.1",
          "--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota",
          "--service-cluster-ip-range=10.96.0.0/12",
          "--service-node-port-range=8880-8880",
          "--service-account-key-file=/etc/kubernetes/pki/apiserver-key.pem",
          "--client-ca-file=/etc/kubernetes/pki/ca.pem",
          "--tls-cert-file=/etc/kubernetes/pki/apiserver.pem",
          "--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem",
          "--token-auth-file=/etc/kubernetes/pki/tokens.csv",
          "--secure-port=6443",
          "--allow-privileged",
          "--advertise-address=100.112.226.5",
          "--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname",
          "--anonymous-auth=false",
          "--etcd-servers=http://127.0.0.1:2379"
        ],
        "resources": {
          "requests": {
            "cpu": "250m"
          }
        },
        "volumeMounts": [
          {
            "name": "k8s",
            "readOnly": true,
            "mountPath": "/etc/kubernetes/"
          },
          {
            "name": "certs",
            "mountPath": "/etc/ssl/certs"
          },
          {
            "name": "pki",
            "mountPath": "/etc/pki"
          }
        ],
        "livenessProbe": {
          "httpGet": {
            "path": "/healthz",
            "port": 8080,
            "host": "127.0.0.1"
          },
          "initialDelaySeconds": 15,
          "timeoutSeconds": 15,
          "failureThreshold": 8
        }
      }
    ],
    "hostNetwork": true
  },
  "status": {}
}
$ :wq
The following is the truncated response:
# pods "kube-apiserver-test2-highr" was not valid:
# * spec: Forbidden: pod updates may not change fields other than `containers[*].image` or `spec.activeDeadlineSeconds`
So I tried a different approach: I edited the deployment file for kube-apiserver with the same change described above and ran the following:
$ kubectl apply -f /etc/kubernetes/manifests/kube-apiserver.json --namespace=kube-system
And got this response:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
So now i'm stuck, how can I change the range of valid ports?
You are specifying --service-node-port-range=8880-8880 wrong: you set it to one port only. Set it to a range.
Second problem: you are setting the service to use 9090, and it's not in the range.
ports:
- port: 80
  targetPort: 9090
  nodePort: 9090
The API server should have a deployment too. Try editing the port range in the deployment itself and delete the API server pod so it gets recreated with the new config.
The Service node ports range is set to infrequently-used ports for a reason. Why do you want to publish this on every node? Do you really want that?
An alternative is to expose it on a semi-random nodeport, then use a proxy pod on a known node or set of nodes to access it via hostport.
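A sketch of that alternative (the pod and container names are made up for illustration, nginx is an assumed stand-in for whatever proxy you prefer, and the node name comes from the question's cluster):
apiVersion: v1
kind: Pod
metadata:
  name: dashboard-proxy            # hypothetical name
  namespace: kube-system
spec:
  nodeName: test2-highr            # pin the proxy to a known node
  containers:
  - name: proxy
    image: nginx                   # assumed image; configure it to forward to the dashboard service
    ports:
    - containerPort: 80
      hostPort: 8880               # the well-known port exposed on that one node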
This issue:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
was caused by my port range excluding 8080, which kube-apiserver was serving on, so I could not send any updates via kubectl.
I fixed it by changing the port range to 8080-8881 and restarting the kubelet service like so:
$ service kubelet restart
Everything works as expected now.
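For reference, the corrected flag in the kube-apiserver manifest's command array (shown earlier) then reads, keeping port 8080 inside the allowed range:
"--service-node-port-range=8080-8881",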
