panic: runtime error: index out of range when starting the orderer with genesis.block - hyperledger-fabric

I'm getting an index out of range panic when I try to start the orderer. It happens right after the orderer prints its config values:
Operations.TLS.Enabled = false
Operations.TLS.PrivateKey = ""
Operations.TLS.Certificate = ""
Operations.TLS.RootCAs = []
Operations.TLS.ClientAuthRequired = false
Operations.TLS.ClientRootCAs = []
Metrics.Provider = "disabled"
Metrics.Statsd.Network = "udp"
Metrics.Statsd.Address = "127.0.0.1:8125"
Metrics.Statsd.WriteInterval = 30s
Metrics.Statsd.Prefix = ""
panic: runtime error: index out of range
goroutine 1 [running]:
github.com/hyperledger/fabric/msp.(*bccspmsp).sanitizeCert(0xc0002079e0, 0xc000111700, 0x26, 0xc000531108, 0x1)
/opt/gopath/src/github.com/hyperledger/fabric/msp/mspimpl.go:691 +0x207
github.com/hyperledger/fabric/msp.newIdentity(0xc000111700, 0x1152560, 0xc00000ef98, 0xc0002079e0, 0xc00035e148, 0x1152560, 0xc00000ef98, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/msp/identities.go:47 +0x70
github.com/hyperledger/fabric/msp.(*bccspmsp).getIdentityFromConf(0xc0002079e0, 0xc000354000, 0x3cd, 0x400, 0x1, 0x1, 0x0, 0x7c8088, 0xc0000ac7e0, 0xff)
/opt/gopath/src/github.com/hyperledger/fabric/msp/mspimpl.go:161 +0x102
github.com/hyperledger/fabric/msp.(*bccspmsp).setupCAs(0xc0002079e0, 0xc00014b1d0, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/msp/mspimplsetup.go:134 +0x65d
github.com/hyperledger/fabric/msp.(*bccspmsp).preSetupV1(0xc0002079e0, 0xc00014b1d0, 0xc0005312f0, 0x7d23a0)
/opt/gopath/src/github.com/hyperledger/fabric/msp/mspimplsetup.go:393 +0x64
github.com/hyperledger/fabric/msp.(*bccspmsp).setupV1(0xc0002079e0, 0xc00014b1d0, 0x1, 0x1)
/opt/gopath/src/github.com/hyperledger/fabric/msp/mspimplsetup.go:373 +0x39
github.com/hyperledger/fabric/msp.(*bccspmsp).setupV1-fm(0xc00014b1d0, 0x1026ec0, 0x1a)
/opt/gopath/src/github.com/hyperledger/fabric/msp/mspimpl.go:112 +0x34
github.com/hyperledger/fabric/msp.(*bccspmsp).Setup(0xc0002079e0, 0xc00034a300, 0x0, 0xc00034a3c0)
/opt/gopath/src/github.com/hyperledger/fabric/msp/mspimpl.go:225 +0x14d
github.com/hyperledger/fabric/msp/cache.(*cachedMSP).Setup(0xc0004f2f90, 0xc00034a300, 0x1159600, 0xc0004f2f90)
/opt/gopath/src/github.com/hyperledger/fabric/msp/cache/cache.go:88 +0x4b
github.com/hyperledger/fabric/common/channelconfig.(*MSPConfigHandler).ProposeMSP(0xc000508550, 0xc00034a300, 0x19, 0xc0005314c8, 0x1, 0x1)
/opt/gopath/src/github.com/hyperledger/fabric/common/channelconfig/msp.go:68 +0xc0
github.com/hyperledger/fabric/common/channelconfig.(*OrganizationConfig).validateMSP(0xc00034a2c0, 0x0, 0xffffffffffffffff)
/opt/gopath/src/github.com/hyperledger/fabric/common/channelconfig/organization.go:80 +0xc0
github.com/hyperledger/fabric/common/channelconfig.(*OrganizationConfig).Validate(0xc00034a2c0, 0xc000531550, 0x1)
/opt/gopath/src/github.com/hyperledger/fabric/common/channelconfig/organization.go:73 +0x2b
github.com/hyperledger/fabric/common/channelconfig.NewOrganizationConfig(0xc0004fcf48, 0x6, 0xc0004f55e0, 0xc000508550, 0x0, 0x0, 0x8)
/opt/gopath/src/github.com/hyperledger/fabric/common/channelconfig/organization.go:54 +0x10e
github.com/hyperledger/fabric/common/channelconfig.NewConsortiumConfig(0xc0004f5590, 0xc000508550, 0xc0005316c0, 0xf07a40, 0xc0004f2e70)
/opt/gopath/src/github.com/hyperledger/fabric/common/channelconfig/consortium.go:44 +0x196
github.com/hyperledger/fabric/common/channelconfig.NewConsortiumsConfig(0xc0004f5540, 0xc000508550, 0xc000531808, 0x4, 0x1b8ac00)
/opt/gopath/src/github.com/hyperledger/fabric/common/channelconfig/consortiums.go:31 +0x103
github.com/hyperledger/fabric/common/channelconfig.NewChannelConfig(0xc0004f5040, 0x0, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/common/channelconfig/channel.go:104 +0x392
github.com/hyperledger/fabric/common/channelconfig.NewBundle(0xc0004fd2e0, 0xc, 0xc0004f2780, 0xc000536510, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/common/channelconfig/bundle.go:196 +0x6b
github.com/hyperledger/fabric/common/channelconfig.NewBundleFromEnvelope(0xc0004f4a50, 0x1444, 0x1500, 0x114b520)
/opt/gopath/src/github.com/hyperledger/fabric/common/channelconfig/bundle.go:187 +0x14d
github.com/hyperledger/fabric/orderer/common/server.ValidateBootstrapBlock(0xc000079940, 0xc000079940, 0xc000531be8)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/onboarding.go:349 +0xf7
github.com/hyperledger/fabric/orderer/common/server.Start(0x1013e09, 0x5, 0xc0004c8900)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/main.go:97 +0x59
github.com/hyperledger/fabric/orderer/common/server.Main()
/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/main.go:91 +0x1ce
main.main()
/opt/gopath/src/github.com/hyperledger/fabric/orderer/main.go:15 +0x20
I have gone to the place where it panics, and it is in this function:
func (msp *bccspmsp) sanitizeCert(cert *x509.Certificate) (*x509.Certificate, error) {
	if isECDSASignedCert(cert) {
		// Lookup for a parent certificate to perform the sanitization
		var parentCert *x509.Certificate
		chain, err := msp.getUniqueValidationChain(cert, msp.getValidityOptsForCert(cert))
		if err != nil {
			return nil, err
		}
		// at this point, cert might be a root CA certificate
		// or an intermediate CA certificate
		if cert.IsCA && len(chain) == 1 {
			// cert is a root CA certificate
			parentCert = cert
		} else {
			parentCert = chain[1]
		}
		// Sanitize
		cert, err = sanitizeECDSASignedCert(cert, parentCert)
		if err != nil {
			return nil, err
		}
	}
	return cert, nil
}
It panics on
parentCert = chain[1]
which can only happen when the validation chain contains just the certificate itself and that certificate is not flagged as a CA, so there is no parent at index 1. I know the problem is in the genesis block built from my configtx file, and following the code around the error I guess it is looking at the CA files.
Guessing that, I have looked at the files, and this is the structure I use:
msp
  admincerts (the certificate of the admin)
  tlscacerts (the TLS cert of the tls-ca)
  cacerts (the TLS cert of the CA cert)
And everything is correct as far as I know.
EDIT 1:
If I put the logs in debug mode it gives the same error information, but it happens right after the following:
2019-07-11 08:45:00.119 UTC [common.channelconfig] NewStandardValues -> DEBU 0ed Initializing protos for *channelconfig.OrdererProtos
2019-07-11 08:45:00.119 UTC [common.channelconfig] initializeProtosStruct -> DEBU 0ee Processing field: ConsensusType
2019-07-11 08:45:00.119 UTC [common.channelconfig] initializeProtosStruct -> DEBU 0ef Processing field: BatchSize
2019-07-11 08:45:00.119 UTC [common.channelconfig] initializeProtosStruct -> DEBU 0f0 Processing field: BatchTimeout
2019-07-11 08:45:00.119 UTC [common.channelconfig] initializeProtosStruct -> DEBU 0f1 Processing field: KafkaBrokers
2019-07-11 08:45:00.120 UTC [common.channelconfig] initializeProtosStruct -> DEBU 0f2 Processing field: ChannelRestrictions
2019-07-11 08:45:00.120 UTC [common.channelconfig] initializeProtosStruct -> DEBU 0f3 Processing field: Capabilities
2019-07-11 08:45:00.120 UTC [common.channelconfig] NewStandardValues -> DEBU 0f4 Initializing protos for *channelconfig.OrganizationProtos
2019-07-11 08:45:00.120 UTC [common.channelconfig] initializeProtosStruct -> DEBU 0f5 Processing field: MSP
2019-07-11 08:45:00.120 UTC [common.channelconfig] validateMSP -> DEBU 0f6 Setting up MSP for org OrgMSP
2019-07-11 08:45:00.120 UTC [msp] newBccspMsp -> DEBU 0f7 Creating BCCSP-based MSP instance
2019-07-11 08:45:00.120 UTC [msp] New -> DEBU 0f8 Creating Cache-MSP instance
2019-07-11 08:45:00.120 UTC [msp] Setup -> DEBU 0f9 Setting up MSP instance OrgMSP
2019-07-11 08:45:00.120 UTC [msp.identity] newIdentity -> DEBU 0fa Creating identity instance for cert

Looks like the contents of cacerts and tlscacerts are wrong:
cacerts should contain the CA root certificate which signed the admin certificate;
tlscacerts should contain the CA root certificate used to sign the TLS certificates.
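A quick way to double-check is to inspect the files directly. The following is only a diagnostic sketch (the file names under msp/ are hypothetical, adjust them to your layout): it loads the certificate in cacerts, reports whether it really is a self-signed CA certificate, and verifies that the admin certificate chains to it. The file in tlscacerts can be checked the same way against your TLS certificates.

// checkmsp.go - sanity check for the MSP material that goes into the genesis block.
// Paths are hypothetical; point them at your own msp/ directory.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"io/ioutil"
	"log"
)

func loadCert(path string) *x509.Certificate {
	raw, err := ioutil.ReadFile(path)
	if err != nil {
		log.Fatalf("read %s: %v", path, err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		log.Fatalf("no PEM block found in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatalf("parse %s: %v", path, err)
	}
	return cert
}

func main() {
	ca := loadCert("msp/cacerts/ca.pem")          // hypothetical file name
	admin := loadCert("msp/admincerts/admin.pem") // hypothetical file name

	// A root CA certificate must carry the CA flag and be self-issued.
	fmt.Printf("cacerts: subject=%s IsCA=%v self-issued=%v\n",
		ca.Subject.String(), ca.IsCA, ca.Subject.String() == ca.Issuer.String())

	// The admin certificate must chain to the root stored in cacerts.
	roots := x509.NewCertPool()
	roots.AddCert(ca)
	opts := x509.VerifyOptions{Roots: roots, KeyUsages: []x509.ExtKeyUsage{x509.ExtKeyUsageAny}}
	if _, err := admin.Verify(opts); err != nil {
		fmt.Println("admin cert does NOT chain to the cacerts root:", err)
	} else {
		fmt.Println("admin cert chains to the cacerts root")
	}
}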

Related

How to connect to Hyperledger Fabric Gateway Service (new in HF 2.4) with TLS enabled?

I have a Hyperledger Fabric network set up which is operating fine as long as I don't use the new Fabric-Gateway SDK (https://hyperledger-fabric.readthedocs.io/en/release-2.4/gateway.html).
I upgraded my network from 2.3.1 to 2.4.1 and wanted to try the new SDK, but I cannot connect to the peer. Below I give some details of my configuration.
Peer-base docker service:
peer-base:
  image: hyperledger/fabric-peer
  environment:
    - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
    - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=${COMPOSE_PROJECT_NAME}_basic
    - FABRIC_LOGGING_SPEC=info:gateway,comm,comm.grpc,comm.grpc.server=debug
    - CORE_CHAINCODE_LOGGING_LEVEL=info
    - CORE_PEER_LISTENADDRESS=0.0.0.0:7051
    - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052
    - CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/peer/msp
    - CORE_PEER_GOSSIP_USELEADERELECTION=true
    - CORE_PEER_GOSSIP_ORGLEADER=false
    - CORE_PEER_PROFILE_ENABLED=true
    - CORE_PEER_TLS_ENABLED=true
    - CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/peer/tls/server.crt
    - CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/peer/tls/server.key
    - CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/peer/tls/ca.crt
    - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
    - CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=***
    - CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=***
    - CORE_METRICS_PROVIDER=prometheus
    - CORE_OPERATIONS_LISTENADDRESS=0.0.0.0:7055
    - CORE_PEER_GATEWAY_ENABLED=true
  working_dir: /opt/gopath/src/github.com/hyperledger/fabric
  command: peer node start
  volumes:
    - ./config:/etc/hyperledger/configtx
    - /var/run/:/host/var/run/
  networks:
    - basic
  restart: always
After migrating to 2.4.1, I added CORE_PEER_GATEWAY_ENABLED=true.
The peer docker service, which extends the peer-base:
peer0.org1.tcash.com:
  container_name: peer0.org1.tcash.com
  extends:
    file: docker-compose-org1-base.yaml
    service: peer-base
  environment:
    - CORE_PEER_ID=peer0.org1.tcash.com
    - CORE_PEER_LOCALMSPID=Org1MSP
    - CORE_PEER_ADDRESS=peer0.org1.tcash.com:7051
    - CORE_PEER_CHAINCODEADDRESS=peer0.org1.tcash.com:7052
    - CORE_PEER_GOSSIP_EXTERNALENDPOINT=test2.tcash.sigmacomp.pl:7051
    - CORE_PEER_GOSSIP_ENDPOINT=test2.tcash.sigmacomp.pl:7051
    - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb0.org1.tcash.com:5984
  ports:
    - 7051:7051
    - 7053:7053
    - 7055:7055
  volumes:
    - ./crypto-config/peerOrganizations/org1.tcash.com/peers/peer0.org1.tcash.com:/etc/hyperledger/peer
    - ./persistence/peer0.org1.tcash.com/:/var/hyperledger/production
  depends_on:
    - couchdb0.org1.tcash.com
  extra_hosts:
    - orderer0.tcash.com:146.59.17.169
    - orderer1.tcash.com:146.59.17.169
    - orderer2.tcash.com:146.59.17.169
    - orderer3.tcash.com:146.59.17.169
    - orderer4.tcash.com:146.59.17.169
    - peer2.org1.tcash.com:51.195.202.90
    - peer3.org1.tcash.com:51.195.202.90
    - peer4.org1.tcash.com:51.68.172.244
    - peer5.org1.tcash.com:51.68.172.244
No changes have been made here during migration to 2.4.1.
I can see in the peer logs that the new gateway service has been started:
2022-01-21 12:34:09.177 UTC 0023 INFO [nodeCmd] serve -> Starting peer with Gateway enabled
2022-01-21 12:34:09.177 UTC 0024 INFO [nodeCmd] serve -> Starting peer with ID=[peer0.org1.tcash.com], network ID=[dev], address=[peer0.org1.tcash.com:7051]
2022-01-21 12:34:09.177 UTC 0025 INFO [nodeCmd] func7 -> Starting profiling server with listenAddress = 0.0.0.0:6060
2022-01-21 12:34:09.177 UTC 0026 INFO [nodeCmd] serve -> Started peer with ID=[peer0.org1.tcash.com], network ID=[dev], address=[peer0.org1.tcash.com:7051]
After deploying the network, I try to run the transaction with the following code (NodeJS):
'use strict';
const fs = require('fs');
const crypto = require('crypto');
const grpc = require('@grpc/grpc-js');
const { connect, signers } = require('@hyperledger/fabric-gateway');

async function main() {
    // Main try/catch block
    try {
        const credentials = fs.readFileSync('walletOffline/user.cert.pem');
        const identity = { mspId: 'Org1MSP', credentials };
        const privateKeyPem = fs.readFileSync('walletOffline/user.key.pem');
        const privateKey = crypto.createPrivateKey(privateKeyPem);
        const signer = signers.newPrivateKeySigner(privateKey);

        const ccpJSON = fs.readFileSync('connection.json');
        const ccp = JSON.parse(ccpJSON);
        const peerName = ccp.organizations.org1.peers[0];
        const peerAddress = ccp.peers[peerName].url.replace('grpcs://', '');
        const tlsCACert = ccp.peers[peerName].tlsCACerts.pem;
        const grpcOptions = ccp.peers[peerName].grpcOptions;

        const tlsRootCert = Buffer.from(tlsCACert);
        const tlsCredentials = grpc.credentials.createSsl(tlsRootCert);
        const client = new grpc.Client(peerAddress, tlsCredentials, grpcOptions);

        const gateway = connect({ identity, signer, client });
        const network = gateway.getNetwork('tcashchannel');
        const contract = network.getContract('tcash');

        const result = await contract.evaluateTransaction('queryAccountState', '100', '');
        console.log('result: ' + result);
    } catch (error) {
        console.log('Error: ' + error);
        console.log(error.stack);
    }
}

main();
As you can see, I am extracting the connection parameters from the JSON connection profile. I use this connection profile with the 'old' HF Node SDK and it works without issues. However, running this code gives me the following error from contract.evaluateTransaction() after a 120-second timeout:
GatewayError: 14 UNAVAILABLE: failed to create new connection: context deadline exceeded
at newGatewayError (/Users/michaliwanicki/git/tcash/tcash-application/node_modules/@hyperledger/fabric-gateway/dist/gatewayerror.js:40:12)
at Object.callback (/Users/michaliwanicki/git/tcash/tcash-application/node_modules/@hyperledger/fabric-gateway/dist/client.js:81:67)
at Object.onReceiveStatus (/Users/michaliwanicki/git/tcash/tcash-application/node_modules/@grpc/grpc-js/build/src/client.js:180:36)
at Object.onReceiveStatus (/Users/michaliwanicki/git/tcash/tcash-application/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:365:141)
at Object.onReceiveStatus (/Users/michaliwanicki/git/tcash/tcash-application/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:328:181)
at /Users/michaliwanicki/git/tcash/tcash-application/node_modules/@grpc/grpc-js/build/src/call-stream.js:182:78
at processTicksAndRejections (internal/process/task_queues.js:77:11)
I can also see the corresponding entry in the peer logs:
2022-01-21 14:24:14.961 UTC 007e INFO [comm.grpc.server] 1 -> unary call completed grpc.service=gateway.Gateway grpc.method=Evaluate grpc.peer_address=178.183.68.178:54151 error="rpc error: code = Unavailable desc = failed to create new connection: context deadline exceeded" grpc.code=Unavailable grpc.call_duration=2m0.00087636s
There are no errors or warnings in the peer log.
EDIT:
After switching the logging level to DEBUG and filtering the output, I came across the following part:
2022-01-27 13:38:19.217 UTC 67af DEBU [core.comm] ServerHandshake -> Server TLS handshake completed in 69.892651ms server=PeerServer remoteaddress=178.183.68.178:58755
2022-01-27 13:38:19.356 UTC 67b0 DEBU [lockbasedtxmgr] newQueryExecutor -> constructing new query executor txid = [407898ef-0004-4f25-be10-b603a2aaf919]
2022-01-27 13:38:19.357 UTC 67b1 DEBU [statecouchdb] GetState -> GetState(). ns=, key=CHANNEL_CONFIG_ENV_BYTES
2022-01-27 13:38:19.358 UTC 67b2 DEBU [lockbasedtxmgr] Done -> Done with transaction simulation / query execution [407898ef-0004-4f25-be10-b603a2aaf919]
2022-01-27 13:38:19.358 UTC [grpc] WarningDepth -> DEBU 02f [core]Adjusting keepalive ping interval to minimum period of 10s
2022-01-27 13:38:19.359 UTC [grpc] InfoDepth -> DEBU 030 [core]parsed scheme: ""
2022-01-27 13:38:19.359 UTC [grpc] InfoDepth -> DEBU 031 [core]scheme "" not registered, fallback to default scheme
2022-01-27 13:38:19.359 UTC [grpc] InfoDepth -> DEBU 032 [core]ccResolverWrapper: sending update to cc: {[{test2.tcash.sigmacomp.pl:8051 <nil> 0 <nil>}] <nil> <nil>}
2022-01-27 13:38:19.360 UTC [grpc] InfoDepth -> DEBU 033 [core]ClientConn switching balancer to "pick_first"
2022-01-27 13:38:19.360 UTC [grpc] InfoDepth -> DEBU 034 [core]Channel switches to new LB policy "pick_first"
2022-01-27 13:38:19.360 UTC [grpc] InfoDepth -> DEBU 035 [core]Subchannel Connectivity change to CONNECTING
2022-01-27 13:38:19.360 UTC [grpc] InfoDepth -> DEBU 036 [core]pickfirstBalancer: UpdateSubConnState: 0xc002ed2b30, {CONNECTING <nil>}
2022-01-27 13:38:19.361 UTC [grpc] InfoDepth -> DEBU 037 [core]Channel Connectivity change to CONNECTING
2022-01-27 13:38:19.360 UTC [grpc] InfoDepth -> DEBU 038 [core]Subchannel picks a new address "test2.tcash.sigmacomp.pl:8051" to connect
2022-01-27 13:38:19.370 UTC [grpc] InfoDepth -> DEBU 039 [core]Subchannel Connectivity change to TRANSIENT_FAILURE
2022-01-27 13:38:19.370 UTC [grpc] InfoDepth -> DEBU 03a [core]pickfirstBalancer: UpdateSubConnState: 0xc002ed2b30, {TRANSIENT_FAILURE connection closed}
2022-01-27 13:38:19.370 UTC [grpc] InfoDepth -> DEBU 03b [core]Channel Connectivity change to TRANSIENT_FAILURE
2022-01-27 13:38:19.370 UTC [grpc] InfoDepth -> DEBU 03c [transport]transport: loopyWriter.run returning. connection error: desc = "transport is closing"
EDIT 2:
I noticed that there are some errors in the logs of the other peers in the network (not the one that is called by the client application and runs the Gateway service). It seems there is a problem establishing TLS between peers when using the Gateway SDK:
2022-02-10 14:36:24.934 UTC 24b0 DEBU [gossip.comm] func1 -> Got message: GossipMessage: Channel: , nonce: 0, tag: CHAN_OR_ORG state_info_pull_req: Channel MAC:23b92135be842b052b823a7c87853436fb579040416405d4fdfd0b6db0aa02d9, Envelope: 39 bytes, Signature: 0 bytes
2022-02-10 14:36:24.934 UTC 24b1 DEBU [gossip.gossip] handleMessage -> Entering, 54.37.226.59:7051 5c2af6d536100ada4e7f1829978c7f0163a6589f47f44207aa51a84987fe6a5b sent us GossipMessage: Channel: , nonce: 0, tag: CHAN_OR_ORG state_info_pull_req: Channel MAC:23b92135be842b052b823a7c87853436fb579040416405d4fdfd0b6db0aa02d9, Envelope: 39 bytes, Signature: 0 bytes
2022-02-10 14:36:24.935 UTC 24b2 DEBU [gossip.gossip] handleMessage -> Exiting
2022-02-10 14:36:24.942 UTC 24b3 ERRO [core.comm] ServerHandshake -> Server TLS handshake failed in 15.541µs with error tls: first record does not look like a TLS handshake server=PeerServer remoteaddress=172.24.0.1:36394
2022-02-10 14:36:24.942 UTC [grpc] WarningDepth -> DEBU 04e [core]grpc: Server.Serve failed to complete security handshake from "172.24.0.1:36394": tls: first record does not look like a TLS handshake
I suspect that there is some piece of configuration required for this feature to work which I am missing. I would appreciate it if anyone can help me find it.
It looks like the gateway peer is failing to connect to another endorsing peer in the network. Are you seeing any gossip communication between the peers in the logs?
Try reducing the dialTimeout to something less than the endorsementTimeout in the core.yaml and see if it connects to the other peers.
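To rule out plain network/TLS reachability first, you can run a small probe from the host or container of the gateway peer. This is only a diagnostic sketch, not part of Fabric; the target below is the endpoint that shows up in the gRPC debug log above, so adjust it as needed.

// tlsprobe.go - checks whether the gateway peer's host can reach another
// endorsing peer and complete a TLS handshake with it.
package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"time"
)

func main() {
	// Endpoint taken from the gRPC debug log above; change as needed.
	target := "test2.tcash.sigmacomp.pl:8051"

	dialer := &net.Dialer{Timeout: 5 * time.Second}
	// InsecureSkipVerify: we only want to know whether a TLS server answers,
	// not to validate its certificate chain here.
	conn, err := tls.DialWithDialer(dialer, "tcp", target, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("TLS dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("TLS handshake OK, reached", conn.RemoteAddr())
}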

Instantiate chaincode successful but chaincode doesn't appear

I am running Hyperledger Fabric 1.4.x, and release-1.4 of fabric-sdk-java. I am trying to use the following code to instantiate a chaincode:
InstantiateProposalRequest instantiateProposalRequest = client.newInstantiationProposalRequest();
instantiateProposalRequest.setProposalWaitTime(180000);
instantiateProposalRequest.setChaincodeID(buildChaincodeID(name, version, path));
instantiateProposalRequest.setChaincodeLanguage(Type.JAVA);
instantiateProposalRequest.setFcn("init");
instantiateProposalRequest.setArgs(new String[] {""});
Collection<ProposalResponse> responses = channel.sendInstantiationProposal(instantiateProposalRequest);
The instantiation seemed successful, but I don't see it listed when I check with peer chaincode list --instantiated -C deconeb-channel.
The following log is from the point I execute the code (with FABRIC_LOGGING_SPEC=DEBUG:msp=info:gossip=info):
[root@vmdev2 createchannel]# docker logs $(docker ps -aqf name=fabric_peer1) -n 0 -f
... [endorser] ProcessProposal -> DEBU 22d2 Entering: request from 192.168.50.126:3280
... [protoutils] ValidateProposalMessage -> DEBU 22d3 ValidateProposalMessage starts for signed proposal 0xc003725b30
... [protoutils] validateChannelHeader -> DEBU 22d4 validateChannelHeader info: header type 3
... [protoutils] checkSignatureFromCreator -> DEBU 22d5 begin
... [protoutils] checkSignatureFromCreator -> DEBU 22d6 creator is &{myMSP 0167d1f59420e22fb1032bc6a17b528414378511c23a07719d2b364842f862a7}
... [protoutils] checkSignatureFromCreator -> DEBU 22d7 creator is valid
... [protoutils] checkSignatureFromCreator -> DEBU 22d8 exits successfully
... [protoutils] validateChaincodeProposalMessage -> DEBU 22d9 validateChaincodeProposalMessage starts for proposal 0xc001fb47e0, header 0xc0036cc870
... [protoutils] validateChaincodeProposalMessage -> DEBU 22da validateChaincodeProposalMessage info: header extension references chaincode name:"cscc"
... [endorser] preProcess -> DEBU 22db [deconeb-channel][db2d2e4f] processing txid: db2d2e4fef0b7cb0889fe6d0258b2f424426c180be62457c2f2cf5ec00bdd96c
... [fsblkstorage] retrieveTransactionByID -> DEBU 22dc retrieveTransactionByID() - txId = [db2d2e4fef0b7cb0889fe6d0258b2f424426c180be62457c2f2cf5ec00bdd96c]
... [endorser] SimulateProposal -> DEBU 22dd [deconeb-channel][db2d2e4f] Entry chaincode: name:"cscc"
... [endorser] callChaincode -> INFO 22de [deconeb-channel][db2d2e4f] Entry chaincode: name:"cscc"
... [chaincode] Execute -> DEBU 22df Entry
... [cscc] Invoke -> DEBU 22e0 Invoke function: GetConfigBlock
... [aclmgmt] CheckACL -> DEBU 22e1 acl policy not found in config for resource cscc/GetConfigBlock
... [policies] Evaluate -> DEBU 22e2 == Evaluating *policies.implicitMetaPolicy Policy /Channel/Application/Readers ==
... [policies] Evaluate -> DEBU 22e3 This is an implicit meta policy, it will trigger other policy evaluations, whose failures may be benign
... [policies] Evaluate -> DEBU 22e4 == Evaluating *cauthdsl.policy Policy /Channel/Application/myMSP/Readers ==
... [cauthdsl] func1 -> DEBU 22e5 0xc00059ec70 gate 1638419940859552678 evaluation starts
... [cauthdsl] func2 -> DEBU 22e6 0xc00059ec70 signed by 0 principal evaluation starts (used [false])
... [cauthdsl] func2 -> DEBU 22e7 0xc00059ec70 processing identity 0 with bytes of 115a2d0
... [cauthdsl] func2 -> DEBU 22e8 0xc00059ec70 principal matched by identity 0
... [cauthdsl] func2 -> DEBU 22e9 0xc00059ec70 principal evaluation succeeds for identity 0
... [cauthdsl] func2 -> DEBU 22ea 0xc00059ec70 signed by 1 principal evaluation starts (used [true])
... [cauthdsl] func2 -> DEBU 22eb 0xc00059ec70 skipping identity 0 because it has already been used
... [cauthdsl] func2 -> DEBU 22ec 0xc00059ec70 principal evaluation fails
... [cauthdsl] func2 -> DEBU 22ed 0xc00059ec70 signed by 2 principal evaluation starts (used [true])
... [cauthdsl] func2 -> DEBU 22ee 0xc00059ec70 skipping identity 0 because it has already been used
... [cauthdsl] func2 -> DEBU 22ef 0xc00059ec70 principal evaluation fails
... [cauthdsl] func1 -> DEBU 22f0 0xc00059ec70 gate 1638419940859552678 evaluation succeeds
... [policies] Evaluate -> DEBU 22f1 Signature set satisfies policy /Channel/Application/myMSP/Readers
... [policies] Evaluate -> DEBU 22f2 == Done Evaluating *cauthdsl.policy Policy /Channel/Application/myMSP/Readers
... [policies] Evaluate -> DEBU 22f3 Signature set satisfies policy /Channel/Application/Readers
... [policies] Evaluate -> DEBU 22f4 == Done Evaluating *policies.implicitMetaPolicy Policy /Channel/Application/Readers
... [chaincode] handleMessage -> DEBU 22f5 [db2d2e4f] Fabric side handling ChaincodeMessage of type: COMPLETED in state ready
... [chaincode] Notify -> DEBU 22f6 [db2d2e4f] notifying Txid:db2d2e4fef0b7cb0889fe6d0258b2f424426c180be62457c2f2cf5ec00bdd96c, channelID:deconeb-channel
... [chaincode] Execute -> DEBU 22f7 Exit
... [endorser] callChaincode -> INFO 22f8 [deconeb-channel][db2d2e4f] Exit chaincode: name:"cscc" (1ms)
... [endorser] SimulateProposal -> DEBU 22f9 [deconeb-channel][db2d2e4f] Exit
... [endorser] endorseProposal -> DEBU 22fa [deconeb-channel][db2d2e4f] Entry chaincode: name:"cscc"
... [endorser] endorseProposal -> DEBU 22fb [deconeb-channel][db2d2e4f] escc for chaincode name:"cscc" is escc
... [endorser] EndorseWithPlugin -> DEBU 22fc Entering endorsement for {plugin: escc, channel: deconeb-channel, tx: db2d2e4fef0b7cb0889fe6d0258b2f424426c180be62457c2f2cf5ec00bdd96c, chaincode: cscc}
... [endorser] EndorseWithPlugin -> DEBU 22fd Exiting {plugin: escc, channel: deconeb-channel, tx: db2d2e4fef0b7cb0889fe6d0258b2f424426c180be62457c2f2cf5ec00bdd96c, chaincode: cscc}
... [endorser] endorseProposal -> DEBU 22fe [deconeb-channel][db2d2e4f] Exit
... [endorser] func1 -> DEBU 22ff Exit: request from 192.168.50.126:3280
... [comm.grpc.server] 1 -> INFO 2300 unary call completed grpc.service=protos.Endorser grpc.method=ProcessProposal grpc.peer_address=192.168.50.126:3280 grpc.code=OK grpc.call_duration=3.1014ms
... [common.deliverevents] Deliver -> DEBU 2301 Starting new Deliver handler
... [common.deliver] Handle -> DEBU 2302 Starting new deliver loop for 192.168.50.126:3285
... [common.deliver] Handle -> DEBU 2303 Attempting to read seek info message from 192.168.50.126:3285
... [aclmgmt] CheckACL -> DEBU 2304 acl policy not found in config for resource event/Block
... [policies] Evaluate -> DEBU 2305 == Evaluating *policies.implicitMetaPolicy Policy /Channel/Application/Readers ==
... [policies] Evaluate -> DEBU 2306 This is an implicit meta policy, it will trigger other policy evaluations, whose failures may be benign
... [policies] Evaluate -> DEBU 2307 == Evaluating *cauthdsl.policy Policy /Channel/Application/myMSP/Readers ==
... [cauthdsl] func1 -> DEBU 2308 0xc00373d130 gate 1638419941422542679 evaluation starts
... [cauthdsl] func2 -> DEBU 2309 0xc00373d130 signed by 0 principal evaluation starts (used [false])
... [cauthdsl] func2 -> DEBU 230a 0xc00373d130 processing identity 0 with bytes of 115a2d0
... [cauthdsl] func2 -> DEBU 230b 0xc00373d130 principal matched by identity 0
... [cauthdsl] func2 -> DEBU 230c 0xc00373d130 principal evaluation succeeds for identity 0
... [cauthdsl] func2 -> DEBU 230d 0xc00373d130 signed by 1 principal evaluation starts (used [true])
... [cauthdsl] func2 -> DEBU 230e 0xc00373d130 skipping identity 0 because it has already been used
... [cauthdsl] func2 -> DEBU 230f 0xc00373d130 principal evaluation fails
... [cauthdsl] func2 -> DEBU 2310 0xc00373d130 signed by 2 principal evaluation starts (used [true])
... [cauthdsl] func2 -> DEBU 2311 0xc00373d130 skipping identity 0 because it has already been used
... [cauthdsl] func2 -> DEBU 2312 0xc00373d130 principal evaluation fails
... [cauthdsl] func1 -> DEBU 2313 0xc00373d130 gate 1638419941422542679 evaluation succeeds
... [policies] Evaluate -> DEBU 2314 Signature set satisfies policy /Channel/Application/myMSP/Readers
... [policies] Evaluate -> DEBU 2315 == Done Evaluating *cauthdsl.policy Policy /Channel/Application/myMSP/Readers
... [policies] Evaluate -> DEBU 2316 Signature set satisfies policy /Channel/Application/Readers
... [policies] Evaluate -> DEBU 2317 == Done Evaluating *policies.implicitMetaPolicy Policy /Channel/Application/Readers
... [common.deliver] deliverBlocks -> DEBU 2318 [channel: deconeb-channel] Received seekInfo (0xc002fa3a40) start:<newest:<> > stop:<specified:<number:9223372036854775807 > > from 192.168.50.126:3285
... [fsblkstorage] Next -> DEBU 2319 Initializing block stream for iterator. itr.maxBlockNumAvailable=1
... [fsblkstorage] newBlockfileStream -> DEBU 231a newBlockfileStream(): filePath=[/var/hyperledger/production/ledgersData/chains/chains/deconeb-channel/blockfile_000000], startOffset=[21642]
... [fsblkstorage] nextBlockBytesAndPlacementInfo -> DEBU 231b Remaining bytes=[4671], Going to peek [8] bytes
... [fsblkstorage] nextBlockBytesAndPlacementInfo -> DEBU 231c Returning blockbytes - length=[4669], placementInfo={fileNum=[0], startOffset=[21642], bytesOffset=[21644]}
... [fsblkstorage] nextBlockBytesAndPlacementInfo -> DEBU 231d blockbytes [4669] read from file [0]
... [common.deliver] deliverBlocks -> DEBU 231e [channel: deconeb-channel] Delivering block [1] for (0xc002fa3a40) for 192.168.50.126:3285
... [fsblkstorage] waitForBlock -> DEBU 231f Going to wait for newer blocks. maxAvailaBlockNumber=[1], waitForBlockNum=[2]
... [endorser] ProcessProposal -> DEBU 2320 Entering: request from 192.168.50.126:3280
... [protoutils] ValidateProposalMessage -> DEBU 2321 ValidateProposalMessage starts for signed proposal 0xc0033d4fa0
... [protoutils] validateChannelHeader -> DEBU 2322 validateChannelHeader info: header type 3
... [protoutils] checkSignatureFromCreator -> DEBU 2323 begin
... [protoutils] checkSignatureFromCreator -> DEBU 2324 creator is &{myMSP 0167d1f59420e22fb1032bc6a17b528414378511c23a07719d2b364842f862a7}
... [protoutils] checkSignatureFromCreator -> DEBU 2325 creator is valid
... [protoutils] checkSignatureFromCreator -> DEBU 2326 exits successfully
... [protoutils] validateChaincodeProposalMessage -> DEBU 2327 validateChaincodeProposalMessage starts for proposal 0xc00014d5e0, header 0xc0033d53b0
... [protoutils] validateChaincodeProposalMessage -> DEBU 2328 validateChaincodeProposalMessage info: header extension references chaincode name:"lscc"
... [endorser] preProcess -> DEBU 2329 [deconeb-channel][bc1525c4] processing txid: bc1525c4a55b0d9115df62e094cc5f651a38bd1e037bf0675296f77f489585da
... [fsblkstorage] retrieveTransactionByID -> DEBU 232a retrieveTransactionByID() - txId = [bc1525c4a55b0d9115df62e094cc5f651a38bd1e037bf0675296f77f489585da]
... [lockbasedtxmgr] NewTxSimulator -> DEBU 232b constructing new tx simulator
... [lockbasedtxmgr] newLockBasedTxSimulator -> DEBU 232c constructing new tx simulator txid = [bc1525c4a55b0d9115df62e094cc5f651a38bd1e037bf0675296f77f489585da]
... [endorser] SimulateProposal -> DEBU 232d [deconeb-channel][bc1525c4] Entry chaincode: name:"lscc"
... [endorser] callChaincode -> INFO 232e [deconeb-channel][bc1525c4] Entry chaincode: name:"lscc"
... [chaincode] Execute -> DEBU 232f Entry
... [chaincode] handleMessage -> DEBU 2330 [bc1525c4] Fabric side handling ChaincodeMessage of type: GET_STATE in state ready
... [chaincode] HandleTransaction -> DEBU 2331 [bc1525c4] handling GET_STATE from chaincode
... [chaincode] HandleGetState -> DEBU 2332 [bc1525c4] getting state for chaincode lscc, key productOwnership, channel deconeb-channel
... [statecouchdb] GetState -> DEBU 2333 GetState(). ns=lscc, key=productOwnership
... [couchdb] ReadDoc -> DEBU 2334 [deconeb-channel_lscc] Entering ReadDoc() id=[productOwnership]
... [couchdb] handleRequest -> DEBU 2335 Entering handleRequest() method=GET url=http://couchdb1.myorg.com:5984 dbName=deconeb-channel_lscc
... [couchdb] handleRequest -> DEBU 2336 Request URL: http://couchdb1.myorg.com:5984/deconeb-channel_lscc/productOwnership?attachments=true
... [couchdb] handleRequest -> DEBU 2337 HTTP Request: GET /deconeb-channel_lscc/productOwnership?attachments=true HTTP/1.1 | Host: couchdb1.myorg.com:5984 | User-Agent: Go-http-client/1.1 | Accept: multipart/related | Authorization: Basic Y291Y2hkYjpjb3VjaGRiMTIz | Accept-Encoding: gzip | |
... [couchdb] handleRequest -> DEBU 2338 Error handling CouchDB request. Error:not_found, Status Code:404, Reason:missing
... [couchdb] ReadDoc -> DEBU 2339 [deconeb-channel_lscc] Document not found (404), returning nil value instead of 404 error
... [chaincode] HandleGetState -> DEBU 233a [bc1525c4] No state associated with key: productOwnership. Sending RESPONSE with an empty payload
... [chaincode] HandleTransaction -> DEBU 233b [bc1525c4] Completed GET_STATE. Sending RESPONSE
... [cauthdsl] func1 -> DEBU 233c 0xc00370e210 gate 1638419941463477320 evaluation starts
... [cauthdsl] func2 -> DEBU 233d 0xc00370e210 signed by 0 principal evaluation starts (used [false])
... [cauthdsl] func2 -> DEBU 233e 0xc00370e210 processing identity 0 with bytes of 115a2d0
... [cauthdsl] func2 -> DEBU 233f 0xc00370e210 principal matched by identity 0
... [cauthdsl] func2 -> DEBU 2340 0xc00370e210 principal evaluation succeeds for identity 0
... [cauthdsl] func1 -> DEBU 2341 0xc00370e210 gate 1638419941463477320 evaluation succeeds
... [chaincode] handleMessage -> DEBU 2342 [bc1525c4] Fabric side handling ChaincodeMessage of type: PUT_STATE in state ready
... [chaincode] HandleTransaction -> DEBU 2343 [bc1525c4] handling PUT_STATE from chaincode
... [chaincode] HandleTransaction -> DEBU 2344 [bc1525c4] Completed PUT_STATE. Sending RESPONSE
... [lscc] putChaincodeCollectionData -> DEBU 2345 No collection configuration specified
... [chaincode] handleMessage -> DEBU 2346 [bc1525c4] Fabric side handling ChaincodeMessage of type: COMPLETED in state ready
... [chaincode] Notify -> DEBU 2347 [bc1525c4] notifying Txid:bc1525c4a55b0d9115df62e094cc5f651a38bd1e037bf0675296f77f489585da, channelID:deconeb-channel
... [chaincode] Execute -> DEBU 2348 Exit
... [chaincode] LaunchConfig -> DEBU 2349 launchConfig: executable:"/root/chaincode-java/start",Args:[/root/chaincode-java/start,--peerAddress,peer1.myorg.com:7052],Envs:[CORE_CHAINCODE_LOGGING_LEVEL=info,CORE_CHAINCODE_LOGGING_SHIM=warning,CORE_CHAINCODE_LOGGING_FORMAT=%{color}%{time:2006-01-02 15:04:05.000 MST} [%{module}] %{shortfunc} -> %{level:.4s} %{id:03x}%{color:reset} %{message},CORE_CHAINCODE_ID_NAME=productOwnership:1.0,CORE_PEER_TLS_ENABLED=true,CORE_TLS_CLIENT_KEY_PATH=/etc/hyperledger/fabric/client.key,CORE_TLS_CLIENT_CERT_PATH=/etc/hyperledger/fabric/client.crt,CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/peer.crt],Files:[/etc/hyperledger/fabric/client.crt /etc/hyperledger/fabric/client.key /etc/hyperledger/fabric/peer.crt]
... [chaincode] Start -> DEBU 234a start container: productOwnership:1.0
... [chaincode] Start -> DEBU 234b start container with args: /root/chaincode-java/start --peerAddress peer1.myorg.com:7052
... [chaincode] Start -> DEBU 234c start container with env:
CORE_CHAINCODE_LOGGING_LEVEL=info
CORE_CHAINCODE_LOGGING_SHIM=warning
CORE_CHAINCODE_LOGGING_FORMAT=%{color}%{time:2006-01-02 15:04:05.000 MST} [%{module}] %{shortfunc} -> %{level:.4s} %{id:03x}%{color:reset} %{message}
CORE_CHAINCODE_ID_NAME=productOwnership:1.0
CORE_PEER_TLS_ENABLED=true
CORE_TLS_CLIENT_KEY_PATH=/etc/hyperledger/fabric/client.key
CORE_TLS_CLIENT_CERT_PATH=/etc/hyperledger/fabric/client.crt
CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/peer.crt
... [container] lockContainer -> DEBU 234d waiting for container(productOwnership-1.0) lock
... [container] lockContainer -> DEBU 234e got container (productOwnership-1.0) lock
... [dockercontroller] stopInternal -> DEBU 234f stopping container id=dev-peer1.myorg.com-productOwnership-1.0
... [dockercontroller] stopInternal -> DEBU 2350 stop container result error="No such container: dev-peer1.myorg.com-productOwnership-1.0"
... [dockercontroller] stopInternal -> DEBU 2351 killing container id=dev-peer1.myorg.com-productOwnership-1.0
... [dockercontroller] stopInternal -> DEBU 2352 kill container result id=dev-peer1.myorg.com-productOwnership-1.0 error="No such container: dev-peer1.myorg.com-productOwnership-1.0"
... [dockercontroller] stopInternal -> DEBU 2353 removing container id=dev-peer1.myorg.com-productOwnership-1.0
... [dockercontroller] stopInternal -> DEBU 2354 remove container result id=dev-peer1.myorg.com-productOwnership-1.0 error="No such container: dev-peer1.myorg.com-productOwnership-1.0"
... [dockercontroller] createContainer -> DEBU 2355 create container imageID=dev-peer1.myorg.com-productownership-1.0-8833aa49d3efc325e99f93399c3e02417a6d5a271b81e9013f3095580e89b308 containerID=dev-peer1.myorg.com-productOwnership-1.0
... [dockercontroller] getDockerHostConfig -> DEBU 2356 docker container hostconfig NetworkMode: devbc
... [dockercontroller] createContainer -> DEBU 2357 created container imageID=dev-peer1.myorg.com-productownership-1.0-8833aa49d3efc325e99f93399c3e02417a6d5a271b81e9013f3095580e89b308 containerID=dev-peer1.myorg.com-productOwnership-1.0
... [dockercontroller] Start -> DEBU 2358 Started container dev-peer1.myorg.com-productOwnership-1.0
... [container] unlockContainer -> DEBU 2359 container lock deleted(productOwnership-1.0)
... [container] lockContainer -> DEBU 235a waiting for container(productOwnership-1.0) lock
... [container] lockContainer -> DEBU 235b got container (productOwnership-1.0) lock
... [container] unlockContainer -> DEBU 235c container lock deleted(productOwnership-1.0)
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 235d Dec 02, 2021 4:39:02 AM org.hyperledger.fabric.shim.ChaincodeBase processEnvironmentOptions
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 235e INFO: <<<<<<<<<<<<<Enviromental options>>>>>>>>>>>>
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 235f Dec 02, 2021 4:39:02 AM org.hyperledger.fabric.shim.ChaincodeBase processEnvironmentOptions
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2360 INFO: CORE_CHAINCODE_ID_NAME: productOwnership:1.0
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2361 Dec 02, 2021 4:39:02 AM org.hyperledger.fabric.shim.ChaincodeBase processEnvironmentOptions
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2362 INFO: CORE_PEER_ADDRESS: 127.0.0.1
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2363 Dec 02, 2021 4:39:02 AM org.hyperledger.fabric.shim.ChaincodeBase processEnvironmentOptions
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2364 INFO: CORE_PEER_TLS_ENABLED: true
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2365 Dec 02, 2021 4:39:02 AM org.hyperledger.fabric.shim.ChaincodeBase processEnvironmentOptions
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2366 INFO: CORE_PEER_TLS_ROOTCERT_FILE: /etc/hyperledger/fabric/peer.crt
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2367 Dec 02, 2021 4:39:02 AM org.hyperledger.fabric.shim.ChaincodeBase processEnvironmentOptions
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2368 INFO: CORE_TLS_CLIENT_KEY_PATH: /etc/hyperledger/fabric/client.key
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2369 Dec 02, 2021 4:39:02 AM org.hyperledger.fabric.shim.ChaincodeBase processEnvironmentOptions
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 236a INFO: CORE_TLS_CLIENT_CERT_PATH: /etc/hyperledger/fabric/client.crt
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 236b Dec 02, 2021 4:39:02 AM org.hyperledger.fabric.shim.ChaincodeBase processEnvironmentOptions
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 236c INFO: LOGLEVEL: INFO
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 236d Dec 02, 2021 4:39:03 AM org.hyperledger.fabric.shim.ChaincodeBase processCommandLineOptions
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 236e INFO: <<<<<<<<<<<<<CommandLine options>>>>>>>>>>>>
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 236f Dec 02, 2021 4:39:03 AM org.hyperledger.fabric.shim.ChaincodeBase processCommandLineOptions
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2370 INFO: CORE_CHAINCODE_ID_NAME: productOwnership:1.0
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2371 Dec 02, 2021 4:39:03 AM org.hyperledger.fabric.shim.ChaincodeBase processCommandLineOptions
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2372 INFO: CORE_PEER_ADDRESS: peer1.myorg.com:7052
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2373 Dec 02, 2021 4:39:03 AM org.hyperledger.fabric.shim.ChaincodeBase processCommandLineOptions
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2374 INFO: CORE_PEER_TLS_ENABLED: true
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2375 Dec 02, 2021 4:39:03 AM org.hyperledger.fabric.shim.ChaincodeBase processCommandLineOptions
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2376 INFO: CORE_PEER_TLS_ROOTCERT_FILE: /etc/hyperledger/fabric/peer.crt
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2377 Dec 02, 2021 4:39:03 AM org.hyperledger.fabric.shim.ChaincodeBase processCommandLineOptions
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2378 INFO: CORE_TLS_CLIENT_KEY_PATH: /etc/hyperledger/fabric/client.key
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2379 Dec 02, 2021 4:39:03 AM org.hyperledger.fabric.shim.ChaincodeBase processCommandLineOptions
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 237a INFO: CORE_TLS_CLIENT_CERT_PATH: /etc/hyperledger/fabric/client.crt
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 237b org.hyperledger
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 237c org.hyperledger.fabric.shim.ChaincodeBase
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 237d org.hyperledger
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 237e 04:39:03:119 INFO org.hyperledger.fabric.shim.ChaincodeBase initializeLogging Loglevel set to INFO
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 237f 04:39:03:130 INFO org.hyperledger.fabric.shim.ChaincodeBase getChaincodeConfig <<<<<<<<<<<<<Properties options>>>>>>>>>>>>
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2380 04:39:03:131 INFO org.hyperledger.fabric.shim.ChaincodeBase getChaincodeConfig {CORE_CHAINCODE_ID_NAME=productOwnership:1.0, CORE_PEER_ADDRESS=peer1.myorg.com}
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2381 04:39:03:136 INFO org.hyperledger.fabric.metrics.Metrics initialize Metrics disabled
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2382 04:39:03:840 INFO org.hyperledger.fabric.shim.ChaincodeBase newChannelBuilder ()->Configuring channel connection to peer.peer1.myorg.com:7052 tlsenabled true
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2383 04:39:04:408 INFO org.hyperledger.fabric.shim.impl.InnvocationTaskManager <init> Max Pool Size [TP_MAX_POOL_SIZE]5
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2384 04:39:04:408 INFO org.hyperledger.fabric.shim.impl.InnvocationTaskManager <init> Queue Size [TP_CORE_POOL_SIZE]5000
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2385 04:39:04:413 INFO org.hyperledger.fabric.shim.impl.InnvocationTaskManager <init> Core Pool Size [TP_QUEUE_SIZE]5
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2386 04:39:04:413 INFO org.hyperledger.fabric.shim.impl.InnvocationTaskManager <init> Keep Alive Time [TP_KEEP_ALIVE_MS]5000
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2387 04:39:04:421 INFO org.hyperledger.fabric.shim.impl.InnvocationTaskExecutor <init> Thread pool created
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2388 04:39:04:422 INFO org.hyperledger.fabric.shim.impl.ChaincodeSupportClient start making the grpc call
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2389 04:39:04:596 INFO org.hyperledger.fabric.shim.impl.InnvocationTaskManager register Registering new chaincode name: "productOwnership:1.0"
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 238a
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 238b 04:39:04:620 FINE org.hyperledger.fabric.shim.impl.ChaincodeSupportClient$2 accept > sendToPeer
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 238c 04:39:04:636 FINE org.hyperledger.fabric.shim.impl.ChaincodeSupportClient$2 accept < sendToPeer
... [chaincode.accesscontrol] authenticate -> DEBU 238d Chaincode productOwnership:1.0 's authentication is authorized
... [chaincode] handleMessage -> DEBU 238e [] Fabric side handling ChaincodeMessage of type: REGISTER in state created
... [chaincode] HandleRegister -> DEBU 238f Received REGISTER in state created
... [chaincode] Register -> DEBU 2390 registered handler complete for chaincode productOwnership:1.0
... [chaincode] HandleRegister -> DEBU 2391 Got REGISTER for chaincodeID = name:"productOwnership:1.0" , sending back REGISTERED
... [chaincode] HandleRegister -> DEBU 2392 Changed state to established for name:"productOwnership:1.0"
... [chaincode] sendReady -> DEBU 2393 sending READY for chaincode name:"productOwnership:1.0"
... [chaincode] sendReady -> DEBU 2394 Changed to state ready for chaincode name:"productOwnership:1.0"
... [chaincode] Launch -> DEBU 2395 launch complete
... [chaincode] Execute -> DEBU 2396 Entry
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2397 04:39:05:261 FINE org.hyperledger.fabric.shim.impl.InnvocationTaskManager onChaincodeMessage [ ] {
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2398 "type": "REGISTERED"
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2399 }
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 239a 04:39:05:269 FINE org.hyperledger.fabric.shim.impl.InnvocationTaskManager onChaincodeMessage [ ] Received REGISTERED: moving to established state
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 239b 04:39:05:273 FINE org.hyperledger.fabric.shim.impl.InnvocationTaskManager onChaincodeMessage [ ] {
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 239c "type": "READY"
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 239d }
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 239e 04:39:05:274 FINE org.hyperledger.fabric.shim.impl.InnvocationTaskManager onChaincodeMessage [ ] Received READY: ready for invocations
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 239f 04:39:05:288 FINE org.hyperledger.fabric.shim.impl.InnvocationTaskManager onChaincodeMessage [bc1525c4] {
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 23a0 "type": "INIT",
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 23a1 "payload": "CgRpbml0CgA=",
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 23a2 "txid": "bc1525c4a55b0d9115df62e094cc5f651a38bd1e037bf0675296f77f489585da",
I had to truncate the log above to fit post requirements; the full log can be found at https://pastebin.com/XBUtQdwz
I don't see any errors in the log so I'm not sure what the problem is. Can anyone point me in the right direction?

Chaincode as external service with TLS: installed but query doesn't find it?

In the Fabric docs example for a bin/release script, there's the comment:
if tls_required is true, copy TLS files (using above example, the fully qualified path for these files would be "$RELEASE"/chaincode/server/tls)
But which files should be put there? How do they have to be named? Are they referenced somewhere? Frankly, I don't even understand why they're needed at all. We already have all certificates in the connection.json on the peer side, and TLS certificates are also referenced by the ChaincodeServer on the chaincode side.
I'm asking because I can't invoke my chaincode, and since I don't have any additional certificates within the $RELEASE folder, that might be the cause of the problem.
This is happening on chaincode query at the CLI:
$ export CORE_PEER_MSPCONFIGPATH=/config/admin/msp
$ peer chaincode query -C channel1 -n cc-abac -c '{"Args":["query","a"]}' --clientauth --tls --cafile /config/peer/tls-msp/tlscacerts/ca-cert.pem --keyfile /config/peer/tls-msp/keystore/key.pem --certfile /config/peer/tls-msp/signcerts/cert.pem
2020-07-06 07:20:55.290 UTC [msp] loadCertificateAt -> WARN 001 Failed loading ClientOU certificate at [/config/admin/msp]: [could not read file /config/admin/msp: read /config/admin/msp: is a directory]
2020-07-06 07:20:55.290 UTC [msp] loadCertificateAt -> WARN 002 Failed loading PeerOU certificate at [/config/admin/msp]: [could not read file /config/admin/msp: read /config/admin/msp: is a directory]
2020-07-06 07:20:55.290 UTC [msp] loadCertificateAt -> WARN 003 Failed loading AdminOU certificate at [/config/admin/msp]: [could not read file /config/admin/msp: read /config/admin/msp: is a directory]
2020-07-06 07:20:55.291 UTC [msp] loadCertificateAt -> WARN 004 Failed loading OrdererOU certificate at [/config/admin/msp]: [could not read file /config/admin/msp: read /config/admin/msp: is a directory]
2020-07-06 07:20:55.302 UTC [grpc] Infof -> DEBU 005 parsed scheme: ""
2020-07-06 07:20:55.302 UTC [grpc] Infof -> DEBU 006 scheme "" not registered, fallback to default scheme
2020-07-06 07:20:55.302 UTC [grpc] Infof -> DEBU 007 ccResolverWrapper: sending update to cc: {[{org1-peer1:30151 <nil> 0 <nil>}] <nil> <nil>}
2020-07-06 07:20:55.302 UTC [grpc] Infof -> DEBU 008 ClientConn switching balancer to "pick_first"
2020-07-06 07:20:55.302 UTC [grpc] Infof -> DEBU 009 Channel switches to new LB policy "pick_first"
2020-07-06 07:20:55.302 UTC [grpc] Infof -> DEBU 00a Subchannel Connectivity change to CONNECTING
2020-07-06 07:20:55.302 UTC [grpc] Infof -> DEBU 00b Subchannel picks a new address "org1-peer1:30151" to connect
2020-07-06 07:20:55.302 UTC [grpc] UpdateSubConnState -> DEBU 00c pickfirstBalancer: HandleSubConnStateChange: 0xc0001aff40, {CONNECTING <nil>}
2020-07-06 07:20:55.302 UTC [grpc] Infof -> DEBU 00d Channel Connectivity change to CONNECTING
2020-07-06 07:20:55.310 UTC [grpc] Infof -> DEBU 00e Subchannel Connectivity change to READY
2020-07-06 07:20:55.310 UTC [grpc] UpdateSubConnState -> DEBU 00f pickfirstBalancer: HandleSubConnStateChange: 0xc0001aff40, {READY <nil>}
2020-07-06 07:20:55.310 UTC [grpc] Infof -> DEBU 010 Channel Connectivity change to READY
2020-07-06 07:20:55.315 UTC [grpc] Infof -> DEBU 011 parsed scheme: ""
2020-07-06 07:20:55.315 UTC [grpc] Infof -> DEBU 012 scheme "" not registered, fallback to default scheme
2020-07-06 07:20:55.315 UTC [grpc] Infof -> DEBU 013 ccResolverWrapper: sending update to cc: {[{org1-peer1:30151 <nil> 0 <nil>}] <nil> <nil>}
2020-07-06 07:20:55.315 UTC [grpc] Infof -> DEBU 014 ClientConn switching balancer to "pick_first"
2020-07-06 07:20:55.315 UTC [grpc] Infof -> DEBU 015 Channel switches to new LB policy "pick_first"
2020-07-06 07:20:55.315 UTC [grpc] Infof -> DEBU 016 Subchannel Connectivity change to CONNECTING
2020-07-06 07:20:55.315 UTC [grpc] Infof -> DEBU 017 Subchannel picks a new address "org1-peer1:30151" to connect
2020-07-06 07:20:55.315 UTC [grpc] UpdateSubConnState -> DEBU 018 pickfirstBalancer: HandleSubConnStateChange: 0xc0003447f0, {CONNECTING <nil>}
2020-07-06 07:20:55.315 UTC [grpc] Infof -> DEBU 019 Channel Connectivity change to CONNECTING
2020-07-06 07:20:55.320 UTC [grpc] Infof -> DEBU 01a Subchannel Connectivity change to READY
2020-07-06 07:20:55.320 UTC [grpc] UpdateSubConnState -> DEBU 01b pickfirstBalancer: HandleSubConnStateChange: 0xc0003447f0, {READY <nil>}
2020-07-06 07:20:55.320 UTC [grpc] Infof -> DEBU 01c Channel Connectivity change to READY
Error: endorsement failure during query. response: status:500 message:"make sure the chaincode cc-abac has been successfully defined on channel channel1 and try again: chaincode definition for 'cc-abac' exists, but chaincode is not installed"
Ok, let's check if it's installed:
$ peer lifecycle chaincode queryinstalled
2020-07-06 07:27:54.192 UTC [msp] loadCertificateAt -> WARN 001 Failed loading ClientOU certificate at [/config/admin/msp]: [could not read file /config/admin/msp: read /config/admin/msp: is a directory]
2020-07-06 07:27:54.192 UTC [msp] loadCertificateAt -> WARN 002 Failed loading PeerOU certificate at [/config/admin/msp]: [could not read file /config/admin/msp: read /config/admin/msp: is a directory]
2020-07-06 07:27:54.192 UTC [msp] loadCertificateAt -> WARN 003 Failed loading AdminOU certificate at [/config/admin/msp]: [could not read file /config/admin/msp: read /config/admin/msp: is a directory]
2020-07-06 07:27:54.193 UTC [msp] loadCertificateAt -> WARN 004 Failed loading OrdererOU certificate at [/config/admin/msp]: [could not read file /config/admin/msp: read /config/admin/msp: is a directory]
2020-07-06 07:27:54.201 UTC [grpc] Infof -> DEBU 005 parsed scheme: ""
2020-07-06 07:27:54.201 UTC [grpc] Infof -> DEBU 006 scheme "" not registered, fallback to default scheme
2020-07-06 07:27:54.201 UTC [grpc] Infof -> DEBU 007 ccResolverWrapper: sending update to cc: {[{org1-peer1:30151 <nil> 0 <nil>}] <nil> <nil>}
2020-07-06 07:27:54.201 UTC [grpc] Infof -> DEBU 008 ClientConn switching balancer to "pick_first"
2020-07-06 07:27:54.201 UTC [grpc] Infof -> DEBU 009 Channel switches to new LB policy "pick_first"
2020-07-06 07:27:54.201 UTC [grpc] Infof -> DEBU 00a Subchannel Connectivity change to CONNECTING
2020-07-06 07:27:54.202 UTC [grpc] Infof -> DEBU 00b Subchannel picks a new address "org1-peer1:30151" to connect
2020-07-06 07:27:54.202 UTC [grpc] UpdateSubConnState -> DEBU 00c pickfirstBalancer: HandleSubConnStateChange: 0xc000447800, {CONNECTING <nil>}
2020-07-06 07:27:54.202 UTC [grpc] Infof -> DEBU 00d Channel Connectivity change to CONNECTING
2020-07-06 07:27:54.209 UTC [grpc] Infof -> DEBU 00e Subchannel Connectivity change to READY
2020-07-06 07:27:54.209 UTC [grpc] UpdateSubConnState -> DEBU 00f pickfirstBalancer: HandleSubConnStateChange: 0xc000447800, {READY <nil>}
2020-07-06 07:27:54.209 UTC [grpc] Infof -> DEBU 010 Channel Connectivity change to READY
2020-07-06 07:27:54.213 UTC [grpc] Infof -> DEBU 011 parsed scheme: ""
2020-07-06 07:27:54.213 UTC [grpc] Infof -> DEBU 012 scheme "" not registered, fallback to default scheme
2020-07-06 07:27:54.213 UTC [grpc] Infof -> DEBU 013 ccResolverWrapper: sending update to cc: {[{org1-peer1:30151 <nil> 0 <nil>}] <nil> <nil>}
2020-07-06 07:27:54.213 UTC [grpc] Infof -> DEBU 014 ClientConn switching balancer to "pick_first"
2020-07-06 07:27:54.213 UTC [grpc] Infof -> DEBU 015 Channel switches to new LB policy "pick_first"
2020-07-06 07:27:54.213 UTC [grpc] Infof -> DEBU 016 Subchannel Connectivity change to CONNECTING
2020-07-06 07:27:54.213 UTC [grpc] Infof -> DEBU 017 Subchannel picks a new address "org1-peer1:30151" to connect
2020-07-06 07:27:54.213 UTC [grpc] UpdateSubConnState -> DEBU 018 pickfirstBalancer: HandleSubConnStateChange: 0xc0000da8f0, {CONNECTING <nil>}
2020-07-06 07:27:54.213 UTC [grpc] Infof -> DEBU 019 Channel Connectivity change to CONNECTING
2020-07-06 07:27:54.219 UTC [grpc] Infof -> DEBU 01a Subchannel Connectivity change to READY
2020-07-06 07:27:54.219 UTC [grpc] UpdateSubConnState -> DEBU 01b pickfirstBalancer: HandleSubConnStateChange: 0xc0000da8f0, {READY <nil>}
2020-07-06 07:27:54.219 UTC [grpc] Infof -> DEBU 01c Channel Connectivity change to READY
Installed chaincodes on peer:
Package ID: cc-abac:7f7a2b755874ef0c72e6d1eb467f6e65afb488994c80a75f3c5712fcdc9ee095, Label: cc-abac
So it's installed but the query command doesn't find it?
And yes - it's installed on the correct channel:
$ peer lifecycle chaincode querycommitted --channelID channel1 --name cc-abac --cafile /config/peer/tls-msp/tlscacerts/ca-cert.pem
Committed chaincode definition for chaincode 'cc-abac' on channel 'channel1':
Version: 1.0, Sequence: 1, Endorsement Plugin: escc, Validation Plugin: vscc, Approvals: [Org1MSP: true, Org2MSP: true, Org3MSP: true]
This is the peer log at the time of the query:
2020-07-06 07:31:23.605 UTC [lockbasedtxmgr] NewTxSimulator -> DEBU 12a6 constructing new tx simulator
2020-07-06 07:31:23.605 UTC [lockbasedtxmgr] newLockBasedTxSimulator -> DEBU 12a7 constructing new tx simulator txid = [ed5b5de845b99b8b126e1b10d05df3849b9b108c83435f4225fc47c9a3b841c7]
2020-07-06 07:31:23.605 UTC [stateleveldb] GetState -> DEBU 12a8 GetState(). ns=_lifecycle, key=namespaces/fields/cc-abac/Sequence
2020-07-06 07:31:23.605 UTC [lockbasedtxmgr] Done -> DEBU 12a9 Done with transaction simulation / query execution [ed5b5de845b99b8b126e1b10d05df3849b9b108c83435f4225fc47c9a3b841c7]
2020-07-06 07:31:23.605 UTC [comm.grpc.server] 1 -> INFO 12aa unary call completed grpc.service=protos.Endorser grpc.method=ProcessProposal grpc.peer_address=10.131.0.100:51580 grpc.peer_subject="CN=org1-peer1,OU=peer,O=Hyperledger,ST=North Carolina,C=US" grpc.code=OK grpc.call_duration=2.128156ms
2020-07-06 07:31:23.608 UTC [grpc] infof -> DEBU 12ab transport: loopyWriter.run returning. connection error: desc = "transport is closing"
2020-07-06 07:31:23.608 UTC [grpc] warningf -> DEBU 12ac transport: http2Server.HandleStreams failed to read frame: read tcp 10.130.1.219:7051->10.131.0.100:51580: read: connection reset by peer
2020-07-06 07:31:23.608 UTC [grpc] infof -> DEBU 12ad transport: loopyWriter.run returning. connection error: desc = "transport is closing"
10.131.0.100 is the calling CLI and 10.130.1.219 is the peer. So is there a connection problem between the CLI and the peer?
Coming back to the first paragraph of this question - this is the connection.json available to the peer:
{
"address": "org1-cc1:31101",
"dial_timeout": "10s",
"tls_required": "true",
"client_auth_required": "true",
"client_key": "-----BEGIN PRIVATE KEY-----\nxxx\nxxx\nxxx\n-----END PRIVATE KEY-----",
"client_cert": "-----BEGIN CERTIFICATE-----\nxxx/xxx\nxxx\nxxx\nxxx\nxxx\nxxx\nxxx/xxxn\nxxx\nxxx\nxxx\nxxx\nxxx\nxxx\nxxx=\n-----END CERTIFICATE-----",
"root_cert": "-----BEGIN CERTIFICATE-----\nxxx\nxxx\nxxx\nxxx\nxxx\nxxx\nxxx\nxxx\nxxx\nxxx\nxxx\nxxx\n-----END CERTIFICATE-----"
}
Of course, xxx are just placeholders ;)
The Chaincode main() looks as follows:
package main

import (
	"fmt"
	"io/ioutil"
	"os"

	// Fabric 2.x Go chaincode shim (assumed import path for the ChaincodeServer API).
	"github.com/hyperledger/fabric-chaincode-go/shim"
)

// check(err) and SimpleChaincode are defined elsewhere in the chaincode source.
func main() {
	// Load the TLS material the external chaincode server will present / verify.
	keyFile := os.Getenv("CHAINCODE_TLS_KEY_FILE")
	key, err := ioutil.ReadFile(keyFile)
	check(err)

	certFile := os.Getenv("CHAINCODE_TLS_CERT_FILE")
	cert, err := ioutil.ReadFile(certFile)
	check(err)

	caFile := os.Getenv("CHAINCODE_TLS_CACERT_FILE")
	ca, err := ioutil.ReadFile(caFile)
	check(err)

	server := &shim.ChaincodeServer{
		CCID:    os.Getenv("CHAINCODE_CCID"),
		Address: "0.0.0.0:9999",
		CC:      new(SimpleChaincode),
		TLSProps: shim.TLSProperties{
			Disabled:      false,
			Key:           key,
			Cert:          cert,
			ClientCACerts: ca,
		},
	}

	err = server.Start()
	if err != nil {
		fmt.Printf("Error starting Simple chaincode: %s", err)
	}
}
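The check helper referenced above isn't shown in the question; a minimal version (an assumption, not the asker's actual code) would simply abort on any error so the chaincode container exits instead of starting without its TLS material:

func check(err error) {
	if err != nil {
		// Any unreadable TLS file is fatal for an external chaincode server.
		panic(err)
	}
}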

Error: issue while using createChannel command

I've been following the Hyperledger Fabric development tutorial,
but I get this error message when trying the command ./network.sh createChannel:
Error: failed to create deliver client for orderer: orderer client failed to connect to localhost:7050: failed to create new connection: connection error: desc = "transport: error while dialing: dial tcp [::1]:7050: connectex: Aucune connexion n’a pu être établie car l’ordinateur cible l’a expressément refusée."
!!!!!!!!!!!!!!! Channel creation failed !!!!!!!!!!!!!!!!
The French part means that no connection could be established because the target machine explicitly refused it.
After searching a bit, it seems there might be an issue with the ports or IP addresses, but I'm not sure.
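Since the error is literally a refused TCP dial to localhost:7050, one quick sanity check is whether anything is listening on that port at all before digging into the Fabric configuration. A minimal probe (a diagnostic sketch, not part of the test network) could look like this:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the same endpoint the peer CLI is failing to reach.
	conn, err := net.DialTimeout("tcp", "localhost:7050", 3*time.Second)
	if err != nil {
		fmt.Println("nothing reachable on localhost:7050:", err)
		return
	}
	defer conn.Close()
	fmt.Println("something is listening on localhost:7050")
}

If this probe is refused as well, the orderer container's port 7050 is most likely not published to the host (or the container is not running), rather than this being a channel-creation problem.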
My logs for the different components:
Orderer service:
2020-05-27 09:13:16.388 UTC [localconfig] completeInitialization -> WARN 001 General.GenesisFile should be replaced by General.BootstrapFile
2020-05-27 09:13:16.389 UTC [localconfig] completeInitialization -> INFO 002 Kafka.Version unset, setting to 0.10.2.0
2020-05-27 09:13:16.389 UTC [orderer.common.server] prettyPrintStruct -> INFO 003 Orderer config values:
General.ListenAddress = "0.0.0.0"
General.ListenPort = 7050
General.TLS.Enabled = true
General.TLS.PrivateKey = "/var/hyperledger/orderer/tls/server.key"
General.TLS.Certificate = "/var/hyperledger/orderer/tls/server.crt"
General.TLS.RootCAs = [/var/hyperledger/orderer/tls/ca.crt]
General.TLS.ClientAuthRequired = false
General.TLS.ClientRootCAs = []
General.Cluster.ListenAddress = ""
General.Cluster.ListenPort = 0
General.Cluster.ServerCertificate = ""
General.Cluster.ServerPrivateKey = ""
General.Cluster.ClientCertificate = "/var/hyperledger/orderer/tls/server.crt"
General.Cluster.ClientPrivateKey = "/var/hyperledger/orderer/tls/server.key"
General.Cluster.RootCAs = [/var/hyperledger/orderer/tls/ca.crt]
General.Cluster.DialTimeout = 5s
General.Cluster.RPCTimeout = 7s
General.Cluster.ReplicationBufferSize = 20971520
General.Cluster.ReplicationPullTimeout = 5s
General.Cluster.ReplicationRetryTimeout = 5s
General.Cluster.ReplicationBackgroundRefreshInterval = 5m0s
General.Cluster.ReplicationMaxRetries = 12
General.Cluster.SendBufferSize = 10
General.Cluster.CertExpirationWarningThreshold = 168h0m0s
General.Cluster.TLSHandshakeTimeShift = 0s
General.Keepalive.ServerMinInterval = 1m0s
General.Keepalive.ServerInterval = 2h0m0s
General.Keepalive.ServerTimeout = 20s
General.ConnectionTimeout = 0s
General.GenesisMethod = "file"
General.GenesisFile = "/var/hyperledger/orderer/orderer.genesis.block"
General.BootstrapMethod = "file"
General.BootstrapFile = "/var/hyperledger/orderer/orderer.genesis.block"
General.Profile.Enabled = false
General.Profile.Address = "0.0.0.0:6060"
General.LocalMSPDir = "/var/hyperledger/orderer/msp"
General.LocalMSPID = "OrdererMSP"
General.BCCSP.ProviderName = "SW"
General.BCCSP.SwOpts.SecLevel = 256
General.BCCSP.SwOpts.HashFamily = "SHA2"
General.BCCSP.SwOpts.Ephemeral = true
General.BCCSP.SwOpts.FileKeystore.KeyStorePath = ""
General.BCCSP.SwOpts.DummyKeystore =
General.BCCSP.SwOpts.InmemKeystore =
General.Authentication.TimeWindow = 15m0s
General.Authentication.NoExpirationChecks = false
FileLedger.Location = "/var/hyperledger/production/orderer"
FileLedger.Prefix = "hyperledger-fabric-ordererledger"
Kafka.Retry.ShortInterval = 5s
Kafka.Retry.ShortTotal = 10m0s
Kafka.Retry.LongInterval = 5m0s
Kafka.Retry.LongTotal = 12h0m0s
Kafka.Retry.NetworkTimeouts.DialTimeout = 10s
Kafka.Retry.NetworkTimeouts.ReadTimeout = 10s
Kafka.Retry.NetworkTimeouts.WriteTimeout = 10s
Kafka.Retry.Metadata.RetryMax = 3
Kafka.Retry.Metadata.RetryBackoff = 250ms
Kafka.Retry.Producer.RetryMax = 3
Kafka.Retry.Producer.RetryBackoff = 100ms
Kafka.Retry.Consumer.RetryBackoff = 2s
Kafka.Verbose = true
Kafka.Version = 0.10.2.0
Kafka.TLS.Enabled = false
Kafka.TLS.PrivateKey = ""
Kafka.TLS.Certificate = ""
Kafka.TLS.RootCAs = []
Kafka.TLS.ClientAuthRequired = false
Kafka.TLS.ClientRootCAs = []
Kafka.SASLPlain.Enabled = false
Kafka.SASLPlain.User = ""
Kafka.SASLPlain.Password = ""
Kafka.Topic.ReplicationFactor = 1
Debug.BroadcastTraceDir = ""
Debug.DeliverTraceDir = ""
Consensus = map[SnapDir:/var/hyperledger/production/orderer/etcdraft/snapshot WALDir:/var/hyperledger/production/orderer/etcdraft/wal]
Operations.ListenAddress = "127.0.0.1:8443"
Operations.TLS.Enabled = false
Operations.TLS.PrivateKey = ""
Operations.TLS.Certificate = ""
Operations.TLS.RootCAs = []
Operations.TLS.ClientAuthRequired = false
Operations.TLS.ClientRootCAs = []
Metrics.Provider = "disabled"
Metrics.Statsd.Network = "udp"
Metrics.Statsd.Address = "127.0.0.1:8125"
Metrics.Statsd.WriteInterval = 30s
Metrics.Statsd.Prefix = ""
2020-05-27 09:13:16.400 UTC [msp] loadCertificateAt -> WARN 004 Failed loading ClientOU certificate at [/var/hyperledger/orderer/msp/cacerts\ca.example.com-cert.pem]: [could not read file /var/hyperledger/orderer/msp/cacerts\ca.example.com-cert.pem: open /var/hyperledger/orderer/msp/cacerts\ca.example.com-cert.pem: no such file or directory]
2020-05-27 09:13:16.400 UTC [msp] loadCertificateAt -> WARN 005 Failed loading PeerOU certificate at [/var/hyperledger/orderer/msp/cacerts\ca.example.com-cert.pem]: [could not read file /var/hyperledger/orderer/msp/cacerts\ca.example.com-cert.pem: open /var/hyperledger/orderer/msp/cacerts\ca.example.com-cert.pem: no such file or directory]
2020-05-27 09:13:16.400 UTC [msp] loadCertificateAt -> WARN 006 Failed loading AdminOU certificate at [/var/hyperledger/orderer/msp/cacerts\ca.example.com-cert.pem]: [could not read file /var/hyperledger/orderer/msp/cacerts\ca.example.com-cert.pem: open /var/hyperledger/orderer/msp/cacerts\ca.example.com-cert.pem: no such file or directory]
2020-05-27 09:13:16.400 UTC [msp] loadCertificateAt -> WARN 007 Failed loading OrdererOU certificate at [/var/hyperledger/orderer/msp/cacerts\ca.example.com-cert.pem]: [could not read file /var/hyperledger/orderer/msp/cacerts\ca.example.com-cert.pem: open /var/hyperledger/orderer/msp/cacerts\ca.example.com-cert.pem: no such file or directory]
2020-05-27 09:13:16.434 UTC [orderer.common.server] initializeServerConfig -> INFO 008 Starting orderer with TLS enabled
2020-05-27 09:13:16.441 UTC [fsblkstorage] NewProvider -> INFO 009 Creating new file ledger directory at /var/hyperledger/production/orderer/chains
2020-05-27 09:13:16.464 UTC [orderer.common.server] extractSysChanLastConfig -> INFO 00a Bootstrapping because no existing channels
2020-05-27 09:13:16.483 UTC [orderer.common.server] Main -> INFO 00b Setting up cluster for orderer type etcdraft
2020-05-27 09:13:16.490 UTC [orderer.common.server] reuseListener -> INFO 00c Cluster listener is not configured, defaulting to use the general listener on port 7050
2020-05-27 09:13:16.490 UTC [fsblkstorage] newBlockfileMgr -> INFO 00d Getting block information from block storage
2020-05-27 09:13:16.529 UTC [orderer.consensus.etcdraft] HandleChain -> INFO 00e EvictionSuspicion not set, defaulting to 10m0s
2020-05-27 09:13:16.530 UTC [orderer.consensus.etcdraft] createOrReadWAL -> INFO 00f No WAL data found, creating new WAL at path '/var/hyperledger/production/orderer/etcdraft/wal/system-channel' channel=system-channel node=1
2020-05-27 09:13:16.536 UTC [orderer.commmon.multichannel] Initialize -> INFO 010 Starting system channel 'system-channel' with genesis block hash 22c93e29c38e9681f960d390fda12c72869fc9ebfebf0a6d1c15f60198b13119 and orderer type etcdraft
2020-05-27 09:13:16.537 UTC [orderer.consensus.etcdraft] Start -> INFO 011 Starting Raft node channel=system-channel node=1
2020-05-27 09:13:16.537 UTC [orderer.common.cluster] Configure -> INFO 012 Entering, channel: system-channel, nodes: []
2020-05-27 09:13:16.537 UTC [orderer.common.cluster] Configure -> INFO 013 Exiting
2020-05-27 09:13:16.537 UTC [orderer.consensus.etcdraft] start -> INFO 014 Starting raft node as part of a new channel channel=system-channel node=1
2020-05-27 09:13:16.537 UTC [orderer.consensus.etcdraft] becomeFollower -> INFO 015 1 became follower at term 0 channel=system-channel node=1
2020-05-27 09:13:16.538 UTC [orderer.consensus.etcdraft] newRaft -> INFO 016 newRaft 1 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0] channel=system-channel node=1
2020-05-27 09:13:16.538 UTC [orderer.consensus.etcdraft] becomeFollower -> INFO 017 1 became follower at term 1 channel=system-channel node=1
2020-05-27 09:13:16.538 UTC [orderer.common.server] Main -> INFO 018 Starting orderer:
Version: 2.1.0
Commit SHA: 1bdf975
Go version: go1.14.1
OS/Arch: linux/amd64
2020-05-27 09:13:16.538 UTC [orderer.common.server] Main -> INFO 019 Beginning to serve requests
2020-05-27 09:13:16.538 UTC [orderer.consensus.etcdraft] run -> INFO 01a This node is picked to start campaign channel=system-channel node=1
2020-05-27 09:13:16.539 UTC [orderer.consensus.etcdraft] apply -> INFO 01b Applied config change to add node 1, current nodes in channel: [1] channel=system-channel node=1
2020-05-27 09:13:17.539 UTC [orderer.consensus.etcdraft] Step -> INFO 01c 1 is starting a new election at term 1 channel=system-channel node=1
2020-05-27 09:13:17.540 UTC [orderer.consensus.etcdraft] becomePreCandidate -> INFO 01d 1 became pre-candidate at term 1 channel=system-channel node=1
2020-05-27 09:13:17.540 UTC [orderer.consensus.etcdraft] poll -> INFO 01e 1 received MsgPreVoteResp from 1 at term 1 channel=system-channel node=1
2020-05-27 09:13:17.540 UTC [orderer.consensus.etcdraft] becomeCandidate -> INFO 01f 1 became candidate at term 2 channel=system-channel node=1
2020-05-27 09:13:17.541 UTC [orderer.consensus.etcdraft] poll -> INFO 020 1 received MsgVoteResp from 1 at term 2 channel=system-channel node=1
2020-05-27 09:13:17.541 UTC [orderer.consensus.etcdraft] becomeLeader -> INFO 021 1 became leader at term 2 channel=system-channel node=1
2020-05-27 09:13:17.541 UTC [orderer.consensus.etcdraft] run -> INFO 022 raft.node: 1 elected leader 1 at term 2 channel=system-channel node=1
2020-05-27 09:13:17.542 UTC [orderer.consensus.etcdraft] run -> INFO 023 Leader 1 is present, quit campaign channel=system-channel node=1
2020-05-27 09:13:17.543 UTC [orderer.consensus.etcdraft] run -> INFO 024 Raft leader changed: 0 -> 1 channel=system-channel node=1
2020-05-27 09:13:17.543 UTC [orderer.consensus.etcdraft] run -> INFO 025 Start accepting requests as Raft leader at block [0] channel=system-channel node=1
Peer 1:
2020-05-27 09:13:16.435 UTC [msp] loadCertificateAt -> WARN 001 Failed loading ClientOU certificate at [/etc/hyperledger/fabric/msp/cacerts\ca.org1.example.com-cert.pem]: [could not read file /etc/hyperledger/fabric/msp/cacerts\ca.org1.example.com-cert.pem: open /etc/hyperledger/fabric/msp/cacerts\ca.org1.example.com-cert.pem: no such file or directory]
2020-05-27 09:13:16.435 UTC [msp] loadCertificateAt -> WARN 002 Failed loading PeerOU certificate at [/etc/hyperledger/fabric/msp/cacerts\ca.org1.example.com-cert.pem]: [could not read file /etc/hyperledger/fabric/msp/cacerts\ca.org1.example.com-cert.pem: open /etc/hyperledger/fabric/msp/cacerts\ca.org1.example.com-cert.pem: no such file or directory]
2020-05-27 09:13:16.435 UTC [msp] loadCertificateAt -> WARN 003 Failed loading AdminOU certificate at [/etc/hyperledger/fabric/msp/cacerts\ca.org1.example.com-cert.pem]: [could not read file /etc/hyperledger/fabric/msp/cacerts\ca.org1.example.com-cert.pem: open /etc/hyperledger/fabric/msp/cacerts\ca.org1.example.com-cert.pem: no such file or directory]
2020-05-27 09:13:16.436 UTC [msp] loadCertificateAt -> WARN 004 Failed loading OrdererOU certificate at [/etc/hyperledger/fabric/msp/cacerts\ca.org1.example.com-cert.pem]: [could not read file /etc/hyperledger/fabric/msp/cacerts\ca.org1.example.com-cert.pem: open /etc/hyperledger/fabric/msp/cacerts\ca.org1.example.com-cert.pem: no such file or directory]
2020-05-27 09:13:16.441 UTC [nodeCmd] serve -> INFO 005 Starting peer:
Version: 2.1.0
Commit SHA: 1bdf975
Go version: go1.14.1
OS/Arch: linux/amd64
Chaincode:
Base Docker Label: org.hyperledger.fabric
Docker Namespace: hyperledger
2020-05-27 09:13:16.442 UTC [peer] getLocalAddress -> INFO 006 Auto-detected peer address: 172.18.0.3:7051
2020-05-27 09:13:16.442 UTC [peer] getLocalAddress -> INFO 007 Returning peer0.org1.example.com:7051
2020-05-27 09:13:16.469 UTC [nodeCmd] initGrpcSemaphores -> INFO 008 concurrency limit for endorser service is 2500
2020-05-27 09:13:16.470 UTC [nodeCmd] initGrpcSemaphores -> INFO 009 concurrency limit for deliver service is 2500
2020-05-27 09:13:16.470 UTC [nodeCmd] serve -> INFO 00a Starting peer with TLS enabled
2020-05-27 09:13:16.500 UTC [ledgermgmt] NewLedgerMgr -> INFO 00b Initializing LedgerMgr
2020-05-27 09:13:16.513 UTC [leveldbhelper] openDBAndCheckFormat -> INFO 00c DB is empty Setting db format as 2.0
2020-05-27 09:13:16.514 UTC [fsblkstorage] NewProvider -> INFO 00d Creating new file ledger directory at /var/hyperledger/production/ledgersData/chains/chains
2020-05-27 09:13:16.521 UTC [leveldbhelper] openDBAndCheckFormat -> INFO 00e DB is empty Setting db format as 2.0
2020-05-27 09:13:16.535 UTC [leveldbhelper] openDBAndCheckFormat -> INFO 00f DB is empty Setting db format as 2.0
2020-05-27 09:13:16.536 UTC [ledgermgmt] NewLedgerMgr -> INFO 010 Initialized LedgerMgr
2020-05-27 09:13:16.547 UTC [gossip.service] New -> INFO 011 Initialize gossip with endpoint peer0.org1.example.com:7051
2020-05-27 09:13:16.549 UTC [gossip.gossip] New -> INFO 012 Creating gossip service with self membership of Endpoint: peer0.org1.example.com:7051, InternalEndpoint: peer0.org1.example.com:7051, PKI-ID: 58df3c0a908cbbd073a6b4138ef676c652aaab118fb99179d7304206f63a0207, Metadata:
2020-05-27 09:13:16.550 UTC [lifecycle] InitializeLocalChaincodes -> INFO 013 Initialized lifecycle cache with 0 already installed chaincodes
2020-05-27 09:13:16.550 UTC [nodeCmd] computeChaincodeEndpoint -> INFO 014 Entering computeChaincodeEndpoint with peerHostname: peer0.org1.example.com
2020-05-27 09:13:16.550 UTC [nodeCmd] computeChaincodeEndpoint -> INFO 015 Exit with ccEndpoint: peer0.org1.example.com:7052
2020-05-27 09:13:16.550 UTC [gossip.gossip] start -> INFO 016 Gossip instance peer0.org1.example.com:7051 started
2020-05-27 09:13:16.560 UTC [sccapi] DeploySysCC -> INFO 017 deploying system chaincode 'lscc'
2020-05-27 09:13:16.560 UTC [sccapi] DeploySysCC -> INFO 018 deploying system chaincode 'cscc'
2020-05-27 09:13:16.560 UTC [sccapi] DeploySysCC -> INFO 019 deploying system chaincode 'qscc'
2020-05-27 09:13:16.560 UTC [sccapi] DeploySysCC -> INFO 01a deploying system chaincode '_lifecycle'
2020-05-27 09:13:16.560 UTC [nodeCmd] serve -> INFO 01b Deployed system chaincodes
2020-05-27 09:13:16.560 UTC [discovery] NewService -> INFO 01c Created with config TLS: true, authCacheMaxSize: 1000, authCachePurgeRatio: 0.750000
2020-05-27 09:13:16.560 UTC [nodeCmd] registerDiscoveryService -> INFO 01d Discovery service activated
2020-05-27 09:13:16.560 UTC [nodeCmd] serve -> INFO 01e Starting peer with ID=[peer0.org1.example.com], network ID=[dev], address=[peer0.org1.example.com:7051]
2020-05-27 09:13:16.560 UTC [nodeCmd] serve -> INFO 01f Started peer with ID=[peer0.org1.example.com], network ID=[dev], address=[peer0.org1.example.com:7051]
2020-05-27 09:13:16.560 UTC [kvledger] LoadPreResetHeight -> INFO 020 Loading prereset height from path [/var/hyperledger/production/ledgersData/chains]
2020-05-27 09:13:16.561 UTC [fsblkstorage] preResetHtFiles -> INFO 021 No active channels passed
2020-05-27 09:13:16.561 UTC [nodeCmd] func6 -> INFO 022 Starting profiling server with listenAddress = 0.0.0.0:6060
Peer 2:
2020-05-27 09:13:16.409 UTC [msp] loadCertificateAt -> WARN 001 Failed loading ClientOU certificate at [/etc/hyperledger/fabric/msp/cacerts\ca.org2.example.com-cert.pem]: [could not read file /etc/hyperledger/fabric/msp/cacerts\ca.org2.example.com-cert.pem: open /etc/hyperledger/fabric/msp/cacerts\ca.org2.example.com-cert.pem: no such file or directory]
2020-05-27 09:13:16.410 UTC [msp] loadCertificateAt -> WARN 002 Failed loading PeerOU certificate at [/etc/hyperledger/fabric/msp/cacerts\ca.org2.example.com-cert.pem]: [could not read file /etc/hyperledger/fabric/msp/cacerts\ca.org2.example.com-cert.pem: open /etc/hyperledger/fabric/msp/cacerts\ca.org2.example.com-cert.pem: no such file or directory]
2020-05-27 09:13:16.410 UTC [msp] loadCertificateAt -> WARN 003 Failed loading AdminOU certificate at [/etc/hyperledger/fabric/msp/cacerts\ca.org2.example.com-cert.pem]: [could not read file /etc/hyperledger/fabric/msp/cacerts\ca.org2.example.com-cert.pem: open /etc/hyperledger/fabric/msp/cacerts\ca.org2.example.com-cert.pem: no such file or directory]
2020-05-27 09:13:16.410 UTC [msp] loadCertificateAt -> WARN 004 Failed loading OrdererOU certificate at [/etc/hyperledger/fabric/msp/cacerts\ca.org2.example.com-cert.pem]: [could not read file /etc/hyperledger/fabric/msp/cacerts\ca.org2.example.com-cert.pem: open /etc/hyperledger/fabric/msp/cacerts\ca.org2.example.com-cert.pem: no such file or directory]
2020-05-27 09:13:16.420 UTC [nodeCmd] serve -> INFO 005 Starting peer:
Version: 2.1.0
Commit SHA: 1bdf975
Go version: go1.14.1
OS/Arch: linux/amd64
Chaincode:
Base Docker Label: org.hyperledger.fabric
Docker Namespace: hyperledger
2020-05-27 09:13:16.421 UTC [peer] getLocalAddress -> INFO 006 Auto-detected peer address: 172.18.0.2:9051
2020-05-27 09:13:16.421 UTC [peer] getLocalAddress -> INFO 007 Returning peer0.org2.example.com:9051
2020-05-27 09:13:16.433 UTC [nodeCmd] initGrpcSemaphores -> INFO 008 concurrency limit for endorser service is 2500
2020-05-27 09:13:16.434 UTC [nodeCmd] initGrpcSemaphores -> INFO 009 concurrency limit for deliver service is 2500
2020-05-27 09:13:16.434 UTC [nodeCmd] serve -> INFO 00a Starting peer with TLS enabled
2020-05-27 09:13:16.472 UTC [ledgermgmt] NewLedgerMgr -> INFO 00b Initializing LedgerMgr
2020-05-27 09:13:16.492 UTC [leveldbhelper] openDBAndCheckFormat -> INFO 00c DB is empty Setting db format as 2.0
2020-05-27 09:13:16.493 UTC [fsblkstorage] NewProvider -> INFO 00d Creating new file ledger directory at /var/hyperledger/production/ledgersData/chains/chains
2020-05-27 09:13:16.501 UTC [leveldbhelper] openDBAndCheckFormat -> INFO 00e DB is empty Setting db format as 2.0
2020-05-27 09:13:16.528 UTC [leveldbhelper] openDBAndCheckFormat -> INFO 00f DB is empty Setting db format as 2.0
2020-05-27 09:13:16.528 UTC [ledgermgmt] NewLedgerMgr -> INFO 010 Initialized LedgerMgr
2020-05-27 09:13:16.542 UTC [gossip.service] New -> INFO 011 Initialize gossip with endpoint peer0.org2.example.com:9051
2020-05-27 09:13:16.547 UTC [gossip.gossip] New -> INFO 012 Creating gossip service with self membership of Endpoint: peer0.org2.example.com:9051, InternalEndpoint: peer0.org2.example.com:9051, PKI-ID: c7429efa7a899a8b3644235bc56251ffbfb45fe3f55fc0a4d199fd03b1521df4, Metadata:
2020-05-27 09:13:16.547 UTC [lifecycle] InitializeLocalChaincodes -> INFO 013 Initialized lifecycle cache with 0 already installed chaincodes
2020-05-27 09:13:16.548 UTC [nodeCmd] computeChaincodeEndpoint -> INFO 014 Entering computeChaincodeEndpoint with peerHostname: peer0.org2.example.com
2020-05-27 09:13:16.548 UTC [nodeCmd] computeChaincodeEndpoint -> INFO 015 Exit with ccEndpoint: peer0.org2.example.com:9052
2020-05-27 09:13:16.549 UTC [gossip.gossip] start -> INFO 016 Gossip instance peer0.org2.example.com:9051 started
2020-05-27 09:13:16.555 UTC [sccapi] DeploySysCC -> INFO 017 deploying system chaincode 'lscc'
2020-05-27 09:13:16.558 UTC [sccapi] DeploySysCC -> INFO 018 deploying system chaincode 'cscc'
2020-05-27 09:13:16.558 UTC [sccapi] DeploySysCC -> INFO 019 deploying system chaincode 'qscc'
2020-05-27 09:13:16.559 UTC [sccapi] DeploySysCC -> INFO 01a deploying system chaincode '_lifecycle'
2020-05-27 09:13:16.559 UTC [nodeCmd] serve -> INFO 01b Deployed system chaincodes
2020-05-27 09:13:16.559 UTC [discovery] NewService -> INFO 01c Created with config TLS: true, authCacheMaxSize: 1000, authCachePurgeRatio: 0.750000
2020-05-27 09:13:16.559 UTC [nodeCmd] registerDiscoveryService -> INFO 01d Discovery service activated
2020-05-27 09:13:16.559 UTC [nodeCmd] serve -> INFO 01e Starting peer with ID=[peer0.org2.example.com], network ID=[dev], address=[peer0.org2.example.com:9051]
2020-05-27 09:13:16.559 UTC [nodeCmd] serve -> INFO 01f Started peer with ID=[peer0.org2.example.com], network ID=[dev], address=[peer0.org2.example.com:9051]
2020-05-27 09:13:16.559 UTC [kvledger] LoadPreResetHeight -> INFO 020 Loading prereset height from path [/var/hyperledger/production/ledgersData/chains]
2020-05-27 09:13:16.559 UTC [fsblkstorage] preResetHtFiles -> INFO 021 No active channels passed
2020-05-27 09:13:16.559 UTC [nodeCmd] func6 -> INFO 022 Starting profiling server with listenAddress = 0.0.0.0:6060
There are also several warnings, but I don't really know what they mean.
I've tried relaunching Docker as admin, and I've also tried relaunching the network.

Failed to invoke chaincode name:"lscc" , error: container exited with 1: chaincode registration failed (Fabric 1.4.1)

I am trying to create a single-org, single-CA, single-peer network, bootstrapped with the Node.js SDK. I used this sample for reference.
When I try to instantiate() the chaincode, I get this error on the peer (seen via docker logs ax-peer):
2019-06-02 13:21:51.395 UTC [ledgermgmt] CreateLedger -> INFO 028 Created ledger [default] with genesis block
2019-06-02 13:21:51.401 UTC [gossip.gossip] JoinChan -> INFO 029 Joining gossip network of channel default with 1 organizations
2019-06-02 13:21:51.401 UTC [gossip.gossip] learnAnchorPeers -> INFO 02a No configured anchor peers of AxOrgMSP for channel default to learn about
2019-06-02 13:21:51.529 UTC [gossip.state] NewGossipStateProvider -> INFO 02b Updating metadata information, current ledger sequence is at = 0, next expected block is = 1
2019-06-02 13:21:51.531 UTC [sccapi] deploySysCC -> INFO 02c system chaincode lscc/default(github.com/hyperledger/fabric/core/scc/lscc) deployed
2019-06-02 13:21:51.532 UTC [cscc] Init -> INFO 02d Init CSCC
2019-06-02 13:21:51.532 UTC [sccapi] deploySysCC -> INFO 02e system chaincode cscc/default(github.com/hyperledger/fabric/core/scc/cscc) deployed
2019-06-02 13:21:51.532 UTC [qscc] Init -> INFO 02f Init QSCC
2019-06-02 13:21:51.532 UTC [sccapi] deploySysCC -> INFO 030 system chaincode qscc/default(github.com/hyperledger/fabric/core/scc/qscc) deployed
2019-06-02 13:21:51.532 UTC [sccapi] deploySysCC -> INFO 031 system chaincode (+lifecycle,github.com/hyperledger/fabric/core/chaincode/lifecycle) disabled
2019-06-02 13:21:51.533 UTC [endorser] callChaincode -> INFO 032 [][4f292791] Exit chaincode: name:"cscc" (656ms)
2019-06-02 13:21:51.533 UTC [comm.grpc.server] 1 -> INFO 033 unary call completed grpc.service=protos.Endorser grpc.method=ProcessProposal grpc.peer_address=172.21.0.1:50128 grpc.code=OK grpc.call_duration=657.290863ms
2019-06-02 13:21:51.541 UTC [endorser] callChaincode -> INFO 034 [][3ae34d18] Entry chaincode: name:"lscc"
2019-06-02 13:21:51.542 UTC [endorser] callChaincode -> INFO 035 [][3ae34d18] Exit chaincode: name:"lscc" (0ms)
2019-06-02 13:21:51.542 UTC [comm.grpc.server] 1 -> INFO 036 unary call completed grpc.service=protos.Endorser grpc.method=ProcessProposal grpc.peer_address=172.21.0.1:50128 grpc.code=OK grpc.call_duration=940.979µs
2019-06-02 13:21:51.550 UTC [endorser] callChaincode -> INFO 037 [default][17bf8e2d] Entry chaincode: name:"lscc"
2019-06-02 13:21:51.550 UTC [endorser] callChaincode -> INFO 038 [default][17bf8e2d] Exit chaincode: name:"lscc" (1ms)
2019-06-02 13:21:51.550 UTC [comm.grpc.server] 1 -> INFO 039 unary call completed grpc.service=protos.Endorser grpc.method=ProcessProposal grpc.peer_address=172.21.0.1:50128 grpc.code=OK grpc.call_duration=1.690033ms
2019-06-02 13:21:51.709 UTC [endorser] callChaincode -> INFO 03a [][bc977c1f] Entry chaincode: name:"lscc"
2019-06-02 13:21:51.710 UTC [lscc] executeInstall -> INFO 03b Installed Chaincode [ax-chaincode] Version [v2] to peer
2019-06-02 13:21:51.710 UTC [endorser] callChaincode -> INFO 03c [][bc977c1f] Exit chaincode: name:"lscc" (1ms)
2019-06-02 13:21:51.710 UTC [comm.grpc.server] 1 -> INFO 03d unary call completed grpc.service=protos.Endorser grpc.method=ProcessProposal grpc.peer_address=172.21.0.1:50128 grpc.code=OK grpc.call_duration=1.329134ms
2019-06-02 13:21:51.738 UTC [endorser] callChaincode -> INFO 03e [default][c3bbc09e] Entry chaincode: name:"lscc"
2019-06-02 13:21:57.532 UTC [gossip.election] beLeader -> INFO 03f 7da5b667471b7350114ff369dd11eda7255c2c9de61dc64915fa01b0ca730def : Becoming a leader
2019-06-02 13:21:57.532 UTC [gossip.service] func1 -> INFO 040 Elected as a leader, starting delivery service for channel default
2019-06-02 13:22:10.692 UTC [endorser] callChaincode -> INFO 041 [default][c3bbc09e] Exit chaincode: name:"lscc" (18954ms)
2019-06-02 13:22:10.692 UTC [endorser] SimulateProposal -> ERRO 042 [default][c3bbc09e] failed to invoke chaincode name:"lscc" , error: container exited with 1
github.com/hyperledger/fabric/core/chaincode.(*RuntimeLauncher).Launch.func1
/opt/gopath/src/github.com/hyperledger/fabric/core/chaincode/runtime_launcher.go:63
runtime.goexit
/opt/go/src/runtime/asm_amd64.s:1333
chaincode registration failed
2019-06-02 13:22:10.693 UTC [comm.grpc.server] 1 -> INFO 043 unary call completed grpc.service=protos.Endorser grpc.method=ProcessProposal grpc.peer_address=172.21.0.1:50128 grpc.code=OK grpc.call_duration=18.955253529s
No additional logs are being registered by the orderer. My code is as follows
// Assumes the usual fabric-client / fabric-ca-client setup: `client`, `CAClient`,
// `User`, `fs`, `readCryptoFile` and `ccPath` are imported/defined elsewhere.
const createClientInstance = async () => {
    let myClient = new client();
    const ordererConfig = {
        hostname: 'orderer0',
        url: 'grpc://localhost:7050',
        pem: readCryptoFile('ordererOrg.pem')
    };
    const orderer = myClient.newOrderer(ordererConfig.url, {
        pem: ordererConfig.pem,
        'ssl-target-name-override': ordererConfig.hostname
    });
    let peerConfig = {
        hostname: 'ax-peer',
        url: 'grpc://localhost:7051',
        eventHubUrl: 'grpc://localhost:7053',
        pem: readCryptoFile('axOrg.pem')
    };
    const defaultPeer = myClient.newPeer(peerConfig.url, {
        pem: peerConfig.pem,
        'ssl-target-name-override': peerConfig.hostname
    });
    myClient.setStateStore(await client.newDefaultKeyValueStore({
        path: './ax-peer'
    }));
    // Enroll the admin with the CA if there is no cached identity.
    let user = await myClient.getUserContext('admin', true);
    if (user && user.isEnrolled()) {
        console.log('Existing admin user used');
    } else {
        let url = 'http://localhost:7054';
        const ca = new CAClient(url, {
            verify: false
        });
        let enrollmentID = 'admin';
        let enrollmentSecret = 'adminpw';
        const enrollment = await ca.enroll({
            enrollmentID: enrollmentID,
            enrollmentSecret: enrollmentSecret
        });
        user = new User(enrollmentID, myClient);
        await user.setEnrollment(enrollment.key, enrollment.certificate, 'AxOrgMSP');
    }
    await myClient.setUserContext(user);
    let adminUser = await myClient.createUser({
        username: `Admin#ax-peer`,
        mspid: 'AxOrgMSP',
        cryptoContent: {
            privateKeyPEM: readCryptoFile('Admin#ax-org-key.pem'),
            signedCertPEM: readCryptoFile('Admin#ax-org-cert.pem')
        }
    });
    let channelRes = await myClient.queryChannels(defaultPeer);
    // Create a new channel. Does not make you join it though
    let txId = myClient.newTransactionID();
    let envelope_bytes = fs.readFileSync('./channel.tx');
    var channelConfig = myClient.extractChannelConfig(envelope_bytes);
    let signature = myClient.signChannelConfig(channelConfig);
    const request = {
        name: 'default',
        orderer: orderer,
        config: channelConfig,
        signatures: [signature],
        txId: txId
    };
    await myClient.createChannel(request);
    let channel = myClient.newChannel('default');
    channel.addOrderer(orderer);
    channel.addPeer(defaultPeer);
    const genesisBlock = await channel.getGenesisBlock({ txId: myClient.newTransactionID() });
    let res = await channel.joinChannel({
        targets: [defaultPeer],
        txId: myClient.newTransactionID(),
        block: genesisBlock
    }, 120000);
    const installReq = {
        targets: [ defaultPeer ],
        chaincodePath: ccPath,
        chaincodeId: 'ax-chaincode',
        chaincodeVersion: 'v2',
        chaincodeType: 'node'
    };
    let installRes = await myClient.installChaincode(installReq, 120000);
    let instantiateResponse = await channel.sendInstantiateProposal({
        targets: [ defaultPeer ],
        chaincodeId: 'ax-chaincode',
        chaincodeVersion: 'v2',
        chaincodeType: 'node',
        txId: myClient.newTransactionID()
    });
    // This fails
    console.log(instantiateResponse);
};
Since the chaincode language is Node, I have to provide the absolute path to the chaincode. My folder structure is:
- chaincode
  - src
    - ax-chaincode
      - package.json
      - index.js (fabric-contract-api used)
- server
  - index.js (where I am calling the above code)
If I run client.queryInstalledChaincodes(defaultPeer), it returns the following, so I assume the chaincode is being installed:
{ chaincodes:
[ { name: 'ax-chaincode',
version: 'v2',
path: '/home/varun/Algorythmix/Core-Projects/ax-boilerplate/chaincode/src/ax-chaincode',
input: '',
escc: '',
vscc: '',
id: [Object] } ] }
How do I fix this? I want to stick with Node.js and not rewrite my chaincode in Go. The example also pulls the certificates and stores them in the root folder so they can be accessed without having to do docker exec -it bash.
So, as per Gari's suggestion, I added the command to my peer-base.yaml file. The code still did not work, but docker logs ax-peer showed a more descriptive error saying fabric-chaincode-node was not found. On inspection, it turns out that apart from installing fabric-contract-api, I also have to install fabric-shim in the chaincode folder. This was added as a requirement in the latest Fabric, as per this document.
Since fabric-contract-api extends fabric-shim, I had not included it; now that I have, the chaincode is being installed.
EDIT - 2020
The docs for the Node.js SDK have moved. The release notes and the new dependencies for fabric-contract-api can be found at this link, which states that fabric-shim is now fabric-shim-api.
