Spring Integration DSL aggregator not releasing messages - spring-integration

We are using the code below for aggregation, and we have noticed that messages are intermittently not released to the subsequent flow. We enabled trace logging for the aggregator package.
IntegrationFlows
    .from("upstream")
    .log(INFO, g -> "Message Received for Aggregation: " + g.getPayload())
    .aggregate(aggregatorSpec -> aggregatorSpec
        .correlationStrategy(m -> 1)
        .expireGroupsUponCompletion(true)
        .expireGroupsUponTimeout(true)
        .groupTimeout(30000)
        .sendPartialResultOnExpiry(true)
        .releaseStrategy(new TimeoutCountSequenceSizeReleaseStrategy(100, 30000)))
    .log(INFO, g -> "Message released: " + ((ArrayList) g.getPayload()).size())
    .handle(someService)
    .get();
This log shows a message that was never completed by the aggregator:
2019-02-20 16:53:44,366 UTC INFO [org.springframework.jms.listener.DefaultMessageListenerContainer#0-4] org.springframework.integration.handler.LoggingHandler- Message Received for Aggregation: Message1
2019-02-20 16:53:44,366 UTC DEBUG [org.springframework.jms.listener.DefaultMessageListenerContainer#0-4] org.springframework.integration.aggregator.AggregatingMessageHandler- org.springframework.integration.aggregator.AggregatingMessageHandler#0 received message: GenericMessage [payload=Message1, headers={jms headers}]
2019-02-20 16:53:44,366 UTC DEBUG [org.springframework.jms.listener.DefaultMessageListenerContainer#0-4] org.springframework.integration.aggregator.AggregatingMessageHandler- Handling message with correlationKey [1]: GenericMessage [payload=Message1, headers={jms headers}]
2019-02-20 16:53:44,367 UTC DEBUG [org.springframework.jms.listener.DefaultMessageListenerContainer#0-4] org.springframework.integration.aggregator.AggregatingMessageHandler- Schedule MessageGroup [ SimpleMessageGroup{groupId=1, messages=[GenericMessage [payload=Message1, headers={jms headers}] to 'forceComplete'.
This log shows a message that was completed by the aggregator:
2019-02-20 16:58:15,386 UTC INFO [org.springframework.jms.listener.DefaultMessageListenerContainer#0-3] org.springframework.integration.handler.LoggingHandler- Message Received for Aggregation: Message2
2019-02-20 16:58:15,386 UTC DEBUG [org.springframework.jms.listener.DefaultMessageListenerContainer#0-3] org.springframework.integration.aggregator.AggregatingMessageHandler- org.springframework.integration.aggregator.AggregatingMessageHandler#0 received message: GenericMessage [payload=Message2, headers={jms headers}]
2019-02-20 16:58:15,386 UTC DEBUG [org.springframework.jms.listener.DefaultMessageListenerContainer#0-3] org.springframework.integration.aggregator.AggregatingMessageHandler- Handling message with correlationKey [1]: GenericMessage [payload=Message2, headers={jms headers}]
2019-02-20 16:58:15,386 UTC DEBUG [org.springframework.jms.listener.DefaultMessageListenerContainer#0-3] org.springframework.integration.aggregator.AggregatingMessageHandler- Schedule MessageGroup [ SimpleMessageGroup{groupId=1, messages=[GenericMessage [payload=Message2, headers={jms headers}] to 'forceComplete'.
2019-02-20 16:58:45,387 UTC DEBUG [task-scheduler-6] org.springframework.integration.aggregator.AggregatingMessageHandler- Cancel 'forceComplete' scheduling for MessageGroup [ SimpleMessageGroup{groupId=1, messages=[GenericMessage [payload=Message2, headers={jms headers}].
2019-02-20 16:58:45,387 UTC DEBUG [task-scheduler-6] org.springframework.integration.aggregator.AggregatingMessageHandler- Completing group with correlationKey [1]
2019-02-20 16:58:45,387 UTC INFO [task-scheduler-6] org.springframework.integration.handler.LoggingHandler- Message released: 1
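The paired "Schedule ... to 'forceComplete'" and "Cancel 'forceComplete' scheduling" entries in the trace follow a pattern that can be sketched in plain Java (a simplified sketch, NOT Spring's actual implementation; `GroupTimeoutSketch` and its members are hypothetical names): each message arriving for a group cancels the pending force-complete task and schedules a fresh one a full groupTimeout in the future, so the group only force-completes once no message has arrived for an entire timeout interval.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Plain-Java sketch (NOT Spring's implementation) of the scheduling pattern
// visible in the trace above.
class GroupTimeoutSketch {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private final List<String> group = new ArrayList<>();
    private final long groupTimeoutMs;
    private final Runnable release;
    private ScheduledFuture<?> forceComplete;

    GroupTimeoutSketch(long groupTimeoutMs, Runnable release) {
        this.groupTimeoutMs = groupTimeoutMs;
        this.release = release;
    }

    synchronized void onMessage(String payload) {
        group.add(payload);
        // "Cancel 'forceComplete' scheduling for MessageGroup ..."
        if (forceComplete != null) {
            forceComplete.cancel(false);
        }
        // "Schedule MessageGroup ... to 'forceComplete'."
        forceComplete = scheduler.schedule(release, groupTimeoutMs, TimeUnit.MILLISECONDS);
    }

    synchronized int size() {
        return group.size();
    }

    void shutdown() {
        scheduler.shutdownNow();
    }
}
```

Under this model, a steady trickle of messages arriving more often than the timeout keeps pushing the release out, which is consistent with a group appearing to "never complete" while traffic continues.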
Can you help identify what is missing in the code?
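Not a definitive fix, but one simplification sometimes worth trying for purely time- and size-based release: a ReleaseStrategy such as TimeoutCountSequenceSizeReleaseStrategy is only consulted when a new message arrives for the group, so its timeout cannot fire on its own. Letting groupTimeout plus sendPartialResultOnExpiry cover the time dimension, and keeping the release strategy size-only, avoids that overlap. A hedged sketch of the same flow (same "upstream" channel and someService handler as in the question; the size threshold of 100 is carried over as an assumption):

```java
return IntegrationFlows
    .from("upstream")
    .log(INFO, g -> "Message Received for Aggregation: " + g.getPayload())
    .aggregate(a -> a
        .correlationStrategy(m -> 1)
        .releaseStrategy(g -> g.size() >= 100)  // size-based release only
        .groupTimeout(30_000)                   // time-based release handled here
        .sendPartialResultOnExpiry(true)        // release, don't discard, on timeout
        .expireGroupsUponCompletion(true))
    .handle(someService)
    .get();
```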

Related

Hyperledger fabric peers consistently logging peer changed its PKI-ID logs

I have Hyperledger Fabric peers running version 2.3.2, and the peers' certificates were recently renewed. The peers continuously log messages such as peer2.xorg:7051 changed its PKI-ID from xxxxxx to xxxxxxx followed by Purging xxxxxxxx from membership.
Does anyone know the reason for these continuous logs?
Below are the complete logs:
2022-06-14 08:47:42.647 UTC [comm.grpc.server] 1 -> INFO 10d08 streaming call completed grpc.service=gossip.Gossip grpc.method=GossipStream grpc.peer_address=10.20.30.140:38550 grpc.peer_subject="CN=peer2.org1.com,OU=peer,O=Hyperledger,ST=North Carolina,C=US" grpc.code=OK grpc.call_duration=24.617863135s
2022-06-14 08:47:42.647 UTC [gossip.discovery] purge -> INFO 10d09 Purging e3c96c537b91675f3a6428a509a287addb65bddeeacb4b5d000b6e4ef567b013 from membership
2022-06-14 08:47:42.647 UTC [gossip.comm] createConnection -> INFO 10d0a Peer peer2.org1.com:7051 changed its PKI-ID from 1c56c0d7a0397dd9c756205197067ef26bef156cdf5ee27af16728a62123fb76 to 9994c9e8d63ae1f6564d1713f9a5393c458a78dfdb915ea2a4a4f6efb6d26dae
2022-06-14 08:47:42.647 UTC [gossip.discovery] purge -> INFO 10d0b Purging 1c56c0d7a0397dd9c756205197067ef26bef156cdf5ee27af16728a62123fb76 from membership
2022-06-14 08:47:42.648 UTC [gossip.comm] createConnection -> INFO 10d0c Peer peer2.org1.com:7051 changed its PKI-ID from da99b167b6c3a7b8289dd943568a382ac0f27d2d0ffcee53725f4fd18a10be9c to 9994c9e8d63ae1f6564d1713f9a5393c458a78dfdb915ea2a4a4f6efb6d26dae
2022-06-14 08:47:42.648 UTC [gossip.discovery] purge -> INFO 10d0d Purging da99b167b6c3a7b8289dd943568a382ac0f27d2d0ffcee53725f4fd18a10be9c from membership
2022-06-14 08:47:42.649 UTC [gossip.comm] createConnection -> INFO 10d0e Peer peer2.org1.com:7051 changed its PKI-ID from 87b299aa1d0a71002dbbac8b0b1bf049a6bd1aa58e669d31f0355587af15a8e9 to 9994c9e8d63ae1f6564d1713f9a5393c458a78dfdb915ea2a4a4f6efb6d26dae
2022-06-14 08:47:42.649 UTC [gossip.comm] func1 -> WARN 10d0f peer2.org1.com:7051, PKIid:87b299aa1d0a71002dbbac8b0b1bf049a6bd1aa58e669d31f0355587af15a8e9 isn't responsive: EOF
2022-06-14 08:47:42.649 UTC [gossip.discovery] purge -> INFO 10d10 Purging 87b299aa1d0a71002dbbac8b0b1bf049a6bd1aa58e669d31f0355587af15a8e9 from membership
2022-06-14 08:47:42.657 UTC [comm.grpc.server] 1 -> INFO 10d11 streaming call completed grpc.service=gossip.Gossip grpc.method=GossipStream grpc.peer_address=10.20.30.140:38546 grpc.peer_subject="CN=peer2.org1-shared.com,OU=peer,O=Hyperledger,ST=North Carolina,C=US" error="rpc error: code = Canceled desc = context canceled" grpc.code=Canceled grpc.call_duration=24.712078411s
2022-06-14 08:47:42.659 UTC [gossip.comm] createConnection -> INFO 10d12 Peer peer2.org1-shared.com:7051 changed its PKI-ID from 5f03f639eb1bc912609b9208a2577cb8575c20a103d71155efe68487dedde236 to 99d3b90022039ca4d3311c96b1ccddc64e58d170f15e39cc18232e43be1c7b63
2022-06-14 08:47:42.659 UTC [gossip.discovery] purge -> INFO 10d13 Purging 5f03f639eb1bc912609b9208a2577cb8575c20a103d71155efe68487dedde236 from membership
2022-06-14 08:47:42.659 UTC [gossip.comm] createConnection -> INFO 10d14 Peer peer2.org1-shared.com:7051 changed its PKI-ID from d4b6c5c8659587ea44ac4ba1f813dc3e52194ca0c2e09b7ecfe1cbd47d1db7c4 to 99d3b90022039ca4d3311c96b1ccddc64e58d170f15e39cc18232e43be1c7b63
2022-06-14 08:47:42.660 UTC [gossip.comm] func1 -> WARN 10d15 peer2.org1-shared.com:7051, PKIid:d4b6c5c8659587ea44ac4ba1f813dc3e52194ca0c2e09b7ecfe1cbd47d1db7c4 isn't responsive: EOF
2022-06-14 08:47:42.660 UTC [gossip.discovery] purge -> INFO 10d16 Purging d4b6c5c8659587ea44ac4ba1f813dc3e52194ca0c2e09b7ecfe1cbd47d1db7c4 from membership
2022-06-14 08:47:42.675 UTC [comm.grpc.server] 1 -> INFO 10d17 unary call completed grpc.service=gossip.Gossip grpc.method=Ping grpc.request_deadline=2022-06-14T08:47:44.674Z grpc.peer_address=10.20.30.140:39676 grpc.peer_subject="CN=peer2.org1.com,OU=peer,O=Hyperledger,ST=North Carolina,C=US" grpc.code=OK grpc.call_duration=62.13µs
2022-06-14 08:47:42.710 UTC [endorser] callChaincode -> INFO 10d18 finished chaincode: assets duration: 37ms channel=assetschannel txID=58a9628e
2022-06-14 08:47:42.711 UTC [comm.grpc.server] 1 -> INFO 10d19 unary call completed grpc.service=protos.Endorser grpc.method=ProcessProposal grpc.peer_address=10.20.30.140:34366 grpc.peer_subject="CN=fabric-common" grpc.code=OK grpc.call_duration=39.919313ms
2022-06-14 08:47:43.034 UTC [endorser] callChaincode -> INFO 10d1a finished chaincode: assets duration: 35ms channel=assetschannel txID=010913f5
2022-06-14 08:47:43.035 UTC [comm.grpc.server] 1 -> INFO 10d1b unary call completed grpc.service=protos.Endorser grpc.method=ProcessProposal grpc.peer_address=10.20.30.140:34366 grpc.peer_subject="CN=fabric-common" grpc.code=OK grpc.call_duration=38.114437ms
2022-06-14 08:47:43.153 UTC [endorser] callChaincode -> INFO 10d1c finished chaincode: assets duration: 49ms channel=assetschannel txID=49d4c88f
2022-06-14 08:47:43.153 UTC [comm.grpc.server] 1 -> INFO 10d1d unary call completed grpc.service=protos.Endorser grpc.method=ProcessProposal grpc.peer_address=10.20.30.140:34366 grpc.peer_subject="CN=fabric-common" grpc.code=OK grpc.call_duration=52.987518ms
2022-06-14 08:47:43.279 UTC [endorser] callChaincode -> INFO 10d1e finished chaincode: assets duration: 85ms channel=assetschannel txID=69279b3e
It keeps purging the same PKI-ID again and again.
After talking to some experts, I learned that peers may fetch the service discovery cache from other peers, which is why the errors were not resolved even after restarting a single peer.
After restarting all the peers at the same time, the issue was resolved.
Thanks to Yacov Manevich.

Chaincode instantiation succeeds but the chaincode doesn't appear

I am running Hyperledger Fabric 1.4.x with release-1.4 of fabric-sdk-java, and I am trying to use the following code to instantiate a chaincode:
InstantiateProposalRequest instantiateProposalRequest = client.newInstantiationProposalRequest();
instantiateProposalRequest.setProposalWaitTime(180000);
instantiateProposalRequest.setChaincodeID(buildChaincodeID(name, version, path));
instantiateProposalRequest.setChaincodeLanguage(Type.JAVA);
instantiateProposalRequest.setFcn("init");
instantiateProposalRequest.setArgs(new String[] {""});
Collection<ProposalResponse> responses = channel.sendInstantiationProposal(instantiateProposalRequest);
The instantiation seemed successful, but the chaincode is not listed when I check with peer chaincode list --instantiated -C deconeb-channel.
The following log is from the point I execute the code (with FABRIC_LOGGING_SPEC=DEBUG:msp=info:gossip=info):
[root@vmdev2 createchannel]# docker logs $(docker ps -aqf name=fabric_peer1) -n 0 -f
... [endorser] ProcessProposal -> DEBU 22d2 Entering: request from 192.168.50.126:3280
... [protoutils] ValidateProposalMessage -> DEBU 22d3 ValidateProposalMessage starts for signed proposal 0xc003725b30
... [protoutils] validateChannelHeader -> DEBU 22d4 validateChannelHeader info: header type 3
... [protoutils] checkSignatureFromCreator -> DEBU 22d5 begin
... [protoutils] checkSignatureFromCreator -> DEBU 22d6 creator is &{myMSP 0167d1f59420e22fb1032bc6a17b528414378511c23a07719d2b364842f862a7}
... [protoutils] checkSignatureFromCreator -> DEBU 22d7 creator is valid
... [protoutils] checkSignatureFromCreator -> DEBU 22d8 exits successfully
... [protoutils] validateChaincodeProposalMessage -> DEBU 22d9 validateChaincodeProposalMessage starts for proposal 0xc001fb47e0, header 0xc0036cc870
... [protoutils] validateChaincodeProposalMessage -> DEBU 22da validateChaincodeProposalMessage info: header extension references chaincode name:"cscc"
... [endorser] preProcess -> DEBU 22db [deconeb-channel][db2d2e4f] processing txid: db2d2e4fef0b7cb0889fe6d0258b2f424426c180be62457c2f2cf5ec00bdd96c
... [fsblkstorage] retrieveTransactionByID -> DEBU 22dc retrieveTransactionByID() - txId = [db2d2e4fef0b7cb0889fe6d0258b2f424426c180be62457c2f2cf5ec00bdd96c]
... [endorser] SimulateProposal -> DEBU 22dd [deconeb-channel][db2d2e4f] Entry chaincode: name:"cscc"
... [endorser] callChaincode -> INFO 22de [deconeb-channel][db2d2e4f] Entry chaincode: name:"cscc"
... [chaincode] Execute -> DEBU 22df Entry
... [cscc] Invoke -> DEBU 22e0 Invoke function: GetConfigBlock
... [aclmgmt] CheckACL -> DEBU 22e1 acl policy not found in config for resource cscc/GetConfigBlock
... [policies] Evaluate -> DEBU 22e2 == Evaluating *policies.implicitMetaPolicy Policy /Channel/Application/Readers ==
... [policies] Evaluate -> DEBU 22e3 This is an implicit meta policy, it will trigger other policy evaluations, whose failures may be benign
... [policies] Evaluate -> DEBU 22e4 == Evaluating *cauthdsl.policy Policy /Channel/Application/myMSP/Readers ==
... [cauthdsl] func1 -> DEBU 22e5 0xc00059ec70 gate 1638419940859552678 evaluation starts
... [cauthdsl] func2 -> DEBU 22e6 0xc00059ec70 signed by 0 principal evaluation starts (used [false])
... [cauthdsl] func2 -> DEBU 22e7 0xc00059ec70 processing identity 0 with bytes of 115a2d0
... [cauthdsl] func2 -> DEBU 22e8 0xc00059ec70 principal matched by identity 0
... [cauthdsl] func2 -> DEBU 22e9 0xc00059ec70 principal evaluation succeeds for identity 0
... [cauthdsl] func2 -> DEBU 22ea 0xc00059ec70 signed by 1 principal evaluation starts (used [true])
... [cauthdsl] func2 -> DEBU 22eb 0xc00059ec70 skipping identity 0 because it has already been used
... [cauthdsl] func2 -> DEBU 22ec 0xc00059ec70 principal evaluation fails
... [cauthdsl] func2 -> DEBU 22ed 0xc00059ec70 signed by 2 principal evaluation starts (used [true])
... [cauthdsl] func2 -> DEBU 22ee 0xc00059ec70 skipping identity 0 because it has already been used
... [cauthdsl] func2 -> DEBU 22ef 0xc00059ec70 principal evaluation fails
... [cauthdsl] func1 -> DEBU 22f0 0xc00059ec70 gate 1638419940859552678 evaluation succeeds
... [policies] Evaluate -> DEBU 22f1 Signature set satisfies policy /Channel/Application/myMSP/Readers
... [policies] Evaluate -> DEBU 22f2 == Done Evaluating *cauthdsl.policy Policy /Channel/Application/myMSP/Readers
... [policies] Evaluate -> DEBU 22f3 Signature set satisfies policy /Channel/Application/Readers
... [policies] Evaluate -> DEBU 22f4 == Done Evaluating *policies.implicitMetaPolicy Policy /Channel/Application/Readers
... [chaincode] handleMessage -> DEBU 22f5 [db2d2e4f] Fabric side handling ChaincodeMessage of type: COMPLETED in state ready
... [chaincode] Notify -> DEBU 22f6 [db2d2e4f] notifying Txid:db2d2e4fef0b7cb0889fe6d0258b2f424426c180be62457c2f2cf5ec00bdd96c, channelID:deconeb-channel
... [chaincode] Execute -> DEBU 22f7 Exit
... [endorser] callChaincode -> INFO 22f8 [deconeb-channel][db2d2e4f] Exit chaincode: name:"cscc" (1ms)
... [endorser] SimulateProposal -> DEBU 22f9 [deconeb-channel][db2d2e4f] Exit
... [endorser] endorseProposal -> DEBU 22fa [deconeb-channel][db2d2e4f] Entry chaincode: name:"cscc"
... [endorser] endorseProposal -> DEBU 22fb [deconeb-channel][db2d2e4f] escc for chaincode name:"cscc" is escc
... [endorser] EndorseWithPlugin -> DEBU 22fc Entering endorsement for {plugin: escc, channel: deconeb-channel, tx: db2d2e4fef0b7cb0889fe6d0258b2f424426c180be62457c2f2cf5ec00bdd96c, chaincode: cscc}
... [endorser] EndorseWithPlugin -> DEBU 22fd Exiting {plugin: escc, channel: deconeb-channel, tx: db2d2e4fef0b7cb0889fe6d0258b2f424426c180be62457c2f2cf5ec00bdd96c, chaincode: cscc}
... [endorser] endorseProposal -> DEBU 22fe [deconeb-channel][db2d2e4f] Exit
... [endorser] func1 -> DEBU 22ff Exit: request from 192.168.50.126:3280
... [comm.grpc.server] 1 -> INFO 2300 unary call completed grpc.service=protos.Endorser grpc.method=ProcessProposal grpc.peer_address=192.168.50.126:3280 grpc.code=OK grpc.call_duration=3.1014ms
... [common.deliverevents] Deliver -> DEBU 2301 Starting new Deliver handler
... [common.deliver] Handle -> DEBU 2302 Starting new deliver loop for 192.168.50.126:3285
... [common.deliver] Handle -> DEBU 2303 Attempting to read seek info message from 192.168.50.126:3285
... [aclmgmt] CheckACL -> DEBU 2304 acl policy not found in config for resource event/Block
... [policies] Evaluate -> DEBU 2305 == Evaluating *policies.implicitMetaPolicy Policy /Channel/Application/Readers ==
... [policies] Evaluate -> DEBU 2306 This is an implicit meta policy, it will trigger other policy evaluations, whose failures may be benign
... [policies] Evaluate -> DEBU 2307 == Evaluating *cauthdsl.policy Policy /Channel/Application/myMSP/Readers ==
... [cauthdsl] func1 -> DEBU 2308 0xc00373d130 gate 1638419941422542679 evaluation starts
... [cauthdsl] func2 -> DEBU 2309 0xc00373d130 signed by 0 principal evaluation starts (used [false])
... [cauthdsl] func2 -> DEBU 230a 0xc00373d130 processing identity 0 with bytes of 115a2d0
... [cauthdsl] func2 -> DEBU 230b 0xc00373d130 principal matched by identity 0
... [cauthdsl] func2 -> DEBU 230c 0xc00373d130 principal evaluation succeeds for identity 0
... [cauthdsl] func2 -> DEBU 230d 0xc00373d130 signed by 1 principal evaluation starts (used [true])
... [cauthdsl] func2 -> DEBU 230e 0xc00373d130 skipping identity 0 because it has already been used
... [cauthdsl] func2 -> DEBU 230f 0xc00373d130 principal evaluation fails
... [cauthdsl] func2 -> DEBU 2310 0xc00373d130 signed by 2 principal evaluation starts (used [true])
... [cauthdsl] func2 -> DEBU 2311 0xc00373d130 skipping identity 0 because it has already been used
... [cauthdsl] func2 -> DEBU 2312 0xc00373d130 principal evaluation fails
... [cauthdsl] func1 -> DEBU 2313 0xc00373d130 gate 1638419941422542679 evaluation succeeds
... [policies] Evaluate -> DEBU 2314 Signature set satisfies policy /Channel/Application/myMSP/Readers
... [policies] Evaluate -> DEBU 2315 == Done Evaluating *cauthdsl.policy Policy /Channel/Application/myMSP/Readers
... [policies] Evaluate -> DEBU 2316 Signature set satisfies policy /Channel/Application/Readers
... [policies] Evaluate -> DEBU 2317 == Done Evaluating *policies.implicitMetaPolicy Policy /Channel/Application/Readers
... [common.deliver] deliverBlocks -> DEBU 2318 [channel: deconeb-channel] Received seekInfo (0xc002fa3a40) start:<newest:<> > stop:<specified:<number:9223372036854775807 > > from 192.168.50.126:3285
... [fsblkstorage] Next -> DEBU 2319 Initializing block stream for iterator. itr.maxBlockNumAvailable=1
... [fsblkstorage] newBlockfileStream -> DEBU 231a newBlockfileStream(): filePath=[/var/hyperledger/production/ledgersData/chains/chains/deconeb-channel/blockfile_000000], startOffset=[21642]
... [fsblkstorage] nextBlockBytesAndPlacementInfo -> DEBU 231b Remaining bytes=[4671], Going to peek [8] bytes
... [fsblkstorage] nextBlockBytesAndPlacementInfo -> DEBU 231c Returning blockbytes - length=[4669], placementInfo={fileNum=[0], startOffset=[21642], bytesOffset=[21644]}
... [fsblkstorage] nextBlockBytesAndPlacementInfo -> DEBU 231d blockbytes [4669] read from file [0]
... [common.deliver] deliverBlocks -> DEBU 231e [channel: deconeb-channel] Delivering block [1] for (0xc002fa3a40) for 192.168.50.126:3285
... [fsblkstorage] waitForBlock -> DEBU 231f Going to wait for newer blocks. maxAvailaBlockNumber=[1], waitForBlockNum=[2]
... [endorser] ProcessProposal -> DEBU 2320 Entering: request from 192.168.50.126:3280
... [protoutils] ValidateProposalMessage -> DEBU 2321 ValidateProposalMessage starts for signed proposal 0xc0033d4fa0
... [protoutils] validateChannelHeader -> DEBU 2322 validateChannelHeader info: header type 3
... [protoutils] checkSignatureFromCreator -> DEBU 2323 begin
... [protoutils] checkSignatureFromCreator -> DEBU 2324 creator is &{myMSP 0167d1f59420e22fb1032bc6a17b528414378511c23a07719d2b364842f862a7}
... [protoutils] checkSignatureFromCreator -> DEBU 2325 creator is valid
... [protoutils] checkSignatureFromCreator -> DEBU 2326 exits successfully
... [protoutils] validateChaincodeProposalMessage -> DEBU 2327 validateChaincodeProposalMessage starts for proposal 0xc00014d5e0, header 0xc0033d53b0
... [protoutils] validateChaincodeProposalMessage -> DEBU 2328 validateChaincodeProposalMessage info: header extension references chaincode name:"lscc"
... [endorser] preProcess -> DEBU 2329 [deconeb-channel][bc1525c4] processing txid: bc1525c4a55b0d9115df62e094cc5f651a38bd1e037bf0675296f77f489585da
... [fsblkstorage] retrieveTransactionByID -> DEBU 232a retrieveTransactionByID() - txId = [bc1525c4a55b0d9115df62e094cc5f651a38bd1e037bf0675296f77f489585da]
... [lockbasedtxmgr] NewTxSimulator -> DEBU 232b constructing new tx simulator
... [lockbasedtxmgr] newLockBasedTxSimulator -> DEBU 232c constructing new tx simulator txid = [bc1525c4a55b0d9115df62e094cc5f651a38bd1e037bf0675296f77f489585da]
... [endorser] SimulateProposal -> DEBU 232d [deconeb-channel][bc1525c4] Entry chaincode: name:"lscc"
... [endorser] callChaincode -> INFO 232e [deconeb-channel][bc1525c4] Entry chaincode: name:"lscc"
... [chaincode] Execute -> DEBU 232f Entry
... [chaincode] handleMessage -> DEBU 2330 [bc1525c4] Fabric side handling ChaincodeMessage of type: GET_STATE in state ready
... [chaincode] HandleTransaction -> DEBU 2331 [bc1525c4] handling GET_STATE from chaincode
... [chaincode] HandleGetState -> DEBU 2332 [bc1525c4] getting state for chaincode lscc, key productOwnership, channel deconeb-channel
... [statecouchdb] GetState -> DEBU 2333 GetState(). ns=lscc, key=productOwnership
... [couchdb] ReadDoc -> DEBU 2334 [deconeb-channel_lscc] Entering ReadDoc() id=[productOwnership]
... [couchdb] handleRequest -> DEBU 2335 Entering handleRequest() method=GET url=http://couchdb1.myorg.com:5984 dbName=deconeb-channel_lscc
... [couchdb] handleRequest -> DEBU 2336 Request URL: http://couchdb1.myorg.com:5984/deconeb-channel_lscc/productOwnership?attachments=true
... [couchdb] handleRequest -> DEBU 2337 HTTP Request: GET /deconeb-channel_lscc/productOwnership?attachments=true HTTP/1.1 | Host: couchdb1.myorg.com:5984 | User-Agent: Go-http-client/1.1 | Accept: multipart/related | Authorization: Basic Y291Y2hkYjpjb3VjaGRiMTIz | Accept-Encoding: gzip | |
... [couchdb] handleRequest -> DEBU 2338 Error handling CouchDB request. Error:not_found, Status Code:404, Reason:missing
... [couchdb] ReadDoc -> DEBU 2339 [deconeb-channel_lscc] Document not found (404), returning nil value instead of 404 error
... [chaincode] HandleGetState -> DEBU 233a [bc1525c4] No state associated with key: productOwnership. Sending RESPONSE with an empty payload
... [chaincode] HandleTransaction -> DEBU 233b [bc1525c4] Completed GET_STATE. Sending RESPONSE
... [cauthdsl] func1 -> DEBU 233c 0xc00370e210 gate 1638419941463477320 evaluation starts
... [cauthdsl] func2 -> DEBU 233d 0xc00370e210 signed by 0 principal evaluation starts (used [false])
... [cauthdsl] func2 -> DEBU 233e 0xc00370e210 processing identity 0 with bytes of 115a2d0
... [cauthdsl] func2 -> DEBU 233f 0xc00370e210 principal matched by identity 0
... [cauthdsl] func2 -> DEBU 2340 0xc00370e210 principal evaluation succeeds for identity 0
... [cauthdsl] func1 -> DEBU 2341 0xc00370e210 gate 1638419941463477320 evaluation succeeds
... [chaincode] handleMessage -> DEBU 2342 [bc1525c4] Fabric side handling ChaincodeMessage of type: PUT_STATE in state ready
... [chaincode] HandleTransaction -> DEBU 2343 [bc1525c4] handling PUT_STATE from chaincode
... [chaincode] HandleTransaction -> DEBU 2344 [bc1525c4] Completed PUT_STATE. Sending RESPONSE
... [lscc] putChaincodeCollectionData -> DEBU 2345 No collection configuration specified
... [chaincode] handleMessage -> DEBU 2346 [bc1525c4] Fabric side handling ChaincodeMessage of type: COMPLETED in state ready
... [chaincode] Notify -> DEBU 2347 [bc1525c4] notifying Txid:bc1525c4a55b0d9115df62e094cc5f651a38bd1e037bf0675296f77f489585da, channelID:deconeb-channel
... [chaincode] Execute -> DEBU 2348 Exit
... [chaincode] LaunchConfig -> DEBU 2349 launchConfig: executable:"/root/chaincode-java/start",Args:[/root/chaincode-java/start,--peerAddress,peer1.myorg.com:7052],Envs:[CORE_CHAINCODE_LOGGING_LEVEL=info,CORE_CHAINCODE_LOGGING_SHIM=warning,CORE_CHAINCODE_LOGGING_FORMAT=%{color}%{time:2006-01-02 15:04:05.000 MST} [%{module}] %{shortfunc} -> %{level:.4s} %{id:03x}%{color:reset} %{message},CORE_CHAINCODE_ID_NAME=productOwnership:1.0,CORE_PEER_TLS_ENABLED=true,CORE_TLS_CLIENT_KEY_PATH=/etc/hyperledger/fabric/client.key,CORE_TLS_CLIENT_CERT_PATH=/etc/hyperledger/fabric/client.crt,CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/peer.crt],Files:[/etc/hyperledger/fabric/client.crt /etc/hyperledger/fabric/client.key /etc/hyperledger/fabric/peer.crt]
... [chaincode] Start -> DEBU 234a start container: productOwnership:1.0
... [chaincode] Start -> DEBU 234b start container with args: /root/chaincode-java/start --peerAddress peer1.myorg.com:7052
... [chaincode] Start -> DEBU 234c start container with env:
CORE_CHAINCODE_LOGGING_LEVEL=info
CORE_CHAINCODE_LOGGING_SHIM=warning
CORE_CHAINCODE_LOGGING_FORMAT=%{color}%{time:2006-01-02 15:04:05.000 MST} [%{module}] %{shortfunc} -> %{level:.4s} %{id:03x}%{color:reset} %{message}
CORE_CHAINCODE_ID_NAME=productOwnership:1.0
CORE_PEER_TLS_ENABLED=true
CORE_TLS_CLIENT_KEY_PATH=/etc/hyperledger/fabric/client.key
CORE_TLS_CLIENT_CERT_PATH=/etc/hyperledger/fabric/client.crt
CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/peer.crt
... [container] lockContainer -> DEBU 234d waiting for container(productOwnership-1.0) lock
... [container] lockContainer -> DEBU 234e got container (productOwnership-1.0) lock
... [dockercontroller] stopInternal -> DEBU 234f stopping container id=dev-peer1.myorg.com-productOwnership-1.0
... [dockercontroller] stopInternal -> DEBU 2350 stop container result error="No such container: dev-peer1.myorg.com-productOwnership-1.0"
... [dockercontroller] stopInternal -> DEBU 2351 killing container id=dev-peer1.myorg.com-productOwnership-1.0
... [dockercontroller] stopInternal -> DEBU 2352 kill container result id=dev-peer1.myorg.com-productOwnership-1.0 error="No such container: dev-peer1.myorg.com-productOwnership-1.0"
... [dockercontroller] stopInternal -> DEBU 2353 removing container id=dev-peer1.myorg.com-productOwnership-1.0
... [dockercontroller] stopInternal -> DEBU 2354 remove container result id=dev-peer1.myorg.com-productOwnership-1.0 error="No such container: dev-peer1.myorg.com-productOwnership-1.0"
... [dockercontroller] createContainer -> DEBU 2355 create container imageID=dev-peer1.myorg.com-productownership-1.0-8833aa49d3efc325e99f93399c3e02417a6d5a271b81e9013f3095580e89b308 containerID=dev-peer1.myorg.com-productOwnership-1.0
... [dockercontroller] getDockerHostConfig -> DEBU 2356 docker container hostconfig NetworkMode: devbc
... [dockercontroller] createContainer -> DEBU 2357 created container imageID=dev-peer1.myorg.com-productownership-1.0-8833aa49d3efc325e99f93399c3e02417a6d5a271b81e9013f3095580e89b308 containerID=dev-peer1.myorg.com-productOwnership-1.0
... [dockercontroller] Start -> DEBU 2358 Started container dev-peer1.myorg.com-productOwnership-1.0
... [container] unlockContainer -> DEBU 2359 container lock deleted(productOwnership-1.0)
... [container] lockContainer -> DEBU 235a waiting for container(productOwnership-1.0) lock
... [container] lockContainer -> DEBU 235b got container (productOwnership-1.0) lock
... [container] unlockContainer -> DEBU 235c container lock deleted(productOwnership-1.0)
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 235d Dec 02, 2021 4:39:02 AM org.hyperledger.fabric.shim.ChaincodeBase processEnvironmentOptions
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 235e INFO: <<<<<<<<<<<<<Enviromental options>>>>>>>>>>>>
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 235f Dec 02, 2021 4:39:02 AM org.hyperledger.fabric.shim.ChaincodeBase processEnvironmentOptions
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2360 INFO: CORE_CHAINCODE_ID_NAME: productOwnership:1.0
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2361 Dec 02, 2021 4:39:02 AM org.hyperledger.fabric.shim.ChaincodeBase processEnvironmentOptions
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2362 INFO: CORE_PEER_ADDRESS: 127.0.0.1
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2363 Dec 02, 2021 4:39:02 AM org.hyperledger.fabric.shim.ChaincodeBase processEnvironmentOptions
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2364 INFO: CORE_PEER_TLS_ENABLED: true
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2365 Dec 02, 2021 4:39:02 AM org.hyperledger.fabric.shim.ChaincodeBase processEnvironmentOptions
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2366 INFO: CORE_PEER_TLS_ROOTCERT_FILE: /etc/hyperledger/fabric/peer.crt
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2367 Dec 02, 2021 4:39:02 AM org.hyperledger.fabric.shim.ChaincodeBase processEnvironmentOptions
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2368 INFO: CORE_TLS_CLIENT_KEY_PATH: /etc/hyperledger/fabric/client.key
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2369 Dec 02, 2021 4:39:02 AM org.hyperledger.fabric.shim.ChaincodeBase processEnvironmentOptions
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 236a INFO: CORE_TLS_CLIENT_CERT_PATH: /etc/hyperledger/fabric/client.crt
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 236b Dec 02, 2021 4:39:02 AM org.hyperledger.fabric.shim.ChaincodeBase processEnvironmentOptions
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 236c INFO: LOGLEVEL: INFO
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 236d Dec 02, 2021 4:39:03 AM org.hyperledger.fabric.shim.ChaincodeBase processCommandLineOptions
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 236e INFO: <<<<<<<<<<<<<CommandLine options>>>>>>>>>>>>
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 236f Dec 02, 2021 4:39:03 AM org.hyperledger.fabric.shim.ChaincodeBase processCommandLineOptions
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2370 INFO: CORE_CHAINCODE_ID_NAME: productOwnership:1.0
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2371 Dec 02, 2021 4:39:03 AM org.hyperledger.fabric.shim.ChaincodeBase processCommandLineOptions
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2372 INFO: CORE_PEER_ADDRESS: peer1.myorg.com:7052
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2373 Dec 02, 2021 4:39:03 AM org.hyperledger.fabric.shim.ChaincodeBase processCommandLineOptions
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2374 INFO: CORE_PEER_TLS_ENABLED: true
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2375 Dec 02, 2021 4:39:03 AM org.hyperledger.fabric.shim.ChaincodeBase processCommandLineOptions
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2376 INFO: CORE_PEER_TLS_ROOTCERT_FILE: /etc/hyperledger/fabric/peer.crt
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2377 Dec 02, 2021 4:39:03 AM org.hyperledger.fabric.shim.ChaincodeBase processCommandLineOptions
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2378 INFO: CORE_TLS_CLIENT_KEY_PATH: /etc/hyperledger/fabric/client.key
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2379 Dec 02, 2021 4:39:03 AM org.hyperledger.fabric.shim.ChaincodeBase processCommandLineOptions
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 237a INFO: CORE_TLS_CLIENT_CERT_PATH: /etc/hyperledger/fabric/client.crt
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 237b org.hyperledger
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 237c org.hyperledger.fabric.shim.ChaincodeBase
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 237d org.hyperledger
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 237e 04:39:03:119 INFO org.hyperledger.fabric.shim.ChaincodeBase initializeLogging Loglevel set to INFO
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 237f 04:39:03:130 INFO org.hyperledger.fabric.shim.ChaincodeBase getChaincodeConfig <<<<<<<<<<<<<Properties options>>>>>>>>>>>>
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2380 04:39:03:131 INFO org.hyperledger.fabric.shim.ChaincodeBase getChaincodeConfig {CORE_CHAINCODE_ID_NAME=productOwnership:1.0, CORE_PEER_ADDRESS=peer1.myorg.com}
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2381 04:39:03:136 INFO org.hyperledger.fabric.metrics.Metrics initialize Metrics disabled
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2382 04:39:03:840 INFO org.hyperledger.fabric.shim.ChaincodeBase newChannelBuilder ()->Configuring channel connection to peer.peer1.myorg.com:7052 tlsenabled true
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2383 04:39:04:408 INFO org.hyperledger.fabric.shim.impl.InnvocationTaskManager <init> Max Pool Size [TP_MAX_POOL_SIZE]5
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2384 04:39:04:408 INFO org.hyperledger.fabric.shim.impl.InnvocationTaskManager <init> Queue Size [TP_CORE_POOL_SIZE]5000
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2385 04:39:04:413 INFO org.hyperledger.fabric.shim.impl.InnvocationTaskManager <init> Core Pool Size [TP_QUEUE_SIZE]5
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2386 04:39:04:413 INFO org.hyperledger.fabric.shim.impl.InnvocationTaskManager <init> Keep Alive Time [TP_KEEP_ALIVE_MS]5000
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2387 04:39:04:421 INFO org.hyperledger.fabric.shim.impl.InnvocationTaskExecutor <init> Thread pool created
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2388 04:39:04:422 INFO org.hyperledger.fabric.shim.impl.ChaincodeSupportClient start making the grpc call
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2389 04:39:04:596 INFO org.hyperledger.fabric.shim.impl.InnvocationTaskManager register Registering new chaincode name: "productOwnership:1.0"
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 238a
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 238b 04:39:04:620 FINE org.hyperledger.fabric.shim.impl.ChaincodeSupportClient$2 accept > sendToPeer
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 238c 04:39:04:636 FINE org.hyperledger.fabric.shim.impl.ChaincodeSupportClient$2 accept < sendToPeer
... [chaincode.accesscontrol] authenticate -> DEBU 238d Chaincode productOwnership:1.0 's authentication is authorized
... [chaincode] handleMessage -> DEBU 238e [] Fabric side handling ChaincodeMessage of type: REGISTER in state created
... [chaincode] HandleRegister -> DEBU 238f Received REGISTER in state created
... [chaincode] Register -> DEBU 2390 registered handler complete for chaincode productOwnership:1.0
... [chaincode] HandleRegister -> DEBU 2391 Got REGISTER for chaincodeID = name:"productOwnership:1.0" , sending back REGISTERED
... [chaincode] HandleRegister -> DEBU 2392 Changed state to established for name:"productOwnership:1.0"
... [chaincode] sendReady -> DEBU 2393 sending READY for chaincode name:"productOwnership:1.0"
... [chaincode] sendReady -> DEBU 2394 Changed to state ready for chaincode name:"productOwnership:1.0"
... [chaincode] Launch -> DEBU 2395 launch complete
... [chaincode] Execute -> DEBU 2396 Entry
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2397 04:39:05:261 FINE org.hyperledger.fabric.shim.impl.InnvocationTaskManager onChaincodeMessage [ ] {
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2398 "type": "REGISTERED"
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 2399 }
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 239a 04:39:05:269 FINE org.hyperledger.fabric.shim.impl.InnvocationTaskManager onChaincodeMessage [ ] Received REGISTERED: moving to established state
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 239b 04:39:05:273 FINE org.hyperledger.fabric.shim.impl.InnvocationTaskManager onChaincodeMessage [ ] {
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 239c "type": "READY"
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 239d }
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 239e 04:39:05:274 FINE org.hyperledger.fabric.shim.impl.InnvocationTaskManager onChaincodeMessage [ ] Received READY: ready for invocations
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 239f 04:39:05:288 FINE org.hyperledger.fabric.shim.impl.InnvocationTaskManager onChaincodeMessage [bc1525c4] {
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 23a0 "type": "INIT",
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 23a1 "payload": "CgRpbml0CgA=",
... [peer.chaincode.dev-peer1.myorg.com-productOwnership-1.0] func2 -> INFO 23a2 "txid": "bc1525c4a55b0d9115df62e094cc5f651a38bd1e037bf0675296f77f489585da",
I had to truncate the log above to fit post requirements; the full log can be found at https://pastebin.com/XBUtQdwz
I don't see any errors in the log, so I'm not sure what the problem is. Can anyone point me in the right direction?

Endorsement policy failure error is taking a long time in Hyperledger Fabric v2.2

I have discovery enabled, and I am testing whether a transaction will fail when the endorsing organizations set on the transaction do not match the organizations actually involved in the transaction.
I am attempting to create a private data collection with ORG1 and as part of the transaction I have used the following method to set the endorsing organizations:
transaction.setEndorsingOrganizations(...['ORG2']);
The test fails as expected, but it takes 60 seconds to do so.
The logs are as follows:
peer (org1) logs:
2021-01-25 13:31:50.876 UTC [gossip.privdata] StoreBlock -> INFO 055 [default] Received block [15] from buffer
2021-01-25 13:31:50.878 UTC [vscc] Validate -> ERRO 056 VSCC error: stateBasedValidator.Validate failed, err validation of endorsement policy for collection _implicit_org_1 chaincode test-chaincode in tx 15:0 failed: signature set did not satisfy policy
2021-01-25 13:31:50.878 UTC [committer.txvalidator] validateTx -> ERRO 057 Dispatch for transaction txId = 5c52e14fa24a6e90effbd9dffcbb3fbc6cac1091c1bf3b6512616084 returned error: validation of endorsement policy for collection _implicit_org_1 chaincode test-chaincode in tx 15:0 failed: signature set did not satisfy policy
2021-01-25 13:31:50.878 UTC [committer.txvalidator] Validate -> INFO 058 [default] Validated block [15] in 1ms
2021-01-25 13:31:50.878 UTC [gossip.privdata] fetchPrivateData -> WARN 059 Do not know any peer in the channel( default ) that matches the policies , aborting
2021-01-25 13:31:50.878 UTC [gossip.privdata] populateFromRemotePeers -> WARN 05a Failed fetching private data from remote peers for dig2src:[map[{5c52e14fa24a6e90effbd9dffcbb3fbc6cac1091c1bf3b6512616084 test-chaincode _implicit_org_1 15 0}:[]]], err: Empty membership channel=default
2021-01-25 13:31:51.879 UTC [gossip.privdata] fetchPrivateData -> WARN 05b Do not know any peer in the channel( default ) that matches the policies , aborting
2021-01-25 13:31:51.879 UTC [gossip.privdata] populateFromRemotePeers -> WARN 05c Failed fetching private data from remote peers for dig2src:[map[{5c52e14fa24a6e90effbd9dffcbb3fbc6cac1091c1bf3b6512616084 test-chaincode _implicit_org_1 15 0}:[]]], err: Empty membership channel=default
2021-01-25 13:31:52.880 UTC [gossip.privdata] fetchPrivateData -> WARN 05d Do not know any peer in the channel( default ) that matches the policies , aborting
2021-01-25 13:31:52.880 UTC [gossip.privdata] populateFromRemotePeers -> WARN 05e Failed fetching private data from remote peers for dig2src:[map[{5c52e14fa24a6e90effbd9dffcbb3fbc6cac1091c1bf3b6512616084 test-chaincode _implicit_org_1 15 0}:[]]], err: Empty membership channel=default
The fetchPrivateData and populateFromRemotePeers warnings repeat roughly every second until:
2021-01-25 13:32:50.873 UTC [gossip.privdata] RetrievePvtdata -> WARN 0d4 Could not fetch all 1 eligible collection private write sets for block [15] (0 from local cache, 0 from transient store, 0 from other peers). Will commit block with missing private write sets:[txID: 5c52e14fa24a6e90effbd9dffcbb3fbc6cac1091c1bf3b6512616084, seq: 0, namespace: test-chaincode, collection: _implicit_org_1, hash: c189e3f3e8546ecde9b98b3aae67885cb8effeac1d35371a512c47db6a84
] channel=default
2021-01-25 13:32:50.873 UTC [validation] preprocessProtoBlock -> WARN 0d5 Channel [default]: Block [15] Transaction index [0] TxId [5c52e14fa24a6e90effbd9dffcbb3fbc6cac1091c1bf3b6512616084] marked as invalid by committer. Reason code [ENDORSEMENT_POLICY_FAILURE]
2021-01-25 13:32:50.903 UTC [kvledger] CommitLegacy -> INFO 0d6 [default] Committed block [15] with 1 transaction(s) in 29ms (state_validation=0ms block_and_pvtdata_commit=11ms state_commit=16ms) commitHash=[bcfc168b343de9297a2cd4d9f202840dbde2478ab898998915b2c589]
2021-01-25 13:33:00.433 UTC [gossip.privdata] fetchPrivateData -> WARN 0d7 Do not know any peer in the channel( default ) that matches the policies , aborting
2021-01-25 13:33:00.433 UTC [gossip.privdata] reconcile -> ERRO 0d8 reconciliation error when trying to fetch missing items from different peers: Empty membership
2021-01-25 13:33:00.434 UTC [gossip.privdata] run -> ERRO 0d9 Failed to reconcile missing private info, error: Empty membership
The problem isn't the result; it's the time it takes to return the error. Does anyone know what could be causing this, and is it expected behaviour for it to take this long? In the peer logs the endorsement policy validation fails right at the beginning, but the peer then keeps trying to fetch the private data anyway.
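The log timestamps line up with a 60-second retry window: the first fetch warning appears at 13:31:50.878 and the peer only gives up and commits at 13:32:50.873. A quick check, using the timestamps copied from the peer log above:

```python
from datetime import datetime

fmt = "%Y-%m-%d %H:%M:%S.%f"
# Timestamps taken from the peer log above
first_fetch = datetime.strptime("2021-01-25 13:31:50.878", fmt)  # first fetchPrivateData warning
gave_up = datetime.strptime("2021-01-25 13:32:50.873", fmt)      # "Will commit block with missing private write sets"

window = (gave_up - first_fetch).total_seconds()
print(round(window))  # 60
```

So the peer spends almost exactly 60 seconds retrying the private-data pull before committing the block without it.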
Check core.yaml. The usual default setting is:
pvtData:
    pullRetryThreshold: 60s
That looks like the variable that might control it.
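As a sketch of where that key sits, in a stock core.yaml it lives under the peer gossip section (surrounding settings elided; the exact layout can vary between releases):

```yaml
peer:
    gossip:
        pvtData:
            # How long the peer keeps retrying to pull missing private data
            # from other peers before committing the block without it.
            pullRetryThreshold: 60s
```

Lowering this value should shorten the delay before the ENDORSEMENT_POLICY_FAILURE is surfaced, at the cost of less time for private data to propagate in a healthy network.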

Got error &{FORBIDDEN} while joining a new peer to a channel

I created my network with a script like:
docker-compose -f $COMPOSE_FILE up -d $CA
docker-compose -f $COMPOSE_FILE up -d $ORDERER1 $PEER0 $PEER1
docker-compose -f $COMPOSE_FILE up -d $CLI
docker exec cli peer channel create -o orderer.example.com:7050 -c $CHANNEL_NAME -f /etc/hyperledger/config/channel.tx
The channel is created, and if I enter the cli container I can see the newly generated beerchannel.block file in the working directory with ls. This directory also contains crypto, which holds the genesis block and other config files, and crypto-config, which holds the MSP material and certificates.
At this point the container logs look fine.
Now I want to join peer0 to the channel with:
docker exec -e $ENV_ADDRESSP0 $CLI peer channel join -b $CHANNEL_NAME.block
When I run this command, peer0 fails to join the channel.
The strange thing is that the script returns this message:
2019-11-22 10:04:00.868 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
2019-11-22 10:04:00.922 UTC [channelCmd] executeJoin -> INFO 002 Successfully submitted proposal to join channel
So, everything seems fine.
But in the orderer logs I see this message repeated:
2019-11-22 09:59:07.429 UTC [fsblkstorage] newBlockfileMgr -> INFO 009 Getting block information from block storage
2019-11-22 09:59:07.438 UTC [orderer.commmon.multichannel] newChain -> INFO 00a Created and starting new chain beerchannel
2019-11-22 09:59:07.440 UTC [comm.grpc.server] 1 -> INFO 00b streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Deliver grpc.peer_address=172.29.0.6:41778 grpc.code=OK grpc.call_duration=25.385144ms
2019-11-22 10:04:06.923 UTC [common.deliver] deliverBlocks -> WARN 00c [channel: beerchannel] Client authorization revoked for deliver request from 172.29.0.4:48406: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Readers' sub-policies to be satisfied: permission denied
2019-11-22 10:04:06.923 UTC [comm.grpc.server] 1 -> INFO 00d streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Deliver grpc.peer_address=172.29.0.4:48406 grpc.code=OK grpc.call_duration=1.001442ms
2019-11-22 10:04:07.026 UTC [common.deliver] deliverBlocks -> WARN 00e [channel: beerchannel] Client authorization revoked for deliver request from 172.29.0.4:48408: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Readers' sub-policies to be satisfied: permission denied
2019-11-22 10:04:07.026 UTC [comm.grpc.server] 1 -> INFO 00f streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Deliver grpc.peer_address=172.29.0.4:48408 grpc.code=OK grpc.call_duration=582.912µs
Since the peer involved is peer0, I also checked the peer0 logs and found these errors:
2019-11-22 10:04:00.870 UTC [endorser] callChaincode -> INFO 029 [][ec4f5097] Entry chaincode: name:"cscc"
2019-11-22 10:04:00.870 UTC [ledgermgmt] CreateLedger -> INFO 02a Creating ledger [beerchannel] with genesis block
2019-11-22 10:04:00.874 UTC [fsblkstorage] newBlockfileMgr -> INFO 02b Getting block information from block storage
2019-11-22 10:04:00.896 UTC [kvledger] CommitWithPvtData -> INFO 02c [beerchannel] Committed block [0] with 1 transaction(s) in 16ms (state_validation=0ms block_and_pvtdata_commit=10ms state_commit=2ms) commitHash=[]
2019-11-22 10:04:00.899 UTC [ledgermgmt] CreateLedger -> INFO 02d Created ledger [beerchannel] with genesis block
2019-11-22 10:04:00.902 UTC [gossip.gossip] JoinChan -> INFO 02e Joining gossip network of channel beerchannel with 1 organizations
2019-11-22 10:04:00.902 UTC [gossip.gossip] learnAnchorPeers -> INFO 02f No configured anchor peers of Org1MSP for channel beerchannel to learn about
2019-11-22 10:04:00.917 UTC [gossip.state] NewGossipStateProvider -> INFO 030 Updating metadata information, current ledger sequence is at = 0, next expected block is = 1
2019-11-22 10:04:00.919 UTC [sccapi] deploySysCC -> INFO 031 system chaincode lscc/beerchannel(github.com/hyperledger/fabric/core/scc/lscc) deployed
2019-11-22 10:04:00.919 UTC [cscc] Init -> INFO 032 Init CSCC
2019-11-22 10:04:00.920 UTC [sccapi] deploySysCC -> INFO 033 system chaincode cscc/beerchannel(github.com/hyperledger/fabric/core/scc/cscc) deployed
2019-11-22 10:04:00.920 UTC [qscc] Init -> INFO 034 Init QSCC
2019-11-22 10:04:00.920 UTC [sccapi] deploySysCC -> INFO 035 system chaincode qscc/beerchannel(github.com/hyperledger/fabric/core/scc/qscc) deployed
2019-11-22 10:04:00.920 UTC [sccapi] deploySysCC -> INFO 036 system chaincode (+lifecycle,github.com/hyperledger/fabric/core/chaincode/lifecycle) disabled
2019-11-22 10:04:00.921 UTC [endorser] callChaincode -> INFO 037 [][ec4f5097] Exit chaincode: name:"cscc" (51ms)
2019-11-22 10:04:00.921 UTC [comm.grpc.server] 1 -> INFO 038 unary call completed grpc.service=protos.Endorser grpc.method=ProcessProposal grpc.peer_address=172.29.0.6:42736 grpc.code=OK grpc.call_duration=51.473337ms
2019-11-22 10:04:06.919 UTC [gossip.election] beLeader -> INFO 039 42a5181dbddcff9d15ae32b05300e849fbcad1cf138e62f3d8b726d7b5db25d3 : Becoming a leader
2019-11-22 10:04:06.919 UTC [gossip.service] func1 -> INFO 03a Elected as a leader, starting delivery service for channel beerchannel
2019-11-22 10:04:06.923 UTC [blocksProvider] DeliverBlocks -> ERRO 03b [beerchannel] Got error &{FORBIDDEN}
2019-11-22 10:04:07.026 UTC [blocksProvider] DeliverBlocks -> ERRO 03c [beerchannel] Got error &{FORBIDDEN}
2019-11-22 10:04:07.239 UTC [blocksProvider] DeliverBlocks -> ERRO 03d [beerchannel] Got error &{FORBIDDEN}
2019-11-22 10:04:07.643 UTC [blocksProvider] DeliverBlocks -> ERRO 03e [beerchannel] Got error &{FORBIDDEN}
2019-11-22 10:04:08.445 UTC [blocksProvider] DeliverBlocks -> ERRO 03f [beerchannel] Got error &{FORBIDDEN}
2019-11-22 10:04:10.051 UTC [blocksProvider] DeliverBlocks -> ERRO 040 [beerchannel] Got error &{FORBIDDEN}
2019-11-22 10:04:13.254 UTC [blocksProvider] DeliverBlocks -> ERRO 041 [beerchannel] Got error &{FORBIDDEN}
2019-11-22 10:04:19.657 UTC [blocksProvider] DeliverBlocks -> ERRO 042 [beerchannel] Got error &{FORBIDDEN}
2019-11-22 10:04:29.662 UTC [blocksProvider] DeliverBlocks -> ERRO 043 [beerchannel] Got error &{FORBIDDEN}
2019-11-22 10:04:39.668 UTC [blocksProvider] DeliverBlocks -> ERRO 044 [beerchannel] Got error &{FORBIDDEN}
2019-11-22 10:04:49.671 UTC [blocksProvider] DeliverBlocks -> ERRO 045 [beerchannel] Got error &{FORBIDDEN}
2019-11-22 10:04:49.671 UTC [blocksProvider] DeliverBlocks -> ERRO 046 [beerchannel] Wrong statuses threshold passed, stopping block provider
2019-11-22 10:04:49.671 UTC [gossip.election] stopBeingLeader -> INFO 047 42a5181dbddcff9d15ae32b05300e849fbcad1cf138e62f3d8b726d7b5db25d3 Stopped being a leader
2019-11-22 10:04:49.671 UTC [gossip.service] func1 -> INFO 048 Renounced leadership, stopping delivery service for channel beerchannel
2019-11-22 10:05:56.924 UTC [gossip.election] beLeader -> INFO 049 42a5181dbddcff9d15ae32b05300e849fbcad1cf138e62f3d8b726d7b5db25d3 : Becoming a leader
2019-11-22 10:05:56.924 UTC [gossip.service] func1 -> INFO 04a Elected as a leader, starting delivery service for channel beerchannel
2019-11-22 10:05:56.929 UTC [blocksProvider] DeliverBlocks -> ERRO 04b [beerchannel] Got error &{FORBIDDEN}
2019-11-22 10:05:57.032 UTC [blocksProvider] DeliverBlocks -> ERRO 04c [beerchannel] Got error &{FORBIDDEN}
2019-11-22 10:05:57.235 UTC [blocksProvider] DeliverBlocks -> ERRO 04d [beerchannel] Got error &{FORBIDDEN}
2019-11-22 10:05:57.638 UTC [blocksProvider] DeliverBlocks -> ERRO 04e [beerchannel] Got error &{FORBIDDEN}
2019-11-22 10:05:58.441 UTC [blocksProvider] DeliverBlocks -> ERRO 04f [beerchannel] Got error &{FORBIDDEN}
2019-11-22 10:06:00.044 UTC [blocksProvider] DeliverBlocks -> ERRO 050 [beerchannel] Got error &{FORBIDDEN}
2019-11-22 10:06:03.247 UTC [blocksProvider] DeliverBlocks -> ERRO 051 [beerchannel] Got error &{FORBIDDEN}
2019-11-22 10:06:09.652 UTC [blocksProvider] DeliverBlocks -> ERRO 052 [beerchannel] Got error &{FORBIDDEN}
2019-11-22 10:06:19.656 UTC [blocksProvider] DeliverBlocks -> ERRO 053 [beerchannel] Got error &{FORBIDDEN}
2019-11-22 10:06:29.659 UTC [blocksProvider] DeliverBlocks -> ERRO 054 [beerchannel] Got error &{FORBIDDEN}
2019-11-22 10:06:39.662 UTC [blocksProvider] DeliverBlocks -> ERRO 055 [beerchannel] Got error &{FORBIDDEN}
2019-11-22 10:06:39.662 UTC [blocksProvider] DeliverBlocks -> ERRO 056 [beerchannel] Wrong statuses threshold passed, stopping block provider
2019-11-22 10:06:39.662 UTC [gossip.election] stopBeingLeader -> INFO 057 42a5181dbddcff9d15ae32b05300e849fbcad1cf138e62f3d8b726d7b5db25d3 Stopped being a leader
2019-11-22 10:06:39.662 UTC [gossip.service] func1 -> INFO 058 Renounced leadership, stopping delivery service for channel beerchannel
It seems to be something related to permissions, but I can't understand what's wrong here.
The cli container has the successfully generated beerchannel.block file, and now I just want to add peer0 to the channel.
Adding my configtx.yaml:
Organizations:
    - &OrdererOrg
        Name: OrdererOrg
        ID: OrdererMSP
        MSPDir: crypto-config/ordererOrganizations/c.com/msp
        Policies:
            Readers:
                Type: Signature
                Rule: "OR('OrdererMSP.member')"
            Writers:
                Type: Signature
                Rule: "OR('OrdererMSP.member')"
            Admins:
                Type: Signature
                Rule: "OR('OrdererMSP.admin')"
    - &s
        Name: sMSP
        ID: sMSP
        MSPDir: crypto-config/peerOrganizations/s.c.com/msp
        Policies:
            Readers:
                Type: Signature
                Rule: "OR('sMSP.admin', 'sMSP.peer', 'sMSP.client')"
            Writers:
                Type: Signature
                Rule: "OR('sMSP.admin', 'sMSP.client')"
            Admins:
                Type: Signature
                Rule: "OR('sMSP.admin')"
        AnchorPeers:
            - Host: peer1.s.c.com
              Port: 7051
            - Host: peer2.s.c.com
              Port: 8051

Capabilities:
    Channel: &ChannelCapabilities
        V1_4_3: true
        V1_3: false
        V1_1: false
    Orderer: &OrdererCapabilities
        V1_4_2: true
        V1_1: false
    Application: &ApplicationCapabilities
        V1_4_2: true
        V1_3: false
        V1_2: false
        V1_1: false

Application: &ApplicationDefaults
    Organizations:
    Policies:
        Readers:
            Type: ImplicitMeta
            Rule: "ANY Readers"
        Writers:
            Type: ImplicitMeta
            Rule: "ANY Writers"
        Admins:
            Type: ImplicitMeta
            Rule: "MAJORITY Admins"
    Capabilities:
        <<: *ApplicationCapabilities

Orderer: &OrdererDefaults
    OrdererType: solo
    Addresses:
        - orderer1.c.com:7050
    BatchTimeout: 500ms
    BatchSize:
        MaxMessageCount: 15
        AbsoluteMaxBytes: 99 MB
        PreferredMaxBytes: 512 kb
    Policies:
        Readers:
            Type: ImplicitMeta
            Rule: "ANY Readers"
        Writers:
            Type: ImplicitMeta
            Rule: "ANY Writers"
        Admins:
            Type: ImplicitMeta
            Rule: "MAJORITY Admins"
        BlockValidation:
            Type: ImplicitMeta
            Rule: "ANY Writers"
    Capabilities:
        <<: *OrdererCapabilities

Channel: &ChannelDefaults
    Policies:
        Readers:
            Type: ImplicitMeta
            Rule: "ANY Readers"
        Writers:
            Type: ImplicitMeta
            Rule: "ANY Writers"
        Admins:
            Type: ImplicitMeta
            Rule: "MAJORITY Admins"
    Capabilities:
        <<: *ChannelCapabilities

Profiles:
    OneOrgOrdererGenesis:
        <<: *ChannelDefaults
        Orderer:
            <<: *OrdererDefaults
            Organizations:
                - *OrdererOrg
        Consortiums:
            SampleConsortium:
                Organizations:
                    - *s
    OneOrgChannel:
        <<: *ChannelDefaults
        Consortium: SampleConsortium
        Application:
            <<: *ApplicationDefaults
            Organizations:
                - *s
    SampleMultiNodeEtcdRaft:
        <<: *ChannelDefaults
        Capabilities:
            <<: *ChannelCapabilities
        Orderer:
            <<: *OrdererDefaults
            OrdererType: etcdraft
            EtcdRaft:
                Consenters:
                    - Host: orderer1.c.com
                      Port: 7050
                      ClientTLSCert: crypto-config/ordererOrganizations/c.com/orderers/orderer1.c.com/tls/server.crt
                      ServerTLSCert: crypto-config/ordererOrganizations/c.com/orderers/orderer1.c.com/tls/server.crt
                    - Host: orderer2.c.com
                      Port: 7050
                      ClientTLSCert: crypto-config/ordererOrganizations/c.com/orderers/orderer2.c.com/tls/server.crt
                      ServerTLSCert: crypto-config/ordererOrganizations/c.com/orderers/orderer2.c.com/tls/server.crt
                    - Host: orderer3.c.com
                      Port: 7050
                      ClientTLSCert: crypto-config/ordererOrganizations/c.com/orderers/orderer3.c.com/tls/server.crt
                      ServerTLSCert: crypto-config/ordererOrganizations/c.com/orderers/orderer3.c.com/tls/server.crt
            Addresses:
                - orderer1.c.com:7050
                - orderer2.c.com:7050
                - orderer3.c.com:7050
            Organizations:
                - *OrdererOrg
            Capabilities:
                <<: *OrdererCapabilities
        Application:
            <<: *ApplicationDefaults
            Organizations:
                - <<: *OrdererOrg
        Consortiums:
            SampleConsortium:
                Organizations:
                    - *s
Check the Reader policies you have defined in your configtx.yaml; this error is generated by a policy mismatch. You have defined specific identity types (admin, peer, client) in your Reader policies, but those identity types are not encoded in the certificates you generated for your peer.
Edit:
If you want to make it generic and not tied to a specific identity type, you can edit the s org policies like this:
- &s
    Name: sMSP
    ID: sMSP
    MSPDir: crypto-config/peerOrganizations/s.c.com/msp
    Policies:
        Readers:
            Type: Signature
            Rule: "OR('sMSP.member')"
        Writers:
            Type: Signature
            Rule: "OR('sMSP.member')"
        Admins:
            Type: Signature
            Rule: "OR('sMSP.admin')"
Alternatively, check the PeerOrgs section of your crypto-config.yaml and add the EnableNodeOUs property if it's missing, then regenerate the crypto material. Config example:
PeerOrgs:
    - Name: Org1
      Domain: org1.example.com
      EnableNodeOUs: true

Warning Message While Fetching the Config Block

While fetching the config block from the orderer, we are getting the warning below on the orderer side, even though the block is fetched successfully. Can anyone explain why the orderer logs this warning? Can we safely ignore it?
2019-03-18 05:37:47.304 UTC [common.deliver] Handle -> WARN 020 Error reading from 127.0.0.1:48474: rpc error: code = Canceled desc = context canceled
2019-03-18 05:37:47.304 UTC [comm.grpc.server] 1 -> INFO 021 streaming call completed {"grpc.start_time": "2019-03-18T05:37:47.295Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "127.0.0.1:48474", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "8.871178ms"}
That warning is generally benign. It indicates that the client did not gracefully shut down the streaming gRPC connection after fetching the block.
