Warning Message While Fetching the Config Block - hyperledger-fabric

While trying to fetch the config block from the orderer, we are getting the warning below on the orderer side, even though the block is fetched successfully. Can anyone explain why the orderer emits this warning, and whether it can safely be ignored?
2019-03-18 05:37:47.304 UTC [common.deliver] Handle -> WARN 020 Error reading from 127.0.0.1:48474: rpc error: code = Canceled desc = context canceled
2019-03-18 05:37:47.304 UTC [comm.grpc.server] 1 -> INFO 021 streaming call completed {"grpc.start_time": "2019-03-18T05:37:47.295Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "127.0.0.1:48474", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "8.871178ms"}

That warning is generally benign. It indicates that the client did not gracefully shut down the gRPC stream after fetching the block.

Related

Hyperledger Fabric peers consistently logging "peer changed its PKI-ID" messages

I have Hyperledger Fabric peers running on version 2.3.2, and the peers' certificates were renewed. The peers continuously log messages such as peer2.xorg:7051 changed its PKI-ID from xxxxxx to xxxxxxx, followed by Purging xxxxxxxx from membership.
Does anyone know the reason for these continuous logs?
Below are the complete logs:
2022-06-14 08:47:42.647 UTC [comm.grpc.server] 1 -> INFO 10d08 streaming call completed grpc.service=gossip.Gossip grpc.method=GossipStream grpc.peer_address=10.20.30.140:38550 grpc.peer_subject="CN=peer2.org1.com,OU=peer,O=Hyperledger,ST=North Carolina,C=US" grpc.code=OK grpc.call_duration=24.617863135s
2022-06-14 08:47:42.647 UTC [gossip.discovery] purge -> INFO 10d09 Purging e3c96c537b91675f3a6428a509a287addb65bddeeacb4b5d000b6e4ef567b013 from membership
2022-06-14 08:47:42.647 UTC [gossip.comm] createConnection -> INFO 10d0a Peer peer2.org1.com:7051 changed its PKI-ID from 1c56c0d7a0397dd9c756205197067ef26bef156cdf5ee27af16728a62123fb76 to 9994c9e8d63ae1f6564d1713f9a5393c458a78dfdb915ea2a4a4f6efb6d26dae
2022-06-14 08:47:42.647 UTC [gossip.discovery] purge -> INFO 10d0b Purging 1c56c0d7a0397dd9c756205197067ef26bef156cdf5ee27af16728a62123fb76 from membership
2022-06-14 08:47:42.648 UTC [gossip.comm] createConnection -> INFO 10d0c Peer peer2.org1.com:7051 changed its PKI-ID from da99b167b6c3a7b8289dd943568a382ac0f27d2d0ffcee53725f4fd18a10be9c to 9994c9e8d63ae1f6564d1713f9a5393c458a78dfdb915ea2a4a4f6efb6d26dae
2022-06-14 08:47:42.648 UTC [gossip.discovery] purge -> INFO 10d0d Purging da99b167b6c3a7b8289dd943568a382ac0f27d2d0ffcee53725f4fd18a10be9c from membership
2022-06-14 08:47:42.649 UTC [gossip.comm] createConnection -> INFO 10d0e Peer peer2.org1.com:7051 changed its PKI-ID from 87b299aa1d0a71002dbbac8b0b1bf049a6bd1aa58e669d31f0355587af15a8e9 to 9994c9e8d63ae1f6564d1713f9a5393c458a78dfdb915ea2a4a4f6efb6d26dae
2022-06-14 08:47:42.649 UTC [gossip.comm] func1 -> WARN 10d0f peer2.org1.com:7051, PKIid:87b299aa1d0a71002dbbac8b0b1bf049a6bd1aa58e669d31f0355587af15a8e9 isn't responsive: EOF
2022-06-14 08:47:42.649 UTC [gossip.discovery] purge -> INFO 10d10 Purging 87b299aa1d0a71002dbbac8b0b1bf049a6bd1aa58e669d31f0355587af15a8e9 from membership
2022-06-14 08:47:42.657 UTC [comm.grpc.server] 1 -> INFO 10d11 streaming call completed grpc.service=gossip.Gossip grpc.method=GossipStream grpc.peer_address=10.20.30.140:38546 grpc.peer_subject="CN=peer2.org1-shared.com,OU=peer,O=Hyperledger,ST=North Carolina,C=US" error="rpc error: code = Canceled desc = context canceled" grpc.code=Canceled grpc.call_duration=24.712078411s
2022-06-14 08:47:42.659 UTC [gossip.comm] createConnection -> INFO 10d12 Peer peer2.org1-shared.com:7051 changed its PKI-ID from 5f03f639eb1bc912609b9208a2577cb8575c20a103d71155efe68487dedde236 to 99d3b90022039ca4d3311c96b1ccddc64e58d170f15e39cc18232e43be1c7b63
2022-06-14 08:47:42.659 UTC [gossip.discovery] purge -> INFO 10d13 Purging 5f03f639eb1bc912609b9208a2577cb8575c20a103d71155efe68487dedde236 from membership
2022-06-14 08:47:42.659 UTC [gossip.comm] createConnection -> INFO 10d14 Peer peer2.org1-shared.com:7051 changed its PKI-ID from d4b6c5c8659587ea44ac4ba1f813dc3e52194ca0c2e09b7ecfe1cbd47d1db7c4 to 99d3b90022039ca4d3311c96b1ccddc64e58d170f15e39cc18232e43be1c7b63
2022-06-14 08:47:42.660 UTC [gossip.comm] func1 -> WARN 10d15 peer2.org1-shared.com:7051, PKIid:d4b6c5c8659587ea44ac4ba1f813dc3e52194ca0c2e09b7ecfe1cbd47d1db7c4 isn't responsive: EOF
2022-06-14 08:47:42.660 UTC [gossip.discovery] purge -> INFO 10d16 Purging d4b6c5c8659587ea44ac4ba1f813dc3e52194ca0c2e09b7ecfe1cbd47d1db7c4 from membership
2022-06-14 08:47:42.675 UTC [comm.grpc.server] 1 -> INFO 10d17 unary call completed grpc.service=gossip.Gossip grpc.method=Ping grpc.request_deadline=2022-06-14T08:47:44.674Z grpc.peer_address=10.20.30.140:39676 grpc.peer_subject="CN=peer2.org1.com,OU=peer,O=Hyperledger,ST=North Carolina,C=US" grpc.code=OK grpc.call_duration=62.13µs
2022-06-14 08:47:42.710 UTC [endorser] callChaincode -> INFO 10d18 finished chaincode: assets duration: 37ms channel=assetschannel txID=58a9628e
2022-06-14 08:47:42.711 UTC [comm.grpc.server] 1 -> INFO 10d19 unary call completed grpc.service=protos.Endorser grpc.method=ProcessProposal grpc.peer_address=10.20.30.140:34366 grpc.peer_subject="CN=fabric-common" grpc.code=OK grpc.call_duration=39.919313ms
2022-06-14 08:47:43.034 UTC [endorser] callChaincode -> INFO 10d1a finished chaincode: assets duration: 35ms channel=assetschannel txID=010913f5
2022-06-14 08:47:43.035 UTC [comm.grpc.server] 1 -> INFO 10d1b unary call completed grpc.service=protos.Endorser grpc.method=ProcessProposal grpc.peer_address=10.20.30.140:34366 grpc.peer_subject="CN=fabric-common" grpc.code=OK grpc.call_duration=38.114437ms
2022-06-14 08:47:43.153 UTC [endorser] callChaincode -> INFO 10d1c finished chaincode: assets duration: 49ms channel=assetschannel txID=49d4c88f
2022-06-14 08:47:43.153 UTC [comm.grpc.server] 1 -> INFO 10d1d unary call completed grpc.service=protos.Endorser grpc.method=ProcessProposal grpc.peer_address=10.20.30.140:34366 grpc.peer_subject="CN=fabric-common" grpc.code=OK grpc.call_duration=52.987518ms
2022-06-14 08:47:43.279 UTC [endorser] callChaincode -> INFO 10d1e finished chaincode: assets duration: 85ms channel=assetschannel txID=69279b3e
It keeps purging the same PKI-IDs again and again.
After talking to some experts, I learned that peers might be fetching the service-discovery cache from other peers, which is why the errors were not resolved even after restarting a single peer.
Restarting all the peers at the same time resolved the issue.
Thanks to Yacov Manevich.

Endorsement policy failure error is taking a long time in hyperledger fabric v2.2

I have discovery enabled, and I am testing whether a transaction will fail if the endorsing organizations set on the transaction do not match the organizations actually involved in the transaction.
I am attempting to create a private data collection with ORG1 and as part of the transaction I have used the following method to set the endorsing organizations:
transaction.setEndorsingOrganizations(...['ORG2']);
The test fails as expected, but it takes 60 seconds to do so.
The logs are as follows:
peer (org1) logs:
2021-01-25 13:31:50.876 UTC [gossip.privdata] StoreBlock -> INFO 055 [default] Received block [15] from buffer
2021-01-25 13:31:50.878 UTC [vscc] Validate -> ERRO 056 VSCC error: stateBasedValidator.Validate failed, err validation of endorsement policy for collection _implicit_org_1 chaincode test-chaincode in tx 15:0 failed: signature set did not satisfy policy
2021-01-25 13:31:50.878 UTC [committer.txvalidator] validateTx -> ERRO 057 Dispatch for transaction txId = 5c52e14fa24a6e90effbd9dffcbb3fbc6cac1091c1bf3b6512616084 returned error: validation of endorsement policy for collection _implicit_org_1 chaincode test-chaincode in tx 15:0 failed: signature set did not satisfy policy
2021-01-25 13:31:50.878 UTC [committer.txvalidator] Validate -> INFO 058 [default] Validated block [15] in 1ms
2021-01-25 13:31:50.878 UTC [gossip.privdata] fetchPrivateData -> WARN 059 Do not know any peer in the channel( default ) that matches the policies , aborting
2021-01-25 13:31:50.878 UTC [gossip.privdata] populateFromRemotePeers -> WARN 05a Failed fetching private data from remote peers for dig2src:[map[{5c52e14fa24a6e90effbd9dffcbb3fbc6cac1091c1bf3b6512616084 test-chaincode _implicit_org_1 15 0}:[]]], err: Empty membership channel=default
2021-01-25 13:31:51.879 UTC [gossip.privdata] fetchPrivateData -> WARN 05b Do not know any peer in the channel( default ) that matches the policies , aborting
2021-01-25 13:31:51.879 UTC [gossip.privdata] populateFromRemotePeers -> WARN 05c Failed fetching private data from remote peers for dig2src:[map[{5c52e14fa24a6e90effbd9dffcbb3fbc6cac1091c1bf3b6512616084 test-chaincode _implicit_org_1 15 0}:[]]], err: Empty membership channel=default
2021-01-25 13:31:52.880 UTC [gossip.privdata] fetchPrivateData -> WARN 05d Do not know any peer in the channel( default ) that matches the policies , aborting
2021-01-25 13:31:52.880 UTC [gossip.privdata] populateFromRemotePeers -> WARN 05e Failed fetching private data from remote peers for dig2src:[map[{5c52e14fa24a6e90effbd9dffcbb3fbc6cac1091c1bf3b6512616084 test-chaincode _implicit_org_1 15 0}:[]]], err: Empty membership channel=default
The fetchPrivateData and populateFromRemotePeers warnings repeat over and over until:
2021-01-25 13:32:50.873 UTC [gossip.privdata] RetrievePvtdata -> WARN 0d4 Could not fetch all 1 eligible collection private write sets for block [15] (0 from local cache, 0 from transient store, 0 from other peers). Will commit block with missing private write sets:[txID: 5c52e14fa24a6e90effbd9dffcbb3fbc6cac1091c1bf3b6512616084, seq: 0, namespace: test-chaincode, collection: _implicit_org_1, hash: c189e3f3e8546ecde9b98b3aae67885cb8effeac1d35371a512c47db6a84
] channel=default
2021-01-25 13:32:50.873 UTC [validation] preprocessProtoBlock -> WARN 0d5 Channel [default]: Block [15] Transaction index [0] TxId [5c52e14fa24a6e90effbd9dffcbb3fbc6cac1091c1bf3b6512616084] marked as invalid by committer. Reason code [ENDORSEMENT_POLICY_FAILURE]
2021-01-25 13:32:50.903 UTC [kvledger] CommitLegacy -> INFO 0d6 [default] Committed block [15] with 1 transaction(s) in 29ms (state_validation=0ms block_and_pvtdata_commit=11ms state_commit=16ms) commitHash=[bcfc168b343de9297a2cd4d9f202840dbde2478ab898998915b2c589]
2021-01-25 13:33:00.433 UTC [gossip.privdata] fetchPrivateData -> WARN 0d7 Do not know any peer in the channel( default ) that matches the policies , aborting
2021-01-25 13:33:00.433 UTC [gossip.privdata] reconcile -> ERRO 0d8 reconciliation error when trying to fetch missing items from different peers: Empty membership
2021-01-25 13:33:00.434 UTC [gossip.privdata] run -> ERRO 0d9 Failed to reconcile missing private info, error: Empty membership
The problem isn't the result; it's the time it takes to return the error. Does anyone know what could be causing this, and is it expected behaviour for it to take this long? In the peer logs, the endorsement-policy validation fails right at the beginning, but the peer then keeps trying to fetch the private data anyway.
Check core.yaml. The usual default setting is:
pvtData:
    pullRetryThreshold: 60s
That looks like the variable controlling the 60-second delay.
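For context, a sketch of where that setting lives in core.yaml (nesting under peer.gossip is my recollection of the default layout; verify against your own core.yaml):

```yaml
# core.yaml (fragment) -- illustrative, values shown are the defaults
peer:
  gossip:
    pvtData:
      # Maximum time the peer keeps retrying to pull missing private data
      # from other peers before committing the block without it.
      pullRetryThreshold: 60s
```

Lowering this value should shorten the delay, at the cost of giving up on private-data reconciliation sooner.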

Spring Integration DSL aggregator not releasing messages

We are using the code below for aggregation, and we have noticed that messages are intermittently not released to the subsequent flow. We enabled trace logging for the aggregator package.
IntegrationFlows
    .from("upstream")
    .log(INFO, g -> "Message Received for Aggregation: " + g.getPayload())
    .aggregate(aggregatorSpec -> aggregatorSpec
        .correlationStrategy(m -> 1)
        .expireGroupsUponCompletion(true)
        .expireGroupsUponTimeout(true)
        .groupTimeout(30000)
        .sendPartialResultOnExpiry(true)
        .releaseStrategy(new TimeoutCountSequenceSizeReleaseStrategy(100, 30000)))
    .log(INFO, g -> "Message released: " + ((ArrayList) g.getPayload()).size())
    .handle(someService)
    .get();
This log shows a message that was never completed by the aggregator:
2019-02-20 16:53:44,366 UTC INFO [org.springframework.jms.listener.DefaultMessageListenerContainer#0-4] org.springframework.integration.handler.LoggingHandler- Message Received for Aggregation: Message1
2019-02-20 16:53:44,366 UTC DEBUG [org.springframework.jms.listener.DefaultMessageListenerContainer#0-4] org.springframework.integration.aggregator.AggregatingMessageHandler- org.springframework.integration.aggregator.AggregatingMessageHandler#0 received message: GenericMessage [payload=Message1, headers={jms headers}]
2019-02-20 16:53:44,366 UTC DEBUG [org.springframework.jms.listener.DefaultMessageListenerContainer#0-4] org.springframework.integration.aggregator.AggregatingMessageHandler- Handling message with correlationKey [1]: GenericMessage [payload=Message1, headers={jms headers}]
2019-02-20 16:53:44,367 UTC DEBUG [org.springframework.jms.listener.DefaultMessageListenerContainer#0-4] org.springframework.integration.aggregator.AggregatingMessageHandler- Schedule MessageGroup [ SimpleMessageGroup{groupId=1, messages=[GenericMessage [payload=Message1, headers={jms headers}] to 'forceComplete'.
This log shows a message that was completed by the aggregator:
2019-02-20 16:58:15,386 UTC INFO [org.springframework.jms.listener.DefaultMessageListenerContainer#0-3] org.springframework.integration.handler.LoggingHandler- Message Received for Aggregation: Message2
2019-02-20 16:58:15,386 UTC DEBUG [org.springframework.jms.listener.DefaultMessageListenerContainer#0-3] org.springframework.integration.aggregator.AggregatingMessageHandler- org.springframework.integration.aggregator.AggregatingMessageHandler#0 received message: GenericMessage [payload=Message2, headers={jms headers}]
2019-02-20 16:58:15,386 UTC DEBUG [org.springframework.jms.listener.DefaultMessageListenerContainer#0-3] org.springframework.integration.aggregator.AggregatingMessageHandler- Handling message with correlationKey [1]: GenericMessage [payload=Message2, headers={jms headers}]
2019-02-20 16:58:15,386 UTC DEBUG [org.springframework.jms.listener.DefaultMessageListenerContainer#0-3] org.springframework.integration.aggregator.AggregatingMessageHandler- Schedule MessageGroup [ SimpleMessageGroup{groupId=1, messages=[GenericMessage [payload=Message2, headers={jms headers}] to 'forceComplete'.
2019-02-20 16:58:45,387 UTC DEBUG [task-scheduler-6] org.springframework.integration.aggregator.AggregatingMessageHandler- Cancel 'forceComplete' scheduling for MessageGroup [ SimpleMessageGroup{groupId=1, messages=[GenericMessage [payload=Message2, headers={jms headers}].
2019-02-20 16:58:45,387 UTC DEBUG [task-scheduler-6] org.springframework.integration.aggregator.AggregatingMessageHandler- Completing group with correlationKey [1]
2019-02-20 16:58:45,387 UTC INFO [task-scheduler-6] org.springframework.integration.handler.LoggingHandler- Message released: 1
Can you help identify what is missing in the code?

How to set the cert when calling createChannel in the Fabric Node SDK

Please help me with channel creation. In the Node SDK I have:
// extract the channel config bytes from the envelope to be signed
const envelope = fs.readFileSync(`${channelConfigPath + channelName}.tx`),
      channelConfig = client.extractChannelConfig(envelope),
      signature = client.signChannelConfig(channelConfig);

// build the request with an admin-based transaction ID and send to the orderer
const request = {
    config: channelConfig,
    signatures: [signature],
    name: channelName,
    txId: client.newTransactionID(true)
};
client.createChannel(request);
But I get this error in the Docker logs of orderer.example.com:
2018-06-26 14:41:04.631 UTC [policies] Evaluate -> DEBU 120 Signature set did not satisfy policy /Channel/Application/Gov1MSP/Admins
2018-06-26 14:41:04.631 UTC [policies] Evaluate -> DEBU 121 == Done Evaluating *cauthdsl.policy Policy /Channel/Application/Gov1MSP/
2018-06-26 14:41:04.631 UTC [policies] func1 -> DEBU 122 Evaluation Failed: Only 0 policies were satisfied, but needed 1 of [ Gov1MSP.Admins ]
2018-06-26 14:41:04.631 UTC [policies] Evaluate -> DEBU 123 Signature set did not satisfy policy /Channel/Application/ChannelCreationPolicy
2018-06-26 14:41:04.631 UTC [policies] Evaluate -> DEBU 124 == Done Evaluating *policies.implicitMetaPolicy Policy /Channel/Application/ChannelCreationPolicy
2018-06-26 14:41:04.631 UTC [orderer/common/broadcast] Handle -> WARN 125 [channel: usachannel] Rejecting broadcast of config message from 172.18.0.1:46638 because of error: Error authorizing update: Error validating DeltaSet: Policy for [Groups] /Channel/Application not satisfied: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining
2018-06-26 14:41:04.631 UTC [orderer/common/server] func1 -> DEBU 126 Closing Broadcast stream
So, how should I set the cert from /etc/hyperledger/msp/users/Admin@org1.example.com/msp in the Fabric Node SDK?
P.S. With the cert above I can create the channel using peer channel create.
I am using version "^1.2.0" of fabric-client and fabric-ca-client.
To set the signing identity of the client, you need to use the setAdminSigningIdentity method.
For the private key, I used the key in the keystore directory of the MSP folder.
In my case it was: "crypto-config/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp/keystore".
For the certificate, I used the same folder but the file "signcerts/Admin@org1.example.com-cert.pem".
Then you need to use newTransactionID(true), because otherwise it will use the userContext, which you do not want since you provided the admin signing identity.

Timeout while deploying chaincode in hyperledger fabric

I get this message in my Docker container when I try to deploy my chaincode from a Node.js file:
[dockercontroller] Start -> DEBU 0a1 start-could not find image ...attempt to recreate image no such image
vp_1 | 18:49:40.460 [eventhub_producer] deRegisterHandler -> DEBU 0a2 deRegisterHandler BLOCK
vp_1 | 18:49:40.515 [eventhub_producer] Chat -> ERRO 0a3 Error during Chat, stopping handler: stream error: code = 1 desc = "context canceled"
