Minimum requested sequence is X but orderer is at sequence x ERROR - hyperledger-fabric

We started to experience the following error seemingly out of the blue. Everything was working fine (or appeared to be) until one of the orderers got out of sync and could no longer connect to the other components:
DEBU [common.deliver] deliverBlocks - [channel: mychannel] Delivering block [35151] for (0xc00bcf2b40) for 10.0.1.37:60946
DEBU [blkstorage] waitForBlock - Going to wait for newer blocks. maxAvailaBlockNumber=[35151], waitForBlockNum=[35152]
INFO [orderer.common.cluster.puller] fetchLastBlockSeq - Skipping pulling from orderer.example.com:7050: minimum requested sequence is 35152 but orderer.example.com:7050 is at sequence 35151 channel=mychannel
[grpc] InfoDepth - DEBU 1627e [core]Channel Connectivity change to SHUTDOWN
[grpc] InfoDepth - DEBU 1627f [core]Subchannel Connectivity change to SHUTDOWN
WARN [orderer.common.cluster.puller] func1 - Received error of type 'minimum requested sequence is 35152 but orderer.example.com:7050 is at sequence 35151' from orderer.example.com:7050 channel=mychannel
[grpc] InfoDepth - DEBU 16280 [transport]transport: loopyWriter.run returning. connection error: desc = "transport is closing"
We've tried to recreate the world state a couple of times, and it works fine until a certain point, usually block 35152.
The HL Fabric network is composed of three orgs running on docker-swarm, each org on a different VM.
Any ideas?

Related

Unable to fetch private data from peers (Empty membership) - Hyperledger Fabric 1.4.6

I have a problem with a Fabric 1.4.6 network (27 peers and 5 orderers) where one peer (an anchor peer) in an organization has stopped committing transactions. I can't understand why it started showing this message without any changes to the network; it was working normally before that.
The message is:
2020-08-26 19:56:35.147 UTC [gossip.privdata] fetchPrivateData -> WARN fc2 Do not know any peer in the channel( xxxx ) that matches the policies , aborting
2020-08-26 19:56:35.147 UTC [gossip.privdata] fetchFromPeers -> WARN fc3 Failed fetching private data for block 743444 from peers: Empty membership
2020-08-26 19:56:36.149 UTC [gossip.privdata] fetchPrivateData -> WARN fc4 Do not know any peer in the channel( xxxx ) that matches the policies , aborting
I already tried to update the chaincode across all peers to see if something changes, but even though all the other peers are updated and still working with their respective PDCs, this one stopped updating the chaincode as well.
I know we should have other peers configured to disseminate the private data, but unfortunately we didn't do that, and now I need to find a way to make this peer work again. The other 26 peers are fine and they all have the same config (changing only the organization). Can anyone help me find a way to make this peer accept and commit new transactions, even if it causes some of the private data to be lost?
Editing to add some more info. When I try to send a new transaction to this peer, here's what happens:
2020-08-28 17:29:07.018 UTC [endorser] callChaincode -> INFO 3c0b [channel][6b3e2fcc] Entry chaincode: name:"chaincode"
2020-08-28 17:29:07.022 UTC [endorser] callChaincode -> INFO 3c0c [channel][6b3e2fcc] Exit chaincode: name:"chaincode" (4ms)
2020-08-28 17:29:07.033 UTC [comm.grpc.server] 1 -> INFO 3c0d unary call completed grpc.service=protos.Endorser grpc.method=ProcessProposal grpc.peer_address=172.21.0.4:40998 grpc.code=OK grpc.call_duration=17.194301ms
2020-08-28 17:29:53.670 UTC [gossip.privdata] StoreBlock -> WARN 3c2f [channel] Could not fetch all missing collection private write sets from remote peers. Will commit block [744876] with missing private write sets:[txID: bdeb55aa80d4c2a2f615abeefe0dbb97a60a08babb5ef2a1f9a0627fe4bf2ccb, seq: 0, namespace: chaincode, collection: collectionTestResults, hash: e0cde0ce12a35de1e7628e9283b26e20849ea5e112ef0daeb8a5a6d7aa1a1706
txID: f505a249ca1c4136711c8402cb2333a2e1b59cb02b573749f4c9488194e3a682, seq: 1, namespace: chaincode, collection: collectionTestResults, hash: ca65f8ed487f7e06201f25a7e0872c522afcb54c1c64c137cec8ef1d31e56d6d
txID: 3032cc2c4ce4702ef1ec0da28ca7e5a0bd4b2c4604915704ae63fc9d8342c138, seq: 2, namespace: chaincode, collection: collectionTestResults, hash: 35c4be3879a191f9e2f132188b313bf4f297e2881e4ad6149c40708191d972fb
]
2020-08-28 17:29:53.675 UTC [statebasedval] ValidateAndPrepareBatch -> WARN 3c30 Block [744876] Transaction index [0] TxId [bdeb55aa80d4c2a2f615abeefe0dbb97a60a08babb5ef2a1f9a0627fe4bf2ccb] marked as invalid by state validator. Reason code [MVCC_READ_CONFLICT]
2020-08-28 17:29:53.675 UTC [statebasedval] ValidateAndPrepareBatch -> WARN 3c31 Block [744876] Transaction index [1] TxId [f505a249ca1c4136711c8402cb2333a2e1b59cb02b573749f4c9488194e3a682] marked as invalid by state validator. Reason code [MVCC_READ_CONFLICT]
2020-08-28 17:29:53.699 UTC [kvledger] CommitWithPvtData -> INFO 3c32 [channel] Committed block [744876] with 3 transaction(s) in 29ms (state_validation=4ms block_and_pvtdata_commit=4ms state_commit=19ms) commitHash=[128eff402fae08d58f50e6529e8e9903116374cac557901bad4fd666153c55aa]
After that, I queried the ledger for this specific document and looked inside CouchDB, but it was not added to the world state or to the PDC.
Another suspicious thing is that this peer's blocks are way behind the orderer, yet it doesn't seem to be fetching them; it accepts queries normally and even commits transactions that don't use this PDC.
You will get this error if the gossip layer can't find any other peers in the network that have access to this collection.
Check the peer logs for the gossip "Membership view" messages. These messages will state which other peers are known by this peer. If you don't see such messages, restart the peer so that you can see in the logs which other peers show up in the new "Membership view" messages.
Typically these issues are related to gossip configuration - double check your configuration values for:
peer.gossip.bootstrap
peer.gossip.endpoint
peer.gossip.externalEndpoint
Also make sure the peer can reach the bootstrap peer addresses, that other peers in the same org can reach this peer via the endpoint address, and that peers in other orgs can reach this peer via the externalEndpoint address.
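For reference, here is a minimal sketch of how these settings are commonly overridden with the CORE_PEER_GOSSIP_* environment variables in a docker-compose peer definition; the hostnames and ports are placeholders, not values taken from this question:

  peer0.org1.example.com:
    environment:
      # core.yaml peer.gossip.bootstrap - a peer in the same org to join gossip through
      - CORE_PEER_GOSSIP_BOOTSTRAP=peer1.org1.example.com:7051
      # core.yaml peer.gossip.endpoint - address advertised to peers in the same org
      - CORE_PEER_GOSSIP_ENDPOINT=peer0.org1.example.com:7051
      # core.yaml peer.gossip.externalEndpoint - address advertised to peers in other orgs
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org1.example.com:7051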
Once your peer makes connection to another peer that belongs to the same collection, it will reconcile (retrieve) the private data that was missed during this period.

Undocumented error in Hyperledger Fabric 2.1.0: Too many requests for /protos.Deliver, exceeding concurrency limit (2500)

The running instances were fine this afternoon, but suddenly this error started to appear. The chaincode function appears to be called and the write to the ledger succeeds, but the request is then rejected with this:
Too many requests for /protos.Deliver, exceeding concurrency limit (2500)
peer0.org1.xxx.com-hold-em_1-c068ec292dc2ab380801d4f31ca83c6b86104d5adf00d7ee78b940a7a7381c02] func2 -> INFO 1402c 2020-06-16T05:43:20.627Z info [c-api:lib/handler.js] [xxx-dc987af0] Calling chaincode Invoke() succeeded. Sending COMPLETED message back to peer
2020-06-16 05:43:20.628 UTC [peer.chaincode.dev-peer0.org1.xxx.com-xxx_1-171cd178bf0f55cbedf8e78207e38a4f25f12a26f8ae99732d2b7a0a64ffb656] func2 -> INFO 1402d 2020-06-16T05:43:20.628Z info [c-api:lib/handler.js] [xxx-dc987af0] Calling chaincode Invoke() succeeded. Sending COMPLETED message back to peer
2020-06-16 05:43:20.628 UTC [endorser] callChaincode -> INFO 1402e finished chaincode: xxxduration: 39ms channel=xxx txID=dc987af0
2020-06-16 05:43:20.629 UTC [comm.grpc.server] 1 -> INFO 1402f unary call completed grpc.service=protos.Endorser grpc.method=ProcessProposal grpc.peer_address=10.23.0.254:44788 grpc.peer_subject="CN=fabric-common" grpc.code=OK grpc.call_duration=41.702366ms
2020-06-16 05:43:20.647 UTC [nodeCmd] func1 -> ERRO 14030 Too many requests for /protos.Deliver, exceeding concurrency limit (2500)
2020-06-16 05:43:20.647 UTC [comm.grpc.server] 1 -> INFO 14031 streaming call completed grpc.service=protos.Deliver grpc.method=DeliverFiltered grpc.peer_address=10.23.0.254:44792 grpc.peer_subject="CN=fabric-common" error="too many requests for /protos.Deliver, exceeding concurrency limit (2500)" grpc.code=Unknown grpc.call_duration=124.448µs
2020-06-16 05:43:22.676 UTC [gossip.privdata] StoreBlock -> INFO 14032 [xxx] Received block [2657] from buffer
2020-06-16 05:43:22.681 UTC [committer.txvalidator] Validate -> INFO 14033 [xxx] Validated block [2657] in 4ms
2020-06-16 05:43:22.681 UTC [gossip.privdata] prepareBlockPvtdata -> INFO 14034 Successfully fetched all eligible collection private write sets for block [2657] channel=xxx
I don't know what could be happening. If there is a fix for this in a newer version, I would like to know how I can upgrade Fabric to the latest version.
The number of concurrent requests to the peer services is capped by default in Fabric v2.1.0+ to prevent poorly programmed or malicious clients from DoS-ing the peers. You can remove or raise this restriction by modifying the corresponding values in core.yaml: setting a limit to 0 removes it, or you can increase it to a value that makes sense for your environment.
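As a sketch, the relevant section of core.yaml in Fabric v2.1+ looks roughly like this; the 2500 defaults match the limit in the error above, and 0 disables a limit entirely:

peer:
  limits:
    concurrency:
      # maximum concurrent requests to the endorser service
      endorserService: 2500
      # maximum concurrent event/block delivery streams (protos.Deliver)
      deliverService: 2500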
Generally speaking, if these messages are occurring, it is because the client application is misbehaving and not releasing resources appropriately. Once a client is done with a particular call, it is important to close and clean up the associated network resources. A common source of leaked connections to the peer is requesting event streams, reading one event, and then opening a new stream without closing the previous one.

Hyperledger 2.0: reconciliation error when trying to fetch missing items from different peers: Empty membership

I have 3 orgs, currently running 1 peer per org, and one orderer.
I have a private data collection defined for 2 orgs:
{
  "name": "privateOrg1-2",
  "policy": "OR('Org1MSP.member','Org2MSP.member')",
  "requiredPeerCount": 0,
  "maxPeerCount": 3,
  "blockToLive": 30000,
  "memberOnlyRead": true
}
However, when I add data as a member of Org1, the data is not synced to Org2, and when I add data for Org2, it is not synced to Org1. The following errors appear in the logs:
2020-05-11 15:30:28.137 UTC [gossip.privdata] fetchPrivateData -> WARN 7a0a Do not know any peer in the channel( data-channel ) that matches the policies , aborting
2020-05-11 15:30:28.137 UTC [gossip.privdata] reconcile -> ERRO 7a0b reconciliation error when trying to fetch missing items from different peers: Empty membership
2020-05-11 15:30:28.137 UTC [gossip.privdata] run -> ERRO 7a0c Failed to reconcile missing private info, error: Empty membership
Non-private data is synced without problems.
What could be the problem?
I fixed the issue, but I believe the way everything works is not optimal.
I am simulating a distributed network: each peer runs on a separate machine, and all the Docker containers run standalone, not as part of Kubernetes or a Docker network.
I took the following steps:
I updated the channel configuration for each org with the Org1MSPanchors.tx, Org2MSPanchors.tx, and Org3MSPanchors.tx anchor peer updates generated by configtxgen. Earlier I hadn't done that.
I analyzed the logs and found that each peer tries to connect directly to an anchor peer of another org, and this connection was failing. To make it work, I added the anchor peers of all the orgs to extra_hosts in my peer docker-compose files.
Initially I thought the ordering service knew each anchor peer and that Org1's peers would get the IP addresses of Org2's peers from the ordering service. That was a little naive.
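As an illustration of the extra_hosts workaround, each peer's docker-compose service can map the other orgs' anchor peer hostnames to the IPs of the machines they run on; the hostnames and addresses below are placeholders, not values from this setup:

  peer0.org1.example.com:
    extra_hosts:
      # anchor peers of the other orgs, resolved to their host machines
      - "peer0.org2.example.com:10.0.0.12"
      - "peer0.org3.example.com:10.0.0.13"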

Getting error 'Failed pulling the last config block: retry attempts exhausted' on Raft orderer

I am trying to add a Raft ordering service to my HLF network, which is deployed on an AWS EKS cluster (Kubernetes). I have created 3 Raft orderers and 2 orgs with 2 peers each, and everything runs fine; however, when I try to create the channel, the orderer reports errors such as "[orderer.common.cluster.puller] func1 -> WARN 15fce Received error of type 'failed to create new connection: connection error: desc = "transport: error while dialing: dial tcp: lookup orderer2.dfarmadmin.com on 10.100.0.10:53: no such host"' from {orderer2.dfarmadmin.com:7050" and "[orderer.consensus.etcdraft] confirmSuspicion -> ERRO 225cb Failed pulling the last config block: retry attempts exhausted channel=testchainid node=3".
I am not sure how to solve this, since all the files and other artifacts are in the right folders. Please see the attached screenshot of the error.
Please let me know what I am doing wrong; if you need more information, let me know.
Thanks in advance.

Error in Channel Creation in Hyperledger Fabric using Node.js

I am looking to set up a simple Hyperledger Fabric network without using Docker, and am trying to create a channel by following this tutorial using Node.js.
Steps I performed:
Set up crypto-config.yaml and generated the crypto material (crypto-config)
Set up fabric-ca-server-config.yaml by updating keyfile & certfile, and started the CA server
Set up configtx.yaml by defining one orderer and one organization, and created the genesis block and channel configuration transaction
Now when I run the tutorial's Node.js code, I get the following error on the orderer terminal and as the response of the Node.js call:
2019-01-09 16:16:54.619 IST [msp] DeserializeIdentity -> INFO 007 Obtaining identity
2019-01-09 16:16:54.619 IST [orderer/common/broadcast] Handle -> WARN 008 [channel: firstchannel] Rejecting broadcast of config message from 127.0.0.1:44198 because of error: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining: permission denied
I tried many changes and am still getting the same error. The same error also appears when creating the channel from the terminal using ./peer channel create -o localhost:7050 -c firstchannel -f ./channel.tx
Here is my channel.tx converted to JSON.
How can this be resolved?
I got it working!
In orderer.yaml I set the log level to DEBUG, and then the channel-creation problem was described clearly in the logs.
There were multiple things I needed to fix, but the main one was that in orderer.yaml GenesisMethod was set to file, so the orderer was creating the system channel from that genesis file and looking for the signature of the OrdererMSP admin. Changing GenesisMethod to provisional got channel creation working.
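For context, this corresponds roughly to the following settings in a Fabric 1.x orderer.yaml; the profile name shown is just the sample default, not necessarily the one used here:

General:
  # "file" bootstraps the system channel from the pre-built genesis block named in GenesisFile,
  # which in this case required the OrdererMSP admin signature for channel creation;
  # "provisional" generates the genesis block from the configtx.yaml profile below instead
  GenesisMethod: provisional
  GenesisProfile: SampleInsecureSolo
  GenesisFile: genesisblock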
