Rejecting deliver request because of consenter error - hyperledger-fabric

We have an ongoing project running on Fabric 1.0.1 and we are stuck at an issue. The environment is 3 orderers / 3 Kafka brokers / 3 ZooKeeper nodes - for each of the three, 2 run on one server and 1 on the other.
We had a system upgrade and had to restart all of the Docker containers.
Now the orderer shows the warning below: 2018-06-19 20:56:23.992 UTC [orderer/common/deliver] Handle -> WARN 407 [channel: channel] Rejecting deliver request because of consenter error
Whenever we post a transaction we get the messages below:
2018-06-19 20:43:15.522 UTC [orderer/kafka] Enqueue -> DEBU 376 [channel: channel] Enqueueing envelope...
2018-06-19 20:43:15.522 UTC [orderer/kafka] Enqueue -> WARN 377 [channel: channel] Will not enqueue, consenter for this channel hasn't started yet

Please try clearing the logs of all the peers and the Kafka brokers as well, and restart the network. If you find any clue as to why this is happening, please let us know.
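For context, "consenter for this channel hasn't started yet" generally means the orderer has not yet re-established its connection to the Kafka brokers after the restart and is still inside its retry window; deliver requests are rejected with the consenter error until that connection comes up. The retry behaviour is controlled by the Kafka section of orderer.yaml - the excerpt below is an illustrative sketch with example values, not taken from the asker's configuration:
# Illustrative orderer.yaml excerpt (example values, not the actual config)
Kafka:
  Retry:
    ShortInterval: 5s   # how often to retry the broker connection at first
    ShortTotal: 10m     # total time spent on short retries
    LongInterval: 5m    # slower retry cadence after the short window
    LongTotal: 12h      # the consenter gives up after this long
  Verbose: true         # log the underlying Kafka client traffic for debugging
The first thing to verify is that all Kafka brokers and ZooKeeper nodes came back healthy after the system upgrade and that the orderers can actually reach them.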

Related

Hyperledger fabric peer connection with HSM fails randomly after running for a while

Good day,
We have an integration between an HSM (Luna 6.3) and Hyperledger Fabric; we use the Luna to store the private keys of the peers and orderers. The integration works fine, but after running for a while we get this error on the peers:
2021-04-26 19:33:04.544 UTC [endorser] callChaincode -> INFO f80a [mychannel][a3eb7ef5] Exit chaincode: name:"mycontract" (21ms)
2021-04-26 19:33:04.614 UTC [comm.grpc.server] 1 -> INFO f80b unary call completed grpc.service=protos.Endorser grpc.method=ProcessProposal grpc.peer_address=X.X.X.X:48698 grpc.peer_subject="CN=user#company.com.tls,OU=client" grpc.code=OK grpc.call_duration=92.644ms
2021-04-26 20:30:18.831 UTC [gossip.gossip] Gossip -> WARN f80c Failed signing message: Failed generating signature [P11: sign failed [pkcs11: 0x30: CKR_DEVICE_ERROR]]
github.com/hyperledger/fabric/gossip/gossip.(*gossipServiceImpl).Gossip
/opt/gopath/src/github.com/hyperledger/fabric/gossip/gossip/gossip_impl.go:683
github.com/hyperledger/fabric/gossip/election.(*adapterImpl).Gossip
/opt/gopath/src/github.com/hyperledger/fabric/gossip/election/adapter.go:99
github.com/hyperledger/fabric/gossip/election.(*leaderElectionSvcImpl).leader
/opt/gopath/src/github.com/hyperledger/fabric/gossip/election/election.go:350
github.com/hyperledger/fabric/gossip/election.(*leaderElectionSvcImpl).run
/opt/gopath/src/github.com/hyperledger/fabric/gossip/election/election.go:282
runtime.goexit
Although the error ends with runtime.goexit, the program doesn't kill the pod where it's running, and it isn't able to establish a new connection with the HSM; it just keeps repeating the same error.
After restarting the pod the connection works well again, and the peer runs normally, picking up the private keys from the HSM.
Any idea why this is happening? Is there a way to force the program to exit so the pod can re-establish the connection? Or any way to prevent this in the future?
Any help would be appreciated.
Thanks,

Unable to update channel config using Fabric SDK Java: field "common.ConfigUpdate.channel_id" contains invalid UTF-8

Network setup:
The network is set up with 1 orderer + 2 organizations with 2 peers each (2 * 2 = 4 peers).
I don't think there's a problem with the network, the crypto materials, or the channel config transactions, since I've done similar things using Fabric SDK Go without running into this kind of problem.
What I have done:
The error occurs after I created the channel "mychannel", added two peers of the client org to the channel, initialized the channel using Fabric SDK Java and then tried to update the channel.
Before I tried to invoke channel.updateChannelConfiguration() to apply the config tx file Org1MSPanchors.tx, I managed to get the signatures from the admins of both the orgs.
The key lines (the project is written in Kotlin, the following is the Java equivalent):
var updateConfig = new UpdateChannelConfiguration(new File("path/to/file.tx"));
// The signatures have been created from the admins of the 2 orgs.
channel.updateChannelConfiguration(updateConfig, signatures);
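For completeness, the signatures mentioned in the comment were gathered roughly along these lines (a hedged sketch; client, org1Admin and org2Admin are placeholders for the HFClient instance and the two org admin User objects, not names from the actual project):
// Sketch of the signature-gathering step referenced in the comment above.
byte[] sigOrg1 = client.getUpdateChannelConfigurationSignature(updateConfig, org1Admin);
byte[] sigOrg2 = client.getUpdateChannelConfigurationSignature(updateConfig, org2Admin);
byte[][] signatures = new byte[][] { sigOrg1, sigOrg2 };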
Logs:
After the invocation, the program crashed with the following info.
Caused by: org.hyperledger.fabric.sdk.exception.TransactionException: Channel mychannel, send transaction failed on orderer OrdererClient{id: 4, channel: mychannel, name: orderer.***.com, url: grpcs://localhost:7050}. Reason: Channel mychannel orderer orderer.***.com status returned failure code 400 (BAD_REQUEST) during orderer next
at org.hyperledger.fabric.sdk.OrdererClient.sendTransaction(OrdererClient.java:240) ~[fabric-sdk-java-1.4.13.jar:na]
at org.hyperledger.fabric.sdk.Orderer.sendTransaction(Orderer.java:164) ~[fabric-sdk-java-1.4.13.jar:na]
at org.hyperledger.fabric.sdk.Channel.sendUpdateChannel(Channel.java:549) ~[fabric-sdk-java-1.4.13.jar:na]
at org.hyperledger.fabric.sdk.Channel.updateChannelConfiguration(Channel.java:455) ~[fabric-sdk-java-1.4.13.jar:na]
at org.hyperledger.fabric.sdk.Channel.updateChannelConfiguration(Channel.java:412) ~[fabric-sdk-java-1.4.13.jar:na]
at com.***.util.SDKUtil$Companion.updateChannel(SDKUtil.kt:68) ~[main/:na]
at com.***.***Application.configureChannel(FjstApplication.kt:76) ~[main/:na]
at com.***.***Application.access$configureChannel(FjstApplication.kt:19) ~[main/:na]
at com.***.***Application$commandLineRunner$2.run(FjstApplication.kt:54) ~[main/:na]
at org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:804) [spring-boot-2.4.0.jar:2.4.0]
... 5 common frames omitted
Caused by: org.hyperledger.fabric.sdk.exception.TransactionException: Channel mychannel orderer orderer.***.com status returned failure code 400 (BAD_REQUEST) during orderer next
at org.hyperledger.fabric.sdk.OrdererClient$1.onNext(OrdererClient.java:186) ~[fabric-sdk-java-1.4.13.jar:na]
And the docker logs of orderer.***.com:
2020-12-14 09:14:48.443 UTC [orderer.commmon.multichannel] newChain -> INFO 00b Created and starting new chain mychannel
2020-12-14 09:14:48.451 UTC [comm.grpc.server] 1 -> INFO 00c streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Deliver grpc.peer_address=192.168.128.1:35988 grpc.code=OK grpc.call_duration=60.307408ms
2020-12-14 09:15:01.201 UTC [comm.grpc.server] 1 -> INFO 00d streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Deliver grpc.peer_address=192.168.128.1:35988 grpc.code=OK grpc.call_duration=1.770753ms
2020-12-14 09:15:01.208 UTC [orderer.common.broadcast] ProcessMessage -> WARN 00e [channel: mychannel] Rejecting broadcast of config message from 192.168.128.1:35988 because of error: error applying config update to existing channel 'mychannel': error authorizing update: proto: field "common.ConfigUpdate.channel_id" contains invalid UTF-8
2020-12-14 09:15:01.208 UTC [comm.grpc.server] 1 -> INFO 00f streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Broadcast grpc.peer_address=192.168.128.1:35988 grpc.code=OK grpc.call_duration=349.969µs
The config and network files:
https://1drv.ms/u/s!Aj_rPvkyS8y8gtsXEaKBXD1riM12CQ?e=jkMeYn
Please help:
If a solution is not obvious, could you please tell me what the possible causes are?
Thanks!
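One possible cause worth checking, offered as a hedged guess since the orderer complains specifically about common.ConfigUpdate.channel_id: channel.updateChannelConfiguration() is given raw ConfigUpdate bytes to wrap and sign, while the Org1MSPanchors.tx that configtxgen writes is a full Envelope wrapping those bytes, so passing the file as-is would make the orderer try to parse an Envelope as a ConfigUpdate. Below is a sketch of extracting the inner bytes first, assuming that is what is happening (the proto classes are the ones bundled with fabric-sdk-java):
import java.nio.file.Files;
import java.nio.file.Paths;
import org.hyperledger.fabric.protos.common.Common;
import org.hyperledger.fabric.protos.common.Configtx;

// Parse the Envelope written by configtxgen and pull out the ConfigUpdate bytes.
byte[] envelopeBytes = Files.readAllBytes(Paths.get("path/to/Org1MSPanchors.tx"));
Common.Envelope envelope = Common.Envelope.parseFrom(envelopeBytes);
Common.Payload payload = Common.Payload.parseFrom(envelope.getPayload());
Configtx.ConfigUpdateEnvelope cue = Configtx.ConfigUpdateEnvelope.parseFrom(payload.getData());
var updateConfig = new UpdateChannelConfiguration(cue.getConfigUpdate().toByteArray());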

Failed to order the transaction. Error code: SERVICE_UNAVAILABLE using raft consensus protocol

I am facing this error while using the Raft consensus protocol. I have set up 5 orderers (1 on the first server and 2 each on the other 2 servers), namely orderer1 to orderer5.
Everything works fine with the setup and all the orderers participate in the leader election process, but when I try to invoke a transaction I get an error like this:
[ERROR] invoke-chaincode - Failed to order the transaction. Error code: SERVICE_UNAVAILABLE
This error comes only when I try to invoke using orderer2; it works well with any other orderer. Please help me resolve the issue.
Here are the logs of orderer2, which is up and running:
2019-08-13 07:05:59.374 UTC [orderer.consensus.etcdraft] run -> INFO 318 raft.node: 2 elected leader 4 at term 2 channel=invoice node=2
2019-08-13 07:05:59.375 UTC [orderer.consensus.etcdraft] serveRequest -> INFO 319 Raft leader changed: 0 -> 4 channel=invoice node=2
2019-08-13 07:05:59.580 UTC [common.deliver] Handle -> WARN 31a Error reading from xx.xx.xx.xx:56890: rpc error: code = Canceled desc = context canceled
2019-08-13 07:05:59.580 UTC [comm.grpc.server] 1 -> INFO 31b streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Deliver grpc.peer_address=xx.xx.xx.xx:56890 error="rpc error: code = Canceled desc = context canceled" grpc.code=Canceled grpc.call_duration=207.535623ms
2019-08-13 07:13:20.952 UTC [orderer.common.broadcast] ProcessMessage -> WARN 320 [channel: invoice] Rejecting broadcast of normal message from xx.xx.xx.xx:56916 with SERVICE_UNAVAILABLE: rejected by Order: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp 192.168.224.4:8050: connect: connection refused"
2019-08-13 07:13:20.952 UTC [comm.grpc.server] 1 -> INFO 321 streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Broadcast grpc.peer_address=xx.xx.xx.xx:56916 grpc.code=OK grpc.call_duration=35.477429971s
I just ran into this very issue, with orderer2 as well. It turned out that I had made a typo in the orderer config: in the General.Cluster section I had accidentally named the ClientCertificate and ClientPrivateKey fields ServerCertificate and ServerPrivateKey. I switched them back, left the Server* values blank, pointed the Client* fields at my client certs, and everything worked.
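For illustration, the corrected General.Cluster section of orderer.yaml looks roughly like this (a sketch; the certificate paths are placeholders, not the actual deployment's):
General:
  Cluster:
    # These must be the Client* fields; naming them ServerCertificate /
    # ServerPrivateKey was the typo that broke intra-cluster dialing.
    ClientCertificate: /var/hyperledger/orderer/tls/client.crt
    ClientPrivateKey: /var/hyperledger/orderer/tls/client.key
    # ServerCertificate / ServerPrivateKey can stay unset; the cluster
    # service then falls back to the general TLS listener's certificate and key.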

How to fix "context finished before block retrieved: context canceled" occurred while instantiating chaincode?

The instantiation command completes successfully, but when analyzing peer logs, you may notice this:
2019-04-17 17:25:52.581 UTC [gossip.state] commitBlock -> DEBU 48c [canal-contrato] Committed block [1] with 1 transaction(s)
2019-04-17 17:25:52.581 UTC [common.deliver] deliverBlocks -> DEBU 48d [channel: canal-contrato] Delivering block for (0xc00023f9c0) for 192.168.16.1:48230
2019-04-17 17:25:52.581 UTC [fsblkstorage] waitForBlock -> DEBU 48e Going to wait for newer blocks. maxAvailaBlockNumber=[1], waitForBlockNum=[2]
2019-04-17 17:25:52.586 UTC [common.deliver] deliverBlocks -> DEBU 48f Context canceled, aborting wait for next block
2019-04-17 17:25:52.586 UTC [common.deliverevents] func1 -> DEBU 490 Closing Deliver stream
2019-04-17 17:25:52.586 UTC [comm.grpc.server] 1 -> INFO 491 streaming call completed {"grpc.start_time": "2019-04-17T17:25:50.441Z", "grpc.service": "protos.Deliver", "grpc.method": "DeliverFiltered", "grpc.peer_address": "192.168.16.1:48230", "error": "context finished before block retrieved: context canceled", "grpc.code": "Unknown", "grpc.call_duration": "2.144399922s"}
Can anyone point out what I might be doing wrong, and what the consequences of this error are?
Notes:
The orderer logs don't show any errors
All containers are running correctly
I'm using node version 8.9.0 (with npm 5.5.1)
I have 1 organization with 1 peer, 1 CA and 1 orderer (just to test)
I'm using Hyperledger Fabric version 1.4
This is not an error. You are using an SDK that connects to the peer and waits for the instantiation to finish. When the peer receives the block, the SDK closes the gRPC stream because it no longer needs it, and the peer logs this message to tell you why the stream was closed from the server side.

REQUEST_TIMEOUT when trying to start business network

When I try to start my business network app on composer v1.1 I get a timeout after 5 minutes with the following message:
2018-03-31 12:54:39.183 UTC [chaincode] Launch -> ERRO 4c3 launchAndWaitForRegister failed: timeout expired while starting chaincode sre-frontend-app:0.0.1(networkid:dev,peerid:peer0.org1.example.com,tx:ab15fb53ed8e1de99ad7253ffa2ab4b68bd787b8a9561c9b422bc203e14fc048)
There is also another error in the logs:
2018-03-31 12:54:39.183 UTC [endorser] simulateProposal -> ERRO 4c5 [composerchannel][ab15fb53] failed to invoke chaincode name:"lscc" , error: timeout expired while starting chaincode sre-frontend-app:0.0.1(networkid:dev,peerid:peer0.org1.example.com,tx:ab15fb53ed8e1de99ad7253ffa2ab4b68bd787b8a9561c9b422bc203e14fc048)
Any idea what might be happening? I have tried changing localhost to 0.0.0.0 when building the PeerAdmin card, and even tried changing the listening address of fabric-ca to localhost based on reading others' solutions, but these changes did not work for me.
Any other suggestions? Do I need to extend the timeouts - and how would I do that? (See the sketch below.)
Thanks in advance :)
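On the timeout question: one hedged option, assuming a docker-compose based Fabric setup (the service name and values below are illustrative, not from the actual environment), is to raise the peer's chaincode timeouts, since the 5-minute failure matches the peer's default 300s chaincode startup timeout:
# Illustrative docker-compose excerpt for the peer service (placeholder values)
peer0.org1.example.com:
  environment:
    - CORE_CHAINCODE_STARTUPTIMEOUT=600s   # default is 300s; Composer runtime installs can exceed it
    - CORE_CHAINCODE_EXECUTETIMEOUT=120s   # raise this if invocations themselves time out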
