I'm trying to set up Hyperledger Fabric 1.2 on different hosts.
But when creating the channel via
./peer.sh channel create -o orderer0.trade.com:7050 -c tradechannel -f ../tradechannel.tx --cafile tlsca.trade.com-cert.pem -t 150s
I got this error in the CLI: got unexpected status: SERVICE_UNAVAILABLE -- will not enqueue, consenter for this channel hasn't started yet
and here is the log from the orderer:
[channel: tradechannel] Rejecting broadcast of message from 192.168.167.149:60655 with SERVICE_UNAVAILABLE: rejected by Consenter: will not enqueue, consenter for this channel hasn't started yet.
Closing Broadcast stream
transport: http2Server.HandleStreams failed to read frame: read tcp 192.168.171.33:7050->192.168.167.149:60655: read: connection reset by peer
It seems I have a problem with gRPC, but I have no idea what is going on.
The CLI runs on a MacBook and the orderer runs on Red Hat.
I also faced the same issue when I was spawning Hyperledger Fabric on Kubernetes using Kafka as the ordering service.
The error occurs because Kafka needs some time to sync with the Zookeepers. So you can wait for some time after creating the blockchain network and then create the channel. In my case I waited 10 minutes and was then able to create the channel.
But when I spawn the Fabric network using Docker with Kafka as the ordering service, this error never occurs; in that case Kafka synchronises with the Zookeepers very fast, and I don't know why the "rejected by Consenter: will not enqueue" error never appears.
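If you don't want to hard-code a fixed delay, you can also retry channel creation until the consenter is ready. A minimal sketch, assuming the same wrapper script and flags as in the question (the paths and names are the asker's, not verified here):
# Retry channel creation until the orderer's consenter has synced with Kafka/Zookeeper.
for i in $(seq 1 30); do
  ./peer.sh channel create -o orderer0.trade.com:7050 -c tradechannel \
    -f ../tradechannel.tx --cafile tlsca.trade.com-cert.pem -t 150s && break
  echo "Consenter not ready yet, retrying in 30s..."
  sleep 30
done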
We have migrated a Fabric network from one Azure tenant to another. The nodes are on three IaaS VMs and we took a snapshot of them; the network was not stopped. So, as a result, when we use these snapshots on the new tenant, it looks like the peers have more blocks than the orderers. The orderers run Raft consensus without problems and update their blocks, but they can't get the latest blocks from the peers.
I can run chaincode and read values, but cannot insert new transactions.
There is this error on the peers:
"2023-01-17 13:20:56.933 UTC [peer.blocksprovider] func1 -> WARN 269 Encountered an error reading from deliver stream: EOF channel=test orderer-address=orderer5.example.com:7050
2023-01-17 13:20:56.933 UTC [peer.blocksprovider] DeliverBlocks -> WARN 26a Got error while attempting to receive blocks: received bad status NOT_FOUND from orderer channel=test"
and on the orderer: "ERRO 05b [channel: test] Error reading from channel, cause was: NOT_FOUND".
It seems that when the peer requests blocks, the orderer does not know the channel, but when the orderer starts up it detects the channel and inserts the blocks it had pending from the other orderers.
Can the orderers read the missing blocks from the peers, or is the only way to start a network from snapshots of different IaaS VMs to stop the network first?
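To see how far apart they are, it can help to compare the peer's channel height with the newest block the orderer will serve. A minimal diagnostic sketch, assuming the peer CLI, the channel name 'test' and the orderer address from the logs above; ORDERER_CA is a hypothetical variable pointing at the orderer's TLS CA cert:
# Height as seen by the peer:
peer channel getinfo -c test

# Newest block the orderer can serve (written to test_newest.block).
# A NOT_FOUND here means the orderer does not know the channel at all.
peer channel fetch newest test_newest.block -c test \
  -o orderer5.example.com:7050 --tls --cafile $ORDERER_CA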
I'm using Hyperledger Fabric 1.4.3 with the following conditions.
Ubuntu 18.04
CouchDB
Raft
TLS enabled
Using discovery service
Endorsement policy: "AND ('Org0MSP.peer')"
When I send 100 transactions asynchronously from the Node.js SDK, all transactions are processed normally.
But when I send 2000 transactions asynchronously, the following error occurs.
[Node.js SDK log]
[DiscoveryEndorsementHandler]: _build_endorse_group_member >> G0:0 - endorsement failed - Error: Failed to connect before the deadline URL:grpcs://peer0.org0:7051
[Peer log]
[core.comm] ServerHandshake -> ERRO TLS handshake failed with error read tcp {org0 peer ip address:port} -> {Node.js SDK server ip address:port}: i/o timeout server=PeerServer remoteaddress={Node.js SDK server ip address:port}
So, some transactions failed.
Why does this error occur? Is there any way to solve this error?
When you say 'asynchronously' I assume you mean 'concurrently'. The error being returned is an i/o timeout, so this would indicate to me that most likely, your server hardware is not fast enough to handle the volume of concurrent requests you are attempting.
If you are concerned about denial of service, you can employ standard techniques for limiting the number of concurrent connections to your peer such as via network proxies. If you are simply trying to test the throughput bounds of Fabric, I would try slowly ramping up your transaction rate until you begin to see errors such as this timeout.
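One low-effort way to do that ramp-up outside the SDK is to fire increasing batches of invokes from the CLI and watch for the timeouts to reappear. A rough sketch, assuming the peer CLI is configured for Org0; the channel name mychannel, chaincode mycc, the 'put' function, the orderer address, and ORDERER_CA are all hypothetical placeholders:
# Ramp up the number of concurrent invokes in steps and watch the peer logs
# for TLS handshake / i/o timeout errors.
for rate in 50 100 200 500 1000; do
  echo "Sending $rate concurrent invokes..."
  for i in $(seq 1 $rate); do
    peer chaincode invoke -C mychannel -n mycc \
      -c "{\"Args\":[\"put\",\"key$i\",\"value$i\"]}" \
      -o orderer.example.com:7050 --tls --cafile $ORDERER_CA &
  done
  wait
done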
I am testing Kafka mode in a Fabric network. After I create a channel
named 'mychannel', I join two orgs to it. After these actions, I tried to use the 'down' and 'up' commands to refresh the Kafka, Zookeeper, and orderer containers. In this way, I want to test whether a peer can persist its channels across different Fabric networks.
When tailing the orderer's logs, I found the problem:
[common.deliver] deliverBlocks -> DEBU dc9 Rejecting deliver for 192.168.11.61:60156 because channel mychannel not found
I used the Kafka shell tool to check the topic list and found that the 'mychannel' topic had disappeared.
After doing the above, I created a new channel 'mychannel' using the same 'channel.tx', and I found this error in the log:
UTC [common.deliver] deliverBlocks -> ERRO b1b [channel: mychannel] Error reading from channel, cause was: NOT_FOUND
I used the command:
peer channel getinfo -c mychannel
on one org's peer, and got the info; the block height is 16:
Blockchain info: {"height":16,"currentBlockHash":"gHOfUnVRT0pGMRssz8fUXWH4jdH/1hcPUPLBqau7L9c=","previousBlockHash":"yvKUrJDg3+60Sbc0HHKs+N5vVkW2WBJWhy9TLFGmMug="}
I guess the orderer's genesis block height is 0 and can't match the current block height.
How can I fix this problem? Can I use the channel update method to update the channel config?
Kafka mode: 4 Kafka brokers, 3 Zookeepers
1 orderer
2 orgs
Restarted the orderer and Kafka cluster (cmd: 'docker-compose down' then 'docker-compose up')
It appears that you were not using externally mounted volumes with your Kafka, Zookeeper and Orderer containers. When you run docker-compose down it actually destroys the containers. If you want to start/stop the containers, you need to use docker-compose stop and docker-compose start.
If you want to preserve data in the event that the containers are destroyed (or even to upgrade them), then you need to attach external volumes to your containers.
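A minimal sketch of the difference, assuming the compose file is in the current directory; the volume paths in the comments are illustrative defaults, not taken from your setup:
# Pause/resume without destroying containers (state and Kafka topics are kept):
docker-compose stop
docker-compose start

# 'down' removes the containers; anything not on an external volume is lost:
docker-compose down

# To survive 'down', declare external volumes in docker-compose.yml for the
# stateful services, e.g. the orderer's /var/hyperledger/production/orderer,
# Kafka's log directory, and Zookeeper's data directories.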
We have 2 servers, each with a peer, an orderer, and a Kafka broker. They are connected to the same channel, both have a chaincode installed and instantiated, and the policy is one organization or the other.
Imagine that the internet goes down and they disconnect:
Would both work individually?
Can they write new transactions to the ledger?
What would happen to the newly submitted blocks in the ledger when the internet is up and running again? How do these new blocks synchronize?
Thanks
EDIT1:
See the image for clarification:
If, during the disconnection, both write to the ledger, how does the network synchronize? How do those newly generated blocks behave: does one get invalidated, or are both valid?
Once disconnected, the peers won't receive keep-alives from the channel peers and will keep logging this if you have debug logging enabled.
The peer won't lose any configuration even though it got disconnected from the network. The discovery service in Fabric takes care of finding the peers configured in the channel. So, once the connection resumes, it will automatically re-synchronize with the other peers via gossip messages.
The peers can then write to and read from the ledger as usual.
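A quick way to confirm that re-synchronization has happened after the link comes back is to compare channel heights on the two peers. A minimal sketch, assuming the peer CLI and a hypothetical channel name mychannel:
# Run against each peer; the reported heights should converge once gossip catches up.
peer channel getinfo -c mychannel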
There are multiple things to consider here:
1) When you use a Kafka-based orderer, you will have to cluster the Kafka brokers if you expect them to be part of the same ordering service. Kafka is used to distribute the messages to the ordering nodes; if your Kafka brokers are not in a cluster, then you will have separate ordering services. Recall that Kafka also requires Zookeeper. Zookeeper has a 2f+1 fault tolerance model, so if you want to tolerate the failure of a single node (failure includes communication issues), you will need at least 3 Zookeeper nodes, and they should be deployed on separate hosts. For Kafka, you will want at least 2 brokers and would need to set the minimum ISRs (in-sync replicas) to 2; ideally you'd have 4 Kafka brokers (see the sketch after this list).
2) In order for transactions to be processed, enough peers to satisfy the endorsement policy as well as the ordering service must be available / accessible. Peers which cannot connect to the ordering service will catch up once they can reestablish connectivity.
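A minimal sketch of the broker settings this implies, assuming a Kafka image that maps environment variables onto server.properties entries (the exact variable names depend on the image you use):
# Kafka broker settings commonly recommended for a Fabric ordering service:
KAFKA_MIN_INSYNC_REPLICAS=2                   # min.insync.replicas
KAFKA_DEFAULT_REPLICATION_FACTOR=3            # default.replication.factor, less than the broker count
KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false    # unclean.leader.election.enable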
I have 2 different machines in the cloud.
Containers on first machine:
orderer.mydomain.com
peer0.org1.mydomain.com
db.peer0.org1.mydomain.com
ca.org1.mydomain.com
Containers on second machine:
peer0.org2.mydomain.com
db.peer0.org2.mydomain.com
ca.org2.mydomain.com
I start them both. I can make them both join the same channel. I deploy a BNA exported from Hyperledger Composer to both peers. I send transactions to peer0.org1.mydomain.com, then query peer0.org2.mydomain.com and get the same results.
Everything works perfectly so far.
However, after 5-10 minutes the peer on the second machine (peer0.org2) gets disconnected from the orderer. When I send transactions to org1 I can query them from org1 and I see the results. But org2 gets detached and doesn't accept new transactions (the orderer connection is gone). I can query org2 and see the old results.
I added CORE_CHAINCODE_KEEPALIVE=30 to my peer environment variables. I see keep-alive actions in org2's peer logs, but it didn't solve my problem.
I should note: the containers are in a Docker network called "basic". This network was used on my local computer, but it still works in the cloud.
In orderer logs:
Error sending to stream: rpc error: code = Internal desc = transport is closing
This happens every time I try. But when I run these containers on my local machine they stay connected without problems.
EDIT1: After checking the logs: peer0.org2 receives all transactions and sends them to the orderer. The orderer receives requests from the peer but can't update the peers. I can connect to both the requestUrl and the eventUrl on the problematic peer. There is no network problem.
I guess I found the problem. It is about MS Azure networking. After 4 minutes Azure cuts idle connections:
https://discuss.pivotal.io/hc/en-us/articles/115005583008-Azure-Networking-Connection-idle-for-more-than-4-minutes
EDIT1:
Yes, the problem was MS Azure. If there is anyone out there trying to run Hyperledger on Azure, keep in mind that if a peer stays idle for more than 4 minutes, Azure times out the TCP connection. You can configure it to time out after 30 minutes instead. It is not a bug, but it was annoying for us not being able to understand why it wasn't working after 4 minutes.
So you can use your own server or another cloud solution, or use Azure by adapting to its rules.
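Another workaround at the Fabric level is to make the gRPC connections send keep-alives more often than Azure's 4-minute idle cutoff. A minimal sketch of environment-variable overrides for the orderer's and peer's keepalive settings; check your Fabric version's orderer.yaml and core.yaml for the exact keys before relying on them:
# Orderer: send server-side pings well under the 4-minute idle limit.
ORDERER_GENERAL_KEEPALIVE_SERVERINTERVAL=120s
ORDERER_GENERAL_KEEPALIVE_SERVERMININTERVAL=60s

# Peer: keep client connections (including the deliver client to the orderer) alive.
CORE_PEER_KEEPALIVE_CLIENT_INTERVAL=120s
CORE_PEER_KEEPALIVE_DELIVERYCLIENT_INTERVAL=120s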