I have set up a Fabric network with more than one orderer and am analyzing a few scenarios to see how it works. I have two questions.
One of the advantages of a multi-orderer network is avoiding a single point of failure: if one orderer fails, another orderer should automatically take over and the work should continue. But in practice, when invoking chaincode through the peer CLI, we pass the orderer address and the orderer's CA file as arguments to make a transaction. Since we are pointing at a specific orderer, the transaction will fail if that orderer happens to be down. My question is: this defeats the objective of a multi-orderer network, so why do we need to pass the orderer-related arguments at all?
I deployed this network with 4 Kafka brokers and 3 ZooKeeper nodes. Even after stopping all three ZooKeeper nodes, the Fabric network still gives correct responses. What is the significance of ZooKeeper?
The point of multiple orderers is to eliminate a single point of
failure and to allow the ordering service to scale horizontally. The
peer CLI is not really intended to be used for invokes in a production application. Typically, an SDK such as the Node or Java SDK would be used, and on failure the invoke would be retried against another orderer.
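For illustration, a simple script could fall back to a second orderer when the first one is unreachable. The hostnames, channel name, chaincode name, and certificate path below are placeholders, not values from your network:

```bash
#!/bin/bash
# Sketch only: try the invoke against each orderer in turn and stop at the
# first success. All endpoint names and paths are hypothetical.
ORDERERS="orderer0.example.com:7050 orderer1.example.com:7050"

for ORDERER in $ORDERERS; do
  if peer chaincode invoke \
      -o "$ORDERER" \
      --tls --cafile /etc/hyperledger/orderer/tlsca.example.com-cert.pem \
      -C mychannel -n mycc \
      -c '{"Args":["invoke","a","b","10"]}'; then
    echo "Invoke submitted via $ORDERER"
    break
  fi
  echo "$ORDERER unreachable, trying the next orderer..."
done
```

The SDKs let you do the same thing more cleanly: you hand them the endpoints of all the orderers and retry on failure (and in newer releases, service discovery can supply the orderer endpoints), so no single orderer becomes a client-side single point of failure.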
The Kafka brokers use ZooKeeper to manage leader election and, more generally, to orchestrate changes in the Kafka cluster. I would expect that with ZooKeeper down you will eventually experience problems with the cluster. The network may keep running as long as Kafka itself has no issues, but when Kafka does run into a problem, it is ZooKeeper that takes care of the next steps.
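You can see this for yourself. In the sketch below the container names, hostnames, and paths are placeholders for whatever your docker-compose setup actually uses:

```bash
# Sketch: the network keeps ordering while Kafka is healthy, but once a
# broker fails ZooKeeper is needed to elect a new partition leader.
docker stop zookeeper0 zookeeper1 zookeeper2    # take the whole ensemble down

# While every Kafka broker stays up, the existing partition leaders keep
# serving the orderer, so an invoke will typically still succeed:
peer chaincode invoke -o orderer0.example.com:7050 \
  --tls --cafile /etc/hyperledger/orderer/tlsca.example.com-cert.pem \
  -C mychannel -n mycc -c '{"Args":["invoke","a","b","10"]}'

# Now stop a broker. A new partition leader must be elected, which is
# ZooKeeper's job, so with the ensemble down ordering should stall or fail:
docker stop kafka0
peer chaincode invoke -o orderer0.example.com:7050 \
  --tls --cafile /etc/hyperledger/orderer/tlsca.example.com-cert.pem \
  -C mychannel -n mycc -c '{"Args":["invoke","a","b","10"]}'
```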
Related
I am using Hyperledger Fabric for one of my projects. I am a bit confused between Kafka and Raft: which is better for production use? For Kafka you need to configure brokers; how would this be different with Raft?
@fama,
From my personal experience, I would suggest you go with Raft in production.
Kafka comes with additional baggage, namely ZooKeeper plus Kafka itself, and many people have complained about connection issues.
Both Kafka and Raft are distributed consensus mechanisms, but Raft is the more mature choice.
Considering a production scenario where I have two or more ordering nodes (Kafka mode), each one on a different host, do the ordering nodes need to communicate with each other in some way?
With the Kafka-based orderer, the ordering nodes DO NOT communicate directly with each other. All coordination is done via the Kafka cluster, so each ordering service node only needs to be able to communicate with the Kafka cluster.
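In practice the only reachability you need to verify is from each ordering node's host to every Kafka broker; no orderer-to-orderer ports have to be open. A quick check from one ordering node's host might look like this (the broker hostnames and the use of nc are assumptions):

```bash
# Run on each ordering node's host: confirm it can reach every Kafka broker.
# There is no equivalent check needed between the ordering nodes themselves.
for BROKER in kafka0.example.com kafka1.example.com kafka2.example.com kafka3.example.com; do
  nc -z -w 3 "$BROKER" 9092 && echo "$BROKER reachable" || echo "$BROKER NOT reachable"
done
```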
I have an existing Hyperledger Fabric 1.0.x install. How do I perform an upgrade to the new 1.1 release(s)?
At a high level, upgrading a Fabric network can be performed with the following sequence:
Update orderers, peers, and fabric-ca. These updates may be done in parallel.
Update client SDKs.
Enable v1.1 channel capability requirements.
(Optional) Update the Kafka cluster.
The details of each step in the process are described in the documentation.
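As a rough sketch of the first step only (the service names, image tags, and compose file layout are assumptions; the official upgrade guide is the authority here), a rolling image bump for a single orderer might look like:

```bash
# Hypothetical rolling upgrade of one orderer from 1.0.x to 1.1.0.
# Repeat per orderer and peer; only enable the v1.1 channel capabilities
# once every node on the channel has been upgraded.
docker pull hyperledger/fabric-orderer:x86_64-1.1.0

docker-compose stop orderer0.example.com
# ...edit the compose file so this service points at the 1.1.0 image...
docker-compose up -d --no-deps orderer0.example.com

docker logs -f orderer0.example.com   # confirm it restarts and serves the channel
```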
For my development environment, I deleted the 1.0.5 images (emptied the bin folder) and executed the command:
I have recently been able to deploy a production environment for Fabric, and I am looking for what should be considered when deploying a Fabric network in production. Are there any considerations I need to take into account when deploying the orderers and Kafka nodes, e.g. the number of nodes and their configuration? I cannot find much information on production-grade Fabric networks.
Quoting the Hyperledger Fabric documentation here, Docs » Bringing up a Kafka-based Ordering Service:
Let K and Z be the number of nodes in the Kafka cluster and the
ZooKeeper ensemble respectively:
At a minimum, K should be set to 4. (As we will explain in Step 4
below, this is the minimum number of nodes necessary in order to
exhibit crash fault tolerance, i.e. with 4 brokers, you can have 1
broker go down, all channels will continue to be writeable and
readable, and new channels can be created.)
Z will either be 3, 5, or 7. It has to be an odd number to avoid split-brain scenarios, and larger than 1 in order to avoid single point of failures. Anything beyond 7 ZooKeeper servers is considered an overkill.
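For what it's worth, the broker-level settings that the same guide discusses (a replication factor below K, a minimum in-sync replica count strictly between 1 and the replication factor, and no unclean leader election) can be supplied as environment variables when the brokers are started. The concrete values, container names, and image tag below are assumptions modeled on the sample compose files, not requirements:

```bash
# Sketch: start one of the K=4 brokers with settings along the lines the
# Kafka guide recommends (repeat for kafka1..kafka3 with distinct broker ids).
docker run -d --name kafka0 \
  -e KAFKA_BROKER_ID=0 \
  -e KAFKA_ZOOKEEPER_CONNECT=zookeeper0:2181,zookeeper1:2181,zookeeper2:2181 \
  -e KAFKA_DEFAULT_REPLICATION_FACTOR=3 \
  -e KAFKA_MIN_INSYNC_REPLICAS=2 \
  -e KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false \
  -e KAFKA_MESSAGE_MAX_BYTES=103809024 \
  -e KAFKA_REPLICA_FETCH_MAX_BYTES=103809024 \
  hyperledger/fabric-kafka:x86_64-1.1.0
```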
Update 14 Nov 2020
Please note that Hyperledger Fabric has deprecated the Kafka-based ordering service. Use of the Raft ordering service is recommended for production.
Based on this guide, Bringing up a Kafka-based Ordering Service, I configured 4 Kafka nodes for production.
I can see there are 3 types of orderer images. When I deploy a Fabric network, at most 2 of these types are used.
https://hub.docker.com/r/hyperledger/fabric-orderer/
https://hub.docker.com/r/hyperledger/fabric-kafka/
https://hub.docker.com/r/hyperledger/fabric-ca-orderer/
The orderer documentation describes their usage:
https://github.com/hyperledger/fabric/blob/master/orderer/README.md
but I did not expect to see both fabric-orderer and fabric-kafka containers in a Fabric network.
What am I misunderstanding here?
The architecture for Hyperledger Fabric allows for multiple types of ordering services. At the heart of the architecture is a common atomic broadcast interface.
The orderer interfaces are implemented in the orderer executable which is packaged as the fabric-orderer Docker image.
There are two configuration modes for the orderer:
1) Solo - this is a standalone, single-process orderer intended primarily for use during development and test (although nothing would stop someone from using it in production; it would just not be fault tolerant).
2) Kafka - this leverages Kafka as the "consensus" mechanism to make multiple orderer processes crash fault tolerant and to order transactions. In this mode, multiple orderer processes communicate with a Kafka cluster, which ensures that each orderer process receives transactions and generates blocks in the same order. The orderer processes (the fabric-orderer containers) communicate with a Kafka cluster, which can be run using the fabric-kafka and fabric-zookeeper Docker images.
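So in Kafka mode it is entirely expected to see fabric-orderer containers running alongside fabric-kafka (and fabric-zookeeper) containers. A quick way to see the split on a running host, purely as an example:

```bash
# List the ordering-related containers: the ordering service nodes run the
# fabric-orderer image, while the Kafka/ZooKeeper backend they talk to runs
# the fabric-kafka and fabric-zookeeper images.
docker ps --format '{{.Names}}\t{{.Image}}' \
  | grep -E 'fabric-(orderer|kafka|zookeeper)'
```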