I would like to know the best practice or preferred method for designing a Fabric network in the 2.x versions. Should we place orderers in the same organization as the peers, or should there be a separate org for orderers? Or should each participating org have two different orgs, one for peers and one for orderers? Can anyone shed some light and point me to resources?
It all depends on the real-world problem you are trying to solve.
The only requirement for production networks is that you have an ordering service with Raft Consensus.
Use at least 3 ordering nodes: with only 1 node, that node can go down and the network would not be able to cut new blocks, and with an even number of nodes (e.g. 2 or 4) you gain no extra fault tolerance over the next smaller odd number, since Raft needs a majority quorum to elect a leader. In production, 5 ordering nodes are recommended.
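As a quick sanity check on those numbers, here is a small sketch of the arithmetic (plain Go, not Fabric code): a Raft cluster of n nodes needs a majority quorum of floor(n/2) + 1 reachable nodes to elect a leader and keep cutting blocks, so it tolerates the loss of n minus that quorum.

package main

import "fmt"

func main() {
	// A Raft cluster of n ordering nodes needs a quorum of floor(n/2)+1
	// reachable nodes to elect a leader and cut new blocks, so it can
	// tolerate n - quorum crashed nodes.
	for n := 1; n <= 7; n++ {
		quorum := n/2 + 1
		fmt.Printf("nodes=%d  quorum=%d  tolerated crashes=%d\n", n, quorum, n-quorum)
	}
}

Note that 4 nodes tolerate only the same single crash as 3 nodes, while 5 nodes tolerate 2, which is why 3 or 5 consenters are the usual choices.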
Now that you have decided on the number of ordering nodes for the ordering service, you have to decide which organizations should contribute an ordering node to the service. This is where your specific use case comes into play. For example, a regulatory body might contribute 2 ordering nodes, while the other 3 ordering nodes are owned by separate businesses that the regulatory body oversees.
As for the peer nodes, any organization can have as many as required, even if it also contributes ordering nodes. For example, a regulatory body may have 1 peer node and 2 ordering nodes on the same channel.
To sum it up: organizations can have as many peers as needed, while for the ordering service you have to select which organizations will contribute ordering nodes, and that selection depends on your specific use case.
Hope you all are well.
I'm researching Hyperledger Fabric and have a question about how the integrity of the network is maintained when peers are Byzantine.
In the documentation it states that: "State is maintained by peers, but not by orderers and clients" [1]. It also states that "As long as peers are connected for sufficiently long periods of time to the channel (they can disconnect or crash, but will restart and reconnect), they will see an identical series of delivered(seqno, prevhash, blob) messages [from the ordering service.]"[1].
In essence my question is: do the orderers save a copy of all the blocks that they have delivered to peers? If we assume that they are correct, then any correct peer that joins the network should be able to retrieve a correct sequence of deliver messages so that it can recreate the state correctly. However, since the documentation also states that the state is not maintained by the orderers, we could have a situation where incorrect blocks are delivered to the newly connected correct peer by a Byzantine peer.
This might not be an issue in practice, since one would probably configure a newly connected peer to receive blocks from peers of the same organization, and peers in the same organization have little reason to attack each other. I'm just trying to understand how Fabric works, and this seems like an attack vector to me.
Thanks!
References:
1
Hyperledger Fabric is not Byzantine fault tolerant yet. The orderers use the Raft consensus mechanism, which is crash fault tolerant.
This is a more theoretical question than a practical one, but I was thinking about possible attacks in Hyperledger Fabric.
On a high level, orderers are the block makers, and the whole blockchain is eventually maintained by the peers. The consensus algorithm is executed among the orderers (which might tolerate up to a certain number of Byzantine orderers if the consensus is Byzantine fault tolerant).
But what happens if some peers are compromised? What would happen if an attacker subverts more than half of the peers in the system? Could it result in a chain fork or reorganization?
It depends on your endorsement policy. For example, if you require a AND (b OR c) for a certain type of transaction and b and c are compromised, they can do no harm to a, since transactions that have not been endorsed by a will not satisfy the policy. (Obviously b and c may still commit such transactions locally, but they are malicious in this case and their behavior can be arbitrary anyway.)
Keep in mind that Fabric is a permissioned blockchain, and you need to define policies according to your business requirements.
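To make the endorsement-policy argument above concrete, here is a minimal illustrative sketch of evaluating a policy of the form a AND (b OR c) against the set of organizations that endorsed a transaction. The org names and helper functions are made up for illustration; this is not Fabric's actual policy engine, which expresses the same idea with expressions like AND('A.member', OR('B.member', 'C.member')).

package main

import "fmt"

// endorsedBy reports whether an organization appears in the set of valid
// endorsement signatures collected for a transaction (illustrative only).
func endorsedBy(signatures map[string]bool, org string) bool {
	return signatures[org]
}

// satisfiesPolicy mimics a policy of the form a AND (b OR c):
// org "A" must endorse, plus at least one of "B" or "C".
func satisfiesPolicy(signatures map[string]bool) bool {
	return endorsedBy(signatures, "A") &&
		(endorsedBy(signatures, "B") || endorsedBy(signatures, "C"))
}

func main() {
	// B and C are compromised and endorse on their own; A refuses.
	malicious := map[string]bool{"B": true, "C": true}
	fmt.Println(satisfiesPolicy(malicious)) // false: the policy is not satisfied

	// Honest case: A endorses, and at least one of B or C endorses as well.
	honest := map[string]bool{"A": true, "C": true}
	fmt.Println(satisfiesPolicy(honest)) // true
}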
It turns out that all of the peers can be Byzantine (and the same holds for the clients).
This is stated precisely in the paper "Hyperledger Fabric: A Distributed Operating System for Permissioned Blockchains", section 3.5 (Trust and Fault Model). The integrity of HLF relies solely on the orderers: even if all peers collude and try to rewrite history in the blockchain, they won't be able to produce signed blocks, as the orderers are the only entities that can make blocks.
The best they can do is to try to delete blocks, but even with the presence of a single honest peer, that peer will show a "longer" history of blocks which will be the accepted one.
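To connect this back to the delivered(seqno, prevhash, blob) quote in the question above: because every block embeds the hash of its predecessor (and, in the real system, carries the ordering service's signature), a peer that fetches blocks from another peer can detect tampering. The sketch below shows only the hash-chain check, with simplified stand-in structures rather than Fabric's real block format.

package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

// Block is a simplified stand-in for a Fabric block; the real header also
// carries a data hash and the block is signed by the ordering service.
type Block struct {
	Seqno    uint64
	PrevHash []byte
	Data     []byte
}

func headerHash(b Block) []byte {
	h := sha256.New()
	h.Write([]byte(fmt.Sprintf("%d", b.Seqno)))
	h.Write(b.PrevHash)
	h.Write(b.Data)
	return h.Sum(nil)
}

// verifyChain checks that each block points to the hash of its predecessor,
// so a Byzantine peer cannot splice in altered blocks without detection.
// (A real peer would additionally verify the orderer signatures on each block.)
func verifyChain(blocks []Block) bool {
	for i := 1; i < len(blocks); i++ {
		if !bytes.Equal(blocks[i].PrevHash, headerHash(blocks[i-1])) {
			return false
		}
	}
	return true
}

func main() {
	genesis := Block{Seqno: 0, Data: []byte("genesis")}
	b1 := Block{Seqno: 1, PrevHash: headerHash(genesis), Data: []byte("tx batch 1")}
	forged := Block{Seqno: 1, PrevHash: headerHash(genesis), Data: []byte("forged batch")}
	b2 := Block{Seqno: 2, PrevHash: headerHash(b1), Data: []byte("tx batch 2")}

	fmt.Println(verifyChain([]Block{genesis, b1, b2}))     // true
	fmt.Println(verifyChain([]Block{genesis, forged, b2})) // false: b2 no longer links
}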
Is there any limit on the number of nodes when configuring Hyperledger Fabric?
I have gone through the answer below, but I'm not clear on what it is explaining.
Limit of number of nodes in Hyperledger
When I say number of nodes, it could be the number of stakeholders (organizations), peers, or endorser nodes.
The answer on that post is now incorrect. Fabric does not currently use Byzantine fault tolerance; it only has crash fault tolerance through Kafka-based ordering. Byzantine fault tolerance is estimated to come around Fabric 1.4.
With Kafka, there is no limit on the number of nodes. There is a performance hit as you add nodes; Hyperledger Sawtooth is known to scale better with the number of nodes.
There is no limit on the number of nodes in Fabric (that's the idea behind a distributed system), but be aware that as you add more and more nodes, transaction performance may be adversely affected.
As per my recent conversations with teams that have implemented Hyperledger Fabric on version 1.1, performance seems okay for up to 16 to 18 nodes. It appears to be a trade-off for the faster finality that Hyperledger Fabric demonstrates.
In Hyperledger Fabric, nodes can be orderers, endorsing peers, or clients.
If we are talking about how many nodes can be Byzantine, then the precise answer is as follows:
a) There is no limit on Byzantine peers and clients. If there are too many of them, a client simply won't be able to get its transaction endorsed, but the integrity of the system is not endangered.
b) Since the consensus algorithm runs among the orderers, the limit depends on the specific algorithm used. Remember that Hyperledger Fabric supports pluggable consensus, meaning the consensus algorithm is not necessarily hardcoded. In its current implementation, Hyperledger Fabric runs Kafka, which is NOT Byzantine fault tolerant; this means that even one Byzantine orderer can compromise the whole system. However, there are plans for BFT-SMaRt, which is Byzantine fault tolerant and tolerates up to a third of the nodes being faulty, as the above answer says (see the small arithmetic sketch below).
If we are talking about the total number of nodes, then the precise answer is as follows:
a) There is (theoretically) no limit on the number of clients and peers.
b) The practical limit on orderers again depends on the consensus algorithm. For BFT, this translates in practice to about 10 (maybe 20) orderers.
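For reference, the "up to a third of the nodes" figure comes from the standard BFT bound n >= 3f + 1: a cluster of n nodes tolerates f = (n-1)/3 Byzantine nodes (compare f = (n-1)/2 for crash-only tolerance such as Raft). A tiny sketch of that arithmetic, purely illustrative:

package main

import "fmt"

func main() {
	// Byzantine fault tolerance needs n >= 3f+1 nodes to tolerate f faulty
	// ones, i.e. f = (n-1)/3 -- just under a third of the cluster.
	for _, n := range []int{4, 7, 10, 13, 20} {
		fmt.Printf("orderers=%d  tolerated Byzantine faults=%d\n", n, (n-1)/3)
	}
}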
I am looking for information on how many peer nodes, ordering nodes, and CA servers are required to handle 1 million transactions per minute. Which deployment strategy is helpful: Docker Swarm or Kubernetes? Which one is ideal for providing scaling and extensibility?
The scaling of Hyperledger Fabric depends on the chosen consensus method. Consensus methods that support Byzantine fault tolerance can handle fewer than 1000 transactions per second with fewer than 20 nodes. For higher transaction rates or more nodes, non-BFT consensus methods can be chosen; however, those methods do not protect against malicious (Byzantine) nodes the way BFT methods do.
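To put the 1 million transactions per minute target in perspective: that is roughly 16,700 transactions per second, well above the few thousand per second usually reported for a single channel in published benchmarks, so the load would have to be spread across multiple channels (or networks). A rough back-of-the-envelope sketch; the per-channel throughput below is an assumed, illustrative number, not a benchmark result.

package main

import (
	"fmt"
	"math"
)

func main() {
	const targetPerMinute = 1_000_000.0
	targetTPS := targetPerMinute / 60.0 // roughly 16,667 tx/s

	// Assumed sustained throughput per channel; real numbers depend heavily on
	// block size, endorsement policy, state database, and hardware.
	const assumedChannelTPS = 2_000.0

	channels := int(math.Ceil(targetTPS / assumedChannelTPS))
	fmt.Printf("target = %.0f tx/s, so roughly %d channels at %.0f tx/s each\n",
		targetTPS, channels, assumedChannelTPS)
}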
I'm attempting to design a P2P network where all peers share the same data and can make changes to it. Let's not get into the consensus portion (i.e., assume that only one node makes changes to the data at a time).
How would I make sure that all peers are connected to other peers in a fault-tolerant way? I can only think of one approach: each peer can request more peers from another peer. But how do I make sure that connections are distributed as evenly as possible, so that one peer isn't overloaded with TCP connections while another barely has any? And how can I prevent the peers from splitting into two separate groups?
Something like BitTorrent's canonical peer priority (github link), which calculates a preference order by hashing the identifiers of both endpoints together, should allow nodes to reach a pseudo-random layout of the overlay while avoiding nodes being "left out". Hashing both identities together results in an ordering that is different from each peer's perspective but globally agreed on for each pair, thus constructing a randomized layout. As the number of edges per node increases, the chance of the network splitting rapidly goes towards zero.
And you can put a limit on the number of connections that each peer accepts; that will force others to look elsewhere once a peer is saturated.
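Here is a minimal sketch of the idea, loosely modeled on the canonical-peer-priority approach described above. The hash choice, identifier format, and connection cap are illustrative assumptions, not the BitTorrent specification: each node hashes its own identifier together with each candidate's identifier in a canonical order, prefers the candidates with the lowest values, and caps its connection count.

package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
	"sort"
)

const maxConnections = 3 // cap so no single peer overloads on TCP connections

// priority hashes the two endpoint identifiers in a canonical (sorted) order,
// so both endpoints of a pair compute the same value.
func priority(a, b string) uint64 {
	if a > b {
		a, b = b, a
	}
	sum := sha256.Sum256([]byte(a + "|" + b))
	return binary.BigEndian.Uint64(sum[:8])
}

// pickNeighbors returns up to maxConnections candidates, preferring the ones
// with the lowest pairwise priority value relative to self.
func pickNeighbors(self string, candidates []string) []string {
	sorted := append([]string(nil), candidates...)
	sort.Slice(sorted, func(i, j int) bool {
		return priority(self, sorted[i]) < priority(self, sorted[j])
	})
	if len(sorted) > maxConnections {
		sorted = sorted[:maxConnections]
	}
	return sorted
}

func main() {
	peers := []string{"peerA", "peerB", "peerC", "peerD", "peerE"}
	for _, p := range peers {
		var others []string
		for _, q := range peers {
			if q != p {
				others = append(others, q)
			}
		}
		fmt.Println(p, "->", pickNeighbors(p, others))
	}
}

Because both endpoints of a pair agree on the same priority value, the resulting overlay is pseudo-random but globally consistent, and a saturated peer can simply refuse new connections so the other side moves on to its next-ranked candidate.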