Large number of channels in a Hyperledger Fabric network

Is there a limit on the number of channels in a Hyperledger Fabric network?
What are the implications of a larger number of channels?
Thanks,
Naveen

There is an upper bound that you can define for the ordering service:
# Max Channels is the maximum number of channels to allow on the ordering
# network. When set to 0, this implies no maximum number of channels.
MaxChannels: 0
In the peer, each channel's logic is maintained by its own goroutines and data structures.
I'm pretty confident that, except for very extreme use cases, you shouldn't be too worried about the number of channels a peer has joined.
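For reference, the snippet above comes from the Orderer section of configtx.yaml (sampleconfig/configtx.yaml in the Fabric repository), where MaxChannels sits alongside settings such as BatchTimeout and BatchSize.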

Related

What is the payload size of Hyperledger Caliper?

I have run benchmark tests using Hyperledger Caliper. I have two types of transactions: read transactions and write transactions.
How can I determine the payload size of those transactions? Do I need to estimate it, or can I measure it somehow? An average is enough.

Hyperledger Fabric - Detection of Modification to StateDB in Peer

This is a question about whether the StateDB is automatically repaired after tampering.
We're wondering whether manipulation of the StateDB (CouchDB) can be detected and fixed automatically by the peer.
The following document states that there is a state reconciliation process synchronizing world state in the peer:
https://hyperledger-fabric.readthedocs.io/ja/latest/gossip.html#gossip-messaging
In addition to the automatic forwarding of received messages, a state
reconciliation process synchronizes world state across peers on each
channel. Each peer continually pulls blocks from other peers on the
channel, in order to repair its own state if discrepancies are
identified.
But when we test it as follows:
Step 4: modify value of key a in StateDB
Step 10: Wait for 15 minutes
Step 11: Create a new block
Step 12: Check value of a through chaincode and confirm it in StateDB directly
The tampered value is not fixed automatically by the peer.
Can you help clarify the meaning of "state reconciliation process" in the above document, and whether the peer would fix tampering to the StateDB?
Thank you.
The gossip protocol is there to sync up legitimate data among peers, not tampered data, in my view. What is legitimate data? Data whose computed hash at any point in time matches the originally computed hash, which will not be the case for tampered data, so I'd not expect the gossip protocol to sync such 'dirty data'. Doing so would defeat the purpose of blockchain as a technology altogether, and hence this is the wrong test to perform, in my view.
So what, then, is the gossip protocol? See https://hyperledger-fabric.readthedocs.io/ja/latest/gossip.html#gossip-protocol
"Peers affected by delays, network partitions, or other causes
resulting in missed blocks will eventually be synced up to the current
ledger state by contacting peers in possession of these missing
blocks."
So, in cases where a peer should have committed a block to the ledger but missed it for reasons like the ones quoted above, gossip is only a fallback strategy for HLF to reconcile the ledger among the peers.
Now, in your test case, I see you are using a query. A query does not go via the orderer to all the peers; it just goes to one peer and returns the value. You need to perform the read (e.g. getStringState) as a transaction, so that endorsement runs; that is when I'd expect the endorsement to fail, citing the mismatch between the values for the same key among the peers.
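To make that concrete, here is a minimal Go chaincode sketch of the idea (the touch function and the read-then-write pattern are illustrative, not from the original question): the value read inside an endorsed transaction is part of each endorser's signed proposal response, so a peer with a tampered StateDB produces a response that does not match the other endorsers', and the discrepancy surfaces.

package main

import (
	"fmt"

	"github.com/hyperledger/fabric-chaincode-go/shim"
	pb "github.com/hyperledger/fabric-protos-go/peer"
)

type VerifyCC struct{}

func (c *VerifyCC) Init(stub shim.ChaincodeStubInterface) pb.Response {
	return shim.Success(nil)
}

// Invoke reads a key inside a regular (endorsed) transaction and writes the
// value back, so the proposal must be endorsed and submitted to the orderer;
// a pure query would touch only a single peer and bypass this check.
func (c *VerifyCC) Invoke(stub shim.ChaincodeStubInterface) pb.Response {
	fn, args := stub.GetFunctionAndParameters()
	if fn != "touch" || len(args) != 1 {
		return shim.Error("usage: touch <key>")
	}
	val, err := stub.GetState(args[0])
	if err != nil {
		return shim.Error(err.Error())
	}
	// A peer whose StateDB was tampered with returns a different val here,
	// so its signed proposal response will not match the honest peers'.
	if err := stub.PutState(args[0], val); err != nil {
		return shim.Error(err.Error())
	}
	return shim.Success(val)
}

func main() {
	if err := shim.Start(new(VerifyCC)); err != nil {
		fmt.Printf("error starting chaincode: %s\n", err)
	}
}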
# Gossip state transfer related configuration
state:
    # indicates whenever state transfer is enabled or not
    # default value is true, i.e. state transfer is active
    # and takes care to sync up missing blocks allowing
    # lagging peer to catch up to speed with rest network
    enabled: false
...
pvtData:
    # the process of reconciliation is done in an endless loop, while in each iteration reconciler tries to
    # pull from the other peers the most recent missing blocks with a maximum batch size limitation.
    # reconcileBatchSize determines the maximum batch size of missing private data that will be reconciled in a
    # single iteration.
    reconcileBatchSize: 10
    # reconcileSleepInterval determines the time reconciler sleeps from end of an iteration until the beginning
    # of the next reconciliation iteration.
    reconcileSleepInterval: 1m
    # reconciliationEnabled is a flag that indicates whether private data reconciliation is enable or not.
    reconciliationEnabled: true
Link: https://github.com/hyperledger/fabric/blob/master/sampleconfig/core.yaml
In addition to the automatic forwarding of received messages, a state reconciliation process synchronizes world state across peers on each channel. Each peer continually pulls blocks from other peers on the channel, in order to repair its own state if discrepancies are identified. Because fixed connectivity is not required to maintain gossip-based data dissemination, the process reliably provides data consistency and integrity to the shared ledger, including tolerance for node crashes.

How to estimate how much storage a Fabric peer will occupy

How can I estimate how much storage a Fabric peer will occupy?
Suppose I have a main struct in my chaincode (the largest in size and the most frequently used).
I found that the storage of a peer's ledger is roughly 5 to 10 times the size of the main struct's data.
The world state storage is also almost the same size as the ledger.
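A plausible back-of-the-envelope explanation for that multiplier (a rough estimate, not from the Fabric documentation): the ledger stores the whole signed transaction envelope, not just your struct. For a 1 KB payload you also store the channel and signature headers, the creator's X.509 certificate (on the order of 1 KB), the read-write set, and one certificate plus signature per endorser (roughly another 1 KB each). With two endorsers, that is already around 4-5 KB per 1 KB of application data, which is consistent with the 5-10x you observed.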

How to decide the maximum number of faulty replicas in Practical Byzantine Fault Tolerance?

In PBFT consensus, we know that there should be 3f+1 replicas, of which at least 2f+1 must be correct; f is the maximum number of faulty replicas the network can tolerate. I wonder how to keep this in mind while setting up Fabric. What are the parameters from which we can predict the chances of faulty replicas?
I assume you are setting up your orderers with a BFT consensus plugin such as BFT-SMaRt. BFT algorithms are only required if you want to tolerate malicious faults. If you're only concerned about crash faults, you can also use Kafka consensus, which tolerates up to 50% of nodes crashing.
So if you're setting up a business network, each partner should run one ordering node. The number of tolerated malicious partners depends on your total number of partners: with 4 partners, one of them can be malicious without your network breaking down; with 7 partners, you tolerate two, and so on (see the sketch below).
So it's not really a conscious choice you're making about how many replicas to run. The number of tolerated malicious nodes depends on the number of independent partners running ordering nodes. There is no point in having one organization run multiple ordering nodes, since it could manipulate all of them if it acted maliciously.
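To make the arithmetic explicit: from n >= 3f + 1, the number of tolerated Byzantine replicas is f = floor((n - 1) / 3). A tiny illustrative helper (not a Fabric API):

package main

import "fmt"

// maxFaultyBFT returns the number of Byzantine replicas an n-replica
// cluster can tolerate under the n >= 3f+1 bound, i.e. f = (n-1)/3.
func maxFaultyBFT(n int) int { return (n - 1) / 3 }

func main() {
	for _, n := range []int{4, 7, 10} {
		fmt.Printf("n=%d replicas tolerate f=%d Byzantine replica(s)\n", n, maxFaultyBFT(n))
	}
}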

How can I increase the number of peers in my routing table associated with a given infohash

I'm working on a side project and trying to monitor peers on popular torrents, but I can't see how I can get a hold of the full dataset.
If the theoretical limit on routing table size is 1,280 (from 160 buckets * bucket size k = 8), then I'm never going to be able to hold the full number of peers on a popular torrent (~9,000 on a current top-100 torrent).
My concern with simulating multiple nodes is low efficiency due to overlapping values. I would assume that their bootstrapping paths being similar would result in similar routing tables.
Your approach is wrong, since it would violate the reliability goals of the DHT. You would essentially be performing an attack on that keyspace region; other nodes may detect and blacklist you, and it would also simply be bad-mannered.
If you want to monitor specific swarms, don't collect data passively from the DHT:
- if the torrents have trackers, just contact them to get peer lists
- connect to the swarm and get peer lists via PEX, which provides far more accurate information than the DHT
- if you really want to use the DHT, perform active lookups (get_peers) at regular intervals, as sketched below
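For the last option, here is a minimal sketch of one active get_peers lookup over KRPC, assuming a hand-rolled bencoded query sent to a well-known bootstrap node; the node ID and infohash are placeholders, and a real monitor would parse the bencoded response and iterate toward closer nodes:

package main

import (
	"fmt"
	"net"
	"time"
)

// buildGetPeers hand-encodes a minimal KRPC get_peers query:
// d1:ad2:id20:<id>9:info_hash20:<infohash>e1:q9:get_peers1:t2:aa1:y1:qe
func buildGetPeers(nodeID, infohash [20]byte) []byte {
	return []byte("d1:ad2:id20:" + string(nodeID[:]) +
		"9:info_hash20:" + string(infohash[:]) +
		"e1:q9:get_peers1:t2:aa1:y1:qe")
}

func main() {
	// Placeholder 20-byte node ID and infohash; substitute real values.
	var id, ih [20]byte
	copy(id[:], "abcdefghij0123456789")
	copy(ih[:], "mnopqrstuvwxyz123456")

	conn, err := net.Dial("udp", "router.bittorrent.com:6881")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	if _, err := conn.Write(buildGetPeers(id, ih)); err != nil {
		panic(err)
	}
	conn.SetReadDeadline(time.Now().Add(5 * time.Second))
	buf := make([]byte, 1500)
	n, err := conn.Read(buf)
	if err != nil {
		panic(err)
	}
	// The bencoded reply carries either "values" (peer addresses)
	// or "nodes" (closer DHT nodes to query next).
	fmt.Printf("received %d bytes: %q\n", n, buf[:n])
}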
