What is the payload size of Hyperledger Caliper? - hyperledger-fabric

I have done benchmark tests using Hyperledger Caliper. I have two types of transactions: read transactions, and write transactions.
How can I know the payload size of those transactions? Do I need to make an estimate, or can I measure it somehow? An average is enough.
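One way to measure it rather than estimate: a Caliper workload module controls the arguments it submits, so it can log their serialized size and report an average at the end of the round. Below is a minimal sketch (the contract and function names are placeholders, not from the question). Note that it measures only the argument payload; the full signed Fabric proposal that actually crosses the wire is larger.

    import { WorkloadModuleBase } from '@hyperledger/caliper-core';

    // Hypothetical workload that tracks the serialized argument size per transaction.
    class PayloadSizeWorkload extends WorkloadModuleBase {
        private totalBytes = 0;
        private txCount = 0;

        async submitTransaction(): Promise<void> {
            // Example write payload; replace with your real arguments.
            const args = ['asset42', JSON.stringify({ owner: 'alice', value: 100 })];

            // Approximate payload: UTF-8 byte length of the serialized arguments.
            // The full proposal (headers, signatures, endorsements) is larger.
            const bytes = args.reduce((sum, a) => sum + Buffer.byteLength(a, 'utf8'), 0);
            this.totalBytes += bytes;
            this.txCount += 1;

            await this.sutAdapter.sendRequests({
                contractId: 'mycontract',        // assumed contract name
                contractFunction: 'createAsset', // assumed function name
                contractArguments: args,
                readOnly: false
            });
        }

        async cleanupWorkloadModule(): Promise<void> {
            if (this.txCount > 0) {
                console.log(`average payload: ${this.totalBytes / this.txCount} bytes over ${this.txCount} txs`);
            }
        }
    }

    export function createWorkloadModule(): WorkloadModuleBase {
        return new PayloadSizeWorkload();
    }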

Related

What does "Write operations (every 4MB, per 10000)" mean?

Link: https://azure.microsoft.com/en-in/pricing/details/storage/data-lake/
Under transaction pricing there is "Write operations (every 4MB, per 10000)".
What is the 10000? And what does "every 4MB, per 10000" mean?
Transaction charges are incurred any time you read or write data to the service.
Each transaction can carry at most 4MB of data.
So if you write 8MB of data to the service, it is counted as 2 transactions. Similarly, if one read operation fetches 10MB of data, it is counted as 3 transactions (4 + 4 + 2).
Conversely, if you write only 256 KB of data, it is still counted as a single transaction. (Anything up to 4MB counts as 1 transaction.)
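In other words, the billable transaction count for a single read or write is the data size split into 4MB chunks, rounded up, with a minimum of one. A quick sketch of that arithmetic:

    const FOUR_MB = 4 * 1024 * 1024;

    // A single read or write of `bytes` is billed as ceil(bytes / 4MB)
    // transactions, with a minimum of one transaction.
    function billableTransactions(bytes: number): number {
        return Math.max(1, Math.ceil(bytes / FOUR_MB));
    }

    console.log(billableTransactions(8 * 1024 * 1024));  // 8 MB  -> 2 transactions
    console.log(billableTransactions(10 * 1024 * 1024)); // 10 MB -> 3 transactions
    console.log(billableTransactions(256 * 1024));       // 256 KB -> 1 transaction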
Coming back to your question: as per the above logic, write operations for 10,000 transactions, each at the 4MB maximum data size, would cost Rs. 4.296 for the hot tier and Rs. 8.592 for the cold tier.
I might be misunderstanding this explanation, but it seems wrong:
If it's saying that the charge covers 10,000 transactions of up to 4MB each (i.e. 40GB across 10k transactions), then this is wrong.
The charges are per transaction, up to a max of 4MB per transaction; e.g. a single read/write of 10MB of data incurs 3 transaction charges.
The ADLS Storage team at Microsoft maintains an FAQ that explains this better, although it's still not entirely clear: https://azure.github.io/Storage/docs/analytics/azure-storage-data-lake-gen2-billing-faq/
The FAQ seems to suggest that reading/writing data from/to a file is billed on the "per 4MB" approach, whilst metadata operations (Copy File, Rename, Set Properties, etc.) are billed per 10,000 operations.
So effectively data reads/writes are charged per transaction up to 4MB (single transactions over 4MB are charged as multiple 4MB transactions), whilst metadata operations are charged on a per-10,000-operations basis.
How you're ever supposed to work out how much this is going to cost upfront is beyond me...

Hyperledger Fabric: How to observe a performance difference between HDD and SSD in LevelDB with I/O-bound transactions?

I am currently developing Fabric chaincode.
I created a function in the chaincode that is I/O-bound (it reads many values from the ledger).
I ran an experiment on two nodes: one node uses an HDD and the other uses an SSD.
The ledger stored 10,000 objects with 4 KB keys. (This may be too small; when I stored 100,000, an error occurred, so I tested with 10,000.)
When I READ (GetState) many values from the ledger, I expected the node with the SSD to be faster, but there was no difference.
My understanding is that because LevelDB is a fast key-value store, there is no difference (sequential and random reads have similar execution times).
I am wondering how to design the experiment so that the HDD/SSD performance difference shows up with LevelDB.
If there is a way, I would appreciate any advice.
If you are querying data via chaincode, it is unlikely that you will be I/O-bound at the disk level. The path for querying data via chaincode has a few hops (client to peer, peer to the chaincode container, and back to the peer's state database), so that path length is likely going to be your constraint.
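For reference, an I/O-bound read loop like the one described might look like this in Node.js chaincode (a sketch using fabric-contract-api; the key naming scheme is an assumption). Each getState call crosses the chaincode-to-peer boundary before it ever reaches LevelDB, which is why the gRPC hops tend to dominate long before disk I/O does:

    import { Context, Contract } from 'fabric-contract-api';

    export class ReadHeavyContract extends Contract {
        // Reads `count` keys of the form `${prefix}${i}` and returns the
        // total number of bytes read. Each getState is a round trip from
        // the chaincode container to the peer's state database.
        public async readMany(ctx: Context, prefix: string, count: string): Promise<string> {
            const n = parseInt(count, 10); // chaincode arguments arrive as strings
            let totalBytes = 0;
            for (let i = 0; i < n; i++) {
                const value = await ctx.stub.getState(`${prefix}${i}`);
                if (value && value.length > 0) {
                    totalBytes += value.length;
                }
            }
            return totalBytes.toString();
        }
    }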

Hyperledger Fabric transactions are too slow

I configured my Hyperledger Fabric network with 2 peers under 1 org and 2 CouchDB instances, one per peer.
I am seeing that when I submit a transaction, it takes some time to complete, sometimes around 1 second. For me that is too much; it should take just a few milliseconds.
I have a simulator that inserts around 30k samples into the blockchain, but it runs very slowly because a transaction sometimes takes 1 s, so you can imagine that with such an amount of data it takes a long time.
How can I solve this? Can Fabric handle more transactions than this?
What I have noticed, and what seems wrong to me:
Using Fauxton to look inside CouchDB, if I upload 300 samples to the blockchain, I see 300 blocks created. Could this be a problem? I know that a block should encapsulate more than one transaction, but my blockchain does not seem to do this. How do I solve it?
Another thing I have noticed is that I did not configure any endorsement policy. Should I, and would it make things faster? How do I do this?
And finally: is it possible that CouchDB is slowing down the network? How do I disable it?
Two hidden complexities can impact performance:
1. The complexity of your queries, per record type. It's important to build a performance histogram based on object types.
2. Whether your data structure is pre-ordered to suit the hashing algorithm. If not, you'll experience a slight bit of drag if your object size is large.
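On the one-block-per-transaction observation in the question: the ordering service cuts a block when either BatchTimeout elapses or BatchSize.MaxMessageCount transactions have arrived, whichever comes first. At a low submission rate the timeout fires first, so every block ends up holding a single transaction; that is expected behaviour rather than a fault. The relevant section of configtx.yaml looks roughly like this (the values are illustrative, not recommendations):

    Orderer:
        BatchTimeout: 2s              # cut a block after this long, even if not full
        BatchSize:
            MaxMessageCount: 500      # cut a block once this many transactions arrive
            AbsoluteMaxBytes: 10 MB   # hard cap on serialized block size
            PreferredMaxBytes: 2 MB   # soft target for block size

Raising MaxMessageCount alone will not help at a low arrival rate; if per-transaction latency is the concern, shortening BatchTimeout is what reduces it.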

What is the business implication of Hyperledger Fabric ledger metrics?

The Hyperledger Fabric metrics reference (https://hyperledger-fabric.readthedocs.io/en/latest/metrics_reference.html) has three metrics: ledger_block_processing_time, ledger_blockstorage_commit_time, and ledger_statedb_commit_time. My questions are:
What does ledger_block_processing_time mean in business terms? Does it refer to the orderer putting multiple transactions into a block? Does it include the subsequent process in which blocks are committed by peers?
ledger_blockstorage_commit_time and ledger_statedb_commit_time look similar; what is the difference between them?
Thank you.
For Q2:
The ledger has two parts: the first is the actual blockchain (block storage), and the second is the state DB, which represents the current world state. The first metric is the commit time into block storage; the second is the commit time into the state DB.
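For completeness, these metrics can be scraped with Prometheus once the peer's operations endpoint is enabled in core.yaml (a sketch; the listen address is an example):

    operations:
        listenAddress: 127.0.0.1:9443   # serves /metrics and /healthz
    metrics:
        provider: prometheus            # or statsd

With this enabled, ledger_block_processing_time and the two commit-time metrics are exposed as histograms on the /metrics endpoint.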

Large number of channels in a hyperledger fabric network

Is there a limit on the number of channels in a Hyperledger Fabric network?
What are the implications of a larger number of channels?
Thanks,
Naveen
There is an upper bound that you can define for the ordering service:
# Max Channels is the maximum number of channels to allow on the ordering
# network. When set to 0, this implies no maximum number of channels.
MaxChannels: 0
In the peer, each channel's logic is maintained by its own goroutines and data structures.
I'm pretty confident that, except for very extreme use cases, you shouldn't be too worried about the number of channels a peer has joined.
