Block size and transactions per block in Hyperledger Fabric

What is the relationship between MaxMessageCount, AbsoluteMaxBytes, and PreferredMaxBytes?
Does a block in Fabric consist of MaxMessageCount transactions, or of up to PreferredMaxBytes of transactions?
What should these values be set to in order to get maximum throughput?

Max Message Count: The maximum number of transactions/messages to permit in a block.
Absolute Max Bytes: The (strict) maximum number of bytes allowed for the serialized transactions/messages in a block.
Preferred Max Bytes: The preferred maximum number of bytes allowed for the serialized transactions/messages in a batch. A transaction/message larger than the preferred max bytes will result in a batch larger than the preferred max bytes.
Whichever of these criteria is met first determines when the orderer cuts the block.
If you have a constant, high flow of transactions, pack as many transactions as possible into each block to get maximum throughput. Otherwise, tweak BatchTimeout and MaxMessageCount to optimize your transaction throughput.
If you want to dig deeper into this aspect, refer to this research paper: https://arxiv.org/pdf/1805.11390.pdf
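
As a rough illustration, here is a minimal Python sketch of the cutting rules described above. It is not Fabric's actual ordering code; the parameter values are simply the defaults shown in the configtx.yaml excerpt further down:

MAX_MESSAGE_COUNT = 10
PREFERRED_MAX_BYTES = 512 * 1024        # 512 KB
ABSOLUTE_MAX_BYTES = 99 * 1024 * 1024   # 99 MB

def cut_batches(messages):
    # Group serialized messages (byte strings) into batches per the rules above.
    blocks, batch, batch_bytes = [], [], 0
    for msg in messages:
        if len(msg) > ABSOLUTE_MAX_BYTES:
            raise ValueError("message exceeds AbsoluteMaxBytes and is rejected")
        if len(msg) > PREFERRED_MAX_BYTES:
            # An oversized message gets a batch of its own, larger than preferred.
            if batch:
                blocks.append(batch)
            blocks.append([msg])
            batch, batch_bytes = [], 0
            continue
        if batch and batch_bytes + len(msg) > PREFERRED_MAX_BYTES:
            # Cut because the preferred byte limit would be exceeded.
            blocks.append(batch)
            batch, batch_bytes = [], 0
        batch.append(msg)
        batch_bytes += len(msg)
        if len(batch) == MAX_MESSAGE_COUNT:
            # Cut because the message-count limit was reached first.
            blocks.append(batch)
            batch, batch_bytes = [], 0
    if batch:
        blocks.append(batch)   # in practice the BatchTimeout flushes this remainder
    return blocks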

Related

Why is Redis Sorted Set using so much memory overhead?

I am designing a Redis datastore with ~3000 sorted set keys, each with 60-300 items, each item around 250 bytes in size.
used_memory_overhead = 1055498028 bytes and used_memory_dataset = 9681332 bytes. This ratio seems way too high. used_memory_dataset_perc is less than 1%. Memory usage is exceeding the max of 1.16G and causing keys to be evicted.
Do sorted sets really have 99% memory overhead? Will I have to find another solution? I just want a list of values sorted by a field in the value.
Here's the output of INFO memory. used_memory_dataset_perc just keeps decreasing until it's below 1%, and eventually the max memory is exceeded:
# Memory
used_memory:399243696
used_memory_human:380.75M
used_memory_rss:493936640
used_memory_rss_human:471.05M
used_memory_peak:1249248448
used_memory_peak_human:1.16G
used_memory_peak_perc:31.96%
used_memory_overhead:390394038
used_memory_startup:4263448
used_memory_dataset:8849658
used_memory_dataset_perc:2.24%
allocator_allocated:399390096
allocator_active:477728768
allocator_resident:499613696
used_memory_lua:37888
used_memory_lua_human:37.00K
used_memory_scripts:0
used_memory_scripts_human:0B
number_of_cached_scripts:0
maxmemory:1248854016
maxmemory_human:1.16G
maxmemory_policy:volatile-lru
allocator_frag_ratio:1.20
allocator_frag_bytes:78338672
allocator_rss_ratio:1.05
allocator_rss_bytes:21884928
rss_overhead_ratio:0.99
rss_overhead_bytes:-5677056
mem_fragmentation_ratio:1.24
mem_fragmentation_bytes:94804256
mem_not_counted_for_evict:0
mem_replication_backlog:0
mem_clients_slaves:0
mem_clients_normal:385555150
mem_aof_buffer:0
mem_allocator:jemalloc-5.1.0
active_defrag_running:0
lazyfree_pending_objects:0
In case it is relevant, I am using AWS ElastiCache.
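
A hedged sketch of how to check where the memory is actually going, using redis-py (the key pattern "myzset:*" is an assumption; substitute your own naming scheme):

import redis

r = redis.Redis(host="localhost", port=6379)

# The same overhead/dataset split shown above, fetched programmatically.
info = r.info("memory")
print("overhead:", info["used_memory_overhead"], "dataset:", info["used_memory_dataset"])

# Sample the real footprint of some sorted-set keys, including per-key overhead.
total = 0
for key in r.scan_iter(match="myzset:*", count=100):
    size = r.memory_usage(key) or 0           # bytes, as reported by MEMORY USAGE
    total += size
    print(key, r.zcard(key), "members,", size, "bytes")
print("sampled sorted sets account for", total, "bytes")

Comparing the sampled per-key totals against used_memory_overhead should show whether the overhead really lives in the sorted sets themselves or in other components counted by INFO memory (client buffers, replication backlog, and so on).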

Calculate the maximum total RUs in 24 hours for CosmosDB

I set 400 RU/s for my collection in CosmosDB. I want to estimate the maximum total number of RUs available in 24 hours. Please let me know how I can calculate it.
Is this calculation right?
400 * 60 * 60 * 24 = 34,560,000 RU in 24 hours
When talking about maximum RU available within a given time window, you're correct (except that would be total RU, not total RU/second - it would remain constant at 400/sec unless you adjusted your RU setting).
But... given that RUs are reserved on a per-second basis, what you consume within a given one-second window is really up to you and your specific operations. If you don't consume all allocated RUs (400, in this case) in a given one-second window, that capacity is gone (you're paying for reserved capacity). So yes, you're right about the absolute maximum, but that might not match your real-world scenario: you could end up consuming far less than the maximum you've allocated (imagine idle times for your app, when you're just not hitting the database much). Also note that RUs are distributed across partitions, so depending on your partitioning scheme, you could end up not using a portion of your available RUs.
Note that it's entirely possible to use more than your allocated 400 RU in a given one-second window, which then puts you into "RU Debt" (see my answer here for a detailed explanation).
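
As a quick arithmetic check of the ceiling described above (400 RU/s is the provisioned throughput from the question; actual consumption can be far lower):

provisioned_ru_per_sec = 400
seconds_per_day = 60 * 60 * 24                      # 86,400 seconds

max_ru_per_day = provisioned_ru_per_sec * seconds_per_day
print(max_ru_per_day)                               # 34560000 RU -- an upper bound, not a guarantee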

Hyperledger Fabric - Fabcar performance

I'm trying to run a project using Hyperledger Fabric with a setup similar to the Fabcar example.
I'm surprised by the huge amount of time it takes to submit a transaction.
Just to keep it simple and fully reproducible, I measured the time needed to submit the createCar transaction on the actual Fabcar project.
After setting up the network (startFabric.sh javascript) and enrolling the admin and a user, I ran the invoke.js script. The whole script takes about 2.5 seconds!
As far as I can understand, running the contract takes just a few milliseconds, and the same goes for sending the transaction proposal.
Most of the time is spent with the eventHandler listening and waiting for the event (in the transaction.js library).
Is there a way of speeding up the process? I was expecting to be able to submit several transactions per second.
Short answer: decrease BatchTimeout in configtx.yaml.
Long answer:
If you only run one transaction at a time, as you describe, this is totally normal.
If you look at your configtx.yaml, in the 'Orderer' section, you can see this:
Orderer: &OrdererDefaults

    # Orderer Type: The orderer implementation to start
    # Available types are "solo" and "kafka"
    OrdererType: solo

    Addresses:
        - orderer.example.com:7050

    # Batch Timeout: The amount of time to wait before creating a batch
    BatchTimeout: 2s

    # Batch Size: Controls the number of messages batched into a block
    BatchSize:

        # Max Message Count: The maximum number of messages to permit in a batch
        MaxMessageCount: 10

        # Absolute Max Bytes: The absolute maximum number of bytes allowed for
        # the serialized messages in a batch.
        AbsoluteMaxBytes: 99 MB

        # Preferred Max Bytes: The preferred maximum number of bytes allowed for
        # the serialized messages in a batch. A message larger than the preferred
        # max bytes will result in a batch larger than preferred max bytes.
        PreferredMaxBytes: 512 KB
There are 2 important settings here:
BatchTimeout
MaxMessageCount
BatchTimeout defines the maximum time to wait before creating a block. This means that when an invoke is made, the orderer processes the transaction and waits up to 2 seconds before creating the block, so each 'first' transaction takes more than 2 seconds! But if another invoke arrives, say, 1.5 s after the first transaction, that second call will take less than 1 s!
MaxMessageCount speaks for itself. It means that if 10 invokes arrive, a block is created even if the 2 seconds have not passed. For example, 10 invokes in a row within 0.5 s will result in a block being created in under a second.
These settings are there to balance the load depending on your network. If you have a low-use application, with fewer than 10 tps (transactions per second), you can reduce BatchTimeout below 2 s to improve the response time of an invoke. If you have a high tps, you can increase MaxMessageCount to create larger blocks.
The other settings define the maximum size of a message/batch.
Try to understand how your network will be used, simulate the estimated tps with a test case, and tweak the parameters to find the configuration that fits your needs.
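
A rough back-of-the-envelope sketch of the latency described above; the proposal and commit timings are assumptions for illustration, and only BatchTimeout comes from configtx.yaml:

def first_invoke_latency(batch_timeout_s, proposal_s=0.05, commit_and_event_s=0.3):
    # An isolated transaction waits out the full BatchTimeout before its block is cut.
    return proposal_s + batch_timeout_s + commit_and_event_s

print(first_invoke_latency(2.0))   # ~2.35 s -- in the same ballpark as the ~2.5 s observed
print(first_invoke_latency(0.3))   # ~0.65 s after lowering BatchTimeout to 300ms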

Cassandra batch prepared statement size warning

I see this warning continuously in Cassandra's debug.log:
WARN [SharedPool-Worker-2] 2018-05-16 08:33:48,585 BatchStatement.java:287 - Batch of prepared statements for [test, test1] is of size 6419, exceeding specified threshold of 5120 by 1299.
In this message:
6419 - input payload (batch) size in bytes
5120 - threshold size in bytes
1299 - number of bytes above the threshold
As per this ticket, https://github.com/krasserm/akka-persistence-cassandra/issues/33, I saw that it is due to the increased input payload size, so I increased commitlog_segment_size_in_mb in cassandra.yaml to 60 MB and we are not seeing this warning anymore.
Is this warning harmful? Will increasing commitlog_segment_size_in_mb affect performance in any way?
This is not directly related to the commit log size, and I wonder why changing it made the warning disappear...
The batch size threshold is controlled by the batch_size_warn_threshold_in_kb parameter, which defaults to 5 KB (5120 bytes).
You can increase this parameter to a higher value, but you really need to have a good reason for using batches; it would help to understand the context in which they are used...
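
For illustration, a hedged sketch using the Python driver of keeping each batch small instead of raising the threshold (the keyspace, table, and column names are hypothetical; rows in one batch are assumed to target the same partition):

from cassandra.cluster import Cluster
from cassandra.query import BatchStatement, BatchType

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("test")
insert = session.prepare("INSERT INTO events (pk, ts, payload) VALUES (?, ?, ?)")

def write_in_small_batches(rows, max_per_batch=20):
    # rows: iterable of (pk, ts, payload) tuples for the same partition key.
    batch = BatchStatement(batch_type=BatchType.UNLOGGED)
    count = 0
    for row in rows:
        batch.add(insert, row)
        count += 1
        if count == max_per_batch:
            session.execute(batch)
            batch = BatchStatement(batch_type=BatchType.UNLOGGED)
            count = 0
    if count:
        session.execute(batch)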
commitlog_segment_size_in_mb represents your block size for commit log archiving or point-in-time backup. These are only active if you have configured archive_command or restore_command in your commitlog_archiving.properties file.
The default size is 32 MB.
As per the Expert Apache Cassandra Administration book:
you must ensure that the value of commitlog_segment_size_in_mb is twice the value of max_mutation_size_in_kb.
You can take this error as a reference:
Mutation of 17076203 bytes is too large for the maxiumum size of 16777216
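
A quick check of the relationship quoted above: with the default 32 MB commit log segment, the largest allowed mutation is half a segment, which matches the limit in that error message.

commitlog_segment_size_in_mb = 32
max_mutation_size_bytes = commitlog_segment_size_in_mb * 1024 * 1024 // 2
print(max_mutation_size_bytes)   # 16777216 bytes, i.e. 16 MB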

Microsoft.WindowsAzure.Storage.Table.CloudTable.ExecuteBatchAsync() truncates message

When I call this method with a large EntityProperty (around 17 KB of text), it truncates the string.
I know that there is a limit of 64 KB for a property (column) and 1 MB for an entire entity (row) in Azure Table storage.
Any insights?
Apart from all those size restrictions, don't forget the size restriction on an entity group transaction, which is what the ExecuteBatchAsync method performs:
The transaction can include at most 100 entities, and its total payload may be no more than 4 MB in size.
Ref: http://msdn.microsoft.com/en-us/library/azure/dd894038.aspx
Please ensure that your payload size is less than 4 MB.
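
A hedged, SDK-agnostic sketch of splitting entities into entity group transactions of at most 100 entities and roughly 4 MB each before submitting; serialized_size is a rough estimate and submit_batch is a hypothetical placeholder for whatever table client you use:

import json

MAX_ENTITIES_PER_BATCH = 100
MAX_BATCH_BYTES = 4 * 1024 * 1024        # 4 MB payload limit

def serialized_size(entity):
    # Rough estimate of one entity's wire size (an approximation, not exact).
    return len(json.dumps(entity).encode("utf-8"))

def split_into_batches(entities):
    batch, batch_bytes = [], 0
    for entity in entities:
        size = serialized_size(entity)
        if batch and (len(batch) == MAX_ENTITIES_PER_BATCH
                      or batch_bytes + size > MAX_BATCH_BYTES):
            yield batch
            batch, batch_bytes = [], 0
        batch.append(entity)
        batch_bytes += size
    if batch:
        yield batch

# for batch in split_into_batches(entities):   # entities must share one PartitionKey
#     submit_batch(batch)                      # hypothetical: your client's batch/transaction call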
