I am new to etcd and wanted to know: is there a size limit for a value that can be stored in etcd?
Found it:
etcd is designed to handle small key-value pairs typical for metadata. Larger requests will work, but may increase the latency of other requests. By default, the maximum size of any request is 1.5 MiB. This limit is configurable through the --max-request-bytes flag for the etcd server.
Storage size limit:
The default storage size limit is 2 GB, configurable with the --quota-backend-bytes flag. 8 GB is a suggested maximum size for normal environments, and etcd warns at startup if the configured value exceeds it.
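For reference, here is what raising both limits might look like when starting the etcd server; a minimal sketch using the two flags named above, with the 1.5 MiB and 8 GB figures from this answer expressed in bytes (the values are illustrative, not a recommendation):

etcd --max-request-bytes=1572864 --quota-backend-bytes=8589934592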
Can anyone tell me how to use ActiveMQ Artemis from the Node.js module stompit? I am getting this error:
AMQ229119: Disk Capacity is Low, cannot produce more messages.
ActiveMQ Artemis will automatically detect low disk capacity and block producers. This is calculated based on a percentage which is defined, by default, in broker.xml:
<max-disk-usage>90</max-disk-usage>
So if the disk is over 90% full the broker will block production of new messages.
Based on the size of your disk it could be that there actually is sufficient capacity to produce more messages. For example, if the total disk size was 1 terabyte and was 90% full then that would still leave 100 gigabytes of capacity. In cases like this the recommendation would be to increase the <max-disk-usage> value. Otherwise the recommendation would be to consume messages to free up the necessary disk space.
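If you do decide the threshold is too conservative for your disk size, the change would look something like this in broker.xml (95 is only an illustrative value; pick one appropriate for your environment):

<max-disk-usage>95</max-disk-usage>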
I'm getting these errors:
java.lang.IllegalArgumentException: Mutation of 16.000MiB is too large for the maximum size of 16.000MiB
in Apache Cassandra 3.x. I'm doing inserts of 4MB or 8MB blobs, but not anything greater than 8MB. Why am I hitting the 16MB limit? Is Cassandra batching up multiple writes (inserts) and creating a "mutation" that is too large? (If so, why would it do that, since the configured limit is 8MB?)
There is little documentation on mutations -- except to say that a mutation is an insert or delete. How can I prevent these errors?
You can increase the commit log segment size to 64 MB in cassandra.yaml:
commitlog_segment_size_in_mb: 64
By default, the commit log segment size is 32 MB.
By design, the maximum allowed mutation size is 50% of the configured commitlog_segment_size_in_mb; this is so Cassandra avoids writing segments with large amounts of empty space. With the default 32 MB segment size that works out to the 16 MiB limit in the error above, and raising the segment size to 64 MB raises the mutation limit to 32 MiB.
That said, you should investigate why the write size has suddenly increased. If it is not expected, i.e. not due to a planned change, then it may well be a problem with the client application that needs further inspection.
I am using this memcached package with Node.js. As the default maximum size of data per key is 1 MB, I am facing a problem when the data for a particular key is more than 1 MB.
One workaround would be to raise the default maximum item size above 1 MB in memcached.conf using
-I 2M
and in code setting the maxValue
var memcached = new Memcached('localhost:11211', {maxValue: 2097152});
What would be the proper way to stay within the 1 MB limit? I have read suggestions about splitting data into multiple keys. How can I achieve multi-key splitting of JSON data with the memcached package?
Options available:
1/ Make sure you are using compression while storing values in memcached; your Node.js memcached driver should support gzip compression.
2/ Split the data into multiple keys (see the sketch after this list)
3/ Increase the max object size to more than 1 MB (but that may increase fragmentation and decrease performance, depending on your cache usage)
4/ Use Redis as the cache instead of memcached if your objects are usually large. The Redis string data type supports values up to 512 MB in size, available through the direct get/set interface of any standard Node.js Redis driver.
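For option 2, here is a minimal sketch of splitting a JSON payload across multiple keys with the memcached package. The chunk size, the key naming scheme (key + ':chunk:' + i) and the manifest stored under the original key are all assumptions made for illustration, not anything the package provides:

var Memcached = require('memcached');
var memcached = new Memcached('localhost:11211');

// Chunk by character count; 900 KB leaves headroom under the 1 MB item limit
// (multi-byte characters can make a chunk slightly larger in bytes).
var CHUNK_SIZE = 900 * 1024;

function setLargeJSON(key, obj, lifetime, callback) {
  var json = JSON.stringify(obj);
  var chunks = [];
  for (var i = 0; i < json.length; i += CHUNK_SIZE) {
    chunks.push(json.slice(i, i + CHUNK_SIZE));
  }
  // Store a small manifest under the original key recording the chunk count.
  memcached.set(key, { chunks: chunks.length }, lifetime, function (err) {
    if (err) return callback(err);
    var pending = chunks.length;
    chunks.forEach(function (chunk, idx) {
      memcached.set(key + ':chunk:' + idx, chunk, lifetime, function (err) {
        if (err) return callback(err);
        if (--pending === 0) callback(null);
      });
    });
  });
}

function getLargeJSON(key, callback) {
  memcached.get(key, function (err, manifest) {
    if (err || !manifest) return callback(err || new Error('missing manifest'));
    var keys = [];
    for (var i = 0; i < manifest.chunks; i++) keys.push(key + ':chunk:' + i);
    memcached.getMulti(keys, function (err, data) {
      if (err) return callback(err);
      var json = keys.map(function (k) { return data[k]; }).join('');
      callback(null, JSON.parse(json));
    });
  });
}

Note that this sketch does not handle a chunk expiring while the others survive; in practice you would also want a version stamp in the manifest so a partially evicted value is detected rather than reassembled incorrectly.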
I'm going over the documentation for Hazelcast, looking at the differences between eviction policies, and I noticed one that I didn't fully understand.
map_size_per_jvm: Max map size per JVM.
partitions_wide_map_size: Partitions (default 271) wide max map size.
I'm assuming both of these are talking about entries and not size in terms of storage space. Isn't a partition going to rest on one JVM? To me it would seem like these are the same option; can anyone help me make sense of the difference between these two?
Firstly, yes, the max sizes map_size_per_jvm, cluster_wide_map_size and partitions_wide_map_size count entries (not size in terms of storage space).
Secondly, these max sizes are hard limits, and whilst similar they are in fact different to the eviction policies (being LRU, LFU or NONE).
Here's how they work:
cluster_wide_map_size - this is the total number of map entries across all Hazelcast nodes.
map_size_per_jvm - this is essentially the number of map entries per Hazelcast node.
So if you are running 2 nodes using this policy with max size = 10 (and backupCount = 0, see below), you have a max of 20 map entries across all nodes. Adding another Hazelcast node increases the total max map size.
partitions_wide_map_size - this one is a little unpredictable, since it depends on the distribution of partitions across your nodes.
A cluster node reaches its maximum when it reaches its proportion of (owned partitions / total partitions) of the max size. Code: MaxSizePartitionsWidePolicy
Please note that all of these max sizes include backups, so backupCount = 1 effectively cuts the real max map size in half.
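As a point of reference, on the Hazelcast versions that expose these policies the map configuration looks roughly like the sketch below; the exact XML shape can differ between releases, so treat this as illustrative rather than copied from the docs (the max size of 10 and backup-count of 0 simply mirror the example above):

<map name="default">
    <backup-count>0</backup-count>
    <eviction-policy>LRU</eviction-policy>
    <max-size policy="map_size_per_jvm">10</max-size>
</map>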
The other max size settings, used_heap_size and used_heap_percentage seem clear in their usage.
I hope this helps, good luck!
What is the default memory size allotted per CF in Cassandra 1.2?
In previous versions it was 128 MB, declared via the MemtableThroughputInMB parameter in the cassandra.yaml file, but now I can't find it in the Cassandra 1.2 config file.
Thanks.
It is replaced by memtable_total_space_in_mb.
(Default: 1/3 of the heap) Specifies the total memory used for all memtables on a node. This replaces the per-table storage settings memtable_operations_in_millions and memtable_throughput_in_mb.
Nowadays Cassandra does not give you the scope to set the default memory size on a per-column-family basis. Rather, you can define the total memtable space in your configuration (the .yaml file) using memtable_total_space_in_mb. By default its value is one third of your JVM heap size.
Cassandra manages this space across all your ColumnFamilies and flushes memtables to disk as needed. Do note that a minimum of 1 MB per memtable is used by the per-memtable arena allocator, which is worth keeping in mind if you are looking at going from thousands to tens of thousands of ColumnFamilies.
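If you do want to override the default, a hedged example of the entry in cassandra.yaml might look like this (2048 is purely illustrative; leaving the setting out keeps the one-third-of-heap default):

memtable_total_space_in_mb: 2048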