I used CouchDB to create a private NPM mirror, but I found that beam.smp keeps my CPU usage at 100%. Is there any way to make it lower, like 50%?
Thank you very much.
You cannot directly limit CPU/memory usage for CouchDB, but you can tweak the Replicator options to reduce its resource usage. The options you're interested in:
http_connections
Defines the maximum number of HTTP connections per replication. Keeping this lower reduces the transfer bandwidth.
[replicator]
http_connections = 20
worker_batch_size
With lower batch sizes, checkpoints are done more frequently. Lower batch sizes also reduce the total amount of RAM used.
[replicator]
worker_batch_size = 500
worker_processes
The number of replication workers. Keeping this lower reduces the amount of replication data handled at once, which reduces CPU usage because there is less data to process.
[replicator]
worker_processes = 4
Play with these options to find the right combination that fits your limits.
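If you'd rather experiment at runtime than edit the config file and restart, CouchDB also exposes its configuration over HTTP. A minimal sketch, assuming CouchDB 1.x (on 2.x+ the path becomes /_node/_local/_config/...) and placeholder admin credentials:

# Lower the number of replication workers without restarting CouchDB
curl -X PUT http://admin:password@localhost:5984/_config/replicator/worker_processes -d '"2"'

The value here is illustrative; the same pattern works for http_connections and worker_batch_size.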
We are running Grafana/Prometheus to monitor our CPU metrics and to find the aggregated CPU usage across all CPUs. The problem is that we have hyperthreading enabled, and when we stress the CPU, the percentage exceeds 100%. My question is how to cap the displayed CPU usage at 100%, even when the CPU is highly utilized.
P.S. I have tried setting the max and min limits in Grafana, but the graph's spikes still go above that limit.
Kindly give me the right query for this problem.
The queries I have tried are given below.
sum(irate(node_cpu_seconds_total{instance="localhost",job="node", mode!="idle"}[5m]))*100
100 - avg(irate(node_cpu_seconds_total{instance="localhost",job="node", mode!="idle"}[5m]))*100
and other similar queries.
If all you want is to "cap" a variable or expression result at a maximum value (that is, 100), you could simply use the Prometheus function clamp_max.
Thus, you could do:
clamp_max(<expr>, 100)
This is probably the most helpful query:
(1 - avg(irate(node_cpu_seconds_total{instance="$instance",job="$job",mode!="idle"}[5m])))*100
Replace $instance with your instance IP and $job with your node exporter job name.
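Combining the two, a capped version of that same query (a sketch built from the clamp_max call above; $instance and $job are the same Grafana template variables) would be:

clamp_max((1 - avg(irate(node_cpu_seconds_total{instance="$instance",job="$job",mode!="idle"}[5m]))) * 100, 100)

This guarantees the graph never reports more than 100%, even when hyperthreading pushes the raw counters higher.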
I am designing a Redis datastore with ~3000 sorted set keys, each holding 60-300 items of around 250 bytes each.
used_memory_overhead = 1055498028 bytes and used_memory_dataset = 9681332 bytes. This ratio seems way too high. used_memory_dataset_perc is less than 1%. Memory usage is exceeding the max of 1.16G and causing keys to be evicted.
Do sorted sets really have 99% memory overhead? Will I have to just find another solution? I just want a list of values that is sorted by a field in the value.
Here's the output of MEMORY INFO. used_memory_dataset_perc just keeps decreasing until it's <1%, and eventually the max memory is exceeded.
# Memory
used_memory:399243696
used_memory_human:380.75M
used_memory_rss:493936640
used_memory_rss_human:471.05M
used_memory_peak:1249248448
used_memory_peak_human:1.16G
used_memory_peak_perc:31.96%
used_memory_overhead:390394038
used_memory_startup:4263448
used_memory_dataset:8849658
used_memory_dataset_perc:2.24%
allocator_allocated:399390096
allocator_active:477728768
allocator_resident:499613696
used_memory_lua:37888
used_memory_lua_human:37.00K
used_memory_scripts:0
used_memory_scripts_human:0B
number_of_cached_scripts:0
maxmemory:1248854016
maxmemory_human:1.16G
maxmemory_policy:volatile-lru
allocator_frag_ratio:1.20
allocator_frag_bytes:78338672
allocator_rss_ratio:1.05
allocator_rss_bytes:21884928
rss_overhead_ratio:0.99
rss_overhead_bytes:-5677056
mem_fragmentation_ratio:1.24
mem_fragmentation_bytes:94804256
mem_not_counted_for_evict:0
mem_replication_backlog:0
mem_clients_slaves:0
mem_clients_normal:385555150
mem_aof_buffer:0
mem_allocator:jemalloc-5.1.0
active_defrag_running:0
lazyfree_pending_objects:0
In case it is relevant, I am using AWS ElastiCache.
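(For anyone investigating the same question: a quick way to see what a single sorted set actually costs, assuming Redis >= 4.0 where the MEMORY USAGE command is available, and with myzset standing in for one of your keys:

redis-cli MEMORY USAGE myzset SAMPLES 0

SAMPLES 0 makes Redis account for every element instead of sampling, so the reported byte count can be compared directly against the ~250-byte payload per item.)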
After running tpstats on all nodes, I see a lot of nodes with a high ALL TIME BLOCKED count for Native-Transport-Requests (NTR). We have a 4-node cluster, and the NTR ALL TIME BLOCKED values are:
NODE 1: 23953
NODE 2: 2935
NODE 3: 15229
NODE 4: 5951
I know ALL TIME BLOCKED is bad, so I am worried about what I am doing wrong.
This pool handles CQL requests, so its size is the number of active CQL requests allowed. It's limited to prevent too many active requests from OOMing your system (e.g., each returning large blobs). This effectively applies backpressure to your client application to slow down. Unfortunately, if you have small requests this isn't ideal and hurts your throughput, so in CASSANDRA-11363 they added a setting to make the space tradeoff for small, bursty workloads.
If you upgrade to 2.2.8+, you can set the max queue size of that thread pool with -Dcassandra.max_queued_native_transport_requests=4096
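To make the flag persist across restarts, one place to put it (a sketch assuming a standard installation where extra JVM flags are added in conf/cassandra-env.sh; the value 4096 is just the example from above) is:

# conf/cassandra-env.sh
# Raise the bound on queued native transport requests (CASSANDRA-11363)
JVM_OPTS="$JVM_OPTS -Dcassandra.max_queued_native_transport_requests=4096"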
The YCSB Endpoint benchmark would have you believe that Cassandra is the golden child of NoSQL databases. However, recreating the results on our own boxes (8 cores with hyperthreading, 60 GB memory, 2x 500 GB SSDs), we are seeing dismal read throughput for workload b (read-mostly, i.e. 95% read, 5% update).
The cassandra.yaml settings are exactly the same as the Endpoint settings, barring the different IP addresses and our disk configuration (1 SSD for data, 1 for the commit log). While their throughput is ~38,000 operations per second, ours is ~16,000 regardless (relatively) of the number of threads/client nodes; i.e., one worker node with 256 threads will report ~16,000 ops/sec, while 4 nodes will each report ~4,000 ops/sec.
I've set the readahead value to 8KB for the SSD data drive. I'll put the custom workload file below.
When analyzing disk I/O and CPU usage with iostat, the read throughput is consistently ~200,000 KB/s, which seems to suggest that the YCSB cluster throughput should be higher (records are 100 bytes). ~25-30% of CPU time is spent in %iowait, and 10-25% in user time.
top and nload show no obvious bottleneck (<50% memory usage, and 10-50 Mbit/s on a 10 Gb/s link).
# The name of the workload class to use
workload=com.yahoo.ycsb.workloads.CoreWorkload
# There is no default setting for recordcount but it is
# required to be set.
# The number of records in the table to be inserted in
# the load phase or the number of records already in the
# table before the run phase.
recordcount=2000000000
# There is no default setting for operationcount but it is
# required to be set.
# The number of operations to use during the run phase.
operationcount=9000000
# The offset of the first insertion
insertstart=0
insertcount=500000000
core_workload_insertion_retry_limit=10
core_workload_insertion_retry_interval=1
# The number of fields in a record
fieldcount=10
# The size of each field (in bytes)
fieldlength=10
# Should read all fields
readallfields=true
# Should write all fields on update
writeallfields=false
fieldlengthdistribution=constant
readproportion=0.95
updateproportion=0.05
insertproportion=0
readmodifywriteproportion=0
scanproportion=0
maxscanlength=1000
scanlengthdistribution=uniform
insertorder=hashed
requestdistribution=zipfian
hotspotdatafraction=0.2
hotspotopnfraction=0.8
table=usertable
measurementtype=histogram
histogram.buckets=1000
timeseries.granularity=1000
The key was increasing native_transport_max_threads in the cassandra.yaml file.
Along with the increased settings mentioned in the comments (increasing the number of connections in the YCSB client as well as concurrent reads/writes in Cassandra), throughput jumped to ~80,000 ops/sec.
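A sketch of what those changes look like in cassandra.yaml; native_transport_max_threads, concurrent_reads, and concurrent_writes are the standard setting names, but the values here are illustrative rather than the ones used in this benchmark:

# cassandra.yaml
native_transport_max_threads: 256
concurrent_reads: 64
concurrent_writes: 64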
I've got pretty unusual latency patterns in my production setup:
The whole cluster (3 machines: 48 GB RAM, 7500 RPM disks, 6 cores) shows latency spikes every 10 minutes, on all machines at the same time.
See this screenshot.
I checked the logfiles, and it seems that no compactions are taking place at that time.
I've got 2k-5k reads/sec. No optimizations have been made so far.
Caching is set to "ALL"; the hit rate for the row cache is ~0.7.
Any ideas? Is tuning memtable size an option?
Best,
Tobias
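(One way to double-check the compaction theory during a spike, sketched with standard nodetool commands rather than anything specific to this cluster:

# run on each node while a spike is occurring
nodetool compactionstats   # any compactions actually in progress?
nodetool tpstats           # pending/blocked thread pool tasks, e.g. flushes

If flushes line up with the 10-minute spikes, memtable tuning is worth a look.)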