I have consumer applications which read (never write) a database of size ~4GiB and perform some tasks. To make sure the same database is not duplicated across applications, I've stored it on every node machine of the k8s cluster.
DaemonSet
I've used a DaemonSet with a "hostPath" volume. The DaemonSet pod extracts the database onto each node machine (/var/lib/DATABASE).
For the DaemonSet pod's health check, I've written a shell script which checks the modification time of the database file (using the date command).
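The script is roughly along these lines (the database file name and freshness threshold shown here are placeholders, not my exact values):

#!/bin/sh
# Liveness check: fail if the extracted database file is missing or stale.
DB_FILE=/var/lib/DATABASE/db.dump          # placeholder file name
MAX_AGE_SECONDS=86400                      # placeholder freshness threshold
[ -f "$DB_FILE" ] || exit 1
mtime=$(date -r "$DB_FILE" +%s)            # modification time via the date command
now=$(date +%s)
[ $(( now - mtime )) -le "$MAX_AGE_SECONDS" ] || exit 1
exit 0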
For the database extraction, approximately 300MiB of memory is required, and 50MiB is more than sufficient to perform the health check. Hence I've set the memory request to 100MiB and the memory limit to 1.5GiB.
When I run the DaemonSet, I observe that memory usage is high (~300MiB) for the first 10 seconds (while the database extraction is performed) and after that it goes down to ~30MiB. The DaemonSet works as I expect.
Consumer Application
Now, the consumer application pods (written in Golang) use the same "hostPath" volume (/var/lib/DATABASE) and read the database from that location. These consumer applications do not perform any write operations on the /var/lib/DATABASE directory.
However, when I deploy this consumer application on k8s, I see a huge increase in the memory usage of the DaemonSet pod, from 30MiB to 1.5GiB. The memory usage of the DaemonSet pods is almost the same as the memory limit.
I am not able to understand this behaviour: why does the consumer application cause the memory usage of the DaemonSet pod to grow?
Any help/suggestions/troubleshooting steps would be of great help!
Note: I'm using the "kubectl top" command to measure the memory (working-set bytes).
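That is, something like the following (the namespace is a placeholder):

# per-pod / per-container working set as reported by the metrics server
kubectl top pod -n my-namespace --containers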
I've found this link (Kubernetes: in-memory shared cache between pods), which says:
hostPath by itself poses a security risk, and when used, should be scoped to only the required file or directory, and mounted as ReadOnly. It also comes with the caveat of not knowing who will get "charged" for the memory, so every pod has to be provisioned to be able to absorb it, depending how it is written. It also might "leak" up to the root namespace and be charged to nobody but appear as "overhead"
However, I did not find any reference for this in the official k8s documentation. It would be helpful if someone could elaborate on it.
Following are the contents of the memory.stat file from the DaemonSet pod.
cat /sys/fs/cgroup/memory/memory.stat*
cache 1562779648
rss 1916928
rss_huge 0
shmem 0
mapped_file 0
dirty 0
writeback 0
swap 0
pgpgin 96346371
pgpgout 95965640
pgfault 224070825
pgmajfault 0
inactive_anon 0
active_anon 581632
inactive_file 37675008
active_file 1522688000
unevictable 0
hierarchical_memory_limit 1610612736
hierarchical_memsw_limit 1610612736
total_cache 1562779648
total_rss 1916928
total_rss_huge 0
total_shmem 0
total_mapped_file 0
total_dirty 0
total_writeback 0
total_swap 0
total_pgpgin 96346371
total_pgpgout 95965640
total_pgfault 224070825
total_pgmajfault 0
total_inactive_anon 0
total_active_anon 581632
total_inactive_file 37675008
total_active_file 1522688000
total_unevictable 0
Related
I'm new to Db2 and just installed v11.5 on my Ubuntu 18.04.
I referred to these two links for setup purposes:
IBM and DCON
I'm using the DB2 CLI to create a database. On typing create database <database_name> and pressing enter, it just sits there; there's no output.
I checked the db2diag.log as well; it stops at this:
2021-05-18-15.41.46.618309+330 E653248E505 LEVEL: Event
PID : 29136 TID : 140104312022784 PROC : db2sysc 0
INSTANCE: db2inst1 NODE : 000 DB : SOURCE
APPHDL : 0-7 APPID: *LOCAL.db2inst1.210518101139
AUTHID : DB2INST1 HOSTNAME: Host
EDUID : 23 EDUNAME: db2agent (instance) 0
FUNCTION: DB2 UDB, base sys utilities, sqeLocalDatabase::FirstConnect, probe:1000
START : DATABASE: SOURCE : ACTIVATED: NO
I tried 3-4 times; on one occasion I just let it be, and it took around 30-40 minutes but it did create the database. I'm not sure if I'm missing any initialization step.
Kindly guide.
System Spec:
RAM: 16GB, CPU(s): 8, Model name: Intel(R) Core(TM) i5-1035G1 CPU @ 1.00GHz
The time it takes to create your skeleton Db2-LUW database is mainly determined by your I/O and logging configuration, and whether or not you get swapping/paging. The CPU speed is less important than the I/O throughput for the create database action.
As an example: with my Db2-LUW v11.5 on Ubuntu 18.04 and 20.04, create database completes in the following times, as reported by the time tool, with zero paging and no containerization/virtualization (see the timing command after this list):
with NVMe (SSD): around 2 minutes
with spinning disk, ext4, 4k sector size, 256MB cache, SATA3, 3.5-inch 7200rpm: around 4 minutes
with spinning disk, ext4, 512-byte sector size, 64MB disk cache, SATA3, 2.5-inch 5400rpm: around 10 minutes
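For comparison, the statement can be timed directly from the instance owner's shell, along these lines (the database name is just an example):

# time the plain create database statement via the Db2 command line processor
time db2 "CREATE DATABASE TESTDB"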
If you have more than one physical drive and/or controller, it can help to put the transaction logs on a different drive / controller to the tablespaces.
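For example, a sketch of moving the transaction logs (database name and target path are placeholders; the new path takes effect on the next database activation):

# run as the instance owner, with the log mount already created and writable
db2 UPDATE DB CFG FOR TESTDB USING NEWLOGPATH /db2logs
db2 TERMINATE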
So you can see that the performance varies greatly with the I/O configuration. You get what you pay for, and how you configure it.
For the creation of other objects on the skeleton database, like tablespaces, tables, views, indexes, MQTs, routines etc., the performance will vary further, again depending on your I/O and logging configuration.
Is it possible to monitor all write access to the filesystem by all processes under Linux?
I have several different mounted filesystems. A lot of them are tmpfs.
I'm interested in all writes to the root filesystem, except to tmpfs, devtmpfs, etc.
I'm looking for something that will output: <PID xy> writes n bytes to /target/filepath.
Which monitoring tool can list all these write syscalls? Can they be filtered by mount point?
iotop (kernel version 2.6.20 or higher) or dstat could help you, e.g. iotop -o -b -d 10, as discussed in this similar thread.
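For example, to keep a running log of only the processes that are actually doing I/O (flag availability can vary slightly between iotop versions):

# -o only active processes, -b batch mode, -t timestamps, -q trim header output, -d 10 sample every 10 seconds
sudo iotop -obtq -d 10 >> /var/log/iotop.log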
/proc/diskstats has data for all the block devices.
https://www.kernel.org/doc/Documentation/iostats.txt
The /proc/diskstats file displays the I/O statistics of block devices. Each line contains the following 14 fields:
1 - major number
2 - minor number
3 - device name
4 - reads completed successfully
5 - reads merged
6 - sectors read
7 - time spent reading (ms)
8 - writes completed
9 - writes merged
10 - sectors written
11 - time spent writing (ms)
12 - I/Os currently in progress
13 - time spent doing I/Os (ms)
14 - weighted time spent doing I/Os (ms)
For more details refer to Documentation/iostats.txt
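Note that /proc/diskstats is per block device, not per process, but a quick way to watch the sectors-written column (field 10, always counted in 512-byte sectors) is:

# MiB written per device since boot
awk '{ printf "%-12s %10.1f MiB written\n", $3, $10 * 512 / 1048576 }' /proc/diskstats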
You can write a SystemTap script to monitor filesystem operations. You might also visit Brendan D. Gregg's blog, where there are many monitoring tools.
fatrace (File Activity Trace)
fatrace reports file access events (Open, Read, Write, Close) from all running processes. Its main purpose is to find processes which keep waking up the disk unnecessarily and thus prevent some power saving.
When running it outputs one line per event in this format:
<timestamp> <processName(id)>: <accessType> </path/to/file>
For example:
23:10:21.375341 Plex Media Serv(2290): W /srv/dev-disk-by-uuid-UID/Plex/Library/Application Support/Plex Media Server/Logs/Plex Media Server.log
From this you easily get all the necessary info:
Timestamp from the --timestamp option
Process name (who is accessing)
File operation (O-pen, R-ead, W-rite, C-lose)
Filepath (where it is writing to).
You can limit the search scope with --current-mount to only record events on partition/mount of current directory.
So simply cd into the volume which corresponds to your spinning HDD first, and there run fatrace with the --current-mount option.
Without this option, all (real) partitions/mount points are being watched.
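Putting it together, something like this restricts the trace to one mount point and to write events (the path is just an example):

cd /srv/dev-disk-by-uuid-UID        # the mount point you want to watch
sudo fatrace --timestamp --current-mount | grep -E '\): [A-Z]*W '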
Very practical
With it I easily found out that the reason my NAS disk was spinning 24/7, even when nobody accessed the NAS and no maintenance tasks were about to run, was unnecessary logging by the Plex Media Server.
For a few days now we have been encountering a problem with our ArangoDB installation. A few minutes to an hour after startup, all connections to the database are refused. The Arango log file says that there are "Too many open files". An "lsof | grep arango | wc -l" shows that the database has around 50,000 open file handles, which is well under the system-wide maximum allowed by Linux (around 3m).
Does anyone have an idea where this error comes from?
We are using Ubuntu Linux with a 3.13 kernel, 30 GB RAM and three cores. The database is still very small, with around 1.5m entries and a size of 50GB.
Thx, secana
EDIT:
"netstat -anpt | fgrep 2480" shows:
root#syssec-graphdb-001-test:~# netstat -anpt | fgrep 2480
tcp 0 0 10.215.17.193:2480 0.0.0.0:* LISTEN 7741/arangod
tcp 0 0 10.215.17.193:2480 10.215.50.30:53453 ESTABLISHED 7741/arangod
tcp 0 0 10.215.17.193:2480 10.215.50.31:49299 ESTABLISHED 7741/arangod
tcp 0 0 10.215.17.193:2480 10.215.50.30:53155 ESTABLISHED 7741/arangod
"ulimit -n" has a result of 1024, so I think that the ~50,000 are all arango processes together.
Last lines in log file before the database died:
2015-05-26T12:20:43Z [9672] ERROR cannot open datafile '/data/arangodb/databases/database-235999516/collection-28464454696/datafile-18806474509149.db': 'Too many open files'
2015-05-26T12:20:43Z [9672] ERROR cannot open datafile '/data/arangodb/databases/database-235999516/collection-28464454696/datafile-18806474509149.db': Too many open files
2015-05-26T12:20:43Z [9672] DEBUG [arangod/VocBase/collection.cpp:1632] cannot open '/data/arangodb/databases/database-235999516/collection-28464454696', check failed
2015-05-26T12:20:43Z [9672] ERROR cannot open document collection from path '/data/arangodb/databases/database-235999516/collection-28464454696'
It looks like it will make sense to increase the max. number of open files a process is allowed to manage. Given the stated database size of around 50 GB, the (presumably default) value of 1024 seems to be too low.
arangod will require one file descriptor for each parallel client connection. That may not be many, but in the face of HTTP keep-alive connections this could already account for several file descriptors.
Additionally, each datafile of an active collection will need to be memory-mapped and cost one file descriptor as well. With the default datafile size of 32 MB, a database size of 50 GB (on disk) will already consume 1,600 file descriptors:
50 GB database size / (32 MB default size / 1 datafile) = 1600 datafiles
Increasing the ulimit -n value for the arangod user and environment therefore will make sense. You can confirm that arangod can actually use the configured number of file descriptors by starting it with option --server.descriptors-minimum <value>, e.g.
--server.descriptors-minimum 32768
for that many file descriptors. If arangod cannot effectively use that specified amount of file descriptors, it will fail at start with a fatal error. Of course that option can also be put into the arangod.conf file.
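For a persistent change, the usual place is /etc/security/limits.conf; a sketch (the service user name and the values are just examples):

# as root: append nofile limits for the user that runs arangod
cat <<'EOF' >> /etc/security/limits.conf
arangodb  soft  nofile  65535
arangodb  hard  nofile  65535
EOF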
Additionally, the default size for (new) datafiles can be increased via the journalSize parameter for collections. That won't help right now, but will lower the number of required file descriptors for data saved in the future.
For emergencies when you can't restart the database, like in my case, you will find this blog post very useful; it explains how you can change the ulimit of a running process.
If your distribution has util-linux 2.21, you can use the "prlimit" tool, or you can compile the small example C program in the blog post, which worked great for me.
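For example (the values and PID lookup are placeholders):

# raise the open-files limit of the already running arangod process
prlimit --pid "$(pidof -s arangod)" --nofile=65535:65535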
To check the actual limits of a process you can use:
cat /proc/<PID>/limits
Good luck!
Since yesterday, our Redis servers have gradually been using more memory (about 200MB/hour), while the number of keys (330K) and their data size (132MB according to redis-rdb-tools) stay about the same.
Output of redis-cli info shows 6.89G used memory?!
redis_version:2.4.10
redis_git_sha1:00000000
redis_git_dirty:0
arch_bits:64
multiplexing_api:epoll
gcc_version:4.4.6
process_id:3437
uptime_in_seconds:296453
uptime_in_days:3
lru_clock:1905188
used_cpu_sys:8605.03
used_cpu_user:1480.46
used_cpu_sys_children:1035.93
used_cpu_user_children:3504.93
connected_clients:404
connected_slaves:0
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
used_memory:7400076728
used_memory_human:6.89G
used_memory_rss:7186984960
used_memory_peak:7427443856
used_memory_peak_human:6.92G
mem_fragmentation_ratio:0.97
mem_allocator:jemalloc-2.2.5
loading:0
aof_enabled:0
changes_since_last_save:1672
bgsave_in_progress:0
last_save_time:1403172198
bgrewriteaof_in_progress:0
total_connections_received:3616
total_commands_processed:127741023
expired_keys:0
evicted_keys:0
keyspace_hits:18817574
keyspace_misses:8285349
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:1619791
vm_enabled:0
role:slave
master_host:***BLOCKED***
master_port:6379
master_link_status:up
master_last_io_seconds_ago:0
master_sync_in_progress:0
db0:keys=372995,expires=372995
db6:keys=68399,expires=68399
The problem started when we updated our (.NET) client code from BookSleeve 1.1.0.4 to ServiceStack v3.9.71 to prepare for an upgrade to Redis 2.8. But a lot of other stuff was updated too, and our session state store (also Redis, but with the harbour client) does not show the same symptoms.
Where is all that Redis memory going? How can I troubleshoot its usage?
Edit: I just restarted this instance and memory returned to 350M and is now climbing again. The top 10 largest objects are still the same size, ranging from 100K up to 25M for the largest. The number of keys has dropped to 270K (330K earlier).
Here are some sources of "hidden" memory consumption in Redis:
Marc already mentioned the buffers maintained by the master to feed the slave. If a slave is lagging behind its master (because it runs on a slower box for instance), then some memory will be consumed on the master.
When long-running commands are detected, Redis logs them in the SLOWLOG area, which takes some memory. You may want to use the SLOWLOG LEN command to check the number of records you have there (see the example commands after this list).
Communication buffers can also take memory. As far as I remember, with old versions of Redis (and 2.4 is quite old - you should really upgrade), they were unbounded, meaning that if you transfer a big object at some point, the communication buffer associated with that client connection will grow and never shrink. If there are many clients occasionally dealing with large objects, this is a possible explanation. If you use commands that retrieve very large amounts of data from Redis in one shot, that can be an explanation as well. For instance, a simple KEYS * command applied to a Redis server storing millions of keys will consume a significant amount of memory.
You mentioned that you have objects as big as 25 MB. You have 404 client connections; if each of them needs to access such an object at some point in time, that alone can consume about 10 GB of memory.
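For the slow log and client buffer points above, a couple of quick checks (host and port are the defaults and may need adjusting):

# number of entries currently held in the slow log
redis-cli -h 127.0.0.1 -p 6379 SLOWLOG LEN
# one line per client connection; the obl/oll/omem fields (where available) show per-client buffer usage
redis-cli -h 127.0.0.1 -p 6379 CLIENT LIST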
I have an EC2 server running Elasticsearch 0.9 with an nginx server for read/write access. My index has about 750k small-to-medium documents. I have a pretty continuous stream of minimal writes (mainly updates) to the content. The speed/consistency I get from search is fine with me, but I have some sporadic timeout issues with multi-get (/_mget).
On some pages in my app, our server will request a multi-get of a dozen to a few thousand documents (this usually takes less than 1-2 seconds). The requests that fail do so with a 30,000 millisecond timeout from the nginx server. I am assuming this happens because the index was temporarily locked for writing/optimizing purposes. Does anyone have any ideas on what I can do here?
A temporary solution would be to lower the timeout and return a user-friendly message saying the documents couldn't be retrieved (however, users would still have to wait ~10 seconds to see an error message).
Some of my other thoughts were to give read priority over writes. Anytime someone is trying to read a part of the index, don't allow any writes/locks to that section. I don't think this would be scalable and it may not even be possible?
Finally, I was thinking I could have a read-only alias and a write-only alias. I can figure out how to set this up through the documentation, but I am not sure if it will actually work like I expect it to (and I'm not sure how I can reliably test it in a local environment). If I set up aliases like this, would the read-only alias still have moments where the index was locked due to information being written through the write-only alias?
I'm sure someone else has come across this before; what is the typical solution to make sure a user can always read data from the index with a higher priority than writes? I would consider increasing our server power if required. Currently we have 2 m2.xlarge EC2 instances, one primary and one replica, each with 4 shards.
An example dump of cURL info from a failed request (with an error of Operation timed out after 30000 milliseconds with 0 bytes received):
{
"url":"127.0.0.1:9200\/_mget",
"content_type":null,
"http_code":100,
"header_size":25,
"request_size":221,
"filetime":-1,
"ssl_verify_result":0,
"redirect_count":0,
"total_time":30.391506,
"namelookup_time":7.5e-5,
"connect_time":0.0593,
"pretransfer_time":0.059303,
"size_upload":167002,
"size_download":0,
"speed_download":0,
"speed_upload":5495,
"download_content_length":-1,
"upload_content_length":167002,
"starttransfer_time":0.119166,
"redirect_time":0,
"certinfo":[
],
"primary_ip":"127.0.0.1",
"redirect_url":""
}
After more monitoring using the Paramedic plugin, I noticed that I would get timeouts when my CPU would hit ~80-98% (no obvious spikes in indexing/searching traffic). I finally stumbled across a helpful thread on the Elasticsearch forum. It seems this happens when the index is doing a refresh and large merges are occurring.
Merges can be throttled at the cluster or index level, and I've updated indices.store.throttle.max_bytes_per_sec from the default 20mb down to 5mb. This can be done at runtime with the cluster update settings API.
PUT /_cluster/settings HTTP/1.1
Host: 127.0.0.1:9200
{
"persistent" : {
"indices.store.throttle.max_bytes_per_sec" : "5mb"
}
}
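Equivalently, with curl against the same endpoint:

curl -XPUT 'http://127.0.0.1:9200/_cluster/settings' -d '{
  "persistent" : {
    "indices.store.throttle.max_bytes_per_sec" : "5mb"
  }
}'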
So far Paramedic is showing a decrease in CPU usage, from an average of ~5-25% down to an average of ~1-5%. Hopefully this will help me avoid the 90%+ spikes that were locking up my queries before; I'll report back by selecting this answer if I don't have any more problems.
As a side note, I guess I could have opted for more balanced EC2 instances (rather than memory-optimized ones). I think I'm happy with my current choice, but my next purchase will also take CPU more into account.