Can anybody let me know the command to obtain the current maximum object size in Memcached - linux

Can anybody let me know the command to obtain the current maximum size of an object that can be stored in Memcached?
In the documentation I have not come across anything that reports the actual size of my Memcached capacity.

It's available in the output of the stats settings command.
It wasn't clear whether you wanted the maximum size of a single item or the total available memory, but both are available.
Maximum item size
(at least in reasonably recent releases; I checked with 1.4.13). Using telnet:
telnet <hostname> <port>
> stats settings
(...)
item_size_max 1048576
(...)
Maximum capacity in bytes
Available both in stats and stats settings:
telnet <hostname> <port>
> stats settings
(...)
maxbytes 10737418240
(...)
> stats
(...)
limit_maxbytes 10737418240
(...)
Remaining capacity
As far as I know it's not directly available; you have to compute it from the stats command output:
> stats
(...)
bytes 5349380740
(...)
limit_maxbytes 10737418240
(...)
Here I have limit_maxbytes - bytes = 5388037500 bytes remaining.
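If you want to automate this, here is a minimal sketch in Python (host and port are assumptions) that sends the stats command over the text protocol and computes the remaining capacity as limit_maxbytes - bytes:
import socket

def memcached_stats(host="127.0.0.1", port=11211):
    # Send "stats" over the plain text protocol and read until the END marker.
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(b"stats\r\n")
        data = b""
        while not data.endswith(b"END\r\n"):
            chunk = s.recv(4096)
            if not chunk:
                break
            data += chunk
    # Each line looks like "STAT <name> <value>".
    stats = {}
    for line in data.decode().splitlines():
        parts = line.split()
        if len(parts) == 3 and parts[0] == "STAT":
            stats[parts[1]] = parts[2]
    return stats

stats = memcached_stats()
remaining = int(stats["limit_maxbytes"]) - int(stats["bytes"])
print("remaining capacity: %d bytes" % remaining)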
Documentation
This is confirmed by the documentation shipped with sources, in doc/protocol.txt:
| maxbytes | size_t | Maximum number of bytes allowed in this cache |
| item_size_max | size_t | maximum item size |
Side note
Note that this is only the memory consumption and memory limit from memcached's point of view. The limit is the one given on the command line with the -m option; it won't tell you whether the physical memory is actually sufficient to hold that much data.

Related

How is RealMemory (slurmd -C) calculated? Why does it differ from MemTotal?

The SLURM documentation says
RealMemory
Size of real memory on the node in megabytes (e.g. "2048"). The default value is 1. Lowering RealMemory with the goal of setting aside some amount for the OS and not available for job allocations will not work as intended if Memory is not set as a consumable resource in SelectTypeParameters. So one of the *_Memory options need to be enabled for that goal to be accomplished. Also see MemSpecLimit.
In several places I have seen the recommendation to set this value equal to the value reported by slurmd -C, for example: How to set RealMemory in slurm?
However, I am confused about how this value is calculated and relates to other information, as for example MemTotal from /proc/meminfo.
RealMemory is in MB:
slurmd -C
RealMemory=193086
MemTotal is in kB:
cat /proc/meminfo
MemTotal: 197721028 kB
Just divide MemTotal by 1024 (integer division):
197721028 / 1024 = 193086
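The same conversion can be scripted; a minimal sketch in Python that derives the slurmd -C value from /proc/meminfo:
# Read MemTotal (in kB) and convert it to the MB figure slurmd -C reports.
with open("/proc/meminfo") as f:
    for line in f:
        if line.startswith("MemTotal:"):
            mem_total_kb = int(line.split()[1])
            break
print("RealMemory=%d" % (mem_total_kb // 1024))  # integer division: 197721028 // 1024 = 193086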

How to increase number of child processes

cat /sys/fs/cgroup/pids/parent/pids.max returns "max".
I created it following https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v1/pids.html
Consider this Python code demonstrating the problem:
from os import fork, getpid
from time import sleep

i = 0
print("pid = %d " % getpid())
with open("/proc/%d/limits" % getpid(), "r") as f:
    print(f.read())
try:
    while fork():
        i += 1
except BaseException as e:
    print(i)
    print(e)
sleep(10)
print("done")
exit(1)
My output:
pid = 18091
Limit Soft Limit Hard Limit Units
Max cpu time unlimited unlimited seconds
Max file size unlimited unlimited bytes
Max data size unlimited unlimited bytes
Max stack size 8388608 unlimited bytes
Max core file size 0 unlimited bytes
Max resident set unlimited unlimited bytes
Max processes 999999 999999 processes
Max open files 1024 1048576 files
Max locked memory 67108864 67108864 bytes
Max address space unlimited unlimited bytes
Max file locks unlimited unlimited locks
Max pending signals 31412 31412 signals
Max msgqueue size 819200 819200 bytes
Max nice priority 0 0
Max realtime priority 0 0
Max realtime timeout unlimited unlimited us
10227
[Errno 11] Resource temporarily unavailable
done
In Linux you have the pid_max limit:
$ cat /proc/sys/kernel/pid_max
32768
However, if your Linux is running systemd you might hit the per-user slice limits:
for root
$ cat /sys/fs/cgroup/pids/user.slice/user-0.slice/pids.max
for currently logged in user:
$ cat /sys/fs/cgroup/pids/user.slice/user-$(id -u).slice/pids.max
10813
The systemd equivalent would be:
$ systemd-analyze dump | sed -n "/-> Unit user-$(id -u).slice:/,/-> Unit /p"| grep -e "TasksMax="
TasksMax=10813
According to man logind.conf
UserTasksMax=
Sets the maximum number of OS tasks each user may run concurrently. This controls the TasksMax= setting of the per-user slice unit, see systemd.resource-control(5) for details. If assigned the special value
"infinity", no tasks limit is applied. Defaults to 33%, which equals 10813 with the kernel's defaults on the host, but might be smaller in OS containers.
This limit is defined in /etc/systemd/logind.conf, even though the line might be commented out; the default of 33% of the kernel's pid_max (32768) yields 10813:
#UserTasksMax=33%
When you modify the UserTasksMax limit or increase sysctl kernel.pid_max you'll have to restart the systemd-logind service:
service systemd-logind restart
On a 64-bit system you should be able to increase the value up to 2^22, i.e. 4194304:
sysctl kernel.pid_max=4194304
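To see which of these limits is the one actually capping your fork loop, here is a small sketch (the cgroup v1 path is an assumption, matching the layout used above) that prints the candidates:
import os, resource

def read(path):
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return "n/a"

print("kernel.pid_max      :", read("/proc/sys/kernel/pid_max"))
print("user slice pids.max :", read("/sys/fs/cgroup/pids/user.slice/user-%d.slice/pids.max" % os.getuid()))
print("RLIMIT_NPROC (soft) :", resource.getrlimit(resource.RLIMIT_NPROC)[0])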

TCP receiving window size higher than net.core.rmem_max

I am running iperf measurements between two servers, connected through a 10 Gbit link. I am trying to correlate the maximum window size that I observe with the system configuration parameters.
In particular, I have observed that the maximum window size is 3 MiB. However, I cannot find the corresponding values in the system files.
By running sysctl -a I get the following values:
net.ipv4.tcp_rmem = 4096 87380 6291456
net.core.rmem_max = 212992
The first value tells us that the maximum receiver window size is 6 MiB. However, TCP tends to allocate twice the requested size, so the maximum receiver window size should be 3 MiB, exactly as I have measured it. From man tcp:
Note that TCP actually allocates twice the size of the buffer requested in the setsockopt(2) call, and so a succeeding getsockopt(2) call will not return the same size of buffer as requested in the setsockopt(2) call. TCP uses the extra space for administrative purposes and internal kernel structures, and the /proc file values reflect the larger sizes compared to the actual TCP windows.
However, the second value, net.core.rmem_max, states that the maximum receiver window size cannot be more than 208 KiB. And this is supposed to be the hard limit, according to man tcp:
tcp_rmem
max: the maximum size of the receive buffer used by each TCP socket. This value does not override the global net.core.rmem_max. This is not used to limit the size of the receive buffer declared using SO_RCVBUF on a socket.
So, how come I observe a maximum window size larger than the one specified in net.core.rmem_max?
NB: I have also calculated the bandwidth-delay product: window_size = bandwidth × RTT, which is about 3 MiB (10 Gbit/s × 2 ms RTT), thus verifying my traffic capture.
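For reference, the bandwidth-delay product above works out as follows (a quick sketch using the figures assumed in the question):
bandwidth_bps = 10e9          # 10 Gbit/s link
rtt_s = 0.002                 # 2 ms round-trip time
bdp_bytes = bandwidth_bps * rtt_s / 8
print(bdp_bytes / 2**20)      # ~2.4 MiB, i.e. roughly the 3 MiB window observed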
A quick search turned up:
https://github.com/torvalds/linux/blob/4e5448a31d73d0e944b7adb9049438a09bc332cb/net/ipv4/tcp_output.c
in void tcp_select_initial_window()
if (wscale_ok) {
	/* Set window scaling on max possible window
	 * See RFC1323 for an explanation of the limit to 14
	 */
	space = max_t(u32, sysctl_tcp_rmem[2], sysctl_rmem_max);
	space = min_t(u32, space, *window_clamp);
	while (space > 65535 && (*rcv_wscale) < 14) {
		space >>= 1;
		(*rcv_wscale)++;
	}
}
max_t takes the higher value of the arguments. So the bigger value takes precedence here.
One other reference to sysctl_rmem_max is made where it is used to limit the argument to SO_RCVBUF (in net/core/sock.c).
All other tcp code refers to sysctl_tcp_rmem only.
So without looking deeper into the code you can conclude that a bigger net.ipv4.tcp_rmem will override net.core.rmem_max in all cases except when setting SO_RCVBUF (whose check can be bypassed using SO_RCVBUFFORCE).
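To see how that loop behaves with the values from the question, here is a minimal Python sketch mirroring the rcv_wscale selection (the names are paraphrased, not the kernel's):
def select_initial_wscale(tcp_rmem_max, rmem_max, window_clamp=2**30):
    # space starts from the larger of tcp_rmem[2] and rmem_max (max_t above) ...
    space = max(tcp_rmem_max, rmem_max)
    space = min(space, window_clamp)
    # ... and is halved until it fits into the 16-bit window field,
    # counting the window-scale shift (capped at 14 per RFC 1323).
    wscale = 0
    while space > 65535 and wscale < 14:
        space >>= 1
        wscale += 1
    return wscale

print(select_initial_wscale(6291456, 212992))  # -> 7, driven by tcp_rmem, not rmem_max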
net.ipv4.tcp_rmem takes precedence over net.core.rmem_max according to https://serverfault.com/questions/734920/difference-between-net-core-rmem-max-and-net-ipv4-tcp-rmem:
It seems that the tcp-setting will take precedence over the common max setting
But I agree with what you say, this seems to conflict with what's written in man tcp, and I can reproduce your findings. Maybe the documentation is wrong? Please find out and comment!

Setting socket receive buffer size gets truncated to 244KB

I'm trying to increase the size of my socket receive buffer using setsockopt() on Linux. I can set it successfully to any value below 244KB; any value above 244KB gets truncated to 244KB.
There appears to be some sort of system limit in place, but I can't figure out where it is coming from, as it doesn't correspond to the values below:
$ cat /proc/sys/net/ipv4/tcp_rmem
4096 87380 4194304
$ cat /proc/sys/net/ipv4/tcp_wmem
4096 16384 4194304
$ cat /proc/sys/net/core/rmem_default
124928
$ cat /proc/sys/net/core/wmem_default
124928
The default value is 87380 as expected, but I can't increase it to 4194304; it gets limited to 244KB. Interestingly, that value is 2× rmem_default. Do I need to change that?
Thanks
From man page for TCP:
The maximum sizes for socket buffers declared via the SO_SNDBUF and SO_RCVBUF mechanisms are limited by the global net.core.rmem_max and net.core.wmem_max sysctls. Note that TCP actually allocates twice the size of the buffer requested in the setsockopt(2) call, and so a succeeding getsockopt(2) call will not return the same size of buffer as requested in the setsockopt(2) call.
So what you pass for SO_SNDBUF/SO_RCVBUF is first capped at net.core.wmem_max/net.core.rmem_max and then doubled during allocation, which is why the buffer tops out at roughly twice net.core.rmem_max (about 244KB here) instead of the 4194304 you asked for. Raise net.core.rmem_max if you need a larger receive buffer.
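A minimal Python sketch that makes the clamping visible (any TCP socket will do; the numbers depend on your sysctls):
import socket

REQUESTED = 4 * 1024 * 1024  # ask for 4 MiB

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, REQUESTED)
granted = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
# granted is roughly min(REQUESTED, net.core.rmem_max) * 2, not REQUESTED itself
print("requested %d, got %d" % (REQUESTED, granted))
s.close()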

How to find the socket buffer size on Linux

What's the default socket buffer size on Linux? Is there any command to see it?
If you want to see your buffer sizes in the terminal, you can take a look at:
/proc/sys/net/ipv4/tcp_rmem (for read)
/proc/sys/net/ipv4/tcp_wmem (for write)
They contain three numbers, which are minimum, default and maximum memory size values (in byte), respectively.
To get the buffer size in a C/C++ program, the flow is the following:
#include <sys/socket.h>
#include <netinet/in.h>

int n;
socklen_t m = sizeof(n);
int fdsocket;
fdsocket = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP); /* example: a UDP socket */
getsockopt(fdsocket, SOL_SOCKET, SO_RCVBUF, (void *)&n, &m);
/* now the variable n holds the receive buffer size in bytes */
Whilst, as has been pointed out, it is possible to see the current default socket buffer sizes in /proc, it is also possible to check them using sysctl (Note: Whilst the name includes ipv4 these sizes also apply to ipv6 sockets - the ipv6 tcp_v6_init_sock() code just calls the ipv4 tcp_init_sock() function):
sysctl net.ipv4.tcp_rmem
sysctl net.ipv4.tcp_wmem
However, the default socket buffers are just set when the sock is initialised but the kernel then dynamically sizes them (unless set using setsockopt() with SO_SNDBUF). The actual size of the buffers for currently open sockets may be inspected using the ss command (part of the iproute/iproute2 package), which can also provide a bunch more info on sockets like congestion control parameter etc. E.g. To list the currently open TCP (t option) sockets and associated memory (m) information:
ss -tm
Here's some example output:
State Recv-Q Send-Q Local Address:Port Peer Address:Port
ESTAB 0 0 192.168.56.102:ssh 192.168.56.1:56328
skmem:(r0,rb369280,t0,tb87040,f0,w0,o0,bl0,d0)
Here's a brief explanation of skmem (socket memory) - for more info you'll need to look at the kernel sources (i.e. sock.h):
r:sk_rmem_alloc
rb:sk_rcvbuf # current receive buffer size
t:sk_wmem_alloc
tb:sk_sndbuf # current transmit buffer size
f:sk_forward_alloc
w:sk_wmem_queued # persistent transmit queue size
o:sk_omem_alloc
bl:sk_backlog
d:sk_drops
I'm still trying to piece together the details, but to add to the answers already given, these are some of the important commands:
cat /proc/sys/net/ipv4/udp_mem
cat /proc/sys/net/core/rmem_max
cat /proc/sys/net/ipv4/tcp_rmem
cat /proc/sys/net/ipv4/tcp_wmem
ss -m # see `man ss`
References & help pages:
Man pages
man 7 socket
man 7 udp
man 7 tcp
man ss
https://www.linux.org/threads/how-to-calculate-tcp-socket-memory-usage.32059/
The atomic size is 4096 bytes and the max size is 65536 bytes; sendfile() uses a pipe of 16 buffers, each 4096 bytes in size.
cmd: ioctl(fd, FIONREAD, &buff_size)
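As an illustration of that ioctl, a minimal Python sketch (using a pipe here, though it works the same way on sockets) showing that FIONREAD reports the number of bytes currently queued for reading on the descriptor:
import array, fcntl, os, termios

r, w = os.pipe()
os.write(w, b"hello")
buf = array.array("i", [0])
fcntl.ioctl(r, termios.FIONREAD, buf)  # fills buf[0] with the readable byte count
print(buf[0])  # -> 5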
