fstat - total file descriptor count on freebsd system? - freebsd

How to get the current count of file descriptors in the system?
I know how to get the maximum.
% sysctl kern.maxfiles
kern.maxfiles: 8232
Ref:
http://www.freebsd.org/cgi/man.cgi?query=fstat&apropos=0&sektion=0&manpath=FreeBSD+9.0-RELEASE&arch=default&format=html

Are you looking for kern.openfiles?
[ghoti@pc ~]$ sysctl -ad | grep 'kern.*files:'
kern.maxfiles: Maximum number of files
kern.openfiles: System-wide number of open files
[ghoti@pc ~]$
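If so, you can query it just like kern.maxfiles (the number shown here is only illustrative):
% sysctl kern.openfiles
kern.openfiles: 1274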

cat /proc/sys/fs/file-nr
On Linux, the three columns are: total file descriptors allocated since boot, free allocated file descriptors, and the maximum number of open file descriptors.
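As a rough sketch (Linux-specific, since it relies on /proc), the number of descriptors actually in use is the first column minus the second:
awk '{ print "in use:", $1 - $2, " max:", $3 }' /proc/sys/fs/file-nr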

Related

Error: EMFILE: too many open files, watch, unless I use sudo

Description
Recently I've run into a problem. I am not able to run yarn start in the element-web directory; I get these errors. Originally I thought it had something to do with element-web itself, so I created an issue. Some time after that I tried to run wintersmith preview in the bibviz directory and got the same errors. This was weird, so I tried to create an Angular project and run ng serve, and got the errors again. I headed back to the issue to close it, as it wasn't an element-web issue, and found that another issue had already been created with the same problem. It had been closed by turt2live with the comment that it looks like you've run out of memory on your system. Based on this I tried to turn off most of the programs running in the background, and then all the commands worked.
I am sure that ng serve used to work in the past.
My PC has 16 GB of RAM and the commands already fail when I am on 7/16 GB. I can't see any memory spikes when running the commands. Running the commands with sudo also completely eliminates the problem. This doesn't make any sense to me.
Research led me to ulimits, but they seem to have no effect. I have also installed watchman, with no effect.
Can someone tell me what I am missing?
Thank you in advance!
Info
I am on Debian 11 Bullseye. This is the output of a few commands that could be useful.
As a regular user:
> uname -a
Linux Simon-s-PC 5.8.0-3-amd64 #1 SMP Debian 5.8.14-1 (2020-10-10) x86_64 GNU/Linux
> sudo sysctl fs.inotify.max_user_watches
fs.inotify.max_user_watches = 524288
> ulimit -a
-t: cpu time (seconds) unlimited
-f: file size (blocks) unlimited
-d: data seg size (kbytes) unlimited
-s: stack size (kbytes) 8192
-c: core file size (blocks) 0
-m: resident set size (kbytes) unlimited
-u: processes 46482
-n: file descriptors 8192
-l: locked-in-memory size (kbytes) unlimited
-v: address space (kbytes) unlimited
-x: file locks unlimited
-i: pending signals 63664
-q: bytes in POSIX msg queues 819200
-e: max nice 0
-r: max rt priority 95
-N 15: unlimited
> yarn --version
1.22.5
With sudo su:
> sysctl fs.inotify.max_user_watches
fs.inotify.max_user_watches = 524288
> ulimit -a
-t: cpu time (seconds) unlimited
-f: file size (blocks) unlimited
-d: data seg size (kbytes) unlimited
-s: stack size (kbytes) 8192
-c: core file size (blocks) 0
-m: resident set size (kbytes) unlimited
-u: processes 63664
-n: file descriptors 1024
-l: locked-in-memory size (kbytes) 2043392
-v: address space (kbytes) unlimited
-x: file locks unlimited
-i: pending signals 63664
-q: bytes in POSIX msg queues 819200
-e: max nice 0
-r: max rt priority 0
-N 15: unlimited
I think I've found a solution:
Set limits in /etc/sysctl.conf by adding:
fs.inotify.max_user_watches=524288
fs.inotify.max_user_instances=512
Open a new terminal or reload sysctl.conf variables with
sudo sysctl --system
Run yarn start
Everything should work fine now, hopefully. If it doesn't work try setting the limits higher.
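If you want to confirm that inotify exhaustion (rather than memory) was really the cause, a rough sketch that counts how many inotify instances each process currently holds (it relies on the Linux /proc layout) is:
find /proc/*/fd -lname 'anon_inode:inotify' 2>/dev/null \
  | cut -d/ -f3 | sort | uniq -c | sort -rn | head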

how to determine ulimits - linux

How to determine ulimits (Linux)?
I'm using Ubuntu 16.04, kernel version 4.4.0-21-generic.
I set nofile to the maximum for root (in /etc/security/limits.conf); the line is:
* hard nofile NUMBER
According to the file /proc/sys/fs/file-max, the value is 32854728.
When I run the command ulimit -a, I find that the limit is 1024.
I tested it and found that the highest value of max open files that actually takes effect is 1048575; if I set it to a higher value, the limit falls back to 1024.
How do I determine the ulimit for open files, and why can't I set it higher than 1048575?
To determine the maximum number of file handles for the entire system, run:
cat /proc/sys/fs/file-max
To determine the current usage of file handles, run:
$ cat /proc/sys/fs/file-nr
1154 133 8192
Left to right, the three values are:
1154: total allocated file descriptors (the number of file descriptors allocated since boot)
133: total free allocated file descriptors
8192: maximum open file descriptors
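As for the 1048575 ceiling: on Linux the hard nofile limit cannot normally be raised above the kernel setting fs.nr_open, which defaults to 1048576 on most systems, so that is the likely explanation. You can inspect it (and, as root, raise it; the value below is just an illustrative choice):
cat /proc/sys/fs/nr_open
sudo sysctl -w fs.nr_open=2097152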

Setting system memory for a process in linux

I need to limit the memory available to a process to 8 GB. So, do I need to set
ulimit -S -m 8388608
or do I need to set:
ulimit -S -v 8388608
I am a non-root user, so I can change only the soft limit. How can I raise the memory limit back to unlimited? I tried
ulimit -S -v ulimited
but it gives me:
bash: ulimit: ulimited: invalid number
'invalid number' happens if there is a wrong line ending, e.g. CR LF instead of Unix LF; here the argument is also misspelled (ulimited instead of unlimited), which triggers the same error.
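Once that is corrected, lowering and then restoring the soft virtual-memory limit in the same shell session would look like this (the soft limit can be raised again because the hard limit is still unlimited):
ulimit -S -v 8388608    # cap address space at 8 GiB (value is in KiB)
ulimit -S -v unlimited  # restore the soft limit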

How to check a disk for partitions for use in a script in Linux?

I'm scripting something in Bash for Linux systems. How would I check a disk for partitions in a robust manner?
I could use grep, awk, or sed to parse the output from fdisk, sfdisk, etc., but this doesn't seem to be an exact science.
I could also check if there are partitions in /dev, but it is also possible that the partitions exist and haven't been probed yet (via partprobe, as an example).
What would you recommend?
I think I figured out a reliable way. I accidentally learned some more features of partprobe while reading the man page:
-d Don’t update the kernel.
-s Show a summary of devices and their partitions.
Used together, I can scan a disk for partitions without updating the kernel and get a reliable output to parse. It's still parsing text, but at least the output isn't as "human-oriented" as fdisk or sfdisk. This also is information as read from the disk and doesn't rely on the kernel being up-to-date on the partition status for this disk.
Take a look:
On a disk with no partition table:
# partprobe -d -s /dev/sdb
(no output)
On a disk with a partition table but no partitions:
# partprobe -d -s /dev/sdb
/dev/sdb: msdos partitions
On a disk with a partition table and one partition:
# partprobe -d -s /dev/sdb
/dev/sdb: msdos partitions 1
On a disk with a partition table and multiple partitions:
# partprobe -d -s /dev/sda
/dev/sda: msdos partitions 1 2 3 4 <5 6 7>
It is important to note that every exit status was 0 regardless of an existing partition table or partitions. In addition, I also noticed that the options cannot be grouped together (partprobe -d -s /dev/sdb works while partprobe -ds /dev/sdb does not).
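For use in a script, a minimal sketch along these lines (the device path is just an example) turns that summary line into a partition count:
#!/bin/bash
# Count partitions on a disk from partprobe's summary output,
# without updating the kernel's view of the partition table.
disk=${1:-/dev/sdb}                     # example device
parts=$(partprobe -d -s "$disk" | sed 's/^.*partitions//')
echo "$disk: $(echo "$parts" | wc -w) partition(s)"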
Another option is to run:
lsblk
See https://unix.stackexchange.com/a/108951
You could also use:
parted /dev/sda print 1 &> /dev/null; echo $?
If the first partition exists, this returns 0 (true); otherwise it returns non-zero (false).
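Wrapped in a conditional for use in a script, that could look like:
if parted /dev/sda print 1 &> /dev/null; then
    echo "first partition exists"
else
    echo "no first partition (or no partition table)"
fi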

Too many open files error on Ubuntu 8.04

mysqldump: Couldn't execute 'show fields from `tablename`': Out of resources when opening file './databasename/tablename#P#p125.MYD' (Errcode: 24) (23)
On checking error 24 in the shell, it says:
>>perror 24
OS error code 24: Too many open files
How do I solve this?
First, to identify the limits for the relevant user or group, do the following:
root@ubuntu:~# sudo -u mysql bash
mysql@ubuntu:~$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 71680
max locked memory (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 71680
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
mysql@ubuntu:~$
The important line is:
open files (-n) 1024
As you can see, your operating system vendor ships this version with the basic Linux configuration - 1024 files per process.
This is obviously not enough for a busy MySQL installation.
Now, to fix this you have to modify /etc/security/limits.conf, adding the following lines:
mysql soft nofile 24000
mysql hard nofile 32000
Some flavors of Linux also require additional configuration to get this to stick to daemon processes versus login sessions. In Ubuntu 10.04, for example, you need to also set the pam session limits by adding the following line to /etc/pam.d/common-session:
session required pam_limits.so
Quite an old question but here are my two cents.
What you could be experiencing is that the MySQL engine didn't set its "open-files-limit" variable right.
You can see how many files you are allowing MySQL to open with:
mysql> SHOW VARIABLES;
It is probably set to 1024 even if you have already set the OS limits to higher values.
You can use the option --open-files-limit=XXXXX in the command line for mysqld.
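For example (8192 is just an illustrative value), you could pass it on the server command line and then verify what the server actually picked up:
mysqld --open-files-limit=8192
Then, from a client:
mysql> SHOW VARIABLES LIKE 'open_files_limit';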
Cheers
Add --single-transaction to your mysqldump command.
It is also possible that some code accessing the tables didn't close them properly, and over a period of time the open-files limit was reached.
Please refer to http://dev.mysql.com/doc/refman/5.0/en/table-cache.html for a possible reason as well.
Restarting mysql should cause this problem to go away (although it might happen again unless the underlying problem is fixed).
You can increase your OS limits by editing /etc/security/limits.conf.
You can also install "lsof" (LiSt Open Files) command to see Files <-> Processes relation.
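For instance, a rough way to count the files the MySQL server currently has open (assuming the daemon runs as user mysql and there is a single mysqld process):
lsof -u mysql | wc -l
# or, for the mysqld process specifically:
lsof -p "$(pidof mysqld)" | wc -l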
There is no need to configure PAM, I think. On my system (Debian 7.2 with Percona 5.5.31-rel30.3-520.squeeze) I have:
Before my.cnf changes:
# cat /proc/12345/limits | grep "open files"
Max open files 1186 1186 files
After adding "open_files_limit = 4096" to my.cnf and restarting mysqld, I got:
# cat /proc/23456/limits | grep "open files"
Max open files 4096 4096 files
12345 and 23456 are the mysqld process PIDs, of course.
SHOW VARIABLES LIKE 'open_files_limit' shows 4096 now.
All looks OK, while "ulimit" shows no changes:
# su - mysql -c bash
# ulimit -n
1024
There is no guarantee that "24" is an OS-level error number, so don't assume that this means that too many file handles are open. It could be some type of internal error code used within mysql itself. I'd suggest asking on the mysql mailing lists about this.
