Limit for opening files - Linux

I am using the Linux ulimit command to set limits for opening files. If I use ulimit -n 4, I can open just 1 file. If I use ulimit -n 5, I can open 2 files. So the formula seems to be ulimit -n = number of files + 3. The question is: why that difference of +3? What do those 3 represent? Maybe one for the file, one for the executable file, and one for...?

Each process starts with its first three file descriptors already open: stdin (0), stdout (1) and stderr (2).
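A quick way to see these is to list the shell's own file descriptor directory (a minimal illustration; the exact entries vary by shell):
# List the open file descriptors of the current shell; $$ expands to its PID.
ls -l /proc/$$/fd
# Descriptors 0, 1 and 2 (stdin, stdout, stderr) are already open, which is
# why a limit of N leaves roughly N - 3 descriptors for files you open yourself.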

Related

What is the most correct way to set limits on the number of open files on Linux?

There are three ways to set limits on the number of open files and sockets on Linux:
echo "100000" > /proc/sys/fs/file-max
ulimit -n 100000
sysctl -w fs.file-max=100000
What is the difference?
What is the most correct way to set limits on the number of open files on Linux?
sysctl is an interface for writing to /proc/sys, so sysctl -w fs.file-max=100000 does the same thing as echoing directly to /proc/sys/fs/file-max. Both set a system-wide limit, whereas ulimit -n only applies to the current shell and the processes it starts.
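For settings that survive a reboot, the usual approach (a sketch; 100000 is just the value from the question) is to put the system-wide limit in /etc/sysctl.conf and the per-process limit in /etc/security/limits.conf:
# /etc/sysctl.conf -- system-wide cap, loaded at boot or with `sysctl -p`:
fs.file-max = 100000
# /etc/security/limits.conf -- per-process limit applied at login via PAM:
*    soft    nofile    100000
*    hard    nofile    100000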

Why are my ulimit settings ignored in the shell?

I have to execute a .jar file, and I need to set a ulimit before this execution, so I wrote a shell script:
#!/bin/sh
ulimit -S -c unlimited
/usr/java/jre1.8.0_91/bin/java -jar /home/update.jar
But the ulimit seems to be ignored, because I get this error:
java.lang.InternalError: java.io.FileNotFoundException: /usr/java/jre1.8.0_91/lib/ext/localedata.jar (Too many open files)
If you want to change the maximum number of open files, you need to use ulimit -n.
Example:
ulimit -n 8192
The -c option changes the maximum core file size (core dumps), not the maximum number of open files.
You need to apply the ulimit in the shell that will launch the Java application.
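Putting that together, the script from the question would look something like this (a sketch; 8192 is just the example value above, and the requested soft limit cannot exceed your hard limit):
#!/bin/sh
# Raise the open-file limit for this shell; the java process inherits it.
ulimit -n 8192
/usr/java/jre1.8.0_91/bin/java -jar /home/update.jar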

Linux File descriptors

I have a Java program that, after about 2 weeks of running, will on average become stuck and produce the following error:
Caused by: java.net.SocketException: Too many open files
at sun.nio.ch.Net.socket0(Native Method)
at sun.nio.ch.Net.socket(Net.java:415)
at sun.nio.ch.Net.socket(Net.java:408)
at sun.nio.ch.SocketChannelImpl.<init>(SocketChannelImpl.java:105)
That hints to me that many sockets are opened but never closed.
Before diving into programmatic instrumentation I started to inspect what information I could draw from Linux itself. I am using Red Hat.
A few questions came up:
Why do the following commands not give the same output?
See
[ec2-user@ip-172-22-28-102 ~]$ sudo ls /proc/32085/fd | wc -l
592
[ec2-user@ip-172-22-28-102 ~]$ sudo lsof -a -p 32085 | wc -l
655
Is there a way to know from the proc stat info which thread created which file descriptor?
It seems like there is not, because if I do the following, I get the same information:
[ec2-user@ip-172-22-28-102 ~]$ sudo ls /proc/32085/task/22386/fd | wc -l
592
[ec2-user@ip-172-22-28-102 ~]$ sudo ls /proc/32085/fd | wc -l
592
Same if I go to the thread directly under /proc/.
Thanks
Is there a way to know from the proc stat info which thread created which file descriptor?
I am pretty sure the answer here is "no". File descriptors are opened by processes, not threads (and will be visible to all threads spawned by the same process).
Why do the following commands not give the same output?
First, the -a argument to lsof appears to be a no-op in this case. Specifically, the man page says that it "causes list selection options to be ANDed, as described above". So you are really just running:
sudo lsof -p 32085
And that will print things other than open file descriptors (such as memory-mapped files, current working directory, etc), while /proc/<PID>/fd contains only open file descriptors. So you're getting different results because you're asking for different information.
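If you want lsof's count to line up with /proc/<PID>/fd, one option (a sketch using the PID from the question) is to keep only the rows whose FD column is an actual descriptor number:
# The 4th column of lsof output is the FD field; numeric entries such as "3u"
# or "45r" are real descriptors, while "cwd", "txt", "mem" etc. are not.
sudo lsof -p 32085 | awk '$4 ~ /^[0-9]/' | wc -l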
The most likely reason you receive that message is that you have opened files and not closed them after use: a file descriptor leak in your Java application. Java programmers normally don't worry about memory because the garbage collector copes with unreferenced objects, but if you keep file descriptors in some data structure without closing them, or simply don't close files after using them, you can reach the maximum number allowed for a process (the limit is per process and can be changed with the ulimit shell command).
But if your problem is a file descriptor leak, pushing up the ulimit will only delay the problem for a while. File descriptors must be closed, or you'll run into trouble.
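To tell a genuine leak apart from a limit that is simply too low, it can help to watch the descriptor count over time; a minimal sketch, assuming the Java process has PID 32085 as in the question:
# Print the open-descriptor count once a minute; a number that only grows and
# never comes back down points to a leak rather than a limit that is too low.
while true; do
    echo "$(date '+%F %T')  $(sudo ls /proc/32085/fd | wc -l)"
    sleep 60
done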
I just ran across this difference today; the explanation is that lsof takes into account more types of files, like memory-mapped objects, run-time libraries, etc.

How to close an open (deleted) file descriptor from the Linux shell

If I use
lsof -n | grep deleted
I get a long list of php5-fpm entries.
Two sample entries:
(deleted)/dev/zero (stat: No such file or directory)
(deleted)/tmp/.ZendSem.JQTejx
1) How can I close them within an OpenVZ container?
2) Is this the result of forgetting to close a MySQL handle within a PHP script?
df -h
shows 41% used for /var/lib/vz/root/102/var/www/clients/client1/web1/log,
but within the log directory there are only a few MB.
So how do I recover the lost disk space?
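For context on what is being asked: the space taken by a deleted file is only released once every descriptor referring to it is closed. A generic sketch for locating and releasing such a file (the PID and descriptor number here are made up for illustration):
# Show deleted-but-open files together with the owning process and FD number:
sudo lsof -n | grep deleted
# If, say, PID 4321 still holds a deleted log open on descriptor 7, the space
# comes back either by restarting that service or by truncating the file
# through its /proc entry without killing the process:
sudo sh -c ': > /proc/4321/fd/7'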

How to detect file descriptor leaks in Node

I suspect that I have a file descriptor leak in my Node application, but I'm not sure how to confirm this. Is there a simple way to detect file descriptor leaks in Node?
Track open files
On Linux you can use the lsof command to list the open files for a process.
Get the PIDs of the process(es) you want to track:
ps aux | grep node
Let's say the PIDs are 1111 and 1234; list the open files:
lsof -p 1111,1234
You can save that list and compare it against a later one, taken when you expect your app to have released the descriptors.
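A minimal sketch of that save-and-compare step, assuming a single node process with PID 1111 (the file names and the delay are arbitrary):
# Snapshot the open files now and again later, then diff the two lists;
# entries that appear only in the second snapshot were never released.
lsof -p 1111 > fds-before.txt
sleep 300   # let the app do whatever you suspect is leaking descriptors
lsof -p 1111 > fds-after.txt
diff fds-before.txt fds-after.txt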
Make it easier to reproduce
If it's taking a while to confirm this (because it takes a while to run out of descriptors), you can try lowering the limit on available file descriptors using ulimit:
ulimit -n 500  # or whatever number makes sense for you
# now start your node app in this terminal
