How do I increase the limit on the number of open files in NixOS?

I'm testing out IPFS on NixOS and I'm seeing errors due to "too many open files" in the journalctl -u ipfs logs. ulimit -n shows the limit on the number of open files is set at 1024. How do I increase the file descriptor limit in configuration.nix?

I was able to increase the number of open files by adding the following to configuration.nix.
security.pam.loginLimits = [{
  domain = "*";
  type = "soft";
  item = "nofile";
  value = "8192";
}];
After running nixos-rebuild switch and rebooting, ulimit -n reported 8192.
More specifically, it's also possible to set the file-descriptor limit for just the IPFS service by adding the following to configuration.nix.
services.ipfs.serviceFdlimit = 32768;
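To confirm the new limits actually reached both your shell and the service (the question already refers to the unit as ipfs via journalctl -u ipfs), a quick check along these lines should work; the process name passed to pgrep is an assumption, so adjust it if your daemon runs under a different name:
# Soft and hard limits for the current login shell
ulimit -Sn
ulimit -Hn
# File-descriptor limit systemd applies to the IPFS unit
systemctl show ipfs -p LimitNOFILE
# Limits of the running daemon itself (assumes the process is named "ipfs")
grep "open files" /proc/$(pgrep -x ipfs | head -n1)/limits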

Related

Increase number of files opened at the same time. Ubuntu 16.04.4 LTS

In Ubuntu MATE 16.04.4 LTS, every time I run the command:
$ ulimit -a
I get:
open files (-n) 1024
I tried to increase this limit by adding the following line to /etc/security/limits.conf:
myusername hard nofile 100000
but no matter what, the value of 1024 persists when I run ulimit -a. I rebooted the system after the modification, yet the problem persists.
Also, if I run
ulimit -n 100000
I get the response:
ulimit: open files: cannot modify limit: Operation not permitted
and if I run
sudo ulimit -n 100000
I get:
sudo: ulimit: command not found
Any ideas on how to increase that limit?
Thanks.
From man bash under ulimit:
-n The maximum number of open file descriptors (most systems do not allow this value to be set)
Maybe your problem is simply that your system does not support modifying this limit?
I found the solution just after I posted this question, based on:
https://askubuntu.com/questions/162229/how-do-i-increase-the-open-files-limit-for-a-non-root-user
I also edited:
/etc/pam.d/common-session
and added the following line to the end:
session required pam_limits.so
All works now.
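For anyone following along, the full recipe on that Ubuntu box amounts to the following; the soft entry is an addition to what the question tried, so the new value takes effect at login without a manual ulimit call:
# /etc/security/limits.conf: set both the soft and the hard limit
#   myusername soft nofile 100000
#   myusername hard nofile 100000
# /etc/pam.d/common-session: make PAM apply limits.conf at login
#   session required pam_limits.so
# Then log out, log back in, and verify:
ulimit -n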

What is the most correct way to set limits on the number of open files on Linux?

There are three ways to set limits on the number of open files and sockets on Linux:
echo "100000" > /proc/sys/fs/file-max
ulimit -n 100000
sysctl -w fs.file-max=100000
What is the difference?
What is the most correct way to set limits on the number of open files on Linux?
sysctl is an interface for writing to /proc/sys, so it does the same thing as echoing directly to those files. Whereas fs.file-max applies system-wide, ulimit applies only to the current shell and to processes started from that shell.
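To see the two layers side by side on a running system (the exact numbers will differ per machine):
# System-wide ceiling on open file handles (sysctl and /proc show the same value)
cat /proc/sys/fs/file-max
sysctl fs.file-max
# Handles currently allocated / free / maximum
cat /proc/sys/fs/file-nr
# Per-process limit for this shell and its children (soft, then hard)
ulimit -Sn
ulimit -Hn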

How to detect file descriptor leaks in Node

I suspect that I have a file descriptor leak in my Node application, but I'm not sure how to confirm this. Is there a simple way to detect file descriptor leaks in Node?
Track open files
On Linux you can use the lsof command to list the open files for a process.
Get the PIDs of the thing you want to track:
ps aux | grep node
Let's say the PIDs are 1111 and 1234; list the open files:
lsof -p 1111,1234
You can save that list and compare it with a later snapshot taken when you expect your app to have released the descriptors.
Make it easier to reproduce
If it's taking a while to confirm this (because it takes a while to run out of descriptors), you can lower the number of file descriptors available to your app using ulimit:
ulimit -n 500 #or whatever number makes sense for you
#now start your node app in this terminal
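If you'd rather watch the count over time instead of diffing lsof listings, counting the entries in /proc/<pid>/fd works too; a rough sketch, with 1111 standing in for your real PID:
# Number of descriptors PID 1111 currently holds, refreshed every 2 seconds
watch -n 2 'ls /proc/1111/fd | wc -l'
# One-shot count, handy from inside a logging script
ls /proc/1111/fd | wc -l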

Node.js apache bench test

I'm playing around, trying to see how fast node.js can serve a static file from disk using Apache Bench:
ab -r -n 10000 -c 1000 http://server:8080/loadtestfile.txt
I've got a ulimit problem on an Ubuntu 11.04 x64 VirtualBox VM on OS X Lion:
(node) Hit max file limit. Increase "ulimit - n"
I can't increase the limit anymore.
$ ulimit -n 1000000
$ ulimit -n 1100000
-su: ulimit: open files: cannot modify limit: Operation not permitted
Is this the right way to force node.js to reload the file from disk to serve each HTTP request? How do I increase the limit beyond 1000000?
Normal curl request works:
curl http://server:8080/loadtestfile.txt
Code
var http = require('http'),
    fs = require('fs');

// Re-read the file from disk for every request
http.createServer(function (request, response) {
  fs.readFile('./loadtestfile.txt', function (err, data) {
    response.writeHead(200, { "Content-Type": "text/plain" });
    response.write(data);
    response.end();
  });
}).listen(8080);
Ideally, your application would run within the limits given to it; it should cleanly handle accept(2) returning an error condition with errno(3) set to EMFILE or ENFILE by not trying to accept another connection until an existing connection dies.
However, fixing the application to handle this case gracefully can be difficult; if you just want to raise your limits further to do more testing, there are two possibilities.
Edit /etc/security/limits.conf to raise the hard nofile limit for your user account to something much higher, then log in to your user account again.
Do your testing in a root shell; you could either log in as root directly or use sudo -s or su to start the shell, then run the ulimit -n command from there.
The difficulty is that raising limits requires administrative access. (When you've got a few hundred users on the system simultaneously, you want to keep them playing nice...)
However, I'm going to guess that your application will also run into the system-wide maximum number of open files limit; that is configured via /proc/sys/fs/file-max. Raise that limit too, if the application complains.
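As a rough sketch of that last step (the 200000 figure is arbitrary; pick whatever your load test needs):
# Current ceiling and how much of it is in use
cat /proc/sys/fs/file-max
cat /proc/sys/fs/file-nr
# Raise it for the running system (root required)
sudo sysctl -w fs.file-max=200000
# Persist it across reboots
echo "fs.file-max = 200000" | sudo tee -a /etc/sysctl.conf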

How do I change the number of open files limit in Linux? [closed]

When running my application I sometimes get an error about too many files open.
Running ulimit -a reports that the limit is 1024. How do I increase the limit above 1024?
Edit
ulimit -n 2048 results in a permission error.
You could always try doing a ulimit -n 2048. This will only set the limit for your current shell, and the number you specify must not exceed the hard limit.
Each operating system sets its hard limit in a different configuration file. For instance, the hard open file limit on Solaris can be set on boot from /etc/system.
set rlim_fd_max = 166384
set rlim_fd_cur = 8192
On OS X, this same data must be set in /etc/sysctl.conf.
kern.maxfilesperproc=166384
kern.maxfiles=8192
Under Linux, these settings are often in /etc/security/limits.conf.
There are two kinds of limits:
soft limits are simply the currently enforced limits
hard limits mark the maximum value which cannot be exceeded by setting a soft limit
Soft limits could be set by any user while hard limits are changeable only by root.
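You can see the difference from an unprivileged shell; the 1024/4096 numbers are just example values:
ulimit -Sn          # current soft limit, e.g. 1024
ulimit -Hn          # current hard limit, e.g. 4096
ulimit -Sn 512      # lowering the soft limit always works
ulimit -Sn 4096     # raising it back up to the hard limit also works
ulimit -Sn 999999   # going past the hard limit fails: Operation not permitted
ulimit -Hn 1024     # a non-root shell can lower its hard limit, but can never raise it again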
Limits are a property of a process. They are inherited when a child process is created, so system-wide limits should be set during system initialization in init scripts, and user limits should be set during user login, for example by using pam_limits.
There are often defaults set when the machine boots. So, even though you may reset your ulimit in an individual shell, you may find that it resets back to the previous value on reboot. You may want to grep your boot scripts for the existence of ulimit commands if you want to change the default.
If you are using Linux and you got the permission error, you will need to raise the allowed limit in the /etc/limits.conf or /etc/security/limits.conf file (where the file is located depends on your specific Linux distribution).
For example, to allow anyone on the machine to raise their number of open files up to 10000, add the following line to the limits.conf file.
* hard nofile 10000
Then log out and log back in to your system, and you should be able to do:
ulimit -n 10000
without a permission error.
1) Add the following line to /etc/security/limits.conf
webuser hard nofile 64000
then log in as webuser:
su - webuser
2) Edit the following two files for webuser
Append to .bashrc and .bash_profile by running:
echo "ulimit -n 64000" >> .bashrc ; echo "ulimit -n 64000" >> .bash_profile
3) Log out, then log back in and verify that the changes have been made correctly:
$ ulimit -a | grep open
open files (-n) 64000
That's it.
If some of your services are running into ulimit problems, it's sometimes easier to put the appropriate commands into the service's init script. For example, when Apache is reporting
[alert] (11)Resource temporarily unavailable: apr_thread_create: unable to create worker thread
Try putting ulimit -s unlimited into /etc/init.d/httpd. This does not require a server reboot.
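As a sketch, the relevant lines near the top of /etc/init.d/httpd (or whichever init script your distribution uses) would look something like this; the nofile line is optional and only needed if the service also runs out of descriptors:
# /etc/init.d/httpd (excerpt): raise limits before the daemon starts
ulimit -s unlimited   # stack size, for the apr_thread_create error above
ulimit -n 65536       # open files (optional; the value is just an example)
# ... existing start/stop logic follows unchanged ...
Restart the service afterwards (for example with service httpd restart) so the new limits take effect.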
