How do I change the number of open files limit in Linux? [closed]

When running my application I sometimes get an error about too many files open.
Running ulimit -a reports that the limit is 1024. How do I increase the limit above 1024?
Edit
ulimit -n 2048 results in a permission error.

You could always try running ulimit -n 2048. This resets the limit only for your current shell, and the number you specify must not exceed the hard limit.
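As a hedged illustration, you can check both limits before raising the soft one (values will vary per system):
ulimit -Sn        # current soft limit on open files
ulimit -Hn        # hard limit; -n can only be raised up to this value
ulimit -n 2048    # succeeds only if 2048 does not exceed the hard limit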
Each operating system sets its hard limit in a different configuration file. For instance, the hard open file limit on Solaris can be set at boot from /etc/system.
set rlim_fd_max = 166384
set rlim_fd_cur = 8192
On OS X, this same data must be set in /etc/sysctl.conf.
kern.maxfilesperproc=166384
kern.maxfiles=8192
Under Linux, these settings are often in /etc/security/limits.conf.
There are two kinds of limits:
soft limits are simply the currently enforced limits
hard limits mark the maximum value which cannot be exceeded by setting a soft limit
Soft limits can be adjusted by any user, while hard limits can be raised only by root.
Limits are a property of a process. They are inherited when a child process is created, so system-wide limits should be set during system initialization in init scripts, and user limits should be set during user login, for example by using pam_limits.
There are often defaults set when the machine boots. So, even though you may reset your ulimit in an individual shell, you may find that it resets back to the previous value on reboot. You may want to grep your boot scripts for existing ulimit commands if you want to change the default.
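As a hedged sketch of how those pieces fit together, a per-user entry in /etc/security/limits.conf combined with a pam_limits session line might look like this (the user name and values are illustrative, and the exact PAM service file varies by distribution):
# /etc/security/limits.conf
myuser soft nofile 4096
myuser hard nofile 8192
# /etc/pam.d/login (or whichever PAM service file handles your logins)
session required pam_limits.so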

If you are using Linux and you got the permission error, you will need to raise the allowed limit in the /etc/limits.conf or /etc/security/limits.conf file (which file it is depends on your specific Linux distribution).
For example, to allow anyone on the machine to raise their number of open files up to 10000, add the following line to the limits.conf file:
* hard nofile 10000
Then log out and log back in to your system and you should be able to run:
ulimit -n 10000
without a permission error.
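Adding a matching soft entry as well means new sessions start at the higher value without running ulimit by hand; a hedged example (values are illustrative):
* soft nofile 10000
* hard nofile 10000
After logging back in you can check both values:
ulimit -Sn    # soft limit
ulimit -Hn    # hard limit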

1) Add the following line to /etc/security/limits.conf
webuser hard nofile 64000
then log in as webuser
su - webuser
2) Append the ulimit setting to webuser's .bashrc and .bash_profile files by running
echo "ulimit -n 64000" >> .bashrc ; echo "ulimit -n 64000" >> .bash_profile
3) Log out, then log back in and verify that the changes have been made correctly:
$ ulimit -a | grep open
open files (-n) 64000
That's it.

If some of your services are running into ulimit restrictions, it's sometimes easier to put the appropriate commands into the service's init script. For example, when Apache is reporting
[alert] (11)Resource temporarily unavailable: apr_thread_create: unable to create worker thread
try putting ulimit -s unlimited into /etc/init.d/httpd. This does not require a server reboot.
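A hedged sketch of where such a line might go (the excerpt below is illustrative, not the stock script):
# near the top of /etc/init.d/httpd, before the daemon is started,
# so that httpd inherits the raised limit
ulimit -s unlimited
# the rest of the stock init script follows unchanged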

Related

How to increase maximum open file limit in Red Hat Enterprise Linux 5?

As the title says.
I've found this question:
How to increase Neo4j's maximum file open limit (ulimit) in Ubuntu?
But I don't even have this file: /etc/init.d/neo4j-service. I'm guessing that's because I'm using RHEL5, not Debian as the answerer was.
I then added these two lines:
root soft nofile 40000
root hard nofile 40000
into my /etc/security/limits.conf
Then, after logging out and logging in again, ulimit -Sn and ulimit -Hn still return 1024.
Also, I don't even have the file /etc/pam.d/common-session in the pam.d directory. Should I create this file myself and just put that one line in it? I don't think that should be the way to go.
Any ideas please?
Thanks
I don't know what the true RHEL way is, but you can change the limit using sysctl:
$ sysctl -w fs.file-max=100000
To make the change permanent, add the following line to /etc/sysctl.conf:
fs.file-max = 100000
then apply the change with:
$ sysctl -p
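You can verify the value afterwards; keep in mind that fs.file-max is the system-wide ceiling, while the per-process limit is still governed by ulimit and limits.conf (a hedged example, output will vary):
sysctl fs.file-max           # prints the current setting
cat /proc/sys/fs/file-max    # same value, read straight from /proc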

Settings in limits.conf don't affect processes started by an init.d script

My host is Red Hat, the default fd limit is 1024, and I added the following lines to /etc/security/limits.conf:
* soft nofile 8192
* hard nofile 65535
After this, newly logged-in shells have their FD limit correctly raised to 8192, but processes started by init.d scripts don't: they still have an fd limit of 1024. Only after I log in and use the service command to restart them is their fd limit raised to 8192.
So how can I make daemons started by init.d scripts pick up the FD limit set in limits.conf?
Thanks.
/etc/security/limits.conf is a configuration file for PAM. It sets limits for logged-in users, not for system processes.
From man limits.conf:
The pam_limits.so module applies ulimit limits, nice priority and number of simultaneous login sessions limit to user login sessions. This description of the configuration file syntax applies to the /etc/security/limits.conf file and *.conf files in the /etc/security/limits.d directory.
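One hedged workaround for daemons that must start at boot with a higher limit is to raise it inside the init script itself, before the daemon is launched (the script path and daemon name below are hypothetical):
# in /etc/init.d/mydaemon, inside the start) branch, before the daemon is launched
ulimit -n 8192
daemon /usr/sbin/mydaemon    # 'daemon' comes from /etc/init.d/functions on Red Hat
Because limits are inherited from the parent process, the daemon will keep the 8192 limit even though no PAM session is involved.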

fork: retry: Resource temporarily unavailable [closed]

I tried installing Intel MPI Benchmark on my computer and I got this error:
fork: retry: Resource temporarily unavailable
I then received this error again when I ran the ls and top commands.
What is causing this error?
Configuration of my machine:
Dell precision T7500
Scientific Linux release 6.2 (Carbon)
This is commonly caused by running out of file descriptors.
First, there is the system's total file descriptor limit; what do you get from the command:
sysctl fs.file-nr
This returns counts of file descriptors:
<in_use> <unused_but_allocated> <maximum>
To find out what a user's file descriptor limit is, run the commands:
sudo su - <username>
ulimit -Hn
To find out how many file descriptors are in use by a user, run the command:
sudo lsof -u <username> 2>/dev/null | wc -l
So if you are hitting the system-wide file descriptor limit, you will need to edit /etc/sysctl.conf and add (or modify, if it already exists) a line with fs.file-max, setting it to a value large enough for the number of file descriptors you need, then reboot.
fs.file-max = 204708
Another possibility is too many threads. We just ran into this error message when running a test harness against an app that uses a thread pool. We used
watch -n 5 -d "ps -L -p <java_pid> | wc -l"
to watch the ongoing count of Linux native threads running within the given Java process ID. Once this hit about 1,000 (for us; YMMV), we started getting the error message you mention.
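If thread exhaustion rather than file descriptors is the cause, these hedged checks may help narrow it down (limits differ per system):
ulimit -u                          # max user processes/threads for the current user
cat /proc/sys/kernel/threads-max   # system-wide thread ceiling
ps -eLf | wc -l                    # rough count of all threads currently running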

How do I configure a Hudson Linux slave to generate core files?

I've been seeing occasional segmentation faults in glibc on several different Fedora Core 9 Hudson slaves. I've attempted to configure each slave to generate core files and place them in /corefiles, but have had no luck.
Here is what I've done on each linux slave:
1) Created a corefile storage location
sudo install -m 1777 -d /corefiles
2) Directed the corefiles to the storage location by adding the following to /etc/sysctl.conf
kernel.core_pattern = /corefiles/core.%e-PID:%p-%t-signal_%s-%h
3) Enabled unlimited corefiles for all users by adding the following to /etc/profile
ulimit -c unlimited
Is there some additional Linux magic required or do I need to do something to the Hudson slave or JVM?
Thanks for the help
Did you reboot or run "sysctl -p" (as root) after editing /etc/sysctl.conf?
Also, if I remember correctly, ulimit values are per user, and a plain ulimit call won't survive a reboot. You should add this to /etc/security/limits.conf:
* soft core unlimited
Or call ulimit in the script that starts Hudson if you don't want everyone to produce core dumps.
I figured this out :-).
The issue is that Hudson invokes bash as a non-interactive shell, which bypasses the ulimit setting in /etc/profile. The solution is to add the BASH_ENV environment variable to the Hudson slaves and set its value to a file that contains ulimit -c unlimited.
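A hedged sketch of that setup (the file name /etc/hudson-env.sh is hypothetical):
# contents of /etc/hudson-env.sh, the file BASH_ENV points to
ulimit -c unlimited
and on the slave, in the node's environment or launch script, set:
BASH_ENV=/etc/hudson-env.sh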

ssh remote command execution and ulimit

I have the following script:
cat > /tmp/script.sh <<EndOfScript
#!/bin/sh
ulimit -n 8192
run_app
EndOfScript
which runs smoothly locally; it is always OK. But if I try to run it remotely through ssh:
scp /tmp/script.sh user@host:/tmp/script.sh
ssh user@host "chmod 755 /tmp/script.sh; /tmp/script.sh"
I got the error:
ulimit: open files: cannot modify limit: Operation not permitted
I also tried the following command:
ssh user@host "ulimit -n 8192"
same error.
It looks like ssh remote command execution enforces a hard limit of 1024 on nofile, but I cannot figure out how to modify this default value. I tried modifying /etc/security/limits.conf and restarting sshd; still the same error.
Instead of using the workaround of /etc/initscript (and be careful not to make a typo in that file :), if you just want sshd to honor the settings you made in /etc/security/limits.conf, make sure you have UsePAM yes in /etc/ssh/sshd_config, and that /etc/pam.d/sshd lists session required pam_limits.so (or otherwise includes another file that does so).
That should be all there is to it.
In older versions of OpenSSH (before roughly 3.6) there was also a problem with UsePrivilegeSeparation that prevented limits from being honored, but it was fixed in newer versions.
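A hedged sketch of the relevant lines (paths are the usual defaults and may differ on your distribution):
# /etc/ssh/sshd_config
UsePAM yes
# /etc/pam.d/sshd
session required pam_limits.so
After changing sshd_config, restart sshd so the setting takes effect.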
Finally figured out the answer: add the following to /etc/initscript:
ulimit -c unlimited
ulimit -HSn 65535
# Execute the program.
eval exec "$4"
Raising the limit above the current hard limit requires superuser privileges.
I would suggest asking the server administrator to modify that value for you on the server you are trying to run the script on.
They can do that by modifying /etc/security/limits.conf on Linux. Here is an example that might help:
* soft nofile 8192
* hard nofile 8192
After that, you don't need to restart sshd. Just log out and log in again.
I would suggest asking the same question on Server Fault, though. You'll get better server-side answers there.
Check the startup scripts (/etc/profile, ~/.??*) for a call to ulimit. IIRC, once a hard limit has been lowered, it can't be widened again.
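A hedged example of such a check (the file list is illustrative):
grep -n ulimit /etc/profile /etc/profile.d/*.sh ~/.bashrc ~/.bash_profile 2>/dev/null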
