How to fix an fs.file-max value that is too low? - linux

I had an error saying "Too many open files in system", and I read that I needed to increase the value of the fs.file-max variable in /etc/sysctl.conf.
I ran ulimit -Hn, which returned 1024, so I assumed I should increase this value: I set fs.file-max to 4096 and applied the change with sysctl -p.
Now I cannot do anything as I get the error "Too many open files in system" for every single command I run.
I read a bit more and it seems I need to set fs.file-max to a much larger value, such as 200000 as suggested in several places.
Now my problem is: how can I edit that value again? If possible, without restarting the machine.

The solution was to hard-reboot the machine. After the restart, I was able to run commands again and remove the fs.file-max setting.
I am curious, however, what the solution would have been if I had set fs.file-max = 1?
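For future readers: fs.file-max is a runtime kernel setting, so as long as a root shell can still be obtained, it can be raised immediately through /proc or sysctl, with no reboot needed. A minimal sketch (the write commands require root, so they are shown commented out; 200000 is just the example value from the question):

```shell
# Read the current system-wide open-file limit:
cat /proc/sys/fs/file-max
# file-nr reports three numbers: allocated handles, free handles, file-max
cat /proc/sys/fs/file-nr

# To raise the limit immediately (as root), either of:
#   echo 200000 > /proc/sys/fs/file-max
#   sysctl -w fs.file-max=200000
# To make it persistent, set "fs.file-max = 200000" in /etc/sysctl.conf
# and reload with "sysctl -p".
```

Note that the runtime write takes effect instantly for the whole system; sysctl.conf only matters at boot or when explicitly reloaded.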

Related

Limit Chromium cache size in Linux

I need to limit the cache size of Chromium on my Debian computer. I've tried to edit master-preferences in order to solve this problem, but every time I reopen the browser the file restores its original values.
How can I modify these values so that, for example, there is a 10 MB cache limit every time?
An easy fix is to add the following argument to the command:
chromium-browser --disk-cache-size=n
If n is 500000000, that would be 500 MB.
You can verify the change by opening the following in your browser and looking at the Max Size value:
chrome://net-internals/#httpCache
Please see https://askubuntu.com/questions/104415/how-do-i-increase-cache-size-in-chrome/104429#104429
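If you want the flag applied on every launch rather than typing it each time, one common approach is a shell alias or a small wrapper script. A sketch, assuming the binary is named chromium-browser and using 10000000 (~10 MB) to match the question; the wrapper path is just an example:

```shell
# Alias for interactive shells (e.g. put this in ~/.bashrc):
alias chromium-browser='chromium-browser --disk-cache-size=10000000'

# Or a wrapper script earlier on PATH, e.g. ~/bin/chromium-browser:
#   #!/bin/sh
#   exec /usr/bin/chromium-browser --disk-cache-size=10000000 "$@"

# Confirm the alias is defined:
alias chromium-browser
```

The wrapper-script variant also covers launches from desktop menus, which do not read shell aliases.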

What's the difference between the output of the "ulimit" command and the contents of "/etc/security/limits.conf"?

I am totally confused about how the open file descriptor limits are obtained in Linux.
Which of these values is correct?
ulimit -n ======> 65535
but
vim /etc/security/limits.conf
soft nofile 50000
hard nofile 90000
The limits in /etc/security/limits.conf are applied at login by the pam_limits module, if it is part of the PAM configuration. The shell that is then invoked can apply its own limits on top.
If you're asking which one is in effect, it's the result of the ulimit call; if ulimit is not invoked with the -H option, it displays the soft limit.
The idea behind the limits.conf settings is to have a global place to apply limits, for example for remote logins.
Limits for things like file descriptors can be set at the user level, or on a system wide level. /etc/security/limits.conf is where you can set user level limits, which might be different limits for each user, or just defaults that apply to all users. The example you show has a soft (~warning) level limit of 50000, but a hard (absolute maximum) limit of 90000.
However, a system limit of 65535 might be in place, which would take precedence over the user limit. I think system limits are set in /etc/sysctl.conf, if my memory serves correctly. You might check there to see if you're being limited by the system.
Also, the ulimit command can take switches to specifically show the soft (-Sn) and hard (-Hn) limits for file descriptors.
I think this configuration applies to all applications on the system. If you want to change the limit for one particular application, you can use setrlimit() or getrlimit(); the man pages explain everything.
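To make the soft/hard distinction above concrete, here is a quick shell sketch: -Sn and -Hn show the two values for file descriptors, and an unprivileged process can lower its soft limit (and raise it back, up to the hard limit), while raising the hard limit needs root:

```shell
# Show the current soft and hard open-file limits for this shell:
ulimit -Sn    # soft limit: the value actually enforced right now
ulimit -Hn    # hard limit: the ceiling the soft limit may be raised to

# Lower the soft limit in a subshell so the change doesn't affect
# the current shell; the second ulimit confirms the new value:
( ulimit -Sn 512; ulimit -Sn )

# Raising the hard limit, however, needs root:
#   ulimit -Hn unlimited   # "Operation not permitted" for normal users
```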

Linux open-file limit

We are facing a situation where a process gets stuck after running out of its open-files limit. The global file-max setting is extremely high (set in sysctl.conf), and the per-user value in /etc/security/limits.conf was also set high. Even ulimit -n reflects the per-user value when run as that headless user (the process owner). So the question is: does this change require a system reboot (my understanding is that it doesn't)? Has anyone faced a similar problem? I am running Ubuntu Lucid and the application is a Java process. The ephemeral port range is also large enough, and when checked during the issue, the process had 1024 files open (note that this is the default limit), as reported by lsof.
One problem you might run into is that the fd_set used by select is limited to FD_SETSIZE, which is fixed at compile time (in this case of the JRE), and that is limited to 1024.
#define FD_SETSIZE __FD_SETSIZE
/usr/include/bits/typesizes.h:#define __FD_SETSIZE 1024
Luckily, both the C library and the kernel can handle arbitrarily sized fd_sets, so for a compiled C program it is possible to raise that limit.
Assuming you have edited the file-max value in sysctl.conf and /etc/security/limits.conf correctly, then:
edit /etc/pam.d/login, adding the line:
session required /lib/security/pam_limits.so
and then, as root, run:
ulimit -n unlimited
Note that you may need to log out and back in again before the changes take effect.
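Independent of the PAM configuration, you can confirm what limit a running process actually has via /proc, which is often the fastest way to debug the "process still capped at 1024" situation described above. A sketch (`self` here is the current shell; `<pid>` is a placeholder for the stuck process):

```shell
# The kernel reports the limits actually applied to a process:
grep 'Max open files' /proc/self/limits

# For a specific process, substitute its PID:
#   grep 'Max open files' /proc/<pid>/limits
# and count how many files it really has open right now:
#   ls /proc/<pid>/fd | wc -l
```

If /proc/&lt;pid&gt;/limits still shows 1024, the process was started before the new limits took effect (or by something outside the PAM login path, such as an init script), and it must be restarted under the new limits.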

What does "soft/hard nofile" mean on Linux

When I tried to install software on RedHat EL5, I got an error saying that the expected value of soft/hard nofile is 4096 while the default is 1024. I managed to increase the number, but I don't know what these parameters are. Are they referring to soft links and hard links?
The way I change it is:
A) modify the /etc/security/limits.conf
user soft nofile 5000
user hard nofile 6000
B) modify the /etc/pam.d/system-auth
session required /lib/security/$ISA/pam_limits.so
C) modify /etc/pam.d/login
session required pam_limits.so
I made the change while switched to root, and it seems that I have to reboot the machine for it to take effect. But some posts online say it should take effect right after the change. I would appreciate it if someone could clarify.
These are a 'soft' and a 'hard' limit on the number of files a process may have open at a time. Both limit the same resource (no relation to hard links or anything like that). The difference: the soft limit may be changed later, up to the hard limit, by the process running under these limits, while the hard limit can only be lowered; the process cannot assign itself more resources by increasing the hard limit (except processes running with superuser privileges, i.e. as root).
Similar limits can be set for other system resources: system memory, CPU time, etc. See the setrlimit(2) manual page or the description of your shell's ulimit built-in command (e.g. in the bash(1) manual page).
No reboot is required, but /etc/security/limits.conf is only processed when /lib/security/pam_limits.so runs, which is at login time, and the values are inherited by child processes. After a new login, anything under that login will inherit the values specified.
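The inheritance behavior is easy to demonstrate from a shell: a limit set in a parent process is passed down to every child it starts afterwards. A small sketch (600 is an arbitrary example value):

```shell
# Lower the soft nofile limit in a subshell, then start a fresh child
# shell and ask what it sees: the child inherits the parent's value.
( ulimit -Sn 600; sh -c 'ulimit -n' )
```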
As an additional aside, some distros include /etc/security/limits.d where "snippets" of limit configurations can be placed. You can create files such as this:
$ ll /etc/security/limits.d/
-rw-r--r--. 1 root root 191 Aug 18 10:26 90-nproc.conf
-rw-r--r-- 1 root root 70 Sep 29 12:54 90-was-filedesc.conf
With files containing whatever limits you want to set:
$ more /etc/security/limits.d/90-nproc.conf
# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.
* soft nproc 1024
root soft nproc unlimited
$ more /etc/security/limits.d/90-was-filedesc.conf
root hard nofile 20000
I find using this method to manage these types of overrides much cleaner than mucking with /etc/security/limits.conf.
Also, if you want to set both the soft and hard limits to the same value, you can use - as the type.
$ more /etc/security/limits.d/90-was-filedesc.conf
root - nofile 20000
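The limits.d snippets above are plain text files in the same four-column format as limits.conf itself. As a sketch, creating one such file (written to a temporary directory here, since the real /etc/security/limits.d requires root; the filename mirrors the example above):

```shell
# limits.d snippets use the limits.conf format:
#   <domain>  <type>  <item>  <value>   ("-" sets soft and hard together)
tmpdir=$(mktemp -d)
cat > "$tmpdir/90-was-filedesc.conf" <<'EOF'
root    -    nofile    20000
EOF
cat "$tmpdir/90-was-filedesc.conf"
```

Files in limits.d are read after limits.conf, so a snippet can override the main file's defaults for specific users.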

PHP - Plesk - Cron - Allowed memory size exhausted?

ini_set('max_execution_time',0);
ini_set('memory_limit','1000M');
These are the first two lines at the very top of my script.
I was under the impression that memory limits didn't apply when running something via cron, but I was wrong. Safe mode is off, and when I test whether these values are being set, they are, but I keep getting the good ol' "PHP Fatal: Memory exhausted" error.
Any ideas what I may be doing wrong? And what's the "more elegant way" of writing "infinite" for the memory_limit value? Is it -1 or something?
Is it possible that Suhosin is running on your server? If so, you have to set "suhosin.memory_limit" inside your php.ini.
Suhosin does not allow allocating more memory, even if safe mode is off.
I changed memory_limit to -1 instead of '1000M', and now everything works perfectly.
You can't use non-numeric values ("M", "K") outside php.ini proper. Setting 10000000 would probably work.
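For a script that only ever runs from cron, another option (assuming the php CLI binary is what cron invokes) is to override the limit at invocation time rather than inside the script; this is a configuration sketch, with the script path as a placeholder:

```shell
# Hypothetical crontab entry: php's -d flag overrides a php.ini
# directive for this run only; memory_limit=-1 means "no limit".
#   0 3 * * *  php -d memory_limit=-1 /path/to/script.php
```

This keeps the production php.ini conservative while letting the one heavy cron job run unconstrained.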
