What does "soft/hard nofile" mean on Linux - linux

When I tried to install software on RedHat EL5, I got an error saying that the expected value of soft/hard nofile is 4096, while the default is 1024. I managed to increase the number, but I don't know what these parameters are. Are they referring to soft links and hard links?
The way I change it is:
A) modify the /etc/security/limits.conf
user soft nofile 5000
user hard nofile 6000
B) modify the /etc/pam.d/system-auth
session required /lib/security/$ISA/pam_limits.so
C) modify /etc/pam.d/login
session required pam_limits.so
After making the change (while switched to root), it seems that I have to reboot the machine for it to take effect. But some posts online say it should take effect right after the change is made. I would appreciate it if someone could clarify this.

These are the 'soft' and 'hard' limits on the number of files a process may have open at a time. Both limit the same resource (they have no relation to hard or soft links). The difference is that the soft limit may be changed later, up to the hard-limit value, by the process running under these limits, while the hard limit can only be lowered – a process cannot grant itself more of the resource by raising its hard limit (unless it runs with superuser privileges, i.e. as root).
Similar limits can be set for other system resources: system memory, CPU time, etc. See the setrlimit(2) manual page or the description of your shell's ulimit built-in command (e.g. in the bash(1) manual page).
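The relationship between the two limits can be seen directly with the shell's ulimit built-in (-S for soft, -H for hard, -n for open files); a quick interactive sketch:

```shell
# Show the current soft and hard open-file limits of this shell:
ulimit -Sn    # soft limit, e.g. 1024
ulimit -Hn    # hard limit, e.g. 4096

# An unprivileged process may move its soft limit anywhere up to the
# hard limit; lowering it is always allowed:
ulimit -Sn 512
ulimit -Sn    # now 512

# Lowering the hard limit is one-way for an unprivileged process:
# only root could raise it again afterwards.
```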

No reboot is required, but /etc/security/limits.conf is only processed when /lib/security/pam_limits.so runs, which is at login time, and the values are inherited by child processes. After a new login, anything under that login will inherit the values specified.
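The inheritance can be demonstrated in a shell, assuming a POSIX sh is on the PATH: lower the soft limit in a subshell, then ask a child process what it sees:

```shell
# Limits propagate from parent to child; pam_limits relies on this by
# setting them once on the login process, so everything started under
# that login inherits them.
(
  ulimit -Sn 600         # lower the soft limit in this subshell
  sh -c 'ulimit -Sn'     # the child shell reports 600
)
```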

As an additional aside, some distros include /etc/security/limits.d where "snippets" of limit configurations can be placed. You can create files such as this:
$ ll /etc/security/limits.d/
-rw-r--r--. 1 root root 191 Aug 18 10:26 90-nproc.conf
-rw-r--r-- 1 root root 70 Sep 29 12:54 90-was-filedesc.conf
With files containing whatever limits you want to set:
$ more /etc/security/limits.d/90-nproc.conf
# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.
* soft nproc 1024
root soft nproc unlimited
$ more /etc/security/limits.d/90-was-filedesc.conf
root hard nofile 20000
I find using this method to manage these types of overrides much cleaner than mucking with /etc/security/limits.conf.
Also, if you want to set both the soft and hard limits to the same value, you can use - as the type.
$ more /etc/security/limits.d/90-was-filedesc.conf
root - nofile 20000
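A sketch of generating such a snippet (the user name "appuser" and the file name are made up for illustration); as root you would redirect the output into /etc/security/limits.d/:

```shell
# Build the snippet; "-" sets soft and hard to the same value.
snippet='# open-file limits for the hypothetical user "appuser"
appuser - nofile 16384'

# As root: printf '%s\n' "$snippet" > /etc/security/limits.d/91-appuser.conf
printf '%s\n' "$snippet"
```

The numeric prefix on the file name only controls the order in which snippets are read; later files override earlier ones for the same domain and item.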

Related

Unable to increase open file limits in Linux

I had an error message indicating that a process was unable to complete because the number of open files exceeded the limit. Checking the soft limit associated with my username with
ulimit -Sn
gives a soft limit of 1024. I attempted to increase the soft limit for my username (I have root privileges) to 4096, following the instructions at
https://www.tecmint.com/increase-set-open-file-limits-in-linux/
i.e. in /etc/security/limits.conf I added the line
# <username> soft nofile 4096
When I exited and logged back in, the user limit was still 1024 (rebooting the server didn't reset it either).
Does the limits.conf file require specific spacing (e.g. tabs vs. spaces, or a certain number of spaces between columns)? I can't think of anything else that could be wrong at this point.
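For what it's worth, pam_limits accepts any mix of spaces and tabs between the fields; a more common pitfall is that a line whose first non-blank character is # is treated as a comment and ignored entirely. A quick way to check which lines of a config will actually be parsed (sample content made up for illustration):

```shell
# Sample limits.conf content: a commented-out line (ignored by
# pam_limits), a tab-separated line, and a space-separated line.
sample='# alice soft nofile 4096
bob	soft	nofile	4096
carol soft nofile 4096'

# Print only the lines that will actually take effect:
printf '%s\n' "$sample" | grep -v '^[[:space:]]*#'
```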

What are the total number of users operating in a linux os at a time?

I wanted to know the answer since I couldn't find it anywhere.
It depends on the maximum UID and the maximum PID. UIDs are 32-bit, so there can be 4,294,967,296 of them, but the PID range is narrower: 2^22, which is exactly 4,194,304. That is the theoretical maximum; in the real world some daemons are already running, so roughly 4 million. (On 32-bit systems it is only 32,768.)
To count the users currently logged in, you can use the word-count command like this:
users | wc -w
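Note that users prints one entry per login session, so an account logged in twice is counted twice. A small variation (not from the original answer) that counts distinct logged-in users instead:

```shell
# One name per session -> one name per line -> deduplicate -> count.
# sed drops the empty line produced when nobody is logged in.
users | tr ' ' '\n' | sort -u | sed '/^$/d' | wc -l
```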
Theoretically you can have as many users as the user-ID space supports. To determine this on a particular system, check the definition of the uid_t type. It is usually defined as unsigned int or int. On Intel architectures, sizes are defined in /usr/include/bits/typesizes.h. You can check the value on your system by typing the following command in a terminal:
cat /usr/include/bits/typesizes.h | grep UID_T
In my system, output of this command shows:
#define __UID_T_TYPE __U32_TYPE
This means the system can host 4,294,967,296 (2^32) different users. However, other resources may be exhausted before you reach this limit, e.g. disk space. If you create a home directory for each user, then even with just 1 MB of space per user you need over 4 PB of storage. Also, a large number of users leaving processes running in the background, scheduling cron jobs, and opening FTP and/or SSH sessions can put a severe burden on the system.
Limit for simultaneous logins:
When logging in using SSH, you use a pseudo-terminal (a pty) allocated by the SSH daemon, not a real terminal (a tty). Pseudo-terminals are created and destroyed as needed. You can find the number of ptys that may be allocated at one time with
cat /proc/sys/kernel/pty/max
In my system, output of this command shows:
4096
This means up to 4096 users can be logged in to this machine at the same time (via remote login).
PS: My Linux distribution is 64-bit Fedora 23

How can I limit the max numbers of folders that user can create in linux

I have been told that if a user on my computer creates an "infinite" number of folders/files (even empty ones), it can cause the computer to become much slower (or even get stuck), so I want to limit the maximum number of files/directories a user can create.
I'm afraid that one user will try to create a huge number of files and it will become a problem for all the other users, so it is a security issue.
How do I limit the maximum number of files/directories each user can create?
You should first enable quota checking on your filesystem.
Modify /etc/fstab and add the keywords usrquota and grpquota to the corresponding filesystem that you would like to monitor.
The following example shows both user and group quota checking enabled on the /home filesystem:
# cat /etc/fstab
LABEL=/home /home ext2 defaults,usrquota,grpquota 1 2
Reboot (or remount the filesystem) after this is done.
Once you’ve enabled disk quota check on the filesystem, collect all quota information initially as shown below.
# quotacheck -avug
quotacheck: Scanning /dev/sda3 [/home] done
quotacheck: Checked 5182 directories and 31566 files
quotacheck: Old file not found.
quotacheck: Old file not found.
Now, use the edquota command as shown below, to edit the quota information for a specific user.
For example, to change the disk quota for user ‘ramesh’, use edquota command, which will open the soft, hard limit values in an editor as shown below.
# edquota ramesh
Disk quotas for user ramesh (uid 500):
Filesystem   blocks  soft  hard  inodes  soft  hard
/dev/sda3   1419352     0     0    1686     0     0
Hard limit – if you specify 2 GB as the hard limit, the user will not be able to create new files once 2 GB is used.
Soft limit – if you specify 1 GB as the soft limit, the user will get a "disk quota exceeded" warning once they reach 1 GB, but they will still be able to create new files until they reach the hard limit.
Lastly, if you would like a daily report on users' quotas, you can do the following.
Add quotacheck to the daily cron jobs: create a quotacheck file under the /etc/cron.daily directory, as shown below, that runs the quotacheck command every day and mails its output to the root email address.
# cat /etc/cron.daily/quotacheck
quotacheck -avug
This is what quotas are designed for. You can use file system quotas to enforce limits, per user and/or per group for:
the amount of disk space that can be used
the number of blocks that can be used
the number of inodes that can be created.
The number of inodes will essentially limit the number of files and directories a user can create.
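If you only want to cap the file count, setquota offers a non-interactive alternative to edquota. A sketch (the user name "ramesh" and the /home mount point follow the example above; the command needs root and quotas enabled, so it is guarded here):

```shell
# setquota takes: block-soft block-hard inode-soft inode-hard filesystem.
# Leaving the block limits at 0 means "unlimited"; the inode limits cap
# the number of files/directories at roughly 1200.
if command -v setquota >/dev/null 2>&1; then
    setquota -u ramesh 0 0 1000 1200 /home 2>/dev/null ||
        echo "setquota failed (needs root and quotas enabled on /home)"
else
    echo "quota tools not installed"
fi
```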
There is extensive, high-quality documentation on configuring file system quotas in many sources, which I suggest you read further:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/ch-disk-quotas.html
https://wiki.archlinux.org/index.php/disk_quota
http://www.ibm.com/developerworks/library/l-lpic1-v3-104-4/
http://www.firewall.cx/linux-knowledgebase-tutorials/linux-administration/838-linux-file-system-quotas.html

What's the difference between the output of the "ulimit" command and the contents of the file "/etc/security/limits.conf"?

I am totally confused about how to obtain the limits on open file descriptors in Linux.
Which of these values is correct?
ulimit -n ======> 65535
but
vim /etc/security/limits.conf
soft nofile 50000
hard nofile 90000
The limits in /etc/security/limits.conf are applied by the pam_limits module at login, if it is part of the PAM configuration. Then the shell gets invoked, which can apply its own limits.
If you're asking which one is in effect, it's the result of the ulimit call. If it's not invoked with the -H option, it displays the soft limit.
The idea behind the limits.conf settings is to have a global place to apply limits to, for example, remote logins.
Limits for things like file descriptors can be set at the user level, or on a system wide level. /etc/security/limits.conf is where you can set user level limits, which might be different limits for each user, or just defaults that apply to all users. The example you show has a soft (~warning) level limit of 50000, but a hard (absolute maximum) limit of 90000.
However, a system-wide limit of 65535 might be in place, which would take precedence over the user limit. System-wide limits are set persistently in /etc/sysctl.conf, so you might check there to see if you're being limited by the system.
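The system-wide ceiling referred to here is the fs.file-max sysctl, the total number of file handles available to all processes combined, which is separate from the per-user ulimit values:

```shell
# Kernel-wide maximum number of open file handles:
cat /proc/sys/fs/file-max

# Currently allocated handles, unused handles, and the maximum:
cat /proc/sys/fs/file-nr

# The same value via sysctl, when the tool is available:
command -v sysctl >/dev/null 2>&1 && sysctl fs.file-max || true
```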
Also, the ulimit command can take switches to specifically show the soft (-Sn) and hard (-Hn) limits for file descriptors.
I think this configuration file is used by all the apps in the system. If you want to change just one particular app, you can try setrlimit() or getrlimit(). The man pages explain everything.

Linux #open-file limit

We are facing a situation where a process gets stuck after running out of its open-files limit. The global setting, file-max, was set extremely high (in sysctl.conf), and the per-user value was also set high in /etc/security/limits.conf. Even ulimit -n reflects the per-user value when run as the headless user that owns the process. So the question is: does this change require a system reboot (my understanding is it doesn't)? Has anyone faced a similar problem? I am running Ubuntu Lucid and the application is a Java process. The ephemeral port range is also high enough, and when checked during the issue, the process had 1024 files open (note: the default limit), as reported by lsof.
One problem you might run into is that the fd_set used by select() is limited to FD_SETSIZE, which is fixed at compile time (in this case, of the JRE) at 1024:
#define FD_SETSIZE __FD_SETSIZE
/usr/include/bits/typesizes.h:#define __FD_SETSIZE 1024
Luckily both the C library and the kernel can handle arbitrarily sized fd_sets, so for a compiled C program it is possible to raise that limit.
Assuming you have edited the file-max value in sysctl.conf and /etc/security/limits.conf correctly, then:
edit /etc/pam.d/login, adding the line:
session required /lib/security/pam_limits.so
and then do
# ulimit -n unlimited
Note that you may need to log out and back in again before the changes take effect.
