What is a soft variable? - linux

I am trying to understand this fragment from the benchmark in the CentOS 6 Hardening Guide, but I have not had any success yet... Can anybody explain to me what a soft variable is?
The context in which it is said is the following:
"Setting a hard limit on core dumps prevents users from overriding the soft variable. If core dumps are required, consider setting limits for user groups (see limits.conf(5)). In addition, setting the fs.suid_dumpable variable to 0 will prevent setuid programs from dumping core [...]"
Page 39 of the CentOS 6 Hardening Guide
Thank you so much!

Here, read this: linux.die.net/man/5/limits.conf. It helps explain hard vs. soft limits. – Tom Myddeltyn

A "soft" limit is one that you may decrease or increase (up to the "hard" limit), and a "hard" limit is one that you can't increase.
Initial limits are imposed by the system configuration. See the limits.conf manual.
See also the entry for the ulimit builtin of your particular shell.
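For example, here is a minimal sketch (bash assumed) of inspecting and moving the soft limit for core dumps, the case the hardening guide describes:

ulimit -Sc           # current soft limit on core file size
ulimit -Hc           # hard ceiling; an unprivileged user cannot raise it
ulimit -Sc 0         # lower the soft limit: no core dumps in this shell
ulimit -Sc hard      # raise the soft limit back up to the hard limit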

Related

Limiting the memory usage of a program in Linux

I'm new to Linux and Terminal (or whatever kind of command prompt it uses), and I want to control the amount of RAM a process can use. I have already looked for hours to find an easy-to-use guide. I have a few requirements for limiting it:
Multiple instances of the program will be running, but I only want to limit some of the instances.
I do not want the process to crash once it exceeds the limit. I want it to use HDD page swap.
The program will run under WINE, and is a .exe.
So can somebody please help with the command to limit the RAM usage on a process in Linux?
The fact that you’re using Wine makes no difference in this particular context, which leaves requirements 1 and 2. Requirement 2 –
I do not want the process to crash once it exceeds the limit. I want it to use HDD page swap.
– is known as limiting the resident set size or rss of the process, and it’s actually rather nontrivial to do on Linux, as is demonstrated by a question asked in 2010. You’ll need to set up Linux control groups (cgroups). Fortunately, Justin L.’s answer gives a brief rundown on how to do so. Note that
instead of jlebar, you should use your own Unix user name, and
instead of your/program, you should use wine /path/to/Windows/program.exe.
Using cgroups will also satisfy your other requirements – you can start as many instances of the program as you wish, but only those which you start with cgexec -g memory:limited will be limited.
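For reference, a minimal sketch of such a setup, assuming the libcgroup tools (cgcreate, cgexec) are installed and a cgroups-v1 memory controller is mounted; the group name "limited" and the 512M cap are arbitrary:

sudo cgcreate -g memory:limited                # create the "limited" cgroup
echo 512M | sudo tee /sys/fs/cgroup/memory/limited/memory.limit_in_bytes
cgexec -g memory:limited wine /path/to/Windows/program.exe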

What's the difference between the output of the "ulimit" command and the content of the file "/etc/security/limits.conf"?

I am totally confused about how to obtain the limits on open file descriptors in Linux. Which of these values is correct?
ulimit -n ======> 65535
but
vim /etc/security/limits.conf
soft nofile 50000
hard nofile 90000
The limits in /etc/security/limits.conf are applied by the limits authentication module at login, if it is part of the PAM configuration. Then the shell gets invoked, which can apply its own limits to the shell.
If you're asking which one is in effect, then it's the result of the ulimit call. If it's not invoked with the -H option, it displays the soft limit.
The idea behind the limits.conf settings is to have a global place to apply limits to, for example, remote logins.
Limits for things like file descriptors can be set at the user level, or on a system wide level. /etc/security/limits.conf is where you can set user level limits, which might be different limits for each user, or just defaults that apply to all users. The example you show has a soft (~warning) level limit of 50000, but a hard (absolute maximum) limit of 90000.
However, a system limit of 65535 might be in place, which would take precedence over the user limit. I think system limits are set in /etc/sysctl.conf, if my memory serves correctly. You might check there to see if you're being limited by the system.
Also, the ulimit command can take switches to specifically show the soft (-Sn) and hard (-Hn) limits for file descriptors.
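For example, from a shell (bash assumed; the values printed are whatever your session actually has):

ulimit -Sn    # soft limit on open file descriptors
ulimit -Hn    # hard limit on open file descriptors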
I think this conf is used by all the apps in the system. If you want to change one particular app, you can try setrlimit() or getrlimit(). The man page does explain everything.
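If you would rather adjust one already-running process from the shell than call setrlimit() from C, the prlimit(1) utility from util-linux wraps the same syscall; the PID and values below are made up for illustration:

prlimit --pid 1234 --nofile=8192:90000    # set soft=8192, hard=90000 for PID 1234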

Linux #open-file limit

We are facing a situation where a process gets stuck due to running out of open files. The global setting file-max was set extremely high (in sysctl.conf), and the per-user value was also set to a high value in /etc/security/limits.conf. Even ulimit -n reflects the per-user value when run as that headless user (the process owner). So the question is: does this change require a system reboot (my understanding is that it doesn't)? Has anyone faced a similar problem? I am running Ubuntu Lucid, and the application is a Java process. The ephemeral port range, too, is high enough, and when checked during the issue, the process had opened 1024 files (note that this is the default value), as reported by lsof.
One problem you might run into is that the fd_set used by select is limited to FD_SETSIZE, which is fixed at compile time (in this case, of the JRE), and that is limited to 1024.
#define FD_SETSIZE __FD_SETSIZE
/usr/include/bits/typesizes.h:#define __FD_SETSIZE 1024
Luckily, both the C library and the kernel can handle an arbitrarily sized fd_set, so for a compiled C program it is possible to raise that limit.
Assuming you have edited the file-max value in sysctl.conf and /etc/security/limits.conf correctly, then:
edit /etc/pam.d/login, adding the line:
session required /lib/security/pam_limits.so
and then do
#ulimit -n unlimited
Note that you may need to log out and back in again before the changes take effect.
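A quick way to confirm that the new limits took effect after logging back in (plain shell, nothing here is system-specific):

ulimit -Sn                            # soft open-file limit for this shell
ulimit -Hn                            # hard ceiling
grep 'open files' /proc/self/limits   # what the kernel has recorded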

Limit the memory and cpu available for a user in Linux

I am a little concerned about the amount of resources that I can use on a shared machine. Is there any way to test whether the administrator has set a limit on the amount of resources that I can use? And if there is one, to make the question more complete, how can I set up such a limit?
For process-related limits, you can have a look in /etc/security/limits.conf (read the comments in the file, use Google, or use man limits.conf for more information). And as jpalecek points out, you may use ulimit -a to see (and possibly modify) all such limits currently in effect.
You can use the command quota to see if a disk quota is in effect.
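For example (the output depends entirely on your site's quota setup):

quota -s    # report any disk quotas for the current user, in human-readable units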
You can try running
ulimit -a
to see what resource limits are in effect. Also, if you are allowed to change such limits, you can change them by the ulimit command, eg.
ulimit -c unlimited
lifts any limit for a size of a core file a process can make.
At the C level, the relevant functions (actually syscalls(2)...) could be setrlimit(2) and setpriority(2) and sched_setattr(2). You probably would want them to be called from your shell.
See also proc(5) -and try cat /proc/self/limits and sched(7).
You may want to use the renice(1) command.
If you run a long-lasting program (for several hours) not requiring user interaction, you could consider using some batch processing. Some Linux systems have a batch or at command.
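As a sketch of that last suggestion, assuming the atd daemon is running and long_job.sh stands in for your actual program:

echo 'nice -n 19 ./long_job.sh' | batch    # queued; runs when system load permits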

Using "top" in Linux as semi-permanent instrumentation

I'm trying to find the best way to use 'top' as semi-permanent instrumentation in the development of a box running embedded Linux. (The instrumentation will be removed from the final-test and production releases.)
My first pass is to simply add this to init.d:
top -b -d 15 >/tmp/toploop.out &
This runs top in "batch" mode every 15 seconds. Let's assume that /tmp has plenty of space…
Questions:
Is 15 seconds a good value to choose for general-purpose monitoring?
Other than disk space, how seriously is this perturbing the state of the system?
What other (perhaps better) tools could be used like this?
Look at collectd. It's a very lightweight system-monitoring framework coded for performance.
We use sysstat to monitor things like this.
You might find that vmstat and iostat with a delay and no repeat counter are a better option.
I suspect 15 seconds would be more than adequate unless you actually want to watch what's happening in real time, but that doesn't appear to be the case here.
As far as load goes, on an idling PIII 900 MHz with 768 MB of RAM running Ubuntu (not sure which version, but not more than a year old), I have top updating every 0.5 seconds and it's about 2% CPU utilization. At 15 s updates, I'm seeing 0.1% CPU utilization.
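For instance, the equivalent of the top -b loop above with those tools (plain shell; the 15-second interval matches the question):

vmstat 15 >> /tmp/vmstat.out &      # one line of CPU/memory stats every 15 s
iostat -x 15 >> /tmp/iostat.out &   # extended per-device I/O stats every 15 s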
Depending upon what exactly you want, you could use the output of uptime, free, and ps to get most, if not all, of top's information.
If you are looking for overall load, uptime is probably sufficient. However, if you want specific information about processes, are adventurous, and have the /proc filesystem enabled, you may want to write your own tools; a sketch of the idea follows below. The primary benefit in this environment is that you can focus on exactly what you want and minimize the load introduced to the system.
The proc file system gives your application read access to the kernel memory that keeps track of many of the interesting variables. Reading from /proc is one of the lightest ways to get this information. Additionally, you may be able to get more information than provided by top. I've done this in the past to get amount of time spent in user and system by this process. Additionally, you can use this to get information about the number of file descriptors open by the process. You might also use this to get detailed information about how the network system is working.
Much of this information is pre-processed by other applications, which you can use if they provide the information you need. However, it is rather straightforward to read the raw information. Do a man proc for more information.
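A minimal sketch of such a tool (the script name, paths, and 15-second interval are made up): it samples one process's resident memory and open-descriptor count straight from /proc, as a lighter alternative to top -b.

#!/bin/sh
# procmon.sh PID -- log RSS and open-fd count of one process every 15 s.
PID="$1"
while sleep 15; do
    rss=$(grep VmRSS "/proc/$PID/status")    # e.g. "VmRSS:  123456 kB"
    fds=$(ls "/proc/$PID/fd" | wc -l)        # number of open file descriptors
    echo "$(date +%s) ${rss} fds=${fds}"
done >> /tmp/procmon.out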
Pity you haven't said what you are monitoring for.
You should decide whether 15 seconds is OK or not. Feel free to drop it way lower if you wish (and have a fast HDD).
No worries, unless you are running a soft real-time system.
Have a look at the tools suggested in other answers. I'll add another suggestion: iotop, for answering the "who is thrashing the HDD" question.
At work, for system monitoring during stress tests, we use a tool called nmon.
What I love about nmon is it has the ability to export to XLS and generate beautiful graphs for you.
It generates statistics for:
Memory Usage
CPU Usage
Network Usage
Disk I/O
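A typical capture invocation looks something like this (the interval and snapshot count are arbitrary choices):

nmon -f -s 15 -c 240    # record to a file: one snapshot every 15 s, 240 snapshots (one hour)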
Good luck :)
