I have a Linux user with the soft virtual memory limit (ulimit -v) set to around 5 GB.
With this in mind, I do the following:
get all of the user's processes with ps -u -o pid --no-heading;
for each PID, open the file /proc/<pid>/status;
read the VmSize parameter and sum it over all PIDs.
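Roughly, this is the sketch I run (a minimal shell sketch; myuser is a placeholder for the actual username):
total=0
for pid in $(ps -u myuser -o pid --no-headers); do
    # VmSize is reported in kB in /proc/<pid>/status
    kb=$(awk '/^VmSize:/ {print $2}' "/proc/$pid/status" 2>/dev/null)
    total=$((total + ${kb:-0}))
done
echo "Total VmSize: $total kB"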
After doing so, my sum of VmSize values is 22 GB, which is not what I expected.
My question is: is my assumption that ulimit -v >= the sum of VmSize values correct? If not, what does the soft limit actually mean? Is it possible for a specific user to go over the soft limit and still be fine?
By the way, ulimit -v -H is set to unlimited, if that makes any difference.
The virtual memory limit is per process, not per user. Each individual process may grow to roughly 5 GB of virtual memory, so with several processes the per-process VmSize values can easily sum to 22 GB without any single process exceeding the limit.
In Ubuntu MATE 16.04.4 LTS, every time I run the command:
$ ulimit -a
I get:
open files (-n) 1024
I tried to increase this limit by adding the following line to /etc/security/limits.conf:
myusername hard nofile 100000
but it doesn't matter: the value 1024 persists when I run ulimit -a. I rebooted the system after the modification, yet the problem persists.
Also, if I run
ulimit -n 100000
I get the response:
ulimit: open files: cannot modify limit: Operation not permitted
and if I run
sudo ulimit -n 100000
I get:
sudo: ulimit: command not found
Any ideas on how to increase that limit?
Thanks.
From man bash under ulimit:
-n The maximum number of open file descriptors (most systems do not allow this value to be set)
Maybe your problem is simply that your system does not support modifying this limit?
I found the solution just after I posted this question, based on:
https://askubuntu.com/questions/162229/how-do-i-increase-the-open-files-limit-for-a-non-root-user
I also edited:
/etc/pam.d/common-session
and added the following line to the end:
session required pam_limits.so
All works now.
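For reference, a minimal sketch of the resulting setup (using the same username as above; you may also want a soft entry, or to run ulimit -n yourself after login, so that ulimit -a shows the new value):
# /etc/security/limits.conf
myusername hard nofile 100000
myusername soft nofile 100000

# /etc/pam.d/common-session
session required pam_limits.so

Then verify after logging out and back in:
$ ulimit -n
100000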
There are three ways to set limits on the number of open files and sockets on Linux:
echo "100000" > /proc/sys/fs/file-max
ulimit -n 100000
sysctl -w fs.file-max=100000
What is the difference?
What is the most correct way to set the open files limit on Linux?
sysctl is an interface for writing to /proc/sys, so it does the same thing as echoing directly into those files. The difference in scope is elsewhere: fs.file-max (set via sysctl or /proc) applies system-wide, whereas ulimit only applies to the shell it is run in and to processes started by that shell.
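A quick way to see the two scopes side by side (a sketch; the actual values will differ on your system):
$ cat /proc/sys/fs/file-max   # system-wide limit, the same value sysctl reads and writes
$ sysctl -n fs.file-max       # the same kernel setting via the sysctl interface
$ ulimit -n                   # per-process soft limit for this shell and its children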
How can I limit the memory allocation of the Gearman job server? It seems that the Gearman job server will not limit its own memory usage. Is there any way to keep the Gearman job server under 1 GB?
You are right, the gearman server does not currently have an option to limit its memory usage.
However, here are two workarounds:
Run gearmand inside a virtual machine such as VirtualBox.
If you are running Linux, try using the ulimit command to limit memory usage.
$ ulimit -Sv 1000000
$ gearmand [the rest of your command line parameters]
where:
-S Specifies the soft limit for the given resource
-v Specifies the virtual memory limit, in kbytes (so 1000000 kbytes is roughly the 1 GB you are after)
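To confirm the limit took effect on the running daemon, you can check its limits file (a sketch, assuming pidof gearmand returns the daemon's PID):
$ grep "Max address space" /proc/$(pidof gearmand)/limits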
I am trying to generate a coredump for a particular PID.
I tried to change the core file size limit using ulimit, but that only changes /proc/self/limits (which applies to the shell).
So how do I edit it for a particular PID?
Basically, I have to change it to "Max core file size=unlimited".
Note:
1) Our Linux version doesn't have prlimit.
2) Even the command below didn't help:
echo -n "Max core file size=unlimited:unlimited" > /proc/1/limits
Thanks,
prlimit
If you want to modify the core file limit, you can type:
prlimit --pid ${pid} --core=soft_limit:hard_limit
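For example, to lift both the soft and hard core file limits for a hypothetical PID 1234 and then display them (run as root if you need to raise the hard limit):
$ prlimit --pid 1234 --core=unlimited:unlimited
$ prlimit --pid 1234 --core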
The help page of prlimit is:
Usage:
prlimit [options] [-p PID]
prlimit [options] COMMAND
General Options:
-p, --pid <pid> process id
-o, --output <list> define which output columns to use
--noheadings don't print headings
--raw use the raw output format
--verbose verbose output
-h, --help display this help and exit
-V, --version output version information and exit
Resources Options:
-c, --core maximum size of core files created
-d, --data maximum size of a process's data segment
-e, --nice maximum nice priority allowed to raise
-f, --fsize maximum size of files written by the process
-i, --sigpending maximum number of pending signals
-l, --memlock maximum size a process may lock into memory
-m, --rss maximum resident set size
-n, --nofile maximum number of open files
-q, --msgqueue maximum bytes in POSIX message queues
-r, --rtprio maximum real-time scheduling priority
-s, --stack maximum stack size
-t, --cpu maximum amount of CPU time in seconds
-u, --nproc maximum number of user processes
-v, --as size of virtual memory
-x, --locks maximum number of file locks
-y, --rttime CPU time in microseconds a process scheduled
under real-time scheduling
Available columns (for --output):
DESCRIPTION resource description
RESOURCE resource name
SOFT soft limit
HARD hard limit (ceiling)
UNITS units
For more details see prlimit(1).
I always do this with the ulimit command:
$ ulimit -c unlimited
On my Linux distro (Ubuntu 16.04), core files are left in this directory:
/var/lib/systemd/coredump/
If your distro is based on systemd, you can set up this directory by modifying the pattern in this file:
$ cat /proc/sys/kernel/core_pattern
Please, read this info:
$ man 5 core
Check the information related to /proc/sys/kernel/core_pattern.
As suggested previously, you can define the directory where all your core files are dumped by modifying the content of this file with the echo command. For example:
$ echo "/var/log/dumps/core.%e.%p" > /proc/sys/kernel/core_pattern
This will dump all cores into /var/log/dumps/core.%e.%p, where %e is the pattern for the executable filename and %p is the pattern for the PID of the dumped process.
Hopefully you can play with this to customize it for your own needs.
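One caveat worth adding (my own note, not part of the answer above): the target directory must already exist and be writable by the dumping process, otherwise no core file appears. A minimal setup sketch:
$ mkdir -p /var/log/dumps
$ chmod 1777 /var/log/dumps   # or narrower permissions, as appropriate
$ echo "/var/log/dumps/core.%e.%p" > /proc/sys/kernel/core_pattern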
When running my application, I sometimes get an error about too many open files.
Running ulimit -a reports that the limit is 1024. How do I increase the limit above 1024?
Edit
ulimit -n 2048 results in a permission error.
You could always try doing ulimit -n 2048. This will only reset the limit for your current shell, and the number you specify must not exceed the hard limit.
Each operating system has a different hard limit set up in a configuration file. For instance, the hard open file limit on Solaris can be set at boot from /etc/system.
set rlim_fd_max = 166384
set rlim_fd_cur = 8192
On OS X, this same data must be set in /etc/sysctl.conf.
kern.maxfilesperproc=166384
kern.maxfiles=8192
Under Linux, these settings are often in /etc/security/limits.conf.
There are two kinds of limits:
soft limits are simply the currently enforced limits
hard limits mark the maximum value which cannot be exceeded by setting a soft limit
Soft limits can be set by any user (up to the hard limit), while hard limits can only be raised by root.
Limits are a property of a process. They are inherited when a child process is created, so system-wide limits should be set during system initialization in init scripts, and user limits should be set during user login, for example by using pam_limits.
There are often defaults set when the machine boots. So, even though you may reset your ulimit in an individual shell, you may find that it resets back to the previous value on reboot. You may want to grep your boot scripts for the existence of ulimit commands if you want to change the default.
If you are using Linux and you got the permission error, you will need to raise the allowed limit in the /etc/limits.conf or /etc/security/limits.conf file (where the file is located depends on your specific Linux distribution).
For example, to allow anyone on the machine to raise their number of open files up to 10000, add the following line to the limits.conf file:
* hard nofile 10000
Then logout and relogin to your system and you should be able to do:
ulimit -n 10000
without a permission error.
1) Add the following line to /etc/security/limits.conf
webuser hard nofile 64000
then log in as webuser:
su - webuser
2) Edit the following two files for webuser:
append to the .bashrc and .bash_profile files by running
echo "ulimit -n 64000" >> .bashrc ; echo "ulimit -n 64000" >> .bash_profile
3) Log out, then log back in and verify that the changes have been made correctly:
$ ulimit -a | grep open
open files (-n) 64000
That's it.
If some of your services are running into ulimits, it's sometimes easier to put the appropriate commands into the service's init script. For example, when Apache is reporting:
[alert] (11)Resource temporarily unavailable: apr_thread_create: unable to create worker thread
Try putting ulimit -s unlimited into /etc/init.d/httpd. This does not require a server reboot.
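A sketch of where that line might go (the exact layout of the init script varies by distribution):
# /etc/init.d/httpd
# ... existing header and variable setup ...
ulimit -s unlimited   # raise the stack size limit before the daemon is started
# ... rest of the original start logic ...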