Even though I have applied the following settings, and even restarted the server:
# head /etc/security/limits.conf -n2
www-data soft nofile -1
www-data hard nofile -1
# /sbin/sysctl fs.file-max
fs.file-max = 201558
The open files limit of the nginx processes is still 1024/4096:
# ps aux | grep nginx
root 983 0.0 0.0 85872 1348 ? Ss 15:42 0:00 nginx: master process /usr/sbin/nginx
www-data 984 0.0 0.2 89780 6000 ? S 15:42 0:00 nginx: worker process
www-data 985 0.0 0.2 89780 5472 ? S 15:42 0:00 nginx: worker process
root 1247 0.0 0.0 11744 916 pts/0 S+ 15:47 0:00 grep --color=auto nginx
# cat /proc/984/limits
Limit Soft Limit Hard Limit Units
Max cpu time unlimited unlimited seconds
Max file size unlimited unlimited bytes
Max data size unlimited unlimited bytes
Max stack size 8388608 unlimited bytes
Max core file size 0 unlimited bytes
Max resident set unlimited unlimited bytes
Max processes 15845 15845 processes
Max open files 1024 4096 files
Max locked memory 65536 65536 bytes
Max address space unlimited unlimited bytes
Max file locks unlimited unlimited locks
Max pending signals 15845 15845 signals
Max msgqueue size 819200 819200 bytes
Max nice priority 0 0
Max realtime priority 0 0
Max realtime timeout unlimited unlimited us
I've tried every solution I could find by searching, but in vain. What setting did I miss?
On CentOS (tested on 7.x):
Create file /etc/systemd/system/nginx.service.d/override.conf with the following contents:
[Service]
LimitNOFILE=65536
Reload systemd daemon with:
systemctl daemon-reload
Add this to the nginx config file (the value has to be smaller than or equal to the LimitNOFILE set above):
worker_rlimit_nofile 16384;
And finally restart Nginx:
systemctl restart nginx
You can verify that it works with cat /proc/<nginx-pid>/limits.
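For example, a one-liner check against a worker process (pgrep -f "nginx: worker" is just one convenient way to pick a worker PID; the reported values should match your worker_rlimit_nofile):
# grep "Max open files" /proc/$(pgrep -f "nginx: worker" | head -n 1)/limits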
I found the answer a few minutes after posting this question...
# cat /etc/default/nginx
# Note: You may want to look at the following page before setting the ULIMIT.
# http://wiki.nginx.org/CoreModule#worker_rlimit_nofile
# Set the ulimit variable if you need defaults to change.
# Example: ULIMIT="-n 4096"
ULIMIT="-n 15000"
/etc/security/limits.conf is applied by PAM, so it has nothing to do with www-data (which is a nologin user).
For nginx, simply editing nginx.conf and setting worker_rlimit_nofile should change the limit.
I initially thought it was a self-imposed limit within nginx, but it actually raises the per-process limit:
worker_rlimit_nofile 4096;
You can test it by getting an nginx process ID (from top -u nginx) and then running:
cat /proc/{PID}/limits
to see the current limits.
Another way on CentOS 7 is to run systemctl edit SERVICE_NAME and add the variables there:
[Service]
LimitNOFILE=65536
Save that file and restart the service.
For those looking for an answer on pre-systemd Debian machines, the nginx init script sources /etc/default/nginx. So, adding the line
ulimit -n 9999
will change the limit for the nginx daemon without messing around with the init script.
Adding ULIMIT="-n 15000" as in a previous answer didn't work with my nginx version.
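For reference, a minimal sketch of the /etc/default/nginx approach described above (9999 is just the example value from that answer):
# /etc/default/nginx (sourced by the init script before nginx is started)
ulimit -n 9999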
On Linux, I am testing ulimit with WebLogic. I set the soft limit and the hard limit to different values, but the process's soft limit and hard limit are the same. Why do they have the same value?
# Set Soft limit
[was@was10 bin]$ ulimit -S -n 2048
# Check Soft limit
[was@was10 bin]$ ulimit -S -a
……
open files (-n) 2048
……
# Check Hard limit
[was@was10 bin]$ ulimit -H -a
……
open files (-n) 4096
……
# restart Weblogic and check limits
[was@was10 bin]$ cat /proc/$PID/limits
Limit Soft Limit Hard Limit Units
……
Max open files 4096 4096 files
……
# They have same value 4096
The OS is CentOS 7; /etc/security/limits.conf is at its defaults.
cat /etc/security/limits.d/*.conf
* soft nproc 4096
root soft nproc unlimited
I found the cause: it is WebLogic's MaxFDLimit setting.
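For context, this is roughly what the WebLogic start scripts do when MaxFDLimit is enabled (a simplified sketch, not the actual script): they raise the soft open-files limit up to the hard limit before starting the server, which is why the running process reports 4096 for both values.
# simplified idea of the MaxFDLimit behaviour in the start scripts
hard=$(ulimit -H -n)   # hard limit, 4096 in this case
ulimit -S -n "$hard"   # soft limit is raised to match the hard limit
Setting MaxFDLimit=false should skip this step and keep the soft limit you set yourself.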
Thanks
I have a machine with a minimal single-user OS based on 64-bit Fedora 24:
Vendor: Acer Veriton VN4640G
CPU: Intel(R) Core(TM) i5-6400T CPU @ 2.20GHz
RAM: 4GB DDR4 2133 MHz
Storage: 32GB 2.5" ADATA SP600
I wrote a simple script, /root/test.sh, which runs 10000 processes in the background:
ulimit -a > /tmp/ulimit
i=1
while [ $i -le 10000 ]; do
echo $i
sleep 60 & disown
i=$(( $i + 1 ))
done
When I run this script directly from the console, it starts 10000 sleep processes and prints the numbers as expected.
# bash test.sh
1
2
...
9999
10000
# ps ax | grep -c [s]leep
10000
The ulimit output looks fine:
# cat /tmp/ulimit
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 15339
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 15339
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
BUT
if I run this script via cron (/etc/cron.d/custom), e.g.
0 8 * * * root bash /root/test.sh
I see messages like these in journalctl -e -o cat:
(root) CMDOUT (494)
(root) CMDOUT (495)
(root) CMDOUT (496)
(root) CMDOUT (/root/test.sh: fork: retry: Resource temporarily unavailable)
(root) CMDOUT (/root/test.sh: fork: retry: Resource temporarily unavailable)
(root) CMDOUT (/root/test.sh: fork: retry: Resource temporarily unavailable)
(root) CMDOUT (/root/test.sh: fork: retry: Resource temporarily unavailable)
(root) CMDOUT (/root/proc.sh: fork: Resource temporarily unavailable)
So it runs only about 500 processes and then cannot fork any more, even though there are still enough resources and the user limits are the same as in the console case.
# free -h
total used free shared buff/cache available
Mem: 3,8G 472M 2,8G 62M 498M 3,0G
Swap: 0B 0B 0B
The count of running sleeps is always the same. Is there any resource limit for tasks run from cron?
P.S.: I ran the test on a full Fedora 24 installation as well, and the result is the same...
Well, I found a solution while writing this question.
The main pointer to the problem was a message I had once seen in journalctl:
kernel: cgroup: fork rejected by pids controller in /system.slice/crond.service
So I checked crond.service and found the parameter TasksMax:
# systemctl show crond.service
Type=simple
Restart=no
...
TasksMax=512
EnvironmentFile=/etc/sysconfig/crond (ignore_errors=no)
UMask=0022
LimitCPU=18446744073709551615
LimitCPUSoft=18446744073709551615
Solution
Add the parameter TasksMax to the service configuration in /usr/lib/systemd/system/crond.service, e.g.:
Note: As Mark Plotnick wrote, a better way is to copy this service file into /etc/systemd/system/ and modify it there, so the service file under /usr/ is not overwritten during an upgrade.
# cat /usr/lib/systemd/system/crond.service
[Unit]
Description=Command Scheduler
After=auditd.service nss-user-lookup.target systemd-user-sessions.service time-sync.target ypbind.service
[Service]
EnvironmentFile=/etc/sysconfig/crond
ExecStart=/usr/sbin/crond -n $CRONDARGS
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
TasksMax=100000
[Install]
WantedBy=multi-user.target
Then reload the systemd daemon:
# systemctl daemon-reload
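You can confirm that the new value took effect with the same command as before:
# systemctl show crond.service | grep TasksMax
TasksMax=100000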
General solution
If you want to avoid this problem with any systemd service, you can change the default value in /etc/systemd/system.conf, e.g.:
sed -i 's/#DefaultTasksMax=512/DefaultTasksMax=10000/' /etc/systemd/system.conf
And reload the systemd daemon to apply the change:
# systemctl daemon-reload
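If your systemd version exposes it as a manager property, you can also check the new default that units will inherit:
# systemctl show | grep DefaultTasksMax
DefaultTasksMax=10000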
But I don't know the exact consequences of this solution, so I cannot recommend it.
I have set bootstrap.memory_lock: true
and updated /etc/security/limits.conf, adding memlock unlimited for the elasticsearch user.
My Elasticsearch was running fine for many months. Suddenly it failed a day ago. In the logs I can see the error below, and the process never starts:
ERROR: bootstrap checks failed
memory locking requested for elasticsearch process but memory is not locked
I ran ulimit -as and I can see max locked memory set to unlimited. What is going wrong here? I have been trying for hours, but all in vain. Please help.
OS is RHEL 7.2
Elasticsearch 5.1.2
ulimit -as output
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 83552
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 65536
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 4096
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Here is what I have done to lock the memory on my ES nodes on RedHat/CentOS 7 (it will work on other distributions if they use systemd).
You must make the change in 4 different places:
1) /etc/sysconfig/elasticsearch
On sysconfig: /etc/sysconfig/elasticsearch you should have:
ES_JAVA_OPTS="-Xms4g -Xmx4g"
MAX_LOCKED_MEMORY=unlimited
(replace 4g with HALF your available RAM as recommended here)
2) /etc/security/limits.conf
On security limits config: /etc/security/limits.conf you should have
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
3) /usr/lib/systemd/system/elasticsearch.service
On the service script: /usr/lib/systemd/system/elasticsearch.service you should uncomment:
LimitMEMLOCK=infinity
You should run systemctl daemon-reload after changing the service script.
4) /etc/elasticsearch/elasticsearch.yml
On elasticsearch config finally: /etc/elasticsearch/elasticsearch.yml you should add:
bootstrap.memory_lock: true
That's it. Restart your node and the RAM will be locked; you should notice a major performance improvement.
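To double-check from the Elasticsearch side, you can ask the node whether mlockall is active (assuming it listens on localhost:9200):
curl -s 'http://localhost:9200/_nodes?filter_path=**.mlockall&pretty'
It should report "mlockall" : true for the node.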
OS = Ubuntu 16
ElasticSearch = 5.6.3
I also used to have the same problem.
I set in elasticsearch.yml
bootstrap.memory_lock: true
and I got this in my logs:
memory locking requested for elasticsearch process but memory is not locked
I tried several things, but actually you need to do only one thing (according to https://www.elastic.co/guide/en/elasticsearch/reference/master/setting-system-settings.html):
file:
/etc/systemd/system/elasticsearch.service.d/override.conf
add
[Service]
LimitMEMLOCK=infinity
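If you create that drop-in file by hand rather than via systemctl edit, remember to reload systemd and restart the service afterwards:
systemctl daemon-reload
systemctl restart elasticsearch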
A little bit of explanation.
The really funny thing is that systemd does not really care about ulimit settings at all (https://fredrikaverpil.github.io/2016/04/27/systemd-and-resource-limits/). You can easily check this fact.
Set in /etc/security/limits.conf
elasticsearch - memlock unlimited
Check that the max locked memory for the elasticsearch user is unlimited:
$ sudo su elasticsearch -s /bin/bash
$ ulimit -l
Comment out bootstrap.memory_lock: true in /etc/elasticsearch/elasticsearch.yml:
# bootstrap.memory_lock: true
Start the elasticsearch service via systemd:
# service elasticsearch start
Check what max memory lock setting the elasticsearch service has after it has started:
# systemctl show elasticsearch | grep -i limitmemlock
LimitMEMLOCK=65536
OMG! Even though we have set an unlimited max memlock size via ulimit, systemd completely ignores it.
So, we come to the conclusion: to start elasticsearch via systemd with
bootstrap.memory_lock: true
enabled, we don't need to care about the ulimit settings; we need to set the limit explicitly in the systemd config file.
End of story.
Try setting the following:
In the /etc/sysconfig/elasticsearch file, set MAX_LOCKED_MEMORY=unlimited.
In /usr/lib/systemd/system/elasticsearch.service, set LimitMEMLOCK=infinity.
Make sure that the process that starts elasticsearch is configured with an unlimited limit. If, for example, you start elasticsearch as a different user than the one configured in /etc/security/limits.conf, or as root while only defining a wildcard entry in limits.conf (which does not apply to root), it won't work.
Test it to be sure:
You could, e.g., put ulimit -a ; exit just after the "#Start Daemon" line in /etc/init.d/elasticsearch and start it with bash /etc/init.d/elasticsearch start (adapt this to your start mechanism).
Check the actual limit while the process is running (albeit briefly) with:
cat /proc/<pid>/limits
You will find lines similar to this:
Limit Soft Limit Hard Limit Units
Max cpu time unlimited unlimited seconds
Max file size unlimited unlimited bytes
Max data size unlimited unlimited bytes
Max stack size 8388608 unlimited bytes
Max core file size 0 unlimited bytes
<truncated>
Then, depending on the runner or container (in my case it was supervisord's minfds value), you can lift the actual limit in its configuration; see the sketch below.
I hope this gives a little hint for more general cases.
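For the supervisord case mentioned above, this is roughly what the setting looks like (minfds belongs in the [supervisord] section; 65535 is just an example value):
; /etc/supervisord.conf (excerpt)
[supervisord]
minfds = 65535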
I followed this post.
On Ubuntu 18.04 with Elasticsearch 6.x, there was no LimitMEMLOCK=infinity entry in the file /usr/lib/systemd/system/elasticsearch.service.
So adding that to the file and setting MAX_LOCKED_MEMORY=unlimited in /etc/default/elasticsearch did the trick.
The JVM options can be added in the /etc/elasticsearch/jvm.options file.
If you use the tar distribution and want to monitor it with monit, you have to tell monit to use an unlimited limit; all other places for this configuration are ignored.
Add ulimit -s unlimited at the beginning of /etc/init.d/monit, then run systemctl daemon-reload, then service monit restart and monit start $yourMonitLabel.
One thing it "can" be is that your /tmp is mounted with noexec https://discuss.elastic.co/t/not-able-to-start-elasticsearch-due-to-failed-memory-lock/158009/6 check your logs and see if it complains about .UnsatisfiedLinkError: Native library
especially CentOS/RedHat but maybe others? Might be fixed in ES 7?
I have changed /etc/security/limits.conf and rebooted the machine remotely. However, after the boot, the nproc parameter still has the old value.
[ost@compute-0-1 ~]$ cat /etc/security/limits.conf
* - memlock -1
* - stack -1
* - nofile 4096
* - nproc 4096 <=====================================
[ost@compute-0-1 ~]$
Broadcast message from root@compute-0-1.local
(/dev/pts/0) at 19:27 ...
The system is going down for reboot NOW!
Connection to compute-0-1 closed by remote host.
Connection to compute-0-1 closed.
ost@cluster:~$ ssh compute-0-1
Warning: untrusted X11 forwarding setup failed: xauth key data not generated
Last login: Tue Sep 27 19:25:25 2016 from cluster.local
Rocks Compute Node
Rocks 6.1 (Emerald Boa)
Profile built 19:00 23-Aug-2016
Kickstarted 19:08 23-Aug-2016
[ost@compute-0-1 ~]$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 516294
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 4096
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) unlimited
cpu time (seconds, -t) unlimited
max user processes (-u) 1024 <=========================
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Note that I set max user processes to 4096, but after the reboot the value is still 1024.
Please take a look at the file named /etc/pam.d/sshd.
If you can find it, open the file and insert the following line:
session required pam_limits.so
Then the new value will be effective even after rebooting.
PAM handles authentication and session setup, so you need to enable the pam_limits module for ssh logins.
More details are in man pam_limits.
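As a quick check after making that change, log in over ssh again and look at the limit; with the limits.conf entries from the question it should now show 4096:
$ ulimit -u
4096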
Thanks!
mysqldump: Couldn't execute 'show fields from `tablename`': Out of resources when opening file './databasename/tablename#P#p125.MYD' (Errcode: 24) (23)
On checking error 24 in the shell, it says:
>>perror 24
OS error code 24: Too many open files
How do I solve this?
First, to identify the limits for the relevant user or group, do the following:
root@ubuntu:~# sudo -u mysql bash
mysql@ubuntu:~$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 71680
max locked memory (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 71680
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
mysql@ubuntu:~$
The important line is:
open files (-n) 1024
As you can see, your operating system vendor ships this version with the basic Linux default of 1024 open files per process.
This is obviously not enough for a busy MySQL installation.
Now, to fix this, you have to add the following lines to /etc/security/limits.conf:
mysql soft nofile 24000
mysql hard nofile 32000
Some flavors of Linux also require additional configuration to get this to stick to daemon processes versus login sessions. In Ubuntu 10.04, for example, you need to also set the pam session limits by adding the following line to /etc/pam.d/common-session:
session required pam_limits.so
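After restarting MySQL, you can verify that the running daemon actually picked up the new limit (assuming the process is named mysqld):
cat /proc/$(pgrep -x mysqld | head -n 1)/limits | grep "open files"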
Quite an old question but here are my two cents.
What you could be experiencing is that the MySQL server did not set its open_files_limit variable correctly.
You can see how many files you are allowing MySQL to open with:
mysql> SHOW VARIABLES LIKE 'open_files_limit';
It is probably set to 1024, even if you have already set the OS limits to higher values.
You can use the option --open-files-limit=XXXXX in the command line for mysqld.
Cheers
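If you prefer to make it permanent instead of passing it on the command line, the same setting can go into my.cnf (a small sketch; the value is only an example):
[mysqld]
open_files_limit = 8192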
Add --single-transaction to your mysqldump command.
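For example (a typical invocation; substitute your own credentials and database name):
mysqldump --single-transaction -u root -p databasename > databasename.sql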
It could also be that some code that accesses the tables didn't close them properly, and over time the limit on open files was reached.
Please refer to http://dev.mysql.com/doc/refman/5.0/en/table-cache.html for a possible reason as well.
Restarting mysql should cause this problem to go away (although it might happen again unless the underlying problem is fixed).
You can increase your OS limits by editing /etc/security/limits.conf.
You can also install the "lsof" (LiSt Open Files) command to see the relation between files and processes.
There is no need to configure PAM, I think. On my system (Debian 7.2 with Percona 5.5.31-rel30.3-520.squeeze) I have:
Before my.cnf changes:
# cat /proc/12345/limits | grep "open files"
Max open files 1186 1186 files
After adding "open_files_limit = 4096" into my.cnf and mysqld restart, I got:
# cat /proc/23456/limits | grep "open files"
Max open files 4096 4096 files
12345 and 23456 are the mysqld process PIDs, of course.
SHOW VARIABLES LIKE 'open_files_limit' shows 4096 now.
Everything looks OK, while "ulimit" shows no changes:
# su - mysql -c bash
# ulimit -n
1024
There is no guarantee that "24" is an OS-level error number, so don't assume that this means that too many file handles are open. It could be some type of internal error code used within mysql itself. I'd suggest asking on the mysql mailing lists about this.