Setting system memory for a process in Linux

I need to limit the memory available to a process to 8 GB. So, do I need to set
ulimit -S -m 8388608
or do I need to set:
ulimit -S -v 8388608
I am a non-root user, so I can only change the soft limit. How can I raise the memory limit back to unlimited? I tried
ulimit -S -v ulimited
but it gives me:
bash: ulimit: ulimited: invalid number

'invalid number' happens if there is a wrong line ending, e.g. CR LF instead of Unix LF. Here, though, the keyword is also misspelled: it is unlimited, not ulimited.
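A minimal check with the corrected spelling, assuming bash's ulimit builtin and a hard limit that is still unlimited (a non-root user can only raise the soft limit up to the hard limit). Note also that on modern Linux kernels the -m (RSS) limit is generally not enforced, so -v is the one that actually takes effect:
$ ulimit -S -v unlimited
$ ulimit -S -v
unlimited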

Related

Message: file size limit exceeded when running an scp command on macOS Big Sur Version 11.6

I am trying to fetch a dump file from one of my Ubuntu servers. The dump file is stored in gzip format and its size is about 3 GB. When I execute an scp command on macOS Big Sur Version 11.6, the download begins normally, but after about 95 MB has been downloaded the command stops with this message:
sh: file size limit exceeded scp -P1021 /Users/andrej/Desktop
even though I have enough space on my machine
The file size limit is also set to unlimited on my laptop. Here is the output of the launchctl limit command from my terminal, and of ulimit -a.
% launchctl limit
cpu unlimited unlimited
filesize unlimited unlimited
data unlimited unlimited
stack 8388608 67104768
core 0 unlimited
rss unlimited unlimited
memlock unlimited unlimited
maxproc 2784 4176
maxfiles 64000 524288
The output of ulimit -a
% ulimit -a
-t: cpu time (seconds) unlimited
-f: file size (blocks) 200000
-d: data seg size (kbytes) unlimited
-s: stack size (kbytes) 8192
-c: core file size (blocks) 0
-v: address space (kbytes) unlimited
-l: locked-in-memory size (kbytes) unlimited
-u: processes 2042
-n: file descriptors 65536
Maybe someone has encountered a similar problem? Any help would be appreciated.
I had not noticed that the file size was configured to 200000 when I ran the ulimit -a command. The issue was resolved after setting this value to unlimited.
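For a one-off fix, the soft limit can also be raised in the current shell before running scp (a sketch; this affects only that shell session):
$ ulimit -f unlimited
$ ulimit -f
unlimited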
Try using the rsync utility; it is well suited to large files.
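A sketch of such an rsync call, reusing the port from the scp command in the question (the server name and remote path are placeholders); -P keeps partial files and shows progress, so an interrupted transfer can be resumed:
$ rsync -avP -e 'ssh -p 1021' user@server:/path/to/dump.gz /Users/andrej/Desktop/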

Error: EMFILE: too many open files, watch, unless I use sudo

Description
Recently I've run into a problem: I am not able to run yarn start in the element-web directory; I get these errors. Originally I thought it had something to do with element-web itself, so I created an issue. Some time after that I tried to run wintersmith preview in the bibviz directory and got the same errors. This was weird, so I tried to create an Angular project and run ng serve: errors again. I headed to the issue to close it, as it wasn't an element-web issue, and found that another issue had been created with the same problem. It had already been closed by turt2live, saying it looks like you've run out of memory on your system. Based on this I tried turning off most programs running in the background, and then all the commands worked.
I am sure that ng serve used to work in the past.
My PC has 16 GB of RAM and the commands already fail when I am on 7/16 GB. I can't see any memory spikes when running the commands. Running the commands with sudo also completely eliminates the problem. This doesn't make any sense to me.
Research led me to ulimits, but they seem to have no effect. I have also installed watchman, with no effect.
Can someone tell me what I am missing?
Thank you in advance!
Info
I am on Debian 11 Bullseye. This is the output of a few commands that could be useful.
As a regular user:
> uname -a
Linux Simon-s-PC 5.8.0-3-amd64 #1 SMP Debian 5.8.14-1 (2020-10-10) x86_64 GNU/Linux
> sudo sysctl fs.inotify.max_user_watches
fs.inotify.max_user_watches = 524288
> ulimit -a
-t: cpu time (seconds) unlimited
-f: file size (blocks) unlimited
-d: data seg size (kbytes) unlimited
-s: stack size (kbytes) 8192
-c: core file size (blocks) 0
-m: resident set size (kbytes) unlimited
-u: processes 46482
-n: file descriptors 8192
-l: locked-in-memory size (kbytes) unlimited
-v: address space (kbytes) unlimited
-x: file locks unlimited
-i: pending signals 63664
-q: bytes in POSIX msg queues 819200
-e: max nice 0
-r: max rt priority 95
-N 15: unlimited
> yarn --version
1.22.5
With sudo su:
> sysctl fs.inotify.max_user_watches
fs.inotify.max_user_watches = 524288
> ulimit -a
-t: cpu time (seconds) unlimited
-f: file size (blocks) unlimited
-d: data seg size (kbytes) unlimited
-s: stack size (kbytes) 8192
-c: core file size (blocks) 0
-m: resident set size (kbytes) unlimited
-u: processes 63664
-n: file descriptors 1024
-l: locked-in-memory size (kbytes) 2043392
-v: address space (kbytes) unlimited
-x: file locks unlimited
-i: pending signals 63664
-q: bytes in POSIX msg queues 819200
-e: max nice 0
-r: max rt priority 0
-N 15: unlimited
I think I've found a solution:
Set limits in /etc/sysctl.conf by adding:
fs.inotify.max_user_watches=524288
fs.inotify.max_user_instances=512
Open a new terminal or reload sysctl.conf variables with
sudo sysctl --system
Run yarn start
Everything should work fine now, hopefully. If it doesn't, try setting the limits higher.
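To confirm the new values took effect (a quick check, not part of the original answer; the expected output assumes the values above):
$ sysctl fs.inotify.max_user_watches fs.inotify.max_user_instances
fs.inotify.max_user_watches = 524288
fs.inotify.max_user_instances = 512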

How do I update the open files ulimit for a non-root user?

The root user runs the following
ulimit -a | grep open
and gets a result of
open files (-n) 65536
A user runs the same command and gets a result of
open files (-n) 65000
How can you set the ulimit of the user to 8192?
You can just run ulimit -n to reduce the limit:
$ ulimit -n 8192
$ ulimit -n
8192
However, this only applies to the current shell session, and a new session gets the default again. If you want to establish a persistent hard limit for this user, you need to edit the file /etc/security/limits.conf.
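A sketch of the corresponding limits.conf entries, using a hypothetical username jdoe (the hard line caps how far jdoe can raise the soft limit):
# in /etc/security/limits.conf
jdoe soft nofile 8192
jdoe hard nofile 8192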

How to increase Neo4j's maximum file open limit (ulimit) in Ubuntu?

Currently ulimit -n shows 10000. I want to increase it to 40000. I've edited /etc/sysctl.conf and put fs.file-max=40000. I've also edited /etc/security/limits.conf and updated the hard and soft values. But ulimit still shows 10000. After making all these changes I rebooted my laptop. I have access to the root password.
usr_name#usr_name-lap:/etc$ /sbin/sysctl fs.file-max
fs.file-max = 500000
Added the following lines in /etc/security/limits.conf:
* soft nofile 40000
* hard nofile 40000
I also added the following line in /etc/pam.d/su:
session    required   pam_limits.so
I've tried every approach given on other forums, but I can only reach a maximum limit of 10000, not beyond. What could be the issue?
I'm making this change because neo4j throws maximum open file limits reached error.
What you are doing will not work for the root user. Maybe you are running your services as root, and hence you don't get to see the change.
To increase the ulimit for the root user you should replace the * with root; * does not apply to the root user. The rest is the same as what you did. I will re-quote it here.
Add the following lines to the file: /etc/security/limits.conf
root soft nofile 40000
root hard nofile 40000
And then add following line in the file: /etc/pam.d/common-session
session required pam_limits.so
This will update the ulimit for the root user. As mentioned in the comments, you may not even have to reboot to see the change.
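A quick way to verify, assuming the values from the lines above (use a fresh login session such as su - so pam_limits re-applies the limits):
$ su - root
# ulimit -Hn
40000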
1) Check sysctl file-max limit:
$ cat /proc/sys/fs/file-max
If the limit is lower than your desired value, open /etc/sysctl.conf and add this line at the end of the file:
fs.file-max = 65536
Finally, apply sysctl limits:
$ sysctl -p
2) Edit /etc/security/limits.conf and add the lines below:
* soft nproc 65535
* hard nproc 65535
* soft nofile 65535
* hard nofile 65535
These limits won't apply to the root user; if you want to change root's limits you have to do so explicitly:
root soft nofile 65535
root hard nofile 65535
...
3) Reboot the system, or add the following line to the end of /etc/pam.d/common-session:
session required pam_limits.so
Log out and log in again.
4) Check soft limits:
$ ulimit -a
and hard limits:
$ ulimit -Ha
....
open files (-n) 65535
Reference: http://ithubinfo.blogspot.in/2013/07/how-to-increase-ulimit-open-file-and.html
I am using Debian but this solution should work fine with Ubuntu.
You have to add a line to the neo4j-service script. Here is what I did:
nano /etc/init.d/neo4j-service
Add ulimit -n 40000 just before the start-stop-daemon line in the do_start section (see the sketch below).
Note that I am using version 2.0 Enterprise edition.
Hope this will help you.
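Schematically, the edited section would look like this (a sketch; the actual start-stop-daemon invocation varies by package version and is left as a placeholder):
do_start()
{
    ulimit -n 40000
    start-stop-daemon ... # original line, unchanged
}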
I was having the same issue, and got it to work by adding entries to /etc/security/limits.d/90-somefile.conf. Note that in order to see the limits working, I had to log out completely from the ssh session, and then log back in.
I wanted to set the limit for a specific user that runs a service, but it seems that I was getting the limit set for the user I logged in as. Here's an example showing how the ulimit is set based on the authenticated user, not the effective user:
$ sudo cat /etc/security/limits.d/90-nofiles.conf
loginuser soft nofile 10240
loginuser hard nofile 10240
root soft nofile 10241
root hard nofile 10241
serviceuser soft nofile 10242
serviceuser hard nofile 10242
$ whoami
loginuser
$ ulimit -n
10240
$ sudo -i
# ulimit -n
10240 # loginuser's limit
# su - serviceuser
$ ulimit -n
10240 # still loginuser's limit.
You can use an * to specify an increase for all users. If I restart the service as the user I logged in as, and add ulimit -n to the init script, I see that the initial login user's limits are in place. I have not had a chance to verify which user's limits are used during a system boot, or to determine the actual nofile limit of the service I am running (which is started with start-stop-daemon).
There are two approaches that work for now:
add a ulimit adjustment to the init script, just before start-stop-daemon;
use wildcard or more extensive ulimit settings in the security file.
You could alter the init script for neo4j to do a ulimit -n 40000 before running neo4j.
However, I can't help but feel you are barking up the wrong tree. Does neo4j legitimately need more than 10,000 open file descriptors? This sounds very much like a bug in neo4j or the way you are using it. I would try to address that.
I had lots of trouble getting this to work.
Using the following allows you to update it regardless of your user permissions.
sudo sysctl -w fs.inotify.max_user_watches=100000
Edit
I just saw this from another user on another Stack Exchange site (both work, but this version updates the system setting permanently rather than temporarily):
echo fs.inotify.max_user_watches=100000 | sudo tee -a /etc/sysctl.conf;
sudo sysctl -p
Try running this command; it will create a *_limits.conf file under /etc/security/limits.d:
echo "* soft nofile 102400" > /etc/security/limits.d/*_limits.conf && echo "* hard nofile 102400" >> /etc/security/limits.d/*_limits.conf
Then exit the terminal, log in again, and verify with ulimit -n; it will be set for all (*) users.
tl;dr set both the soft and hard limits
I'm sure it's working as intended but I'll add it here just in case.
For completeness the limit is set here (see below for syntax):
/etc/security/limits.conf
some_user soft nofile 60000
some_user hard nofile 60000
and activated with the following in /etc/pam.d/common-session:
session required pam_limits.so
If you set only the hard limit, ulimit -a will show the default (1024).
If you set only the soft limit, ulimit -a will show 4096.
If you set them both, ulimit -a will show the soft limit (up to the hard limit, of course).
I did it like this
echo "NEO4J_ULIMIT_NOFILE=50000" >> neo4j
mv neo4j /etc/default/
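A quick sanity check that the file landed where the init script looks for it (the variable name comes from the lines above):
$ cat /etc/default/neo4j
NEO4J_ULIMIT_NOFILE=50000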
ULIMIT configuration:
Log in as root
vi /etc/security/limits.conf
Make the entry below.
ulimit configuration for the website user:
website soft nofile 8192
website hard nofile 8192
website soft nproc 4096
website hard nproc 8192
website soft core unlimited
website hard core unlimited
Make the entry below for all users.
ulimit configuration for every user:
* soft nofile 8192
* hard nofile 8192
* soft nproc 4096
* hard nproc 8192
* soft core unlimited
* hard core unlimited
After modifying the file, the user needs to log off and log in again to see the new values.

Too many open files error on Ubuntu 8.04

mysqldump: Couldn't execute 'show fields from `tablename`': Out of resources when opening file './databasename/tablename#P#p125.MYD' (Errcode: 24) (23)
Checking error 24 in the shell says:
>>perror 24
OS error code 24: Too many open files
How do I solve this?
First, to identify the limits for the user or group in question, do the following:
root#ubuntu:~# sudo -u mysql bash
mysql#ubuntu:~$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 71680
max locked memory (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 71680
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
mysql#ubuntu:~$
The important line is:
open files (-n) 1024
As you can see, your operating system vendor ships this version with the basic Linux configuration - 1024 files per process.
This is obviously not enough for a busy MySQL installation.
Now, to fix this you have to modify the following file:
/etc/security/limits.conf
mysql soft nofile 24000
mysql hard nofile 32000
Some flavors of Linux also require additional configuration to get this to stick to daemon processes versus login sessions. In Ubuntu 10.04, for example, you need to also set the pam session limits by adding the following line to /etc/pam.d/common-session:
session required pam_limits.so
Quite an old question, but here are my two cents.
What you could be experiencing is that the MySQL engine didn't have its open_files_limit variable set right.
You can see how many files you are allowing MySQL to open with:
mysql> SHOW VARIABLES LIKE 'open_files_limit';
It is probably set to 1024 even if you have already set the limits to higher values.
You can use the option --open-files-limit=XXXXX on the command line for mysqld.
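Equivalently, the variable can be set persistently in the [mysqld] section of my.cnf (a sketch; the value is illustrative, and the answer further below uses the same setting):
[mysqld]
open_files_limit = 4096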
Cheers
Add --single-transaction to your mysqldump command.
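For example (a sketch; databasename is taken from the error message in the question, and connection options are omitted):
$ mysqldump --single-transaction databasename > databasename.sql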
It is also possible that some code accessing the tables didn't close them properly, and over time the limit on open files was reached.
Please refer to http://dev.mysql.com/doc/refman/5.0/en/table-cache.html for a possible reason as well.
Restarting mysql should cause this problem to go away (although it might happen again unless the underlying problem is fixed).
You can increase your OS limits by editing /etc/security/limits.conf.
You can also install the lsof (LiSt Open Files) command to see the relation between files and processes.
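For example, to count how many files the mysqld process currently has open (a sketch; replace the PID placeholder):
$ lsof -p <mysqld-pid> | wc -l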
There is no need to configure PAM, I think. On my system (Debian 7.2 with Percona 5.5.31-rel30.3-520.squeeze) I have:
Before my.cnf changes:
# cat /proc/12345/limits | grep "open files"
Max open files 1186 1186 files
After adding open_files_limit = 4096 to my.cnf and restarting mysqld, I got:
# cat /proc/23456/limits | grep "open files"
Max open files 4096 4096 files
12345 and 23456 are the mysqld process PIDs, of course.
SHOW VARIABLES LIKE 'open_files_limit' shows 4096 now.
All looks OK, while ulimit shows no changes:
# su - mysql -c bash
# ulimit -n
1024
There is no guarantee that "24" is an OS-level error number, so don't assume that this means that too many file handles are open. It could be some type of internal error code used within mysql itself. I'd suggest asking on the mysql mailing lists about this.
