Why are my ulimit settings ignored in the shell? - linux

I have to execute a .jar file, and I need to set a ulimit before running it, so I wrote a shell script:
#!/bin/sh
ulimit -S -c unlimited
/usr/java/jre1.8.0_91/bin/java -jar /home/update.jar
But the ulimit seems to be ignored, because I get this error:
java.lang.InternalError: java.io.FileNotFoundException: /usr/java/jre1.8.0_91/lib/ext/localedata.jar (Too many open files)

If you want to change the maximum number of open files, you need to use ulimit -n.
Example:
ulimit -n 8192
The -c option is changing the core file size (core dumps), not the maximum open files.
You need to apply the ulimit to the shell that will call the java application.

Related

Limit for files opening

I am using the Linux ulimit command to set limits for opening files. If I use ulimit -n 4, I can open just 1 file. If I use ulimit -n 5, I can open 2 files. So the formula is ulimit -n = number of files + 3. The question is, why that difference of +3? What do those 3 represent? Maybe one for the file, one for the executable file, and one for...?
Each process starts with its first three file descriptors already open: stdin (0), stdout (1), and stderr (2).
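This is easy to see from a shell, sketched here in a subshell so the lowered limit does not stick to your interactive session: with a limit of 4 and descriptors 0, 1, and 2 already taken, exactly one more file can be opened:

```shell
#!/bin/sh
# fds 0,1,2 (stdin, stdout, stderr) are pre-opened, so a limit of 4
# leaves room for exactly one more descriptor.
(
  ulimit -n 4
  sh -c 'exec 3< /dev/null' && echo "one extra file: ok"
  sh -c 'exec 3< /dev/null 4< /dev/null' 2>/dev/null \
    || echo "two extra files: Too many open files"
)
```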

Increase the number of files opened at the same time. Ubuntu 16.04.4 LTS

In Ubuntu MATE 16.04.4 LTS, every time I run the command:
$ ulimit -a
I get:
open files (-n) 1024
I tried to increase this limit by adding the following line to /etc/security/limits.conf:
myusername hard nofile 100000
but no matter what, the value 1024 persists when I run ulimit -a. I rebooted the system after the modification, yet the problem persists.
Also, if I run
ulimit -n 100000
I get the response:
ulimit: open files: cannot modify limit: Operation not permitted
and if I run
sudo ulimit -n 100000
I get:
sudo: ulimit: command not found
Any ideas on how to increase that limit?
Thanks.
From man bash under ulimit:
-n The maximum number of open file descriptors (most systems do not allow this value to be set)
Maybe your problem is simply that your system does not support modifying this limit?
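More often, "Operation not permitted" means the requested value is above the hard limit: an unprivileged shell may lower its limits freely, but can only raise the soft limit up to the hard ceiling. A quick check (flags as in bash's ulimit builtin):

```shell
#!/bin/sh
# The soft limit is what processes actually hit; the hard limit is
# the ceiling an unprivileged user may raise the soft limit to.
ulimit -Hn   # hard ceiling for open files
ulimit -Sn   # current soft value
```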
I found the solution just after I posted this question. Based on:
https://askubuntu.com/questions/162229/how-do-i-increase-the-open-files-limit-for-a-non-root-user
I also edited:
/etc/pam.d/common-session
and added the following line to the end:
session required pam_limits.so
All works now.
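For reference, the two edits combined, as a sketch ("myusername" is the placeholder from the question; a soft line is worth adding alongside the hard one, so the new value takes effect at login without a manual ulimit call):

```shell
# /etc/security/limits.conf
myusername soft nofile 100000
myusername hard nofile 100000

# /etc/pam.d/common-session -- makes PAM apply limits.conf at login
session required pam_limits.so
```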

What is the most correct way to set limits of number of files on Linux?

There are 3 ways to set limits on the number of open files and sockets on Linux:
echo "100000" > /proc/sys/fs/file-max
ulimit -n 100000
sysctl -w fs.file-max=100000
What is the difference?
What is the most correct way to set limits of number of files on Linux?
sysctl is an interface for writing to /proc/sys, so it does the same thing as echoing directly to the files. Whereas sysctl (fs.file-max) applies system-wide, ulimit only applies to the shell and the processes started from it.
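The difference in scope is easy to see from a shell (a sketch; the exact numbers vary per system, and sysctl may live in /sbin):

```shell
#!/bin/sh
# System-wide ceiling on open file handles, kept by the kernel;
# these two read the same value through different interfaces:
cat /proc/sys/fs/file-max
sysctl -n fs.file-max
# Per-process limit, inherited only by children of this shell:
ulimit -n
```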

How to edit /proc/<pid>/limits file in linux?

I am trying to generate a core dump for a particular pid.
I tried to change the core file size limit using ulimit, but that only changes /proc/self/limits (i.e. the shell's own limits).
So how do I edit it for a particular pid?
Basically I have to change "Max core file size" to unlimited.
Note:
1) Our Linux version doesn't have prlimit.
2) Even the command below didn't help:
echo -n "Max core file size=unlimited:unlimited" > /proc/1/limits
Thanks,
prlimit
If you want to modify the core file limit, you can type
prlimit --pid ${pid} --core=soft_limit:hard_limit
The help page of prlimit is:
Usage:
prlimit [options] [-p PID]
prlimit [options] COMMAND
General Options:
-p, --pid <pid> process id
-o, --output <list> define which output columns to use
--noheadings don't print headings
--raw use the raw output format
--verbose verbose output
-h, --help display this help and exit
-V, --version output version information and exit
Resources Options:
-c, --core maximum size of core files created
-d, --data maximum size of a process's data segment
-e, --nice maximum nice priority allowed to raise
-f, --fsize maximum size of files written by the process
-i, --sigpending maximum number of pending signals
-l, --memlock maximum size a process may lock into memory
-m, --rss maximum resident set size
-n, --nofile maximum number of open files
-q, --msgqueue maximum bytes in POSIX message queues
-r, --rtprio maximum real-time scheduling priority
-s, --stack maximum stack size
-t, --cpu maximum amount of CPU time in seconds
-u, --nproc maximum number of user processes
-v, --as size of virtual memory
-x, --locks maximum number of file locks
-y, --rttime CPU time in microseconds a process scheduled
under real-time scheduling
Available columns (for --output):
DESCRIPTION resource description
RESOURCE resource name
SOFT soft limit
HARD hard limit (ceiling)
UNITS units
For more details see prlimit(1).
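A concrete sketch, using the shell's own pid ($$) as a stand-in for the target process (raising the hard limit may require root if it is currently finite; prlimit comes from util-linux):

```shell
#!/bin/sh
# Lift the core-size cap on an already-running process.
prlimit --pid $$ --core=unlimited:unlimited
# Confirm through the same /proc file the question tried to edit:
grep -i 'core file size' "/proc/$$/limits"
```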
I always do this with the ulimit command:
$ ulimit -c unlimited
On my Linux distro (Ubuntu 16.04), core files are left in this directory:
/var/lib/systemd/coredump/
If your distro is based on systemd, you can set this directory by modifying the pattern in this file:
$ cat /proc/sys/kernel/core_pattern
Please, read this info:
$ man 5 core
Check information related with /proc/sys/kernel/core_pattern.
As suggested previously, you can define the directory where all your core files are dumped by modifying the content of this file with the "echo" command. For example:
$ echo "/var/log/dumps/core.%e.%p" > /proc/sys/kernel/core_pattern
This will dump all cores to /var/log/dumps/core.%e.%p, where %e is the pattern for the executable filename and %p the pattern for the pid of the dumped process.
Hopefully you can play with this to customize your own needs.
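Putting it together, a quick end-to-end check (a sketch: the kill line makes a throwaway child crash, so the kernel writes a core wherever core_pattern points; note that on some distros the pattern is a pipe to a handler such as apport or systemd-coredump):

```shell
#!/bin/sh
ulimit -c unlimited                 # allow cores in this shell and children
sleep 30 &                          # a throwaway process to crash
kill -SEGV $!                       # SIGSEGV makes the kernel dump core
wait $! 2>/dev/null
cat /proc/sys/kernel/core_pattern   # where the dump went
```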

How to set ulimit to unlimited in perl?

I want all child processes of my perl script to generate core files in case of unexpected failures. So how can I set ulimit to unlimited inside perl?
You have to change the core file size limit of the user that launches your perl script. Note that core files are controlled by ulimit -c; ulimit -n is the open files limit. So you can change the limit on the fly with:
ulimit -c unlimited && perl path/to/your/script.pl
Or you can make a bash script foo.sh:
#!/bin/bash
ulimit -c unlimited
perl path/to/your/script.pl
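Since limits are inherited across fork and exec, a limit set in the wrapper is visible in perl and in every child perl spawns; a quick check (a sketch, assuming perl is on PATH and the hard core limit allows unlimited):

```shell
#!/bin/sh
ulimit -c unlimited
# perl, and anything perl forks, inherits the limit set above:
perl -e 'print `sh -c "ulimit -c"`'
```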
