How to set ulimit to unlimited in perl?

I want all child processes of my perl script to generate core files in case of unexpected failures. So how can I set ulimit to unlimited inside perl?

You have to raise the core file size limit (ulimit -c) for the user that launches your perl script. You can change the limit on the fly with:
ulimit -c unlimited && perl path/to/your/script.pl
Or you can make a bash script foo.sh:
#!/bin/bash
ulimit -c unlimited
perl path/to/your/script.pl
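Since resource limits are inherited across fork and exec, every child process the script spawns picks up the new limit automatically. A quick way to verify this (a sketch; the perl one-liner just stands in for your real script):
ulimit -c unlimited
perl -e 'system("sh -c \"ulimit -c\"")'   # prints "unlimited" if the child inherited it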

Related

What is the most correct way to set limits of number of files on Linux?

There are three ways to set the limit on the number of open files and sockets on Linux:
echo "100000" > /proc/sys/fs/file-max
ulimit -n 100000
sysctl -w fs.file-max=100000
What is the difference?
What is the most correct way to set limits of number of files on Linux?
sysctl is an interface for writing to /proc/sys, so sysctl -w fs.file-max=100000 does the same thing as echoing directly into /proc/sys/fs/file-max: both set the kernel-wide ceiling on open file handles. ulimit -n, by contrast, sets the per-process limit (RLIMIT_NOFILE), which applies only to the shell and to the processes it starts.
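To see both limits side by side, and to persist the system-wide value across reboots (a sketch, assuming a distro that reads /etc/sysctl.d/; the file name is an arbitrary example):
cat /proc/sys/fs/file-max            # kernel-wide ceiling on open file handles
ulimit -n                            # per-process limit for this shell and its children
echo 'fs.file-max = 100000' | sudo tee /etc/sysctl.d/90-file-max.conf
sudo sysctl --system                 # reload all sysctl configuration files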

How to edit /proc/<pid>/limits file in linux?

I am trying to generate a coredump for a particular pid.
I tried to change the core file size limit using ulimit, but that only changes /proc/self/limits (i.e. the limits of the shell itself).
So how do I edit it for a particular pid?
Basically I have to change "Max core file size" to unlimited.
Note:
1) Our Linux version doesn't have prlimit.
2) Even the command below didn't help:
echo -n "Max core file size=unlimited:unlimited" > /proc/1/limits
Thanks,
prlimit
If you want to modify the core file limit, you can type:
prlimit --pid ${pid} --core=soft_limit:hard_limit
The help page of prlimit is:
Usage:
prlimit [options] [-p PID]
prlimit [options] COMMAND
General Options:
-p, --pid <pid> process id
-o, --output <list> define which output columns to use
--noheadings don't print headings
--raw use the raw output format
--verbose verbose output
-h, --help display this help and exit
-V, --version output version information and exit
Resources Options:
-c, --core maximum size of core files created
-d, --data maximum size of a process's data segment
-e, --nice maximum nice priority allowed to raise
-f, --fsize maximum size of files written by the process
-i, --sigpending maximum number of pending signals
-l, --memlock maximum size a process may lock into memory
-m, --rss maximum resident set size
-n, --nofile maximum number of open files
-q, --msgqueue maximum bytes in POSIX message queues
-r, --rtprio maximum real-time scheduling priority
-s, --stack maximum stack size
-t, --cpu maximum amount of CPU time in seconds
-u, --nproc maximum number of user processes
-v, --as size of virtual memory
-x, --locks maximum number of file locks
-y, --rttime CPU time in microseconds a process scheduled
under real-time scheduling
Available columns (for --output):
DESCRIPTION resource description
RESOURCE resource name
SOFT soft limit
HARD hard limit (ceiling)
UNITS units
For more details see prlimit(1).
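For example, to raise both the soft and the hard core-file limit of a running process and verify the result (PID 1234 is a hypothetical example):
prlimit --pid 1234 --core=unlimited:unlimited
grep 'Max core file size' /proc/1234/limits   # should now show unlimited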
I always do this with the ulimit command:
$ ulimit -c unlimited
On my Linux distro (Ubuntu 16.04), core files are left in this directory:
/var/lib/systemd/coredump/
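On such distros the systemd-coredump handler manages these files, and you can list and extract them with coredumpctl (assuming it is installed; the PID is an example):
coredumpctl list                      # show recorded crashes
coredumpctl dump 1234 -o core.1234    # extract the dump for PID 1234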
If your distro is based on systemd, you can set this directory by modifying the pattern in this file:
$ cat /proc/sys/kernel/core_pattern
Please read this info:
$ man 5 core
Check the information related to /proc/sys/kernel/core_pattern.
As suggested previously, you can define the directory where all your core files are dumped by modifying the content of this file with the echo command. For example:
$ echo "/var/log/dumps/core.%e.%p" > /proc/sys/kernel/core_pattern
This will dump all cores into /var/log/dumps/core.%e.%p, where %e is replaced by the executable filename and %p by the PID of the dumped process.
Hopefully you can play with this to fit your own needs.
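Putting it together, here is a minimal end-to-end check (a sketch; run as root, and the dump directory is just an example):
mkdir -p /var/log/dumps
echo '/var/log/dumps/core.%e.%p' > /proc/sys/kernel/core_pattern
ulimit -c unlimited
sleep 60 &                       # a throwaway process to crash
kill -SEGV $!                    # SIGSEGV's default action is to dump core
ls /var/log/dumps                # expect a core.sleep.<pid> file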

Linux: how to change maximum number of files a process can open?

I have to execute a process on a cluster of machines. The size of the cluster is of the order of 100, so I cannot start the processes manually; I have to start them by script (which uses ssh; currently I am using python-paramiko for this). The number of TCP sockets these processes open is more than 1024 (the default limit on Linux), so I need to raise it with ulimit -n 10000. This changes the limit for that shell session only, and the command works only as the root user, so my script is not able to do it.
I tried to execute this command
sudo su && ulimit -n 10000 && <commandToExecuteMyProcess>
But this didn't work: the commands after sudo su didn't execute at all; they executed only after I logged out of the su session.
This article shows a way to make the change permanent, but when I opened limits.conf I didn't find anything there; it only has some commented-out notes.
Please suggest a way to increase the limit permanently, or to change it by script for each session.
That's not how it works: sudo su just opens a new shell so you can enter commands as root, and after you exit that shell it executes the rest of the line as the normal user.
Second: this is a special case, because ulimit is not actually a program but a bash shell built-in command, so it must be used within bash. That is why something like sudo ulimit -n 10000 won't work: sudo can't find that program, because it doesn't exist.
So, the only alternative is a bit ugly but works:
sudo bash -c 'ulimit -n 10000 && <command>'
Everything inside '...' will execute in a bash session running as the root user.
Note that you can replace && with ; in this case: since the command is executed as root, ulimit -n 10000 will always complete successfully.
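For the permanent route the question mentions, the commented-out notes in limits.conf are the template to follow. A sketch, assuming the processes run as a user named worker (a hypothetical name) and that pam_limits is enabled; a re-login is needed for it to take effect:
echo 'worker  soft  nofile  10000' | sudo tee -a /etc/security/limits.conf
echo 'worker  hard  nofile  10000' | sudo tee -a /etc/security/limits.conf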

Why are my ulimit settings ignored in the shell?

I have to execute a .jar, and I need to use ulimit before this execution, so I wrote a shell script:
#!/bin/sh
ulimit -S -c unlimited
/usr/java/jre1.8.0_91/bin/java -jar /home/update.jar
But the ulimit seems to be ignored, because I get this error:
java.lang.InternalError: java.io.FileNotFoundException: /usr/java/jre1.8.0_91/lib/ext/localedata.jar (Too many open files)
If you want to change the maximum open files you need to use ulimit -n.
Example:
ulimit -n 8192
The -c option is changing the core file size (core dumps), not the maximum open files.
You need to apply the ulimit to the shell that will call the java application.
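Applied to the script from the question, a corrected wrapper might look like this (a sketch; 8192 is an arbitrary value, and the -c line is only needed if you also want core dumps):
#!/bin/sh
ulimit -n 8192                   # raise the open-files limit for this shell and its children
ulimit -S -c unlimited           # optional: keep the original core-dump setting
exec /usr/java/jre1.8.0_91/bin/java -jar /home/update.jar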

ulimit -s on Netbeans 8.0.2

I want to run a C program in NetBeans 8.0.2 (on Xubuntu 14.04) with ulimit -s set. In Re-run with arguments I've already tried writing ulimit -s 2048; "${OUTPUT_PATH}", but it shows me this error:
/bin/sh: 1: exec: ulimit: not found
I don't want to compile and run the program on my own in the terminal just to set ulimit.
This doesn't look like a C question.
Anyway, on Linux ulimit is not a system command, it's a bash built-in. Unless /bin/sh is linked to bash (which it usually is not), the command won't be known to the shell.
Try /bin/bash -c 'ulimit -s 2048; "${OUTPUT_PATH}"' instead; the quotes matter, since without them bash -c receives only ulimit as its command string.
Note that this new limit will only be active in that particular shell: once you return from it, you'll see whatever you had before.
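You can see that behaviour from any terminal (the values are just examples):
/bin/bash -c 'ulimit -s 2048; ulimit -s'    # prints 2048 inside the child shell
ulimit -s                                   # the outer shell still shows its old value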
