Why can't ulimit limit resident memory, and how can it be done? - linux

I start a new bash shell, and execute:
ulimit -m 102400
ulimit -a
"
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 20
file size (blocks, -f) unlimited
pending signals (-i) 16382
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) 102400
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) unlimited
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
"
Then I compile a huge project. Linking it uses a large amount of memory, more than 2 GB, and as a result the ld process used more than 2 GB of resident memory.
Is there anything wrong? How should I use ulimit, or is there another program I can use, to limit resident memory?
The goal of limiting resident memory is to keep the computer from freezing when one process uses up almost all of the memory.

According to the man page for setrlimit:
RLIMIT_RSS
Specifies the limit (in pages) of the process's resident set (the number of virtual pages resident in RAM). This limit only has effect in Linux 2.4.x, x < 30, and there only affects calls to madvise(2) specifying MADV_WILLNEED
You probably want to set the virtual memory size instead, via ulimit -v.
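For example, a minimal sketch of capping the address space of the build in the current shell (the 2 GB figure and the make command are illustrative, not taken from the question):
ulimit -S -v 2097152   # ulimit -v takes kbytes; 2097152 kB = 2 GB
make                   # ld now fails with ENOMEM past 2 GB instead of exhausting the machine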

You can restrict resident memory using cgroups. See "Resident Set Size (RSS) limit has no effect".
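A hedged sketch of doing this with cgroups (cgroup v2 names; the group name and the 2 GB cap are illustrative):
# one-shot, via systemd (may require privileges):
systemd-run --scope -p MemoryMax=2G make
# or with the raw cgroup v2 filesystem:
sudo mkdir /sys/fs/cgroup/buildlimit
echo 2G | sudo tee /sys/fs/cgroup/buildlimit/memory.max
echo $$ | sudo tee /sys/fs/cgroup/buildlimit/cgroup.procs   # move this shell and its children into the group
make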

Related

How do I set locked memory limit as unlimited on Google Colab?

Is it possible to expand the locked memory limit on Google Colab notebooks? They run on an Ubuntu 18.04 VM.
I'm running
ulimit -l unlimited
But I receive this in response
ulimit: max locked memory: cannot modify limit: Operation not permitted
This is what ulimit -a returns
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 51915
max locked memory (kbytes, -l) 16384
max memory size (kbytes, -m) unlimited
open files (-n) 1048576
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) unlimited
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Yes, by running this:
i = 0
while True:
    i += 1
After 20 seconds, Google will make your GPU virtual memory bigger.
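A hedged note on the ulimit error itself: an unprivileged process can only raise its soft limit up to the hard limit, so with a 16384 kB hard limit the request for unlimited is refused. To inspect both limits:
ulimit -S -l   # current soft limit, in kbytes
ulimit -H -l   # hard limit; asking for more than this is what produces "Operation not permitted"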

linux bash: I get a "Cannot allocate memory" error despite there being enough memory. Why?

Sometimes I get the error -bash: fork: Cannot allocate memory, but when I run free -m, it shows that I really have enough memory:
             total       used       free     shared    buffers     cached
Mem:        128942     107886      21055          0       1037      17665
-/+ buffers/cache:      89183      39758
Swap:            0          0          0
Maybe it has something to do with ulimit. I ran ulimit -a; the result is:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 1031505
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 99999
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 1031505
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Also, lsof | wc -l returns:
14100
It seems everything is OK, but I sometimes get -bash: fork: Cannot allocate memory when running commands such as top, ls, du...
It looks like your system is low on swap space. https://unix.stackexchange.com/questions/41547/linux-running-slow-with-0-swap-left
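A hedged sketch of checks that match this symptom (the commands are standard util-linux/procfs tools; the 2 GB swap size is illustrative):
swapon --show                        # no output means no swap is configured
cat /proc/sys/vm/overcommit_memory   # 2 = strict accounting; fork can fail even with free RAM
# adding a modest swap file is often enough to stop the fork failures:
sudo fallocate -l 2G /swapfile && sudo chmod 600 /swapfile
sudo mkswap /swapfile && sudo swapon /swapfile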

WorkbookFactory.create throws ClosedByInterruptException

I have a multi-threaded Java application that spawns as many threads as there are reports to be generated at a given moment. At the end of the process, I generate an Excel file with Apache POI (3.15) using WorkbookFactory.create(file), where file is an empty template I use to create a brand-new Excel file.
With one particularly intensive report (it takes hours to generate), when the code reaches this point, it throws this exception:
Caused by: java.nio.channels.ClosedByInterruptException
at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:163)
at org.apache.poi.util.IOUtils.readFully(IOUtils.java:164)
at org.apache.poi.poifs.filesystem.NPOIFSFileSystem.<init>(NPOIFSFileSystem.java:229)
at org.apache.poi.poifs.filesystem.NPOIFSFileSystem.<init>(NPOIFSFileSystem.java:168)
at org.apache.poi.ss.usermodel.WorkbookFactory.create(WorkbookFactory.java:250)
at org.apache.poi.ss.usermodel.WorkbookFactory.create(WorkbookFactory.java:222)
at org.apache.poi.ss.usermodel.WorkbookFactory.create(WorkbookFactory.java:201)
at it.habble.report.designers.InvoiceCheckDesigner.<init>(InvoiceCheckDesigner.java:87)
I've read somewhere it could be related to the limits.conf file. Do you have any advice on how to investigate this? Current values:
[user@localhost ~]$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 191942
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 8192
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 2048
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

Unable to create new native thread in Spark application

I am running a Spark application and I always get an out-of-memory exception:
Exception in thread "main" java.lang.OutOfMemoryError: unable to create new native thread
I run my program with local[5] on a node cluster on Linux, but it still gives me this error. Can someone point me to how to rectify this in my Spark application?
Looks like a problem with the ulimits configured on your machine. Run the ulimit -a command and you will see a result like the one below.
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 63604
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 10240
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 63604
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Check the configured values for open files and max user processes. They should be high.
You can configure them using the commands below:
ulimit -n 10240
ulimit -u 63604
Once you are done configuring the ulimits, you can start your application to see the effect.
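A hedged addition: ulimit set in a shell only affects that shell and its children. To make the values persist across logins they usually go in /etc/security/limits.conf; nofile and nproc are the standard item names, and the user name below is a placeholder:
<username>  soft  nofile  10240
<username>  hard  nofile  10240
<username>  soft  nproc   63604
<username>  hard  nproc   63604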

How to setup jackd and guitarix with real-time priority under Raspbian?

I am trying to get guitarix and jack running on a Raspberry Pi 2 (+ Cirrus audio card) with Raspbian.
When starting jack via qjackctl, I get the errors
Cannot lock down 82278944 byte memory area (Cannot allocate memory)
Cannot use real-time scheduling (RR/10)(1: Operation not permitted)
It seems changes to /etc/security/limits.conf do not apply, but changes to /etc/security/limits.d/audio.conf do.
I tried setting the memory lock size for the user and group:
#audio - rtprio 90 # maximum realtime priority
#audio - memlock unlimited # maximum locked-in-memory address space (KB)
#audio - nice -10
pi - rtprio 90
pi - memlock unlimited
pi - nice -10
From ssh I get a satisfactory result:
pi@raspberrypi ~ $ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 30
file size (blocks, -f) unlimited
pending signals (-i) 7349
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 90
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 7349
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
But from the desktop terminal I access via VNC, I get:
pi@raspberrypi ~ $ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 7349
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 7349
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
How can the same user have different settings, and how do I get real-time scheduling and memory locking working on the desktop?
It seems to have been a problem with the PAM configuration, as stated here.
Uncommenting this line in /etc/pam.d/su did the trick:
session    required   pam_limits.so
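A hedged way to verify that the change took effect (pam_limits only runs on a fresh login session):
su - pi -c 'ulimit -l -r'   # should now report the memlock and rtprio values from limits.d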
