Open hundreds of TCP (WebSocket) connections on one client - Linux

I want to figure out how many connections my server can handle, so I wrote a script that opens a large number of (WebSocket) connections.
This works fine until 200 connections, then it stops!
I am guessing it has something to do with system limits; the system is Red Hat Linux.
I tried to change the values with ulimit, but it didn't work -> after a reboot they were gone.
I also changed the value of the maximum number of file handles:
cat /proc/sys/fs/file-max
900000
-> also gone after reboot
Can someone tell me what kind of system limit I am running into and how I can change it permanently? Here is ulimit -a:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) 14904
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 1
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 14904
virtual memory (kbytes, -v) unlimited
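Since each connection consumes a file descriptor, the open files limit of 1024 above is the usual suspect, and both ulimit and writes to /proc only last for the session or until reboot. On Red Hat, the standard way to persist these settings is /etc/security/limits.conf plus /etc/sysctl.conf. A minimal sketch, assuming pam_limits is enabled for your login path (the 65536 value is an example, not from the original post):

# /etc/security/limits.conf - raise the per-process fd limits for all users
* soft nofile 65536
* hard nofile 65536

# /etc/sysctl.conf - persist the system-wide file handle maximum
fs.file-max = 900000

# apply the sysctl settings now, then log in again and verify
sysctl -p
ulimit -n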

Related

How do I set the locked memory limit to unlimited on Google Colab?

Is it possible to raise the locked memory limit on Google Colab notebooks? They run on an Ubuntu 18.04 VM.
I'm running
ulimit -l unlimited
But I receive this in response
ulimit: max locked memory: cannot modify limit: Operation not permitted
This is what ulimit -a returns
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 51915
max locked memory (kbytes, -l) 16384
max memory size (kbytes, -m) unlimited
open files (-n) 1048576
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) unlimited
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Yes, by running this:
a = []
while True:
    a.append(' ' * 10**6)  # keep allocating until the runtime runs out of RAM
Once the runtime crashes from exhausting its memory, Google will offer to restart it with more memory.
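As to why ulimit -l itself fails: ulimit -l reports the soft limit, and an unprivileged process can only raise its soft limit up to the hard limit; raising the hard limit (16384 KB here) requires root or CAP_SYS_RESOURCE, which a Colab notebook does not get. A quick way to see both values (a hedged check, not from the original post):

# soft limit - what ulimit -l shows by default
ulimit -S -l
# hard limit - the ceiling an unprivileged user cannot exceed
ulimit -H -l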

WorkbookFactory.create throws ClosedByInterruptException

I have a multi-threaded Java application that spawns as many threads as there are reports to be generated at a given moment. At the end of the process, I generate an Excel file with Apache POI (3.15) via WorkbookFactory.create(file), where file is an empty template I use to create a brand-new Excel file.
With one particularly intensive report (it takes hours to generate), when the code reaches this point it throws this exception:
Caused by: java.nio.channels.ClosedByInterruptException
at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:163)
at org.apache.poi.util.IOUtils.readFully(IOUtils.java:164)
at org.apache.poi.poifs.filesystem.NPOIFSFileSystem.<init>(NPOIFSFileSystem.java:229)
at org.apache.poi.poifs.filesystem.NPOIFSFileSystem.<init>(NPOIFSFileSystem.java:168)
at org.apache.poi.ss.usermodel.WorkbookFactory.create(WorkbookFactory.java:250)
at org.apache.poi.ss.usermodel.WorkbookFactory.create(WorkbookFactory.java:222)
at org.apache.poi.ss.usermodel.WorkbookFactory.create(WorkbookFactory.java:201)
at it.habble.report.designers.InvoiceCheckDesigner.<init>(InvoiceCheckDesigner.java:87)
I've read somewhere that it could be related to the limits.conf file. Do you have any advice on how to investigate this? Current values:
[user@localhost ~]$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 191942
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 8192
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 2048
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
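Worth noting: ClosedByInterruptException is thrown when another thread interrupts a thread while it is blocked in channel I/O, so the root cause may be a thread interrupt (a timeout or cancellation elsewhere in the application) rather than a resource limit. To rule limits out, you can inspect the limits that actually apply to the running JVM; a hedged sketch (the pgrep filter is an example, adjust it to match your process):

# dump the effective limits of the newest matching java process
cat /proc/$(pgrep -n -f java)/limits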

How to set up jackd and guitarix with real-time priority under Raspbian?

I am trying to get guitarix and jack running on the Raspberry Pi 2 (+ Cirrus audio card) with Raspbian.
When starting jack via qjackctl, I get the errors
Cannot lock down 82278944 byte memory area (Cannot allocate memory)
Cannot use real-time scheduling (RR/10)(1: Operation not permitted)
It seems changes to /etc/security/limits.conf do not apply, but changes to /etc/security/limits.d/audio.conf do.
I tried setting the memory lock size and real-time priority for the user and the group:
#audio - rtprio 90 # maximum realtime priority
#audio - memlock unlimited # maximum locked-in-memory address space (KB)
#audio - nice -10
pi - rtprio 90
pi - memlock unlimited
pi - nice -10
From an SSH session I get a satisfactory result:
pi@raspberrypi ~ $ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 30
file size (blocks, -f) unlimited
pending signals (-i) 7349
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 90
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 7349
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
But from the desktop terminal that I access via VNC I get:
pi@raspberrypi ~ $ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 7349
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 7349
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
How can the same user have different settings, and how do I get real-time scheduling and memory locking working on the desktop?
It seems to have been a problem with the PAM configuration, as stated here.
Uncommenting this line in /etc/pam.d/su did the trick:
session required pam_limits.so
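To confirm that PAM now applies the limits to a fresh session, the values can be checked directly (a quick verification sketch, not from the original post):

# real-time priority and locked memory limits as seen by a new login shell
su - pi -c 'ulimit -r; ulimit -l'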

Changing the number of open files on an EC2 instance

I'm using 32-bit Amazon Linux (CentOS?). Per the blog http://gnufreakz.wordpress.com/2009/08/12/increase-ulimit-in-centos/ I tried changing some parameters.
I added the below line to /etc/sysctl.conf
fs.file-max = 65536
and ran sysctl -p
I added the below line to /etc/security/limits.conf
* hard nofile 65536
No luck! After a restart, ulimit -a still gives me:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 26597
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 1024
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Try adding
ulimit -n 65536
to your /etc/profile or to /home/[username]/.bash_profile
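A likely reason the limits.conf change alone did not take effect: ulimit -a reports soft limits, and the line above only raises the hard limit, so the soft limit stays at the default 1024. Adding a soft entry as well (mirroring the question's value) should make the new limit appear at login:

# /etc/security/limits.conf - set both the soft and the hard limit
* soft nofile 65536
* hard nofile 65536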

Why can't ulimit limit resident memory, and how can it be done?

I start a new bash shell and execute:
ulimit -m 102400
ulimit -a
"
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 20
file size (blocks, -f) unlimited
pending signals (-i) 16382
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) 102400
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) unlimited
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
"
and then ,I execute compiling a huge project. the Linking of it will use large memory, more then 2G. The result, process ld used more then 2G resident memory.
is there any wrong ? how to use ulimit or can I use other programs to limit resident memory?
the target of limit resident memory, is because computer will freeze when one process almost used all memory.
According to the man page for setrlimit:
RLIMIT_RSS
Specifies the limit (in pages) of the process's resident set (the number of virtual pages resident in RAM). This limit only has effect in Linux 2.4.x, x < 30, and there only affects calls to madvise(2) specifying MADV_WILLNEED
You probably want to set the virtual memory size instead, via ulimit -v.
You can also restrict resident memory using cgroups; see "Resident Set Size (RSS) limit has no effect".
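On a systemd-based distro, a convenient front end to the cgroup memory controller is systemd-run; a minimal sketch, assuming a recent systemd (on cgroup v1 setups the property is MemoryLimit= instead) and using make as a stand-in for the build command:

# run the build in a transient scope with a 2 GB memory cap
systemd-run --user --scope -p MemoryMax=2G make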
