I have an Ubuntu 12.04 setup on the server. Every registered user is also registered as a Linux user and jailed, with limited access to system resources through /etc/security/limits.conf .
I tried running a server as one of the registered users. The app is a Node.js app - http://github.com/pocha/terminal-codelearn . It uses https://github.com/chjj/pty.js to create a pseudo-terminal for every user who connects to the Node.js app.
The app fails with a 'forkpty(3) failed' error pointing to line 184 of https://github.com/chjj/pty.js/blob/65dd89fd8f87de914ff1814362918d7bd87c9cbf/src/unix/pty.cc :
// Line 184: allocate a new pseudo-terminal pair and fork.
pid_t pid = pty_forkpty(&master, name, NULL, &winp);

// In the parent (and on failure), free the copied argv/env arrays.
if (pid) {
  for (i = 0; i < argl; i++) free(argv[i]);
  delete[] argv;
  for (i = 0; i < envc; i++) free(env[i]);
  delete[] env;
  free(cwd);
}

switch (pid) {
  case -1:
    // forkpty(3) returned -1: no pseudo-terminal could be allocated.
    return ThrowException(Exception::Error(
      String::New("forkpty(3) failed.")));
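forkpty(3) returns -1 when no pseudo-terminal can be allocated - typically because /dev/ptmx or /dev/pts is missing inside the jail, or because a resource limit is hit. One way to see which call fails, assuming strace is available inside the jail (app.js is a hypothetical entry point):
strace -f -e trace=open,openat,clone node app.js 2>&1 | grep -E 'ptmx|ENOENT|EAGAIN'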
I am able to deploy the app successfully on http://nitrous.io . They probably jail users in a similar way. I ran ulimit -a and matched every value except for pending signals. Somehow, on my server the maximum pending signals value does not exceed around 90k, while it is 548288 on the Nitrous server.
Below is the ulimit -a output from the Nitrous server:
ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 548288
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 512
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 256
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
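For what it's worth, the pending-signals limit can be raised per user through pam_limits; a sketch, assuming you want to match the Nitrous value in /etc/security/limits.conf:
# "sigpending" is the limits.conf item behind ulimit -i (pending signals)
* soft sigpending 548288
* hard sigpending 548288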
The app fails on Heroku with the exact same error.
Can anybody help me make the app run on my server the way it works on nitrous.io?
I know that Heroku fails to forkpty because they're not actually running POSIX, just something very POSIX-like. So some things, like forkpty, just don't work. I don't think there's a way around that :( wish there were.
I am not sure I understand the POSIX point, but I figured out that in my jailed environment there was no /dev/ptmx and no /dev/pts/* . I googled, created them, and it started working.
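For anyone hitting the same problem, a minimal sketch of creating those devices inside a jail (assuming the jail root is /path/to/jail; run as root):
# /dev/ptmx is the pty multiplexer, character device 5,2
mknod -m 666 /path/to/jail/dev/ptmx c 5 2
# slave ptys appear under /dev/pts once the devpts filesystem is mounted
mkdir -p /path/to/jail/dev/pts
mount -t devpts devpts /path/to/jail/dev/pts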
Related
I am getting an intermittent error on the Node.js end while subscribing to a topic over MQTT.
I have configured the MQTT log files and found the error below:
Unable to accept new connection, system socket count has been exceeded. Try increasing "ulimit -n" or equivalent.
Whenever the above message appears in the MQTT log file, I get an ECONNRESET error on the Node.js end at the same time.
I have checked the ulimit on the server, and it gives me the details below:
ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 256380
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 62987
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
My Linux version is as follows:
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-1062.12.1.vz7.131.10
Architecture: x86-64
Is the problem related to ulimit? Do I need to increase the ulimit value at the server level?
How do I fix the ECONNRESET issue on the Node.js end?
You need to increase the open files count on the broker.
You can do it for the running process with the prlimit command, but you should do it for the user running mosquitto so it's persistent across restarts. You can do this by editing the /etc/security/limits.conf file; a sketch of both approaches follows. You will need to log out and back in for it to take effect for a normal user, and probably restart the service for a daemon user.
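A sketch, assuming a single broker process running as a "mosquitto" user:
# check the current limit of the running broker
cat /proc/$(pidof mosquitto)/limits | grep 'open files'
# raise it for the running process (soft:hard)
sudo prlimit --pid $(pidof mosquitto) --nofile=10240:10240
# persistent version - add to /etc/security/limits.conf
mosquitto soft nofile 10240
mosquitto hard nofile 10240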
I have changed /etc/security/limits.conf and rebooted the machine remotely. However, after the boot, the nproc parameter still has the old value.
[ost@compute-0-1 ~]$ cat /etc/security/limits.conf
* - memlock -1
* - stack -1
* - nofile 4096
* - nproc 4096 <=====================================
[ost@compute-0-1 ~]$
Broadcast message from root@compute-0-1.local
(/dev/pts/0) at 19:27 ...
The system is going down for reboot NOW!
Connection to compute-0-1 closed by remote host.
Connection to compute-0-1 closed.
ost@cluster:~$ ssh compute-0-1
Warning: untrusted X11 forwarding setup failed: xauth key data not generated
Last login: Tue Sep 27 19:25:25 2016 from cluster.local
Rocks Compute Node
Rocks 6.1 (Emerald Boa)
Profile built 19:00 23-Aug-2016
Kickstarted 19:08 23-Aug-2016
[ost@compute-0-1 ~]$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 516294
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 4096
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) unlimited
cpu time (seconds, -t) unlimited
max user processes (-u) 1024 <=========================
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Note that I set max user processes to 4096, but after the reboot the value is still 1024.
Take a look at the file /etc/pam.d/sshd .
If it exists, open it and insert the following line:
session required pam_limits.so
Then the new value will remain effective even after rebooting.
PAM is the module stack that handles authentication; pam_limits is what applies limits.conf, so it needs to be enabled for SSH logins.
More details in man pam_limits.
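A quick way to verify after re-logging in, assuming the nproc line from the question is already in limits.conf:
# confirm pam_limits is wired into sshd
grep pam_limits /etc/pam.d/sshd
# open a fresh SSH session and re-check the limit
ssh compute-0-1 'ulimit -u'   # should now report 4096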
Thanks!
I am running a Spark application and I always get an out-of-memory exception:
Exception in thread "main" java.lang.OutOfMemoryError: unable to create new native thread
I run my program under local[5] on a node of a Linux cluster, but it still gives me this error. Can someone point me to how to rectify this in my Spark application?
Looks like a problem with the ulimit configured on your machine. Run the ulimit -a command and you will see a result like the one below.
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 63604
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 10240
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 63604
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Check the configured values for open files and max user processes; they should be high. "unable to create new native thread" typically means the max user processes (-u) limit is being hit, since on Linux that limit also caps threads.
You can configure them using the commands below:
ulimit -n 10240
ulimit -u 63604
Once you are done configuring the ulimits, you can start your application to see the effect.
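Note that ulimit set this way only affects the current shell and its children. A sketch of making the values persistent for the user running Spark (sparkuser is a hypothetical user name), via /etc/security/limits.conf:
# /etc/security/limits.conf
sparkuser soft nofile 10240
sparkuser soft nproc 63604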
How do I debug the following points, just to find out exactly which resource is exceeding its limit?
How many processes are currently running
How many processes are running per user
No. of open files per process
Total no. of open files for all processes
Process limit and open file limit
There can be multiple ways to go about what you are trying to achieve; e.g. you could get all the information you need from the /proc filesystem. Below is a list of utilities you could use to debug the actual resource issue. Good luck.
How many processes are currently running
ps -eaf | wc -l
How many processes are running per user
ps -fu [username] | wc -l
No. of open files per process
lsof -p <pid> | wc -l
Total no. of open files for all processes
You could iterate over all the PIDs as shown above and make use of the lsof command, as sketched below. You might have to execute it as root, otherwise lsof will report permission denied for other users' processes.
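A rough sketch (run as root; each lsof invocation adds a header line, so treat the total as approximate):
for pid in $(ps -e -o pid=); do lsof -p "$pid" 2>/dev/null; done | wc -l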
Process limit and open file limit
For a specific terminal, you could do
$ ulimit -a
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 15973
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 15973
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
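System-wide counterparts to these per-shell limits also live under /proc, for example:
cat /proc/sys/fs/file-nr          # allocated, free, and max file handles system-wide
cat /proc/sys/kernel/threads-max  # system-wide thread/process ceiling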
I want to figure out how many connections my server can handle. That's why I wrote a script which creates a lot of connections (WebSocket connections).
This works fine until about 200 connections, then it stops!
I am guessing it has something to do with limits of the system (Red Hat Linux).
I tried to change the values with ulimit, but it didn't work - after a reboot they were gone.
I also changed the value of the max file handles:
cat /proc/sys/fs/file-max
900000
That was also gone after a reboot.
Can someone tell me what kind of system limit I am running into and how I can change it permanently?
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) 14904
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 1
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 14904
virtual memory (kbytes, -v) unlimited
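For reference, a sketch of making both settings survive a reboot (assuming root access; "webuser" is a hypothetical user running the script):
# kernel-wide file handle ceiling - persist via /etc/sysctl.conf
echo "fs.file-max = 900000" >> /etc/sysctl.conf
sysctl -p
# per-user open-file limit - add to /etc/security/limits.conf and re-login
webuser soft nofile 65535
webuser hard nofile 65535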