Getting Error ECONNRESET intermittently with mosquitto and node.js

I am getting an intermittent error on the node.js side while subscribing to a topic from MQTT.
I have enabled the MQTT (mosquitto) log files and found the error below:
Unable to accept new connection, system socket count has been exceeded. Try increasing "ulimit -n" or equivalent.
Whenever the above message appears in the mosquitto log file, I get the ECONNRESET error on the node.js side at the same time.
I checked the ulimit on the server and it gives me the details below:
ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 256380
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 62987
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
My Linux version is as follows:
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-1062.12.1.vz7.131.10
Architecture: x86-64
Is the problem related to ulimit? Do I need to increase the ulimit value at the server level?
How do I fix the ECONNRESET issue on the node.js side?

You need to increase the open files count on the broker.
You can do it for the running process with the prlimit command, but you should also set it for the user running mosquitto so it's persistent across restarts, by editing the /etc/security/limits.conf file. You will need to log out and back in for it to take effect for a normal user, and probably restart the service for a daemon user.
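For example, a minimal sketch, assuming the broker runs as a user named mosquitto and that 65535 is an acceptable limit (both are assumptions; adjust to your setup):
# one-off, for the already-running broker process
prlimit --pid "$(pidof mosquitto)" --nofile=65535:65535
# persistent: add to /etc/security/limits.conf, then restart the broker
mosquitto  soft  nofile  65535
mosquitto  hard  nofile  65535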

Related

Getting java.lang.OutOfMemoryError thrown at me when running Spark inside Docker

I'm trying to run a Spark instance within Docker and am frequently getting this exception thrown:
16/10/30 23:20:26 ERROR SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[Executor task launch worker-1,5,main]
java.lang.OutOfMemoryError: unable to create new native thread
I'm using this Docker image https://github.com/sequenceiq/docker-spark.
My ulimits seem ok inside the container:
bash-4.1# ulimit -a
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 29747
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1048576
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 1048576
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
They also look good outside the container, on the host:
kane@thinkpad ~> ulimit -a
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 29747
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 29747
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
My Googling told me that systemd can limit the tasks and cause this issue, but I've got my task limit set to infinity:
kane@thinkpad ~> grep TasksMax /usr/lib/systemd/system/docker.service
20:TasksMax=infinity
kane@thinkpad ~> systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2016-10-31 08:22:39 AWST; 3h 14min ago
Docs: http://docs.docker.com
Main PID: 1107 (docker-current)
Tasks: 56
Memory: 34.9M
CPU: 30.292s
Any ideas? My Spark code is simply reading from a Kafka instance (running in a separate Docker container) and doing a basic map/reduce. Nothing fancy.
The error states that you can't create more native threads because you don't have enough memory. It doesn't necessarily mean you have hit the ulimits; rather, there isn't enough memory to create more threads.
The stack size allocated when creating a thread in a JVM is controlled by the -Xss flag and is 1024k by default, if I remember correctly. If you don't have a lot of recursive calls, you can try decreasing -Xss to be able to create more threads with the same amount of available memory. If -Xss is too small, you will encounter a StackOverflowError.
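For illustration only (the 512k value and the jar name are placeholders), the flag can be passed to the Spark driver and executors through spark-submit:
spark-submit \
  --conf "spark.driver.extraJavaOptions=-Xss512k" \
  --conf "spark.executor.extraJavaOptions=-Xss512k" \
  your-app.jar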
The docker-spark image uses the hadoop-docker image, which contains HDFS and YARN services.
You may be allocating too much of your container's memory to the JVM heap sizes (HDFS, YARN), so there is not enough memory remaining to allocate new threads.
Hope it'll help

Ulimit change after reboot has no effect

I changed /etc/security/limits.conf and rebooted the machine remotely. However, after the boot, the nproc parameter still has the old value.
[ost@compute-0-1 ~]$ cat /etc/security/limits.conf
* - memlock -1
* - stack -1
* - nofile 4096
* - nproc 4096 <=====================================
[ost@compute-0-1 ~]$
Broadcast message from root@compute-0-1.local
(/dev/pts/0) at 19:27 ...
The system is going down for reboot NOW!
Connection to compute-0-1 closed by remote host.
Connection to compute-0-1 closed.
ost@cluster:~$ ssh compute-0-1
Warning: untrusted X11 forwarding setup failed: xauth key data not generated
Last login: Tue Sep 27 19:25:25 2016 from cluster.local
Rocks Compute Node
Rocks 6.1 (Emerald Boa)
Profile built 19:00 23-Aug-2016
Kickstarted 19:08 23-Aug-2016
[ost@compute-0-1 ~]$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 516294
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 4096
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) unlimited
cpu time (seconds, -t) unlimited
max user processes (-u) 1024 <=========================
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Note that I set max user processes to 4096, but after the reboot the value is still 1024.
Take a look at the file /etc/pam.d/sshd.
If you can find it, open the file and insert the following line:
session required pam_limits.so
Then the new value will remain effective even after rebooting.
PAM is a module related to authentication, so you need to enable the pam_limits module for SSH logins.
More details are in man pam_limits; a quick check is sketched below.
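For example (using the compute-0-1 host from the question):
grep pam_limits /etc/pam.d/sshd    # should now show: session required pam_limits.so
# then log in again over ssh and confirm the new limit:
ulimit -u                          # should report 4096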
Thanks!

Unable to create new native thread in Spark application

I am running a Spark application and I always get an out-of-memory exception:
Exception in thread "main" java.lang.OutOfMemoryError: unable to create new native thread
I run my program with local[5] on a cluster node on Linux, but it still gives me this error. Can someone point me to how to rectify that in my Spark application?
It looks like a problem with the ulimit configured on your machine. Run the ulimit -a command and you will see a result like the one below.
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 63604
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 10240
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 63604
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Check the configured values for open files and max user processes. They should be high.
You can configure them using the commands below:
ulimit -n 10240
ulimit -u 63604
Once you are done configuring the ulimits, you can start your application to see the effect; see the note below on making the values persistent.
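Note that ulimit run in a shell only affects that session. To keep the same values across logins you could, for example, add the equivalent entries to /etc/security/limits.conf (a sketch using the same values):
*  -  nofile  10240
*  -  nproc   63604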

forkpty fails for jailed linux user

I have Ubuntu 12.04 set up on the server. Every registered user is also registered as a Linux user and jailed with limited access to system resources through /etc/security/limits.conf.
I tried running a server as one of the registered users. The app is a nodejs app - http://github.com/pocha/terminal-codelearn . It uses https://github.com/chjj/pty.js to create a pseudo terminal for every user who connects to the nodejs app.
The app fails with a 'forkpty(3) failed' error pointing to line 184 of https://github.com/chjj/pty.js/blob/65dd89fd8f87de914ff1814362918d7bd87c9cbf/src/unix/pty.cc
pid_t pid = pty_forkpty(&master, name, NULL, &winp);

if (pid) {
  for (i = 0; i < argl; i++) free(argv[i]);
  delete[] argv;
  for (i = 0; i < envc; i++) free(env[i]);
  delete[] env;
  free(cwd);
}

switch (pid) {
  case -1:
    return ThrowException(Exception::Error(
        String::New("forkpty(3) failed.")));
I am able to deploy the app successfully on http://nitrous.io . They probably have a similar way of jailing users. I ran ulimit -a and matched every value except pending signals. Somehow on my server the maximum pending signals value does not exceed roughly 90k, while it is about 548k on the Nitrous server.
Below is the ulimit -a output from Nitrous server
ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 548288
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 512
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 256
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
The app fails on Heroku with exactly the same error.
Can anybody help with how to make the app run on my server the way it works on nitrous.io?
I know that Heroku fails to forkpty because they're not actually running a POSIX system, just something very POSIX-like. So some things, like forkpty, just don't work. I don't think there's a way around that :( wish there were.
I am not sure I understand the POSIX part, but I figured out that in my jailed environment there was no /dev/ptmx and no /dev/pts/*. I googled, created them, and it started working.
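Roughly what that looks like, as a sketch (the jail root path is a placeholder; run as root):
mkdir -p /path/to/jail/dev/pts
mknod -m 666 /path/to/jail/dev/ptmx c 5 2    # /dev/ptmx is character device 5,2
mount -t devpts devpts /path/to/jail/dev/pts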

Open hundreds of TCP (WebSocket) connections on one client

I want to figure out how many connections my server can handle. That's why I wrote a script which creates a lot of connections (WebSocket connections).
This works fine until 200 connections, then it stops!
I am guessing it has something to do with system limits: Red Hat Linux.
I tried to change the ulimit values, but it didn't work -> after a reboot they were gone.
I also changed the value of the max file handles:
cat /proc/sys/fs/file-max
900000
-> also gone after reboot
Can someone tell me what kind of system limit I am running into and how I can change that permanently?!
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) 14904
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 1
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 14904
virtual memory (kbytes, -v) unlimited
