I am running into what appears to be a connection limit for the YEDIS interface to YugaByte (or maybe an internal RPC connection limit).
This limit is around 800 simultaneous connections. The following throws an error after a while:
java -jar ./yb-sample-apps.jar \
--workload RedisKeyValue \
--nodes 127.0.0.1:6379 \
--nouuid \
--value_size 256 \
--num_threads_read 0 \
--num_threads_write 800 \
--num_unique_keys 1000000000
The error looks like this:
tablet: f9b5581437774f97979c868e283c628d, num_ops: 1, num_attempts: 5, txn: 00000000-0000-0000-0000-000000000000) passed its deadline 57037.830s (passed: 3.851s
But this seems to run fine indefinitely:
java -jar ./yb-sample-apps.jar \
--workload RedisKeyValue \
--nodes 127.0.0.1:6379 \
--nouuid \
--value_size 256 \
--num_threads_read 0 \
--num_threads_write 500 \
--num_unique_keys 1000000000
How can I raise the connection limit? Or is this a bug? 800 connections is nowhere near enough; my application maxes out at more like 8,000 simultaneous connections.
As far as I can tell, my ulimit settings are fine:
[root@72c14ca48af1 yugabyte]# ulimit -a
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 29892
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1048576
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) unlimited
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Thanks for reporting this issue, and for the additional input on the YugaByte Slack channel that helped isolate it.
It turned out there were two issues at play here:
a) When a yb-tserver is launched on its own, it assumes it can use 85% of the system RAM (this is configurable), but the yb-ctl way of launching a test cluster gives the yb-tserver process only 1GB of RAM by default.
b) For redis connections, the fixed overhead of each connection was 1MB. So at about 8000 connections, this overhead itself would require about 8GB of memory. This is controlled by the redis_rpc_block_size yb-tserver gflag that defaults to 1MB.
Due to these two factors, writes to the system were being rejected with the following error:
I0624 21:32:28.317205 6772 maintenance_manager.cc:341] we have exceeded our soft memory limit (current capacity is 136.82%). However, there are no ops currently runnable which would free memory.
The following overrides should unblock your workload:
./yb-ctl destroy
./yb-ctl start --disable_ysql --tserver_flags="redis_rpc_block_size=131072,memory_limit_hard_bytes=6000000000"
./yb-ctl setup_redis
The above memory_limit_hard_bytes value of ~6GB assumes that you have an 8GB machine. Note that the yb-master's memory requirements aren't too high.
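To put numbers on it: at the default redis_rpc_block_size of 1MB, 8,000 connections would need roughly 8,000 x 1MB = ~8GB for connection buffers alone, while at 131072 bytes (128KB) the same 8,000 connections need only about 1GB, which fits comfortably under the ~6GB hard limit set above.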
Related
I am getting an intermittent error on the Node.js end while subscribing to a topic from MQTT.
I have configured the MQTT log files and found the error below:
Unable to accept new connection, system socket count has been exceeded. Try increasing "ulimit -n" or equivalent.
While the above message appears in the MQTT log file, I get an ECONNRESET error on the Node.js end at the same time.
I have checked ulimit on the server, and it gives me the details below:
ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 256380
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 62987
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
My Linux version is as below
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-1062.12.1.vz7.131.10
Architecture: x86-64
Is the problem related to ulimit? Do I need to increase the ulimit value at the server level?
How do I fix the ECONNRESET issue on the Node.js end?
You need to increase the open files count on the broker.
You can do it for the running process with the prlimit command, but you should also raise it for the user running mosquitto so it's persistent across restarts; you can do this by editing the /etc/security/limits.conf file. You will need to log out and back in for it to take effect for a normal user, and probably restart the service for a daemon user.
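As a sketch (the user name and limit values here are assumptions; adjust them for your setup), the limits.conf entries would look like:

# /etc/security/limits.conf -- assuming the broker runs as user "mosquitto"
mosquitto soft nofile 65536
mosquitto hard nofile 65536

and for the already-running process:

# raise the open-files limit of the live broker without a restart
prlimit --pid "$(pidof mosquitto)" --nofile=65536:65536

Note that if mosquitto is managed as a systemd service, limits.conf is not consulted for it; in that case add a LimitNOFILE=65536 line to the unit file (or a drop-in override) and restart the service.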
I have a multi-threaded Java application that spawns as many threads as there are reports to generate at a given moment. At the end of the process, I generate an Excel file with Apache POI (3.15) via WorkbookFactory.create(file), where file is an empty template I use to create a brand-new Excel file.
With a particularly intensive report (it takes hours to generate), when the code reaches this point, it throws this exception:
Caused by: java.nio.channels.ClosedByInterruptException
at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:163)
at org.apache.poi.util.IOUtils.readFully(IOUtils.java:164)
at org.apache.poi.poifs.filesystem.NPOIFSFileSystem.<init>(NPOIFSFileSystem.java:229)
at org.apache.poi.poifs.filesystem.NPOIFSFileSystem.<init>(NPOIFSFileSystem.java:168)
at org.apache.poi.ss.usermodel.WorkbookFactory.create(WorkbookFactory.java:250)
at org.apache.poi.ss.usermodel.WorkbookFactory.create(WorkbookFactory.java:222)
at org.apache.poi.ss.usermodel.WorkbookFactory.create(WorkbookFactory.java:201)
at it.habble.report.designers.InvoiceCheckDesigner.<init>(InvoiceCheckDesigner.java:87)
I've read somewhere that it could be related to the limits.conf file. Do you have any advice on how to investigate this? Current values:
[user#localhost ~]$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 191942
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 8192
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 2048
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
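For what it's worth, ClosedByInterruptException means the thread performing the channel I/O was interrupted, so this points at thread interruption (for example, a timeout or Future.cancel(true) on one of your report threads) rather than at a ulimit. A minimal sketch that reproduces the same stack-trace shape (the file name is a hypothetical stand-in for your template):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;

public class InterruptDemo {
    public static void main(String[] args) throws IOException {
        // Set this thread's interrupt flag, then do an interruptible channel
        // read: FileChannel notices the flag, closes itself, and throws
        // ClosedByInterruptException -- the same exception as in the POI trace.
        Thread.currentThread().interrupt();
        try (FileChannel ch = FileChannel.open(Paths.get("template.xlsx"))) { // hypothetical template
            ch.read(ByteBuffer.allocate(16));
        }
    }
}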
I'm trying to run a Spark instance within Docker and am frequently getting this exception thrown:
16/10/30 23:20:26 ERROR SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[Executor task launch worker-1,5,main]
java.lang.OutOfMemoryError: unable to create new native thread
I'm using this Docker image https://github.com/sequenceiq/docker-spark.
My ulimits seem ok inside the container:
bash-4.1# ulimit -a
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 29747
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1048576
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 1048576
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
They also look good outside the container, on the host:
kane@thinkpad ~> ulimit -a
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 29747
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 29747
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
My Googling told me that systemd can limit the number of tasks and cause this issue, but I've got my task limit set to infinity:
kane@thinkpad ~> grep TasksMax /usr/lib/systemd/system/docker.service
20:TasksMax=infinity
kane@thinkpad ~> systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2016-10-31 08:22:39 AWST; 3h 14min ago
Docs: http://docs.docker.com
Main PID: 1107 (docker-current)
Tasks: 56
Memory: 34.9M
CPU: 30.292s
Any ideas? My Spark code is simply reading from a Kafka instance (running in a separate Docker container) and doing a basic map/reduce. Nothing fancy.
The error states that you can't create more native threads because you don't have enough memory. It doesn't necessarily mean you have hit the ulimits; rather, there isn't enough memory to create another thread.
The memory needed to create a thread in a JVM is controlled by the -Xss flag and is 1024k by default, if I remember correctly. If you don't have a lot of recursive calls, you can try decreasing -Xss to create more threads within the same amount of available memory. If -Xss is too small, you will encounter a StackOverflowError.
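For example (the values and jar name here are illustrative, not recommendations), the stack size can be lowered for Spark's JVMs like this:

# shrink the per-thread stack from the ~1m default to 512k
spark-submit \
  --conf "spark.driver.extraJavaOptions=-Xss512k" \
  --conf "spark.executor.extraJavaOptions=-Xss512k" \
  my-app.jar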
The docker-spark image uses the hadoop-docker image, which contains HDFS and YARN services.
You may be allocating too much of your container's memory to your JVM heap sizes (HDFS, YARN), and therefore there is not enough memory remaining to allocate new threads.
Hope it helps.
I am running a Spark application and I always get an out-of-memory exception:
Exception in thread "main" java.lang.OutOfMemoryError: unable to create new native thread
I run my program with local[5] on a Linux node in a cluster, but it still gives me this error. Can someone show me how to rectify this in my Spark application?
This looks like a problem with the ulimit configured on your machine. Run the ulimit -a command and you will see a result like the one below.
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 63604
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 10240
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 63604
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Check the configured values for open files and max user processes; they should be high.
You can configure them using the commands below:
ulimit -n 10240
ulimit -u 63604
Once you are done configuring the ulimits, you can start your application to see the effect.
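To verify that the limits took effect and to watch the JVM's thread count, something like the following works (the PID is a placeholder):

ulimit -n                 # should now print 10240
ulimit -u                 # should now print 63604
ps -o nlwp= -p 12345      # number of threads in the Spark JVM with PID 12345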
I want to figure out how many connections my server can handle. That's why I wrote a script which creates a lot of connections (WebSocket connections).
This works fine until 200 connections, then it stops!
I am guessing it has something to do with the limits of the system: Red Hat Linux.
I tried to change the ulimit values, but it didn't work: after a reboot they were gone.
I also changed the value of the max file handles:
cat /proc/sys/fs/file-max
900000
-> also gone after reboot
Can someone tell me what kind of system limit I am running into, and how I can change it permanently?!
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) 14904
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 1
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 14904
virtual memory (kbytes, -v) unlimited
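The usual way to make both changes survive a reboot on Red Hat systems is roughly the following (the user name and values are illustrative). In /etc/security/limits.conf:

# per-user open file limit ("youruser" is a placeholder)
youruser soft nofile 65536
youruser hard nofile 65536

and in /etc/sysctl.conf:

# system-wide file handle limit
fs.file-max = 900000

Then run sysctl -p to apply the sysctl change, and log out and back in for the limits.conf change to take effect.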