I'm curious which limit I'm hitting. I have the following code:
import httplib, resource, socket, traceback

print("NOFILE: %s" % str(resource.getrlimit(resource.RLIMIT_NOFILE)))
socket.setdefaulttimeout(100000)

conns = []
for _ in xrange(10000000):
    con = httplib.HTTPConnection('ya.ru')  # or 93.158.134.3
    try:
        con.connect()
        conns.append(con)
    except:
        print("Total connections: %s\n" % len(conns))
        print(traceback.format_exc())
        break

raw_input("Press any key to exit...")  # raw_input, since this is Python 2
When HTTPConnection uses a host name, I get the following output:
NOFILE: (2560, 9223372036854775807)
Total connections: 1019
Traceback (most recent call last):
  File "test.py", line 13, in <module>
    con.connect()
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 772, in connect
    self.timeout, self.source_address)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/socket.py", line 571, in create_connection
    raise err
error: [Errno 65] No route to host
And when I use the direct IP address:
NOFILE: (2560, 9223372036854775807)
Total connections: 1021
Traceback (most recent call last):
  File "test.py", line 13, in <module>
    con.connect()
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 772, in connect
    self.timeout, self.source_address)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/socket.py", line 571, in create_connection
    raise err
error: [Errno 36] Operation now in progress
It looks like pretty much the same point of failure, since 1019, 1021, and 1024 are very close values. I think 1024 is some limit within my OS.
uname -a
Darwin hackmachine 13.3.0 Darwin Kernel Version 13.3.0: Tue Jun 3 21:27:35 PDT 2014; root:xnu-2422.110.17~1/RELEASE_X86_64 x86_64
ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 2560
pipe size (512 bytes, -p) 1
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 709
virtual memory (kbytes, -v) unlimited
So, my questions are:
What limit did I actually hit?
How should these error messages be interpreted? They don't look very self-explanatory.
UPD: I tried the same thing on Debian 7. There, ulimit -n <limit> works as expected and I can establish as many connections as the ulimit allows, so it looks like an OS X specific problem.
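One way to narrow this down, purely as a diagnostic sketch (the host and the target value below are just examples taken from the question, not a recommended configuration), is to raise the soft RLIMIT_NOFILE from inside the process and rerun the loop. If the connection count still stops around 1020, the bottleneck is not the per-process descriptor limit but some other, likely system-wide, limit:

import httplib, resource, socket, traceback

# Try to raise the soft open-files limit before connecting. On OS X the
# soft limit cannot exceed kern.maxfilesperproc, so asking for the
# (effectively infinite) hard limit directly may fail; 10240 is just an
# arbitrary larger target for this experiment.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (min(hard, 10240), hard))
print("NOFILE now: %s" % str(resource.getrlimit(resource.RLIMIT_NOFILE)))

socket.setdefaulttimeout(100000)
conns = []
try:
    while True:
        con = httplib.HTTPConnection('ya.ru')  # host from the question
        con.connect()
        conns.append(con)
except Exception:
    print("Total connections: %s" % len(conns))
    print(traceback.format_exc())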
Related
I am trying to fetch a dump file from one of my Ubuntu servers. The dump file is stored in gzip format and its size is about 3 GB. When I execute an scp command on macOS Big Sur Version 11.6 the download begins normally, but after about 95 MB has been downloaded the command stops with this message.
sh: file size limit exceeded scp -P1021 /Users/andrej/Desktop
even though I have enough space on my machine
Also, the file size limit is set to unlimited on my laptop. Here is the output of the launchctl limit command from my terminal, followed by ulimit -a.
% launchctl limit
cpu unlimited unlimited
filesize unlimited unlimited
data unlimited unlimited
stack 8388608 67104768
core 0 unlimited
rss unlimited unlimited
memlock unlimited unlimited
maxproc 2784 4176
maxfiles 64000 524288
The output of ulimit -a
% ulimit -a
-t: cpu time (seconds) unlimited
-f: file size (blocks) 200000
-d: data seg size (kbytes) unlimited
-s: stack size (kbytes) 8192
-c: core file size (blocks) 0
-v: address space (kbytes) unlimited
-l: locked-in-memory size (kbytes) unlimited
-u: processes 2042
-n: file descriptors 65536
Has anyone encountered a similar problem? Any help would be appreciated.
I had not noticed that the file size limit was set to 200000 when I ran the ulimit -a command. The issue was resolved after setting this value to unlimited. (Assuming 512-byte blocks, 200000 blocks is roughly 98 MB, which lines up with the transfer stopping around 95 MB.)
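As an aside (a sketch of mine, not part of the original answer): the same limit can be read from inside a process with Python's resource module, where RLIMIT_FSIZE is the counterpart of the shell's ulimit -f, reported in bytes rather than blocks:

import resource

# RLIMIT_FSIZE caps the largest file this process may write; the shell's
# "ulimit -f" shows the same limit in blocks rather than bytes.
soft, hard = resource.getrlimit(resource.RLIMIT_FSIZE)

def fmt(value):
    if value == resource.RLIM_INFINITY:
        return "unlimited"
    return "%d bytes (~%.1f MB)" % (value, value / 1e6)

print("RLIMIT_FSIZE soft: %s" % fmt(soft))
print("RLIMIT_FSIZE hard: %s" % fmt(hard))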
Try using the rsync utility; it is well suited to large files.
The error occurs only when printing reports that have more than 150 pages.
wkhtmltopdf version : 0.12.5 (with patched qt)
OS : Ubuntu 20.04.3 LTS
CPU(s) : 48
Memory(RAM): 94GB
ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 377646
max locked memory (kbytes, -l) 65536
max memory size (kbytes, -m) unlimited
open files (-n) 200000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 200000
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
odoo configuration file
limit_memory_hard = 78114717696
limit_memory_soft = 65095598080
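As a quick sanity check added here for illustration (only the two byte values come from the configuration above; everything else is an assumption), the Odoo limits can be converted to GiB and compared with the 94 GB of RAM and the address-space limit the worker would pass on to wkhtmltopdf:

import resource

# Values copied from the odoo configuration file above.
limit_memory_hard = 78114717696  # bytes, ~72.8 GiB
limit_memory_soft = 65095598080  # bytes, ~60.6 GiB

GIB = 1024.0 ** 3
print("limit_memory_hard: %.2f GiB" % (limit_memory_hard / GIB))
print("limit_memory_soft: %.2f GiB" % (limit_memory_soft / GIB))

# Address-space limit this process (and any wkhtmltopdf child) inherits;
# RLIM_INFINITY corresponds to "virtual memory (kbytes, -v) unlimited"
# in the ulimit -a output above.
soft, hard = resource.getrlimit(resource.RLIMIT_AS)
print("RLIMIT_AS: soft=%s hard=%s" % (soft, hard))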
When I open Git Bash I get these messages:
2 [main] bash (40164) C:\Program Files\Git\usr\bin\bash.exe: *** fatal error - cygheap base mismatch detected - 0x1301410/0x12A1410.
This problem is probably due to using incompatible versions of the cygwin DLL.
Search for cygwin1.dll using the Windows Start->Find/Search facility
and delete all but the most recent version. The most recent version *should*
reside in x:\cygwin\bin, where 'x' is the drive on which you have
installed the cygwin distribution. Rebooting is also suggested if you
are unable to find another cygwin DLL.
1 [main] bash 45888 fork: child -1 - forked process 40164 died unexpectedly, retry 0, exit code 0xC0000142, errno 11
I also tried ulimit -a and got:
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
open files (-n) 256
pipe size (512 bytes, -p) 8
stack size (kbytes, -s) 2032
cpu time (seconds, -t) unlimited
max user processes (-u) 256
virtual memory (kbytes, -v) unlimited
I also found this command in another similar question, ulimit -Sn unlimited && ulimit -Sl unlimited, and got this result:
bash: ulimit: -l: invalid option
ulimit: usage: ulimit [-SHabcdefiklmnpqrstuvxPT] [limit]
I'm running Linux (Ubuntu) in VirtualBox. When I run the command
ulimit -Sv 50
It's not setting the soft limit for virtual memory size in that bash session. The error I'm getting is:
fatal error: out of heap memory
I was running the above command with sudo access. The output of ulimit -a is as follows:
-t: cpu time (seconds) unlimited
-f: file size (blocks) unlimited
-d: data seg size (kbytes) unlimited
-s: stack size (kbytes) 8192
-c: core file size (blocks) 0
-m: resident set size (kbytes) unlimited
-u: processes 515266
-n: file descriptors 1024
-l: locked-in-memory size (kbytes) 64
-v: address space (kbytes) unlimited
-x: file locks unlimited
-i: pending signals 515266
-q: bytes in POSIX msg queues 819200
-e: max nice 0
-r: max rt priority 0
-N 15: unlimited
I ran a binary with AFL using a memory limit (-m) of 50 MB, so I need to run this binary (without AFL) under the same memory limit. Any insights would be helpful.
Looks like you are setting the limit to 50 kilobytes, not 50 megabytes (see the units in help ulimit).
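In other words, ulimit -Sv takes 1024-byte units, so a 50 MB cap would be ulimit -Sv 51200 (50 * 1024). If the goal is just to run the target binary under the same memory cap AFL applied, one option is a small wrapper like the sketch below; "./target" is a placeholder for the actual binary, and RLIMIT_AS is close to what AFL's -m option sets for its child:

import resource
import subprocess

MEM_LIMIT = 50 * 1024 * 1024  # 50 MB in bytes

def limit_memory():
    # Runs in the child between fork and exec: cap its address space.
    resource.setrlimit(resource.RLIMIT_AS, (MEM_LIMIT, MEM_LIMIT))

# "./target" is a placeholder for the binary from the question.
subprocess.call(["./target"], preexec_fn=limit_memory)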
I want to figure out how many connections my server can handle. That's why I wrote a script which creates a lot of connections (websocket connections).
This works fine until 200 connections, then it stops!
I am guessing it has something to do with limits of the system: Red Hat Linux.
I tried to change the values with ulimit, but it didn't work: after a reboot they were gone.
I also changed the value of the max file handles:
cat /proc/sys/fs/file-max
900000
That change was also gone after a reboot.
Can someone tell me what kind of system limit I am running into and how I can change it permanently? Here is my ulimit -a output:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) 14904
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 1
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 14904
virtual memory (kbytes, -v) unlimited
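Not part of the original post, but for reference: on Red Hat systems the per-user open-files limit is usually made permanent in /etc/security/limits.conf, and fs.file-max in /etc/sysctl.conf, rather than with a plain ulimit call or an echo into /proc that is lost on reboot. Alternatively, a server process can raise its own soft limit at startup, as in this sketch (10000 is an arbitrary example target and can never exceed the hard limit the process inherited):

import resource

# Raise the soft open-files limit for this process up to (at most) the
# hard limit it inherited. This affects only the running server and does
# not persist anything across reboots.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
new_soft = min(10000, hard)  # arbitrary example value
resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))
print("open files: soft=%d hard=%d" % resource.getrlimit(resource.RLIMIT_NOFILE))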