"bash: fork: retry: Resource temporarily unavailable" error on Windows 10 - windows-10

When I open git bash I get these messages:
2 [main] bash (40164) C:\Program Files\Git\usr\bin\bash.exe: *** fatal error - cygheap base mismatch detected - 0x1301410/0x12A1410.
This problem is probably due to using incompatible versions of the cygwin DLL.
Search for cygwin1.dll using the Windows Start->Find/Search facility
and delete all but the most recent version. The most recent version *should*
reside in x:\cygwin\bin, where 'x' is the drive on which you have
installed the cygwin distribution. Rebooting is also suggested if you
are unable to find another cygwin DLL.
1 [main] bash 45888 fork: child -1 - forked process 40164 died unexpectedly, retry 0, exit code 0xC0000142, errno 11
I also tried ulimit -a and got:
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
open files (-n) 256
pipe size (512 bytes, -p) 8
stack size (kbytes, -s) 2032
cpu time (seconds, -t) unlimited
max user processes (-u) 256
virtual memory (kbytes, -v) unlimited
I also found this command in another similar question, ulimit -Sn unlimited && ulimit -Sl unlimited, and got this result:
bash: ulimit: -l: invalid option
ulimit: usage: ulimit [-SHabcdefiklmnpqrstuvxPT] [limit]
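If the goal was only to raise the limits, a minimal variant that this bash build accepts would drop the unsupported -l part; whether that helps with the cygheap/fork error above is a separate question, and the value here is only an example:
# Git Bash rejects -l, so only adjust the soft open-files limit
ulimit -Sn 3200
ulimit -a            # confirm the new open files value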

Related

Message: file size limit exceeded when doing scp command on macOS Big Sur Version 11.6

I am trying to fetch a dump file from one of my Ubuntu servers. The dump file is stored in .gzip format and its size is about 3 GB. When I execute an scp command on macOS Big Sur Version 11.6, the download begins normally, but after about 95 MB has been downloaded the command stops with this message:
sh: file size limit exceeded scp -P1021 /Users/andrej/Desktop
even though I have enough space on my machine
Also, the file size limit is set to unlimited on my laptop. Here is the output of the launchctl limit command from my terminal, and of ulimit -a.
% launchctl limit
cpu unlimited unlimited
filesize unlimited unlimited
data unlimited unlimited
stack 8388608 67104768
core 0 unlimited
rss unlimited unlimited
memlock unlimited unlimited
maxproc 2784 4176
maxfiles 64000 524288
The output of ulimit -a
% ulimit -a
-t: cpu time (seconds) unlimited
-f: file size (blocks) 200000
-d: data seg size (kbytes) unlimited
-s: stack size (kbytes) 8192
-c: core file size (blocks) 0
-v: address space (kbytes) unlimited
-l: locked-in-memory size (kbytes) unlimited
-u: processes 2042
-n: file descriptors 65536
Maybe someone has encountered a similar problem? Any help would be appreciated.
I had not noticed that I had the file size limit set to 200000 when I ran the ulimit -a command. The issue was resolved after setting this value to unlimited.
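For reference, a minimal sketch of that fix in the shell before re-running the transfer (making it permanent, e.g. in the shell profile, depends on your setup and is an assumption here):
ulimit -f              # show the current file size limit (in 512-byte blocks)
ulimit -f unlimited    # lift it for the current shell session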
Try using the rsync utility; it is well suited to large files.
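A hedged sketch of such a transfer, reusing the port from the question; the user and remote path are placeholders, and -P keeps partial files so an interrupted download can be resumed:
rsync -av -P -e "ssh -p 1021" user@server:/path/to/dump.gz ~/Desktop/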

Getting Error ECONNRESET intermittently with mosquitto and node.js

I am getting an intermittent error on the Node.js end while subscribing to a topic from MQTT.
I have configured MQTT log files and found the below error
Unable to accept new connection, system socket count has been exceeded. Try increasing "ulimit -n" or equivalent.
While the above message appears in the mosquitto log file, I get the ECONNRESET error on the Node.js end at the same time.
I have checked the ulimit on the server and it gives me the details below:
ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 256380
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 62987
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
My Linux version is as below
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-1062.12.1.vz7.131.10
Architecture: x86-64
Is the problem related to ulimit? Do I need to increase the ulimit value at the server level?
How do I fix the ECONNRESET issue on the Node.js end?
You need to increase the open files count on the broker.
You can raise it for the running process with the prlimit command, but to make it persistent across restarts you should raise it for the user running mosquitto, by editing the /etc/security/limits.conf file. You will need to log out and back in for it to take effect for a normal user, and probably restart the service for a daemon user.
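A minimal sketch of both approaches, assuming the broker runs as a user named mosquitto (the user name and the 65535 value are assumptions, not taken from the question):
# one-off, for the broker process that is already running
sudo prlimit --pid "$(pidof mosquitto)" --nofile=65535:65535
# persistent: add to /etc/security/limits.conf, then restart the service
mosquitto soft nofile 65535
mosquitto hard nofile 65535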

Error: EMFILE: too many open files, watch, unless I use sudo

Description
Recently I've run into a problem. I am not able to run yarn start in the element-web directory; I get these errors. Originally I thought it had something to do with element-web itself, so I created an issue. Some time after that I tried to run wintersmith preview in the bibviz directory and got the same errors. This was weird, so I tried to create an Angular project and run ng serve, and got the errors again. I headed back to the issue to close it, as it wasn't an element-web issue, and found that another issue had already been created with the same problem. It had been closed by turt2live with a note that it looks like you've run out of memory on your system. Based on this I turned off most programs running in the background, and then all the commands worked.
I am sure that ng serve used to work in the past.
My PC has 16 GB of RAM and the commands already fail when I am on 7/16 GB. I can't see any memory spikes when running the commands. Running the commands with sudo also completely eliminates the problem. This doesn't make any sense to me.
Research led me to ulimits, but they seem to have no effect. I have also installed watchman, with no effect.
Can someone tell me what I am missing?
Thank you in advance!
Info
I am on Debian 11 Bullseye. This is the output of a few commands that could be useful.
As a regular user:
> uname -a
Linux Simon-s-PC 5.8.0-3-amd64 #1 SMP Debian 5.8.14-1 (2020-10-10) x86_64 GNU/Linux
> sudo sysctl fs.inotify.max_user_watches
fs.inotify.max_user_watches = 524288
> ulimit -a
-t: cpu time (seconds) unlimited
-f: file size (blocks) unlimited
-d: data seg size (kbytes) unlimited
-s: stack size (kbytes) 8192
-c: core file size (blocks) 0
-m: resident set size (kbytes) unlimited
-u: processes 46482
-n: file descriptors 8192
-l: locked-in-memory size (kbytes) unlimited
-v: address space (kbytes) unlimited
-x: file locks unlimited
-i: pending signals 63664
-q: bytes in POSIX msg queues 819200
-e: max nice 0
-r: max rt priority 95
-N 15: unlimited
> yarn --version
1.22.5
With sudo su:
> sysctl fs.inotify.max_user_watches
fs.inotify.max_user_watches = 524288
> ulimit -a
-t: cpu time (seconds) unlimited
-f: file size (blocks) unlimited
-d: data seg size (kbytes) unlimited
-s: stack size (kbytes) 8192
-c: core file size (blocks) 0
-m: resident set size (kbytes) unlimited
-u: processes 63664
-n: file descriptors 1024
-l: locked-in-memory size (kbytes) 2043392
-v: address space (kbytes) unlimited
-x: file locks unlimited
-i: pending signals 63664
-q: bytes in POSIX msg queues 819200
-e: max nice 0
-r: max rt priority 0
-N 15: unlimited
I think I've found a solution:
Set limits in /etc/sysctl.conf by adding:
fs.inotify.max_user_watches=524288
fs.inotify.max_user_instances=512
Open a new terminal or reload sysctl.conf variables with
sudo sysctl --system
Run yarn start
Everything should work fine now, hopefully. If it doesn't work try setting the limits higher.
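If you want to test the values before touching sysctl.conf, a quick sketch (these settings only last until the next reboot; the numbers are the ones from the steps above):
sudo sysctl -w fs.inotify.max_user_watches=524288
sudo sysctl -w fs.inotify.max_user_instances=512
sysctl fs.inotify.max_user_watches fs.inotify.max_user_instances    # verify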

Why can't I create 50k processes in Linux?

Using Linux
$ uname -r
4.4.0-1041-aws
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.3 LTS
Release: 16.04
Codename: xenial
With limits allowing up to 200k processes
$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 563048
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 524288
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) unlimited
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
$ cat /proc/sys/kernel/pid_max
200000
$ cat /proc/sys/kernel/threads-max
1126097
And enough free memory to give 1MB each to 127k processes
$ free
total used free shared buff/cache available
Mem: 144156492 5382168 130458252 575604 8316072 137302624
Swap: 0 0 0
And I have fewer than 1k existing processes/threads.
$ ps -elfT | wc -l
832
But I cannot start 50k processes
$ echo '
seq 50000 | while read _; do
sleep 20 &
done
' | bash
bash: fork: retry: Resource temporarily unavailable
bash: fork: retry: Resource temporarily unavailable
bash: fork: retry: Resource temporarily unavailable
bash: fork: retry: Resource temporarily unavailable
bash: fork: retry: Resource temporarily unavailable
bash: fork: retry: Resource temporarily unavailable
...
Why can't I create 50k processes?
It turned out to be caused by systemd.
In addition to kernel.pid_max and ulimit, I also needed to change a third limit.
/etc/systemd/logind.conf
[Login]
UserTasksMax=70000
And then restart.
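As a hedged alternative to a full reboot, restarting the logind service may be enough for the new UserTasksMax to take effect, depending on the systemd version:
sudo systemctl restart systemd-logind
systemctl show -p TasksMax "user-$(id -u).slice"    # check the limit applied to your session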
Building on Basile's answer, you probably ran out of pids.
cat /proc/sys/kernel/pid_max gives me 32768 on my machine (2^15), which is less than 50k.
EDIT: I missed that /proc/sys/kernel/pid_max is set to 200000. That probably isn't the issue in this case.
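For completeness, if pid_max were the bottleneck, raising it would look roughly like this (the value is only an example, and the file name under /etc/sysctl.d/ is arbitrary):
sudo sysctl -w kernel.pid_max=200000                                          # temporary, until reboot
echo "kernel.pid_max = 200000" | sudo tee -a /etc/sysctl.d/99-pid-max.conf    # persistent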
Because each process requires some resources: some RAM (including some kernel memory), some CPU, etc.
Each process has its own virtual address space, including its own call stack (and some of it requires physical resources, including several pages of RAM; read more about resident set size; on my desktop the RSS of a typical bash process is about 6 MB). So a process is actually quite a heavyweight object.
BTW, this is not specific to Linux.
Read more about operating systems, e.g. Operating Systems : Three Easy Pieces
Try also cat /proc/$$/maps and cat /proc/$$/status and read more about proc(5). Read about failures of fork(2) and of execve(2). The "Resource temporarily unavailable" message corresponds to EAGAIN (see errno(3)), and several reasons can make fork fail with EAGAIN. On my system, cat /proc/sys/kernel/pid_max gives 32768 (and reaching that limit makes fork fail with EAGAIN).
BTW, imagine if you could fork ten thousand processes. Then the context-switch time would dominate the actual running time.
Your Linux system looks like an AWS instance. Amazon won't let you create that many processes, because their hardware is not expecting that much.
(On some costly supercomputer or server with, e.g., a terabyte of RAM and a hundred cores, perhaps you could run 50K processes; I guess they would need some particular kernel or kernel configuration. I recommend getting help from Amazon support.)

Ulimit change after reboot has no effect

I have changed /etc/security/limits.conf and rebooted the machine remotely. However, after the boot, the nproc parameter still has the old value.
[ost@compute-0-1 ~]$ cat /etc/security/limits.conf
* - memlock -1
* - stack -1
* - nofile 4096
* - nproc 4096 <=====================================
[ost@compute-0-1 ~]$
Broadcast message from root@compute-0-1.local
(/dev/pts/0) at 19:27 ...
The system is going down for reboot NOW!
Connection to compute-0-1 closed by remote host.
Connection to compute-0-1 closed.
ost@cluster:~$ ssh compute-0-1
Warning: untrusted X11 forwarding setup failed: xauth key data not generated
Last login: Tue Sep 27 19:25:25 2016 from cluster.local
Rocks Compute Node
Rocks 6.1 (Emerald Boa)
Profile built 19:00 23-Aug-2016
Kickstarted 19:08 23-Aug-2016
[ost@compute-0-1 ~]$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 516294
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 4096
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) unlimited
cpu time (seconds, -t) unlimited
max user processes (-u) 1024 <=========================
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
As you can see, I set max user processes to 4096, but after the reboot the value is still 1024.
Please take a look at the file /etc/pam.d/sshd.
If you can find it, open the file and insert the following line:
session required pam_limits.so
Then the new value will be effective even after rebooting.
PAM is a module related to authentication, so you need to enable the pam_limits module for SSH logins.
More details in man pam_limits.
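A hedged sketch of what that looks like and a quick check (the exact contents of /etc/pam.d/sshd vary by distribution, and the line may already be present on some systems):
# /etc/pam.d/sshd (excerpt)
session    required     pam_limits.so
# after reconnecting, confirm the new limit is applied
ssh compute-0-1 'ulimit -u'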
Thanks!
