Forked Processes When Rsyncing via SSH - linux

When I try to send a file to my remote web server via rsync from my Linux machine, I receive the following error message:
/etc/profile.d/locallib.sh: fork: retry: No child processes
/etc/profile.d/locallib.sh: fork: retry: No child processes
/etc/profile.d/locallib.sh: fork: retry: No child processes
/etc/profile.d/locallib.sh: fork: retry: No child processes
/etc/profile.d/locallib.sh: fork: Resource temporarily unavailable
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: unexplained error (code 254) at io.c(226) [sender=3.1.1]
The rsync command:
rsync -vzhe ssh some.file user@remote.server:remote.dir/

The issue is not with the rsync command; the actual problem is on the remote server. You cannot even SSH into it because its resources are exhausted, which is why /etc/profile.d/locallib.sh fails to fork during login.
Contact the system administrator in this case.
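If you (or the administrator) still have a console on the box, a quick way to see which account is eating the process table is to count processes per user; a purely diagnostic sketch:
$ ps -eo user= | sort | uniq -c | sort -rn | head   # process count per user, highest first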

Related

Ubuntu 16 gives "fork: retry: Resource temporarily unavailable", Ubuntu 20 doesn't

I have 2 machines with similar hardware. One has Ubuntu 16, the other Ubuntu 20.
I'm running a Python program that is meant to open 30K TCP connections to an endpoint. The Ubuntu 20 machine was able to do the job well just by running these 2 commands before executing the program:
#ulimit -n 1000000
#ulimit -u 1000000
However, after creating 12K connections the Ubuntu 16 machine gives this:
-su: fork: retry: No child processes
-su: fork: retry: Resource temporarily unavailable
-su: fork: retry: Resource temporarily unavailable
-su: fork: retry: Resource temporarily unavailable
Any idea what may be causing Ubuntu 16 to behave like that while Ubuntu 20 seems fine?
Note: I have tried a few things from different posts, but none of them worked.
Thanks in advance.
I think the maximum number of processes overall is lower on Ubuntu 16.04 than on 20.04,
i.e. on Ubuntu 16.04 /proc/sys/kernel/pid_max is 32768, while on 18.04 it's 131072.
I think you might have enough other processes to hit this limit; at the very least it's worth checking pid_max to see.
Also it would be better to write your test program to make many connections from a single process/thread using async code, as that would allow higher limits.
PS: Ubuntu 16.04 is end-of-life (unless you're paying for the extension), so you might want to ensure you upgrade.
PS: Ubuntu version numbers (except the minimal flavours) need the second digit block to be meaningful, i.e. say 16.04 rather than just "Ubuntu 16".
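If pid_max does turn out to be the bottleneck, checking and raising it is straightforward; a minimal sketch (131072 simply mirrors the newer default mentioned above):
$ cat /proc/sys/kernel/pid_max                                    # current system-wide limit
$ sudo sysctl -w kernel.pid_max=131072                            # raise it for the running system
$ echo 'kernel.pid_max = 131072' | sudo tee -a /etc/sysctl.conf   # persist across reboots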
OK the solution here was to stay away from Ubuntu 16

sudo ./jetty Stop or Start Failure

Jetty on our Linux server is not installed as a service, because we have multiple Jetty servers on different ports; we use ./jetty.sh stop and ./jetty.sh start to stop and start Jetty.
However, when I add sudo to the command, the server never stops/starts successfully. When I run sudo ./jetty.sh stop, it shows
Stopping Jetty: start-stop-daemon: warning: failed to kill 18772: No such process
1 pids were not killed
No process in pidfile '/var/run/jetty.pid' found running; none killed.
and the server was not stopped.
When I run sudo ./jetty.sh start, it shows
Starting Jetty: FAILED Tue Apr 23 23:07:15 CST 2019
How could this happen? From my understanding, using sudo gives you more power and privilege to run commands. If a command succeeds without sudo, it should never fail with sudo, since sudo only grants superuser privilege.
As a user, jetty.sh uses paths under $HOME.
As root, it uses system paths (such as /var/run/jetty.pid, which appears in your error).
The error you got ...
Stopping Jetty: start-stop-daemon: warning: failed to kill 18772: No such process
1 pids were not killed
No process in pidfile '/var/run/jetty.pid' found running; none killed.
... means that there was a stale pid file sitting around for a process that no longer exists.
Short answer: the processing is different if you are root (treated as a service) vs. a user (just an application).
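If you want sudo ./jetty.sh to behave again, one hedged way forward is to confirm the recorded pid really is dead, remove the stale pid file (the path comes straight from the error above), and start fresh:
$ cat /var/run/jetty.pid                # pid recorded by the service-style start
$ ps -p "$(cat /var/run/jetty.pid)"     # confirm nothing is actually running under it
$ sudo rm /var/run/jetty.pid            # drop the stale pid file
$ sudo ./jetty.sh start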

node server is starting automatically and can't be killed

Environment
Windows 10 Pro
PATH setting:
C:\Program Files (x86)\nodejs\ (v0.10.13)
C:\Program Files\nodejs (v6.2.2)
node version global: v6.2.2
npm version : 3.9.5
Description
I installed node.js, and right after installation I ran into a problem: nodejs starts a background process that takes port 8080, which must not be taken.
running command:
>netstat -ano | find "8080"
TCP 0.0.0.0:8080 0.0.0.0:0 LISTENING 4428
running command:
>taskkill /IM "node.exe" /T /F
SUCCESS: The process with PID 6140 (child process of PID 4428) has been terminated.
SUCCESS: The process with PID 2916 (child process of PID 2776) has been terminated.
SUCCESS: The process with PID 10888 (child process of PID 2776) has been terminated.
SUCCESS: The process with PID 4428 (child process of PID 2556) has been terminated.
SUCCESS: The process with PID 2776 (child process of PID 2560) has been terminated.
>netstat -ano | find "8080"
But after some time the server starts again:
>netstat -ano | find "8080"
TCP 0.0.0.0:8080 0.0.0.0:0 LISTENING 11052
I looked through startup applications, services and scheduled tasks, and found nothing containing node or npm.
I also found no info on this event in the application logs of the mmc console.
What part of the Windows system can cause the node server to start?
How can I change this default port for the node server?
I also noticed that if I start some other process and bind it to 8080 right after killing the node.exe processes, node keeps trying to [start a new process and connect to 8080]----{fail}---->[retry attempt].
Task Manager displays 2 node.exe processes, on ports 8000 and 8080. GET requests on those ports result in "Cannot get /". All node processes point to C:\Program Files (x86)\nodejs\ (v0.10.13).
SOLVED
I don't know why exactly, but it seems that node v0.10.13 (which was installed along with Aptana Studio 3) came with some sort of service that wasn't easily visible in the service list and was causing the trouble. Removing this old version of node removed the server instances from the process list.
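For anyone hunting a similar phantom listener, a way to see what keeps respawning node.exe is to inspect the owning service and the parent process chain; a sketch using standard Windows tools from an elevated prompt:
>tasklist /svc /FI "IMAGENAME eq node.exe"
>wmic process where "name='node.exe'" get ProcessId,ParentProcessId,CommandLine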

Vagrant refusing to start (Remote connection disconnect)

Vagrant refuses to start after I made some changes to networking. I was getting the following:
$ vagrant up
default: Warning: Connection timeout. Retrying...
default: Warning: Remote connection disconnect. Retrying...
default: Warning: Remote connection disconnect. Retrying...
default: Warning: Remote connection disconnect. Retrying...
default: Warning: Remote connection disconnect. Retrying...
I tried to fix this by restarting the service (which failed), which then resulted in this:
$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Checking if box 'ubuntu/trusty64' is up to date...
==> default: Clearing any previously set network interfaces...
There was an error while executing `VBoxManage`, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.
Command: ["hostonlyif", "create"]
Stderr: 0%...
Progress state: NS_ERROR_FAILURE
VBoxManage: error: Failed to create the host-only adapter
VBoxManage: error: VBoxNetAdpCtl: Error while adding new interface: failed to open /dev/vboxnetctl: No such file or directory
VBoxManage: error: Details: code NS_ERROR_FAILURE (0x80004005), component HostNetworkInterface, interface IHostNetworkInterface
VBoxManage: error: Context: "int handleCreate(HandlerArg*, int, int*)" at line 66 of file VBoxManageHostonly.cpp
Others recommended restarting the VirtualBox service, but this also failed:
✗ sudo "/Library/Application Support/VirtualBox/LaunchDaemons/VirtualBoxStartup.sh" restart
Unloading VBoxDrv.kext
(kernel) Can't remove kext org.virtualbox.kext.VBoxDrv; services failed to terminate - 0xe00002c7.
Failed to unload org.virtualbox.kext.VBoxDrv - (iokit/common) unsupported function.
Error: Failed to unload VBoxDrv.kext
Fatal error: VirtualBox
After much digging, it appears the restart command was failing due to VirtualBox processes holding locks.
This was fixed by doing:
# kill all virtualbox related processes
$ ps aux | grep -i vbox | grep -v grep | awk '{print $2}' | xargs sudo kill
# restart virtualbox service
$ sudo "/Library/Application Support/VirtualBox/LaunchDaemons/VirtualBoxStartup.sh" restart
# try again
$ vagrant up
This worked for me.
Also make sure you have enabled adapter 2 in VirtualBox.
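On macOS it is also worth confirming that the VirtualBox kernel extensions actually came back up before retrying vagrant up; a quick check, assuming the standard org.virtualbox kext naming seen in the error above:
$ kextstat | grep -i org.virtualbox   # should list VBoxDrv (and the VBoxNet* extensions)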

Telnet inside chroot environment

I have set up a chroot jail inside a folder using debootstrap. Inside this jail, I installed telnetd. But when I try to log in from a remote host, the connection is closed right after login.
administrator@ubuntu:/$ telnet 192.168.1.100
Trying 192.168.1.100...
Connected to 192.168.1.100.
Escape character is '^]'.
Ubuntu 12.04 LTS
dchub login: trail
Password:
Last login: Mon Sep 9 09:51:47 UTC 2013 from 192.168.1.200 on pts/3
Welcome to Ubuntu 12.04 LTS (GNU/Linux 3.9.9-1-ARCH x86_64)
* Documentation: https://help.ubuntu.com/
Cannot execute /bin/bash: Resource temporarily unavailable
Connection closed by foreign host.
administrator@ubuntu:/$
I have already mounted /proc and /dev/pts.
I finally figured out what the problem was.
My host system has zsh as its default shell, and I was using it to enter the chroot jail and start the telnet server, whereas the jail's default shell is bash. When I entered the chroot jail with bash and started the telnet server instead, it worked!
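For reference, entering the jail with bash explicitly looks something like this (/srv/jail is a placeholder for wherever the debootstrap tree lives):
$ sudo chroot /srv/jail /bin/bash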
The error message below is still shown to me on each login, but everything else works fine.
-bash: fork: retry: No child processes
-bash: fork: retry: No child processes
-bash: fork: retry: No child processes
-bash: fork: retry: No child processes
-bash: fork: Resource temporarily unavailable
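Those leftover fork errors are often a sign of hitting the per-user process limit (RLIMIT_NPROC) rather than anything chroot-specific; a purely diagnostic sketch to compare the limit against the current count, run inside the jail as the same user:
$ ulimit -u                             # max user processes for this login
$ ps -u "$USER" --no-headers | wc -l    # processes currently owned by this user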
