Linux ssh bash fork retry: no child processes

I am on Arch Linux, accessing an account on a server over SSH. I ran a bash script containing recursion that results in an infinite loop of "no such file or directory" errors, which continues despite any interrupt command (Ctrl-C etc.); it is totally uninterruptible. This eventually turns into an endless stream of bash: fork: No child processes. I cannot execute any commands while this happens, and when it pauses with "Resource temporarily unavailable" I am unable to execute any commands to kill the script, because the bash: fork: No child processes messages start up again. I have no idea what to do, any help?
ps doesn't work

Looks like you've caused a fork bomb. You can try the methods here to stop it, but you'll most likely end up needing to reboot.

Run kill -9 -1 from the user login that caused the fork bomb. No need to reboot.
PS: Consult your seniors before running it on a production server.
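If the shell you still have open can no longer fork anything, note that kill is a bash builtin, so it needs no new process. A minimal sketch, assuming you run it from a shell belonging to the offending user:
kill -9 -1        # builtin: SIGKILL everything this user may signal, no fork needed
exec kill -9 -1   # fallback: exec replaces the shell with /bin/kill, also without forking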

1) ps faux (find the PID and use it in the second command)
2) kill [PID]
If the processes keep coming back because of a virus, you need to enable the virus scanner in cPanel and scan and remove it.
Important:
Hosting providers must install the following services for this interface to appear:
The ClamAV Scanner plugin in WHM’s Manage Plugins interface (WHM >> Home >> cPanel >> Manage Plugins).
The Exim Mail Server service on the server in WHM’s Service Manager interface (WHM >> Home >> Service Configuration >> Service Manager).

Run ps faux (you might need to run it from another user or with sudo) and search for the offending process (it may look like a big branch of the tree).
If needed, kill the process via its PID


Alias <cmd> to "do X then <cmd>" transparently

The title sucks, but I'm not sure of the correct term for what I'm trying to do; if I knew that, I'd probably have found the answer by now!
The problem:
Due to an over-zealous port scanner (customer's network monitor) and an overly simplistic telnet daemon (busybox linux) every time port 23 gets scanned, telnetd launches another instance of /bin/login waiting for user input via telnet.
As the port scanner doesn't actually try to login, there is no session, so there can be no session timeout, so we quickly end up with a squillion zombie copies of /bin/login running.
What I'm trying to do about it:
telnetd gives us the option (-l) of launching some other thing rather than /bin/login so I thought we could replace /bin/login with a bash script that kills old login processes then runs /bin/login as normal:
#!/bin/sh
# First kill off any existing dangling logins
# /bin/login disappears on successful login so
# there should only ever be one
killall -q login
# now run login
/bin/login
But this seems to return immediately (no error, but no login prompt). I also tried just chaining the commands in telnetd's arguments:
telnetd -- -l "killall -q login;/bin/login"
But this doesn't seem to work either (again - no error, but no login prompt). I'm sure there's some obvious wrinkle I'm missing here.
System is embedded Linux 2.6.x running Busybox so keeping it simple is the greatly preferred option.
EDIT: OK, I'm a prat for not making the script executable. With that done I get the login: prompt, but after entering the username I get nothing further.
Check that your script has the execute bit set. Permissions should be the same as for the original binary including ownership.
As for -l: my guess is that it tries to execute killall -q login;/bin/login as a single command name (that's one word).
Since this is an embedded system, it might not write logs. But you should check /var/log anyway for error messages. If there are none, you should be able to configure logging using the documentation: http://wiki.openwrt.org/doc/howto/log.overview
Right, I fixed it, as I suspected there was a wrinkle I was missing:
exec /bin/login
I needed exec to hand control over to /bin/login rather than just call it.
So the telnet daemon is started thusly:
/usr/sbin/telnetd -l /usr/sbin/not_really_login
The contents of the not-really-login script are:
#!/bin/sh
echo -n "Killing old logins..."
killall -q login
echo "...done"
exec /bin/login
And all works as it should, on telnet connect we get this:
**MOTD Etc...**
Killing old logins......done
login: zero_cool
password:
And we can login as usual.
The only thing I haven't figured out is if we can detect the exit-status of /bin/login (if we killed it) and print a message saying Too slow, sucker! or similar. TBH though, that's a nicety that can wait for a rainy day, I'm just happy our stuff can't be DDOS'ed over Telnet anymore!
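For what it's worth, a hedged sketch of a partial answer: killall itself exits 0 only when it actually signalled something, so the wrapper can at least report that a stale login was reaped. Once it execs /bin/login it can no longer observe that process's exit status, so printing anything to the killed session would need a different mechanism.
#!/bin/sh
# Sketch only: report whether any dangling login was killed before handing over.
if killall -q login; then
    echo "Killed a dangling login session."
fi
exec /bin/login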

How can I use ubuntu service to make a process persistent?

I am not convinced that is the right terminology but here is my situation.
I log into an Ubuntu server using ssh and start a node app that I wrote. I would like for the app to continue running even when I close the ssh window or when the window times out. I 'think' there is a way to do this by writing a .conf file for the app and placing it in /etc, but I don't know where to go to learn how to do that.
Any tips?
You can use the program nohup to accomplish this. It is most likely already installed on your Ubuntu distribution, it was on my 12.04 install.
nohup node test.js &
This command kicks off node test.js. Output will be streamed to nohup.out so you can view what it has been doing later on if you like. Even after the ssh session is ended, the process will keep right on going.
To kill the process later on, you can do a "killall node" or you can manually grep for the PID and kill it that way using "ps -ef | grep node"
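For example (the [n] in the pattern is just a trick that keeps the grep process itself out of the listing; take the PID from the second column):
ps -ef | grep '[n]ode'
kill 12345        # replace 12345 with the PID you found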

Execute script on remote host - output given in local host

I am trying to execute two scripts, available as .sh files with 755 permissions on a remote host.
I try calling them from the client host as below:
REMOTE_HOST="host1"
BOUNCE_SCRIPT="
/code/sys/${ENV}/comp/1/${ENV}/scripts/unix/stopScript.sh ${ENV};
/code/sys/${ENV}/comp/1/${ENV}/scripts/unix/startScript.sh ${ENV};
"
ssh ${REMOTE_HOST} "${BOUNCE_SCRIPT}"
The above lines are in a script on the local host.
When running the script on the local host, the first command on the remote host, i.e. stopScript.sh, gets executed correctly. It kills the running process it was intended to kill without any error.
However, the output of the second script, i.e. startScript.sh, gets printed to the local host window, but the process it is intended to start does not start on the remote host.
Can anyone please let me know:
Is this the right way to execute scripts on a remote host?
Should I see the output of the script running on the remote host locally as well, i.e. in the local host's window?
Thanks
You could try the -n flag for ssh:
ssh -n $REMOTE_HOST "$BOUNCE_SCRIPT" >> $LOG
The man page has further information (http://unixhelp.ed.ac.uk/CGI/man-cgi?ssh+1). The following is a snippet:
-n Redirects stdin from /dev/null (actually, prevents reading from
stdin).
Prefacing your startScript.sh line with nohup may help. Oftentimes, commands you execute remotely will die when your ssh session ends; nohup allows your process to live after the session has ended. It would be helpful to know if your process is starting at all, or if it starts and then dies.
I think cyber-monk is right: you should launch the processes with nohup to create a new independent process. Also check whether your stop script is killing the right process (the new one included).
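Putting the two suggestions together, a minimal sketch (the script paths are the ones from the question; the log file is made up, pick any writable path on the remote host):
ssh -n ${REMOTE_HOST} "
/code/sys/${ENV}/comp/1/${ENV}/scripts/unix/stopScript.sh ${ENV};
nohup /code/sys/${ENV}/comp/1/${ENV}/scripts/unix/startScript.sh ${ENV} > /tmp/startScript.log 2>&1 &
"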

Run a persistent process via ssh

I'm trying to start a test server via ssh but it always dies once I disconnect from ssh.
Is there a way to start a process (run the server) so it doesn't die upon the end of my ssh session?
As an alternative to nohup, you could run your remote application inside a terminal multiplexor, such as GNU screen or tmux.
Using these tools makes it easy to reconnect to a session from another host, which means that you can kick off a long build or download before you leave work and check on its status when you get home. For instance, I find this particularly useful when doing development work on servers that are very remote (in a different country) with unreliable connectivity between me and them; if the connection drops, I can simply reconnect and carry on without losing any state.
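A minimal screen workflow, with a made-up session name and server command:
screen -S testserver      # start a named session
./run-server.sh           # launch the server inside it
# detach with Ctrl-a d; the server keeps running after you log out
screen -r testserver      # reattach later, from this or any other ssh login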
If you're SSHing to a Linux distro that has systemd, you can use systemd-run to launch a process in the background (in systemd's terms, "a transient service"). For example, assuming you want to ping something in the background:
systemd-run --unit=pinger ping 10.8.178.3
The benefit you'll get with systemd over just running a process with nohup is that systemd will track the process and its children, keep logs, remember the exit code and allow you to cleanly kill the process and all its children. Examples:
See the status and the last lines of output:
systemctl status pinger
Stream the output:
journalctl -xfu pinger
Kill:
systemctl kill pinger
Yes; you can use the nohup command to swallow the HUP ("hangup") signal that is sent to your program when you hang up your SSH session.
Alternatively, if you're writing the server yourself, you can code it to register a handler for the HUP signal, and swallow it inside the program (rather than using an external nohup program that does the same).
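If the server is launched from a wrapper script, a minimal sketch of the shell-level equivalent is to mark SIGHUP as ignored and then hand over; dispositions set to "ignore" survive the exec (the server path is a placeholder):
#!/bin/sh
trap '' HUP               # ignore hangup from the terminal/ssh session
exec /path/to/my-server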
In addition to the other replies, you could start your test server through batch (or at), but as Brian answered, you should call daemon.
And you could pass the -f option to ssh.
As an alternative to nohup, screen, et al. you could revise your server to invoke daemon to detach it from the terminal. This is the idiomatic way to write services for linux.
See also daemon(3).

How to make sure an application keeps running on Linux

I'm trying to ensure a script remains running on a development server. It collates stats and provides a web service so it's supposed to persist, yet a few times a day, it dies off for unknown reasons. When we notice we just launch it again, but it's a pain in the rear and some users don't have permission (or the knowhow) to launch it up.
The programmer in me wants to spend a few hours getting to the bottom of the problem but the busy person in me thinks there must be an easy way to detect if an app is not running, and launch it again.
I know I could cron-script ps through grep:
ps -A | grep appname
But again, that's another hour of my life wasted on doing something that must already exist... Is there not a pre-made app that I can pass an executable (optionally with arguments) and that will keep a process running indefinitely?
In case it makes any difference, it's Ubuntu.
I have used a simple script with cron to make sure that the program is running. If it is not, then it will start it up. This may not be the perfect solution you are looking for, but it is simple and works rather well.
#!/bin/bash
#make-run.sh
#make sure a process is always running.
export DISPLAY=:0 #needed if you are running a simple gui app.
process=YourProcessName
makerun="/usr/bin/program"
if ps ax | grep -v grep | grep $process > /dev/null
then
exit
else
$makerun &
fi
exit
Then add a cron job every minute, or every 5 minutes.
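For example, in crontab -e (the path is wherever you saved make-run.sh):
* * * * * /path/to/make-run.sh
or, every 5 minutes:
*/5 * * * * /path/to/make-run.sh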
Monit is perfect for this :)
You can write simple config files which tell monit to watch e.g. a TCP port, a PID file etc
monit will run a command you specify when the process it is monitoring is unavailable/using too much memory/is pegging the CPU for too long/etc. It will also pop out an email alert telling you what happened and whether it could do anything about it.
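A hedged sketch of what such a config entry might look like (process name, pidfile and port are made up for illustration):
check process myapp with pidfile /var/run/myapp.pid
    start program = "/etc/init.d/myapp start"
    stop program  = "/etc/init.d/myapp stop"
    if failed port 8080 protocol http then restart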
We use it to keep a load of our websites running while giving us early warning when something's going wrong.
-- Your faithful employee, Monit
Notice: Upstart is in maintenance mode and was abandoned by Ubuntu, which now uses systemd. Check the systemd manual for details on how to write a service definition.
Since you're using Ubuntu, you may be interested in Upstart, which has replaced the traditional sysV init. One key feature is that it can restart a service if it dies unexpectedly. Fedora has moved to Upstart, and it is in Debian's experimental repository, so it may be worth looking into.
This may be overkill for this situation though, as a cron script will take 2 minutes to implement.
#!/bin/bash
if [[ ! `pidof -s yourapp` ]]; then
invoke-rc.d yourapp start
fi
If you are using a systemd-based distro such as Fedora or a recent Ubuntu release, you can use systemd's "Restart" capability for services. It can be set up as a system service, or as a user service if it needs to be managed by, and run as, a particular user, which is more likely the case in OP's situation.
The Restart option takes one of no, on-success, on-failure, on-abnormal, on-watchdog, on-abort, or always.
To run it as a user, simply place a file like the following into ~/.config/systemd/user/something.service:
[Unit]
Description=Something
[Service]
ExecStart=/path/to/something
Restart=on-failure
[Install]
WantedBy=graphical.target
then:
systemctl --user daemon-reload
systemctl --user [status|start|stop|restart] something
No root privilege / modification of system files needed, no cron jobs needed, nothing to install, flexible as hell (see all the related service options in the documentation).
See also https://wiki.archlinux.org/index.php/Systemd/User for more information about using the per-user systemd instance.
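Note that the [Install] section only takes effect once the unit has been enabled, so if the service should also start automatically you need one more command:
systemctl --user enable something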
From cron I have used killall -0 programname || /etc/init.d/programname start. killall will fail if no such process exists; if it does exist, it delivers a null signal, which the kernel ignores and does not bother passing on.
This idiom is simple to remember (IMHO). Generally I use this while I'm still trying to discover why the service itself is failing. IMHO a program shouldn't just disappear unexpectedly :)
Put your run in a loop, so when it exits it runs again: while true; do myapp; done
I couldn't get Chris Wendt's solution to work for some reason, and it was hard to debug. This one is pretty much the same but easier to debug, and it excludes bash from the pattern matching. To debug, just run bash /root/makerun-mysql.sh. In the following example with mysql-server, just replace the values of the process and makerun variables for your own process.
Create a BASH-script like this (nano /root/makerun-mysql.sh):
#!/bin/bash
process="mysql"
makerun="/etc/init.d/mysql restart"
if ps ax | grep -v grep | grep -v bash | grep --quiet $process
then
printf "Process '%s' is running.\n" "$process"
exit
else
printf "Starting process '%s' with command '%s'.\n" "$process" "$makerun"
$makerun
fi
exit
Make sure it's executable by adding proper file permissions (i.e. chmod 700 /root/makerun-mysql.sh)
Then add this to your crontab (crontab -e):
# Keep processes running every 5 minutes
*/5 * * * * bash /root/makerun-mysql.sh
The supervise tool from daemontools would be my preference - but then everything Dan J Bernstein writes is my preference :)
http://cr.yp.to/daemontools/supervise.html
You have to create a particular directory structure for your application startup script, but it's very simple to use.
First of all, how do you start this app? Does it fork itself into the background? Is it started with nohup ... & etc.? If it's the latter, check nohup.out to see why it died; if it's the former, build in logging.
As for your main question: you could cron it, or run another process in the background (not the best choice) and use pidof in a bash script; easy enough:
if ! pidof -s app > /dev/null; then
    nohup app &
fi
You could make it a service launched from inittab (although some Linuxes have moved on to something newer in /etc/event.d). These built-in systems make sure your service keeps running without writing your own scripts or installing something new.
It's a job for a DMD (daemon monitoring daemon). There are a few around, but I usually just write a script that checks whether the daemon is running, starts it if not, and put it in cron to run every minute.
Check out 'nanny', referenced in Chapter 9 (p. 197 or thereabouts) of The Unix-Haters Handbook (available as a PDF from several sources).
A nice, simple way to do this is as follows:
Write your server to die if it can't listen on the port it expects
Set a cronjob to try to launch your server every minute
If it isn't running it'll start, and if it is running it won't. In any case, your server will always be up.
I think a better solution is to test the functionality too. For example, if you have to monitor an Apache server, it is not enough to test whether "apache" processes exist on the system.
If you want to test whether Apache is actually OK, try to download a simple web page and check that your unique marker is in the output.
If it is not, kill Apache with -9 and then restart it, and send a mail to root (which is a forwarded mail address that reaches the people responsible for the company/server/project).
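A hedged sketch of that check; the URL, marker string, and service name are illustrative and will differ per distro:
#!/bin/sh
# Fetch a known page and look for a unique marker; restart Apache if missing.
if ! curl -fsS http://localhost/check.html | grep -q 'MY-UNIQUE-CODE'; then
    killall -9 apache2
    /etc/init.d/apache2 start
    echo "apache was restarted on $(hostname)" | mail -s "apache restarted" root
fi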
It's even simpler:
#!/bin/bash
export DISPLAY=:0
process=processname
makerun="/usr/bin/processname"
if ! pgrep $process > /dev/null
then
$makerun &
fi
You do have to remember, though, to make sure that processname is unique.
One can install a minutely monitoring cron job like this:
crontab -l > crontab;echo -e '* * * * * export DISPLAY=":0.0" && for app in "eiskaltdcpp-qt" "transmission-gtk" "nicotine";do ps aux|grep -v grep|grep "$app";done||"$app" &' >> crontab;crontab crontab
The disadvantage is that the app names you enter have to appear in the ps aux|grep "appname" output and at the same time be launchable using exactly that name: "appname" &
You can also use the pm2 library.
If it's a Node app, you can install pm2 globally with npm:
sudo npm install pm2 -g
Then you can run the service.
Linux service:
sudo pm2 start [service_name]
Node app:
pm2 start index.js
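To have the app come back after a reboot as well, pm2 can save the current process list and set up a boot hook (pm2 startup prints the exact command to run for your init system):
pm2 save
pm2 startup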
