DSH (dancer's shell) hangs when getting tail -f logs - linux

I'm using dancer's shell dsh (http://www.netfort.gr.jp/~dancer/software/dsh.html.en) to send a tail -f command to 6 machines. I was hoping to use this to view a merged log from a service which resides in the same directory on each of these machines. The machines are all running RHEL 4. (Not my choice.)
What actually happens is that I retrieve 4-20 lines from each log, and then it just hangs.
Here are my options:
dsh -c -M -r ssh -g services -- /usr/bin/tail -f /var/myservice/my.log
"services" refers to a group of 6 servers.
I've tried several different ssh options in the dsh.conf file, including -n, -t, and -f, but it doesn't seem to make a difference. Also, screen is not installed on the target servers.
What's wrong with my command? How can I make it act like a proper tail -f?

It turns out chepner's comment is right. Those logs just don't create much output. I tried the identical command with a set of more active web applications and it works fine.
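A quick way to confirm that is to tail a file you control and append to it yourself; lines should appear immediately (a sketch; /tmp/dsh-test.log and host1 are hypothetical names):
# Terminal 1: create the test file everywhere, then follow it
dsh -c -M -r ssh -g services -- touch /tmp/dsh-test.log
dsh -c -M -r ssh -g services -- /usr/bin/tail -f /tmp/dsh-test.log
# Terminal 2: generate a line on one of the machines in the group
ssh host1 'date >> /tmp/dsh-test.log'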

I know that command as "distributed shell", but no matter.
I'm suspicious that the double-dash in the middle of your command string is asking it to accept stdin input, which would indeed make it appear to hang. Try it without the "--".

Related

How to view the output of a Linux service in real time?

A script is automatically started as a service at system startup, for example: myscript.service
Question: how can I watch my script's output in real time, just as if I had started the script from the terminal and could see its output there?
service myscript status prints a few lines but doesn't keep updating the output.
journalctl -f -u myscript
This assumes you have set the service up with systemd. Alternatively, redirect the logs to a file and follow it with tail -f.
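For the redirect-to-file route, a minimal unit sketch (the script path and log path are assumptions; StandardOutput=append: needs systemd 240 or newer):
# /etc/systemd/system/myscript.service
[Unit]
Description=My script

[Service]
ExecStart=/usr/local/bin/myscript.sh
StandardOutput=append:/var/log/myscript.log
StandardError=append:/var/log/myscript.log

[Install]
WantedBy=multi-user.target
After that, tail -f /var/log/myscript.log behaves just like watching the script in a terminal.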

LDAP - SSH script across multiple VM's

So I'm ssh'ing into a router that has several VMs. It is set up using LDAP so that each VM has the same files, settings, etc. However, they have different cores allocated and different libraries and packages installed. Instead of logging into each VM individually and running the command, I want to automate it by putting the script in .bashrc.
So what I have so far:
export LD_LIBRARY_PATH=/lhome/username
# .so files are in ~/ to avoid permission denied problems
output=$(cat /proc/cpuinfo | grep "^cpu cores" | uniq | tail -c 2)
current=server_name
if [[ `hostname -s` != $current ]]; then
ssh $current
fi
/path/to/program --hostname $(echo $(hostname -s)) --threads $((output*2))
Each VM, upon logging in, will execute this script, so I have to check if the current VM has the hostname to avoid an SSH loop. The idea is to run the program, then exit back out to the origin to resume the script. The problem is of course that the process will die upon logging out.
It's been suggested to me to use TMUX on an array of the hostnames, but I would have no idea on how to approach this.
You could install clusterSSH, set up a list of hostnames, and execute things from the terminal windows it opens. You may use screen/tmux/nohup to allow started processes to keep running even after logout.
Yet, if you still want to play around with scripting, you may install tmux, and use:
while read host; do
    scp "script_to_run_remotely" ${host}:~/
    ssh ${host} tmux new-session -d '~/script_to_run_remotely'\; detach
done < hostlist
Note: hostlist should be a list of hostnames, one per line.
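To verify afterwards that the detached sessions are actually alive, a quick check along the same lines (tmux ls lists the sessions on each host):
while read host; do
    ssh ${host} tmux ls
done < hostlist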

Shell script only starting applications when used through ssh

What can cause .sh scripts to work fine through an SSH shell, but not when executed through either PHP or crontab?
I have a VPS where I run game servers, but in order to make it maintainable, I am planning on automating much of the tedious processes (like setting up or deleting a server) and making important features (like starting and stopping servers) easily accessible to the ones who actually need them.
Now, when I made the shell scripts and tested them, they worked absolutely fine. startserver started the server, restartserver restarted it, etc. But when run from PHP, or - as I later figured out - crontab, starting servers magically does not work. Stopping them, checking if they are running, updating and all other features worked like intended, but starting a server just did not do anything. It just returned 0 while printing nothing.
For example, here is an example of a script which works in either case: (statusserver.sh)
/sbin/start-stop-daemon -v -t --start --exec ~mta/servers/$1/files/mta-server -- -d
And here is one which does not work in any case: (startserver.sh)
/sbin/start-stop-daemon -v --start --exec ~mta/servers/$1/files/mta-server -- -d
The only difference is that statusserver.sh has "-t", which will only tell you if doing the same command without -t will actually be successful. And executing statusserver.sh like so:
sudo -u mta ~mta/sh/statusserver.sh test
Indeed does work, printing something along the lines of "Would start ~mta/servers/test/files/mta-server -d". But doing this:
sudo -u mta ~mta/sh/startserver.sh $2
Does absolutely nothing. It does not print anything, and it actually returns 0. (which is supposed to mean the operation was successful)
Now for the fun part: When the server is already running, startserver.sh will do what it is supposed to do: Say that the server is already running, and returning an error code. (Because start-stop-daemon is kind enough to do that for me) But it flat out refuses to launch anything.
Replacing start-stop-daemon with something like:
sudo -u mta ~mta/servers/test/files/mta-server -d
Does exactly the same thing: It will just refuse to run, while still returning 0.
Oh by the way, it's not a sudo problem. Of that I am quite sure, since the following works fine too
sudo -u web1 sudo -u mta ~mta/scripts/startserver.sh test
So back to my question: What can cause Linux, Shell, Bash or whatever to flat out refuse to start an application when run through either PHP or crontab, while happily accepting it when launched through SSH? Is there any setting I need to switch? Any package that can be blocking up what I want to do? Any other thing I am just missing?
Look into using sudo.
Set up /etc/sudoers (using visudo) for the user that Apache runs as (usually the 'nobody' or 'apache' user). Grant sudo access to the commands you want to run, with the NOPASSWD option.
In your PHP script, use exec() to execute the commands to start/stop daemons and prefix the commands with the sudo command.
Here is an article about sudo:
http://www.cyberciti.biz/tips/allow-a-normal-user-to-run-commands-as-root.html
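A minimal sketch of that setup, assuming Apache runs as www-data and the scripts live under /home/mta/sh (both assumptions; adjust for your system):
# /etc/sudoers.d/gameservers -- edit with visudo; www-data is an assumption
www-data ALL=(mta) NOPASSWD: /home/mta/sh/startserver.sh, /home/mta/sh/stopserver.sh
Then, from PHP:
exec('sudo -u mta /home/mta/sh/startserver.sh test', $output, $rc);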
As I think Justin was touching on but didn't say specifically, the problem is likely that the apache user account (which is deliberately quite limited) can't see into the user's home directory because of its permissions. Generally only the user themselves and root can see into a home directory. You can do a few things: use sudo to run the script in the home directory, move the script out of the user's home directory, or change permissions on the scripts/homes so apache can run them there.

How to send data to local clipboard from a remote SSH session

Borderline ServerFault question, but I'm programming some shell scripts, so I'm trying here first :)
Most *nixes have a command that will let you pipe/redirect output to the local clipboard/pasteboard, and retrieve from same. On OS X these commands are
pbcopy, pbpaste
Is there any way to replicate this functionality while SSHed into another server? That is,
I'm using Computer A.
I open a terminal window
I SSH to Computer B
I run a command on Computer B
The output of Computer B is redirected or automatically copied to Computer A's clipboard.
And yes, I know I could just (shudder) use my mouse to select the text from the command, but I've gotten so used to the workflow of piping output directly to the clipboard that I want the same for my remote sessions.
Code is useful, but general approaches are appreciated as well.
My favorite way is ssh [remote-machine] "cat log.txt" | xclip -selection c. This is most useful when you don't want to (or can't) ssh from remote to local.
Edit: on Cygwin ssh [remote-machine] "cat log.txt" > /dev/clipboard.
Edit: A helpful comment from nbren12:
It is almost always possible to setup a reverse ssh connection using SSH port forwarding. Just add RemoteForward 127.0.0.1:2222 127.0.0.1:22 to the server's entry in your local .ssh/config, and then execute ssh -p 2222 127.0.0.1 on the remote machine, which will then redirect the connection to the local machine. – nbren12
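Spelled out as config (hostnames are placeholders):
# ~/.ssh/config on the local machine
Host myserver
    HostName myserver.example.com
    RemoteForward 127.0.0.1:2222 127.0.0.1:22
# then, in a shell on the remote machine:
ssh -p 2222 127.0.0.1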
I'm resurrecting this thread because I've been looking for the same kind of solution, and I've found one that works for me. It's a minor modification to a suggestion from OSX Daily.
In my case, I use Terminal on my local OSX machine to connect to a linux server via SSH. Like the OP, I wanted to be able to transfer small bits of text from terminal to my local clipboard, using only the keyboard.
The essence of the solution:
commandThatMakesOutput | ssh desktop pbcopy
When run in an ssh session to a remote computer, this command takes the output of commandThatMakesOutput (e.g. ls, pwd) and pipes it to the clipboard of the local computer ("desktop" being the name or IP of your local machine). In other words, it uses nested ssh: you're connected to the remote computer via one ssh session, you execute the command there, and the remote computer connects to your desktop via a different ssh session and puts the text on your clipboard.
It requires your desktop to be configured as an ssh server (which I leave to you and google). It's much easier if you've set up ssh keys to facilitate fast ssh usage, preferably using a per-session passphrase, or whatever your security needs require.
Other examples:
ls | ssh desktopIpAddress pbcopy
pwd | ssh desktopIpAddress pbcopy
For convenience, I've created a bash file to shorten the text required after the pipe:
#!/bin/bash
ssh desktop pbcopy
In my case, I'm using a specially named key.
I saved the script with the file name cb (my mnemonic for ClipBoard). Put it somewhere in your path, make it executable, and voila:
ls | cb
Found a great solution that doesn't require a reverse ssh connection!
You can use xclip on the remote host, along with ssh X11 forwarding & XQuartz on the OSX system.
To set this up:
Install XQuartz (I did this with soloist + pivotal_workstation::xquartz recipe, but you don't have to)
Run XQuartz.app
Open XQuartz Preferences (Cmd+,)
Make sure "Enable Syncing" and "Update Pasteboard when CLIPBOARD changes" are checked
ssh -X remote-host "echo 'hello from remote-host' | xclip -selection clipboard"
Reverse tunnel port on ssh server
All the existing solutions either need:
X11 on the client (if you have it, xclip on the server works great) or
the client and server to be in the same network (which is not the case if you're at work trying to access your home computer).
Here's another way to do it, though you'll need to modify how you ssh into your computer.
I've started using this and it's nowhere near as intimidating as it looks so give it a try.
Client (ssh session startup)
ssh username@server.com -R 2000:localhost:2000
(hint: make this a keybinding so you don't have to type it)
Client (another tab)
nc -l 2000 | pbcopy
Note: if you don't have pbcopy then just tee it to a file.
Server (inside SSH session)
cat some_useful_content.txt | nc localhost 2000
Other notes
Actually, even if you're in the middle of an ssh session, there's a way to start a tunnel, but I don't want to scare people away from what really isn't as bad as it looks. I'll add the details later if I see any interest.
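For reference, the mid-session trick is ssh's escape sequence (assuming the default ~ escape character): press Enter, then type ~C to get an ssh> prompt, and add the forward there:
ssh> -R 2000:localhost:2000
Forwarding port.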
There are various tools to access X11 selections, including xclip and XSel. Note that X11 traditionally has multiple selections, and most programs have some understanding of both the clipboard and primary selection (which are not the same). Emacs can work with the secondary selection too, but that's rare, and nobody really knows what to do with cut buffers...
$ xclip -help
Usage: xclip [OPTION] [FILE]...
Access an X server selection for reading or writing.
-i, -in read text into X selection from standard input or files
(default)
-o, -out prints the selection to standard out (generally for
piping to a file or program)
-l, -loops number of selection requests to wait for before exiting
-d, -display X display to connect to (eg "localhost:0")
-h, -help usage information
-selection selection to access ("primary", "secondary", "clipboard" or "buffer-cut")
-noutf8 don't treat text as utf-8, use old unicode
-version version information
-silent errors only, run in background (default)
-quiet run in foreground, show what's happening
-verbose running commentary
Report bugs to <astrand@lysator.liu.se>
$ xsel -help
Usage: xsel [options]
Manipulate the X selection.
By default the current selection is output and not modified if both
standard input and standard output are terminals (ttys). Otherwise,
the current selection is output if standard output is not a terminal
(tty), and the selection is set from standard input if standard input
is not a terminal (tty). If any input or output options are given then
the program behaves only in the requested mode.
If both input and output is required then the previous selection is
output before being replaced by the contents of standard input.
Input options
-a, --append Append standard input to the selection
-f, --follow Append to selection as standard input grows
-i, --input Read standard input into the selection
Output options
-o, --output Write the selection to standard output
Action options
-c, --clear Clear the selection
-d, --delete Request that the selection be cleared and that
the application owning it delete its contents
Selection options
-p, --primary Operate on the PRIMARY selection (default)
-s, --secondary Operate on the SECONDARY selection
-b, --clipboard Operate on the CLIPBOARD selection
-k, --keep Do not modify the selections, but make the PRIMARY
and SECONDARY selections persist even after the
programs they were selected in exit.
-x, --exchange Exchange the PRIMARY and SECONDARY selections
X options
--display displayname
Specify the connection to the X server
-t ms, --selectionTimeout ms
Specify the timeout in milliseconds within which the
selection must be retrieved. A value of 0 (zero)
specifies no timeout (default)
Miscellaneous options
-l, --logfile Specify file to log errors to when detached.
-n, --nodetach Do not detach from the controlling terminal. Without
this option, xsel will fork to become a background
process in input, exchange and keep modes.
-h, --help Display this help and exit
-v, --verbose Print informative messages
--version Output version information and exit
Please report bugs to <conrad@vergenet.net>.
In short, you should try xclip -i/xclip -o or xclip -i -sel clip/xclip -o -sel clip or xsel -i/xsel -o or xsel -i -b/xsel -o -b, depending on what you want.
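Spelled out, for the CLIPBOARD selection (the one Ctrl+V usually pastes):
echo hello | xclip -i -selection clipboard    # copy
xclip -o -selection clipboard                 # paste
echo hello | xsel -i -b                       # copy
xsel -o -b                                    # paste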
If you use iTerm2 on the Mac, there is an easier way. This functionality is built into iTerm2's Shell Integration capabilities via the it2copy command:
Usage: it2copy
Copies to clipboard from standard input
it2copy filename
Copies to clipboard from file
To make it work, choose iTerm2-->Install Shell Integration menu item while logged into the remote host, to install it to your own account. Once that is done, you'll have access to it2copy, as well as a bunch of other aliased commands that provide cool functionality.
The other solutions here are good workarounds but this one is so painless in comparison.
This is my solution based on SSH reverse tunnel, netcat and xclip.
First create script (eg. clipboard-daemon.sh) on your workstation:
#!/bin/bash
HOST=127.0.0.1
PORT=3333
NUM=`netstat -tlpn 2>/dev/null | grep -c " ${HOST}:${PORT} "`
if [ $NUM -gt 0 ]; then
    exit
fi
while true; do
    nc -l ${HOST} ${PORT} | xclip -selection clipboard
done
and start it in the background:
./clipboard-daemon.sh &
It will start nc piping its output to xclip, and respawn the process after each portion of data is received.
Then start an ssh connection to the remote host:
ssh user@host -R127.0.0.1:3333:127.0.0.1:3333
While logged in on remote box, try this:
echo "this is test" >/dev/tcp/127.0.0.1/3333
then try pasting on your workstation.
You can of course write a wrapper script that starts clipboard-daemon.sh first and then the ssh session. This is how it works for me. Enjoy.
Allow me to add a solution that if I'm not mistaken was not suggested before.
It does not require the client to be exposed to the internet (no reverse connections), nor does it use any xlibs on the server and is implemented completely using ssh's own capabilities (no 3rd party bins)
It involves:
Opening a connection to the remote host, then creating a fifo file on it and waiting on that fifo in parallel (same actual TCP connection for everything).
Anything you echo to that fifo file ends up in your local clipboard.
When the session is done, remove the fifo file on the server and cleanly terminate the connections together.
The solution utilizes ssh's ControlMaster functionality to use just one TCP connection for everything so it will even support hosts that require a password to login and prompt you for it just once.
Edit: as requested, the code itself:
Paste the following into your bashrc and use sshx host to connect.
On the remote machine echo SOMETHING > ~/clip and hopefully, SOMETHING will end up in the local host's clipboard.
You will need the xclip utility on your local host.
_dt_term_socket_ssh() {
    ssh -oControlPath=$1 -O exit DUMMY_HOST
}
function sshx {
    local t=$(mktemp -u --tmpdir ssh.sock.XXXXXXXXXX)
    local f="~/clip"
    ssh -f -oControlMaster=yes -oControlPath=$t "$@" tail\ -f\ /dev/null || return 1
    ssh -S$t DUMMY_HOST "bash -c 'if ! [ -p $f ]; then mkfifo $f; fi'" \
        || { _dt_term_socket_ssh $t; return 1; }
    (
        set -e
        set -o pipefail
        while true; do
            ssh -S$t -tt DUMMY_HOST "cat $f" 2>/dev/null | xclip -selection clipboard
        done &
    )
    ssh -S$t DUMMY_HOST \
        || { _dt_term_socket_ssh $t; return 1; }
    ssh -S$t DUMMY_HOST "rm $f"
    _dt_term_socket_ssh $t
}
More detailed explanation is on my website:
https://xicod.com/2021/02/09/clipboard-over-ssh.html
The simplest solution of all, if you're on OS X using Terminal and you've been ssh'ing around in a remote server and wish to grab the results of a text file or a log or a csv, simply:
1) Cmd-K to clear the output of the terminal
2) cat <filename> to display the contents of the file
3) Cmd-S to save the Terminal Output
You'll have to manually remove the first and last lines of the file, but this method is a bit simpler than relying on other packages being installed, "reverse tunnels", trying to have a static IP, etc.
This answer builds upon the chosen answer by adding more security.
That answer discussed the general form
<command that makes output> | \
ssh <user A>@<host A> <command that maps stdin to clipboard>
Where security may be lacking is in the ssh permissions allowing <user B> on <host B> to ssh into host A and execute any command.
Of course B to A access may already be gated by an ssh key, and it may even have a password. But another layer of security can restrict the scope of allowable commands that B can execute on A, e.g. so that rm -rf / cannot be called. (This is especially important when the ssh key doesn't have a password.)
Fortunately, ssh has a built-in feature called command restriction or forced command. See ssh.com, or
this serverfault.com question.
The solution below shows the general form solution along with ssh command restriction enforced.
Example Solution with command restriction added
This security enhanced solution follows the general form - the call from the ssh session on host-B is simply:
cat <file> | ssh <user-A>@<host-A> to-clipboard
The rest of this shows the setup to get that to work.
Setup of ssh command restriction
Suppose the user account on B is user-B, and B has an ssh key id-clip, that has been created in the usual way (ssh-keygen).
Then in user-A's ssh directory there is a file
/home/user-A/.ssh/authorized_keys
that recognizes the key id-clip and allows ssh connection.
Usually the contents of each line of authorized_keys is exactly the public key being authorized, e.g., the contents of id-clip.pub.
However, to enforce command restriction that public key content is prepended (on the same line) by the command to be executed.
In our case:
command="/home/user-A/.ssh/allowed-commands.sh id-clip",no-agent-forwarding,no-port-forwarding,no-user-rc,no-x11-forwarding,no-pty <content of file id-clip.pub>
The designated command "/home/user-A/.ssh/allowed-commands.sh id-clip", and only that designated command, is executed whenever the key id-clip is used to initiate an ssh connection to host-A, no matter what command is written on the ssh command line.
The command indicates a script file allowed-commands.sh, and the contents of that script file are:
#!/bin/bash
#
# You can have only one forced command in ~/.ssh/authorized_keys. Use this
# wrapper to allow several commands.
Id=${1}
case "$SSH_ORIGINAL_COMMAND" in
"to-clipboard")
notify-send "ssh to-clipboard, from ${Id}"
cat | xsel --display :0 -i -b
;;
*)
echo "Access denied"
exit 1
;;
esac
The original call to ssh on machine B was
... | ssh <user-A>@<host-A> to-clipboard
The string to-clipboard is passed to allowed-commands.sh by the environment variable SSH_ORIGINAL_COMMAND.
In addition, we have passed the name of the key, id-clip, from the line in authorized_keys which is only accessed by id-clip.
The line
notify-send "ssh to-clipboard, from ${Id}"
is just a popup messagebox to let you know the clipboard is being written - that's probably a good security feature too. (notify-send works on Ubuntu 18.04, maybe not others).
In the line
cat | xsel --display :0 -i -b
the parameter --display :0 is necessary because the process doesn't have its own X display with a clipboard, so it must be specified explicitly. This value :0 happens to work on Ubuntu 18.04 with the Wayland window server; on other setups it might not work. For a standard X server this answer might help.
host-A /etc/ssh/sshd_config parameters
Finally a few parameters in /etc/ssh/sshd_config on host A that should be set to ensure permission to connect, and permission to use ssh-key only without password:
PubkeyAuthentication yes
PasswordAuthentication no
ChallengeResponseAuthentication no
AllowUsers user-A
To make the sshd server re-read the config:
sudo systemctl restart sshd.service
or
sudo service sshd restart
Conclusion
It takes some effort to set up, but other functions besides to-clipboard can be constructed in parallel within the same framework.
Not a one-liner, but requires no extra ssh.
install netcat if necessary
use termbin: cat ~/some_file.txt | nc termbin.com 9999. This will copy the output to the termbin website and print its URL to your terminal.
visit that URL from your computer to get your output
Of course, do not use it for sensitive content.
@rhileighalmgren's solution is good, but pbcopy will annoyingly copy the last "\n" character; I use head to strip out the last character to prevent this:
#!/bin/bash
head -c -1 | ssh desktop pbcopy
My full solution is here : http://taylor.woodstitch.com/linux/copy-local-clipboard-remote-ssh-server/
The Far Manager Linux port supports synchronizing the clipboard between local and remote hosts. You just open local far2l, do "ssh somehost" inside it, run remote far2l in that ssh session, and the remote far2l works with your local clipboard.
It supports Linux, *BSD and OS X; I made a special PuTTY build to utilize this functionality from Windows as well.
For anyone googling their way to this:
The best solution in this day and age seems to be lemonade.
Various solutions are also mentioned in the neovim help text for clipboard-tool.
If you're working over e.g. a pod in a Kubernetes cluster rather than direct SSH, so that there is no way for you to do a file transfer, you can just use cat and then save the terminal output as text. For example, in macOS you can do Shell -> Export as text.

How to make sure an application keeps running on Linux

I'm trying to ensure a script remains running on a development server. It collates stats and provides a web service so it's supposed to persist, yet a few times a day, it dies off for unknown reasons. When we notice we just launch it again, but it's a pain in the rear and some users don't have permission (or the knowhow) to launch it up.
The programmer in me wants to spend a few hours getting to the bottom of the problem but the busy person in me thinks there must be an easy way to detect if an app is not running, and launch it again.
I know I could cron-script ps through grep:
ps -A | grep appname
But again, that's another hour of my life wasted on doing something that must already exist... Is there not a pre-made app that I can pass an executable (optionally with arguments) and that will keep a process running indefinitely?
In case it makes any difference, it's Ubuntu.
I have used a simple script with cron to make sure that the program is running. If it is not, then it will start it up. This may not be the perfect solution you are looking for, but it is simple and works rather well.
#!/bin/bash
#make-run.sh
#make sure a process is always running.
export DISPLAY=:0 #needed if you are running a simple gui app.
process=YourProcessName
makerun="/usr/bin/program"
if ps ax | grep -v grep | grep $process > /dev/null
then
    exit
else
    $makerun &
fi
exit
Then add a cron job every minute, or every 5 minutes.
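For example (assuming the script was saved as /usr/local/bin/make-run.sh and made executable):
*/5 * * * * /usr/local/bin/make-run.sh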
Monit is perfect for this :)
You can write simple config files which tell monit to watch e.g. a TCP port, a PID file etc
monit will run a command you specify when the process it is monitoring is unavailable/using too much memory/is pegging the CPU for too long/etc. It will also pop out an email alert telling you what happened and whether it could do anything about it.
We use it to keep a load of our websites running while giving us early warning when something's going wrong.
-- Your faithful employee, Monit
Notice: Upstart is in maintenance mode and was abandoned by Ubuntu, which now uses systemd. One should check the systemd manual for details on how to write a service definition.
Since you're using Ubuntu, you may be interested in Upstart, which has replaced the traditional sysV init. One key feature is that it can restart a service if it dies unexpectedly. Fedora has moved to Upstart, and Debian has it in experimental, so it may be worth looking into.
This may be overkill for this situation though, as a cron script will take 2 minutes to implement.
#!/bin/bash
if [[ ! `pidof -s yourapp` ]]; then
    invoke-rc.d yourapp start
fi
If you are using a systemd-based distro such as Fedora and recent Ubuntu releases, you can use systemd's "Restart" capability for services. It can be setup as a system service or as a user service if it needs to be managed by, and run as, a particular user, which is more likely the case in OP's particular situation.
The Restart option takes one of no, on-success, on-failure, on-abnormal, on-watchdog, on-abort, or always.
To run it as a user, simply place a file like the following into ~/.config/systemd/user/something.service:
[Unit]
Description=Something
[Service]
ExecStart=/path/to/something
Restart=on-failure
[Install]
WantedBy=graphical.target
then:
systemctl --user daemon-reload
systemctl --user [status|start|stop|restart] something
No root privilege / modification of system files needed, no cron jobs needed, nothing to install, flexible as hell (see all the related service options in the documentation).
See also https://wiki.archlinux.org/index.php/Systemd/User for more information about using the per-user systemd instance.
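To have it start automatically at login as well (assuming the unit above):
systemctl --user enable something.service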
From cron I have used killall -0 programname || /etc/init.d/programname start. kill will error if the process doesn't exist; if it does exist, it delivers a null signal to the process (which the kernel ignores and doesn't bother passing on).
This idiom is simple to remember (IMHO). Generally I use this while I'm still trying to discover why the service itself is failing. IMHO a program shouldn't just disappear unexpectedly :)
Put your app in a loop, so that when it exits, it runs again... while(true){ run my app.. }
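In shell terms, a minimal sketch (myapp is a placeholder):
#!/bin/bash
# relaunch myapp whenever it exits; sleep briefly to avoid a tight respawn loop
while true; do
    myapp
    sleep 1
done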
I couldn't get Chris Wendt's solution to work for some reason, and it was hard to debug. This one is pretty much the same but easier to debug, and it excludes bash from the pattern matching. To debug, just run: bash /root/makerun-mysql.sh. In the following example with mysql-server, just replace the values of the process and makerun variables for your process.
Create a BASH-script like this (nano /root/makerun-mysql.sh):
#!/bin/bash
process="mysql"
makerun="/etc/init.d/mysql restart"
if ps ax | grep -v grep | grep -v bash | grep --quiet $process
then
    printf "Process '%s' is running.\n" "$process"
    exit
else
    printf "Starting process '%s' with command '%s'.\n" "$process" "$makerun"
    $makerun
fi
exit
Make sure it's executable by adding proper file permissions (i.e. chmod 700 /root/makerun-mysql.sh)
Then add this to your crontab (crontab -e):
# Keep processes running every 5 minutes
*/5 * * * * bash /root/makerun-mysql.sh
The supervise tool from daemontools would be my preference - but then everything Dan J Bernstein writes is my preference :)
http://cr.yp.to/daemontools/supervise.html
You have to create a particular directory structure for your application startup script, but it's very simple to use.
First of all, how do you start this app? Does it fork itself to the background? Is it started with nohup .. & etc.? If it's the latter, check nohup.out for why it died; if it's the former, build in logging.
As for your main question: you could cron it, or run another process in the background (not the best choice) and use pidof in a bash script, easy enough:
if ! pidof -s app > /dev/null; then
    nohup app &
fi
You could make it a service launched from inittab (although some Linuxes have moved on to something newer in /etc/event.d). These built in systems make sure your service keeps running without writing your own scripts or installing something new.
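A sketch of the inittab entry (the id field and path are placeholders):
# /etc/inittab: respawn the app whenever it dies
ap:2345:respawn:/usr/local/bin/myapp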
It's a job for a DMD (daemon monitoring daemon). There are a few around, but I usually just write a script that checks whether the daemon is running, runs it if not, and put that in cron to run every minute.
Check out 'nanny' referenced in Chapter 9 (p197 or thereabouts) of "Unix Hater's Handbook" (one of several sources for the book in PDF).
A nice, simple way to do this is as follows:
Write your server to die if it can't listen on the port it expects
Set a cronjob to try to launch your server every minute
If it isn't running it'll start, and if it is running it won't. In any case, your server will always be up.
I think a better solution is to test the functionality, too. For example, if you have to monitor apache, it is not enough to test whether "apache" processes exist on the system.
If you want to test that apache is OK, then try to download a simple web page and test whether your unique code is in the output.
If not, kill apache with -9 and then do a restart. And send a mail to root (a forwarded mail address that reaches the people responsible for the company/server/project).
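A sketch of such a check (health.html and UNIQUE_CODE are hypothetical; assumes curl and an apache2 init script):
#!/bin/bash
# fetch a known page and restart apache if the marker string is missing
if ! curl -s http://localhost/health.html | grep -q "UNIQUE_CODE"; then
    killall -9 apache2
    /etc/init.d/apache2 start
    echo "apache restarted on $(hostname)" | mail -s "apache restarted" root
fi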
It's even simpler:
#!/bin/bash
export DISPLAY=:0
process=processname
makerun="/usr/bin/processname"
if ! pgrep $process > /dev/null
then
    $makerun &
fi
You have to remember though to make sure processname is unique.
One can install a minutely monitoring cronjob like this:
crontab -l > crontab
echo -e '* * * * * export DISPLAY=":0.0" && for app in "eiskaltdcpp-qt" "transmission-gtk" "nicotine"; do ps aux | grep -v grep | grep "$app"; done || "$app" &' >> crontab
crontab crontab
The disadvantage is that the app names you enter have to be findable in ps aux | grep "appname" output and at the same time be launchable using that same name: "appname" &
You can also use the pm2 library.
sudo apt-get install pm2
And if it's a node app, you can install it with npm:
sudo npm install pm2 -g
Then you can run the service.
Linux service:
sudo pm2 start [service_name]
npm service app:
pm2 start index.js
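To keep the process list across reboots:
pm2 startup    # prints a command that registers pm2 with your init system
pm2 save       # saves the currently running apps so they resurrect at boot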
