SSH bash script to test if a java process is running?

I need to create an SSH bash script (on Debian Linux) to test whether a 'java' process is running.
Here is how it should work:
IF no 'java' process is running THEN run ./start.sh
To test whether a java process is running, I can use:
ps -A | grep java
This script should run every minute (I guess from cron).

First of all, to run a job every minute in cron, your crontab should look like this:
* * * * * /path/to/script.sh
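If you want cron to keep a record of what the script does, you can also redirect its output in the crontab entry; the log path here is just an example:
* * * * * /path/to/script.sh >>/var/log/check_java.log 2>&1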
Next, you have a few different options for detecting a Java process.
Note that each of the following is a negation: they detect the absence of Java. The command substitution is quoted and tested with -z so that the test still works when more than one PID comes back:
With pgrep:
if [ -z "$(pgrep java)" ] ; then
    # no java running
fi
With pidof:
if [ -z "$(pidof java)" ] ; then
    # no java running
fi
With ps and grep:
if [ -z "$(ps -A | grep 'java')" ] ; then
    # no java running
fi
Of these, pgrep and pidof are probably the most efficient. Don't quote me on that, though.
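Putting it together, a minimal check-and-start script for cron might look like this (the script name and the path to start.sh are examples, not from the question):
#!/bin/bash
# check_java.sh -- started from cron every minute
if [ -z "$(pgrep java)" ] ; then
    # no java running, so start it; use full paths because cron has a minimal PATH
    /home/user/start.sh
fi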

The check you are doing with ps and grep doesn't look very precise. What if other Java processes are running? You may detect those and come to the wrong conclusion, because you are checking for "any" Java, not for one specific Java process.
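If you need to match one specific Java program, pgrep -f matches against the full command line instead of just the process name (the jar name below is only a placeholder):
if [ -z "$(pgrep -f 'java.*myapp.jar')" ] ; then
    # this particular java application is not running
    /home/user/start.sh
fi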

With pidof it would be something like this:
script.sh
#!/bin/bash
pidof java
if [ $? -ne 0 ]; then
    # the exit code of `pidof` wasn't 0, meaning it didn't find the process,
    # so start it, for example:
    /home/user/start.sh
    # (please don't forget to use full paths if you want to use it in cron)
fi
Especially for the skeptics, from man pidof:
EXIT STATUS
       0      At least one program was found with the requested name.
       1      No program was found with the requested name.
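Since pidof's exit status already encodes the result, the whole job can also collapse into a single crontab line (the path is an example):
* * * * * pidof java >/dev/null || /home/user/start.sh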

Related

Parallel run and wait for processes from subshell

Hi all, I'm trying to build something like the parallel tool for the shell, simply because the functionality of parallel is not enough for my task: I need to run different versions of a compiler.
Imagine that I need to compile 12 programs with different compilers, but I can run only 4 of them simultaneously (otherwise the PC runs out of memory and crashes :). I also want to be able to observe what's going on with each compile, so I execute every compile in a new window.
To keep it simple, here I'll replace the compiler with a small script, sleep.sh, that waits and then prints its own process ID:
#!/bin/bash
sleep 30
echo $$
So the main script, parallel_run.sh, looks like this:
#!/bin/bash
for i in {0..11}; do
    xfce4-terminal -H -e "./sleep.sh" &
    pids[$i]=$!
    pstree -p ${pids[$i]}
    if (( $i % 4 == 0 ))
    then
        for pid in ${pids[*]}; do
            wait $pid
        done
    fi
done
The problem is that with $! I get the PID of xfce4-terminal, not of the program it executes. So if I look at the pstree output in the 1st iteration, I see this from the main script:
xfce4-terminal(31666)----{xfce4-terminal}(31668)
|--{xfce4-terminal}(31669)
and sleep.sh reports that it had pid = 30876 at that time. Thus wait doesn't work at all in this case.
Q: How do I get the right PID of the compiler that runs in the subshell?
Or maybe there is another way to solve a task like this?
It seems there is no way to trace the PID from parent to child if you invoke the process in a new xfce4-terminal, because the terminal process dies right after it has executed the given command. So I came to a solution which is not perfect, but acceptable in my situation: I run the compiler processes in the background and redirect their output to .log files. Then I run tail on those logfiles in terminal windows, and when the compilers from the current batch are done I kill all tails belonging to the current $USER and run the next batch.
#!/bin/bash
for i in {1..8}; do
    ./sleep.sh > ./process_$i.log &
    prcid=$!
    xfce4-terminal -e "tail -f ./process_$i.log" &
    pids[$i]=$prcid
    if (( $i % 4 == 0 ))
    then
        for pid in ${pids[*]}; do
            wait $pid
        done
        killall -u $USER tail
    fi
done
Hopefully there will be no other tails running at that time :)
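If the separate terminal windows are not essential, a plain wait per batch avoids the tail/killall workaround entirely. A minimal sketch, reusing sleep.sh as the stand-in for a compile:
#!/bin/bash
for i in {1..12}; do
    ./sleep.sh > ./process_$i.log 2>&1 &
    if (( $i % 4 == 0 )); then
        wait    # block until the whole current batch has finished
    fi
done
wait    # catch any stragglers from the last partial batch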

How to run a script in the background (Linux OpenWrt)?

I have this script:
#!/bin/sh
while [ true ] ; do
    urlfile=$( ls /root/wget/wget-download-link.txt | head -n 1 )
    dir=$( cat /root/wget/wget-dir.txt )
    if [ "$urlfile" = "" ] ; then
        sleep 30
        continue
    fi
    url=$( head -n 1 $urlfile )
    if [ "$url" = "" ] ; then
        mv $urlfile $urlfile.invalid
        continue
    fi
    mv $urlfile $urlfile.busy
    wget -b $url -P $dir -o /www/wget.log -c -t 100 -nc
    mv $urlfile.busy $urlfile.done
done
The script checks wget-download-link.txt for new URLs every 30 seconds, and if there is a new URL it downloads it with wget. The problem is that when I try to run this script from PuTTY like this:
/root/wget/wget_download.sh --daemon
it still runs in the foreground and I can still see the terminal output. How do I make it run in the background?
On OpenWrt there is neither nohup nor screen available by default, so a solution using only shell builtins is to start a subshell in parentheses and put that in the background with &:
(/root/wget/wget_download.sh >/dev/null 2>&1 )&
You can test this structure easily on your desktop, for example with:
(notify-send one && sleep 15 && notify-send two)&
... and then close your console before those 15 seconds are over; you will see the commands in the parentheses continue executing after the console is closed.
The following command will also work:
((/root/wget/wget_download.sh)&)&
This way you don't have to install the nohup command in the tight memory space of a router running OpenWrt.
I found this somewhere several years ago. It works.
The & at the end of the script should be enough. If you still see output from the script, it means that stdout and/or stderr are not closed or not redirected to /dev/null.
You can use this answer:
How to redirect all output to /dev/null
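A sketch of that approach, redirecting inside the script itself so the caller only needs a trailing & (assuming you don't need the output at all):
#!/bin/sh
# from here on, send all stdout and stderr of this script to /dev/null
exec >/dev/null 2>&1
# ... rest of wget_download.sh ...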
I am using Asuswrt-Merlin, and the only way I could get it working was with the cru cron manager [1]. nohup and screen are not available as solutions.
cru a pinggw "0 * * * * /bin/ping -c 10 -q 192.168.2.254"
works like a charm.
[1] https://www.cyberciti.biz/faq/how-to-add-cron-job-on-asuswrt-merlin-wifi-router/
You can also install a full nohup on OpenWrt (https://openwrt.org/packages/pkgdata/coreutils-nohup):
opkg update
opkg install coreutils-nohup
nohup yourscript.sh &
You can use nohup.
nohup yourscript.sh
or
nohup yourscript.sh &
Your script will keep running even if you close your PuTTY session, and all the output will be written to a text file (nohup.out) in the same directory.
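If you'd rather choose where that output goes, redirect it explicitly (the log path here is just an example):
nohup yourscript.sh > /tmp/yourscript.log 2>&1 &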
nohup is often used in combination with the nice command to run processes on a lower priority.
nohup nice yourscript.sh &
See: http://en.wikipedia.org/wiki/Nohup
For BusyBox on an Asuswrt-Merlin system, I found a better solution, which combines the cru and date commands:
cru a YOUR_UNIQUE_CRON_NAME "`date -D '%s' +'%M %H %d %m *' -d $(( \`date +%s\`+2*60 ))` YOUR_CMD_HERE"
which adds a cron job that runs 2 minutes later, and only runs once.
Inspired by PlagTag's idea.
Another way would be this:
ssh admin@192.168.1.1 "/jffs/your_script.sh &"
Simple, and without any extra programs like nohup or screen...
(BTW: this worked on Asuswrt-Merlin firmware)
Try this:
nohup /root/wget/wget_download.sh >/dev/null 2>&1 &
It will go to the background, so when you close your PuTTY session it will still be running, and it won't send messages to the terminal.

How to get the process id of command executed in bash script?

I have a script where I want to run 2 programs at the same time: one is a C program and the other is cpulimit. I want to start the C program in the background first with "&", then get its PID and hand it to cpulimit, which will also run in the background with "&".
I tried doing it as below, but it just starts the first program and never starts cpulimit.
Also, I am running this as a startup script as root, using systemd on Arch Linux.
#!/bin/bash
/myprogram &
PID=$!
cpulimit -z -p $PID -l 75 &
exit 0
I think I have this solved now. According to this here: link I need to wrap the commands like this, (command), to create a subshell.
#!/bin/bash
(/myprogram &)
mypid=$!
(cpulimit -z -p $mypid -l 75 &)
exit 0
I just found this while googling and wanted to add something.
While your solution seems to work (see the comments about subshells), in this case you don't need to get the PID at all. Just run the command like this:
cpulimit -z -l 75 myprogram &

Redirecting Output of Bash Child Scripts

I have a basic script that outputs various status messages, e.g.:
~$ ./myscript.sh
0 of 100
1 of 100
2 of 100
...
I wanted to wrap this in a parent script, in order to run a sequence of child scripts and send an email upon overall completion, e.g. topscript.sh:
#!/bin/bash
START=$(date +%s)
/usr/local/bin/myscript.sh
/usr/local/bin/otherscript.sh
/usr/local/bin/anotherscript.sh
RET=$?
END=$(date +%s)
echo -e "Subject:Task Complete\nBegan on $START and finished at $END and exited with status $RET.\n" | sendmail -v group#mydomain.com
I'm running this like:
~$ topscript.sh >/var/log/topscript.log 2>&1
However, when I run tail -f /var/log/topscript.log to inspect the log, I see nothing, even though top shows that myscript.sh is currently executing and therefore presumably outputting status messages.
Why isn't the stdout/stderr from the child scripts being captured in the parent's log? How do I fix this?
EDIT: I'm also running this on a remote machine, connected via ssh using pseudo-tty allocation, e.g. ssh -t user@host. Could the pseudo-tty be interfering?
I just tried the following: I have three files t1.sh, t2.sh, and t3.sh, all with this content:
#!/bin/bash
for((i=0;i<10;i++)) ; do
echo $i of 9
sleep 1
done
And a script called myscript.sh with the following content:
#!/bin/bash
./t1.sh
./t2.sh
./t3.sh
echo "All Done"
When I run ./myscript.sh > topscript.log 2>&1 and then in another terminal run tail -f topscript.log I see the lines being output just fine in the log file.
Perhaps the things being run in your subscripts use a large output buffer? I know that when I've run Python scripts, they have a pretty big output buffer, so you don't see any output for a while. Do you actually see the entire output in the email that gets sent at the end of topscript.sh? Is it just that you don't see the output while the processes run?
try
unbuffer topscript.sh >/var/log/topscript.log 2>&1
Note that unbuffer is not always available as a standard binary on older Unix platforms; it ships with the expect package, which you may need to install first.
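If unbuffer isn't an option, stdbuf from GNU coreutils can often achieve the same effect by forcing line buffering:
stdbuf -oL -eL topscript.sh >/var/log/topscript.log 2>&1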
I hope this helps.

How do I translate init.d scripts from Ubuntu/Debian Linux to Solaris?

I have several init.d scripts that I'm using to start some daemons. I found most of these scripts on the internet, and they all use start-stop-daemon. My understanding is that start-stop-daemon is a command specific to Linux or BSD distros and is not available on Solaris.
What is the best way to translate my init.d scripts from Linux to Solaris? Is there a command equivalent to start-stop-daemon that I can use, roughly?
Since I'm not much of a Solaris user, I'm willing to admit upfront that I don't even know if my question is inherently invalid or not.
start-stop-daemon is a Linux thing and not used much on Solaris. I guess you could port the command, though, if you want to reuse your init scripts.
Otherwise it depends on which version of Solaris you are using. Starting with Solaris 10 (and OpenSolaris), there is a new startup framework, the Solaris Service Management Facility (SMF), which you manage with the commands svcs, svccfg, and svcadm.
See http://www.oracle.com/technetwork/server-storage/solaris/overview/servicemgmthowto-jsp-135655.html for more information.
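A sketch of the basic SMF workflow; the manifest path and service name below are hypothetical:
svccfg import /var/svc/manifest/site/mydaemon.xml   # register the service from its manifest
svcadm enable svc:/site/mydaemon:default            # start it now and on every boot
svcs -x svc:/site/mydaemon:default                  # inspect its state and any errors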
For older Solaris releases most init scripts are written in pure shell without any helper commands like start-stop-daemon.
On Solaris 10 or later using SMF is recommended, but on an earlier release you'd create an init script in /etc/init.d and link to it from the rcX.d directories. Here's a bare-bones example of an init script for launching an rsync daemon:
#!/sbin/sh
startcmd () {
    /usr/local/bin/rsync --daemon    # REPLACE WITH YOUR COMMANDS
}
stopcmd () {
    pkill -f "/usr/local/bin/rsync --daemon"    # REPLACE WITH YOUR COMMANDS
}
case "$1" in
'start')
    startcmd
    ;;
'stop')
    stopcmd
    ;;
'restart')
    stopcmd
    sleep 1
    startcmd
    ;;
*)
    echo "Usage: $0 { start | stop | restart }"
    exit 1
    ;;
esac
Create a link to the script from each rcX.d directory (following the "S"/"K" convention).
ln rsync /etc/rc3.d/S91rsync
for i in `ls -1d /etc/rc*.d | grep -v 3`; do ln rsync $i/K02rsync; done
See the README in each rcX.d directory and check the man page for init.d. Here's a bit of the man page:
File names in rc?.d directories are of the form [SK]nn, where S means start this job, K means kill this job, and nn is the relative sequence number for killing or starting the job.
When entering a state (init S,0,2,3,etc.) the rc[S0-6] script executes those scripts in /etc/rc[S0-6].d that are prefixed with K, followed by those scripts prefixed with S. When executing each script in one of the /etc/rc[S0-6] directories, the /sbin/rc[S0-6] script passes a single argument. It passes the argument 'stop' for scripts prefixed with K and the argument 'start' for scripts prefixed with S. There is no harm in applying the same sequence number to multiple scripts.
