Launch mvn exec:java as a Linux daemon

I am trying to execute the Maven plugin exec:java as a daemon on Linux. When I do it "manually" from the console, it seems to work:
$ nohup mvn -f $PATH_TO_POM exec:java -Pxyz &
launches the daemon and redirects the usual console output to the file nohup.out. I could not figure out what the -P parameter does, but I can use it to find the PID of the launched process and to kill the process:
$ pgrep -f xyz # returns some pid, e. g. 12345
$ kill 12345
When I try to launch the daemon from within a simple bash script:
# this is part of a bash script in a separate file
$ nohup mvn -f $PATH_TO_POM exec:java -Pxyz /tmp 2>> /dev/null >> /dev/null &
$ pgrep -f xyz # returns some PID, e.g. 12345
$ jps -l # shows that 12345 belongs to org.codehaus.plexus.classworlds.launcher.Launcher
$ jps -m # shows "12345 Launcher -f $PATH_TO_POM exec:java -Pxyz /tmp"
it fails: after executing the script and then
$ ps -p 12345
there is no such process with PID 12345, although the script above delivered that PID.

$ nohup mvn -f $PATH_TO_POM exec:java -Pxyz 2>> /dev/null >> /dev/null &
without the trailing /tmp argument works.
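A common pattern for this kind of launch script is to capture $! right after backgrounding the process instead of guessing the PID later with pgrep. A minimal sketch (the pidfile and logfile locations are assumptions):

```shell
#!/bin/bash
# Minimal start/stop pair for a backgrounded command.
# PIDFILE/LOGFILE locations are assumptions; adjust to taste.
PIDFILE=${PIDFILE:-/tmp/daemon.pid}
LOGFILE=${LOGFILE:-/tmp/daemon.log}

start_daemon() {
    # $! is the PID of the process just put in the background,
    # so there is no need to rediscover it with pgrep.
    nohup "$@" >>"$LOGFILE" 2>&1 &
    echo $! > "$PIDFILE"
}

stop_daemon() {
    kill "$(cat "$PIDFILE")" && rm -f "$PIDFILE"
}

# e.g.: start_daemon mvn -f "$PATH_TO_POM" exec:java -Pxyz
```

Note that with Maven this records the PID of the mvn launcher process, not necessarily of any process it forks internally.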

Related

Running an interactive command as a background process in a shell script

I am facing an issue when trying to run an interactive command/app in the background of a shell script. I am trying to log the output of the command to a file, but the command does not log anything to it. Even executing the command in bash directly did not work, as it gets suspended.
Sample script
#!/bin/bash
while true
do
    ./a.out > test &
    PID=$!
    sleep 20
    kill -9 $PID
done
[#myprog]$ ./a.out &
[1] 3275
Program started
[1]+ Stopped ./a.out
[#myprog]$
You don't need to create a background process to redirect data. You can do the following, which creates a logfile actions.log that records your every action.
#! /bin/bash
while read -p "action " act; do
    echo "$act"
done > actions.log
exit 0
For you, this would be something like:
$ ./a.out > test.log
If you do want to have a background process, but need to input data:
$ function inter_child {
./inter.sh <<-EOF
a
b
c
d
EOF
sleep 10
}
$ inter_child &
$ wait
$ cat actions.log
a
b
c
d
If this doesn't answer your question, please be more specific about why you need to create a child process and what a.out is expecting. Hope this helps!
EDIT:
stdout and stderr are two different streams with separate redirections.
Write stderr to a file with 2>: $ ./a.out 2> error.log
Redirect stderr to stdout with 2>&1: $ ./a.out > log.log 2>&1
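Note that the order matters: 2>&1 duplicates stderr onto whatever stdout points at when it is evaluated, so it must come after the file redirection. A small illustration (err is a made-up helper function):

```shell
#!/bin/bash
# err writes one line to stdout and one line to stderr.
err() { echo "out"; echo "err" >&2; }

err > /tmp/both.log 2>&1        # stdout goes to the file first, then stderr follows it
err 2>&1 > /tmp/out.log         # wrong order: stderr still goes to the terminal
```

After this, /tmp/both.log contains both lines, while /tmp/out.log contains only the stdout line.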

How to kill a process by reading from pid file using bash script in Jenkins?

Inside Jenkins, I have to run 2 separate scripts: start.sh and stop.sh. These scripts are inside my application, which is fetched from SCM; they are in the same directory.
The start.sh script runs a process in the background using nohup, and writes the processId to save_pid.pid. This script works fine. It successfully starts my application.
Then inside stop.sh, I am trying to read the processId from save_pid.pid in order to kill the process. But I am unable to kill the process, and the application keeps running until I kill it manually using: sudo kill {processId}.
Here are the approaches that I have tried so far inside stop.sh but none of these work:
kill $(cat /path/to/save_pid.pid)
kill `cat /path/to/save_pid.pid`
kill -9 $(cat /path/to/save_pid.pid)
kill -9 `cat /path/to/save_pid.pid`
pkill -F /path/to/save_pid.pid
I have also tried all of these steps with sudo, but it just doesn't work. I have kept an echo statement inside stop.sh, which prints, and then there is nothing.
What am I doing wrong here ?
UPDATE:
The nohup command that I am using inside start.sh is something like this:
nohup deploy_script > $WORKSPACE/app.log 2>&1 & echo $! > $WORKSPACE/save_pid.pid
Please Note:
In my case, the value written to save_pid.pid is surprisingly always one less than the actual processId!
I think the reason this happens is that you are not getting the PID of the process you are interested in, but the PID of the shell executing your command.
Look:
$ echo "/bin/sleep 10" > /tmp/foo
$ chmod +x /tmp/foo
$ nohup /tmp/foo & echo $!
[1] 26787
26787
nohup: ignoring input and appending output to 'nohup.out'
$ pgrep sleep
26789
So nohup will exec the shell, and the shell will fork a second shell to exec sleep in. However, I can only count two processes here, so I am unable to account for one of the created PIDs.
Note that if you put the nohup and the pgrep on one line, pgrep will apparently start faster than the shell that execs sleep, and thus pgrep will yield nothing, which somewhat confirms my theory:
$ nohup /tmp/foo & echo $! ; pgrep sleep
[2] 26899
nohup: ignoring input and appending output to 'nohup.out'
$
If you launch your process directly, then nohup will "exec" your process and thus keep the same PID for the process as nohup itself had (see http://sources.debian.net/src/coreutils/8.23-4/src/nohup.c/#L225):
$ nohup /bin/sleep 10 & echo "$!"; pgrep sleep
[1] 27130
27130
nohup: ignoring input and appending output to 'nohup.out'
27130
Also, if you 'exec' 'sleep' inside the script, then there's only one process that's created (as expected):
$ echo "exec /bin/sleep 10" > /tmp/foo
$ nohup /tmp/foo & echo "$!"; pgrep sleep
[1] 27309
27309
nohup: ignoring input and appending output to 'nohup.out'
27309
Thus, according to my theory, if you exec your process inside the script, you will get the correct PID.
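The suggested fix can be sketched end to end; a dummy wrapper and sleep stand in for the real deploy script. With exec in the wrapper, the PID captured by $! stays the PID of the final process:

```shell
#!/bin/bash
# Build a tiny wrapper that execs its workload, so the wrapper's PID
# becomes the workload's PID (no extra forked shell in between).
cat > /tmp/wrapper.sh <<'EOF'
#!/bin/bash
exec sleep 30
EOF
chmod +x /tmp/wrapper.sh

nohup /tmp/wrapper.sh >/dev/null 2>&1 &
echo $! > /tmp/save_pid.pid   # this PID now really is sleep's PID
```

Without the exec line in the wrapper, the recorded PID would belong to the wrapper shell instead, reproducing the off-by-one symptom from the question.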

How to use multiple exec command in a upstart script?

Here is what I tried in order to run multiple exec commands, but I get output for email and not for sms. Is there a way to run both exec commands?
description "starts a kafka consumer for email and sms "
respawn
start on runlevel [2345]
stop on runlevel [!2345]
env FOUNDATION_HOME=/opt/home/configs
env VIRTUAL_ENV=/opt/home/virtualenvs/analytics
# run as non privileged user
setuid xxx
setgid xxx
console log
chdir /opt/xxx
exec stdbuf -oL /opt/xxx/virtualenvs/analytics/bin/python -m yukon.pipelinerunnerexternal /opt/xxx/configs/datastream.pheme_sms > /tmp/sms.out 2>&1
exec stdbuf -oL /opt/xxx/virtualenvs/analytics/bin/python -m yukon.pipelinerunnerexternal /opt/xxx/configs/datastream.pheme_email > /tmp/email.out 2>&1
post-start script
PID=`status kafka_upstart | egrep -oi '([0-9]+)$' | head -n1`
echo $PID > /var/tmp/kafka_upstart.pid
end script
post-stop script
rm -f /var/tmp/kafka_upstart.pid
end script
You can try concatenating them with && (assuming they're not blocking indefinitely):
exec stdbuf -oL /opt/xxx/virtualenvs/analytics/bin/python -m yukon.pipelinerunnerexternal /opt/xxx/configs/datastream.pheme_sms > /tmp/sms.out 2>&1 && stdbuf -oL /opt/xxx/virtualenvs/analytics/bin/python -m yukon.pipelinerunnerexternal /opt/xxx/configs/datastream.pheme_email > /tmp/email.out 2>&1
Or put the commands in a separate script, kafkalaunch.sh, then run the script:
exec kafkalaunch.sh
Which is more elegant in my opinion.
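A sketch of what kafkalaunch.sh could contain (paths copied from the question; backgrounding both consumers and then waiting is an assumption about the desired behavior, since upstart's exec tracks a single foreground process):

```shell
#!/bin/bash
# kafkalaunch.sh: start both pipelines in the background, then wait,
# so upstart keeps tracking one foreground process.
BIN=/opt/xxx/virtualenvs/analytics/bin/python

stdbuf -oL "$BIN" -m yukon.pipelinerunnerexternal \
    /opt/xxx/configs/datastream.pheme_sms > /tmp/sms.out 2>&1 &
stdbuf -oL "$BIN" -m yukon.pipelinerunnerexternal \
    /opt/xxx/configs/datastream.pheme_email > /tmp/email.out 2>&1 &

wait
```

One caveat: with this layout, respawn restarts the whole script, i.e. both consumers, if either one causes the script to exit.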

Why does nohup not launch my script?

Here is my script.sh
for ((i=1; i<=400000; i++))
do
    echo "loop $i"
    echo
    numberps=`ps -ef | grep php | wc -l`
    echo $numberps
    if [ $numberps -lt 110 ]
    then
        php5 script.php &
        sleep 0.25
    else
        echo too much process
        sleep 0.5
    fi
done
When I launch it with:
./script.sh > /dev/null 2>/dev/null &
that works, except that when I log out from SSH and log in again, I cannot stop the script with kill %1 and jobs -l is empty.
When I try to launch it with
nohup ./script.sh &
it just outputs
nohup: ignoring input and appending output to `nohup.out'
but no php5 processes are running: nohup has no effect at all.
I have 2 alternatives to solve my problem:
1) ./script.sh > /dev/null 2>/dev/null &
If I log out from SSH and log in again, how can I delete this job?
or
2) How to make nohup run correctly ?
Any idea ?
nohup is not supposed to allow you to use jobs -l or kill %1 to kill jobs after logging out and in again.
Instead, you can
Run the script in the foreground in a GNU Screen or tmux session, which lets you log out, log in, reattach and continue the same session.
Use killall script.sh to kill all running instances of script.sh on the server.
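For the second option, the process can also be matched by its full command line; a runnable sketch, with a long sleep standing in for ./script.sh:

```shell
#!/bin/bash
# Demo: start a long-running stand-in for script.sh, then find and
# kill it by command-line pattern, the way you would after relogin.
sleep 300 &                      # stand-in for ./script.sh

pgrep -f 'sleep 300'             # lists its PID, even from a brand-new shell
pkill -f 'sleep 300'             # kills every process matching that pattern
```

Unlike jobs and kill %1, pgrep/pkill do not depend on the job table of the shell session that started the process.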

Getting sudo and nohup to work together

Linux newbie here.
I have a perl script which takes two command line inputs. I tried to run it in the background but this is what I got:
[~user]$ nohup sudo ./ascii_loader_script.pl 20070502 ctm_20070502.csv &
[2] 19603
[~user]$ nohup: appending output to `nohup.out'
After the system returns "nohup: appending output to `nohup.out'", no new prompt appears. Then, as soon as I type some other command, the shell tells me that the process is stopped:
[~user]$ nohup sudo ./ascii_loader_script.pl 20070502 ctm_20070502.csv &
[2] 19603
[~user]$ nohup: appending output to `nohup.out'
ls
ascii_loader_script.pl format_wrds_trd.txt nohup.out norm_wrds_trd.cfg
[2]+ Stopped nohup sudo ./ascii_loader_script.pl 20070502 ctm_20070502.csv
I've looked at this post and tried to do "sudo date" before executing the command. Still got the same thing.
http://www.sudo.ws/pipermail/sudo-users/2003-July/001648.html
The solution is to use the -b flag for sudo to run the command in the background:
$ sudo -b ./ascii_loader_script.pl 20070502 ctm_20070502.csv
You should only use nohup if you want the program to continue even after you close your current terminal session.
The problem here, imho, is not nohup, but running sudo in the background.
You are putting the process in the background (& at the end of the command), but sudo probably needs password authentication, and that is why the process stops.
Try one of these:
1) Remove the ampersand from the end of the command, reply to the password prompt, and afterwards put it in the background (by typing CTRL-Z, which stops the process, and then issuing the bg command to send it to the background).
2) Change /etc/sudoers to not ask for the user's password by including the line:
myusername ALL=(ALL) NOPASSWD: ALL
If, besides the password reply, your application waits for other input, then you can pipe the input to the command like this:
$ cat responses.txt | sudo mycommand.php
hth
You can try
sudo su
and then
nohup ./ascii_loader_script.pl 20070502 ctm_20070502.csv &
instead of
nohup sudo ./ascii_loader_script.pl 20070502 ctm_20070502.csv &
You must use sudo first, nohup second.
sudo nohup ./ascii_loader_script.pl 20070502 ctm_20070502.csv &
My working solution for evaluating disk fragmentation in the background:
Exec sudo with nohup without ampersand (&) at the end:
$ sudo nohup nice -20 find / -type f -exec filefrag "{}" \; | sed 's/^\(.*\): \([0-9]\+\) extent.*/\2\t\1/'| awk -F ' ' '$1 > 0' | sort -n -r | head -50 > filefrag.txt
Enter password for sudo;
Press Ctrl+Z;
Put the running process in the background.
$ bg 1
[1]+ sudo nohup nice -20 find / -type f -exec filefrag "{}" \; | sed 's/^\(.*\): \([0-9]\+\) extent.*/\2\t\1/' | awk -F ' ' '$1 > 0' | sort -n -r | head -50 > filefrag.txt &
Now you can exit the terminal and log in later; the process will keep running in the background, because nohup is used.
First of all, you should switch sudo and nohup.
And then:
if sudo echo Starting ...
then
sudo nohup <yourProcess> &
fi
The echo Starting ... can be replaced by any command that does not do much; I only use it as a dummy command for sudo.
This way, the sudo in the if-condition triggers the password check.
If that succeeds, the sudo session is logged in and the second call will succeed; otherwise the if fails and the actual command is not executed.
I opened an editor and typed these lines:
#!/bin/bash
sudo echo Starting ...
sudo -b MyProcess
(Where MyProcess is anything I want to run as superuser.)
Then I save the file where I want it as MyShellScript.sh.
Then I change the file permissions to allow execution.
Then I run it in a terminal. The "-b" option tells sudo to run the process separately in the background, so the process keeps running after the terminal session dies.
Worked for me in Linux Mint.
You can set it as your alias:
sudo sh -c 'nohup openvpn /etc/openvpn/client.ovpn > /dev/null 2>&1 &'
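Spelled out as an alias definition, a cleaned-up version of that command might look like this (the alias name vpnup is made up):

```shell
#!/bin/bash
# Hypothetical alias name wrapping the command from the answer above;
# redirections come before the & so both streams are silenced.
alias vpnup="sudo sh -c 'nohup openvpn /etc/openvpn/client.ovpn > /dev/null 2>&1 &'"
```

In a script, remember that aliases are only expanded when expand_aliases is enabled; in an interactive shell they work as-is.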
This should work
sudo -b -u userName ./myScript > logFile
I am just curious whether I can send this logFile as an email after ./myScript finishes running in the background.
Try:
xterm -e "sudo -b nohup php -S localhost:80 -t /media/malcolm/Workspace/sites &>/dev/null"
When you close xterm, the PHP web server stays alive.
Don't put nohup before sudo, or else the PHP web server will be killed after closing xterm.
