Issue when starting wiremock-standalone using crontab - linux

I have a new regression suite that uses the Wiremock standalone JAR. In order to ensure this is running on the server, I have this script called checkwiremock.sh
#!/bin/bash
cnt=$(ps -eaflc --sort stime | grep wiremock-standalone-2.11.0.jar | grep -v grep | wc -l)
if (test $cnt -eq 1); then
    echo "Service already running..."
else
    echo "Starting Service"
    nohup java -jar /etc/opt/wiremock/wiremock-standalone-2.11.0.jar --port 1324 --verbose &
fi
The script works as expected when run manually:
./checkwiremock.sh
However, when started using crontab,
* * * * * /bin/bash /etc/opt/wiremock/checkwiremock.sh
WireMock returns:
No response could be served as there are no stub mappings in this WireMock instance.
The only difference I can see between the manually started process and the cron-started process is the TTY:
root 31526 9.5 3.2 1309736 62704 pts/0 Sl 11:28 0:01 java -jar /etc/opt/wiremock/wiremock-standalone-2.11.0.jar --port 1324
root 31729 22.0 1.9 1294104 37808 ? Sl 11:31 0:00 java -jar /etc/opt/wiremock/wiremock-standalone-2.11.0.jar --port 1324
Can't figure out what is wrong here.
Server details:
Red Hat Enterprise Linux Server release 6.5 (Santiago)
*Edit: corrected paths to ones actually used

The problem is the working directory. WireMock loads its stub mappings from the mappings directory under the current working directory, and cron does not start your script from the directory you use when running it manually, so the instance comes up with no stubs loaded. Change the directory in checkwiremock.sh before launching the JAR:
cd /path/to/shell/script
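For example, a minimal sketch of the "Starting Service" branch with the working directory fixed (the /etc/opt/wiremock path is taken from the question; the standalone JAR also accepts a --root-dir option that points WireMock at the directory containing mappings/ and __files/):
echo "Starting Service"
cd /etc/opt/wiremock || exit 1    # cron does not start the script in the WireMock root directory
nohup java -jar wiremock-standalone-2.11.0.jar --port 1324 --verbose &
# alternatively, keep the absolute path and add: --root-dir /etc/opt/wiremock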

Related

Linux: process of a bash shell launched by crontab still running after the shell is terminated

There is an issue I would like to solve. I'm going to deploy a PHP web application under an Apache server on a Linux Red Hat 5 production environment (called PENV); I am developing the application on a development environment (called DENV) running Linux Mint 20.3.
On DENV, I created a crontab for the user www-data containing the following scheduled command:
0 4,12 * * * sh /bdir/s_etlShell.sh >/dev/null 2>&1;
The script /bdir/s_etlShell.sh runs every day at 4:00 AM and at noon, and its execution lasts between 2 and 10 minutes. It also writes to a log file, /bdir/logshell.txt.
The last two instructions of the script are:
echo "SHELL TERMINATED" >> /bdir/logshell.txt
exit
After the 4:00 AM and noon runs, I find SHELL TERMINATED as the final line inside /bdir/logshell.txt, but when I run the following command in a terminal:
ps fax | grep "s_etlShell.sh" | grep -v grep
I get the following output (the PIDs vary, obviously):
1596 ? Ss 0:00 \_ /bin/sh -c sh /bdir/s_etlShell.sh >/dev/null 2>&1
1605 ? S 0:00 \_ sh /bdir/s_etlShell.sh
The script's processes look as if they were still active even though the script has terminated; I would expect no output instead.
I need to check the status of the script's execution in the web application via a PHP script (check_etl_shell_status.php) called every 2 seconds by the following JavaScript function:
function loadCall() {
    setInterval(function () { $("#id_content").load("check_etl_shell_status.php", 'q='); }, 2000);
}
The function loadCall() is called when the home page loads.
The content of check_etl_shell_status.php is the following:
<?php
$output = shell_exec('ps fax | grep "s_etlShell.sh" | grep -v grep');
if ($output) {
    echo "shell is still running...";
} else {
    echo "shell terminated";
}
?>
and the output message is displayed inside a div of the home page
...
<div id="id_content"></div>
...
Is there a way to make sure that, when the script has terminated, whether it was launched by crontab or on demand by the web application, I get the right information about its status?
Thanks to whoever can help me.
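(Not from the thread above, just a sketch of one alternative: instead of parsing ps output, the shell script itself can record its state in a status file, which the PHP check then reads. The /bdir paths come from the question; the status file name is made up.)
# at the top of /bdir/s_etlShell.sh
echo "RUNNING" > /bdir/etl_status.txt
# ... ETL work ...
# last instructions, as in the question
echo "SHELL TERMINATED" >> /bdir/logshell.txt
echo "TERMINATED" > /bdir/etl_status.txt
exit
check_etl_shell_status.php could then test whether file_get_contents('/bdir/etl_status.txt') contains TERMINATED instead of running ps through shell_exec.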

Cron script to restart memcached not working

I have a script in cron to check memcached and restart it if it's not working. For some reason it's not functioning.
Script, with permissions:
-rwxr-xr-x 1 root root 151 Aug 28 22:43 check_memcached.sh
Crontab entry:
*/5 * * * * /home/mysite/www/check_memcached.sh 1> /dev/null 2> /dev/null
Script contents:
#!/bin/sh
ps -eaf | grep 11211 | grep memcached
if [ $? -ne 0 ]; then
    service memcached restart
else
    echo "eq 0 - memcache running - do nothing"
fi
It works fine if I run it from the command line but last night memcached crashed and it was not restarted from cron. I can see cron is running it every 5 minutes.
What am I doing wrong?
Do I need to use the following instead of service memcached restart?
/etc/init.d/memcached restart
I have another script that checks to make sure my lighttpd instance is running, and it works fine. It verifies the process a little differently, but it uses the init.d call to restart things.
Edit - Resolution: Using /etc/init.d/memcached restart solved this problem.
What usually causes crontab problems is command paths. In an interactive shell, PATH is already set up so that commands are found, but under cron it often is not. If this is your issue, you can solve it by adding the following line at the top of your crontab:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
This will give cron explicit paths to look through to find the commands your script runs.
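With the entry from the question, the crontab would then look like this:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
*/5 * * * * /home/mysite/www/check_memcached.sh 1> /dev/null 2> /dev/null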
Also, your shebang in your script is wrong. It needs to be:
#!/bin/bash
I suspect the problem is with the grep 11211 - it's not clear what the number is meant to match (11211 is memcached's default port), and that grep may not be matching the desired process.
I think you need to log the actions of this script - then you can see what's actually happening.
#!/bin/bash
exec >> /tmp/cronjob.log 2>&1
set -xv
cat2 () { tee -a /dev/stderr; }
ps -ef | cat2 | grep 11211 | grep memcached
if [ $? -ne 0 ]; then
    service memcached restart
else
    echo "eq 0 - memcache running - do nothing"
fi
exit 0
The set -xv output is captured in a log file under /tmp. The cat2 function copies its stdin to the log file, so you can see what grep is acting upon.
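If the grep does turn out to be the problem, a tighter check is possible; this is only a sketch, assuming pgrep is installed and that 11211 appears on the memcached command line, as the question's grep assumes:
if ! pgrep -f 'memcached.*11211' > /dev/null; then
    service memcached restart
fi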
Save the code below as check_memcached.sh:
#!/bin/bash
MEMCACHED_STATUS=`systemctl is-active memcached.service`
if [[ ${MEMCACHED_STATUS} == 'active' ]]; then
    echo " Service running.... so exiting "
    exit 1
else
    service memcached restart
fi
And you can schedule it as a cron job.
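For example, reusing the schedule and path from the question:
*/5 * * * * /home/mysite/www/check_memcached.sh > /dev/null 2>&1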

Bash subprocess is getting duplicated

I'm facing a behavior where code running in a background bash subshell (between parentheses and &) sometimes, apparently, gets called twice.
This is the case:
# script start.sh
#!/bin/bash
echo "Starting ..."
(
    java -server ...
    ret=$?
    log "Process has stopped returning: [$ret]"
    exit $ret
) &
In a normal scenario, running the start.sh script creates two processes: one for start.sh itself and another for the background subshell (the java program):
#> ps -ef | grep ^user
user 24538 1 0 Oct22 ? 00:00:00 /bin/bash start.sh
user 24539 24538 2 Oct22 ? 06:20:56 java -server ...
But after a few days, a new java process, which is a child of process 24539 (the java process), gets created:
#> ps -ef | grep ^user
user 24538 1 0 Oct22 ? 00:00:00 /bin/bash start.sh
user 24539 24538 18 Oct22 ? 06:20:56 java -server ...
user 25888 24539 2 Oct25 ? 00:00:00 java -server ...
Does anyone have any idea why/how it's happening?
This has nothing to do with the shell; if bash were involved, the parent process id of the new Java process would be 24538, not 24539. The Java process is forking itself. You'd have to look at the code to see why.
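To confirm this on a live system, the parent of each java process can be checked directly (standard procps options); the new process should show 24539 as its PPID:
ps -o pid,ppid,lstart,cmd -C java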

How can I tell what user Jenkins is running as?

I have a Bitnami Jenkins VM; how do I tell which user Jenkins is running as? I suspect it is Tomcat.
If you have access to the GUI, you can go to "Manage Jenkins" > "System Information" and look for "user.name".
I would use ps to get the UID of the process, and grep for that in /etc/passwd.
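A minimal sketch of that approach, assuming the Jenkins process can be found by matching jenkins on its command line:
pid=$(pgrep -f jenkins | head -n 1)            # PID of the Jenkins process (first match)
uid=$(ps -o uid= -p "$pid" | tr -d ' ')        # numeric UID it runs as
awk -F: -v uid="$uid" '$3 == uid {print $1}' /etc/passwd   # map the UID to a user name
# (ps -o user= -p "$pid" prints the name directly, if you prefer to skip the /etc/passwd lookup)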
You could also create a Jenkins job containing a shell script box with the "whoami" command.
Use this command to see which process your Jenkins server runs under:
ps axufwwww | grep 'jenkins\|java' -
To interpret the results, look for:
jenkins 1087 0.0 0.0 18740 396 ? S 08:00 0:00 /usr/bin/daemon --name=jenkins
jenkins 1088 1.6 20.7 3600900 840116 ? Sl 08:00 2:12 \_ /usr/bin/java
The first column is the user the processes run as (jenkins here); 1087 and 1088 are the PIDs, and they might differ for you.
ps aux | grep '/usr/bin/daemon' | grep 'jenkins' | awk {'print $1'}
The command shows running processes, greps for the /usr/bin/daemon wrapper and then for the string 'jenkins', and finally awk prints the first column of that output, which is the user running Jenkins.

Why does the command in /root/.bash_profile start twice?

Here is my /root/.bash_profile:
export DISPLAY=:42 && cd /home/df/SimulatedRpu-ex/bin && ./SimulatedRpu-V1 &
When I start my server, I run ps aux | grep SimulatedRpu and here is the output:
root 2758 0.2 1.0 62316 20416 ? Sl 14:35 0:00 ./SimulatedRpu-V1
root 3197 0.5 0.9 61428 19912 pts/0 Sl 14:35 0:00 ./SimulatedRpu-V1
root 3314 0.0 0.0 5112 716 pts/0 S+ 14:35 0:00 grep SimulatedRpu
So the program prints an error message saying the port is already in use.
But why does the command in /root/.bash_profile start twice?
Please help me, thank you! By the way, I use Red Hat Enterprise Linux 5.5.
The profile is read every time you log in. So just by logging in to run the ps aux | grep SimulatedRpu, you run the profile once more and thus start a new process.
You should put the command into an init script instead.
[EDIT] You should also run Xvnc in the same script - that way, you can start and stop the display server together with your app.
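A minimal sketch of such a SysV init script for RHEL 5 (the paths and display number come from the question; the service name, runlevels, and PID file location are made up for illustration):
#!/bin/bash
# chkconfig: 345 90 10
# description: SimulatedRpu-V1 simulator on display :42
case "$1" in
  start)
    # (Xvnc for display :42 could be started here as well, as suggested above)
    export DISPLAY=:42
    cd /home/df/SimulatedRpu-ex/bin
    ./SimulatedRpu-V1 &
    echo $! > /var/run/simulatedrpu.pid
    ;;
  stop)
    [ -f /var/run/simulatedrpu.pid ] && kill "$(cat /var/run/simulatedrpu.pid)"
    rm -f /var/run/simulatedrpu.pid
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac
It could then be installed as /etc/init.d/simulatedrpu and enabled with chkconfig --add simulatedrpu.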
Try it like this:
if ! ps aux | grep '[S]imulatedRpu'; then
    export DISPLAY=:42 && cd /home/df/SimulatedRpu-ex/bin && ./SimulatedRpu-V1 &
fi
This way it first checks whether the application is already running. The [] around the S prevent grep from finding itself ;)

Resources