best way to check logs after Makefile command - linux

One of my projects' Makefiles runs a bunch of tests on a headless browser for the functional test step. Most of the tests are for the front-end code, but I also check for any errors/warnings on the backend.
Currently, we clean the web server log, run all the (very slow) tests, and then grep the server log for any error or warning.
I was wondering if there was any way to have a listener parsing the log (e.g. tail -f | grep) running in the background, and kill the make target if it detects any error/warning during the test run.
What I have got so far (roughly sketched below) is:
Start a long-lived grep in the background and store its PID.
Run the tests.
Check the output of the long-lived grep.
Kill the PID.
In case of any error, fail.
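In shell terms, that flow is something like this (a sketch only; the log path, the pattern, and the make target are placeholders):
# sketch of the current flow; paths, pattern, and target are placeholders
tail -n 0 -f /var/log/webserver/error.log > captured.log &
LISTENER=$!
make functional-tests                      # the slow test run
kill "$LISTENER"
if grep -qiE 'error|warning' captured.log; then
    grep -iE 'error|warning' captured.log >&2
    exit 1
fi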
The only advantage this bought me so far is that I no longer lose the server logs on my dev box, since I do not have to clean them every time. But I still have to wait a long time (minutes) for a failure that may have occurred in the very first test.
Is there any solution for this?

I was wondering if there was any way to have a listener parsing the log (e.g. tail -f | grep) starting on the background, and kill the make target if it detects any error/warning during the test run.
Have you tried doing it the simple straightforward way? Like this, for example:
# make is started somehow, e.g.:
make target >make-target.log 2>&1 &
PIDOFMAKE=$!
# we know its PID and the log file name, so watch the log;
# --line-buffered makes grep pass matches through without delay
tail -f make-target.log | grep --line-buffered 'ERROR!' |
while read -r DUMMY; do
    # an error was detected, kill the make:
    kill "$PIDOFMAKE"
    break
done
(Though it is not clear to me what the OP is asking, I'm writing an answer since I can't put this much code into a comment.)

Related

Running multiple npm scripts on boot

I have a server that runs an Express app and a React app. I need to start both apps on boot.
So I added two lines to rc.local, but it seems like only the first line runs and the second one doesn't. Why is that, and how can I solve it?
Just as in any other script, the second command will only be executed after the first one has finished. That's probably not what you want when the first command is supposed to keep running pretty much forever.
If you want the second command to execute before the first has finished, and if you want the script to exit before the second command has finished, then you must arrange for the commands to run in the background.
So, at a minimum, instead of
my-first-command
my-second-command
you want:
my-first-command &
my-second-command &
However, it's better to do something a little more complex. In addition to putting the command into the background, the following also places the command's working directory at the root of the filesystem, disconnects the command's input from the console, delivers its standard output and standard error streams to the syslog service (which will typically append that data to /var/log/syslog), and protects it from unintended signals. Like:
( cd / && nohup sh -c 'my-first-command 2>&1 | logger -t my-first-command &' </dev/null >/dev/null 2>&1 )
and similarly for the second command.
The extra redirections at the end of the line are to keep nohup from emitting unwanted informational messages and creating an unused nohup.out file. You might want to leave the final 2>&1 out until you are sure that the rest of the command is correct and is behaving the way you want it to. When you get to the point where the only message shown is nohup: redirecting stderr to stdout then you can restore the 2>&1 to get rid of that message.
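Putting it together, an /etc/rc.local along these lines should start both apps at boot (a sketch; my-first-command and my-second-command stand in for the actual node/npm invocations):
#!/bin/sh
# sketch of an rc.local starting both apps in the background via logger
( cd / && nohup sh -c 'my-first-command 2>&1 | logger -t my-first-command &' </dev/null >/dev/null 2>&1 )
( cd / && nohup sh -c 'my-second-command 2>&1 | logger -t my-second-command &' </dev/null >/dev/null 2>&1 )
exit 0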

How to check if immediate previous run failed or not?

I have a perl script that runs and does some checks.
In some cases that script fails and stops processing, and in others it completes.
What I would like to do is to be able to check if the script finished within 1 minute and, if the run was somehow successful, then exit.
I thought about saving some file or checking $? as an indication, but I figured there may exist some standard, clean approach for this.
I would like a solution that works on both Linux and Mac.
You could see if your script has ended after a minute by trying something like this:
# wait a minute, then see whether the script still shows up in the process list
sleep 60
ps -ae | grep yourScript.name
This has to be executed at the same time as your script(s). If it returns nothing, that means your script isn't running anymore, i.e. it has ended.
For the final result, you could make your perl script write into a specific log file, and check the end of this log file if the ps -ae | grep yourScript.name returned nothing.
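A rough sketch of that combination, using pgrep -f instead of ps | grep, and assuming the script appends a final "OK" line to /tmp/yourScript.log on success (both of these are assumptions):
sleep 60
# if the script no longer shows up in the process list, it has ended
if ! pgrep -f yourScript.name > /dev/null; then
    # check the end of its log to see whether it finished successfully
    if tail -n 1 /tmp/yourScript.log | grep -q 'OK'; then
        echo "previous run finished successfully"
        exit 0
    fi
fi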
Hope it helped!

continuous bash script - show custom error when process killed

I am trying to write a script that keeps checking whether process "XXX" gets killed and, if it does, shows an error message with the text from /proc/XXX_ID/fd/1 and starts checking again.
I am trying to make it work on a custom thin-client distro with no more packages than needed. Linux 4.6.3TS_SMP i686.
I am new to scripting in Bash and I can't seem to get it working. I have been googling and trying different things for the last two days and got nowhere. What am I doing wrong?
#!/bin/bash
while [ true ] ; do
process_ID="$(pgrep XXX)"
tail -f /proc/${process_ID}/fd/1 > /bin/test2.txt
Everything works till now. Now I need to somehow check if test2.txt is empty or not and if its not, use text from it as error message and start checking again. I tried something like this
if [ -s /bin/test2.txt ]
then
err="$(cat /bin/test2.txt)"
notify-send "${err}"
else
fi
done
How about this:
# remember where the process's stdout (fd 1) currently points
output_filepath=$(readlink "/proc/$pid/fd/1")
# wait until the process disappears from the process table
while ps -p "$pid" > /dev/null
do
    sleep 1
done
# show the last lines it wrote
tail "$output_filepath"
The whole idea only works if the stdout (fd 1) of the process is redirected into a file which can be read by someone else. Such a redirect will result in /proc/<pid>/fd/1 being a symlink. We read and memorize the target of that symlink in the first line.
Then we wait until the ps of the given PID fails which typically means the process has terminated. Then we tail the file at the memorized path.
This approach has several weaknesses. The process (or its caller) could remove, modify, or rename the output file at termination time, or reading it can somehow be impossible (permissions, etc.). The output could be redirected to something not being a file (e.g. a pipe, a socket, etc.), then tailing it won't work.
ps not failing does not necessarily mean that the same process is still there. It's next to impossible that a PID is reused that quickly, but not completely impossible.
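Put together with the loop from the question, the whole watcher might look roughly like this (a sketch; it assumes a single XXX process whose stdout really is redirected to a readable file, and that notify-send exists on the thin client):
#!/bin/bash
# sketch: watch process XXX and, when it dies, show its last output as an error
while true; do
    pid=$(pgrep -x XXX) || { sleep 5; continue; }    # wait until XXX is running
    output_filepath=$(readlink "/proc/$pid/fd/1")    # where its stdout goes
    while ps -p "$pid" > /dev/null; do               # wait for it to terminate
        sleep 1
    done
    err=$(tail "$output_filepath")
    [ -n "$err" ] && notify-send "XXX was killed" "$err"
done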

How to restart background php process? (how to get pid)

I'm a PHP developer, and know very little about shell scripting... So I appreciate any help here.
I have four php scripts that I need running in the background on my server. I can launch them just fine - they work just fine - and I can kill them by looking up their PID.
The problem is I need my script to, from time to time, kill the processes and restart them, as they maintain long standing HTTP requests that sometimes are ended by the other side.
But I don't know how to write a command that'll find these processes and kill them without looking up the PID manually.
We'll start with one launch command:
/usr/local/php5/bin/php -f /home/path/to/php_script.php > /dev/null &
Is there a way to "assign" a PID so it's always the same, or to give the process a name? And how would I go about writing that new command?
Thank you!
Nope, you can't "assign" the process PID; instead, you should do as "real" daemons do: make your script save its own PID in some file, and then read it from that file when you need to kill it.
Alternative would be to use something like supervisor, that handles all that for you in a quite nice way.
Update - supervisor configuration
Since I mentioned supervisor, I'm also posting here a short supervisor configuration file that should do the job.
[program:yourscriptname]
command=/usr/local/php5/bin/php -f /home/path/to/php_script.php
Have a look here for more configuration options.
Then you can use it like this:
# supervisorctl status
to show the process(es) status.
# supervisorctl start yourscriptname
to start your script
# supervisorctl stop yourscriptname
to stop your script
Update - real world supervisor configuration example
First of all, make sure you have this in your /etc/supervisor/supervisord.conf.
[include]
files = /etc/supervisor/conf.d/*.conf
if not, just add those two lines and
mkdir /etc/supervisor/conf.d/
Then, create a configuration file for each process you want to launch:
/etc/supervisor/conf.d/script1.conf
[program:script1]
command=/usr/local/php5/bin/php -f /home/path/to/php_script.php
stdout_logfile=/var/log/script1.log
stderr_logfile=/var/log/script1-error.log
/etc/supervisor/conf.d/script2.conf
[program:script2]
command=/usr/local/php5/bin/php -f /home/path/to/php_script2.php
stdout_logfile=/var/log/script2.log
stderr_logfile=/var/log/script2-error.log
...etc, etc.. for all your scripts.
(Note that you don't need the trailing & as supervisor will handle all the daemonization for you; in fact you shouldn't run self-daemonizing programs inside supervisor.)
Then you can start 'em all with:
supervisorctl start all
or just one with something like:
supervisorctl start script1
Starting supervisor from php
Of course, you can start/stop the supervisor-controlled processes using the two commands above, even from inside a script.
Remember, however, that you'll need root privileges, and it's quite risky to allow e.g. a web page to execute commands as root on the server.
If that's the case, I recommend you have a look at the instructions on how to run supervisor as a normal user (I never did that, but you should be able to run it as the www-data user too).
The canonical way to solve this is to have the process write its PID into a file in a known location, and then any utility scripts can look up the file, read the PID, and manipulate that process. Add a command line argument to the script that gives the name of the PID file to write to.
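For example, a small launcher along these lines (a sketch; the PID-file path is a placeholder) would give your other scripts something to look up:
#!/bin/sh
# start one of the scripts and record its PID (sketch)
PIDFILE=/var/run/php_script.pid
/usr/local/php5/bin/php -f /home/path/to/php_script.php > /dev/null &
echo $! > "$PIDFILE"
# later, to restart: kill "$(cat /var/run/php_script.pid)" and run this launcher again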
A workaround would be to use ps aux; this will show all of the processes along with the command that launched them. This presumes, of course, that the 4 scripts are different files, or can be uniquely identified by the command that launched them. Pipe that through a grep and you're all set: ps aux | grep runningscript.php
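A sketch of a restart built on that idea (the bracketed character keeps grep from matching its own process line; pkill -f is an equivalent shortcut):
# kill the script by matching its command line, then relaunch it
kill $(ps aux | grep '[p]hp_script.php' | awk '{print $2}')
# or, equivalently: pkill -f php_script.php
/usr/local/php5/bin/php -f /home/path/to/php_script.php > /dev/null &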
OK! So this has been a headache and a half for me, who knows NOTHING about shell/bash/whatever scripting...
#redShadow's response would have been perfect, except my hosting provider will not give me access to the /etc/supervisor/ directory. As he said, you must be root - and even using sudo as an admin wouldn't let me make any changes there...
Here's what I came up with:
kill -9 `ps -ef | grep php | grep -v grep | awk '{print $2}'`
Because the only commands I was executing showed up in the top command as php, this command loops through the running processes, finds the php commands and their corresponding PIDs, and KILLS them! woot!!
What I do is have my scripts check for a file that I name "run.txt". If it does not exist, they exit. Then, just by renaming that (empty) file, I can stop all my scripts.
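Expressed as a shell loop, the idea looks like this (a sketch; a PHP script would do the equivalent check with file_exists at the top of its loop):
# keep working only while the sentinel file exists
while [ -e /home/path/to/run.txt ]; do
    # ... do one unit of work ...
    sleep 5
done
# renaming or removing run.txt makes the loop exit on its next pass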

simple timeout on I/O for command for linux

First, the background to this intriguing challenge. During development and testing, the continuous integration build can often fail with deadlocks, loops, or other issues that result in a never-ending test. So all the mechanisms for notifying that a build has failed become useless.
The solution will be to have the build script time out if there is zero output to the build log file for more than 5 minutes, since the build routinely writes out the names of unit tests as it proceeds. So that's the best way to identify that it's "frozen".
Okay. Now the nitty-gritty...
The build server uses Hudson to run a simple bash script that invokes the more complex build script based on Nant and MSBuild (all on Windows).
So far all the solutions around the net involve a timeout on the total run time of the command. But that approach fails in this case, because the tests might hang or freeze in the first 5 minutes, and we would then have to wait for the whole run-time limit to expire before noticing.
What we've thought of so far:
First, here's the high-level bash command that runs the full test suite in Hudson.
build.sh clean free test
That command simply sends all the Nant and MSBuild build logging to stdout.
It's obvious that we need to tee that output to a file:
build.sh clean free test 2>&1 | tee build.out
Then, in parallel, a command needs to sleep, check the modification time of the file and, if it is more than 5 minutes old, kill the main process. A kill -9 will be fine at that point--nothing graceful is needed once it has frozen.
That's the part you can help with.
In fact, I made a script like this over 15 years ago to kill the connection on a data phone line to Japan after periods of inactivity, but I can't remember how I did it.
Sincerely,
Wayne
build.sh clean free test 2>&1 | tee build.out &
sleep 300
# after 5 minutes, forcibly kill the backgrounded pipeline (job %1)
kill -KILL %1
You may be able to use timeout:
timeout 300 command
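Applied to this build it would look something like the line below, though note that this is a fixed overall limit rather than the requested limit on idle output:
timeout -s KILL 300 build.sh clean free test 2>&1 | tee build.out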
Solved this myself by writing a bash script.
It's called iotimeout with one parameter which is the number of seconds.
You use it like this:
build.sh clean dev test | iotimeout 120
iotimeout has 2 loops.
One is a simple while read line loop that echoes each line, but it also uses the touch command to update the modified time of a tmp file every time it writes a line. Unfortunately, it wasn't possible to monitor the build.out file itself because Windoze doesn't update the file's modified time until you close the file. Oh well.
The other loop runs in the background; it's a forever loop which sleeps 10 seconds and then checks the modified time of the temp file. If that ever exceeds 120 seconds old, that loop forces the entire process group to exit.
The only tricky stuff was returning the exit code of the original program. Bash gives you a PIPESTATUS array to solve that.
Also, figuring out how to kill the entire process group took some research, but it turns out to be easy: just kill 0.
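A sketch of such an iotimeout script, under the assumptions described above (GNU stat for the mtime check; the 10-second poll interval and the tmp-file handling are arbitrary choices):
#!/bin/bash
# iotimeout: exit (taking the whole process group with it) if no line arrives
# on stdin for more than the given number of seconds; a sketch only
timeout=${1:?usage: some_command | iotimeout SECONDS}
stamp=$(mktemp)
trap 'rm -f "$stamp"' EXIT

# background watchdog: every 10 seconds, check how old the stamp file is
(
    while sleep 10; do
        age=$(( $(date +%s) - $(stat -c %Y "$stamp") ))
        if [ "$age" -gt "$timeout" ]; then
            echo "iotimeout: no output for ${timeout}s, killing process group" >&2
            kill 0        # force the entire process group to exit
        fi
    done
) &
watchdog=$!

# foreground loop: pass every line through and refresh the stamp's mtime
while IFS= read -r line; do
    printf '%s\n' "$line"
    touch "$stamp"
done
kill "$watchdog" 2>/dev/null

The caller then recovers the original exit code from PIPESTATUS, along the lines of:
build.sh clean dev test | iotimeout 120
exit "${PIPESTATUS[0]}"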
