Run tail -f for a specific time in bash script - linux

I need a script that will run a series of tail -f commands and output them into a file.
What I need is for tail -f to run for a certain amount of time while grepping for specific words. It has to run for a set amount of time because some of these values don't show up right away, since this is a live log.
How can I run something like this for, say, 20 seconds, capture the grep output, and then continue on to the next command?
tail -f /example/logs/auditlog | grep test
Thanks

timeout 20 tail -f /example/logs/auditlog | grep test
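If you need the whole series the question describes, a minimal sketch built on the same idea (the extra search words and the output file here are hypothetical):
#!/bin/bash
# Each timeout kills its tail -f after 20 seconds, so the script
# moves on to the next search word automatically.
for word in test test2 test3; do
    timeout 20 tail -f /example/logs/auditlog | grep "$word" >> /tmp/audit_matches.txt
done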

tail -f /example/logs/auditlog | grep test &
pid=$!    # PID of the last command in the pipeline (grep), not tail
sleep 20
kill $pid    # tail itself exits on SIGPIPE the next time it writes

What about this:
for (( N=0; N<20; N++ )); do timeout 1 tail -n 0 -f /example/logs/auditlog | grep test; done
EDIT: I misread your question, sorry. You want something like this:
tail -f /example/logs/auditlog | grep test &
sleep 20
kill $!


Limit number of parallel jobs in bash [duplicate]

I want to read links from a file, which is passed as an argument, and download the content from each one.
How can I do it in parallel, with at most 20 processes?
I understand how to do it with an unlimited number of processes:
#!/bin/bash
filename="$1"
mkdir -p saved
while read -r line; do
    url="$line"
    name_download_file_sha="$(echo "$url" | sha256sum | awk '{print $1}').jpeg"
    curl -L "$url" > "saved/$name_download_file_sha" &
done < "$filename"
wait
You can add this test:
until [ "$(jobs -lr 2>&1 | wc -l)" -lt 20 ]; do
    sleep 1
done
This will maintain at most 20 instances of curl in parallel: the loop waits until the job count drops to 19 or lower before starting another one.
If you are using GNU sleep, you can use sleep 0.5 to shorten the wait between checks.
So your code would be:
#!/bin/bash
filename="$1"
mkdir -p saved
while read -r line; do
    until [ "$(jobs -lr 2>&1 | wc -l)" -lt 20 ]; do
        sleep 1
    done
    url="$line"
    name_download_file_sha="$(echo "$url" | sha256sum | awk '{print $1}').jpeg"
    curl -L "$url" > "saved/$name_download_file_sha" &
done < "$filename"
wait
xargs -P is the simple solution. It gets somewhat more complicated when you want to save to separate files, but you can use sh -c to add this bit.
: "${processes:=20}"
< "$filename" xargs -P "$processes" -I% sh -c '
    line="$1"
    url_file="$line"
    name_download_file_sha="$(echo "$url_file" | sha256sum | awk "{print \$1}").jpeg"
    curl -L "$url_file" > "saved/$name_download_file_sha"
' -- %
Based on triplee's suggestions, I've lower-cased the environment variable and changed its name to 'processes' to be more correct.
I've also made the suggested corrections to the awk script to avoid quoting issues.
You may still find it easier to replace the awk script with cut -f1, but you'll need to specify the cut delimiter, since sha256sum separates its fields with spaces (not tabs).
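For instance, a rough equivalent of the awk step (assuming coreutils sha256sum, whose output separates the hash from the file name with spaces):
name_download_file_sha="$(echo "$url_file" | sha256sum | cut -d' ' -f1).jpeg"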

Linux: Tail -f multiple options

I want to add multiple tail scripts in one.
First one:
tail -f /var/script/log/script-log.txt | if grep -q "Text1"; then echo "0:$?:AAC32 ONLINE"
fi
I want to add 5 more lines, each with a different word. Is this possible?
Something like else if, if, etc.
Thanks!
tail -f /var/script/log/script-log.txt | if grep -q -E "Text1|Text2|Text3"; then echo "0:$?:AAC32 ONLINE"; fi
In your case it's enough to use the logical AND operator:
tail -f /var/script/log/script-log.txt | grep -q "Text1\|Text2\|Text3" && echo "0:$?:AAC32 ONLINE"
#!/bin/sh
PIPENAME="`mktemp -u "/tmp/something-XXXXXX"`"
mkfifo -m 600 "$PIPENAME"
tail -f /tmp/log.txt >"$PIPENAME" &
while read line < "$PIPENAME"
do
    echo "$line" # Whatever you want goes here
done
rm -f "$PIPENAME"
If you want something Bash-specific, you can use the -u option to read, and then you can rm the named pipe before the loop starts, which better guarantees that things are cleaned up when you're done.
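If you want a different message per word, as the else if wording in the question suggests, here is a sketch using a case statement in a read loop (the extra words and messages are made up):
tail -f /var/script/log/script-log.txt | while read -r line; do
    case "$line" in
        *Text1*) echo "0:0:AAC32 ONLINE" ;;
        *Text2*) echo "0:0:AAC33 ONLINE" ;;   # hypothetical word/message pairs
        *Text3*) echo "1:1:AAC34 OFFLINE" ;;
    esac
done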

Bash free command stops working

I am trying to do the following:
Get the output of free -mo, take the 2nd line, and log it to a file every 30 seconds.
When I run
$ free -mo -s 30
it runs and displays output every 30 seconds.
But when I run
$ free -mo -s 30 | head -2 | tail -1
it runs only once. I am not able to figure out what is wrong.
The free manual says -s 30 runs the command every 30 seconds.
head -2 prints only the first 2 lines of output and then quits. tail -1 prints the last of those lines, then quits. Once the downstream programs in a pipeline exit, the upstream ones receive SIGPIPE the next time they write, so free is terminated after head and tail finish.
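You can see the same effect with any long-running producer:
yes | head -2    # yes would run forever, but the pipeline ends as soon as head exits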
Use free -mo -s 30 &> test.txt &
This will take all of the output from the free command, write it to test.txt, and run the command in the background.
Try
free -mos 30 | grep 'Mem:' >yourlog.txt
(but you might be better off considering something like sar to capture this kind of data - it can also report lots of other things - just postpone the filtering/extraction until you generate a report from the data).
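For example, with the sysstat package installed, something like this samples memory usage every 30 seconds, 10 times (check your sar version for the exact flags):
sar -r 30 10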
Will Hartung is right. Instead do this:
while true; do free -mo | head -2 | tail -1; sleep 30; done
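Since the goal was logging to a file, you can redirect the whole loop (the log name here is hypothetical):
while true; do free -mo | head -2 | tail -1; sleep 30; done >> memory.log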
Thanks for your answers. I was trying to monitor the memory utilization of a process. I think I got it.
START_TIME=$(date);
cd /data;
INPUT_DATA=$1;
CODE_FILE=$2;
TIMES=$3;
echo "$START_TIME" > "$CODE_FILE.freeMemory_$TIMES.log";
free -mo -s 30 >> "$CODE_FILE.freeMemory_$TIMES.log" &
freepid=$!;
sleep 1m;
#echo "PID generated for free command -- $freepid";
START_TIME=$(date);
i=0;
while [ $i -le $TIMES ]
do
    sh runCode.sh $CODE_FILE "output.csv" $INPUT_DATA;
    i=`expr $i + 1`
done
END_TIME=$(date);
echo "process started at $START_TIME and ended at $END_TIME " ;
sleep 1m;
kill -9 $freepid;
END_TIME=$(date);
echo "$END_TIME" >> "$CODE_FILE.freeMemory_$TIMES.log";

xargs' $1 conflicts with $1 in shell script

I have these lines in one shell script file foo.sh:
ps ax | grep -E "bar" | grep -v "grep" | awk '{print $1}' | xargs kill -9 $1
when I execute the shell script with an argument like this:
sh foo.sh arg_one
xargs no longer works: it takes the $1 from the shell script rather than the output of awk.
I know I can store the output of awk in a file and use it with xargs later.
But is there a better solution?
== edited ==
thanks for the answer from @peterph.
But, is there any way that I can use $1 in xargs?
== edited 2 ==
thanks @Brian Campbell
Regardless of whether there should be a useless $1 in the example: if an argument to the shell script is given, the $1 in the xargs line does not work as I'd wish, on my machine (and on yours too, I think).
Why? And how do I avoid it?
xargs reads its list from stdin, so just discard the trailing $1 on the line if what you want is to kill processes by their PIDs.
As a side note, ps can also select processes by their command name (with procps on Linux, see the -C option).
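For example, assuming the process is literally named bar:
ps -C bar -o pid= | xargs kill -9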
Instead of that complicated pipeline, you can always use killall -9 name to kill a process, or pkill -9 pattern if you don't know the exact name of the process but know a substring (be careful that you don't kill any unintended processes, though).
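For example, to kill every process whose command line contains "bar" (the same match the grep in the question makes):
pkill -9 -f bar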
For your command to work, just remove the $1; xargs takes its arguments from standard in, and runs the command line passing in the values it gets from standard in at the end of the command.
edit (in response to your edit): What do you expect xargs to do with the $1 argument? What are you expecting to be in it? The only interpretation of $1 that has any meaning here is the first argument that was passed to your script.
The $1 from your awk script is what awk finds in the first column of its input; it then prints that out, and xargs takes those values from standard input, and will call the command you pass it with those values at the end of the command line. So if the awk command returns:
100
120
130
Then piping that result to xargs kill -9 will result in the following being called:
kill -9 100 120 130
You do not need a variable like $1 to make this work.
This should work:
ps ax | grep -E "bar" | grep -v "grep" | awk '{print $1}' | xargs kill -9
You can also try:
result=$(ps -ef | grep -E "bar" | grep -v "grep" | awk '{print $2}')
kill -9 $result
In my case, piping to xargs sometimes returned the error below even when matched processes existed:
usage: kill [ -s signal | -p ] [ -a ] pid ...
       kill -l [ signal ]
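That usage message usually appears when the pipeline produces no PIDs, so xargs runs kill with no arguments. With GNU xargs, the -r (--no-run-if-empty) flag skips the run instead:
ps ax | grep -E "bar" | grep -v "grep" | awk '{print $1}' | xargs -r kill -9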

How to tail -f the latest log file with a given pattern

I work with some log system which creates a log file every hour, like follows:
SoftwareLog.2010-08-01-08
SoftwareLog.2010-08-01-09
SoftwareLog.2010-08-01-10
I'm trying to tail and follow the latest log file matching a pattern (e.g. SoftwareLog*), and I realize there's:
tail -F (tail --follow=name --retry)
but that only follows one specific name - and these files have different names by date and hour. I tried something like:
tail --follow=name --retry SoftwareLog*(.om[1])
but the wildcard is resolved before it gets passed to tail, and it isn't re-expanded every time tail retries.
Any suggestions?
I believe the simplest solution is as follows:
tail -f `ls -tr | tail -n 1`
Now, if your directory contains other log files like "SystemLog" and you only want the latest "SoftwareLog" file, then you would simply include a grep as follows:
tail -f `ls -tr | grep SoftwareLog | tail -n 1`
[Edit: after a quick googling for a tool]
You might want to try out multitail - http://www.vanheusden.com/multitail/
If you want to stick with Dennis Williamson's answer (and I've +1'ed him accordingly), here are the blanks filled in for you.
In your shell, run the following script (or its zsh equivalent; I whipped this up in bash before I saw the zsh tag):
#!/bin/bash
TARGET_DIR="some/logfiles/"
SYMLINK_FILE="SoftwareLog.latest"
SYMLINK_PATH="$TARGET_DIR/$SYMLINK_FILE"
function getLastModifiedFile {
    echo $(ls -t "$TARGET_DIR" | grep -v "$SYMLINK_FILE" | head -1)
}
function getCurrentlySymlinkedFile {
    if [[ -h $SYMLINK_PATH ]]
    then
        echo $(ls -l $SYMLINK_PATH | awk '{print $NF}')
    else
        echo ""
    fi
}
symlinkedFile=$(getCurrentlySymlinkedFile)
while true
do
    sleep 10
    lastModified=$(getLastModifiedFile)
    if [[ $symlinkedFile != $lastModified ]]
    then
        ln -nsf $lastModified $SYMLINK_PATH
        symlinkedFile=$lastModified
    fi
done
Background that process using the normal method (again, I don't know zsh, so it might be different)...
./updateSymlink.sh > /dev/null 2>&1 &
Then tail -F $SYMLINK_PATH so that tail handles the changing of the symbolic link or a rotation of the file.
This is slightly convoluted, but I don't know of another way to do it with tail. If anyone knows of a utility that handles this, please step forward, because I'd love to see it myself - applications like Jetty log this way by default, and I always script up a symlinking cron job to compensate for it.
[Edit: Removed an erroneous 'j' from the end of one of the lines. You also had a bad variable name: "lastModifiedFile" didn't exist; the proper name you set is "lastModified".]
I haven't tested this, but an approach that may work would be to run a background process that creates and updates a symlink to the latest log file, and then you would tail -f (or tail -F) the symlink.
#!/bin/bash
PATTERN="$1"
# Try to make sure sub-shells exit when we do.
trap "kill -9 -- -$BASHPID" SIGINT SIGTERM EXIT
PID=0
OLD_FILES=""
while true; do
    FILES="$(echo $PATTERN)"
    if test "$FILES" != "$OLD_FILES"; then
        if test "$PID" != "0"; then
            kill $PID
            PID=0
        fi
        if test "$FILES" != "$PATTERN" || test -f "$PATTERN"; then
            tail --pid=$$ -n 0 -F $PATTERN &
            PID=$!
        fi
    fi
    OLD_FILES="$FILES"
    sleep 1
done
Then run it as: tail.sh 'SoftwareLog*'
The script will lose some log lines if the logs are written to between checks. But at least it's a single script, with no symlinks required.
We have daily rotating log files as: /var/log/grails/customer-2020-01-03.log. To tail the latest one, the following command worked fine for me:
tail -f /var/log/grails/customer-`date +'%Y-%m-%d'`.log
(NOTE: no space after the + sign in the expression)
So, for you, the following should work (if you are in the same directory of the logs):
tail -f SoftwareLog.`date +'%Y-%m-%d-%H'`
I believe the easiest way is to use tail together with ls and head; try something like this:
tail -f `ls -t SoftwareLog* | head -1`
