tail command with dynamic file parameter - linux

I am now using the tail command as below
show_log.sh:
LOGFILE=`ls -1 -r ./myservice.log.????????.?????? | head -n 1`
tail -v -f -s 1 -n 100 ${LOGFILE}
to monitor the log file.
The problem with it is that after each service restart, a new log file will be created, and the prior log file will be compressed. So the tail command stops working.
I need to change the script so that it continues tailing the new file.

Found a way; ojblass's suggestion to use the capital -F option helped.
I now create a symbolic link to the latest log file with the following command after each service restart (-s for a symbolic link, -f to replace any existing link, -n so an existing symlink is replaced rather than followed):
ln -sfn service-blabla.log log_lnk
and changed the tail command like this:
tail -v -F -s 1 -n 100 log_lnk
Note the capital F in the tail command: -F is equivalent to --follow=name --retry, so tail reopens the file by name when it changes. Lowercase -f keeps following the original file descriptor, so it doesn't work in this situation.
done.
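For reference, the restart hook can be sketched like this; the log directory and the exact name pattern are assumptions based on the script at the top:

```shell
#!/bin/sh
# Hypothetical restart hook: refresh the symlink so that the
# `tail -F ... log_lnk` command picks up the newest log file.
LOGDIR=.    # assumed: logs live in the current directory
# Newest file first under the reverse lexicographic sort used above.
NEWEST=$(ls -1 -r "$LOGDIR"/myservice.log.* | head -n 1)
# -s: symbolic link, -f: replace an existing link, -n: treat an
# existing symlink as a plain file rather than following it
ln -sfn "$NEWEST" "$LOGDIR/log_lnk"
```

Run once per restart, the running `tail -F log_lnk` then follows the new file automatically.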


ssh tail with nested ls and head cannot access

I am trying to execute the following command:
$ ssh root@10.10.10.50 "tail -F -n 1 $(ls -t /var/log/alert_ARCDB.log | head -n1 )"
ls: cannot access /var/log/alert_ARCDB.log: No such file or directory
tail: cannot follow `-' by name
Notice the errors returned. When I log in via ssh separately and then execute
tail -F -n 1 $(ls -t /var/log/alert_ARCDB.log | head -n1 )
I see the following:
# ls -t /var/log/alert_ARCDB.log | head -n1
/var/log/alert_ARCDB.log
Why is that happening, and how can I fix it? I'm trying to do this in one line because I don't want to create a script file.
Thanks a lot
Shell parameter expansion happens before command execution.
Here's a simple example. If I type...
ls "$HOME"
...the shell replaces $HOME with the path to my home directory first, then runs something like ls /home/larsks. The ls command has no idea that the command line originally had $HOME.
If we look at your command...
$ ssh root@10.10.10.50 "tail -F -n 1 $(ls -t /var/log/alert_ARCDB.log | head -n1 )"
...we see that you're in exactly the same situation. The $(ls -t ...) expression is expanded before ssh is executed. In other words, that command runs on your local system.
You can inhibit the shell expansion on your local system by using single quotes. For example, running:
echo '$HOME'
will produce:
$HOME
So you can run:
ssh root@10.10.10.50 'tail -F -n 1 $(ls -t /var/log/alert_ARCDB.log | head -n1 )'
But there's another problem here. If /var/log/alert_ARCDB.log is a file, your command makes no sense: calling ls -t on a single file just prints that same filename back, so the command substitution gains you nothing.
If alert_ARCDB.log is a directory, you have a different problem. The result of ls /some/directory is a list of filenames without any directory prefix. If I run something like:
ls -t /tmp
I will get output like
file1
file2
If I do this:
tail $(ls -t /tmp | head -1)
I end up with a command that looks like:
tail file1
And that will fail, because there is no file1 in my current directory.
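One workaround (not from the original answer) is to glob with the directory prefix rather than listing the bare directory, so ls prints full paths that tail can open; note this still breaks on filenames containing whitespace:

```shell
# ls of a glob (not a bare directory) prints each path with its
# directory prefix, so the substitution hands tail a usable name.
tail -n 5 "$(ls -t /tmp/* | head -1)"
```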
One approach would be to pipe the commands you want to perform to ssh. One simple way to achieve that is to first create a function that will echo the commands you want executed:
remote_commands()
{
    echo 'cd /var/log/alert_ARCDB.log'
    echo 'tail -F -n 1 "$(ls -t | head -n1 )"'
}
The cd will allow you to use the relative path listed by ls. The single quotes make sure that everything will be sent as-is to the remote shell, with no local expansion occurring.
Then you can do
ssh root@10.10.10.50 bash < <(remote_commands)
This assumes alert_ARCDB.log is a directory (or else I am not sure why you would want to add head -n1 after that).
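The quoting rule is easy to check locally before involving ssh at all; the host and paths in the comments below are just placeholders:

```shell
# Single quotes: the local shell passes the text through untouched,
# so whoever runs the command later performs the expansion.
echo '$HOME'     # prints the literal string $HOME
# Double quotes: the local shell expands first.
echo "$HOME"     # prints your home directory path
# With ssh, the same rule decides where $(...) runs:
#   ssh host "tail -n 1 $(ls -t /var/log | head -n1)"   # ls runs locally
#   ssh host 'tail -n 1 $(ls -t /var/log | head -n1)'   # ls runs remotely
```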

Is there a way to perform a "tail -f" from an url?

I currently use tail -f to monitor a log file: this way I get an autorefreshing console monitoring a web server.
Now, said webserver was moved to another host and I have no shell privileges for that.
Nevertheless I have a .txt network path, which in the end is a log file which is constantly updated.
So, I'd like to do something like tail -f, but on that URL.
Would it be possible? In the end, "in Linux everything is a file", so...
You can do auto-refresh with help of watch combined with wget.
It won't keep history like tail -f does; instead it refreshes the screen, like top.
An example command that shows the content of file.txt on the screen and updates the output every five seconds:
watch -n 5 wget -qO- http://fake.link/file.txt
Also, you can output n last lines, instead of the whole file:
watch -n 5 "wget -qO- http://fake.link/file.txt | tail"
If you still need behaviour like tail -f (keeping the history), I think you need to write a script that downloads the log file periodically, compares it to the previously downloaded version, and then prints the new lines. That should be quite easy.
I wrote a simple bash script to fetch the URL content every 2 seconds, compare it with the local file output.txt, and append the diff to that same file.
I wanted to stream AWS amplify logs in my Jenkins pipeline
while true; do comm -13 --output-delimiter="" <(cat output.txt) <(curl -s "$URL") >> output.txt; sleep 2; done
Don't forget to create the empty output.txt file first:
: > output.txt
View the stream:
tail -f output.txt
Original comment: https://stackoverflow.com/a/62347827/2073339
UPDATE:
I found better solution using wget here:
while true; do wget -c -o /dev/null -O output.txt "$URL"; sleep 2; done
https://superuser.com/a/514078/603774
I've made this small function and added it to the .*rc of my shell. This uses wget -c, so it does not re-download the whole page:
# Poll logs continuously over HTTP
logpoll() {
    FILE=$(mktemp)
    echo "———————— LOGPOLLING TO $FILE ————————"
    tail -f "$FILE" &
    tail_pid=$!
    stop=0
    trap "stop=1" SIGINT SIGTERM
    while [ $stop -ne 1 ]; do wget -co /dev/null -O "$FILE" "$1"; sleep 2; done
    echo "——————————— LOGPOLL DONE ————————————"
    kill $tail_pid
    rm "$FILE"
    trap - SIGINT SIGTERM
}
Explanation:
Create a temporary logfile using mktemp and save its path to $FILE
Make tail -f output the logfile continuously in the background
Make ctrl+c set stop to 1 instead of exiting the function
Loop until stop bit is set, i.e. until the user presses ctrl+c
wget given URL in a loop every two seconds:
-c - "continue getting partially downloaded file", so that wget continues instead of truncating the file and downloading again
-o /dev/null - wget's log messages shall be thrown into the void
-O $FILE - output the contents to the temp logfile we've created
Clean up after yourself: kill the tail -f, delete the temporary logfile, unset the signal handlers.
The proposed solutions periodically download the full file.
To avoid that, I've created a package, published on NPM, that does a HEAD request (getting the size of the file) and requests only the last bytes.
Check it out and let me know if you need any help.
https://www.npmjs.com/package/@imdt-os/url-tail
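For reference, the same HEAD-plus-range idea can be sketched with curl. This assumes the server reports Content-Length and honors Range requests (both are assumptions about the server), and url_tail is just an illustrative name:

```shell
# Fetch only the last N bytes of a remote file, assuming the server
# supports HEAD and byte-range (Range:) requests.
url_tail() {
  url=$1
  nbytes=${2:-1024}
  # Parse Content-Length out of the response headers.
  size=$(curl -sI "$url" | tr -d '\r' |
         awk 'tolower($1) == "content-length:" {print $2}')
  [ -n "$size" ] || return 1
  start=$((size > nbytes ? size - nbytes : 0))
  curl -s -r "$start-" "$url"   # sends "Range: bytes=START-"
}
```

Called as `url_tail "$URL" 2048`, it would print roughly the last 2 KB of the file.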

Bash grep command finding the same file 5 times

I'm building a little bash script to run another bash script that's found in multiple directories. Here's the code:
cd /home/mainuser/CaseStudies/
grep -R -o --include="Auto.sh" [\w] | wc -l
When I execute just that part, it finds the same file 5 times in each folder. So instead of getting 49 results, I get 245. I've written a recursive bash script before and I used it as a template for this problem:
grep -R -o --include=*.class [\w] | wc -l
This code has always worked perfectly, without any duplication. I've tried running the first code with and without the " ", I've tried -r as well. I've read through the bash documentation and I can't seem to find a way to prevent, or even why I'm getting, this duplication. Any thoughts on how to get around this?
As a separate but related question: could I launch Auto.sh inside each directory so that its output is dumped into that directory, without having to place Auto.sh in each folder? That would probably be much more efficient than what I'm currently doing, and it would also probably fix my current duplication problem.
This is the code for Auto.sh:
#!/bin/bash
index=1
cd /home/mainuser/CaseStudies/
grep -R -o --include=*.class [\w] | wc -l
grep -R -o --include=*.class [\w] |awk '{print $3}' > out.txt
while read LINE; do
echo 'Path '$LINE > 'Outputs/ClassOut'$index'.txt'
javap -c $LINE >> 'Outputs/ClassOut'$index'.txt'
index=$((index+1))
done <out.txt
Preferably I would like to make it dump only the javap outputs for the application its currently looking at. Since those .class files could be in any number of sub-directories, I'm not sure how to make them all dump in the top folder, without executing a modified Auto.sh in the top directory of each application.
OK, so to fix the multiple matches:
grep -R -o --include="Auto.sh" [\w] | wc -l
Should be:
grep -R -l --include=Auto.sh '\w' | wc -l
The reason this was happening: with -o, grep prints one line per match, and it was finding instances of the letter w in Auto.sh, which occurred 5 times in the file. With -l, each matching file is listed only once.
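The -o vs. -l difference is easy to reproduce; demo.txt here is a made-up file for the demonstration:

```shell
# -o prints one line per match; -l prints each matching file name once.
printf 'wow\nwow\nw\n' > demo.txt   # contains five w characters
grep -o 'w' demo.txt | wc -l        # counts matches: 5
grep -l 'w' demo.txt | wc -l        # counts files: 1
```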
However, the overall fix that doesn't require having to place Auto.sh in every directory, is something like this:
MAIN_DIR=/home/mainuser/CaseStudies/
cd $MAIN_DIR
ls -d */ > DirectoryList.txt
while read LINE; do
    cd $LINE
    mkdir ProjectOutputs
    bash /home/mainuser/Auto.sh
    cd $MAIN_DIR
done < DirectoryList.txt
That calls this Auto.sh code:
index=1
grep -R -o --include=*.class '\w' | wc -l
grep -R -o --include=*.class '\w' | awk '{print $3}' > ProjectOutputs.txt
while read LINE; do
    echo 'Path '$LINE > 'ProjectOutputs/ClassOut'$index'.txt'
    javap -c $LINE >> 'ProjectOutputs/ClassOut'$index'.txt'
    index=$((index+1))
done < ProjectOutputs.txt
Thanks again for everyone's help!

linux redirect last line of dwdiff result to a file

I am trying to create a shell script that generates a report displaying the differences between files from two folders/Java projects. I am using dwdiff, and I need only the last line from each comparison (I don't care about the differences themselves; the percentage is what I need). I've created the following script:
DEST_FOLDER='./target/'
CLASSES_P1='classesP1.txt'
CLASSES_P2='classesP2.txt'
DEST_FILE='report.txt'
rm -r "$DEST_FOLDER"
mkdir -p "$DEST_FOLDER"
rm -f "$DEST_FOLDER/$CLASSES_P1"
rm -f "$DEST_FOLDER/$CLASSES_P2"
rm -f "$DEST_FOLDER/$DEST_FILE"
find ./p1 -name "*.java" >> "$DEST_FOLDER/$CLASSES_P1"
find ./p2 -name "*.java" >> "$DEST_FOLDER/$CLASSES_P2"
while read p; do
    while read t; do
        dwdiff -s $p $t | tail --lines=0 >> "$DEST_FOLDER/$DEST_FILE"
    done < $DEST_FOLDER/$CLASSES_P2
done < $DEST_FOLDER/$CLASSES_P1
It works fine, but the results are not redirected to the given file. The file is created, but is empty, and the last line from each dwdiff result is displayed to console. Any ideas?
There are a few things going on here.
The output you want is going to stderr, not stdout. You can merge them with 2>&1 on the dwdiff command.
The output you want actually appears to be printed first by dwdiff, but you see a different order because of the two different streams. So you want head instead of tail.
And you want 1 line, not 0.
So try dwdiff -s ... 2>&1 | head --lines=1
$ dwdiff -s /etc/motd /etc/issue 2>&1 | head --lines=1
old: 67 words 3 4% common 2 2% deleted 62 92% changed
Alternatively, if you want the new line instead of the old, and to simplify the ordering, try throwing away the diff output:
$ dwdiff -s /etc/motd /etc/issue 2>&1 1>/dev/null | tail --lines=1
new: 5 words 3 60% common 0 0% inserted 2 40% changed
Note that the order of redirection is important: First clone stdout into stderr, then redirect stdout to /dev/null.
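The stream swap can be verified with a toy command that writes one line to each stream; noisy here is just a demo helper:

```shell
# Writes one line to stdout and one to stderr.
noisy() { echo "to-stdout"; echo "to-stderr" >&2; }
# 2>&1 first points stderr at the current stdout (the pipe);
# 1>/dev/null then discards only the original stdout.
noisy 2>&1 1>/dev/null | cat    # prints only: to-stderr
```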

How to tail -f the latest log file with a given pattern

I work with some log system which creates a log file every hour, like follows:
SoftwareLog.2010-08-01-08
SoftwareLog.2010-08-01-09
SoftwareLog.2010-08-01-10
I'm trying to follow the latest log file matching a pattern (e.g. SoftwareLog*), and I realize there's:
tail -F (tail --follow=name --retry)
but that only follows one specific name, and these have different names by date and hour. I tried something like:
tail --follow=name --retry SoftwareLog*(.om[1])
but the wildcard statement is resolved before it gets passed to tail and doesn't re-execute every time tail retries.
Any suggestions?
I believe the simplest solution is as follows:
tail -f `ls -tr | tail -n 1`
Now, if your directory contains other log files like "SystemLog" and you only want the latest "SoftwareLog" file, then you would simply include a grep as follows:
tail -f `ls -tr | grep SoftwareLog | tail -n 1`
[Edit: after a quick googling for a tool]
You might want to try out multitail - http://www.vanheusden.com/multitail/
If you want to stick with Dennis Williamson's answer (and I've +1'ed him accordingly) here are the blanks filled in for you.
In your shell, run the following script (or its zsh equivalent; I whipped this up in bash before I saw the zsh tag):
#!/bin/bash
TARGET_DIR="some/logfiles/"
SYMLINK_FILE="SoftwareLog.latest"
SYMLINK_PATH="$TARGET_DIR/$SYMLINK_FILE"
function getLastModifiedFile {
    echo $(ls -t "$TARGET_DIR" | grep -v "$SYMLINK_FILE" | head -1)
}
function getCurrentlySymlinkedFile {
    if [[ -h $SYMLINK_PATH ]]
    then
        echo $(ls -l $SYMLINK_PATH | awk '{print $NF}')
    else
        echo ""
    fi
}
symlinkedFile=$(getCurrentlySymlinkedFile)
while true
do
    sleep 10
    lastModified=$(getLastModifiedFile)
    if [[ $symlinkedFile != $lastModified ]]
    then
        ln -nsf $lastModified $SYMLINK_PATH
        symlinkedFile=$lastModified
    fi
done
Background that process using the normal method (again, I don't know zsh, so it might be different)...
./updateSymlink.sh > /dev/null 2>&1 &
Then tail -F $SYMLINK_PATH so that tail handles the changing of the symbolic link or a rotation of the file.
This is slightly convoluted, but I don't know of another way to do this with tail. If anyone else knows of a utility that handles this, then let them step forward because I'd love to see it myself too - applications like Jetty by default do logs this way and I always script up a symlinking script run on a cron to compensate for it.
[Edit: Removed an erroneous 'j' from the end of one of the lines. You also had a bad variable name "lastModifiedFile" didn't exist, the proper name that you set is "lastModified"]
I haven't tested this, but an approach that may work would be to run a background process that creates and updates a symlink to the latest log file and then you would tail -f (or tail -F) the symlink.
#!/bin/bash
PATTERN="$1"
# Try to make sure sub-shells exit when we do.
trap "kill -9 -- -$BASHPID" SIGINT SIGTERM EXIT
PID=0
OLD_FILES=""
while true; do
    FILES="$(echo $PATTERN)"
    if test "$FILES" != "$OLD_FILES"; then
        if test "$PID" != "0"; then
            kill $PID
            PID=0
        fi
        if test "$FILES" != "$PATTERN" || test -f "$PATTERN"; then
            tail --pid=$$ -n 0 -F $PATTERN &
            PID=$!
        fi
    fi
    OLD_FILES="$FILES"
    sleep 1
done
Then run it as: tail.sh 'SoftwareLog*'
The script will lose some log lines if the logs are written to between checks. But at least it's a single script, with no symlinks required.
We have daily rotating log files as: /var/log/grails/customer-2020-01-03.log. To tail the latest one, the following command worked fine for me:
tail -f /var/log/grails/customer-`date +'%Y-%m-%d'`.log
(NOTE: no space after the + sign in the expression)
So, for you, the following should work (if you are in the same directory of the logs):
tail -f SoftwareLog.`date +'%Y-%m-%d-%H'`
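The backtick expansion can be sanity-checked on its own; the filename prefix is the one from the question:

```shell
# Build the expected current-hour log name, e.g. SoftwareLog.2010-08-01-08.
stamp=$(date +'%Y-%m-%d-%H')
logname="SoftwareLog.$stamp"
echo "$logname"
```

Note this only finds the file for the current hour; it won't follow over to the next hour's file the way the symlink or polling approaches do.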
I believe the easiest way is to use tail with ls and head; try something like this:
tail -f `ls -t SoftwareLog* | head -1`
