Using cat: nothing happens

I have this script:
#!/bin/bash
DIR_TMP=$HOME/.tmp
BIB=$HOME/biblio.bib
inotifywait -m $DIR_TMP -e create -e moved_to |
while read path action file; do
echo $path$file
echo $path$file >> $BIB
cat $path$file >> $BIB
rm $path$file
done
In the while loop, everything works fine… except the cat, which doesn't do anything. Why, and how can I solve this?

The create and moved_to events fire when a file is created, but that does not mean the file has finished being written, nor that it already has any content.
In my case, this meant the cat ran before the file's content had been written. So I replaced the create and moved_to events with a close_write event, and now everything works fine.
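For reference, a minimal sketch of the loop with the events swapped to close_write (same paths as in the question; variables quoted for safety):
#!/bin/bash
DIR_TMP=$HOME/.tmp
BIB=$HOME/biblio.bib
# close_write only fires once the writer has closed the file,
# so its content is complete by the time cat runs.
inotifywait -m "$DIR_TMP" -e close_write |
while read -r path action file; do
    echo "$path$file"
    echo "$path$file" >> "$BIB"
    cat "$path$file" >> "$BIB"
    rm "$path$file"
done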

Related

How to track sequences of file-change events with inotifywait

Currently I am monitoring files using the code below, and it works as expected. Basically, I am watching for text files being opened and executing commands.
inotifywait -m -q /home/testDir -e open |
while read path action file; do
if [[ "$file" =~ .*txt$ ]]; then
# execute command
fi
done
I wonder how I can monitor sequences of inotifywait events and execute a command.
For example, how can I execute a command or echo something when there is an open, access, close_nowrite sequence in the directory?
/home/TestDir OPEN test.txt
/home/TestDir ACCESS test.txt
/home/TestDir CLOSE_NOWRITE,CLOSE test.txt
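One possible sketch, not taken from the thread: accumulate the events inotifywait reports for each file and fire once close_nowrite arrives. This assumes bash 4+ for associative arrays; the directory and the echoed command are placeholders.
#!/bin/bash
declare -A seen   # events observed so far, keyed by filename
inotifywait -m -q /home/testDir -e open -e access -e close_nowrite |
while read -r path action file; do
    if [[ "$file" =~ .*txt$ ]]; then
        seen[$file]+="$action "
        # act once the file has been closed again without writing
        if [[ "$action" == *CLOSE_NOWRITE* ]]; then
            if [[ "${seen[$file]}" == *OPEN*ACCESS*CLOSE_NOWRITE* ]]; then
                echo "open/access/close_nowrite sequence for $file"   # execute command here
            fi
            unset "seen[$file]"
        fi
    fi
done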

inotify seems to add a 6 letter code to filenames in its output, before the extension

inotify seems to add a 6 letter code to filenames in its output, before the extension.
For example:
"/path/to/directory/ CLOSE_WRITE,CLOSE filename-HzdVai.lyx"
or with --format "%w%f":
/path/to/directory/filename-HzdVai.lyx
This didn't happen with other scripts, and I couldn't find any example of this, or why it happens, by googling.
Code:
inotifywait -m -r -e close_write --exclude '[^l][^y][^x]$' ~/Routines/* ~/Projects/* | while read path msg name
do
echo "$path $msg $name"
lyx -e pdf "$path$name.lyx"
done
If it's relevant, I am using Ubuntu 20.04.
The intention of the script was to continuously keep PDF files up to date with their matching LyX documents (LyX is a LaTeX-based document processor), so that whenever I saved a document it would be compiled automatically.
@larks had guessed correctly: tracking move events as well showed that LyX just writes to a file with the temporary id first, then renames it.
The final, working script:
#!/usr/bin/env sh
inotifywait -m -r -e moved_to --exclude '[^l][^y][^x]$' --format "%w%f" ~/Routines/* ~/Projects/* | while read file_path
do
echo "$file_path"
lyx -e pdf "$file_path"
done

Not every command is being run in a while loop

I am trying to make a script that watches a folder and automatically encodes files that go into it using HandBrake. I want to do this by monitoring the folder with inotify, putting new additions to the folder into a list, and then using a cron job to encode them overnight. However, when using a while loop to loop over the list, HandBrake only encodes the first file, then the script carries on to after the loop without doing every file in the list. Here is the script that calls HandBrake:
#!/bin/bash
while IFS= read -r line
do
echo "$(basename "$line")"
HandBrakeCLI -Z "Very Fast 1080p30" -i "$line" -o "$line.m4v"
rm "$line"
done < list.txt
> list.txt
When testing the loop with a simple echo instead of HandBrakeCLI, it works fine and prints out every file, so I have no idea what is wrong.
Here is the script that monitors the folder, in case that is the problem:
#!/bin/bash
if ! [ -f list.txt ]
then
touch list.txt
fi
inotifywait -m -e create --format "%w%f" tv-shows | while read FILE
do
echo "$FILE" >> list.txt
done
Any help would be great, thanks
EDIT:
Just to be more specific: the script works fine for the first file in list.txt, it encodes it with no problem and removes the old version, but then it doesn't do any of the others in the list.
Taken from here
HandBrakeCLI reads from standard input, so inside the loop it consumes the rest of list.txt that read was iterating over. To solve the problem, simply
echo "" | HandBrakeCLI ......
or
HandBrakeCLI ...... < /dev/null
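Applied to the loop from the question, a sketch of the fixed script:
#!/bin/bash
while IFS= read -r line
do
    echo "$(basename "$line")"
    # /dev/null on stdin keeps HandBrakeCLI from eating the rest of list.txt
    HandBrakeCLI -Z "Very Fast 1080p30" -i "$line" -o "$line.m4v" < /dev/null
    rm "$line"
done < list.txt
> list.txt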

Is there a way to perform a "tail -f" from a URL?

I currently use tail -f to monitor a log file: this way I get an auto-refreshing console monitoring a web server.
Now, said web server has been moved to another host and I have no shell access there.
Nevertheless, I have a .txt network path, which in the end is a log file that is constantly updated.
So, I'd like to do something like tail -f, but on that URL.
Would it be possible? After all, "in Linux everything is a file", so…
You can do auto-refresh with the help of watch combined with wget.
It won't show history like tail -f; rather, it updates the screen like top does.
Example of a command that shows the content of file.txt on the screen and updates the output every five seconds:
watch -n 5 wget -qO- http://fake.link/file.txt
You can also output only the last n lines instead of the whole file:
watch -n 5 "wget -qO- http://fake.link/file.txt | tail"
If you still need tail -f-like behaviour (keeping the history), I think you need to write a script that downloads the log file every time period, compares it to the previously downloaded version, and then prints the new lines. It should be quite easy.
I wrote a simple bash script to fetch the URL content every 2 seconds, compare it with the local file output.txt, and append the diff to that same file.
I wanted to stream AWS Amplify logs in my Jenkins pipeline.
while true; do comm -13 --output-delimiter="" <(cat output.txt) <(curl -s "$URL") >> output.txt; sleep 2; done
Don't forget to create the empty output.txt file first:
: > output.txt
View the stream:
tail -f output.txt
Original comment: https://stackoverflow.com/a/62347827/2073339
UPDATE:
I found a better solution using wget here:
while true; do wget -ca -o /dev/null -O output.txt "$URL"; sleep 2; done
https://superuser.com/a/514078/603774
I've made this small function and added it to the .*rc of my shell. This uses wget -c, so it does not re-download the whole page:
# Poll logs continuously over HTTP
logpoll() {
FILE=$(mktemp)
echo "———————— LOGPOLLING TO $FILE ————————"
tail -f $FILE &
tail_pid=$!
bg %1
stop=0
trap "stop=1" SIGINT SIGTERM
while [ $stop -ne 1 ]; do wget -co /dev/null -O $FILE "$1"; sleep 2; done
echo "——————————— LOGPOLL DONE ————————————"
kill $tail_pid
rm $FILE
trap - SIGINT SIGTERM
}
Explanation:
Create a temporary logfile using mktemp and save its path to $FILE
Make tail -f output the logfile continuously in the background
Make ctrl+c set stop to 1 instead of exiting the function
Loop until stop bit is set, i.e. until the user presses ctrl+c
wget the given URL in a loop every two seconds:
-c - "continue getting partially downloaded file", so that wget continues instead of truncating the file and downloading again
-o /dev/null - wget's log messages shall be thrown into the void
-O $FILE - output the contents to the temp logfile we've created
Clean up after yourself: kill the tail -f, delete the temporary logfile, unset the signal handlers.
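Usage is then simply (the URL is a placeholder):
logpoll http://fake.link/file.txt    # Ctrl+C stops the polling and cleans up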
The proposed solutions periodically download the full file.
To avoid that, I've created a package and published it on NPM; it does a HEAD request (to get the size of the file) and then requests only the last bytes.
Check it out and let me know if you need any help.
https://www.npmjs.com/package/@imdt-os/url-tail
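The same idea can be sketched in plain shell with curl (this is only an illustration, not the package's code; the URL and the 2000-byte window are placeholders, and it assumes the server reports Content-Length and supports range requests):
URL="http://fake.link/file.txt"
# HEAD request to learn the current size of the file
size=$(curl -sI "$URL" | tr -d '\r' | awk 'tolower($1)=="content-length:" {print $2}')
# fetch only the last 2000 bytes (or the whole file if it is smaller)
start=$(( size > 2000 ? size - 2000 : 0 ))
curl -s -r "$start-" "$URL"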

Using inotify in a script to monitor a directory

I have written a bash script to monitor a particular directory, "/root/secondfolder/". The script is as follows:
#!/bin/sh
while inotifywait -mr -e close_write "/root/secondfolder/"
do
echo "close_write"
done
When I create a file called "fourth.txt" in "/root/secondfolder/" and write stuff to it, save and close it, it outputs the following but it does not echo "close_write":
/root/secondfolder/ CLOSE_WRITE,CLOSE fourth.txt
Can someone point me in the right direction?
You are not far from the solution. If you want to use inotifywait in your while statement, you should not use the -m option. With this option inotifywait never ends, because it is the monitor option, so you never enter the while loop.
This should work :
#!/bin/sh
while inotifywait -r -e close_write "/root/secondfolder/"
do
echo "close_write"
done
It turns out all I had to do was pipe the command into a while loop:
#!/bin/sh
inotifywait -mqr -e close_write "/root/secondfolder/" | while read line
do
echo "close_write"
done
