inotifywait seems to add a six-character code to filenames in its output, just before the extension.
For example:
"/path/to/directory/ CLOSE_WRITE,CLOSE filename-HzdVai.lyx"
or with --format "%w%f":
/path/to/directory/filename-HzdVai.lyx
This didn't happen with other scripts, and googling turned up no examples of it or explanation of why it happens.
code:
inotifywait -m -r -e close_write --exclude '[^l][^y][^x]$' ~/Routines/* ~/Projects/* | while read path msg name
do
echo "$path $msg $name"
lyx -e pdf "$path$name.lyx"
done
If it's relevant, I am using Ubuntu 20.04.
The intention of the script was to keep PDF files continuously in sync with my LyX documents (LyX is a LaTeX-based document processor), so that whenever I saved a document it would be compiled automatically.
#larks had guessed correctly: tracking move events as well showed that LyX writes to a temporary file carrying the random id, then renames it to the real filename.
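A quick way to confirm this atomic-save pattern is to watch both event types at once and then save a document (a minimal sketch; the watched path is just an example):
inotifywait -m -e close_write -e moved_to --format '%e %w%f' ~/Projects
On save, CLOSE_WRITE fires on the temporary name and MOVED_TO fires on the real one, which is why watching moved_to below catches the finished file.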
The final, working script:
#!/usr/bin/env sh
inotifywait -m -r -e moved_to --exclude '[^l][^y][^x]$' --format "%w%f" ~/Routines/* ~/Projects/* | while read file_path
do
echo "$file_path"
lyx -e pdf "$file_path"
done
So I'm trying to make a script that downloads my podcasts when it detects my smart watch connecting, then transfers them to it. I've configured a udev rule to detect when the watch is connected; it executes /bin/watch_transfer.sh, whose code is as follows:
#!/usr/bin/env sh
echo "Watch connected at $(date)" >>/tmp/scripts.log
# Download new podcasts
cd /home/pi/Scripts/
./bashpodder.shell >>/tmp/pscripts.log
echo "Upodder should've run by now">>/tmp/scripts.log
# Transfer podcasts
for file in /home/pi/Downloads/podcasts/*
do
    /usr/bin/mtp-sendfile "$file" /Podcasts
    echo "Processing $file" >>/tmp/scripts.log
done
echo "Sent all files" >>/tmp/scripts.log
I know that the script runs when the watch is connected, because /tmp/scripts.log is created and updated, and bashpodder.shell creates the podcast.m3u file, so the bashpodder script is running; but it doesn't download any files to ~/Downloads/podcasts. Bashpodder is a simple podcast downloader (I was using upodder but switched because it didn't seem to work) and mtp-tools is a way to transfer files through MTP. The bashpodder.shell script is below:
# By Linc 10/1/2004
# Find the latest script at http://lincgeek.org/bashpodder
# Revision 1.21 12/04/2008 - Many Contributers!
# If you use this and have made improvements or have comments
# drop me an email at linc dot fessenden at gmail dot com
# and post your changes to the forum at http://lincgeek.org/lincware
# I'd appreciate it!
# Make script crontab friendly:
cd $(dirname $0)
# datadir is the directory you want podcasts saved to:
datadir=/home/pi/Downloads/podcasts
# create datadir if necessary:
mkdir -p $datadir
# Delete any temp file:
rm -f temp.log
# Read the bp.conf file and wget any url not already in the podcast.log file:
while read podcast
do
    file=$(xsltproc parse_enclosure.xsl $podcast 2> /dev/null || wget -q $podcast -O - | tr '\r' '\n' | tr \' \" | sed -n 's/.*url="\([^"]*\)".*/\1/p')
    for url in $file
    do
        echo $url >> temp.log
        if ! grep "$url" podcast.log > /dev/null
        then
            wget -t 10 -U BashPodder -c -q -O $datadir/$(echo "$url" | awk -F'/' {'print $NF'} | awk -F'=' {'print $NF'} | awk -F'?' {'print $1'}) "$url"
        fi
    done
done < bp.conf
# Move dynamically created log file to permanent log file:
cat podcast.log >> temp.log
sort temp.log | uniq > podcast.log
rm temp.log
# Create an m3u playlist:
ls $datadir | grep -v m3u > $datadir/podcast.m3u
I think it might be something to do with permissions, as when I run ./watch_transfer.sh from the terminal it runs perfectly. Thanks in advance for your help.
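One way to test the permissions theory is to log which user the script runs as, and what environment it sees, when udev invokes it (a debugging sketch; this logging block is an addition, not part of the original script):
# Add near the top of watch_transfer.sh:
{
    echo "--- udev run at $(date) ---"
    id                  # udev rules typically run as root, not as pi
    echo "PATH=$PATH"   # udev's PATH is much shorter than a login shell's
    echo "HOME=$HOME"   # HOME may be unset or /root rather than /home/pi
} >>/tmp/scripts.log 2>&1
Comparing this output with a normal terminal run should show any difference in user or environment.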
edit:
After connecting my watch:
Output of $ cat /tmp/scripts.log:
Watch connected at Thu Jul 16 22:25:47 BST 2020
Upodder should've run by now
Processing /home/pi/Downloads/podcasts/podcast.m3u
Sent all files
$ cat /tmp/pscripts.log doesn't output anything, but /tmp/pscripts.log does exist.
Output of $ cat ~/Scripts/temp.log:
http://rasterweb.net/raster/audio/rwaudio20060108.mp3
http://rasterweb.net/raster/audio/rwaudio20051020.mp3
http://rasterweb.net/raster/audio/rwaudio20051017.mp3
http://rasterweb.net/raster/audio/rwaudio20050807.mp3
http://rasterweb.net/raster/audio/rwaudio20050719.mp3
http://rasterweb.net/raster/audio/rwaudio20050615.mp3
http://rasterweb.net/raster/audio/rwaudio20050525.mp3
http://rasterweb.net/raster/audio/rwaudio20050323.mp3
This seems to suggest that bashpodder is running through the urls but not actually downloading them?
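One way to find out why the downloads fail silently: bashpodder invokes wget with -q, which suppresses all error output. Temporarily dropping that flag and logging the output should show what goes wrong when the script runs under udev (a debugging sketch; only the removed -q and the added redirection differ from the stock line):
wget -t 10 -U BashPodder -c -O $datadir/$(echo "$url" | awk -F'/' {'print $NF'} | awk -F'=' {'print $NF'} | awk -F'?' {'print $1'}) "$url" >>/tmp/wget-debug.log 2>&1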
I am trying to make a script that watches a folder and automatically encodes files that land in it using HandBrake. My plan is to monitor the folder using inotify, append new additions to a list, and then use a cron job to encode them overnight. However, when I loop over the list with a while loop, HandBrake only encodes the first file; the script then carries on past the loop without processing every file in the list. Here is the script that calls HandBrake:
#!/bin/bash
while IFS= read -r line
do
echo "$(basename "$line")"
HandBrakeCLI -Z "Very Fast 1080p30" -i "$line" -o "$line.m4v"
rm "$line"
done < list.txt
> list.txt
When testing the loop with a simple echo instead of the HandBrakeCLI call, it works fine and prints out every file, so I have no idea what is wrong.
Here is the script that monitors the folder, in case that is the problem:
#!/bin/bash
if ! [ -f list.txt ]
then
    touch list.txt
fi
inotifywait -m -e create --format "%w%f" tv-shows | while read FILE
do
    echo "$FILE" >> list.txt
done
Any help would be great, thanks
EDIT:
Just to be more specific: the script works fine for the first file in list.txt, encoding it no problem and removing the old version, but it then doesn't process any of the others in the list.
Taken from here
The cause is that HandBrakeCLI reads from standard input, and inside the loop stdin is list.txt, so it consumes the remaining lines of the list. To solve the problem, simply
echo "" | HandBrakeCLI ......
or
HandBrakeCLI ...... < /dev/null
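Applied to the script above, the fixed loop looks like this (the same code with only stdin redirected):
#!/bin/bash
while IFS= read -r line
do
    echo "$(basename "$line")"
    # Redirect stdin so HandBrakeCLI cannot swallow the rest of list.txt
    HandBrakeCLI -Z "Very Fast 1080p30" -i "$line" -o "$line.m4v" < /dev/null
    rm "$line"
done < list.txt
> list.txt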
I'm using ffmpeg to create a livestream video playlist of a folder of mp3s and a folder of videos.
I'd like a new video to start looping every time a new song comes on, and to keep looping until the next song.
Initially I was using live-stream-radio, which is perfect except for how playback is handled: after every track a new ffmpeg stream loop is initialized. In a lot of clients this issues a stop command, and there's "dead space" in between.
My attempt was, when creating the gif playlist text file (they were gifs, but I converted them to mp4), to set each entry's duration to the duration of the corresponding track. The problem is that each video plays once and then freezes on its final frame until the next track.
rm music.txt
rm gifs.txt
printf "ffconcat version 1.0\n" >> gifs.txt
printf "ffconcat version 1.0\n" >> music.txt
for i in {1..9}; do
    printf "file 'mp3/00%s.mp3'\n" $i >> music.txt
done
for i in {1..9}; do
    DURATION=$(ffmpeg -i mp3/00$i.mp3 2>&1 | awk '/Duration/ { print substr($2,0,length($2)-1) }')
    printf "file 'gif/00%s.mp4'\nduration %s\n" $i $DURATION >> gifs.txt
done
ffmpeg \
    -stream_loop -1 \
    -i gifs.txt \
    -i music.txt \
    -vcodec libx264 \
    -f flv "$URL"
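For reference, the generated gifs.txt then looks something like this (the durations shown are illustrative; in practice they are whatever ffmpeg reported for each mp3):
ffconcat version 1.0
file 'gif/001.mp4'
duration 00:03:25.12
file 'gif/002.mp4'
duration 00:04:01.80
...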
Any ideas here would be great.
I currently use tail -f to monitor a log file: this way I get an auto-refreshing console monitoring a web server.
Now, said web server was moved to another host and I have no shell privileges for it.
Nevertheless I have a .txt network path, which in the end is a log file that is constantly updated.
So, I'd like to do something like tail -f, but on that URL.
Would it be possible? In the end, "in Linux everything is a file", so...
You can do auto-refresh with the help of watch combined with wget.
It won't show history like tail -f; rather, it updates the screen like top.
An example command that shows the content of file.txt on the screen, updating the output every five seconds:
watch -n 5 wget -qO- http://fake.link/file.txt
Also, you can output only the last lines, instead of the whole file:
watch -n 5 "wget -qO- http://fake.link/file.txt | tail"
In case you still need behaviour like tail -f (keeping the history), I think you need to write a script that downloads the log file at regular intervals, compares it to the previously downloaded version, and then prints the new lines. It should be quite easy.
I wanted to stream AWS Amplify logs in my Jenkins pipeline, so I wrote a simple bash script that fetches the URL content every 2 seconds, compares it with the local file output.txt, and appends the diff to that same file:
while true; do comm -13 --output-delimiter="" <(cat output.txt) <(curl -s "$URL") >> output.txt; sleep 2; done
Don't forget to create the empty output.txt file first:
: > output.txt
View the stream:
tail -f output.txt
Original comment: https://stackoverflow.com/a/62347827/2073339
UPDATE:
I found a better solution using wget here:
while true; do wget -c -o /dev/null -O output.txt "$URL"; sleep 2; done
https://superuser.com/a/514078/603774
I've made this small function and added it to the .*rc of my shell. This uses wget -c, so it does not re-download the whole page:
# Poll logs continuously over HTTP
logpoll() {
    FILE=$(mktemp)
    echo "———————— LOGPOLLING TO $FILE ————————"
    tail -f "$FILE" &
    tail_pid=$!
    stop=0
    trap "stop=1" SIGINT SIGTERM
    while [ $stop -ne 1 ]; do wget -co /dev/null -O "$FILE" "$1"; sleep 2; done
    echo "——————————— LOGPOLL DONE ————————————"
    kill $tail_pid
    rm "$FILE"
    trap - SIGINT SIGTERM
}
Explanation:
Create a temporary logfile using mktemp and save its path to $FILE
Make tail -f output the logfile continuously in the background
Make ctrl+c set stop to 1 instead of exiting the function
Loop until stop bit is set, i.e. until the user presses ctrl+c
wget the given URL in a loop every two seconds:
-c - "continue getting partially downloaded file", so that wget continues instead of truncating the file and downloading again
-o /dev/null - wget's log messages shall be thrown into the void
-O $FILE - output the contents to the temp logfile we've created
Clean up after yourself: kill the tail -f, delete the temporary logfile, unset the signal handlers.
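Usage is then simply (with the same placeholder URL as above):
logpoll http://fake.link/file.txt
and ctrl+c stops the polling and triggers the cleanup.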
The proposed solutions periodically download the full file.
To avoid that, I've created a package and published it on npm; it does a HEAD request (to get the size of the file) and then requests only the last bytes.
Check it out and let me know if you need any help.
https://www.npmjs.com/package/@imdt-os/url-tail