How do I rerun something until it succeeds in the background with a pipe in the Linux command line? - linux

Story: I'm following this guide to set up my geth node:
https://github.com/enthusiastics/bsc-archive-snapshot/blob/master/build_archive_node.sh
However, for this code chunk
# Query S3 for all archives and download them in parallel to a new zfs dataset.
while IFS= read -r FILE_NAME; do
    ZFS_NAME=$(echo "$FILE_NAME" | cut -d'.' -f1)
    ARCHIVE_NAMES+=("$ZFS_NAME")
    zfs create -o "mountpoint=/$ZFS_NAME" "tank/$ZFS_NAME"
    bash -c "cd /$ZFS_NAME && aws s3 cp --request-payer=requester '$S3_BUCKET_PATH/bsc/$FILE_NAME' - | /zstd/zstd --long=30 -d | tar -xf -" &
done <<<"$(aws s3 ls --request-payer=requester "$S3_BUCKET_PATH/bsc/" | cut -d' ' -f4)"
To be exact, for the following line
bash -c "cd /$ZFS_NAME && aws s3 cp --request-payer=requester '$S3_BUCKET_PATH/bsc/$FILE_NAME' - | /zstd/zstd --long=30 -d | tar -xf -" &
some instances result in a broken pipe, but an automatic restart can help. So I want to rewrite the code so that it automatically retries until the above line succeeds. I researched a bit and found the until ... do ... done method. Example:
until passwd ; do echo "Try again" ; done;
However, I could not succeed in incorporating the above idea into the GitHub code. I tried to rewrite it as:
bash -c "cd /$ZFS_NAME && until aws s3 cp --request-payer=requester '$S3_BUCKET_PATH/bsc/$FILE_NAME' - | /zstd/zstd --long=30 -d | tar -xf - ; do echo "Try again" ; done;" &
But it does not work... I tested it on this minimal code:
until passwd | echo "randompipe" ; do echo "Try again" ; done;
but it didn't rerun passwd until it succeeded.
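Why the minimal test fails: by default a pipeline's exit status is the exit status of its last command, so until passwd | echo "randompipe" loops on echo's status, which is always 0, and passwd is never retried. In bash, set -o pipefail makes the pipeline report failure if any stage fails. There is a second issue in the rewrite: the inner double quotes around "Try again" terminate the outer bash -c "..." string. A minimal sketch with both points fixed, assuming bash; the sleep back-off is my own addition, not in the original:

bash -c "set -o pipefail
cd /$ZFS_NAME || exit 1
until aws s3 cp --request-payer=requester '$S3_BUCKET_PATH/bsc/$FILE_NAME' - \
      | /zstd/zstd --long=30 -d | tar -xf -; do
    echo 'Try again'   # single quotes survive inside the outer double-quoted string
    sleep 5            # assumed back-off between retries
done" &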

Related

How to copy latest file from sftp to local directory using shell script?

I have multiple files on an SFTP server, from which I need to copy only the latest file. I have written sample code, but in it I am passing the filename. What logic do I need to add so that it identifies the latest file on the SFTP server and copies it to my local directory?
On the SFTP server:
my_data_20220428.csv
my_data_20220504.csv
my_data_20220501.csv
my_data_20220429.csv
The code which I am running:
datadir="/script/data"
cd ${datadir}
rm -f ${datadir}/my_data*.csv
rm -f ${logfile}
lftp<<END_SCRIPT
open sftp://${sftphost}
user ${sftpuser} ${sftppassword}
cd ${sftpfolder}
lcd $datadir
mget my_data_20220504.csv
bye
END_SCRIPT
What changes do I need to make so that it automatically picks the latest file from the server without hardcoding the filename?
You can try this script, mainly copied from your sample; it is expected that the variables have already been set.
#!/usr/bin/env bash
datadir="/script/data"
rm -f "$datadir"/my_data*.csv
rm -f "$logfile"
new=$(echo "ls -halt $sftpfolder" | lftp -u "${sftpuser}","${sftppassword}" sftp://"${sftphost}" | sed -n '/my_data/s/.* \(.*\)/\1/p' | head -1)
lftp -u "${sftpuser}","${sftppassword}" sftp://"${sftphost}" << --EOF--
cd "$sftpfolder"
lcd "$datadir"
get "$new"
bye
--EOF--
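Here ls -halt sorts the remote listing by modification time, newest first, so the sed filter plus head -1 picks the most recent file matching my_data. The sed expression keeps only the last space-separated field of each listing line, i.e. the filename.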
You could try:
latest=$(lftp "sftp://$sftpuser:$sftppassword@$sftphost" \
    -e "cd $sftpfolder; glob rels -1t *.csv; bye" |
    head -1)
lftp "sftp://$sftpuser:$sftppassword@$sftphost" \
    -e "cd $sftpfolder; mget $latest; bye"

Script will not download files when called by cron or udev rules

So I'm trying to make a script that will download my podcasts upon detecting my smart watch connecting, and transfer them to it. I've configured the udev rule to detect when the watch is connected, and it executes /bin/watch_transfer.sh, whose code is as follows:
#!/usr/bin/env sh
echo "Watch connected at $(date)" >>/tmp/scripts.log
# Download new podcasts
cd /home/pi/Scripts/
./bashpodder.shell >>/tmp/pscripts.log
echo "Upodder should've run by now">>/tmp/scripts.log
# Transfer podcasts
for file in /home/pi/Downloads/podcasts/*
do
    /usr/bin/mtp-sendfile "$file" /Podcasts
    echo "Processing $file" >>/tmp/scripts.log
done
echo "Sent all files" >>/tmp/scripts.log
I know that the file runs when the watch is connected, because /tmp/scripts.log is created and updated, and bashpodder.shell creates the podcast.m3u file, so the bashpodder script is running; but it doesn't download any files to ~/Downloads/podcasts. Bashpodder is a simple podcast downloader (I was using upodder but switched because it didn't seem to work) and mtp-tools is a way to transfer files through MTP. The bashpodder.shell script is below:
# By Linc 10/1/2004
# Find the latest script at http://lincgeek.org/bashpodder
# Revision 1.21 12/04/2008 - Many Contributers!
# If you use this and have made improvements or have comments
# drop me an email at linc dot fessenden at gmail dot com
# and post your changes to the forum at http://lincgeek.org/lincware
# I'd appreciate it!
# Make script crontab friendly:
cd $(dirname $0)
# datadir is the directory you want podcasts saved to:
datadir=/home/pi/Downloads/podcasts
# create datadir if necessary:
mkdir -p $datadir
# Delete any temp file:
rm -f temp.log
# Read the bp.conf file and wget any url not already in the podcast.log file:
while read podcast
do
    file=$(xsltproc parse_enclosure.xsl $podcast 2> /dev/null || wget -q $podcast -O - | tr '\r' '\n' | tr \' \" | sed -n 's/.*url="\([^"]*\)".*/\1/p')
    for url in $file
    do
        echo $url >> temp.log
        if ! grep "$url" podcast.log > /dev/null
        then
            wget -t 10 -U BashPodder -c -q -O $datadir/$(echo "$url" | awk -F'/' {'print $NF'} | awk -F'=' {'print $NF'} | awk -F'?' {'print $1'}) "$url"
        fi
    done
done < bp.conf
# Move dynamically created log file to permanent log file:
cat podcast.log >> temp.log
sort temp.log | uniq > podcast.log
rm temp.log
# Create an m3u playlist:
ls $datadir | grep -v m3u > $datadir/podcast.m3u
I think it might be something to do with permissions? When I run ./watch_transfer.sh from the terminal it runs perfectly. Thanks in advance for your help.
edit:
After connecting my watch:
Output of $ cat /tmp/scripts.log:
Watch connected at Thu Jul 16 22:25:47 BST 2020
Upodder should've run by now
Processing /home/pi/Downloads/podcasts/podcast.m3u
Sent all files
$ cat /tmp/pscripts.log doesn't output anything, but /tmp/pscripts.log does exist.
Output of $ cat ~/Scripts/temp.log:
http://rasterweb.net/raster/audio/rwaudio20060108.mp3
http://rasterweb.net/raster/audio/rwaudio20051020.mp3
http://rasterweb.net/raster/audio/rwaudio20051017.mp3
http://rasterweb.net/raster/audio/rwaudio20050807.mp3
http://rasterweb.net/raster/audio/rwaudio20050719.mp3
http://rasterweb.net/raster/audio/rwaudio20050615.mp3
http://rasterweb.net/raster/audio/rwaudio20050525.mp3
http://rasterweb.net/raster/audio/rwaudio20050323.mp3
This seems to suggest that bashpodder is running through the URLs but not actually downloading them?
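One thing worth ruling out when a script behaves differently under udev than from a terminal: udev executes RUN programs as root with a minimal environment (restricted PATH, no HOME), and on systemd-based systems long-running children of a udev rule can be killed once the event is handled, which would cut downloads short. A quick check is to log the environment at the top of watch_transfer.sh; a minimal sketch (the extra log lines are debugging additions, not part of the original script):

#!/usr/bin/env sh
# Compare this output when run by udev vs. run from a terminal.
{
    echo "--- run at $(date) ---"
    echo "user: $(id -un)"
    echo "PATH=$PATH"
    echo "HOME=$HOME"
} >>/tmp/scripts.log

If the environments differ, using absolute paths inside bashpodder.shell and detaching the long-running work from the udev event (for example via at now, or a systemd service started by the rule) are the usual fixes.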

Piped un-tar kill pipe when finished extracting given file from tarball

There is a big tarball that I am downloading using curl. I am interested in just one file within that tarball, so currently I am piping the output of curl to tar.
$ curl -S http://url/of/big/tarball.tar.gz | tar -xv path/of/one/file
Although it works fine this way, it will still download the humongous tarball completely, even when the required file has already been un-tarred. Is there a way to interrupt it automatically once tar has finished extracting the required file?
Edit: For anyone searching the web with the same question, I ended up creating a small bash script:
trap 'kill $(jobs -p)' EXIT
curl -S "${URL}" | tar -C "${OUTPUT_DIR}" -xv "${FILES[@]}" 2>&1 | head -"${FILES_CNT}" > "${CTRL_FILE}" 2>&1 &
# Wait for the required files to be found in the tar
until [[ -s "${CTRL_FILE}" && $(wc -l < "${CTRL_FILE}") -ge "${FILES_CNT}" ]]; do
    sleep 10s
done
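If the tar in use is GNU tar, the --occurrence option may achieve the same thing without the control file: when extracting an explicit member list, tar stops reading the archive after the requested members have been processed, and curl then exits on the resulting SIGPIPE. A sketch under that assumption, reusing the same variables:

curl -S "${URL}" | tar -C "${OUTPUT_DIR}" -x --occurrence "${FILES[@]}"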

Bash script to download graphic files from website

I'm trying to write a bash script in Linux (Debian) that will be used for downloading graphic files from a website given by the user at start-up. I'm not sure if my code is correct, but the first problem is that when I try to run my script with a website, e.g. http://www.bbc.com/, an error shows: http://www.bbc.com/ : invalid identifier. I even tried a simple website that has only a few JPG files. My next problem is to find out how to download files from a .txt file where the images' Internet addresses are listed.
#!/bin/bash
# $1 - URL $2 - new catalog name
read $1 $2
url=$1
fold=$2
mkdir -p $fold
if [$# -ne 3];
then
echo "Wrong command"
exit -1
fi
curl $url | grep -o -e "<img src=\".*\"+>" > img_list.txt |wc -l img_list.txt | lin=${% *}
baseurl=$(echo $url | grep -o "https?://[a-z.]*"")
curl -s $url | egrep -o "<img src\=[^>]*>" | sed 's/<img src=\"\([^"]*\).*/\1/.*/\1/g' > url_list.txt
sed -i "s|^/|$baseurl/|" url_list.txt
cd $fold;
What can I do next?
To download every image from the webpage I would use:
mech-dump --absolute --images http://example.com | xargs -n1 curl -O
but this needs the mech-dump command from the WWW::Mechanize package to be installed.
Using the list file
while read -r url folder
do
    mkdir -p "$folder" || exit 1
    (cd "$folder" && mech-dump --absolute --images "$url" | xargs -n1 curl -O)
done < list.txt
(assuming that no url or folder contains a space).
an error shows: http://www.bbc.com/ : invalid identifier
Your use of read is wrong; change
read $1 $2
url=$1
fold=$2
to
read url fold
or decide to specify the arguments on the command line and simply omit the read $1 $2 line.
Also, each operand in [ ] must be separated from the brackets by whitespace; change
if [$# -ne 3];
to
if [ -z "$fold" ]

Consuming bandwidth

I know how to write a basic bash script which uses wget to download a file, but how do I run this in an endless loop to download the specified file, delete it when the download is complete, and then download it again?
you're looking for
while :
do
    wget -O - -q "http://some.url/" > /dev/null
done
This will not save the file, will not output useless info, and will dump the contents over and over again into /dev/null.
Edit: to just consume bandwidth, use ping -f or ping -f -s 65507.
If your goal is to max out your bandwidth, especially for the purposes of benchmarking, use iperf. You run iperf on your server and client, and it will test your bandwidth using the protocol and parameters you specify. It can test one-way or two-way throughput and can optionally try to achieve a "target" bandwidth utilization (i.e. 3Mbps).
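For example, with the widely used iperf3 variant (flags per the standard iperf3 CLI; server.example.com is a placeholder for your server):

# on the machine acting as server:
$ iperf3 -s

# on the client: 30-second test, target rate 3 Mbit/s:
$ iperf3 -c server.example.com -t 30 -b 3M

# reverse the direction (server sends, client receives):
$ iperf3 -c server.example.com -t 30 -R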
Everything is possible with programming. :)
If you want to try to max out your internet bandwidth, you could start many wget processes downloading some big disk image files at the same time, while simultaneously sending some huge files back to some server.
The details are left for the implementation, but this is one method to max out your bandwidth.
In case you want to consume network bandwidth, you'll need another computer. Then, from computer A (IP 192.168.0.1), listen on a port (e.g. 12345):
$ netcat -l -p 12345
Then, from the other computer, send data to it:
$ netcat 192.168.0.1 12345 < /dev/zero
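To actually see the throughput, you can insert pv (pipe viewer) on the sending side, assuming it is installed:

$ pv /dev/zero | netcat 192.168.0.1 12345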
I prefer curl to wget; it is more scriptable. Here is an excerpt from a bash script I wrote which checks the SVN revision and then gives the user a choice to download the stable or the latest version. It then parses out the file, separating the "user settings" from the rest of the script.
svnrev=`curl -s -m10 mythicallibrarian.googlecode.com/svn/trunk/| grep -m1 Revision | sed s/"<html><head><title>mythicallibrarian - "/""/g| sed s/": \/trunk<\/title><\/head>"/""/g`
if ! which librarian-notify-send >/dev/null && test "$LinuxDep" = "1"; then
    dialog --title "librarian-notify-send" --yesno "install librarian-notify-send script for Desktop notifications?" 8 25
    test $? = 0 && DownloadLNS=1 || DownloadLNS=0
    if [ "$DownloadLNS" = "1" ]; then
        curl "http://mythicallibrarian.googlecode.com/files/librarian-notify-send">"/usr/local/bin/librarian-notify-send"
        sudo chmod +x /usr/local/bin/librarian-notify-send
    fi
fi
if [ ! -f "./librarian" ]; then
DownloadML=Stable
echo "Stable `date`">./lastupdated
else
lastupdated="`cat ./lastupdated`"
DownloadML=$(dialog --title "Version and Build options" --menu "Download an update first then Build mythicalLibrarian" 10 70 15 "Latest" "Download and switch to SVN $svnrev" "Stable" "Download and switch to last stable version" "Build" "using: $lastupdated" 2>&1 >/dev/tty)
if [ "$?" = "1" ]; then
clear
echo "mythicalLibrarian was not updated."
echo "Please re-run mythicalSetup."
echo "Done."
exit 1
fi
fi
clear
if [ "$DownloadML" = "Stable" ]; then
echo "Stable "`date`>"./lastupdated"
test -f ./mythicalLibrarian.sh && rm -f mythicalLibrarian.sh
curl "http://mythicallibrarian.googlecode.com/files/mythicalLibrarian">"./mythicalLibrarian.sh"
cat "./mythicalLibrarian.sh"| sed s/' '/'\\t'/g |sed s/'\\'/'\\\\'/g >"./mythicalLibrarian1" #sed s/"\\"/"\\\\"/g |
rm ./mythicalLibrarian.sh
mv ./mythicalLibrarian1 ./mythicalLibrarian.sh
parsing="Stand-by Parsing mythicalLibrarian"
startwrite=0
test -f ./librarian && rm -f ./librarian
echo -e 'mythicalVersion="'"`cat ./lastupdated`"'"'>>./librarian
while read line
do
test "$line" = "########################## USER JOBS############################" && let startwrite=$startwrite+1
if [ $startwrite = 2 ]; then
clear
parsing="$parsing""."
test "$parsing" = "Stand-by Parsing mythicalLibrarian......." && parsing="Stand-by Parsing mythicalLibrarian"
echo $parsing
echo -e "$line" >> ./librarian
fi
done <./mythicalLibrarian.sh
clear
echo "Parsing mythicalLibrarian completed!"
echo "Removing old and downloading new version of mythicalSetup..."
test -f ./mythicalSetup.sh && rm -f ./mythicalSetup.sh
curl "http://mythicallibrarian.googlecode.com/files/mythicalSetup.sh">"./mythicalSetup.sh"
chmod +x "./mythicalSetup.sh"
./mythicalSetup.sh
exit 0
fi
if [ "$DownloadML" = "Latest" ]; then
svnrev=`curl -s mythicallibrarian.googlecode.com/svn/trunk/| grep -m1 Revision | sed s/"<html><head><title>mythicallibrarian - "/""/g| sed s/": \/trunk<\/title><\/head>"/""/g`
echo "$svnrev "`date`>"./lastupdated"
test -f ./mythicalLibrarian.sh && rm -f mythicalLibrarian.sh
curl "http://mythicallibrarian.googlecode.com/svn/trunk/mythicalLibrarian">"./mythicalLibrarian.sh"
cat "./mythicalLibrarian.sh"| sed s/' '/'\\t'/g |sed s/'\\'/'\\\\'/g >"./mythicalLibrarian1" #sed s/"\\"/"\\\\"/g |
rm ./mythicalLibrarian.sh
mv ./mythicalLibrarian1 ./mythicalLibrarian.sh
parsing="Stand-by Parsing mythicalLibrarian"
startwrite=0
test -f ./librarian && rm -f ./librarian
echo -e 'mythicalVersion="'"`cat ./lastupdated`"'"'>>./librarian
while read line
do
test "$line" = "########################## USER JOBS############################" && let startwrite=$startwrite+1
if [ $startwrite = 2 ]; then
clear
parsing="$parsing""."
test "$parsing" = "Stand-by Parsing mythicalLibrarian......." && parsing="Stand-by Parsing mythicalLibrarian"
echo $parsing
echo -e "$line" >> ./librarian
fi
done <./mythicalLibrarian.sh
clear
echo "Parsing mythicalLibrarian completed!"
echo "Removing old and downloading new version of mythicalSetup..."
test -f ./mythicalSetup.sh && rm -f ./mythicalSetup.sh
curl "http://mythicallibrarian.googlecode.com/svn/trunk/mythicalSetup.sh">"./mythicalSetup.sh"
chmod +x "./mythicalSetup.sh"
./mythicalSetup.sh
exit 0
fi
EDIT: Never mind, I thought you were saying it was downloading in an endless loop.
