Download all mp3 files on a site and rename them in the process - linux

I want to download all episodes of a podcast that is poorly organized. Every episode is placed in a separate subfolder on the server, and they all have the file name "file.mp3". I'd like to download them sequentially, renaming each one before moving on to the next. Using something like wget naively causes each file to overwrite the previous one, since they all share the same file name.

wget normally doesn't overwrite files; it adds a number as a suffix:
file.mp3
file.mp3.1
file.mp3.2
...
But you can prevent that by calling it in a loop and using its -O option to specify the name:
count=0
urls=( http://example.com/folderA/file.mp3
http://example.com/folderB/file.mp3
http://example.com/folderC/file.mp3
)
for url in "${urls[@]}" ; do
wget -O "file-$count.mp3" "$url"
(( count++ ))
done
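If the episode URLs are already collected in a text file (one per line; urls.txt is just a hypothetical name here), the same idea works with zero-padded names so the downloads sort correctly - a minimal sketch:
count=0
while IFS= read -r url; do
wget -O "$(printf 'episode-%03d.mp3' "$count")" "$url"
(( count++ ))
done < urls.txt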

Related

Recursively appending names of all files in a directory with exif specific png meta data field (aesthetic_score) with linux / EXIFtool

I am trying to rename all files located in a directory (recursively) with a specific metadata field appended to the end of the png file name.
The metadata field name is "aesthetic_score", with a value range from 1.0-9.0.
when I type:
exiftool -Aesthetic_score -G1 -s testn.png
the result is:
[PNG] Aesthetic_score : 7.0
This is how I would like to append to the png file names recursively within a directory.
Note: I would like to swap out the word aesthetic for the word chad in the appended part, and not all files will have this data field:
input file:
filename001.png (metadata aesthetic_score:7.0)
output:
filename001-chad-score-70.png
I tried to use Digikam and JExifToolGui-2.01, without success.
I am trying to perform this task in the cmd line, although other solutions are welcome. Thank you for your help.
So, this might work for you; I can't really test it. Note that you would need to get rid of the echo before the mv for it to actually do something (rename rather than just show what it would do).
while read name
do
newname=$(exiftool -G1 -s "$name"|awk '$2~/FileName/{name=$4}; $2~/Aesthetic_score/{basename=gensub(/^(.+)\....$/,"\\1","1",name);ext=gensub(/^.*\.(...)$/,"\\1","1",name);gsub(/\./,"",$4);print basename"."$4"."ext}')
echo mv "$name" "$newname"
done <<< "$(find . -iname '*.png')"
Basically the find at the very end finds all the pngs.
The while loop takes every name find throws at it, passes each file through exiftool (using your specs), and parses the output with awk, which prints the new name; that gets captured in the shell variable newname.
And finally the mv (without the echo) renames the files.
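If the awk parsing feels opaque, here is a simpler sketch of the same idea (also untested): ask exiftool for just the value (-s3 prints the value only), build the new name in the shell, and keep the echo safety net until the output looks right:
find . -iname '*.png' -print0 | while IFS= read -r -d '' f; do
score=$(exiftool -s3 -Aesthetic_score "$f")
[ -n "$score" ] || continue   # skip files without the field
score=${score/./}             # 7.0 -> 70
echo mv "$f" "${f%.png}-chad-score-${score}.png"
done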

bash - opening an image only when a corresponding text file exists

I came across a problem in Bash when trying to open images only based upon the information stored in .txt files about them. I am trying to sort a number of images by size or height and display them in the sorted order, but if a .jpg exists in the folder without a .txt file of the same name, it should not be processed.
I have the sorting piece of my situation done, and am trying to figure out how I would go about opening only the images that have a .jpg extension WITH a .txt file.
I figured a solution would involve putting every .jpg's name (without extension) in a list, then processing through the list and running something like:
[if -f $filename.txt ]; then ~~~
but I came across the problem of iterating through without a for-loop, or else all the pictures would open multiple times. My attempt was:
for i in *jpg; do
y=$y ${i.jpg}
done
if[ -f $y.txt ] then
(sorting parts)
This only looked at the last filename in y, as it should, but I am trying to figure out a way to look at each separate filename and see if there exists that textfile, in order to include it in the sorting.
Thanks so much for your help!
Collecting a list of file names in a single variable is an antipattern. You want to collect them in an array instead.
a=()
for f in *.jpg; do
if [ ! -e "${f%.jpg}.txt" ]; then   # skip jpgs without a matching txt
continue
fi
a+=("$f")
done
# now do things with "${a[@]}"
Frequently, you don't really need to collect the files in an array -- just do everything you were doing inside the for loop to each individual file as you traverse the files.
(And actually y=$y ${i%.jpg} doesn't append to y -- it sets y to itself for the duration of an attempt to execute a file named after $i without the .jpg extension, which will fail in the vast majority of cases.)
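For example, if "doing things" just means handing each matching image to a viewer, the check and the action can live in the same loop (xdg-open here is only an assumption about what is installed):
for f in *.jpg; do
if [ -e "${f%.jpg}.txt" ]; then
xdg-open "$f"   # or whatever sorting/processing step you need
fi
done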
I would do the file check first such that find just reports files that have a corresponding text file. The following snippet will just display jpg files that have a corresponding txt file:
find . -maxdepth 1 -name "*.jpg" -exec /bin/bash -c '[ -e "${0%.*}.txt" ] && echo "$0";' {} \;

How to keep directory structure with aria2?

I need to download files simultaneously - wget doesn't support that, so I want to try aria2. But I don't see an option in aria2 to keep the directory structure.
Determine the directory structure first,
then build and use a download description file:
aria2c -i uri.txt
where uri.txt might contain
http://serverA/file1.iso http://mirror-serverB/file1.iso
# option lines must begin with a space, otherwise they are treated as URIs!
  dir=/downloads/a
# out= is not mandatory
  out=file1.iso
http://serverA/file2.iso http://mirror-serverB/file2.iso
  dir=/downloads/b
  out=file2.iso
Keep in mind that aria2 is a download util, not a sync util like rsync or lftp.
Referencing an rsync answer: https://stackoverflow.com/a/4147263/1163786
and an lftp answer: https://superuser.com/a/305236.
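Since the original goal was to download simultaneously, it may also help to raise the concurrency when running against the input file - a minimal sketch (-j sets the number of parallel downloads, -x the connections per server):
aria2c -i uri.txt -j 4 -x 2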

Compress a set of log files in a folder depending on number of files

I would like to know if there is any way to compress a set of .txt files in a folder using scripting when the number of files gets above a set limit.
The txt files are automatically generated by another script.
You can use array size to detect the number of files:
limit=100
files=(*.txt)
if (( ${#files[@]} > limit )) ; then
zip archive.zip *.txt
fi
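One caveat: if no .txt files exist, the unexpanded pattern itself counts as one array element; enabling nullglob first avoids that:
shopt -s nullglob   # with no matches, files=(*.txt) is then an empty array
files=(*.txt)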
It sounds like you want logrotate with a custom (non-/etc) configuration file with rules for compressing/removing by size.
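A minimal sketch of such a setup, assuming the generated logs live in /path/to/logs (note that logrotate triggers on size or age rather than on file count, so this only approximates a "number of files" limit):
cat > my-logrotate.conf <<'EOF'
/path/to/logs/*.txt {
    size 1M
    rotate 10
    compress
    missingok
    notifempty
}
EOF
# run with a private state file so no root permissions are needed
logrotate -s ./my-logrotate.state ./my-logrotate.conf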

How to move and number files?

I am working with Linux, bash.
I have one directory with 100 folders in it, each one named different.
In each of these 100 folders, there is a file called first.bars (so I have 100 files named first.bars). Although all named first.bars, the files are actually slightly different.
I want to get all these files moved to one new folder and rename/number them so that I know which file comes from which folder. So the first first.bars file must be renamed to 001.bars, the second to 002.bars, etc.
I have tried the following:
ls -d * >> /home/directorywiththe100folders/list.txt
cat list.txt | while read line;
do cd $line;
mv first.bars /home/newfolder
This does not work because I can't have 100 files with the same name in one folder. So I only need to know how to rename them. The renaming must be connected to the cat list.txt, because the first line is the folder containing the first file which is moved and renamed. That file will be called 001.bars.
Try doing this :
$ rename 's/^.*?\./sprintf("%03d.", ++$c)/e' *.bars
If you want more information about this command, see this response I gave earlier: How do I rename multiple files beginning with a Unix timestamp - imapsync issue
If the rename command is not available,
for d in /home/directorywiththe100folders/*/; do
newfile=$(printf "/home/newfolder/%03d.bars" $(( ++c )) )
mv "${d}first.bars" "$newfile"
done
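If the numbering alone isn't enough to tell which folder a file came from, a variant of the same loop (same assumptions about the paths) can embed the source folder name in the new file name instead of a counter:
for d in /home/directorywiththe100folders/*/; do
folder=$(basename "$d")
mv "${d}first.bars" "/home/newfolder/$folder.bars"
done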
