High-throughput viewing and selecting of photos - Linux

From the thousands and thousands of personal photographs in my collection, I'd like to select some special ones to print and display as a collage. All the photos are on one hard drive, but scattered around /home/$USER. I know how to find all the jpg photos with a command like find / -iname "*.jpg" -print, but that only lists the file names. I could run a similar command to view each file, but that is only half the challenge.
How can I then view each photograph and also get a dialog asking whether or not to copy the photo to the directory that will be printed? (For example, with fdupes -r -d /home/$USER I get a dialog asking which file to delete.)
(Some background: I use Ubuntu 12.04 x64 and I'm comfortable with the terminal.)

# Assuming $pic_list holds the paths of all images,
# i.e. something like pic_list=$(find / -iname "*.jpg")
for pic in $pic_list
do
    display "$pic" &   # or: eog "$pic" &
    echo "Press 'y' to copy $pic to /home/$USER/<dest_folder>"
    read -r option
    if [ "$option" = "y" ] || [ "$option" = "Y" ]
    then
        cp -f "$pic" /home/$USER/<dest_folder>
    else
        echo "will not copy $pic"
    fi
done
If this is not what you are looking for, please do let me know.
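One caveat: the unquoted $pic_list word-splits, so file names containing spaces will break that loop. A space-safe variant of the same idea, as an untested sketch (the destination folder is still a placeholder):

find "/home/$USER" -iname '*.jpg' -print0 |
while IFS= read -r -d '' pic; do
    display "$pic" &   # or: eog "$pic" &
    # stdin is the pipe here, so read the answer from the terminal
    read -r -p "Copy $pic to /home/$USER/<dest_folder>? [y/N] " option </dev/tty
    if [ "$option" = "y" ] || [ "$option" = "Y" ]; then
        cp -f "$pic" /home/$USER/<dest_folder>
    fi
done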

1. Make symlinks to all the images in a single directory
mkdir all-pics
cd all-pics
find ~/Pictures/ -iname '*.jpg' | \
awk '{name=$0; gsub(/[/]/,"_", name);\
system("ln -s \"" $0 "\" \"" name "\"")}'
Note: The awk script generates and executes the command ln -s "/path/to the/original image.jpg" "_path_to the_original image.jpg" for each image found. (A plain-bash equivalent is sketched after this list.)
2. Use geeqie to view the images.
3. Use the Ctrl+C shortcut to copy the current image to a separate to_be_printed/ folder. geeqie's copy dialog remembers the last selected folder, so you'd only be pressing Ctrl+C, Enter to copy each picture.
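For reference, a plain-bash equivalent of the awk one-liner from step 1 (an untested sketch; like the original, it assumes file names contain no newlines):

find ~/Pictures/ -iname '*.jpg' | while IFS= read -r f; do
    ln -s "$f" "${f//\//_}"   # replace every / with _ to build the link name
done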

There are many ways to solve this problem. I have personally always found qiv a nice tool for problems like this. You can easily configure it to read a qiv-command config in which you script exactly what should happen on a particular keypress. I use it for a task similar to yours and just keep my fingers on d (delete) and Space (next).
e.g.
https://bitbucket.org/ciberandy/qiv/src/3b3fb21db52c076cd05792f648df8ae659d1af92/qiv-command.example
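For illustration, a minimal qiv-command sketch in the spirit of that example. The calling convention assumed here (the pressed key in $1, the current file in $2) is taken from the linked example file; verify it against your qiv version:

#!/bin/sh
# hypothetical ~/.qiv-command: qiv runs this on certain keypresses
case "$1" in
    1) cp -n "$2" "$HOME/to_be_printed/" && echo "queued $2 for printing" ;;
esac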

Related

Using bash to loop through nested folders to run script in current working directory

I've got (what feels like) a fairly simple problem but my complete lack of experience in bash has left me stumped. I've spent all day trying to synthesize a script from many different SO threads explaining how to do specific things with unintuitive commands, but I can't figure out how to make them work together for the life of me.
Here is my situation: I've got a directory full of nested folders each containing a file with extension .7 and another file with extension .pc, plus a whole bunch of unrelated stuff. It looks like this:
Folder A
    Folder 1
        Folder x
            data_01.7
            helper_01.pc
            ...
        Folder y
            data_02.7
            helper_02.pc
            ...
        ...
    Folder 2
        Folder z
            data_03.7
            helper_03.pc
            ...
    ...
Folder B
...
I've got a script that I need to run in each of these folders that takes in the name of the .7 file as an input.
pc_script -f data.7 -flag1 -other_flags
The current working directory needs to be the folder containing the .7 file when running the script, and the helper .pc file also needs to be present in it. After the script finishes running, there are a ton of new files and directories. However, I need to take just one of those output files, result.h5, and copy it to a new directory, maintaining the same folder structure but with a new name:
Result Folder/Folder A/Folder 1/Folder x/new_result1.h5
I then need to run the same script again with a different flag, flag2, and copy the new version of that output file to the same result directory with a different name, new_result2.h5.
The folders all have pretty arbitrary names, though there aren't any spaces or special characters beyond underscores.
Here is an example of what I've tried:
#!/bin/bash
DIR=".../project/data"
for d in */ ; do
    for e in */ ; do
        for f in */ ; do
            for PFILE in *.7 ; do
                echo "$d/$e/$f/$PFILE"
                cd "$DIR/$d/$e/$f"
                echo "Performing operation 1"
                pc_script -f "$PFILE" -flag1
                mkdir -p ".../results/$d/$e/$f"
                mv "results.h5" ".../project/results/$d/$e/$f/new_results1.h5"
                echo "Performing operation 2"
                pc_script -f "$PFILE" -flag 2
                mv "results.h5" ".../project/results/$d/$e/$f/new_results2.h5"
            done
        done
    done
done
Obviously, this didn't work. I've also tried using find with -execdir but then I couldn't figure out how to insert the name of the file into the script flag. I'd appreciate any help or suggestions on how to carry this out.
Another, perhaps more flexible, approach is to use the find command with the -exec option to run a short helper script for each file found below a directory path that ends in ".7". The -name option lets find locate all files ending in ".7" below a given directory using simple file globbing (wildcards). The helper script then performs the same operations on each file found by find and handles copying result.h5 to the proper directory.
The form of the command will be:
find /path/to/search -type f -name "*.7" -exec /path/to/helper-script '{}' \;
Where -type f tells find to only return files (not directories) and -name "*.7" matches names ending in ".7". Your helper script needs to be executable (e.g. chmod +x helper-script) and, unless it is in your PATH, you must provide the full path to it in the find command. The '{}' will be replaced by the filename (including its relative path) and passed as an argument to your helper script. The \; simply terminates the command executed by -exec.
(Note there is another form of -exec called -execdir, and another terminator, '+', that can be used to run the command on all files in a given directory at once -- that is a bit safer, but has additional PATH requirements for the command being run. Since you have only one ".7" file per directory, there isn't much benefit here.)
The helper-script just does what you need to do in each directory. Based on your description it could be something like the following:
#!/bin/bash

dir="${1%/*}"                   ## trim file.7 from end of path

cd "$dir" || {                  ## change to directory or handle error
    printf "unable to change to directory %s\n" "$dir" >&2
    exit 1
}

destdir="/Result_Folder/$dir"   ## set destination dir for result.h5

mkdir -p "$destdir" || {        ## create with all parent dirs or exit
    printf "unable to create directory %s\n" "$destdir" >&2
    exit 1
}

ls *.pc >/dev/null 2>&1 || exit 1   ## check a .pc file exists or exit

file7="${1##*/}"                ## trim path from file.7 name

pc_script -f "$file7" -flag1 -other_flags   ## first run
## check result.h5 exists and is non-empty, then copy to destdir
[ -s "result.h5" ] && cp -a "result.h5" "$destdir/new_result1.h5"

pc_script -f "$file7" -flag2 -other_flags   ## second run
## check result.h5 exists and is non-empty, then copy to destdir
[ -s "result.h5" ] && cp -a "result.h5" "$destdir/new_result2.h5"
This essentially stores the path part of the file.7 argument in dir and changes to that directory. If it is unable to change to the directory (due to read permissions, etc.), the error is handled and the script exits. Next, the full directory structure is created below your Result_Folder with mkdir -p, with the same error handling if the directory cannot be created.
ls is used as a simple check to verify that a file ending in ".pc" exists in that directory. There are other ways to do this, such as piping the results to wc -l, but that spawns additional subshells that are best avoided.
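Since the script runs under bash anyway, a subshell-free alternative for that check is compgen -G, which succeeds only if the glob matches at least one file (an aside, not part of the original script):

compgen -G "*.pc" > /dev/null || exit 1   ## no .pc file present: bail out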
(Also note that Linux and Mac systems have files ending in ".pc" for use by pkg-config when building programs from source -- they should not conflict with your files, but be aware they exist in case you ever find yourself chasing down weird ".pc" files.)
After all tests are performed, the path is trimmed from the current ".7" filename, storing just the filename in file7. The file7 variable is then used in your pc_script command (which should also include the full path to the script if it is not in your PATH). After pc_script runs, [ -s "result.h5" ] verifies that result.h5 exists and is non-empty before copying that file to your Result_Folder location.
That should get you started. Using find to locate all ".7" files is a simple way to let the tool designed for finding files do its job, rather than trying to hand-roll your own traversal. That way you only have to concentrate on what should be done for each file found. (Note: I don't have pc_script or your files, so I have not tested this end-to-end, but it should be very close if not right on the money.)
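Concretely, assuming the data lives under ~/project/data and you saved the helper as ~/bin/helper-script (both paths are placeholders for your own):

chmod +x ~/bin/helper-script
find ~/project/data -type f -name "*.7" -exec ~/bin/helper-script '{}' \;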
There is nothing wrong with writing your own routine, but using find eliminates a lot of the places where bugs can hide in a hand-rolled solution.
Let me know if you have further questions.

How can I stop my script from overwriting existing files

I have been learning bash for 6 days and I think I have got some of the basics down.
Anyway, I've written two scripts for the wallpapers downloaded from Variety. One of them moves downloaded photos older than 12 days to a folder and renames them all as "Aday 1,2,3...", and the other lets me select some of these, moves them to another folder, and removes the photos I didn't select. The first script works just as I intended; my question is about the second.
I think I should write the script down to better explain my problem
Script:
#!/bin/bash
# Move victors of 'Seçme-Eleme' to 'Kazananlar'
cd /home/eurydice/Bulunur\ Bir\ Şeyler/Dosyamsılar/Seçme-Eleme
echo "Select victors"
read vct
for i in $vct; do
    mv -i "Aday $i.png" /home/eurydice/"Bulunur Bir Şeyler"/Dosyamsılar/Kazananlar/"Bahar $RANDOM.png"
    mv -i "Aday $i.jpg" /home/eurydice/"Bulunur Bir Şeyler"/Dosyamsılar/Kazananlar/"Bahar $RANDOM.jpg"
done
# Now let's remove the rest
rm /home/eurydice/Bulunur\ Bir\ Şeyler/Dosyamsılar/Seçme-Eleme/*
In this script I originally intended to define another variable (let's call it "n"), which I did by copying and changing the variable from the first script. It was something like this:
n=1
for i in $vct; do
    mv "Aday $i.png" /home/eurydice/"Bulunur Bir Şeyler"/Dosyamsılar/Kazananlar/"Bahar $n.png"
    mv "Aday $i.jpg" /home/eurydice/"Bulunur Bir Şeyler"/Dosyamsılar/Kazananlar/"Bahar $n.jpg"
    n=$((n+1))
done
That worked just as I intended the first time I ran it. However, in my second test run the script overwrote the files that already existed. For example, in the first run I had 5 files named "Bahar 1,2,3,4,5", and the second time I chose 3 files to add. I wanted their names to be "Bahar 6,7,8", but instead my script made them the new 1, 2 and 3. I tried many solutions, and when I couldn't fix it I just assigned random numbers to them.
Is there a way to make this script work as I intended?
This command finds the biggest trailing number among the file names in the current directory. If no numbered file is found, biggest_number is set to 0.
biggest_number=$(ls -1 | sed -n 's/^[^0-9]*\([0-9]\+\)\(\.[a-zA-Z]\+\)\?$/\1/p' | sort -r -g | head -n 1)
[[ ! -z "$biggest_number" ]] || biggest_number=0
The regex in sed command assumes that there is no digit in filenames before the trailing number intended for increment.
As soon as you have found the biggest number, you can use it to start your loop to prevent overwrites.
n=$((biggest_number+1))
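Putting it together with your loop, a sketch (untested; it points the check at Kazananlar, where the numbered "Bahar" files live, rather than at the current directory):

dest=/home/eurydice/"Bulunur Bir Şeyler"/Dosyamsılar/Kazananlar
biggest_number=$(ls -1 "$dest" | sed -n 's/^[^0-9]*\([0-9]\+\)\(\.[a-zA-Z]\+\)\?$/\1/p' | sort -r -g | head -n 1)
[[ -n "$biggest_number" ]] || biggest_number=0
n=$((biggest_number+1))
for i in $vct; do
    mv "Aday $i.png" "$dest/Bahar $n.png"
    mv "Aday $i.jpg" "$dest/Bahar $n.jpg"
    n=$((n+1))
done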

bash - opening an image only when a corresponding text file exists

I came across a problem in Bash when trying to open only those images that have information stored about them in .txt files. I am trying to sort a number of images by size or height and display each one in the sorted order, but if a .jpg exists in the folder without a .txt file of the same name, it should not be processed.
I have the sorting piece done, and am trying to figure out how to open only the images that have a .jpg extension WITH a matching .txt file.
I figured a solution would involve putting every .jpg's name (without extension) into a list, then processing the list and running something like:
if [ -f "$filename.txt" ]; then ...
but I came across the problem of iterating through without a for loop, or else all the pictures would open multiple times. My attempt was:
for i in *jpg; do
    y=$y ${i%.jpg}
done
if[ -f $y.txt ] then
    (sorting parts)
This only looked at the last filename in y, as expected, but I am trying to figure out a way to look at each filename separately and check whether the corresponding text file exists, in order to include the image in the sorting.
Thanks so much for your help!
Collecting a list of file names in a single variable is an antipattern. You want to collect them in an array instead.
a=()
for f in *.jpg; do
    if [ ! -e "${f%.jpg}.txt" ]; then
        continue    # no matching .txt file: skip this image
    fi
    a+=("$f")
done
# now do things with "${a[@]}"
Frequently, you don't really need to collect the files in an array -- just do everything you were doing inside the for loop to each individual file as you traverse the files.
(And actually y=$y ${i%.jpg} doesn't append to y -- it sets y to itself for the duration of attempting to execute, as a command, a file named like i without the .jpg extension, which would fail in the vast majority of cases.)
I would do the file check first such that find just reports files that have a corresponding text file. The following snippet will just display jpg files that have a corresponding txt file:
find . -maxdepth 1 -name "*.jpg" -exec /bin/bash -c '[ -e "${0%.*}.txt" ] && echo "$0";' {} \;
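For completeness, a sketch that combines this check with the array approach from the first answer (assumes bash and GNU find; -print0 with read -d '' keeps names with spaces intact):

a=()
while IFS= read -r -d '' f; do
    [ -e "${f%.jpg}.txt" ] && a+=("$f")   # keep only jpgs with a matching txt
done < <(find . -maxdepth 1 -name "*.jpg" -print0)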

Linux rename files based on input file

I need to rename hundreds of files in Linux from the command line, changing the unique identifier in each name. For the sake of example, I have a file containing:
old_name1 new_name1
old_name2 new_name2
and I need to change the names from the old to the new IDs. The file names contain the IDs but have extra characters as well. My plan is therefore to end up with:
abcd_old_name1_1234.txt ==> abcd_new_name1_1234.txt
abcd_old_name2_1234.txt ==> abcd_new_name2_1234.txt
Use of rename is obviously fairly helpful here, but I am struggling to work out how to iterate through the file of desired name changes and pass each pair as input to rename.
Edit: To clarify, I am looking to make hundreds of different rename commands, the different changes that need to be made are listed in a text file.
Apologies if this is already answered; I've had a good hunt but can't find a similar case.
rename 's/^(abcd_)old_name(\d+_1234\.txt)$/$1new_name$2/' *.txt
should work, depending on whether you have that package installed. Also have a look at qmv (from renameutils).
If you want more options, use e.g.
shopt -s globstar
rename 's/^(abcd_)old_name(\d+_1234\.txt)$/$1new_name$2/' folder/**/*.txt
(finds all txt files in subdirectories of folder), or
find folder -type f -iname '*.txt' -exec rename 's/^(abcd_)old_name(\d+_1234\.txt)$/$1new_name$2/' {} \+
To do the same by reading the old/new pairs from your mapping file:
while read -r old_name new_name; do
    rename "s/$old_name/$new_name/" *$old_name*.txt
done < file_with_names
In this way, you read the IDs from file_with_names and rename the files replacing $old_name with $new_name leaving the rest of the filename untouched.
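With the two-line mapping file from the question, the loop effectively runs:

rename "s/old_name1/new_name1/" *old_name1*.txt
rename "s/old_name2/new_name2/" *old_name2*.txt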
I was about to write a php function to do this for myself, but I came upon a faster method:
Run ls and copy and paste the directory contents from the terminal window into Excel. You may need an online line-break removal or addition tool. Assuming your file names are in column A, use the following formula in another column:
="mv "&A1&" prefix"&A1&"suffix"
or
="mv "&A1&" "&substitute(A1,"jpeg","jpg")&"suffix"
or
="mv olddirectory/"&A1&" newdirectory/"&A1
Back in Linux, create a new file with nano rename.txt and paste in the values from Excel. They should look something like this:
mv oldname1.jpg newname1.jpg
mv oldname2.jpg newname2.jpg
Then close out of nano and run bash rename.txt. Bash just runs every line in the file as if you had typed it.
and you are done! This method gives verbose output on errors, which is handy.

BASH Linux Run code for all file extensions ".fal"

I am not a Linux user, so bash and shell are new to me.
I need a script that runs 2 other scripts for every ".fal" file located in the folder (and preferably its sub-folders) that I run it in.
E.g.:
dos2unixfortxtandfal   <-- this one already runs for all files in the folder
and
for all ".fal" files in this folder,
do
    eine_fal_macher (here the .fal files, one by one) Versuch.txt
done
eine_fal_macher --> this is the script that currently runs only once
(here the .fal files, one by one) --> this is input file 1
Versuch.txt --> this is input file 2 (the same for all, from the same folder)
In the end I want to do the following in the terminal:
frdc09927:\Frdc09927\z183464\DOE_Wellen\21a>
frdc09927:\Frdc09927\z183464\DOE_Wellen\21a>script.bash --> Enter
frdc09927:\Frdc09927\z183464\DOE_Wellen\21b>script.bash --> Enter
frdc09927:\Frdc09927\z183464\DOE_Wellen\21c>script.bash --> Enter
find . -name \*.fal -exec eine_fal_macher {} Versuch.txt \;
This runs for all *.fal files in the current directory and its subdirectories. Use -maxdepth 1 as first option to limit it to the current directory only, or give a different working directory than . to have find search somewhere else. {} is replaced with the "found" filename, honoring things like spaces in the filename automatically.
I could start explaining find at this point, but you should really rather have a look at man find instead. This tool is extremely powerful, and can reduce rather complex problems (like acting on the age of files, their owners etc.) to a one-liner.
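If the dos2unixfortxtandfal pass should also happen once per file, -exec clauses can be chained; each clause acts as a test, so the second command only runs when the first one succeeds. A sketch, assuming (the question leaves this open) that dos2unixfortxtandfal accepts a single file argument:

find . -name '*.fal' -exec dos2unixfortxtandfal {} \; -exec eine_fal_macher {} Versuch.txt \;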
Try something like this:
for i in *.fal; do command1 "$i" && command2 "$i"; done
command2 is only executed for a specific file if command1 does not return an errorcode
I'm not sure I fully understand the requirement, but here goes (trying to follow your pseudo code):
for FILE in $(find . -name "*.fal")
do
    eine_fal_macher "${FILE}" Versuch.txt
done
