Linux: Update directory structure for millions of images which are already in prefix-based folders

This is basically a follow-up to Linux: Move 1 million files into prefix-based created Folders
The original question:
I want to write a shell command to rename all of those images into the
following format:
original: filename.jpg new: /f/i/l/filename.jpg
Now, I want to take all of those files and add an additional level to the directory structure, e.g:
original: /f/i/l/filename.jpg new: /f/i/l/e/filename.jpg
Is this possible to do from the command line or with bash?

One way to do it is to simply loop over all the directories you already have, and in each bottom-level subdirectory create the new subdirectory and move the files:
for d in ?/?/?/; do (
  cd "$d" &&
  printf '%.4s\0' * | uniq -z |
    xargs -0 bash -c 'for prefix do
      s=${prefix:3:1}
      mkdir -p "$s" && mv "$prefix"* "$s"
    done' _
) done
That probably needs a bit of explanation.
The glob ?/?/?/ matches all directory paths made up of three single-character subdirectories. Because it ends with a /, everything it matches is a directory so there is no need to test.
( cd "$d" && ...; )
executes ... after cd'ing to the appropriate subdirectory. Putting that block inside ( ) causes it to be executed in a subshell, which means the scope of the cd will be restricted to the parenthesized block. That's easier and safer than putting cd .. at the end.
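The effect of the subshell is easy to see on its own (the directories here are just for illustration):
( cd /tmp && pwd )   # prints /tmp
pwd                  # still prints the directory you started in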
We then collect the new subdirectory names first, by finding the unique initial strings of the filenames:
printf '%.4s\0' * | uniq -z | xargs -0 ...
That extracts the first four letters of each filename, nul-terminating each one, then passes this list to uniq to eliminate duplicates, providing the -z option because the input is nul-terminated, and then passes the list of unique prefixes to xargs, again using -0 to indicate that the list is nul-terminated. xargs executes a command with a list of arguments, issuing the command several times only if necessary to avoid exceeding the command-line limit. (We probably could have avoided the use of xargs but it doesn't cost that much and it's a lot safer.)
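Note that uniq only collapses adjacent duplicates, which is enough here because the glob expands in sorted order. As a hypothetical illustration, in a directory containing filea.jpg, fileb.jpg and photo.jpg:
printf '%.4s\0' * | uniq -z | xargs -0 printf '%s\n'
# file
# phot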
The command called with xargs is bash itself; we use the -c option to pass it a command to be executed. That command iterates over its arguments using the for arg syntax (with no in list, the loop runs over the positional parameters). Each argument is a unique prefix; we extract the fourth character of the prefix to name the new subdirectory and then mv all files whose names start with the prefix into the newly created directory.
The _ at the end of the xargs invocation will be passed to bash (as with all the rest of the arguments); bash -c uses the first argument following the command as the $0 argument to the script, which is not one of the positional parameters iterated over by the for loop. So putting the _ there means that the argument list constructed by xargs will be precisely $1, $2, ... in the execution of the bash command.
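A minimal, self-contained illustration of that $0 convention (the arguments are made up):
bash -c 'echo "script name (\$0): $0"; for arg do echo "positional: $arg"; done' _ one two three
# script name ($0): _
# positional: one
# positional: two
# positional: three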

Okay, so I've created a very crude solution:
#!/bin/bash
for file1 in *; do
  if [[ -d "$file1" ]]; then
    cd "$file1"
    for file2 in *; do
      if [[ -d "$file2" ]]; then
        cd "$file2"
        for file3 in *; do
          if [[ -d "$file3" ]]; then
            cd "$file3"
            for file4 in *; do
              if [[ -f "$file4" ]]; then
                echo "mkdir -p ${file4:3:1}/; mv $file4 ${file4:3:1}/;"
                mkdir -p "${file4:3:1}/" && mv "$file4" "${file4:3:1}/"
              fi
            done
            cd ..
          fi
        done
        cd ..
      fi
    done
    cd ..
  fi
done
I should warn that this is untested, as my actual structure varies slightly, but I wanted to keep the question/answer consistent with the original question for clarity.
That being said, I'm sure a much more elegant solution exists than this one.
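For what it's worth, here is a more compact sketch of the same idea, equally untested, assuming the ?/?/? layout from the question and filenames at least four characters long:
for f in ?/?/?/*; do
  [[ -f $f ]] || continue   # skip anything that is not a regular file
  name=${f##*/}             # filename without the leading directories
  sub=${name:3:1}           # the fourth character becomes the new subdirectory
  mkdir -p "${f%/*}/$sub" && mv "$f" "${f%/*}/$sub/"
done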

Related

Concatenate (using bash) all file names in subdirectories with option

I have a directory work_dir with some subdirectories inside, and inside those subdirectories there are zip archives. I can list all the zip archives in the terminal:
find . -name *.zip
The output:
./folder2/sub/dir/test2.zip
./folder3/test3.zip
./folder1/sub/dir/new/test1.zip
Now I want to concatenate all these file names into a single line, each preceded by some option. For example I want a single line:
my_command -f ./folder2/sub/dir/test2.zip -f ./folder3/test3.zip -f ./folder1/sub/dir/new/test1.zip -u user1 -p pswd1
In this example:
my_command is some command
-f is the option
-u user1 is another option with a value
-p pswd1 is another option with a value
Can you please help me: how can I do this in Linux bash?
One way is (updated per @M. Nejat Aydin's comments):
find . -name "*.zip" -print0 | xargs -0 -n1 printf -- '-f\0%s\0' | xargs -0 -n100000 my_command -u user1 -p pswd1
Note that the -n100000 parameter forces all of the output of the previous xargs onto a single invocation of my_command, on the assumption that the number of matching files is less than 100000.
I used null terminated versions (notice: -0 flag, -print0) because file names can contain spaces.
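To see what the first xargs stage hands to the second, you can replace the final command with something harmless; with the example paths above (order depends on how find walks the tree) it produces alternating options and file names:
find . -name "*.zip" -print0 | xargs -0 -n1 printf -- '-f\0%s\0' | xargs -0 printf '%s\n'
# -f
# ./folder2/sub/dir/test2.zip
# -f
# ./folder3/test3.zip
# -f
# ./folder1/sub/dir/new/test1.zip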
This is a bash script that should do what you wanted.
#!/usr/bin/env bash

user=user1
passwd=pswd1

while IFS= read -rd '' files; do
  args+=(-f "$files")
done < <(find . -name '*.zip' -print0)

args=("${args[@]}" -u "$user" -p "$passwd")

##: Just for the human eye to see the output,
##: change this line of code according to the comment below.
printf 'mycommand %s\n' "${args[*]}"
The output should be on one line, like what you wanted, but do change the last line from
printf 'mycommand %s\n' "${args[*]}"
into
mycommand "${args[@]}"
if you actually want to execute mycommand with the arguments.
Change the value of user and passwd too.
A while + read loop was used, with IFS= so that leading and trailing whitespace in file names is preserved.
See How can I read a file (data stream, variable) line-by-line (and/or field-by-field)?
Why the last line should be changed:
See Arguments
Shell quoting is a basic but common source of mistakes when dealing with spaces in file/path names.
See How can I find and safely handle file names containing newlines, spaces, or both?
Also see the find command/utility.
The construct "${args[@]}" expands the array one element per word (see the short demo below).
See Array1 Array2 Array3
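As a quick hypothetical demo of the practical difference between the two expansions:
args=(-f "a file.zip" -u user1)
printf '<%s>\n' "${args[*]}"   # one word: <-f a file.zip -u user1>
printf '<%s>\n' "${args[@]}"   # one word per element, printed per line: <-f> <a file.zip> <-u> <user1>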
You can do this by making a bash script.
Make a new file called whatever.sh
Run chmod +x ./whatever.sh so it becomes executable from the terminal
Add the bash scripting as shown below:
#!/bin/bash

# Get all the zip files from your FolderName
files="`find ./FolderName -name '*.zip'`"

# Loop through the files and build your args
arg=""
for file in $files; do
  arg="$arg -f $file"
done

# Run your command
mycommand $arg -u user1 -p pswd1

Shell - iterate over the content of a file but do something only for the first x lines

So guys,
I need your help identifying the fastest and most fault-tolerant solution to my problem.
I have a shell script which executes some functions, based on a txt file, in which I have a list of files.
The list can contain from 1 file to X files.
What I would like to do is iterate over the content of the file and execute my scripts for only 4 items out of the file.
Once the functions have been executed for these 4 files, go over to the next 4 .... and keep on doing so until all the files from the list have been "processed".
My code so far is as follows.
#!/bin/bash
number_of_files_in_folder=$(cat list.txt | wc -l)
max_number_of_files_to_process=4
Translated_files=/home/german_translated_files/

while IFS= read -r files
do
  while [[ $number_of_files_in_folder -gt 0 ]]; do
    i=1
    while [[ $i -le $max_number_of_files_to_process ]]; do
      my_first_function "$files" & # I execute my translation function for each file, as it can only perform 1 file per execution
      find /home/german_translator/ -name '*.logs' -exec mv {} $Translated_files \; # As there will be several files generated, I have them copied to another folder
      sed -i "/$files/d" list.txt # We remove the processed file from within our list.txt file.
      my_second_function # Without parameters as it will process all the files copied at step 2.
    done
    # here, I want to have all the files processed and don't stop after the first iteration
  done
done < list.txt
Unfortunately, as I am not quite good at shell scripting, I do not know how to structure it so that it won't waste any resources and mostly, to make sure that it "processes" everything from that file.
Do you have any advice on how to achieve what I am trying to achieve?
only 4 items out of the file. Once the functions have been executed for these 4 files, go over to the next 4
Seems to be quite easy with xargs.
your_function() {
  echo "Do something with $1 $2 $3 $4"
}
export -f your_function

xargs -d '\n' -n 4 bash -c 'your_function "$@"' _ < list.txt
xargs -d '\n' - split the input on newlines, taking each line as one argument
-n 4 - take four arguments at a time
bash .... - run this command with 4 arguments
_ - the syntax is bash -c <script> $0 $1 $2 etc..., see man bash.
"$@" - forward the arguments
export -f your_function - export your function to the environment so the child bash can pick it up.
I execute my translation function for each file
So you execute your translation function for each file, not once per 4 files. If the "translation function" really handles one file at a time with no inter-file state, consider instead running 4 processes in parallel on the same code with just xargs -P 4.
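If that applies, a sketch of the -P variant, reusing the function name from the question and assuming it is exported:
export -f my_first_function
# run up to 4 translations at a time, one file per invocation
xargs -d '\n' -n 1 -P 4 bash -c 'my_first_function "$1"' _ < list.txt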
If you have GNU Parallel it looks something like this:
doit() {
  my_first_function "$1"
  my_first_function "$2"
  my_first_function "$3"
  my_first_function "$4"
  my_second_function "$1" "$2" "$3" "$4"
}
export -f doit
cat list.txt | parallel -n4 doit

How do I write a one-liner cd command for the following case?

I have 2 directories
testing_dir
testing_dir_win
So I need to cd to testing_dir. But here is the case
the directories can be
testing_dir or testing_dir-2.1.0
testing_dir_win or testing_dir_win-1.3.0
and my script should only take testing_dir or testing_dir-2.1.0 (based on which is available)
I have the long way of writing it:
str=`ls folder_name | grep ^testing_dir`
arr=(${str//" "/ })
ret=""
for i in "${arr[@]}"
do
  if [[ $i != *"testing_dir_win"* ]] ; then
    ret=$i
  fi
done
but is there a one-liner for this problem? Something like cd testing_dir[\-]? (This doesn't work, by the way.)
If your script contains
shopt -s extglob
you can use:
cd testing_dir?(-[[:digit:]]*) || exit
...if you have a guarantee that only one match will exist.
Without that guarantee, you can directly set
arr=( testing_dir?(-[[:digit:]]*) )
cd "${arr[0]}" || exit
Use a command with a grep filter:
cd `ls | grep -w testing_dir`
This command will match the testing_dir directory without worrying about the version.
P.S. In the case of many versions it will try to go into the earliest one, so add head -1 or tail -1 according to your use case.
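Putting that suggestion together, and quoting the substitution so the result survives odd characters (same assumption as above, that you are already in the parent directory):
cd "$(ls | grep -w testing_dir | head -1)" || exit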

Deleting all files except ones mentioned in config file

Situation:
I need a bash script that deletes all files in the current folder, except all the files mentioned in a file called ".rmignore". This file may contain addresses relative to the current folder, that might also contain asterisks(*). For example:
1.php
2/1.php
1/*.php
What I've tried:
I tried to use GLOBIGNORE but that didn't work well.
I also tried to use find with grep, like follows:
find . | grep -Fxv $(echo $(cat .rmignore) | tr ' ' "\n")
It is considered bad practice to pipe the output of find to another command. You can use -exec or -execdir followed by the command, with '{}' as a placeholder for the file and ';' to indicate the end of your command. You can also use '+' to have find batch many files into one command invocation, IIRC.
In your case, you want to list all the content of a directory and remove files one by one.
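For reference, the two -exec forms mentioned above look like this (echo is used so nothing is deleted):
find . -type f -exec echo "would process" {} \;   # one command per file
find . -type f -exec echo "would process" {} +    # as many files per command as fit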
#!/usr/bin/env bash
set -o nounset
set -o errexit
shopt -s nullglob # allows glob to expand to nothing if no match
shopt -s globstar # process current directory recursively

my:rm_all() {
  local ignore_file=".rmignore"
  local ignore_array=()

  while read -r glob; do   # generate the list of names to keep
    ignore_array+=(${glob})
  done < "${ignore_file}"
  echo "${ignore_array[@]}"

  for file in **; do   # iterate over all the content of the current directory
    if [ -f "${file}" ]; then   # exists and is a regular file
      local do_rmfile=true
      # keep the file only if it matches an ignore entry
      for ignore in "${ignore_array[@]}"; do   # iterate over the names to keep
        [[ "${file}" == "${ignore}" ]] && do_rmfile=false   #rm ${file};
      done
      ${do_rmfile} && echo "Removing ${file}"
    fi
  done
}
my:rm_all
If we assume that none of the files in .rmignore contain newlines in their name, the following might suffice:
# Gather our exclusions...
mapfile -t excl < .rmignore

# Reverse the array (put data in indexes)
declare -A arr=()
for file in "${excl[@]}"; do arr[$file]=1; done

# Walk through files, deleting anything that's not in the associative array.
shopt -s globstar
for file in **; do
  [ -n "${arr[$file]}" ] && continue
  echo rm -fv "$file"
done
Note: untested. :-) Also, associative arrays were introduced with Bash 4.
An alternate method might be to populate an array with the whole file list, then remove the exclusions. This might be impractical if you're dealing with hundreds of thousands of files.
shopt -s globstar
declare -A filelist=()

# Build a list of all files...
for file in **; do filelist[$file]=1; done

# Remove files to be ignored.
while read -r file; do unset "filelist[$file]"; done < .rmignore

# And .. delete.
echo rm -v "${!filelist[@]}"
Also untested.
Warning: rm at your own risk. May contain nuts. Keep backups.
I note that neither of these solutions will handle wildcards in your .rmignore file. For that, you might need some extra processing...
shopt -s globstar
declare -A filelist=()

# Build a list...
for file in **; do filelist[$file]=1; done

# Remove PATTERNS...
while read -r glob; do
  for file in $glob; do
    unset "filelist[$file]"
  done
done < .rmignore

# And remove whatever's left.
echo rm -v "${!filelist[@]}"
And .. you guessed it. Untested. This depends on $glob expanding as a glob.
Lastly, if you want a heavier-weight solution, you can use find and grep:
find . -type f -not -exec sh -c 'printf "%s\n" "${1#./}" | grep -q -f .rmignore' _ {} \; -delete
This runs a grep for EACH file being considered, deleting the file unless its name (with the leading ./ stripped) matches a pattern in .rmignore. And it's not a bash solution; it relies only on find, sh and grep, which are pretty universal.
Note that ALL of these solutions are at risk of errors if you have files that contain newlines.
This line does the job perfectly:
find . -type f | grep -vFf .rmignore
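To turn that listing into actual deletions, a sketch assuming GNU xargs and no newlines in file names (drop the echo once the output looks right):
find . -type f | grep -vFf .rmignore | xargs -d '\n' -r echo rm -v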
If you have rsync, you might be able to copy an empty directory to the target one, with suitable rsync ignore files. Try it first with -n, to see what it will attempt, before running it for real!
This is another bash solution that seems to work ok in my tests:
while read -r line; do
  exclude+=$(find . -type f -path "./$line")$'\n'
done < .rmignore

echo "ignored files:"
printf '%s\n' "$exclude"

echo "files to be deleted"
echo rm $(LC_ALL=C sort <(find . -type f) <(printf '%s\n' "$exclude") | uniq -u)  # intentionally unquoted to remove newlines
Alternatively, you may want to look at the simplest format:
rm $(ls -1 | grep -v .rmignore)

Bash: Move files to specific folder if name contains one of a list of strings

I have a script that queries the Twitter API for several queries, and then writes the raw data to a file with the query in the name, plus a timestamp. I'd like to have a script that, given the list of query strings (regexes?), goes over all files in a folder and, if one of the query strings is a substring of a file's name, moves it to a specific folder. Right now I just have a script with a few dozen mv commands, but I'd like a simpler and more maintainable version. Here's an example of what I'm doing now:
mv /home/nick/TwitterSearchToDatabase/queries_for_amita/*femin* /home/nick/TwitterSearchToDatabase/queries_for_amita/feminism
mv /home/nick/TwitterSearchToDatabase/queries_for_amita/*patriarchy* /home/nick/TwitterSearchToDatabase/queries_for_amita/feminism
mv /home/nick/TwitterSearchToDatabase/queries_for_amita/*yesallwomen* /home/nick/TwitterSearchToDatabase/queries_for_amita/feminism
mv /home/nick/TwitterSearchToDatabase/queries_for_amita/*womanpower* /home/nick/TwitterSearchToDatabase/queries_for_amita/feminism
I would use a for loop:
for i in femin patriarchy yesallwomen womanpower; do
  mv /home/nick/TwitterSearchToDatabase/queries_for_amita/*$i* /home/nick/TwitterSearchToDatabase/queries_for_amita/feminism
done
That way the list is in the first line so it is easy to amend.
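A slightly hardened variant of the same loop; the directory variable and the nullglob guard are additions, not part of the original answer:
dir=/home/nick/TwitterSearchToDatabase/queries_for_amita
shopt -s nullglob   # a keyword with no matching files expands to nothing instead of a literal pattern
for i in femin patriarchy yesallwomen womanpower; do
  for f in "$dir"/*"$i"*; do
    [[ -f $f ]] && mv "$f" "$dir"/feminism/   # the -f test also skips the feminism directory itself
  done
done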
I would separate the data (the words that send a file to feminism) from the code.
When you have more keyword groups (feminism and so in the test below), you can make a file per group with its keywords and check those keyword files against the files you are considering moving.
With ${from} as the directory the files come from, ${to} as the directory you want them in, and ${keyfiledir} holding the keyword files, you get something like
for keyfile in ${keyfiledir}/*; do
  key="${keyfile##*/}"
  find $from -type f | sed 's#.*/##' | while read -r file; do
    echo "${file}" | grep -q -f "${keyfiledir}"/"${key}" && mv "${from}"/"${file}" "${to}"/"${key}"
  done
done
How does that work? I tested the solution above with the following script.
from=fromdir
to=todir
keyfiledir=keyfiledir

rm -rf ${from} ${to} ${keyfiledir}
mkdir ${from} ${to} ${keyfiledir}
mkdir ${to}/feminism ${to}/so
touch ${from}/yesallwomen ${from}/women ${from}/some_femin ${from}/"help move"
cat <<@ > ${keyfiledir}/feminism
femin
patriarchy
yesallwomen
womanpower
@
touch ${from}/yesallwomen ${from}/women ${from}/some_femin
cat <<@ > ${keyfiledir}/so
stack
exchange
help
@
test ! -d "${from}" && echo " Wrong dir ${from}" && exit 1
test ! -d "${to}" && echo " Wrong dir ${to}" && exit 1
test ! -d "${keyfiledir}" && echo " Wrong dir ${keyfiledir}" && exit 1

for keyfile in ${keyfiledir}/*; do
  key="${keyfile##*/}"
  find $from -type f | sed 's#.*/##' | while read -r file; do
    echo "${file}" | grep -q -f "${keyfiledir}"/"${key}" && mv "${from}"/"${file}" "${to}"/"${key}"
  done
done

echo "Not moved"
ls ${from}
echo "Moved"
ls -R ${to}
A simple combination of mv and egrep should suffice. egrep can take a pattern list from a file (and then you get to use full regexp syntax, not just glob syntax.) Make sure to exclude the name of the target folder.
cd /home/nick/TwitterSearchToDatabase/queries_for_amita
mv $(ls | egrep -f patterns.txt | grep -v '^feminism$') feminism
