Handle whitespace and special characters in a shell script (using gio) - linux

Hi,
I am trying to handle white spaces and special characters like "&" in a shell script which is supposed to set custom directory icons using gio on Ubuntu 18.04.
When directory names consist of a single word, e.g. MyFolder, the following script works just fine:
for dir in $(find "$PWD" -type d); do
    icon="/.FolderIcon.png"
    iconLocation="$dir$icon"
    if [ -a "$iconLocation" ]; then
        front="file://"
        gio set "$dir" metadata::custom-icon "$front$iconLocation"
    fi
done
However, when a directory is named e.g. "A & B", the above script does not change the icon of that directory.
So my question is: is there a way to handle directories named like "A & B" in my script?

First, for var in $(cmd) is generally an antipattern.
In most cases, what you probably want is something like what's suggested in https://mywiki.wooledge.org/BashFAQ/020 -
while IFS= read -r -d '' dir; do
# stuff ...
done < <(find "$PWD" -type d -print0)
But for this particular example, you might just use shopt -s globstar.
I made a directory with an A & B subdirectory and ran this test loop:
$: shopt -s globstar
$: for d in **/; do touch "$d.FolderIcon.png"; if [[ -e "$d.FolderIcon.png" ]]; then ls -l "$d.FolderIcon.png"; fi; done
-rw-r--r-- 1 paul 1234567 0 Apr 20 09:25 'A & B/.FolderIcon.png'
**/ has some shortcomings - it won't find hidden directories, for example, or anything beneath them. It is pretty metacharacter-safe as long as you quote your variables, though.
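If hidden directories also need handling, dotglob can be combined with globstar; a small sketch (untested, and assuming it is run from the directory whose subfolders should get icons):
shopt -s globstar dotglob    # ** recurses; dotglob also matches names starting with .
for d in **/; do
    # "$d" already ends in a slash, so this expands to "<dir>/.FolderIcon.png"
    if [[ -e "${d}.FolderIcon.png" ]]; then
        gio set "$d" metadata::custom-icon "file://$PWD/${d}.FolderIcon.png"
    fi
done
shopt -u dotglob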

Thanks to Paul Hodges' answer, the following solution finally worked for me:
shopt -s globstar
location="/path/to/location/you/want/to/modify"
prefix="file://"
for d in **/; do
    if [[ -e "$d.FolderIcon.png" ]]; then
        gio set "$d" metadata::custom-icon "$prefix$location/$d.FolderIcon.png"
    fi
done

Related

Linux shell script: Dynamically finding folders in the script directory and add them to an array [duplicate]

I want to write a shell script that shows a list of directories to the user and then lets the user select one of the directories by an index number based on how many directories there are.
I'm thinking this is some kind of array operation, but I'm not sure how to do this in a shell script.
example:
> whichdir
There are 3 dirs in the current path
1 dir1
2 dir2
3 dir3
which dir do you want?
> 3
you selected dir3!
$ ls -a
./ ../ .foo/ bar/ baz qux*
$ shopt -s dotglob
$ shopt -s nullglob
$ array=(*/)
$ for dir in "${array[@]}"; do echo "$dir"; done
.foo/
bar/
$ for dir in */; do echo "$dir"; done
.foo/
bar/
$ PS3="which dir do you want? "
$ echo "There are ${#array[#]} dirs in the current path"; \
select dir in "${array[#]}"; do echo "you selected ${dir}"'!'; break; done
There are 2 dirs in the current path
1) .foo/
2) bar/
which dir do you want? 2
you selected bar/!
Array syntax
Assuming you have the directories stored in an array:
dirs=(dir1 dir2 dir3)
You can get the length of the array thusly:
echo "There are ${#dirs[#]} dirs in the current path"
You can loop through it like so:
let i=1
for dir in "${dirs[@]}"; do
echo "$((i++)) $dir"
done
And assuming you've gotten the user's answer, you can index it as follows. Remember that arrays are 0-based so the 3rd entry is index 2.
answer=2
echo "you selected ${dirs[$answer]}!"
Find
How do you get the file names into an array, anyways? It's a bit tricky. If you have find that might be the best way:
readarray -t dirs < <(find . -maxdepth 1 -type d -printf '%P\n')
The -maxdepth 1 stops find from looking through subdirectories, -type d tells it to find directories and skip files, and -printf '%P\n' tells it to print the directory names without the leading ./ it normally likes to print.
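If directory names might contain newlines, a NUL-delimited variant is safer. A sketch (assumes GNU find and Bash 4.4+ for readarray -d ''; -mindepth 1 is added so the . entry doesn't show up as an empty element):
# %P\0 prints each name without the leading ./, terminated by a NUL byte
readarray -d '' -t dirs < <(find . -mindepth 1 -maxdepth 1 -type d -printf '%P\0')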
#! /bin/bash
declare -a dirs
i=1
for d in */
do
dirs[i++]="${d%/}"
done
echo "There are ${#dirs[#]} dirs in the current path"
for((i=1;i<=${#dirs[#]};i++))
do
echo $i "${dirs[i]}"
done
echo "which dir do you want?"
echo -n "> "
read i
echo "you selected ${dirs[$i]}"
Update: my answer is wrong
Leaving it here to address a common misunderstanding; everything below the line is erroneous.
To put the directories in an array you can do...
array=( $( ls -1p | grep / | sed 's/^\(.*\)/"\1"/') )
This will capture the dir names, including those with spaces.
Extracting from comments:
literal quotes don't have any effect on string-splitting, so array=( $(echo '"hello world" "goodbye world"') ) is an array with four elements, not two
@Charles Duffy
Charles also supplied the following link Bash FAQ #50 which is an extended discussion on this issue.
I should also draw attention to the link posted by @Dennis Williamson - why I shouldn't have used ls

Working with hidden files and files/folders that have spaces in the name

I have made a script that takes all of the files in the current directory, checks whether each is a regular file or a folder, and sets permissions on it.
My problem is with hidden files/folders and files/folders that have spaces in the name.
Here is my script:
#!/bin/bash
FILES=$(pwd)/*
for f in $FILES
do
if [ -f $f ]; then chmod u+x $f; fi
if [ -d $f ]; then chmod u=w,g+r,o-rwx $f; fi
done
Here is an example of an error I get from the testing computer:
'test/.bwhajtbzmu xswxcgqsvz' has incorrect permissions: expected 250, got 414.
The other errors are basically the same.
I am not actually sure what the problem is here: whether it is the fact that it is a hidden file or that it has a space in the name. I guess both things are the problem.
How can I modify the script so that it works with hidden files and files that have spaces in the name?
Thank you
PS. Please don't question the usefulness of the script, it is a school homework.
Handling whitespace in file names is tricky. The first rule is: put double quotes around every use of a variable. Otherwise the shell interprets the spaces as separators. Unfortunately you cannot simply hold a list in a plain variable; you have to use an array variable for this.
#!/bin/bash
FILES=( "$(pwd)"/* )
for f in "${FILES[@]}"
do
if [ -f "$f" ]; then chmod u+x "$f"; fi
if [ -d "$f" ]; then chmod u=w,g+r,o-rwx "$f"; fi
done
For handling hidden files and folders (the ones whose names start with a dot), it is best to set the shell option dotglob, which makes * also match them (by default it doesn't). Using .* is not good because it also matches . and .., which you normally don't want, and a pattern like .??* will miss names such as .a, which you normally do want:
shopt -s dotglob
FILES=( "$(pwd)"/* )
shopt -u dotglob
I would not recommend leaving it on, so I switch it off after using it.

Deleting all files except ones mentioned in config file

Situation:
I need a bash script that deletes all files in the current folder, except the files mentioned in a file called ".rmignore". This file may contain paths relative to the current folder, which may also contain asterisks (*). For example:
1.php
2/1.php
1/*.php
What I've tried:
I tried to use GLOBIGNORE but that didn't work well.
I also tried to use find with grep, like follows:
find . | grep -Fxv $(echo $(cat .rmignore) | tr ' ' "\n")
It is considered bad practice to pipe the output of find to another command. You can use -exec or -execdir followed by the command, with '{}' as a placeholder for the file and ';' to mark the end of the command. You can also use '+' instead of ';' to batch many files into a single command invocation.
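For illustration only (a generic sketch; the '*.bak' pattern is just an example, not part of this question), the two -exec forms look like this:
# one rm invocation per file found
find . -type f -name '*.bak' -exec rm -- '{}' ';'
# batch as many files as possible into each rm invocation
find . -type f -name '*.bak' -exec rm -- '{}' +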
In your case, you want to list all the content of a directory and remove the files one by one.
#!/usr/bin/env bash
set -o nounset
set -o errexit
shopt -s nullglob # allows glob to expand to nothing if no match
shopt -s globstar # process recursively current directory
my:rm_all() {
local ignore_file=".rmignore"
local ignore_array=()
while read -r glob; # Generate files list
do
ignore_array+=(${glob});
done < "${ignore_file}"
echo "${ignore_array[#]}"
for file in **; # iterate over all the content of the current directory
do
if [ -f "${file}" ]; # file exist and is file
then
local do_rmfile=true;
# Remove only if matches regex
for ignore in "${ignore_array[@]}"; # Iterate over files to keep
do
[[ "${file}" == "${ignore}" ]] && do_rmfile=false; #rm ${file};
done
${do_rmfile} && echo "Removing ${file}"
fi
done
}
my:rm_all;
If we assume that none of the files in .rmignore contain newlines in their name, the following might suffice:
# Gather our exclusions...
mapfile -t excl < .rmignore
# Reverse the array (put data in indexes)
declare -A arr=()
for file in "${excl[@]}"; do arr[$file]=1; done
# Walk through files, deleting anything that's not in the associative array.
shopt -s globstar
for file in **; do
[ -n "${arr[$file]}" ] && continue
echo rm -fv "$file"
done
Note: untested. :-) Also, associative arrays were introduced with Bash 4.
An alternate method might be to populate an array with the whole file list, then remove the exclusions. This might be impractical if you're dealing with hundreds of thousands of files.
shopt -s globstar
declare -A filelist=()
# Build a list of all files...
for file in **; do filelist[$file]=1; done
# Remove files to be ignored.
while read -r file; do unset "filelist[$file]"; done < .rmignore
# And .. delete.
echo rm -v "${!filelist[@]}"
Also untested.
Warning: rm at your own risk. May contain nuts. Keep backups.
I note that neither of these solutions will handle wildcards in your .rmignore file. For that, you might need some extra processing...
shopt -s globstar
declare -A filelist=()
# Build a list...
for file in **; do filelist[$file]=1; done
# Remove PATTERNS...
while read -r glob; do
for file in $glob; do
unset "filelist[$file]"
done
done < .rmignore
# And remove whatever's left.
echo rm -v "${!filelist[@]}"
And .. you guessed it. Untested. This depends on $glob expanding as a glob.
Lastly, if you want a heavier-weight solution, you can use find and grep:
find . -type f -not -exec grep -q -f '{}' .rmignore \; -delete
This runs a grep for EACH file being considered. And it's not a bash solution, it only relies on find which is pretty universal.
Note that ALL of these solutions are at risk of errors if you have files that contain newlines.
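For what it's worth, a NUL-safe sketch of the associative-array approach (untested; assumes GNU find and Bash 4, still no wildcard handling, and .rmignore itself cannot list names containing newlines anyway):
declare -A keep=()
while IFS= read -r pat; do keep[$pat]=1; done < .rmignore
while IFS= read -r -d '' file; do
    file=${file#./}                       # strip leading ./ so names match .rmignore entries
    [[ -n ${keep[$file]:-} ]] && continue
    echo rm -fv -- "$file"
done < <(find . -type f -print0)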
This line does the job perfectly:
find . -type f | grep -vFf .rmignore
If you have rsync, you might be able to copy an empty directory to the target one, with suitable rsync ignore files. Try it first with -n, to see what it will attempt, before running it for real!
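A hedged sketch of that idea (assuming the patterns in .rmignore are usable as rsync exclude patterns; keep -n until the dry-run output looks right):
mkdir -p /tmp/empty
# sync an empty directory over the current one; --delete removes everything
# on the receiving side that is not protected by an exclude pattern
rsync -a -n -v --delete --exclude-from=.rmignore /tmp/empty/ ./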
This is another bash solution that seems to work ok in my tests:
while read -r line;do
exclude+=$(find . -type f -path "./$line")$'\n'
done <.rmignore
echo "ignored files:"
printf '%s\n' "$exclude"
echo "files to be deleted"
echo rm $(LC_ALL=C sort <(find . -type f) <(printf '%s\n' "$exclude") |uniq -u ) #intentionally non quoted to remove new lines
Alternatively, you may want to look at the simplest format:
rm $(ls -1 | grep -v .rmignore)

How can I batch rename multiple images with their path names and reordered sequences in bash?

My pictures are kept in folders named after the picture date; for example, the original paths and file names are:
.../Pics/2016_11_13/wedding/DSC0215.jpg
.../Pics/2016_11_13/afterparty/DSC0234.jpg
.../Pics/2016_11_13/afterparty/DSC0322.jpg
How do I rename the pictures into the format below, with continuous sequences and 4-digit padding?
.../Pics/2016_11_13_wedding.0001.jpg
.../Pics/2016_11_13_afterparty.0002.jpg
.../Pics/2016_11_13_afterparty.0003.jpg
I'm using Bash 4.1, so only the mv command is available. Here is what I have now, but it's not working:
#!/bin/bash
p=0
for i in *.jpg;
do
mv "$i" "$dirname.%03d$p.JPG"
((p++))
done
exit 0
Let's say you have something like .../Pics/2016_11_13/wedding/XXXXXX.jpg; then go into the directory .../Pics/2016_11_13; from there, you should have a bunch of subdirectories like wedding, afterparty, and so on. Launch this script (disclaimer: I didn't test it):
#!/bin/sh
for subdir in *; do # scan directory
[ ! -d "$subdir" ] && continue; # skip non-directory
prognum=0; # progressive number
for file in $(ls "$subdir"); do # scan subdirectory (note: parsing ls breaks on names with spaces)
(( prognum=$prognum+1 )) # increment progressive
newname=$(printf %4.4d $prognum) # format it
newname="$subdir.$newname.jpg" # compose the new name
if [ -f "$newname" ]; then # check to not overwrite anything
echo "error: $newname already exist."
exit
fi
# do the job, move or copy
cp "$subdir/$file" "$newname"
done
done
Please note that I skipped the "date" (2016_11_13) part - I am not sure about it. If you have a single date, then it is easy to add these digits at the # compose the new name step. If you have several dates, then you can add a nested for loop for scanning the "date" directories. One more reason I skipped this is to let you develop something by yourself, something you can be proud of...
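For completeness, a rough, untested sketch of the nested-loop idea (run from inside Pics; the continuous counter and name format are assumptions based on the question):
prognum=0                                      # continuous counter across all subdirectories
for datedir in */; do                          # e.g. 2016_11_13/
    datedir=${datedir%/}
    for subdir in "$datedir"/*/; do            # e.g. 2016_11_13/wedding/
        subdir=${subdir%/}
        for file in "$subdir"/*.jpg; do
            [ -f "$file" ] || continue         # skip if the glob matched nothing
            (( prognum++ ))
            newname=$(printf '%s_%s.%04d.jpg' "$datedir" "${subdir##*/}" "$prognum")
            echo cp "$file" "$newname"         # drop the echo to actually copy
        done
    done
done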
Using only mv and bash builtins:
#! /bin/bash
shopt -s globstar
cd Pics
p=1
# recursive glob for .jpg files
for i in **/*.jpg
do
# (date)/(event)/(filename).jpg
if [[ $i =~ (.*)/(.*)/(.*).jpg ]]
then
newname=$(printf "%s_%s.%04d.jpg" "${BASH_REMATCH[@]:1:2}" "$p")
echo mv "$i" "$newname"
((p++))
fi
done
globstar is a bash 4.0 feature, and regex matching is available even in OSX's antique bash.

Renaming a set of files to 001, 002, ...

I originally had a set of images of the form image_001.jpg, image_002.jpg, ...
I went through them and removed several. Now I'd like to rename the leftover files back to image_001.jpg, image_002.jpg, ...
Is there a Linux command that will do this neatly? I'm familiar with rename but can't see anything to order file names like this. I'm thinking that since ls *.jpg lists the files in order (with gaps), the solution would be to pass the output of that into a bash loop or something?
If I understand right, you have e.g. image_001.jpg, image_003.jpg, image_005.jpg, and you want to rename to image_001.jpg, image_002.jpg, image_003.jpg.
EDIT: This is modified to put the temp file in the current directory. As Stephan202 noted, this can make a significant difference if temp is on a different filesystem. To avoid hitting the temp file in the loop, it now goes through image*
i=1; temp=$(mktemp -p .); for file in image*
do
mv "$file" $temp;
mv $temp $(printf "image_%0.3d.jpg" $i)
i=$((i + 1))
done
A simple loop (test with echo, execute with mv):
I=1
for F in *; do
echo "$F" `printf image_%03d.jpg $I`
#mv "$F" `printf image_%03d.jpg $I` 2>/dev/null || true
I=$((I + 1))
done
(I added 2>/dev/null || true to suppress warnings about identical source and target files. If this is not to your liking, go with Matthew Flaschen's answer.)
Some good answers here already; but some rely on hiding errors, which is not a good idea (that assumes mv will only error because of a condition that is expected - what about all the other reasons mv might error?).
Moreover, it can be done a little shorter and should be better quoted:
for file in *; do
printf -vsequenceImage 'image_%03d.jpg' "$((++i))"
[[ -e $sequenceImage ]] || \
mv "$file" "$sequenceImage"
done
Also note that you shouldn't capitalize your variables in bash scripts.
Try the following script:
numerate.sh
This code snippet should do the job:
./numerate.sh -d <your image folder> -b <start number> -L 3 -p image_ -s .jpg -o numerically -r
This does the reverse of what you are asking (taking files of the form *.jpg.001 and converting them to *.001.jpg), but can easily be modified for your purpose:
re='(.*)\.([[:alpha:]]+)\.([[:digit:]]{3,})$'
for file in *
do
    if [[ "$file" =~ $re ]]
    then
        mv "${BASH_REMATCH[0]}" "${BASH_REMATCH[1]}.${BASH_REMATCH[3]}.${BASH_REMATCH[2]}"
    fi
done
I was going to suggest something like the above using a for loop, an iterator, cut -f1 -d '_', and then an mv from the old name to the new numbered name. It looks like it's already covered other ways, though.
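A rough, untested sketch of that idea (assumes names like image_001.jpg, processed in sorted order so targets never collide):
i=1
for f in image_*.jpg; do
    base=$(printf '%s' "$f" | cut -f1 -d '_')   # "image"
    printf -v newname '%s_%03d.jpg' "$base" "$i"
    [ "$f" = "$newname" ] || echo mv -- "$f" "$newname"   # drop echo to rename for real
    i=$((i + 1))
done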
