Looping through files in a directory but the directory name gets echoed twice - linux

I'm working on looping through files in a bash script and for the most part it is working, but the directory name is echoed twice. How can I change the code to echo only the files in the directory and not the directory itself?
This is my code:
Directory=C:/temp/
find $Directory
for filename in "$Directory"/;
do
echo $filename
done
This is what I see in my terminal:
C:/temp/
C:/temp/QJ07312433_10_19_2021_snapshot.xml
C:/temp/QJ07312433_10_28_2021_snapshot.xml
C:/temp/

The find command prints every name in the hierarchy rooted at $Directory, including the directory itself. Your for loop then iterates over the single string "$Directory"/ and echoes it, which is why the directory name appears a second time.
If you want the loop to process the output of find, you need to pipe it into the loop:
find "$Directory" | while read -r name; do
echo "$name"
done
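If you only want the files inside the directory and not the directory itself, you can skip find and loop over a glob instead; a minimal sketch, assuming the files sit directly in $Directory:
for filename in "$Directory"/*; do
    [ -f "$filename" ] || continue   # skip subdirectories, and the unexpanded pattern if nothing matches
    echo "$filename"
done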

Related

How do I extract the date from multiple files with dates in them?

Let's say I have multiple filenames, e.g. R014-20171109-1159.log.20171109_1159.
I want to create a shell script which creates a folder for every given date and moves the files matching that date into it.
Is this possible?
For the example, a folder "20171109" should be created and contain the file "R014-20171109-1159.log.20171109_1159".
Thanks
This is a typical application of a for loop in bash to iterate through files.
At the same time, this solution uses shell parameter expansion.
for file in /path/to/files/*\.log\.*
do
foldername=${file#*-}
foldername=${foldername%%-*}
mkdir -p "${foldername}" # -p suppress errors if folder already exists
[ $? -eq 0 ] && mv "${file}" "${foldername}" # check last cmd status and move
done
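For the example filename, the two expansions work step by step like this:
file="R014-20171109-1159.log.20171109_1159"
foldername=${file#*-}         # remove everything up to the first "-"  -> 20171109-1159.log.20171109_1159
foldername=${foldername%%-*}  # remove from the next "-" to the end    -> 20171109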
Since you want to write a shell script, use commands. To get the date, use the cut command, for example:
cat 1.txt
R014-20171109-1159.log.20171109_1159
cat 1.txt | cut -d "-" -f2
Output
20171109
That is your date; use it to create the folder. This way you can loop and create as many folders as you want.
It's actually quite easy (my Bash syntax might be a bit off):
for f in /path/to/your/files*; do
    ## Check if the glob gets expanded to existing files.
    ## If not, f here will be exactly the pattern above
    ## and the exists test will evaluate to false.
    [ -e "$f" ] || continue
    ## grep the file name for ".log." and extract the 8 characters after it.
    d=$(echo "$f" | grep -o '\.log\.[0-9]\{8\}' | cut -c6-13)
    ## Check whether a folder with that 8-character name exists already; if not,
    ## create it (mkdir -p does both), then move the file to that folder path.
    mkdir -p "$d"
    mv "$f" "$d"/
done
Main idea is from this post (link). Sorry if the code is a bit rough, as I haven't worked with Bash recently.
The commands below can be put in a script to achieve this.
Assign a variable with the current date as shown below (use the --date='n day ago' option if you need an older date).
If you need to get the date from the file name itself, get the files in a loop and then use the cut command to extract the date string:
dirVar=$(date +%Y%m%d) --> for the current day
dirVar=$(date +%Y%m%d --date='1 day ago') --> for yesterday
dirVar=$(echo $fileName | cut -c6-13) or
dirVar=$(echo $fileName | cut -d- -f2) --> to get it from $fileName
Create the directory from the variable value as below (-p: create the directory if it doesn't exist, with no error if it already does):
mkdir -p ${dirVar}
Move the files into that directory with the line below; a combined sketch follows these steps.
mv *log.${dirVar}* ${dirVar}/
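Putting those steps together, a minimal sketch (assuming the log files are in the current directory and the date is the second dash-separated field of the name):
for fileName in *log.*; do
    dirVar=$(echo "$fileName" | cut -d- -f2)
    mkdir -p "${dirVar}"
    mv "$fileName" "${dirVar}"/
done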

Move files in a for loop

I want a script that is able to read the content of a text file which contains folder names and moves the folders from their directory to a specific folder. Here is my script:
#!/bin/bash
for i in $(cat /folder/collected/folders.txt)
do
mv /fromfilelocation/$i /folder/Collected/
done
This script only partly works: it moves only the last folder in the text file, and for the other folders it gives the error "not possible: data or directory not found". But the folder is there, and the path shown in the error message is correct.
What should I do in order to make it work correctly?
You can use this:
#!/bin/bash
for sample in `awk '{print $1}' All_bins.txt`
do mv "$sample" All_Good_Bins
done
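The file and folder names above (All_bins.txt, All_Good_Bins) come from that answer's own setup; adapted to the paths in the question, it would be roughly:
for sample in $(awk '{print $1}' /folder/collected/folders.txt)
do mv "/fromfilelocation/$sample" /folder/Collected/
done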
Use a while loop instead:
while IFS= read -r i; do
    mv /fromfilelocation/"$i" /folder/Collected/
done < /folder/collected/folders.txt

Get current directory (not full path) with filename only when sub folder is present in Linux bash

I have prepared a bash script to get only the directory name (not the full path) together with the file name of the file. This has to be done only when the file is located in a sub directory.
For example:
If the input is src/email/${sub_dir}/Bank_Casefeed.email, the output should be ${sub_dir}/Bank_Casefeed.email.
If the input is src/layouts/Bank_Casefeed.layout, the output should be Bank_Casefeed.layout. I can easily get this using the basename command.
src/basefolder is always constant. In some cases (after the src/email (base folder) directory), sub directories will be there.
The script below works. I can use it (only if the module is email) to get the output, but it should work even if a sub directory is present in other modules. Maybe I should count the directories? If there are more than two directories (src/basefolder), the script should include the sub directory. Is there any better way to handle both scenarios?
#!/bin/bash
filename=`basename src/email/${sub_dir}/Bank_Casefeed.email`
echo "filename is $filename"
fulldir=`dirname src/email/${sub_dir}/Bank_Casefeed.email`
dir=`basename $fulldir`
echo "subdirectory name: $dir"
echo "concatenate $filename $dir"
Entity=$dir/$filename
echo $Entity
Using shell parameter expansion:
sub_dir='test'
files=( "src/email/${sub_dir}/Bank_Casefeed.email" "src/email/Bank_Casefeed.email" )
for f in "${files[#]}"; do
if [[ $f == *"/$sub_dir/"* ]]; then
echo "${f/*\/$sub_dir\//$sub_dir\/}"
else
basename "$f"
fi
done
Output:
test/Bank_Casefeed.email
Bank_Casefeed.email
I know there might be an easier way to do this. But I believe you can just manipulate the input string. For example:
#!/bin/bash
sub_dir='test'
DIRNAME1="src/email/${sub_dir}/Bank_Casefeed.email"
DIRNAME2="src/email/Bank_Casefeed.email"
echo $DIRNAME1 | cut -f3- -d'/'
echo $DIRNAME2 | cut -f3- -d'/'
This will remove the first two path components.
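A similar result is possible with parameter expansion alone, avoiding the extra cut process; a minimal sketch, assuming the base folder always sits directly under src/:
path="src/email/${sub_dir}/Bank_Casefeed.email"
echo "${path#src/*/}"   # strips "src/" plus the base folder, keeping any sub directory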

Delete files in one directory that do not exist in another directory or its child directories

I am still a newbie in shell scripting and am trying to come up with a simple script. Could anyone give me some direction here? Here is what I need.
Files in path 1: /tmp
100abcd
200efgh
300ijkl
Files in path 2: /home/storage
backupfile_100abcd_str1
backupfile_100abcd_str2
backupfile_200efgh_str1
backupfile_200efgh_str2
backupfile_200efgh_str3
Now I need to delete the file 300ijkl in /tmp, as the corresponding backup file is not present in /home/storage. The /tmp directory contains more than 300 files. I need to delete the files in /tmp for which no corresponding backup files are present; the file names in /tmp will match file names in /home/storage or in directories under /home/storage.
Appreciate your time and response.
You can also approach the deletion using grep. You can loop through the files in /tmp, checking each name with ls piped to grep, and delete it if there is no match:
#!/bin/bash
[ -z "$1" -o -z "$2" ] && { ## validate input
printf "error: insufficient input. Usage: %s tmpfiles storage\n" ${0//*\//}
exit 1
}
for i in "$1"/*; do
fn=${i##*/} ## strip path, leaving filename only
## if file in backup matches filename, skip rest of loop
ls "${2}"* | grep -q "$fn" &>/dev/null && continue
printf "removing %s\n" "$i"
# rm "$i" ## remove file
done
Note: the actual removal is commented out above; test and ensure there are no unintended consequences before performing the actual delete. Call it with the path to tmp (without a trailing /) as the first argument and /home/storage as the second argument:
$ bash scriptname /path/to/tmp /home/storage
You can solve this by
making a list of the files in /home/storage
testing each filename in /tmp to see if it is in the list from /home/storage
Given the linux+shell tags, one might use bash:
make the list of files from /home/storage an associative array
make the subscript of the array the filename
Here is a sample script to illustrate ($1 and $2 are the parameters to pass to the script, i.e., /home/storage and /tmp):
#!/bin/bash
declare -A InTarget
while read -r path
do
    name=${path##*/}
    InTarget[$name]=$path
done < <(find $1 -type f)
while read -r path
do
    name=${path##*/}
    [[ -z ${InTarget[$name]} ]] && rm -f "$path"
done < <(find $2 -type f)
It uses two interesting shell features:
name=${path##*/} is a POSIX shell feature which allows the script to perform the basename function without an extra process (per filename). That makes the script faster.
done < <(find $2 -type f) is a bash feature which lets the script read the list of filenames from find without making the assignments to the array run in a subprocess. Here the reason for using the feature is that if the array is updated in a subprocess, it would have no effect on the array value in the script which is passed to the second loop.
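To see why the process substitution matters, compare it with piping into the loop; a tiny standalone illustration (not part of the script above):
declare -A seen
printf 'a\nb\n' | while read -r x; do seen[$x]=1; done
echo "${#seen[@]}"   # prints 0: the loop ran in a subshell, so the array is unchanged
while read -r x; do seen[$x]=1; done < <(printf 'a\nb\n')
echo "${#seen[@]}"   # prints 2: the loop ran in the current shell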
For related discussion:
Extract File Basename Without Path and Extension in Bash
Bash Script: While-Loop Subshell Dilemma
I spent quite some time on this today because I needed to delete files which have the same name but different extensions, so if anyone is looking for a quick implementation, here you go:
#!/bin/bash
# We need some reference to the files which we want to keep and not delete;
# let's assume you want to keep the jpeg files in the first folder, so you
# need to map the reference list to that file extension first.
FILES_TO_KEEP=$(ls -1 "${2}" | sed 's/\.pdf$/.jpeg/g')
# Iterate through the files in the first argument path.
for file in "${1}"/*; do
    # In my case, I did not want to do anything with directories, so skip them.
    if [[ -d $file ]]; then
        continue
    fi
    # Strip the path from the iterated file with basename so we can compare it
    # to the files we want to keep.
    NAME_WITHOUT_PATH=$(basename "$file")
    # I use a Mac, which means poor-quality CLI tools for string handling;
    # this should be a safe check to see whether FILES_TO_KEEP contains NAME_WITHOUT_PATH.
    if [[ $FILES_TO_KEEP == *"$NAME_WITHOUT_PATH"* ]]; then
        echo "Not deleting: $NAME_WITHOUT_PATH"
    else
        # If the other directory does not contain this file, remove it.
        echo "deleting: $NAME_WITHOUT_PATH"
        rm -rf "$file"
    fi
done
Usage: sh deleteDifferentFiles.sh path/from/where path/source/of/truth

Another Bash permission denied post

I've spent the past hour trying to find a way around this, but to no avail, so I'm asking.
I am trying to make a simple script that will take the name for a file and then generate a generic blank html template for me.
#!/bin/bash
blank=/home/sithyrys/Documents/scripts/blank.html
echo "Enter file name with no extensions:"
read fileName
fileName+=.html
echo $fileName
touch $fileName
$blank >> $fileName
When I comment out the path, the code runs with no error message, but then it's not pulling the template and it produces a blank page. The error message in question is:
./basicHTMLTemplate.sh: line 9: /home/sithyrys/Documents/scripts/blank.html: Permission denied
Edit: the shebang line was copied wrong; it was already correct in the script.
>> does not copy a file; it appends the output of the command that precedes it to the file named after it. With $blank >> $fileName, the shell tries to execute blank.html as a command, and because the file is not executable you get the "Permission denied" error. You need to use the cat command to actually "push" the contents of blank.html into the new file:
cat "$blank" >> "$fileName"
As written, your code accommodates the possibility that $fileName already exists and appends the contents of $blank without overwriting the existing file. In practice, it doesn't make much sense to append the template to the end of an existing file, so you probably just want to make a copy of the template.
#!/bin/bash
blank=/home/sithyrys/Documents/scripts/blank.html
echo "Enter file name with no extensions:"
read fileName
fileName+=.html
echo $fileName
cp "$blank" "$fileName"
(or, to guard against overwriting an existing file,
[[ -f "$fileName" ]] || cp "$blank" "$fileName"
)
