Linux: how to batch rename folders and the files under them

I need help finishing a script to rename folders and files.
eg: my current folders and files like below:
Gideon/gideon_lisha/Gideon_samuel/Gideon_nathan.xml
Gideon/lisha_gideon/Gideon_noah.xml
...
I want a shell command to rename them like below:
Liang/Liang_lisha/Liang_samuel/Liang_nathan.xml
Liang/lisha_Liang/Liang_noah.xml
...
I tried:
#!/bin/bash
path=$1
filename=$2
newfilename=$3
echo "We are finding '$filename' under the folder '$path'"
count=1
for i in `find "$path" -iname "*$filename*"`
do
newpath=`echo $i | sed "s/$filename/$newfilename/g"`
sudo mv "$i" "$newpath"
echo "${count}: Renaming $i to $newpath"
let count++
done
but the script stops at:
Liang/gideon_lisha/Gideon_samuel/Gideon_nathan.xml
because it renamed the outer folder first, so the next path can no longer be found. I did not know how to make the script run from inner to outer instead of outer to inner.

Finally, I found the answer:
#!/bin/bash
path=$1
filename=$2
newfilename=$3
echo "We are finding '$filename' under the folder '$path'"
count=1
for i in `find "$path" -iname "*$filename*" | tac`
do
newpath=`echo "$i" | sed "s#\(.*\)$filename#\1$newfilename#i"`
sudo mv "$i" "$newpath"
echo "${count}: Renaming $i to $newpath"
let count++
done
Really, thanks @susbarbatus!
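A note on ordering: piping `find` through `tac` reverses the output so deeper paths come first, but `find -depth` guarantees that order directly by emitting a directory's contents before the directory itself. A minimal sketch of that alternative (the `rename_tree` function name is illustrative; the `I` flag on `sed`'s `s///` is a GNU extension mirroring `-iname`'s case-insensitivity):

```shell
#!/bin/bash
# Rename every path component matching OLD to NEW under DIR, visiting
# the deepest entries first (-depth) so that renaming a parent folder
# never invalidates a child path still waiting in the stream.
rename_tree() {
    local dir=$1 old=$2 new=$3
    find "$dir" -depth -iname "*$old*" | while IFS= read -r p; do
        # Replace only the last occurrence of OLD: everything before it
        # (the parent components) has not been renamed yet due to -depth.
        newpath=$(echo "$p" | sed "s#\(.*\)$old#\1$new#I")
        mv "$p" "$newpath"
    done
}
```

Called as `rename_tree /some/path Gideon Liang`, this renames both `Gideon_nathan.xml` and its enclosing `Gideon_samuel` and `Gideon` folders without ever invalidating a pending path.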

Related

Script to rename files in subfolders to different name

I have a directory with several folders inside. Inside each folder there is a worksheet with the same name. I need to run a script that changes the name of each worksheet so that I can throw them all into the same folder, for example:
worksheet1
worksheet2
worksheet3
Today they are all called spreadsheet.csv. I have a sketch in Linux; I need help, please.
NN=0;
for arq in $(ls -1 *.csv);do
let NN++;
rename -n 's/'${arq}'/spreadsheet'${NN}'.csv/' ${arq};
done
(this finds all .csv files, but it is not recursive; it works only in one directory)
Use globstar:
n=0
shopt -s globstar nullglob || exit
for f in **/*.csv; do
until dst=spreadsheet$(( ++n )).csv; [[ ! -e ${dst} ]]; do
continue
done
mv -i -- "${f}" "${dst}"
done
Sample files:
$ find . -name "*.csv"
./spreadsheet.csv
./sub1/spreadsheet.csv
./sub1/sub2/spreadsheet.csv
One idea:
n=0
while read -r oldname
do
((++n))
newname="${oldname##*/}"
newname="${newname//.csv/-$n.csv}"
echo mv "$oldname" "$newname"
done < <(find . -name spreadsheet.csv)
This generates:
mv ./spreadsheet.csv spreadsheet-1.csv
mv ./sub1/spreadsheet.csv spreadsheet-2.csv
mv ./sub1/sub2/spreadsheet.csv spreadsheet-3.csv
Once the OP is satisfied with the output, remove the echo and run the script again.
After removing the echo and running again:
$ find . -name "*.csv"
./spreadsheet-1.csv
./spreadsheet-2.csv
./spreadsheet-3.csv

How do I tweak my bash script below for searching sub directories

I have a directory with files in the following structure:
HomeTransaction1/Date1/transactionfile1.txt
HomeTransaction1/Date1/transactionfile1.xls
HomeTransaction1/Date1/transactionfile2.xls
HomeTransaction1/Date1/transactionfile2.txt
HomeTransaction1/Date1/transactionfile3.txt
HomeTransaction1/Date2/transactionfile1.txt
HomeTransaction1/Date3/transactionfile2.txt
HomeTransaction1/Date3/transactionfile3.txt
HomeTransaction2/Date1/transactionfile1.txt
HomeTransaction2/Date1/transactionfile2.txt
HomeTransaction3/Date1/transactionfile3.txt
I'm trying to grep for a specific string in the transaction files that end in .txt, so I'm trying to come up with a bash script to achieve this. Conceptually, this is my thought process:
A - List each folder in the current directory. In this example, that's HomeTransaction1, HomeTransaction2 and HomeTransaction3
B - For each folder in step A, list all the folders (the Date folders)
C - For each folder in step B, run "grep" on the files with a .txt extension
This is what I have come up with so far:
#!/bin/bash
for FILE in `ls -l`
do
if test -d $FILE && (startswith "HomeTrasaction") # I want to add a condition to check that the directory name starts with "HomeTrasaction"
then
cd $FILE # example: cd to the 'HomeTransaction1' directory
echo "In HomeTransaction directory $FILE"
for SUB_FILE in `ls -l`
do
cd $SUB_FILE # example: cd to 'Date1'
echo "In Date directory $FILE"
for TS_FILES in $(find . -print | grep .txt)
grep "text-to-search" $SUB_FILE
fi
done
I appreciate any help in finalizing my script. Thank you.
The solution is actually pretty simple
find ./HomeTrasaction* -iname "*.txt" -exec grep -i "phrase" {} \;
find ./HomeTrasaction* - search each directory that start with this phrase, in the current directory.
-iname "*.txt" - for each file that ends with .txt
-exec grep -i "phrase" {} \; - run grep for the word "phrase" on each file
If this is still not clear, see "man find" :)
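Where the installed grep supports it (GNU grep does), the same search can also be done without `find`, letting grep recurse itself; a sketch that rebuilds a small version of the question's layout and searches it in one pass (the directory and file names here are just the demo setup):

```shell
# Demo: recreate part of the layout from the question, then search in one pass.
mkdir -p HomeTrasaction1/Date1 HomeTrasaction2/Date1
echo "hello text-to-search" > HomeTrasaction1/Date1/transactionfile1.txt
echo "text-to-search too"   > HomeTrasaction2/Date1/transactionfile2.txt

# -r recurses, -i ignores case, --include restricts the search to *.txt files.
grep -ri --include="*.txt" "text-to-search" ./HomeTrasaction*
```

This avoids spawning one grep process per file, which the `-exec grep ... \;` form does.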

How to log the compressed zip files when using a function in shell script

I am working on a shell script that should validate .gz files in multiple folders on Linux: gzip a file if it is not already zipped, and purge it if it is already zipped, with the following condition.
a) All the files in these folders have *.log..gz as their extension
So I was using functions and the find command to achieve this.
The script seems to be working fine, but it is not logging the zipped files' information to the log file; it only logs the already-zipped files in each folder. Is this the correct way to use functions?
#!/bin/bash
DIR_PATH="/var/log"
LOG="/tmp/test.log"
VARLOG_PATH=("$DIR_PATH"/{Kevin,John,Robin,Pavan})
fun_zip_log(){
for i in `find "$i" -type f \( -name "*.log.20*" 2>/dev/null \) `; do
gzip "$i" ; done >> $LOG
}
fun_purge_log(){
for i in `find "$i" -type f \( -name "log.20*" 2>/dev/null \) `; do rm -f
"$i" ; done >> $LOG
}
validate_zip(){
for file in $i/*.gz
do
if ! [ -f "$file" ];
then
echo "$file is getting zipped" >> $LOG
fun_zip_log "$i"
else
echo "$file is already zipped" >> $LOG
fun_purge_log "$i"
fi
done
}
#MainBlock
for i in "${VARLOG_PATH[@]}"
do
if [ -d "$i" ] && [ "$(ls -A "$i" |wc -l )" -gt 0 ]; then
echo "Searching for files in directory : "$i" " >> $LOG
validate_zip "$i"
else
echo "No files exist in directory : "$i" " >> $LOG
fi
done
exit
####LOG FILE###
Searching for files in directory : /var/log/Kevin
[*.gz] is getting zipped.
Searching for files in directory : /var/log/John
/var/log/John/instrumentation.log.2018-06-20.gz is already zipped
/var/log/John/instrumentation.log.2018-06-21.gz is already zipped
No files exist in directory : /var/log/Robin
Searching for files in directory : /var/log/Pavan
[*.gz] is getting zipped.
Your code is very muddled and confusing. For example in this:
fun_purge_log(){
for i in `find "$i" -type f \( -name "log.20*" 2>/dev/null \) `; do rm -f
"$i" ; done >> $LOG
}
for file in $i/*.gz
do
...
fun_purge_log "$i"
In the calling code you're looping setting a variable file but then passing the directory "$i" to your function to then try to find the files again.
Within fun_purge_log() you're ignoring the argument being passed in and then using the global variable i as both the directory argument to find and also to loop through the list of files output by find - why not pick a new variable name and use some local variables?
I can't imagine what you think 2>/dev/null is going to do in \( -name "*log.20*" 2>/dev/null \).
You're trying to append something to $LOG but you aren't printing anything to append to it.
Run your code through shellcheck (e.g. at shellcheck.net), read http://mywiki.wooledge.org/BashFAQ/001, https://mywiki.wooledge.org/Quotes and https://mywiki.wooledge.org/ParsingLs, and really just THINK about what each line of your code is doing. Correct the issues yourself and then let us know if you still have a problem. Oh, and by convention, and to avoid clashing with other variables, don't use all capitals for non-exported variable names; lastly, use $(command) instead of `command`.
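To make those points concrete, here is one way the script might be restructured with local variables, consistent quoting, and a log line per file; the directory names and the *.log.20* pattern come from the question, but everything else is a sketch, not the definitive fix:

```shell
#!/bin/bash
dir_path="/var/log"
log="/tmp/test.log"
varlog_path=( "$dir_path"/{Kevin,John,Robin,Pavan} )

# gzip every not-yet-zipped rotated log in one directory, logging each name
zip_logs() {
    local dir=$1 f
    find "$dir" -type f -name "*.log.20*" ! -name "*.gz" |
    while IFS= read -r f; do
        echo "$f is getting zipped"
        gzip "$f"
    done >> "$log"
}

# remove already-zipped rotated logs in one directory, logging each name
purge_logs() {
    local dir=$1 f
    find "$dir" -type f -name "*.log.20*.gz" |
    while IFS= read -r f; do
        echo "$f is already zipped, purging"
        rm -f "$f"
    done >> "$log"
}

for dir in "${varlog_path[@]}"; do
    if [ -d "$dir" ]; then
        echo "Searching for files in directory : $dir" >> "$log"
        zip_logs "$dir"
        purge_logs "$dir"
    else
        echo "No files exist in directory : $dir" >> "$log"
    fi
done
```

Each function receives the directory explicitly and does its own find, so the caller no longer pre-globs *.gz, and the log records every file actually zipped or purged.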

getting filenames from directory in shell script

I would like to iterate over all the files present in a directory using a shell script, and display the contents of each file. I am passing the directory as a command-line argument.
I have a simple loop as follows:
for file in $1
do
cat $file
done
If I run
sh script.sh test
where test is a directory, I get the content of the first file only.
Could anyone please help me with this?
A couple of alternatives.
A compact modification of SMA's code:
for file in $1/*
do
[[ -f $file ]] && cat $file
done
or use find:
find $1 -type f -exec cat \{\} \;
Try something like:
for file in $1/*
do
if [[ -f $file ]] ##you could add -r to check if you have read permission for file or not
then
cat $file
fi
done

Change extension of file using shell script

How to change the extension of all *.dat files in a directory to *.txt.
The shell script should take the directory name as an argument and can
take multiple directories as arguments. Print the log of the command
result in append mode with a date and timestamp.
Bash can do all of the heavy lifting such as extracting the extension and tagging on a new one. For example:
for file in $1/*.dat ; do mv "$file" "${file%.*}.txt" ; done
Batch File Rename By File Extension in Unix
# change .htm files to .html
for file in *.htm ; do mv $file `echo $file | sed 's/\(.*\.\)htm/\1html/'` ; done
# change .html files to .htm
for file in *.html ; do mv $file `echo $file | sed 's/\(.*\.\)html/\1htm/'` ; done
#change .html files to .shtml
for file in *.html ; do mv $file `echo $file | sed 's/\(.*\.\)html/\1shtml/'` ; done
#change .html files to php
for file in *.html ; do mv $file `echo $file | sed 's/\(.*\.\)html/\1php/'` ; done
so ==>
# change .dat files to .txt
for file in *.dat ; do mv $file `echo $file | sed 's/\(.*\.\)dat/\1txt/'` ; done
#!/bin/bash
for d in $*; do
for f in $(ls $d/*.dat); do
echo $(date) $(mv -v $f ${f%.dat}.txt)
done
done
Output redirection should be done by the shell when running the script
Leaving out argument validity checks
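Those two notes can be made concrete with a small sketch (the `change_ext` name is illustrative): the script only prints, and the caller appends its output, already date-stamped, to whatever log file it likes:

```shell
#!/bin/bash
# change_ext.sh (illustrative name): rename *.dat to *.txt in each
# directory given as an argument, printing a dated line per rename.
change_ext() {
    local d f
    for d in "$@"; do
        for f in "$d"/*.dat; do
            [ -e "$f" ] || continue   # skip when the glob matched nothing
            echo "$(date) $(mv -v "$f" "${f%.dat}.txt")"
        done
    done
}
change_ext "$@"
```

Invoked as, say, `./change_ext.sh dir1 dir2 >> rename.log 2>&1`, the shell does the redirection and the script stays free of logging logic.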
Simple script:
#!/bin/bash
if [ $# -lt 1 ]; then
echo "Usage: `basename $0` <any number of directories, space separated>"
exit 85 # exit status for wrong number of arguments.
fi
for directories
do
for files in $(ls $directories/*.dat); do
echo $(date) $(mv -v $files ${files%.dat}.txt)
done
done
The first for loop, by default, loops over the positional parameters ("$@"), i.e. the command-line arguments passed.
Following Pben's solution: if your filename contains blank spaces, you should put double quotation marks around the variable, like the following:
#remove the space in file name
#example file name:19-014-0100.mp3 .mp3
#result file name:19-014-0100.mp3
$ for file in *.mp3 ;
do target=`echo "$file" | sed 's/ //g'`;
echo "$target";
mv "$file" "$target";
done;
#remove the duplicate file extension in file name
#example file name:19-014-0100.mp3.mp3
#result file name:19-014-0100.mp3
$ for file in *.mp3 ;
do target=`echo "$file" | sed 's/\.mp3\.mp3$/.mp3/g'`;
echo "$target";
mv "$file" "$target";
done;
To rename (change the extension of) all my .html files to .epub files I use this command line:
find . -name "*.html*" -exec rename -v 's/\.html$/\.epub/i' {} \;
The script first finds the names with the given extension and removes
the extension from them. It then adds a backslash (\) to escape spaces
for the terminal, and finally the 'mv' command is executed.
The '.temp' folder is used to hide the intermediate files from the
user in a GUI.
#!/bin/sh
if [ $# -ne 3 ]
then
echo "Usage: ./script folder current_extension modify_extension"
exit
fi
mkdir .temp
find $1 -name "*.$2" > .temp/output_1 && sed "s/$2//" .temp/output_1 > .temp/output_2 && sed -e "s/[ \t]/\\\ /g" .temp/output_2 > .temp/output_3
while read line
do
mv -v "$line""$2" "$line""$3"
done < .temp/output_3
rm -rf .temp
The intermediate files are saved inside the '.temp' folder; afterwards the '.temp' folder is removed.
The top-voted answer didn't really work for me; I may have been doing something wrong. My scenario was trying to create a file with the original name but with the date appended to it, along with changing the extension from .xlsx to .csv. This is what worked for me:
csvname=`echo $xlsx |sed 's/\.xlsx//'`"-$now"`echo $xlsx | sed 's/\(.*\.\)xlsx/\.csv/'`
So, for all the .dat files in a directory (without the date addition), you could run something like this:
for i in *.dat
do mv $i `echo $i |sed 's/\.dat//'``echo $i | sed 's/\(.*\.\)dat/\.txt/'`
done
From the above, this section of code just removed the extension:
echo $i |sed 's/\.dat//'
And this section changes the .dat to .txt:
echo $i | sed 's/\(.*\.\)dat/\.txt/'
And by bumping them next to each other, it concatenates the two outputs into the filename. It's like doing this:
mv [filename][.dat] [filename] + [.txt]
Though, I did use STDOUT instead of the 'mv' command.
The following command changes the file extension .c to .h:
find . -depth -name "*.c" -exec sh -c 'dname=$(dirname {}) && fname=$(basename {} .c) && mv {} $dname/$fname.h' ";"
Change .js files to the .cjs extension recursively:
cd dist # where you place your .js
for file in $(find . -type f -name "*.js"); do mv "$file" "${file%.*}.cjs"; done
