A bash script to unrar all files in all subdirectories in Linux

I'm on Ubuntu server 18.04.
My main goal is to run a script from a parent directory which unrars all the files inside all sub directories of the parent directory.
I have also installed unrar (apt install unrar) and it is located at "/usr/bin/unrar".
This is what I have come up with so far, but it does not seem to work:
for dir in 'pwd/*/'
do
dir=${dir%*/}
cd dir
for file in dir/*/
do
"/usr/bin/unrar" x dir/*.r* dir/
done
I've found a working script for Windows which uses 7zip here.

Here's a starting example; adjust pattern and echo statements as needed:
#!/bin/bash
pattern='*/*.rar'
archives=($pattern)
# If the glob matched nothing, the array still holds the literal pattern.
if [[ ${archives[0]} == "$pattern" ]]; then
    echo NONE 1>&2
    exit
fi
for rpath in "${archives[@]}"; do
    dir=${rpath%/*}
    rar=${rpath##*/}
    pushd "$dir" > /dev/null
    echo -e "\ndir: $dir"
    echo unrar args "$rar"
    popd > /dev/null
done

If you just want to unrar every rar file in the directory where it was found, find can do that directly. It takes some getting used to, but it's well worth learning.
find . -name '*.rar' -execdir unrar {} \;
Briefly, -execdir runs the command (everything up to \;) once for each found file, in the directory where that file was found; the {} placeholder is replaced with the file name.

find . -name '*.rar' -execdir unrar {} \;
This didn't work for me; unrar needs a command such as e (extract ignoring paths) or x (extract with full paths) before the archive name. I changed it to this and it worked:
find . -name '*.rar' -execdir unrar e -r {} \;
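For the original goal of extracting every archive into the subdirectory where it lives, here is a minimal sketch (assuming GNU find and the unrar from the Ubuntu repos; the -o- switch skips files that already exist, so re-runs are harmless):
find . -name '*.rar' -execdir /usr/bin/unrar x -o- '{}' \;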

Related

How to get echo to print only deleted file paths?

I'm trying to write a script, to be run from cron, that creates a daily mysqldump in a directory and also checks all the backups in that directory, removing any older than 7 days.
So my functions work correctly, it's just my last echo command that is not doing what I want it to. This is what I have so far:
DBNAME=database
DATE=`date +\%Y-\%m-\%d_\%H\%M`
SQLFILE=$DBNAME-${DATE}.sql
curr_dir=$1
#MAIN
mysqldump -u root -ppassword --databases $DBNAME > $SQLFILE
echo "$SQLFILE has been successfully created."
#Remove files older than 7 days
for filepath in "$curr_dir"*
do
find "$filepath" -mtime +7 -type f -delete
echo "$filepath has been deleted."
done
exit
So the backup creations and removal of old files both work. But, my problem is that echo "$filepath has been deleted." is printing all files in the directory instead of just the files older than 7 days that were deleted. Where am I going wrong here?
EDIT (Full solution):
This is the full solution that wound up working for me, using everyone's advice from the answers and comments. It works as a cron job. I had to hard-code the dump's output filepath because the files were being created in the root directory instead of in the path given in argument $1. The if statement also checks that $1 is the directory I want files deleted in. Thank you everyone for the help!
#Variables
DBNAME=database
DATE=`date +\%Y-\%m-\%d_\%H\%M`
SQLFILE=$DBNAME-${DATE}.sql
curr_dir=$1
#MAIN
mysqldump -u root -ppassword --databases $DBNAME > /path/to/db-backups/directory/$SQLFILE
echo "$SQLFILE has been successfully created."
#Remove files older than 7 days
for filepath in "$curr_dir"*
do
    if [[ $1 = "/path/to/db-backups/directory" ]]; then
        find "$filepath" -mtime +7 -type f -delete -exec sh -c 'printf "%s has been deleted.\n" "$@"' _ {} +
    fi
done
exit
You can merge the echo into the find:
find "$filepath" -mtime +7 -type f -delete -exec echo '{}' "has been deleted." \;
The -delete action is essentially a shortcut for -exec rm '{}' \;, and find runs its actions in the order you write them, so the echo only fires for files the preceding -delete actually removed.
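With GNU find you can also get the log line without spawning a process per file; a minimal sketch using the path from the question (-printf only fires when the preceding -delete succeeded):
find /path/to/db-backups/directory -mtime +7 -type f -delete -printf '%p has been deleted.\n'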

Run an Executable Program File in Multiple Subdirectories Using Shell

I have a main directory with 361 subdirectories. Within each subdirectory there is a parameter file and one executable program file. The executable is coded to look for the parameter file in the directory where the executable is located. (The same executable file is in all subdirectories, and the parameter files all have the same file name in all subdirectories.)
Instead of executing each program file individually, is there a csh command I can run from the terminal to run them all at once?
UPDATED
If your Linux is so old it doesn't have -execdir, you could try this:
find $(pwd) -name YourProgram -exec dirname {} \; | while read d; do cd "$d" && pwd; done
If that correctly prints the names of the directories where your program needs to be run, just remove the pwd and replace it with whatever you want done in that directory - presumably something like this:
find $(pwd) -name YourProgram -exec dirname {} \; | while read d; do cd "$d" && ./YourProgram; done
ORIGINAL ANSWER
Like this maybe:
find . -type f -name YourProgramName -execdir ./YourProgramName YourParameterFile \;
But backup first and check it looks right before using.
The -execdir causes find to change into the directory containing each match before running the command there.
If your command is more complicated, you can do this:
find . -type f -name YourProgramName -execdir sh -c "command1; command2; command3" \;
Check it does what you want like this:
find . -type f -name YourProgramName -execdir pwd \;
Maybe this will help. Suppose you have in each folder a file named params_file and an executable named exec_file, then:
for dir in `find . -maxdepth 1 -mindepth 1 -type d` ; do
    cd "$dir"
    cat params_file | xargs ./exec_file
    cd ..
done
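The same idea hardened against directory names containing spaces, as a sketch (params_file and exec_file are the hypothetical names from above; the subshell also saves the cd ..):
find . -mindepth 1 -maxdepth 1 -type d -print0 |
while IFS= read -r -d '' dir; do
    ( cd "$dir" && xargs ./exec_file < params_file )
done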

Include folder name in renaming a file in linux

I've already used the following command to rename the files in multiple directories and change JPG to jpg, so I have consistency:
find . -name '*.JPG' -exec sh -c 'mv "$0" "${0%.JPG}.jpg"' {} \;
Do you have any idea how to change that to include the folder name in the name of the file?
I am executing it in a folder that contains about 2000 folders (SKUs of products), and inside every SKU folder there are 9 images: 1.jpg, 2.jpg ... 9.jpg.
So the bottom line is I have 2000 sets of images named 1.jpg, 2.jpg ... 9.jpg. I need those file names to be unique, for example:
folder-name-1.jpg ... folder-name-2.jpg ... and so on, in every folder.
Any help will be appreciated.
For example, I can do it as follows:
$ find . -iname '*.jpg' | while read fn; do name=$(basename "$fn") ; dir=$(dirname "$fn") ; mv "$fn" "$dir/$(basename "$dir")-$name" ;done
which renames, for instance:
./lib/bukovina/version.jpg -> ./lib/bukovina/bukovina-version.jpg
./lib/bukovina.jpg -> ./lib/lib-bukovina.jpg
You can use this find one-liner:
find . -name '*.jpg' -execdir \
bash -c 'd="${PWD##*/}"; f="${1#./}"; [[ "$f" != "$d-"* ]] && mv "$f" "$d-$f"' - '{}' \;
This command takes a safe approach: it checks whether the image name is already prefixed by the current directory name (GNU find's -execdir passes names with a ./ prefix, which is stripped first), so you can run it multiple times and images won't be renamed again after the first run.
To get the folder name of a file you can do $(basename $(dirname ${FILE})), where ${FILE} is a path that may be relative but must contain at least one folder before the file name in it. This should not be a problem with find. If it is, just run it from one directory up.
find . -name '*.jpg' -exec sh -c 'mv "$0" "$(dirname "$0")/$(basename "$(dirname "$0")")-$(basename "$0")"' {} \;
Or, if the JPEGs are in your current directory itself, run the same thing from one directory up:
find ../<dirname> -name '*.jpg' -exec sh -c 'mv "$0" "$(dirname "$0")/$(basename "$(dirname "$0")")-$(basename "$0")"' {} \;
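A plain-bash alternative sketch, assuming the layout described in the question (one level of SKU folders, each holding 1.jpg through 9.jpg):
for f in */*.jpg; do
    d=${f%/*}     # folder part, e.g. sku-123
    b=${f##*/}    # file part, e.g. 1.jpg
    [[ $b == "$d-"* ]] || mv "$f" "$d/$d-$b"   # skip files already prefixed
done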

shell clean up script advice

Can you let me know your thoughts on this script and whether you think it can be improved in any way?
I'm trying to create a cleanup script that will run once a week as a root cron job on our Linux servers.
At one point the script reads a text file holding a list of user names whose data can be deleted; the contents of this file may change from week to week.
#!/bin/bash
DAY=$(date +"%d%b%Y")
HOME='/home/user'
DOCS='/var/program/alpha/top/is'
SCRATCH='/var/program/beta/top/_temp/'
USER='/home/user/deleteuserdata.txt'
DELUSER=$USER
cd $SCRATCH
rm -rf _temp-*/
cd $DOCS
while read DELUSER; do
find $DOCS/"$DELUSER"_info* -name "*.pdf" -size +1000k -exec rm {} \;
done < $USER > $HOME/"$DAY"dellogs.txt
You should quote variables almost everywhere. Prefer pushd/popd over cd (easier to get back to the previous path). You probably also want find's -delete instead of spawning rm via -exec. Add error checking (bash -e), plus -x to see where the script exits when it does.
#!/bin/bash -ex
DELUSER="$USER" # setting this is useless because it's overridden in the while loop
pushd "$SCRATCH"
rm -Rf _temp-*/ || :
pushd "$DOCS"
while read DELUSER; do
    find "$DOCS/$DELUSER"_info* -name "*.pdf" -size +1000k -print -delete
done <"$USER" >"$HOME/${DAY}dellogs.txt"
popd
popd
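One further hardening worth considering, sketched against the loop above: IFS= and -r stop read from trimming whitespace and eating backslashes in the user names.
while IFS= read -r DELUSER; do
    find "$DOCS/$DELUSER"_info* -name "*.pdf" -size +1000k -print -delete
done <"$USER" >"$HOME/${DAY}dellogs.txt"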

Find file then cd to that directory in Linux

In a shell script how would I find a file by a particular name and then navigate to that directory to do further operations on it?
From here I am going to copy the file across to another directory (I can do that already; I'm just adding it for context).
You can use something like:
cd -- "$(dirname "$(find / -type f -name ls | head -1)")"
This will locate the first ls regular file then change to that directory.
In terms of what each bit does:
The find will start at / and search down, listing out all regular files (-type f) called ls (-name ls). There are other things you can add to find to further restrict the files you get.
The | head -1 will filter out all but the first line.
$() is a way to take the output of a command and put it on the command line for another command.
dirname can take a full file specification and give you the path bit.
cd just changes to that directory; the -- prevents a directory name beginning with a hyphen from being treated as an option to cd.
If you execute each bit in sequence, you can see what happens:
pax[/home/pax]> find / -type f -name ls
/usr/bin/ls
pax[/home/pax]> find / -type f -name ls | head -1
/usr/bin/ls
pax[/home/pax]> dirname "$(find / -type f -name ls | head -1)"
/usr/bin
pax[/home/pax]> cd -- "$(dirname "$(find / -type f -name ls | head -1)")"
pax[/usr/bin]> _
The following should be more safe:
cd -- "$(find / -name ls -type f -printf '%h' -quit)"
Advantages:
The double dash prevents the interpretation of a directory name starting with a hyphen as an option (find doesn't produce such file names, but it's not harmful and might be required for similar constructs)
-name check before -type check because the latter sometimes requires a stat
No dirname required because the %h specifier already prints the directory name
-quit to stop the search after the first file found, thus no head required which would cause the script to fail on directory names containing newlines
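Building on that, a small sketch that guards against find matching nothing, since cd with an empty argument behaves differently across shells:
dir=$(find / -name ls -type f -printf '%h' -quit)
[ -n "$dir" ] && cd -- "$dir"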
No one suggesting locate (which is much quicker for huge trees)?
zsh:
cd $(locate zoo.txt|head -1)(:h)
cd ${$(locate zoo.txt)[1]:h}
cd ${$(locate -r "/zoo.txt$")[1]:h}
or, though it could be slow:
cd **/zoo.txt(:h)
bash:
cd $(dirname $(locate -l1 -r "/zoo.txt$"))
Based on this answer to a similar question, another useful option is to use 2 commands: the 1st to find the file and the 2nd to navigate to its directory:
find ./ -name "champions.txt"
cd "$(dirname "$(!!)")"
Where !! is history expansion meaning 'the previous command'.
Expanding on answers already given, if you'd like to navigate iteratively to every file that find locates and perform operations in each directory:
for i in $(find /path/to/search/root -name filename -type f)
do (
    cd "$(dirname "$(realpath "$i")")"
    your_commands
)
done
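An alternative sketch using bash's globstar instead of find (requires bash 4+; filename and your_commands are the placeholders from above, and nullglob makes a non-match expand to nothing):
shopt -s globstar nullglob
for i in **/filename; do
    ( cd "$(dirname "$i")" && your_commands )
done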
If you are just finding the file and then moving it elsewhere, just use find and -exec:
find /path -type f -iname "mytext.txt" -exec mv {} /destination \;
function fReturnFilepathOfContainingDirectory {
    #fReturnFilepathOfContainingDirectory_2012.0709.18:19
    #$1=File
    # Hand-rolls what dirname does: drop the last /-separated field, then rejoin.
    local vlFl
    local vlGwkdvlFl
    local vlItrtn
    local vlPrdct
    vlFl=$1
    vlGwkdvlFl=`echo $vlFl | gawk -F/ '{ $NF="" ; print $0 }'`
    for vlItrtn in `echo $vlGwkdvlFl` ;do
        vlPrdct=`echo $vlPrdct'/'$vlItrtn`
    done
    echo $vlPrdct
}
Simply this way, isn't this elegant?
cdf yourfile.py
Of course you need to set it up first, but you need to do this only once:
Add the following line to your .bashrc or .zshrc, or whatever you use as your shell initialization script.
source ~/bin/cdf.sh
And put this code into the file ~/bin/cdf.sh, which you need to create from scratch.
#!/bin/bash
function cdf() {
    THEFILE="$1"
    echo "cd into directory of ${THEFILE}"
    # For Mac, replace find with mdfind to get it a lot faster; mdfind also doesn't need the ". -name" arguments.
    THEDIR=$(find . -name "${THEFILE}" | head -1 | grep -Eo "/[ /._A-Za-z0-9\-]+/")
    cd "${THEDIR}"
}
If it's a program in your PATH, you can do:
cd "$(dirname "$(which ls)")"
or in Bash:
cd "$(dirname "$(type -P ls)")"
which uses one less external executable.
This uses no externals:
dest=$(type -P ls); cd "${dest%/*}"
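Wrapped as a tiny reusable function (the name cdbin is made up here), still using no external programs:
cdbin() {
    # type -P prints the full path of a command found in PATH and fails otherwise
    local dest
    dest=$(type -P "$1") || return
    cd -- "${dest%/*}"
}
For example, cdbin ls drops you in the directory holding ls.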
If your file is only in one location you could try the following:
cd "$(find ~/ -name [filename] -exec dirname {} \;)" && ...
You can use -exec to invoke dirname with the path that find returns (which goes where the {} placeholder is). That will change directories. You can also add double ampersands ( && ) to execute the next command after the shell has changed directory.
For example:
cd "$(find ~/ -name need_to_find_this.rb -exec dirname {} \;)" && ruby need_to_find_this.rb
It will look for that Ruby file, change to its directory, then run it from within that folder. This example assumes the filename is unique and that, for some reason, the Ruby script has to run from within its directory. If the filename is not unique, multiple paths will be passed to cd; it will print an error and the directory won't change.
Try this. I created it for my own use.
cd ~
touch mycd
sudo chmod +x mycd
nano mycd
Put this in the mycd file:
if [ "$1" == '--help' ]
then
    echo -e "usage: cd \$( ./mycd \$1 \$2 )"
    echo -e "usage: cd \$( ./mycd search_directory target_directory )"
else
    find "$1"/ -name "$2" -type d -exec echo {} \; -quit
fi
Then call it like this:
cd $( ./mycd search_directory target_directory )
cd -- "$(sudo find / -type d -iname "dir name goes here" 2>/dev/null)"
Keep all the quotes. All this does is send you to the directory you want; after that you can run whatever commands you like.
