Using a while/for loop with the 'find' command to copy files and directories - linux

I have the following problem: I want to make a script that backs up a certain directory completely to another directory. I may not use cp -r or any other recursive command. So I was thinking of using a while or for loop. The directory that needs to be backed up is given as a parameter. This is what I have so far:
OIFS="$IFS"
IFS=$'\n'
for file in `find $1`
do
cp $file $HOME/TestDirectory
done
IFS="$OIFS"
But when I execute it, this is what my terminal says: Script started, file is typescript

Try this:
find "$1" -type f -print0 | xargs -0 cp -t "$HOME/TestDirectory"
Don't run your script through script!
Add this shebang at the top of the file:
#!/bin/bash
find "$1" -type f -print0 | xargs -0 cp -t "$HOME/TestDirectory"
Change the permissions of your script to add the executable flag:
chmod +x myScript
Run your script locally with arguments:
./myScript rootDirectoryWhereSearchForFiles
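If the exercise really does require a loop and you also want to preserve the directory structure (the xargs version above copies every file into one flat folder), a null-safe while loop is one option. This is only a sketch; it assumes a find that supports -print0 and uses $HOME/TestDirectory as the target, as in the question:
#!/bin/bash
dest="$HOME/TestDirectory"
find "$1" -type f -print0 | while IFS= read -r -d '' file; do
    rel="${file#"$1"/}"                    # path relative to the source directory
    mkdir -p "$dest/$(dirname "$rel")"     # recreate the sub-directory
    cp "$file" "$dest/$rel"
done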

Related

Find all files and unzip specific file to local folder

find -name archive.zip -exec unzip {} file.txt \;
This command finds all files named archive.zip and unzips file.txt to the folder that I execute the command from. Is there a way to unzip the file to the same folder where the .zip file was found? I would like file.txt to be unzipped to folder1.
folder1\archive.zip
folder2\archive.zip
I realize dirname is available in a script, but I'm looking for a one-line command if possible.
#iheartcpp - I successfully ran three alternatives using the same base command...
find . -iname "*.zip"
... which is used to provide the list of files to be passed as arguments to the next command.
Alternative 1: find with -exec + Shell Script (unzips.sh)
File unzips.sh:
#!/bin/sh
# This will unzip the zip files in the same directory as the zips are
for f in "$@"; do
    unzip -o -d "$(dirname "$f")" "$f"
done
Use this alternative like this:
find . -iname '*.zip' -exec ./unzips.sh {} \;
Alternative 2: find with | xargs + Shell Script (unzips.sh)
Same unzips.sh file.
Use this alternative like this:
find . -iname '*.zip' | xargs ./unzips.sh
Alternative 3: all commands in the same line (no .sh files)
Use this alternative like this:
find . -iname '*.zip' | xargs sh -c 'for f in "$@"; do unzip -o -d "$(dirname "$f")" "$f"; done' sh
Of course, there are other alternatives, but I hope the ones above can help.
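If your find supports -execdir (GNU findutils and the BSDs have it), another one-liner runs unzip directly from each archive's own directory, so no dirname juggling is needed. A sketch, assuming the member you want is literally named file.txt as in the original question:
find . -iname '*.zip' -execdir unzip -o '{}' file.txt \;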

Execute multiple commands on target files from find command

Let's say I have a bunch of *.tar.gz files located in a hierarchy of folders. What would be a good way to find those files and then execute multiple commands on each of them?
I know if I just need to execute one command on the target file, I can use something like this:
$ find . -name "*.tar.gz" -exec tar xvzf {} \;
But what if I need to execute multiple commands on the target file? Must I write a bash script here, or is there any simpler way?
Samples of commands that need to be executed on an A.tar.gz file:
$ tar xvzf A.tar.gz # assume it untars to folder logs
$ mv logs logs_A
$ rm A.tar.gz
Here's what works for me (thanks to Etan Reisner's suggestions):
#!/bin/bash
# the target folder (to search for tar.gz files) is parsed from the command line
find "$1" -name "*.tar.gz" -print0 | while IFS= read -r -d '' file; do
    # this does the magic of getting each tar.gz file and assigning it to the shell variable `file`
    echo "$file" # then we can do everything with the `file` variable
    tar xvzf "$file"
    # mv untar_folder "$file".suffix # untar_folder is the name of the folder after untarring
    rm "$file"
done
As suggested, the array way is unsafe if a file name contains spaces, and it also doesn't seem to work properly in this case.
Writing a shell script is probably easiest. Take a look at sh for loops. You could store the output of a find command in an array and then loop over that array to perform a set of commands on each element.
For example,
arr=( $(find . -name "*.tar.gz") )
for i in "${arr[@]}"; do
# $i now holds each of the filenames output by find
tar xvzf $i
mv $i $i.suffix
rm $i
# etc., etc.
done
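Another option that needs neither a script file nor an array is to hand the matches to an inline shell snippet via find -exec. This is only a sketch of the steps from the question and assumes every tarball really extracts to a folder named logs:
find . -name "*.tar.gz" -exec sh -c '
    for f in "$@"; do
        tar xvzf "$f" &&
        mv logs "logs_$(basename "$f" .tar.gz)" &&
        rm "$f"
    done
' sh {} +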

About the usage of linux command "xargs"

I have some files like
love.txt
loveyou.txt
in directory useful; I want to copy these files to directory /tmp.
I use this command:
find ./useful/ -name "love*" | xargs cp /tmp/
but it doesn't work; it just says:
cp: target `./useful/loveyou.txt' is not a directory
when I use this command:
find ./useful/ -name "love*" | xargs -i cp {} /tmp/
it works fine,
I want to know why the second works, and more about the usage of -i cp {}.
xargs puts the words coming from the standard input to the end of the argument list of the given command. The first form therefore creates
cp /tmp/ ./useful/love.txt ./useful/loveyou.txt
Which does not work, because there are more than 2 arguments and the last one is not a directory.
The -i option tells xargs to process one file at a time, though, replacing {} with its name, so it is equivalent to
cp ./useful/love.txt /tmp/
cp ./useful/loveyou.txt /tmp/
Which clearly works well.
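If you want to see exactly which command line xargs builds, the -t option makes xargs print each command to standard error before running it, which is handy for experimenting:
find ./useful/ -name "love*" | xargs -t cp /tmp/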
When using the xargs -i command, {} is substituted with each element you find. So, in your case, for both "loveyou.txt" and "love.txt", the following command will be run:
cp ./useful/loveyou.txt /tmp/
cp ./useful/love.txt /tmp/
If you omit the {}, all the elements you find will automatically be inserted at the end of the command, so you will execute the nonsensical command:
cp /tmp/ ./useful/loveyou.txt ./useful/love.txt
xargs appends the values fed in as a stream to the end of the command - it does not run the command once per input value. If you want the same command run multiple times - that is what the -i cp {} syntax is for.
This works well for commands which accept a list of arguments at the end (e.g. grep) - unfortunately cp is not one of those - it considers the arguments you pass in as directories to copy to, which explains the 'is not a directory' error.
The first example will do this:
cp /tmp/ love.txt loveyou.txt
This can't be done, since it attempts to copy the directory /tmp and the file love.txt to the file loveyou.txt.
In the second example, -i tells xargs to replace every instance of {} with the argument, so it will do:
cp love.txt /tmp/
cp loveyou.txt /tmp/
With GNU cp you can use -t so that the target directory comes first and xargs can append the file names at the end:
find ./useful/ -name "love*" | xargs cp -t /tmp/
You can also avoid xargs altogether:
find ./useful/ -name "love*" -exec sh -c 'cp "$@" /tmp' sh {} +
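One more note: -i is deprecated in current GNU xargs; the replacement is -I with an explicit placeholder, which behaves the same way here (one cp per file, with {} replaced by the file name):
find ./useful/ -name "love*" | xargs -I {} cp {} /tmp/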

Using * to parse through files. Need to write file names

I have the following problem using UNIX commands. I wish to go through a large number of files and convert them using a conversion command. My idea is to work like this: command *.fileending > *.newfileending
The problem is that I wish to keep the file-names and only replace the file-ending. Thus filename.fileending should become filename.newfileending. How do I achieve this?
Use a for loop:
for file in *.krn; do
hum2mid "$file" -o "${file%.krn}.mid"
done
In a single line: for file in *.krn; do hum2mid "$file" -o "${file%.krn}.mid"; done
To apply the command to files and subdirectories recursively, use the find|xargs pattern:
find -type f -name '*.krn' -print0 \
| xargs -0 -n1 sh -c 'hum2mid "$1" -o "/destination/dir/$(basename ${1%.krn}.mid)"' -
Note that this will overwrite already converted files, if a file from another directory has the same name.
rename .fileending .newfileending *
#!/bin/bash
ls -1 *.fileending | while read i; do
command "$i" > "${i/%.fileending/.newfileending}"
done
If you need to process 'weird' file names (with an embedded '\n', for example), you can use the following trick.
Create a file foo.sh:
#!/bin/bash
command "$1" > "${1/%.fileending/.newfileending}"
Then do chmod +x foo.sh and finally run:
find . -maxdepth 1 -a -type f -a -name '*.fileending' -print0 | xargs -0 -n 1 -J '%' ./foo.sh "%"
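A null-delimited while loop achieves the same thing without the helper script; a sketch, where command stands for your converter, as in the question:
find . -maxdepth 1 -type f -name '*.fileending' -print0 |
while IFS= read -r -d '' f; do
    command "$f" > "${f%.fileending}.newfileending"
done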

Find file then cd to that directory in Linux

In a shell script how would I find a file by a particular name and then navigate to that directory to do further operations on it?
From here I am going to copy the file across to another directory (but I can do that already; I'm just adding it for context).
You can use something like:
cd -- "$(dirname "$(find / -type f -name ls | head -1)")"
This will locate the first ls regular file then change to that directory.
In terms of what each bit does:
The find will start at / and search down, listing out all regular files (-type f) called ls (-name ls). There are other things you can add to find to further restrict the files you get.
The | head -1 will filter out all but the first line.
$() is a way to take the output of a command and put it on the command line for another command.
dirname can take a full file specification and give you the path bit.
cd just changes to that directory, the -- is used to prevent treating a directory name beginning with a hyphen from being treated as an option to cd.
If you execute each bit in sequence, you can see what happens:
pax[/home/pax]> find / -type f -name ls
/usr/bin/ls
pax[/home/pax]> find / -type f -name ls | head -1
/usr/bin/ls
pax[/home/pax]> dirname "$(find / -type f -name ls | head -1)"
/usr/bin
pax[/home/pax]> cd -- "$(dirname "$(find / -type f -name ls | head -1)")"
pax[/usr/bin]> _
The following should be more safe:
cd -- "$(find / -name ls -type f -printf '%h' -quit)"
Advantages:
The double dash prevents the interpretation of a directory name starting with a hyphen as an option (find doesn't produce such file names, but it's not harmful and might be required for similar constructs)
-name check before -type check because the latter sometimes requires a stat
No dirname required because the %h specifier already prints the directory name
-quit to stop the search after the first file found, thus no head required which would cause the script to fail on directory names containing newlines
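Put together in a small script with a check for the file not being found, that safer version might look like this (still using ls as the example file to look for):
#!/bin/bash
dir=$(find / -name ls -type f -printf '%h' -quit)
if [ -n "$dir" ]; then
    cd -- "$dir" || exit 1
    # further operations on the file go here
else
    echo 'file not found' >&2
    exit 1
fi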
No one is suggesting locate (which is much quicker for huge trees)?
zsh:
cd $(locate zoo.txt|head -1)(:h)
cd ${$(locate zoo.txt)[1]:h}
cd ${$(locate -r "/zoo.txt$")[1]:h}
or, though it could be slow:
cd **/zoo.txt(:h)
bash:
cd $(dirname $(locate -l1 -r "/zoo.txt$"))
Based on this answer to a similar question, other useful choice could be having 2 commands, 1st to find the file and 2nd to navigate to its directory:
find ./ -name "champions.txt"
cd "$(dirname "$(!!)")"
Where !! is history expansion meaning 'the previous command'.
Expanding on answers already given, if you'd like to navigate iteratively to every file that find locates and perform operations in each directory:
for i in $(find /path/to/search/root -name filename -type f)
do (
cd "$(dirname "$(realpath "$i")")";
your_commands;
)
done
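If the paths may contain spaces or newlines, a null-delimited variant of the same loop is safer; a sketch, with your_commands as the placeholder from above:
find /path/to/search/root -name filename -type f -print0 |
while IFS= read -r -d '' i; do
    (
        cd "$(dirname "$i")" || exit
        your_commands
    )
done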
If you are just finding the file and then moving it elsewhere, just use find and -exec:
find /path -type f -iname "mytext.txt" -exec mv {} /destination \;
function fReturnFilepathOfContainingDirectory {
    #fReturnFilepathOfContainingDirectory_2012.0709.18:19
    #$1=File
    # Strips the last path component with gawk, then rebuilds the
    # directory path slash by slash (essentially a hand-rolled dirname).
    local vlFl
    local vlGwkdvlFl
    local vlItrtn
    local vlPrdct
    vlFl=$1
    vlGwkdvlFl=`echo $vlFl | gawk -F/ '{ $NF="" ; print $0 }'`
    for vlItrtn in `echo $vlGwkdvlFl` ;do
        vlPrdct=`echo $vlPrdct'/'$vlItrtn`
    done
    echo $vlPrdct
}
Simply this way, isn't this elegant?
cdf yourfile.py
Of course you need to set it up first, but you need to do this only once:
Add the following line to your .bashrc or .zshrc, whichever you use as your shell initialization script.
source ~/bin/cdf.sh
And add this code to the ~/bin/cdf.sh file, which you need to create from scratch.
#!/bin/bash
function cdf() {
    THEFILE=$1
    echo "cd into directory of ${THEFILE}"
    # For Mac, replace find with mdfind to make it a lot faster; it also does not need the ". -name" arguments.
    THEDIR=$(find . -name "${THEFILE}" | head -1 | grep -Eo "/[ /._A-Za-z0-9\-]+/")
    cd "${THEDIR}"
}
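A slightly more robust body for the same function could let dirname extract the directory instead of the grep pattern; a sketch, assuming your find supports -quit (GNU findutils does):
function cdf() {
    local thefile="$1"
    local thedir
    thedir=$(dirname "$(find . -name "$thefile" -print -quit)")
    cd "$thedir" || return
}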
If it's a program in your PATH, you can do:
cd "$(dirname "$(which ls)")"
or in Bash:
cd "$(dirname "$(type -P ls)")"
which uses one less external executable.
This uses no externals:
dest=$(type -P ls); cd "${dest%/*}"
If your file is only in one location you could try the following:
cd "$(find ~/ -name [filename] -exec dirname {} \;)" && ...
You can use -exec to invoke dirname with the path that find returns (which goes where the {} placeholder is). That will change directories. You can also add double ampersands ( && ) to execute the next command after the shell has changed directory.
For example:
cd "$(find ~/ -name need_to_find_this.rb -exec dirname {} \;)" && ruby need_to_find_this.rb
It will look for that Ruby file, change to its directory, then run it from within that folder. This example assumes the file name is unique and that, for some reason, the Ruby script has to run from within its directory. If the file name is not unique, you'll get many locations passed to cd; it will return an error and won't change directories.
Try this. I created it for my own use.
cd ~
touch mycd
sudo chmod +x mycd
nano mycd
Put the following into mycd, then call it as cd "$(./mycd search_directory target_directory)":
if [ "$1" == '--help' ]
then
echo -e "usage: cd \$( ./mycd \$1 \$2 )"
echo -e "usage: cd \$( ./mycd search_directory target_directory )"
else
find "$1"/ -name "$2" -type d -exec echo {} \; -quit
fi
cd -- "$(sudo find / -type d -iname "dir name goes here" 2>/dev/null)"
Keep all the quotes. (All this does is send you to the directory you want; after that you can put whatever commands you need.)
