Find shell script - logs (SUSE Linux, bash, if, find)

I have a problem with the find command in a shell script.
My script finds the archives, unpacks some logs from them, tars the results, moves the tar one level up, and removes the files created in the process.
The problem is that it performs all of these operations even when it doesn't find an archive or any logs, so it ends up creating an empty tar named:
date_pattern_pattern.tar
I tried to fix it with an if statement and the && and || operators, but I can't get it to work. Could you please point me in the right direction?
I have modified the script:
#!/bin/bash
printf " -----\nSerch date: $1 \n"
printf "Serch patern: $2 \n"
printf "serch patern: $3 \n -----\n\n"
printf "Serch archives:\n"
mkdir /cc/xx/logs/aaa/envelop/tmp
find archives/ -name "$1_*_messages.tar.gz" -exec cp {} tmp \;
ls -l tmp/$1_*_messages.tar.gz || exit 1
cd tmp
tar -xf $1_*_messages.tar.gz --wildcards --no-anchored "*$2_$3*"
printf "Find envelop:\n"
ls -l alsb-message/ || exit 1
mv alsb-message $1_$2_$3
tar -cvf $1_$2_$3.tar $1_$2_$3
mv $1_$2_$3.tar ..
rm *.gz
rm -R $1_$2_$3
cd ..
rm -r tmp
There is another problem: I want my script to stop when it searches and doesn't find the pattern. For example, when the script is run as:
./serch_script.sh 20151110 pattern2 pattern3
and it doesn't find either pattern2 or pattern3, I want it to stop, but it again produces an empty .tar. I tried to do it the way Prasanna suggested, but it didn't work in this case.

Please replace the following line
ls -l tmp/
with this line:
ls -l tmp/$1_*_messages.tar.gz || exit 1
Basically, with this change you run exit 1 if there are no files in the tmp directory matching your criteria.
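For the follow-up problem (stopping when the inner pattern matches nothing), here is a minimal sketch of the whole flow with an explicit check after each step, so no empty tar is created. It assumes the same argument order as the original script, simplifies the paths to a relative tmp directory, and the messages are only illustrative:
#!/bin/bash
set -e                                   # abort immediately if any command fails
mkdir -p tmp
find archives/ -name "$1_*_messages.tar.gz" -exec cp {} tmp \;
# No matching archives? Exit before anything else is created.
ls tmp/"$1"_*_messages.tar.gz > /dev/null 2>&1 || { echo "no archives found for $1"; exit 1; }
cd tmp
tar -xf "$1"_*_messages.tar.gz --wildcards --no-anchored "*$2_$3*"
# Nothing extracted for the pattern? Stop before packing an empty result.
[ -d alsb-message ] || { echo "pattern $2_$3 not found"; exit 1; }
mv alsb-message "$1_$2_$3"
tar -cvf "$1_$2_$3.tar" "$1_$2_$3"
mv "$1_$2_$3.tar" ..
cd ..
rm -r tmp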

Related

Redirecting output for find & exec to a log file

I created a script for moving files based on the reference below. I am trying to log the activity for every file that is picked up and moved from source to destination, along with any files that fail to move.
I tried piping the output to a log file, but after the operation the log file size is 0. Any recommendations?
Reference Doc#
https://unix.stackexchange.com/questions/59112/preserve-directory-structure-when-moving-files-using-find
Below is the code block:
destination=$(cd -- "$destination" && pwd)
cd -- "$source" &&
find . -type f -newermt "$startdays" -not -newermt "$enddays" -exec sh -c '
    for x do
        mkdir -p "$0/${x%/*}"
        mv "$x" "$0/$x"
    done
' "$destination" {} + >> output.log
By default, mv does not produce any output. If you want it to produce output, try mv -v.
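For example, here is a small variation on the snippet above (same variables, untested) that records both successful moves and any failures in the log:
cd -- "$source" &&
find . -type f -newermt "$startdays" -not -newermt "$enddays" -exec sh -c '
    for x do
        mkdir -p "$0/${x%/*}"
        mv -v "$x" "$0/$x"                  # -v prints one line per file moved
    done
' "$destination" {} + >> output.log 2>&1    # 2>&1 also captures error messages for files that fail to move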

Shell, For loop into directories and sub directory not working as intended

When trying to write a simple script that tars the files and moves them to another directory, I'm having problems implementing the recursive loop.
#!/bin/bash
for directorio in $1; do #my thought process: start for loop goes into dir
for file in $directorio; do #enter for loop for files
fwe="${file%.*}" #file_without_extension
tar -czPf "$fwe.tar.bz2" "$(pwd)" && mv "$fwe.tar.bz2" /home/cunha/teste
done
done
The script seems to do nothing...
It is called like this: ./script.sh /home/blablabla
How can I get this fixed?
A better option is the following. How does it work? First it lists all the directories in short_list.txt. Then the while loop reads each directory name and archives it into /home/cunha/teste.
find "$1" -type d > short_list.txt
cdr=$(pwd)
while read -r line
do
    cd "$cdr"                # return to the starting directory before handling each entry
    cd "$line" || continue
    base=$(basename "$PWD")
    cd ..
    tar -czf /home/cunha/teste/"$base".tar.gz "$base"   # -z so the archive matches its .tar.gz name
done < short_list.txt
cd "$cdr"
rm -f short_list.txt
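If you would rather skip the temporary short_list.txt, a similar effect can be sketched with find alone (untested; the destination path is taken from the answer above):
find "$1" -type d -exec sh -c '
    for d do
        # -C moves into the parent first, so the archive contains just the directory name
        tar -czf /home/cunha/teste/"$(basename "$d")".tar.gz -C "$(dirname "$d")" "$(basename "$d")"
    done
' sh {} +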

Execute multiple commands on target files from find command

Let's say I have a bunch of *.tar.gz files located in a hierarchy of folders. What would be a good way to find those files and then execute multiple commands on them?
I know if I just need to execute one command on the target file, I can use something like this:
$ find . -name "*.tar.gz" -exec tar xvzf {} \;
But what if I need to execute multiple commands on the target file? Must I write a bash script here, or is there any simpler way?
Samples of commands that need to be executed on an A.tar.gz file:
$ tar xvzf A.tar.gz # assume it untars to folder logs
$ mv logs logs_A
$ rm A.tar.gz
Here's what works for me (thanks to Etan Reisner's suggestions):
#!/bin/bash
# the target folder (to search for tar.gz files) is parsed from the command line
find "$1" -name "*.tar.gz" -print0 | while IFS= read -r -d '' file; do  # this gets each tar.gz file and assigns it to the shell variable `file`
    echo "$file"  # then we can do everything with the `file` variable
    tar xvzf "$file"
    # mv untar_folder "$file".suffix  # untar_folder is the name of the folder after untar
    rm "$file"
done
As suggested, the array approach is unsafe if a file name contains spaces, and it also doesn't seem to work properly in this case.
Writing a shell script is probably easiest. Take a look at sh for loops. You could store the output of a find command in an array and then loop over that array to perform a set of commands on each element.
For example,
arr=( $(find . -name "*.tar.gz") )
for i in "${arr[@]}"; do
    # $i now holds each of the filenames output by find
    tar xvzf "$i"
    mv "$i" "$i".suffix
    rm "$i"
    # etc., etc.
done
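Another common way to run several commands per file without a separate script is to hand find an inline shell snippet. Here is a sketch modelled on the sample commands above; it assumes, as the question does, that each archive untars into a folder named logs, and it should be run from the directory where the extracted folders should land:
find . -name "*.tar.gz" -exec sh -c '
    for f do
        tar xvzf "$f"                            # assume it untars to a folder named "logs"
        mv logs "logs_$(basename "$f" .tar.gz)"  # e.g. A.tar.gz -> logs_A
        rm "$f"
    done
' sh {} +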

move folder contents recursive into nested folder

I didn't expect this to be a problem, because I thought coreutils supported these things and that a dirty combination of cp, ls and rm would be enough.
However, this was not the case, and I would really appreciate it if you could explain why my approach is failing and, further, how I should do this properly.
Code
function CheckoutFolder {
local dir=$1
mkdir "$dir/.CheckoutFolderTmp"
(
cd "$dir" \
&& cp -R $(ls -Q -A "$dir" --ignore=".CheckoutFolderTmp") "$dir/.CheckoutFolderTmp" \
&& rm -Rf $(ls -Q -A "$dir" --ignore=".CheckoutFolderTmp")
)
mv "$dir/.CheckoutFolderTmp" "$dir/src"
mkdir -p "$dir/"{build,log}
}
Sample output
++ CheckoutFolder /home/tobias/Develop/src/thelegacy/RCMeta
++ local dir=/home/tobias/Develop/src/thelegacy/RCMeta
++ mkdir /home/tobias/Develop/src/thelegacy/RCMeta/.CheckoutFolderTmp
++ cd /home/tobias/Develop/src/thelegacy/RCMeta
+++ ls -Q -A /home/tobias/Develop/src/thelegacy/RCMeta --ignore=.CheckoutFolderTmp
++ cp -R '"build"' '"buildmythli.sh"' '"CMakeLists.txt"' '".directory"' '".libbuildmythli.sh"' '"log"' '"RCMeta"' '"RCMetaTest"' '"src"' /home/tobias/Develop/src/thelegacy/RC
cp: cannot stat `"build"': No such file or directory
cp: cannot stat `"buildmythli.sh"': No such file or directory
cp: cannot stat `"CMakeLists.txt"': No such file or directory
cp: cannot stat `".directory"': No such file or directory
cp: cannot stat `".libbuildmythli.sh"': No such file or directory
cp: cannot stat `"log"': No such file or directory
cp: cannot stat `"RCMeta"': No such file or directory
cp: cannot stat `"RCMetaTest"': No such file or directory
cp: cannot stat `"src"': No such file or directory
++ mv /home/tobias/Develop/src/thelegacy/RCMeta/.CheckoutFolderTmp /home/tobias/Develop/src/thelegacy/RCMeta/src
++ mkdir -p /home/tobias/Develop/src/thelegacy/RCMeta/build /home/tobias/Develop/src/thelegacy/RCMeta/log
++ return 0
As Les says, ls -Q is putting quotation marks around the filenames, and those quotation marks are getting passed along in the arguments to cp and rm. (Using quotation marks to quote and delimit arguments is a feature of the Bash command line when you actually type in a command; it doesn't apply when you're passing the output of one command into another.) Parsing the output of ls is generally not a good idea.
Here is an alternative approach:
function CheckoutFolder() (
    cd "$1"
    mkdir .CheckoutFolderTmp
    find . -mindepth 1 -maxdepth 1 -not -name .CheckoutFolderTmp \
        -exec mv {} .CheckoutFolderTmp/ \;
    mv .CheckoutFolderTmp src
    mkdir build log
)
(Note that I surrounded the function body with parentheses (...) rather than curly-brackets {...}. This causes the whole function to be run in a subshell.)
The -Q in the $(ls ...) command is putting unwanted literal quotes around the names. Consider dropping it and using xargs for the removal instead. For example...
(cd "$dir" && cp -R `ls -A "$dir" --ignore=".CheckoutFolderTmp"` "$dir/.CheckoutFolderTmp" && ls -A "$dir" --ignore=".CheckoutFolderTmp" | xargs rm -Rf )
The cp output is not too friendly, but it does give the information you need.
cp: cannot stat '"build"': No such file or directory
Skip to the final part of the message: "No such file or directory". The "cannot stat" part is cryptic, but it means that cp used stat() to get information about the file or directory it was trying to copy, and stat() failed with the "no such file or directory" error for a file (or directory) named '"build"'. That's because the actual argument passed to cp is "build" (note the quotes), while the file name you want is build (no quotes).
ls is called with -Q to quote the names (presumably to handle file names with spaces, commas and other awkward characters), but inside a command substitution those quotes come through as literal characters instead of being interpreted by the shell. xargs can also handle awkward filenames if you feed it NUL-delimited input with -0.
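As a concrete sketch of that -print0/-0 pairing applied to this task (untested; directory names follow the function above):
cd "$dir" || exit 1
mkdir .CheckoutFolderTmp
# NUL-delimited names survive spaces, quotes and newlines intact
find . -mindepth 1 -maxdepth 1 -not -name .CheckoutFolderTmp -print0 \
    | xargs -0 -I{} mv {} .CheckoutFolderTmp/
mv .CheckoutFolderTmp src
mkdir build log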

Is there a way to make mv create the directory to be moved to if it doesn't exist?

So, if I'm in my home directory and I want to move foo.c to ~/bar/baz/foo.c, but those directories don't exist, is there some way to have those directories automatically created, so that you would only have to type
mv foo.c ~/bar/baz/
and everything would work out? It seems like you could alias mv to a simple bash script that would check if those directories existed and if not would call mkdir and then mv, but I thought I'd check to see if anyone had a better idea.
How about this one-liner (in bash):
mkdir --parents ./some/path/; mv yourfile.txt $_
Breaking that down:
mkdir --parents ./some/path
# if --parents is not supported, use the short form:
mkdir -p ./some/path
creates the directory (including all intermediate directories), after which:
mv yourfile.txt $_
moves the file to that directory ($_ expands to the last argument passed to the previous shell command, i.e. the newly created directory).
I am not sure how far this will work in other shells, but it might give you some ideas about what to look for.
Here is an example using this technique:
$ > ls
$ > touch yourfile.txt
$ > ls
yourfile.txt
$ > mkdir --parents ./some/path/; mv yourfile.txt $_
$ > ls -F
some/
$ > ls some/path/
yourfile.txt
mkdir -p `dirname /destination/moved_file_name.txt`
mv /full/path/the/file.txt /destination/moved_file_name.txt
Save as a script named mv.sh
#!/bin/bash
# mv.sh
dir="$2" # Include a / at the end to indicate directory (not filename)
tmp="$2"; tmp="${tmp: -1}"
[ "$tmp" != "/" ] && dir="$(dirname "$2")"
[ -a "$dir" ] ||
mkdir -p "$dir" &&
mv "$#"
Or put it at the end of your ~/.bashrc file as a function that replaces the default mv in every new terminal. Using a function lets bash keep it in memory, instead of having to read a script file every time.
function mvp ()
{
dir="$2" # Include a / at the end to indicate directory (not filename)
tmp="$2"; tmp="${tmp: -1}"
[ "$tmp" != "/" ] && dir="$(dirname "$2")"
[ -a "$dir" ] ||
mkdir -p "$dir" &&
mv "$#"
}
Example usage:
mv.sh file ~/Download/some/new/path/ # <-End with slash
These are based on the submission of Chris Lutz.
You can use mkdir:
mkdir -p ~/bar/baz/ && \
mv foo.c ~/bar/baz/
A simple script to do it automatically (untested):
#!/bin/sh
# Grab the last argument (argument number $#)
eval LAST_ARG=\$$#
# Strip the filename (if it exists) from the destination, getting the directory
DIR_NAME=`echo "$LAST_ARG" | sed -e 's_/[^/]*$__'`
# Move to the directory, making the directory if necessary
mkdir -p "$DIR_NAME" || exit
mv "$#"
It sounds like the answer is no :). I don't really want to create an alias or function just for this, often because it's a one-off and I'm already in the middle of typing the mv command, but I found something that works well for that:
mv *.sh shell_files/also_with_subdir/ || mkdir -p $_
If mv fails (the directory does not exist), it will make the directory (which is the last argument to the previous command, so $_ has it). So just run this command, then press up to re-run it, and this time mv should succeed.
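If you prefer a single line over pressing up, the same $_ trick can be chained (a sketch, untested):
mv *.sh shell_files/also_with_subdir/ || { mkdir -p "$_" && mv *.sh "$_"; }
Inside the braces, $_ still holds the directory from the failed mv; after mkdir -p it again holds that directory, so the second mv lands in the right place.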
The simplest way to do it is:
mkdir [directory name] && mv [filename] $_
Let's suppose I downloaded pdf files located in my download directory (~/download) and I want to move all of them into a directory that doesn't exist (let's say my_PDF).
I'll type the following command (making sure my current working directory is ~/download):
mkdir my_PDF && mv *.pdf $_
You can add the -p option to mkdir if you want to create subdirectories as well, like this (suppose I want to create a subdirectory named python):
mkdir -p my_PDF/python && mv *.pdf $_
Making use of the tricks in "Getting the last argument passed to a shell script" we can make a simple shell function that should work no matter how many files you want to move:
# Bash only
mvdir() { mkdir -p "${@: -1}" && mv "$@"; }
# Other shells may need to search for the last argument
mvdir() { for last; do true; done; mkdir -p "$last" && mv "$@"; }
Use the command like this:
mvdir foo.c foo.h ~/some/new/folder/
The rsync command can do the trick only if the last directory in the destination path doesn't exist, e.g. for the destination path ~/bar/baz/: if bar exists but baz doesn't, then the following command can be used:
rsync -av --remove-source-files foo.c ~/bar/baz/
-a, --archive archive mode; equals -rlptgoD (no -H,-A,-X)
-v, --verbose increase verbosity
--remove-source-files sender removes synchronized files (non-dir)
In this case the baz directory will be created if it doesn't exist. But if both bar and baz don't exist, rsync will fail:
sending incremental file list
rsync: mkdir "/root/bar/baz" failed: No such file or directory (2)
rsync error: error in file IO (code 11) at main.c(657) [Receiver=3.1.2]
So basically it should be safe to use rsync -av --remove-source-files as an alias for mv.
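For instance, a hedged sketch of that idea as a small wrapper function (the name mvr is arbitrary; note that rsync removes the source files but leaves the now-empty source directories behind):
mvr() { rsync -av --remove-source-files "$@"; }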
The following shell script, perhaps?
#!/bin/sh
if [ -e "$1" ]
then
    if [ ! -d "$2" ]
    then
        mkdir -p "$2"
    fi
fi
mv "$1" "$2"
That's the basic part. You might want to add in a bit to check for arguments, and you may want the behavior to change if the destination exists, or the source directory exists, or doesn't exist (i.e. don't overwrite something that doesn't exist).
A sillier, but working, way:
mkdir -p "$2"
rmdir "$2"
mv "$1" "$2"
Make the directory with mkdir -p, including a temporary directory that shares the destination file name, then remove that file-name directory with a simple rmdir, and then move your file to its new destination.
I think the answer using dirname is probably the best, though.
This will move foo.c to the new directory baz with the parent directory bar.
mv foo.c `mkdir -p ~/bar/baz/ && echo $_`
The -p option to mkdir will create intermediate directories as required.
Without -p all directories in the path prefix must already exist.
Everything inside backticks `` is executed and the output is returned in-line as part of your command.
Since mkdir doesn't print anything, only the output of echo $_ will be added to the command.
$_ references the last argument to the previously executed command.
In this case, it will return the path to your new directory (~/bar/baz/) passed to the mkdir command.
I unzipped an archive without giving a destination and wanted to move all the files except demo-app.zip from my current directory to a new directory called demo-app. The following line does the trick:
mv `ls -A | grep -v demo-app.zip` `mkdir -p demo-app && echo $_`
ls -A returns all file names including hidden files (except for the implicit . and ..).
The pipe symbol | is used to pipe the output of the ls command to grep (a command-line, plain-text search utility).
The -v flag inverts the match, so grep returns all file names except demo-app.zip.
That list of files is added to our command-line as source arguments to the move command mv. The target argument is the path to the new directory passed to mkdir referenced using $_ and output using echo.
Based on a comment in another answer, here's my shell function.
# mvp = move + create parents
function mvp () {
source="$1"
target="$2"
target_dir="$(dirname "$target")"
mkdir --parents $target_dir; mv $source $target
}
Include this in .bashrc or similar so you can use it everywhere.
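Usage would then look like this (note that the target must include the file name, since dirname strips the last path component):
mvp foo.c ~/bar/baz/foo.c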
Code:
if [[ -e $1 && ! -e $2 ]]; then
    mkdir --parents --verbose -- "$(dirname -- "$2")"
fi
mv --verbose -- "$1" "$2"
Example:
arguments: "d1" "d2/sub"
mkdir: created directory 'd2'
renamed 'd1' -> 'd2/sub'
((cd src-path && tar --remove-files -cf - files-to-move) | ( cd dst-path && tar -xf -))
I frequently stumble upon this issue while bulk moving files to new subdirectories. Ideally, I want to do this:
mv * newdir/
Most of the answers in this thread propose to mkdir and then mv, but this results in:
mkdir newdir && mv * newdir
mv: cannot move 'newdir/' to a subdirectory of itself
The problem I face is slightly different in that I want to blanket-move everything, and if I create the new directory before moving, it also tries to move the new directory into itself. So I work around this by using the parent directory:
mkdir ../newdir && mv * ../newdir && mv ../newdir .
Caveats: Does not work in the root folder (/).
My one-line solution:
test -d "/home/newdir/" || mkdir -p "/home/newdir/" && mv /home/test.txt /home/newdir/
I accomplished this with the install command on Linux:
root#logstash:# myfile=bash_history.log.2021-02-04.gz ; install -v -p -D $myfile /tmp/a/b/$myfile
bash_history.log.2021-02-04.gz -> /tmp/a/b/bash_history.log.2021-02-04.gz
The only downside is that the file permissions are changed:
root#logstash:# ls -lh /tmp/a/b/
-rwxr-xr-x 1 root root 914 Fev 4 09:11 bash_history.log.2021-02-04.gz
If you don't mind resetting the permissions, you can use:
-g, --group=GROUP set group ownership, instead of process' current group
-m, --mode=MODE set permission mode (as in chmod), instead of rwxr-xr-x
-o, --owner=OWNER set ownership (super-user only)
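For instance, a variant that sets the mode and ownership explicitly (the values here are illustrative):
myfile=bash_history.log.2021-02-04.gz
install -v -p -D -m 600 -o root -g root "$myfile" /tmp/a/b/"$myfile"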
There are a lot of conflicting solutions around for this; here's what worked for us:
## ss_mv ##
function ss_mv {
mkdir -p "$(dirname "$2")" && mv -f "$@"
}
This assumes commands in the following syntax:
ss_mv /var/www/myfile /var/www/newdir/myfile
In this way the directory path /var/www/newdir is extracted from the 2nd argument of the command, and that new directory is then created (it's critical that you use dirname to avoid myfile being appended to the new directory path being created).
Then we go ahead and run mv on the entire argument list again by using "$@".
You can even use brace expansions:
mkdir -p directory{1..3}/subdirectory{1..3}/subsubdirectory{1..2}
which creates three directories (directory1, directory2, directory3),
in each of them three subdirectories (subdirectory1, subdirectory2, subdirectory3),
and in each of those two subsubdirectories (subsubdirectory1 and subsubdirectory2).
You have to use bash 3.0 or newer.
what=/path/to/file
dest=/dest/path
mkdir -p "$(dirname "$dest")"
mv "$what" "$dest"
