Linux Shell Script - How to get the upper directory of a parameter $n

In my script I receive a .tgz file as parameter $1. I have to extract it into a temporary directory, filter its elements by size, create a new archive and rewrite the original.
If $1 is "~/Directory/File.tgz", I need to know how to get to "~/Directory" so I can work with it.
This is my code:
dtemp=`mktemp -d ./tmpdirXXX`
cp $1 $dtemp #Copy
cd $dtemp
comprimido=`find ./ -name "*.tgz"`
tar xzvf $comprimido
rm $comprimido
for archivo in *
do
Tarchivo=`du -b "$archivo" | cut -f1`
if test 70192 -lt $Tarchivo
then
echo "$archivo es mayor de 8KB"
rm -r $archivo
fi
done
tar czvf $1 $dtemp
rm -r $dtemp
The last two lines don't work, it says that the file or directory doesn't exist.
Thanks for your help!

Your last two lines don't work because you cd'd to the temp directory, but never cd'd back.
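One way to fix it, sketched from the code in the question rather than a tested drop-in replacement: use dirname on $1 to get the archive's parent directory (which also answers the title question), and cd back before re-creating the archive.
origdir=$(cd "$(dirname "$1")" && pwd)   # e.g. ~/Directory for ~/Directory/File.tgz
archive=$(basename "$1")
dtemp=$(mktemp -d ./tmpdirXXX)
cp "$1" "$dtemp"
cd "$dtemp" || exit 1
tar xzvf "$archive"
rm "$archive"
# ... the size-filtering loop from the question goes here ...
cd .. || exit 1                          # back to where we started
tar czvf "$origdir/$archive" -C "$dtemp" .
rm -r "$dtemp"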

Related

Is it possible in this command to cd into the directory that's printed in the output

When I do ls | grep -e *-folder1 it prints my-folder1, which is the name of the folder matched in the current directory.
Is there a way I can add something like cd to change into this directory? This is more of an attempt to learn Bash and Linux commands than about accomplishing this particular task.
You could do
ls | grep -- -folder1 | while read -r dir
do
cd "$dir"
# do things in $dir
done
# do things in the original directory
but parsing the output of ls is not recommended. You could instead use globbing:
for dir in *-folder1*
do
cd "$dir"
# do things in $dir
cd .. # need to back out again
done
# do things in the original directory
If the purpose isn't to grep on all folders matching a certain pattern and to cd down into each one of them, but to simply cd into a directory ending with -folder1, then:
cd *-folder1
If you get zero or multiple hits, cd will show an error.
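If you want to handle the zero-or-multiple-hits case yourself instead of relying on cd's error message, one possible sketch (an addition, not part of the original answer):
shopt -s nullglob
matches=( *-folder1 )
if [ "${#matches[@]}" -eq 1 ] && [ -d "${matches[0]}" ]; then
    cd "${matches[0]}"
else
    echo "need exactly one directory matching *-folder1, found ${#matches[@]}" >&2
fi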

Bash script to check for directory name with condition

I want my script to delete all files inside directories whose numeric name is greater than 1000 and to skip the others. Here is what my directories look like:
/home/tester/100
/home/tester/1000
/home/tester/1020 # delete all files inside
/home/tester/2000 # delete all files inside
My bash script:
cd /home/tester
for dir in */ ; do
echo -n $dir": ";
find "$dir" -type f | wc -l;
if [ $dir -gt 1000 ]; then
cd $dir;
rm *;
cd ..;
fi
done
I get an error on the if line and have no idea how to fix it. Is it possible to do this with a bash script?
Thank you for your help
for dir in */ ; do will set dir to things like "1000/" -- and the "/" makes it not a valid number. You can trim off the trailing "/" with ${dir%/}. I'd also recommend double-quoting it to prevent possible weird parsing:
if [ "${dir%/}" -gt 1000 ]; then
Note that if the directory name isn't a number (even after the "/" is removed), you'll get an error from the comparison, and the then clause won't run (which is probably what you want). If you want to handle other (non-numeric) directory names more gracefully, you should add some appropriate is-this-a-number test first.
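For example, a simple pattern test like this (a sketch, assuming bash) skips non-numeric names quietly:
name="${dir%/}"
if [[ $name =~ ^[0-9]+$ ]] && [ "$name" -gt 1000 ]; then
    rm "$dir"/*
fi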
Also, using cd in scripts tends to be problematic, because if a cd fails for any reason, the rest of the script will continue running, but in the wrong place. This can cause all sorts of chaos. Consider what'd happen if one of the cd $dir commands fails: it'd run rm * in the /home/tester directory, deleting all the non-subdirectory files there, then it'd cd .., leaving it in /home. The next iteration would try to cd down to something like 2000, which doesn't exist under /home, so that cd would fail too, and then it'd delete all files in /home. This repeats on each remaining iteration, climbing one level each time, potentially all the way up to running rm * in /, the root directory. Not good at all.
I recommend either putting error checks on cd commands, or just avoiding them entirely in favor of using explicit paths to files.
#!/bin/bash
cd /home/tester || {
echo "Couldn't cd to /home/tester, quitting here..." >&2
exit 1
}
for dir in */ ; do
echo -n "$dir: "
find "$dir" -type f | wc -l
if [ "${dir%/}" -gt 1000 ]; then
rm "$dir"/* # Explicit path -- the / is redundant, but won't hurt
fi
done
I've also added an explicit shebang line, double-quoted all the variable references (good general scripting hygiene), and removed the semicolons from the ends of lines (not needed in shell syntax).
Another recommendation: run your scripts through shellcheck.net -- it'll point out a lot of common mistakes like unquoted variable references and unchecked cds.
The value of $dir is not numeric. Add set -x at the top of your script to debug.
Use "$(basename "$dir")" to get the numeric value.
Before my first cup of coffee, I would do
for dir in */ ; do
echo -n $dir": ";
find "$dir" -type f | wc -l;
done
mv /home/tester/1000 /home/tester/some_unique_name
rm /home/tester/[1-9][0-9][0-9][0-9]/*
mv /home/tester/some_unique_name /home/tester/1000
This will not work when you have directories > 9999.
Perhaps rm /home/tester/[1-9][0-9][0-9][0-9]*/* will work, when you don't have directories like 1000backup or 2000my_unique_name.
A better solution is
find /home/tester -regextype sed -regex '/home/tester/[0-9]\{4,\}' ! -name 1000 |
xargs -L1 -I{} echo rm {}/*

Shell, For loop into directories and sub directory not working as intended

When trying to do a simple script that tars the files and moves them to another directory, I'm having problems implementing the recursive loop.
#!/bin/bash
for directorio in $1; do #my thought process: start for loop goes into dir
for file in $directorio; do #enter for loop for files
fwe="${file%.*}" #file_without_extension
tar -czPf "$fwe.tar.bz2" "$(pwd)" && mv "$fwe.tar.bz2" /home/cunha/teste
done
done
the script seems to be doing nothing ...
when the script is called like: ./script.sh /home/blablabla
How could i get this fixed?
You could follow the option below instead. How does it work? First it lists all the directories in short_list.txt. Then the while loop reads each directory name and creates an archive of it in /home/cunha/teste.
find "$1" -type d > short_list.txt
cdr=$(pwd)
while read -r line
do
cd "$line" || continue
base=$(basename "$PWD")
cd ..
tar -czf "/home/cunha/teste/$base.tar.gz" "$base"
done < short_list.txt
cd "$cdr"
rm -rf short_list.txt
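A shorter sketch of the same idea, keeping the /home/cunha/teste destination from the question and using tar's -C option instead of cd (an alternative suggestion, not part of the answer above):
find "$1" -type d -print0 | while IFS= read -r -d '' dir; do
    tar -czf "/home/cunha/teste/$(basename "$dir").tar.gz" -C "$(dirname "$dir")" "$(basename "$dir")"
done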

Execute multiple commands on target files from find command

Let's say I have a bunch of *.tar.gz files located in a hierarchy of folders. What would be a good way to find those files, and then execute multiple commands on it.
I know if I just need to execute one command on the target file, I can use something like this:
$ find . -name "*.tar.gz" -exec tar xvzf {} \;
But what if I need to execute multiple commands on the target file? Must I write a bash script here, or is there any simpler way?
Samples of commands that need to be executed on an A.tar.gz file:
$ tar xvzf A.tar.gz # assume it untars to folder logs
$ mv logs logs_A
$ rm A.tar.gz
Here's what works for me (thanks to Etan Reisner suggestions)
#!/bin/bash
# the target folder (to search for tar.gz files) is parsed from the command line
find "$1" -name "*.tar.gz" -print0 | while IFS= read -r -d '' file; do # this does the magic of getting each tar.gz file and assigning it to the shell variable `file`
echo "$file" # then we can do everything with the `file` variable
tar xzvf "$file"
# mv untar_folder "$file.suffix" # untar_folder is the name of the folder after untar
rm "$file"
done
As suggested, the array way is unsafe if file names contain spaces, and it also doesn't seem to work properly in this case.
Writing a shell script is probably easiest. Take a look at sh for loops. You could use the output of a find command in an array, and then loop over that array to perform a set of commands on each element.
For example,
arr=( $(find . -name "*.tar.gz" -print0) )
for i in "${arr[#]}"; do
# $i now holds each of the filenames output by find
tar xvzf $i
mv $i $i.suffix
rm $i
# etc., etc.
done
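For completeness, here is a hedged alternative that neither answer above uses: find can hand each batch of files to a small inline shell script, so several commands run per file without a separate loop (the logs folder name is only the question's assumption):
find . -name "*.tar.gz" -exec sh -c '
    for f in "$@"; do
        tar xvzf "$f"                       # assume it unpacks to a folder named "logs"
        mv logs "logs_$(basename "$f" .tar.gz)"
        rm "$f"
    done
' sh {} +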

Is there a way to make mv create the directory to be moved to if it doesn't exist?

So, if I'm in my home directory and I want to move foo.c to ~/bar/baz/foo.c, but those directories don't exist, is there some way to have those directories automatically created, so that you would only have to type
mv foo.c ~/bar/baz/
and everything would work out? It seems like you could alias mv to a simple bash script that would check if those directories existed and if not would call mkdir and then mv, but I thought I'd check to see if anyone had a better idea.
How about this one-liner (in bash):
mkdir --parents ./some/path/; mv yourfile.txt $_
Breaking that down:
mkdir --parents ./some/path
# if that doesn't work, try
mkdir -p ./some/path
creates the directory (including all intermediate directories), after which:
mv yourfile.txt $_
moves the file to that directory ($_ expands to the last argument passed to the previous shell command, ie: the newly created directory).
I am not sure how far this will work in other shells, but it might give you some ideas about what to look for.
Here is an example using this technique:
$ > ls
$ > touch yourfile.txt
$ > ls
yourfile.txt
$ > mkdir --parents ./some/path/; mv yourfile.txt $_
$ > ls -F
some/
$ > ls some/path/
yourfile.txt
mkdir -p `dirname /destination/moved_file_name.txt`
mv /full/path/the/file.txt /destination/moved_file_name.txt
Save as a script named mv.sh
#!/bin/bash
# mv.sh
dir="$2" # Include a / at the end to indicate directory (not filename)
tmp="$2"; tmp="${tmp: -1}"
[ "$tmp" != "/" ] && dir="$(dirname "$2")"
[ -a "$dir" ] ||
mkdir -p "$dir" &&
mv "$#"
Or put it at the end of your ~/.bashrc file as a function that is available in every new terminal. Using a function lets bash keep it in memory, instead of having to read a script file every time.
function mvp ()
{
dir="$2" # Include a / at the end to indicate directory (not filename)
tmp="$2"; tmp="${tmp: -1}"
[ "$tmp" != "/" ] && dir="$(dirname "$2")"
[ -a "$dir" ] ||
mkdir -p "$dir" &&
mv "$#"
}
Example usage:
mv.sh file ~/Download/some/new/path/ # <-End with slash
These are based on the submission of Chris Lutz.
You can use mkdir:
mkdir -p ~/bar/baz/ && \
mv foo.c ~/bar/baz/
A simple script to do it automatically (untested):
#!/bin/sh
# Grab the last argument (argument number $#)
eval LAST_ARG=\$$#
# Strip the filename (if it exists) from the destination, getting the directory
DIR_NAME=`echo $2 | sed -e 's_/[^/]*$__'`
# Move to the directory, making the directory if necessary
mkdir -p "$DIR_NAME" || exit
mv "$#"
It sounds like the answer is no :). I don't really want to create an alias or function just to do this, often because it's a one-off and I'm already in the middle of typing the mv command, but I found something that works well for that:
mv *.sh shell_files/also_with_subdir/ || mkdir -p $_
If mv fails (the directory does not exist), it will make the directory (which is the last argument to the previous command, so $_ has it). So just run this command, then press up to re-run it, and this time mv should succeed.
The simplest way to do that is:
mkdir [directory name] && mv [filename] $_
Let's suppose I downloaded pdf files located in my download directory (~/download) and I want to move all of them into a directory that doesn't exist (let's say my_PDF).
I'll type the following command (making sure my current working directory is ~/download):
mkdir my_PDF && mv *.pdf $_
You can add the -p option to mkdir if you want to create subdirectories as well (suppose I want to create a subdirectory named python):
mkdir -p my_PDF/python && mv *.pdf $_
Making use of the tricks in "Getting the last argument passed to a shell script" we can make a simple shell function that should work no matter how many files you want to move:
# Bash only
mvdir() { mkdir -p "${#: -1}" && mv "$#"; }
# Other shells may need to search for the last argument
mvdir() { for last; do true; done; mkdir -p "$last" && mv "$@"; }
Use the command like this:
mvdir foo.c foo.h ~/some/new/folder/
The rsync command can do the trick only if the last directory in the destination path doesn't exist, e.g. for the destination path ~/bar/baz/, if bar exists but baz doesn't, then the following command can be used:
rsync -av --remove-source-files foo.c ~/bar/baz/
-a, --archive archive mode; equals -rlptgoD (no -H,-A,-X)
-v, --verbose increase verbosity
--remove-source-files sender removes synchronized files (non-dir)
In this case the baz directory will be created if it doesn't exist. But if both bar and baz don't exist, rsync will fail:
sending incremental file list
rsync: mkdir "/root/bar/baz" failed: No such file or directory (2)
rsync error: error in file IO (code 11) at main.c(657) [Receiver=3.1.2]
So basically it should be safe to use rsync -av --remove-source-files as an alias for mv.
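As an addition to this answer (not mentioned in it): newer versions of rsync (3.2.3 and later) have a --mkpath option that creates the missing destination path components, so even the case where both bar and baz are missing should work:
rsync -av --mkpath --remove-source-files foo.c ~/bar/baz/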
The following shell script, perhaps?
#!/bin/bash
if [[ -e $1 ]]
then
if [[ ! -d $2 ]]
then
mkdir --parents "$2"
fi
fi
mv "$1" "$2"
That's the basic part. You might want to add in a bit to check for arguments, and you may want the behavior to change if the destination exists, or the source directory exists or doesn't exist (i.e. don't overwrite something that already exists).
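The argument check it mentions could be as simple as this sketch (not part of the original script):
if [ "$#" -lt 2 ]; then
    echo "usage: $0 source... destination" >&2
    exit 1
fi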
Sillier, but working way:
mkdir -p $2
rmdir $2
mv $1 $2
Make the directories with mkdir -p, including a temporary directory that shares the destination file name, then remove that file-name directory with a simple rmdir, and then move your file to its new destination.
I think the answer using dirname is probably the best, though.
This will move foo.c to the new directory baz with the parent directory bar.
mv foo.c `mkdir -p ~/bar/baz/ && echo $_`
The -p option to mkdir will create intermediate directories as required.
Without -p all directories in the path prefix must already exist.
Everything inside backticks `` is executed and the output is returned in-line as part of your command.
Since mkdir doesn't print anything, only the output of echo $_ will be added to the command.
$_ references the last argument to the previously executed command.
In this case, it will return the path to your new directory (~/bar/baz/) passed to the mkdir command.
I unzipped an archive without giving a destination and wanted to move all the files except demo-app.zip from my current directory to a new directory called demo-app. The following line does the trick:
mv `ls -A | grep -v demo-app.zip` `mkdir -p demo-app && echo $_`
ls -A returns all file names including hidden files (except for the implicit . and ..).
The pipe symbol | is used to pipe the output of the ls command to grep (a command-line, plain-text search utility).
The -v flag directs grep to find and return all file names excluding demo-app.zip.
That list of files is added to our command-line as source arguments to the move command mv. The target argument is the path to the new directory passed to mkdir referenced using $_ and output using echo.
Based on a comment in another answer, here's my shell function.
# mvp = move + create parents
function mvp () {
source="$1"
target="$2"
target_dir="$(dirname "$target")"
mkdir --parents "$target_dir"; mv "$source" "$target"
}
Include this in .bashrc or similar so you can use it everywhere.
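Example usage (the paths here are made up):
mvp notes.txt ~/archive/2020/notes.txt   # creates ~/archive/2020 first if it is missing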
Code:
if [[ -e $1 && ! -e $2 ]]; then
mkdir --parents --verbose -- "$(dirname -- "$2")"
fi
mv --verbose -- "$1" "$2"
Example:
arguments: "d1" "d2/sub"
mkdir: created directory 'd2'
renamed 'd1' -> 'd2/sub'
((cd src-path && tar --remove-files -cf - files-to-move) | ( cd dst-path && tar -xf -))
I frequently stumble upon this issue while bulk moving files to new subdirectories. Ideally, I want to do this:
mv * newdir/
Most of the answers in this thread propose to mkdir and then mv, but this results in:
mkdir newdir && mv * newdir
mv: cannot move 'newdir/' to a subdirectory of itself
The problem I face is slightly different in that I want to blanket move everything, and, if I create the new directory before moving, it also tries to move the new directory into itself. So I work around this by using the parent directory:
mkdir ../newdir && mv * ../newdir && mv ../newdir .
Caveats: Does not work in the root folder (/).
My one-liner solution:
test -d "/home/newdir/" || mkdir -p "/home/newdir/" && mv /home/test.txt /home/newdir/
i accomplished this with the install command on linux:
root@logstash:# myfile=bash_history.log.2021-02-04.gz ; install -v -p -D $myfile /tmp/a/b/$myfile
bash_history.log.2021-02-04.gz -> /tmp/a/b/bash_history.log.2021-02-04.gz
The only downside is that the file permissions are changed:
root@logstash:# ls -lh /tmp/a/b/
-rwxr-xr-x 1 root root 914 Fev 4 09:11 bash_history.log.2021-02-04.gz
If you don't mind resetting the permissions yourself, you can use the following options (see the example after the list):
-g, --group=GROUP set group ownership, instead of process' current group
-m, --mode=MODE set permission mode (as in chmod), instead of rwxr-xr-x
-o, --owner=OWNER set ownership (super-user only)
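For example (the mode, owner and group values here are only for illustration):
install -v -p -D -m 644 -o root -g root "$myfile" "/tmp/a/b/$myfile"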
There are a lot of conflicting solutions around for this; here's what worked for us:
## ss_mv ##
function ss_mv {
mkdir -p "$(dirname "$2")" && mv -f "$@"
}
This assumes commands in the following syntax:
ss_mv /var/www/myfile /var/www/newdir/myfile
In this way the directory path /var/www/newdir is extracted from the 2nd argument of the command, and that new directory is then created (it's critical that you use dirname here, to avoid myfile being appended to the new directory path being created).
Then we go ahead and run mv on the entire argument list again by using "$@".
You can even use brace expansion:
mkdir -p directory{1..3}/subdirectory{1..3}/subsubdirectory{1..2}
which creates 3 directories (directory1, directory2, directory3),
and in each one of them three subdirectories (subdirectory1, subdirectory2, subdirectory3),
and in each of those two subsubdirectories (subsubdirectory1 and subsubdirectory2).
You have to use bash 3.0 or newer.
what=/path/to/file
dest=/dest/path
mkdir -p "$(dirname "$dest")"
mv "$what" "$dest"
