Recursively create directory tree within existing subdirectories - linux

I have a working directory with a large number of subfolders (e.g. 1190A, 1993A, etc.):
'/working/1190A'
'/working/1993A'
I would like to recursively create a certain directory tree within each subfolder. For example:
'/working/1190A/analysis/1'
'/working/1993A/analysis/1'
etc
Thanks.

To make mkdir create a whole directory tree without having to create each level separately, add the -p option to the mkdir command.
Hence, this could work:
for dir in list_of_folders
do
    mkdir -p "$dir"/your/directory/tree
    [ $? -ne 0 ] && echo "error on $dir" # if the tree could not be created, print an error (thanks @hetepeperfan - see comments)
done
Note that list_of_folders can be given explicitly, like /working/1190A /working/1993A, but it can also be generated with a find command, as sketched below. This is just a first version that you should adapt to your specific requirements.
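A minimal sketch of the find variant, assuming bash (for process substitution) and the analysis/1 tree from the question:
# generate the list of top-level subfolders of /working with find,
# then create the tree inside each of them
while IFS= read -r dir
do
    mkdir -p "$dir"/analysis/1 || echo "error on $dir"
done < <(find /working -mindepth 1 -maxdepth 1 -type d)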

This can be done as a one-liner:
cd /working
find . -mindepth 1 -maxdepth 1 -type d -exec mkdir -p '{}'/analysis/1 \;

yourDirList=/working/*
for f in $yourDirList
do
    if [ -d "$f" ]; then
        mkdir -p "$f"/analysis/1
    fi
done

Related

Run a qsub command in all subdirectories

I am using CentOS on an HPC to run my code. I typically have a folder that contains a run_calc file, which is what I want to run as:
qsub run_calc
Now I want to write a script "submit_all.sh" that submits all run_calc files in all the subfolders, each from its own directory and not from the parent folder where I run the submit_all.sh script.
I found similar questions posted here Solution and here Solution2, which seem to be partial answers to this question. I am not confident just submitting scripts until I have found a solution, which is why I ask:
In the second link I found this solution:
for i in {1..1000}; do
cd "$i"
qsub submit.sh
cd ..
done
were "i" was a list of folders with the names 1-100. Is it somehow possible to use find to create a list of all the subdirectories and path it to the for loop? How would i deal with subsubdirectories? Would I be able to change the cd .. statement such that I always go back to the parent folder directly in that case?
I found this solution here: Solution
#!/bin/sh
for i in `find /var/www -type d -maxdepth 1 -mindepth 1`; do
cd $i
# do something here
done
But I do not understand what is going on. Is it possible to change the above script to only dive into folders containing a run_calc file, and to also include sub-subdirectories?
Thank you in advance
Assuming that you are using bash as your shell:
$ cat ./test.sh
#!/bin/bash
IFS=$'\n'
while read -r fname ;
do
pushd "$(dirname "${fname}")" > /dev/null
qsub run_calc
popd > /dev/null
done < <(find . -type f -name 'run_calc')
find . -type f -name 'run_calc' finds all paths to files named run_calc inside the current directory and its subdirectories. This is the input for the while loop.
pushd and popd are bash specific; they push the directory onto, and pop it back off, the directory stack.
for d in `find . -type d`
do ( cd "$d" || exit
     if test ! -f run_calc; then exit; fi
     qsub run_calc
) done
( commands ) executes the commands in a separate process, so the effect of cd does not "leak" into the surrounding loop.

Command for moving subfolders with files, with keeping the original structure

I have a parent/ folder with a couple of subfolders in it. Structure:
/parent/
/subfolder_1/
- file_1.txt
- file_2.txt
/subfolder_2/
- file_3.txt
- file_4.txt
Now, I need to recursively move the contents of parent/ folder to the empty parent_tmp/ directory. Thing is, I need to keep the original folder structure in parent/.
Expected outcome after moving:
/parent/
/subfolder_1/
(empty)
/subfolder_2/
(empty)
/parent_tmp/
/subfolder_1/
- file_1.txt
- file_2.txt
/subfolder_2/
- file_3.txt
- file_4.txt
Normally, I would simply do
mv parent/* parent_tmp
but this will, of course, move the subfolders permanently.
Is there a way to adjust the mv command to keep the original structure of the source directory?
Note:
I realize that I can, e.g., copy parent/ to parent_tmp and then remove the files in parent/'s subfolders. This is plan B for me.
You can use find from the parent of the parent and parent_tmp directories:
find parent -type f -exec bash -c 'dest="parent_tmp/${1#parent/}"
mkdir -p "${dest%/*}" && mv "$1" "${dest%/*}"' - {} \;
You could copy the files
cp -r parent/* parent_tmp/
or create hard links (should be a lot faster for big files)
cp -l -r parent/* parent_tmp/
and then delete the original files
find parent -type f -delete
while keeping the directory structure.
Zip the contents of the parent folder and unzip them in the target folder; a rough sketch is below.
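A rough sketch of that idea, assuming the zip and unzip tools are installed (note the originals still have to be deleted afterwards if you really want a move):
cd parent && zip -r ../parent.zip . && cd -   # archive the tree (paths stored relative to parent/)
unzip parent.zip -d parent_tmp                # recreate the structure inside parent_tmp/
find parent -type f -delete                   # then remove the original files
rm parent.zip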
Quick and Dirty:
I don't think you'll find a tool or option in the mv command to do what you want, but you should be able to achieve the desired goal by using find:
cd parent && while read -r file ; do dirname="$(dirname "$file")" ; mkdir -p ../parent_tmp/"$dirname"/ ; mv "$file" ../parent_tmp/"$dirname"/ ; done < <( find . -type f ) && cd -
Function
If you use this a lot then you can add the above to your ~/.bashrc like so (append to the end of the file):
alias mvkp=moveandkeep
moveandkeep() {
    cd "$1" || return
    while read -r file ;
    do dirname="$(dirname "$file")" ;
        mkdir -p "$2"/"$dirname" ;
        mv "$file" "$2"/"$dirname"/ ;
    done < <(find . -type f)
    cd -
}
Now you could simply do the following: (Full path to directories required)
mvkp /home/user/parent /home/user/parent_tmp

linux script to move files in directories with data

I'm using FreeRADIUS with the daloRADIUS appliance.
Now there are lots of routers connected to this RADIUS server, and the database and log files are growing too much.
The log files from freeradius are saved on:
/radacct/xxx.xxx.xxx.xxx/detail-20150404
xxx.xxx.xxx.xxx are different client IPs, so there are lots of folders and lots of files inside these folders.
I can't add this directory to logrotate, because the file detail-TODAY must not be modified during the day and has to remain accessible around the clock.
So I'm asking for:
A script to move /radacct/xxx.xxx.xxx.xxx/detail-yyyymmdd to a new folder /radacct_old/xxx.xxx.xxx.xxx/detail-yyyymmdd.
We have to move all files except where yyyymmdd is the current date (date when the script is executed).
After this I can rotate the logs in radacct_old, or just zip it as radacct_old_yyyymmdd.
I'm planning to do this job every week or so.
What do you suggest as the best way to do this?
Try something like this:
function move {
    today=$(date +%Y%m%d)
    file="${1#/radacct/}"
    ip="${file%%/*}"
    name="${file##*/}"
    if [[ ! $name =~ detail-$today ]]; then
        dir="/radacct_old/$ip"
        [ -d "${dir}" ] || mkdir -p "${dir}"
        mv "${1}" "${dir}/${name}"
    fi
}
export -f move
find /radacct -mindepth 2 -maxdepth 2 -type f -name '*detail*' -exec bash -c 'move "$0"' {} \;
Beware, this is untested; you will certainly be able to fill in the gaps. I will test it out and debug it later if you can't seem to make it work. Post when you have further questions.
Explanation: generally, the script looks for all detail files of the required format and moves them (last two lines) by calling a function (defined at the beginning).
Move function
today=$(date +%Y%m%d) constructs today's date in the required format.
file="${1#/radacct/}" removes the leading /radacct/ from the path we found using find.
ip="${file%%/*}" extracts the IP address.
name="${file##*/}" extracts the file name.
if [[ ! $name =~ detail-$today ]]; then only processes the file if its name is not today's detail file.
dir="/radacct_old/$ip" constructs the target directory.
[ -d "${dir}" ] || mkdir -p "${dir}" creates it if it doesn't exist.
mv "${1}" "${dir}/${name}" moves the file to the new location.
export -f move exports the function so it can be called in a subshell.
Find command
find /radacct look in the /radacct dir.
-mindepth 2 -maxdepth 2 -type f look for files exactly two levels down (inside the per-IP directories).
-name '*detail*' which contain the word detail.
-exec bash -c 'move "$0"' {} \; and execute the move function, supplying the path of the file as the argument.
Note that I will add more details and test it later today.
To perform this weekly, use a cron job.
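For example, a hypothetical crontab entry (assuming the script above is saved as /usr/local/bin/move_radacct.sh and made executable) that runs it every Sunday at 03:00 could look like:
# m h dom mon dow command
0 3 * * 0 /usr/local/bin/move_radacct.sh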

Using IF to see if a directory exists, and if not do something

I am trying to move the directories from $DIR1 to $DIR2 if $DIR2 does not already contain a directory with the same name.
This is what I currently have:
if [[ ! $(ls -d /$DIR2/* | grep test) ]]
then
    mv $DIR1/test* /$DIR2
fi
First, it gives
ls: cannot access //data/lims/PROCESSING/*: No such file or directory
when $DIR2 is empty; however, it still works.
Secondly, when I run the shell script twice, it doesn't let me move directories with a similar name.
For example, in $DIR1 I have test-1 test-2 test-3.
When it runs for the first time, all three directories move to $DIR2.
After that I do mkdir test-4 in $DIR1 and run the script again.
It does not let me move test-4, because my check thinks that test-4 is already there, since I am grepping for all "test" names.
How can I get around this and move test-4?
Firstly, you can check whether or not a directory exists using bash's built in 'True if directory exists' expression:
test="/some/path/maybe"
if [ -d "$test" ]; then
echo "$test is a directory"
fi
However, you want to test if something is not a directory. You've shown in your code that you already know how to negate the expression:
test="/some/path/maybe"
if [ ! -d "$test" ]; then
echo "$test is NOT a directory"
fi
You also seem to be using ls to get a list of files. Perhaps you want to loop over them and do something with the ones that are not a directory?
dir="/some/path/maybe"
for test in $(ls $dir);
do
if [ ! -d $test ]; then
echo "$test is NOT a directory."
fi
done
A good place to look for bash stuff like this is Machtelt Garrels' guide. His page on the various expressions you can use in if statements helped me a lot.
Moving directories from a source to a destination if they don't already exist in the destination:
For the sake of readability I'm going to refer to your DIR1 and DIR2 as src and dest. First, let's declare them:
src="/place/dir1/"
dest="/place/dir2/"
Note the trailing slashes. We'll append the names of folders to these paths so the trailing slashes make that simpler. You also seem to be limiting the directories you want to move by whether or not they have the word test in their name:
filter="test"
So, let's first loop through the directories in source that pass the filter; if they don't exist in dest let's move them there:
for dir in $(ls "$src" | grep "$filter"); do
    if [ ! -d "$dest$dir" ]; then
        mv "$src$dir" "$dest"
    fi
done
I hope that solves your issue. But be warned, @gniourf_gniourf posted a link in the comments that should be heeded!
If you need to mv some directories to another location according to some pattern, then you can use find:
find . -type d -name "test*" -exec mv -t /tmp/target {} +
Details:
-type d - will search only for directories
-name "" - set search pattern
-exec - do something with find results
-t, --target-directory=DIRECTORY move all SOURCE arguments into DIRECTORY
There are many examples of -exec or xargs usage; for instance:
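A roughly equivalent xargs variant (a sketch; it assumes GNU find/xargs with NUL separators and a mv that supports -t):
find . -type d -name "test*" -print0 | xargs -0 mv -t /tmp/target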
And if you do not want to overwrite files, then add the -n option to the mv command:
find . -type d -name "test*" -exec mv -n -t /tmp/target {} +
-n, --no-clobber do not overwrite an existing file

Comparing two directories recursively, copying content of a directory to a third directory using a shell script

I have two directories say A and B.
I want to compare directory B with A.
Directory A and directory B have many common directories (whose contents are the same) and files.
If a file is not present in directory A, then copy the file to directory C, maintaining the directory structure.
e.g. if the following relative path does not exist in directory A:
B/hellboy/MyScripts/dir1/
I want to copy this path and its descendant files and directories to directory C.
I have tried a lot with the diff command, but I think it is not possible. Please help me get out of this.
Have a look at rdiff-backup (a relative of rsync), it might provide what you want out of the box.
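Plain rsync can also express this directly with its --compare-dest option; a rough sketch (assuming an absolute path for the compare directory, and noting that directories may still be created in C even when empty):
# copy to C only those files from B that do not exist identically in A
rsync -a --compare-dest=/full/path/to/A/ B/ C/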
Does the following script snippet come close to what you are looking for?
#!/usr/bin/env bash
# Setup for testing:
# mkdir -p A/l1-{1,2,3}/l2-{4,5,6}
# mkdir -p B/l1-{2,3,4,5}/l2-{1,2,3,4}
# touch B/l1-{3,4,5}/l2-{2,3,4}/file-${RANDOM}
# touch {A,B}/l1-2/l2-4/never_goes_to_C
while read f d _; do
    if [ -e "A/$f" ]; then
        echo "---> FOUND: A/$f"
    else
        echo "---> NOT FOUND: A/$f, copying to C"
        mkdir -p "C/$d" && cp -a "B/$f" "C/$d"
    fi
done < <(cd B; find . -printf "%p %h\n")
