I want to recursively loop through all subdirectories within the current directory.
for i in *
This only iterates over the entries in the current directory. How do I make it so that, if the entry being examined is a directory, the loop enters that directory and recursively looks through it too, and so on?
EDIT: Sorry, I should have been more specific. I can't use ls -R, as I want to display the output in a tree structure with a particular format. I have heard there are commands that can do this, but I have to use a loop. I cannot use "find" either...
Here is a function to do what you need. Call it with recp 0 in the current directory.
function recp
{
    local lvl=$(($1 + 1))
    local f j
    for f in *
    do
        # print one leading space per level of nesting
        j=0
        while [[ $j -lt $lvl ]]
        do
            echo -n " "
            j=$(($j + 1))
        done
        echo "$f"
        # recurse in a subshell so the cd does not affect this loop
        [[ -d $f ]] && ( cd "$f" && recp $lvl )
    done
}
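For example, given a hypothetical layout with a/b/c.txt and a top-level d.txt, calling recp 0 prints each entry indented one space per level:

$ recp 0
 a
  b
   c.txt
 d.txt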
I'm not very good at bash scripting, but the example below goes through all the directories. You can replace the echo command with whatever you like. I'm sure the style of this example can be improved. Also, did you consider the command "tree"? Good luck.
#!/bin/bash
function recursiveList
{
    if [ -d "$1" ]
    then
        cd "$1"
        for x in *
        do
            recursiveList "$x"
        done
        cd ..
    else
        echo "$1"
    fi
}

for i in *
do
    recursiveList "$i"
done
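With the same hypothetical a/b/c.txt plus d.txt layout as above, the script prints only the regular files, since directories take the cd branch instead of the echo:

$ ./recursiveList.sh
c.txt
d.txt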
My task is to create a script which lists all subdirectories in the user's directory and writes them to a file.
I created a recursive function which should run through all directories starting from the main one and write the names of their subdirectories to the file. But the script only descends through the first folders of my home directory until it reaches a folder without subfolders, then stops. How do I do it correctly?
#!/bin/bash
touch "/home/neo/Desktop/exercise1/backup.txt"
writeFile="/home/neo/Desktop/exercise1/backup.txt"
baseDir="/home/neo"

print(){
    echo $1
    cd $1
    echo "============" >> $writeFile
    pwd >> $writeFile
    echo "============" >> $writeFile
    ls >> $writeFile
    for f in $("ls -R")
    do
        if [ -d "$f" ]
        then
            print $1"/"$f
        fi
    done
}
print $baseDir
To get all folders within a path, you can simply do:
find /home/neo -type d > /home/neo/Desktop/exercise1/backup.txt
Try this:
fun(){
    [[ -d $1 ]] && cd "$1"
    echo "$PWD"
    # caution: this joins ALL entries into one string, so the function only
    # keeps descending while each directory holds exactly one subdirectory
    d=$(echo *)
    [[ -d $d ]] && cd "$d" || return
    fun
}
I have a directory containing 8000+ fits files, and I am wondering if there is a way to copy them in increments into other directories, so that I end up with either 8 directories of about 1000 fits files each, or 4 directories of 2000.
Something like this could make it:
dir=1
counter=1
for file in spec*
do
    echo "cp $file dir_$dir"
    ((counter++))
    (( $counter%1000 == 1 )) && ((dir++))
done
Explanation
dir=1 and counter=1 set the variables.
for file in spec* loops through the files matching the spec* pattern.
echo "cp $file dir_$dir" outputs lines like cp spec123 dir_1, cp spec456 dir_2, and so on. I used echo so that you can check the behaviour before going ahead and doing the real cp.
((counter++)) increments the variable counter.
(( $counter%1000 == 1 )) && ((dir++)) increments the value of $dir whenever $counter is of the form 1000k + 1.
To extend the script fedorqui wrote to your need, you can do the following:
create a file,
$ nano mycopy.sh (you can choose your own name)
paste the following and save the file:
#!/bin/bash
source=$1
dir=1
counter=1
mkdir "dir_$dir"
for file in "$source"/*            # glob instead of parsing ls output
do
    cp -r "$file" "dir_$dir"/
    ((counter++))
    (( counter % 1000 == 1 )) && ((dir++)) && mkdir "dir_$dir"
done
make your file executable by
$ chmod u+x mycopy.sh
now execute the script by running
$ ./mycopy.sh MyDirectoryWithManyFiles/
where MyDirectoryWithManyFiles/ is the directory containing your files
The script will create subdirectories and copy at most 1000 files into each.
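To sanity-check the result afterwards, here is a quick sketch that counts the files per batch directory (assumes the dir_N naming used above):

for d in dir_*/; do
    printf '%s: %s files\n' "${d%/}" "$(ls "$d" | wc -l)"
done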
Here's a way of doing it in Python:
Create a new file called "manage_files.py" or something, put this code into it:
# Hat-tip to http://stackoverflow.com/questions/3964681/find-all-files-in-directory-with-extension-txt-with-python
import os
import glob
from subprocess import call

# Initialise
my_dir = 'files_are_here'
num_files = 10

# Get a list of all files in my_dir
os.chdir(my_dir)
all_files = glob.glob('*')

# Move the files in batches of num_files into dir_0, dir_1, ...
for i in range((len(all_files) // num_files) + 1):
    files = all_files[i * num_files:(i + 1) * num_files]
    if not files:  # the final slice can be empty
        continue
    dirname = "dir_%s" % i
    call(["mkdir", dirname])
    arg = ['mv', '-t', '%s/' % dirname]
    arg.extend(files)
    call(arg)
Then, from the Linux shell type python manage_files.py and the magic will do its stuff.
I hope it helps you!
Here, the rootfile folder contains several files whose names include ..om500.., ..om501.., or ..om502..
So I wanted to copy a specific number of files of each name pattern to a new folder inside rootfile, called rootrate.
#!/bin/bash
mkdir rootfile/rootrate
for ((j = 500; j < 503; j++)); do
    echo $j
    file=( rootfile/*om${j}_*.root )   # glob into an array instead of parsing ls
    if [ "$j" -eq "500" ]; then
        max=60
    fi
    if [ "$j" -eq "501" ]; then
        max=8
    fi
    if [ "$j" -eq "502" ]; then
        max=120
    fi
    for ((i = 0; i < max; i++)); do
        echo "${file[i]}"
        echo $i
        cp "${file[i]}" rootfile/rootrate/
    done
done
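As an aside, the glob-into-array line above is the safe replacement for parsing ls output; a quick illustration with a made-up pattern:

files=( rootfile/*om500_*.root )
echo "${#files[@]} matching files, first one: ${files[0]}"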
I want to create a bash alias to do the following:
Assume I am at the following path:
/dir1/dir2/dir3/...../dirN
I want to go up to dir3 directly without using cd ... I will just write cdd dir3 and it should go directly to /dir1/dir2/dir3. cdd is my alias name.
I wrote the following alias, but it doesn't work:
alias cdd='export newDir=$1; export myPath=`pwd | sed "s/\/$newDir\/.*/\/$newDir/"`; cd $myPath'
Simply put, it should take the current full path, remove everything after the new destination directory, then cd to this new path.
The problem with my command is that $1 never receives the argument I pass to cdd.
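(For context, bash aliases are plain text substitution and never receive positional parameters; a minimal, made-up example:)

alias demo='echo "first arg: $1"'
demo hello    # expands to: echo "first arg: $1" hello
              # prints "first arg:  hello" because $1 is empty here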
This is a slightly simpler function that I think achieves what you're trying to do:
cdd() { cd ${PWD/$1*}$1; }
Explanation:
${PWD/$1*}$1 takes the current working directory and strips off everything from the string passed in (the target directory) onwards, then adds that string back. The result is used as the argument for cd. I didn't bother adding any error handling as cd will take care of that itself.
Example:
[atticus:pgl]:~/tmp/a/b/c/d/e/f $ cdd b
[atticus:pgl]:~/tmp/a/b $
It's a little ugly, but it works.
Here's a function - which you could place in your shell profile - which does what you want; note that in addition to directory names it also supports levels (e.g., cdd 2 to go up 2 levels in the hierarchy); just using cdd will move up to the parent directory.
Also note that matching is case-INsensitive.
The code is taken from "How can I replace a command line argument with tab completion?", where you'll also find a way to add complementary tab-completion for ancestral directory names.
cdd ()
{
    local dir='../'
    [[ "$1" == '-h' || "$1" == '--help' ]] && {
        echo -e "usage:
$FUNCNAME [n]
$FUNCNAME dirname
Moves up N levels in the path to the current working directory, 1 by default.
If DIRNAME is given, it must be the full name of an ancestral directory (case does not matter).
If there are multiple matches, the one *lowest* in the hierarchy is changed to." && return 0
    }
    if [[ -n "$1" ]]; then
        if [[ $1 =~ ^[0-9]+$ ]]; then
            local strpath=$( printf "%${1}s" )
            dir=${strpath// /$dir}
        else
            if [[ $1 =~ ^/ ]]; then
                dir=$1
            else
                local wdLower=$(echo -n "$PWD" | tr '[:upper:]' '[:lower:]')
                local tokenLower=$(echo -n "$1" | tr '[:upper:]' '[:lower:]')
                local newParentDirLower=${wdLower%/$tokenLower/*}
                [[ "$newParentDirLower" == "$wdLower" ]] && {
                    echo "$FUNCNAME: No ancestral directory named '$1' found." 1>&2
                    return 1
                }
                local targetDirPathLength=$(( ${#newParentDirLower} + 1 + ${#tokenLower} ))
                dir=${PWD:0:$targetDirPathLength}
            fi
        fi
    fi
    pushd "$dir" > /dev/null
}
I agree with mklement0, this should be a function. But a simpler one.
Add this to your .bashrc:
cdd () {
    newDir="${PWD%%$1*}$1"
    if [ ! -d "$newDir" ]; then
        echo "cdd: $1: No such file or directory" >&2
        return 1
    fi
    cd "${newDir}"
}
Note that if $1 (your search string) appears more than once in the path, this function will prefer the first (highest) one. Note also that if $1 is only a substring of a directory name, it will not be found. For example:
[ghoti@pc ~]$ mkdir -p /tmp/foo/bar/baz/foo/one
[ghoti@pc ~]$ cd /tmp/foo/bar/baz/foo/one
[ghoti@pc /tmp/foo/bar/baz/foo/one]$ cdd foo
[ghoti@pc /tmp/foo]$ cd -
/tmp/foo/bar/baz/foo/one
[ghoti@pc /tmp/foo/bar/baz/foo/one]$ cdd fo
cdd: fo: No such file or directory
If you'd like to include the functionality of going up 2 levels by running cdd 2, this might work:
cdd () {
    newDir="${PWD%%$1*}$1"
    if [ "$1" -gt 0 -a "$1" = "${1%%.*}" -a ! -d "$1" ]; then
        newDir=""
        for _ in $(seq 1 $1); do
            newDir="../${newDir}"
        done
        cd $newDir
        return 0
    elif [ ! -d "$newDir" ]; then
        echo "cdd: $1: No such file or directory" >&2
        return 1
    fi
    cd "${newDir}"
}
The long if statement verifies that you've supplied an integer that is not itself a directory. We build a new $newDir so that you can cd - to get back to your original location if you want.
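A hypothetical session showing both the numeric and the name-based mode (same prompt style as above):

[ghoti@pc /tmp/foo/bar/baz/foo/one]$ cdd 2
[ghoti@pc /tmp/foo/bar/baz]$ cdd foo
[ghoti@pc /tmp/foo]$ cd -
/tmp/foo/bar/baz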
Is there an easy way to find all files where no part of the path of the file is a symbolic link?
In short:
find myRootDir -type f -print
This would answer the question.
Take care not to add a slash at the end of the specified dir (not myRootDir/ but myRootDir).
This prints only real files in a real path: no symlinked files, and no files inside symlinked dirs.
But...
If you want to check whether a specified dir path contains a symlink component, here is a little bash function that can do the job:
isPurePath() {
    if [ -d "$1" ]; then
        while [ ! -L "$1" ] && [ ${#1} -gt 0 ]; do
            set -- "${1%/*}"
            if [ "${1%/*}" == "$1" ]; then
                [ ! -L "$1" ] && return
                set -- ''
            fi
        done
    fi
    false
}
if isPurePath /usr/share/texmf/dvips/xcolor; then echo yes; else echo no; fi
yes
if isPurePath /usr/share/texmf/doc/pgf; then echo yes; else echo no; fi
no
So you can find all files where no part of the path is a symbolic link by running this command:
isPurePath myRootDir && find myRootDir -type f -print
So if something is printed, no part of the path is a symlink!
You can use this script : (copy/paste the whole code in a shell)
cat <<'EOF' > sympath
#!/bin/bash
cur="$1"
while [[ $cur ]]; do
    cur="${cur%/*}"
    if test -L "$cur"; then
        echo >&2 "$cur is a symbolic link"
        exit 1
    fi
done
EOF
${cur%/*} is a bash parameter expansion
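A quick illustration of that expansion with a made-up path:

path=/tmp/foo/bar/base
echo "${path%/*}"    # prints /tmp/foo/bar (shortest trailing /component removed)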
EXAMPLE
chmod +x sympath
./sympath /tmp/foo/bar/base
/tmp/foo/bar is a symbolic link
I don't know any easy way, but here's an answer that fully answers your question, using two methods (that are, in fact, essentially the same):
Using an auxiliary script
Create a file called hasnosymlinkinname (or choose a better name --- I've always sucked at choosing names):
#!/bin/bash

if [[ "$1" = /* ]]; then
    name=$1
else
    name="$(pwd)/$1"
fi
IFS=/ read -r -a namearray <<< "$name"
for ((i = 0; i < ${#namearray[@]}; ++i)); do
    IFS=/ read name <<< "${namearray[*]:0:i+1}"
    [[ -L "$name" ]] && exit 1
done
exit 0
Then chmod +x hasnosymlinkinname. Then use with find:
find /path/where/stuff/is -exec ./hasnosymlinkinname {} \; -print
The script works like this: using IFS trickery, we decompose the filename into each part of the path (separated by the /) and put each part in an array namearray. Then, we loop through the (cumulative) parts of the array (joined with the / thanks to some IFS trickery) and if this part is a symlink (see the -L test), we exit with a non-success return code (1), otherwise, we exit with a success return code (0).
Then find runs this script to all files in /path/where/stuff/is. If the script exits with a success return code, the name of the file is printed out (but instead of -print you could do whatever else you like).
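To see the IFS splitting step in isolation (made-up path):

IFS=/ read -r -a parts <<< "/usr/share/doc"
printf '[%s]\n' "${parts[@]}"    # prints [], [usr], [share], [doc] on separate lines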
Using a one(!)-liner (if you have a large screen) to impress your grand-mother (or your dog)
find /path/where/stuff/is -exec bash -c 'if [[ "$0" = /* ]]; then name=$0; else name="$(pwd)/$0"; fi; IFS=/ read -r -a namearray <<< "$name"; for ((i=0;i<${#namearray[@]}; ++i)); do IFS=/ read name <<< "${namearray[*]:0:i+1}"; [[ -L "$name" ]] && exit 1; done; exit 0' {} \; -print
Note
This method is 100% safe regarding spaces or funny symbols that could appear in file names. I don't know how you'll use the output of this command, but please make sure that you'll use a good method that will also be safe regarding spaces and funny symbols that could appear in a file name, i.e., don't parse its output with another script unless you use -print0 or similar smart thing.
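For instance, one NUL-safe way to consume the output (a sketch; assumes your find supports -print0):

find /path/where/stuff/is -exec ./hasnosymlinkinname {} \; -print0 |
while IFS= read -r -d '' f; do
    printf 'no symlink in: %s\n' "$f"
done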
I'm trying to write a function that will traverse the file directory and give me the value of the deepest directory. I've written the function and it seems like it is going to each directory, but my counter doesn't seem to work at all.
dir_depth(){
    local olddir=$PWD
    local dir
    local counter=0
    cd "$1"
    for dir in *
    do
        if [ -d "$dir" ]
        then
            dir_depth "$1/$dir"
            echo "$dir"
            counter=$(( $counter + 1 ))
        fi
    done
    cd "$olddir"
}
What I want it to do is feed the function a directory, say /home, and it'll go down each subdirectory within and find the deepest value. I'm trying to learn recursion better, but I'm not sure what I'm doing wrong.
Obviously find should be used for this
find . -type d -exec bash -c 'echo $(tr -cd / <<< "$1"|wc -c):$1' -- {} \; | sort -n | tail -n 1 | awk -F: '{print $1, $2}'
At the end I use awk to just print the output, but if that were the output you wanted it would be better just to echo it that way to begin with.
Not that it helps learn about recursion, of course.
Here's a one-liner that's pretty fast:
find . -type d -printf '%d:%p\n' | sort -n | tail -1
Or as a function:
depth()
{
    find "$1" -type d -printf '%d:%p\n' | sort -n | tail -1
}
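A hypothetical run, where a/b/c is the deepest subdirectory under ~/tmp:

$ depth ~/tmp
3:/home/user/tmp/a/b/c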
Here is a version that seems to work:
#!/bin/sh
dir_depth() {
    cd "$1"
    maxdepth=0
    for d in */.; do
        [ -d "$d" ] || continue
        depth=`dir_depth "$d"`
        maxdepth=$(($depth > $maxdepth ? $depth : $maxdepth))
    done
    echo $((1 + $maxdepth))
}
dir_depth "$@"
Just a few small changes to your script. I've added several explanatory comments:
dir_depth(){
    # don't need olddir, and counter needs to be "global"
    local dir
    cd -- "$1"    # the -- protects against dirnames that start with -
    # do this out here because we're counting depth, not visits
    ((counter++))
    for dir in *
    do
        if [ -d "$dir" ]
        then
            # we want to descend from where we are rather than where we started from
            dir_depth "$dir"
        fi
    done
    if ((counter > max))
    then
        max=$counter    # these are what we're after
        maxdir=$PWD
    fi
    ((counter--))    # decrement and test to see if we're back where we started
    if (( counter == 0 ))
    then
        echo $max $maxdir    # ta da!
        unset counter max maxdir    # ready for the next run (reset max too, so stale values don't leak into the next call)
    else
        cd ..    # go up one level instead of "olddir"
    fi
}
It prints the max depth (including the starting directory as 1) and the first directory name that it finds at that depth. You can change the test if ((counter > max)) to >= and it will print the last directory name it finds at that depth.
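A hypothetical run, with /tmp/foo/bar/baz as the deepest directory:

$ dir_depth /tmp/foo
3 /tmp/foo/bar/baz

Note that the shell is left inside the directory you passed in, because the final branch echoes instead of changing back.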
The AIX (6.1) find command seems to be quite limited (e.g. no printf option). If you would like to list all directories up to a given depth, try this combination of find and dirname. Save the script code as maxdepth.ksh. In comparison to the Linux find -maxdepth option, AIX find will not stop at the given maximum level, which results in a longer runtime depending on the size/depth of the scanned directory:
#!/usr/bin/ksh
# Param 1: maxdepth
# Param 2: Directoryname
max_depth=0
next_dir=$2
while [[ "$next_dir" != "/" ]] && [[ "$next_dir" != "." ]]; do
    max_depth=$(($max_depth + 1))
    next_dir=$(dirname "$next_dir")
done
if [ $1 -lt $max_depth ]; then
    ret=1
else
    ret=0
    ls -d "$2"
fi
exit $ret
Sample call:
find /usr -type d -exec maxdepth.ksh 2 {} \;
The traditional way to do this is to have dir_depth return the maximum depth too. So you'll return both the name and depth.
You can't return an array, struct, or object in bash, so you can return e.g. a comma-separated string instead:
dir_depth(){
    local dir
    local max_dir="$1"
    local max_depth=0
    for dir in "$1"/*
    do
        if [ -d "$dir" ]
        then
            cur_ret=$(dir_depth "$dir")
            cur_depth=$(expr "$cur_ret" : '\([^,]*\)')
            cur_dir=$(expr "$cur_ret" : '.*,\(.*\)')
            if [[ "$cur_depth" -gt "$max_depth" ]]; then
                max_depth="$cur_depth"
                max_dir="$cur_dir"
            fi
        fi
    done
    max_depth=$(($max_depth + 1))
    echo "$max_depth,$max_dir"
}
EDIT: Fixed now. It starts with the directory you passed in as level 1, then counts upwards. I removed the cd, as it isn't necessary. Note that this will fail if filenames contain commas.
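A hypothetical call, again with /tmp/foo/bar/baz as the deepest directory:

$ dir_depth /tmp/foo
3,/tmp/foo/bar/baz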
You might want to consider using a programming language with more built-in data structures, like Python.