I have the following directory structure:
libs logs src etc .........
|-- logs
|-- src
|-- inc
"logs" directory is everywhere inside. So I want to list all directories except "logs". What will be shell command for that.
Something like
#!/bin/bash
for dir in `find * -type d`; do
    if [[ ${dir} != "{logs}*" ]]; then
        echo ${dir}
    fi
done
but this does not seem to be working.
Regards,
Farrukh Arshad.
Rather than trying to process these things one at a time with checks, why don't you get all directories and just filter out the ones you don't want:
find * -type d | egrep -v '^logs/|/logs/'
The grep simply removes lines containing either logs/ at the start or /logs/ anywhere.
That's going to be a lot faster than individually checking every single directory one-by-one.
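The filtering can also live inside find itself via -prune, so find never even descends into the logs trees. A minimal sandbox sketch; the directory names are invented for the demo:

```shell
# Build a throwaway tree mimicking the question's layout (names invented)
mkdir -p demo_tree/src/logs demo_tree/inc demo_tree/logs demo_tree/libs

# -prune stops find from descending into any directory named "logs";
# the -o branch prints every other directory
find demo_tree -type d -name logs -prune -o -type d -print
```

With -prune, deeply nested logs directories are skipped entirely rather than listed and then filtered out.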
As mentioned above, you can use egrep with | to separate patterns, or define it all in find as below:
find . -type d -print
.
./logs1
./test
./logs
$ find . -type d -not -name logs -not -name logs1 -print
.
./test
What I want:
In a bash script: find all files in the current directory that contain a certain string, "teststring", and copy them into a subfolder "./testfolder".
I found this to find the filenames I'm looking for:
find . -type f -print0 | xargs -0 grep -l "teststring"
...and this to copy found files to another folder (here selecting by strings in the filename):
find . -type f -iname "stringinfilename" -exec cp {} ./testfolder/ \;
What's the best way to combine both commands to achieve what I described at the top?
Just let find do both:
find . -name subdir -prune -o -type f -exec \
grep -q teststring "{}" \; -exec cp "{}" subdir \;
Note that things like this are much easier if you don't try to add to the directory you're working in. In other words, write to a sibling dir instead of writing to a subdirectory. If you want to wind up with the data in a subdir, mv it when you're done. That way, you don't have to worry about the prune (ie, you don't have to worry about find descending into the subdir and attempting to duplicate the work).
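If you would rather keep the two stages separate, the grep-then-copy pipeline can be made NUL-safe end to end. A sketch with invented file names, assuming GNU grep (-lZ prints matching names NUL-terminated) and GNU cp (-t sets the target directory):

```shell
# Invented sandbox: one matching file, one non-matching, plus the destination
mkdir -p work/testfolder
printf 'contains teststring\n' > work/match.txt
printf 'nothing relevant\n'   > work/other.txt

# Prune the destination so already-copied files are never re-matched;
# grep -lZ emits matching file names NUL-terminated for the final xargs
find work -name testfolder -prune -o -type f -print0 |
  xargs -0 grep -lZ teststring |
  xargs -0 cp -t work/testfolder
```

The -print0/-0/-Z chain keeps filenames with spaces or newlines from being split apart between the stages.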
Suppose that the following project structure is given:
/
|
|- /bin
|- /src
|
|- /bin
|- abc
...
|
|- /bin
and I would like to erase the contents of the bin subdirectories, but not delete them. How do I achieve this through the Bash command line?
EDIT
I have tried find . -regex bin/* -type d -exec rm -rf {} \;, but to no avail.
I suggest with GNU find:
find . -regex ".*/bin/.*" -type f -exec echo rm {} \;
if everything looks fine remove echo.
When you want to delete files, it is unnecessary (indeed dangerous) to pass -r to rm.
Besides, find "knows" how to delete, without the need to call rm in the first place.
find . -type f -wholename "*/bin/*" -delete
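To see -wholename (a GNU alias for -path) and -delete in action, here is a disposable sketch; the tree names are invented. Previewing with -print first is the usual safety step:

```shell
# Invented layout: two bin directories at different depths, one file to keep
mkdir -p proj/bin proj/src/bin
touch proj/bin/a.o proj/src/bin/b.o proj/src/keep.c

# Dry run: -print shows exactly what -delete would remove
find proj -type f -wholename '*/bin/*' -print

# Same expression, now deleting only those regular files
find proj -type f -wholename '*/bin/*' -delete
```

Because the expression is restricted to -type f, the bin directories themselves survive, which is exactly what the question asked for.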
Debug your command: try each argument to find in isolation to see if it is producing expected output.
We quickly see that
find . -regex bin/*
doesn't work because of an issue in the regex. If we check the man page for find, we see that the regex must match the whole path:
-regex pattern
       File name matches regular expression pattern. This is a match
       on the whole path, not a search. For example, to match a file
       named './fubar3', you can use the regular expression '.*bar.'
       or '.*b.*3', but not 'f.*r3'.
This should do the job:
find . -regex '.*/bin/.*' -type d -delete
or using your command:
find . -regex '.*/bin/.*' -type d -prune -exec rm -rf {} \;
Remember to quote your expressions to avoid unexpected shell expansion.
I have a directory containing a large number of directories. Each directory contains some files and in some cases another directory.
parent_directory
    sub_dir_1
        sub_dir_1.txt
        sub_dir_1_1.txt
    sub_dir_2
        sub_dir_2.txt
        sub_dir_2_1.txt
    sub_dir_3
        sub_dir_3.txt
        sub_dir_3_1.txt
    sub_dir_4
        sub_dir_4.txt
        sub_dir_4_1.txt
    sub_dir_5
        sub_dir_5.txt
        sub_dir_5_1.txt
I need to check that each sub_dir contains at least one file with the exact same name. I don't need to check any further down if there are subdirectories within the sub_dirs.
I was thinking of using for d in ./*/ ; do (command here); done but I don't know how to get access to the sub_dir name inside the for loop:
for d in ./*/ ;
do
(if directory does not contain 1 file that is the same name as the directory then echo directory name );
done
What is the best way to do this or is there a simpler way?
From the parent directory:
find . -mindepth 1 -maxdepth 1 -type d -printf "%f\n" |
xargs -I {} find {} -maxdepth 1 -type f -name {}.txt
will give you the name/name.txt pairs. Compare with all the dir names to find the missing ones.
UPDATE
this might be simpler, instead of scanning you can check whether file exists or not
for f in $(find . -mindepth 1 -maxdepth 1 -type d -printf "%f\n"); do
    if [ ! -e "$f/$f.txt" ]; then
        echo "$f not found"
    fi
done
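The whole check can also be written without find, using the */ glob and parameter expansion; a self-contained sketch with invented directory names:

```shell
# Invented layout: sub_a has its matching file, sub_b does not
mkdir -p parent/sub_a parent/sub_b
touch parent/sub_a/sub_a.txt

for d in parent/*/; do
    name=${d%/}          # drop the trailing slash
    name=${name##*/}     # drop the leading path, leaving the bare dir name
    [ -e "$d$name.txt" ] || echo "$name is missing $name.txt"
done
# prints: sub_b is missing sub_b.txt
```

The glob avoids the word-splitting pitfalls of looping over find output, at the cost of skipping hidden directories.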
Maybe I don't understand fully, but:
find . -print | grep -P '/(.*?)/\1\.txt'
this will print any file which is inside of the same-named directory, e.g:
./a/b/b.txt
./a/c/d/d.txt
etc...
Similarly
find . -print | sed -n '/\(.*\)\/\1\.txt/p'
this
find . -print | grep -P '/(.*?)/\1\.'
will list all files regardless of the extension in same-named dirs.
You can craft other regexes following the backreference logic.
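A sandboxed illustration of the backreference match, assuming GNU grep with -P (PCRE); the tree names are invented:

```shell
# Invented tree: b.txt sits in a directory of the same name, x.txt does not
mkdir -p tree/a/b tree/a/c
touch tree/a/b/b.txt tree/a/c/x.txt

# The backreference \1 requires the file stem to repeat the directory name
find tree -print | grep -P '/(.*?)/\1\.txt'
# prints: tree/a/b/b.txt
```

On systems without grep -P, the sed variant shown above gives the same result using basic-regex backreferences.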
Similar to this question
but I want to have file name be the same name as the directory with "_info.txt" appended to it.
I have tried this
#!/bin/bash
find . -mindepth 1 -maxdepth 1 -type d | while read line; do
touch /Users/xxx/git/xxx/$line/$line_info.txt
done
It is creating ".txt" in each subdirectory.
What am I missing?
The crucial mistake is that the underscore character is a valid character for a bash variable name:
$ a=1
$ a_b=2
$ echo $a_b
2
There are several ways around this:
$ echo "$a"_b
1_b
$ echo $a'_b'
1_b
$ echo ${a}_b
1_b
As for your task, here's a fast way:
find . -mindepth 1 -maxdepth 1 -type d -printf "%p_info.txt\0" | xargs -0 touch
find's -printf prints the path of each directory, and the %p directive is not affected by the underscore that follows it, unlike a shell variable name. Then, xargs passes the filenames as many arguments to a few runs of touch, making the creation of the files much faster.
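Note that '%p_info.txt' creates the file next to each directory. A variant with '%p/%f_info.txt' puts it inside each subdirectory instead, which matches what the question's touch line was aiming for. A sketch with invented names, assuming GNU find's -printf:

```shell
mkdir -p repo/alpha repo/beta

# '%p/%f_info.txt' expands to "<dirpath>/<dirname>_info.txt", placing the
# file inside each subdirectory (plain '%p_info.txt' would create a sibling)
find repo -mindepth 1 -maxdepth 1 -type d -printf '%p/%f_info.txt\0' |
    xargs -0 touch
```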
Figured it out:
#!/bin/bash
find . -mindepth 1 -maxdepth 1 -type d | while read line; do
touch /Users/xxxx/git/xxxx/$line/"$line"_info.txt
done
I think I can still do the "find ." part better, as I have to run it in the directory I want the files added to. Ideally it should be able to be run from anywhere.
If you are not recursing at all, you don't need find. To loop over subdirectories, all you need is
for dir in */; do
touch /Users/xxxx/git/xxxx/"$dir/${dir%/}_info.txt"
done
Incidentally, notice that the variable needs to be in double quotes always.
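Run from inside the parent directory, the glob version looks like this in a disposable sandbox (directory names invented):

```shell
mkdir -p box/x box/y
cd box

for dir in */; do
    # "$dir" is "x/", "${dir%/}" is "x", so this touches x/x_info.txt
    touch "$dir${dir%/}_info.txt"
done
```

The ${dir%/} expansion strips the trailing slash the glob leaves on each name, so the file is named after the bare directory.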
Try this script:
#!/bin/bash
find "$1" -type d | while read -r line
do
    touch "$line/$(basename "$(readlink -m "$line")")_info.txt"
done
Save it as, say, appendinfo and run it as
./appendinfo directory_name_which_include_symlinks
Before
ssam@udistro:~/Documents/so$ tree 36973628
36973628
`-- 36973628p1
`-- 36973628p2
`-- 36973628p3
3 directories, 0 files
After
ssam@udistro:~/Documents/so$ tree 36973628
36973628
|-- 36973628_info.txt
`-- 36973628p1
|-- 36973628p1_info.txt
`-- 36973628p2
|-- 36973628p2_info.txt
`-- 36973628p3
`-- 36973628p3_info.txt
3 directories, 4 files
Note : readlink canonicalizes the path by following the symlinks. It is not required if, say, you're not going to give arguments like ..
Try this :
find . /Users/xxx/git/xxx/ -mindepth 1 -maxdepth 1 -type d -exec bash -c 'touch ${0}/${0##*/}_info.txt' {} \;
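The same bash -c trick, sandboxed with invented names and with the expansions quoted so paths containing spaces survive:

```shell
mkdir -p site/red site/green

# $0 is the directory path passed in by find; ${0##*/} strips it to the name
find site -mindepth 1 -maxdepth 1 -type d \
    -exec bash -c 'touch "$0/${0##*/}_info.txt"' {} \;
```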
You're missing the path to the folder. I think you meant to write
touch "./$folder/*/*_info.txt"
Although this still only works if you start in the right directory. You should really have written something like
touch "/foo/bar/user/$folder/*/*_info.txt"
However, your use of "*/*_info.txt" is very... questionable... to say the least. I'm no bash expert, but I don't expect it to work. Reading everything in a directory (using "find" for example, and only accepting directories) and piping that output to "touch" would be a better approach.
I have two directories in totally different places in the filesystem:
/path1/dir1/*
/path2/dir2/*
dir1 has a list of subdirectories and dir2 has a similar list of subdirectories, some of which are also in dir1
I'd like a command that can use a list of the subdirectories that are currently in dir1 and if they exist in dir2, delete them.
I was able to output a list of the subdirectory names using the find command and sed together like this:
find $PWD -maxdepth 1 -type d | sed -e 's\^/path1/dir1///g' and that will output:
subdir1
subdir2
subdir3
but I don't know how to then feed that into a command to delete (recursively) those subdirectories from another location. Do I need to use awk or xargs or something?
Sounds like you want something like this:
cd /path1/dir1; find . -mindepth 1 -maxdepth 1 -type d -exec rm -rf /path2/dir2/{} \;
Replace the "rm -rf" with "echo" to see what directories it will delete before trying it :-)
The "-f" option prevents errors if the directory doesn't exist
Some versions of find (GNU?) also have "-execdir". You can use it like this:
find /path1/dir1 -mindepth 1 -maxdepth 1 -type d -execdir rm -rf /path2/dir2/{} \;
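A sandboxed sketch of the mirror-delete (the paths are invented, and GNU find's substitution of {} inside a larger argument is assumed):

```shell
# dir1 drives the deletion; dir2 loses only the names that dir1 also has
mkdir -p path1/dir1/sub1 path1/dir1/sub2
mkdir -p path2/dir2/sub1 path2/dir2/sub3

root=$PWD
( cd path1/dir1 &&
  find . -mindepth 1 -maxdepth 1 -type d \
      -exec rm -rf "$root/path2/dir2/{}" \; )
```

rm -f stays quiet for names like sub2 that exist only in dir1, so the loop needs no existence check.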
for dir in path1/dir1/*/
do
    rm -rf path2/dir2/"$(basename "$dir")"
done
You could also try using find to locate the dirs and piping to awk:
find /path1/dir1/ -maxdepth 1 -mindepth 1 -type d |awk 'BEGIN{FS="/"}{system("echo rm -rf /path2/dir2/"$NF);}'
remove the "echo" in the system() call when you are sure the command is behaving properly.