Creating empty file in all subfolders with unique name - bash - linux

Similar to this question, but I want the file name to be the same as the directory name with "_info.txt" appended.
I have tried this
#!/bin/bash
find . -mindepth 1 -maxdepth 1 -type d | while read line; do
touch /Users/xxx/git/xxx/$line/$line_info.txt
done
It is creating ".txt" in each subdirectory.
What am I missing?

The crucial mistake is that the underscore character is a valid character for a bash variable name:
$ a=1
$ a_b=2
$ echo $a_b
2
There are several ways around this:
$ echo "$a"_b
1_b
$ echo $a'_b'
1_b
$ echo ${a}_b
1_b
As for your task, here's a fast way:
find . -mindepth 1 -maxdepth 1 -type d -printf "%p/%f_info.txt\0" | xargs -0 touch
The find -printf prints each directory's path (%p), a slash, its basename (%f), and the _info.txt suffix; unlike a variable name, a format specifier is unaffected by the underscore that follows it. Then, xargs passes the filenames as many arguments to few runs of touch, making the creation of the files much faster.
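Note that -printf is a GNU find extension; the /Users path in the question suggests macOS, whose stock find lacks it. A portable sketch using -exec instead (assuming nothing beyond POSIX find and sh):
# create <dir>_info.txt inside each immediate subdirectory, without -printf
find . -mindepth 1 -maxdepth 1 -type d -exec sh -c '
  for d in "$@"; do touch "$d/${d##*/}_info.txt"; done
' sh {} +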

Figured it out:
#!/bin/bash
find . -mindepth 1 -maxdepth 1 -type d | while read line; do
touch /Users/xxxx/git/xxxx/"$line"/"$line"_info.txt
done
I think I can still do the "find ." part better, as I currently have to run the script from the directory I want the files added to. Ideally it should be able to be run from anywhere.

If you are not recursing at all, you don't need find. To loop over subdirectories, all you need is
for dir in */; do
touch /Users/xxxx/git/xxxx/"$dir/${dir%/}_info.txt"
done
Incidentally, notice that the variable always needs to be in double quotes.
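For clarity, ${dir%/} strips the trailing slash that the */ glob leaves on each name, so the file is named after the bare directory. A quick illustration (subdir1 is just an example name):
$ dir=subdir1/
$ echo "${dir%/}_info.txt"
subdir1_info.txt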

Try this script:
#!/bin/bash
find "$1" -type d | while IFS= read -r line
do
touch "$line/$(basename "$(readlink -m "$line")")_info.txt"
done
Save it as, say, appendinfo and run it as
./appendinfo directory_name_which_include_symlinks
Before
ssam@udistro:~/Documents/so$ tree 36973628
36973628
`-- 36973628p1
    `-- 36973628p2
        `-- 36973628p3
3 directories, 0 files
After
ssam@udistro:~/Documents/so$ tree 36973628
36973628
|-- 36973628_info.txt
`-- 36973628p1
    |-- 36973628p1_info.txt
    `-- 36973628p2
        |-- 36973628p2_info.txt
        `-- 36973628p3
            `-- 36973628p3_info.txt
3 directories, 4 files
Note: readlink canonicalizes the path by following symlinks. It is not required if, say, you're not going to pass arguments like "..".

Try this:
find /Users/xxx/git/xxx/ -mindepth 1 -maxdepth 1 -type d -exec bash -c 'touch "${0}/${0##*/}_info.txt"' {} \;

You're missing the path to the folder. I think you meant to write
touch "./$folder/*/*_info.txt"
Although this still only works if you start in the right directory. You should really have written something like
touch "/foo/bar/user/$folder/*/*_info.txt"
However, your use of "*/*_info.txt" is very... questionable... to say the least. I don't expect it to work: a glob inside double quotes is not expanded, so touch sees the * characters literally. Reading everything in a directory (using find, for example, and only accepting directories) and piping that output to touch would be a better approach.
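A quick demonstration of the quoted-glob problem (demo is a hypothetical directory name):
$ folder=demo; mkdir -p "$folder/sub"
$ touch "./$folder/*/*_info.txt"
touch: cannot touch './demo/*/*_info.txt': No such file or directory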

Related

Bash script to rename folder with dynamic name and replace its strings inside

I'm just starting to use Docker, and I'm a newbie at bash scripts, but now I need to write a bash script that does the following:
I have 2 dirs with some subdirs: Rtp/ and Rtp-[version]/. If the Rtp-[version]/ dir exists, I need to rename it to Rtp/ and overwrite its contents. [version] is a dynamic number.
My dir structure:
|-- Rtp
|--- subdir 1
|--- subdir 2
|-- Rtp-1.0 (or Rtp-1.6, Rtp-2.7)
|--- subdir 1
|--- subdir 2
After this I need to find a specific file, app.properties, in the new Rtp/ dir, and change the string myvar=my value inside it to myvar=new value, then do the same thing with 3 more files.
I tried this link: http://stackoverflow.com/questions/15290186/…: find . -name 'Rtp-*' -exec bash -c 'mv $0 ${0/*/Rtp}' {} \; The problem is that if the dir already exists, it moves one directory into the other.
Also, I want to rename it rather than copy it, because it's a big dir and copying can take some time.
Thanks in advance. Please explain the solution, so that I will be able to change it in the future if something changes.
1.
for dir in $(find Rtp-[version] -maxdepth 1 -type d); do
cp -Rf "$dir" Rtp
done
Find all directories in Rtp-[version]
Iterate through all of the results (for ...)
Copy recursively to Rtp/; -f will overwrite existing files
2.
for f in $(find Rtp -type f -name "app.properties"); do
sed -i -e 's/myvar=myval/myvar=newval/' "$f"
done
Find all files named app.properties
Use sed (the stream editor) with -i to edit the file in place and -e to give a substitution expression (e.g. s/<oldval>/<newval>/). Note that oldval and newval will need to be escaped. If they contain a lot of /'s, you could use a different delimiter, e.g. s|<oldval>|<newval>|.
Based on @Brian Hazeltine's answer and Check if a file exists with wildcard in shell script, I found the following solution:
if ls Rtp-*/ 1> /dev/null 2>&1; then
mv -T Rtp-*/ Rtp
find Rtp -type f -name app.properties -exec sed -i -e 's/myvar=my value/myvar=new value/' {} \;
fi
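Note that mv -T will fail if Rtp already exists and is not empty. A minimal sketch of a safer rename, assuming exactly one Rtp-* directory exists and that discarding the old Rtp tree is acceptable:
shopt -s nullglob
dirs=(Rtp-*/)
if (( ${#dirs[@]} == 1 )); then
  rm -rf Rtp                  # drop the old tree so the rename can take its place
  mv -T "${dirs[0]%/}" Rtp    # -T: treat Rtp as the destination itself, not a parent dir
fi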

linux command line recursively check directories for at least 1 file with the same name as the directory

I have a directory containing a large number of directories. Each directory contains some files and in some cases another directory.
parent_directory
  sub_dir_1
    sub_dir_1.txt
    sub_dir_1_1.txt
  sub_dir_2
    sub_dir_2.txt
    sub_dir_2_1.txt
  sub_dir_3
    sub_dir_3.txt
    sub_dir_3_1.txt
  sub_dir_4
    sub_dir_4.txt
    sub_dir_4_1.txt
  sub_dir_5
    sub_dir_5.txt
    sub_dir_5_1.txt
I need to check that each sub_dir contains at least one file with the exact same name. I don't need to check any further down if there are subdirectories within the sub_dirs.
I was thinking of using for d in ./*/ ; do (command here); done, but I don't know how to access the sub_dir name inside the for loop:
for d in ./*/ ;
do
(if directory does not contain 1 file that is the same name as the directory then echo directory name );
done
What is the best way to do this or is there a simpler way?
From the parent directory,
find -mindepth 1 -maxdepth 1 -type d -printf "%f\n" |
xargs -I {} find {} -maxdepth 1 -type f -name {}.txt
will give you the name/name.txt pairs. Compare with all the dir names to find the missing ones.
UPDATE
this might be simpler; instead of scanning, you can check whether the file exists or not:
for f in $(find -mindepth 1 -maxdepth 1 -type d -printf "%f\n"); do
  if [ ! -e "$f/$f.txt" ]; then
    echo "$f not found"
  fi
done
Maybe I don't understand fully, but
find . -print | grep -P '/(.*?)/\1\.txt'
will print any file which is inside a same-named directory, e.g.:
./a/b/b.txt
./a/c/d/d.txt
etc...
Similarly,
find . -print | sed -n '/\(.*\)\/\1\.txt/p'
does the same with sed, and this:
find . -print | grep -P '/(.*?)/\1\.'
will list all files, regardless of extension, in same-named dirs.
You can craft other regexes following the backreference logic.
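If you'd rather avoid regexes entirely, here is a bash-only sketch of the check the asker outlined, covering only the immediate subdirectories (names are assumed not to contain newlines):
for d in */; do
  d=${d%/}                      # strip the trailing slash left by the glob
  if [ ! -e "$d/$d.txt" ]; then
    echo "$d"                   # directory is missing dir/dir.txt
  fi
done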

recursively call a program to run on each subdirectory

I have a program which does something like
#!/bin/bash
cd "$1"
tree "$1"
Where I run:
myprogram.sh subdir1
where subdir1 is a subdirectory of dir. However, I have subdir1, subdir2, subdir3... subdirN within dir.
How can I tell my program to run on every subdirectory of dir? Obviously my program does not just run tree; that is just to show that I pass a subdirectory through the command line, whose name my program uses for a number of processes.
Thanks
Use find. For example, find "$1" -type d will return a list of all directories under $1, recursing as needed.
You can use it to drive your script with xargs or -exec:
find DIR -type d -print0 | xargs -0 -n1 thescript.sh
or
find DIR -type d -exec thescript.sh {} \;
Both of the above are safe for strangely named directories.
If you want to use find inside your script and no directory names contain newlines, try:
#!/bin/bash
find "$1" -type d| IFS='' while read d; do
pushd "$d" #like cd, but remembers where you came from
tree "$d"; #<-- your code here
popd #go back to starting point
done
If you only want direct subdirectories of the starting point, try adding -mindepth 1 -maxdepth 1 to the find argument list in the above examples.
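For instance, a sketch of the xargs variant restricted to direct subdirectories (DIR and thescript.sh are the same placeholders as above):
find DIR -mindepth 1 -maxdepth 1 -type d -print0 | xargs -0 -n1 thescript.sh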

How to find/list the directories where a particular sub-directory is not present

I am writing a shell script that checks whether the bin directory is present under all the user directories under /home. The bin directory can be present directly under the user directory or under a child directory of the user directory.
I mean, let's say I have a user amit under /home. Then the bin directory can be present directly as /home/amit/bin, or as /home/amit/jash/bin.
Now my requirement is to get a list of user directories where the bin directory is not present, either directly under the user directory or under a child directory of it. I tried the command:
find /home -type d ! -exec test -e '{}/bin' \; -print
but it is not working. However, when I replace the bin directory with some file, the command works fine. It looks like this command is particularly for files. Is there any similar command for directories? Any help on this will be greatly appreciated.
You're on the right track. The catch is that your test of "does the following directory NOT exist in this target" can't be expressed within find's conditions in such a way as to return only the top-level directory. So you need to nest, one way or another.
One strategy would be to use a for loop in bash:
$ mkdir foo bar baz one two
$ mkdir bar/bin baz/bin
$ for d in /home/*/; do find "$d" -type d -name bin | grep -q . || echo "$d"; done
/home/foo/
/home/one/
/home/two/
This uses pathname expansion (globbing) to generate the list of directories to test, and then checks for the existence of "bin". If that check fails (i.e. find outputs nothing), the directory is printed. Note the trailing slash on /home/*/, which ensures that you will only be searching within directories, rather than files that might accidentally exist in /home/.
Another possibility might be to use nested finds, if you don't want to depend on bash:
$ find /home/ -mindepth 1 -maxdepth 1 -type d -not -exec sh -c "find {}/ -type d -name bin -print | grep -q ." \; -print
/home/foo
/home/one
/home/two
This roughly duplicates the effect of the bash for loop above, but by nesting find within find -exec. It uses grep -q . to convert the output of find into an exit status that can be used as a condition for the outer find.
Note that since you're looking for a bin directory, you want test -d rather than test -e, which would also match a plain file named bin (though that probably does not matter in your case).
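Putting that together, a sketch of the asker's original one-liner, fixed to use test -d and restricted to the top-level user directories (note this only detects bin as a direct child, not at deeper levels):
find /home -mindepth 1 -maxdepth 1 -type d ! -exec test -d '{}/bin' \; -print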
Another option is to use bash process redirection. On multiple lines for easier reading:
cd /home/
comm -3 \
    <(printf '%s\n' */ | sed 's|/.*||' | sort) \
    <(find */ -type d -name bin | cut -d/ -f1 | uniq)
This unfortunately requires you to change to the /home directory before running, because of the way it strips off subdirectories. You can of course collapse this into a big long one-liner if you feel so inclined.
This comm solution also has the risk of failing on directories with special characters in their names, like newlines.
One last option is bash-only but more than a one-liner. It involves subtracting the directories containing "bin" from the full list. It uses an associative array and globstar, so it depends on bash version 4.
#!/usr/bin/env bash
shopt -s globstar
# Go to our root
cd /home
# Declare an associative array
declare -A dirs=()
# Populate the array with our "full" list of home directories
for d in */; do dirs[${d%/}]=""; done
# Remove directories that contain a "bin" somewhere inside 'em
for d in **/bin; do unset dirs[${d%%/*}]; done
# Print the result in reproducible form
declare -p dirs
# Or print the result just as a list of words.
printf '%s\n' "${!dirs[@]}"
Note that we're storing directories in the array index, which (1) makes it easy for us to find and delete items, and (2) ensures unique entries, even if one user has multiple "bin" directories under their home.
cd /home
find . -maxdepth 1 -type d ! -name . | sort > a
find . -type d -name bin | cut -d/ -f1,2 | sort > b
comm -23 a b
Here, I'm making two sorted lists. The first contains all the home directories, and the second contains the top parent of any bin subdirectory. Finally I output any items from the first list not present in the second.

Listing all directories except one

I have the following directory structure
libs logs src etc .........
|-- logs
|-- src
|-- inc
"logs" directory is everywhere inside. So I want to list all directories except "logs". What will be shell command for that.
Something like
#!/bin/bash
for dir in `find * -type d`; do
if [[ ${dir} != "{logs}*" ]]; then
echo ${dir}
fi
done
but this does not seem to be working.
Rather than trying to process these things one at a time with checks, why don't you get all directories and just filter out the ones you don't want:
find * -type d | egrep -v '^logs/|/logs/'
The grep simply removes lines containing either logs/ at the start or /logs/ anywhere.
That's going to be a lot faster than individually checking every single directory one-by-one.
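Another common idiom, not used in these answers, is to have find itself skip the logs subtrees with -prune, so it never even descends into them; a sketch:
# print every directory except any named logs (and never descend into them)
find . -type d -name logs -prune -o -type d -print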
As mentioned in the answer above, you can use egrep with | to separate patterns, or, as below, define it all in find:
$ find . -type d -print
.
./logs1
./test
./logs
$ find . -type d -not -name logs -not -name logs1 -print
.
./test
