Printing directories with simple ls and grep commands on Linux

So I have this command: ls -al -R | grep libbpf.h, and it just dumbly prints
-rw-r--r-- 1 root root 53107 Jan 27 12:05 libbpf.h
I also need the exact subdirectories that contain this file. Is there some option for grep or ls so that the above command also prints something like
-rw-r--r-- 1 root root ./libbpf/src/include/libbpf.h 53107 Jan 27 12:05 libbpf.h
Right now I only know that libbpf.h exists somewhere under the root of the recursive search; I just need the path. Does anyone know how to do this?

You can use the find command:
find "$(pwd -P)" -type f -name "libbpf.h" -ls
If you want only the paths:
find "$(pwd -P)" -type f -name "libbpf.h"
or
find . -type f -name "libbpf.h" -exec realpath {} \;
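If you want output closer to ls -l but with the full path included, GNU find's -printf can format each match. A sketch (the directory layout below just mirrors the question; -printf and its format specifiers are GNU extensions, not POSIX):

```shell
# Sandbox mirroring the question's layout (safe to run anywhere)
cd "$(mktemp -d)"
mkdir -p libbpf/src/include
touch libbpf/src/include/libbpf.h

# %M = symbolic mode, %u/%g = owner/group, %s = size, %p = full path
find "$(pwd -P)" -type f -name "libbpf.h" -printf '%M %u %g %s %p\n'
```

This prints one formatted line per match, with the absolute path as the last field.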

Related

Delete all directories excluding a specific one

On a Linux host, given an absolute path, I want to delete everything except a certain directory.
To simplify things, below is the directory structure; I want to delete all directories except test2.
[root@hostname test]# pwd
/opt/data/test
[root@hostname test]# ls -ltr
total 8
drwxr-xr-x 5 root root 4096 Dec 5 09:33 test1
drwxr-xr-x 2 root root 4096 Dec 5 09:44 test2
[root@hostname test]#
I looked into "How to exclude a directory in find . command" and tried the -prune switch like this:
[root@hostname test]# find /opt/data/test -type d -path test2 -prune \
-o ! -name "test2" -print
/opt/data/test
/opt/data/test/test1
/opt/data/test/test1/ls.txt
/opt/data/test/test1/test13
/opt/data/test/test1/test13/temp.py
/opt/data/test/test1/test13/try.pl
/opt/data/test/test1/test11
/opt/data/test/test1/test11/ls.txt
/opt/data/test/test1/test11/temp.py
/opt/data/test/test1/test11/try.pl
/opt/data/test/test1/test12
/opt/data/test/test1/test12/ls.txt
/opt/data/test/test1/test12/temp.py
/opt/data/test/test1/test12/try.pl
/opt/data/test/test1/temp.py
/opt/data/test/test1/try.pl
/opt/data/test/test2/ls.txt
/opt/data/test/test2/temp.py
/opt/data/test/test2/try.pl
[root@hostname test]#
Now it lists all the folders including /opt/data/test itself, and if I add xargs rm -rf to this, it will delete the parent folder as well. I don't think I understood -path and -name correctly; please help.
Using a simple negation with -not may be easier than pruning:
$ find /opt/data/test -type d -not -name test2
EDIT:
There's no reason to recurse into the subdirectories, since you're going to delete the top-level directories anyway, so you can add -maxdepth and avoid finding the directories inside test2:
$ find /opt/data/test -maxdepth 1 -type d -not -name test2
I was able to achieve the required behavior by adding -mindepth 1 to the find command to exclude the parent directory.
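Putting those pieces together, a deletion sketch (shown here against a throwaway copy of the question's tree so it is safe to run; swap in the real path only after a dry run with -print in place of the -exec):

```shell
# Demo tree mirroring the question (temporary sandbox)
cd "$(mktemp -d)"
mkdir -p test1/test13 test2
touch test1/ls.txt test2/ls.txt

# -mindepth 1 keeps the parent itself out of the results;
# -maxdepth 1 stops find from descending into directories it deletes
find . -mindepth 1 -maxdepth 1 -type d -not -name test2 -exec rm -rf {} +

ls   # only test2 remains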

Deleting a directory starting with a specific string in shell script

When I try to delete all directories starting with tmp, I use this command:
find -type d -name tmp* -exec rmdir {} \;
It does the trick, but the command exits with an error code:
find: `./tmp09098': No such file or directory
which fails my build.
Can anyone tell me how I can delete those folders without getting the error?
After trying what @anubhava suggested and quoting 'tmp*',
find -type d -name 'tmp*' -exec rmdir {} \;
I still get the same error:
find: `./tmp0909565g': No such file or directory
find: `./tmp09095': No such file or directory
When running:
find -type d -name 'tmp*' -exec ls -ld '{}' \;
This is the result:
drwxr-xr-x 2 root root 4096 Jun 16 10:08 ./tmp0909565g
drwxr-xr-x 2 root root 4096 Jun 16 10:07 ./tmp09095
drwxr-xr-x 2 root root 4096 Jun 16 10:08 ./tmp09094544656
You should quote the pattern; otherwise it will be expanded by the shell on the command line:
find . -type d -name 'tmp*' -mindepth 1 -exec rm -rf '{}' \; -prune
-prune causes find not to descend into the matched directory.
This works for me:
#!/bin/bash
tmpdirs=$(find . -type d -name "tmp*")
echo "$tmpdirs" |
while read -r dir; do
    echo "Removing directory $dir"
    rm -r "$dir"
done
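Another option is -depth, which makes find visit a directory's contents before the directory itself, so it never revisits a path that has already been removed. A sketch in a throwaway sandbox (note rm -rf, unlike rmdir, also removes non-empty directories):

```shell
# Sandbox with nested tmp* directories (safe to run anywhere)
cd "$(mktemp -d)"
mkdir -p tmp1/tmp2 keep tmp3

# -depth processes ./tmp1/tmp2 before ./tmp1, so no
# "No such file or directory" errors are produced
find . -depth -type d -name 'tmp*' -exec rm -rf {} \;

ls   # only "keep" remains
```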

CVS Tagging recursively from within a shell script

My team uses CVS for revision control. I need to develop a shell script which extracts the content from a file and applies a CVS tag with that content to all .txt files (including the text files present in the subdirectories of the current directory). The file from which the content is extracted and the script are both present in the same directory.
I tried running the script :
#!bin/bash
return_content(){
content=$(cat file1)
echo $content
}
find . -name "*.txt" -type f -print0|grep - v CVS|xargs -0 cvs tag $content
file1 => the file from which the content is extracted
"abc" => the content inside file1
Output:
abc
find: paths must precede expression
Usage: find [path...] [expression]
cvs tag: in directory
cvs [tag aborted]: there is no version here; run 'cvs checkout' first
I cannot figure out the problem. Please help
There are a few problems with the script.
1) The shebang line is missing the root /.
You have #!bin/bash and it should be #!/bin/bash
2) The -v option to grep has a space between the - and the v (and it shouldn't).
3) You don't actually call the return_content function in the last line - you refer to a variable inside the function. Perhaps the last line should look like:
find . -name "*.txt" -type f -print0|grep -v CVS|\
xargs -0 cvs tag $( return_content )
4) Even after fixing all that, you may find that grep complains because -print0 is passing it binary data (there are embedded nulls) and grep is expecting text. You can add arguments to the find command to perform the function of the grep and cut grep out entirely, like this:
find . -type d -name CVS -prune -o -type f -name "*.txt" -print0 |\
xargs -0 cvs tag $( return_content )
find will recurse through all the entries in the current directory (and below), discarding any directory named CVS (and everything underneath it), and of the rest it will choose only files named *.txt.
I tested my version of that line with:
find . -type d -name CVS -prune -o -type f -name "*.txt" -print0 |\
xargs -t -0 echo ls -la
I created a couple of files with spaces in the names and .txt extensions in the directory so the script would show results:
bjb@spidy:~/junk/find$ find . -type d -name CVS -prune -o \
-type f -name "*.txt" -print0 | xargs -t -0 ls -la
ls -la ./one two.txt ./three four.txt
-rw-r--r-- 1 bjb bjb 0 Jun 27 00:44 ./one two.txt
-rw-r--r-- 1 bjb bjb 0 Jun 27 00:45 ./three four.txt
bjb@spidy:~/junk/find$
The -t argument makes xargs show the command it is about to run. I used ls -la instead of cvs tag - it should work similarly for cvs.
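For reference, the script with all four fixes applied might look like the sketch below. The cvs tag call is replaced by echo cvs tag so you can inspect the exact command line before running it for real; file1 is assumed to contain a single valid tag name such as "abc":

```shell
#!/bin/bash
# Print the tag name stored in file1 (assumed to be one word, e.g. "abc")
return_content() {
    cat file1
}

# Skip CVS directories entirely, pick up all .txt files, and pass them
# null-delimited to xargs. Drop the "echo" once the output looks right.
find . -type d -name CVS -prune -o -type f -name "*.txt" -print0 |
    xargs -0 echo cvs tag "$(return_content)"
```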

Find and rename a directory

I am trying to find and rename a directory on a Linux system.
The folder name is something like: thefoldername-23423-431321
thefoldername is consistent but the numbers change every time.
I tried this:
find . -type d -name 'thefoldername*' -exec mv {} newfoldername \;
The command actually works and renames the directory, but I get an error in the terminal saying that there is no such file or directory.
How can I fix it?
It's a harmless error which you can get rid of with the -depth option.
find . -depth -type d -name 'thefoldername*' -exec mv {} newfoldername \;
Find's normal behavior is to process directories and then recurse into them. Since you've renamed the directory, find complains when it tries to recurse into it. The -depth option tells find to recurse first, then process the directory afterwards.
It's missing the -execdir option! As stated in the man page of find:
-execdir command {};
Like -exec, but the specified command is run from the subdirectory containing the matched file, which is not normally the directory in which you started find.
find . -depth -type d -name 'thefoldername*' -execdir mv {} newfoldername \;
With the previous answer, my folders' contents disappeared.
This is my solution. It works well:
for i in $(find . -type d -name 'oldFolderName'); do
    dirname=$(dirname "$i")
    mv "$dirname/oldFolderName" "$dirname/newFolderName"
done
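A variant of that loop which also survives spaces in paths: let find hand the matched directories to an embedded shell, with -depth so the deepest matches are renamed first. A sketch in a throwaway sandbox (folder names are placeholders):

```shell
# Sandbox with matches at two depths, one under a name with a space
cd "$(mktemp -d)"
mkdir -p "a b/oldFolderName" c/oldFolderName

# The embedded sh receives each batch of paths as "$@" and renames
# them one by one, safely quoted
find . -depth -type d -name 'oldFolderName' -exec sh -c '
    for dir in "$@"; do
        mv "$dir" "$(dirname "$dir")/newFolderName"
    done' sh {} +

find . -type d -name 'newFolderName'
```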
.../ABC -> .../BCD
find . -depth -type d -name 'ABC' -execdir mv {} BCD \;
Replace 1100 with the old value and 2200 with the new value that you want.
Example:
for i in $(find . -type d -iname '1100'); do echo "mv "$i" "$i"__" >> test.txt; sed 's/1100__/2200/g' test.txt > test_1.txt; bash test_1.txt; rm test*.txt; done
Proof:
[user@server test]$ ls -la check/
drwxr-xr-x. 1 user user 0 Jun 7 12:16 1100
[user@server test]$ for i in $(find . -type d -iname '1100'); do echo "mv "$i" "$i"__" >> test.txt; sed 's/1100__/2200/g' test.txt > test_1.txt; bash test_1.txt; rm test*.txt; done
[user@server test]$ ls -la check/
drwxr-xr-x. 1 user user 0 Jun 7 12:16 2200
Here __ is used only as a temporary marker for sed to change the name; it has no other significance.

Find all writable files in the current directory

I want to quickly identify all writable files in the directory. What is the quick way to do it?
find -type f -maxdepth 1 -writable
The -writable option will find files that are writable by the current user. If you'd like to find files that are writable by anyone (or even other combinations), you can use the -perm option:
find -maxdepth 1 -type f -perm /222
This will find files that are writable by their owner (whoever that may be):
find -maxdepth 1 -type f -perm /200
Various characters can be used to control the meaning of the mode argument:
/ - any permission bit
- - all bits (-222 would mean all - user, group and other)
no prefix - exact specification (222 would mean no permissions other than write)
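A quick way to see the three prefixes in action (a sketch with throwaway files; the names are arbitrary):

```shell
# Sandbox: one file per interesting mode
cd "$(mktemp -d)"
touch a b c
chmod 200 a   # owner write only
chmod 222 b   # all three write bits, nothing else
chmod 644 c   # owner write plus read bits

find . -maxdepth 1 -type f -perm /222   # any write bit set:  a, b and c
find . -maxdepth 1 -type f -perm -222   # all write bits set: b only
find . -maxdepth 1 -type f -perm 222    # exactly mode 0222:  b only
```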
To find writable files regardless of owner, group or others, you can check for the w flag in the file permission column of ls:
ls -l | awk '$1 ~ /^.*w.*/'
$1 is the first field (i.e. the permission block of ls -l); the regular expression just says to find the letter "w" in field one. That's all.
If you want to find owner write permission:
ls -l | awk '$1 ~ /^..w/'
If you want to find group write permission:
ls -l | awk '$1 ~ /^.....w/'
If you want to find others' write permission:
ls -l | awk '$1 ~ /w.$/'
-f will test for a file
-w will test whether it's writable
Example:
$ for f in *; do [ -f "$f" ] && [ -w "$f" ] && echo "$f"; done
If you are in shell use
find . -maxdepth 1 -type f -writable
see man find
You will find you get better answers for this type of question on superuser.com or serverfault.com
If you are writing code not just using shell you may be interested in the access(2) system call.
This question has already been asked on serverfault
EDIT: @ghostdog74 asked whether this would still find the file if its write permissions were removed. The answer: no, this only finds files that are writable.
dwaters@eirene ~/temp
$ cd temp
dwaters@eirene ~/temp/temp
$ ls
dwaters@eirene ~/temp/temp
$ touch newfile
dwaters@eirene ~/temp/temp
$ ls -alph
total 0
drwxr-xr-x+ 2 dwaters Domain Users 0 Mar 22 13:27 ./
drwxrwxrwx+ 3 dwaters Domain Users 0 Mar 22 13:26 ../
-rw-r--r-- 1 dwaters Domain Users 0 Mar 22 13:27 newfile
dwaters@eirene ~/temp/temp
$ find . -maxdepth 1 -type f -writable
./newfile
dwaters@eirene ~/temp/temp
$ chmod 000 newfile
dwaters@eirene ~/temp/temp
$ ls -alph
total 0
drwxr-xr-x+ 2 dwaters Domain Users 0 Mar 22 13:27 ./
drwxrwxrwx+ 3 dwaters Domain Users 0 Mar 22 13:26 ../
---------- 1 dwaters Domain Users 0 Mar 22 13:27 newfile
dwaters@eirene ~/temp/temp
$ find . -maxdepth 1 -type f -writable
dwaters@eirene ~/temp/temp
for var in *
do
    if [ -f "$var" ] && [ -w "$var" ]
    then
        echo "$var has write permission"
    else
        echo "$var does not have write permission"
    fi
done
The problem with find -writable is that it's not portable and it's not easy to emulate correctly with portable find operators. If your version of find doesn't have it, you can use touch to check if the file can be written to, using -r to make sure you (almost) don't modify the file:
find . -type f | while IFS= read -r f; do touch -r "$f" "$f" && echo "File $f is writable"; done
The -r option for touch is in POSIX, so it can be considered portable. Of course, this will be much less efficient than find -writable.
Note that touch -r will update each file's ctime (time of last change to its meta-data), but one rarely cares about ctime anyway.
Find files writeable by owner:
find ./ -perm /u+w
Find files writeable by group:
find ./ -perm /g+w
Find files writeable by anyone:
find ./ -perm /o+w
Find files with defined permission:
find ./ -type d -perm 0777
find ./ -type d -perm 0755
find ./ -type f -perm 0666
find ./ -type f -perm 0644
Disable recursion with:
-maxdepth 1
stat -c "%A->%n" *| sed -n '/^.*w.*/p'
I know this is a very old thread, however...
The command below helped me: find . -type f -perm /+w
You can add -maxdepth depending on how many directory levels down you want to search.
I am using Linux 2.6.18-371.4.1.el5.
If you want to find all files that are writable by Apache et al., then you can do this:
sudo su www-data
find . -writable 2>/dev/null
Replace www-data with nobody, apache, or whatever your web user is.
