Find all writable files in the current directory - linux

I want to quickly identify all writable files in the current directory. What is a quick way to do it?

find . -maxdepth 1 -type f -writable

The -writable option will find files that are writable by the current user. If you'd like to find files that are writable by anyone (or even other combinations), you can use the -perm option:
find -maxdepth 1 -type f -perm /222
This will find files that are writable by their owner (whoever that may be):
find -maxdepth 1 -type f -perm /200
Various characters can be used to control the meaning of the mode argument:
/ - any permission bit
- - all bits (-222 would mean writable by all: user, group and other)
no prefix - exact specification (222 would mean write permission only, and nothing else)
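As a quick side-by-side sketch of the three forms (GNU find assumed):
find . -maxdepth 1 -type f -perm -222   # all write bits set (user, group and other)
find . -maxdepth 1 -type f -perm /222   # at least one write bit set
find . -maxdepth 1 -type f -perm 222    # mode is exactly 222 and nothing else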

To find writable files regardless of whether it is the owner, group or others that have write access, you can check for the w flag in the permission column of ls.
ls -l | awk '$1 ~ /w/'
$1 is the first field (i.e. the permission block of ls -l); the regular expression just says: find the letter "w" anywhere in field one. That's all.
If you want to find files with owner write permission:
ls -l | awk '$1 ~ /^..w/'
If you want to find files with group write permission:
ls -l | awk '$1 ~ /^.....w/'
If you want to find files with others write permission:
ls -l | awk '$1 ~ /w.$/'
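To print just the filenames rather than the whole ls -l line, you can attach an action to the pattern. A sketch, assuming filenames without spaces (awk splits fields on whitespace):
ls -l | awk '$1 ~ /^..w/ {print $NF}'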

-f tests whether it is a regular file
-w tests whether it is writable
Example:
$ for f in *; do [ -f "$f" ] && [ -w "$f" ] && echo "$f"; done

If you are in a shell, use
find . -maxdepth 1 -type f -writable
See man find.
You will find you get better answers for this type of question on superuser.com or serverfault.com
If you are writing code not just using shell you may be interested in the access(2) system call.
This question has already been asked on Server Fault.
EDIT: @ghostdog74 asked whether this would still find the file if you removed its write permissions. The answer: no, this only finds files that are writable.
dwaters@eirene ~/temp
$ cd temp
dwaters@eirene ~/temp/temp
$ ls
dwaters@eirene ~/temp/temp
$ touch newfile
dwaters@eirene ~/temp/temp
$ ls -alph
total 0
drwxr-xr-x+ 2 dwaters Domain Users 0 Mar 22 13:27 ./
drwxrwxrwx+ 3 dwaters Domain Users 0 Mar 22 13:26 ../
-rw-r--r-- 1 dwaters Domain Users 0 Mar 22 13:27 newfile
dwaters@eirene ~/temp/temp
$ find . -maxdepth 1 -type f -writable
./newfile
dwaters@eirene ~/temp/temp
$ chmod 000 newfile
dwaters@eirene ~/temp/temp
$ ls -alph
total 0
drwxr-xr-x+ 2 dwaters Domain Users 0 Mar 22 13:27 ./
drwxrwxrwx+ 3 dwaters Domain Users 0 Mar 22 13:26 ../
---------- 1 dwaters Domain Users 0 Mar 22 13:27 newfile
dwaters@eirene ~/temp/temp
$ find . -maxdepth 1 -type f -writable
dwaters@eirene ~/temp/temp

for var in *
do
  if [ -f "$var" ] && [ -w "$var" ]
  then
    echo "$var has write permission"
  else
    echo "$var does not have write permission"
  fi
done

The problem with find -writable is that it's not portable and it's not easy to emulate correctly with portable find operators. If your version of find doesn't have it, you can use touch to check if the file can be written to, using -r to make sure you (almost) don't modify the file:
find . -type f | while IFS= read -r f; do touch -r "$f" "$f" && echo "File $f is writable"; done
The -r option for touch is in POSIX, so it can be considered portable. Of course, this will be much less efficient than find -writable.
Note that touch -r will update each file's ctime (time of last change to its meta-data), but one rarely cares about ctime anyway.
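If you also need to handle filenames containing newlines, which break the read loop above, here is a sketch of the same touch -r trick driven by -exec instead of a pipe, assuming only POSIX find and sh:
find . -type f -exec sh -c 'touch -r "$1" "$1" 2>/dev/null && printf "File %s is writable\n" "$1"' sh {} \;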

Find files writeable by owner:
find ./ -perm /u+w
Find files writeable by group:
find ./ -perm /g+w
Find files writeable by anyone:
find ./ -perm /o+w
Find files with a specific permission:
find ./ -type d -perm 0777
find ./ -type d -perm 0755
find ./ -type f -perm 0666
find ./ -type f -perm 0644
Disable recursion with:
-maxdepth 1
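Putting these pieces together, a sketch (GNU find assumed) that lists regular files in the current directory only which are group- or other-writable:
find . -maxdepth 1 -type f -perm /g+w,o+w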

stat -c "%A->%n" * | sed -n '/w/p'

I know this is a very old thread, however...
The below command helped me: find . -type f -perm /+w
You can use -maxdepth based on how many directory levels below you want to search.
I am using Linux 2.6.18-371.4.1.el5.

If you want to find all files that are writable by apache et al., then you can do this:
sudo su www-data
find . -writable 2>/dev/null
Replace www-data with nobody or apache or whatever your web user is.
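If your sudo policy allows running a single command as another user, the same check works as a one-liner (same caveat about substituting your actual web user):
sudo -u www-data find . -writable 2>/dev/null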

Related

Printing the directory with simple ls and grep commands - Linux

So I have this command ls -al -R | grep libbpf.h and it just dumbly prints
-rw-r--r-- 1 root root 53107 Jan 27 12:05 libbpf.h
I also need the exact subdirectory that contains this file. Is there a way I can use the above command with some option for grep or ls so that it also prints something like
-rw-r--r-- 1 root root ./libbpf/src/include/libbpf.h 53107 Jan 27 12:05 libbpf.h
Right now I only know that libbpf.h exists somewhere below the root directory; I want the recursive search to also give me the path. Does anyone know how to do this?
You can use the find command:
find "$(pwd -P)" -type f -name "libbpf.h" -ls
If you want only the paths:
find "$(pwd -P)" -type f -name "libbpf.h"
or
find . -type f -name "libbpf.h" -exec realpath {} \;
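If you want the ls-style details and the relative path on one line, GNU find's -printf can produce that directly. A sketch (the -printf directives are a GNU extension, and the format below only approximates ls -l):
find . -type f -name "libbpf.h" -printf '%M %u %g %s %TY-%Tm-%Td %TH:%TM %p\n'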

Listing files and copying them in Unix

The purpose is to copy files generated in the last 2 hours. Here is the small script:
a=`find . -mmin -120 -ls`
cp $a /tmp/
echo $a
401 1 drwxr-x--- 2 oracle oinstall 1024 Mar 26 11:00 . 61
5953 -rw-r----- 1 oracle oinstall 6095360 Mar 26 11:00 ./file1
5953 -rw-r----- 1 oracle oinstall 6095360 Mar 26 11:00 ./file2
I get the following error:
cp: invalid option -- 'w'
Try `cp --help' for more information.
How can I fix the script?
The -ls is giving you ls-style output. Try dropping that and you should just get the relative path to the file, which should be more like what you want. Or see Biffen's comment on your question; that seems like the approach I would have taken.
One problem is that -ls will print a lot of things besides the filenames, and they will be passed to cp, which will be confused. So the first thing to do is stop using -ls. (In the future you can use set -x to see what gets executed; it should help you debug this type of problem.)
Another problem is that the output of find can contain spaces and other things (imagine a file named $(rm -r *)) that can't simply be passed as arguments to cp.
I see three different solutions:
Use a single find command with -exec:
find . -mmin -120 -exec cp {} /tmp/ \;
Use xargs:
find . -mmin -120 -print0 | xargs -0 cp -t /tmp/
(Note the use of -t with cp to account for the swapped arguments.)
Iterate over the output of find:
while IFS='' read -r -d '' file
do
cp "${file}" /tmp/
done < <( find . -mmin -120 -print0 )
(Caveat: I haven't tested any of the above.)
All you have to do is extract only the filenames. So change the find command to the following:
a=`find . -mmin -120 -type f`
cp $a /tmp/
The above find command captures only regular files, and finds only files that were modified in the last 120 minutes. Or do it with a single find command like below:
find . -mmin -120 -type f -exec cp '{}' /tmp/ \;

How can I get the owner of every file in a directory in Linux?

I need to check if root is the owner of every file in a particular directory. I can do
stat --format=%u /directory/name/here
to get the owner of the directory itself, but not the files in it.
My other idea was to do
ls -lL | grep "^-\|^d" | cut -d ' ' -f 2
but that doesn't work if the last byte in the permissions is a space and not a '.'.
This is also CentOS if that matters.
You can use find:
find /tmp -type f -printf '%u\n' | sort -u
lightdm
root
tiago
If you need the UID in numeric form, like with stat:
find /tmp -type f -printf '%U\n' | sort -u
0
1000
104
You're asking two different questions.
I need to check if root is the owner of every file in a particular directory
To find any files that are not owned by root, you can do:
find /yourdir ! -user root
If it returns any filenames at all, then root is not the owner of every file in the particular directory.
How can I get the owner of every file in a directory in Linux?
To print every file in the directory with username:
find /yourdir -printf '%u %p\n'
And if the final step would be to chown the files not owned by root, you can simply do chown -R root /yourdir, since there's no harm in chowning root's files to root.
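If you would rather touch only the offending files instead of re-chowning everything, the two steps can be combined. A sketch (run as root):
find /yourdir ! -user root -exec chown root {} +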
Try
find /your/dir/ -type f -exec stat --format='%u %n' '{}' \;
I added %n to display the file name.
Read find(1) for more info about find. You may want -maxdepth 1 to avoid descending deeply into /your/dir/.
for F in /directory/*; do stat --format='%u' "$F"; done
And optionally add dotglob option to match files beginning with . as well:
shopt -s dotglob
for F in /directory/*; do stat --format='%u' "$F"; done
* --format is equivalent to -c.

Find and rename a directory

I am trying to find and rename a directory on a Linux system.
The folder name is something like thefoldername-23423-431321.
thefoldername is consistent but the numbers change every time.
I tried this:
find . -type d -name 'thefoldername*' -exec mv {} newfoldername \;
The command actually works and renames that directory, but I get an error in the terminal saying that there is no such file or directory.
How can I fix it?
It's a harmless error which you can get rid of with the -depth option.
find . -depth -type d -name 'thefoldername*' -exec mv {} newfoldername \;
Find's normal behavior is to process directories and then recurse into them. Since you've renamed it, find complains when it tries to recurse. The -depth option tells find to recurse first, then process the directory afterwards.
It's missing the -execdir option! As stated in the find man page:
-execdir command {};
Like -exec, but the specified command is run from the subdirectory containing the matched file, which is not normally the directory in which you started find.
find . -depth -type d -name 'thefoldername*' -execdir mv {} newfoldername \;
With the previous answer my folder's contents disappeared.
This is my solution. It works well:
for i in $(find . -type d -name 'oldFolderName')
do
  dirname=$(dirname "$i")
  mv "$dirname/oldFolderName" "$dirname/newFolderName"
done
.../ABC -> .../BCD
find . -depth -type d -name 'ABC' -execdir mv {} BCD \;
Replace 1100 with the old value and 2200 with the new value you want to replace it with.
Example:
for i in $(find . -type d -iname '1100');do echo "mv "$i" "$i"__" >> test.txt; sed 's/1100__/2200/g' test.txt > test_1.txt; bash test_1.txt ; rm test*.txt ; done
Proof:
[user@server test]$ ls -la check/
drwxr-xr-x. 1 user user 0 Jun 7 12:16 1100
[user@server test]$ for i in $(find . -type d -iname '1100');do echo "mv "$i" "$i"__" >> test.txt; sed 's/1100__/2200/g' test.txt > test_1.txt; bash test_1.txt ; rm test*.txt ; done
[user@server test]$ ls -la check/
drwxr-xr-x. 1 user user 0 Jun 7 12:16 2200
Here __ in the sed expression is used only as a marker for the rename; it has no other significance.

BASH - counting the number of executable files

I'm trying to find the executable files in a folder and their total count. The files are shown but the total is not. My code is below; can someone point out where I am making mistakes? I am just a newbie trying to learn some bash scripting, and I hope this is the right way of doing it. Thanks!
#!/bin/bash
To="home/magie/d2"
cd "$To"
find . -type f -perm 755
if
find . -type f -perm 755
then
echo | echo wc -l
fi
If you want to find all the executable files, then use this command:
find home/magie/d2 -type f -perm -u+rx | wc -l
OR
find home/magie/d2 -type f -perm /111 | wc -l
(Older versions of GNU find spelled this -perm +111, but that syntax has since been removed.)
All the other answers here find files with permission 755 only; however, keep in mind that files with mode 744 or even 700 are also executable by their owner.
Just remove the if structure and the echoes:
#!/bin/bash
To="home/magie/d2"
cd "$To"
find . -type f -perm 755
find . -type f -perm 755 | wc -l
Use /111 to find any file that has any of the execute bits set.
find . -type f -perm /111 | wc -l
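GNU find also has an -executable test, the analogue of the -writable test used earlier, which checks whether the current user can execute the file (it consults access(2), so ACLs are respected and the result can differ from the raw mode bits):
find home/magie/d2 -type f -executable | wc -l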
I think I'd do something like this:
#!/bin/bash
dir=$1
files="$(find "$dir" -perm 755)"
total=$(wc -l <<< "$files")
echo "$files"
echo "Total: $total"
where the desired directory has to be passed as a command-line argument, and the quotes are used to preserve the line breaks that wc later needs to correctly count the number of lines.
From the command line a simple one-liner should do the trick -
wc -l < <(find /home/magie/d2 -type f -perm 755)
<(..) is process substitution.
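One caveat with all of the wc -l counts above: a filename containing a newline is counted twice. If your find is GNU, a sketch that sidesteps this by printing one character per match and counting characters instead:
find /home/magie/d2 -type f -perm /111 -printf '.' | wc -c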
