How to find all files which can be executed by specific user (not current!)
For current I can do it like
find /some/start/dir -executable
But now I want to do something like: find all files which user 'josh' can execute (via the 'user', 'group' or 'other' permission bits). Of course, I do not know user 'josh''s password, so I cannot su to him.
Look up the user id of "josh" in /etc/passwd.
Then run: find /some/start/dir -type f -uid <ID> -perm /111 (note the leading / so that any of the execute bits matches, rather than only the exact mode 111).
I know this is an older thread, but I had to do this recently and it is still relevant.
Since we're talking about *nix permissions, one tedious yet thorough way to approach this is by looking at the group memberships that the user has on the system, i.e.:
# assuming josh is a member of group "grpname"
find / -user josh -perm -100 # gets files owned by josh & are executable
find / -group grpname -perm -010 # gets files with grp ownership and executable
# via group
# Must be repeated for each group josh is in
find / -perm -001 # gets files executable by any user
Note there could be some overlap: files that josh owns may also belong to group "grpname". A sort | uniq would filter those out pretty easily.
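The per-group searches above can be automated by asking id(1) for the user's group list. This is only a sketch under those assumptions; the function name and the sort -u dedup step are my own:

```shell
# Sketch: list files under START that USER can execute via the owner,
# group, or other permission bits. Assumes id(1) and a find that
# understands symbolic -perm modes (GNU/BSD do).
find_exec_by() {
    user=$1; start=${2:-.}
    {
        find "$start" -type f -user "$user" -perm -u+x
        for grp in $(id -Gn "$user"); do
            find "$start" -type f -group "$grp" -perm -g+x
        done
        find "$start" -type f -perm -o+x
    } 2>/dev/null | sort -u    # drop the overlapping hits
}
```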
Here is what I have to do
“ Make a script to find all old, unused files, larger than 1 megabyte, starting at a directory supplied by the user (default: /tmp). Make “old” be more than a month by default, but allow the user to change it. Similarly make the size be one megabyte by default, but allow the user to change it.”
I have attempted several scripts but can’t seem to get this right. Here is what I wrote:
#!/bin/bash
find /home/usr/local/temp -atime +31 –size +1M
# Then to allow permission for everyone
chmod u+x Filename
In –size you seem to be using the Unicode en dash instead of an ASCII hyphen. Otherwise your command is fine.
Consider adding -type f to find only files, not directories.
find /home/usr/local/temp -atime +31 -size +1M -type f
Also, by "allow the user to change it", I think the exercise means letting the user change the parameters (say, 10 days and 5 MB), not giving the user write permission.
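Putting those pieces together, a minimal sketch of the assignment could look like this (the positional-parameter handling and the function name are illustrative, not the official solution):

```shell
# Sketch: old, unused files larger than a threshold, under a
# user-supplied start directory. All three parameters have defaults
# but can be overridden by the caller.
find_old_big() {
    dir=${1:-/tmp}     # start directory, default /tmp
    days=${2:-31}      # "old" threshold in days, user-changeable
    size=${3:-1M}      # size threshold, user-changeable
    find "$dir" -type f -atime "+$days" -size "+$size"
}
```

Called as find_old_big /var/tmp 10 5M it would use 10 days and 5 MB instead of the defaults.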
I need to find a file belonging to a specific user.
I have tried using -O "userName", but it prints: "The -O option must be immediately followed by a decimal integer." I tried it with a lowercase 'o' and it still wouldn't work.
For searches based on the file owner, you should use the -user option.
find /path/to/some/folder -user exampleuser
The -O option is used for query optimization and must be immediately followed by an integer optimization level (0 to 3); it has nothing to do with file ownership.
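For illustration, with GNU find the optimization level goes before the start path; here I just search the current directory for files owned by the invoking user:

```shell
# GNU find: -O<level> (0-3) precedes the path; -user selects by owner
find -O3 . -maxdepth 1 -user "$(id -un)"
```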
I am trying to write a shell script that loops through all the directories under a parent directory and skips any directory that contains an empty folder "I_AM_Already_Processed" at leaf level.
Parent directory is provided as input to shell script as:
. selectiveIteration.sh /Employee
Structure under parent directory is shown below
( Employee directory contains data bifurcated by yearly -> monthly -> daily -> hourly basis )
/Employee/alerts/output/2014/10/08/HOURS/Actual_Files
The shell script is trying to find out which directory is not already processed. For example, let us consider three hours of data for date 10/08/2014:
1. /USD/alerts/output/2014/10/08/2(hourly_directory)/Actual_file +
directory_with_name(I_AM_Already_Processed)
2. /USD/alerts/output/2014/10/08/3(hourly_directory)/Actual_file +
directory_with_name(I_AM_Already_Processed)
3. /USD/alerts/output/2014/10/08/4(hourly_directory)/Actual_file
In the above example, leaf directories 2 and 3 are already processed, as they contain the folder named "I_AM_Already_Processed", whereas directory 4 is not.
So the shell script should skip hourly directories 2 and 3 but should process directory 4 (print this directory in the output).
Research/work I did:
So far I have been able to iterate through the directory structure and visit all folders/files from root to leaf level, but I am not sure how to check for a specific folder and skip the directory if it is present. (I was able to get this far after referring to a few tutorials and older posts on Stack Overflow.)
I am a newbie to shell scripting; this is my first time writing a shell script, so if this is too basic a question, please excuse me. I am trying to learn.
Any suggestion is welcome. Thanks in advance.
To check if some_directory has already been processed, just do something like
find some_directory -type d -links 2 -name 'I_AM_Already_Processed'
which will return the directory name if it has, or nothing if it hasn't. Note that -links 2 tests whether the directory is a leaf (it only has links from its parent and itself, not from any subdirectories). See this answer for more information.
So in a script, you could do
#!/bin/bash
directory_list=(/dir1 /dir2)
for dir in "${directory_list[@]}"; do
    if [[ -n $(find "$dir" -type d -links 2 -name 'I_AM_Already_Processed' -print -quit) ]]; then
        echo 'Has been processed'
    else
        echo 'Has not been processed'
    fi
done
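Building on that test, a sketch that actually prints only the unprocessed hourly directories might look like this (the one-level loop assumes the marker sits directly inside each hourly directory, as in the example above; the function name is illustrative):

```shell
# Sketch: print each subdirectory of PARENT that does NOT contain the
# I_AM_Already_Processed marker directory.
list_unprocessed() {
    parent=$1
    for dir in "$parent"/*/; do
        [ -d "$dir" ] || continue                    # no subdirectories
        if [ ! -d "${dir}I_AM_Already_Processed" ]; then
            printf '%s\n' "${dir%/}"                 # needs processing
        fi
    done
}
```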
As I get a serious server performance warning when installing drupal-commons (an installation profile), I now want to reduce the server load.
Why? I get a message when trying to install Drupal Commons: "Too many files open", it says!
Well, Drupal and its modules (ab)use too many files! 50,000 files and maybe 5,000 directories at most is the host's limit, and that is all they will back up.
So my question: how can I get rid of all those silly translation files and other unnecessary subdivisions that hold only tiny bits of info? How can I get rid of them?
Background: I would expect that, during the installation (or bootstrap cycle), file_exists() is the most expensive built-in PHP function, measured as total time spent calling the function over all invocations in a single request.
Well, now I am trying to get rid of all the overhead (especially the translation files, the so-called .po files) and the unnecessary files contained in drupal-commons 6.x-2.3, in order to get it running on my server.
How do I search for all those .po files recursively - with grep, I guess?
Note: I do not know where they are!
linux-vi17:/home/martin/web_technik/drupal/commons_3_jan_12/commons-6.x-2.3/commons-6.x-2.3 # ls
CHANGELOG.txt ._.htaccess install.php modules themes
._CHANGELOG.txt ._includes INSTALL.txt ._profiles ._update.php
COMMONS_RELEASE_NOTES.txt includes ._INSTALL.txt profiles update.php
._COMMONS_RELEASE_NOTES.txt ._index.php LICENSE.txt ._robots.txt UPGRADE.txt
COPYRIGHT.txt index.php ._LICENSE.txt robots.txt ._UPGRADE.txt
._COPYRIGHT.txt INSTALL.mysql.txt MAINTAINERS.txt ._scripts ._xmlrpc.php
._cron.php ._INSTALL.mysql.txt ._MAINTAINERS.txt scripts xmlrpc.php
cron.php INSTALL.pgsql.txt ._misc ._sites
.directory ._INSTALL.pgsql.txt misc sites
.htaccess ._install.php ._modules ._themes
linux-vi17:/home/martin/web_technik/drupal/commons_3_jan_12/commons-6.x-2.3/commons-6.x-2.3 # grep .po
Anyway, I want to remove all .po files with one bash command - is this possible?
But wait: first of all I want to find all the files and list them, since then I know what I am erasing (or removing).
Well - all language translations in Drupal are named with .po -
how do I find them - with grep?
How do I list them - and subsequently, how do I erase them?
update:
i did the search with
find -type f -name "*.po"
Well, I found approx. 930 files.
Afterwards I removed them all with
6.x-2.3 # find -type f -name "*.po" -exec rm -f {} \;
A final search with the same command
find -type f -name "*.po"
gave no results back, so every .po file was erased!
Many, many thanks for the hints.
greetings
zero
If you want to find all files named *.po in a directory named /some/directory, you can use find:
find /some/directory -type f -name "*.po"
If you want to delete them all in a row (you do have backups, don't you?), then append an action to this command:
find /some/directory -type f -name "*.po" -exec rm -f {} \;
Replace /some/directory with the appropriate value and you should be set.
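As an aside, GNU and BSD find also have a built-in -delete action that avoids spawning rm for every match. A small self-contained demo (the directory and file names here are illustrative, not from the Drupal tree):

```shell
# Demo of find's -delete action (GNU/BSD find). There is no undo, so
# always dry-run the same find without -delete first.
dir=$(mktemp -d)
touch "$dir/a.po" "$dir/b.po" "$dir/keep.txt"
find "$dir" -type f -name "*.po" -delete
ls "$dir"    # only keep.txt remains
```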
The issue with "too many open files" isn't normally because there are too many files in the filesystem, but because there is a limitation to the amount of files an application or user can have open at one time. This issue has been covered on drupal forums, for example, see this thread to solve it more permanently/nicely:
http://drupal.org/node/474152
A few more links about open files:
http://www.cyberciti.biz/tips/linux-procfs-file-descriptors.html
http://blog.thecodingmachine.com/content/solving-too-many-open-files-exception-red5-or-any-other-application
I am using Linux (Ubuntu) and am trying to find some files, but it is not working properly.
I have created some files in my directory structure, for example: World/India/Maharashtra/Pune/filename.xml
When I use the find command like:
find /home/lokesh/Desktop/Testing_India2/Test/World/India/Maharashtra/ -name filename*.xml -mmin -3000
It is giving the result perfectly.
But, when I am using the same command at "World" or "India" level:
find /home/lokesh/Desktop/Testing_India2/Test/World/ -name filename*.xml -mmin -3000
it does not give any result.
I have lots of directories at the "India" level as well as at the "Maharashtra" level, and maybe some directories within "Maharashtra's" inner directories. I have to find each file created in all directories.
And I have mounted all the folders from different machines (I mean, some states come from one machine and some from another).
If someone knows how to solve this problem, please reply as soon as possible.
Double quote your search string, and add -L to make find follow symbolic links:
find -L /home/lokesh/Desktop/Testing_India2/Test/World/ -name "filename*.xml" -mmin -30000
This is something I ran into earlier today, actually, when using the * wildcard. I couldn't get it to continually traverse the subdirectories unless I escaped the * with a backslash.
Give this a try:
find -L /home/lokesh/Desktop/Testing_India2/Test/World/ -name filename\*.xml -mmin -30000
Yes, as mentioned, you have to double quote your -name argument or use a backslash before the *. The reason it fails in one directory but works fine in others is that the * character is used for filename generation by your shell, which of course happens before the find command is executed. If a file in your current directory matches the filename*.xml pattern, it is substituted before find runs, which is not what you want. On the other hand, if nothing in the current directory matches, the * character is passed on to find unmodified. By quoting, you protect the string from shell filename generation.
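The effect is easy to reproduce. In this sketch (the throwaway directory and filenames are illustrative), the unquoted form only ever searches for the literal name the shell expanded in the current directory:

```shell
# Demonstration of shell filename generation vs. a quoted find pattern
tmp=$(mktemp -d) && cd "$tmp"
mkdir sub
touch filename1.xml sub/filename2.xml
# Unquoted: the shell expands filename*.xml to filename1.xml *before*
# find runs, so sub/filename2.xml is never matched:
find . -name filename*.xml
# Quoted: the pattern reaches find intact and matches both files:
find . -name 'filename*.xml'
```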
Regards