I am entirely new to shell scripts. I know how to change the mode for a single file.
But I need to meet the following requirement, and my operating system is Red Hat Enterprise Linux:
Find all files in a directory that have mode 640, and change it to 644.
Likewise, I have 10 directories in which I need to recursively find all files and change their mode to 644.
Afterwards, send an email listing the file names whose mode was changed.
I would appreciate your kind assistance in completing this requirement.
Some research points to get you going.
The find command can be used to find files of varying properties under a given point.
You can use it with -type to select specific file types (regular files, directories, etc).
You can use it with -maxdepth to restrict how deep it will go down the tree.
You can use it with -perm to select only files with specific permissions.
You can use it with -print to output the filenames, including capturing them to a text file for later mailing (with a tool like mailx).
You can use it with -exec to carry out arbitrary commands on each file matching the conditions.
chmod is the command to change permissions.
For example, the following command finds all regular files named *.dat in the current directory (no subdirectories) that have permission 640, then changes those permissions to 644 (note that -maxdepth must come before the other tests):
find . -maxdepth 1 -type f -name '*.dat' -perm 640 -exec chmod 644 {} ';'
All these options, and more, can be found in the manpage for find with the command:
man find
or by looking for some good explanations, such as the GNU find documentation.
However, find is not a tool for the faint of heart; it will bite you at every opportunity. Expect to ask at least another ten questions here before you get what you need :-)
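That said, here is a minimal sketch putting those pieces together. The directory list, log file path, and recipient address are placeholders you would replace with your own:

#!/bin/sh
# Hypothetical list of the directories to process -- adjust to your setup.
DIRS="/data/dir1 /data/dir2 /data/dir3"
LOG=/tmp/changed_files.txt
: > "$LOG"    # start with an empty log

for d in $DIRS; do
    # Recursively find regular files with mode exactly 640,
    # record their names, and change them to 644.
    find "$d" -type f -perm 640 -print -exec chmod 644 {} ';' >> "$LOG"
done

# Mail the list of changed files, if any were found.
if [ -s "$LOG" ]; then
    mailx -s "Permissions changed to 644" admin@example.com < "$LOG"
fi

Note that -print runs before -exec here, so a name is logged even if the chmod itself fails; treat this as a starting point, not a finished script.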
Related
I've started to work with Docker for local development, moving from installing everything on my Mac to containers. Looking through a number of projects, I regularly see the following shell commands:
find /www -type d -exec chmod 750 {} \;
find /www -type f -exec chmod 640 {} \;
Firstly, what are they trying to achieve; secondly, what do the commands actually mean; and lastly, why/when would you want or need to use this?
I recently duplicated and modified another project and found that pulling these commands out seemed to make no difference (fair enough, it was no longer based on the same base container... but still).
Any glimmer of enlightenment would be greatly appreciated.
EDITS:
That handy link in the comments below to explainshell tells us:
What: find all the folders in /www and execute the chmod command, changing the permissions to 750
- still unsure of 750, and more importantly why you would do this.
The commands set all files and directories to be readable and writable by the owner and readable by the group, but the files cannot be executed by anyone.
You might want to read up on unix permissions in a bit more detail first.
find /www -type f -exec chmod 640 {} \;
Find all files under /www and set the user to have read and write access (6) and the group to have read access (4). Other users have no access (0).
find /www -type d -exec chmod 750 {} \;
Find all directories under /www and set the user to have read, write, and execute permissions (7) and the group to have read and execute permissions (5) on those directories. Other users have no permissions (0).
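To make the numeric modes concrete, a listing after running both commands would look something like this (names, owner, and sizes here are hypothetical):

ls -ld /www/site /www/site/index.html
# drwxr-x--- 2 owner group 4096 Jan 1 00:00 /www/site            (mode 750)
# -rw-r----- 1 owner group  123 Jan 1 00:00 /www/site/index.html (mode 640)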
The \; after each -exec terminates the command; it must be escaped when run in a shell so it is not interpreted as a regular ;, which would end the shell command itself. The same can be achieved with +, which is easier to read (no escaping needed) and more efficient, because find passes many file names to a single invocation of the command instead of running it once per file. That efficiency can change the output ordering, which matters if you are relying on stdout/stderr somewhere else.
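For example, this variant hands large batches of file names to a single chmod invocation instead of forking chmod once per file:

find /www -type f -exec chmod 640 {} +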
Execute permission on a directory means that a user can change into the directory and access the files inside. Since a directory can't be executed in the sense of an executable file, the execute bit was overloaded to mean something else.
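A quick way to see this for yourself, using a throwaway directory:

mkdir demo && touch demo/file
chmod 600 demo     # read bit set, execute bit cleared
ls demo            # listing the names may still work (read bit)...
cat demo/file      # ...but reaching anything inside fails: Permission denied
chmod 700 demo     # restore the execute bit and access works again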
The link Cyrus posted to explainshell.com is an excellent tool as well.
This command is slow: find / -name 'program.c' 2>/dev/null
1) Any faster alternatives?
2) Is there an alternative to the above command to search for a file within a specific nested directory (but not the entire system)?
The first / in your command is the base directory from which find will begin searching. You can specify any directory you like, so if you know, for example, that program.c is somewhere in your home directory, you could do find ~ -name 'program.c'; or if it's in, say, /usr/src, do find /usr/src -name 'program.c'.
That should help with both 1 and 2.
If you want a command other than find that can be faster, check out mlocate. If you've done a recent updatedb (or had cron do it for you overnight), you can run locate <pattern> and it will show you every file or directory name matching that pattern, usually quite fast.
For fast searching, you probably want locate. It is usually set up to do a daily scan of the filesystem and index the files.
http://linux.die.net/man/1/locate
Although locate and updatedb cover the whole system, the search is usually much faster than crawling the tree with find.
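A minimal usage sketch (updatedb usually needs root):

sudo updatedb       # refresh the file-name database
locate program.c    # query the index; near-instant compared to find /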
I am using Fedora 15 on my machine. I am trying to find folders with names like apache-tomcat-7.0.37 by searching the entire file system.
I have some folders like the following under /opt/tomcat/:
apache-tomcat-7.0.37
apache-tomcat-6.0.34
apache-tomcat-7.0.67
.........
And some folders under /usr/share/tomcat/:
apache-tomcat-4.0.7
apache-tomcat-6.0.4
apache-tomcat-8.0.6
.........
So what I want is to locate/find/search all these folder paths from the Linux terminal using a command.
I have googled a lot and found commands like locate and find, as below:
find / -name 'apache-tomcat*'
locate apache-tomcat
The above commands list all the folders, including extra unwanted information. What I actually want is to search only for folders whose names look like apache-tomcat-x.x.x or apache-tomcat-xx.xx.xx.
The starting words apache-tomcat are always the same; only the integer part (the version number) changes. So I want to find all the folders with different version numbers, for example by using a regular expression in place of the integer parts.
Can anyone please let me know how to search for the folders in the above scenario, using a command with regular expressions or something similar that finds all the folders named apache-tomcat-xx.x.xxx?
This should find all files, directories, links, etc. that match the pattern apache-tomcat-X.Y.Z, where X, Y, and Z are integers.
find . -regextype sed -regex ".*/apache-tomcat-[0-9]\+\.[0-9]\+\.[0-9]\+"
If you're looking only for directories, use this variant:
find . -type d -regextype sed -regex ".*/apache-tomcat-[0-9]\+\.[0-9]\+\.[0-9]\+"
If you want to search the entire system starting at /, use this variant:
find / -type d -regextype sed -regex ".*/apache-tomcat-[0-9]\+\.[0-9]\+\.[0-9]\+"
You can provide a suitable regular expression to locate in order to do a fast search of your entire system (the dots are escaped so they match literal dots rather than any character):
locate -b --regex "apache-tomcat-[0-9]+\.[0-9]+\.[0-9]+$"
As with any use of locate, the file database it uses will need to be sufficiently up-to-date. If you have sufficient permissions, you can do sudo updatedb to force an update.
I have to make a cronjob to remove files older than 99 days in a particular directory but I'm not sure the file names were made by trustworthy Linux users. I must expect special characters, spaces, slash characters, and others.
Here is what I think could work:
find /path/to/files -mtime +99 -exec rm {} \;
But I suspect this will fail if there are special characters, or if it finds a file that's read-only (cron may not be run with superuser privileges). I need it to carry on if it meets such files.
When you use -exec rm {} \;, you shouldn't have any problems with spaces, tabs, returns, or special characters because find calls the rm command directly and passes it the name of each file one at a time.
Directories won't be removed with that command because you aren't passing it the -r parameter, and you probably don't want to; that could end up being a bit dangerous. You might also want to include the -f parameter to force removal in case you don't have write permission. Run the cron script as root, and you should be fine.
The only thing I'd worry about is that you might end up hitting a file that you don't want to remove but that hasn't been modified in the past 100 days. For example, the password to stop the auto-destruct sequence at your work: chances are that file hasn't been modified in the past 100 days, but once that auto-destruct sequence starts, you wouldn't want to be the one blamed because the password was lost.
Okay, more reasonable examples might be applications that are used but rarely modified, or someone's resume that hasn't been updated because they're holding a current job, etc.
So, be careful with your assumptions. Just because a file hasn't been modified in 100 days doesn't mean it isn't used. A better criterion (although still questionable) is whether the file has been accessed in the last 100 days. Maybe this as a final command:
find /path/to/files -atime +99 -type f -exec rm -f {} \;
One more thing...
Some versions of find have a -delete option which can be used instead of the -exec rm action:
find /path/to/files -atime +99 -delete
That will delete matching files and directories, though directories only if they are empty; -delete cannot remove a non-empty directory.
One more small recommendation: for the first week, save the names of the files found in a log file instead of removing them, and examine the log. This way you make sure you're not deleting something important. Once you're satisfied that nothing in the log is something you need to keep, revert the find command to do the delete for you.
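A sketch of that dry-run approach (the path and log name are placeholders):

# week one: only log what would be removed
find /path/to/files -atime +99 -type f -print > /tmp/to_delete.log
# review /tmp/to_delete.log by hand, then later switch to:
# find /path/to/files -atime +99 -type f -delete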
If you run rm with the -f option, your file is going to be deleted regardless of whether you have write permission on the file itself (what matters is write permission on the containing folder). So either you can erase all the files in the folder, or none. Also add -r if you want to erase subfolders.
And I have to say it: be very careful! You're playing with fire ;) I suggest you debug with something less harmful, like the file command.
You can test this out by creating a bunch of files, e.g.:
touch {a,b,c,d,e,f}
and setting permissions as desired on each of them.
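For instance, a hypothetical run showing how -f changes behavior on a write-protected file:

touch a b
chmod 444 b    # make b read-only
rm a           # removed without fuss
rm b           # prompts: remove write-protected regular empty file 'b'?
rm -f b        # removed without any prompt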
You should use -execdir instead of -exec. Even better, read the full Security considerations for find chapter in the findutils manual.
Please, always use rm [opts] -- [files]; this will save you from errors with files like -rf, which would otherwise be parsed as options. The -- marks the end of options, so everything after it is treated as a file name.
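A quick demonstration with a file literally named -rf (hypothetical, but easy to reproduce):

touch -- -rf    # create the awkwardly named file
rm -rf          # no error, no deletion: -rf is parsed as options
rm -- -rf       # removes the file, because -- ends option parsing

Combining that with -execdir, the cron command from above might become (a sketch, not a drop-in):

find /path/to/files -atime +99 -type f -execdir rm -f -- {} \;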
I'm trying to write a shell script under Linux which lists all folders (recursively) with a certain name and no symlink pointing to them.
For example, I have:
/home/htdocs/cust1/typo3_src-4.2.11
/home/htdocs/cust2/typo3_src-4.2.12
/home/htdocs/cust3/typo3_src-4.2.12
Now I want to go through all subdirectories of /home/htdocs and find the typo3_* folders that are not pointed to by any symlink.
It should be possible with a shell script or a command, but I have no idea how.
Thanks for your help
Stefan
I think none of the common file systems record in a file's inode whether any symlinks point to it, so you would have to scan all other files and check whether each one is a symlink to this one. If you don't limit the depth of the search, this might take a very long time. If you want to perform that search under /home/htdocs, for example, it would work something like this:
# find the specified folders:
find /home/htdocs -name 'typo3_*' -type d | while IFS= read -r folder; do
    # list all symlinks resolving to "$folder" (grep drops the folder itself)
    find -L /home/htdocs -samefile "$folder" | grep -v "$folder\$"
done
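Building on that, a hedged sketch that prints only the folders nothing links to; with -L in effect, the -xtype l test matches the symlinks themselves:

find /home/htdocs -name 'typo3_*' -type d | while IFS= read -r folder; do
    # count the symlinks that resolve to this folder
    links=$(find -L /home/htdocs -samefile "$folder" -xtype l | wc -l)
    # print the folder only if no symlink points to it
    [ "$links" -eq 0 ] && printf '%s\n' "$folder"
done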