Whenever I do a "zfs diff" on certain ZFS file systems, the output is cluttered by user files reported as "modified" only because a cron job runs chmod over them (to enforce some security requirements).
Question: is there an easy way that I missed to force (POSIX) permissions and ownership on file hierarchies without chmod/chown touching them when the permissions are already as I want them to be?
You could do something like
find dir/ -type f -perm /0111 -exec chmod a-x {} +
instead of an unconditional chmod to remove the permissions (all of the x permissions, in this example).
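Extending that idea, here is a minimal sketch that only touches entries whose mode or owner differs from the target; the path /srv/data, mode 0640, and owner webuser are placeholders for whatever your cron job enforces:

# chmod/chown run only on mismatches, so unchanged files never show up in "zfs diff"
find /srv/data -type f ! -perm 0640 -exec chmod 0640 {} +
find /srv/data ! -user webuser -exec chown webuser {} +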
Leaving aside the fact that security by cron sounds like a bad idea, the simple answer is "No". Neither chmod nor chown has a flag to modify a file/directory only when its current state doesn't match your desired state.
You have two options:
write a patch for the tools
write a wrapper, as larsks suggested in the comments above
Depending on the size of your filesystem / directory structure, that may increase the runtime of your cron job quite dramatically, though.
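As a rough illustration of the wrapper idea (not larsks' actual script), something like this only calls chmod when the current mode differs; the target mode 640 is a placeholder, and stat -c %a is GNU stat (BSD stat would need -f %Lp instead):

#!/bin/sh
# conditional-chmod.sh (hypothetical name): fix the mode only when it is wrong
want=640
for f in "$@"; do
    cur=$(stat -c %a "$f") || continue
    [ "$cur" = "$want" ] || chmod "$want" "$f"
done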
I've started to work with Docker for local development, moving from installing everything on my Mac to containers. Looking through a number of projects I regularly see the following shell commands, particularly
find /www -type d -exec chmod 750 {} \;
find /www -type f -exec chmod 640 {} \;
Firstly, what are they trying to achieve; secondly, what do the commands actually mean; and lastly, why/when would you want or need to use this?
I recently duplicated and modified another project and found that pulling these commands out seemed to make no difference (fair enough, it was no longer based on the same base container... but still).
Any glimmer of enlightenment would be greatly appreciated.
EDITS:
That handy link in the comments below to explainshell tells us:
What: find all the folders in /www and execute the chmod command, changing the permissions to 750
- still unsure of 750, and more importantly why you would do this.
The commands set all files and directories to be readable and writable by the owner, and readable by the group, but the files cannot be executed by anyone.
You might want to read up on unix permissions in a bit more detail first.
find /www -type f -exec chmod 640 {} \;
Find all files under /www and set the user to have read, write access (6) and the group to have read access (4). Other users have no access (0).
find /www -type d -exec chmod 750 {} \;
Find all directories under /www and set the user to have read, write and execute permissions (7) and the group to have read and execute permissions (5) to those directories. Other users have no permissions (0).
The \; after each -exec terminates the command and must be escaped when run in a shell so it is not interpreted as a regular ;, which would end the shell command. This can also be achieved with a +, which is easier to read as it doesn't need to be escaped, and is more efficient because find batches many file names into each invocation. That batching can cause differences in output if you are relying on the stdout/stderr somewhere else.
Execute permission on a directory means that a user can change into the directory and access the files inside. As a directory can't be executed in the sense of an executable file, the execute bit was overloaded to mean something else.
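For example, the same two commands written with the + form (which any POSIX find supports):

find /www -type d -exec chmod 750 {} +
find /www -type f -exec chmod 640 {} +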
The link Cyrus posted to explainshell.com is an excellent tool as well.
This was an interview question. They did not give any information about the files, i.e. extension, hidden files, or location (stored in a single directory or a directory tree), so my first reaction to this question was:
rm -fr *
oh no, wait, should be:
rm -fr -- *
Then I realized that the above command would not remove hidden files, and quite frankly entries like . and .. might interfere, so my second and final thought was a shell script that uses find.
find -depth -type f -delete
I'm not sure if this is the right way of doing it, I'm wondering if there is a better way of doing this task.
It's not as obvious as it seems:
http://linuxnote.net/jianingy/en/linux/a-fast-way-to-remove-huge-number-of-files.html
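For what it's worth, the find route from the question can be tightened up so it also handles hidden files and never touches . or .. (this relies on GNU or BSD find, run from inside the directory you want to empty):

find . -mindepth 1 -delete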
I have to make a cronjob to remove files older than 99 days in a particular directory but I'm not sure the file names were made by trustworthy Linux users. I must expect special characters, spaces, slash characters, and others.
Here is what I think could work:
find /path/to/files -mtime +99 -exec rm {} \;
But I suspect this will fail if there are special characters or if it finds a file that's read-only (cron may not be run with superuser privileges). I need it to carry on if it meets such files.
When you use -exec rm {} \;, you shouldn't have any problems with spaces, tabs, returns, or special characters because find calls the rm command directly and passes it the name of each file one at a time.
Directories won't be removed with that command because you aren't passing it the -r parameter, and you probably don't want to. That could end up being a bit dangerous. You might also want to include the -f parameter to force removal in case you don't have write permission. Run the cron script as root, and you should be fine.
The only thing I'd worry about is that you might end up hitting a file that you don't want to remove, but that hasn't been modified in the past 100 days. For example, the password to stop the autodestruct sequence at your work. Chances are that file hasn't been modified in the past 100 days, but once that autodestruct sequence starts, you wouldn't want to be the one blamed because the password was lost.
Okay, more reasonable might be applications that are used but rarely modified. Maybe someone's resume that hasn't been updated because they are holding a current job, etc.
So, be careful with your assumptions. Just because a file hasn't been modified in 100 days doesn't mean it isn't used. A better criterion (although still questionable) is whether the file has been accessed in the last 100 days. Maybe this as a final command:
find /path/to/files -atime +99 -type f -exec rm -f {} \;
One more thing...
Some find implementations have a -delete action which can be used instead of -exec rm:
find /path/to/files -atime +99 -delete
That will delete found directories (if they are empty) as well as files.
One more small recommendation: for the first week, save the files found to a log file instead of removing them, and then examine the log file to make sure you're not deleting anything important. Once you're satisfied that nothing in the log is something you want to keep, change the find command back to doing the delete for you.
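A minimal sketch of that dry-run idea, with the log path chosen only as an example:

find /path/to/files -atime +99 -type f -print >> /var/log/old-file-candidates.log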
If you run rm with the -f option, your file is going to be deleted regardless of whether you have write permission on the file or not (all that matters is write permission on the containing folder). So either you can erase all the files in the folder, or none. Also add -r if you want to erase subfolders.
And I have to say it: be very careful! You're playing with fire ;) I suggest you debug with something less harmful, like the file command.
You can test this out by creating a bunch of files, for example:
touch {a,b,c,d,e,f}
and setting permissions as desired on each of them.
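For instance (the modes here are picked arbitrarily):

chmod 600 a b
chmod 444 c d
chmod 000 e f
ls -l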
You should use -execdir instead of -exec. Even better, read the full Security considerations for find chapter in the findutils manual.
Please, always use rm [opts] -- [files]; this will save you from errors with files named things like -rf, which would otherwise be parsed as options. The -- marks the end of options, so everything after it is treated as a file name.
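For example, with a file literally named -rf in the current directory:

rm -rf        # wrong: -rf is parsed as options and the file survives
rm -- -rf     # the -- ends option parsing, so -rf is removed as a file name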
How do I change the permissions of a folder and all its subfolders and files?
This only applies to the /opt/lampp/htdocs folder, not its contents:
chmod 775 /opt/lampp/htdocs
How do I set chmod 755 for all of the /opt/lampp/htdocs folder's current contents, as well as automatically in the future for new folders/files created under it?
The other answers are correct, in that chmod -R 755 will set these permissions to all files and subfolders in the tree. But why on earth would you want to? It might make sense for the directories, but why set the execute bit on all the files?
I suspect what you really want to do is set the directories to 755 and either leave the files alone or set them to 644. For this, you can use the find command. For example:
To change all the directories to 755 (drwxr-xr-x):
find /opt/lampp/htdocs -type d -exec chmod 755 {} \;
To change all the files to 644 (-rw-r--r--):
find /opt/lampp/htdocs -type f -exec chmod 644 {} \;
Some splainin' (thanks @tobbez):
chmod 755 {} specifies the command that will be executed by find for each directory
chmod 644 {} specifies the command that will be executed by find for each file
{} is replaced by the path
; the semicolon tells find that this is the end of the command it's supposed to execute
\; the semicolon is escaped, otherwise it would be interpreted by the shell instead of find
Check the -R option
chmod -R <permissionsettings> <dirname>
In the future, you can save a lot of time by checking the man page first:
man <command name>
So in this case:
man chmod
If you want to set permissions on all files to a+r, and all directories to a+x, and do that recursively through the complete subdirectory tree, use:
chmod -R a+rX *
The X (that is capital X, not small x!) is ignored for files (unless they are executable for someone already) but is used for directories.
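A quick way to see the difference, using throwaway test files:

mkdir -p demo/sub && touch demo/file
chmod -R a+rX demo
ls -ld demo demo/sub demo/file    # the directories end up with x for everyone, the plain file does not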
You can use -R with chmod for recursive traversal of all files and subfolders.
You might need sudo, depending on whether LAMP was installed by the current user or by another one:
sudo chmod -R 755 /opt/lampp/htdocs
The correct recursive command is:
sudo chmod -R 755 /opt/lampp/htdocs
-R: change every subfolder, including the current folder
To set to all subfolders (recursively) use -R
chmod 755 /folder -R
chmod 755 -R /opt/lampp/htdocs will recursively set the permissions. There's no way to automatically set the permissions only for files created later in this directory, but you could change your system-wide default file permissions by setting umask 022.
You might want to consider this answer given by nik on Super User and use "one chmod" for all files/folders like this:
chmod 755 $(find /path/to/base/dir -type d)
chmod 644 $(find /path/to/base/dir -type f)
Here's another way to set directories to 775 and files to 664.
find /opt/lampp/htdocs \
\( -type f -exec chmod ug+rw,o+r {} \; \) , \
\( -type d -exec chmod ug+rwxs,o+rx {} \; \)
It may look long, but it's pretty cool for three reasons:
Scans through the file system only once rather than twice.
Provides better control over how files are handled vs. how directories are handled. This is useful when working with special modes such as the sticky bit, which you probably want to apply to directories but not files.
Uses a technique straight out of the man pages (see below).
Note that I have not confirmed the performance difference (if any) between this solution and that of simply using two find commands (as in Peter Mortensen's solution). However, seeing a similar example in the manual is encouraging.
Example from the find man page:
find / \
\( -perm -4000 -fprintf /root/suid.txt %#m %u %p\n \) , \
\( -size +100M -fprintf /root/big.txt %-10s %p\n \)
Traverse the filesystem just once, listing setuid files and
directories into /root/suid.txt and large files into /root/big.txt.
Use:
sudo chmod 755 -R /whatever/your/directory/is
However, be careful with that. It can really hurt you if you change the permissions of the wrong files/folders.
chmod -R 755 directory_name works, but how would you keep new files at 755 as well? New files get the default permissions, not the ones you set.
For Mac OS X 10.7 (Lion), it is:
chmod -R 755 /directory
And yes, as all other say, be careful when doing this.
For anyone still struggling with permission issues, navigate up one directory level (cd ..) from the root directory of your project, add yourself (your user) as the owner of the directory, and give yourself permission to edit everything inside (tested on macOS).
To do that you would run this command (preferred):
sudo chown -R username: foldername .*
Note: for currently unsaved changes, one might need to restart the code editor first to be able to save without being asked for a password.
Also, please remember you can press Tab for completion while typing the username and folder name, to make it easier for yourself.
Or simply:
sudo chmod -R 755 foldername
but as mentioned above, you need to be careful with the second method.
There are two answers to finding files and applying chmod to them.
The first one is to find the files and apply chmod to them as they are found (as suggested by @WombleGoneBad).
find /opt/lampp/htdocs -type d -exec chmod 755 {} \;
The second solution is to generate a list of all files with the find command and supply this list to the chmod command (as suggested by @lamgesh).
chmod 755 $(find /path/to/base/dir -type d)
Both of these versions work nicely as long as the number of files returned by the find command is small. The second solution looks great to the eye and is more readable than the first one. But if there are a large number of files, the second solution fails with the error: Argument list too long (a workaround is sketched after the list below).
So my suggestion is:
Use chmod -R 755 /opt/lampp/htdocs if you want to change the permissions of all files and directories at once.
Use find /opt/lampp/htdocs -type d -exec chmod 755 {} \; if the number of files is very large. The -type x option searches for a specific type of entry only, where d finds directories, f files, and l symbolic links.
Use chmod 755 $(find /path/to/base/dir -type d) otherwise.
It's better to use the first one in any situation.
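A middle ground worth mentioning: the -exec ... + form keeps a single-command feel but lets find batch the arguments itself, so it never hits the "Argument list too long" limit:

find /opt/lampp/htdocs -type d -exec chmod 755 {} +
find /opt/lampp/htdocs -type f -exec chmod 644 {} +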
You want to make sure that files and directories get appropriate permissions. For all directories you want
find /opt/lampp/htdocs -type d -exec chmod 711 {} \;
And for all the images, JavaScript, CSS, HTML...well, you shouldn't execute them. So use
chmod 644 img/* js/* html/*
But for all the logic code (for instance PHP code), you should set permissions such that other users can't see that code:
chmod 600 file
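Putting those together over a hypothetical document root (the file extensions are only examples):

find /opt/lampp/htdocs -type d -exec chmod 711 {} +
find /opt/lampp/htdocs -type f -name '*.php' -exec chmod 600 {} +
find /opt/lampp/htdocs -type f \( -name '*.css' -o -name '*.js' -o -name '*.html' \) -exec chmod 644 {} +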
I think Adam was asking how to change the umask value for all processes that are trying to operate on the /opt/lampp/htdocs directory.
The user file-creation mode mask (umask) is used to determine the file permissions for newly created files. It can be used to control the default file permissions for new files.
So if you use some kind of FTP program to upload files into /opt/lampp/htdocs, you need to configure your FTP server to use the umask you want.
If files / directories need to be created, for example, by PHP, you need to modify the PHP code:
<?php
umask(0022);
// Other code
?>
If you create new files / folders from your Bash session, you can set the umask value in your shell profile, the ~/.bashrc file.
Or you can set a umask in the /etc/bashrc or /etc/profile file for all users.
Add the following to the file:
umask 022
Sample umask values and file-creation permissions:

umask value   User permission   Group permission   Others permission
000           all               all                all
007           all               all                none
027           all               read / execute     none
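You can check the effect in a shell; with umask 022, new files come out as 644 and new directories as 755:

umask 022
touch newfile && mkdir newdir
ls -ld newfile newdir    # -rw-r--r-- newfile, drwxr-xr-x newdir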
And to change permissions for already created files, you can use find.
You can change permissions by using the following command:
sudo chmod go+rwx /opt/lampp/htdocs
Use:
sudo chmod -R a=-x,u=rwX,g=,o= folder
The owner gets rw, others get no access, and directories get rwx. This will clear any existing 'x' on files.
The symbolic chmod calculation is explained in Chmod 744.
It's very simple.
In a terminal, open the file manager as root, for example: sudo nemo. Go to /opt/, then open Properties → Permission, and then Other. Finally, change the folder access to create and delete, set the file access to read and write, and click the Apply button. And it works.