Sync file permissions *only*

A junior team member did a nasty chmod -R 777 in /etc/ on an Ubuntu server, which broke remote SSH logins. I have fixed the login issue by manually setting the correct permissions on /etc/ssh/*, /etc/sudoers and /etc/ssl/*, comparing against another healthy system, but there are many other files that may cause problems in the future.
I am thinking of using rsync to do the work, but I don't want it to sync file contents, just permissions, nothing more.
Is that possible? I see rsync has an -a option, but it does too much.

If you have the "normal" content of /etc available on the same system (like mounted in some other directory, let's say /mnt/correct/etc), you could use the --reference parameter to chmod and chown commands, and combine it with find that is started from the "normal" directory:
$ cd /mnt/correct/etc
$ find . ! -type l -exec chown -v --reference='{}' /etc/'{}' \;
$ find . ! -type l -exec chmod -v --reference='{}' /etc/'{}' \;
(I'm assuming you're on a UNIX system with GNU coreutils versions of chmod and chown.)
The "! -type l" condition in find excludes symbolic links, because otherwise chmod will use the link's permissions to change the file the link points to (and same applies to chown).

Please note that you can also try something that doesn't require copying files from one place to another (depending on the file sizes, that may be desirable).
You could use a mix of find and some grepping to generate a shell script to be executed on the host where you need to fix permissions. The same approach can generate a script for fixing users/groups as well (see the sketch after the TLDR). For example:
# find . -printf 'chmod %m %p #%M\n' | sort -k3 | grep -Pi '\s+\S*s\S*$' > /var/tmp/fix_permissions.bash
# bash /var/tmp/fix_permissions.bash
The find in the example above lists all the files with their permissions in this format:
chmod 2755 ./addfs_7.1.0/bin #drwxr-sr-x
chmod 2755 ./addfs_7.1.0/config #drwxr-sr-x
chmod 2755 ./addfs_7.1.0 #drwxr-sr-x
chmod 2755 ./addfs_7.1.0/install #drwxr-sr-x
chmod 2755 ./addfs_7.1.0/library.dda #drwxr-sr-x
chmod 2755 ./addfs_7.1.0/library #drwxr-sr-x
chmod 2755 ./autosimimport #drwxr-sr-x
In my case I only wanted to sync the entries with the setuid/setgid 's' flag, so I filter with grep -Pi '\s+\S*s\S*$'. The sort is there because I had to compare the list against the other host.
TLDR
If you just want to apply all the permissions with no filtering or comparing:
Create the script on the "base" host, the one with the correct permissions:
find . -printf 'chmod %m %p\n' > /var/tmp/fix_permissions.sh
Execute the script on the other host:
bash /var/tmp/fix_permissions.sh
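The same trick works for ownership, as hinted above (a sketch using find's %u and %g directives, which print each file's owning user and group; like the permissions script, it breaks on file names containing spaces or shell metacharacters):
find . -printf 'chown %u:%g %p\n' > /var/tmp/fix_ownership.sh
bash /var/tmp/fix_ownership.sh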

Related

How to set permissions recursively, 700 for folders and 600 for files, without using find

I'm trying to figure out a way to recursively set permissions to 700 for directories and subdirectories under a specific path, and 600 for files. I would use these commands:
find /path -type d -print0 | xargs -0 chmod 700
find /path -type f -print0 | xargs -0 chmod 600
But the user does not have permission to run the "find" command.
As a workaround I tried to make a script containing the above commands, owned by root and with the setuid bit set, so it would run with root privileges (like the passwd or sudo commands that normal users run with root privileges):
chmod 4755 script.sh
but I cannot execute the script from the limited user account; it still says that I don't have permission to run the find command.
Does anyone have any idea how I can accomplish this without having to use the find command?
Edit:
OS: CentOS 6.5
Apparently this is very easy to implement. There are two ways: using chmod only, or setting an ACL (access control list) on the desired path.
Using chmod, I would run:
chmod -R 600 /path # to remove executable permissions
chmod -R u=rwX,g=,o= /path # to make directories traversable
For the owning user I'm giving a capital "X": since the first command already stripped all execute bits, the X adds execute back only on directories, not on files.
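A quick illustration of how the capital X behaves, on a throwaway test tree (hypothetical names; it also shows why the exec bits have to be stripped first):
mkdir -p demo/sub
touch demo/plain.txt demo/script.sh
chmod 644 demo/plain.txt
chmod 755 demo/script.sh
chmod -R u=rwX,g=,o= demo
stat -c '%a %n' demo demo/sub demo/plain.txt demo/script.sh
# 700 demo
# 700 demo/sub
# 600 demo/plain.txt
# 700 demo/script.sh   <- keeps u+x, because X preserves execute on files that already had it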
Using ACL:
setfacl -Rm u::rwX,g::0,o::0 /path
setfacl -Rm d:u::rwX,g::0,o::0 /path
Again a capital X is used, so execute applies only to directories and not files. The first command applies the ACL; the second one makes it the default policy, so newly created files will inherit the desired permissions.

Granting my access permissions to everyone?

If the folder "folder" is readable/writable/executable by me, then it should become readable/writable/executable by everyone.
Calling chmod -R 777 ./folder is not suitable, because it makes all files executable, even those that were not executable before.
Is there an easy way?
You could do it with find combined with the -exec action to run a chmod command on every file that matches a filter, filtering on the executable bit.
e.g.
First find the non-executable files recursively and change them to rw for everyone:
find ./folder -not -executable -exec chmod a=rw {} \;
Then find all the executable ones recursively and change them to rwx for everyone:
find ./folder -executable -exec chmod a=rwx {} \;
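An alternative sketch that relies on chmod's X instead of two find passes (X grants execute only to directories and to files that already have an execute bit set, so the effect should be the same):
chmod -R a=rwX ./folder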
You might also want to add the files in the folder to a group like everyone or users, depending on your distro:
chown -R <youruser>:everyone ./folder
You can check which groups are available with the groups command.

List too long to chmod recursively

I have tried the following command to change the ownership of many images within a folder...
chown -R apache:apache *
But I get the following error:
-bash: /usr/bin: Argument list too long
I then tried ...
ls | xargs chown -R apache:apache *
and then got the following message:
-bash: /usr/bin/xargs: Argument list too long
Does anyone have any way to do this? I'm stumped :(
Many thanks
William
Omit the * after xargs chown, because it will try to add the list of all file names twice (once from ls and then again from *).
Try
chown -R apache:apache .
This changes the current folder (.) and everything in it, and always works. If the folder itself needs a different owner, note it down first and restore it afterwards using chown without -R.
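Background: the "Argument list too long" error comes from the kernel's limit on the combined size of arguments passed to a single command, and passing just "." sidesteps it because chown does the recursion internally instead of the shell expanding thousands of names. You can check the limit (in bytes) on your system with getconf:
getconf ARG_MAX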
If you really want to process only the contents of the folder, this will work:
find . -maxdepth 1 -not -name "." -print0 | xargs --null chown -R apache:apache
This may also work:
find . -maxdepth 1 -not -name "." -exec chown -R apache:apache {} \;
You can simply pass the current directory to chown -R:
chown -R apache:apache .
The one corner case where this is incorrect is if you want all files and subdirectories, but not the current directory and the .dotfiles in it, to have the new owner. The rest of this answer explains approaches for that scenario in more detail, but if you don't need that, you can stop reading here.
If you have root or equivalent privileges, doing a cleanup back to the original owner without -R is probably acceptable; or you can fix the xargs to avoid the pesky, buggy ls and take out the incorrect final * argument from the OP's attempt:
printf '%s\0' * | xargs -r0 chown -R apache:apache
Notice the GNU extension for using a null byte as the separator. If your xargs doesn't have -0, fall back to find, as suggested in an older answer:
find . -mindepth 1 -maxdepth 1 ! -name '.*' -exec chown -R apache:apache {} +
If your find doesn't understand -exec ... {} + try -exec ... {} \; instead.

Changing the file permissions of multiple files through Unix terminal

Hi, I have about 100 files in a folder and I want to change the permissions to read, write and execute for each file in this folder.
I know how to change the permissions for a single file, e.g. chmod a+rwx foo.txt,
but not for a group of files. Please help me out.
Thank you!
GT
You can use wildcards, like
chmod a+rwx *.txt
or
find <directory> -type f -exec chmod a+rwx {} \;
The last command will find all files and run chmod once per file.
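If running one chmod per file is slow on a large tree, a variant worth trying (a sketch; the + terminator makes find batch many files into each chmod invocation instead of spawning one per file):
find <directory> -type f -exec chmod a+rwx {} +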
However, giving everything a+rwx is not recommended at all.

How do I exclude a folder when performing file operations, e.g. cp, mv, rm and chown, in Linux

How do you exclude a folder when performing file operations, e.g. cp etc.?
I currently use the wildcard * to apply the file operation to everything, but I need to exclude one single folder.
The command I actually want to use is chown, to change the owner of all the files in a directory, but I need to exclude one subdirectory.
If you're using bash and enable extglob via shopt -s extglob then you can use !(<pattern>) to exclude the given pattern.
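For example (a sketch, assuming the subdirectory you want to skip is called skipme; note that !(...), like *, does not match dotfiles):
shopt -s extglob
chown -R owner !(skipme)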
find dir_to_start -name dir_to_exclude -prune -o -print0 | xargs -0 chown owner
find dir_to_start -not -name "file_to_exclude" -print0 | xargs -0 chown owner
for file in *; do
    if [ "$file" != "file_I_dont_want_to_chown" ]
    then
        chown -R Camsoft "$file"
    fi
done
Combine multiple small sharp tools of Unix:
To exclude the folder "foo":
% ls -d * | grep -v foo | xargs -d "\n" chown -R Camsoft
For this situation I would recommend using find. You can specify paths to exclude using -not -iwholename 'PATH', and then use -exec to run the command you want:
find . -not -iwholename './var/foo*' -exec chown www-data '{}' \;
Although the above probably does help for your situation, I have also seen scripts set the immutable flag. Make sure you remove the flag when you're done; you should use trap for this in case the script is killed early (note: when run from a script, the trap code runs when the bash session exits). It's a lot of trouble in my opinion, but it's good in some situations.
cd /var
trap 'chattr -R -i foo > /dev/null 2>&1' 0
chattr -R +i foo
chown -R www-data *
Another option might be to temporarily remove permissions on that file/folder.
In Unix you need 'x' permission on a directory to enter it.
Edit: obviously this isn't going to work if you are backing up a live production database, but for excluding your 'interesting images' collection when copying documents to a USB key it's reasonable.
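A minimal sketch of that idea, applied to the USB-key example (hypothetical paths; it only helps when the operation runs as a non-root user, since root ignores permission checks):
old_mode=$(stat -c '%a' ~/Documents/interesting_images)   # remember the original mode
chmod 000 ~/Documents/interesting_images                  # nobody (except root) can read or enter it now
cp -R ~/Documents /media/usbkey/ 2>/dev/null              # the protected folder's contents are skipped
chmod "$old_mode" ~/Documents/interesting_images          # put the original mode back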
