List too long to chmod recursively - linux

I have tried the following command to chown many images within a folder...
chown -R apache:apache *
But I get the following error
-bash: /usr/bin/chown: Argument list too long
I then tried ...
ls | xargs chown -R apache:apache *
and then get the following message...
-bash: /usr/bin/xargs: Argument list too long
Does anyone have any way to do this? I'm stumped :(
Many thanks
William

Omit the * after xargs chown because it would add the list of all file names twice (once from ls and again from *).
Try
chown -R apache:apache .
This changes the current folder (.) and everything in it and always works. If you need different ownership for the folder itself, write it down first and restore it afterwards using chown without -R.
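As a minimal sketch of that save-and-restore idea (assuming GNU stat is available):
orig_owner=$(stat -c '%U:%G' .)   # remember the folder's own owner:group
chown -R apache:apache .          # recurse over the folder and everything in it
chown "$orig_owner" .             # restore the folder itself, without -R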
If you really want to process only the contents of the folder, this will work:
find . -maxdepth 1 -not -name "." -print0 | xargs --null chown -R apache:apache

This may work:
find . -maxdepth 1 -not -name "." -exec chown -R apache:apache {} \;

You can simply pass the current directory to chown -R:
chown -R apache:apache .
The one corner case where this is incorrect is if you want all files and subdirectories, but not the current directory and the .dotfiles in it, to have the new owner. The rest of this answer explains approaches for that scenario in more detail, but if you don't need that, you can stop reading here.
If you have root or equivalent privileges, doing a cleanup back to the original owner without -R is probably acceptable; or you can fix the xargs to avoid the pesky, buggy ls and take out the incorrect final * argument from the OP's attempt:
printf '%s\0' * | xargs -r0 chown -R apache:apache
Notice the GNU extension to use a null byte as separator. If you don't have -0 in your xargs, maybe revert to find, as suggested already in an older answer.
find . -maxdepth 1 ! -name '.*' -exec chown -R apache:apache {} +
If your find doesn't understand -exec ... {} + try -exec ... {} \; instead.
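For background, the "Argument list too long" error is the kernel refusing an exec whose combined argument and environment size exceeds its limit; piping names through xargs (or letting find batch them) sidesteps the single huge expansion. On Linux you can inspect the limit with:
getconf ARG_MAX    # maximum bytes of arguments plus environment for exec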

Related

Optimizing bash script (for loops, permissions, ...)

I made this script (very quickly ... :)) ages ago and use it very often, but now I'm interested how bash experts would optimize it.
What it does is it goes through files and directories in the current directory and sets the correct permissions:
#!/bin/bash
echo "Repairing chowns."
for item in "$#"; do
sudo chown -R ziga:ziga "$item"
done
echo "Setting chmods of directories to 755."
for item in $@; do
sudo find "$item" -type d -exec chmod 755 {} \;
done
echo "Setting chmods of files to 644."
for item in $@; do
sudo find "$item" -type f -exec chmod 644 {} \;
done
echo "Setting chmods of scripts to 744."
for item in $@; do
sudo find "$item" -type f -name "*.sh" -exec chmod 744 {} \;
sudo find "$item" -type f -name "*.pl" -exec chmod 744 {} \;
sudo find "$item" -type f -name "*.py" -exec chmod 744 {} \;
done
What I'd like to do is
Reduce the number of for-loops
Reduce the number of find statements (I know the last three can be combined into one, but I wonder if it can be reduced even further)
Make script accept parameters other than the current directory and possibly accept multiple parameters (right now it only works if I cd into a desired directory and then call bash /home/ziga/scripts/repairPermissions.sh .). NOTE: the parameters might have spaces in the path.
a) you are looping through "$@" in all cases, you only need a single loop.
a1) But find can do this for you, you don't need a bash loop.
a2) And chown can take multiple directories as arguments.
b) Per chw21, remove the sudo for files you own.
c) One exec per found file/directory is expensive, use xargs.
d) Per chw21, combine the last three finds into one.
#!/bin/bash
echo "Repairing permissions."
sudo chown -R ziga:ziga "$@"
find "$@" -type d -print0 | xargs -0 --no-run-if-empty chmod 755
find "$@" -type f -print0 | xargs -0 --no-run-if-empty chmod 644
find "$@" -type f \
\( -name '*.sh' -o -name '*.pl' -o -name '*.py' \) \
-print0 | xargs -0 --no-run-if-empty chmod 744
This is 11 execs (sudo, chown, 3 * find/xargs/chmod) of other processes (if the argument list is very long, xargs will exec chmod multiple times).
This does however read the directory tree four times. The OS's filesystem caching should help though.
Edit: Explanation for why xargs is used in answer to chepner's comment:
Imagine that there is a folder with 100 files in it.
a find . -type f -exec chmod 644 {} \; will execute chmod 100 times.
Using find . -type f -print0 | xargs -0 chmod 644 executes find once, xargs once and chmod once (or more if the argument list is very long).
This is three processes started compared to 101 processes started. The resources and time (and energy) needed to execute three processes is far less.
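Note that a reasonably modern find can get much the same batching without xargs: -exec ... {} + appends as many names per chmod invocation as will fit, for example:
find . -type f -exec chmod 644 {} +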
Edit 2:
Added --no-run-if-empty as an option to xargs. Note that this may not be portable to all systems.
I assume that you are ziga. This means that after the first chown command in there, you own every file and directory, and I don't see why you'd need any sudo after that.
You can combine the three last finds quite easily:
find "$item" -type f \( -name "*.sh" -o -name "*.py" -o -name "*.pl" \) -exec chmod 744 {} \;
Apart from that, I wonder what kind of broken permissions you tend to find. For example, chmod knows the +X modifier, which sets x only on directories and on files where at least one of user, group, or other already has x.
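As an illustration of that idea (the path is a placeholder), the classic one-pass idiom below yields 755 directories and 644 files, while preserving x on files that already had it:
chmod -R u=rwX,go=rX /path/to/tree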
You can simplify this:
for item in "$#"; do
To this:
for item; do
That's right, the default values for a for loop are taken from "$@".
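A quick demonstration, using set -- to fake the positional parameters:
set -- one "two words" three
for item; do echo "$item"; done   # three lines, spaces preserved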
This won't work well if some of the directories contain spaces:
for item in $@; do
Again, replace with for item; do. Same for all the other loops.
As the other answer pointed out, if you are running this script as ziga, then you can drop all the sudo except in the first loop.
You may use the idiom for item instead of for item in "$@".
We may reduce the use of find to just once (with a double loop).
I am assuming that ziga is your "$USER" name.
Bash 4.4 is required for the readarray with the -d option.
#!/bin/bash
user=$USER
for i
do
# Repairing chowns.
sudo chown -R "$user:$user" "$i"
readarray -d '' -t line < <(sudo find "$i" -print0)
for f in "${line[@]}"; do
# Setting chmods of directories to 755.
[[ -d $f ]] && sudo chmod 755 "$f"
# Setting chmods of files to 644.
[[ -f $f ]] && sudo chmod 644 "$f"
# Setting chmods of scripts to 744.
[[ $f == *.@(sh|pl|py) ]] && sudo chmod 744 "$f"
done
done
If you have an older bash (2.04+), change the readarray line to this:
while IFS='' read -d '' item; do line+=("$item"); done < <(sudo find "$i" -print0)
I am keeping the sudo assuming the "items" in "$@" might be repeated inside searched directories. If there is no repetition, sudo may be omitted.
There are two inevitable external commands (sudo chown) per loop.
Plus only one find per loop.
The other two external commands (sudo chmod) are reduced to only those needed to effect the change you need.
But the shell is always very slow at doing its job.
So the gain in speed depends on the type of files the script is used on.
Do some testing with time to find out.
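For example, something along these lines (the test path is illustrative):
time bash /home/ziga/scripts/repairPermissions.sh /some/test/tree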
Subject to a few constraints, listed below, I would expect something similar to the following to be faster than anything mentioned thus far. I also use only Bourne shell constructs:
#!/bin/sh
set -e
per_file() {
chown ziga:ziga "$1"
test -d "$1" && chmod 755 "$1" || { test -f "$1" && chmod 644 "$1"; }
file "$1" | awk -F: '$2 !~ /script/ {exit 1}' && chmod 744 "$1"
}
if [ "$process_my_file" ]
then
per_file("$1")
exit
fi
export process_my_file=$0
find "$#" -exec $0 {} +
The script calls find(1) on the command-line arguments, and invokes itself for each file found. It detects re-invocation by testing for the existence of an environment variable named process_my_file. Although there is a cost to invoking the shell each time, it should be dwarfed by not traversing the filesystem.
Notes:
set -e to exit on first error
+ in find to batch many files per invocation
file(1) to detect a script
chown(1) not invoked recursively
symbolic links not followed
Calling file(1) for every file to determine if it's a script is definitely more expensive than just testing the name, but it's also more general. Note that awk is testing only the description text, not the filename. It would misfire if the filename contained a colon.
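To illustrate the colon-separated fields awk sees (hypothetical output; it varies by platform and magic database):
$ file backup.sh
backup.sh: Bourne-Again shell script, ASCII text executable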
You could lift chown back up out of the per_file() function, and use its -R option. That might be faster, as it's one execution. It would be an interesting experiment.
In terms of your requirements, there are no loops, and only one call to find(1). You also mentioned,
right now it only works if I cd into a desired directory
but that's not true. It also works if you mention the target directory on the command line,
$ fix_tree_perms my/favorite/directory
You could add, say, a -d option to change there first, but I don't see how that would be easier to use.

cannot delete directory using shell command

I wanted to delete all directories whose names match the pattern RC_200. Here is what I did:
find -name "RC_200" -type d -delete
But it's complaining about this:
find: cannot delete './RC_200': Directory not empty
You should try:
find . -name RC_200 -type d -exec rm -r {} \;
You can see here what the command does.
More, you can try what @anubhava recommended in a comment, find . -name RC_200 -type d -exec rm -r {} + (note the + at the end); this one is equivalent to an xargs solution:
find . -name RC_200 -type d -print0 | xargs -0 rm -r
xargs executes the command passed as parameter, with the arguments read from stdin. Here that is rm -r, which deletes each directory and all its children.
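One caveat worth noting: rm -r removes directories that find may still try to descend into, which can produce "No such file or directory" warnings; a common remedy is to -prune the matched directories:
find . -name RC_200 -type d -prune -exec rm -r {} +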
Maybe you need to run it with administrator privileges.
If your OS is Ubuntu, add sudo in front of your command; otherwise use whatever your OS provides for administrator access.
OR
remove the file protection on the directory if it has any.

Bash - Recursively change ownership of only the directories which belong to a certain user/group

I have a directory (we will call /files) with ~1300 subdirectories, each of which contains further subdirectories and files.
90% of the top level directories in /files belong to apache:apache and the rest belong to root:root. I need everything to belong to apache:apache.
I think if I do a recursive chown on the whole lot it will be quite extreme, so I was wondering if there's a more efficient way to recursively change ownership of just the root:root directories to apache:apache.
Bonus if chmod can be done on these directories in the same way.
Your recursive chown would probably have finished by now, but you could use this instead:
find . -type d \( ! -user apache -o ! -group apache \) -print0 | xargs -0 chown apache:apache
To change directories that have the wrong permission:
find . -type d ! -perm 755 -print0 | xargs -0 chmod 755
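If you want, both fixes can be sketched as a single traversal; this slightly over-applies (a directory with only a wrong owner is re-chmodded too) but both operations are idempotent:
find . -type d \( ! -user apache -o ! -group apache -o ! -perm 755 \) \
-exec chown apache:apache {} + -exec chmod 755 {} +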
Using Linux's find command is going to help here:
find /files -user root -group root -type d \
-exec chmod something {} \; -exec chown apache.apache {} \;
For more details on why that works, see http://www.explainshell.com/explain?cmd=find+%2Ffiles+-user+root+-group+root+-type+d+-exec+foo+%3B

changing permissions of files in a directory recursively

I am trying to change the permissions of the files in a directory and its subdirectories using the command below, but I run into the error below. Can anyone help?
user#machine:/local/mnt/workspace$ find . -type f -exec chmod 644 {} \;
chmod: changing permissions of `./halimpl/ncihal/adaptation/NonVolatileStore.cpp': Operation not permitted
You can run the following command:
#chmod -R 644 directory_path
But it will change the permissions of directories as well.
For files only, you can run:
#find directory_path -type f -exec chmod 644 {} \;
It also looks like you don't have enough permissions. Try
#sudo find directory_path -type f -exec chmod 644 {} \;
or run the command as root user.
It looks to me like you don't have permission to change NonVolatileStore.cpp.
Are you aware of chmod's -R switch that recursively changes permissions?
If you have root privileges, try:
sudo find . -type f -exec chmod 644 {} \;
It could be that you simply don't own that file. Run an ls -l on it to see full permissions and who the owner is.
It could also be the filesystem is read only.
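For example (hypothetical output; the third and fourth columns are the owner and group to check):
$ ls -l halimpl/ncihal/adaptation/NonVolatileStore.cpp
-rw-r--r-- 1 root root 4096 Jan 1 12:00 halimpl/ncihal/adaptation/NonVolatileStore.cpp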

How do I exclude a folder when performing file operations i.e. cp, mv, rm and chown etc. in Linux

How do you exclude a folder when performing file operations i.e. cp etc.
I would currently use the wildcard * to apply the file operation to all, but I need to exclude one single folder.
The command I'm actually wanting to use is chown to change the owner of all the files in a directory but I need to exclude one sub directory.
If you're using bash and enable extglob via shopt -s extglob then you can use !(<pattern>) to exclude the given pattern.
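A minimal sketch, assuming the subdirectory to skip is named exclude_me (a hypothetical name):
shopt -s extglob
chown -R Camsoft !(exclude_me)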
find dir_to_start -name dir_to_exclude -prune -o -print0 | xargs -0 chown owner
find dir_to_start -not -name "file_to_exclude" -print0 | xargs -0 chown owner
for file in *; do
if [ "$file" != "file_I_dont_want_to_chown" ]
then
chown -R Camsoft "$file"
fi
done
Combine multiple small sharp tools of unix:
To exclude the folder "foo"
% ls -d * | grep -v foo | xargs -d "\n" chown -R Camsoft
For this situation I would recommend using find. You can exclude paths with -not -iwholename 'PATH', then use -exec to run the command you want:
find . -not -iwholename './var/foo*' -exec chown www-data '{}' \;
Although this probably does help for your situation, I have also seen scripts set the immutable flag. Make sure you remove the flag when you're done; use trap for this in case the script is killed early (note: run from a script, the trap code runs when the bash session exits). A lot of trouble in my opinion, but it's good in some situations.
cd /var
trap 'chattr -R -i foo > /dev/null 2>&1' 0
chattr -R +i foo
chown -R www-data *
Another option might be to temporarily remove permissions on that file/folder.
In Unix you need 'x' permission on a directory to enter it.
edit: obviously this isn't going to work if you are backing up a live production database, but for excluding your 'interesting images' collection when copying documents to a USB key it's reasonable.
