How do I exclude a folder when performing file operations (cp, mv, rm, chown, etc.) in Linux

How do you exclude a folder when performing file operations such as cp?
I currently use the wildcard * to apply a file operation to everything, but I need to exclude one single folder.
The command I actually want to use is chown, to change the owner of all the files in a directory, but I need to exclude one subdirectory.

If you're using bash and have enabled extglob via shopt -s extglob, you can use !(<pattern>) to exclude the given pattern.
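For the chown case in the question, that might look like this (a sketch; dir_to_exclude is a placeholder, and Camsoft is the owner named in the question). Note that !(...) skips dotfiles and only filters the top level; -R still recurses into whatever the glob does match:

shopt -s extglob
chown -R Camsoft !(dir_to_exclude)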

# Prune dir_to_exclude entirely (do not descend into it), print everything else NUL-separated:
find dir_to_start -name dir_to_exclude -prune -o -print0 | xargs -0 chown owner
# Or filter entries out by name only (this still descends into directories with that name):
find dir_to_start -not -name "file_to_exclude" -print0 | xargs -0 chown owner

for file in *; do
  if [ "$file" != "file_I_dont_want_to_chown" ]; then
    chown -R Camsoft "$file"
  fi
done

Combine several of the small, sharp tools of Unix. To exclude the folder "foo":
% ls -d * | grep -v foo | xargs -d "\n" chown -R Camsoft

For this situation I would recommend using find. You can exclude paths using -not -iwholename 'PATH'. Then, using -exec, you run the command you want:
find . -not -iwholename './var/foo*' -exec chown www-data '{}' \;
Although the above probably helps in your situation, I have also seen scripts set the immutable flag. Make sure you remove the flag when you're done; use trap for this, in case the script is killed early (note: run from a script, the trap code runs when the bash session exits). A lot of trouble in my opinion, but it's good in some situations:
cd /var
trap 'chattr -R -i foo > /dev/null 2>&1' 0
chattr -R +i foo
chown -R www-data *

Another option might be to temporarily remove permissions on that file/folder.
In Unix you need 'x' permission on a directory to enter it.
Edit: obviously this isn't going to work if you are backing up a live production database, but for excluding your 'interesting images' collection when copying documents to a USB key it's reasonable.
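For example, a minimal sketch of that approach (assuming the folder to skip is named foo and normally has mode 755; this won't help when running as root, which bypasses permission checks):

chmod 000 foo         # nobody except root can enter or list foo now
cp -r * /media/usb    # cp reports an error for foo, skips it, and copies the rest
chmod 755 foo         # restore foo's assumed original mode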

Related

Optimizing bash script (for loops, permissions, ...)

I made this script (very quickly ... :)) ages ago and use it very often, but now I'm interested in how bash experts would optimize it.
What it does is it goes through files and directories in the current directory and sets the correct permissions:
#!/bin/bash
echo "Repairing chowns."
for item in "$@"; do
  sudo chown -R ziga:ziga "$item"
done
echo "Setting chmods of directories to 755."
for item in $@; do
  sudo find "$item" -type d -exec chmod 755 {} \;
done
echo "Setting chmods of files to 644."
for item in $@; do
  sudo find "$item" -type f -exec chmod 644 {} \;
done
echo "Setting chmods of scripts to 744."
for item in $@; do
  sudo find "$item" -type f -name "*.sh" -exec chmod 744 {} \;
  sudo find "$item" -type f -name "*.pl" -exec chmod 744 {} \;
  sudo find "$item" -type f -name "*.py" -exec chmod 744 {} \;
done
What I'd like to do is
Reduce the number of for-loops
Reduce the number of find statements (I know the last three can be combined into one, but I wonder if it can be reduced even further)
Make the script accept parameters other than the current directory, and possibly accept multiple parameters (right now it only works if I cd into the desired directory and then call bash /home/ziga/scripts/repairPermissions.sh .). NOTE: the parameters might have spaces in the path.
a) You are looping through "$@" in all cases; you only need a single loop.
a1) But find can do this for you; you don't need a bash loop.
a2) And chown can take multiple directories as arguments.
b) Per chw21, remove the sudo for files you own.
c) One exec per found file/directory is expensive; use xargs.
d) Per chw21, combine the last three finds into one.
#!/bin/bash
echo "Repairing permissions."
sudo chown -R ziga:ziga "$@"
find "$@" -type d -print0 | xargs -0 --no-run-if-empty chmod 755
find "$@" -type f -print0 | xargs -0 --no-run-if-empty chmod 644
find "$@" -type f \
  \( -name '*.sh' -o -name '*.pl' -o -name '*.py' \) \
  -print0 | xargs -0 --no-run-if-empty chmod 744
This is 11 execs (sudo, chown, 3 * find/xargs/chmod) of other processes (if the argument list is very long, xargs will exec chmod multiple times).
This does however read the directory tree four times. The OS's filesystem caching should help though.
Edit: Explanation for why xargs is used in answer to chepner's comment:
Imagine that there is a folder with 100 files in it.
a find . -type f -exec chmod 644 {} \; will execute chmod 100 times.
Using find . -type f -print0 | xargs -0 chmod 644 executes find once, xargs once, and chmod once (or a few more times if the argument list is very long).
This is three processes started compared to 101 processes started. The resources and time (and energy) needed to execute three processes is far less.
Edit 2:
Added --no-run-if-empty as an option to xargs. Note that this may not be portable to all systems.
I assume that you are ziga. This means that after the first chown command in there, you own every file and directory, and I don't see why you'd need any sudo after that.
You can combine the last three finds quite easily:
find "$item" -type f \( -name "*.sh" -o -name "*.py" -o -name "*.pl" \) -exec chmod 744 {} \;
Apart from that, I wonder what kind of broken permissions you tend to find. For example, chmod knows the +X modifier, which sets x only on directories and on files where at least one of user, group, or other already has x.
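For instance, a single hypothetical invocation using X (not from the original script) sets directories to 755 and plain files to 644 in one pass, although files that already had any execute bit end up 755 rather than 744:

chmod -R u=rwX,go=rX .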
You can simplify this:
for item in "$@"; do
To this:
for item; do
That's right: the default word list for a for loop is "$@".
This won't work well if some of the directories contain spaces:
for item in $@; do
Again, replace it with for item; do. The same applies to all the other loops.
As the other answer pointed out, if you are running this script as ziga, then you can drop all the sudo except in the first loop.
You may use the idiom for item instead of for item in "$@".
We may reduce the use of find to just once (with a double loop).
I am assuming that ziga is your "$USER" name.
Bash 4.4 is required for the readarray with the -d option.
#!/bin/bash
user=$USER
for i
do
    # Repairing chowns.
    sudo chown -R "$user:$user" "$i"
    readarray -d '' -t line < <(sudo find "$i" -print0)
    for f in "${line[@]}"; do
        # Setting chmods of directories to 755.
        [[ -d $f ]] && sudo chmod 755 "$f"
        # Setting chmods of files to 644.
        [[ -f $f ]] && sudo chmod 644 "$f"
        # Setting chmods of scripts to 744.
        [[ $f == *.@(sh|pl|py) ]] && sudo chmod 744 "$f"
    done
done
If you have an older bash (2.04+), change the readarray line to this:
while IFS='' read -r -d '' f; do line+=("$f"); done < <(sudo find "$i" -print0)
I am keeping the sudo, assuming the "items" in "$@" might be repeated inside the searched directories. If there is no repetition, sudo may be omitted.
There are two unavoidable external commands (sudo chown) per loop, plus only one find per loop.
The other two external commands (sudo chmod) are reduced to only the invocations needed to effect the changes.
But the shell is always very slow at this kind of job, so the gain in speed depends on the type of files the script is run on.
Do some testing with time to find out.
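For instance (assuming the script is saved as repairPermissions.sh, as in the question):

time bash repairPermissions.sh /some/dir '/some dir with spaces'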
Subject to a few constraints, listed below, I would expect something similar to the following to be faster than anything mentioned thus far. I also use only Bourne shell constructs:
#!/bin/sh
set -e
per_file() {
    chown ziga:ziga "$1"
    test -d "$1" && chmod 755 "$1" || { test -f "$1" && chmod 644 "$1"; }
    file "$1" | awk -F: '$2 !~ /script/ {exit 1}' && chmod 744 "$1"
}
if [ "$process_my_file" ]
then
    per_file "$1"
    exit
fi
export process_my_file=$0
find "$@" -exec "$0" {} +
The script calls find(1) on the command-line arguments, and invokes itself for each file found. It detects re-invocation by testing for the existence of an environment variable named process_my_file. Although there is a cost to invoking the shell each time, it should be dwarfed by not traversing the filesystem.
Notes:
set -e to exit on first error
+ in find to batch many file names into each invocation (fewer processes)
file(1) to detect a script
chown(1) not invoked recursively
symbolic links not followed
Calling file(1) for every file to determine if it's a script is definitely more expensive than just testing the name, but it's also more general. Note that awk is testing only the description text, not the filename. It would misfire if the filename contained a colon.
You could lift chown back up out of the per_file() function, and use its -R option. That might be faster, as it's one execution. It would be an interesting experiment.
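That experiment might look like this (a hypothetical variant of the script's final lines, with the chown inside per_file() removed):

chown -R ziga:ziga "$@"
export process_my_file=$0
find "$@" -exec "$0" {} +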
In terms of your requirements, there are no loops, and only one call to find(1). You also mentioned,
right now it only works if I cd into a desired directory
but that's not true. It also works if you mention the target directory on the command line,
$ fix_tree_perms my/favorite/directory
You could add, say, a -d option to change there first, but I don't see how that would be easier to use.

Sync file permissions *only*

A junior team member did a nasty chmod -R 777 in /etc/, which broke remote SSH login on an Ubuntu server. I fixed the login issue by manually setting the correct file permissions on /etc/ssh/*, /etc/sudoers, and /etc/ssl/* by comparing against a healthy system. But there are many other files that may cause future issues.
I am thinking of using rsync to do the work, but I don't want it to sync file contents, just permissions and nothing more.
Is that possible? I see rsync has the -a option, but it does too much.
If you have the "normal" content of /etc available on the same system (like mounted in some other directory, let's say /mnt/correct/etc), you could use the --reference parameter of the chmod and chown commands, and combine it with a find started from the "normal" directory:
$ cd /mnt/correct/etc
$ find . ! -type l -exec chown -v --reference='{}' /etc/'{}' \;
$ find . ! -type l -exec chmod -v --reference='{}' /etc/'{}' \;
(I'm assuming you're on a UNIX system with GNU coreutils versions of chmod and chown.)
The "! -type l" condition in find excludes symbolic links, because otherwise chmod would use the link's permissions to change the file the link points to (and the same applies to chown).
Please note you can also try something that doesn't require copying files from one place to another (depending on the file sizes, that may be desirable).
You could use a mix of find and some grepping to generate a shell script to be executed on the host where you need to fix permissions. You could use the same approach to generate a script for changing users/groups as well. For example:
# find . -printf 'chmod %m %p #%M\n' | sort -k3 | grep -Pi '\s+\S*s\S*$' > /var/tmp/fix_permissions.bash
# bash /var/tmp/fix_permissions.bash
In the example above, what it does is to list all the files with their attributes in this format:
chmod 2755 ./addfs_7.1.0/bin #drwxr-sr-x
chmod 2755 ./addfs_7.1.0/config #drwxr-sr-x
chmod 2755 ./addfs_7.1.0 #drwxr-sr-x
chmod 2755 ./addfs_7.1.0/install #drwxr-sr-x
chmod 2755 ./addfs_7.1.0/library.dda #drwxr-sr-x
chmod 2755 ./addfs_7.1.0/library #drwxr-sr-x
chmod 2755 ./autosimimport #drwxr-sr-x
And in my case I only wanted to sync those with the 's' flag, so I filtered with grep -Pi '\s+\S*s\S*$'. The sort was there as well because I had to compare the files with the other host.
TLDR
If you just want to apply all the permissions with no filtering or comparing:
Create a script with the correct permissions on the "base" host
find . -printf 'chmod %m %p\n' > /var/tmp/fix_permissions.sh
Execute the script in the other host
bash /var/tmp/fix_permissions.sh
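Ownership can be captured the same way, using find's %u and %g format directives (a sketch following the same pattern: generate on the base host, run on the other host):

find . -printf 'chown %u:%g %p\n' > /var/tmp/fix_owners.sh
bash /var/tmp/fix_owners.sh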

In a Linux script, how to remove all files & directories but one, in current directory?

In a shell script, I want to remove all files and directories but one file, in the current directory.
I used
ls | grep -v 'nameoffiletokeep' | xargs rm
this removes files, but directories are not deleted.
find . -mindepth 1 -maxdepth 1 ! -iname nameoffiletokeep -print0 | xargs -0 rm -rf
This finds all files and directories that are direct children of the current working directory that are not named nameoffiletokeep and removes them all (recursively for directories), regardless of leading dots (e.g. .hidden, which would be missed if you used a glob like rm -rf *), spaces, or other metachars in the file names.
I've used -iname for case-insensitive matching against nameoffiletokeep, but if you want case-sensitivity, you should use -name. The choice should depend on the underlying file system behavior, and your awareness of the letter-case of the file name you're trying to protect.
If you are using bash, you can use extended globbing:
shopt -s extglob
rm -fr !(nameoffiletokeep)
In zsh the same idea is possible:
setopt extended_glob
rm -fr ^nameoffiletokeep

List too long to chmod recursively

I have tried the following command to chmod many images within a folder...
chown -R apache:apache *
But I get the following error:
-bash: /usr/bin: Argument list too long
I then tried...
ls | xargs chown -R apache:apache *
and then got the following message:
-bash: /usr/bin/xargs: Argument list too long
Does anyone have a way to do this? I'm stumped :(
Many thanks
William
Omit the * after xargs chown because it will try to add the list of all file names twice (once from ls and then again from *).
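The corrected attempt would then be (still fragile for file names containing spaces or newlines; the find-based variants below are safer):

ls | xargs chown -R apache:apache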
Try
chown -R apache:apache .
This changes the current folder (.) and everything in it, and always works. If the folder itself needs a different owner, note its current one down and restore it afterwards using chown without -R.
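A sketch of that save-and-restore step (stat -c '%U:%G' is GNU coreutils syntax; adjust for other systems):

orig_owner=$(stat -c '%U:%G' .)   # remember the folder's own owner and group
chown -R apache:apache .          # recurse over the folder and its contents
chown "$orig_owner" .             # restore the folder itself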
If you really want to process only the contents of the folder, this will work:
find . -maxdepth 1 -not -name "." -print0 | xargs --null chown -R apache:apache
This may work:
find . -maxdepth 1 -not -name . -exec chown -R apache:apache {} \;
You can simply pass the current directory to chown -R:
chown -R apache:apache .
The one corner case where this is incorrect is if you want all files and subdirectories, but not the current directory and the .dotfiles in it, to have the new owner. The rest of this answer explains approaches for that scenario in more detail, but if you don't need that, you can stop reading here.
If you have root or equivalent privileges, doing a cleanup back to the original owner without -R is probably acceptable; or you can fix the xargs to avoid the pesky, buggy ls and take out the incorrect final * argument from the OP's attempt:
printf '%s\0' * | xargs -r0 chown -R apache:apache
Notice the GNU extension to use a null byte as separator. If you don't have -0 in your xargs, maybe revert to find, as suggested already in an older answer.
find . -maxdepth 1 -name '*' -exec chown -R apache:apache {} +
If your find doesn't understand -exec ... {} + try -exec ... {} \; instead.

Using find - Deleting all files/directories (in Linux) except one

If we want to delete all files and directories, we use rm -rf *.
But what if I want all files and directories to be deleted in one shot, except one particular file?
Is there any command for that? rm -rf * gives the ease of deletion in one shot, but deletes even my favourite file/directory.
Thanks in advance
find can be a very good friend:
$ ls
a/ b/ c/
$ find * -maxdepth 0 -name 'b' -prune -o -exec rm -rf '{}' ';'
$ ls
b/
$
Explanation:
find * -maxdepth 0: select everything selected by * without descending into any directories
-name 'b' -prune: do not bother (-prune) with anything that matches the condition -name 'b'
-o -exec rm -rf '{}' ';': call rm -rf for everything else
By the way, another, possibly simpler, way would be to move or rename your favourite directory so that it is not in the way:
$ ls
a/ b/ c/
$ mv b .b
$ ls
a/ c/
$ rm -rf *
$ mv .b b
$ ls
b/
Short answer
ls | grep -v "z.txt" | xargs rm
Details:
The thought process for the above command is :
List all files (ls)
Ignore one file named "z.txt" (grep -v "z.txt")
Delete the listed files other than z.txt (xargs rm)
Example
Create 5 files as shown below:
echo "a.txt b.txt c.txt d.txt z.txt" | xargs touch
List all files except z.txt
ls|grep -v "z.txt"
a.txt
b.txt
c.txt
d.txt
We can now delete (rm) the listed files by using the xargs utility:
ls|grep -v "z.txt"|xargs rm
You can type it right on the command line, or use this one-liner in a script:
files=$(ls | grep -v "my_favorite_dir"); for file in $files; do rm -rvf "$file"; done
P.S. I suggest the -i switch for rm to prevent deletion of important data.
P.P.S. You can write a small script based on this solution and place it in /usr/bin (e.g. /usr/bin/rmf). Then you can use it as an ordinary app:
rmf my_favorite_dir
The script looks like (just a sketch):
#!/bin/bash
if [[ -z $1 ]]; then
  files=$(ls)
else
  files=$(ls | grep -v "$1")
fi
for file in $files; do
  rm -rvi "$file"
done
At least in zsh,
rm -rf ^filename
could be an option if you only want to preserve one single file (this requires setopt extended_glob, as shown above).
If it's just one file, one simple way is to move that file to /tmp or something, rm -Rf the directory and then move it back. You could alias this as a simple command.
The other option is to do a find, then grep out what you don't want (using -v, or directly using one of find's predicates), and then rm the remaining files.
For a single file, I'd do the former. For anything more, I'd write something custom similar to what thkala said.
In bash you have the !() glob operator, which inverts the matched pattern. So to delete everything except the file my_file_name.txt, try this:
shopt -s extglob
rm -f !(my_file_name.txt)
See this article for more details:
http://karper.wordpress.com/2010/11/17/deleting-all-files-in-a-directory-with-exceptions/
I don't know of such a program, but I have wanted one in the past several times. The basic syntax would be:
IFS='
'
for f in $(except "*.c" "*.h" -- *); do
  printf '%s\n' "$f"
done
The program I have in mind has three modes:
exact matching (with the option -e)
glob matching (default, like shown in the above example)
regex matching (with the option -r)
It takes the patterns to be excluded from the command line, followed by the separator --, followed by the file names. Alternatively, the file names might be read from stdin (if the option -s is given), each on a line.
Such a program should not be hard to write, in either C or the shell command language, and it makes a good exercise for learning the Unix basics. When you write it as a shell program, you have to watch out for filenames containing whitespace and other special characters, of course.
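A minimal sketch of such an except in POSIX sh, glob mode only (the -e, -r, and -s options are omitted, and patterns containing whitespace are not handled):

#!/bin/sh
# except: print the file names (after --) that match none of the patterns (before --)
pats=
while [ "$1" != -- ]; do
  pats="$pats $1"
  shift
done
shift
for f in "$@"; do
  keep=yes
  for p in $pats; do
    # an unquoted $p is used as a glob pattern by case
    case $f in ($p) keep=; break ;; esac
  done
  [ "$keep" ] && printf '%s\n' "$f"
done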
I see a lot of long-winded methods here that work, but with
a/ b/ c/ d/ e/
rm -rf *.* !(b*)
this removes everything except directory b/ and its contents (assuming your file is in b/, that extglob is enabled via shopt -s extglob, and that nothing you want to keep matches *.*).
Then just cd b/ and
rm -rf *.* !(filename)
to remove everything else except the file (named "filename") that you want to keep.
mv subdir/preciousfile ./
rm -rf subdir
mkdir subdir
mv preciousfile subdir/
This looks tedious, but it is rather safe:
avoids complex logic
never uses rm -rf *, whose results depend on your current directory (which could be / ;-)
never uses a bare glob *: its expansion is limited by ARG_MAX.
allows you to check for errors after each command, and perhaps avoid the disaster the next command would cause.
avoids nasty problems caused by spaces or newlines in the filenames.
cd ..
ln trash/useful.file ./    # hard-link the file outside the directory first
rm -rf trash/*             # the data survives through the second link
mv useful.file trash/      # move it back
You need to use a regular expression for this: write one that selects all files except the one you want to keep.
