How to find all subdirectories with a specific name and delete their contents (and NOT delete the directories themselves) - linux

I can't seem to find a working solution among the several posts that ask a similar question. This is the command that has come closest to working, which I've tried on some test folders:
find . -iname "*Adobe Premiere Pro Video Previews*" -exec sh -c 'rm -rf {}/*' \;
The problem is that find . -iname "*Adobe Premiere Pro Video Previews*" by itself finds and prints the subdirectories, and -exec sh -c 'rm -rf {}/*' \; does the job of deleting only the contents without deleting the directory itself, but put together they do not find the directories and delete their contents. What command should I use to accomplish both tasks at once?
Thanks

# Use a NUL-delimited loop so directory names containing spaces survive word splitting
find . -type d -iname "*Adobe Premiere Pro Video Previews*" -print0 |
while IFS= read -r -d '' dir; do
    find "$dir" -type f -delete
done

I may not be an expert in bash, but for me the following command works:
find . -iname "*test*" -type d -exec sh -c "rm -rf {}/*" \;

#!/bin/bash
while read -r line; do
    echo "Deleting CONTENTS of folder: $line"
    rm -rf "$line"/*
done <<< "$(find . -type d -iname "*Adobe Premiere Pro Video Previews*")"
This loops through the results of find (using -type d so only directories are matched and you won't run into problems if a file name contains the search string) and runs rm -rf on the contents of each result. The /* is important: without it, the result directory itself would be deleted instead of just its contents. Note that the glob must stay outside the quotes ("$line"/*), or it won't expand.
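If you'd rather not rely on shell globbing at all (the /* form skips dot-files inside the folder), a per-directory find can empty each match instead; this is only a sketch of an alternative, not one of the answers above:
# Sketch: for every matched directory, delete everything strictly below it;
# -mindepth 1 keeps the matched directory itself in place.
find . -type d -iname "*Adobe Premiere Pro Video Previews*" \
    -exec sh -c 'for d; do find "$d" -mindepth 1 -delete; done' sh {} +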

Related

Using find to delete symbolic links except those which point to directories

At the moment I recursively remove all softlinks from my current working directory like this:
find . -type l -delete
But I don't want to remove symlinks pointing to a directory anymore.
Is there a simple way to customize the find command, or do I have to omit the -delete and script something that inspects every found softlink "myself" before removing it?
As already suggested in the comments, you can use the test utility for this; but you don't need readlink because test -d always resolves symbolic links.
# replace -print with -exec rm {} +
find . -type l ! -exec test -d {} \; -print
It might be slow due to the overhead of spawning a new process for each symlink, though. If that's a problem, you can incorporate a small shell script into your find command to process them in batches.
find . -type l -exec sh -c '
    for link; do
        shift
        if ! test -d "$link"; then
            set -- "$@" "$link"
        fi
    done
    # remove echo to actually delete the links
    echo rm "$@"' sh {} +
Or, if you have GNU find installed, you can utilize the -xtype primary.
# replace -print with -delete
find -type l ! -xtype d -print
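A quick way to sanity-check the -xtype approach before deleting anything (the file and directory names below are made up for illustration):
mkdir -p target_dir && touch target_file
ln -s target_dir dir_link && ln -s target_file file_link
find . -type l ! -xtype d -print    # should list ./file_link but not ./dir_link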

How to delete all subdirectories with a specific name

I'm working on Linux and there is a folder which contains lots of subdirectories. I need to delete all subdirectories that have the same name. For example,
dir
|---subdir1
|---subdir2
| |-----subdir1
|---file
I want to delete every subdir1. Here is my script:
find dir -type d -name "subdir1" | while read directory ; do
rm -rf $directory
done
However, when I execute it, it seems that nothing happens.
I've also tried find dir -type d -name "subdir1" -delete, but still nothing happens.
If find finds the correct directories at all, these should work:
find dir -type d -name "subdir1" -exec echo rm -rf {} \;
or
find dir -type d -name "subdir1" -exec echo rm -rf {} +
(the echo is there to verify the command hits the files you wanted; remove it to actually run the rm and remove the directories.)
Both piping to xargs and to while read have the downside that unusual file names will cause issues. Also, find -delete will only try to remove the directories themselves, not their contents. It will fail on any non-empty directories (but you should at least get errors).
With xargs, spaces separate arguments by default, so even file names with spaces will not work. read can deal with spaces, but in your command it's the unquoted expansion of $directory that splits the value on spaces.
If your filenames don't have newlines or trailing spaces, this should work, too:
find ... | while read -r x ; do rm -rf "$x" ; done
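To see why the quoting matters, here is a small illustration (the directory name is made up):
dir="name with spaces"
rm -rf $dir     # unquoted: expands to three separate arguments: name, with, spaces
rm -rf "$dir"   # quoted: one argument, the directory you actually meant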
With the globstar option (enable with shopt -s globstar, requires Bash 4.0 or newer):
rm -rf **/subdir1/
The drawback of this solution as compared to using find -exec or find | xargs is that the argument list might become too long, but that would require quite a lot of directories named subdir1. On my system, ARG_MAX is 2097152.
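You can check the limit on your own system with getconf:
getconf ARG_MAX    # e.g. 2097152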
Using xargs:
find dir -type d -name "subdir1" -print0 |xargs -0 rm -rf
Some information not directly related to the question/problem:
find|xargs or find -exec
https://www.everythingcli.org/find-exec-vs-find-xargs/
From the question, it seems you've tried to use while with find. The following substitution may help you:
while IFS= read -rd '' dir; do rm -rf "$dir"; done < <(find dir -type d -name "subdir" -print0)

find command in bash script resulting in "No such file or directory" error only for directories?

UPDATE 2014-03-21
So I realized I wasn't as efficient as I could be, as all the disks that I needed to "scrub" were under /media and named "disk1, disk2, disk3, etc." Here's the final script:
DIRTY_DIR="/media/disk*"
find $DIRTY_DIR -depth -type d -name .AppleDouble -exec rm -rf {} \;
find $DIRTY_DIR -depth -type d -name .AppleDB -exec rm -rf {} \;
find $DIRTY_DIR -depth -type d -name .AppleDesktop -exec rm -rf {} \;
find $DIRTY_DIR -type f -name ".*DS_Store" -exec rm -f {} \;
find $DIRTY_DIR -type f -name ".Thumbs.db" -exec rm -f {} \; # I know, I know, this is a Windows file.
Next will probably be to clean up the code even more and add features like logging and reporting results (through e-mail or otherwise), excluding system files and directories, and allowing people to customize the list of files/directories.
Thanks for all the help!
UPDATE
Before I incorporated the helpful suggestions provided by everyone, I performed some tests, the results of which were very interesting (see below).
As a test, I ran this command:
root@doi:~# find /media/disk3 -type d -name .AppleDouble -exec echo rm -rf {} \;
The results (which is what I expected):
rm -rf /media/disk3/Videos/Chorus/.AppleDouble
However, when I ran the actual command (without echo):
root@doi:~# find /media/disk3 -type d -name .AppleDouble -exec rm -rf {} \;
I received the same "error" output:
find: `/media/disk3/Videos/Chorus/.AppleDouble': No such file or directory
I put "error" in quotes because obviously the folder was removed, as verified by immediately running:
root@doi:~# find /media/disk3 -type d -name .AppleDouble -exec rm -rf {} \;
root@doi:~#
It seems like the find command stored the original results, acted on them by deleting the directory, but then tried to delete it again? Or is the -f option of rm, which is supposed to ignore nonexistent files and arguments, being ignored? I note that when I ran tests with the rm command alone, without find, everything worked as expected: directly running rm -rf nonexistent_directory returned no errors even though the directory was not there, and directly running rm -r nonexistent_directory produced the expected:
rm: cannot remove 'nonexistent_directory': No such file or directory
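As a side note, the exit status mirrors this behavior; the two lines below are only an illustration, not part of the original post:
rm -rf nonexistent_directory; echo $?   # prints 0: -f ignores the missing operand and reports success
rm -r nonexistent_directory; echo $?    # prints the error above, then a nonzero status (1 with GNU rm)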
Should I use the -delete option instead of the -exec rm ... option? I had wanted to make the script as broadly applicable as possible, for systems whose find doesn't have the -delete option.
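For what it's worth, a -delete-only variant is possible on GNU find, since -delete implies -depth; you just have to match the directories' contents as well, because -delete refuses to remove non-empty directories. The command below is only a sketch based on that behavior, not something from the original post:
# Sketch (GNU find): depth-first, delete everything inside each .AppleDouble
# directory first, then the now-empty directory itself.
find /media/disk3 \( -path '*/.AppleDouble/*' -o -type d -name .AppleDouble \) -delete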
Lastly, I presume it doesn't matter that /media/disk1, /media/disk2, ... are combined into an AUFS filesystem under /media/storage, since the find command operates on the individual disks themselves?
Thanks for all the help so far, guys. I'll publish the script when I'm done.
ORIGINAL POST
I'm writing a bash script to delete a few OS X remnants on my Lubuntu file shares. However, when executing this:
...
BASE_DIR="/media/disk" # I have 4 disks: disk1, disk2, ...
COUNTER=1
while [ $COUNTER -lt 5 ]; do # Iterate through disk1, disk2, ...
DIRTY_DIR=${BASE_DIR}$COUNTER # Look under the current disk counter /media/disk1, /media/disk2, ...
find $DIRTY_DIR -name \.AppleDouble -exec rm -rf {} \; # Delete all .AppleDouble directories
find $DIRTY_DIR -name ".*DS_Store" -exec rm -rf {} \; # Delete all .DS_Store and ._.DS_Store files
COUNTER=$(($COUNTER+1))
done
...
I see the following output:
find: /media/disk1/Pictures/.AppleDouble: No such file or directory
Before I added the -exec rm ... portion, the script found the /media/disk1/Pictures/.AppleDouble directory. The script works properly for removing the DS_Store files, but what am I missing in the find command for directories?
I'm afraid to mess around too much with the -exec portion, as I don't want to obliterate directories by mistake.
tl;dr - Pass -prune if you're deleting directories using find.
For anyone else who stumbles on this question: running an example like this
find /media/disk3 -type d -name .AppleDouble -exec rm -rf {} \;
results in an error like
find: `/media/disk3/Videos/Chorus/.AppleDouble': No such file or directory
When finding and deleting directories with find, you'll often encounter this error because find records the directory so it can descend into it later, deletes it via -exec, and then tries to traverse subdirectories that no longer exist.
You can pass -prune (or use -depth) to prevent this issue. Like so:
find /media/disk3 -type d -name .AppleDouble -prune -exec rm -rf {} \;
Now it deletes the directories without any errors. Hurray! :)
You don't need to escape the dot in a shell glob, as this is not a regex. So use .AppleDouble instead of \.AppleDouble:
find $DIRTY_DIR -name .AppleDouble -exec rm -rf '{}' \;
PS: I don't see $COUNTER being incremented anywhere in your script.

nonzero return code although find -exec rm works

I'm on a Linux system and I wonder what is wrong with the following execution of find:
mkdir a && touch a/b
find . -name a -type d -exec echo '{}' \;
./a
find . -name a -type d -exec rm -r '{}' \;
find: `./a': No such file or directory
The invocation of echo is just for testing purposes. I would expect the last command to remove the directory ./a entirely and return 0. Instead, it removes the directory but still generates the error message. To repeat, it does remove the directory! What is going on?
rm executes without a problem. The issue is that find is confused: since it knew the directory ./a was there, it tries to descend into it to keep looking for directories named a. However, find cannot enter the directory, since it has already been removed.
One way to avoid this is to do
find -name a -type d | xargs rm -r
This will let the find move along before the rm command is executed. Or, you can simply ignore the error in your original command.
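If you do choose to ignore it, one option (just a sketch, not from the original answer) is to discard find's error output, since rm itself succeeds; note that find will still exit with a nonzero status:
# Sketch: hide find's "No such file or directory" complaint; this also hides
# any other errors find might print, and the exit code stays nonzero.
find . -name a -type d -exec rm -r '{}' \; 2>/dev/null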
Based on epsalon's comment, the solution is to use the -depth option, which causes deeper files to be visited first.
find . -depth -name a -type d -exec rm -r '{}' \;
does the trick. Thanks a bunch!
If performance is an issue, use -prune in order to prevent find from descending into directories named "a":
find . -name a -type d -prune -exec rm -r '{}' \;

Create file in Linux and replace content

I have a project in Linux. I want to create a file named index.html in all folders.
So I have used the following command:
find . -type d -exec touch {}/index.html \;
It's working! Now I'm trying to copy an existing file from a given location and have it automatically replace the index.html in all the folders of my project.
This should actually work exactly in the same way:
find . -type d -exec cp "$sourcedir/index.html" {}/index.html \;
If I understand your question correctly, what you want is to copy a given file in all the directories.
You can use a similar find command:
find . -type d -exec cp -f /tmp/index.html {} \;
where /tmp/index.html is path to the original file (replace it with your own path).
Also, you don't need to create the files if your final objective is to replace them with the original file.
tar -cvzf index.tar.gz `find . -type f -iname 'index.html'` && scp index.tar.gz USER@SERVER:/your/project/root/on/SERVER && ssh USER@SERVER "cd /your/project/root/on/SERVER && tar -xvzf index.tar.gz"
Or, if you're in the proper directory on localhost and rsync is available:
rsync -r --include='*/' --include='index.html' --exclude='*' . USER@SERVER:/your/project/root/on/SERVER
HTH
