I'm trying to remove all folders with
rm -R !(foo|bar|abc)
which excludes the given folder names; there could be two or more of them.
That works fine!
In the next step I need to copy the needed missing folders from another directory into this folder.
I tried the following, but it doesn't work, and it should also be flexible with the number of folders.
rm -R !($neededfolders)
ownedfolders=$(ls ./dest/)
find ../source/ -maxdepth 1 -type d | grep "$neededfolders" | grep -v "$ownedfolders" | xargs cp -Rt ./dest/
My problem with the code is that grep won't take multiple names. I also tried declaring ownedfolder as an array, setting the second grep to
grep -v ${ownedfolder[i]}
and putting the whole thing in a for loop, but that ends in failure.
Many thanks!
You can use a for loop:
needed='@(foo|bar|abc)'
for dir in ../source/*/ ; do
dir=${dir%/}
if [[ $dir == ../source/$needed && ! -d dest/${dir##*/} ]] ; then
cp -R "$dir" dest/
fi
done
This avoids the ugly variable $ownedfolders populated by the output of ls.
You need to use the -E option to grep to enable extended regular expressions, which recognize | alternatives:
find ../source/ -maxdepth 1 -type d | grep -E "$neededfolders" | grep -v -E "$ownedfolders" | xargs cp -Rt ./dest/
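Note that this assumes $neededfolders holds a pipe-separated alternation that grep -E understands, for example:
neededfolders='foo|bar|abc'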
I would like to find the newest subdirectory in a directory and save the result to a variable in bash.
Something like this:
ls -t /backups | head -1 > $BACKUPDIR
Can anyone help?
BACKUPDIR=$(ls -td /backups/*/ | head -1)
$(...) evaluates the statement in a subshell and returns the output.
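For example, the captured value can then be used like any other shell variable (the echo is purely illustrative):
BACKUPDIR=$(ls -td /backups/*/ | head -1)
echo "Newest backup directory: $BACKUPDIR"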
There is a simple solution to this using only ls:
BACKUPDIR=$(ls -td /backups/*/ | head -1)
-t orders by time (latest first)
-d only lists items from this folder
*/ only lists directories
head -1 returns the first item
I didn't know about */ until I found Listing only directories using ls in bash: An examination.
This is a pure Bash solution:
topdir=/backups
BACKUPDIR=
# Handle subdirectories beginning with '.', and empty $topdir
shopt -s dotglob nullglob
for file in "$topdir"/* ; do
[[ -L $file || ! -d $file ]] && continue
[[ -z $BACKUPDIR || $file -nt $BACKUPDIR ]] && BACKUPDIR=$file
done
printf 'BACKUPDIR=%q\n' "$BACKUPDIR"
It skips symlinks, including symlinks to directories, which may or may not be the right thing to do. It skips other non-directories. It handles directories whose names contain any characters, including newlines and leading dots.
Well, I think this solution is the most efficient:
path="/my/dir/structure/*"
backupdir=$(find $path -type d -prune | tail -n 1)
Explanation why this is a little better:
We do not need sub-shells (aside from the one for getting the result into the bash variable).
We do not need a useless -exec ls -d at the end of the find command, it already prints the directory listing.
We can easily alter this, e.g. to exclude certain patterns. For example, if you want the second newest directory, because backup files are first written to a tmp dir in the same path:
backupdir=$(find $path -type d -prune -not -name "*temp_dir" | tail -n 1)
The above solution doesn't take into account things like files being written and removed from the directory, resulting in the upper directory being returned instead of the newest subdirectory.
The other issue is that this solution assumes that the directory only contains other directories and not files being written.
Let's say I create a file called "test.txt" and then run this command again:
echo "test" > test.txt
ls -t /backups | head -1
test.txt
The result is test.txt showing up instead of the last modified directory.
The proposed solution "works" but only in the best case scenario.
Assuming you have a maximum of 1 directory depth, a better solution is to use:
find /backups/* -type d -prune -exec ls -d {} \; | tail -1
Just swap the "/backups/" portion for your actual path.
If you want to avoid showing an absolute path in a bash script, you could always use something like this:
LOCALPATH=/backups
DIRECTORY=$(cd $LOCALPATH; find * -type d -prune -exec ls -d {} \; |tail -1)
With GNU find you can get list of directories with modification timestamps, sort that list and output the newest:
find . -mindepth 1 -maxdepth 1 -type d -printf "%T@\t%p\0" | sort -z -n | cut -z -f2- | tail -z -n1
or newline separated
find . -mindepth 1 -maxdepth 1 -type d -printf "%T@\t%p\n" | sort -n | cut -f2- | tail -n1
With POSIX find (that does not have -printf) you may, if you have it, run stat to get file modification timestamp:
find . -mindepth 1 -maxdepth 1 -type d -exec stat -c '%Y %n' {} \; | sort -n | cut -d' ' -f2- | tail -n1
Without stat, a pure shell solution may be used by replacing the [[ bash extension with [, as in the pure Bash answer above.
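A minimal sketch of that idea (assuming your test/[ implementation supports the widespread, though not strictly POSIX, -nt operator; unlike the bash version it will miss dot-directories):
topdir=/backups
BACKUPDIR=
for file in "$topdir"/*; do
    [ -d "$file" ] || continue   # only directories
    [ -L "$file" ] && continue   # skip symlinks, as the bash version does
    if [ -z "$BACKUPDIR" ] || [ "$file" -nt "$BACKUPDIR" ]; then
        BACKUPDIR=$file
    fi
done
printf 'BACKUPDIR=%s\n' "$BACKUPDIR"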
Your "something like this" was almost a hit:
BACKUPDIR=$(ls -t ./backups | head -1)
Combining what you wrote with what I have learned solved my problem too. Thank you for raising this question.
Note: I ran the line above from Git Bash in a Windows environment, in a file called ./something.bash.
I am using Linux and would like to replace all files containing the string 000000 with an existing file /home/user/offblack.png but keep the existing filename. I've been working at this for a while with various combinations of -exec and xargs but no luck. So far I have:
find | grep 000000
Which does list all the files I want to change fine. How do I copy and replace these files with my existing offblack.png file?
Here's what I would use:
find (your find args here) \
| xargs fgrep '000000' /dev/null \
| awk -F: '{print $1}' \
| xargs -n 1 -I ORIGINAL_FILENAME /bin/echo /bin/cp /path/to/offblack.png ORIGINAL_FILENAME
Expanding, find all the files you're interested in, grep inside of them for the string '000000' (adding /dev/null to the list of files in case one of the generated fgreps ended up with only one filename - it ensures the output is always formatted as "filename: <line containing '000000'>"), strip out only the filenames, then one-by-one, copy in offblack.png over those files. Note that I inserted a /bin/echo in there. That's your dry-run. Remove the echo to get it to run for real.
If what you mean is that the filenames contain "000000":
find . -type f -a -name '*000000*' -exec /bin/echo /bin/cp /path/to/offblack.png {} \;
Much simpler. :-) Find every file under the current directory with a name containing your string and exec the copy of offblack.png over it. Again, what I've given you there is a dry-run. Remove the echo for your live fire drill. :-)
find . -type f | grep 000000 | tr \\n \\0 | xargs -0i+ cp ~/offblack.png "+"
Let's try and use Bash a bit more:
while read -r filename
do
    hit=""
    while read -r
    do
        if [[ $REPLY == *000000* ]]
        then
            hit=$filename
            break
        fi
    done < "$filename"
    [[ -n $hit ]] && cp /path/offblack.png "$filename"
done < <(find . -type f)
Fewer man pages to search!
I am trying to delete erroneous emails based on finding the email address in the file via Linux CLI.
I can get the files with
find . | xargs grep -l email@example.com
But I cannot figure out how to delete them from there as the following code doesn't work.
rm -f | xargs find . | xargs grep -l email@example.com
Solution for your command:
grep -l email@example.com * | xargs rm
Or
for file in $(grep -l email@example.com *); do
rm -i $file;
# ^ prompt for delete
done
For safety I normally pipe the output from find to something like awk and create a batch file with each line being "rm filename"
That way you can check it before actually running it and manually fix any odd edge cases that are difficult to do with a regex
find . | xargs grep -l email@example.com | awk '{print "rm "$1}' > doit.sh
vi doit.sh   # check for Murphy and his law
source doit.sh
You can use find's -exec and -delete, it will only delete the file if the grep command succeeds. Using grep -q so it wouldn't print anything, you can replace the -q with -l to see which files had the string in them.
find . -exec grep -q 'email@example.com' '{}' \; -delete
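For a dry run, the same command with -print instead of -delete just lists the files that would be removed:
find . -exec grep -q 'email@example.com' '{}' \; -print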
I liked Martin Beckett's solution but found that file names with spaces could trip it up (like who uses spaces in file names, pfft :D). Also I wanted to review what was matched so I move the matched files to a local folder instead of just deleting them with the 'rm' command:
# Make a folder in the current directory to put the matched files
$ mkdir -p './matched-files'
# Create a script to move files that match the grep
# NOTE: Remove "-name '*.txt'" to allow all file extensions to be searched.
# NOTE: Edit the grep argument 'something' to what you want to search for.
$ find . -name '*.txt' -print0 | xargs -0 grep -al 'something' | awk -F '\n' '{ print "mv \""$0"\" ./matched-files" }' > doit.sh
Or, because it's possible (in Linux, I don't know about other OSes) to have newlines in a file name, you can use this longer version (untested whether it works better; who puts newlines in filenames? pfft :D):
$ find . -name '*.txt' -print0 | xargs -0 grep -alZ 'something' | awk -F '\0' '{ for (x=1; x<NF; x++) print "mv \""$x"\" ./matched-files" }' > doit.sh
# Evaluate the file following the 'source' command as a list of commands executed in the current context:
$ source doit.sh
NOTE: I had issues where grep could not match inside files that had utf-16 encoding.
See here for a workaround. In case that website disappears what you do is use grep's -a flag which makes grep treat files as text and use a regex pattern that matches any first-byte in each extended character. For example to match Entité do this:
grep -a 'Entit.e'
and if that doesn't work then try this:
grep -a 'E.n.t.i.t.e'
Despite Martin's safer answer, if you're certain of what you want to delete, such as when writing a script, I've used this with greater success than any other one-liner suggested here:
$ find . | xargs grep -l email@example.com | xargs -I {} rm -rf {}
But I'd rather find by name:
$ find . -iname '*something*' | xargs -I {} echo {}
rm -f `find . | xargs grep -li email@example.com`
does the job better. Use `...` to run the command that produces the file names containing email@example.com (grep -l lists them, -i ignores case), and remove them with rm (-f forcibly, or -i interactively).
find . | xargs grep -l email@example.com
How to remove them:
rm -f `find . | xargs grep -l email@example.com`
Quick and efficient. Replace find_files_having_this_text with the text you want to search for. (-R recurses, -i ignores case, -l lists only the names of matching files.)
grep -Ril 'find_files_having_this_text' . | xargs rm
I'm not sure if this is possible in one line (i.e., without writing a script), but I want to run an ls | grep command and then for each result, pipe it to another command.
To be specific, I've got a directory full of images and I only want to view certain ones. I can filter the images I'm interested in with ls | grep -i <something>, which will return a list of matching files. Then for each file, I want to view it by passing it in to eog.
I've tried simply passing the results in to eog like so:
eog $(ls | grep -i <something>)
This doesn't quite work as it will only open the first entry in the result list.
So, how can I execute eog FILENAME for each entry in the result list without having to bundle this operation into a script?
Edit: As suggested in the answers, I can use a for loop like so:
for i in `ls | grep -i ...`; do eog $i; done
This works, but the loop waits to iterate until I close the currently opened eog instance.
Ideally I'd like for n instances of eog to open all at once, where n is the number of results returned from my ls | grep command. Is this possible?
Thanks everybody!
I would use xargs:
$ ls | grep -i <something> | xargs -n 1 eog
A bare ls piped into grep is sort of redundant given arbitrary?sh*ll-glo[bB] patterns (unless there are too many matches to fit on a command line, in which case the find | xargs combinations in other answers should be used).
eog is happy to take multiple file names so
eog pr0n*really-dirty.series?????.jpg
is fine and simpler.
Use find:
find . -mindepth 1 -maxdepth 1 -regex '...' -exec eog '{}' ';'
or
find . -mindepth 1 -maxdepth 1 -regex '...' -print0 | xargs -0 -n 1 eog
If the pattern is not too complex, then globbing is possible, making the call much easier:
for file in *.png
do
eog -- "$file"
done
Bash also has builtin regex support:
pattern='.+\.png'
for file in *
do
[[ $file =~ $pattern ]] && eog -- "$file"
done
Never use ls in scripts, and never use grep to filter file names.
#!/bin/bash
shopt -s nullglob
for image in *pattern*
do
eog "$image"
done
Bash 4
#!/bin/bash
shopt -s nullglob
shopt -s globstar
for image in **/*pattern*
do
eog "$image"
done
Try looping over the results:
for i in `ls | grep -i <something>`; do
eog $i
done
Or you can one-line it:
for i in `ls | grep -i <something>`; do eog $i; done
Edit: If you want the eog instances to open in parallel, launch each in a new process with eog $i &. The updated one-liner would then read:
for i in `ls | grep -i <something>`; do (eog $i &); done
If you want more control over the number of arguments passed on to eog, you may use "xargs -L" in combination with "bash -c":
printf "%s\n" {1..10} | xargs -L 5 bash -c 'echo "$#"' arg0
ls | grep -i <something> | xargs -L 5 bash -c 'eog "$#"' arg0
I am new to shell scripting, so I need some help here. I have a directory that fills up with backups. If I have more than 10 backup files, I would like to remove the oldest files, so that the 10 newest backup files are the only ones that are left.
So far, I know how to count the files, which seems easy enough, but how do I then remove the oldest files, if the count is over 10?
if [ls /backups | wc -l > 10]
then
echo "More than 10"
fi
Try this:
ls -t | sed -e '1,10d' | xargs -d '\n' rm
This should handle all characters (except newlines) in a file name.
What's going on here?
ls -t lists all files in the current directory in decreasing order of modification time, i.e., the most recently modified files are first, one file name per line.
sed -e '1,10d' deletes the first 10 lines, i.e., the 10 newest files. I use this instead of tail because I can never remember whether I need tail -n +10 or tail -n +11.
xargs -d '\n' rm collects each input line (without the terminating newline) and passes each line as an argument to rm.
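For reference, the tail equivalent is tail -n +11, which starts output at line 11 and therefore skips the 10 newest files:
ls -t | tail -n +11 | xargs -d '\n' rm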
As with anything of this sort, please experiment in a safe place.
find is the common tool for this kind of task:
find ./my_dir -mtime +10 -type f -delete
EXPLANATIONS
./my_dir your directory (replace with your own)
-mtime +10 older than 10 days
-type f only files
-delete no surprise. Remove it to test your find filter before executing the whole command
And take care that ./my_dir exists, to avoid bad surprises!
Make sure your pwd is the correct directory to delete the files from, then (assuming only regular characters in the filenames):
ls -A1t | tail -n +11 | xargs rm
keeps the newest 10 files. I use this with the camera program 'motion' to keep the most recent frame grab files. Thanks to all the preceding answers because you showed me how to do it.
The proper way to do this type of thing is with logrotate.
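As a rough illustration only (the path and schedule here are made up; see the logrotate man page for the real options on your system), a snippet dropped into /etc/logrotate.d/ that keeps 10 rotated copies of a single backup file could look like:
/var/backups/mydb.dump {
    weekly
    rotate 10
    missingok
    compress
}
Note that logrotate rotates a named file rather than pruning a directory of timestamped files, so it fits best when the backup job always writes to the same filename.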
I like the answers from @Dennis Williamson and @Dale Hagglund. (+1 to each)
Here's another way to do it using find (with the -newer test) that is similar to what you started with.
This was done in bash on cygwin...
if [[ $(ls /backups | wc -l) -gt 10 ]]
then
find /backups ! -newer $(ls -t | sed '11!d') -exec rm {} \;
fi
Straightforward file counter:
max=12
n=0
ls -1t *.dat |
while read file; do
n=$((n+1))
if [[ $n -gt $max ]]; then
rm -f "$file"
fi
done
I just found this topic and the solution from mikecolley helped me as a first step. As I needed a solution for a single-line homematic (raspberrymatic) script, I ran into the problem that this command only gave me the filenames and not the whole path, which is needed for rm. The CUxD Exec command I use cannot start in a selected folder.
So here is my solution:
ls -A1t $(find /media/usb0/backup/ -type f -name 'homematic-raspi*.sbk') | tail -n +11 | xargs rm
Explaining:
find /media/usb0/backup/ -type f -name 'homematic-raspi*.sbk' searches only files (-type f) which are named like -name 'homematic-raspi*.sbk' (case sensitive; use -iname for case insensitive) in the folder /media/usb0/backup/
ls -A1t $(...) lists the files given by find, excluding entries starting with "." or ".." (-A), sorted by mtime (-t), one entry per line (-1)
tail -n +11 skips the first 10 lines, i.e. the 10 newest files, and passes the rest on to rm
xargs rm finally removes the remaining files in the list
Maybe this helps others avoid a longer search and makes the solution more flexible.
stat -c "%Y %n" * | sort -rn | head -n +10 | \
cut -d ' ' -f 1 --complement | xargs -d '\n' rm
Breakdown: Get last-modified times for each file (in the format "time filename"), sort them from oldest to newest, keep all but the last ten entries, and then keep all but the first field (keep only the filename portion).
Edit: Using cut instead of awk since the latter is not always available
Edit 2: Now handles filenames with spaces
In a very limited chroot environment, we had only a couple of programs available to achieve what was originally asked. We solved it this way:
MIN_FILES=5
FILE_COUNT=$(ls -l | grep -c ^d )
if [ $MIN_FILES -lt $FILE_COUNT ]; then
while [ $MIN_FILES -lt $FILE_COUNT ]; do
FILE_COUNT=$[$FILE_COUNT-1]
FILE_TO_DEL=$(ls -t | tail -n1)
# be careful with this one
rm -rf "$FILE_TO_DEL"
done
fi
Explanation:
FILE_COUNT=$(ls -l | grep -c ^d ) counts all the directories in the current folder (lines of ls -l starting with d). Instead of grep we could also use wc -l, but wc was not installed on that host.
FILE_COUNT=$[$FILE_COUNT-1] updates the current $FILE_COUNT
FILE_TO_DEL=$(ls -t | tail -n1) Save the oldest file name in the $FILE_TO_DEL variable. tail -n1 returns the last element in the list.
Based on others' suggestions and some awk foo, I got this to work. I know this is an old thread, but I didn't find a decent answer here and this sorted it for me. This just deletes the oldest file, but you can change the head -n 1 to head -n 10 to get the oldest 10.
find $DIR -type f -printf '%T+ %p\n' | sort | head -n 1 | awk '{ $1 = ""; print substr($0, 2) }' | xargs -d '\n' rm
Using inode numbers via stat & find command (to avoid pesky-chars-in-file-name issues):
stat -f "%m %i" * | sort -rn -k 1,1 | tail -n +11 | cut -d " " -f 2 | \
xargs -n 1 -I '{}' find "$(pwd)" -type f -inum '{}' -print
#stat -f "%m %i" * | sort -rn -k 1,1 | tail -n +11 | cut -d " " -f 2 | \
# xargs -n 1 -I '{}' find "$(pwd)" -type f -inum '{}' -delete