At the moment our backup script explicitly runs svnadmin hotcopy on each of our repositories every night. Our repos are all stored under a parent directory (/usr/local/svn/repos).
Our backup script has a line for each of the repos under that directory along the lines of:
svnadmin hotcopy /usr/local/svn/repos/myrepo1 /usr/local/backup/myrepo1
Instead of having to manually add a new line for every new repo we bring online, I was hoping to use the find command to run svnadmin hotcopy for every directory it finds under /usr/local/svn/repos.
So far I've got:
find /usr/local/svn/repos/ -maxdepth 1 -mindepth 1 -type d -exec echo /usr/local/backup{} \;
where I'm substituting "svnadmin hotcopy" with "echo" for simplicity's sake.
The output of which is:
/usr/local/backup/usr/local/svn/repos/ure
/usr/local/backup/usr/local/svn/repos/cheetah
/usr/local/backup/usr/local/svn/repos/casemgt
/usr/local/backup/usr/local/svn/repos/royalliver
/usr/local/backup/usr/local/svn/repos/ure_andras
/usr/local/backup/usr/local/svn/repos/profserv
/usr/local/backup/usr/local/svn/repos/frontoffice
/usr/local/backup/usr/local/svn/repos/ure.orig
/usr/local/backup/usr/local/svn/repos/projectcommon
/usr/local/backup/usr/local/svn/repos/playground
/usr/local/backup/usr/local/svn/repos/casegen
The problem is that the full path is included in {}. I need only the last element of the directory name passed to -exec.
The output I want is:
/usr/local/backup/ure
/usr/local/backup/cheetah
/usr/local/backup/casemgt
/usr/local/backup/royalliver
/usr/local/backup/ure_andras
/usr/local/backup/profserv
/usr/local/backup/frontoffice
/usr/local/backup/ure.orig
/usr/local/backup/projectcommon
/usr/local/backup/playground
/usr/local/backup/casegen
I'm pretty much stuck at this point. Can anyone help me out here?
Thanks in advance,
Dave
You were on the right track. Try this:
find /usr/local/svn/repos/ -maxdepth 1 -mindepth 1 -type d -printf "%f\0" | xargs -0 -I{} echo svnadmin hotcopy /usr/local/svn/repos/\{\} /usr/local/backup/\{\}
The %f is like basename and the null plus the -0 on xargs ensures that names with spaces, etc., get passed through successfully.
Just remove the echo and make any adjustments you might need and it should do the trick.
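For reference, with the echo removed, the nightly backup line might look like this (same paths as in the question):
find /usr/local/svn/repos/ -maxdepth 1 -mindepth 1 -type d -printf "%f\0" \
  | xargs -0 -I{} svnadmin hotcopy /usr/local/svn/repos/{} /usr/local/backup/{}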
Put a cut command at the end:
find /usr/local/svn/repos/ -maxdepth 1 -mindepth 1 -type d -exec echo /usr/local/backup{} \; | cut -f1-4,9 -d"/"
How about adding a sed filter cutting out the middle part?
sed 's/usr.local.svn.repos.//g'
Added like this
find /usr/local/svn/repos/ -maxdepth 1 -mindepth 1 -type d -exec echo /usr/local/backup{} ";" | sed 's/usr.local.svn.repos.//g'
ls -al /usr/local/svn/repos/ |grep '^d' |sed "s/^...............................................................//" |xargs -L 1 -I zzyggy echo /usr/local/svn/repos/zzyggy
It's a bit long but it does the trick. You don't have to do everything with find when there are lots of other shell commands, although if I had to write this kind of script, I would do it in Python and leave the shell for interactive work.
ls -al lists all the files in the named directory with attributes
grep '^d' selects the lines beginning with d which are directories
sed strips off all the characters to the left of the actual directory name. You may need to add or delete some dots
xargs takes the list of directory names and issues them one at a time. I specified zzyggy as the name to substitute in the executed command, but you can choose what you like. Of course, you would replace echo with your svnadmin command.
If it were in a shell script you should really do this:
SVNDIRNAME="/usr/local/svn/repos"
ls -al $SVNDIRNAME |grep '^d' |sed "s/^...............................................................//" |xargs -L 1 -I zzyggy echo $SVNDIRNAME/zzyggy
but I decided to show the wrong and right way just to explain this point. I'm going to tag this with some shell tag, but I still think that a Python script is a superior way to solve this kind of problem in the 21st century.
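For what it's worth, a plain glob loop in bash sidesteps both find and the dot counting entirely; a minimal sketch using the paths from the question:
# Trailing / makes the glob match directories only
for repo in /usr/local/svn/repos/*/; do
    name=$(basename "$repo")    # strip the parent path, e.g. "myrepo1"
    svnadmin hotcopy "/usr/local/svn/repos/$name" "/usr/local/backup/$name"
done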
I'm trying to create a script which will find missing topics from multiple log files. These logfiles are filled top down, so the newest logs are at the bottom of each file. I would like to grep only the last line that includes UNKNOWN_TOPIC_OR_PARTITION from each file. This should be done across multiple files with completely different names. Is grep the best solution, or is there another tool that suits my needs? I already tried adding tail, but that doesn't seem to work.
missingTopics=$(grep -Ri -m1 --exclude=*.{1,2,3,4,5} UNKNOWN_TOPIC_OR_PARTITION /app/tibco/log/tra/domain/)
You could try a combination of find, tac and grep:
find /app/tibco/log/tra/domain -type f ! -name '*.[1-5]' -exec sh -c \
'tac "$1" | grep -im1 UNKNOWN_TOPIC_OR_PARTITION' "sh" '{}' \;
tac prints files in reverse, the -exec sh -c SCRIPT "sh" '{}' \; action of find executes the shell SCRIPT each time a file matching the previous tests is found. The SCRIPT is executed with "sh" as parameter $0 and the path of the found file as parameter $1.
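A minimal illustration of that argument convention (the log path here is just an example):
# Prints: script name: sh, first argument: /tmp/example.log
sh -c 'echo "script name: $0, first argument: $1"' "sh" "/tmp/example.log"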
If performance is an issue you can probably improve it with:
find . -type f ! -name '*.[1-5]' -exec sh -c 'for f in "$@"; do \
tac "$f" | grep -im1 UNKNOWN_TOPIC_OR_PARTITION; done' "sh" '{}' +
which will spawn fewer shells. If security is also a concern you can replace -exec with -execdir (even if with this SCRIPT I do not immediately see any exploit).
I would like to find the newest sub directory in a directory and save the result to variable in bash.
Something like this:
ls -t /backups | head -1 > $BACKUPDIR
Can anyone help?
BACKUPDIR=$(ls -td /backups/*/ | head -1)
$(...) evaluates the statement in a subshell and returns the output.
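For example:
# $(...) captures the stdout of the inner command
today=$(date +%F)
echo "Today is $today"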
There is a simple solution to this using only ls:
BACKUPDIR=$(ls -td /backups/*/ | head -1)
-t orders by time (latest first)
-d only lists items from this folder
*/ only lists directories
head -1 returns the first item
I didn't know about */ until I found Listing only directories using ls in bash: An examination.
This is a pure Bash solution:
topdir=/backups
BACKUPDIR=
# Handle subdirectories beginning with '.', and empty $topdir
shopt -s dotglob nullglob
for file in "$topdir"/* ; do
[[ -L $file || ! -d $file ]] && continue
[[ -z $BACKUPDIR || $file -nt $BACKUPDIR ]] && BACKUPDIR=$file
done
printf 'BACKUPDIR=%q\n' "$BACKUPDIR"
It skips symlinks, including symlinks to directories, which may or may not be the right thing to do. It skips other non-directories. It handles directories whose names contain any characters, including newlines and leading dots.
Well, I think this solution is the most efficient:
path="/my/dir/structure/*"
backupdir=$(find $path -type d -prune | tail -n 1)
Explanation why this is a little better:
We do not need sub-shells (aside from the one for getting the result into the bash variable).
We do not need a useless -exec ls -d at the end of the find command; find already prints the directory paths.
We can easily alter this, e.g. to exclude certain patterns. For example, if you want the second newest directory, because backup files are first written to a tmp dir in the same path:
backupdir=$(find $path -type d -prune -not -name "*temp_dir" | tail -n 1)
The above solution doesn't take into account things like files being written to and removed from the directory, which can result in the parent directory being returned instead of the newest subdirectory.
The other issue is that this solution assumes that the directory only contains other directories and not files being written.
Let's say I create a file called "test.txt" and then run this command again:
echo "test" > test.txt
ls -t /backups | head -1
test.txt
The result is test.txt showing up instead of the last modified directory.
The proposed solution "works" but only in the best case scenario.
Assuming you have a maximum of 1 directory depth, a better solution is to use:
find /backups/* -type d -prune -exec ls -d {} \; |tail -1
Just swap the "/backups/" portion for your actual path.
If you want to avoid showing an absolute path in a bash script, you could always use something like this:
LOCALPATH=/backups
DIRECTORY=$(cd $LOCALPATH; find * -type d -prune -exec ls -d {} \; |tail -1)
With GNU find you can get a list of directories with modification timestamps, sort that list and output the newest:
find . -mindepth 1 -maxdepth 1 -type d -printf "%T@\t%p\0" | sort -z -n | cut -z -f2- | tail -z -n1
or newline separated
find . -mindepth 1 -maxdepth 1 -type d -printf "%T@\t%p\n" | sort -n | cut -f2- | tail -n1
With POSIX find (which does not have -printf) you may, if you have it, run stat to get the file modification timestamp:
find . -mindepth 1 -maxdepth 1 -type d -exec stat -c '%Y %n' {} \; | sort -n | cut -d' ' -f2- | tail -n1
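On BSD/macOS the stat flags differ; a rough equivalent (assuming BSD stat, where -f sets the format, %m is the epoch mtime and %N the name) would be:
find . -mindepth 1 -maxdepth 1 -type d -exec stat -f '%m %N' {} \; | sort -n | cut -d' ' -f2- | tail -n1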
Without stat, a pure shell solution may be used, by replacing the [[ bash extension with [ as in the pure-Bash answer above.
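For instance, a rough sketch along those lines (note that -nt is a widespread extension to test, not strictly POSIX):
newest=
for d in ./*/; do
    [ -d "$d" ] || continue                  # skip if the glob matched nothing
    if [ -z "$newest" ] || [ "$d" -nt "$newest" ]; then
        newest=$d
    fi
done
printf '%s\n' "$newest"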
Your "something like this" was almost a hit:
BACKUPDIR=$(ls -t ./backups | head -1)
Combining what you wrote with what I have learned solved my problem too. Thank you for raising this question.
Note: I run the line above from Git Bash within a Windows environment, in a file called ./something.bash.
I have a directory that contains a list of files having the following format:
240-timestamp1.ts
240-timestamp2.ts
...
360-timestamp1.ts
360-timestamp2.ts
Now, I want to implement a bash command which matches the files that start with '240' and renames them so that instead of '240-timestampX.ts' the files look like '240-human-readable-timestampX.ts'.
I have tried the following:
find . -maxdepth 1 -mmin +5 -type f -name "240*"
-exec mv $0 {$0/240-***and here I want to insert
either stat -c %y filename or date -d @timestampX***} '{}' \;
I'm stuck here because I don't know if I can embed a bash command inside the mv command. I know the task may look a bit confusing and over-complicated, but I would like to know if it is possible. Of course I could write a bash script that loops over all the files in the directory and renames them one by one, but somehow I think a single command would be more efficient (even if less readable).
The OS is Linux Ubuntu 12.04.5
The shell is bash
Thank you both Kenavoz and Kurt Stutsman for the proposed solutions. Both your answers perform the task; however, I marked Kenavoz's answer as the accepted one because of the degree of similarity between my question and Kenavoz's answer. Even if it is indeed possible to do it in a cleaner way with omitting the find command, it is necessary in my case to use the respective command because I need to find files older than X units of time. So thank you both once again!
In case you want to keep your mmin option, you can use find and process the found files with a bash command using xargs:
find . -maxdepth 1 -mmin +5 -type f -name "240*.ts" | xargs -L 1 bash -c 'mv "${1}" "240-$(stat -c %y "${1}").ts"' \;
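If the file names may contain spaces, a null-delimited variant of the same idea would be safer (a sketch, assuming GNU find):
find . -maxdepth 1 -mmin +5 -type f -name "240*.ts" -print0 \
  | xargs -0 -I{} bash -c 'mv "$1" "240-$(stat -c %y "$1").ts"' sh {}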
In bash if all your files are in a single directory, you don't need to use find at all. You can do a for loop:
for file in 240-*; do
hr_timestamp=$(date -d "@$(echo "$file" | sed 's/.*-\([0-9]*\)\.ts/\1/')")
mv "$file" "240-$hr_timestamp.ts"
done
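To preview the renames first, one variation uses parameter expansion instead of sed and just echoes the mv commands (a sketch, assuming the timestamps are epoch seconds as in the question):
for file in 240-*.ts; do
    ts=${file#240-}        # drop the "240-" prefix
    ts=${ts%.ts}           # drop the ".ts" suffix
    echo mv "$file" "240-$(date -d "@$ts").ts"
done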
I've got a couple of thousand images, saved as logs, that need to be deleted.
To avoid rm's argument-list limit and to do this across multiple servers, I used the following code:
Net::SSH::Multi.start(:on_error => :ignore) do |session|
# define servers in groups for more granular access
session.group :app do
session.use 'example@example', :password=> 'example'
end
# execute commands on a subset of servers
session.with(:app).exec "find /tmp/motion -maxdepth 1 -not -name 'lastsnap.jpg' -print0 | sudo xargs -0 rm"
end
An ls -l lastsnap.jpg shows that lastsnap.jpg is linked to another file, like so
30 Jun 3 08:18 lastsnap.jpg -> 81-20140603081840-snap.jpg
This other file is constantly changing due to the logging scenario that I mentioned above.
Reiterating the question: how do I delete every logged file that is NOT lastsnap.jpg or its link target?
Thanks for the help :)
cd /tmp/motion
ls -1 | grep -v -E "$(basename `find . -lname lastsnap.jpg`)|lastsnap.jpg" | while read n ; do rm -rvf $n ; done
EDIT as per the comment
cd /tmp/motion; rm -rvf $(ls -1 | grep -v -E "$(basename `find . -lname lastsnap.jpg`)|lastsnap.jpg")
Note: Make sure that your file names don't have spaces in them. Otherwise this method will not work and needs modification to accommodate spaces in the file names.
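If spaces do turn out to be a concern, a null-delimited sketch using readlink (assuming GNU find and coreutils readlink, with the layout from the question) could look like:
cd /tmp/motion
keep=$(readlink lastsnap.jpg)    # the file lastsnap.jpg currently points to
find . -maxdepth 1 -type f ! -name 'lastsnap.jpg' ! -name "$keep" -print0 \
  | xargs -0 rm -v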
I wrote some logic using the find command. Check whether it's useful to you.
My directory contains following files
pyramid-stone.jpg
tallest_water_slide.jpg
SAOLA.JPG
testnap.jpg
silicon_valley_talent.jpg
The_Organic_Battery_From_Japan.jpg
Out of which testnap.jpg is a link
testnap.jpg -> pyramid-stone.jpg
So I wrote a small awk script to get the link name and where it's pointing to:
IG1=`ls -l | grep ^l | awk '{printf $(NF-2);}'`
IG2=`ls -l | grep ^l | awk '{printf $(NF);}'`
Then I used the find command to print all the jpgs except the link and its target:
find . -type f \( -iname "*.jpg" ! -iname $IG1 ! -iname $IG2 \)
The output is:
./SAOLA.JPG
./silicon_valley_talent.jpg
./tallest_water_slide.jpg
./The_Organic_Battery_From_Japan.jpg
NOTE: You have to add rm to remove the files after checking the find output.
How can I move all files except one? I am looking for something like:
'mv ~/Linux/Old/!Tux.png ~/Linux/New/'
where I move the contents of the Old folder to the New folder, except Tux.png. The !-sign represents negation. Is there some tool for the job?
If you use bash and have the extglob shell option set (which is usually the case):
mv ~/Linux/Old/!(Tux.png) ~/Linux/New/
Put the following in your .bashrc:
shopt -s extglob
It enables extended pattern matching operators for globbing.
You can then move all files except one by
mv !(fileOne) ~/path/newFolder
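extglob patterns can also exclude several names at once (the second file name here is illustrative):
shopt -s extglob
mv ~/Linux/Old/!(Tux.png|Beastie.png) ~/Linux/New/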
Exceptions in relation to other commands
Note that, in copying directories, the forward-slash cannot be used in the name, as noticed in the thread Why extglob except breaking except condition?:
cp -r !(Backups.backupdb) /home/masi/Documents/
so Backups.backupdb/ with a trailing slash would be wrong here before the negation, and I would not use a trailing slash when moving directories either, because of the risk of the glob going wrong when used with other commands, and of possible other exceptions.
I would go with the traditional find & xargs way:
find ~/Linux/Old -maxdepth 1 -mindepth 1 -not -name Tux.png -print0 |
xargs -0 mv -t ~/Linux/New
-maxdepth 1 makes it not search recursively. If you only care about files, you can say -type f. -mindepth 1 makes it not include the ~/Linux/Old path itself in the result. Works with any filenames, including those that contain embedded newlines.
One comment notes that the mv -t option is probably a GNU extension. For systems that don't have it:
find ~/Linux/Old -maxdepth 1 -mindepth 1 -not -name Tux.png \
-exec mv '{}' ~/Linux/New \;
A quick way would be to modify the tux filename so that your move command will not match.
For example:
mv Tux.png .Tux.png
mv * ~/somefolder
mv .Tux.png Tux.png
I think the easiest way to do it is with backticks:
mv `ls -1 ~/Linux/Old/ | grep -v Tux.png` ~/Linux/New/
Edit:
Use a backslash before ls to prevent an alias from being used, since ls is often aliased as ls --color:
mv `\ls -1 ~/Linux/Old/ | grep -v Tux.png` ~/Linux/New/
Thanks @Arnold Roa
For bash, sth's answer is correct. Here is the zsh (my shell of choice) syntax:
mv ~/Linux/Old/^Tux.png ~/Linux/New/
Requires EXTENDED_GLOB shell option to be set.
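To enable it for the current session:
setopt extendedglob
mv ~/Linux/Old/^Tux.png ~/Linux/New/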
I find this to be a bit safer and easier to rely on for simple moves that exclude certain files or directories.
ls -1 | grep -v ^$EXCLUDE | xargs -I{} mv {} $TARGET
This could be simpler and easy to remember and it works for me.
cd ~/folder && mv $(ls | grep -v exclude.png) ~/destination
The following is not a 100% guaranteed method, and should not at all be attempted for scripting. But sometimes it is good enough for quick interactive shell usage. A file glob like
[abc]*
(which will match all files with names starting with a, b or c) can be negated by inserting a "^" character first, i.e.
[^abc]*
I sometimes use this for not matching the "lost+found" directory, like for instance:
mv /mnt/usbdisk/[^l]* /home/user/stuff/.
Of course if there are other files starting with l I have to process those afterwards.
How about:
mv $(echo * | sed s:Tux.png::g) ~/Linux/New/
You have to be in the folder though.
This can be done without grep like this:
ls ~/Linux/Old/ -QI Tux.png | xargs -I{} mv ~/Linux/Old/{} ~/Linux/New/
Note: -I is a capital i and makes the ls command ignore the Tux.png file, which is given right after it.
The output of ls is then piped into mv via xargs, which allows the output of ls to be used as the source argument for mv.
ls -Q just quotes the filenames listed by ls.
mv `find Linux/Old '!' -type d | fgrep -v Tux.png` Linux/New
The find command lists all regular files, and the fgrep command filters out any Tux.png. The backticks substitute the resulting file list into the mv command.
ls ~/Linux/Old/ | grep -v Tux.png | xargs -I {} mv ~/Linux/Old/{} ~/Linux/New/
Move all files (excluding the excepted one) into the directory except_file:
find -maxdepth 1 -mindepth 1 -not -name except_file -print0 |xargs -0 mv -t ./except_file
For example (cache is the current exception):
find -maxdepth 1 -mindepth 1 -not -name cache -print0 |xargs -0 mv -t ./cache