I want to recursively search a directory tree and get the 10 most recently modified files.
For each of these files, I want to create a symlink in my /home/mostrecent/ directory.
I know I could solve this with a scripting language, but I'm a bit miffed that I can't do it with a Linux command!
So far I have this:
find /home/myfiles -type f -printf '%TY-%Tm-%Td %TT %p\n' | sort | tail -n 10 | cut -c 32-
How do i create a symlink in /home/mostrecent for each one of these files, without using a scripting language?
Actually, bash is a scripting language, more than capable of doing that sort of stuff even from the command line :-)
Assuming that the command you posted works (and it seems to, based on my cursory testing), you can just do:
i=0
for f in $(CMD); do
    ln -s $f $HOME/recent$i
    ((i++))
done
Or, as a one-liner:
i=0;for f in $(CMD);do ln -s $f $HOME/recent$i;((i++));done
This will create the files recent0 through recent9 in your home directory, which are symlinks to the most recent files.
Obviously, you should put your actual command where I've put the marker text CMD above. I've used the marker just so it formats nicely here on SO.
As Jan Hudec points out in a comment, that will only work for files without spaces, evil things in my opinion :-)
But, since people seem to use them, you can use the safer:
i=0
CMD | while read -r f; do
    ln -s "$f" "$HOME/recent$i"
    ((i++))
done
And, again, the one-liner version:
i=0;CMD|while read -r f;do ln -s "$f" "$HOME/recent$i";((i++));done
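Putting the pieces together with the find pipeline from the question (a sketch, assuming the /home/mostrecent/ target directory from the question; the quotes around $f are what let it survive spaces):
i=0
find /home/myfiles -type f -printf '%TY-%Tm-%Td %TT %p\n' | sort | tail -n 10 | cut -c 32- |
while read -r f; do
    ln -s "$f" "/home/mostrecent/recent$i"
    ((i++))
done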
I solved this with sed.
All hail sed!
find /home/myfiles/ -type f -printf '%TY-%Tm-%Td %TT %p\n' | sort | tail -n 10 | cut -c 21- | sed -e "s/^/ln -s \"/" -e "s/$/\"/" -e "s/$/ \"\/home\/recent\/\"/" | sh
If i pipe sed to cat instead of sh, this is the output:
ln -s "/home/myfiles/1.simplest" "/home/recent/"
ln -s "/home/myfiles/2.with space" "/home/recent/"
ln -s "/home/myfiles/3.with'apostraphe" "/home/recent/"
ln -s "/home/myfiles/4.with'apostrophe space" "/home/recent/"
Thanks for your help.
Create symlinks to several file types modified in the last 24 hrs, with the same filenames (but a different path of course)
Thanks to Pax, Jan and Jon, with a little modification...
Make a 'recent' directory
mkdir ~/recent
Create 'getrecentfiles.sh' and add...
#!/usr/bin/bash
find $HOME -mtime 0 -name \*.txt -print -o \
    -mtime 0 -name \*.pdf -print -o \
    -mtime 0 -name \*.extensionname -print | while read -r f; do
    ln -s "$f" "$HOME/recent/"
done
filters:
-mtime n is the time since last modification, in units of 24 hours (n=1 shows only files modified between 24 and 48 hours ago)
-o is the OR operator between tests (the default between tests is AND)
Change it to executable, add it to your startup scripts and make a shortcut to ~/recent on your desktop, to have the latest files you want on hand!
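For what it's worth, the same filter can be written a bit more compactly by grouping the -name tests with escaped parentheses, so -mtime 0 appears only once (a sketch; extensionname is the same placeholder as above):
find "$HOME" -type f -mtime 0 \( -name '*.txt' -o -name '*.pdf' -o -name '*.extensionname' \) |
while read -r f; do
    ln -s "$f" "$HOME/recent/"
done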
I would like to find the newest sub directory in a directory and save the result to variable in bash.
Something like this:
ls -t /backups | head -1 > $BACKUPDIR
Can anyone help?
BACKUPDIR=$(ls -td /backups/*/ | head -1)
$(...) evaluates the statement in a subshell and returns the output.
There is a simple solution to this using only ls:
BACKUPDIR=$(ls -td /backups/*/ | head -1)
-t orders by time (latest first)
-d lists the directories themselves rather than their contents
*/ matches only directories
head -1 returns the first item
I didn't know about */ until I found Listing only directories using ls in bash: An examination.
This is a pure Bash solution:
topdir=/backups
BACKUPDIR=
# Handle subdirectories beginning with '.', and empty $topdir
shopt -s dotglob nullglob
for file in "$topdir"/* ; do
[[ -L $file || ! -d $file ]] && continue
[[ -z $BACKUPDIR || $file -nt $BACKUPDIR ]] && BACKUPDIR=$file
done
printf 'BACKUPDIR=%q\n' "$BACKUPDIR"
It skips symlinks, including symlinks to directories, which may or may not be the right thing to do. It skips other non-directories. It handles directories whose names contain any characters, including newlines and leading dots.
Well, I think this solution is the most efficient:
path="/my/dir/structure/*"
backupdir=$(find $path -type d -prune | tail -n 1)
Explanation of why this is a little better:
We do not need sub-shells (aside from the one for getting the result into the bash variable).
We do not need a useless -exec ls -d at the end of the find command; it already prints the directory listing.
We can easily alter this, e.g. to exclude certain patterns. For example, if you want the second newest directory, because backup files are first written to a tmp dir in the same path:
backupdir=$(find $path -type d -prune -not -name "*temp_dir" | tail -n 1)
The above solution doesn't take into account files being written to and removed from the directory, which can result in the parent directory being returned instead of the newest subdirectory.
The other issue is that it assumes the directory contains only other directories and no files being written.
Let's say I create a file called "test.txt" and then run this command again:
echo "test" > test.txt
ls -t /backups | head -1
test.txt
The result is test.txt showing up instead of the last modified directory.
The proposed solution "works" but only in the best case scenario.
Assuming you have a maximum of 1 directory depth, a better solution is to use:
find /backups/* -type d -prune -exec ls -d {} \; |tail -1
Just swap the "/backups/" portion for your actual path.
If you want to avoid showing an absolute path in a bash script, you could always use something like this:
LOCALPATH=/backups
DIRECTORY=$(cd $LOCALPATH; find * -type d -prune -exec ls -d {} \; |tail -1)
With GNU find you can get list of directories with modification timestamps, sort that list and output the newest:
find . -mindepth 1 -maxdepth 1 -type d -printf "%T#\t%p\0" | sort -z -n | cut -z -f2- | tail -z -n1
or newline separated
find . -mindepth 1 -maxdepth 1 -type d -printf "%T#\t%p\n" | sort -n | cut -f2- | tail -n1
With POSIX find (that does not have -printf) you may, if you have it, run stat to get file modification timestamp:
find . -mindepth 1 -maxdepth 1 -type d -exec stat -c '%Y %n' {} \; | sort -n | cut -d' ' -f2- | tail -n1
Without stat, a pure shell solution may be used by replacing the [[ bash extension with [, as in the pure Bash answer above.
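A rough sketch of that POSIX-ish variant (note that the -nt test is not strictly POSIX, but dash, bash and ksh all support it; /backups is the directory from the question):
#!/bin/sh
BACKUPDIR=
for d in /backups/*/; do
    [ -d "$d" ] || continue
    if [ -z "$BACKUPDIR" ] || [ "$d" -nt "$BACKUPDIR" ]; then
        BACKUPDIR=$d
    fi
done
printf '%s\n' "$BACKUPDIR"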
Your "something like this" was almost a hit:
BACKUPDIR=$(ls -t ./backups | head -1)
Combining what you wrote with what I have learned solved my problem too. Thank you for raising this question.
Note: I run the line above from Git Bash in a Windows environment, in a file called ./something.bash.
I want to create symlinks to all files in 'myfiles' which are not already linked to and specify the destination folder for the just-created symlinks.
I am using the following cmd, successfully, to generate the list of existing links, which point to 'myfolder' :
find ~/my-existing-links/ -lname '*/myfiles/*' -printf "%f\n" > results.txt
And I'm using the following cmd to reverse match i.e. to list the files in myfiles which are not linked to:
ls ~/myfiles | grep -vf results.txt > results2.txt
So, results2.txt has a list of the files, each of which I now want to create a new symlink to.... in a folder called ~/newlinks .
I know it is possible to feed 'ln -s' a file list using the find / exec combination i.e.
find ~/myfiles/ -exec ln -s {} -t ~/newlinks \; -print
.... but that would be the unfiltered file list in myfiles. I want to use the filtered list.
Any ideas how I can do this? I'm going to be adding files to myfiles regularly, so I will periodically visit the folder to generate symlinks for all the new files, so I can divvy the links up logically (rather than change the original filenames).
Try with xargs:
cat results2.txt | xargs -I{} ln -s {} ~/newlinks
You can use xargs to apply the links, so that your composite command might look like this:
find ~/myfiles/ | grep -vf results.txt | xargs make-my-links
and make-my-links would be a script something like this:
#!/bin/sh
for source in "$#"
do
ln -s "$source" -t ~/newlinks
done
The separate script and loop are used with xargs because, without a replace-string option such as -I, it will by default pass as many of the inputs as it thinks will fit on a single command line.
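If your xargs does support -I (GNU xargs does), a sketch of the same composite pipeline without the helper script, assuming GNU ln for the -t option:
find ~/myfiles/ -type f | grep -vf results.txt | xargs -d '\n' -I{} ln -s {} -t ~/newlinks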
So, you have 3 entities of type directory:
~/myfiles/: contains your files.
~/my-existing-links/: contains links to files from ~/myfiles/.
~/newlinks/: contains links to new files from ~/myfiles/.
To me, the third entity is rather unnecessary. Why aren't the new links created directly in ~/my-existing-links/?
I would only use a script to update the list of links in ~/my-existing-links/, whenever new files are added in ~/myfiles/:
update_v1.sh
#!/bin/bash
for f in $(find ~/myfiles -type f); do
    ln -sf "$f" "$HOME/my-existing-links/$(basename "$f")"
done
update_v2.sh
find ~/myfiles -type f -exec sh -c \
'for f; do ln -sf "$f" "$HOME/my-existing-links/${f##*/}"; done' sh {} +
update_print.sh
#!/bin/bash
for f in $(find ~/myfiles -type f); do
    if [[ ! -L "$HOME/my-existing-links/${f##*/}" ]]; then
        echo "Link not existing for $f"
    fi
done
Thanks, Thomas and pasaba... I found a way to do it:
So I did the following from ~/newlinks :
while read line; do ln -s "$line" "${line##*/}" ; done < ~/myfiles/results2.txt
Thanks again for your time.
I have the following problem using UNIX commands. I wish to go through a large number of files and convert them with a conversion command. My idea is to work like this: command *.fileending > *.newfileending
The problem is that I wish to keep the file-names and only replace the file-ending. Thus filename.fileending should become filename.newfileending. How do I achieve this?
Use a for loop:
for file in *.krn; do
    hum2mid "$file" -o "${file%.krn}.mid"
done
In a single line: for file in *.krn; do hum2mid "$file" -o "${file%.krn}.mid"; done
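The ${file%.krn} part is ordinary parameter expansion (it strips the shortest trailing match of .krn), so for the generic extensions in the question the loop would look roughly like this, with command standing for whatever converter you use:
for file in *.fileending; do
    command "$file" > "${file%.fileending}.newfileending"
done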
To apply the command to files and subdirectories recursively, use the find|xargs pattern:
find -type f -name '*.krn' -print0 \
| xargs -0 -n1 sh -c 'hum2mid "$1" -o "/destination/dir/$(basename "${1%.krn}.mid")"' -
Note that this will overwrite already converted files, if a file from another directory has the same name.
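One way around that, sketched here, is to mirror the source directory layout under the destination instead of flattening it (assumes GNU find and bash; /destination/dir is the same placeholder as above):
find . -type f -name '*.krn' -print0 |
while IFS= read -r -d '' f; do
    out="/destination/dir/${f%.krn}.mid"
    mkdir -p "$(dirname "$out")"    # recreate the source subdirectory
    hum2mid "$f" -o "$out"
done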
rename .fileending .newfileending *
#!/bin/bash
ls -1 *.fileending | while read -r i; do
    command "$i" > "${i/%.fileending/.newfileending}"
done
If you need to process 'weird' filenames (with an embedded '\n', for example), you can use the following trick:
create file foo.sh:
#!/bin/bash
command "$1" > "${1/%.fileending/.newfileending}"
Then do chmod +x foo.sh and finally run find . -maxdepth 1 -a -type f -a -name '*.fileending' -print0 | xargs -0 -n 1 -J '%' ./foo.sh '%'
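Note that -J is BSD xargs syntax; with GNU xargs the equivalent replace-string option is -I, so the last step would look something like:
find . -maxdepth 1 -type f -name '*.fileending' -print0 | xargs -0 -I '%' ./foo.sh '%'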
Once I am in the directory containing .mp3 files, I can play songs randomly using
mpg123 -Z *.mp3
But to recursively search a directory and its subfolders for .mp3 files and play them randomly, I tried the following command, and it does not work.
mpg123 -Z <(find /media -name *.mp3)
(find /media -name *.mp3), when executed, gives all the .mp3 files present in /media and its subdirectories.
How can I get this to work?
mpg123 -Z $(find -name "*.mp3")
The $(...) means execute the command and paste the output here.
Also, to bypass the command-line length limit laalto mentioned, try:
mpg123 -Z $(find -name "*.mp3" | sort --random-sort| head -n 100)
EDIT: Sorry, try:
find -name "*.mp3" | sort --random-sort| head -n 100|xargs -d '\n' mpg123
That should cope with the spaces correctly, presuming you don't have filenames with embedded newlines.
It will semi-randomly permute your list of MP3s, then pick the first 100 of the random list, then pass those to mpg123.
In both zsh and bash 4.0,
mpg123 -Z **/*.mp3
(Bash users will probably need to shopt -s globstar first.)
Backticks.
mpg123 -Z `find /media -name \*.mp3`
Though if you have a lot of files, you may encounter command line length limitations.
Would something like this work?
find /media -name '*.mp3' -print0 | xargs -0 mpg123 -Z
The following works fine.
find /media -name "*.mp3" | xargs -d '\n' -n10 mpg123 -Z.
By '-n' option we can provide no. of arguments for a single invocation of command.
Even after I close the terminal where i wrote this command, the songs continue to play as the process mpg123 becomes an orphan and continues to run.
devikasingh#Interest:~$ ps -e | grep mpg123
7239 ? 00:00:01 mpg123
ps -f 7239
UID PID PPID C STIME TTY STAT TIME CMD
1000 7239 1 0 15:21 ? S 0:01 mpg123 -Z /media/MUSIC & PIC/audio_for_you/For You.mp3 /media/MUSIC & PIC/audio_for_you/In My Place.mp3 /
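If you want to stop such an orphaned player later, something like pkill (from procps) should do it, or plain kill with the PID shown by ps:
pkill mpg123    # or: kill 7239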
Thanks for the suggestions. By using them I was able to create the following script:
#!/bin/bash
song=$(zenity --width=360 --height=320 --title "Select Folder" --file-selection --directory $HOME)
find "$song" -name "*.mp3" | sort --random-sort | head -n 100 | xargs -d '\n' mpg123
Probably it's better to use xargs, but I use a while loop in bash on Red Hat.
find . -iname "*.mp3" -print | sort -R --random-source=/dev/urandom | while IFS= read -r filename; do play "$filename"; done
The only problem with it is that it is annoying to kill. To kill it, you must hold down Ctrl-C until the while loop is killed.
while...do...done loops through each field in the sort output.
IFS describes the field separators.
IFS= makes each line a single field.
read copies the current field into the filename variable.
The -r option disables backslash processing, which doesn't seem to be necessary on Linux.
play is a simple way of using sox for playback.
I found this one and, IMHO, it is much cleaner than the other solutions. I don't own the credit; it goes to the site owner.
find $HOME/mp3s -iname '*.mp3' | mpg123 -Z -# -
Found on https://dannyman.toldme.com/2004/12/28/howto-mpg123-random-mp3s/
I just changed -name to -iname, as sometimes files can have the extension in caps...
I tried almost all of them, and when mpg123 is run through a pipe it returns the error "Can't get terminal attributes" and I cannot use the terminal control keys.
The only way I found to play a list of files found with the find command and still be able to use the terminal control keys is this (I have directories and files with spaces):
find /media -type f -iname "*.mp3" > /tmp/mp3list
mpg123 -CZvv -# /tmp/mp3list
If you use $(find /media -type f -iname "*.mp3"), the shell splits the output on spaces, and in my case that doesn't work because I have spaces in all the directory names and in almost all of the file names.
This is a script (playmp3.sh) to only execute find when the file doesn't exist:
#!/bin/sh
if ! [ -f /tmp/mp3list ]; then
find /media -type f -iname "*.mp3" > /tmp/mp3list
fi
mpg123 -CZvv -# /tmp/mp3list
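To rebuild the cached list after adding new music, just remove it before running the script again:
rm /tmp/mp3list && ./playmp3.sh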
I have my library on a separate partition, and in my root dir I have this small script that also randomly plays previous songs; I have about 40 GB of music, so they almost never repeat.
#!/bin/sh
cd "/media/$USER/7789f483-c7bf-46bc-9293-e8e05dd62199/musik/"
mpg123 -Z */*/*.mp3;
How can I move all files except one? I am looking for something like:
'mv ~/Linux/Old/!Tux.png ~/Linux/New/'
where I move old stuff to the new stuff folder, except Tux.png. The !-sign represents a negation. Is there some tool for the job?
If you use bash and have the extglob shell option set (which is usually the case):
mv ~/Linux/Old/!(Tux.png) ~/Linux/New/
Put the following in your .bashrc:
shopt -s extglob
It enables extended glob patterns (not regexes).
You can then move all files except one by
mv !(fileOne) ~/path/newFolder
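Extended glob patterns also accept alternatives, so excluding several files is a small extension (fileTwo here is just illustrative):
mv !(fileOne|fileTwo) ~/path/newFolder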
Exceptions in relation to other commands
Note that, when copying directories, the trailing forward slash cannot be used in the name, as noticed in the thread Why extglob except breaking except condition?:
cp -r !(Backups.backupdb) /home/masi/Documents/
so Backups.backupdb/ with a trailing slash would be wrong here before the negation, and I would avoid the trailing slash when moving directories as well, because of the risk of the glob then behaving unexpectedly with other commands, and of possible other exceptions.
I would go with the traditional find & xargs way:
find ~/Linux/Old -maxdepth 1 -mindepth 1 -not -name Tux.png -print0 |
xargs -0 mv -t ~/Linux/New
-maxdepth 1 makes it not search recursively. If you only care about files, you can say -type f. -mindepth 1 makes it not include the ~/Linux/Old path itself into the result. Works with any filenames, including with those that contain embedded newlines.
One comment notes that the mv -t option is probably a GNU extension. For systems that don't have it:
find ~/Linux/Old -maxdepth 1 -mindepth 1 -not -name Tux.png \
-exec mv '{}' ~/Linux/New \;
A quick way would be to modify the tux filename so that your move command will not match.
For example:
mv Tux.png .Tux.png
mv * ~/somefolder
mv .Tux.png Tux.png
I think the easiest way to do it is with backticks:
mv `ls -1 ~/Linux/Old/ | grep -v Tux.png` ~/Linux/New/
Edit:
Prefix ls with a backslash to bypass any alias, since ls is often aliased to ls --color:
mv `\ls -1 ~/Linux/Old/ | grep -v Tux.png` ~/Linux/New/
Thanks #Arnold Roa
For bash, sth's answer is correct. Here is the zsh (my shell of choice) syntax:
mv ~/Linux/Old/^Tux.png ~/Linux/New/
Requires EXTENDED_GLOB shell option to be set.
I find this to be a bit safer and easier to rely on for simple moves that exclude certain files or directories.
ls -1 | grep -v ^$EXCLUDE | xargs -I{} mv {} $TARGET
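For the question's concrete paths, that would be something along these lines (EXCLUDE and TARGET are just the illustrative variables from the snippet above):
cd ~/Linux/Old
EXCLUDE=Tux.png
TARGET=~/Linux/New
ls -1 | grep -v "^$EXCLUDE" | xargs -I{} mv {} "$TARGET"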
This could be simpler and easier to remember, and it works for me:
mv $(ls ~/folder | grep -v ~/folder/exclude.png) ~/destination
The following is not a 100% guaranteed method, and should not at all be attempted for scripting. But sometimes it is good enough for quick interactive shell usage. A file glob like
[abc]*
(which will match all files with names starting with a, b or c) can be negated by inserting a "^" character first, i.e.
[^abc]*
I sometimes use this for not matching the "lost+found" directory, like for instance:
mv /mnt/usbdisk/[^l]* /home/user/stuff/.
Of course if there are other files starting with l I have to process those afterwards.
How about:
mv $(echo * | sed s:Tux.png::g) ~/Linux/New/
You have to be in the folder though.
This can be done without grep, like this:
ls ~/Linux/Old/ -QI Tux.png | xargs -I{} mv ~/Linux/Old/{} ~/Linux/New/
Note: -I is a capital i and makes the ls command ignore the Tux.png file, which is given afterwards.
The output of ls is then piped into mv via xargs, which allows the output of ls to be used as the source argument for mv.
ls -Q just quotes the filenames listed by ls.
mv `find Linux/Old '!' -type d | fgrep -v Tux.png` Linux/New
The find command lists all regular files and the fgrep command filters out any Tux.png. The backticks tell mv to move the resulting file list.
ls ~/Linux/Old/ | grep -v Tux.png | xargs -I {} mv ~/Linux/Old/{} ~/Linux/New/
Move all files (not including the excepted file) into the excepted directory:
find -maxdepth 1 -mindepth 1 -not -name except_file -print0 |xargs -0 mv -t ./except_file
For example (here cache is the excepted directory):
find -maxdepth 1 -mindepth 1 -not -name cache -print0 |xargs -0 mv -t ./cache