One-liner to move files that match a pattern to a new directory - linux

I need to move files in my home directory into a new directory. I need to select files whose names begin with a random set of 20 capital letters, drawn only from the 4 letters T, A, G, and C. Because the names vary, and some names consist of 20 consecutive capital letters without being limited to T, A, G, or C, a regex seems like the best option here.
I have tried to use a regular expression, but it doesn't work with this one-liner.
find source -name "([TAGC]{20})" -exec mv -i {} -t ~/dst \;
Is there a way of making this, or a similar, one-liner work for this situation, or do I need to write a script?

You can try with the -regex option; note that the {20} interval syntax requires an extended regex type:
find . -regextype posix-extended -regex ".*/[TAGC]{20}[^/]*$" -exec mv -i {} -t ~/dst \;

The regex must match the entire path, which begins with ./:
find . -regextype posix-egrep -regex '\./[TAGC]{20}.*' -print
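To double-check what the expression selects before moving anything, a throwaway dry run helps (the file names below are invented for the demo):

```shell
# Scratch directory with one matching and one non-matching name.
mkdir -p /tmp/regex-demo && cd /tmp/regex-demo
touch TAGCTAGCTAGCTAGCTAGC_reads.txt    # begins with 20 T/A/G/C capitals
touch ABCDEFGHIJKLMNOPQRST_other.txt    # 20 capitals, but not only T/A/G/C

# -regex must match the whole path, hence the leading \./ and trailing .*
find . -regextype posix-extended -regex '\./[TAGC]{20}.*' -print
```

Only ./TAGCTAGCTAGCTAGCTAGC_reads.txt should be printed; once the list looks right, replace -print with the -exec mv part.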

Related

List files with names that contain alphabetic characters and any other symbols (e.g. numbers, punctuation, etc.) and sort them by size

I need help modifying a command I have already written.
This is what I was able to achieve:
find -type f -name '*[:alpha:]*' -exec ls -ltu {} \; | sort -k 5 -n -r
However, this command also finds filenames that consist solely of alphabetic characters, so I need to exclude those as well. I have tried doing something like this to the code:
find -type f -name '*[:alpha:]*' -and ! -name '[:alpha:]' -exec ls -ltu {} \; | sort -k 5 -n -r
But it does nothing. I understand that something is wrong with my name formatting but I have no idea how to fix it.
Character classes like [:alpha:] may only be used within bracket expressions [...], e.g. [0-9_[:alpha:]]. They may not be used alone.
[:alpha:] by itself is a bracket expression equivalent to [ahlp:]: it matches any of the characters "a", "h", "l", "p", or a colon. It does not match alphabetic characters.
To find files that contains both at least one alphabetic and at least one non-alphabetic characters:
find dir -type f -name '*[[:alpha:]]*' -name '*[^[:alpha:]]*'
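A quick sanity check in a scratch directory (names invented) shows the combined tests in action:

```shell
# Letters only, digits only, and a mix of both.
mkdir -p /tmp/alpha-demo && cd /tmp/alpha-demo
touch notes 12345 report2024

# Both -name patterns must match, i.e. the name needs at least one
# alphabetic and at least one non-alphabetic character.
find . -type f -name '*[[:alpha:]]*' -name '*[^[:alpha:]]*'
```

Only ./report2024 is listed: notes fails the second test and 12345 fails the first.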

Copy files in different folders to their respective folders via shell

I'm trying to copy files in different folders to their own folder:
/Test/Folder1/File1
/Test/Folder2/File2
/Test/Folder3/File3
I want to create a copy of each file in its own folder:
/Test/Folder1/File1.Copy
/Test/Folder2/File2.Copy
/Test/Folder3/File3.Copy
I would try using
find /SapBackup/server*/SAPDBBAK/*_COM.dump -mmin 360 -type f -execdir cp . {}
but I don't know how to use the filename and folder of the found files as an operand.
I want to use a one-liner to add it to crontab, so a for loop would not be suitable (AFAIK).
Thanks for your help.
You are on the right track with the -execdir action.
Just note you can use {} multiple times, for example:
find /some/path -mmin 360 -type f -execdir cp {} {}.copy \;
or even simpler, combine it with brace expansion1 in bash:
find /some/path -mmin 360 -type f -execdir cp {}{,.copy} \;
1 Brace expansion, as explained in the docs, is a shell expansion by which arbitrary strings may be generated. In fact, you might consider it to be a Cartesian product in bash.
For example: a{b,c} will expand to two words: ab and ac. Namely, set containing word a was "multiplied" with a set containing two words, b and c.
Similarly when used multiple times, e.g. {a,b}{c,d} expands to 4 words: ac, ad, bc and bd (to test, try: echo {a,b}{c,d}).
In the cp command above, we used a zero-length word and .copy to (re-)produce the original word and the original word with .copy appended:
$ echo filename{,.copy}
filename filename.copy
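Putting the -execdir answer through a scratch run (the layout is copied from the question, with an extra ! -name guard added here so freshly made copies are not picked up by the same traversal):

```shell
# Recreate the Folder1/File1 layout from the question.
mkdir -p /tmp/copy-demo/Folder1 /tmp/copy-demo/Folder2
touch /tmp/copy-demo/Folder1/File1 /tmp/copy-demo/Folder2/File2

# -execdir runs cp inside each file's own directory, so {} is just ./File1
# there; the brace expansion turns {}{,.copy} into "{} {}.copy".
find /tmp/copy-demo -type f ! -name '*.copy' -execdir cp {}{,.copy} \;
```

Afterwards each folder holds both File1 and File1.copy, mirroring the /Test/FolderN/FileN.Copy goal (with a lowercase .copy suffix).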
If you put these lines in a bash script, you can call it from the crontab.
for i in /path/to/folder/*; do cp "$i" "$i.copy"; done
Using a glob instead of $(ls ...) keeps filenames with spaces intact, and you can change it easily.
I hope that's useful.
In order to copy directories you have to add the -r flag to the cp command. What about a one-liner like the following:
find /SapBackup/server*/SAPDBBAK/*_COM.dump -mmin 360 | xargs -I {} cp -r {} {}.copy
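A minimal run of the xargs variant, on scratch paths and with -maxdepth filtering instead of -mmin so the demo is deterministic:

```shell
# One directory with a file inside it.
mkdir -p /tmp/xargs-demo/dir1
touch /tmp/xargs-demo/dir1/f

# -I {} substitutes each found path into both cp operands;
# -r copies the directory recursively.
find /tmp/xargs-demo -mindepth 1 -maxdepth 1 -type d |
  xargs -I {} cp -r {} {}.copy
```

This produces /tmp/xargs-demo/dir1.copy containing f. With -I, xargs takes one whole line per item, so names with spaces survive, though names with newlines still would not.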

Command Linux to copy files from a certain weekday

I am figuring out a command to copy files that were modified on a Saturday.
find -type f -printf '%Ta\t%p\n'
This way the line starts with the weekday.
When I combine this with an egrep command using a regular expression (starts with "za"), it shows only the lines which start with "za".
find -type f -printf '%Ta\t%p\n' | egrep "^(za)"
("za" is the Dutch abbreviation for "zaterdag", which means Saturday.)
This works just fine.
Now I want to copy the files with this command:
find -type f -printf '%Ta\t%p\n' -exec cp 'egrep "^(za)" *' /home/richard/test/ \;
Unfortunately it doesn't work.
Any suggestions?
The immediate problem is that -printf and -exec are independent of each other. You want to process the result of -printf to decide whether or not to actually run the -exec part. Also, of course, passing an expression in single quotes simply passes a static string, and does not evaluate the expression in any way.
The immediate fix to the evaluation problem is to use a command substitution instead of single quotes, but the problem that the -printf function's result is not available to the command substitution still remains (and anyway, the command substitution would happen before find runs, not while it runs).
A common workaround would be to pass a shell script snippet to -exec, but that still doesn't expose the -printf function to the -exec part.
find whatever -printf whatever -exec sh -c '
case $something in za*) cp "$1" "$0"; esac' "$DEST_DIR" {} \;
so we have to figure out a different way to pass the $something here.
(The above uses a cheap trick to pass the value of $DEST_DIR into the subshell so we don't have to export it. The first argument to sh -c ... ends up in $0.)
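The $0/$1 trick is easy to see in isolation (the paths are placeholders):

```shell
# With sh -c, the first argument after the command string becomes $0,
# the second becomes $1.
sh -c 'echo "dest=$0 file=$1"' /some/dest /some/file
# prints: dest=/some/dest file=/some/file
```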
Here is a somewhat roundabout way to accomplish this. We create a format string which can be passed to sh for evaluation. In order to avoid pesky file names, we print the inode numbers of matching files, then pass those to a second instance of find for performing the actual copying.
find \( -false $(find -type f \
-printf 'case %Ta in za*) printf "%%s\\n" "-o -inum %i";; esac\n' |
sh) \) -exec cp -t "$DEST_DIR" {} +
Using the inode number means any file name can be processed correctly (including one containing newlines, single or double quotes, etc) but may increase running time significantly, because we need two runs of find. If you have a large directory tree, you will probably want to refactor this for your particular scenario (maybe run only in the current directory, and create a wrapper to run it in every directory you want to examine ... thinking out loud here; not sure it helps actually).
This uses features of GNU find which are not available e.g. in *BSD (including OSX). If you are not on Linux, maybe consider installing the GNU tools.
What you can do is a shell expansion, with cut stripping the weekday field so only the paths remain. Something like
cp $(find -type f -printf '%Ta\t%p\n' | egrep "^za" | cut -f2-) "$DEST_DIR"
Since the result of the find, grep and cut pipeline is just the filenames (and full paths, at that), this will copy all the files that match your criteria to whatever you set $DEST_DIR to.
EDIT As mentioned in the comments, this won't work if your filenames contain spaces. If that's the case, you can do something like this:
find -type f -printf '%Ta\t%p\n' | egrep "^za" | while IFS=$'\t' read -r day file; do cp "$file" "$DEST_DIR"; done
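Here is that read loop exercised in a scratch tree. %Ta is locale-dependent, so this sketch pins LC_ALL=C and matches "Sat" rather than the Dutch "za"; the dates and paths are invented for the demo:

```shell
mkdir -p /tmp/sat-demo/src /tmp/sat-demo/dst
cd /tmp/sat-demo/src
touch -d '2024-01-06 12:00' saturday.txt   # 2024-01-06 was a Saturday
touch -d '2024-01-08 12:00' monday.txt     # 2024-01-08 was a Monday

DEST_DIR=/tmp/sat-demo/dst
# Split each "weekday<TAB>path" line back into its two fields before cp.
LC_ALL=C find . -type f -printf '%Ta\t%p\n' |
  grep '^Sat' |
  while IFS=$'\t' read -r day file; do cp "$file" "$DEST_DIR"; done
```

Only saturday.txt ends up in dst/.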

How to recursively delete all files in a folder that don't match a given pattern

I would like to delete all files in a given folder that don't match the pattern ^transactions_[0-9]+
Let's say I have these files in the folder
file_list
transactions_010116.csv
transactions_020116.csv
transactions_check_010116.csv
transactions_check_020116.csv
I would like to delete transactions_check_010116.csv and transactions_check_020116.csv and leave the first two as they are, using ^transactions_[0-9]+
I've been trying to use find, something like below, but this expression deletes everything in the folder, not just the files that don't match the pattern:
find /my_file_location -type f ! -regex '^transactions_[0-9]+' -delete
What I'm trying to do here is use a regex to find all files in the folder that don't start with transactions_[0-9]+ and delete them.
Depending on your find implementation, you may have to use the -E option (as on BSD) to enable extended regexes. Another problem is that -regex matches an almost full path, starting with the directory you passed.
So the correct command should be:
find -E /my_file_location ! -regex '.*/transactions_[0-9]+[^/]*$' -type f -delete
But you should first issue the same with -print to be sure...
grep has a -v option to select everything not matching the provided regex (use -E so + is a quantifier, and remember find prints paths, not bare names):
find . -type f | grep -vE '/transactions_[0-9]+[^/]*$' | xargs rm -f
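Before trusting either command with -delete or rm, a dry run with -print on scratch copies of the question's files shows exactly what would go:

```shell
mkdir -p /tmp/del-demo && cd /tmp/del-demo
touch transactions_010116.csv transactions_020116.csv \
      transactions_check_010116.csv transactions_check_020116.csv

# GNU spelling of the same idea: list only files NOT matching the pattern.
find . -type f -regextype posix-extended \
  ! -regex '.*/transactions_[0-9]+[^/]*' -print
```

Only the two transactions_check_* files are listed; swap -print for -delete once the output is right.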

Scan For Multiple File Extensions and Copy To New Dir If Found

I am using Ubuntu and I need to recursively search a directory and its subdirectories for any .mp4, .mkv, or .wav files. I saw that mmv may be my best solution, but I cannot get the syntax correct. So how would I convert this to search for an array of file extensions, or would I just run 3 iterations, one for each file extension I am after?
I was using the tutorial here to write this, so please forgive me if I am way outta line
# find all mp4 files
# find / -iname "*.mp4" -type f -exec mv {} ./Downloads/Media Files \;
With GNU bash 4:
shopt -s globstar nullglob
mv -v **/*.{mp4,mkv,wav} "./Downloads/Media Files"
globstar: If set, the pattern ** used in a pathname expansion context will match all files and zero or more directories and subdirectories. If the pattern is followed by a /, only directories and subdirectories match.
nullglob: If set, bash allows patterns which match no files (see Pathname Expansion) to expand to a null string, rather than themselves.
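A scratch demonstration of globstar plus the brace list (paths invented):

```shell
shopt -s globstar nullglob
mkdir -p /tmp/glob-demo/a/b
touch /tmp/glob-demo/a/b/x.mp4 /tmp/glob-demo/y.wav
cd /tmp/glob-demo

# The braces expand first, to **/*.mp4 **/*.wav; then ** recurses.
printf '%s\n' **/*.{mp4,wav}
```

This prints a/b/x.mp4 and then y.wav; replacing printf with mv -v gives the one-liner form, with a quoted destination at the end.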
You can use \( and -o (or):
find / -type f \( -iname "*.mp4" -o -iname "*.mkv" -o -iname "*.wav" \) -exec mv {} "./Downloads/Media Files" \;
Alternatively, replace the -iname tests with -regex. find's -regex understands emacs regular expressions by default (but you can change this behaviour using -regextype):
find / -regex ".*\.mp4\|.*\.mkv\|.*\.wav" ...
Learn the power of regular expressions; it will open a new universe of power!
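A small reproduction of the grouped -iname version, run in a scratch tree so the demo doesn't sweep the whole filesystem (note the quotes around the destination, which contains a space):

```shell
mkdir -p /tmp/media-demo/sub "/tmp/media-demo/Media Files"
cd /tmp/media-demo
touch sub/a.mp4 sub/b.MKV sub/c.wav sub/d.txt

# \( ... -o ... \) groups the alternatives; -iname ignores case, so
# b.MKV is caught too. d.txt is left behind.
find ./sub -type f \( -iname "*.mp4" -o -iname "*.mkv" -o -iname "*.wav" \) \
  -exec mv {} "./Media Files" \;
```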
