Move multiple files with unique names to a new folder and append to the file name - linux

I have about 2000 files in a folder.
About 1250 of the files contain the string test in the name.
What I need to do is move those ~1250 files to a folder called trash within the same directory and append _scrap to the end of each file name.
mv *test* trash/
What I want is something like this:
[root#server] ls
test1.txt test2.txt test3.txt trash video1.txt video2.txt video3.txt
[root#server] mv *test* trash/*_scrap
[root#server] ls
trash video1.txt video2.txt video3.txt
[root#server] ls trash/
test1.txt_scrap test2.txt_scrap test3.txt_scrap
I can move all files, however I cannot figure out how to append the _scrap to the end.
As I have to do this on a number of machines, a one liner would be preferable over a small script.

$ touch test1.txt test2.txt test3.txt video1.txt video2.txt video3.txt
$ mkdir trash
$ for file in *test*; do mv "$file" "trash/${file}_scrap"; done
$ ls
trash video1.txt video2.txt video3.txt
$ ls trash
test1.txt_scrap test2.txt_scrap test3.txt_scrap
$
You could also use xargs
$ ls *test* | xargs -t -I{} mv {} trash/{}_scrap
mv test1.txt trash/test1.txt_scrap
mv test2.txt trash/test2.txt_scrap
mv test3.txt trash/test3.txt_scrap
$
You could use find
$ find . -maxdepth 1 -name '*test*' -exec mv {} trash/{}_scrap \;
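If the names may contain spaces, a more defensive variant (a sketch, assuming GNU find and that the matches are regular files):
$ mkdir -p trash
$ find . -maxdepth 1 -type f -name '*test*' -exec sh -c 'for f; do mv -- "$f" "trash/${f##*/}_scrap"; done' sh {} +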

You can use rename to avoid shell for loops. It's a Perl script, but it comes preinstalled on many common distros (including Ubuntu 14.04):
$ mv *test* trash/
$ rename 's/$/_scrap/g' trash/*
$ ls trash/
test1.txt_scrap test2.txt_scrap test3.txt_scrap
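To preview the renames before committing, the Perl rename accepts a no-act flag (this assumes the Perl rename, not the util-linux one):
$ rename -n 's/$/_scrap/' trash/*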

Related

Difference between two directories ignoring file type?

Is there a quick way to compare two directories but ignore the file extension?
I know the command is typically:
diff dir1 dir2
But in this case apple.gif in dir1 and apple.png in dir2 are reported as differences.
Is there a way to get apple.gif and apple.png to be considered the same?
You can use a bash for loop to copy both dirs to /tmp/ with the file-name extensions stripped, then run diff on the copies, and finally delete the temporary dirs, such as:
Assuming you have dir/1 and dir/2 and want to compare 1 and 2:
#create all dirs and subdirs
cd dir
for i in `find . -type d`; do mkdir -p "/tmp/$i"; done
#copy all files without extension (remove the string after the last '.')
for i in `find . -type f`; do cp "$i" "/tmp/$(echo "$i" | rev | cut -d'.' -f 2- | rev)"; done
#run diff recursively, since there may be subdirs
diff -r /tmp/1 /tmp/2
#clean up, remove the created dirs
rm -rf /tmp/1 /tmp/2
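If you only need to know which names differ (ignoring file contents), a shorter sketch using bash process substitution avoids the temporary copies; the sed strips the last extension:
diff <(cd dir/1 && find . -type f | sed 's/\.[^./]*$//' | sort) \
     <(cd dir/2 && find . -type f | sed 's/\.[^./]*$//' | sort)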

linux command: show content of all files

I tried the following two commands to show the content of all files under the current directory. I want to know why one works and the other does not.
ls | xargs cat # does not work, No such file or directory
find . | xargs cat # works
cat is just an example; it could be any command that takes a file name as a parameter.
---------------------------------Update---------------------------------
Here are some observations from my PC.
$ echo 1 > test1.txt
$ echo 2 > test2.txt
$ echo 3 > test3.txt
$ ls
test1.txt test2.txt test3.txt
$ ls *.txt | xargs cat
cat: test1.txt: No such file or directory
cat: test2.txt: No such file or directory
cat: test3.txt: No such file or directory
$ find . -name '*.txt' | xargs cat
2
1
3
For others that might see this, we found the issue in the comments. Hao's problem was that ls was an alias, which mangled the names being piped through xargs to cat.
Running type ls showed it was aliased, and using \ls to bypass the alias solved the problem.
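One alias that reproduces the symptom (an assumption; the thread never showed the actual alias) is one that forces color output, since the escape sequences become part of the names that xargs hands to cat:
$ alias ls='ls --color=always'
$ ls *.txt | xargs cat    # fails: the names now carry escape sequences
$ \ls *.txt | xargs cat   # works: bypasses the alias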

Using find - Deleting all files/directories (in Linux) except any one

If we want to delete all files and directories, we use rm -rf *.
But what if I want all files and directories deleted in one shot, except one particular file?
Is there any command for that? rm -rf * gives the ease of deletion in one shot, but deletes even my favourite file/directory.
Thanks in advance
find can be a very good friend:
$ ls
a/ b/ c/
$ find * -maxdepth 0 -name 'b' -prune -o -exec rm -rf '{}' ';'
$ ls
b/
$
Explanation:
find * -maxdepth 0: select everything selected by * without descending into any directories
-name 'b' -prune: do not bother (-prune) with anything that matches the condition -name 'b'
-o -exec rm -rf '{}' ';': call rm -rf for everything else
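The same pattern extends to keeping more than one entry by grouping the name tests (a sketch):
$ find * -maxdepth 0 \( -name 'b' -o -name 'c' \) -prune -o -exec rm -rf '{}' ';'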
By the way, another, possibly simpler, way would be to move or rename your favourite directory so that it is not in the way:
$ ls
a/ b/ c/
$ mv b .b
$ ls
a/ c/
$ rm -rf *
$ mv .b b
$ ls
b/
Short answer
ls | grep -v "z.txt" | xargs rm
Details:
The thought process for the above command is:
List all files (ls)
Ignore one file named "z.txt" (grep -v "z.txt")
Delete the listed files other than z.txt (xargs rm)
Example
Create 5 files as shown below:
echo "a.txt b.txt c.txt d.txt z.txt" | xargs touch
List all files except z.txt
ls|grep -v "z.txt"
a.txt
b.txt
c.txt
d.txt
We can now delete (rm) the listed files using the xargs utility:
ls|grep -v "z.txt"|xargs rm
You can type this right on the command line or put it in a script:
files=`ls | grep -v "my_favorite_dir"`; for file in $files; do rm -rvf "$file"; done
P.S. I suggest the -i switch for rm to prevent deletion of important data.
P.P.S. You can write a small script based on this solution and place it in /usr/bin (e.g. /usr/bin/rmf). Then you can use it like an ordinary app:
rmf my_favorite_dir
The script looks like (just a sketch):
#!/bin/bash
if [[ -z "$1" ]]; then
    files=`ls`
else
    files=`ls | grep -v "$1"`
fi
for file in $files; do
    rm -rvi "$file"
done
At least in zsh,
rm -rf ^filename
could be an option (with setopt extendedglob), if you only want to preserve one single file.
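To preserve a couple of files instead of one, zsh also lets you negate an alternation (again assuming setopt extendedglob):
rm -rf ^(filename1|filename2)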
If it's just one file, one simple way is to move that file to /tmp or something, rm -Rf the directory and then move it back. You could alias this as a simple command.
The other option is to do a find and then grep out what you don't want (using -v, or directly using one of find's predicates), and then rm the remaining files.
For a single file, I'd do the former. For anything more, I'd write something custom similar to what thkala said.
In bash you have the !() glob operator, which inverts the matched pattern. So to delete everything except the file my_file_name.txt, try this:
shopt -s extglob
rm -f !(my_file_name.txt)
See this article for more details:
http://karper.wordpress.com/2010/11/17/deleting-all-files-in-a-directory-with-exceptions/
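The !() pattern also accepts alternation, so several files can be kept at once (a sketch, still with extglob enabled):
rm -f !(my_file_name.txt|other_file.txt)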
I don't know of such a program, but I have wanted one in the past at times. The basic syntax would be:
IFS='
'
for f in $(except "*.c" "*.h" -- *); do
    printf '%s\n' "$f"
done
The program I have in mind has three modes:
exact matching (with the option -e)
glob matching (default, like shown in the above example)
regex matching (with the option -r)
It takes the patterns to be excluded from the command line, followed by the separator --, followed by the file names. Alternatively, the file names might be read from stdin (if the option -s is given), one per line.
Such a program should not be hard to write, in either C or the Shell Command Language, and it makes a good exercise for learning the Unix basics. When you do it as a shell program, you have to watch out for filenames containing whitespace and other special characters, of course.
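A minimal sketch of the default glob mode in plain sh (the except name and interface are the hypothetical ones described above; patterns with embedded whitespace are not handled):
#!/bin/sh
# except (sketch): print the names after -- that match none of the glob patterns before --
set -f                          # disable filename generation; the patterns are matched by case
patterns=
while [ "$#" -gt 0 ] && [ "$1" != -- ]; do
    patterns="$patterns $1"
    shift
done
[ "$#" -gt 0 ] && shift         # drop the --
for f; do
    keep=yes
    for p in $patterns; do
        case $f in
            $p) keep=no; break ;;
        esac
    done
    [ "$keep" = yes ] && printf '%s\n' "$f"
done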
I see a lot of long-winded approaches here that work, but with
a/ b/ c/ d/ e/
rm -rf !(b*)
this removes everything except directory b/ and its contents (assuming your file is in b/ and that extglob is enabled, as above).
Then just cd b/ and
rm -rf !(filename)
to remove everything else but the file (named "filename") that you want to keep.
mv subdir/preciousfile ./
rm -rf subdir
mkdir subdir
mv preciousfile subdir/
This looks tedious, but it is rather safe:
it avoids complex logic
it never uses rm -rf *, whose result depends on your current directory (which could be / ;-)
it never uses a glob *, whose expansion is limited by ARG_MAX
it allows you to check for errors after each command, and maybe avoid the disaster caused by the next one
it avoids nasty problems caused by spaces or newlines in the filenames
A variant that uses a hard link to keep the file out of harm's way:
cd ..
ln trash/useful.file ./
rm -rf trash/*
mv useful.file trash/
You need to use a regular expression for this: write one that selects all the files except the one you need, and pass the matches to rm.

How can I add text to the same line?

I used this command to find mp3 files and write their name on log.txt:
find -name *.mp3 >> log.txt
I want to move the files using the mv command and I would like to append that to the log file so it could show the path where the files have been moved.
For example, if the mp3 files are 1.mp3 and 2.mp3, then log.txt should look like:
1.mp3 >>>> /newfolder/1.mp3
2.mp3 >>>> /newfolder/2.mp3
How can I do that using unix commands? Thank you!
Using only mv:
mv -v *.mp3 tmp/ > log.txt
or using find:
find -name '*.mp3' -exec mv -v {} test/ >> log.txt \;
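If you want the exact >>>> format from the question, a small loop can write the log entries itself (a sketch; /newfolder stands in for the destination from the question):
for f in *.mp3; do
    mv "$f" /newfolder/ && printf '%s >>>> %s\n' "$f" "/newfolder/$f" >> log.txt
done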
You should probably use some scripting language like Perl or Python; text processing is rather awkward in the shell.
E.g. in Perl you can just postprocess the output of find, and print out what you did.
#!/usr/bin/perl -w
use strict;
use File::Find;

my @directories_to_search = ("/tmp/");

sub wanted {
    print "$File::Find::name >>>> newdir/$_\n";
    # do what you want with the file, e.g. invoke commands on it using system()
}

find(\&wanted, @directories_to_search);
Doing it in Perl or similar makes some things easier than in the shell; in particular, handling of funny filenames (embedded spaces, special chars) is easier. Be careful when invoking system() commands, though.
For docs on the File::Find module see http://perldoc.perl.org/File/Find.html .
GNU find
find /path -type f -iname "*.mp3" -printf "%f/%p\n" | while IFS="/" read -r filename path
do
    mv "$path" "$destination"
    echo "$filename >>>> $destination/$filename" >> newfile.txt
done
Output:
$ touch 'test"quotes.txt'
$ ls -ltr
total 0
-rw-r--r-- 1 root root 0 2009-11-20 10:30 test"quotes.txt
$ mkdir temp
$ ls -l temp
total 0
$ find . -type f -iname "*\"*" -printf "%f:%p\n" | while IFS=":" read -r filename path; do mv "$path" temp; done
$ ls -l temp
total 0
-rw-r--r-- 1 root root 0 2009-11-20 11:53 test"quotes.txt

How to copy a file to multiple directories using the gnu cp command

Is it possible to copy a single file to multiple directories using the cp command ?
I tried the following, which did not work:
cp file1 /foo/ /bar/
cp file1 {/foo/,/bar}
I know it's possible using a for loop, or find. But is it possible using the gnu cp command?
You can't do this with cp alone but you can combine cp with xargs:
echo dir1 dir2 dir3 | xargs -n 1 cp file1
This will copy file1 to dir1, dir2, and dir3. xargs will call cp three times to do this; see the man page for xargs for details.
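If the directory names can contain spaces, a null-delimited variant is safer (a sketch; -0 is supported by both GNU and BSD xargs):
printf '%s\0' dir1 dir2 dir3 | xargs -0 -n 1 cp file1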
No, cp can copy multiple sources but will only copy to a single destination. You need to arrange to invoke cp multiple times - once per destination - for what you want to do; using, as you say, a loop or some other tool.
Wildcards also work with Robert's code:
echo ./fs*/* | xargs -n 1 cp test
I would use cat and tee based on the answers I saw at https://superuser.com/questions/32630/parallel-file-copy-from-single-source-to-multiple-targets instead of cp.
For example:
cat inputfile | tee outfile1 outfile2 > /dev/null
As far as I can see, you can use the following:
ls | xargs -n 1 cp -i file.dat
The -i option of the cp command means that you will be asked whether to overwrite a file in the current directory with file.dat. Though it is not a completely automatic solution, it worked out for me.
These answers all seem more complicated than the obvious:
for i in /foo /bar; do cp "$file1" "$i"; done
ls -db di*/subdir | xargs -n 1 cp File
-b escapes spaces in directory names; otherwise xargs would break each word into a separate item. I had this problem with the echo version.
Not using cp per se, but...
This came up for me in the context of copying lots of Gopro footage off of a (slow) SD card to three (slow) USB drives. I wanted to read the data only once, because it took forever. And I wanted it recursive.
$ tar cf - src | tee >( cd dest1 ; tar xf - ) >( cd dest2 ; tar xf - ) | ( cd dest3 ; tar xf - )
(And you can add more of those >() sections if you want more outputs.)
I haven't benchmarked that, but it's definitely a lot faster than cp-in-a-loop (or a bunch of parallel cp invocations).
If you want to do it without a forked command:
tee <inputfile file2 file3 file4 ... >/dev/null
To copy with xargs to directories matched by wildcards on Mac OS, the only solution that worked for me with spaces in directory names is:
find ./fs*/* -type d -print0 | xargs -0 -n 1 cp test
where test is the file to copy and ./fs*/* are the directories to copy to.
The problem is that xargs sees spaces as a new argument; the usual fixes of changing the delimiter character using -d or -E unfortunately do not work properly on Mac OS.
Essentially equivalent to the xargs answer, but in case you want parallel execution:
parallel -q cp file1 ::: /foo/ /bar/
So, for example, to copy file1 into all subdirectories of the current folder (recursively):
parallel -q cp file1 ::: `find -mindepth 1 -type d`
N.B.: This probably only conveys any noticeable speed gains for very specific use cases, e.g. if each target directory is a distinct disk.
It is also functionally similar to the '-P' argument for xargs.
No - you cannot.
I've found on multiple occasions that I could use this functionality so I've made my own tool to do this for me.
http://github.com/ddavison/branch
pretty simple -
branch myfile dir1 dir2 dir3
ls -d */ | xargs -iA cp file.txt A
Suppose you want to copy fileName.txt to all sub-directories within the present working directory.
Get all the sub-directory names through ls and save them to some temporary file, say allFolders.txt:
ls > allFolders.txt
Print the list and pass it to the command xargs:
cat allFolders.txt | xargs -n 1 cp fileName.txt
Another way is to use cat and tee as follows:
cat <source file> | tee <destination file 1> | tee <destination file 2> [...] > <last destination file>
I think this would be pretty inefficient though, since the job would be split among several processes (one per destination) and the hard drive would be writing several files at once over different parts of the platter. However if you wanted to write a file out to several different drives, this method would probably be pretty efficient (as all copies could happen concurrently).
Using a bash script:
DESTINATIONPATH[0]="xxx/yyy"
DESTINATIONPATH[1]="aaa/bbb"
..
DESTINATIONPATH[5]="MainLine/USER"
NumberOfDestinations=6
for (( i=0; i<NumberOfDestinations; i++ ))
do
    cp SourcePath/fileName.ext "${DESTINATIONPATH[$i]}"
done
exit
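A slightly tighter variant iterates the array directly, so the destination count never has to be kept in sync by hand (a sketch reusing the same hypothetical paths):
for d in "${DESTINATIONPATH[@]}"
do
    cp SourcePath/fileName.ext "$d"
done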
If you want to copy multiple folders to multiple folders, you can do something like this:
echo dir1 dir2 dir3 | xargs -n 1 cp -r /path/toyourdir/{subdir1,subdir2,subdir3}
If all your target directories match a path expression — like they're all subdirectories of path/to — then just use find in combination with cp like this:
find ./path/to/* -type d -exec cp [file name] {} \;
That's it.
If you need to be specific about which folders to copy the file into, you can combine find with one or more greps. For example, to replace any occurrences of favicon.ico in any subfolder you can use:
find . | grep favicon\.ico | xargs -n 1 cp -f /root/favicon.ico
This will copy to the immediate sub-directories; if you want to go deeper, adjust the -maxdepth parameter.
find . -mindepth 1 -maxdepth 1 -type d| xargs -n 1 cp -i index.html
If you don't want to copy to all directories, hopefully you can filter out the directories you are not interested in. Example: copying to all folders starting with a:
find . -mindepth 1 -maxdepth 1 -type d| grep \/a |xargs -n 1 cp -i index.html
If copying to an arbitrary/disjoint set of directories, you'll need Robert Gamble's suggestion.
I like to copy a file into multiple directories as such:
cp file1 /foo/; cp file1 /bar/; cp file1 /foo2/; cp file1 /bar2/
And copying a directory into other directories:
cp -r dir1/ /foo/; cp -r dir1/ /bar/; cp -r dir1/ /foo2/; cp -r dir1/ /bar2/
I know it's like issuing several commands, but it works well for me when I want to type 1 line and walk away for a while.
For example, if you are in the parent directory of your destination folders, you can do:
for i in $(ls); do cp sourcefile "$i"; done
