How to copy files filtered with grep - linux

I need to find and copy files in /usr/share/man.
Especially I need man7-8 and everything that has "et" in the name.
I tried this:
ls man7 man8 | grep "et"
This works perfectly.
Then I want to copy those files with cp, but I don't know how to format it properly:
ls man7 man8 | grep "et" | xargs -I '{}' cp '{}' /home/marty/homework
But this is not working

It's not working because ls <directory> just outputs the filenames, without the directory prefixes, so cp doesn't know which directory to copy each file from.
But there's no need for ls or grep here; just use a wildcard:
cp man7/*et* man8/*et* /home/marty/homework
Your code would also fail for any filenames containing whitespace, since xargs treats that as a delimiter by default.
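If you do want a pipeline, here is a minimal sketch that keeps the directory prefix and survives odd filenames, assuming GNU find, xargs, and cp, run from /usr/share/man as in the question (-print0 and -0 pass NUL-delimited paths; -t names the target directory):
find man7 man8 -maxdepth 1 -name '*et*' -print0 | xargs -0 -r cp -t /home/marty/homework --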


bash rm to delete old files only deleting the first one

I'm using Ubuntu 16.04.1 LTS
I found a script to delete everything but the 'n' newest files in a directory.
I modified it to this:
sudo rm /home/backup/`ls -t /home/backup/ | awk 'NR>5'`
It deletes only one file. It reports the following message about the rest of the files it should have deleted:
rm: cannot remove 'delete_me_02.tar': No such file or directory
rm: cannot remove 'delete_me_03.tar': No such file or directory
...
I believe that the problem is the path. It's looking for delete_me_02.tar (and subsequent files) in the current directory, and it's somehow lost its reference to the correct directory.
How can I modify my command to keep looking in the /home/backup/ directory for all 'n' files?
Maybe find could help you do what you want:
find /home/backup -type f | xargs ls -t | tail -n +6 | xargs rm
But I would first run it without the final | xargs rm to check what is going to be removed.
The command in the backticks will be expanded to the list of relative file paths:
%`ls -t /home/backup/ | awk 'NR>5'`
a.txt b.txt c.txt ...
so the full command will now look like this:
sudo rm /home/backup/a.txt b.txt c.txt
which, I believe, makes it pretty obvious why only the first file is removed.
There is also a limit on the number of arguments you can pass to rm, so
you had better modify your script to use xargs instead:
ls -t /home/backup | tail -n +6 | xargs -I{} echo rm /home/backup/'{}'
(just remove echo once you verify that it produces the expected results for you)
After the command substitution expands, your command line looks like
sudo rm /home/backup/delete_me_01.tar delete_me_02.tar delete_me_03.tar etc
/home/backup is not prefixed to each word from the output. (Aside: don't use ls in a script; see http://mywiki.wooledge.org/ParsingLs.)
Frankly, this is something most shells just don't make easy to do properly. (Exception: with zsh, you would just use sudo rm /home/backup/*(Om[1,-6]).) I would use some other language.
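For completeness, a robust version is still possible in plain shell. Here is a minimal sketch, assuming GNU find, sort, tail, cut, and xargs (the -z/-0 flags keep the stream NUL-delimited, so it works for any filename; %T@ prints the modification time as a sortable number):
# List files newest first, skip the 5 newest, delete the rest.
find /home/backup -maxdepth 1 -type f -printf '%T@ %p\0' \
  | sort -z -rn \
  | tail -z -n +6 \
  | cut -z -d' ' -f2- \
  | xargs -0 -r rm --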

linux : listing files that contain several words

I'm trying to find a way to list all the files in the directory tree (recursively) that contain several words.
While searching I found examples such as egrep -R -l 'toto|tata' . but | implies OR. I would like AND...
Thank you for your help.
Using GNU grep with GNU xargs:
grep -ERl 'toto' . | xargs -r grep -l 'tata'
The first grep lists the files containing the pattern toto, which are then fed to xargs, and the second grep (with -l again) keeps only those files that also contain tata. The -r flag ensures the second grep doesn't run on empty input.
The -r flag in xargs from the man page,
-r, --no-run-if-empty
If the standard input does not contain any nonblanks, do not run the command.
Normally, the command is run once even if there is no input. This option is a GNU
extension.
The agrep tool is designed to provide AND for grep, with the usage:
agrep 'pattern1;pattern2' file
In your case you could run
find . -type f -exec agrep 'toto;tata' {} \;   # add -l to display only the file names
PS1: For the current directory you can just run agrep 'pattern1;pattern2' *.*
PS2: Unfortunately agrep does not support the -R option.
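If agrep is not available, the chained-grep approach above generalizes to any number of words. Here is a minimal sketch, assuming GNU grep and xargs (-Z and -0 pass NUL-delimited filenames, so names with spaces survive; 'titi' is just a hypothetical third word):
grep -RlZ 'toto' . | xargs -0 -r grep -lZ 'tata' | xargs -0 -r grep -l 'titi'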

'ls | grep -c' and full path

Can I use ls | grep -c /full/path/to/file to count the occurrences of a file, but while executing the command from a different directory than where the files I'm looking for are?
Let's say I want to look how many .txt files I have in my "results" directory. Can I do something like ls | grep -c /full/path/to/results/*.txt while I'm in another directory?
Although I have .txt files in that directory, I always get a zero when I run the command from another directory :( What's happening? Can I only use ls for the current directory?
You have to use ls <dirname>; plain ls lists only the current directory.
What you are trying to do can be accomplished with find <dir> -name "*.txt" | grep -c txt or find <dir> -name "*.txt" | wc -l
But you can also do ls <dir> | grep '\.txt$'. Please read the manuals to find the differences.
grep accepts regular expressions, not globs. /foo/bar/*.txt is a glob; try /foo/bar/.*\.txt instead.
Also, ls lists the files and directories under your current directory and does not print full paths. Do some tests and you will see it easily.
Finally, ls can print several names per line (when writing to a terminal, or with -C), which would make grep -c give an incorrect count, because grep counts matching lines, not individual matches.
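Putting this together, here is a minimal sketch for counting the .txt files in another directory, assuming the path exists and the filenames contain no newlines (-maxdepth 1 keeps find from descending into subdirectories):
find /full/path/to/results -maxdepth 1 -name '*.txt' | wc -l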

Search and replace entire files

I've seen numerous examples for replacing one string with another among multiple files but what I want to do is a bit different. Probably a lot simpler :)
Find all the files that contain a certain string and completely replace their contents with the contents of a new file.
I have a find command that works
find /home/*/public_html -name "index.php" -exec grep "version:1.23" '{}' \; -print
This finds all the files I need to update.
Now how do I replace their entire content with the CONTENTS of /home/indexnew.txt (I could also name it /home/index.php)
I emphasize content because I don't want to change the name or ownership of the files I'm updating.
find ... | while IFS= read -r filename; do cat static_file > "$filename"; done
Efficiency hint: use grep -q; it returns success as soon as the first match is found, without having to read the entire file.
If you have a bunch of files you want to replace, and you can get all of their names using wildcards you can try piping output to the tee command:
cat my_file | tee /home/*/update.txt
This should look through all the directories in /home and write the text in my_file to update.txt in each of those directories.
Let me know if this helps or isn't what you want.
Rather than using -print after -exec grep, you can add -l to grep to list the matching files directly, then copy over each one:
find /home/*/public_html -name "index.php" -exec grep -l "version:1.23" '{}' \; | xargs -I{} cp /home/index.php '{}'
Here is the detail of the -l option:
-l, --files-with-matches
Suppress normal output; instead print the name of each input
file from which output would normally have been printed. The
scanning will stop on the first match. (-l is specified by
POSIX.)
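This can also be done in a single find invocation. Here is a minimal sketch, assuming /home/indexnew.txt holds the replacement content: the second -exec runs only when grep -q succeeds, and rewriting through > truncates each file in place, so its name, ownership, and permissions are preserved.
find /home/*/public_html -name index.php \
  -exec grep -q 'version:1.23' {} \; \
  -exec sh -c 'cat /home/indexnew.txt > "$1"' _ {} \;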

Sorting files based on content using command line

I have a database of files in a folder. I wish to sort the files containing "*C:" into one folder and the files containing "*c:" into another folder. How can this be achieved?
I can use *.krn to access every file.
$ grep --help | grep with-matches
-l, --files-with-matches print only names of FILEs containing matches
What to do now depends on how many files there are and how paranoid you must be about their names. From the simplest
mv $(grep -l pattern files) target
to the most robust
grep -l -Z pattern files | xargs -0 mv -t target-directory --
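Applied to the question, here is a minimal sketch, assuming GNU grep, xargs, and mv, and that "*C:" and "*c:" occur literally in the .krn files (-F treats the pattern as a literal string, so * is not a regex metacharacter; upper_dir and lower_dir are hypothetical target folders):
grep -lZF '*C:' ./*.krn | xargs -0 -r mv -t upper_dir --
grep -lZF '*c:' ./*.krn | xargs -0 -r mv -t lower_dir --
Note that a file containing both strings will be moved by whichever command runs first.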
