I am trying to reverse the line order of multiple text files (for plotting purposes), which are essentially rows of numbers. I tried to do it with tac, combining it with find and -exec:
find ./dir1/dir2/ -name foo.txt -type f -exec tac {} \;
but this only prints the output to the screen and does not modify the intended files.
Am I missing something here?
You're almost there - tac writes to stdout so you can simply redirect the output somewhere handy:
find .... \; > newfoo.txt
If you want each file reversed and written to the same location, something like this will do:
find . -type f -exec sh -c 'tac "$1" > "$1"-new' -- {} \;
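If you want to overwrite the originals in place instead of creating -new copies, one option is to go through a temporary file (the .tmp suffix here is just an arbitrary choice; moreutils' sponge would do the same job):
find ./dir1/dir2/ -name foo.txt -type f -exec sh -c 'tac "$1" > "$1".tmp && mv "$1".tmp "$1"' sh {} \;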
This is a simple problem, I'm just stuck on it. I am taking the contents of a bunch of different files and printing each file's name as a header before its contents. That much works. But I want an empty line separating the contents of one file from the header for the next file's content.
I want it to look like:
File 1 header
File 1 contents
[empty space]
File 2 header
File 2 contents
I tried putting \n after "{}" in my code, but that didn't work. Any suggestions?
find . -type f -name '*_top_hits.txt' -print -exec cat {} \; > combinedresults.txt
You can just add an empty echo as part of the find -exec:
find . -type f -name "*_top_hits.txt" -print -exec sh -c "cat {};echo" \; > combinedresults.txt
The echo just produces a single empty line after each file's contents. Also, you don't need multiple -exec options; a single sub-shell does the job. (Passing the filename to sh as an argument, rather than embedding {} inside the command string, keeps filenames containing special characters safe.)
You can try adding a second -exec:
find . -type f -name '*_top_hits.txt' -print -exec cat {} \; -exec echo \; > combinedresults.txt
One side effect of this is that a trailing newline will be added after the contents of the last file.
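If that trailing blank line matters, you can delete the last line of the combined file afterwards; a small cleanup step (GNU sed's -i assumed):
sed -i '$d' combinedresults.txt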
I'm trying to count the total lines in the files within a directory. To do this I am trying to use a combination of find and wc. However, when I run find . -exec wc -l {}\;, I receive the error find: missing argument to -exec. I can't see any apparent issue. Any ideas?
You simply need a space between {} and \;
find . -exec wc -l {} \;
Note that if there are any sub-directories under the current location, wc will generate an error message for each of them, looking something like this:
wc: ./subdir: Is a directory
To avoid that problem, you may want to tell find to restrict the search to files:
find . -type f -exec wc -l {} \;
Another note: good idea using the -exec option. All too often, people pipe commands together expecting the same result; for instance, here it would be:
find . -type f | xargs wc -l
The problem with piping commands in such a manner is that it breaks if any file name has spaces in it. For instance, if a file were named "a b", wc would receive "a" and then "b" separately, and you would get two error messages: a: no such file and b: no such file.
Unless you know for a fact that your file names never contain spaces (or other non-printable characters), if you do need to pipe commands together, you should tell all the tools in the pipeline to use the NUL character (\0) as a separator instead of whitespace. So the previous command would become:
find . -type f -print0 | xargs -0 wc -l
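If you'd rather avoid the pipe altogether, find can batch filenames itself with the + terminator, which passes many files to a single wc invocation and handles spaces safely:
find . -type f -exec wc -l {} +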
With version 4.0 or later of bash, you don't need your find command at all:
shopt -s globstar
wc -l **/*
There's no simple way to skip directories, which, as pointed out by Gui Rava, you might want to do, unless you can differentiate files and directories by name alone. For example, maybe directories never have . in their name, while all the files have at least one extension:
wc -l **/*.*
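If you can't rely on naming conventions, a sketch that loops over the globstar matches and tests each one explicitly (bash 4+ assumed):
shopt -s globstar
for f in **/*; do
    [ -f "$f" ] && wc -l "$f"
done
Note this runs wc once per file, so you lose the grand total that a single wc invocation would print.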
I need to remove about 40 email addresses from several files in a distribution list.
One address might appear in multiple files and needs to be removed from all of them.
I am working in a directory with several .sh files, each of which contains many lines.
I have done something like this in a couple of test files:
find . -type f -exec grep -li ADDRESS_TO_FIND {} 2>/dev/null \; | xargs sed -i 's/ADDRESS_TO_REMOVE/ /g' *
It works fine, but once I try it on the real files, it takes a long time and just sits there. I need to run this on several servers, which is the main reason I want to optimize it.
I have tried to run something like this:
find . -type f -name '*sh' 2>/dev/null | xargs grep ADDRESS_TO_FIND
but that will return:
./FileContainingAddress.sh:ADDRESS_TO_FIND
How do I add something like this:
awk '{print substr($0,1,10)}'
But have it return everything before the ":"?
I can do the rest from there, but I haven't found a way to trim that part.
You can use -exec as a predicate in find, as long as you don't use the multiple-file {} + variant. That means you can chain several -exec clauses, each of which will run only if the previous one succeeded. This style avoids building lists of filenames, which makes it much more robust in the face of files with odd characters in their names.
For example:
find . -type f -name '*sh' \
-exec grep -qi ADDRESS_TO_FIND {} \; \
-exec sed -i 's/ADDRESS_TO_FIND/ /g' {} \;
You probably want to provide the address as a parameter rather than having to type it twice, unless you really meant for the two instances to be different (ADDRESS_TO_FIND vs. ADDRESS_TO_REMOVE):
clean() {
find . -type f -name '*sh' \
-exec grep -qi "$1" {} \; \
-exec sed -i "s/$1/ /g" {} \;
}
(Watch out for / in the argument to clean. I'll leave making the sed more robust as an exercise.)
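As one take on that exercise, a sketch that matches the address as a fixed string with grep -F and escapes the characters that are special in a sed pattern (this covers the common cases, not every edge case):
clean() {
    # escape BRE metacharacters so addresses containing . or / are safe
    esc=$(printf '%s' "$1" | sed 's:[][\/.^$*]:\\&:g')
    find . -type f -name '*sh' \
           -exec grep -qiF "$1" {} \; \
           -exec sed -i "s/$esc/ /g" {} \;
}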
After looking back at your question, I noticed something that's potentially quite important:
find -type f -exec grep -li ADDRESS {} \; | xargs sed -i 's/ADDRESS/ /g' *
# here! -----------------------------------------------------------------^
The asterisk is being expanded, so the sed line is operating on every file in the directory.
Assuming that this wasn't a typo in your question, I believe that this is the source of your poor performance. You should remove it!
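For reference, the corrected pipeline without the stray asterisk (still subject to the usual whitespace caveats of unquoted xargs input):
find . -type f -exec grep -li ADDRESS {} \; | xargs sed -i 's/ADDRESS/ /g'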
I am trying to recursively (through sub-directories) read the last line of each file of a certain type (*.log) and write the output to an individual file for each of the *.log files,
e.g. (tail_"filename").
The closest bit of code I've been able to piece together is the following; however, I would still need to send the output to a separate file for each run of the tail command.
find -type f | while read filename; do tail -1 $filename; done
You were almost there with your solution. Just add the > ${f}.tail to create the tail file:
find . -type f | while IFS= read -r f; do tail -1 "$f" > "${f}.tail"; done
Another possibility might be:
find . -type f -exec sh -c 'tail -1 "$1" > "$1".tail' sh {} \;
I have a text file with one filename per line:
Interpret 1 - Song 1.mp3
Interpret 2 - Song 2.mp3
...
(about 200 filenames)
Now I want to search a folder recursively for these filenames, to get the full path for each filename in Filenames.txt.
How to do this? :)
(Purpose: I copied files to my MP3 player, but some of them are broken, and I want to recopy them all without spending hours digging them out of my music folder.)
The easiest way may be the following:
cat orig_filenames.txt | while IFS= read -r file ; do find /dest/directory -name "$file" ; done > output_file_with_paths
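The cat isn't strictly needed, by the way; the same loop can read the list via redirection:
while IFS= read -r file ; do find /dest/directory -name "$file" ; done < orig_filenames.txt > output_file_with_paths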
A much faster way is to run the find command only once and use fgrep:
find . -type f -print0 | fgrep -zFf ./file_with_filenames.txt | xargs -0 -J % cp % /path/to/destdir
You can use a while read loop along with find:
filecopy.sh
#!/bin/bash
while IFS= read -r line
do
find . -iname "$line" -exec cp '{}' /where/to/put/your/files \;
done < list_of_files.txt
Where list_of_files.txt is the list of files line by line, and /where/to/put/your/files is the location you want to copy to. You can just run it like so in the directory:
$ bash filecopy.sh
+1 for jm666's answer, but the -J option doesn't work for my flavor of xargs, so I changed it to:
find . -type f -print0 | fgrep -zFf ./file_with_filenames.txt | xargs -0 -I{} cp "{}" /path/to/destdir/