I am trying to recursively (including sub-directories) read the last line of each file of a certain type (*.log) and write the output to an individual file for each of the *.log files,
e.g. (tail_"filename").
The closest bit of code I've been able to piece together is the following. However, I would need to send the output to a separate file for each instance where it runs the tail command.
find -type f | while read filename; do tail -1 $filename; done
You were almost there with your solution. Just add > "$f".tail to create the tail file:
find . -type f | while read -r f; do tail -1 "$f" > "$f".tail; done
Another possibility might be
find . -type f -exec sh -c 'tail -1 "$1" > "$1".tail' _ {} \;
(Pass the filename to sh as an argument rather than embedding {} inside the command string; embedding it breaks, and is even a shell-injection risk, on filenames containing quotes or other special characters.)
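Since the question asked only about *.log files, a -name filter can be added. A minimal sketch (the directory and file contents below are made up purely for illustration):

```shell
# Sketch: build a throwaway directory with one .log and one non-log file,
# then write the last line of each .log file to a matching .tail file.
dir=$(mktemp -d)
printf 'first\nlast\n' > "$dir/app.log"
printf 'not a log\n'   > "$dir/notes.txt"

# -name '*.log' restricts the match; the filename is passed to sh as "$1"
find "$dir" -type f -name '*.log' -exec sh -c 'tail -n 1 "$1" > "$1".tail' _ {} \;
```

Afterwards app.log.tail contains the line last, and no .tail file is created for notes.txt.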
I saw a few posts in the forum but I can't manage to make them work for me.
I have a script that runs in a folder, and I want it to count the size of only the files in that folder, without the folders inside.
so if i have
file1.txt
folder1
file2.txt
it should return the size in bytes of file1 + file2, without folder1.
find . -maxdepth 1 -type f
gives me a list of all the files I want to count, but how can I get the size of all these files?
The tool for this is xargs:
find "$dir" -maxdepth 1 -type f -print0 | xargs -0 wc -c
Note that find -print0 and xargs -0 are extensions (originally GNU, now widely supported), but if you know they are available, they are well worth using in your script - you don't know what characters might be present in the filenames in the target directory.
You will need to post-process the output of wc; alternatively, use cat to give it a single input stream, like this:
find "$dir" -maxdepth 1 -type f -print0 | xargs -0 cat | wc -c
That gives you a single number you can use in following commands.
(I've assumed you meant "size" in bytes; obviously substitute wc -m if you meant characters or wc -l if you meant lines).
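To use that single number later in a script, capture it in a variable. A quick sketch (the directory and files here are invented for illustration):

```shell
# Sketch: sum the bytes of the regular files in one directory,
# ignoring subdirectories (excluded by -maxdepth 1 -type f).
dir=$(mktemp -d)
printf 'hello'  > "$dir/a.txt"   # 5 bytes
printf 'world!' > "$dir/b.txt"   # 6 bytes
mkdir "$dir/sub"                 # a subdirectory, not counted

total=$(find "$dir" -maxdepth 1 -type f -print0 | xargs -0 cat | wc -c)
echo "Total bytes: $total"       # 5 + 6 = 11 bytes here
```

The same pattern works with wc -m or wc -l if you want characters or lines instead.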
How do I display the content of the regular files matched with a grep command? For example, I grep a directory listing in order to see only its regular files. I used the next line to see the regular files only:
ls -lR | grep ^-
Then I would like to display the content of the files found there. How do I do it?
I would do something like:
$ cat `ls -lR | egrep "^-" | rev | cut -d ' ' -f 1 | rev`
Use ls to find the files
grep finds your pattern
reverse the whole result
cut out the first file separated field to get the file name (files with spaces are problematic)
reverse the file name back to normal direction
Backticks will execute that and return the list of file names to cat.
or the way I would probably do it is use vim to look at each file.
$ vim `ls -lR | egrep "^-" | rev | cut -d ' ' -f 1 | rev`
It feels like you are trying to find only the files recursively. This is what I do in those cases:
$ vim `find . -type f -print`
There are multiple ways of doing it. I'll give you a few easy and clean ways here. All of them handle filenames with spaces.
$ find . -type f -print0 | xargs -0 cat
-print0 adds a null character '\0' as the delimiter, and you need to call xargs -0 so it recognises the null delimiter. If you don't do that, whitespace in filenames creates problems.
e.g. without -print0, the filenames abc 123.txt and 1.inc would be read as three separate files: abc, 123.txt and 1.inc.
With -print0 this becomes abc 123.txt'\0'1.inc'\0', which is read as the two files abc 123.txt and 1.inc.
As for xargs, it converts its input into arguments for another command: command1 | xargs command2 means the output of command1 is passed to command2 as arguments.
cat displays the content of the file.
$ find . -type f -exec echo {} \; -exec cat {} \;
This is just using the find command. It finds all the files (type f), calls echo to output the filename, then calls cat to display its content.
If you don't want the filename, omit -exec echo {} \;
Alternatively you can use cat command and pass the output of find.
$ cat `find . -type f -print`
If you want to scroll through the content of multiple files one by one, you can use:
$ less `find . -type f -print`
When using less, you can navigate between files with :n and :p (next and previous file respectively). Press q to quit less.
I am trying to reverse the order of multiple text files (for plotting purposes) which are essentially rows of numbers. I tried to do it with tac and combined it with find and -exec as
find ./dir1/dir2/ -name foo.txt -type f -exec tac {} \;
but this only gives the output on the screen and does not modify the intended files.
Am I missing something here?
You're almost there - tac writes to stdout so you can simply redirect the output somewhere handy:
find .... \; > newfoo.txt
If you want each file reversed and written next to the original, something like this will do:
find . -type f -exec sh -c 'tac "$1" > "$1"-new' -- {} \;
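If you really do want the files modified in place, one common approach (a sketch, assuming GNU tac is available; the file and its contents below are made up for illustration) is to write to a temporary file and then mv it over the original:

```shell
# Sketch: reverse the lines of each matching file in place via a temp file.
dir=$(mktemp -d)
printf '1\n2\n3\n' > "$dir/foo.txt"

# tac writes the reversed lines to foo.txt.tmp; mv only replaces the
# original if tac succeeded (note the &&), so a failure leaves it intact.
find "$dir" -name foo.txt -type f \
    -exec sh -c 'tac "$1" > "$1".tmp && mv "$1".tmp "$1"' _ {} \;
```

Afterwards foo.txt contains 3, 2, 1, one per line.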
I have a text file with one filename per row:
Interpret 1 - Song 1.mp3
Interpret 2 - Song 2.mp3
...
(About 200 Filenames)
Now I want to search a folder recursively for these filenames to get the full path for each filename in Filenames.txt.
How can I do this? :)
(Purpose: I copied files to my MP3 player, but some of them are broken, and I want to recopy them all without spending hours hunting them down in my music folder.)
The easiest way may be the following:
while read -r file; do find /dest/directory -name "$file"; done < orig_filenames.txt > output_file_with_paths
A much faster way is to run the find command only once and use fgrep:
find . -type f -print0 | fgrep -zFf ./file_with_filenames.txt | xargs -0 -J % cp % /path/to/destdir
You can use a while read loop along with find:
filecopy.sh
#!/bin/bash
while IFS= read -r line
do
find . -iname "$line" -exec cp '{}' /where/to/put/your/files \;
done < list_of_files.txt
Where list_of_files.txt is the list of files line by line, and /where/to/put/your/files is the location you want to copy to. You can just run it like so in the directory:
$ bash filecopy.sh
+1 for @jm666's answer, but the -J option doesn't work for my flavor of xargs, so I changed it to:
find . -type f -print0 | fgrep -zFf ./file_with_filenames.txt | xargs -0 -I{} cp "{}" /path/to/destdir/
My purpose is to parse several text files using the POS tagger HunPos: http://code.google.com/p/hunpos/wiki/UserManual
Is there a way to bash-script hunpos through a bunch of text files?
Typical mechanisms look like:
for f in glob; do command "$f"; done
I often run commands like: for f in *; do echo -n "$f "; cat "$f"; done to see the contents of all the files in a directory. (Especially nice with /proc/sys/kernel/-style directories, where all the files have very short contents.)
or
find . -type f -exec command {} \;
or
find . -type f -print0 | xargs -0 command parameters
Something like find . -type f -exec file {} \; or find . -type f -print0 | xargs -0 file (the latter only works if the command accepts multiple filenames as input).
Of course, if the program accepts multiple filename arguments (like cat or more or similar Unix shell tools) and all the files are in a single directory, you can very easily run: cat * (show contents of all files in the directory) or cat *.* (show contents of all files with a period in the filename).
If you frequently want "all files in all [sub]*directories", the zsh ** recursive glob can be handy: ls -l **/*.c would show you foo/bar/baz.c and /blort/bleet/boop.c at once. Neat tool, but I usually don't mind writing the find command equivalent, I just don't need it that often. (And zsh isn't installed everywhere, so relying on its features could be frustrating in the future.)