Viewing file content for each file name appearing in a list - linux

I'm creating a list of file names using the command:
ls | grep "\.txt$"
I'm getting a list of files:
F1.txt
F2.txt
F3.txt
F4.txt
I want to view the content of these files (using less / more / cat /...)
Is there a way to do this by piping?
(By the way, I got the list of file names using a more complex command; this is just a simpler example for clarity.)

Would this be enough?
$ cat *.txt
For richer queries, you could use find and xargs:
$ find . -name "*.txt" | xargs cat
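If any of the file names might contain spaces or newlines, a safer variant (assuming GNU find and xargs) is to pass the names null-delimited:
$ find . -name "*.txt" -print0 | xargs -0 cat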

You can try something like this:
#!/bin/bash
for i in *.txt
do
    echo "Displaying file $i ..."
    more "$i"
done

What about:
cat $(ls | grep "\.txt$")

Related

Sorting files numerically and incrementally

So, I have 1000 files in folderA.
Let's say:
File_0001, File_0002, File_0003, File_0004, File_0005, . . . , File_1000
Question: how do I pick every second file by its number and copy those files into another folder (folderB), so that the files in folderB look like this:
File_0002, File_0004, File_0006, File_0008, File_0010, . . . , File_1000
Any suggestions will be really appreciated.
Thank you
You can also use a simple cp command:
cp File_*[02468] folderB
ls | sort | xargs -n2 echo | awk '{print $2}' | xargs -I '{}' echo mv '{}' /folderB
The trick is to use | xargs -n2 echo | awk '{print $2}' to get every even-numbered line. Note that the echo before mv makes this a dry run; drop it to actually move the files (or change mv to cp to copy them, as asked).
Depending on what's actually wanted, I'd say #demostene's answer is probably right. If OP actually wants alternate files from the list, regardless of possibly skipped numbers, then
cp $(ls | awk 'NR%2 == 0 {print $0}') folderB
would seem to do the trick. Note the obvious extensions for every third, fourth, or Nth file.
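For example, picking every third file instead would be (a sketch with the same parsing-ls caveats as above):
cp $(ls | awk 'NR%3 == 0') folderB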

How to group bash commands into one function?

Here is what I am trying to achieve. I want to run a sequence of commands on each file, so for example:
ls * | xargs (cat - | calculateforfile)
I want to run (cat - | calculateforfile) on each of the files separately. So basically, how do I group a list of commands as if it were one single function?
No need to use xargs. Just use a loop. You don't need cat either; just redirect the command's input from the file:
for A in *; do
    calculateforfile < "$A"
done
As a single line:
for A in *; do calculateforfile < "$A"; done
If you're looking for an xargs solution (for example, to combine with the find command):
find . -name "*.txt" | xargs -I % cat %
This will cat all the files found under the current directory that end in .txt.
The -I option is the key there
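If you literally want the commands grouped as one reusable function, here is a minimal sketch (calcall is a hypothetical name, and calculateforfile stands for whatever command you actually run):
# define once, then call with any list of files
calcall() {
    for f in "$@"; do
        calculateforfile < "$f"
    done
}
calcall *.txt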

How To Insert New Line Using Unix CAT and Find

I have a list of files that look like this:
/somedir/file1.fa
>foo
ATCGGGGG
/somedir/file2.fa
>bar
CCCCCCC
And there are many of these files.
I want to cat them all together using the following command:
find /somedir/ -name "*.fa" | xargs cat > All.fa
But why do I get this in All.fa:
>foo
ATCGGGGG>bar
CCCCCCC
Instead of
>foo
ATCGGGGG
>bar
CCCCCCC
Is there a way to correct it?
It looks like your files are missing newlines at the end.
find /somedir/ -name "*.fa" | xargs -n 1 -I % bash -c "cat %; echo" > All.fa
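Alternatively, since awk re-terminates every record it prints, it adds the missing final newline as a side effect; a sketch of the same idea:
find /somedir/ -name "*.fa" -exec awk 1 {} + > All.fa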

Linux: cat matching files in date order?

I have a few files in a directory with names similar to
_system1.log
_system2.log
_system3.log
other.log
but they are not created in that order.
Is there a simple, non-hardcoded, way to cat the files starting with the underscore in date order?
Quick 'n' dirty:
cat `ls -t _system*.log`
Safer:
ls -1t _system*.log | xargs -d'\n' cat
Use ls:
ls -1t _system*.log | xargs cat
or, in name order rather than date order:
ls -1 _system*.log | xargs cat
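If GNU find is available, here is a sketch that avoids parsing ls output entirely (oldest first; swap sort -n for sort -rn to get newest first):
find . -maxdepth 1 -name '_system*.log' -printf '%T@\t%p\n' | sort -n | cut -f2- | xargs -d'\n' cat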
You can also concatenate the files into a single output file according to their timestamps, and you can pick exactly which files to include. I find this very useful. The following command concatenates the files whose names contain the string 'xyz', ordered by modification time, and stores all of them in outputfile:
cat $(ls -t | grep xyz) > outputfile

Problems with Grep Command in bash script

I'm having some rather unusual problems using grep in a bash script. Below is an example of the bash script code that I'm using that exhibits the behaviour:
UNIQ_SCAN_INIT_POINT=1
cat "$FILE_BASENAME_LIST" | uniq -d >> $UNIQ_LIST
sed '/^$/d' $UNIQ_LIST >> $UNIQ_LIST_FINAL
UNIQ_LINE_COUNT=`wc -l $UNIQ_LIST_FINAL | cut -d \ -f 1`
while [ -n "`cat $UNIQ_LIST_FINAL | sed "$UNIQ_SCAN_INIT_POINT"'q;d'`" ]; do
    CURRENT_LINE=`cat $UNIQ_LIST_FINAL | sed "$UNIQ_SCAN_INIT_POINT"'q;d'`
    CURRENT_DUPECHK_FILE=$FILE_DUPEMATCH-$CURRENT_LINE
    grep $CURRENT_LINE $FILE_LOCTN_LIST >> $CURRENT_DUPECHK_FILE
    MATCH=`grep -c $CURRENT_LINE $FILE_BASENAME_LIST`
    CMD_ECHO="$CURRENT_LINE matched $MATCH times," cmd_line_echo
    echo "$CURRENT_DUPECHK_FILE" >> $FILE_DUPEMATCH_FILELIST
    let UNIQ_SCAN_INIT_POINT=UNIQ_SCAN_INIT_POINT+1
done
On numerous occasions, when grepping for the current line in the file location list, it has put no output to the current dupechk file even though there have definitely been matches to the current line in the file location list (I ran the command in terminal with no issues).
I've rummaged around the internet to see if anyone else has had similar behaviour, and thus far all I have found is that it is something to do with buffered and unbuffered outputs from other commands operating before the grep command in the Bash script....
However, no one seems to have found a solution, so basically I'm asking if you have ever come across this, and whether you have any ideas/tips/solutions for this problem...
Regards
Paul
The 'problem' is the standard I/O library. When it is writing to a terminal
it is unbuffered, but when it is writing to a pipe it sets up buffering.
Try changing
CURRENT_LINE=`cat $UNIQ_LIST_FINAL | sed "$UNIQ_SCAN_INIT_POINT"'q;d'`
to
CURRENT_LINE=`sed "$UNIQ_SCAN_INIT_POINT"'q;d' $UNIQ_LIST_FINAL`
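If buffering elsewhere in the pipeline really is the culprit, GNU coreutils also ships stdbuf, which can force line-buffered output; an untested sketch applied to the grep line:
stdbuf -oL grep "$CURRENT_LINE" "$FILE_LOCTN_LIST" >> "$CURRENT_DUPECHK_FILE"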
Are there any directories with spaces in their names in $FILE_LOCTN_LIST? If there are, those spaces will need to be escaped somehow. Some combination of find and xargs can usually deal with that for you, especially xargs -0.
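Quoting the expansions (and using grep -F so the line is matched literally rather than as a regex) is often enough; a sketch of the affected line:
grep -F -- "$CURRENT_LINE" "$FILE_LOCTN_LIST" >> "$CURRENT_DUPECHK_FILE"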
A small bash script using md5sum and sort that detects duplicate files in the current directory:
CURRENT="" md5sum * |
sort |
while read md5sum filename;
do
[[ $CURRENT == $md5sum ]] && echo $filename is duplicate;
CURRENT=$md5sum;
done
You tagged linux, so I assume you have tools like GNU find, md5sum, uniq, sort, etc. Here's a simple example to find duplicate files:
$ echo "hello world">file
$ md5sum file
6f5902ac237024bdd0c176cb93063dc4 file
$ cp file file1
$ md5sum file1
6f5902ac237024bdd0c176cb93063dc4 file1
$ echo "blah" > file2
$ md5sum file2
0d599f0ec05c3bda8c3b8a68c32a1b47 file2
$ find . -type f -exec md5sum "{}" \; | sort | uniq -w32 -D
6f5902ac237024bdd0c176cb93063dc4 ./file
6f5902ac237024bdd0c176cb93063dc4 ./file1
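With GNU uniq you can also print the duplicate groups separated by blank lines, which is easier to scan (a sketch; -w32 compares only the 32-character md5 field):
$ find . -type f -exec md5sum {} + | sort | uniq -w32 --all-repeated=separate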
