Concatenate Files In Order Linux Command

I just started learning to use the command line, so hopefully this is not a dumb question.
I have the following files in my directory:
L001_R1_001.fastq
L002_R2_001.fastq
L004_R1_001.fastq
L005_R2_001.fastq
L001_R2_001.fastq
L003_R1_001.fastq
L004_R2_001.fastq
L006_R1_001.fastq
L002_R1_001.fastq
L003_R2_001.fastq
L005_R1_001.fastq
L006_R2_001.fastq
You can see from the filenames that it's a mix of R1 and R2 files, and the listing is not sorted by the number after L00.
I want to concatenate the files in filename order, separately for the R1 and R2 files.
Done manually, it would look like the following:
# for R1 files
cat L001_R1_001.fastq L002_R1_001.fastq L003_R1_001.fastq L004_R1_001.fastq L005_R1_001.fastq L006_R1_001.fastq > R1.fastq
# for R2 files
cat L001_R2_001.fastq L002_R2_001.fastq L003_R2_001.fastq L004_R2_001.fastq L005_R2_001.fastq L006_R2_001.fastq > R2.fastq
Could you please help me write a script that I can re-use later?
Thank you!

cat `ls -- *_R1_*.fastq | sort` >R1.fastq
cat `ls -- *_R2_*.fastq | sort` >R2.fastq
The | sort is not needed on most systems, because ls already sorts the files by name.
If the names of the files contain whitespace, set IFS to a newline first so the command substitution splits only at line boundaries:
IFS='
'
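Since you asked for a script you can re-use, here is a minimal sketch along the same lines (it assumes the L###_R#_001.fastq naming from your listing; glob expansion is already sorted by name, so no explicit sort is needed):
#!/bin/sh
# concat_reads.sh - concatenate lane files per read direction, in name order.
# Run it from the directory that holds the .fastq files.
for rd in R1 R2; do
    # The glob expands in sorted order, so lanes L001..L006 come out in sequence.
    cat ./*_"${rd}"_001.fastq > "${rd}.fastq"
done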

Try using the wildcard character *. The shell automatically expands it to the matching file names in alphabetical order.
cat L*_R1_001.fastq > R1.fastq
cat L*_R2_001.fastq > R2.fastq
EDIT:
If the above command doesn't give the desired sorting, try overriding the locale with LC_ALL=C, as suggested by Fredrik Pihl. Note that the assignment has to affect the shell that expands the glob, not just cat, so set it in a subshell:
( export LC_ALL=C; cat L*_R1_001.fastq > R1.fastq )
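You can preview the order in which a glob will expand before concatenating anything:
$ printf '%s\n' L*_R1_001.fastq
L001_R1_001.fastq
L002_R1_001.fastq
L003_R1_001.fastq
L004_R1_001.fastq
L005_R1_001.fastq
L006_R1_001.fastq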

Related

pasting many files to a single large file

I have many text files in a directory, like 1.txt, 2.txt, 3.txt, 4.txt, ..., 2000.txt, and I want to paste them together to make one large file.
To that end I did something like
paste *.txt > largefile.txt
but the above command reads the .txt files in the wrong order, so I need to read the files sequentially and paste them as 1.txt 2.txt 3.txt ... 2000.txt.
Please suggest a better solution for pasting many files.
Thanks, and looking forward to hearing from you.
Sort the file names numerically yourself then.
printf "%s\n" *.txt | sort -n | xargs -d '\n' paste
When dealing with many files, you may hit the open-file limit ulimit -n. On my system ulimit -n is 1024, but this is a soft limit and can be raised, e.g. with ulimit -n 99999.
Without raising the soft limit, go with a temporary file that accumulates the results, handling a bit fewer than ulimit -n files per "round" (paste also needs descriptors for accumulator.txt and the standard streams), like:
touch accumulator.txt
... | xargs -d '\n' -n "$(($(ulimit -n) - 4))" sh -c '
paste accumulator.txt "$@" > accumulator.txt.sav;
mv accumulator.txt.sav accumulator.txt
' _
cat accumulator.txt
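Assembled into one runnable sketch (the leading pipeline is the printf/sort one from above):
touch accumulator.txt
printf "%s\n" *.txt | sort -n | xargs -d '\n' -n "$(($(ulimit -n) - 4))" sh -c '
paste accumulator.txt "$@" > accumulator.txt.sav;
mv accumulator.txt.sav accumulator.txt
' _
cat accumulator.txt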
Instead of using the wildcard * to enumerate all the files in a directory: if your file names follow a sequential pattern, you can list the files in order explicitly and concatenate them into a large file. The order in which * expands can differ between environments, so it may not work as you expect.
Below is a simple example
$ for i in $(seq 20); do echo "$i" > "$i.txt"; done
# create 20 test files, 1.txt, 2.txt, ..., 20.txt, each containing its own number
$ cat {1..20}.txt
# show the content of all files in the order 1.txt, 2.txt, ..., 20.txt
$ cat {1..20}.txt > 1_20.txt
# concatenate them into a large file named 1_20.txt
In bash, or any other shell, glob expansions are done in lexicographical order. With numbered files this sadly means that 10.txt sorts before 2.txt: names are compared character by character, so 1 < 2 decides the order before the second digit of 10 is even looked at.
So here are a couple of ways to operate on your files in order:
rename all your files:
for i in *.txt; do mv "$i" "$(printf '%05d.txt' "${i%.*}")"; done
paste *.txt
use brace-expansion:
Brace expansion is a mechanism that allows for the generation of arbitrary strings. For integers you can use {n..m} to generate all numbers from n to m or {n..m..s} to generate all numbers from n to m in steps of s:
paste {1..2000}.txt
The downside here is that a file in the range may be missing (e.g. 1234.txt), in which case paste would complain about a nonexistent file. To skip missing files you can do
shopt -s extglob nullglob
paste ?({1..2000}.txt)
The pattern ?(pattern) matches zero or one occurrence of pattern, and nullglob makes patterns that match nothing expand to nothing instead of being passed along literally. So this excludes the missing files but keeps the order.
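A quick demonstration of the idea (hypothetical files; 3.txt and 5.txt are deliberately missing):
$ touch 1.txt 2.txt 4.txt
$ shopt -s extglob nullglob
$ echo ?({1..5}.txt)
1.txt 2.txt 4.txt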

Automate and looping through batch script

I'm new to bash. I want to iterate through a list and use each entry to replace a string in another file.
ls -l somefile | grep .txt | awk 'print $4}' | while read file
do
toreplace="/Team/$file"
sed 's/dataFile/"$toreplace"/$file/ file2 > /tmp/test.txt
done
When I run the code I get the error
sed: 1: "s/dataFile/"$torepla ...": bad flag in substitute command: '$'
Example of somefile, which has the list of file paths:
foo/name/xxx/2020-01-01.txt
foo/name/xxx/2020-01-02.txt
foo/name/xxx/2020-01-03.txt
However, my desired output is to use the list of file paths in somefile to replace a string in the content of another file, file2. Something like this:
This is the directory of locations where data from /Team/foo/name/xxx/2020-01-01.txt ............
I'm not sure if I understand your desired outcome, but hopefully this will help you to figure out your problem:
You have three files in a directory:
TEAM/foo/name/xxx/2020-01-02.txt
TEAM/foo/name/xxx/2020-01-03.txt
TEAM/foo/name/xxx/2020-01-01.txt
And you have another file called to_be_changed.txt which contains the text This is the directory of locations where data from TO_BE_REPLACED ............ If you want to grab the filenames of your three files and insert them into to_be_changed.txt, you can do it with:
while IFS= read -r file
do
    filename="$file"
    sed "s/TO_BE_REPLACED/${filename##*/}/g" to_be_changed.txt >> changed.txt
done < <(find ./TEAM/ -name "*.txt")
And you will then have made a file called changed.txt which contains:
This is the directory of locations where data from 2020-01-02.txt ............
This is the directory of locations where data from 2020-01-03.txt ............
This is the directory of locations where data from 2020-01-01.txt ............
Is this what you're trying to achieve? If you need further clarification I'm happy to edit this answer to provide more details/explanation.
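One caveat: find does not guarantee any particular output order (which is why the lines above come out as 02, 03, 01). If you want the files processed in name order, sort find's output first; a sketch:
while IFS= read -r file
do
    sed "s/TO_BE_REPLACED/${file##*/}/g" to_be_changed.txt >> changed.txt
done < <(find ./TEAM/ -name "*.txt" | sort)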
ls -l somefile | grep .txt | awk 'print $4}' | while read file
No. No, no, nono.
ls -l somefile is only going to show somefile unless it's a directory.
(Don't name a directory "somefile".)
If you mean somefile.txt, please clarify in your post.
grep .txt is going to look through the lines presented for the three characters txt preceded by any character (the dot is a regex wildcard). Since you asked for a long listing of somefile it shouldn't find any, so nothing should be passed along.
awk 'print $4}' is a typo: the opening brace is missing, so awk will abort with a syntax error.
Keep it simple. What I suspect you meant was
for file in *.txt
Then in
toreplace="/Team/$file"
sed 's/dataFile/"$toreplace"/$file/ file2 > /tmp/test.txt
it's unclear what you expect $file to be; awk's $4 from an ls -l seems unlikely.
Assuming it's the filenames from the for above, then try
sed "s,dataFile,/Team/$file," file2 > /tmp/test.txt
Does that help? Correct me as needed. Sorry if I seem harsh.
Welcome to SO. ;)

Creating a file by merging two files

I would like to merge two files and create a new file using a Linux command.
I have the two files named as a1b.txt and a1c.txt
Content of a1b.txt
Hi,Hi,Hi
How,are,you
Content of a1c.txt
Hadoop|are|world
Data|Big|God
And I need a new file called merged.txt with the below content (expected output):
Hi,Hi,Hi
How,are,you
Hadoop|are|world
Data|Big|God
To achieve that, in the terminal I am running the below command:
cat /home/cloudera/inputfiles/a1* > merged.txt
but it gives me output like this:
Hi,Hi,Hi
How,are,youHadoop|are|world
Data|Big|God
Could somebody help me get the expected output?
Probably your files do not end with a newline character. Here is how to add one to them:
$ sed -i -e '$a\' /home/cloudera/inputfiles/a1*
$ cat /home/cloudera/inputfiles/a1* > merged.txt
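If you want to check first whether a file ends with a newline, a handy idiom: command substitution strips trailing newlines, so the result is empty exactly when the last byte is a newline (an empty file also reports as ending with one):
$ [ -z "$(tail -c 1 a1b.txt)" ] && echo "trailing newline" || echo "no trailing newline"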
If you are allowed to be destructive (not have to keep the original two files unmodified) then:
robert@debian:/tmp$ cat fileB.txt >> fileA.txt
robert@debian:/tmp$ cat fileA.txt
this is file A
This is file B.

Command to open a file which contains the given data

I had this question in an interview.
The interviewer described a situation: there are 12 files in your Linux system. Give me a command that will open the file containing the data "Hello".
I told him I only knew the grep command, which will give you the names of the files containing "Hello".
Please tell me if there is a command that opens a file this way.
Assuming it will be only one file containing the word hello:
less $(grep -H "hello" *.txt | sed 's/:.*//')
Here grep with the -H parameter prints the file name in front of each match. sed then removes everything except the filename, and finally less opens the file. (grep -l would give you the filename directly, without duplicates if a file matches more than once.)
Maybe this could help:
$ echo "foo" > file1.txt
$ echo "bar" > file2.txt
$ grep -l foo * | xargs cat
foo
Here you have two files, and you are looking for the one with the string "foo" in it. Replace cat with your command of choice for opening files: vi, emacs, nano, pico... (no, not another flame war!)
You may want to try a different approach if several files contain the string you are looking for; the above assumes only one file contains it.
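If several files may match, less can page through all of them (use :n and :p to move between files); a sketch, assuming no whitespace in the filenames:
$ less $(grep -l "Hello" *)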

cat | sort csv file by name in bash

I have a bunch of csv files that I want to save into one file, ordered by name.
I use
cat *.csv | sort -t\ -k2 -n *.csv > output.csv
This works well for names like a001, a002, a010, a100,
but my file names are messed up a bit, so they are like a1, a2, a10, a100,
and the command I wrote arranges things like this:
cn201
cn202
cn202
cn203
cn204
cn99
cn98
cn97
cn96
..
cn9
Can anyone please help me?
Thanks
If I understand correctly, you want to use the -V (version-sort) flag instead of -n. This is only available on GNU sort, but that's probably the one you are using.
However, it depends how you want the prefixes to be sorted.
If you don't have the -V option, sort allows you to be more precise about what characters constitute a sort key.
sort -t\ -k2.3n *.csv > output.csv
The .3 tells sort that the key to sort on starts with the 3rd character of the second field, effectively skipping the cn prefix. You can put the n directly in the field specifier, which saves you two whole characters, but more importantly for more complex sorts, allows you to treat just that key as a number, rather than applying -n globally (which is only an issue if you specify multiple keys with several uses of -k).
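A quick illustration of how -V orders the kind of names from the question, next to the default lexicographic sort:
$ printf '%s\n' cn201 cn9 cn99 | sort -V
cn9
cn99
cn201
$ printf '%s\n' cn201 cn9 cn99 | sort
cn201
cn9
cn99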
The sort version on the live server is 5.97, from 2006,
so a few things did not work correctly.
However, the code below is my solution:
#!/bin/bash
echo "This script reads all CSVs into a single file (clusters.csv) in this directory"
for filers in *.csv
do
    echo "" >> clusters.csv
    echo "--------------------------------" >> clusters.csv
    echo "$filers" >> clusters.csv
    echo "--------------------------------" >> clusters.csv
    cat "$filers" >> clusters.csv
done
Or, if you want to keep it simple, in one command (FNR > 1 skips the first line, i.e. the header, of each file):
awk 'FNR > 1' *.csv > clusters.csv
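If you also want the version-sorted input order from the other answer, a sketch combining the two (GNU sort and xargs assumed; filenames must not contain whitespace, and clusters.csv should not already exist in the directory, or the glob may pick it up):
printf '%s\n' *.csv | sort -V | xargs awk 'FNR > 1' > clusters.csv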
