Bash script to export the first column of a txt file to Excel with a header - excel

I would like to export the first column of my txt file to Excel along with a user-defined header.
Currently my txt file contains the following:
667869 667869
580083 580083
316133 316133
9020 9020
and I would like to export it to Excel with my own header. How could I do this in a bash script?

Using a for loop with sed, maybe this will help:
for file in /path/to/folder/*.txt ; do
sed -i '1iCOL1,COL2' "$file"
done
This will add a header line COL1,COL2 to the top of each .txt file in the directory (GNU sed's -i option edits the file in place).
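A minimal self-contained run of the sed approach (the file name and header text are placeholders; GNU sed is assumed for -i):

```shell
# Sample data file (two columns, space-separated)
printf '667869 667869\n580083 580083\n' > data.txt

# Insert a header as the new first line, in place (GNU sed)
sed -i '1iCOL1,COL2' data.txt

head -n1 data.txt
```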

You can do something along these lines:
awk -v header="Col_1" 'NR==1 {print header} {print $1}' file
That assumes the separator between columns is a space and the fields themselves contain no spaces.
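Put together, a sketch of the awk answer that writes a CSV file Excel can open (file names and the header text are illustrative):

```shell
# Sample two-column input
printf '667869 667869\n580083 580083\n' > numbers.txt

# Print the user-defined header once, then the first field of every line
awk -v header="MyHeader" 'NR==1 {print header} {print $1}' numbers.txt > numbers.csv

cat numbers.csv
```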

Split and write files with awk - Bash

INPUT_FILE.txt in c:\Pro\usr\folder1
ABCDEFGH123456
ABCDEFGH123456
ABCDEFGH123456
BBCDEFGH123456
BBCDEFGH123456
I used the awk command below in a .sh script, run from c:\Pro\usr\folder2, to split the file into multiple txt files with a _kg suffix based on the first 8 characters.
awk '{ F=substr($0,1,8) "_kg" ".txt"; print $0 >> F; close(F) }' "c:\Pro\usr\folder1\input_file.txt"
This is working well, but the files are written to the directory the script runs from. How can I route the created files to another location, such as c:\Pro\usr\folder3?
Thanks
The following awk code may help; it was written and tested with the shown samples in GNU awk.
awk -v outPath='c:\\Pro\\usr\\folder3' -v FPAT='^.{8}' '{outFile=($1"_kg.txt");outFile=outPath"\\"outFile;print > (outFile);close(outFile)}' Input_file
Explanation: outPath is an awk variable holding the path mentioned in the question. FPAT (a regex describing what a field looks like, rather than what separates fields) makes the first 8 characters of each line into field 1. The main program builds outFile from that field followed by _kg.txt, prefixes outPath, prints the whole line to that file, and closes the file each time to avoid a "too many open files" error.
Pass the destination folder as a variable to awk:
awk -v dest='c:\\Pro\\usr\\folder3\\' '{F=dest substr($0,1,8) "_kg" ".txt"; print $0 >> F; close(F) }' "c:\Pro\usr\folder1\input_file.txt"
I think the doubled backslashes are required.
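On a Unix-style shell the same idea can be tried like this (the out/ directory stands in for c:\Pro\usr\folder3):

```shell
# Sample input: the first 8 characters decide the output file
printf 'ABCDEFGH123456\nABCDEFGH123456\nBBCDEFGH123456\n' > input_file.txt

mkdir -p out

# Prefix the destination directory onto each generated file name
awk -v dest='out/' '{F=dest substr($0,1,8) "_kg.txt"; print >> F; close(F)}' input_file.txt

ls out
```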

Split and rename a single file into multiple files using keywords present in the file

I am new to awk-like commands. I have a single text file holding SQL DDLs in the format below.
DROP TABLE IF EXISTS $database.TABLE_A ;
...
...
DROP TABLE IF EXISTS $database.TABLE_B ;
...
...
I would like to split the single file into multiple files such as:
TABLE_A.SQL
TABLE_B.SQL
TABLE_X.SQL
I am able to get the table names from the single file with the awk command below, but I am still struggling to split the file and rename each piece to TABLE_X.SQL.
awk 'FNR==1 {split($5,a,"."); print a[2]}' *.SQL
I am using Windows 10 DOS shell.
Finally I was able to achieve the desired output with the help of the shell script below, which can be run in a Windows bash shell:
#!/bin/bash
#Split single file
awk '/DROP/{x="F"++i}{print > (x".TXT")}' "$1"
#Create output directory
mkdir -p ./_output
#Move files, changing the extension
for f in *.TXT ; do
newfilename=$(awk 'FNR==1 {split($5,a,"."); print a[2]}' "$f")
echo "Processed $f ... new file is $newfilename.SQL ..."
mv "$f" ./_output/"$newfilename".SQL
done
Could you please try the following:
awk '/DROP/{if(file){close(file)};match($0,/TABLE_[^ ]*/);file=substr($0,RSTART,RLENGTH)".SQL"} {print > (file)}' Input_file
awk -F "[. ]" '{print >($(NF-1)".SQL")}' file.sql
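A runnable sketch of the first awk answer on sample DDL (the table names and the $database placeholder come from the question; the CREATE lines are stand-ins for the elided "..." content):

```shell
# Sample DDL file; the quoted heredoc keeps $database literal
cat > ddl.sql <<'EOF'
DROP TABLE IF EXISTS $database.TABLE_A ;
CREATE TABLE a (x INT);
DROP TABLE IF EXISTS $database.TABLE_B ;
CREATE TABLE b (y INT);
EOF

# On every DROP line, extract the TABLE_... keyword and switch output files
awk '/DROP/{if(file) close(file); match($0,/TABLE_[^ ]*/); file=substr($0,RSTART,RLENGTH)".SQL"} {print > file}' ddl.sql

ls TABLE_*.SQL
```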

Linux CSV - Add a column from a CSV file to another CSV file

I'm struggling to create a CSV file from two other ones.
Here's what I need.
The file I want (among many other lines):
"AB";"A";"B";"C";"D";"E"
Files I have:
File 1:
"A";"B";"C";"D";"E"
File 2:
"AB";"C";"D";"E"
How can I simply add "AB" from File 2 to the first position of File 1, adding one ";"?
Thanks for your help
You can extract a single column with awk as below. This assumes that the ; character is the only field separator and is not used anywhere else in the CSV file.
$ awk -F\; '{print $2}' file.csv
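Extracting a field alone does not merge the files; one hedged way to prepend File 2's first field to File 1, assuming both files are line-aligned, is cut plus paste:

```shell
# Sample files matching the question's layout
printf '"A";"B";"C";"D";"E"\n' > file1.csv
printf '"AB";"C";"D";"E"\n' > file2.csv

# Take only the first ;-separated field of file2 ...
cut -d';' -f1 file2.csv > col1.tmp

# ... and glue it in front of file1, joined with ;
paste -d';' col1.tmp file1.csv > merged.csv

cat merged.csv
```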

Want to append records from two files using a shell script

My first input file, named abc.txt, contains the records:
abc#gmail.com
bscd#yahoo.co.in
abcd.21#gmail.com
1234#hotmail.com
My second file, named details.txt, contains the record:
123456^atulsample^1203320
I want my final output file, Final.txt, to be:
abc#gmail.com^123456^atulsample^1203320
bscd#yahoo.co.in^123456^atulsample^1203320
abcd.21#gmail.com^123456^atulsample^1203320
I have used the sed command but I am not getting my required output.
Kindly help, as I don't have much knowledge of shell scripting.
Try something like this:
#!/bin/bash
while read -r line
do
detail="$line"
sed '/^[ \t]*$/d' abc.txt | sed "s/$/^${detail}/" >> Final.txt
done < "details.txt"
This part deletes blank lines:
sed '/^[ \t]*$/d' abc.txt
and this appends the record from details.txt:
sed "s/$/^${detail}/"
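Since details.txt holds a single record, the loop can also be collapsed into one sed call (file names are those from the question):

```shell
# Sample inputs, including a blank line that should be dropped
printf 'abc#gmail.com\n\nbscd#yahoo.co.in\n' > abc.txt
printf '123456^atulsample^1203320\n' > details.txt

# Read the one detail record, then delete blank lines and append it
detail=$(head -n1 details.txt)
sed "/^[ \t]*$/d; s/\$/^${detail}/" abc.txt > Final.txt

cat Final.txt
```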

Appending Text To the Existing First Line with Sed

I have data that looks like this (FASTA format). Note that it
comes in blocks of a ">" header line followed by the sequence.
>SRR018006
NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNGN
>SRR018006
ACCCGCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
What I want to do is append a text (e.g. "foo") to the ">" header lines,
yielding:
>SRR018006-foo
NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNGN
>SRR018006-foo
ACCCGCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
Is there a way to do that using sed, preferably modifying
the original file in place?
This will do what you're looking for. Note that -i and -e must be given separately; sed -ie would treat the e as a backup suffix:
sed -i -e 's/^\(>.*\)/\1-foo/' file
Since, judging from your previous post, you are also experienced with awk, here's an awk solution.
# awk '/^>/{print $0"-foo";next}1' file
>SRR018006-foo
NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNGN
>SRR018006-foo
ACCCGCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
# awk '/^>/{print $0"-foo";next}1' file > temp
# mv temp file
If you insist on sed:
# sed -e '/^>/s/$/-foo/' file
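Both variants can be checked quickly on a small sample (GNU sed assumed for in-place editing; the sequences are shortened placeholders):

```shell
# Two-record FASTA sample
printf '>SRR018006\nNNNN\n>SRR018006\nACCC\n' > reads.fa

# Append -foo to every header line, in place (keep -i and -e separate)
sed -i -e 's/^\(>.*\)/\1-foo/' reads.fa

grep '^>' reads.fa
```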
