Add pipe delimiter at the end of each row using Unix/Linux

I am new to Unix commands, so please forgive me if the code below is not correct.
I have files (xxxx.txt.date) on WinSCP with a header and a footer. I want to add N pipes (|) at the end of each row of every file, from the 2nd line to the second-to-last line (I don't want a | in the header or the footer).
I have created a script in which I use the command below:
sed -e "2,\$s/$/|/" $file | column -t
2,$s/$/|/: adds | at the end of every line from line 2
Below are the issues I am facing:
First, the data doesn't change in the files themselves. I can see the pipe added at the end of each row in Hive, but how can I change the data in the files?
Second, I don't want a | in the footer.
Any suggestion or help will be appreciated.
Thanks in advance!

If you need to append just one "|" at the end of each line except the header and footer:
sed -i '1n; $n; s/$/|/' file_name
1n; $n; : print the first and last lines as-is (n moves on to the next line before the substitution runs).
-i : make changes to the file in place instead of printing to STDOUT.
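The explanation above can be sanity-checked on a throwaway file (sample.txt and its contents are made up for illustration). This also answers the "how can I change data in the files?" part of the question, since -i rewrites the file directly:

```shell
# Create a small sample file (hypothetical name and contents)
printf 'HEADER\nrow1\nrow2\nFOOTER\n' > sample.txt

# Append one pipe to every line except the first and last, in place
sed -i '1n; $n; s/$/|/' sample.txt

cat sample.txt
# HEADER
# row1|
# row2|
# FOOTER
```

On BSD/macOS sed, -i needs an explicit (possibly empty) suffix argument: sed -i '' '…'.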
If you need to append n pipes at the end of each line except the header and footer, use the awk command below; since awk has no in-place option here, you will have to redirect the output to a temporary file and then rename it.
Assumptions:
I am assuming your header and footer are standard and start with some character (e.g., H, F, T) or string (Header, Footer, Trailer, etc.).
I am assuming your original file is delimited with "|". You can specify your actual delimiter in the awk below.
awk -F'|' -v n=7 '{if(/^Header|^Footer/) {print} else {end="";for (i=1;i<=n;i++) end=sprintf("%s%s", end, "|"); rec=sprintf("%s%s", $0, end); print rec}}' file_name
n = the number of times you want to repeat | at the end of each line.
^Header|^Footer - if the line starts with "Header" or "Footer", print the record as-is. You can substitute your file's actual header and footer strings.
for loop - builds a string end containing | repeated n times.
rec - the entire record concatenated with the end string.
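The redirect-and-rename step mentioned above can be sketched as follows (file_name is the placeholder from the answer; the sample contents and n=3 are made up for illustration):

```shell
# Build a tiny sample file (hypothetical contents)
printf 'Header\na|b\nFooter\n' > file_name

# Append n pipes to every non-Header/non-Footer line, then rename over the original
awk -F'|' -v n=3 '{if(/^Header|^Footer/) {print} else {end="";for (i=1;i<=n;i++) end=sprintf("%s%s", end, "|"); rec=sprintf("%s%s", $0, end); print rec}}' file_name > file_name.tmp &&
mv file_name.tmp file_name

cat file_name
# Header
# a|b|||
# Footer
```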


Replace all double quotes only in Nth Column

I have a file like this
abc|def||ghi|jklm||uv||xyz
abc|def||ghi|jklm|nopqrst|uv||xyz
abc|def||ghi|jklm|nopq"rst|uv||xyz
abc|def||ghi|jklm|"nopqrst"|uv||xyz
abc|def||ghi|jklm|"nopq"rst"|uv||xyz
abc|def||ghi|jklm|"nopq"r"st"|uv||xyz
The 6th column may be double-quoted. I want to replace all occurrences of double quotes inside this field with a backslash-double quote (\").
I wish my output to look like
abc|def||ghi|jklm||uv||xyz
abc|def||ghi|jklm|nopqrst|uv||xyz
abc|def||ghi|jklm|nopq\"rst|uv||xyz
abc|def||ghi|jklm|"nopqrst"|uv||xyz
abc|def||ghi|jklm|"nopq\"rst"|uv||xyz
abc|def||ghi|jklm|"nopq\"r\"st"|uv||xyz
I have tried combinations of the commands below, but I fall short each time:
sed -i 's/\"/\\\"/2' file.txt (this replaces only the 2nd occurrence)
sed -i 's/\"/\\\"/2g' file.txt (this replaces the 2nd occurrence and all the rest)
My file will have millions of rows, so I probably need a sed or awk command.
Please help.
You may use this awk solution in any version of awk:
awk 'BEGIN {FS=OFS="|"} {
c1 = substr($6, 1, 1)
c2 = substr($6, length($6), 1)
s = substr($6, 2, length($6)-2)
gsub(/"/, "\\\"", s)
$6 = c1 s c2
} 1' file
abc|def||ghi|jklm||uv||xyz
abc|def||ghi|jklm|nopqrst|uv||xyz
abc|def||ghi|jklm|nopq\"rst|uv||xyz
abc|def||ghi|jklm|"nopqrst"|uv||xyz
abc|def||ghi|jklm|"nopq\"rst"|uv||xyz
abc|def||ghi|jklm|"nopq\"r\"st"|uv||xyz
If this isn't all you need, then edit your question to provide more truly representative sample input/output, including cases that this doesn't work for:
$ sed 's/"/\\"/g; s/|\\"/|"/g; s/\\"|/"|/g' file
abc|def||ghi|jklm||uv||xyz
abc|def||ghi|jklm|nopqrst|uv||xyz
abc|def||ghi|jklm|nopq\"rst|uv||xyz
abc|def||ghi|jklm|"nopqrst"|uv||xyz
abc|def||ghi|jklm|"nopq\"rst"|uv||xyz
abc|def||ghi|jklm|"nopq\"r\"st"|uv||xyz
The above will work in any sed.
This might work for you (GNU sed):
sed -E 's/[^|]*/\n&\n/6 # isolate the 6th field
h # make a copy
s/"/\\"/g # replace " by \"
s/\\(")\n|\n\\(")/\1\n\2/g # repair start and end "s
H # append amended line to copy
g # get copies to current line
s/\n.*\n(.*)\n.*\n(.*)\n.*/\2\1/' file # swap fields
Surround the 6th field with newlines and make a copy in the hold space.
Replace all "s with \"s, then remove the \s at the start and end of the field if the field begins and ends with a ".
Append the amended line to the copy, then replace the current line with the doubled line.
Using pattern matching, replace the copied line's 6th field with the amended one.
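For reference, the commented script above collapses to a single command line (a sketch only; file is a placeholder, and the \n escapes require GNU sed):

```shell
# GNU sed one-liner form of the commented script above (file is a placeholder)
sed -E 's/[^|]*/\n&\n/6; h; s/"/\\"/g; s/\\(")\n|\n\\(")/\1\n\2/g; H; g; s/\n.*\n(.*)\n.*\n(.*)\n.*/\2\1/' file
```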

insert consecutive number after .fa header id

I have a large .fa file that consists of multiple merged FASTA files. Each record is introduced by a header line that begins with ">".
Here is an example:
>DPB1*04:01:01:01 [most similar sequence] for DPB1 in 3507009462
I would like to modify each header inside the file by adding a consecutive integer after each id. The id is the first sequence of characters after ">" and before the first space.
The modified header would look like this:
>DPB1*04:01:01:011 [most similar sequence] for DPB1 in 3507009462
I found some code that replaces the header with a consecutive number, but I am not sure how to insert the number after the header id instead.
cat youFile.fa | perl -ane 'if(/\>/){$a++;print ">$a\n"}else{print;}' > youFile_new.fa
Thanks for your help
$ perl -wpe 's/\s/++$i . " "/e if /^>/' input.fa
Explanation:
Substitute the first occurrence of whitespace with the counter variable and a single space...
...but only if the line starts with >
Print every line (due to -p switch)

Removing two columns from csv without removing the column heading

I've been stuck on this for a while. I managed to remove two columns completely, but now I need to remove two of the three space-separated parts inside the first column without touching the column heading. I've attached a snippet from my CSV file.
timestamp;CPU;%usr;%nice;%sys;%iowait;%steal;%irq;%soft;%guest;%idle
2014-09-17 10-20-39 UTC;-1;6.53;0.00;4.02;0.00;0.00;0.00;0.00;0.00;89.45
2014-09-17 10-20-41 UTC;-1;0.50;0.00;1.51;0.00;0.00;0.00;0.00;0.00;97.99
2014-09-17 10-20-43 UTC;-1;1.98;0.00;1.98;5.45;0.00;0.50;0.00;0.00;90.10
2014-09-17 10-20-45 UTC;-1;0.50;0.00;1.51;0.00;0.00;0.00;0.00;0.00;97.99
2014-09-17 10-20-47 UTC;-1;0.50;0.00;1.50;0.00;0.00;0.00;0.00;0.00;98.00
2014-09-17 10-20-49 UTC;-1;0.50;0.00;1.01;3.02;0.00;0.00;0.00;0.00;95.48
What I want to do is remove yyyy-mm-dd and also UTC, leaving just 10-20-39 underneath the timestamp column heading. I've tried removing them but I can't seem to do it without taking out the headings.
Thanks to anyone who can help me with this
A perl way:
perl -pe 's/^.+? (.+?) .+?;/$1;/ if $.>1' file
Explanation
The -pe means "print each line after applying the script to it". The script matches the first three whitespace-separated words up to the first ";" and replaces them with the 2nd of the three ($1, since that part of the pattern was captured). This is only done if the current line number ($.) is greater than 1.
An awk way
awk -F';' '(NR>1){sub(/[^ ]* /,"",$1); sub(/ [^ ]*$/,"",$1)}1;' OFS=";" file
Here, we set the input field delimiter to ; and use sub() to remove the 1st and last word from the 1st field.
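A sketch of the awk command on a trimmed-down version of the sample (a.csv is a placeholder; only three columns are shown to keep it short):

```shell
# Two-line sample: header plus one data row (hypothetical file name)
printf 'timestamp;CPU;%%usr\n2014-09-17 10-20-39 UTC;-1;6.53\n' > a.csv

# Strip the leading date and trailing "UTC" from the first field, data rows only
awk -F';' '(NR>1){sub(/[^ ]* /,"",$1); sub(/ [^ ]*$/,"",$1)}1;' OFS=";" a.csv
# timestamp;CPU;%usr
# 10-20-39;-1;6.53
```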
The following sed command works for you:
sed '1!s/^[^ ]\+ //;1!s/ UTC//'
Explanations:
1! Do not apply to the first line.
s/^[^ ]\+ // Remove the first group of non-space characters at line beginning ("2014-09-17 " in your case).
s/ UTC// Remove the string " UTC".
Assuming the csv file is stored as a.csv, then
sed '1!s/^[^ ]\+ //;1!s/ UTC//' < a.csv
prints the results to standard output, and
sed '1!s/^[^ ]\+ //;1!s/ UTC//' < a.csv > b.csv
saves the result to b.csv.
Edit: added sample results:
[pengyu#GLaDOS tmp]$ sed '1!s/^[^ ]\+ //;1!s/ UTC//' < a.csv
timestamp;CPU;%usr;%nice;%sys;%iowait;%steal;%irq;%soft;%guest;%idle
10-20-39;-1;6.53;0.00;4.02;0.00;0.00;0.00;0.00;0.00;89.45
10-20-41;-1;0.50;0.00;1.51;0.00;0.00;0.00;0.00;0.00;97.99
10-20-43;-1;1.98;0.00;1.98;5.45;0.00;0.50;0.00;0.00;90.10
10-20-45;-1;0.50;0.00;1.51;0.00;0.00;0.00;0.00;0.00;97.99
10-20-47;-1;0.50;0.00;1.50;0.00;0.00;0.00;0.00;0.00;98.00
10-20-49;-1;0.50;0.00;1.01;3.02;0.00;0.00;0.00;0.00;95.48

add a row with empty columns to a tab delimited file

I have some tab-delimited data files with different numbers of columns. I want to add a header line to these files; the header line contains only 'ID' in the first column, but its total column count should match the rest of the file. Can I do this with some Linux commands? Thank you very much!
for file in *
do
awk 'NR==1{hdr=$0; gsub(/[^\t]/,"",hdr); print "ID" hdr}1' "$file" > tmp &&
mv tmp "$file"
done
sed -i '1 { h; s/[^\t]//g; s/^/ID/; p; g; }' *.tsv
Copy the first line to the hold space, remove all non-tab characters (clearing the field contents), tack "ID" onto the front, then print this header line followed by the original first line.
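A minimal sketch of the awk variant on a three-column sample (data.tsv is a placeholder name):

```shell
# Hypothetical three-column tab-delimited file
printf 'a\tb\tc\n1\t2\t3\n' > data.tsv

# Print "ID" plus one tab per remaining column, then the original lines
awk 'NR==1{hdr=$0; gsub(/[^\t]/,"",hdr); print "ID" hdr}1' data.tsv
```

The first output line is ID followed by two tab characters, so the header has the same number of columns as the data. Note that \t inside a bracket expression is a GNU extension in sed, so the sed version above may not be portable; the awk form works in gawk and most modern awks.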

Remove lines with duplicate cells

I need to remove lines with a duplicate value. For example, I need to remove lines 2 and 4 in the block below because they contain "Value04". I cannot simply remove every line containing "Value04", because there are lines with that value that are NOT duplicates and must be kept. I can use any editor: Excel, vim, or any other Linux command-line tool.
In the end there should be no duplicate "UserX" values; User1 should appear only once. If User1 exists twice, I need to remove the entire line containing "Value04" and keep the one with "Value03".
Value01,Value03,User1
Value02,Value04,User1
Value01,Value03,User2
Value02,Value04,User2
Value01,Value03,User3
Value01,Value03,User4
Your ideas and thoughts are greatly appreciated.
Edit: clarified the wording.
The following Awk command removes all but the first occurrence of a value in the third column:
$ awk -F',' '{
if (!seen[$3]) {
seen[$3] = 1
print
}
}' textfile.txt
Output:
Value01,Value03,User1
Value01,Value03,User2
Value01,Value03,User3
Value01,Value03,User4
same thing in Perl:
perl -F, -nae 'print unless $c{$F[2]}++;' textfile.txt
this uses autosplit mode: -F, together with -a splits each line on commas and places the result into the @F array
