Extract orders and match to trades from two files - linux

I have two attached files (orders1.txt and trades1.txt) and I need to write a Bash script (possibly awk?) to extract orders and match them to trades.
The output should be a report of comma-separated values containing “ClientID, OrderID, Price, Volume”.
In addition, for each client I need to print the total volume and the turnover (turnover being the sum of price * volume over that client's trades).
Can someone please help me with a bash script that will do the above using the attached files?
Any help would be greatly appreciated
orders1.txt
Entry Time, Client ID, Security ID, Order ID
25455410,DOLR,XGXUa,DOLR1435804437
25455410,XFKD,BUP3d,XFKD4746464646
25455413,QOXA,AIDl,QOXA7176202067
25455415,QOXA,IRUXb,QOXA6580494597
25455417,YXKH,OBWQs,YXKH4575139017
25455420,JBDX,BKNs,JBDX6760353333
25455428,DOLR,AOAb,DOLR9093170513
25455429,JBDX,QMP1Sh,JBDX2756804453
25455431,QOXA,QIP1Sh,QOXA6563975285
25455434,QOXA,XMUp,QOXA5569701531
25455437,XFKD,QLOJc,XFKD8793976660
25455438,YXKH,MRPp,YXKH2329856527
25455442,JBDX,YBPu,JBDX0100506066
25455450,QOXA,BUPYd,QOXA5832015401
25455451,QOXA,SIOQz,QOXA3909507967
25455451,DOLR,KID1Sh,DOLR2262067037
25455454,DOLR,JJHi,DOLR9923665017
25455461,YXKH,KBAPBa,YXKH2637373848
25455466,DOLR,EPYp,DOLR8639062962
25455468,DOLR,UQXKz,DOLR4349482234
25455474,JBDX,EFNs,JBDX7268036859
25455481,QOXA,XCB1Sh,QOXA4105943392
25455486,YXKH,XBAFp,YXKH0242733672
25455493,JBDX,BIF1Sh,JBDX2840241688
25455500,DOLR,QSOYp,DOLR6265839896
25455503,YXKH,IIYz,YXKH8505951163
25455504,YXKH,ZOIXp,YXKH2185348861
25455513,YXKH,MBOOp,YXKH4095442568
25455515,JBDX,P35p,JBDX9945514579
25455524,QOXA,YXOKz,QOXA1900595629
25455528,JBDX,XEQl,JBDX0126452783
25455528,XFKD,FJJMp,XFKD4392227425
25455535,QOXA,EZIp,QOXA4277118682
25455543,QOXA,YBPFa,QOXA6510879584
25455551,JBDX,EAMp,JBDX8924251479
25455552,QOXA,JXIQp,QOXA4360008399
25455554,DOLR,LISXPh,DOLR1853653280
25455557,XFKD,LOX14p,XFKD1759342196
25455558,JBDX,YXYb,JBDX8177118129
25455567,YXKH,MZQKl,YXKH6485420018
25455569,JBDX,ZPIMz,JBDX2010952336
25455573,JBDX,COPe,JBDX1612537068
25455582,JBDX,HFKAp,JBDX2409813753
25455589,QOXA,XFKm,QOXA9692126523
25455593,XFKD,OFYp,XFKD8556940415
25455601,XFKD,FKQLb,XFKD4861992028
25455606,JBDX,RIASp,JBDX0262502677
25455608,DOLR,HRKKz,DOLR1739013513
25455615,DOLR,ZZXp,DOLR6727725911
25455623,JBDX,CKQPp,JBDX2587184235
25455630,YXKH,ZLQQp,YXKH6492126889
25455632,QOXA,ORPz,QOXA3594333316
25455640,XFKD,HPIXSh,XFKD6780729432
25455648,QOXA,ABOJe,QOXA6661411952
25455654,XFKD,YLIp,XFKD6374702721
25455654,DOLR,BCFp,DOLR8012564477
25455658,JBDX,ZMDKz,JBDX6885176695
25455665,JBDX,CBOe,JBDX8942732453
25455670,JBDX,FRHMl,JBDX5424320405
25455679,DOLR,YFJm,DOLR8212353717
25455680,XFKD,XAFp,XFKD4132890550
25455681,YXKH,PBIBOp,YXKH6106504736
25455684,DOLR,IFDu,DOLR8034515043
25455687,JBDX,JACe,JBDX8243949318
25455688,JBDX,ZFZKz,JBDX0866225752
25455693,QOXA,XOBm,QOXA5011416607
25455694,QOXA,IDQe,QOXA7608439570
25455698,JBDX,YBIDb,JBDX8727773702
25455705,YXKH,MXOp,YXKH7747780955
25455710,YXKH,PBZRYs,YXKH7353828884
25455719,QOXA,QFDb,QOXA2477859437
25455720,XFKD,PZARp,XFKD4995735686
25455722,JBDX,ZLKKb,JBDX3564523161
25455730,XFKD,QFH1Sh,XFKD6181225566
25455733,JBDX,KWVJYc,JBDX7013108210
25455733,YXKH,ZQI1Sh,YXKH7095815077
25455739,YXKH,XIJp,YXKH0497248757
25455739,YXKH,ZXJp,YXKH5848658513
25455747,JBDX,XASd,JBDX4986246117
25455751,XFKD,XQIKz,XFKD5919379575
25455760,JBDX,IBXPb,JBDX8168710376
25455763,XFKD,EVAOi,XFKD8175209012
25455765,XFKD,JXKp,XFKD2750952933
25455773,XFKD,PTBAXs,XFKD8139382011
25455778,QOXA,XJp,QOXA8227838196
25455783,QOXA,CYBIp,QOXA2072297264
25455792,JBDX,PZI1Sh,JBDX7022115629
25455792,XFKD,XIKQl,XFKD6434550362
25455792,DOLR,YKPm,DOLR6394606248
25455796,QOXA,JXOXPh,QOXA9672544909
25455797,YXKH,YIWm,YXKH5946342983
25455803,YXKH,JZEm,YXKH5317189370
25455810,QOXA,OBMFz,QOXA0985316706
25455810,QOXA,DAJPp,QOXA6105975858
25455810,JBDX,FBBJl,JBDX1316207043
25455819,XFKD,YXKm,XFKD6946276671
25455821,YXKH,UIAUs,YXKH6010226371
25455828,DOLR,PTJXs,DOLR1387517499
25455836,DOLR,DCEi,DOLR3854078054
25455845,YXKH,NYQe,YXKH3727923537
25455853,XFKD,TAEc,XFKD5377097556
25455858,XFKD,LMBOXo,XFKD4452678489
25455858,XFKD,AIQXp,XFKD5727938304
trades1.txt
# The first 8 characters is execution time in microseconds since midnight
# The next 14 characters is the order ID
# The next 8 characters is the zero padded price
# The next 8 characters is the zero padded volume
25455416QOXA6580494597 0000013800001856
25455428JBDX6760353333 0000007000002458
25455434DOLR9093170513 0000000400003832
25455435QOXA6563975285 0000034700009428
25455449QOXA5569701531 0000007500009023
25455447YXKH2329856527 0000038300009947
25455451QOXA5832015401 0000039900006432
25455454QOXA3909507967 0000026900001847
25455456DOLR2262067037 0000034700002732
25455471YXKH2637373848 0000010900006105
25455480DOLR8639062962 0000027500001975
25455488JBDX7268036859 0000005200004986
25455505JBDX2840241688 0000037900002029
25455521YXKH4095442568 0000046400002150
25455515JBDX9945514579 0000040800005904
25455535QOXA1900595629 0000015200006866
25455533JBDX0126452783 0000001700006615
25455542XFKD4392227425 0000035500009948
25455570XFKD1759342196 0000025700007816
25455574JBDX8177118129 0000022400000427
25455567YXKH6485420018 0000039000008327
25455573JBDX1612537068 0000013700001422
25455584JBDX2409813753 0000016600003588
25455603XFKD4861992028 0000017600004552
25455611JBDX0262502677 0000007900003235
25455625JBDX2587184235 0000024300006723
25455658XFKD6374702721 0000046400009451
25455673JBDX6885176695 0000010900009258
25455671JBDX5424320405 0000005400003618
25455679DOLR8212353717 0000041100003633
25455697QOXA5011416607 0000018800007376
25455696QOXA7608439570 0000013000007463
25455716YXKH7747780955 0000037000006357
25455719QOXA2477859437 0000039300009840
25455723XFKD4995735686 0000045500009858
25455727JBDX3564523161 0000021300000639
25455742YXKH7095815077 0000023000003945
25455739YXKH5848658513 0000042700002084
25455766XFKD5919379575 0000022200003603
25455777XFKD8175209012 0000033300006350
25455788XFKD8139382011 0000034500007461
25455793QOXA8227838196 0000011600007081
25455784QOXA2072297264 0000017000004429
25455800XFKD6434550362 0000030000002409
25455801QOXA9672544909 0000039600001033
25455815QOXA6105975858 0000034800008373
25455814JBDX1316207043 0000026500005237
25455831YXKH6010226371 0000011400004945
25455838DOLR1387517499 0000046200006129
25455847YXKH3727923537 0000037400008061
25455873XFKD5727938304 0000048700007298
I have the following script:
#!/bin/bash
declare -A volumes
declare -A turnovers
declare -A orders
# Read the first file, remembering for each order the client id
while read -r line
do
# Jump over comments
if [[ ${line:0:1} == "#" ]] ; then continue; fi;
details=($(echo $line | tr ',' " "))
order_id=${details[3]}
client_id=${details[1]}
orders[$order_id]=$client_id
done < $1
echo "ClientID,OrderID,Price,Volume"
while read -r line
do
# Jump over comments
if [[ ${line:0:1} == "#" ]] ; then continue; fi;
order_id=$(echo ${line:8:20} | tr -d '[:space:]')
client_id=${orders[$order_id]}
price=${line:28:8}
volume=${line: -8}
echo "$client_id,$order_id,$price,$volume"
price=$(echo $price | awk '{printf "%d", $0}')
volume=$(echo $volume | awk '{printf "%d", $0}')
order_turnover=$(($price*$volume))
old_turnover=${turnovers[$client_id]}
[[ -z "$old_turnover" ]] && old_turnover=0
total_turnover=$(($old_turnover+$order_turnover))
turnovers[$client_id]=$total_turnover
old_volumes=${volumes[$client_id]}
[[ -z "$old_volumes" ]] && old_volumes=0
total_volume=$((old_volumes+volume))
volumes[$client_id]=$total_volume
done < $2
echo "ClientID,Volume,Turnover"
for client_id in ${!volumes[@]}
do
volume=${volumes[$client_id]}
turnover=${turnovers[$client_id]}
echo "$client_id,$volume,$turnover"
done
Can anyone think of anything more elegant?
Thanks in advance
C
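One aside on the script above (not part of the original post): the two awk round-trips on price and volume are there to strip the leading zeros before bash arithmetic, since a zero-padded value such as 00001856 would otherwise be parsed as octal and rejected because of the 8. Forcing base 10 does the same job without spawning awk:
price=$((10#$price))
volume=$((10#$volume))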

Assumption 1: the two files are ordered, so line x represents an action that is older than x+1. If not, then further work is needed.
The assumption makes our work easier. Let's first change the delimiter of the trades file (called traders.txt from here on) to a comma:
sed -i 's/ /,/g' traders.txt
This is done in place for the sake of simplicity. So you now have traders.txt comma-separated, just like orders.txt. That is Assumption 2.
Keep working on traders.txt: split all the columns and add titles [1]. More on the reasons why in a moment.
gawk -i inplace -v INPLACE_SUFFIX=.bak 'BEGINFILE{FS=",";OFS=",";print "execution time,order ID,price,volume";}{print substr($1,1,8),substr($1,9),substr($2,1,9),substr($2,9)}' traders.txt
Ugly but works. Now let's process your data using the following awk script:
BEGIN {
FS=","
OFS=","
}
{
if (1 == NR) {
getline line < TRADERS # consume the title line of the trades file
print "Client ID,Order ID,Price,Volume,Turnover"; # print the report header; remove this line if you don't want it
getline line < TRADERS # reads first data line
split(line, transaction, ",")
next
}
if (transaction[2] == $4) {
print $2, $4, transaction[3], transaction[4], transaction[3]*transaction[4]
getline line < TRADERS # reads new data line
split(line, transaction, ",")
}
}
called by:
gawk -f script -v TRADERS=traders.txt orders.txt
And there you have it. Some caveats:
check the numbers: gawk's implicit number conversion might not do what you want with zero-padded values, although there is an easy fix for that if needed (see the sketch below);
getline might blow up if we run out of lines in the trades file; I haven't put in any check, that's up to you (also sketched below);
no control over timestamps: matching is based purely on Order ID.
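For the first two caveats, here is a minimal sketch of a guarded matching block; the "+ 0" coercion and the getline return-value check are illustrative additions, not part of the script above:
if (transaction[2] == $4) {
    price  = transaction[3] + 0           # "+ 0" forces a numeric value, dropping the zero padding on output
    volume = transaction[4] + 0
    print $2, $4, price, volume, price * volume
    if ((getline line < TRADERS) <= 0)    # getline returns 0 at EOF and -1 on error
        exit
    split(line, transaction, ",")
}
Note that the output below was produced by the original script, without these guards.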
Output file:
Client ID,Order ID,Price,Volume,Turnover
QOXA,QOXA6580494597,000001380,00001856,2561280
JBDX,JBDX6760353333,000000700,00002458,1720600
DOLR,DOLR9093170513,000000040,00003832,153280
QOXA,QOXA6563975285,000003470,00009428,32715160
QOXA,QOXA5569701531,000000750,00009023,6767250
YXKH,YXKH2329856527,000003830,00009947,38097010
QOXA,QOXA5832015401,000003990,00006432,25663680
QOXA,QOXA3909507967,000002690,00001847,4968430
DOLR,DOLR2262067037,000003470,00002732,9480040
YXKH,YXKH2637373848,000001090,00006105,6654450
DOLR,DOLR8639062962,000002750,00001975,5431250
JBDX,JBDX7268036859,000000520,00004986,2592720
JBDX,JBDX2840241688,000003790,00002029,7689910
YXKH,YXKH4095442568,000004640,00002150,9976000
JBDX,JBDX9945514579,000004080,00005904,24088320
QOXA,QOXA1900595629,000001520,00006866,10436320
JBDX,JBDX0126452783,000000170,00006615,1124550
XFKD,XFKD4392227425,000003550,00009948,35315400
XFKD,XFKD1759342196,000002570,00007816,20087120
JBDX,JBDX8177118129,000002240,00000427,956480
YXKH,YXKH6485420018,000003900,00008327,32475300
JBDX,JBDX1612537068,000001370,00001422,1948140
JBDX,JBDX2409813753,000001660,00003588,5956080
XFKD,XFKD4861992028,000001760,00004552,8011520
JBDX,JBDX0262502677,000000790,00003235,2555650
JBDX,JBDX2587184235,000002430,00006723,16336890
XFKD,XFKD6374702721,000004640,00009451,43852640
JBDX,JBDX6885176695,000001090,00009258,10091220
JBDX,JBDX5424320405,000000540,00003618,1953720
DOLR,DOLR8212353717,000004110,00003633,14931630
QOXA,QOXA5011416607,000001880,00007376,13866880
QOXA,QOXA7608439570,000001300,00007463,9701900
YXKH,YXKH7747780955,000003700,00006357,23520900
QOXA,QOXA2477859437,000003930,00009840,38671200
XFKD,XFKD4995735686,000004550,00009858,44853900
JBDX,JBDX3564523161,000002130,00000639,1361070
YXKH,YXKH7095815077,000002300,00003945,9073500
YXKH,YXKH5848658513,000004270,00002084,8898680
XFKD,XFKD5919379575,000002220,00003603,7998660
XFKD,XFKD8175209012,000003330,00006350,21145500
XFKD,XFKD8139382011,000003450,00007461,25740450
QOXA,QOXA8227838196,000001160,00007081,8213960
QOXA,QOXA2072297264,000001700,00004429,7529300
XFKD,XFKD6434550362,000003000,00002409,7227000
QOXA,QOXA9672544909,000003960,00001033,4090680
QOXA,QOXA6105975858,000003480,00008373,29138040
JBDX,JBDX1316207043,000002650,00005237,13878050
YXKH,YXKH6010226371,000001140,00004945,5637300
DOLR,DOLR1387517499,000004620,00006129,28315980
YXKH,YXKH3727923537,000003740,00008061,30148140
XFKD,XFKD5727938304,000004870,00007298,35541260
[1]: requires gawk 4.1.0 or higher

Related

How to swap the even and odd lines via bash script?

In the incoming string stream from the standard input, swap the even and odd lines.
I've tried to do it like this, but reading from file and $i -lt $a.count aren't working:
$a= gc test.txt
for($i=0;$i -lt $a.count;$i++)
{
if($i%2)
{
$a[$i-1]
}
else
{
$a[$i+1]
}
}
Please, help me to get this working
Suggesting a one-line awk script:
awk '!(NR%2){print$0;print r}NR%2{r=$0}' input.txt
awk script explanation
!(NR % 2){ # if the row number divides by 2 without remainder (an even row)
print $0; # print the current row
print oddRow; # print the previously saved row (r in the one-liner above)
}
(NR % 2){ # if the row number divided by 2 leaves a remainder (an odd row)
oddRow = $0; # save the current row in a variable
}
There's already a good awk-based answer to the question here, and there are a few decent-looking sed and awk solutions in Swap every 2 lines in a file. However, the shell solution in Swap every 2 lines in a file is almost comically incompetent. The code below is an attempt at a functionally correct pure Bash solution. Bash is very slow, so it is practical to use only on small files (maybe up to 10 thousand lines).
#! /bin/bash -p
idx=0 lines=()
while IFS= read -r 'lines[idx]' || [[ -n ${lines[idx]} ]]; do
(( idx == 1 )) && printf '%s\n%s\n' "${lines[1]}" "${lines[0]}"
idx=$(( (idx+1)%2 ))
done
(( idx == 1 )) && printf '%s\n' "${lines[0]}"
The lines array is used to hold two consecutive lines.
IFS= prevents whitespace being stripped as lines are read.
The idx variable cycles through 0, 1, 0, 1, ... (idx=$(( (idx+1)%2 ))) so reading to lines[idx] cycles through putting input lines at indexes 0 and 1 in the lines array.
The read function returns non-zero status immediately if it encounters an unterminated final line in the input. That could cause the loop to terminate before processing the last line, thus losing it in the output. The || [[ -n ${lines[idx]} ]] avoids that by checking if the read actually read some input. (Fortunately, there's no such thing as an unterminated empty line at the end of a file.)
printf is used instead of echo to output the lines because echo doesn't work reliably for arbitrary strings. (For instance, a line containing just -n would get lost completely by echo "$line".) See Why is printf better than echo?.
The question doesn't say what to do if the input has an odd number of lines. This code ((( idx == 1 )) && printf '%s\n' "${lines[0]}") just passes the last line through, after the swapped lines. Other reasonable options would be to drop the last line or print it preceded by an extra blank line. The code can easily be modified to do one of those if desired.
The code is Shellcheck-clean.
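A quick usage check (assuming the script above is saved as swap_lines.sh, a name chosen here purely for illustration), including the unterminated-final-line case:
printf 'one\ntwo\nthree' | ./swap_lines.sh
prints:
two
one
three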

Interleave lines sorted by a column

(Similar to How to interleave lines from two text files but for a single input. Also similar to Sort lines by group and column but interleaving or randomizing versus sorting.)
I have a set of systems and tasks in two columns, SYSTEM,TASK:
alpha,90198500
alpha,93082105
alpha,30184438
beta,21700055
beta,33452909
beta,40850198
beta,82645731
gamma,64910850
I want to distribute the tasks to each system in a balanced way. The ideal case where each system has the same number of tasks would be round-robin, one alpha then one beta then one gamma and repeat until finished.
I get the whole list of tasks + systems at once, so I don't need to keep any state
The list of systems is not static, on the order of N=100
The total number of tasks is variable, on the order of N=500
The number of tasks for each system is not guaranteed to be equal
Hard / absolute interleaving isn't required, as long as there aren't two of the same system twice in a row
The same task may show up more than once, but not for the same system
Input format / delimiter can be changed
I can solve this well enough with some fancy scripting to split the data into multiple files (grep ^alpha, input > alpha.txt etc) and then recombine them with paste or similar, but I'd like to use a single command or set of pipes to run it without intermediate files or a proper scripting language. Just using sort -R gets me 95% of the way there, but I end up with 2 tasks for the same system in a row almost every time, and sometimes 3 or more depending on the initial distribution.
edit:
To clarify, any output should not have the same system on two lines in a row. All system,task pairs must be preserved, you can't move a task from one system to another - that'd make this really easy!
One of several possible sample outputs:
beta,40850198
alpha,90198500
beta,82645731
alpha,93082105
gamma,64910850
beta,21700055
alpha,30184438
beta,33452909
We start by answering the underlying theoretical problem. The problem is not as simple as it seems. Feel free to implement a script based on this answer.
The blocks formatted as quotes are not quotes. I just wanted to highlight them to improve navigation in this rather long answer.
Theoretical Problem
Given a finite set of letters L with frequencies f : L→ℕ₀, find a sequence of letters such that every letter ℓ appears exactly f(ℓ) times and adjacent elements of the sequence are always different.
Example
L = {a,b,c} with f(a)=4, f(b)=2, f(c)=1
ababaca, acababa, and abacaba are all valid solutions.
aaaabbc is invalid – Some adjacent elements are equal, for instance aa or bb.
ababac is invalid – The letter a appears 3 times, but its frequency is f(a)=4
cababac is invalid – The letter c appears 2 times, but its frequency is f(c)=1
Solution
The following approach produces a valid sequence if and only if there exists a solution.
Sort the letters by their frequencies.
For ease of notation we assume, without loss of generality, that f(a) ≥ f(b) ≥ f(c) ≥ ... ≥ 0.
Note: There exists a solution if and only if f(a) ≤ 1 + ∑ℓ≠a f(ℓ).
Write down a sequence s of f(a) many a.
Add the remaining letters into a FIFO working list, that is:
(Don't add any a)
First add f(b) many b
Then f(c) many c
and so on
Iterate from left to right over the sequence s and insert after each element a letter from the working list. Repeat this step until the working list is empty.
Example
L = {a,b,c,d} with f(a)=5, f(b)=5, f(c)=4, f(d)=2
The letters are already sorted by their frequencies.
s = aaaaa
workinglist = bbbbbccccdd. The leftmost entry is the first one.
We iterate from left to right. The places where we insert letters from the working list are marked with an _ underscore.
s = a_a_a_a_a_ workinglist = bbbbbccccdd
s = aba_a_a_a_ workinglist = bbbbccccdd
s = ababa_a_a_ workinglist = bbbccccdd
...
s = ababababab workinglist = ccccdd
⚠️ We reached the end of sequence s. We repeat step 4.
s = a_b_a_b_a_b_a_b_a_b_ workinglist = ccccdd
s = acb_a_b_a_b_a_b_a_b_ workinglist = cccdd
...
s = acbcacb_a_b_a_b_a_b_ workinglist = cdd
s = acbcacbca_b_a_b_a_b_ workinglist = dd
s = acbcacbcadb_a_b_a_b_ workinglist = d
s = acbcacbcadbda_b_a_b_ workinglist =
⚠️ The working list is empty. We stop.
The final sequence is acbcacbcadbdabab.
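For reference, here is a direct bash transcription of the four steps above — a sketch only: it reads hypothetical "letter count" pairs on stdin rather than the SYSTEM,TASK format, and assumes a valid solution exists. The actual implementation below avoids these explicit loops by labeling lines instead.
#!/bin/bash
declare -A freq
# read "letter count" pairs, e.g. "a 5"
while read -r letter count; do freq[$letter]=$count; done
# step 1: letters sorted by frequency, most frequent first
mapfile -t letters < <(
    for l in "${!freq[@]}"; do printf '%s %s\n' "${freq[$l]}" "$l"; done |
    sort -rn | awk '{print $2}'
)
# step 2: the sequence s starts as f(a) copies of the most frequent letter
top=${letters[0]}
s=()
for ((i = 0; i < freq[$top]; i++)); do s+=("$top"); done
# step 3: FIFO working list of all the remaining letters
work=()
for l in "${letters[@]:1}"; do
    for ((i = 0; i < freq[$l]; i++)); do work+=("$l"); done
done
# step 4: walk s left to right, inserting one working-list letter after
# each element; repeat until the working list is empty
while ((${#work[@]})); do
    out=()
    for el in "${s[@]}"; do
        out+=("$el")
        ((${#work[@]})) && { out+=("${work[0]}"); work=("${work[@]:1}"); }
    done
    s=("${out[@]}")
done
printf '%s' "${s[@]}"; echo
Fed with printf 'a 5\nb 5\nc 4\nd 2\n', it prints a valid sequence such as acbcacbcadbdabab (the exact string depends on how equal frequencies are tie-broken).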
Implementation In Bash
Here is a bash implementation of the proposed approach that works with your input format. Instead of using a working list each line is labeled with a binary floating point number specifying the position of that line in the final sequence. Then the lines are sorted by their labels. That way we don't have to use explicit loops. Intermediate results are stored in variables. No files are created.
#! /bin/bash
inputFile="$1" # replace $1 by your input file or call "./thisScript yourFile"
inputBySys="$(sort "$inputFile")"
sysFreqBySys="$(cut -d, -f1 <<< "$inputBySys" | uniq -c | sed 's/^ *//;s/ /,/')"
inputBySysFreq="$(join -t, -1 2 -2 1 <(echo "$sysFreqBySys") <(echo "$inputBySys") | sort -t, -k2,2nr -k1,1)"
maxFreq="$(head -n1 <<< "$inputBySysFreq" | cut -d, -f2)"
lineCount="$(wc -l <<< "$inputBySysFreq")"
increment="$(awk '{l=log($1/$2)/log(2); l=int(l)-(int(l)>l); print 2^l}' <<< "$maxFreq $lineCount")"
seq="$({ echo obase=2; seq 0 "$increment" "$maxFreq" | head -n-1; } | bc |
awk -F. '{sub(/0*$/,"",$2); print 0+$1 "," $2 "," length($2)}' |
sort -snt, -k3,3 -k2,2 | head -n "$lineCount")"
paste -d, <(echo "$seq") <(echo "$inputBySysFreq") | sort -nt, -k1,1 -k2,2 | cut -d, -f4,6
This solution could fail for very long input files due to the limited precision of floating point numbers in seq and awk.
Well, this is what I've come up with:
args=()
while IFS=' ' read -r _ name; do
# add a file redirection with grepped certain SYSTEM only for later eval
args+=("<(grep '^$name,' file)")
done < <(
# extract SYSTEM only
<file cut -d, -f1 |
#sort with the count
sort | uniq -c | sort -nr
)
# this is actually safe, because we control all arguments
eval paste -d "'\\n'" "${args[@]}" |
# paste will insert empty lines when the list ended - remove them
sed '/^$/d'
First, I extract the SYSTEM names and sort them so that the most frequently occurring one comes first. For the example input we get:
4 beta
3 alpha
1 gamma
Then for each such name I add the proper string <(grep '...' file) to the arguments list, which will later be evaluated.
Then I evaluate the call to paste <(grep ...) <(grep ...) <(grep ...) ... with a newline as paste's delimiter, and remove the empty lines with a simple sed call.
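For the sample input, the eval ends up running something equivalent to the following (expanded by hand here for illustration):
paste -d '\n' <(grep '^beta,' file) <(grep '^alpha,' file) <(grep '^gamma,' file) | sed '/^$/d'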
The output for the input provided:
beta,21700055
alpha,90198500
gamma,64910850
beta,33452909
alpha,93082105
beta,40850198
alpha,30184438
beta,82645731
Here it is converted to a fancy one-liner, substituting the while read loop with command substitution and sed. The input-file name is handled safely with printf "%q" "$inputfile" and double quoting inside the sed regex.
inputfile="file"
fieldsep=","
eval paste -d '"\\n"' "$(
cut -d "$fieldsep" -f1 "$inputfile" |
sort | uniq -c | sort -nr |
sed 's/^[[:space:]]*[0-9]\+[[:space:]]*\(.*\)$/<(grep '\''^\1'"$fieldsep"\'' "'"$(printf "%q" "$inputfile")"'")/' |
tr '\n' ' '
)" |
sed '/^$/d'
inputfile="inputfile"
fieldsep=","
# remember SYSTEMS with their occurrence counts
counts=$(cut -d "$fieldsep" -f1 "$inputfile" | sort | uniq -c)
# remember last outputted system name
lastsys=''
# until there are any systems with counts
while ((${#counts})); do
# get the most frequent system and its count from counts
IFS=' ' read -r cnt sys < <(
# if lastsys is empty, don't do anything, if not, filter it out
if [ -n "$lastsys" ]; then
grep -v " $lastsys$";
else
cat;
# ha, surprise - counts is here!
# probably would be way more readable with just `printf "%s" "$counts" |`
fi <<<"$counts" |
# with the most occurrences
sort -n | tail -n1
)
if [ -z "$cnt" ]; then
echo "ERROR: constructing output is not possible! There have to be duplicate system lines!" >&2
exit 1
fi
# update counts - decrement the count of this system, or remove it if count is 1
counts=$(
# remove current system from counts
<<<"$counts" grep -v " $sys$"
# if the count of the system is 1, don't add it back - it's count is now 0
if ((cnt > 1)); then
# decrement count and add the line with system to counts
printf "%s" "$((cnt - 1)) $sys"
fi
)
# finally print output
printf "%s\n" "$sys"
# and remember last system
lastsys="$sys"
done |
{
# get system names only in `systems` - using the cached counts variable
# for each system name open a grep for that name from the input file
# with an assigned file descriptor
# The file descriptor list is saved in an array `fds`
fds=()
systems=""
while IFS=' ' read -r _ sys; do
exec {fd}< <(grep "^$sys," "$inputfile")
fds+=("$fd")
systems+="$sys"$'\n'
done <<<"$counts"
# for each line in input
while IFS='' read -r sys; do
# get the position inside systems list of that system decremented by 1
# this will be the underlying filesystem for filtering that system out of input
fds_idx=$(<<<"$systems" grep -n "$sys" | cut -d: -f1)
fds_idx=$((fds_idx - 1))
# read one line from that file descriptor
# I wonder if `sed 1p` would be faster
IFS='' read -r -u "${fds[$fds_idx]}" line
# output that line
printf "%s\n" "$line"
done
}
To accommodate strange input values, this script implements a simple but sturdy state machine in bash.
The variable counts stores the SYSTEM names with their occurrence counts. For the example input it will be:
3 alpha
4 beta
1 gamma
Now we output the SYSTEM name with the biggest occurrence count that is also different from the last outputted SYSTEM name. We decrement its occurrence count. If the count reaches zero, it is removed from the list. We remember the last outputted SYSTEM name. We repeat this process until all occurrence counts reach zero, so the list is empty. For the example input this will output:
beta
alpha
beta
alpha
beta
alpha
beta
gamma
Now, we need to join that list with the job names. We can't use join as the input is not sorted and we don't want to change the ordering. So what I do is get only the SYSTEM names in systems. Then for each system I open a different file descriptor that filters only that SYSTEM name from the input file. All the file descriptors are stored in an array. Then, for each SYSTEM name from the generated list, I find the file descriptor that filters that SYSTEM name from the input file and read exactly one line from it. This works like an array of file positions, each position associated with (and filtering) one specific SYSTEM name.
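The exec {fd}< form does the heavy lifting here: bash picks a free file descriptor number itself and stores it in fd, so each SYSTEM gets its own grep stream that can be read one line at a time. For example:
exec {fd}< <(grep '^alpha,' "$inputfile")
IFS='' read -r -u "$fd" line    # reads the first alpha line from that stream
For the example input, the joined output is: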
beta,21700055
alpha,90198500
beta,33452909
alpha,93082105
beta,40850198
alpha,30184438
beta,82645731
gamma,64910850
The script was written so that for input of the form:
alpha,90198500
alpha,93082105
alpha,30184438
beta,21700055
gamma,64910850
the script outputs correctly:
alpha,90198500
gamma,64910850
alpha,93082105
beta,21700055
alpha,30184438
I think this algorithm will almost always print correct output, but the ordering is such that the least common SYSTEMs are output last, which may not be optimal.
Tested manually with some custom tests and checker on paiza.io.
inputfile="inputfile"
in=( 1 2 1 5 )
cat <<EOF > "$inputfile"
$(seq ${in[0]} | sed 's/^/A,/' )
$(seq ${in[1]} | sed 's/^/B,/' )
$(seq ${in[2]} | sed 's/^/C,/' )
$(seq ${in[3]} | sed 's/^/D,/' )
EOF
sed -i -e '/^$/d' "$inputfile"
inputfile="inputfile"
fieldsep=","
# remember SYSTEMS with their occurrence counts
counts=$(cut -d "$fieldsep" -f1 "$inputfile" | sort | uniq -c)
# I think this holds true
# The SYSTEM with the most count should be lower than the sum of all others
# remember last outputted system name
lastsys=''
# until there are any systems with counts
while ((${#counts})); do
# get the most frequent system and its count from counts
IFS=' ' read -r cnt sys < <(
# if lastsys is empty, don't do anything, if not, filter it out
if [ -n "$lastsys" ]; then
grep -v " $lastsys$";
else
cat;
# ha, surprise - counts is here!
# probably would be way more readable with just `printf "%s" "$counts" |`
fi <<<"$counts" |
# with the most occurrences
sort -n | tail -n1
)
if [ -z "$cnt" ]; then
echo "ERROR: constructing output is not possible! There have to be duplicate system lines!" >&2
exit 1
fi
# update counts - decrement the count of this system, or remove it if count is 1
counts=$(
# remove current system from counts
<<<"$counts" grep -v " $sys$"
# if the count of the system is 1, don't add it back - it's count is now 0
if ((cnt > 1)); then
# decrement count and add the line with system to counts
printf "%s" "$((cnt - 1)) $sys"
fi
)
# finally print output
printf "%s\n" "$sys"
# and remember last system
lastsys="$sys"
done |
{
# get system names only in `systems` - using the cached counts variable
# for each system name open a grep for that name from the input file
# with an assigned file descriptor
# The file descriptor list is saved in an array `fds`
fds=()
systems=""
while IFS=' ' read -r _ sys; do
exec {fd}< <(grep "^$sys," "$inputfile")
fds+=("$fd")
systems+="$sys"$'\n'
done <<<"$counts"
# for each line in input
while IFS='' read -r sys; do
# get the position inside systems list of that system decremented by 1
# this will be the file descriptor used for filtering that system out of the input
fds_idx=$(<<<"$systems" grep -n "$sys" | cut -d: -f1)
fds_idx=$((fds_idx - 1))
# read one line from that file descriptor
# I wonder if `sed 1p` would be faster
IFS='' read -r -u "${fds[$fds_idx]}" line
# output that line
printf "%s\n" "$line"
done
} |
{
# check if the output is correct
output=$(cat)
# output should have same lines as inputfile
if ! cmp <(sort "$inputfile") <(<<<"$output" sort); then
echo "Output does not match input!" >&2
exit 1
fi
# two consecutive lines can't have the same system
lastsys=""
<<<"$output" cut -d, -f1 |
while IFS= read -r sys; do
if [ -n "$lastsys" -a "$lastsys" = "$sys" ]; then
echo "Same systems found on two consecutive lines!" >&2
exit 1
fi
lastsys="$sys"
done
# all ok
echo "all ok!"
echo -------------
printf "%s\n" "$output"
}
exit

optimizing awk command for large file

I have these functions to process a 2GB text file. I'm splitting it into 6 parts for simultaneous processing but it is still taking 4+ hours.
What else can I try to make the script faster?
A bit of details:
I feed my input csv into a while loop to be read line by line.
I grab the values of the needed fields from the csv line in the read2col function.
The awk in my mainf function takes the values from read2col and does some arithmetic calculations. I round the result to 2 decimal places, then print the line to a text file.
Sample data:
"111","2018-08-24","01:21","ZZ","AAA","BBB","0","","","ZZ","ZZ111","ZZ110","2018-10-12","07:00","2018-10-12","08:05","2018-10-19","06:30","2018-10-19","09:35","ZZZZ","ZZZZ","A","B","146.00","222.26","76.26","EEE","abc","100.50","45.50","0","E","ESSENTIAL","ESSENTIAL","4","4","7","125","125"
Script:
read2col()
{
is_one_way=$(echo "$line"| awk -F'","' '{print $7}')
price_outbound=$(echo "$line"| awk -F'","' '{print $30}')
price_exc=$(echo "$line"| awk -F'","' '{print $25}')
tax=$(echo "$line"| awk -F'","' '{print $27}')
price_inc=$(echo "$line"| awk -F'","' '{print $26}')
}
#################################################
#for each line in the csv
mainf()
{
cd $infarepath
while read -r line; do
#read the value of csv fields into variables
read2col
if [[ $is_one_way == 0 ]]; then
if [[ $price_outbound > 0 ]]; then
#calculate price inc and print the entire line to txt file
echo $line | awk -v CONVFMT='%.2f' -v pout=$price_outbound -v tax=$tax -F'","' 'BEGIN {OFS = FS} {$25=pout;$26=(pout+(tax / 2)); print}' >>"$csvsplitfile".tmp
else
#divide price ecx and inc by 2 if price outbound is not greater than 0
echo $line | awk -v CONVFMT='%.2f' -v pexc=$price_exc -v pinc=$price_inc -F'","' 'BEGIN {OFS = FS} {$25=(pexc / 2);$26=(pinc /2); print}' >>"$csvsplitfile".tmp
fi
else
echo $line >>"$csvsplitfile".tmp
fi
done < $csvsplitfile
}
The first thing you should do is stop invoking six subshells for running awk for every single line of input. Let's do some quick, back-of-the-envelope calculations.
Assuming your input lines are about 292 characters (as per your example), a 2G file will consist of a little over 7.3 million lines. That means you are starting and stopping a whopping forty-four million processes.
And, while Linux admirably handles fork and exec as efficiently as possible, it's not without cost:
pax$ time for i in {1..44000000} ; do true ; done
real 1m0.946s
In addition, bash hasn't really been optimised for this sort of processing, its design leads to sub-optimal behaviour for this specific use case. For details on this, see this excellent answer over on one of our sister sites.
An analysis of the two methods of file processing (one program reading an entire file (each line has just hello on it), and bash reading it a line at a time) is shown below. The two commands used to get the timings were:
time ( cat somefile >/dev/null )
time ( while read -r x ; do echo $x >/dev/null ; done <somefile )
For varying file sizes (user+sys time, averaged over a few runs), it's quite interesting:
# of lines cat-method while-method
---------- ---------- ------------
1,000 0.375s 0.031s
10,000 0.391s 0.234s
100,000 0.406s 1.994s
1,000,000 0.391s 19.844s
10,000,000 0.375s 205.583s
44,000,000 0.453s 889.402s
From this, it appears that while the while method can hold its own for smaller data sets, it really does not scale well.
Since awk itself has ways to do calculations and formatted output, processing the file with one single awk script, rather than your bash/multi-awk-per-line combination, will make the cost of creating all those processes and line-based delays go away.
This script would be a good first attempt, let's call it prog.awk:
BEGIN {
FMT = "%.2f"
OFS = FS
}
{
isOneWay=$7
priceOutbound=$30
priceExc=$25
tax=$27
priceInc=$26
if (isOneWay == 0) {
if (priceOutbound > 0) {
$25 = sprintf(FMT, priceOutbound)
$26 = sprintf(FMT, priceOutbound + tax / 2)
} else {
$25 = sprintf(FMT, priceExc / 2)
$26 = sprintf(FMT, priceInc / 2)
}
}
print
}
You just run that single awk script with:
awk -F'","' -f prog.awk data.txt
With the test data you provided, here's the before and after, with markers for field numbers 25 and 26:
<-25-> <-26->
"111","2018-08-24","01:21","ZZ","AAA","BBB","0","","","ZZ","ZZ111","ZZ110","2018-10-12","07:00","2018-10-12","08:05","2018-10-19","06:30","2018-10-19","09:35","ZZZZ","ZZZZ","A","B","146.00","222.26","76.26","EEE","abc","100.50","45.50","0","E","ESSENTIAL","ESSENTIAL","4","4","7","125","125"
"111","2018-08-24","01:21","ZZ","AAA","BBB","0","","","ZZ","ZZ111","ZZ110","2018-10-12","07:00","2018-10-12","08:05","2018-10-19","06:30","2018-10-19","09:35","ZZZZ","ZZZZ","A","B","100.50","138.63","76.26","EEE","abc","100.50","45.50","0","E","ESSENTIAL","ESSENTIAL","4","4","7","125","125"

Print a file in multiple columns based on delimiter

This seems like a simple task, but using duckduckgo I wasn't able to find a way to properly do what I'm trying to.
The main question is: How do I split the output of a command in linux or bash into multiple columns using a delimiter?
I have a file that looks like this: (this is just a simplified example)
-----------------------------------
Some data
that varies in line length
-----------------------------------
-----------------------------------
More data that is seperated
by a new line and dashes
-----------------------------------
And so on. Every time data gets written to the file, it's enclosed by lines of dashes and separated from the previous block by an empty line. The line length of the data varies. What I want is basically a tool or a way, using bash, to split the file into multiple columns like this:
----------------------------------- -----------------------------------
Some data More data that is seperated
that varies in line length by a new line and dashes
----------------------------------- -----------------------------------
Each column should take 50% of the screen, no centering (as in alignment) needed. The file has to be split per-block. Splitting the file in the middle or something like that won't work. I basically want block 1 go to the left column, block 2 to the right, 3 to the left again, 4 right, and so on. The file gets updated constantly and updates should be written to the screen right away. (Currently I'm using tail -f)
Since this sounds like a rather common question I would welcome a general approach to this instead of a specific answer that works only for my case so people coming from search engines looking for a way to have a two column layout in bash get some information too. I tried column and pr, both don't work as desired. (I elaborated on this in the comments)
Edit: To be clear, I am looking for a general approach on this. Going through a file, getting data between the delimiter, putting it to column A, getting the next one putting it to column B, and so on.
The question is tagged as Perl so here is a possible Perl answer:
#!/usr/bin/env perl
use strict;
use warnings;
my $is_col1 = 1;
my $in_block = 0;
my @col1;
while (<DATA>) {
chomp;
if (/^\s*-+\s*$/ ... /^\s*-+\s*$/) {
$in_block = 1;
if ($is_col1) {
push @col1, $_;
}
else {
printf "%-40s%-40s\n", shift @col1 // '', $_;
}
}
else {
if ($in_block) {
$in_block = ! $in_block;
$is_col1 = ! $is_col1;
print "\n" if $is_col1; # line separating blocks
}
}
}
print join("\n", @col1), "\n\n" if @col1;
__DATA__
-----------------------------------
Some data
that varies in line length
-----------------------------------
-----------------------------------
More data that is seperated
by a new line and dashes
with a longer column2
-----------------------------------
-----------------------------------
The odd last column
-----------------------------------
Output:
----------------------------------- -----------------------------------
Some data More data that is seperated
that varies in line length by a new line and dashes
----------------------------------- with a longer column2
-----------------------------------
-----------------------------------
The odd last column
-----------------------------------
This script gets the current terminal's maximum width and splits it in two, then prints records split by the RS="\n\n" separator: it prints the first record found and places the cursor at that record's first line / last column to write the next one.
#!/bin/bash
tput clear
# get half current terminal width
twidth=$(($(tput cols)/2))
tail -n 100 -f test.txt | stdbuf -i0 -o0 gawk -v twidth=$twidth 'BEGIN{ RS="\n\n"; FS=OFS="\n"; oldNF=0 } {
sep="-----------------------------------"
pad=" "
printf "%-" twidth "s", $0
getline
for(i = 1; i <= NF; i++){
# move cursor to first line, last column of previous record
print "\033[" oldNF ";" twidth "f" $i
oldNF+=1
}
}'
Here's a simpler version
gawk 'BEGIN{ RS="[-]+\n\n"; FS="\n" } {
sep="-----------------------------------"
le=$2
lo=$3
getline
printf "%-40s %-40s\n", sep,sep
printf "%-40s %-40s\n", le,$2
printf "%-40s %-40s\n", lo,$3
printf "%-40s %-40s\n\n", sep,sep
}' test.txt
Output
----------------------------------- -----------------------------------
Some data More data that is seperated
that varies in line length by a new line and dashes
----------------------------------- -----------------------------------
----------------------------------- -----------------------------------
Some data More data that is seperated
that varies in line length by a new line and dashes
----------------------------------- -----------------------------------
Assuming file contains uniform blocks of five lines each, using paste, sed, and printf:
c=$((COLUMNS/2))
paste -d'#' <(sed -n 'p;n;p;n;p;n;p;n;p;n;n;n;n;n' file) \
<(sed -n 'n;n;n;n;n;p;n;p;n;p;n;p;n;p' file) |
sed 's/.*/"&"/;s/#/" "/' |
xargs -L 1 printf "%-${c}s %-${c}s\n"
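Note that COLUMNS is usually only set in interactive shells; if this runs inside a script you may need to derive the width yourself, for example:
c=$(( $(tput cols) / 2 ))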
Problem with OP spec
The OP reports that the block lengths may vary, and should be separated by a fixed number of lines. Even numbered blocks go in Column A, odd numbered blocks in Column B.
That creates a tail -f problem then. Suppose the block lengths of the source input begin with 1000 lines, then one line, 1000, 1, 1000, 1, etc. So Column A gets all the 1000-line blocks, and Column B gets all the one-line blocks. Let's say the blocks in the output are separated by 1 line each. So one block in Column A lines up with about 500 blocks in Column B. For a terminal with scrolling output, that means before we can output the first block in Column A, we have to wait for 1000 blocks of input. To output the third block in Column A (just below the first block), we have to wait for 2000 blocks of input.
If the blocks are added to the input file relatively slowly, with a one-second delay between blocks, then it will take three seconds for block 3 to appear in the input file, but about 33 minutes for block 3 to be displayed in the output.
Alright, since apparently there is no clean way to do this, I came up with my own solution. It's a bit messy and requires GNU screen to be installed, but it works: any number of lines within or around the blocks, 50% of the screen with automatic resizing, and each column prints independently of the other with a fixed number of newlines between blocks. It also updates automatically every x seconds (120 in my example).
#!/bin/bash
screen -S testscr -X layout save default
screen -S testscr -X split -v
screen -S testscr -X screen tail -f /tmp/testscr1.txt
screen -S testscr -X focus
screen -S testscr -X screen tail -f /tmp/testscr2.txt
while : ; do
echo "" > /tmp/testscr1.txt
echo "" > /tmp/testscr2.txt
cfile=1 # current column
ctype=0 # start or end of block
while read; do
if [[ $REPLY == "------------------------------------------------------------" ]]; then
if [[ $ctype -eq 0 ]]; then
ctype=1
else
if [[ $cfile -eq 1 ]]; then
echo "${REPLY}" >> /tmp/testscr1.txt
echo "" >> /tmp/testscr1.txt
echo "" >> /tmp/testscr1.txt
cfile=2
else
echo "${REPLY}" >> /tmp/testscr2.txt
echo "" >> /tmp/testscr2.txt
echo "" >> /tmp/testscr2.txt
cfile=1
fi
ctype=0
fi
fi
if [[ $ctype -eq 1 ]]; then
if [[ $cfile -eq 1 ]]; then
echo "${REPLY}" >> /tmp/testscr1.txt
else
echo "${REPLY}" >> /tmp/testscr2.txt
fi
fi
done < "$1"
sleep 120
done
First, start a screen session with screen -S testscr; then, either within or outside the session, execute the script above. This will split the screen vertically, using 50% per column, and run tail -f in both columns. Afterwards it goes through the input file and writes block by block to each tmp file in the desired way. Since it's in an infinite while loop, the displayed output is essentially refreshed automatically every x seconds (here 120).

In-line replacement bash (replace line with new one using variables)

I'm going through and reading lines from a file. They contain a ton of unnecessary information, and I want to reformat the lines so that I can use the necessary information later.
Example line in file (file1)
Name: *name* Date: *date* Age: *age* Gender: *gender* Score: *score*
Say I want to just pull gender and age from the file and use that later
New line
*gender*, *age*
In bash:
while read line; do
<store variable for gender>
<store variable for age>
<overwrite each line in CSV - gender,age>
<use gender/age as inputs for later comparisons>
done < file1
EDIT: There is no stability in the entries. One value can be found using an echo $line | cut, and the other value is found using a [[ $line =~ "keyValue" ]] test and then setting that value.
I was thinking of storing the combination of the two variables as such:
newLine="$val1,$val2"
Then using a sed in-line replace to replace the $line with $newLine.
Is there a better way, though? It may come down to a sed formatting issue with variables.
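For reference, here is a minimal sketch of the in-loop extraction described in the edit, using bash's own regex matching and BASH_REMATCH (it assumes the Key: value layout from the example line above and is not taken from the answers below):
while read -r line; do
    # each successful match stores its capture group in BASH_REMATCH[1]
    [[ $line =~ Age:\ ([^ ]+) ]] && age=${BASH_REMATCH[1]}
    [[ $line =~ Gender:\ ([^ ]+) ]] && gender=${BASH_REMATCH[1]}
    newLine="$gender,$age"
    printf '%s\n' "$newLine"      # or compare/use the values here
done < file1 > file1.csv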
This will produce your desired output from your posted sample input:
$ cat file
Name: *name* Date: *date* Age: *age* Gender: *gender* Score: *score*
$ awk -F'[: ]+' -v OFS=', ' '{for (i=1;i<NF;i+=2) a[$i]=$(i+1); print a["Gender"], a["Age"]}' file
*gender*, *age*
$ awk -F'[: ]+' -v OFS=', ' '{for (i=1;i<NF;i+=2) a[$i]=$(i+1); print a["Score"], a["Name"], a["Date"] }' file
*score*, *name*, *date*
and you can see above how easy it is to print whatever fields you like in whatever order you like.
If it's not what you want, post some more representative input.
Your example leaves room for interpretation, so I'm assuming that there may be whitespace in the field values, but that there are no colons in the field values and that each field key is followed by a colon. I also assume that the order is stable.
while IFS=: read _ _ _ age gender _; do
age="${age% Gender}" # Use parameter expansion to strip off the key for the *next* field.
gender="${gender% Score}"
printf '"%s","%s"\n' "$gender" "$age"
done < file1 > file1.csv
Update
Since your question now states that there is no stability, you have to iterate through the possible values to get your output:
while IFS=: read -a line; do
unset age key sex
for chunk in "${line[@]}"; do
val="${chunk% *}" # Everything but the key
case "$key" in
Age) age="$val";;
Gender) sex="$val";;
esac
# The key is for the *next* iteration.
key="${chunk##* }"
done
if [[ $age || $sex ]]; then
printf '"%s","%s"\n' "$sex" "$age"
fi
done < file1 > file1.csv
(Also I added quotes around the output values in the csv to be compliant with the actual csv format and in case sex or age happened to have commas in it. Maybe someone is 1,000,000 years old. ;)
