Adding rows that match a criterion in another column in Excel - excel

This is some sample data:
Polling_Booth INC SAD BSP PS_NO
1 89 47 2 1
2 97 339 6 1
3 251 485 8 1
4 356 355 25 2
5 290 333 9 2
6 144 143 4 3
7 327 196 1 4
8 370 235 1 5
And this is what I'm trying to achieve
Polling_Booth INC SAD BSP PS_NO OP_INC OP_SAD OP_BSP
1 89 47 2 1
2 97 339 6 1
3 251 485 8 1 437 871 16
4 356 355 25 2
5 290 333 9 2 646 688 34
6 144 143 4 3 144 143 4
7 327 196 1 4 327 196 1
8 370 235 1 5 370 235 1
This is achieved by adding up rows which have the same PS_NO. This is what I have tried:
=if(E2=E3,sum(B2,B3),0) #same for all the rows
Any help would be much appreciated. Thanks.

You could get it to look like your table by adding another condition that checks whether this is the last occurrence of the PS_No in column E, returning an empty string if not:
=IF(COUNTIF($E$2:$E2,$E2)=COUNTIF($E$2:$E$10,$E2),SUMIF($E$2:$E$10,$E2,B$2:B$10),"")
If the data is sorted by PS_No, you can do it more easily by
=IF($E3<>$E2,SUMIF($E$2:$E$10,$E2,B$2:B$10),"")
which I think is what you were trying to do in your question.
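Just as an aside, if you ever need the same result outside Excel, a rough pandas sketch of the same logic could look like this (the DataFrame below simply hard-codes the sample table from the question, so treat it as an illustration rather than part of the Excel answer):
import pandas as pd

# Sample data from the question
df = pd.DataFrame({
    "Polling_Booth": range(1, 9),
    "INC": [89, 97, 251, 356, 290, 144, 327, 370],
    "SAD": [47, 339, 485, 355, 333, 143, 196, 235],
    "BSP": [2, 6, 8, 25, 9, 4, 1, 1],
    "PS_NO": [1, 1, 1, 2, 2, 3, 4, 5],
})

# Total each party's votes per PS_NO and broadcast the totals back to every row
totals = df.groupby("PS_NO")[["INC", "SAD", "BSP"]].transform("sum")

# Show the totals only on the last row of each PS_NO group, like the SUMIF version
is_last = ~df["PS_NO"].duplicated(keep="last")
for col in ["INC", "SAD", "BSP"]:
    df["OP_" + col] = totals[col].where(is_last, "")

print(df)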

Related

Retaining bad_lines identified by pandas in the output file instead of skipping those lines

I have to convert text files into CSVs after processing the contents of the text file as a pandas dataframe.
Below is the code I am using. out_txt is my input text file and out_csv is my output csv file.
df = pd.read_csv(out_txt, sep='\s', header=None, on_bad_lines='warn', encoding = "ANSI")
df = df.replace(r'[^\w\s]|_]/()|~"{}="', '', regex=True)
df.to_csv(out_csv, header=None)
If "on_bad_lines = 'warn'" is not decalred the csv files are not created. But if i use this condition those bad lines are getting skipped (obviously) with the warning
Skipping line 6: Expected 8 fields in line 7, saw 9. Error could possibly be due to quotes being ignored when a multi-char delimiter is used.
I would like to retain these bad lines in the csv. The bad lines that were detected are in my input text file, whose contents are shown below. In this content I would like to remove characters like #, &, (, ).
75062 220 8 6 110 220 250 <1
75063 260 5 2 584 878 950 <1
75064 810 <2 <2 456 598 3700 <1
75065 115 5 2 96 74 5000 <1
75066 976 <5 2 5 68 4200 <1
75067 22 210 4 348 140 4050 <1
75068 674 5 4 - 54 1130 3850 <1
75069 414 5 y) 446 6.6% 2350 <1
75070 458 <5 <2 548 82 3100 <1
75071 4050 <5 2 780 6430 3150 <1
75072 115 <7 <1 64 5.8% 4050 °#&4«x<i1
75073 456 <7 4 46 44 3900 <1
75074 376 <7 <2 348 3.8% 2150 <1
75075 378 <6 y) 30 40 2000 <1
I would split on \s later with str.split rather than in read_csv:
df = (
    pd.read_csv(out_txt, header=None, encoding='ANSI')
      .replace(r'[^\w\s]|_]/()|~"{}="', '', regex=True)
      .squeeze().str.split(expand=True)
)
Another variant (skipping everything that comes in-between the numbers):
df = (
    pd.read_csv(out_txt, header=None, encoding='ANSI')[0]
      .str.findall(r"\b(\d+)\b")   # keep only the numeric tokens of each line
      .str.join(' ')               # re-join them so they can be split into columns
      .str.split(expand=True)
)
Output:
print(df)
0 1 2 3 4 5 6 7
0 375020 1060 115 38 440 350 7800 1
1 375021 920 80 26 310 290 5000 1
2 375022 1240 110 28 460 430 5900 1
3 375023 830 150 80 650 860 6200 1
4 375024 185 175 96 800 1020 2400 1
5 375025 680 370 88 1700 1220 172 1
6 375026 550 290 72 2250 1460 835 2
7 375027 390 120 60 1620 1240 158 1
8 375028 630 180 76 820 1360 180 1
9 375029 460 280 66 380 790 3600 1
10 375030 660 260 62 11180 1040 300 1
11 375031 530 200 84 1360 1060 555 1
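Note that with either variant the resulting columns hold strings; if you need numbers afterwards, a conversion along these lines should do (pd.to_numeric with errors='coerce' is just one option; anything unparseable becomes NaN):
# Convert the split string columns to numbers; anything unparseable becomes NaN
df = df.apply(pd.to_numeric, errors="coerce")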

how to map two dataframes on a condition while having different numbers of rows

I have two dataframes that need to be mapped (or joined?) based on some condition. These are the dataframes:
df_1
img_names img_array
0 1_rel 253
1 1_rel_right 255
2 1_rel_top 250
3 4_rel 180
4 4_rel_right 182
5 4_rel_top 189
6 7_rel 217
7 7_rel_right 183
8 7_rel_top 196
df_2
List_No time
0 1 38
1 4 23
2 7 32
After mapping I would like to get the following dataframe:
df_3
img_names img_array List_No time
0 1_rel 253 1 38
1 1_rel_right 255 1 38
2 1_rel_top 250 1 38
3 4_rel 180 4 23
4 4_rel_right 182 4 23
5 4_rel_top 189 4 23
6 7_rel 217 7 32
7 7_rel_right 183 7 32
8 7_rel_top 196 7 32
Basically, each row of df_2 is repeated 3 times to match the number of rows in df_1, and the mapping (if we can call it that) is done using the split string in each row of df_1's img_names column. The names in img_names may vary, but each of them always starts with some number (1, 4, 7 in this case), an underscore, etc. So I need to split off the corresponding number in each row and map it to the row elements of List_No.
I hope the example above is clear.
Thank you.
Looks like you can just extract the digit parts and merge:
df_1['List_No'] = df_1['img_names'].str.split('_').str[0].astype(int)
df_3 = df_1.merge(df_2, on='List_No')
Output:
img_names img_array List_No time
0 1_rel 253 1 38
1 1_rel_right 255 1 38
2 1_rel_top 250 1 38
3 4_rel 180 4 23
4 4_rel_right 182 4 23
5 4_rel_top 189 4 23
6 7_rel 217 7 32
7 7_rel_right 183 7 32
8 7_rel_top 196 7 32
An alternative to @QuangHoang's answer (which I believe you should pick, as it is more robust). This uses the map method, and assumes every number extracted from df_1's img_names is present in df_2's List_No:
df_1.assign(
    List_No=df_1.img_names.str.extract(r"(\d+)", expand=False).astype(int),
    time=lambda x: x.List_No.map(df_2.set_index("List_No")["time"]),
)
img_names img_array List_No time
0 1_rel 253 1 38
1 1_rel_right 255 1 38
2 1_rel_top 250 1 38
3 4_rel 180 4 23
4 4_rel_right 182 4 23
5 4_rel_top 189 4 23
6 7_rel 217 7 32
7 7_rel_right 183 7 32
8 7_rel_top 196 7 32

How to extract lines from a file when the second column of the file matches the values in another file

I have two files.
file 1:
4
14
18
45
53
60
64
102
106
158
162
file2:
28 1 2
54 1 2
90 1 1
103 1 1
155 1 17
191 1 1
235 1 1
245 4 1
275 4 1
362 4 1
377 18 1
391 18 1
413 18 2
466 18 2
492 18 2
494 18 41
498 45 1
522 45 1
529 57 3
542 53 1
560 58 6
562 164 25
568 164 5
I want to extract the lines from file2 whose second column matches a value in file1.
So the expected output will be:
245 4 1
275 4 1
362 4 1
377 18 1
391 18 1
413 18 2
466 18 2
492 18 2
494 18 41
498 45 1
522 45 1
542 53 1
Many of the solutions I saw online use Python or Perl; however, I want to use a Linux command to do this. Any ideas?
This should do it?
awk 'FNR==NR{a[$0]++};FNR!=NR{if($2 in a){print}}' file1 file2
245 4 1
275 4 1
362 4 1
377 18 1
391 18 1
413 18 2
466 18 2
492 18 2
494 18 41
498 45 1
522 45 1
542 53 1
Explanation:
We hand awk both files (the order is important in this case!).
As long as we are reading the first file (FNR==NR), we store each value in an array: a[$0]++.
When we reach the second file, we just check whether the value in its second column ($2) is in the array; if yes, we print the whole line.
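Just for comparison, since the question specifically asks for a Linux command, here is a rough Python sketch of the same idea (build a set of keys from file1, then filter file2 against it):
# Build a lookup set from file1, then print file2 lines whose 2nd column is in it
with open("file1") as f1:
    keys = {line.strip() for line in f1 if line.strip()}

with open("file2") as f2:
    for line in f2:
        fields = line.split()
        if len(fields) >= 2 and fields[1] in keys:
            print(line, end="")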

group-by values obtained from splitting indexes

I need to find the max of two columns (p_1_logreg, p_2_logreg), where the comparison should be limited to groups of 14 rows at a time.
My csv file's index can be sliced into:
int1_str1_str2_int2_str3_int4
The max should be found among rows where int1, str1, str2, int2 and str3 are fixed and only int4 changes (from index 0 to index 13, and so on).
I tried fixing one element at a time and using groupby, but I couldn't iterate over only the int4 value.
Here is the code to find the max for column p_1_label, but the result is not what I am looking for.
max_1_row=raw_prob.loc[raw_prob.groupby(raw_prob['id'].str.split('_').str[1])['p_1_'+label].idxmax()]
max_1_row=max_1_row.loc[raw_prob.groupby(raw_prob['id'].str.split('_').str[3])['p_1_'+label].idxmax()]
max_1_row=max_1_row.loc[raw_prob.groupby(raw_prob['id'].str.split('_').str[5])['p_1_'+label].idxmax()]
Any ideas?
I think you need DataFrameGroupBy.idxmax after replacing the trailing _<int4> in id with an empty string, and then select the rows by loc:
df = pd.read_csv('myProb.csv', index_col=[0])
idx = df.drop(columns='id').groupby(df['id'].str.replace(r'_\d+$', '', regex=True)).idxmax()
print (idx.head(15))
p_0_logreg p_1_logreg p_2_logreg
id
6_PanaCleanerJune_sub_12_ICA 2 9 6
6_PanaCleanerJune_sub_13_ICA 17 19 23
6_PanaCleanerJune_sub_14_ICA 34 37 33
6_PanaCleanerJune_sub_15_ICA 52 51 43
6_PanaCleanerJune_sub_17_ICA 66 67 69
6_PanaCleanerJune_sub_18_ICA 82 79 76
6_PanaCleanerJune_sub_19_ICA 89 87 90
6_PanaCleanerJune_sub_20_ICA 98 103 104
6_PanaCleanerJune_sub_21_ICA 114 117 112
6_PanaCleanerJune_sub_22_ICA 129 133 127
6_PanaCleanerJune_sub_23_ICA 145 146 143
6_PanaCleanerJune_sub_24_ICA 155 166 161
6_PanaCleanerJune_sub_25_ICA 176 173 174
6_PanaCleanerJune_sub_26_ICA 186 191 189
6_PanaCleanerJune_sub_27_ICA 202 203 209
df1 = df.loc[idx['p_1_logreg']]
print (df1.head(15))
id p_0_logreg p_1_logreg p_2_logreg
9 6_PanaCleanerJune_sub_12_ICA_10 0.013452 0.985195 0.001353
19 6_PanaCleanerJune_sub_13_ICA_6 0.051184 0.948816 0.000000
37 6_PanaCleanerJune_sub_14_ICA_10 0.013758 0.979351 0.006890
51 6_PanaCleanerJune_sub_15_ICA_10 0.076056 0.923944 0.000000
67 6_PanaCleanerJune_sub_17_ICA_12 0.051060 0.947660 0.001280
79 6_PanaCleanerJune_sub_18_ICA_10 0.051184 0.948816 0.000000
87 6_PanaCleanerJune_sub_19_ICA_4 0.078162 0.917751 0.004087
103 6_PanaCleanerJune_sub_20_ICA_6 0.076400 0.921263 0.002337
117 6_PanaCleanerJune_sub_21_ICA_6 0.155002 0.791753 0.053245
133 6_PanaCleanerJune_sub_22_ICA_8 0.000000 0.998623 0.001377
146 6_PanaCleanerJune_sub_23_ICA_7 0.017549 0.973995 0.008457
166 6_PanaCleanerJune_sub_24_ICA_13 0.025215 0.974785 0.000000
173 6_PanaCleanerJune_sub_25_ICA_6 0.025656 0.960220 0.014124
191 6_PanaCleanerJune_sub_26_ICA_10 0.098872 0.895526 0.005602
203 6_PanaCleanerJune_sub_27_ICA_8 0.066493 0.932470 0.001037
df2 = df.loc[idx['p_2_logreg']]
print (df2.head(15))
id p_0_logreg p_1_logreg p_2_logreg
6 6_PanaCleanerJune_sub_12_ICA_7 0.000000 0.000351 0.999649
23 6_PanaCleanerJune_sub_13_ICA_10 0.000000 0.000351 0.999649
33 6_PanaCleanerJune_sub_14_ICA_6 0.080748 0.000352 0.918900
43 6_PanaCleanerJune_sub_15_ICA_2 0.017643 0.000360 0.981996
69 6_PanaCleanerJune_sub_17_ICA_14 0.882449 0.000290 0.117261
76 6_PanaCleanerJune_sub_18_ICA_7 0.010929 0.000360 0.988711
90 6_PanaCleanerJune_sub_19_ICA_7 0.010929 0.000351 0.988720
104 6_PanaCleanerJune_sub_20_ICA_7 0.006714 0.000360 0.992925
112 6_PanaCleanerJune_sub_21_ICA_1 0.869393 0.000339 0.130269
127 6_PanaCleanerJune_sub_22_ICA_2 0.000000 0.000351 0.999649
143 6_PanaCleanerJune_sub_23_ICA_4 0.017218 0.000360 0.982421
161 6_PanaCleanerJune_sub_24_ICA_8 0.369685 0.000712 0.629603
174 6_PanaCleanerJune_sub_25_ICA_7 0.307056 0.000496 0.692448
189 6_PanaCleanerJune_sub_26_ICA_8 0.850195 0.000368 0.149437
209 6_PanaCleanerJune_sub_27_ICA_14 0.000000 0.000351 0.999649
Detail:
print (df['id'].str.replace(r'_\d+$', '', regex=True).head(15))
0 6_PanaCleanerJune_sub_12_ICA
1 6_PanaCleanerJune_sub_12_ICA
2 6_PanaCleanerJune_sub_12_ICA
3 6_PanaCleanerJune_sub_12_ICA
4 6_PanaCleanerJune_sub_12_ICA
5 6_PanaCleanerJune_sub_12_ICA
6 6_PanaCleanerJune_sub_12_ICA
7 6_PanaCleanerJune_sub_12_ICA
8 6_PanaCleanerJune_sub_12_ICA
9 6_PanaCleanerJune_sub_12_ICA
10 6_PanaCleanerJune_sub_12_ICA
11 6_PanaCleanerJune_sub_12_ICA
12 6_PanaCleanerJune_sub_12_ICA
13 6_PanaCleanerJune_sub_12_ICA
14 6_PanaCleanerJune_sub_13_ICA
Name: id, dtype: object
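As a side note, if you only need one probability column at a time, a more compact equivalent of the idx/loc steps above would be to group and take idxmax of that column directly (same assumptions: df has an id column and the trailing _<int4> is stripped to form the groups):
# Group key: id without its trailing _<int4> part
group = df['id'].str.replace(r'_\d+$', '', regex=True)

# Rows holding the per-group maximum of p_1_logreg
df1 = df.loc[df.groupby(group)['p_1_logreg'].idxmax()]
print(df1.head(15))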

sumproduct using different criteria

I have the excel table below and I would like to calculate the total per company per department per year. I used:
=SUMPRODUCT(--($A$2:$A$9=A12),--($B$2:$B$9=B12)*$C$2:$F$9)
but it doesn't seem to work.
A B C D E F
1 COMPANY DEPART. QUARTER 1 QUARTER 2 QUARTER 3 QUARTER 4
2 AB PRO 123 223 3354 556
3 CD PIV 222 235 223 568
4 CD PRO 236 254 184 223
5 AB STA 254 221 96 265
6 EF PIV 254 112 485 256
7 CD STA 558 185 996 231
8 GH PRO 548 696 698 895
9 AB PRO 148 254 318 229
10
11 TOAL PER COMPANY PER DEPARTAMENT PER YEAR:
12 AB PRO =
Assuming that in Row 12, Col A = AB, and Row 12, Col B = PRO, then:
=SUMPRODUCT((A2:A9=A12)*(B2:B9=B12) *C2:F9)
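As an aside, the same two-criteria total could also be computed in pandas; the sketch below just hard-codes the table from the question, so adapt it to however you actually load the data:
import pandas as pd

# Table from the question
df = pd.DataFrame({
    "COMPANY": ["AB", "CD", "CD", "AB", "EF", "CD", "GH", "AB"],
    "DEPART.": ["PRO", "PIV", "PRO", "STA", "PIV", "STA", "PRO", "PRO"],
    "QUARTER 1": [123, 222, 236, 254, 254, 558, 548, 148],
    "QUARTER 2": [223, 235, 254, 221, 112, 185, 696, 254],
    "QUARTER 3": [3354, 223, 184, 96, 485, 996, 698, 318],
    "QUARTER 4": [556, 568, 223, 265, 256, 231, 895, 229],
})

quarters = ["QUARTER 1", "QUARTER 2", "QUARTER 3", "QUARTER 4"]
mask = (df["COMPANY"] == "AB") & (df["DEPART."] == "PRO")
print(df.loc[mask, quarters].to_numpy().sum())   # total for AB / PRO across the year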
