Remove duplicates from multiple cells of a column separated by "|" - linux

I want to remove duplicates from multiple cells of column 5, where the values are separated by "|". The data I have looks like this:
Col1 Col2 Col3 Col4 Col5
1048563 93750984 5 0.499503476 HTR7|HTR7|HTR7
1048564 93751210 5 0.499503476 ABHD3|ABHD3|ABHD3|ABHD3|ABHD3|ABHD3
1048566 93751298 5 0.499503476 ADCYAP1|ADCYAP1|ADCYAP1|ADCYAP1
And I want the result to be:
Col1 Col2 Col3 Col4 Col5
1048563 93750984 5 0.499503476 HTR7
1048564 93751210 5 0.499503476 ABHD3
1048566 93751298 5 0.499503476 ADCYAP1
The number of rows and columns varies, and the length of the text in column 5 is not always the same.
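One possible approach is awk: split the fifth column on "|", keep only the first occurrence of each value, and write the field back. A minimal sketch, assuming whitespace-separated columns and that column 5 is the field to clean (input.txt is a placeholder; set FS and OFS to a tab for a tab-delimited file):
awk 'NR==1 {print; next}                    # pass the header line through unchanged
{
    n = split($5, parts, "|")               # split column 5 on "|"
    split("", seen); out = ""               # reset the seen-values table for this row
    for (i = 1; i <= n; i++)
        if (!seen[parts[i]]++)              # keep only the first occurrence of each value
            out = (out == "" ? parts[i] : out "|" parts[i])
    $5 = out                                # write the deduplicated value back to column 5
    print
}' input.txt
For the sample data this leaves a single gene name per row; a cell such as HTR7|ABHD3|HTR7 would become HTR7|ABHD3.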

Related

Excel: find the count of 2 filtered columns

There are paired columns that I am comparing (col1 and col2, col3 and col4), each containing either a blank, '0', or '1'. I basically want to know how many of them intersect.
id col1 col2 col3 col4
id1 0 1
id2 1 1 0
id3 0 1 1
id4
id5 0
For this table, I want to count how many ids are 0 or 1 (between col1 and col2). If I use COUNTA(B2:C4) I get 4, but I need to get 3, as only 3 ids are affected for each pair. Is there a formula that would actually give 3 for col1 and col2 and 3 for col3 and col4?
SUMPRODUCT(--(B$2:B$7+C$2:C$7=0)) fails here and provides 3 instead of 5.

Identify the relationship between two columns and their respective value counts in pandas

I have a DataFrame as below:
Col1 Col2 Col3 Col4
1 111 a Test
2 111 b Test
3 111 c Test
4 222 d Prod
5 333 e Prod
6 333 f Prod
7 444 g Test
8 555 h Prod
9 555 i Prod
Expected output :
Column 1 Column 2 Relationship Count
Col2 Col3 One-to-One 2
Col2 Col3 One-to-Many 3
Explanation:
I need to identify the relationship between Col2 & Col3 and also the value counts.
For example, 111 (Col2) is repeated 3 times and has 3 different respective values a, b, c in Col3.
This means Col2 and Col3 have a one-to-many relationship - count_1: 1
222 (Col2) is not repeated and has only one respective value d in Col3.
This means Col2 and Col3 have a one-to-one relationship - count_2: 1
333 (Col2) is repeated twice and has 2 different respective values e, f in Col3.
This means Col2 and Col3 have a one-to-many relationship - count_1: 1+1 (increment this count for every one-to-many relationship)
Similarly, for the other column values, increment the respective counter and display the final results as the expected dataframe.
If you only need to check the relationship between col2 and col3, you can do:
(
    df.groupby(by='Col2').Col3
      # classify each Col2 value by how many Col3 rows it maps to
      .apply(lambda x: 'One-to-One' if len(x) == 1 else 'One-to-Many')
      .to_frame('Relationship')
      # count how many Col2 values fall into each relationship category
      .groupby('Relationship').Relationship
      .count().to_frame('Count').reset_index()
      .assign(**{'Column 1': 'Col2', 'Column 2': 'Col3'})
      .reindex(columns=['Column 1', 'Column 2', 'Relationship', 'Count'])
)
Output:
Column 1 Column 2 Relationship Count
0 Col2 Col3 One-to-Many 3
1 Col2 Col3 One-to-One 2

Grouping corresponding Rows based on One column

I have an Excel Sheet Dataframe with no fixed number of rows and columns.
eg.
Col1 Col2 Col3
A 1 -
A - 2
B 3 -
B - 4
C 5 -
I would like to group rows that have the same Col1 content, like the following:
Col1 Col2 Col3
A 1 2
B 3 4
C 5 -
I am using pandas groupby, but I am not getting what I want.
Try using groupby, after converting the '-' placeholders to NaN first (pd.np is deprecated in newer pandas, so import numpy directly):
import numpy as np

print(df.replace('-', np.nan).groupby('Col1', as_index=False).first().fillna('-'))
Output:
Col1 Col2 Col3
0 A 1 2
1 B 3 4
2 C 5 -

How to flip strings in specific columns using linux

I have a few columns as shown below:
col1 col2 col3 col4 a/t t/g g/t f/g
col3 col2 col4 col5 t/a g/t f/g g/t
I need to flip the values in the columns after column 4; the sample output is shown below:
col1 col2 col3 col4 t/a g/t t/g g/f
col3 col2 col4 col5 a/t t/g g/f t/g
I tried using the rev command in bash, but it prints the whole row in the inverted direction (mirror image). Is there an alternate solution that only flips the strings as shown in the output? Thanks in advance.
You don't say what the first 4 columns may contain, so I assume this would be enough:
sed 's/\(\w\)\/\(\w\)/\2\/\1/g' <yourfile>
like:
$ cat test
col1 col2 col3 col4 t/a g/t t/g g/f
col3 col2 col4 col5 a/t t/g g/f t/g
$ sed 's/\(\w\)\/\(\w\)/\2\/\1/g' test
col1 col2 col3 col4 a/t t/g g/t f/g
col3 col2 col4 col5 t/a g/t f/g g/t
if you want to save the result to a file, redirect sed output:
$ sed 's/\(\w\)\/\(\w\)/\2\/\1/g' test > newfile
A perl one-liner does the same by reversing each field after the fourth (@F is perl's autosplit field array):
perl -lane 'print join " ", @F[0..3], map { scalar reverse $_ } @F[4..$#F]' test
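The same field-wise idea can be sketched in awk, which touches only the fields after the fourth and so does not depend on what the first four columns contain (test is the sample file from above):
awk '{
    for (i = 5; i <= NF; i++) {             # only the fields after the 4th
        out = ""
        for (j = length($i); j >= 1; j--)   # reverse the field character by character
            out = out substr($i, j, 1)
        $i = out
    }
    print
}' test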

How to add a header to a specific column in linux

I have a file with x columns and I would like to add a header for the nth column. For instance, I have a sample tab-delimited file as shown below:
col1 col2 col4
1 2 3 4
3 4 4 5
I would like to add a header for column 3. The output should look like:
col1 col2 col3 col4
1 2 3 4
3 4 4 5
Could anyone suggest how to do this?
To make a change to record number 1, it's as simple as:
awk 'NR==1{print $1" "$2" col3 "$3;next}{print}' inputfile
On that first line, it inserts the extra column header. All other lines are printed as is.
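Since the question describes the file as tab-delimited, a variant that keeps tabs as the separator might look like this (a sketch; inputfile is the same placeholder as above):
awk 'BEGIN{FS=OFS="\t"} NR==1{$3 = "col3" OFS $3} 1' inputfile
On the header line it prepends col3 to the third field, so the rebuilt record reads col1, col2, col3, col4; every other line is printed unchanged.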
