Parsing a table with streaks of binaries to select the largest element of each group - python-3.x

I have a table like the following (only much longer):
# time binary frequency
0 2.1 0 0.65
1 3.2 1 0.72
2 5.8 0 0.64
3 7.1 0 0.63
4 9.5 1 0.72
5 14.1 1 0.74
6 21.5 0 0.62
7 27.3 0 0.61
8 29.5 1 1.00
9 32.1 1 1.12
10 35.5 1 0.99
I want to collect all the times corresponding to binary == 1 entries and, within each of those small groups, keep only the time whose corresponding frequency value is highest. In the table above, this would result in:
times = 3.2, 14.1, 32.1
I am not sure how to approach the sequential structure of the table in the first place, and then how to compare the values within each group so that only the corresponding time is returned (and not, for example, the largest frequency). Time hides a periodicity, so I would rather avoid building another table with only the binary == 1 elements.
Having my time, binary, and frequency arrays, I can isolate the relevant elements with:
condition = (binary == 1)
time1 = time[condition]
frequency1 = frequency[condition]
but I do not know how to proceed to isolate the various streaks. What useful functions can I use?

I don't know that there are any clever functions to use for this. Here's some code that will do the job. Please note that I removed the headers from your file.
binary is either zero or one, depending on whether the row's other values are to be included in a group. Initially in_group is set to False to indicate that no group has started. When a row with binary equal to zero is read and in_group is True, the group that was being read has come to an end, so in_group is set back to False and the result for that group is printed. When a row with binary equal to one is read and in_group is True, the code is already processing rows of a group and checks whether the newest frequency is greater than the largest seen so far; if so, it updates both rep_time and rep_frequency. If in_group is False, this row is the first of a new group, so in_group is set to True and initial values of rep_time and rep_frequency are recorded.
with open('pyser.txt') as pyser:
    in_group = False
    for line in pyser:
        _, time, binary, frequency = [float(_) for _ in line.rstrip().split()]
        if binary == 0:
            if in_group:
                in_group = False
                print(rep_time)
        else:
            if in_group:
                if frequency > rep_frequency:
                    rep_time, rep_frequency = time, frequency
            else:
                in_group = True
                rep_time, rep_frequency = time, frequency
    if in_group:
        print(rep_time)
Output:
3.2
14.1
32.1
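As an aside (not from the original answer), the streaks can also be isolated directly on NumPy arrays, which matches the time, binary, and frequency arrays mentioned in the question. This is a minimal sketch, with the sample values copied from the table above; np.flatnonzero on the diff of a zero-padded copy of binary gives the start and end of each streak:
import numpy as np

time = np.array([2.1, 3.2, 5.8, 7.1, 9.5, 14.1, 21.5, 27.3, 29.5, 32.1, 35.5])
binary = np.array([0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1])
frequency = np.array([0.65, 0.72, 0.64, 0.63, 0.72, 0.74, 0.62, 0.61, 1.00, 1.12, 0.99])

# pad with zeros so streaks touching either end are closed, then locate streak boundaries
padded = np.concatenate(([0], binary, [0]))
starts = np.flatnonzero(np.diff(padded) == 1)   # first index of each streak
ends = np.flatnonzero(np.diff(padded) == -1)    # one past the last index of each streak

# within each streak, take the time at the position of the maximum frequency
times = [float(time[s + np.argmax(frequency[s:e])]) for s, e in zip(starts, ends)]
print(times)  # [3.2, 14.1, 32.1]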
Edit: We seem to be using different definitions of the problem.
In the first group, we agree. But, in the second group, the maximum amplitude is about 4.07E-01, which corresponds to a time of about 5.4740E+04.
I've also written code in Pandas:
>>> import pandas as pd
>>> df = pd.read_csv('Gyd9P1rb.txt', sep='\s+', skiprows=2, header=None, names='Row TSTOP PSRTIME DETECTED FDOTMAX AMPLITUDE AMPLITUDE_ERR'.split())
>>> del df['Row']
>>> del df['TSTOP']
>>> del df['FDOTMAX']
>>> del df['AMPLITUDE_ERR']
>>> groups = []
>>> in_group = False
>>> group_number = 1
>>> for b in df['DETECTED']:
...     if b:
...         if not in_group:
...             group_number += 1
...             in_group = True
...         groups.append(group_number)
...     else:
...         groups.append(0)
...         in_group = False
...
>>> df['groups'] = pd.Series(groups, index=df.index)
>>> df.head()
PSRTIME DETECTED AMPLITUDE groups
0 54695.471283 1 0.466410 2
1 54698.532412 1 0.389607 2
2 54701.520814 1 0.252858 2
3 54704.557583 0 0.103460 0
4 54707.557563 0 0.088215 0
>>> gb = df.groupby(by=df['groups'])
>>> def f(x):
...     the_max = x['AMPLITUDE'].idxmax()
...     print(x['groups'][the_max], x['PSRTIME'][the_max])
...
>>> gb.apply(f)
0 58064.3656376
0 58064.3656376
2 54695.4712834
3 54740.4917137
4 54788.477571
5 54836.472922
6 54881.4605511
7 54926.4664883
8 54971.4932866
9 55019.5021472
10 55064.5029133
11 55109.4948108
12 55154.414381
13 55202.488766
14 55247.4721132
15 55292.5301332
16 55340.4728542
17 55385.5229596
18 55430.5332147
19 55478.4812671
20 55523.4894451
21 55568.4626766
22 55616.4630348
23 55661.4969604
24 55709.4504634
25 55754.4711994
26 55799.4736923
27 55844.5050404
28 55892.4699313
29 55937.4721754
30 55985.4677572
31 56030.5119765
32 56075.5517149
33 56168.4447074
34 56213.507484
35 56306.5133063
36 56351.4943058
37 56396.579122
38 56441.5683651
39 56489.5321173
40 56534.4838082
41 56582.469025
42 56627.4135202
43 56672.4926625
44 56720.582296
45 56768.5232469
46 56813.4997925
47 56858.3890558
48 56903.5182596
49 56951.4892721
50 56996.5787435
51 57086.3948136
52 57179.5421833
53 57272.5059448
54 57362.452523
55 57635.5013047
56 57728.4925251
57 57773.5235416
58 57821.5390364
59 57866.5205882
60 57911.5590132
61 57956.5699637
62 58001.4331976
Empty DataFrame
Columns: []
Index: []
The results of the two methods are the same, up to differences in display precision.
I also created a small set of data that gives easily calculable results; it is shown below, with its expected output noted after the data. The original program handled it correctly.
0 -1 0 -1
1 0 1 2
2 -1 0 -1
3 -1 0 -1
4 0 1 0
5 1 1 1
6 -1 0 -1
7 -1 0 -1
8 -1 0 -1
9 0 1 4
10 1 1 3
11 2 1 2
12 -1 0 -1
13 -1 0 -1
14 -1 0 -1
15 -1 0 -1
16 0 1 0
17 1 1 1
18 2 1 2
19 3 1 3
20 -1 0 -1
21 -1 0 -1
22 -1 0 -1
23 -1 0 -1
24 -1 0 -1
25 0 1 6
26 1 1 5
27 2 1 4
28 3 1 3
29 4 1 2
30 -1 0 -1
31 -1 0 -1
32 -1 0 -1
33 -1 0 -1
34 -1 0 -1
35 -1 0 -1
36 0 1 0
37 1 1 1
38 2 1 2
39 3 1 3
40 4 1 4
41 5 1 5
41 -1 0 -1
41 -1 0 -1
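For reference, the expected output of the program on this test set, computed by hand from the data above (so treat it as a check, not captured output), is one time per streak:
0.0
1.0
0.0
3.0
0.0
5.0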

Related

Pandas dataframe: Count number of rows which meet a set of conditions across multiple columns [duplicate]

I have a dataframe (edata) as given below:
Domestic Catsize Type Count
1 0 1 1
1 1 1 8
1 0 2 11
0 1 3 14
1 1 4 21
0 1 4 31
From this dataframe I want to calculate the sum of all counts where the logical AND of both variables (Domestic and Catsize) results in zero (0), i.e. for these combinations:
Domestic Catsize AND
1        0       0
0        1       0
0        0       0
The code I use to perform the process is
g = edata.groupby('Type')
q3 = g.apply(lambda x: x[((x['Domestic'] == 0) & (x['Catsize'] == 0) |
                          (x['Domestic'] == 0) & (x['Catsize'] == 1) |
                          (x['Domestic'] == 1) & (x['Catsize'] == 0)
                          )]
             ['Count'].sum()
             )
q3
Type
1 1
2 11
3 14
4 31
This code works fine. However, if the number of variables in the dataframe increases, the number of conditions grows rapidly. So, is there a smart way to write a condition that says: if ANDing the two (or more) variables results in zero, then perform the sum()?
You can filter first using pd.DataFrame.all negated:
cols = ['Domestic', 'Catsize']
res = df[~df[cols].all(1)].groupby('Type')['Count'].sum()
print(res)
# Type
# 1 1
# 2 11
# 3 14
# 4 31
# Name: Count, dtype: int64
Use np.logical_and.reduce to generalise.
columns = ['Domestic', 'Catsize']
df[~np.logical_and.reduce(df[columns], axis=1)].groupby('Type')['Count'].sum()
Type
1 1
2 11
3 14
4 31
Name: Count, dtype: int64
To add the result back to the original dataframe, use map to broadcast:
u = df[~np.logical_and.reduce(df[columns], axis=1)].groupby('Type')['Count'].sum()
df['NewCol'] = df.Type.map(u)
df
Domestic Catsize Type Count NewCol
0 1 0 1 1 1
1 1 1 1 8 1
2 1 0 2 11 11
3 0 1 3 14 14
4 1 1 4 21 31
5 0 1 4 31 31
How about
columns = ['Domestic', 'Catsize']
df.loc[~df[columns].prod(axis=1).astype(bool), 'Count']
and then do with it whatever you want.
For logical AND the product does the trick nicely.
For logical OR you can use sum(axis=1) with proper negation in advance, as sketched below.
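A minimal sketch of that OR variant, assuming the columns are strictly 0/1 as in the example (column names taken from the question):
columns = ['Domestic', 'Catsize']
# sum(axis=1) > 0 behaves like a row-wise OR for 0/1 columns,
# so negating it keeps only the rows where the OR is zero
df.loc[~df[columns].sum(axis=1).astype(bool), 'Count'].sum()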

How to take mean of 3 values before flag change 0 to 1 - python

I have a dataframe with columns A, B and flag. I want to calculate the mean of the 2 values before the flag changes from 0 to 1, record the value where the flag changes from 0 to 1, and record the value where the flag changes from 1 to 0.
import pandas as pd

# Input dataframe
df = pd.DataFrame({'A': [1, 3, 4, 7, 8, 11, 1, 15, 20, 15, 16, 87],
                   'B': [1, 3, 4, 6, 8, 11, 1, 19, 20, 15, 16, 87],
                   'flag': [0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0]})

# Expected output
df_out = pd.DataFrame({'A_mean_before_flag_change': [5.5],
                       'B_mean_before_flag_change': [5],
                       'A_value_before_change_flag': [7],
                       'B_value_before_change_flag': [6]})
I try to create a more general solution:
df = pd.DataFrame({'A': [1, 3, 4, 7, 8, 11, 1, 15, 20, 15, 16, 87],
                   'B': [1, 3, 4, 6, 8, 11, 1, 19, 20, 15, 16, 87],
                   'flag': [0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1]})
print (df)
A B flag
0 1 1 0
1 3 3 0
2 4 4 0
3 7 6 0
4 8 8 1
5 11 11 1
6 1 1 1
7 15 19 0
8 20 20 0
9 15 15 1
10 16 16 0
11 87 87 1
First create groups using a mask of rows where flag is 0 and the next value of flag is 1:
m1 = df['flag'].eq(0) & df['flag'].shift(-1).eq(1)
df['g'] = m1.iloc[::-1].cumsum()
print (df)
A B flag g
0 1 1 0 3
1 3 3 0 3
2 4 4 0 3
3 7 6 0 3
4 8 8 1 2
5 11 11 1 2
6 1 1 1 2
7 15 19 0 2
8 20 20 0 2
9 15 15 1 1
10 16 16 0 1
11 87 87 1 0
Then filter out groups whose size is less than N:
N = 4
df1 = df[df['g'].map(df['g'].value_counts()).ge(N)].copy()
print (df1)
A B flag g
0 1 1 0 3
1 3 3 0 3
2 4 4 0 3
3 7 6 0 3
4 8 8 1 2
5 11 11 1 2
6 1 1 1 2
7 15 19 0 2
8 20 20 0 2
Keep the last N rows of each group:
df2 = df1.groupby('g').tail(N)
And aggregate with last and mean:
d = {'mean': '_mean_before_flag_change', 'last': '_value_before_change_flag'}
df3 = (df2.groupby('g')[['A', 'B']]
          .agg(['mean', 'last'])
          .sort_index(axis=1, level=1)
          .rename(columns=d))
df3.columns = df3.columns.map(''.join)
print(df3)
A_value_before_change_flag B_value_before_change_flag \
g
2 20 20
3 7 6
A_mean_before_flag_change B_mean_before_flag_change
g
2 11.75 12.75
3 3.75 3.50
I'm assuming that this needs to work for cases with more than one rising edge and that the consecutive values and averages get appended to the output lists:
# the first step is to extract the rising and falling edges using diff(), identify sections and length
df['flag_diff'] = df.flag.diff().fillna(0)
df['flag_sections'] = (df.flag_diff != 0).cumsum()
df['flag_sum'] = df.flag.groupby(df.flag_sections).transform('sum')
# then you can get the relevant indices by checking for the rising edges
rising_edges = df.index[df.flag_diff==1.0]
val_indices = [i-1 for i in rising_edges]
avg_indices = [(i-2,i-1) for i in rising_edges]
# and finally iterate over the relevant sections
df_out = pd.DataFrame()
df_out['A_mean_before_flag_change'] = [df.A.loc[tpl[0]:tpl[1]].mean() for tpl in avg_indices]
df_out['B_mean_before_flag_change'] = [df.B.loc[tpl[0]:tpl[1]].mean() for tpl in avg_indices]
df_out['A_value_before_change_flag'] = [df.A.loc[idx] for idx in val_indices]
df_out['B_value_before_change_flag'] = [df.B.loc[idx] for idx in val_indices]
df_out['length'] = [df.flag_sum.loc[idx] for idx in rising_edges]
df_out.index = rising_edges
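As a rough sanity check (the numbers below are hand-computed from the sample frame at the top of this question, so treat them as an assumption rather than captured output), this should reproduce the single expected row:
print(df_out)
# expected, hand-computed: one row at index 4 with
# A_mean_before_flag_change 5.5, B_mean_before_flag_change 5.0,
# A_value_before_change_flag 7, B_value_before_change_flag 6, length 3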

Calibrate with cph function (with external validation)

I have two questions about calibration with the cph function.
My data has 5 independent variables (BMI to RT) and 2 dependent variables (Time, Event).
> head(data)
BMI Taxanes Surgery LND RT Event Time
1 19 0 0 2 5 0 98
2 20 0 0 3 3 0 97
3 21 0 0 8 2 0 17
4 18 0 0 1 3 0 35
5 20 1 0 3 1 0 27
6 20 1 0 2 3 1 2
> str(data)
$ BMI : num 19 20 21 18 20 20 20 ...
$ Taxanes: int 0 0 0 0 1 1 1 0 0 0 ...
$ Surgery: num 0 0 0 0 0 0 1 0 0 0 ...
$ LND : int 2 3 8 1 3 2 2 2 5 2 ...
$ RT : Factor w/ 7 levels "0","1","2","3",..: 5 3 2 3 1 3 ...
$ Event : int 0 0 0 0 0 1 0 0 0 0 ...
$ Time : num 98 97 17 35 27 2 22 ...
(1) With this data, I did a survival analysis with a cph model, and I want to make a calibration plot from it. But I got the error "Error in x(x) : argument "y" is missing, with no default". I have looked at a lot of material, but I don't know the reason for this error. Even though I found the calibrate function on the web, I cannot work out what the element 'y' should be. Please help me with this question.
> ddist <- datadist(data)
> options(datadist='ddist')
>
> fit = cph(Surv(Time,Event) ~ BMI + Surgery + Taxanes + RT + LND, data=data, x=TRUE, y=TRUE, surv=TRUE, dxy=TRUE, time.inc=36)
> plot(calibrate(fit))
Using Cox survival estimates at 36 Days
**Error in x(x) : argument "y" is missing, with no default**
(2) Eventually I want to do external validation for this cph model (fit).
If the new data is named dat2 (and has the same variables as data), what are the observed and predicted survival? I know that the predicted values are calculated with code like this:
val <- val.surv(fit, newdata=dat2, S=Surv(dat2$Time, dat2$Event))
But how do I get the actual (observed) survival in the new data (dat2)? Please help with this problem. Thank you so much in advance!

How to select rows in a DataFrame based on every transition for particular values in a particular column?

I have a DataFrame with an ID column and a Value column that only contains (0, 1, 2). I want to capture only those rows where there is a transition from (0-1) or (1-2) in the Value column. This process has to be done for each ID separately.
I tried to group by ID and use a difference aggregation function, so that I can take those rows for which the difference of values is 1, but it fails in certain conditions.
df = df.loc[df['Value'].isin([0, 1, 2])]
df = df.sort_values(by=['UniqID'])
df['Value'].diff()
Given DataFrame:
Index UniqID Value
1    a    1
2    a    0
3    a    1
4    a    0
5    a    1
6    a    2
7    b    0
8    b    2
9    b    1
10    b    2
11    b    0
12    b    1
13    c    0
14    c    1
15    c    2
16    c    2
Expected Output:
2    a    0
3    a    1
4    a    0
5    a    1
6    a    2
9    b    1
10    b    2
11    b    0
12    b    1
13    c    0
14    c    1
15    c    2
I only expect those rows where there is a transition from either 0-1 or 1-2.
Thank you in advance.
Use my solution; it works per group with tuples of patterns:
import numpy as np
import pandas as pd

np.random.seed(123)
N = 100
d = {
    'UniqID': np.random.choice(list('abcde'), N),
    'Value': np.random.choice([0, 1, 2], N),
}
df = pd.DataFrame(d).sort_values('UniqID')
# print(df)

pat = [(0, 1), (1, 2)]
a = np.array(pat)

s = (df.groupby('UniqID')['Value']
       .rolling(2, min_periods=1)
       .apply(lambda x: np.all(x[None, :] == a, axis=1).any(), raw=True))

mask = (s.mask(s == 0)
         .groupby(level=0)
         .bfill(limit=1)
         .fillna(0)
         .astype(bool)
         .reset_index(level=0, drop=True))

df = df[mask]
print(df)
UniqID Value
99 a 1
98 a 2
12 a 1
63 a 2
38 a 0
41 a 1
9 a 1
72 a 2
64 b 1
67 b 2
33 b 0
68 b 1
57 b 1
71 b 2
10 b 0
8 b 1
61 c 1
66 c 2
46 c 0
0 c 1
40 c 2
21 d 0
74 d 1
15 d 1
85 d 2
6 d 1
88 d 2
91 d 0
83 d 1
4 d 1
34 d 2
96 d 0
48 d 1
29 d 0
84 d 1
32 e 0
62 e 1
37 e 1
55 e 2
16 e 0
23 e 1
Assuming the transition is strictly from 1 -> 2 or 0 -> 1 (this assumption is valid as well).
Similar Sample data:
index,id,value
1,a,1
2,a,0
3,a,1
4,a,0
5,a,1
6,a,2
7,b,0
8,b,2
9,b,1
10,b,2
11,b,0
12,b,1
13,c,0
14,c,1
15,c,2
16,c,2
Load this into a pandas dataframe.
Then, using the code below:
def grp_trns(x):
    x['dif'] = x.value.diff().fillna(0)
    return pd.DataFrame(list(x[x.dif == 1]['index'] - 1) + list(x[x.dif == 1]['index']))

target_index = df.groupby('id').apply(lambda x: grp_trns(x)).values.squeeze()
print(df[df['index'].isin(target_index)][['index', 'id', 'value']])
It gives the desired dataframe, based on that assumption:
index id value
1 2 a 0
2 3 a 1
3 4 a 0
4 5 a 1
5 6 a 2
8 9 b 1
9 10 b 2
10 11 b 0
11 12 b 1
12 13 c 0
13 14 c 1
14 15 c 2
Edit: To include the transition 1->0, below is the updated function:
def grp_trns(x):
    x['dif'] = x.value.diff().fillna(0)
    index1 = list(x[x.dif == 1]['index'] - 1) + list(x[x.dif == 1]['index'])
    index2 = list(x[(x.dif == -1) & (x.value == 0)]['index'] - 1) + list(x[(x.dif == -1) & (x.value == 0)]['index'])
    return pd.DataFrame(index1 + index2)
My version uses shift and diff() to delete all lines with a diff value equal to 0, 2 or -2:
df = pandas.DataFrame({'index': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16],
                       'UniqId': ['a', 'a', 'a', 'a', 'a', 'a', 'b', 'b', 'b', 'b', 'b', 'b', 'c', 'c', 'c', 'c'],
                       'Value': [1, 0, 1, 0, 1, 2, 0, 2, 1, 2, 0, 1, 0, 1, 2, 2]})
df['diff'] = np.NaN
for element in df['UniqId'].unique():
    df['diff'].loc[df['UniqId'] == element] = df.loc[df['UniqId'] == element]['Value'].diff()
df['diff'] = df['diff'].shift(-1)
df = df.loc[(df['diff'] != -2) & (df['diff'] != 2) & (df['diff'] != 0)]
print(df)
I am still waiting for updates about the 2-1 and 1-2 relationship.

Subset and Loop to create a new column [duplicate]

With the DataFrame below as an example,
In [83]:
df = pd.DataFrame({'A':[1,1,2,2],'B':[1,2,1,2],'values':np.arange(10,30,5)})
df
Out[83]:
A B values
0 1 1 10
1 1 2 15
2 2 1 20
3 2 2 25
What would be a simple way to generate a new column containing some aggregation of the data over one of the columns?
For example, if I sum values over items in A
In [84]:
df.groupby('A').sum()['values']
Out[84]:
A
1 25
2 45
Name: values
How can I get
A B values sum_values_A
0 1 1 10 25
1 1 2 15 25
2 2 1 20 45
3 2 2 25 45
In [20]: df = pd.DataFrame({'A':[1,1,2,2],'B':[1,2,1,2],'values':np.arange(10,30,5)})
In [21]: df
Out[21]:
A B values
0 1 1 10
1 1 2 15
2 2 1 20
3 2 2 25
In [22]: df['sum_values_A'] = df.groupby('A')['values'].transform(np.sum)
In [23]: df
Out[23]:
A B values sum_values_A
0 1 1 10 25
1 1 2 15 25
2 2 1 20 45
3 2 2 25 45
I found a way using join:
In [101]:
aggregated = df.groupby('A').sum()['values']
aggregated.name = 'sum_values_A'
df.join(aggregated,on='A')
Out[101]:
A B values sum_values_A
0 1 1 10 25
1 1 2 15 25
2 2 1 20 45
3 2 2 25 45
Does anyone have a simpler way to do it?
This is not so direct, but I found it very intuitive (using map to create a new column from another column), and it can be applied to many other cases:
gb = df.groupby('A').sum()['values']

def getvalue(x):
    return gb[x]

df['sum'] = df['A'].map(getvalue)
df
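A small side note (not from the original answer): map also accepts a Series directly, so the helper function can likely be dropped:
# gb is a Series indexed by the values of A, so it works as a lookup table for map
df['sum'] = df['A'].map(gb)
df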
In [15]: def sum_col(df, col, new_col):
   ....:     df[new_col] = df[col].sum()
   ....:     return df
In [16]: df.groupby("A").apply(sum_col, 'values', 'sum_values_A')
Out[16]:
A B values sum_values_A
0 1 1 10 25
1 1 2 15 25
2 2 1 20 45
3 2 2 25 45
