How to select a bunch of rows - python-3.x

I have a dataframe with multiple columns. I want to select a bunch of rows where column B has consecutive 1s, check whether column A in these rows has any value equal to 0.04, and if so keep this bunch of rows and extract the start value and end value of column A for it.
Here is my dataframe, and here is my desired output (both were shown as images in the original post and are not reproduced here).

Filter the consecutive groups created by .diff().abs().cumsum().bfill() that do not follow the specific conditions (x['B'].eq(1).any() and x['A'].eq(0.04).any()), followed by grouping on the consecutivity column to extract the first and last rows with the agg function:
df['temp'] = df.B.diff().abs().cumsum().bfill()   # label each consecutive run of B
df.groupby('temp').filter(lambda x: (x['B'].eq(1).any() and x['A'].eq(0.04).any()))\
  .groupby('temp').agg({'A': ['first', 'last']})
Out:
          A
      first  last
temp
3.0   344.0  39.9
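
Since the question's dataframe was posted as an image, here is a minimal self-contained sketch with made-up data (the values in df are assumptions, not the asker's) showing the pipeline end to end:

import pandas as pd

# Hypothetical data: B has two runs of 1s, but only the first run
# contains A == 0.04, so only that run survives the filter.
df = pd.DataFrame({'A': [344.0, 0.04, 39.9, 1.0, 2.0],
                   'B': [1, 1, 1, 0, 1]})

df['temp'] = df.B.diff().abs().cumsum().bfill()   # run 0: rows 0-2, run 1: row 3, run 2: row 4
out = (df.groupby('temp')
         .filter(lambda x: x['B'].eq(1).any() and x['A'].eq(0.04).any())
         .groupby('temp')
         .agg({'A': ['first', 'last']}))
print(out)
#           A
#       first  last
# temp
# 0.0   344.0  39.9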

Related

How to iterate over columns and save each iteration's result in a result dataframe (Python)

I have a dataframe with multiple columns and I need to calculate the same thing for every column. Is there any way to do this? I have many columns, so I cannot do it one by one.
df=pd.DataFrame({r'A':[1,24,69,67],r'A\0001\delta':[1,46,454,67],r'A\0002\delta':[1,46,454,67],r'A\00100\delta':[1,46,70,67]})
I want to calculate:
diff = df[r'A\0001\delta'].diff()
and if the diff is greater than 60, save the row in a result dataframe. I want to do the same thing for more than 100 columns and collect the matching rows in the result dataframe.
At least one diff greater than 60 in the row:
>>> df.loc[df.diff().gt(60).any(axis=1)]
    A  A\0001\delta  A\0002\delta  A\00100\delta
2  69           454           454             70
All diffs greater than 60 in the row:
>>> df.loc[df.diff().gt(60).all(axis=1)]
Empty DataFrame
Columns: [A, A\0001\delta, A\0002\delta, A\00100\delta]
Index: []
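
If you also want to know which column triggered the match, a loop-based sketch (an alternative to the vectorized answer above; the matched_column helper column is my own addition, not part of the answer) could look like this:

import pandas as pd

df = pd.DataFrame({r'A': [1, 24, 69, 67],
                   r'A\0001\delta': [1, 46, 454, 67],
                   r'A\0002\delta': [1, 46, 454, 67],
                   r'A\00100\delta': [1, 46, 70, 67]})

# Collect the rows whose diff exceeds 60, once per matching column.
pieces = []
for col in df.columns:
    hit = df.loc[df[col].diff().gt(60)]
    if not hit.empty:
        pieces.append(hit.assign(matched_column=col))
result = pd.concat(pieces)
print(result)   # row 2 appears twice: once for A\0001\delta, once for A\0002\delta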

How to add an empty column in a dataframe using pandas (without specifying column names)?

I have a dataframe with only one column (headerless). I want to add another empty column to it with the same number of rows.
To make it clearer: currently the size of my dataframe is 1050×1 (since it has only one column), and I want the new size to be 1050×2, with the second column completely empty.
A pandas DataFrame always has columns, so for a new default column filled with missing values, use the current number of columns as the new column's name:
import numpy as np
import pandas as pd

s = pd.Series([2, 3, 4])
df = s.to_frame()
df[len(df.columns)] = np.nan
# which, for a one-column df, is the same as
# df[1] = np.nan
print(df)
   0    1
0  2  NaN
1  3  NaN
2  4  NaN
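
Applied to the asker's situation, a sketch would be (the filename data.csv is hypothetical):

import numpy as np
import pandas as pd

df = pd.read_csv('data.csv', header=None)   # headerless: columns are named 0, 1, ...
df[len(df.columns)] = np.nan                # adds an empty column named 1
# df.shape is now (1050, 2) for a 1050-row file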

Compare a row with all previous strings in one column and change the value of another column in Python

I have a csv file named namelist.csv, it includes:
Index  String        Size  Name
1      AAA123000DDD    10  One
2      AAA123DDDQQQ    20  One
3      AAA123000DDD    25  One
4      AAA123D         20  One
5      ABA             15  One
6      FFFrrrSSSBBB    60  Two
7      FFFrrrSSSBBB    30  Two
8      FFFrrrSS        50  Two
9      AAA12           70  Two
I want to compare the rows in column String within each Name group: if the string in a row matches, or is a substring of, a row above it, remove that previous row and add its Size value to the Size of the current (substring) row.
Example: take the 3rd row, AAA123000DDD, and compare it to rows 1 and 2. It matches row 1, so row 1 is removed and its Size is added to the Size in row 3.
Then the table will look like:
Index  String        Size  Name
2      AAA123DDDQQQ    20  One
3      AAA123000DDD    35  One
4      AAA123D         20  One
...
The final result will be:
Index  String        Size  Name
3      AAA123000DDD    35  One
4      AAA123D         40  One
5      ABA             15  One
8      FFFrrrSS       140  Two
9      AAA12           70  Two
I am thinking of using pandas groupby on the Name column, but I don't know how to apply the comparison of the String column and the sum of the Size column.
I am new to Python, so I will greatly appreciate any help.
Assuming each String always carries the same Name, here's how you would do the aggregation. I kept Name so that it also shows in the final DataFrame.
df_group = df.groupby(['String', 'Name'])['Size'].sum().reset_index()
Edit:
To match the substrings (and it appears from the example above that a substring will not match multiple strings), you can build a mapping of substrings to full strings and then group by the full-string column as before:
all_strings = set(df['String'])   # note: the column is 'String', not 'Strings'
substring_dict = dict()
for row in df.itertuples():
    for item in all_strings:
        if row.String in item:
            substring_dict[row.String] = item

def match_substring(x):
    return substring_dict[x]

df['full_strings'] = df.String.apply(match_substring)
df_group = df.groupby(['full_strings', 'Name'])['Size'].sum().reset_index()
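
To try the snippets above, the question's table can be rebuilt as a DataFrame like this. One caveat: some strings here (e.g. AAA12) are substrings of more than one longer string, so which full string they map to depends on set iteration order; the answer's assumption holds only when each substring has a unique superstring.

import pandas as pd

# The sample data from the question.
df = pd.DataFrame({
    'String': ['AAA123000DDD', 'AAA123DDDQQQ', 'AAA123000DDD', 'AAA123D',
               'ABA', 'FFFrrrSSSBBB', 'FFFrrrSSSBBB', 'FFFrrrSS', 'AAA12'],
    'Size':   [10, 20, 25, 20, 15, 60, 30, 50, 70],
    'Name':   ['One', 'One', 'One', 'One', 'One', 'Two', 'Two', 'Two', 'Two'],
})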

pandas - grouping values by pair of columns and pivoting

I've been struggling to think what to do here; pivoting and melting and whatnot don't seem to be working out. I tried joining the names of the to/from destinations together and then re-ordering the combined names, but it was a total mess.
My data concerns flows from one location to another, it's in the format:
pd.DataFrame(columns=['from_location','to_location','flow'],data =[['a','b',1],['b','a',3]])
  from_location to_location  flow
0             a           b     1
1             b           a     3
but my output needs to be the format:
pd.DataFrame(columns=['connection','flow','back flow','net'],data =[['a -> b',1,3,2]])
  connection  flow  back flow  net
0     a -> b     1          3    2
Are there any nice built-in functions that can rearrange things like this? I'm not even sure what keywords to search for.
Use:
import numpy as np
import pandas as pd

# df = df.sort_values(['from_location','to_location'])
df1 = pd.DataFrame(np.sort(df[['from_location','to_location']], axis=1),
                   columns=list('ab'), index=df.index)
s = df1['a'] + ' -> ' + df1['b']
df2 = df.groupby(s)['flow'].agg(['first','last']).assign(net=lambda x: x['last'] - x['first'])
print(df2)
        first  last  net
a -> b      1     3    2
Explanation:
If necessary, first sort_values, in case some paired rows are swapped.
Sort the columns within each row by numpy.sort and join the columns together with a separator.
Then groupby the joined values and aggregate by agg with first and last.
Last, if you need to subtract the columns, add the new column with assign.
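
To match the asker's desired column layout exactly, a small follow-up sketch (not part of the original answer) renames the aggregated columns:

df2 = df2.rename(columns={'first': 'flow', 'last': 'back flow'})
df2.index.name = 'connection'
print(df2.reset_index())
#   connection  flow  back flow  net
# 0     a -> b     1          3    2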

pandas: generate a new column based on values from another column, considering duplicates

I am working on a dataframe that has a column in which each value is a list. I want to derive a new column that only considers lists whose size is greater than 1 and assigns a unique integer id to the corresponding row. If the elements of two lists are the same but in a different order, the two lists should be assigned the same id. A sample dataframe looks like:
document_no_list  cluster_id
[1,2,3]           1
[3,2,1]           1
[4,5,6,7]         2
[8]               0
[9,10]            3
[10,9]            3
Column cluster_id only considers the 1st, 2nd, 3rd, 5th and 6th rows, each of which has a list of size greater than 1, and assigns a unique integer id to the corresponding cell; [1,2,3] and [3,2,1], as well as [9,10] and [10,9], should be assigned the same cluster_id.
I asked a similar question, without considering duplicate list values, at
pandas how to derived values for a new column base on another column
I am wondering how to do that in pandas.
First, you need to assign a column with the list lengths, and another column with the lists sorted and converted to tuples (tuples are hashable, so drop_duplicates and merge work on them, while plain lists would raise a TypeError):
df['list_len'] = df.document_no_list.apply(len)
df['list_sorted'] = df.document_no_list.apply(lambda x: tuple(sorted(x)))
Then you need to assign a cluster_id for each distinct sorted tuple:
ids = df.loc[df.list_len > 1, ['list_sorted']].drop_duplicates()
ids['cluster_id'] = range(1, len(ids) + 1)
Left join this onto the original dataframe, and fill whatever hasn't been joined (the singletons) with zeros:
df = df.merge(ids, how='left').fillna({'cluster_id': 0})
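
A self-contained run on the question's sample data (the tuple conversion is my adjustment to the answer, as noted above; the final astype is optional and just turns the ids back into integers after fillna):

import pandas as pd

df = pd.DataFrame({'document_no_list': [[1, 2, 3], [3, 2, 1], [4, 5, 6, 7],
                                        [8], [9, 10], [10, 9]]})

df['list_len'] = df.document_no_list.apply(len)
df['list_sorted'] = df.document_no_list.apply(lambda x: tuple(sorted(x)))

ids = df.loc[df.list_len > 1, ['list_sorted']].drop_duplicates()
ids['cluster_id'] = range(1, len(ids) + 1)

df = df.merge(ids, how='left').fillna({'cluster_id': 0})
df['cluster_id'] = df['cluster_id'].astype(int)   # fillna leaves floats
print(df[['document_no_list', 'cluster_id']])
# matches the sample: ids 1, 1, 2, 0, 3, 3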
