How to concat rows with the same column value in pandas - python-3.x

In my dataframe I want to concatenate the rows that share the same value in column x horizontally.
Here is my dataframe:
df=pd.DataFrame({'x':[-2,-4,-6,-7,-9,-2,-4,-6,-7,-9],'dd':[1,2,3,4,5,6,7,8,9,10]})
df_out:
df_out=pd.DataFrame({'x':[-2,-4,-6,-7,-9],'dd':[1,2,3,4,5],'dd1':[6,7,8,9,10]})

Use GroupBy.cumcount to build a per-group counter, then reshape with Series.unstack:
df = (df.set_index(['x', df.groupby('x').cumcount()])['dd']
        .unstack()
        .sort_index(ascending=False)
        .add_prefix('dd')
        .reset_index())
print (df)
x dd0 dd1
0 -2 1 6
1 -4 2 7
2 -6 3 8
3 -7 4 9
4 -9 5 10
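For reference, here is a minimal sketch (using the same df as above, not part of the original answer) showing the intermediate counter that unstack turns into the new column labels:

import pandas as pd

df = pd.DataFrame({'x': [-2, -4, -6, -7, -9, -2, -4, -6, -7, -9],
                   'dd': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]})

# cumcount numbers repeated x values within each group:
# 0 for the first occurrence, 1 for the second, and so on.
print(df.groupby('x').cumcount().tolist())
# [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]

These counters become the column labels dd0 and dd1 after unstack and add_prefix.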

Related

How to calculate the having statement in pandas dataframe [duplicate]

I'm using groupby on a pandas dataframe to drop all rows that don't have the minimum of a specific column. Something like this:
df1 = df.groupby("item", as_index=False)["diff"].min()
However, if I have more than those two columns, the other columns (e.g. otherstuff in my example) get dropped. Can I keep those columns using groupby, or am I going to have to find a different way to drop the rows?
My data looks like:
item diff otherstuff
0 1 2 1
1 1 1 2
2 1 3 7
3 2 -1 0
4 2 1 3
5 2 4 9
6 2 -6 2
7 3 0 0
8 3 2 9
and should end up like:
item diff otherstuff
0 1 1 2
1 2 -6 2
2 3 0 0
but what I'm getting is:
item diff
0 1 1
1 2 -6
2 3 0
I've been looking through the documentation and can't find anything. I tried:
df1 = df.groupby(["item", "otherstuff"], as_index=false)["diff"].min()
df1 = df.groupby("item", as_index=false)["diff"].min()["otherstuff"]
df1 = df.groupby("item", as_index=false)["otherstuff", "diff"].min()
But none of those work (I realized with the last one that the syntax is meant for aggregating after a group is created).
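For the answers below, the question's sample data can be rebuilt from the table above (a minimal sketch, values copied from the question):

import pandas as pd

df = pd.DataFrame({
    'item': [1, 1, 1, 2, 2, 2, 2, 3, 3],
    'diff': [2, 1, 3, -1, 1, 4, -6, 0, 2],
    'otherstuff': [1, 2, 7, 0, 3, 9, 2, 0, 9],
})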
Method #1: use idxmin() to get the indices of the elements of minimum diff, and then select those:
>>> df.loc[df.groupby("item")["diff"].idxmin()]
item diff otherstuff
1 1 1 2
6 2 -6 2
7 3 0 0
[3 rows x 3 columns]
Method #2: sort by diff, and then take the first element in each item group:
>>> df.sort_values("diff").groupby("item", as_index=False).first()
item diff otherstuff
0 1 1 2
1 2 -6 2
2 3 0 0
[3 rows x 3 columns]
Note that the resulting indices are different even though the row content is the same.
You can use DataFrame.sort_values with DataFrame.drop_duplicates:
df = df.sort_values(by='diff').drop_duplicates(subset='item')
print (df)
item diff otherstuff
6 2 -6 2
7 3 0 0
1 1 1 2
If multiple minimal values per group are possible and you want all of the min rows, use boolean indexing with transform, which broadcasts the minimal value to every row of its group:
print (df)
item diff otherstuff
0 1 2 1
1 1 1 2 <-multiple min
2 1 1 7 <-multiple min
3 2 -1 0
4 2 1 3
5 2 4 9
6 2 -6 2
7 3 0 0
8 3 2 9
print (df.groupby("item")["diff"].transform('min'))
0 1
1 1
2 1
3 -6
4 -6
5 -6
6 -6
7 0
8 0
Name: diff, dtype: int64
df = df[df.groupby("item")["diff"].transform('min') == df['diff']]
print (df)
item diff otherstuff
1 1 1 2
2 1 1 7
6 2 -6 2
7 3 0 0
The above answer works great if there is (or you want) a single min. In my case there could be multiple mins and I wanted all rows equal to the min, which .idxmin() doesn't give you. This worked:
def filter_group(dfg, col):
    return dfg[dfg[col] == dfg[col].min()]
df = pd.DataFrame({'g': ['a'] * 6 + ['b'] * 6, 'v1': (list(range(3)) + list(range(3))) * 2, 'v2': range(12)})
df.groupby('g',group_keys=False).apply(lambda x: filter_group(x,'v1'))
As an aside, .filter() is also relevant to this question but didn't work for me.
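For context, .filter() keeps or drops whole groups rather than individual rows, which is probably why it doesn't fit here. A minimal sketch, assuming df holds the question's original item/diff data (as rebuilt in the sketch after the question):

# filter() returns every row of the groups that satisfy the condition --
# here all four item-2 rows, not just the row with diff == -6.
print(df.groupby('item').filter(lambda g: g['diff'].min() < 0))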
I tried everyone's method and I couldn't get it to work properly. Instead I did the process step-by-step and ended up with the correct result.
df.sort_values(by='diff', inplace=True, ignore_index=True)
df.drop_duplicates(subset='item', inplace=True, ignore_index=True)
df.sort_values(by='item', inplace=True, ignore_index=True)
For a little more explanation:
Sort by the column whose minimum you want, so the smallest diff of each item comes first
Drop the duplicates of the grouping column ('item'), which keeps only that first (minimum) row per item
Re-sort, because the data is still ordered by the minimum values
If you know that every one of your "items" has more than one record, you can sort and then use duplicated:
df.sort_values(by='diff').duplicated(subset='item', keep='first')
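To actually select the rows, that boolean mask would be inverted and applied back; a minimal sketch, again using the question's item/diff data:

sorted_df = df.sort_values(by='diff')
mask = sorted_df.duplicated(subset='item', keep='first')
# keep only the first (minimum-diff) row of each item
print(sorted_df[~mask])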

How to split dataframe by column value condition, pandas

I want to split a dataframe into different lists based on a column value condition.
Here is a dataframe example.
df=pd.DataFrame({'flag_1':[1,2,3,1,2,500,498,495,1,1,1,1,1,500,440,430,2,3,4,4],'dd':[1,1,1,7,7,7,8,8,8,1,1,1,7,7,7,8,8,8,5,7]})
df_out
df_out=pd.DataFrame({'flag_1':[500,498,495,500,440,430],'dd':[7,8,8,7,7,8]})
Try this (the threshold only needs to separate the small values from the large runs, so 100 is used here):
grp = (df['flag_1'] < 100).cumsum()
pd.concat({n: g[1:] for n, g in df.groupby(grp) if len(g) > 1}, ignore_index=True)
Output:
flag_1 dd
0 500 7
1 498 8
2 495 8
3 500 7
4 440 7
5 430 8
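For reference, a minimal sketch of the intermediate grouper under the < 100 threshold: each run of large values shares a label with the small row just before it, and g[1:] then drops that leading small row.

grp = (df['flag_1'] < 100).cumsum()
print(grp.tolist())
# [1, 2, 3, 4, 5, 5, 5, 5, 6, 7, 8, 9, 10, 10, 10, 10, 11, 12, 13, 14]
# Only groups 5 and 10 have more than one row; slicing off their first row
# leaves exactly the rows of df_out.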

Count positive values for each column of a dataframe

Is it possible to count the positive values of each column in a dataframe?
I tried to do this with 'count':
import pandas as pd
import numpy as np
np.random.seed(18)
df = pd.DataFrame(np.random.randint(-10,10,size=(5, 4)), columns=list('ABCD'))
print(df)
A B C D
0 0 9 -5 7
1 4 8 -8 -2
2 -8 7 -5 5
3 0 0 1 -6
4 -6 1 -9 -7
positive_count = df.gt(0).count()
print(positive_count)
A 5
B 5
C 5
D 5
dtype: int64
The "gt" (greater than) seems doesn't work.
I tried with 'value_counts', and it works for column 'A' in this example
positive_count = df['A'].gt(0).value_counts()[1]
But I would like to get this result for all columns at one time.
Does anyone have an idea to help me?
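A minimal sketch of one common way to do this for all columns at once: count() counts every non-NA cell (which is why each column came back as 5), whereas summing the boolean frame from gt(0) counts only the True values:

positive_count = df.gt(0).sum()
print(positive_count)
# A    1
# B    4
# C    1
# D    2
# dtype: int64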

How to perform arithmetic operations with specific elements of a dataframe?

I am trying to understand how to perform arithmetic operations on a dataframe in python.
import pandas as pd
import numpy as np
df = pd.DataFrame({'col1':[2,38,7,5],'col2':[1,3,2,4]})
print (df.sum())
This is what I'm getting (in terms of the output), but I want to have more control over which sum I am getting.
col1 52
col2 10
dtype: int64
Just wondering how I would add individual elements in the dataframe together.
Your question is not very clear, but I will try to cover the possible scenarios.
Input:
df
col1 col2
0 2 1
1 38 3
2 7 2
3 5 4
If you want the sum of columns,
df.sum(axis = 0)
Output:
col1 52
col2 10
dtype: int64
If you want the sum of rows,
df.sum(axis = 1)
0 3
1 41
2 9
3 9
dtype: int64
If you want to add a list of numbers to a column,
num = [1, 2, 3, 4]
df['col1'] = df['col1'] + num
df
Output:
col1 col2
0 3 1
1 40 3
2 10 2
3 9 4
If you want to add a list of numbers to a row,
num = [1, 2]
df.loc[0] = df.loc[0] + num
df
Output:
col1 col2
0 3 3
1 38 3
2 7 2
3 5 4
If you want to add a single number to a column,
df['col1'] = df['col1'] + 2
df
Output:
col1 col2
0 4 1
1 40 3
2 9 2
3 7 4
If you want to add a single number to a row,
df.loc[0] = df.loc[0] + 2
df
Output:
col1 col2
0 4 3
1 38 3
2 7 2
3 5 4
If you want to add a number to a single element (the element at row i and column j),
df.iloc[1,1] = df.iloc[1,1] + 5
df
Output:
col1 col2
0 2 1
1 38 8
2 7 2
3 5 4
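And if the goal is to add up only specific elements rather than a whole row or column, label- or position-based selection can be summed directly. A minimal sketch, starting again from the original df above (the particular slices are just examples):

# Sum part of one column (the first three rows of col1): 2 + 38 + 7
print(df['col1'].iloc[0:3].sum())      # 47

# Add two individual elements together: row 1 of col1 plus row 3 of col2
print(df.iloc[1, 0] + df.iloc[3, 1])   # 38 + 4 = 42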

Placing n rows of a pandas dataframe into their own dataframe

I have a large dataframe with many rows and columns.
An example of the structure is:
a = np.random.rand(6,3)
df = pd.DataFrame(a)
I'd like to split the DataFrame into separate dataframes, each consisting of 3 rows.
You can use groupby:
g = df.groupby(np.arange(len(df)) // 3)
for n, grp in g:
    print(grp)
0 1 2
0 0.278735 0.609862 0.085823
1 0.836997 0.739635 0.866059
2 0.691271 0.377185 0.225146
0 1 2
3 0.435280 0.700900 0.700946
4 0.796487 0.018688 0.700566
5 0.900749 0.764869 0.253200
To get it into a handy dictionary:
mydict = {k: v for k, v in g}
You can use the numpy.split() method:
In [8]: df = pd.DataFrame(np.random.rand(9, 3))
In [9]: df
Out[9]:
0 1 2
0 0.899366 0.991035 0.775607
1 0.487495 0.250279 0.975094
2 0.819031 0.568612 0.903836
3 0.178399 0.555627 0.776856
4 0.498039 0.733224 0.151091
5 0.997894 0.018736 0.999259
6 0.345804 0.780016 0.363990
7 0.794417 0.518919 0.410270
8 0.649792 0.560184 0.054238
In [10]: for x in np.split(df, len(df)//3):
    ...:     print(x)
    ...:
0 1 2
0 0.899366 0.991035 0.775607
1 0.487495 0.250279 0.975094
2 0.819031 0.568612 0.903836
0 1 2
3 0.178399 0.555627 0.776856
4 0.498039 0.733224 0.151091
5 0.997894 0.018736 0.999259
0 1 2
6 0.345804 0.780016 0.363990
7 0.794417 0.518919 0.410270
8 0.649792 0.560184 0.054238
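The np.split approach requires the row count to divide evenly into the requested number of sections; when it doesn't, np.array_split is the forgiving variant (a minimal sketch, not part of the original answers):

import numpy as np
import pandas as pd

# With 8 rows, np.split(df, 3) would raise because 8 is not divisible by 3;
# np.array_split returns chunks of roughly equal size instead.
df = pd.DataFrame(np.random.rand(8, 3))
for chunk in np.array_split(df, 3):
    print(len(chunk))
# 3, 3, 2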

Resources