I have a dataframe which I divide into 3 sub-dataframes. Then I apply aggregate functions. After that, I merge the 3 dataframes.
However, when comparing the number of rows before and after the merge, there is a significant loss, even though I used a command to fill in blanks to preserve the row count. I think the aggregation code is what trimmed everything. Maybe there is a better way to write that portion of the code which will fix the rest of it.
In: df.info()
Out:
<class 'pandas.core.frame.DataFrame'>
Int64Index: 505960 entries, 640051 to 204623
Data columns (total 4 columns):
id 505960 non-null int64
session_number 505960 non-null int64
date 505960 non-null datetime64[ns]
purchases 505960 non-null int64
dtypes: datetime64[ns](1), int64(3)
memory usage: 19.3 MB
In: df.shape
Out: (505960, 4)
In:
#slice main dataframe
df_test=df[['id','purchases','session_number','date']].copy()
#aggregations I THINK HERE IS THE PROBLEM SOURCE!
df_1=df_test.groupby(['id'])["purchases"].apply(lambda x : x.astype(int).sum()).reset_index()
df_2=df_test.groupby(['id'])["session_number"].apply(lambda y : y.max()-y.min()).astype(int).reset_index()
df_3=df_test.groupby(['id'])["date"].apply(lambda z : z.max()-z.min()).reset_index()
#merge dfs sequentially by id
df_a=pd.merge(df_1, df_2, on='id', how='left').fillna(0)
df=pd.merge(df_a, df_3, on='id', how='left').fillna(0)
In: df.shape
Out: (292291, 4)
You can see that my rows shrank from 505,960 to 292,291! What am I doing wrong in the aggregation portion of the code, and how do I fix it?
Looking at the given code and the metadata about the data: groupby collapses all records that share the same id into a single group, so the total record count will decrease whenever the id values are not unique. The count of unique id's should equal the final record count after the groupby.
df['id'].nunique() will give you the count of unique id's, which should match your final count.
When you call df_test.groupby(['id']), it generates a GroupBy object and sets the group key as the index, which is 'id' in this case.
Hence, drop the reset_index() calls and merge on the index instead:
df_a = df_1.merge(df_2, left_index=True, right_index=True).fillna(0)
df = df_a.merge(df_3, left_index=True, right_index=True).fillna(0)
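If the goal is simply one row per id with all three aggregates, a single groupby().agg() call avoids the separate merges entirely. Here is a minimal sketch, assuming the column names shown above (the output column names are just illustrative):
import pandas as pd

# one groupby pass: total purchases, session range, and date span per id
summary = df_test.groupby('id').agg(
    purchases=('purchases', 'sum'),
    session_range=('session_number', lambda s: s.max() - s.min()),
    date_span=('date', lambda s: s.max() - s.min()),
).reset_index()
This produces one row per unique id, which is why the row count drops to the number of distinct id values.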
I have the following data, with one row per ID and DATE. A person with the same ID can occupy multiple rows, hence multiple dates. I want to aggregate it into one row per person (or ID), with the dates aggregated into a list of dates.
From this
ID DATE
1 2012-03-04
1 2013-04-15
1 2019-01-09
2 2013-04-09
2 2016-01-01
2 2018-05-09
To this
ID DATE
1 [2012-03-04, 2013-04-15, 2019-01-09]
2 [2013-04-09, 2016-01-01, 2018-05-09]
Here is my attempt
df.sort_values(by=['ID', 'DATE'], ascending=True, inplace=True)
df = df[['ID', 'DATE']]
df_pivot = df.groupby('ID').aggregate(lambda tdf: tdf.unique().tolist())
df_pivot = pd.DataFrame(df_pivot.to_records())
The problem is that it returns something like this:
ID DATE
1 [1375228800000000000, 1411948800000000000, 1484524800000000000]
2 [1524528000000000000, 1529539200000000000, 1529542200000000000]
What kind of date format is this? I can't seem to find the right function to convert it back to the typical date format.
If you need unique values in the lists, use DataFrame.drop_duplicates before aggregating to lists:
df = (df.sort_values(by=['ID', 'DATE'], ascending=True)
.drop_duplicates(['ID', 'DATE'])
.groupby('ID')['DATE']
.agg(list))
Your solution should also work, but it is slow:
df_pivot = df.groupby('ID')['DATE'].aggregate(lambda tdf: tdf.drop_duplicates().tolist())
What kind of date format is this?
Those are the underlying integer representations of the datetimes: Unix epoch time in nanoseconds.
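If you are stuck with those raw integers and want timestamps back, pd.to_datetime can convert them. A minimal sketch, using values copied from the output above:
import pandas as pd

# the integers are nanoseconds since the Unix epoch
ns_values = [1375228800000000000, 1411948800000000000]
timestamps = pd.to_datetime(ns_values, unit='ns')
print(timestamps)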
There are many ways; agg is preferred because apply can be very slow:
df.groupby('ID')['DATE'].agg(list)
Or
df.groupby('ID')['DATE'].apply(lambda x: x.to_list())
Simply use the groupby() and apply() methods:
result=df.groupby('ID')['DATE'].apply(list)
OR
result=df.groupby('ID')['DATE'].agg(list)
Now, if you print result, you will get the desired output:
ID
1 [2012-03-04, 2013-04-15, 2019-01-09]
2 [2013-04-09, 2016-01-01, 2018-05-09]
Name: DATE, dtype: object
The above code gives you a Series. If you want a DataFrame, then use:
result=df.groupby('ID')['DATE'].apply(list).reset_index()
I have 2 dataframes
df1
Report Period 170 non-null object
Links 170 non-null object
and
df2
reportmonth 1965 non-null object
links 1965 non-null object
I want to compare df1 to df2 on ['reportmonth'] = ['Report Period'] and ['Links'] = ['links']
I want to return all records from df1 that don't exist in df2 based on the condition above.
Something to the effect of
full_table[full_table['Links'] not in (dfold['links']) and full_table['Report Period'] not in (dfold['reportmonth'])]
but that code gave me the error:
'Series' objects are mutable, thus they cannot be hashed
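One common way to express this kind of anti-join is a left merge with indicator=True, keeping only the rows found in df1 alone. A minimal sketch, assuming the column names described above:
import pandas as pd

# left-merge df1 onto df2 on the two key pairs, then keep rows that only exist in df1
merged = df1.merge(df2,
                   left_on=['Report Period', 'Links'],
                   right_on=['reportmonth', 'links'],
                   how='left',
                   indicator=True)
only_in_df1 = merged[merged['_merge'] == 'left_only'][df1.columns]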
So basically I have 3 columns in my dataframe as follows:
<class 'pandas.core.frame.DataFrame'>
Int64Index: 158143 entries, 0 to 203270
Data columns (total 3 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 users 158143 non-null int64
1 dates 158143 non-null datetime64[ns]
2 medium_of_ans 158143 non-null object
I want it to be reshaped so that each distinct medium_of_ans value gets its own column, the dates become the row index, and the users value for a particular answer medium on a particular date sits at the intersection of that row and column. In pandas this is normally achieved by pivoting the dataframe, but the following attempt:
df.pivot(columns= 'medium_of_ans', index = 'dates', values = 'users')
throws this error:
ValueError: Index contains duplicate entries, cannot reshape
I'm not sure why, since a dataframe being pivoted will obviously have duplicate values in the index; that's why it is being pivoted in the first place. Resetting the dataframe index as follows:
df.reset_index().pivot(columns= 'medium_of_ans', index = 'dates', values = 'users')
does not help either, and the error persists.
You have duplicates not just in the index (dates) but in the combination of index and column together: the same dates and medium_of_ans pair appears more than once.
You can find these duplicates with something like this:
counts = df.groupby(['dates', 'medium_of_ans']).size().reset_index(name='n')
duplicates = counts[counts['n'] > 1]
If you want to combine the duplicates, for example by taking the mean of users for the cell, then you can use pivot_table.
df.pivot_table(columns='medium_of_ans', index='dates', values='users', aggfunc='mean')
Taking the mean is the default, but I have added the explicit parameter for clarity.
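If it helps to see what pivot_table is doing under the hood, the same result can be produced with groupby plus unstack. A minimal sketch under the same column names:
# average users per (dates, medium_of_ans) pair, then move medium_of_ans into the columns
wide = (df.groupby(['dates', 'medium_of_ans'])['users']
          .mean()
          .unstack('medium_of_ans'))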
I am using pandas.merge to join two dataframes based on the values of two columns. I checked the data types in both dataframes, and they are the same. Besides, for each of the two columns in both datasets I calculated the intersection of the value sets, and it is definitely not empty. However, merge is not working properly and I just cannot find a reason for it.
I will post a piece of code here, though I don't have a minimal dataset to reproduce it; the data are too big. There are datasets df and q0, both of which have the columns permno and date, and I want to merge based on them.
# just to make sure the data types are the same
df['permno'] = df['permno'].astype(int)
q0['permno'] = q0['permno'].astype(int)
df['date'] = df['date'].dt.to_period('D')
q0['date'] = q0['date'].dt.to_period('D')
print(df.dtypes, q0.dtypes)
Output for q0:
date period[D]
permno int64
sum float64
dtype: object
Output for df:
permno int64
date period[D]
sic int64
prc float64
...
Another sanity check is to take the intersection of the column values:
print(len(set(df.date.unique())&set(q0.date.unique())))
print(len(set(df.permno.unique())&set(q0.permno.unique())))
Output:
9154
5925
Merge:
df = pd.merge(df, q0, on=['permno', 'date'], how='inner')
print(len(df))
Output:
0
I tried it so many times but I can't figure out why it is not working now.
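One thing the separate per-column intersections above do not establish is whether any (permno, date) pair occurs in both frames; the merge key is the pair, not each column on its own. A minimal diagnostic sketch, assuming the frames as described:
# how many (permno, date) pairs do the two frames actually share?
pairs_df = set(zip(df['permno'], df['date']))
pairs_q0 = set(zip(q0['permno'], q0['date']))
print(len(pairs_df & pairs_q0))
If that prints 0, the inner merge is correctly returning an empty result even though each key column overlaps individually.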
I have 2 dataframes:
restaurant_ids_dataframe
Data columns (total 13 columns):
business_id 4503 non-null values
categories 4503 non-null values
city 4503 non-null values
full_address 4503 non-null values
latitude 4503 non-null values
longitude 4503 non-null values
name 4503 non-null values
neighborhoods 4503 non-null values
open 4503 non-null values
review_count 4503 non-null values
stars 4503 non-null values
state 4503 non-null values
type 4503 non-null values
dtypes: bool(1), float64(3), int64(1), object(8)
and
restaurant_review_frame
Int64Index: 158430 entries, 0 to 229905
Data columns (total 8 columns):
business_id 158430 non-null values
date 158430 non-null values
review_id 158430 non-null values
stars 158430 non-null values
text 158430 non-null values
type 158430 non-null values
user_id 158430 non-null values
votes 158430 non-null values
dtypes: int64(1), object(7)
I would like to join these two DataFrames to make them into a single dataframe using the DataFrame.join() command in pandas.
I have tried the following line of code:
#the following line of code creates a left join of restaurant_ids_frame and restaurant_review_frame on the column 'business_id'
restaurant_review_frame.join(other=restaurant_ids_dataframe,on='business_id',how='left')
But when I try this I get the following error:
Exception: columns overlap: Index([business_id, stars, type], dtype=object)
I am very new to pandas and have no clue what I am doing wrong as far as executing the join statement is concerned.
Any help would be much appreciated.
You can use merge to combine two dataframes into one:
import pandas as pd
pd.merge(restaurant_ids_dataframe, restaurant_review_frame, on='business_id', how='outer')
where on specifies the field name that exists in both dataframes to join on, and how defines whether it's an inner/outer/left/right join, with outer using the 'union of keys from both frames (SQL: full outer join).' Since you have a 'stars' column in both dataframes, by default this will create two columns, stars_x and stars_y, in the combined dataframe. As @DanAllan mentioned for the join method, you can modify the suffixes for merge by passing them as a kwarg; the default is suffixes=('_x', '_y'). If you wanted something like stars_restaurant_id and stars_restaurant_review, you can do:
pd.merge(restaurant_ids_dataframe, restaurant_review_frame, on='business_id', how='outer', suffixes=('_restaurant_id', '_restaurant_review'))
The parameters are explained in detail in this link.
Joining fails if the DataFrames have some column names in common. The simplest way around it is to include an lsuffix or rsuffix keyword like so:
restaurant_review_frame.join(restaurant_ids_dataframe, on='business_id', how='left', lsuffix="_review")
This way, the columns have distinct names. The documentation addresses this very problem.
Or, you could get around this by simply deleting the offending columns before you join. If, for example, the stars in restaurant_ids_dataframe are redundant to the stars in restaurant_review_frame, you could del restaurant_ids_dataframe['stars'].
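For example, a minimal sketch of dropping the overlapping non-key columns from one side before joining (assuming stars and type in restaurant_ids_dataframe are the redundant ones):
# drop the overlapping columns, index the ids frame by business_id, then join
ids_trimmed = restaurant_ids_dataframe.drop(columns=['stars', 'type'])
joined = restaurant_review_frame.join(ids_trimmed.set_index('business_id'),
                                      on='business_id', how='left')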
In case anyone needs to try and merge two dataframes together on the index (instead of another column), this also works!
T1 and T2 are dataframes that have the same indices
import pandas as pd
T1 = pd.merge(T1, T2, on=T1.index, how='outer')
P.S. I had to use merge because append would fill NaNs in unnecessarily.
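A minimal sketch of the more usual way to spell an index-on-index merge, using left_index/right_index instead of passing the index object to on:
import pandas as pd

# join T1 and T2 row by row on their shared index
combined = pd.merge(T1, T2, left_index=True, right_index=True, how='outer')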
In case you want to merge two DataFrames horizontally, use this code:
df3 = pd.concat([df1, df2],axis=1, ignore_index=True, sort=False)