Let's say we have two tables, trans and product. Hypothetically, the trans table consists of over a billion rows of purchases made by users.
I am trying to find pairs of products that are often purchased together (on the same date) by the same user, such as wine and bottle openers, or chips and beer.
Specifically, I want the top five product pairs and their names.
The trans and product dataframes:
import pandas as pd

trans = {'ID': [1, 1, 2, 2, 3, 3, 1, 5, 5, 6, 6, 6],
         'productID': [11, 22, 11, 22, 33, 77, 11, 77, 88, 11, 22, 77],
         'Year': ['2022-01-01', '2022-01-01', '2020-01-05', '2020-01-05', '2019-01-01', '2019-01-01',
                  '2020-01-07', '2020-01-08', '2020-01-08', '2021-06-01', '2021-06-01', '2021-06-01']}
trans = pd.DataFrame(trans)
trans['Year'] = pd.to_datetime(trans['Year'])
trans

product = {'productID': [11, 22, 33, 44, 55, 77, 88],
           'prodname': ['phone', 'Charger', 'eaphones', 'headset', 'scratchgaurd', 'pin', 'cover']}
product = pd.DataFrame(product)
product
My code so far, where I was trying to rank the items with the same ID and Year and then get the product names:
transprod = pd.merge(trans, product, on='productID', how='inner')
transprod
transprod['Rank'] = transprod.groupby('ID')['Year'].rank(method='dense').astype(int)
transprod = transprod.sort_values(['ID', 'productID', 'Rank'])
transprod
Desired output:
Product 1 | Product 2 | Count
phone     | Charger   | 3
Charger   | pin       | 1
eaphones  | pin       | 1
pin       | cover     | 1
Any help is really appreciated. Thanks in advance
You could group the transactions table by ID (and date) and list all product pairs within each order; itertools.combinations is useful here. Taking the set of products in an order first means repeated items are ignored.
Since it does not matter in which order a pair appears, you can then build a flat list of all the pairs and count them with a collections.Counter. Sorting each pair first ensures that the order of items within a pair is also disregarded.
The product table can be turned into a dictionary for easy lookup, which provides a way to attach the product names to the table of results.
from itertools import combinations
from collections import Counter

# All unordered product pairs within each (user, date) transaction
pairs_by_trans = trans.groupby(['ID', 'Year'])['productID'].agg(
    lambda x: list(combinations(set(x), 2)))

# Flatten, sorting each pair so (a, b) and (b, a) count as the same pair
pairs_flat = [tuple(sorted(pair)) for row in pairs_by_trans for pair in row]
counts = Counter(pairs_flat)

top_counts = pd.DataFrame(counts.most_common(5), columns=['pair', 'count'])

# productID -> prodname lookup for attaching names to each pair
prodname = {k: v for k, v in product.values}
top_counts['names'] = top_counts['pair'].apply(lambda x: (prodname[x[0]], prodname[x[1]]))
top_counts
pair count names
0 (11, 22) 3 (phone, Charger)
1 (33, 77) 1 (eaphones, pin)
2 (77, 88) 1 (pin, cover)
3 (11, 77) 1 (phone, pin)
4 (22, 77) 1 (Charger, pin)
The solution below works fine for me:
import itertools
from itertools import combinations
from collections import Counter

transprod = pd.merge(trans, product, on='productID', how='inner')
transprod['Rank'] = transprod.groupby('ID')['Year'].rank(method='dense').astype(int)
transprod = transprod.sort_values(['ID', 'productID', 'Rank'])

def checkprod(x):
    # Keep only rows whose Rank matches the next or previous row within the same ID
    v1 = (x['Rank'] == x['Rank'].shift(-1))
    return x[v1 | v1.shift(1)]

out = transprod.groupby('ID').apply(checkprod).reset_index(drop=True)
pairs = out.groupby(['ID', 'Rank'])['prodname'].agg(
    lambda x: list(combinations(set(x), 2)))
Counter(list(itertools.chain(*pairs)))
Related
I have a list of specific company identification numbers,
ex. companyID = ['1','2','3']
and I have a dataframe of different attributes relating to company business,
ex. company_df
There are multiple columns where values from my list could appear,
ex. 'company_number', 'company_value', 'job_referred_by', etc.
How can I check whether any value from my companyID list exists anywhere in company_df, regardless of datatype, and return only the columns where a companyID is found?
This is what I have tried, with no luck:
def find_any(company_df, companyID):
    found = company_df.isin(companyID).any()
    foundCols = found.index[found].tolist()
    print(foundCols)
Create a df from your list of companyIDs and then merge the two dfs on company ID. Then filter the df to show only the rows that match.
For datatypes, you can convert int to string no problem, but the other way around would crash if you have a string that can't be converted to int (e.g., 'a'), so I'd use string.
Here's a toy example:
company_df = pd.DataFrame({'co_id': [1, 2, 4, 9]})
company_df['co_id'] = company_df['co_id'].astype(str)
companyID = ['1','2','3']
df_companyID = pd.DataFrame(companyID, columns=['co_id'])
company_df = company_df.merge(df_companyID, on='co_id', how='left', indicator=True)
print(company_df)
# co_id _merge
# 0 1 both
# 1 2 both
# 2 4 left_only
# 3 9 left_only
company_df_hits_only = company_df[company_df['_merge'] == 'both'].copy()
del company_df['_merge']
del company_df_hits_only['_merge']
print(company_df_hits_only)
# co_id
# 0 1
# 1 2
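If what you ultimately need is the column names rather than the matching rows, your isin attempt is close; the likely culprit is the dtype mismatch between the string IDs and integer columns. Here is a minimal sketch, assuming it is acceptable to compare everything as strings:
def find_any(company_df, companyID):
    # Cast everything to string so integer columns still match the string IDs
    mask = company_df.astype(str).isin(companyID)
    found = mask.any()                # one boolean per column
    return company_df.loc[:, found]   # only the columns where an ID was found

matching_cols = find_any(company_df, companyID).columns.tolist()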
I'd like to take an existing DataFrame with a single level of columns and modify it to use a MultiIndex based on a reference list of tuples and have the proper ordering/alignment. To illustrate by example:
import numpy as np
import pandas as pd
df = pd.DataFrame(np.random.randn(10,5), columns = ['nyc','london','canada','chile','earth'])
coltuples = [('cities','nyc'),('countries','canada'),('countries','usa'),('countries','chile'),('planets','earth'),('planets','mars'),('cities','sf'),('cities','london')]
I'd like to create a new DataFrame which has a top level consisting of 'cities', 'countries', and 'planets' with the corresponding original columns underneath. I am not concerned about order but definitely proper alignment.
It can be assumed that 'coltuples' will not be missing any of the columns from 'df', but may have extraneous pairs, and the ordering of the pairs can be random.
I am trying something along the lines of:
coltuplesuse = [x for x in coltuples if x[1] in df.columns]
cols = pd.MultiIndex.from_tuples(coltuplesuse, names=['level1','level2'])
df.reindex(columns=cols)
which seems to be on the right track, but the underlying data in the resulting DataFrame is all NaN.
Thanks in advance!
Two things to notice: you want set_axis rather than reindex, and sorting the tuples by the original column order ensures each label is assigned to the correct column (that is what the sorted(..., key=...) call does).
use_cols = [tup for tup in coltuples if tup[1] in df.columns]
use_cols = sorted(use_cols, key=lambda x: list(df.columns).index(x[1]))
multi_index = pd.MultiIndex.from_tuples(use_cols, names=['level1', 'level2'])
df.set_axis(multi_index, axis=1)
output:
level1 cities countries planets
level2 nyc london canada chile earth
0 0.028033 0.540977 -0.056096 1.675698 -0.328630
1 1.170465 -1.003825 0.882126 0.453294 -1.127752
2 -0.187466 -0.192546 0.269802 -1.225172 -0.548491
3 2.272900 -0.085427 0.029242 -2.258696 1.034485
4 -1.243871 -1.660432 -0.051674 2.098602 -2.098941
5 -0.820820 -0.289754 0.019348 0.176778 0.395959
6 1.346459 -0.260583 0.212008 -1.071501 0.945545
7 0.673351 1.133616 1.117379 -0.531403 1.467604
8 0.332187 -3.541103 -0.222365 1.035739 -0.485742
9 -0.605965 -1.442371 -1.628210 -0.711887 -2.104755
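As an aside, the reason reindex produced NaN is that it aligns on the new labels, and none of the MultiIndex tuples match the existing flat column names. Also note that set_axis returns a new DataFrame by default, so assign the result (or set df.columns directly):
df = df.set_axis(multi_index, axis=1)
# or, equivalently, modify the columns in place:
df.columns = multi_index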
Imagine I have a huge dataset which I partitionBy('id'). Assume that id is unique to a person, so there can be n rows per id, and the goal is to reduce them to one.
Basically, I am aggregating to make id distinct.
import sys
from pyspark.sql import functions as F
from pyspark.sql.window import Window

w = Window().partitionBy('id').rowsBetween(-sys.maxsize, sys.maxsize)

test1 = {
    key: F.first(key, True).over(w).alias(key)
    for key in some_dict.keys()
    if some_dict[key] == 'test1'
}
test2 = {
    key: F.last(key, True).over(w).alias(key)
    for key in some_dict.keys()
    if some_dict[key] == 'test2'
}
Assume that some_dict has values that are either 'test1' or 'test2', and based on that value I take either the first or the last, as shown above.
How do I actually call the aggregation and reduce this?
cols = {**test1, **test2}
cols = list(cols.values())
df.select(*cols).groupBy('id').agg(*cols)  # doesn't work
The above clearly doesn't work. Any ideas?
The goal: I have 5 unique IDs and 25 rows, with each ID having 5 rows. I want to reduce the 25 rows to 5.
Let's assume your dataframe is named df and contains duplicates; you can use the method below:
from pyspark.sql.functions import row_number
from pyspark.sql.window import Window

window = Window.partitionBy(df['id']).orderBy(df['id'])
final = df.withColumn("row_id", row_number().over(window)).filter("row_id = 1")
final.show(10, False)
Change the orderBy condition if there are specific criteria for which record should end up at the top of each partition.
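If you specifically want the first/last-per-column behaviour from your some_dict rather than just keeping one row, here is a sketch that drops the window entirely and aggregates directly. It assumes some_dict maps column names to 'test1' or 'test2'; note that first/last without an ordering are non-deterministic:
from pyspark.sql import functions as F

# One aggregate expression per column: first() for 'test1' columns, last() for 'test2' columns
agg_exprs = [
    F.first(col, ignorenulls=True).alias(col) if kind == 'test1'
    else F.last(col, ignorenulls=True).alias(col)
    for col, kind in some_dict.items()
]

result = df.groupBy('id').agg(*agg_exprs)   # one row per id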
I quite often write a function to return different dataframes based on the parameters I enter. Here's an example dataframe:
import numpy as np
import pandas as pd

np.random.seed(1111)
df = pd.DataFrame({
    'Category': np.random.choice(['Group A', 'Group B', 'Group C', 'Group D'], 10000),
    'Sub-Category': np.random.choice(['X', 'Y', 'Z'], 10000),
    'Sub-Category-2': np.random.choice(['G', 'F', 'I'], 10000),
    'Product': np.random.choice(['Product 1', 'Product 2', 'Product 3'], 10000),
    'Units_Sold': np.random.randint(1, 100, size=(10000)),
    'Dollars_Sold': np.random.randint(100, 1000, size=10000),
    'Customer': np.random.choice(pd.util.testing.rands_array(10, 25, dtype='str'), 10000),
    'Date': np.random.choice(pd.date_range('1/1/2016', '12/31/2018', freq='M'), 10000)})
I then created a function to perform sub-totals for me like this:
def some_fun(DF1, agg_column, myList=[], *args):
    y = pd.concat([
        DF1.assign(**{x: '[Total]' for x in myList[i:]})
           .groupby(myList).agg(sumz=(agg_column, 'sum'))
        for i in range(1, len(myList) + 1)
    ]).sort_index().unstack(0)
    return y
I then write out lists that I'll pass as arguments to the function:
list_one = [pd.Grouper(key='Date',freq='A'),'Category','Product']
list_two = [pd.Grouper(key='Date',freq='A'),'Category','Sub-Category','Sub-Category-2']
list_three = [pd.Grouper(key='Date',freq='A'),'Sub-Category','Product']
I then have to run each list through my function creating new dataframes:
df1 = some_fun(df,'Units_Sold',list_one)
df2 = some_fun(df,'Dollars_Sold',list_two)
df3 = some_fun(df,'Units_Sold',list_three)
I then use a function to write each of these dataframes to an Excel worksheet. This is just an example - I perform this same exercise 10+ times.
My question: is there a better way to perform this task than writing out df1, df2, df3 with the function applied each time? Should I be looking at a dictionary or some other data structure to do this more pythonically with a function?
A dictionary would be my first choice:
variations = [('Units_Sold', list_one), ('Dollars_Sold', list_two),
              ..., ('Title', some_list)]
df_variations = {}
for i, v in enumerate(variations):
    name = v[0]
    data = v[1]
    df_variations[i] = some_fun(df, name, data)
You might further consider setting the keys to unique, helpful titles for the variations, going beyond something like 'Units_Sold', which isn't unique in your case.
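For example, a minimal sketch using descriptive keys (the key names here are just illustrative) for the three variations in the question:
variations = {
    'units_by_category_product': ('Units_Sold', list_one),
    'dollars_by_subcategory': ('Dollars_Sold', list_two),
    'units_by_subcategory_product': ('Units_Sold', list_three),
}
df_variations = {name: some_fun(df, col, lst) for name, (col, lst) in variations.items()}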
IIUC, as Thomas has suggested, we can use a dictionary to drive your data; with some minor modifications to your function, the dictionary can hold all the required parameters and be passed through to the function.
The idea is to pass two kinds of keys: the lists of columns and the arguments to your pd.Grouper call.
data_dict = {
"Units_Sold": {"key": "Date", "freq": "A"},
"Dollars_Sold": {"key": "Date", "freq": "A"},
"col_list_1": ["Category", "Product"],
"col_list_2": ["Category", "Sub-Category", "Sub-Category-2"],
"col_list_3": ["Sub-Category", "Product"],
}
def some_fun(dataframe, agg_col, dictionary, column_list, *args):
    key = dictionary[agg_col]["key"]
    frequency = dictionary[agg_col]["freq"]
    myList = [pd.Grouper(key=key, freq=frequency), *dictionary[column_list]]
    y = (
        pd.concat(
            [
                dataframe.assign(**{x: "[Total]" for x in myList[i:]})
                .groupby(myList)
                .agg(sumz=(agg_col, "sum"))
                for i in range(1, len(myList) + 1)
            ]
        )
        .sort_index()
        .unstack(0)
    )
    return y
Test.
df1 = some_fun(df,'Units_Sold',data_dict,'col_list_3')
print(df1)
sumz
Date 2016-12-31 2017-12-31 2018-12-31
Sub-Category Product
X Product 1 18308 17839 18776
Product 2 18067 19309 18077
Product 3 17943 19121 17675
[Total] 54318 56269 54528
Y Product 1 20699 18593 18103
Product 2 18642 19712 17122
Product 3 17701 19263 20123
[Total] 57042 57568 55348
Z Product 1 19077 17401 19138
Product 2 17207 21434 18817
Product 3 18405 17300 17462
[Total] 54689 56135 55417
[Total] [Total] 166049 169972 165293
Since you want to automate writing the 10+ worksheets, we can again drive your function from a dictionary:
matches = {'Units_Sold': ['col_list_1', 'col_list_3'],
           'Dollars_Sold': ['col_list_2']}
Then a simple for loop writes each result to its own sheet of a single Excel file; change this to match your required behavior.
writer = pd.ExcelWriter('finished_excel_file.xlsx')
for key, value in matches.items():
    for items in value:
        dataframe = some_fun(df, key, data_dict, items)
        dataframe.to_excel(writer, f'{key}_{items}')
writer.save()
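On newer pandas versions writer.save() is deprecated; if that applies to you, using the writer as a context manager handles closing automatically:
with pd.ExcelWriter('finished_excel_file.xlsx') as writer:
    for key, value in matches.items():
        for items in value:
            some_fun(df, key, data_dict, items).to_excel(writer, sheet_name=f'{key}_{items}')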
I'm printing out the frequency of murders in each state in each particular decade. However, I just want to print the state, the decade, and its victim count. What I have right now prints all the columns with the same frequencies. How do I change it so that I have just three columns: State, Decade, and Victim Count?
I'm currently using groupby to group by state and decade, and I assign the result to a variable called counts.
xl = pd.ExcelFile('Wyoming.xlsx')
df = xl.parse('Sheet1')
df['Decade'] = (df['Year'] // 10) * 10
counts = df.groupby(['State', 'Decade']).count()
print(counts)
The output prints all of the columns in the file with the same frequencies, whereas I just want three columns: State, Decade, and Victim Count.
You should call reset_index on the groupby result and then select the columns you want from the new dataframe.
Something like:
xl = pd.ExcelFile('Wyoming.xlsx')
df = xl.parse('Sheet1')
df['Decade'] = (df['Year'] // 10) * 10
counts = df.groupby(['State', 'Decade']).count()
counts = counts.reset_index()[['State', 'Decade', 'Victim Count']]
print(counts)
Select the columns that you want:
counts = df.loc[:, ['State', 'Decade', 'Victim Count']].groupby(['State', 'Decade']).count()
or
print(counts[['Victim Count']])  # State and Decade are already in the index after the groupby