Calculating based on date pandas - python-3.x

I have this dataframe:
import pandas as pd
import numpy as np

a = [1, 2, 3, 4, 5]
b = ['2019-08-01', '2019-09-01', '2019-10-23', '2019-11-12', '2019-11-30']
c = [12, 0, 0, 0, 0]
d = [0, 23, 0, 0, 0]
e = [12, 24, 35, 0, 0]
f = [0, 0, 44, 56, 82]
g = [21, 22, 17, 75, 63]
df = pd.DataFrame({'ID': a, 'Date': b, 'Unit_sold_8': c,
                   'Unit_sold_9': d, 'Unit_sold_10': e, 'Unit_sold_11': f,
                   'Unit_sold_12': g})
df['Date'] = pd.to_datetime(df['Date'])
I want to calculate the average sales of each ID based on its Date. For example, if an ID's open date was in September, its average sales should be computed starting from September. I tried np.select, but I realized that this method would make my code very long.
col = df.columns
mask1 = (df['Date'] >= "08/01/2019") & (df['Date'] < "09/01/2019")
mask2 = (df['Date'] >= "09/01/2019") & (df['Date'] < "10/01/2019")
mask3 = (df['Date'] >= "10/01/2019") & (df['Date'] < "11/01/2019")
mask4 = (df['Date'] >= "11/01/2019") & (df['Date'] < "12/01/2019")
mask5 = (df['Date'] >= "12/01/2019")
condition2 = [mask1, mask2, mask3, mask4, mask5]
result2 = [df[col[2:]].mean(skipna=True, axis=1),
           df[col[3:]].mean(skipna=True, axis=1),
           df[col[4:]].mean(skipna=True, axis=1),
           df[col[5:]].mean(skipna=True, axis=1),
           df[col[6:]].mean(skipna=True, axis=1)]
df.loc[:, 'Mean'] = np.select(condition2, result2, default=np.nan)
Is there a faster way to solve this problem, especially when the time range expands (12 months, 24 months, etc.)?

Does this help?
from datetime import datetime
import numpy as np
from dateutil import relativedelta

check_date = datetime.today()
# months component of the time between today and each ID's start date
df['n_months'] = df['Date'].apply(lambda x: relativedelta.relativedelta(check_date, x).months)
# sum the Unit_sold_* columns for each row
df['total'] = df.iloc[:, range(2, df.shape[1] - 1)].sum(axis=1)
df['avg'] = df['total'] / df['n_months']
print(df)
ID Date Unit_sold_8 ... n_months total avg
0 1 2019-08-01 12 ... 5 45 9.00
1 2 2019-09-01 0 ... 4 69 17.25
2 3 2019-10-23 0 ... 3 96 32.00
3 4 2019-11-12 0 ... 2 131 65.50
4 5 2019-11-30 0 ... 2 145 72.50
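One caveat worth noting (my addition, not part of the original answer): relativedelta(...).months only carries the month component (0-11), so for the longer ranges the question mentions (12 or 24 months) you would also need the years component. A minimal sketch:
from dateutil import relativedelta

def months_between(later, earlier):
    # full months between two dates, including the years component
    d = relativedelta.relativedelta(later, earlier)
    return d.years * 12 + d.months

df['n_months'] = df['Date'].apply(lambda x: months_between(check_date, x))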

M = (df
     # melt data to pull units as variables
     .melt(id_vars=['ID', 'Date'])
     # create temp variables to pull out the month from Date and the Units suffix
     .assign(Mth=lambda x: x['Date'].dt.month,
             oda_detail=lambda x: x.variable.str.split('_').str[-1])
     .sort_values(['ID', 'Mth'])
     # keep only rows where the Mth is less than or equal to the month suffix
     .loc[lambda x: x['Mth'].astype(int).le(x['oda_detail'].astype(int))]
     # groupby and get the mean
     .groupby(['ID', 'Date'])['value'].mean()
     .reset_index()
     .drop(['ID', 'Date'], axis=1)
     .rename({'value': 'Mean'}, axis=1)
     )
Join back to the original dataframe:
pd.concat([df,M],axis=1)
   ID       Date  Unit_sold_8  Unit_sold_9  Unit_sold_10  Unit_sold_11  Unit_sold_12   Mean
0   1 2019-08-01           12            0            12             0            21   9.00
1   2 2019-09-01            0           23            24             0            22  17.25
2   3 2019-10-23            0            0            35            44            17  32.00
3   4 2019-11-12            0            0             0            56            75  65.50
4   5 2019-11-30            0            0             0            82            63  72.50

Related

Python Hypothesis: mixing strategies behavior for DataFrames

The following works as expected
from datetime import datetime
from hypothesis.extra.pandas import columns, data_frames, indexes
import hypothesis.strategies as st

def boundarize(d: datetime):
    return d.replace(minute=15 * (d.minute // 15), second=0, microsecond=0)

min_date = datetime(2022, 4, 1, 22, 22, 22)
max_date = datetime(2022, 5, 1, 22, 22, 22)

dfs = data_frames(
    index=indexes(
        elements=st.datetimes(min_value=min_date, max_value=max_date).map(boundarize),
        min_size=3,
        max_size=5,
    ).map(lambda idx: idx.sort_values()),
    columns=columns("A B C".split(), dtype=int),
)
dfs.example()
with an output similar to
A B C
2022-04-06 12:45:00 -11482 1588438979 -1994987295
2022-04-08 15:45:00 -833447611 3 -51
2022-04-24 06:15:00 -465371373 990274387 -14969
2022-05-01 01:15:00 1750446827 1214440777 116
2022-05-01 06:15:00 -44089 30508 58737
Now when I try to generate a similar DataFrame with evenly spaced DatetimeIndex values via
from datetime import datetime
import pandas as pd
from hypothesis.extra.pandas import columns, data_frames, indexes
import hypothesis.strategies as st

def boundarize(d: datetime):
    return d.replace(minute=15 * (d.minute // 15), second=0, microsecond=0)

min_date_start = datetime(2022, 4, 1, 11, 11, 11)
max_date_start = datetime(2022, 4, 2, 11, 11, 11)
min_date_end = datetime(2022, 5, 1, 22, 22, 22)
max_date_end = datetime(2022, 5, 2, 22, 22, 22)

dfs = data_frames(
    index=st.builds(pd.date_range,
                    start=st.datetimes(min_value=min_date_start, max_value=max_date_start).map(boundarize),
                    end=st.datetimes(min_value=min_date_end, max_value=max_date_end).map(boundarize),
                    freq=st.just("15T"),
                    ),
    columns=columns("A B C".split(), dtype=int),
)
dfs.example()
The output is the following; note that the integer columns are always zero here, whereas they were not in the first example:
A B C
2022-04-01 15:45:00 0 0 0
2022-04-01 16:00:00 0 0 0
2022-04-01 16:15:00 0 0 0
2022-04-01 16:30:00 0 0 0
2022-04-01 16:45:00 0 0 0
... .. .. ..
2022-05-01 21:15:00 0 0 0
2022-05-01 21:30:00 0 0 0
2022-05-01 21:45:00 0 0 0
2022-05-01 22:00:00 0 0 0
2022-05-01 22:15:00 0 0 0
[2907 rows x 3 columns]
Is this expected behavior, or am I missing something?
Edit:
Sidestepping the approach of "random consecutive subsets" (see my comments below), I also tried with a pre-defined index
from datetime import datetime
import pandas as pd
from hypothesis.extra.pandas import columns, data_frames
import hypothesis.strategies as st

min_date_start = datetime(2022, 4, 1, 8, 0, 0)

dfs = data_frames(
    index=st.just(pd.date_range(start=min_date_start, periods=10, freq="15T")),
    columns=columns("A B C".split(), dtype=int),
)
dfs.example()
which gives all-zero columns as well:
A B C
2022-04-01 08:00:00 0 0 0
2022-04-01 08:15:00 0 0 0
2022-04-01 08:30:00 0 0 0
2022-04-01 08:45:00 0 0 0
2022-04-01 09:00:00 0 0 0
2022-04-01 09:15:00 0 0 0
2022-04-01 09:30:00 0 0 0
2022-04-01 09:45:00 0 0 0
2022-04-01 10:00:00 0 0 0
2022-04-01 10:15:00 0 0 0
Edit 2:
I tried to come up with a handmade version of consecutive subsets, which should reduce the space of index values enough to leave entropy for the column values as per @zac-hatfield-dodds' answer below, but empirically it still generates mostly all-zero column values.
from datetime import datetime
import math
import hypothesis.strategies as st
from hypothesis.extra.pandas import columns, data_frames
import pandas as pd

time_start = datetime(2022, 4, 1, 8, 0, 0)
time_stop = datetime(2022, 4, 2, 8, 0, 0)
r = pd.date_range(start=time_start, end=time_stop, freq="15T")

def build_indices(sequence):
    first = 0
    if len(sequence) % 2 == 0:
        mid_ceiling = len(sequence) // 2
        mid_floor = mid_ceiling - 1
    else:
        mid_floor = math.floor(len(sequence) / 2)
        mid_ceiling = mid_floor + 1
    second = len(sequence) - 1
    return first, mid_floor, mid_ceiling, second

first, mid_floor, mid_ceiling, second = build_indices(r)
a = st.integers(min_value=first, max_value=mid_floor)
b = st.integers(min_value=mid_ceiling, max_value=second)

def indexer(sequence, lower, upper):
    return sequence[lower:upper]

dfs = data_frames(
    index=st.builds(lambda lower, upper: indexer(r, lower, upper), lower=a, upper=b),
    columns=columns("A B C".split(), dtype=int),
)
dfs.example()
Your problem is that the latter indexes are far larger, and Hypothesis is running out of entropy to generate the column contents. If you limit the index to at most a few dozen entries, everything should work fine.
We have this soft cap in order to limit otherwise unbounded recursive structures, so the overall design is working as intended, though I acknowledge that in this case it's neither necessary nor desirable.
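Here is a minimal sketch of that suggestion (my addition, not part of the original answer): draw a start time as before but cap the number of periods, so the generated index stays small and the column values get real entropy. The bounds and the 15-minute frequency are carried over from the question.
import pandas as pd
import hypothesis.strategies as st
from hypothesis.extra.pandas import columns, data_frames
from datetime import datetime

min_date_start = datetime(2022, 4, 1, 11, 11, 11)
max_date_start = datetime(2022, 4, 2, 11, 11, 11)

dfs = data_frames(
    # keep the index to a few dozen entries at most
    index=st.builds(pd.date_range,
                    start=st.datetimes(min_value=min_date_start, max_value=max_date_start),
                    periods=st.integers(min_value=3, max_value=30),
                    freq=st.just("15T")),
    columns=columns("A B C".split(), dtype=int),
)

dfs.example()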

K-means clustering and finding points closest to the centroid

I am trying to apply k-means to cluster actors based on the information in the following columns:
Actors Movies TvGuest Awards Shorts Special LiveShows
Robert De Niro 111 2 6 0 0 0
Jack Nicholson 70 2 4 0 5 0
Marlon Brando 64 2 5 0 0 28
Denzel Washington 25 2 3 24 0 0
Katharine Hepburn 90 1 2 0 0 0
Humphrey Bogart 105 2 1 0 0 52
Meryl Streep 27 2 2 5 0 0
Daniel Day-Lewis 90 2 1 0 71 22
Sidney Poitier 63 2 3 0 0 0
Clark Gable 34 2 4 0 3 0
Ingrid Bergman 22 2 2 3 0 4
Tom Hanks 82 11 6 21 11 22
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt

# began by scaling my data (data is the actors dataframe shown above)
X = StandardScaler().fit_transform(data)

# used an elbow plot to find the optimal k value
sum_of_squared_distances = []
K = range(1, 15)
for k in K:
    k_means = KMeans(n_clusters=k)
    model = k_means.fit(X)
    sum_of_squared_distances.append(k_means.inertia_)
plt.plot(K, sum_of_squared_distances, 'bx-')
plt.show()

# found yhat for the calculated k value
kmeans = KMeans(n_clusters=3)
model = kmeans.fit(X)
yhat = kmeans.predict(X)
I am unable to figure out how to create scatter plots of the actors by cluster.
EDIT:
Is there a way to find which actors are closest to the centroids, if the centroids were also plotted using
centers = kmeans.cluster_centers_ (The kmeans here refers to Eric's solution below)
plt.scatter(centers[:,0],centers[:,1],color='purple',marker='*',label='centroid')
K-means clustering in pandas - scatter plot
#!/usr/bin/python3
# -*- coding: utf-8 -*-
import pandas as pd
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt
df = pd.DataFrame(columns=['Actors', 'Movies', 'TvGuest', "Awards", "Shorts"])
df.loc[0] = ["Robert De Niro", 111, 2, 6, 0]
df.loc[1] = ["Jack Nicholson", 70, 2, 4, 0]
df.loc[2] = ["Marlon Brando", 64, 4, 5, 0]
df.loc[3] = ["Denzel Washington", 25, 2, 3, 24]
df.loc[4] = ["Katharine Hepburn", 90, 1, 2, 0]
df.loc[5] = ["Humphrey Bogart", 105, 2, 1, 0]
df.loc[6] = ["Meryl Streep", 27, 3, 2, 5]
df.loc[7] = ["Daniel Day-Lewis", 90, 2, 1, 0]
df.loc[8] = ["Sidney Poitier", 63, 2, 3, 0]
df.loc[9] = ["Clark Gable", 34, 2, 4, 0]
df.loc[10] = ["Ingrid Bergman", 22, 5, 2, 3]
kmeans = KMeans(n_clusters=4)
y = kmeans.fit_predict(df[['Movies', 'TvGuest', 'Awards']])
df['Cluster'] = y
plt.scatter(df.Movies, df.TvGuest, c=df.Cluster, alpha = 0.6)
plt.title('K-means Clustering 2 dimensions and 4 clusters')
plt.show()
This shows the following scatter plot (image not reproduced here):
Notice that the data points presented on the two-dimensional scatter plot are Movies and TvGuest; however, the KMeans fit was given three variables: Movies, TvGuest, and Awards. Imagine there is an additional dimension going into the screen which is also used to calculate membership in a cluster.
Source links:
https://datasciencelab.wordpress.com/2013/12/12/clustering-with-k-means-in-python/
https://datascience.stackexchange.com/questions/48693/perform-k-means-clustering-over-multiple-columns
https://towardsdatascience.com/visualizing-clusters-with-pythons-matplolib-35ae03d87489
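To make that extra dimension visible, here is a hedged sketch (my addition, not part of the original answer) that plots all three fitted features plus the centroids in a 3D scatter; it assumes the df, kmeans, and Cluster column from the code above.
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401  (registers the 3d projection on older matplotlib)

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
# the three features the model was actually fitted on
ax.scatter(df.Movies, df.TvGuest, df.Awards, c=df.Cluster, alpha=0.6)
centers = kmeans.cluster_centers_
ax.scatter(centers[:, 0], centers[:, 1], centers[:, 2],
           color='purple', marker='*', s=200, label='centroid')
ax.set_xlabel('Movies')
ax.set_ylabel('TvGuest')
ax.set_zlabel('Awards')
ax.legend()
plt.show()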
You can calculate the Euclidean distance between each point and a centroid and find the minimum distance, which indicates the point closest to that centroid:
import numpy
dist = numpy.linalg.norm(centroid - point)
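Putting that together, a minimal sketch (assuming the df and fitted kmeans from Eric's code above, and that distances are computed in the same unscaled feature space the model was fitted on):
import numpy as np

features = df[['Movies', 'TvGuest', 'Awards']].to_numpy(dtype=float)
centers = kmeans.cluster_centers_

for cluster_id, center in enumerate(centers):
    # distance of every actor to this centroid
    distances = np.linalg.norm(features - center, axis=1)
    closest = distances.argmin()
    print(f"Cluster {cluster_id}: closest actor is {df.loc[closest, 'Actors']}")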

Python Pandas Conditional Sum and subtract previous row

I am new here and I need some help with Python pandas.
I need help creating a new column that holds the sum of two other columns plus the previous row of this calculated column.
This is my example:
import pandas as pd

df = pd.DataFrame({
    'column0': ['x', 'x', 'y', 'x', 'y', 'y', 'x'],
    'column1': [50, 100, 30, 0, 30, 80, 0],
    'column2': [0, 0, 0, 10, 0, 0, 30],
})
print(df)
column0 column1 column2
0 x 50 0
1 x 100 0
2 y 30 0
3 x 0 10
4 y 30 0
5 y 80 0
6 x 0 30
I have used loc to filter this DataFrame like this:
df = df.loc[df['column0'] == 'x']
df = df.reset_index(drop=True)
Now, when I try to get the output, I don't get the correct result:
df['Result'] = df['column1'] + df['column2']
df['Result'] = df['column1'] + df['column2'] + df['Result'].shift(1)
print(df)
column0 column1 column2 Result
0 x 50 0 NaN
1 x 100 0 100.0
2 x 0 10 10.0
3 x 0 30 30.0
I just want this output:
column0 column1 column2 Result
0 x 50 0 50
1 x 100 0 150.0
2 x 0 10 160.0
3 x 0 30 190.0
Thank you very much!
You can use .cumsum() to calculate a cumulative sum of the column:
df = pd.DataFrame({
    'column1': [50, 100, 30, 0, 30, 80, 0],
    'column2': [0, 0, 0, 10, 0, 0, 30],
})
df['column3'] = df['column1'].cumsum() - df['column2'].cumsum()
This results in:
column1 column2 column3
0 50 0 50
1 100 0 150
2 30 0 180
3 0 10 170
4 30 0 200
5 80 0 280
6 0 30 250
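That said, to reproduce the exact output requested in the question (filtering to the 'x' rows first and keeping a running total of column1 + column2), a minimal sketch might look like this:
import pandas as pd

df = pd.DataFrame({
    'column0': ['x', 'x', 'y', 'x', 'y', 'y', 'x'],
    'column1': [50, 100, 30, 0, 30, 80, 0],
    'column2': [0, 0, 0, 10, 0, 0, 30],
})

out = df.loc[df['column0'] == 'x'].reset_index(drop=True)
# running total of both columns, row by row
out['Result'] = (out['column1'] + out['column2']).cumsum()
print(out)
#   column0  column1  column2  Result
# 0       x       50        0      50
# 1       x      100        0     150
# 2       x        0       10     160
# 3       x        0       30     190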

Pandas: apply list of functions on columns, one function per column

Setting: for a dataframe with 10 columns, I have a list of 10 functions which I wish to apply in a function1(column1), function2(column2), ..., function10(column10) fashion. I have looked into pandas.DataFrame.apply and pandas.DataFrame.transform, but they seem to broadcast and apply each function to all columns.
IIUC, with zip and a for loop:
Example
import pandas as pd

def function1(x):
    return x + 1

def function2(x):
    return x * 2

def function3(x):
    return x**2

df = pd.DataFrame({'A': [1, 2, 3], 'B': [1, 2, 3], 'C': [1, 2, 3]})
functions = [function1, function2, function3]

print(df)
#    A  B  C
# 0  1  1  1
# 1  2  2  2
# 2  3  3  3

for col, func in zip(df, functions):
    df[col] = df[col].apply(func)
print(df)
# A B C
# 0 2 2 1
# 1 3 4 4
# 2 4 6 9
You could do something like:
# list containing your functions, one per column (fill it in column order)
fun_list = []

# assume df is your dataframe
for i, fun in enumerate(fun_list):
    df.iloc[:, i] = fun(df.iloc[:, i])
You can probably try to map your N functions to each row by using a lambda that builds a Series with your operations; check the following code:
import pandas as pd

matrix = [(22, 34, 23), (33, 31, 11), (44, 16, 21), (55, 32, 22), (66, 33, 27),
          (77, 35, 11)]
df = pd.DataFrame(matrix, columns=list('xyz'), index=list('abcdef'))
Will produce:
x y z
a 22 34 23
b 33 31 11
c 44 16 21
d 55 32 22
e 66 33 27
f 77 35 11
and then:
res_df = df.apply(lambda row: pd.Series([row[0] + 1, row[1] + 2, row[2] + 3]), axis=1)
will give you:
0 1 2
a 23 36 26
b 34 33 14
c 45 18 24
d 56 34 25
e 67 35 30
f 78 37 14
You can also simply apply a function to a specific column:
df['x'] = df['x'].apply(lambda x: x * 2)
Similar to @Chris Adams's answer, but this makes a copy of the dataframe using a dictionary comprehension and zip.
import pandas as pd

def function1(x):
    return x + 1

def function2(x):
    return x * 2

def function3(x):
    return x**2

df = pd.DataFrame({'A': [1, 2, 3], 'B': [1, 2, 3], 'C': [1, 2, 3]})
functions = [function1, function2, function3]

print(df)
#    A  B  C
# 0  1  1  1
# 1  2  2  2
# 2  3  3  3

df_2 = pd.DataFrame({col: func(df[col]) for col, func in zip(df, functions)})
print(df_2)
# A B C
# 0 2 2 1
# 1 3 4 4
# 2 4 6 9
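A hedged alternative worth knowing (my addition): DataFrame.transform also accepts a dict mapping column names to functions, which applies each function only to its own column. Using the df and functions defined in the answers above:
mapping = dict(zip(df.columns, functions))   # {'A': function1, 'B': function2, 'C': function3}
df_3 = df.transform(mapping)
print(df_3)
#    A  B  C
# 0  2  2  1
# 1  3  4  4
# 2  4  6  9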

How to use two for loops to copy a list variable into a location variable in a dataframe?

I have a dataframe with two columns called locStuff and data. Someone was kind enough to show me how to index a location range in the df so that it correctly refers to a single integer in locStuff instead of the dataframe index. That works fine, but now I cannot see how to change the data values of that location range with a list of values.
import pandas as pd
INDEX = list(range(1, 11))
LOCATIONS = [3, 10, 6, 2, 9, 1, 7, 5, 8, 4]
DATA = [94, 43, 85, 10, 81, 57, 88, 11, 35, 86]
# Make dataframe
DF = pd.DataFrame(LOCATIONS, columns=['locStuff'], index=INDEX)
DF['data'] = pd.Series(DATA, index=INDEX)
# Location and new value inputs
LOC_TO_CHANGE = 8
NEW_LOC_VALUE = 999
NEW_LOC_VALUE = [999,666,333]
LOC_RANGE = list(range(3, 6))
DF.iloc[3:6, 1] = ('%03d' % NEW_LOC_VALUE)
print(DF)
# I tried both of these separately
for i in NEW_LOC_VALUE:
    for j in LOC_RANGE:
        DF.iloc[j, 1] = ('%03d' % NEW_LOC_VALUE[i])
print(DF)

i = 0
while i < len(NEW_LOC_VALUE):
    for j in LOC_RANGE:
        DF.iloc[j, 1] = ('%03d' % NEW_LOC_VALUE[i])
    i = +1
print(DF)
Neither of these works.
I know how to do this using loops or list comprehensions for an empty list, but I have no idea how to adapt what I have above for a DataFrame.
Expected behaviour would be:
locStuff data
1 3 999
2 10 43
3 6 85
4 2 10
5 9 81
6 1 57
7 7 88
8 5 333
9 8 35
10 4 666
Try setting locStuff as index, assign values, and reset_index:
DF.set_index('locStuff', inplace=True)
DF.loc[LOC_RANGE, 'data'] = NEW_LOC_VALUE
DF.reset_index(inplace=True)
Output:
locStuff data
0 3 999
1 10 43
2 6 85
3 2 10
4 9 81
5 1 57
6 7 88
7 5 333
8 8 35
9 4 666
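As a hedged alternative sketch (my addition, not part of the original answer), you can also build a {locStuff: new value} mapping and update only the matching rows, without touching the dataframe's index:
mapping = dict(zip(LOC_RANGE, NEW_LOC_VALUE))   # {3: 999, 4: 666, 5: 333}
mask = DF['locStuff'].isin(mapping)
DF.loc[mask, 'data'] = DF.loc[mask, 'locStuff'].map(mapping)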
