Cutoff time and training window in featuretools

Suppose I have two datasets (corresponding to two entities in my entityset):
First one: customers (cust_id, name, birthdate, customer_since)
Second one: bookings (booking_id, service, chargeamount, booking_date)
Now I want to create a dataset with features built from all customers (no matter how long they have been customers) but only from bookings in the last two years.
How do I have to use the last_time_index? Can I set a last_time_index for only one entity? In this case, only for the bookings entity, because I want ALL customers but not all bookings.
I used this code to create the features:
feature_matrix, features = ft.dfs(entityset=es,
                                  target_entity="customers",
                                  cutoff_time=pd.to_datetime('30/05/2018'),
                                  training_window=ft.Timedelta(2 * 365, "d"),
                                  agg_primitives=["count"],
                                  trans_primitives=["time_since", "year"],
                                  cutoff_time_in_index=True)

The time_index of an entity specifies the first time an instance is valid for use. In that way, the choices you make in setting a time index can impact your final result. Depending on how you set up your time_index, it is possible to use ft.dfs with exactly the settings in your example to get the desired output. Here is a toy example similar to the data you've described:
import pandas as pd
import featuretools as ft

bookings_df = pd.DataFrame()
bookings_df['booking_id'] = [1, 2, 3, 4]
bookings_df['cust_id'] = [1, 1, 2, 5]
bookings_df['booking_date'] = pd.date_range('1/1/2014', periods=4, freq='Y')

customer_df = pd.DataFrame()
customer_df['cust_id'] = [1, 2, 5]
customer_df['customer_since'] = pd.to_datetime(['2014-01-01', '2016-01-01', '2017-01-01'])

es = ft.EntitySet('Bookings')
es.entity_from_dataframe('bookings', bookings_df, 'booking_id', time_index='booking_date')
es.entity_from_dataframe('customers', customer_df, 'cust_id')
es.add_relationship(ft.Relationship(es['customers']['cust_id'], es['bookings']['cust_id']))
We have set up our bookings_df with one event a year for the past four years. The dataframe looks like this:
booking_id cust_id booking_date
0 1 1 2014-12-31
1 2 1 2015-12-31
2 3 2 2016-12-31
3 4 5 2017-12-31
Notice that we have not set a time index for customers, meaning that all customer data is always valid for use. Running DFS without the training_window argument will return
YEAR(customer_since) COUNT(bookings)
cust_id
1 2014 2.0
2 2016 1.0
5 2017 1.0
while adding the training_window of two years (as in your example) means we only see results using two of the previous four bookings:
YEAR(customer_since) COUNT(bookings)
cust_id
1 2014 0.0
2 2016 1.0
5 2017 1.0
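For reference, a minimal sketch of the two DFS calls that produce the tables above on the toy entityset, assuming the same pre-1.0 featuretools API used in the question; only the "year" and "count" primitives are requested so the output matches the tables shown, and fm_all / fm_window are just placeholder names:
import pandas as pd
import featuretools as ft

cutoff = pd.to_datetime('30/05/2018', dayfirst=True)

# No training window: every booking before the cutoff is aggregated.
fm_all, _ = ft.dfs(entityset=es,
                   target_entity="customers",
                   cutoff_time=cutoff,
                   agg_primitives=["count"],
                   trans_primitives=["year"],
                   cutoff_time_in_index=True)

# Two-year training window: only bookings within the two years before the
# cutoff are aggregated, so customer 1's 2014 and 2015 bookings fall outside
# the window and COUNT(bookings) becomes 0.
fm_window, _ = ft.dfs(entityset=es,
                      target_entity="customers",
                      cutoff_time=cutoff,
                      training_window=ft.Timedelta(2 * 365, "d"),
                      agg_primitives=["count"],
                      trans_primitives=["year"],
                      cutoff_time_in_index=True)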

Related

Keep only the last record if the value occurs continuously

Keep only the last record if the value occurs continuously.
Input_df:
Date        Value
2022/01/01  5
2022/01/03  4
2022/01/05  3
2022/01/06  3
2022/01/07  3
2022/01/08  4
2022/01/09  3
Output_df:
Date        Value
2022/01/01  5
2022/01/03  4
2022/01/07  3
2022/01/08  4
2022/01/09  3
-- The value 3 repeats continuously for 3 dates, so we only keep the latest record of those three continuous dates. If a different value is transmitted in between, the continuity breaks, so that record is not deleted.
You can use pandas.Series.diff to create a flag and see whether the column value is continuous or not. See the pandas documentation for details.
Then drop the lines that are continuous.
import pandas as pd

# Create the dataframe
df = pd.DataFrame({
    "Date": ["2022/01/01", "2022/01/03", "2022/01/05", "2022/01/06", "2022/01/07", "2022/01/08", "2022/01/09"],
    "Value": [5, 4, 3, 3, 3, 4, 3]
})
# Create a flag: nonzero where the next row has a different value
df['Diff'] = df['Value'].diff(periods=-1).fillna(1)
# Keep only the rows where the value changes (the last record of each run)
df = df.loc[df['Diff'] != 0, :].drop('Diff', axis=1)
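With the toy frame above, diff(periods=-1) is non-zero wherever the next row changes value (and the last row is kept via fillna(1)), so the filter keeps 2022/01/01, 2022/01/03, 2022/01/07, 2022/01/08 and 2022/01/09, matching the desired output.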
Try this with SQL, using LEAD to compare each row's value with the next one and keeping a row only where the run of equal values ends:
SELECT date, value
FROM (
    SELECT date,
           value,
           LEAD(value) OVER (ORDER BY date) AS next_value
    FROM input_df
) t
WHERE next_value IS NULL OR next_value <> value;

python3.7 & pandas - use column value in row as lookup value to return different column value

I've got a tricky situation - tricky for me since I'm really new to Python. I've got a dataframe in pandas and I need to logic my way through building a new column that will be used later in a data match from a different source. Basically, the picture tells what I can't figure out.
For any of the LOW labels I need to retrieve their MID_LEVEL label and copy it to a new column. The DESIRED OUTPUT column is what I need to create.
You can see that the LABEL_PATH is formatted in a way that I can use the first 9 characters as a "lookup" to find the corresponding LABEL, but I can't figure out how to achieve that. As an example, for any row where the LABEL_PATH starts with "0.02.0004" the desired output needs to be "MID_LEVEL1".
This dataset has around 25k rows, so wanted to avoid row iteration as well.
Any help would be greatly appreciated!
Choosing a similar example to yours:
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": ["1", "1.1", "1.1.1", "1.1.2", "2"], "b": range(5)})
df["c"] = np.nan
mask = df.a.apply(lambda x: len(x.split(".")) < 3)
df.loc[mask, "c"] = df.b[mask]
df.c.fillna(method="ffill", inplace=True)
Most of the magic takes place in the line where mask is defined, but it's not that difficult: if the value in a splits into fewer than 3 parts (i.e., has at most one dot), mark it as True, otherwise not.
Use that mask to copy over the values, and then fill unspecified values with valid values from above.
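The same idea can also be applied to the exact layout described in the question, using the 9-character path prefix as the lookup key. This is only a minimal sketch with made-up sample values (the original screenshot is not included), assuming columns named LABEL_PATH and LABEL:
import pandas as pd

# Toy stand-in for the screenshot: mid-level rows have 9-character paths,
# low-level rows extend them.
df = pd.DataFrame({
    "LABEL_PATH": ["0.02.0004", "0.02.0004.001", "0.02.0004.002",
                   "0.02.0007", "0.02.0007.001"],
    "LABEL": ["MID_LEVEL1", "LOW1", "LOW2", "MID_LEVEL2", "LOW3"],
})
# Map each mid-level path (exactly 9 characters) to its label.
prefix_to_label = (df.loc[df["LABEL_PATH"].str.len() == 9]
                     .set_index("LABEL_PATH")["LABEL"])
# Look up every row's 9-character prefix; fully vectorised, no row iteration.
df["DESIRED_OUTPUT"] = df["LABEL_PATH"].str[:9].map(prefix_to_label)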
I am using this data for comparison:
test_dict = {
    "label_path": [1, 2, 3, 4, 5, 6],
    "label": ["low1", "low2", "mid1", "mid2", "high1", "high2"],
    "desired_output": ["mid1", "mid2", "mid1", "mid2", "high1", "high2"],
}
df = pd.DataFrame(test_dict)
Which gives:
label_path label desired_output
0 1 low1 mid1
1 2 low2 mid2
2 3 mid1 mid1
3 4 mid2 mid2
4 5 high1 high1
5 6 high2 high2
With a bit of logic and a merge:
desired_label_df = df.drop_duplicates("desired_output", keep="last")
desired_label_df = desired_label_df[["label_path", "desired_output"]]
desired_label_df.columns = ["desired_label_path", "desired_output"]
df = df.merge(desired_label_df, on="desired_output", how="left")
Gives us:
label_path label desired_output desired_label_path
0 1 low1 mid1 3
1 2 low2 mid2 4
2 3 mid1 mid1 3
3 4 mid2 mid2 4
4 5 high1 high1 5
5 6 high2 high2 6
Edit: if you want to create the desired_output column, just do the following:
df["desired_output"] = df["label"].apply(lambda x: x.replace("low", "mid"))

Sequentially comparing groupby values conditionally

Given a dataframe
import numpy as np
import pandas as pd

data = [['Bob', '25'], ['Alice', '46'], ['Alice', '47'], ['Charlie', '19'],
        ['Charlie', '19'], ['Charlie', '19'], ['Doug', '23'], ['Doug', '35'], ['Doug', '35.5']]
df = pd.DataFrame(data, columns=['Customer', 'Sequence'])
Calculate the following:
First Sequence in each group is assigned a GroupID of 1.
Compare first Sequence to subsequent Sequence values in each group.
If difference is greater than .5, increment GroupID.
If GroupID was incremented, instead of comparing subsequent values to the first, use the current Sequence.
In the desired results table below...
Bob only has 1 record so the GroupID is 1.
Alice has 2 records and the difference between the two Sequence values (46 & 47) is greater than .5 so the GroupID is incremented.
Charlie's Sequence values are all the same, so all records get GroupID 1.
For Doug, the difference between the first two Sequence values (23 & 35) is greater than .5, so the GroupID for the second Sequence becomes 2. Now, since the GroupID was incremented, I want to compare the next value of 35.5 to 35, not 23, which means the last two rows share the same GroupID.
Desired results:
CustomerID  Sequence  GroupID
Bob         25        1
Alice       46        1
Alice       47        2
Charlie     19        1
Charlie     19        1
Charlie     19        1
Doug        23        1
Doug        35        2
Doug        35.5      2
My implementation:
# generate unique ID based on each customer's Sequence
df['EventID'] = df.groupby('Customer')['Sequence'].transform(lambda x: pd.factorize(x)[0]) + 1
# impute first Sequence for each customer for comparison
df['FirstSeq'] = np.where(df['EventID'] == 1, df['Sequence'], np.nan)
# groupby and fill first Sequence forward
df['FirstSeq'] = df.groupby('Customer')['FirstSeq'].transform(lambda v: v.ffill())
# get difference of first Sequence and all others
df['FirstSeqDiff'] = abs(df['FirstSeq'] - df['Sequence'])
# create unique GroupID based on Sequence difference from first Sequence
df["GroupID"] = np.cumsum(df.FirstSeqDiff > 0.5) + 1
The above works for cases like Bob, Alice and Charlie but not Doug because it is always comparing to the first Sequence. How can I modify the code to change the compared Sequence value if the GroupID is incremented?
EDIT:
The dataframe will always be sorted by Customer and Sequence. I guess a better way to explain my goal is to assign a unique ID to all Sequence values whose difference are .5 or less, grouping by Customer.
The code has errors; adding df = df.astype({'Customer': str, 'Sequence': np.float64}) would fix them. But you still cannot get what you want with this design. Try defining your own function myfunc, which solves your problem directly:
import numpy as np
import pandas as pd

data = [['Bob', '25'], ['Alice', '46'], ['Alice', '47'], ['Charlie', '19'],
        ['Charlie', '19'], ['Charlie', '19'], ['Doug', '23'], ['Doug', '35'], ['Doug', '35.5']]
df = pd.DataFrame(data, columns=['Customer', 'Sequence'])
df = df.astype({'Customer': str, 'Sequence': np.float64})

def myfunc(series):
    ret = []
    series = series.sort_values().values
    for i, val in enumerate(series):
        if i == 0:
            ret.append(1)
        else:
            ret.append(ret[-1] + (series[i] - series[i - 1] > 0.5))
    return ret

df['EventID'] = df.groupby('Customer')['Sequence'].transform(lambda x: myfunc(x))
print(df)
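With the sample data (already sorted by Customer and Sequence), this yields EventID values of 1 for Bob, 1 and 2 for Alice, 1 for every Charlie row, and 1, 2, 2 for Doug, matching the desired GroupID column; rename the column to GroupID if you prefer.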
Happy coding my friend.

How to organise different datasets on Excel into the same layout/order (using pandas)

I have multiple Excel spreadsheets containing the same types of data, but they are not in the same order. For example, file 1 has the results of measurements A, B, C and D from River X printed in columns 1, 2, 3 and 4, respectively, while file 2 has the same measurements taken for a different river, River Y, printed in columns 6, 7, 8 and 9. Is there a way to use pandas to reorganise one dataframe to match the layout of another dataframe (i.e. make it so that Sheet2 has the measurements for River Y printed in columns 1, 2, 3 and 4)? Sometimes the data is presented horizontally, not vertically as described above, too. If I have the same measurements for, say, 400 different rivers on 400 separate sheets, but the presentation/layout of data is erratic across the individual files, it would be useful to be able to impose a single order on every spreadsheet without having to manually shift columns in Excel.
Is there a way to use pandas to reorganise one dataframe to match the layout of another dataframe?
You can get a list of columns from one of your dataframes and then sort that. Next you can use the sorted order to reorder your remaining dataframes. I've created an example below:
import pandas as pd
import numpy as np
# Create an example of your problem
root = 'River'
suffix = list('123')
cols_1 = [root + '_' + each_suffix for each_suffix in suffix]
cols_2 = [root + '_' + each_suffix for each_suffix in suffix[::-1]]
data = np.arange(9).reshape(3,3)
df_1 = pd.DataFrame(columns=cols_1, data=data)
df_2 = pd.DataFrame(columns=cols_2, data=data)
df_1
[out] River_1 River_2 River_3
0 0 1 2
1 3 4 5
2 6 7 8
df_2
[out] River_3 River_2 River_1
0 0 1 2
1 3 4 5
2 6 7 8
col_list = df_1.columns.to_list()  # get a list of column names (use .sort() to sort in place instead)
sorted_col_list = sorted(col_list, reverse=False)  # use reverse=True to invert the order
def rearrange_df_cols(df, target_order):
    df = df[target_order]
    print(df)
    return df
rearrange_df_cols(df_1, sorted_col_list)
[out] River_1 River_2 River_3
0 0 1 2
1 3 4 5
2 6 7 8
rearrange_df_cols(df_2, sorted_col_list)
[out] River_1 River_2 River_3
0 2 1 0
1 5 4 3
2 8 7 6
You can write a function based on what's above and apply it to all of your files/sheets, provided that all the column names exist (NB they must be written identically); see the sketch below.
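One hedged way to do that for a whole workbook (the file name rivers.xlsx is just a placeholder) is to read every sheet at once with pandas and reorder each resulting dataframe:
import pandas as pd

# sheet_name=None returns a dict mapping sheet name -> DataFrame
sheets = pd.read_excel("rivers.xlsx", sheet_name=None)

# Reorder every sheet to the same sorted column order from above
reordered = {name: sheet[sorted_col_list] for name, sheet in sheets.items()}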
Sometimes the data is presented horizontally, not vertically as described above, too.
This would be better as a separate question. In principle you should check the dimensions of your data, e.g. with df.shape, and based on the shape either use df.transpose() first and then your function to reorder the column names, or use your function directly.
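A rough sketch of that check, assuming a horizontally laid-out sheet ends up with the expected measurement names in its index rather than its columns:
def normalise(df, expected_cols):
    # If the expected names are in the index instead of the columns,
    # the sheet was laid out horizontally, so transpose it first.
    if not set(expected_cols).issubset(df.columns) and set(expected_cols).issubset(df.index):
        df = df.transpose()
    return df[list(expected_cols)]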

How to get values of one column based on another column using specific match values

I have 6 columns: [Voltage, Bus, Load, load_Values, transmission, transmission_Values]. Every column whose name ends in Values contains a numerical value based on its corresponding column. The CSV file looks like the one below:
Voltage Bus Load load_Values transmission transmission_Values
Voltage(1) 2 load(1) 3 transmission(1) 2
Voltage(2) 2 load(2) 4 transmission(2) 3
Voltage(5) 3 load(3) 5 transmission(3) 5
I have to fetch the value of Bus based on transmission and load. For example:
To get the value of Bus, first I need to fetch the value of transmission(2), which is 3. Based on this value, I need to get the value of load(3), which is 5. Next, based on this value, I have to get the value of Voltage(5), which is 3.
I tried to get the value of a single column based on its corresponding column value:
total = df[df['load'] == 'load(1)']['load_Values']
next_total = df[df['transmission'] == 'transmission['total']']['transmission_Values']
v_total = df[df['Voltage'] == 'Voltage(5)']['Voltage_Values']
How do I get all these values automatically? For example, if I have 1100 values in every column, how can I fetch all the values for those 1100 rows?
This is how the dataset looks:
So, to get the value of VRES_LD, which is a new column: I have to look in the I__ND_LD column, which has the value I__ND_LD(1), and its corresponding value stored in I__ND_LD_Values is 10. Once I have the value 10, I have to look in the I__BS_ND column for I__BS_ND(10), whose value in I__BS_ND_Values is 5.0. Based on this value, I have to find the value of V_BS(5), which is 0.986009. This value should then be stored in the new column VRES_LD. Please let me know if you get it now.
I generalized your solution so you can work with as many values as you want.
I changed the name "Load_Value" to "load_value_name" to avoid confusion since there is a variable named "load_value" in lowercase.
You can start with as many values as you want; in our example we start with "1":
start_values = [1]
load_value_name = [f"^I__ND_LD({n})" for n in start_values]
#Output: but you'll have more than one if needed
['^I__ND_LD(1)']
Then we fetch all the values:
load_values = df[df['I__ND_LD'].isin(load_value_name)]['I__ND_LD_Values'].values.astype(int)
#output: again, more if needed
array([10])
Let's get the bus names:
bus_names = [f"^I__BS_ND({n})" for n in load_values]
bus_values = df[df['I__BS_ND'].isin(bus_names)]['I__BS_ND_Values'].values.astype(int)
#output
array([5])
And finally voltage:
voltage_bus_value = [f"^V_BS({n})" for n in bus_values]
voltage_values = df[df['V_BS'].isin(voltage_bus_value)]['V_BS_Values'].values
#output
array([0.98974069])
Notes:
Instead of rounding, I downcast to int; the .isin() method looks for all occurrences, so you can fetch all of the values.
If I understand correctly, you should be able to create key/value tables and use merge. The step to voltage is a little unclear, but the basic idea below should work, I think:
import pandas as pd

df = pd.DataFrame({'voltage': {0: 'Voltage(1)', 1: 'Voltage(2)', 2: 'Voltage(5)'},
                   'bus': {0: 2, 1: 2, 2: 3},
                   'load': {0: 'load(1)', 1: 'load(2)', 2: 'load(3)'},
                   'load_values': {0: 3, 1: 4, 2: 5},
                   'transmission': {0: 'transmission(1)',
                                    1: 'transmission(2)',
                                    2: 'transmission(3)'},
                   'transmission_values': {0: 2, 1: 3, 2: 5}})

load = df[['load', 'load_values']].copy()
trans = df[['transmission', 'transmission_values']].copy()
load['load'] = load['load'].str.extract(r'(\d+)').astype(int)
trans['transmission'] = trans['transmission'].str.extract(r'(\d+)').astype(int)

(df[['bus']].merge(trans, how='left', left_on='bus', right_on='transmission')
            .merge(load, how='left', left_on='transmission_values', right_on='load'))
resulting in:
bus transmission transmission_values load load_values
0 2 2 3 3.0 5.0
1 2 2 3 3.0 5.0
2 3 3 5 NaN NaN
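The voltage step this answer leaves open can be chained on in the same way; a minimal sketch reusing the load and trans tables above (bus_at_voltage is just a made-up name for the Bus value looked up via the Voltage column):
volt = df[['voltage', 'bus']].copy()
# Extract the number from "Voltage(n)"; cast to float so it merges cleanly with
# load_values, which becomes float once the left merges introduce NaNs.
volt['voltage'] = volt['voltage'].str.extract(r'(\d+)').astype(float)
volt = volt.rename(columns={'bus': 'bus_at_voltage'})

result = (df[['bus']]
          .merge(trans, how='left', left_on='bus', right_on='transmission')
          .merge(load, how='left', left_on='transmission_values', right_on='load')
          .merge(volt, how='left', left_on='load_values', right_on='voltage'))
For the sample row with bus 2, this walks transmission(2) -> 3, load(3) -> 5, Voltage(5) -> bus_at_voltage 3, which is the chain described in the question.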
I think you need to do 3 things.
1. You need to put a number inside a string. You do it like this:
n_cookies = 3
f"I want {n_cookies} cookies"
#Output
'I want 3 cookies'
2. Let's say the values you need to fetch are:
transmission_values = [2, 5, 20]
You then need to build the names to fetch:
load_values_to_fetch = [f"transmission({n})" for n in transmission_values]
#output
['transmission(2)', 'transmission(5)', 'transmission(20)']
3. Get all the voltage values from the df using the .isin() method:
voltage_value = df[df['Voltage'].isin(load_values_to_fetch)]['Voltage_Values'].values
I hope I understood the problem correctly. Try it and let us know, because I can't test the code without the data.
