Last Day previous Month - python-3.x

I have this dataframe
import pandas as pd
df = pd.DataFrame({'Found':['A','A','A','A','A','B','B','B','B'],
'Date':['14/10/2021','19/10/2021','29/10/2021','30/09/2021','20/09/2021','20/10/2021','29/10/2021','15/10/2021','10/09/2021'],
'LastDayMonth':['29/10/2021','29/10/2021','29/10/2021','30/09/2021','30/09/2021','29/10/2021','29/10/2021','29/10/2021','30/09/2021'],
'Mark':[1,2,3,4,3,1,2,3,2]
})
print(df)
Found Date LastDayMonth Mark
0 A 14/10/2021 29/10/2021 1
1 A 19/10/2021 29/10/2021 2
2 A 29/10/2021 29/10/2021 3
3 A 30/09/2021 30/09/2021 4
4 A 20/09/2021 30/09/2021 3
5 B 20/10/2021 29/10/2021 1
6 B 29/10/2021 29/10/2021 2
7 B 15/10/2021 29/10/2021 3
8 B 10/09/2021 30/09/2021 2
Based on this dataframe, I need to create a new column containing the "Mark" of the last day of the month; that is, for each Found, the value of the 'Mark' column on the last day of that month.
Here is what I did:
mark_last_day = df.loc[df.apply(lambda x: x['Date']==x['LastDayMonth'], 1)]
df.merge(mark_last_day[['Found', 'LastDayMonth', 'Mark']],
how='left',
on=['Found', 'LastDayMonth'],
suffixes=('', '_LastDayMonth'))
# Output
Found Date LastDayMonth Mark Mark_LastDayMonth
0 A 14/10/2021 29/10/2021 1 3
1 A 19/10/2021 29/10/2021 2 3
2 A 29/10/2021 29/10/2021 3 3
3 A 30/09/2021 30/09/2021 4 4
4 A 20/09/2021 30/09/2021 3 4
5 B 20/10/2021 29/10/2021 1 2
6 B 29/10/2021 29/10/2021 2 2
7 B 15/10/2021 29/10/2021 3 2
So far so good, but I'm having trouble creating a new column with the Mark_LastDayMonth of the previous month; in other words, I need the Mark for the last day of the current month and for the last day of the previous month.
How do I do it?
Ex.
Found Date LastDayMonth Mark Mark_LastDayMonth Mark_LastDayPrevious_Month
0 A 14/10/2021 29/10/2021 1 3 4
1 A 19/10/2021 29/10/2021 2 3 4
2 A 29/10/2021 29/10/2021 3 3 4
3 A 30/09/2021 30/09/2021 4 4 x
4 A 20/09/2021 30/09/2021 3 4 x
5 B 20/10/2021 29/10/2021 1 2 1
6 B 29/10/2021 29/10/2021 2 2 1
7 B 15/10/2021 29/10/2021 3 2 1
8 B 10/09/2021 30/09/2021 1 1 x

Here is a function to get the last day of the previous month
import datetime
def get_prev_month(date_str):
    format_str = '%d/%m/%Y'
    datetime_obj = datetime.datetime.strptime(date_str, format_str)
    first_day_of_this_month = datetime_obj.replace(day=1)
    last_day_of_prev_month = first_day_of_this_month - datetime.timedelta(days=1)
    return last_day_of_prev_month.strftime("%d/%m/%Y")
Here is a function to get the Mark for any date and Found from your mark_last_day variable:
def get_mark_of(date_str, found):
    same_date = mark_last_day.Date == date_str
    same_found = mark_last_day.Found == found
    match = mark_last_day.loc[same_date & same_found, 'Mark']
    # return the single matching Mark, or None when that month has no last-day row
    return match.iloc[0] if not match.empty else None
If you want, you can also add a LastDayPrevMonth column (it isn't needed for the final step):
df["LastDayPrevMonth"] = df.LastDayMonth.apply(get_prev_month)
Finally, create the Mark_LastDayPrevMonth column, setting 0 when the previous month does not exist in the dataset:
df["Mark_LastDayPrevMonth"] = df.apply(lambda x: get_mark_of(get_prev_month(x["LastDayMonth"]), x["Found"]), axis=1).fillna(0).astype(int)

Use the date offset MonthEnd
from pandas.tseries.offsets import MonthEnd
df['LastDayPreviousMonth'] = df['Date'] - MonthEnd()
>>> df[['Date', 'LastDayPreviousMonth']]
Date LastDayPreviousMonth
0 2021-10-14 2021-09-30
1 2021-10-19 2021-09-30
2 2021-10-29 2021-09-30
3 2021-09-30 2021-08-31
4 2021-09-20 2021-08-31
5 2021-10-20 2021-09-30
6 2021-10-29 2021-09-30
7 2021-10-15 2021-09-30
Then do a similar merge to the one you did for 'LastDayMonth'.
Does this help you complete the solution?
Note: I'm assuming 'Date' and 'LastDayPreviousMonth' are datetime-like. If they aren't, you need to convert them first (the dates are day-first), e.g.
df[['Date', 'LastDayMonth']] = df[['Date', 'LastDayMonth']].apply(pd.to_datetime, dayfirst=True)
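For completeness, here is one way the remaining merge could look. This is only a sketch: it assumes the datetime conversion above, rebuilds the mark_last_day frame from the question, and the names prev, out and Mark_LastDayPreviousMonth are just illustrative.
from pandas.tseries.offsets import MonthEnd

# rows where Date is the recorded last day of its month (as in the question)
mark_last_day = df[df['Date'] == df['LastDayMonth']]

# calendar last day of the previous month for every row
df['LastDayPreviousMonth'] = df['Date'] - MonthEnd()

# reuse those rows as "previous month" marks and merge them back per Found
prev = mark_last_day[['Found', 'LastDayMonth', 'Mark']].rename(
    columns={'LastDayMonth': 'LastDayPreviousMonth',
             'Mark': 'Mark_LastDayPreviousMonth'})

out = df.merge(prev, how='left', on=['Found', 'LastDayPreviousMonth'])
This matches here because 30/09/2021 is both the calendar and the recorded month end; if LastDayMonth is in general a business-day month end, the BMonthEnd offset may be the better choice.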

Related

pandas compare 1 row value with every other row value and create a matrix

DF in hand
Steps I want to perform:
compare A001's data with A002, A003, ..., A00N
for every value that matches, increase a counter by 1
do not increment the count if the value is NA
repeat for row A002 against all other rows
create a matrix indexed by name with the total count of matching values
DF creation:
data = {'name':['A001', 'A002', 'A003',
                'A004','A005','A006','A007','A008'],
        'Q1':[2,1,1,1,2,1,1,5],
        'Q2':[4,4,4,2,4,2,5,4],
        'Q3':[2,2,3,2,2,3,2,2],
        'Q4':[5,3,5,2,3,2,4,5],
        'Q5':[2,2,3,2,2,2,2,2]}
df = pd.DataFrame(data)
df.at[7, 'Q3'] = None
desired output
thanks in advance.
IIUC,
df = pd.DataFrame({'name':['A001', 'A002', 'A003', 'A004','A005','A006','A007','A008'],
'Q1':[2,1,1,1,2,1,1,5],
'Q2':[4,4,4,2,4,2,5,4],
'Q3':[2,2,3,2,2,3,2,2],
'Q4':[5,3,5,2,3,2,4,5],
'Q5':[2,2,3,2,2,2,2,2]})
dfm = df.merge(df, how='cross').set_index(['name_x', 'name_y'])
dfm.columns = dfm.columns.str.split('_', expand=True)  # MultiIndex columns: (question, x/y)
df_out = (dfm.stack(0)                                  # one row per (name_x, name_y, question)
             .apply(pd.to_numeric, errors='coerce')
             .diff(axis=1).eq(0)                        # True where the x and y answers match
             .sum(axis=1)                               # 1 if this question matches, else 0
             .groupby(level=[0, 1]).sum()               # total matching questions per name pair
             .unstack())
output:
name_y A001 A002 A003 A004 A005 A006 A007 A008
name_x
A001 5 3 2 2 4 1 2 4
A002 3 5 2 3 4 2 3 3
A003 2 2 5 1 1 2 1 2
A004 2 3 1 5 2 4 3 2
A005 4 4 1 2 5 1 2 3
A006 1 2 2 4 1 5 2 1
A007 2 3 1 3 2 2 5 2
A008 4 3 2 2 3 1 2 5
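If the merge/stack chain above feels opaque, the same pairwise matching can also be written as a plain nested loop (slower, but it follows the question's steps directly). A sketch, assuming the df built above; it counts equal, non-null answers, so a name with a missing answer contributes one less on its own diagonal:
names = df['name'].tolist()
answers = df.set_index('name')  # Q1..Q5 answers per name

counts = pd.DataFrame(0, index=names, columns=names)
for a in names:
    for b in names:
        # NaN never compares equal, so missing answers are not counted
        counts.loc[a, b] = int((answers.loc[a] == answers.loc[b]).sum())
print(counts)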

Sum of all rows based on specific column values

I have a df like this:
Index Parameters A B C D E
1 Apple 1 2 3 4 5
2 Banana 2 4 5 3 5
3 Potato 3 5 3 2 1
4 Tomato 1 1 1 1 1
5 Pear 4 5 5 4 3
I want to add up all the rows whose Parameters value is "Apple", "Banana" or "Pear".
Output:
Index Parameters A B C D E
1 Apple 1 2 3 4 5
2 Banana 2 4 5 3 5
3 Potato 3 5 3 2 1
4 Tomato 1 1 1 1 1
5 Pear 4 5 5 4 3
6 Total 7 11 13 11 13
My Effort:
df[:,'Total'] = df.sum(axis=1) -- Works but I want specific values only and not all
I also tried selecting by index (1, 2 and 5 in this case), but in my original df the index can vary, so I rejected that solution.
I saw various answers on SO but none of them solved my problem.
First idea: create an index from the Parameters column, select the rows to sum, and finally convert the index back to a column:
L = ["Apple" , "Banana" , "Pear"]
df = df.set_index('Parameters')
df.loc['Total'] = df.loc[L].sum()
df = df.reset_index()
print (df)
Parameters A B C D E
0 Apple 1 2 3 4 5
1 Banana 2 4 5 3 5
2 Potato 3 5 3 2 1
3 Tomato 1 1 1 1 1
4 Pear 4 5 5 4 3
5 Total 7 11 13 11 13
Or append a new row containing the sum of the rows filtered by membership with Series.isin, then overwrite the Parameters value of that new row with 'Total':
last = len(df)
df.loc[last] = df[df['Parameters'].isin(L)].sum()
df.loc[last, 'Parameters'] = 'Total'
print (df)
Parameters A B C D E
0 Apple 1 2 3 4 5
1 Banana 2 4 5 3 5
2 Potato 3 5 3 2 1
3 Tomato 1 1 1 1 1
4 Pear 4 5 5 4 3
5 Total 7 11 13 11 13
Another similar solution filters out the first column, sums the rest, and prepends 'Total' as a one-element list:
df.loc[len(df)] = ['Total'] + df.iloc[df['Parameters'].isin(L).values, 1:].sum().tolist()
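A variant of the same idea that avoids mutating df in place builds the Total row separately and concatenates it. A small sketch, assuming the same list L and a frame with Parameters plus the numeric columns, as in the answers above:
total = df[df['Parameters'].isin(L)].sum(numeric_only=True)
total['Parameters'] = 'Total'
df_out = pd.concat([df, total.to_frame().T], ignore_index=True)
print(df_out)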

How to randomly generate an unobserved data in Python3

I have a dataframe which contains the observed data:
import pandas as pd
d = {'humanID': [1, 1, 2,2,2,2 ,2,2,2,2], 'dogID':
[1,2,1,5,4,6,7,20,9,7],'month': [1,1,2,3,1,2,3,1,2,2]}
df = pd.DataFrame(data=d)
The df looks like this:
humanID dogID month
0 1 1 1
1 1 2 1
2 2 1 2
3 2 5 3
4 2 4 1
5 2 6 2
6 2 7 3
7 2 20 1
8 2 9 2
9 2 7 2
We have two humans and twenty dogs in total, and the df above contains the observed data. For example:
The first row means: human 1 adopted dog 1 in January
The second row means: human 1 adopted dog 2 in January
The third row means: human 2 adopted dog 1 in February
========================================================================
My goal is to randomly generate two unobserved records for each (human, month) pair that appears in the observed data.
For example, human 1 did not adopt dogs [3, 4, 5, 6, 7, ..., 20] in January, and I want to randomly create two unobserved samples in triple form:
humanID dogID month
1 20 1
1 10 1
However, the following sample is not allowed, since it appears in the original df:
humanID dogID month
1 2 1
For human 1, there is no activity in February, so we don't need to sample unobserved data for that month.
For human 2, there is activity in January, February and March, so for each of those months we want to randomly create unobserved data. For example, in January human 2 adopted dog 1, dog 4 and dog 20. The two random unobserved samples could be:
humanID dogID month
2 2 1
2 6 1
The same process applies to February and March.
I want to put all of the unobserved samples in one dataframe, like the following:
humanID dogID month
0 1 20 1
1 1 10 1
2 2 2 1
3 2 6 1
4 2 13 2
5 2 16 2
6 2 1 3
7 2 20 3
Any fast way to do this?
PS: this is a code interview for a start-up company.
Using groupby and random.choices:
import random
dogs = list(range(1,21))
dfs = []
n_sample = 2
for i, d in df.groupby(['humanID', 'month']):
    h_id, month = i
    # pick dogs this human did not adopt in this month; note random.choices samples
    # with replacement, so use random.sample if the two picks must be distinct
    sample = pd.DataFrame([(h_id, dogID, month)
                           for dogID in random.choices(list(set(dogs) - set(d['dogID'])), k=n_sample)])
    dfs.append(sample)
new_df = pd.concat(dfs).reset_index(drop=True)
new_df.columns = ['humanID', 'dogID', 'month']
print(new_df)
humanID dogID month
0 1 11 1
1 1 5 1
2 2 19 1
3 2 18 1
4 2 15 2
5 2 14 2
6 2 16 3
7 2 18 3
If I understand you correctly, you can use np.random.permutation() on the dogID column to generate a random permutation of that column:
import numpy as np

df_new = df.copy()
df_new['dogID'] = np.random.permutation(df.dogID)
print(df_new.sort_values('month'))
humanID dogID month
0 1 1 1
1 1 20 1
4 2 9 1
7 2 1 1
2 2 4 2
5 2 5 2
8 2 2 2
9 2 7 2
3 2 7 3
6 2 6 3
Or, to sample random values from within the range of dogID:
df_new=df.copy()
a=np.random.permutation(range(df_new.dogID.min(),df_new.dogID.max()))
df_new['dogID']=np.random.choice(a,df_new.shape[0])
print(df_new.sort_values('month'))
humanID dogID month
0 1 18 1
1 1 16 1
4 2 1 1
7 2 8 1
2 2 4 2
5 2 2 2
8 2 16 2
9 2 14 2
3 2 4 3
6 2 12 3
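Note that the permutation-based variants above do not guarantee that every generated (humanID, dogID, month) triple is actually unobserved. If that guarantee matters, an anti-join against the original frame filters out any observed rows; a minimal sketch, assuming df and df_new as above:
# keep only generated rows that never appear in the observed data
check = df_new.merge(df, on=['humanID', 'dogID', 'month'], how='left', indicator=True)
unobserved_only = check[check['_merge'] == 'left_only'].drop(columns='_merge')
print(unobserved_only)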

Subset and Loop to create a new column [duplicate]

With the DataFrame below as an example,
In [83]:
df = pd.DataFrame({'A':[1,1,2,2],'B':[1,2,1,2],'values':np.arange(10,30,5)})
df
Out[83]:
A B values
0 1 1 10
1 1 2 15
2 2 1 20
3 2 2 25
What would be a simple way to generate a new column containing some aggregation of the data over one of the columns?
For example, if I sum values over items in A
In [84]:
df.groupby('A').sum()['values']
Out[84]:
A
1 25
2 45
Name: values
How can I get
A B values sum_values_A
0 1 1 10 25
1 1 2 15 25
2 2 1 20 45
3 2 2 25 45
In [20]: df = pd.DataFrame({'A':[1,1,2,2],'B':[1,2,1,2],'values':np.arange(10,30,5)})
In [21]: df
Out[21]:
A B values
0 1 1 10
1 1 2 15
2 2 1 20
3 2 2 25
In [22]: df['sum_values_A'] = df.groupby('A')['values'].transform(np.sum)
In [23]: df
Out[23]:
A B values sum_values_A
0 1 1 10 25
1 1 2 15 25
2 2 1 20 45
3 2 2 25 45
I found a way using join:
In [101]:
aggregated = df.groupby('A').sum()['values']
aggregated.name = 'sum_values_A'
df.join(aggregated,on='A')
Out[101]:
A B values sum_values_A
0 1 1 10 25
1 1 2 15 25
2 2 1 20 45
3 2 2 25 45
Anyone has a simpler way to do it?
This is not so direct, but I find it very intuitive (using map to create a new column from another column), and it can be applied to many other cases:
gb = df.groupby('A').sum()['values']

def getvalue(x):
    return gb[x]

df['sum'] = df['A'].map(getvalue)
df
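As a side note, Series.map also accepts a Series directly (it maps through its index), so the helper function isn't strictly needed; a small variant of the same idea:
gb = df.groupby('A')['values'].sum()
df['sum_values_A'] = df['A'].map(gb)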
In [15]: def sum_col(df, col, new_col):
....: df[new_col] = df[col].sum()
....: return df
In [16]: df.groupby("A").apply(sum_col, 'values', 'sum_values_A')
Out[16]:
A B values sum_values_A
0 1 1 10 25
1 1 2 15 25
2 2 1 20 45
3 2 2 25 45
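As a side note, the apply-based version above assigns a new column on each group inside the applied function; in recent pandas, the transform approach from the first answer is usually the more direct way to get the same column:
df['sum_values_A'] = df.groupby('A')['values'].transform('sum')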

pandas moving aggregate string

import pandas as pd
from io import StringIO

df = pd.read_csv(StringIO('''id months state
1 1 C
1 2 3
1 3 6
1 4 9
2 1 C
2 2 C
2 3 3
2 4 6
2 5 9
2 6 9
2 7 9
2 8 C
'''), delimiter= '\t')
I want to create a column showing the cumulative state of the state column, by id.
id months state result
1 1 C C
1 2 3 C3
1 3 6 C36
1 4 9 C369
2 1 C C
2 2 C CC
2 3 3 CC3
2 4 6 CC36
2 5 9 CC369
2 6 9 CC3699
2 7 9 CC36999
2 8 C CC36999C
Basically, the cumulative concatenation of a string column. What is the best way to do it?
So long as the dtype is str, you can do the following:
In [17]:
df['result']=df.groupby('id')['state'].apply(lambda x: x.cumsum())
df
Out[17]:
id months state result
0 1 1 C C
1 1 2 3 C3
2 1 3 6 C36
3 1 4 9 C369
4 2 1 C C
5 2 2 C CC
6 2 3 3 CC3
7 2 4 6 CC36
8 2 5 9 CC369
9 2 6 9 CC3699
10 2 7 9 CC36999
11 2 8 C CC36999C
Essentially we group by the 'id' column and then apply a lambda that returns the cumulative sum. For strings, cumsum performs a cumulative concatenation and returns a Series whose index is aligned with the original df, so you can add it as a column.
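One caveat: if the state column ever arrives with mixed types (e.g. the purely numeric rows parsed as integers), the cumulative concatenation breaks, so casting to str first is a cheap safeguard. A small sketch; group_keys=False keeps the original index so the assignment aligns:
df['state'] = df['state'].astype(str)
df['result'] = df.groupby('id', group_keys=False)['state'].apply(lambda s: s.cumsum())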
