Convert column values into rows in the order in which columns are present - python-3.x

Below is a sample dataframe I have. I need to convert each row into multiple rows based on month.
import pandas as pd

df = pd.DataFrame({'Jan': [100, 200, 300],
                   'Feb': [400, 500, 600],
                   'March': [700, 800, 900]})
Desired output:
Jan 100
Feb 400
March 700
Jan 200
Feb 500
March 800
Jan 300
Feb 600
March 900
I tried using the pandas melt function, but it groups all the Jan values together, then Feb, then March: 3 rows for Jan, then 3 for Feb, then 3 for March. I want the interleaved output shown above instead. Could someone please help?

Use DataFrame.stack with some data cleaning by Series.reset_index and Series.rename_axis:
df1 = (df.stack()
         .reset_index(level=0, drop=True)
         .rename_axis('months')
         .reset_index(name='val'))
Or use numpy: flatten the values and repeat the column names with numpy.tile:
df1 = pd.DataFrame({'months': np.tile(df.columns, len(df)),
                    'val': df.values.ravel()})
print (df1)
  months  val
0    Jan  100
1    Feb  400
2  March  700
3    Jan  200
4    Feb  500
5  March  800
6    Jan  300
7    Feb  600
8  March  900
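For completeness, plain melt can also produce the interleaved order if the original row labels are kept and used as a stable sort key. This is a sketch, not the answer's method; ignore_index=False requires pandas >= 1.1:

```python
import pandas as pd

df = pd.DataFrame({'Jan': [100, 200, 300],
                   'Feb': [400, 500, 600],
                   'March': [700, 800, 900]})

# ignore_index=False keeps the original row labels, so a stable
# sort on them restores the row-by-row interleaved order
df1 = (df.melt(var_name='months', value_name='val', ignore_index=False)
         .sort_index(kind='stable')
         .reset_index(drop=True))
print(df1)
```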

Related

Excel function to dynamically SUM UP data based on matching rows and columns

I have a table with metrics shown as rows and months shown as columns.
Example is below:
Quarter:  2022-01-01, 2022-01-01, 2022-01-01, 2022-04-01, 2022-04-01, 2022-04-01, 2022-07-01, 2022-07-01, 2022-07-01, 2022-10-01, 2022-10-01, 2022-10-01
Month:    2022-01-01, 2022-02-01, 2022-03-01, 2022-04-01, 2022-05-01, 2022-06-01, 2022-07-01, 2022-08-01, 2022-09-01, 2022-10-01, 2022-11-01, 2022-12-01
Metrics:  Jan 2022, Feb 2022, Mar 2022, Apr 2022, May 2022, Jun 2022, Jul 2022, Aug 2022, Sep 2022, Oct 2022, Nov 2022, Dec 2022
Revenue:  1000, 1000, 1000, 500, 500, 500, 100, 100, 100, 0, 0, 0
Cost:     10, 10, 10, 10, 10, 10, 20, 20, 20, 0, 5, 10
I want to have a dynamic summary table of quarterly data. I can use sumifs and look up the quarter month using this function:
SUMIFS([Value row range],[Quarter range],[Quarter wanted])
However, I still have to manually select the correct value row range to sum. Is it possible to select the entire table and then match the correct row based on matching labels (the metric, in this case)?
Insert Report Month: Dec-22

Last 3 quarter report:
Metrics  Q2 2022  Q3 2022  Q4 2022
Revenue  1500     300      0
Cost     30       60       15
I'm aware of the INDEX & MATCH functions, but they only find the first match and do not sum up all months in the same quarter.
Thanks for helping!
Excel 365 for Mac should have the BYCOL function.
Given:
Your data table is a Table named Metrics
Report_Month is a Named Range containing a "real date" falling in the final month of the desired quarter.
The following formula will return your output and will adjust as you add columns to the data table.
A11: =Metrics[[#All],[Metrics]]
B11: =LET(x,EDATE(Report_Month,SEQUENCE(,3,-6,3)),TEXT(MONTH(x)/3,"\Q0 ") & YEAR(x))
B12: =BYCOL(XLOOKUP(TEXT(DATE(YEAR(Report_Month),MONTH(Report_Month)-9+SEQUENCE(3,,1,1)+SEQUENCE(,3,0,3),1),"mmm-yy"),Metrics[#Headers],INDEX(Metrics,XMATCH(A12,Metrics[Metrics]),0)),LAMBDA(arr,SUM(arr)))
Select B12 and fill down as far as needed.
Notes
DATE(YEAR(Report_Month),MONTH(Report_Month)-9+SEQUENCE(3,,1,1)+SEQUENCE(,3,0,3),1)
creates a matrix of the previous nine months' starting dates, with each column consisting of one quarter (anchored here at 12/1/2022).
The TEXT function then formats these dates the same way as the column headers in the Metrics table.
XLOOKUP then returns the appropriate columns from the table into that matrix, and BYCOL lets us SUM by column, each column being one quarter.
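The same quarterly roll-up can be sketched in pandas for comparison. The table below is a hypothetical recreation of the question's data; the grouping step mirrors what the SUMIFS/BYCOL formulas compute:

```python
import pandas as pd

# hypothetical recreation of the monthly table from the question
months = pd.period_range('2022-01', '2022-12', freq='M')
df = pd.DataFrame(
    {'Revenue': [1000, 1000, 1000, 500, 500, 500, 100, 100, 100, 0, 0, 0],
     'Cost':    [10, 10, 10, 10, 10, 10, 20, 20, 20, 0, 5, 10]},
    index=months)

# converting each month to its quarter and summing per quarter
# is the pandas equivalent of the SUMIFS-per-quarter logic
quarterly = df.groupby(df.index.asfreq('Q')).sum().T
print(quarterly)
```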

How to sum by month in timestamp Data Frame?

I have a dataframe like this:
trx_date    trx_amount
2013-02-11  35
2014-03-10  26
2011-02-9   10
2013-02-12  5
2013-01-11  21
How do I break that down by month and year, so that I can sum the trx_amount?
Example expected output:
trx_monthly  trx_sum
2013-02      40
2013-01      21
2014-02      35
You can convert values to month periods by Series.dt.to_period and then aggregate sum:
df['trx_date'] = pd.to_datetime(df['trx_date'])
df1 = (df.groupby(df['trx_date'].dt.to_period('m').rename('trx_monthly'))['trx_amount']
         .sum()
         .reset_index(name='trx_sum'))
print (df1)
  trx_monthly  trx_sum
0     2011-02       10
1     2013-01       21
2     2013-02       40
3     2014-03       26
Or convert the datetimes to strings in YYYY-MM format with Series.dt.strftime:
df2 = (df.groupby(df['trx_date'].dt.strftime('%Y-%m').rename('trx_monthly'))['trx_amount']
         .sum()
         .reset_index(name='trx_sum'))
print (df2)
  trx_monthly  trx_sum
0     2011-02       10
1     2013-01       21
2     2013-02       40
3     2014-03       26
Or group by month and year separately; then the output is different, with 3 columns:
df2 = (df.groupby([df['trx_date'].dt.year.rename('year'),
                   df['trx_date'].dt.month.rename('month')])['trx_amount']
         .sum()
         .reset_index(name='trx_sum'))
print (df2)
   year  month  trx_sum
0  2011      2       10
1  2013      1       21
2  2013      2       40
3  2014      3       26
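The first variant can also be written with named aggregation, which names the output column in one step. This is a stylistic sketch of the same groupby, not a different method:

```python
import pandas as pd

df = pd.DataFrame({'trx_date': ['2013-02-11', '2014-03-10', '2011-02-09',
                                '2013-02-12', '2013-01-11'],
                   'trx_amount': [35, 26, 10, 5, 21]})
df['trx_date'] = pd.to_datetime(df['trx_date'])

# named aggregation: the keyword becomes the output column name
df1 = (df.assign(trx_monthly=df['trx_date'].dt.to_period('M'))
         .groupby('trx_monthly', as_index=False)
         .agg(trx_sum=('trx_amount', 'sum')))
print(df1)
```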
You can try this -
df['trx_date'] = pd.to_datetime(df['trx_date'])
# use a year-month period so the same month in different years stays separate
df['trx_month'] = df['trx_date'].dt.to_period('M')
df_agg = df.groupby('trx_month')['trx_amount'].sum()

Find earliest date within daterange

I have the following market data:
data = pd.DataFrame({'year': [2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020],
'month': [10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11],
'day': [1,2,5,6,7,8,9,12,13,14,15,16,19,20,21,22,23,26,27,28,29,30,2,3,5,6,9,10,11,12,13,16,17,18,19,20,23,24,25,26,27,30]})
data['date'] = pd.to_datetime(data)
data['spot'] = [77.3438,78.192,78.1044,78.4357,78.0285,77.3507,76.78,77.13,77.0417,77.6525,78.0906,77.91,77.6602,77.3568,76.7243,76.5872,76.1374,76.4435,77.2906,79.2239,78.8993,79.5305,80.5313,79.3615,77.0156,77.4226,76.288,76.5648,77.1171,77.3568,77.374,76.1758,76.2325,76.0401,76.0529,76.1992,76.1648,75.474,75.551,75.7018,75.8639,76.3944]
data = data.set_index('date')
I'm trying to find the spot value for the first day of the month in the date column. I can find the first business day with below:
def get_month_beg(d):
    month_beg = (d.index + pd.offsets.BMonthEnd(0) - pd.offsets.MonthBegin(normalize=True))
    return month_beg
data['month_beg'] = get_month_beg(data)
However, due to data issues, sometimes the earliest date from my data does not match up with the first business day of the month.
We'll call the earliest spot value of each month the "strike", which is what I'm trying to find. So for October the strike would be 77.3438 (10/1/20), and in November it would be 80.5313 (which is on 11/2/20, NOT 11/1/20).
I tried the below, which only works if my data's earliest date matches the first business date of the month (e.g. it works in Oct, but not in Nov):
data['strike'] = data.month_beg.map(data.spot)
As you can see, I get NaN in Nov because the first business day in my data is 11/2 (spot rate 80.5313) not 11/1. Does anyone know how to find the earliest date within a date range (in this case the earliest date of each month)?
I was hoping the final df would look like the below:
data = pd.DataFrame({'year': [2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020],
'month': [10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11],
'day': [1,2,5,6,7,8,9,12,13,14,15,16,19,20,21,22,23,26,27,28,29,30,2,3,5,6,9,10,11,12,13,16,17,18,19,20,23,24,25,26,27,30]})
data['date'] = pd.to_datetime(data)
data['spot'] = [77.3438,78.192,78.1044,78.4357,78.0285,77.3507,76.78,77.13,77.0417,77.6525,78.0906,77.91,77.6602,77.3568,76.7243,76.5872,76.1374,76.4435,77.2906,79.2239,78.8993,79.5305,80.5313,79.3615,77.0156,77.4226,76.288,76.5648,77.1171,77.3568,77.374,76.1758,76.2325,76.0401,76.0529,76.1992,76.1648,75.474,75.551,75.7018,75.8639,76.3944]
data['strike'] = [77.3438,77.3438,77.3438,77.3438,77.3438,77.3438,77.3438,77.3438,77.3438,77.3438,77.3438,77.3438,77.3438,77.3438,77.3438,77.3438,77.3438,77.3438,77.3438,77.3438,77.3438,77.3438,80.5313,80.5313,80.5313,80.5313,80.5313,80.5313,80.5313,80.5313,80.5313,80.5313,80.5313,80.5313,80.5313,80.5313,80.5313,80.5313,80.5313,80.5313,80.5313,80.5313]
data = data.set_index('date')
I believe we can get the first() for every year and month combination and later join that with the main data.
data2=data.groupby(['year','month']).first().reset_index()
#join data 2 with data based on month and year later on
   year  month  day     spot
0  2020     10    1  77.3438
1  2020     11    2  80.5313
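The join mentioned above could look like this. A sketch on a trimmed version of the question's data (the column subset here is illustrative):

```python
import pandas as pd

# trimmed version of the question's data
data = pd.DataFrame({'year': [2020, 2020, 2020, 2020],
                     'month': [10, 10, 11, 11],
                     'day': [1, 2, 2, 3],
                     'spot': [77.3438, 78.192, 80.5313, 79.3615]})

# first spot per (year, month) becomes the strike
firsts = (data.groupby(['year', 'month'], as_index=False)['spot']
              .first()
              .rename(columns={'spot': 'strike'}))
data = data.merge(firsts, on=['year', 'month'])
print(data)
```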
Based on the question, what I have understood is that we need to take every month's first day and the respective 'spot' column value. Correct me if I have understood it wrong.
Strike = Spot value from first day of each month
To do this, we need to do the following:
Step 1: Get the year/month value from the date column. Alternatively, we can use the year and month columns already in the DataFrame.
Step 2: Group by year and month. That gives all the records for each year+month; from these we need the first record, which has the earliest date of the month. The earliest date can be the 1st, 2nd or 3rd of the month, depending on the data in the column.
Step 3: By using transform with groupby, pandas broadcasts the result back to match the DataFrame length, so every record in a group gets the same value. In this example we have only 2 months (Oct and Nov) but 42 rows; transform sends back 42 rows.
The code groupby(['year','month'])['date'].transform('first') will give the first day of each month.
Use this:
data['dy'] = data.groupby(['year','month'])['date'].transform('first')
or:
data['dx'] = data.date.dt.to_period('M') #to get yyyy-mm value
Step 4: Using transform, we can also get the spot value and assign it to strike, giving us the desired result. Instead of returning the first date of the month, we return the spot value: groupby(['year','month'])['spot'].transform('first').
Use this:
data['strike'] = data.groupby(['year','month'])['spot'].transform('first')
or
data['strike'] = data.groupby('dx')['spot'].transform('first')
Putting all this together, the full code to get the strike price using the spot price from the first day of each month:
import pandas as pd
import numpy as np
data = pd.DataFrame({'year': [2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020],
'month': [10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11],
'day': [1,2,5,6,7,8,9,12,13,14,15,16,19,20,21,22,23,26,27,28,29,30,2,3,5,6,9,10,11,12,13,16,17,18,19,20,23,24,25,26,27,30]})
data['date'] = pd.to_datetime(data)
data['spot'] = [77.3438,78.192,78.1044,78.4357,78.0285,77.3507,76.78,77.13,77.0417,77.6525,78.0906,77.91,77.6602,77.3568,76.7243,76.5872,76.1374,76.4435,77.2906,79.2239,78.8993,79.5305,80.5313,79.3615,77.0156,77.4226,76.288,76.5648,77.1171,77.3568,77.374,76.1758,76.2325,76.0401,76.0529,76.1992,76.1648,75.474,75.551,75.7018,75.8639,76.3944]
#Pick the first day of month Spot price as the Strike price
data['strike'] = data.groupby(['year','month'])['spot'].transform('first')
#This will give you the first row of each month
print (data)
The output of this will be:
year month day date spot strike
0 2020 10 1 2020-10-01 77.3438 77.3438
1 2020 10 2 2020-10-02 78.1920 77.3438
2 2020 10 5 2020-10-05 78.1044 77.3438
3 2020 10 6 2020-10-06 78.4357 77.3438
4 2020 10 7 2020-10-07 78.0285 77.3438
5 2020 10 8 2020-10-08 77.3507 77.3438
6 2020 10 9 2020-10-09 76.7800 77.3438
7 2020 10 12 2020-10-12 77.1300 77.3438
8 2020 10 13 2020-10-13 77.0417 77.3438
9 2020 10 14 2020-10-14 77.6525 77.3438
10 2020 10 15 2020-10-15 78.0906 77.3438
11 2020 10 16 2020-10-16 77.9100 77.3438
12 2020 10 19 2020-10-19 77.6602 77.3438
13 2020 10 20 2020-10-20 77.3568 77.3438
14 2020 10 21 2020-10-21 76.7243 77.3438
15 2020 10 22 2020-10-22 76.5872 77.3438
16 2020 10 23 2020-10-23 76.1374 77.3438
17 2020 10 26 2020-10-26 76.4435 77.3438
18 2020 10 27 2020-10-27 77.2906 77.3438
19 2020 10 28 2020-10-28 79.2239 77.3438
20 2020 10 29 2020-10-29 78.8993 77.3438
21 2020 10 30 2020-10-30 79.5305 77.3438
22 2020 11 2 2020-11-02 80.5313 80.5313
23 2020 11 3 2020-11-03 79.3615 80.5313
24 2020 11 5 2020-11-05 77.0156 80.5313
25 2020 11 6 2020-11-06 77.4226 80.5313
26 2020 11 9 2020-11-09 76.2880 80.5313
27 2020 11 10 2020-11-10 76.5648 80.5313
28 2020 11 11 2020-11-11 77.1171 80.5313
29 2020 11 12 2020-11-12 77.3568 80.5313
30 2020 11 13 2020-11-13 77.3740 80.5313
31 2020 11 16 2020-11-16 76.1758 80.5313
32 2020 11 17 2020-11-17 76.2325 80.5313
33 2020 11 18 2020-11-18 76.0401 80.5313
34 2020 11 19 2020-11-19 76.0529 80.5313
35 2020 11 20 2020-11-20 76.1992 80.5313
36 2020 11 23 2020-11-23 76.1648 80.5313
37 2020 11 24 2020-11-24 75.4740 80.5313
38 2020 11 25 2020-11-25 75.5510 80.5313
39 2020 11 26 2020-11-26 75.7018 80.5313
40 2020 11 27 2020-11-27 75.8639 80.5313
41 2020 11 30 2020-11-30 76.3944 80.5313
Previous Answer to get the first day of each month (within the column data)
One way to do it is to create a dummy column to store the first day of each month. Then use drop_duplicates() and retain only the first row.
Key assumption:
The assumption with this logic is that we have at least 2 rows for each month. If there is only one row for a month, then it will not be part of the duplicates and you will NOT get that month's data.
That will give you the first day of each month.
import pandas as pd
import numpy as np
data = pd.DataFrame({'year': [2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020,2020],
'month': [10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11],
'day': [1,2,5,6,7,8,9,12,13,14,15,16,19,20,21,22,23,26,27,28,29,30,2,3,5,6,9,10,11,12,13,16,17,18,19,20,23,24,25,26,27,30]})
data['date'] = pd.to_datetime(data)
data['spot'] = [77.3438,78.192,78.1044,78.4357,78.0285,77.3507,76.78,77.13,77.0417,77.6525,78.0906,77.91,77.6602,77.3568,76.7243,76.5872,76.1374,76.4435,77.2906,79.2239,78.8993,79.5305,80.5313,79.3615,77.0156,77.4226,76.288,76.5648,77.1171,77.3568,77.374,76.1758,76.2325,76.0401,76.0529,76.1992,76.1648,75.474,75.551,75.7018,75.8639,76.3944]
#create a dummy column to store the first day of the month
data['dx'] = data.date.dt.to_period('M')
#drop duplicates while retaining only the first row of each month
dx = data.drop_duplicates('dx',keep='first')
#This will give you the first row of each month
print (dx)
The output of this will be:
    year  month  day       date     spot       dx
0   2020     10    1 2020-10-01  77.3438  2020-10
22  2020     11    2 2020-11-02  80.5313  2020-11
If there is only one row for a given month, then you can use groupby the month and take the first record.
data.groupby(['dx']).first()
This will give you:
         year  month  day       date     spot
dx
2020-10  2020     10    1 2020-10-01  77.3438
2020-11  2020     11    2 2020-11-02  80.5313
I guess this can be achieved without creating any other dataframe:
data['strike'] = data.groupby(['year','month'])['spot'].transform('first')

Pandas: How to create DateTime index

There is Pandas Dataframe as:
   year month  count
0  2014   Jan     12
1  2014   Feb     10
2  2015   Jan     12
3  2015   Feb     10
How to create a DateTime index from 'year' and 'month', so the result would be:
            count
2014.01.31     12
2014.02.28     10
2015.01.31     12
2015.02.28     10
Use to_datetime with DataFrame.pop to use and drop the columns, then add offsets.MonthEnd():
dates = pd.to_datetime(df.pop('year').astype(str) + df.pop('month'), format='%Y%b')
df.index = dates + pd.offsets.MonthEnd()
print (df)
            count
2014-01-31     12
2014-02-28     10
2015-01-31     12
2015-02-28     10
Or:
dates = pd.to_datetime(df.pop('year').astype(str) + df.pop('month'), format='%Y%b')
df.index = dates + pd.to_timedelta(dates.dt.daysinmonth - 1, unit='d')
print (df)
            count
2014-01-31     12
2014-02-28     10
2015-01-31     12
2015-02-28     10
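A PeriodIndex-based variant of the same idea: build monthly periods, then take each period's end-of-month timestamp. A sketch; normalize() strips the 23:59:59 time component that how='end' produces:

```python
import pandas as pd

df = pd.DataFrame({'year': [2014, 2014, 2015, 2015],
                   'month': ['Jan', 'Feb', 'Jan', 'Feb'],
                   'count': [12, 10, 12, 10]})

dates = pd.to_datetime(df.pop('year').astype(str) + df.pop('month'), format='%Y%b')
# month-end via periods; normalize() drops the end-of-day time component
df.index = dates.dt.to_period('M').dt.to_timestamp(how='end').dt.normalize()
print(df)
```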

Pandas unpivot dataframe using datetime elements from column names

Say I have a pandas dataframe as follows:
Here Store serves as the id, the Jan 18 - Mar 18 columns represent sales of the stores in the respective years and months, and Trading Area is an example of a time-invariant feature of a store.
For simplicity assume sales column names are already converted to proper datetime format.
Expected result:
I was thinking about using pandas.melt; however, I'm not sure how to properly use the datetime information contained in the column names to construct columns for year and month. Obviously this can be done manually in a loop, but I need to apply this to arbitrarily large dataframes, and that is where it gets tedious; surely a more elegant solution exists.
Any help is appreciated.
Edit:
data = pd.DataFrame({'Store': ['A', 'B', 'C'], 'Jan 18': [100, 50, 60], 'Feb 18': [120, 70, 80], 'Mar 18': [140, 90, 100], 'Trading Area': [500, 800, 700]})
You could use melt in the following way:
# melt
melted = data.melt(id_vars=['Store', 'Trading Area'], var_name='Month', value_name='Sales')
# extract month and year
melted[['Month', 'Year']] = melted.Month.str.split(expand=True)
# format year
melted['Year'] = pd.to_datetime(melted.Year, yearfirst=True, format='%y').dt.year
print(melted.sort_values('Store'))
Output
  Store  Trading Area Month  Sales  Year
0     A           500   Jan    100  2018
3     A           500   Feb    120  2018
6     A           500   Mar    140  2018
1     B           800   Jan     50  2018
4     B           800   Feb     70  2018
7     B           800   Mar     90  2018
2     C           700   Jan     60  2018
5     C           700   Feb     80  2018
8     C           700   Mar    100  2018
You can do a wide_to_long followed by a stack:
(pd.wide_to_long(df=data,
                 stubnames=['Jan', 'Feb', 'Mar'],
                 i=['Store', 'Trading Area'],
                 j='Year',
                 sep=' ')
   .stack()
   .reset_index(name='Sales')
   .rename(columns={'level_3': 'Month'})
)
Output:
  Store  Trading Area  Year Month  Sales
0     A           500    18   Jan    100
1     A           500    18   Feb    120
2     A           500    18   Mar    140
3     B           800    18   Jan     50
4     B           800    18   Feb     70
5     B           800    18   Mar     90
6     C           700    18   Jan     60
7     C           700    18   Feb     80
8     C           700    18   Mar    100
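Since the question asks about using the datetime information in the column names directly, the melted column can also be parsed with to_datetime. A sketch assuming the 'Mon YY' header format from the question; the 'period' column name is my own choice:

```python
import pandas as pd

data = pd.DataFrame({'Store': ['A', 'B', 'C'],
                     'Jan 18': [100, 50, 60],
                     'Feb 18': [120, 70, 80],
                     'Mar 18': [140, 90, 100],
                     'Trading Area': [500, 800, 700]})

# melt, then parse the 'Mon YY' column names into real datetimes,
# from which year and month fall out directly
melted = data.melt(id_vars=['Store', 'Trading Area'],
                   var_name='period', value_name='Sales')
melted['period'] = pd.to_datetime(melted['period'], format='%b %y')
melted['Year'] = melted['period'].dt.year
melted['Month'] = melted['period'].dt.month_name().str[:3]
print(melted)
```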
