row subtraction in lambda pandas dataframe - python-3.x

I have a dataframe with multiple columns. One of the columns is a cumulative revenue column. If the year has not ended yet, the revenue stays constant for the rest of the period because the incoming daily revenue is 0.
The dataframe looks like this (a reconstruction is sketched below).
Now I want to create a new column where each row is the current row minus the previous row; if the result is 0, print 0 for that row in the new column, and if it is not zero, use the row value itself. The new dataframe should look like this:
My idea was to do this with the apply lambda method. So this is the thinking:
df['2017new'] = df['2017'].apply(lambda x: 0 if row - lastrow == 0 else x)
But I do not know how to write the row - lastrow part of the code. How can I do this? Thanks in advance!
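The screenshots from the original post are not reproduced here; a minimal sketch that rebuilds a matching frame from the printed output in the answers below (the first answer refers to it as df2):

import numpy as np
import pandas as pd

# Sample frame reconstructed from the answers' printed output.
df = pd.DataFrame({'2016': [10, 15, 70, 90, 93, 99],
                   '2017': [21, 34, 40, 53, 53, 53]})
df2 = df  # the first answer calls the same frame df2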

By using np.where
df2['New']=np.where(df2['2017'].diff().eq(0),0,df2['2017'])
df2
Out[190]:
2016 2017 New
0 10 21 21
1 15 34 34
2 70 40 40
3 90 53 53
4 93 53 0
5 99 53 0

We can shift the data and fill the values based on the condition using np.where, i.e.
df['new'] = np.where(df['2017']-df['2017'].shift(1)==0,0,df['2017'])
or with df.where, i.e.
df['new'] = df['2017'].where(df['2017']-df['2017'].shift(1)!=0,0)
2016 2017 new
0 10 21 21
1 15 34 34
2 70 40 40
3 90 53 53
4 93 53 0
5 99 53 0
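The same logic can also be written with Series.mask, which replaces values where the condition holds; a minimal sketch assuming the same df as above:

# Replace values with 0 wherever the day-over-day change is 0.
# diff() yields NaN for the first row, and NaN.eq(0) is False,
# so the first value is always kept.
df['new'] = df['2017'].mask(df['2017'].diff().eq(0), 0)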

Related

Grouping data based on month-year in pandas and then dropping all entries except the latest one - Python

Below is my example dataframe
Date Indicator Value
0 2000-01-30 A 30
1 2000-01-31 A 40
2 2000-03-30 C 50
3 2000-02-27 B 60
4 2000-02-28 B 70
5 2000-03-31 C 90
6 2000-03-28 C 100
7 2001-01-30 A 30
8 2001-01-31 A 40
9 2001-03-30 C 50
10 2001-02-27 B 60
11 2001-02-28 B 70
12 2001-03-31 C 90
13 2001-03-28 C 100
Desired Output
Date Indicator Value
2000-01-31 A 40
2000-02-28 B 70
2000-03-31 C 90
2001-01-31 A 40
2001-02-28 B 70
2001-03-31 C 90
I want to write code that groups the data by month-year and then keeps only the entry with the latest date in each month-year, dropping the rest. The data runs up to the year 2020.
So far I was only able to fetch the count by month-year. I have not been able to write code that groups the data by month-year and indicator and returns the correct results.
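For reference, the example frame above can be rebuilt for testing like so:

import pandas as pd

# Rebuild of the example frame shown in the question.
df = pd.DataFrame({
    'Date': ['2000-01-30', '2000-01-31', '2000-03-30', '2000-02-27',
             '2000-02-28', '2000-03-31', '2000-03-28', '2001-01-30',
             '2001-01-31', '2001-03-30', '2001-02-27', '2001-02-28',
             '2001-03-31', '2001-03-28'],
    'Indicator': list('AACBBCCAACBBCC'),
    'Value': [30, 40, 50, 60, 70, 90, 100, 30, 40, 50, 60, 70, 90, 100],
})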
Use Series.dt.to_period for month periods, get the index of the maximal date per group with DataFrameGroupBy.idxmax, and then pass it to DataFrame.loc:
df['Date'] = pd.to_datetime(df['Date'])
print (df['Date'].dt.to_period('m'))
0 2000-01
1 2000-01
2 2000-03
3 2000-02
4 2000-02
5 2000-03
6 2000-03
7 2001-01
8 2001-01
9 2001-03
10 2001-02
11 2001-02
12 2001-03
13 2001-03
Name: Date, dtype: period[M]
df = df.loc[df.groupby(df['Date'].dt.to_period('m'))['Date'].idxmax()]
print (df)
Date Indicator Value
1 2000-01-31 A 40
4 2000-02-28 B 70
5 2000-03-31 C 90
8 2001-01-31 A 40
11 2001-02-28 B 70
12 2001-03-31 C 90
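An alternative giving the same result, in case idxmax feels opaque: starting again from the original frame, sort by date and keep the last row of each month period (a sketch using a hypothetical helper column named month):

out = (df.assign(month=df['Date'].dt.to_period('M'))
         .sort_values('Date')
         .drop_duplicates('month', keep='last')
         .drop(columns='month'))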

Maximum for each column, return corresponding value of another column for that maximum, create new dataframe of results

I hope the title is not misleading.
I need to go from this dataframe:
Column_1 Columns_2 First Second Third
0 Element_1 to_be_ignored 10 5 77
1 Element_2 to_be_ignored 30 30 11
2 Element_3 to_be_ignored 60 7 3
3 Element_4 to_be_ignored 20 87 90
to:
New_Column New_Column_1 Max
0 Element_3 First 60
1 Element_4 Second 87
2 Element_4 Third 90
Get the maximum value of every column,
get the corresponding value of Column_1 for that maximum,
and transform the result into a new dataframe.
What I have got so far:
data = {'Column_1': ['Element_1', 'Element_2', 'Element_3', 'Element_4'],
'Columns_2': ['to_be_ignored', 'to_be_ignored', 'to_be_ignored', 'to_be_ignored'],
'First': [10,30,60,20], 'Second': [5,30,7,87], 'Third': [77,11,3,90]}
df = pd.DataFrame(data)
df.loc[df.iloc[:, 2:].idxmax(), 'Column_1']
So I am able to get the index position and the Column_1 value for the maximum in each column:
2 Element_3
3 Element_4
3 Element_4
Unfortunately I can't figure out the rest.
Thanks!
IIUC, melt then sort_values + drop_duplicates:
df.melt(['Column_1','Columns_2']).sort_values('value').drop_duplicates(['variable'],keep='last')
Column_1 Columns_2 variable value
2 Element_3 to_be_ignored First 60
7 Element_4 to_be_ignored Second 87
11 Element_4 to_be_ignored Third 90
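To match the column names of the desired output exactly, the same chain can be extended; a sketch:

out = (df.melt(['Column_1', 'Columns_2'])
         .sort_values('value')
         .drop_duplicates('variable', keep='last')
         .drop(columns='Columns_2')
         .rename(columns={'Column_1': 'New_Column',
                          'variable': 'New_Column_1',
                          'value': 'Max'})
         .reset_index(drop=True))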

Evaluate same logic condition on multiple columns of DataFrame

I have a pandas dataframe that contains several measurements (time-series). Amongst these measurements, some come from similar pieces of equipment (in this case, 6 pumps).
This is a sample of my data:
pd.DataFrame(data = {'pump1': [20,20],'pump2': [21,30],'pump3': [24,38],'pump4': [23,30],'pump5': [22,30],'pump6': [2,36], 'current':[10,200], 'flow': [50,50]})
pump1 pump2 pump3 pump4 pump5 pump6 current flow
0 20 21 24 23 22 2 10 50
1 20 30 38 30 30 36 200 50
I am trying to evaluate whether all 6 pumps meet the same criterion, that is, whether the value of every pump in the row is less than 25.
In this case, row 0 would return True, and row 1 would return False
I could write:
df["idle"] = (df["pump1"]<25) & (df["pump2"]<25) & (df["pump3"]<25) & (df["pump4"]<25) & (df["pump5"]<25) & (df["pump6"]<25)
But it's pretty ugly! Is there a better way to write this?
I thought I could use something like .all() and write the condition <25 only once... but I don't know where to start!
Use filter with all:
df["result"] = (df.filter(like="pump",axis=1)<25).all(1)
print (df)
pump1 pump2 pump3 pump4 pump5 pump6 current flow result
0 20 21 24 23 22 2 10 50 True
1 20 30 38 30 30 36 200 50 False
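If the pump columns did not share a convenient name prefix for filter, an explicit column list works the same way; a sketch assuming the frame above:

# Build the list of pump columns explicitly and test them all at once.
pump_cols = [f'pump{i}' for i in range(1, 7)]
df['result'] = df[pump_cols].lt(25).all(axis=1)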

EXCEL: Sum columns based on common index

I'm trying to sum columns that have a common index, but the difficulty is that the indexes are not in the same rows.
Here is an example
DayIndex2018  Value2018  DayIndex2017  Value2017
                         1             20
1             50         2             45
2             60         3             55
3             70         4             33
4             32         5             23
What I would like is the sum of Value2017 for the day indexes that 2017 has in common with 2018: 20 + 45 + 55 + 33.
I thought of doing this with a SUMIF, but for that the rows would need to be perfectly aligned.
Any idea of how I could do this?
Thanks
Use SUMPRODUCT to sum the array returned by SUMIF.
Formula in cell F2:
=SUMPRODUCT(SUMIF(C2:C6,A2:A6,D2:D6))
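SUMIF(C2:C6, A2:A6, D2:D6) produces one conditional sum per criterion in A2:A6, and SUMPRODUCT adds that array up. For readers more at home in pandas, a rough equivalent (the frame and column names are assumptions mirroring the sheet above):

import pandas as pd

# Hypothetical frame mirroring the sheet; the first 2018 row is empty.
df = pd.DataFrame({
    'DayIndex2018': [None, 1, 2, 3, 4],
    'Value2018':    [None, 50, 60, 70, 32],
    'DayIndex2017': [1, 2, 3, 4, 5],
    'Value2017':    [20, 45, 55, 33, 23],
})

# Sum Value2017 only for day indexes that also occur in 2018.
total = df.loc[df['DayIndex2017'].isin(df['DayIndex2018']), 'Value2017'].sum()
print(total)  # 20 + 45 + 55 + 33 = 153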

Pandas multi-index subtract from value based on value in other column part 2

Based on a thorough and accurate response to this question, I am now faced with a new issue based on slightly different data.
Given this data frame:
df = pd.DataFrame({
('A', 'a'): [23,3,54,7,32,76],
('B', 'b'): [23,'n/a',54,'n/a',32,76],
('possible','possible'):[100,100,100,100,100,100]
})
df
A B possible
a b possible
0 23 23 100
1 3 n/a 100
2 54 54 100
3 7 n/a 100
4 32 32 100
5 76 76 100
I'd like to subtract 4 from 'possible', per row, for any instance (column) where the value is 'n/a' for that row (and then change all 'n/a' values to 0).
A B possible
a b possible
0 23 23 100
1 3 n/a 96
2 54 54 100
3 7 n/a 96
4 32 32 100
5 76 76 100
Some conditions:
It may occur that a column is all floats (though they appear to be integers upon inspection). This was not factored into the original question.
It may also occur that a row contains two instances (columns) of 'n/a' values. This was addressed by the previous solution.
Here is the previous solution:
idx = pd.IndexSlice
df.loc[:, idx['possible', 'possible']] -= (df.loc[:, idx[('A','B'),:]] == 'n/a').sum(axis=1) * 4
df.replace({'n/a':0}, inplace=True)
It works, except where a column (A or B) contains all floats (seemingly integers). In that case, this error occurs:
TypeError: Could not compare ['n/a'] with block values
I think you can add a cast to string with astype in the condition:
idx = pd.IndexSlice
df.loc[:, idx['possible', 'possible']] -= (
    (df.loc[:, idx[('A','B'),:]].astype(str) == 'n/a').sum(axis=1) * 4)
df.replace({'n/a':0}, inplace=True)
print(df)
A B possible
a b possible
0 23 23 100
1 3 0 96
2 54 54 100
3 7 0 96
4 32 32 100
5 76 76 100
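For completeness, a sketch of the same fix as a self-contained script against current pandas (Python 3 print, no inplace):

import pandas as pd

df = pd.DataFrame({
    ('A', 'a'): [23, 3, 54, 7, 32, 76],
    ('B', 'b'): [23, 'n/a', 54, 'n/a', 32, 76],
    ('possible', 'possible'): [100] * 6,
})

# Casting to str first keeps the comparison safe for all-numeric columns.
n_missing = (df.loc[:, [('A', 'a'), ('B', 'b')]].astype(str) == 'n/a').sum(axis=1)
df[('possible', 'possible')] -= n_missing * 4
df = df.replace({'n/a': 0})
print(df)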
