Evaluate same logic condition on multiple columns of DataFrame - python-3.x

I have a pandas dataframe that contains several measurements (time series). Amongst these measurements, some come from similar pieces of equipment (in this case, 6 pumps).
This is a sample of my data:
import pandas as pd

df = pd.DataFrame(data={'pump1': [20, 20], 'pump2': [21, 30], 'pump3': [24, 38],
                        'pump4': [23, 30], 'pump5': [22, 30], 'pump6': [2, 36],
                        'current': [10, 200], 'flow': [50, 50]})
   pump1  pump2  pump3  pump4  pump5  pump6  current  flow
0     20     21     24     23     22      2       10    50
1     20     30     38     30     30     36      200    50
I am trying to evaluate whether all 6 pumps meet the same criterion, that is, whether the value of every pump in the row is less than 25.
In this case, row 0 would return True and row 1 would return False.
I could write:
df["idle"] = (df["pump1"]<25) & (df["pump2"]<25) & (df["pump3"]<25) & (df["pump4"]<25) & (df["pump5"]<25) & (df["pump6"]<25)
But it's pretty ugly! Is there a better way to write this?
I thought I could use something like .all() and write the condition <25 only once... but I don't know where to start!

Use filter with all:
df["result"] = (df.filter(like="pump",axis=1)<25).all(1)
print (df)
   pump1  pump2  pump3  pump4  pump5  pump6  current  flow  result
0     20     21     24     23     22      2       10    50    True
1     20     30     38     30     30     36      200    50   False
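If the pump columns did not share a common substring in their names, the same idea works with an explicit list of columns. A minimal sketch, using the column names from the sample above:
pump_cols = ["pump1", "pump2", "pump3", "pump4", "pump5", "pump6"]
# True when every pump reading in the row is below 25
df["idle"] = (df[pump_cols] < 25).all(axis=1)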

Related

How to sum columns ending with a certain word in a pandas dataframe in Python?

I am trying to sum the columns ending with 'Load' and 'Gen' into two new columns.
My dataframe is:
Date A_Gen A_Load B_Gen B_Load
1-1-2010 30 20 40 30
1-2-2010 45 25 35 25
The result wanted is:
Date A_Gen A_Load B_Gen B_Load S_Gen S_Load
1-1-2010 30 20 40 30 70 50
1-2-2010 45 25 35 25 80 50
Try using filter(like=...) to get the relevant columns, sum along axis=1, and assign your two new columns:
df['S_Gen'], df['S_Load'] = df.filter(like='Gen').sum(axis=1), df.filter(like='Load').sum(axis=1)
Output:
df
Out[146]:
        Date  A_Gen  A_Load  B_Gen  B_Load  S_Gen  S_Load
0 2010-01-01     30      20     40      30     70      50
1 2010-02-01     45      25     35      25     80      50
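Note that filter(like=...) matches the given text anywhere in the column name. To match columns that literally end with 'Gen' or 'Load', a regex anchored at the end of the name is safer. A minimal sketch of the same sums with filter(regex=...):
# '$' anchors the pattern to the end of the column name
df['S_Gen'] = df.filter(regex='Gen$').sum(axis=1)
df['S_Load'] = df.filter(regex='Load$').sum(axis=1)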

How to select a set of values in pandas data frame (multiple columns with multiple row conditions)

I have a huge CSV file, like the one given below, which I opened as a dataframe using pandas. I want to extract data from multiple columns over different date ranges.
I want to select the values of the last 3 columns from a particular date and hour to another. The slicing options I tried and found online were for a single column.
date heure PM10 NO2 O3
0 01/01/2016 1 27 22 36
1 01/01/2016 2 25 29 27
2 01/01/2016 3 26 47 10
3 01/01/2016 4 16 40 13
4 01/01/2016 5 15 34 13
5 02/01/2016 1 15 34 13
6 02/01/2016 2 15 34 13
Target output - taking data from a particular date and hour to another one:
3 01/01/2016 4 16
4 01/01/2016 5 15
Thank you. The data set is obviously much bigger than the sample shown.
You can do this:
df_selected = df[(df.date >= "01/01/2016") &
                 (df['heure'] >= 4) &
                 (df.date < "02/01/2016") &
                 (df['heure'] < 6)].iloc[:, :3]  # first three columns
Alternatively, for the column selection you can use .loc[:, ['name', 'of', 'columns']], or .iloc[:, -n:] for the last n columns.
Be careful with date: comparing day-first date strings lexicographically may not give the order you expect, so you may have to convert the dates first with df['date'] = pd.to_datetime(df.date).
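A minimal sketch of that conversion and the same selection done on real datetimes (assuming day-first dates as in the sample, so 01/01/2016 is 1 January 2016, and the column names shown above):
import pandas as pd

# parse day-first date strings into proper datetimes
df['date'] = pd.to_datetime(df['date'], dayfirst=True)
# rows on 1 Jan 2016 with heure 4 or 5, keeping three columns
mask = (df['date'] == '2016-01-01') & df['heure'].between(4, 5)
df_selected = df.loc[mask, ['date', 'heure', 'PM10']]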

Mark sudden changes in prices in a dataframe time series and color them

I have a pandas dataframe of prices for different months and years (a time series) with 80 columns. I want to be able to detect significant changes in prices, either up or down, and color them differently in a dataframe. Is that possible, and what would be the best approach?
Jan-2001 Feb-2001 Jan-2002 Feb-2002 ....
100 30 10 ...
110 25 1 ...
40 5 50
70 11 4
120 35 2
Here in the first column 40 and 70 should be marked, in the second column 5 and 11 should be marked; in the third column I'm not really sure, but probably 1, 50, 4, 2...
Your question involves two problems, as I see it.
Printing the highlighting depends on the output medium you're trying to target, be it STDOUT, a file, or some specific program.
Identifying outliers based on the column data. It's hard to tell whether you want this based on the entire dataset, versus the previous data in the column, like a rolling outlier, i.e. the preceding data is used to decide whether the next value is out of whack.
In the instance below I provide a method to go at the data with standard deviation / z-scoring based on the mean of the entire column. You will have to tweak the > and < thresholds to get to your desired state; there are many intricacies to this concept, and I would suggest taking a look at a few resources on the subject.
For your data:
Jan-2001,Feb-2001,Jan-2002
100,30,10
110,25,1
40,5,50
70,11,4
120,35,20000
I am aware of methods to highlight cells, but not in the terminal. The styling API at https://pandas.pydata.org/pandas-docs/stable/style.html works in a few environments (e.g. Jupyter notebooks).
To get at the original question, identifying outliers in your data, you could use something like the code below, which flags values based on standard deviation and z-score.
Sample Code:
import pandas as pd

df = pd.read_csv("full.txt")
original = df.columns
print(df)

for col in original:
    col_std = col + "_std"
    col_zscore = col + "_zscore"
    df[col_std] = df[col].std()  # sample std (ddof=1), shown in the output below
    df[col_zscore] = (df[col] - df[col].mean()) / df[col].std(ddof=0)  # z-score against the column mean
    # flag values whose z-score is above 1.5 or below -0.5
    print(df[col].loc[(df[col_zscore] > 1.5) | (df[col_zscore] < -0.5)])

print(df)
Output 1: # prints the original dataframe
   Jan-2001  Feb-2001  Jan-2002
0       100        30        10
1       110        25         1
2        40         5        50
3        70        11         4
4       120        35     20000
Output 2: # Identifies the outliers
2 40
3 70
Name: Jan-2001, dtype: int64
2 5
3 11
Name: Feb-2001, dtype: int64
0 10
1 1
3 4
4 20000
Name: Jan-2002, dtype: int64
Output 3: # Prints the full dataframe created, with the std and z-score of each item based on its column
   Jan-2001  Feb-2001  Jan-2002  Jan-2001_std  Jan-2001_zscore  \
0       100        30        10     32.710854         0.410152
1       110        25         1     32.710854         0.751945
2        40         5        50     32.710854        -1.640606
3        70        11         4     32.710854        -0.615227
4       120        35     20000     32.710854         1.093737

   Feb-2001_std  Feb-2001_zscore  Jan-2002_std  Jan-2002_zscore
0     12.735776         0.772524   8937.026519        -0.500781
1     12.735776         0.333590   8937.026519        -0.501907
2     12.735776        -1.422147   8937.026519        -0.495777
3     12.735776        -0.895426   8937.026519        -0.501531
4     12.735776         1.211459   8937.026519         1.999995
Resources for zscore are here:
https://statistics.laerd.com/statistical-guides/standard-score-2.php
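For the highlighting itself, a minimal sketch using the pandas styling API linked above (this assumes an environment that renders HTML, such as a Jupyter notebook; the thresholds mirror the sample code):
import pandas as pd

def highlight_outliers(col):
    # color cells whose column z-score is above 1.5 or below -0.5
    z = (col - col.mean()) / col.std(ddof=0)
    return ['background-color: yellow' if v > 1.5 or v < -0.5 else '' for v in z]

df = pd.read_csv("full.txt")
styled = df.style.apply(highlight_outliers, axis=0)  # a Styler; renders in notebooks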

row subtraction in lambda pandas dataframe

I have a dataframe with multiple columns. One of the columns is the cumulative revenue column. If the year has not ended yet, the revenue will stay constant for the rest of the period because the incoming daily revenue is 0.
The dataframe looks like this
Now I want to create a new column where each row has the previous row subtracted from it; if the result is 0, print 0 for that row in the new column, otherwise keep the row's value. The new dataframe should look like this:
My idea was to do this with the apply lambda method. So this is the thinking:
df['2017new'] = df['2017'].apply(lambda x: 0 if row - lastrow == 0 else x)
But I do not know how to write the row - lastrow part of the code. How do I do this? Thanks in advance!
By using np.where:
import numpy as np

df2['New'] = np.where(df2['2017'].diff().eq(0), 0, df2['2017'])
df2
Out[190]:
2016 2017 New
0 10 21 21
1 15 34 34
2 70 40 40
3 90 53 53
4 93 53 0
5 99 53 0
We can shift the data and fill the values based on a condition using np.where, i.e.
df['new'] = np.where(df['2017'] - df['2017'].shift(1) == 0, 0, df['2017'])
or with df.where, i.e.
df['new'] = df['2017'].where(df['2017'] - df['2017'].shift(1) != 0, 0)
2016 2017 new
0 10 21 21
1 15 34 34
2 70 40 40
3 90 53 53
4 93 53 0
5 99 53 0
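An equivalent one-liner with Series.mask, as a sketch (note that .diff() leaves NaN in the first row, which compares unequal to 0, so the first value is kept in all of these variants):
df['new'] = df['2017'].mask(df['2017'].diff().eq(0), 0)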

AGGREGAT with criteria and duplicates in array

I have the following Excel spreadsheet:
A B C D E
1 ProdID Price Unique ProdID 1. Biggest 2. Biggest
2 2606639 40 2606639 50 50
3 2606639 50 4633523 45 35
4 2606639 20 3911436 25 25
5 2606639 50
6 4633523 45
7 4633523 20
8 4633523 35
9 3911436 20
10 3911436 25
11 3911436 25
12 3911436 15
In cells D2:E4 I want to show the 1. biggest and 2. biggest price of each ProdID in column A. Therefore, I use the following formulas:
D2 =AGGREGAT(14,6,$B$2:$B$12/($A$2:$A$12=$C2),1)
E2 =AGGREGAT(14,6,$B$2:$B$12/($A$2:$A$12=$C2),2)
These formulas work as long as the prices in column B are unique, as you can see with the second ProdID (4633523).
However, once a price in column B is not unique (for example 50 for ProdID 2606639 and 25 for ProdID 3911436), the functions in cells D2:E4 do not show the right results.
Do you have an idea how to solve this issue with the AGGREGAT formula and without using an array formula?
You could check the number of occurrences of the ProdID combined with its first (biggest) price and use that in the last argument of the AGGREGAT function. So instead of
=AGGREGAT(14,6,$B$2:$B$12/($A$2:$A$12=$C2),2)
you would have
=AGGREGAT(14,6,$B$2:$B$12/($A$2:$A$12=$C2),2+COUNTIFS(A:A,C2,B:B,D2)-1)
Of course you can just put "1+COUNTIFS...", but I wrote it this way so it is easier to see that it uses position 2 plus the number of occurrences of the ProdID/biggest-price combination beyond the first occurrence. For example, for ProdID 2606639 the biggest price 50 occurs twice, so COUNTIFS returns 2 and the formula asks AGGREGAT for the 2 + 2 - 1 = 3rd largest value, which skips the duplicate 50 and returns 40.
