updating dataframe with iterrows - python-3.x

I want to compute values in a dataframe row by row with iterrows, as below:
df = pd.DataFrame([list(range(0, 6)) + [np.NaN] * 5,
                   list(range(10, 16)) + [np.NaN] * 5,
                   list(range(20, 26)) + [np.NaN] * 5,
                   list(range(30, 36)) + [np.NaN] * 5])
for (index, row) in df.iterrows():
    df.loc[index, 6:11] = row[1:6] - row[0]
Why is df not updated?
I even tried to replace row[1:6] - row[0] with df.loc[index, 1:6] - df.loc[index, 0] and it doesn't work. Is it a trivial mistake or a more subtle concept I don't master? Also, is there something more performant?

Pandas assignment with loc does index alignment before assignment. Your column names will be misaligned here. Do this:
for (index, row) in df.iterrows():
    df.loc[index, 6:11] = (row[1:6] - row[0]).values
df
Out[23]:
0 1 2 3 4 5 6 7 8 9 10
0 0 1 2 3 4 5 1.0 2.0 3.0 4.0 5.0
1 10 11 12 13 14 15 1.0 2.0 3.0 4.0 5.0
2 20 21 22 23 24 25 1.0 2.0 3.0 4.0 5.0
3 30 31 32 33 34 35 1.0 2.0 3.0 4.0 5.0
See the documentation for more information:
Warning pandas aligns all AXES when setting Series and DataFrame from
.loc, .iloc and .ix. This will not modify df because the column
alignment is before value assignment.
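For illustration, a minimal sketch (not part of the original answer) of that alignment behavior on a one-row frame:
import numpy as np
import pandas as pd

df = pd.DataFrame([list(range(0, 6)) + [np.NaN] * 5])
row = df.iloc[0]
rhs = row[1:6] - row[0]            # a Series still labelled 1..5
df.loc[0, 6:11] = rhs              # labels 1..5 vs. target columns 6..10: no overlap, NaN stays
print(df.loc[0, 6:11].tolist())    # [nan, nan, nan, nan, nan]
df.loc[0, 6:11] = rhs.values       # a bare array is assigned positionally
print(df.loc[0, 6:11].tolist())    # [1.0, 2.0, 3.0, 4.0, 5.0]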

You rarely ever need to iterate through a dataframe. I would just do this:
import pandas
import numpy
x = numpy.array([
    list(range(0, 6)) + [numpy.NaN] * 5,
    list(range(10, 16)) + [numpy.NaN] * 5,
    list(range(20, 26)) + [numpy.NaN] * 5,
    list(range(30, 36)) + [numpy.NaN] * 5
])
x[:, 6:] = x[:, 1:6] - x[:, [0]]
pandas.DataFrame(x)
Gives me:
0 1 2 3 4 5 6 7 8 9 10
0 0.0 1.0 2.0 3.0 4.0 5.0 1.0 2.0 3.0 4.0 5.0
1 10.0 11.0 12.0 13.0 14.0 15.0 1.0 2.0 3.0 4.0 5.0
2 20.0 21.0 22.0 23.0 24.0 25.0 1.0 2.0 3.0 4.0 5.0
3 30.0 31.0 32.0 33.0 34.0 35.0 1.0 2.0 3.0 4.0 5.0

Thanks. I combined the two solutions:
df = pd.DataFrame([list(range(0, 6)) + [np.NaN] * 5,
                   list(range(10, 16)) + [np.NaN] * 5,
                   list(range(20, 26)) + [np.NaN] * 5,
                   list(range(30, 36)) + [np.NaN] * 5])
df.loc[:, 6:11] = (row[1:6] - row[0]).values
df
Out[10]:
0 1 2 3 4 5 6 7 8 9 10
0 0 1 2 3 4 5 1.0 2.0 3.0 4.0 5.0
1 10 11 12 13 14 15 1.0 2.0 3.0 4.0 5.0
2 20 21 22 23 24 25 1.0 2.0 3.0 4.0 5.0
3 30 31 32 33 34 35 1.0 2.0 3.0 4.0 5.0
EDIT:
As a matter of fact this is not working! On my real data the result is not what it should be; it only looks right in this small example (presumably because row is left over from the earlier loop and every row here happens to have the same differences).
The iterrows() solution is slow (my data frame is around 9000*500), so I'm going with the numpy array solution: converting the data frame to a numpy array, doing the calculation and going back to a data frame.
import numpy as np
import pandas as pd

df = pd.DataFrame([list(range(0, 6)) + [np.NaN] * 5,
                   list(range(10, 16)) + [np.NaN] * 5,
                   list(range(20, 26)) + [np.NaN] * 5,
                   list(range(30, 36)) + [np.NaN] * 5])
x = df.as_matrix()
x[:, 6:] = x[:, 1:6] - x[:, [0]]
df = pd.DataFrame(x, columns=df.columns, index=df.index, dtype='int8')
df
Out[15]:
0 1 2 3 4 5 6 7 8 9 10
0 0 1 2 3 4 5 1 2 3 4 5
1 10 11 12 13 14 15 1 2 3 4 5
2 20 21 22 23 24 25 1 2 3 4 5
3 30 31 32 33 34 35 1 2 3 4 5
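Note: DataFrame.as_matrix() has been removed in newer pandas versions; DataFrame.to_numpy() is the replacement. On a current pandas, a sketch of the same conversion step is:
x = df.to_numpy(copy=True)   # copy so the original frame is untouched
x[:, 6:] = x[:, 1:6] - x[:, [0]]
df = pd.DataFrame(x, columns=df.columns, index=df.index, dtype='int8')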

Related

Pandas: custom rank function based on quantile

I have the following data frame.
item_id price quantile
0 1 10 0.1
1 3 20 0.2
2 4 30 0.3
3 6 40 0.4
4 11 50 0.5
5 12 60 0.6
6 15 70 0.7
7 20 80 0.8
8 25 90 0.9
9 26 100 1.0
I would like to have a custom rank function which starts from the record whose quantile is closest to 0.44, then goes down, then up, then down, then up, and so on.
The result should look like:
item_id price quantile customed_rank
0 1 10 0.1 6
1 3 20 0.2 4
2 4 30 0.3 2
3 6 40 0.4 1
4 11 50 0.5 3
5 12 60 0.6 5
6 15 70 0.7 7
7 20 80 0.8 8
8 25 90 0.9 9
9 26 100 1.0 10
Other than looping over the entire data frame, is there a more elegant way to achieve this? Thanks!
You want to rank by the absolute value of the difference between quantile and 0.44.
(df['quantile'] - 0.44).abs().rank()
0 7.0
1 5.0
2 3.0
3 1.0
4 2.0
5 4.0
6 6.0
7 8.0
8 9.0
9 10.0
Name: quantile, dtype: float64
A faster (but uglier) alternative is to argsort twice.
(df['quantile'] - 0.44).abs().values.argsort().argsort() + 1
array([ 7, 5, 3, 1, 2, 4, 6, 8, 9, 10])
Note that this solution is only faster if you work with Numpy array objects (through the values property), rather than Pandas series objects.
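To attach the ranking as a new column (a small sketch; the column name customed_rank comes from the question's expected output):
df['customed_rank'] = (df['quantile'] - 0.44).abs().rank().astype(int)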

python - Iterating over multi-index pandas dataframe

I'm trying to iterate over a huge pandas dataframe (over 370,000 rows) based on the index.
For each row the code should look back at the last 12 entries for this index (if available) and sum them up by (running) quarters / semesters / year.
If there is no information, or not enough information (only the last 3 months), then the code should treat the other months / quarters as 0.
Here is a sample of my dataframe:
This is the expected output:
So looking at DateID "1" we don't have any other information for this row. DateID "1" is the last month in this case (month 12, so to speak) and therefore falls in Q4 and H2. All previous months don't exist and are therefore not considered.
I already found a working solution, but it is very inefficient and takes an unacceptable amount of time.
Here is my code sample:
for company_name, c in df.groupby('Account Name'):
    for i, row in c.iterrows():
        i += 1
        if i < 4:
            q4 = c.iloc[:i]['Value$'].sum()
            q3 = 0
            q2 = 0
            q1 = 0
            h2 = q4 + q3
            h1 = q2 + q1
            year = q4 + q3 + q2 + q1
        elif 3 < i < 7:
            q4 = c.iloc[i-3:i]['Value$'].sum()
            q3 = c.iloc[:i-3]['Value$'].sum()
            q2 = 0
            q1 = 0
            h2 = q4 + q3
            h1 = q2 + q1
            year = q4 + q3 + q2 + q1
        elif 6 < i < 10:
            q4 = c.iloc[i-3:i]['Value$'].sum()
            q3 = c.iloc[i-6:i-3]['Value$'].sum()
            q2 = c.iloc[:i-6]['Value$'].sum()
            q1 = 0
            h2 = q4 + q3
            h1 = q2 + q1
            year = q4 + q3 + q2 + q1
        elif 9 < i < 13:
            q4 = c.iloc[i-3:i]['Value$'].sum()
            q3 = c.iloc[i-6:i-3]['Value$'].sum()
            q2 = c.iloc[i-9:i-6]['Value$'].sum()
            q1 = c.iloc[:i-9]['Value$'].sum()
            h2 = q4 + q3
            h1 = q2 + q1
            year = q4 + q3 + q2 + q1
        else:
            q4 = c.iloc[i-3:i]['Value$'].sum()
            q3 = c.iloc[i-6:i-3]['Value$'].sum()
            q2 = c.iloc[i-9:i-6]['Value$'].sum()
            q1 = c.iloc[i-12:i-9]['Value$'].sum()
            h2 = q4 + q3
            h1 = q2 + q1
            year = q4 + q3 + q2 + q1
        new_df = new_df.append({'Account Name': row['Account Name'], 'DateID': row['DateID'], 'Q4': q4, 'Q3': q3, 'Q2': q2, 'Q1': q1, 'H1': h1, 'H2': h2, 'Year': year}, ignore_index=True)
As I said, I'm looking for a more efficient way to calculate these numbers, as I have almost 10,000 Account Names and 30 DateIDs per Account.
Thanks a lot!
If I got you right, this should calculate your figures:
grouped = df.groupby('Account Name')['Value$']
last_3 = grouped.apply(lambda ser: ser.rolling(window=3, min_periods=1).sum())
last_6 = grouped.apply(lambda ser: ser.rolling(window=6, min_periods=1).sum())
last_9 = grouped.apply(lambda ser: ser.rolling(window=9, min_periods=1).sum())
last_12 = grouped.apply(lambda ser: ser.rolling(window=12, min_periods=1).sum())
# each quarter is the difference of two rolling sums,
# e.g. Q3 (months 4-6 back) = sum over last 6 months - sum over last 3 months
df['Q4'] = last_3
df['Q3'] = last_6 - last_3
df['Q2'] = last_9 - last_6
df['Q1'] = last_12 - last_9
df['H1'] = df['Q1'] + df['Q2']
df['H2'] = df['Q3'] + df['Q4']
This outputs:
Out[19]:
Account Name DateID Value$ Q4 Q3 Q2 Q1 H1 H2
0 A 0 33 33.0 0.0 0.0 0.0 0.0 33.0
1 A 1 20 53.0 0.0 0.0 0.0 0.0 53.0
2 A 2 24 77.0 0.0 0.0 0.0 0.0 77.0
3 A 3 21 65.0 33.0 0.0 0.0 0.0 98.0
4 A 4 22 67.0 53.0 0.0 0.0 0.0 120.0
5 A 5 31 74.0 77.0 0.0 0.0 0.0 151.0
6 A 6 30 83.0 65.0 33.0 0.0 33.0 148.0
7 A 7 23 84.0 67.0 53.0 0.0 53.0 151.0
8 A 8 11 64.0 74.0 77.0 0.0 77.0 138.0
9 A 9 35 69.0 83.0 65.0 33.0 98.0 152.0
10 A 10 32 78.0 84.0 67.0 53.0 120.0 162.0
11 A 11 31 98.0 64.0 74.0 77.0 151.0 162.0
12 A 12 32 95.0 69.0 83.0 65.0 148.0 164.0
13 A 13 20 83.0 78.0 84.0 67.0 151.0 161.0
14 A 14 15 67.0 98.0 64.0 74.0 138.0 165.0
15 B 0 44 44.0 0.0 0.0 0.0 0.0 44.0
16 B 1 43 87.0 0.0 0.0 0.0 0.0 87.0
17 B 2 31 118.0 0.0 0.0 0.0 0.0 118.0
18 B 3 10 84.0 44.0 0.0 0.0 0.0 128.0
19 B 4 13 54.0 87.0 0.0 0.0 0.0 141.0
20 B 5 20 43.0 118.0 0.0 0.0 0.0 161.0
21 B 6 28 61.0 84.0 44.0 0.0 44.0 145.0
22 B 7 14 62.0 54.0 87.0 0.0 87.0 116.0
23 B 8 20 62.0 43.0 118.0 0.0 118.0 105.0
24 B 9 41 75.0 61.0 84.0 44.0 128.0 136.0
25 B 10 39 100.0 62.0 54.0 87.0 141.0 162.0
26 B 11 46 126.0 62.0 43.0 118.0 161.0 188.0
27 B 12 26 111.0 75.0 61.0 84.0 145.0 186.0
28 B 13 24 96.0 100.0 62.0 54.0 116.0 196.0
29 B 14 34 84.0 126.0 62.0 43.0 105.0 210.0
32 C 2 12 12.0 0.0 0.0 0.0 0.0 12.0
33 C 3 15 27.0 0.0 0.0 0.0 0.0 27.0
34 C 4 45 72.0 0.0 0.0 0.0 0.0 72.0
35 C 5 22 82.0 12.0 0.0 0.0 0.0 94.0
36 C 6 48 115.0 27.0 0.0 0.0 0.0 142.0
37 C 7 45 115.0 72.0 0.0 0.0 0.0 187.0
38 C 8 11 104.0 82.0 12.0 0.0 12.0 186.0
39 C 9 27 83.0 115.0 27.0 0.0 27.0 198.0
For the following test data:
data= {'Account Name': ['A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'C', 'C', 'C', 'C', 'C', 'C', 'C', 'C'],
'DateID': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 2, 3, 4, 5, 6, 7, 8, 9],
'Value$': [33, 20, 24, 21, 22, 31, 30, 23, 11, 35, 32, 31, 32, 20, 15, 44, 43, 31, 10, 13, 20, 28, 14, 20, 41, 39, 46, 26, 24, 34, 12, 15, 45, 22, 48, 45, 11, 27]
}
df= pd.DataFrame(data)
Edit: If you want to count the unique entries over the same period, you can do that as follows:
def get_nunique(np_array):
    unique, counts = np.unique(np_array, return_counts=True)
    return len(unique)

df['Category'].rolling(window=3, min_periods=1).apply(get_nunique)
I didn't want to overload the answer above completely, so I'm adding a new one for your second part:
# define a function that creates the unique counts
# by aggregating period_lenght times
# (3 times for the quarter mapping, 6 times for the half year);
# it's basically doing something like a sliding window aggregation
def get_mapping(df, period_lenght=3):
    df_mapping = None
    for offset in range(period_lenght):
        quarter = (df['DateID'] + offset) // period_lenght
        aggregated = df.groupby([quarter, df['Account Name']]).agg({'DateID': max, 'Category': lambda ser: len(set(ser))})
        incomplete_data = ((aggregated['DateID'] + offset + 1) // period_lenght <= aggregated.index.get_level_values(0)) & (aggregated.index.get_level_values(0) >= period_lenght)
        aggregated.drop(aggregated.index[incomplete_data].to_list(), inplace=True)
        aggregated.set_index('DateID', append=True, inplace=True)
        aggregated = aggregated.droplevel(0, axis='index')
        if df_mapping is None:
            df_mapping = aggregated
        else:
            df_mapping = pd.concat([df_mapping, aggregated], axis='index')
    return df_mapping
# apply it for 3 months and merge it to the source df
df_mapping= get_mapping(df, period_lenght=3)
df_mapping.columns= ['unique_3_months']
df_with_3_months= df.merge(df_mapping, left_on=['Account Name', 'DateID'], how='left', right_index=True)
# do the same for 6 months and merge it again
df_mapping= get_mapping(df, period_lenght=6)
df_mapping.columns= ['unique_6_months']
df_with_6_months= df_with_3_months.merge(df_mapping, left_on=['Account Name', 'DateID'], how='left', right_index=True)
This results in:
Out[305]:
Account Name DateID Value$ Category unique_3_months unique_6_months
0 A 0 10 1 1 1
1 A 1 12 2 2 2
2 A 1 38 1 2 2
3 A 2 20 3 3 3
4 A 3 25 3 3 3
5 A 4 24 4 2 4
6 A 5 27 8 3 5
7 A 6 30 5 3 6
8 A 7 47 7 3 5
9 A 8 30 4 3 5
10 A 9 17 7 2 4
11 A 10 20 8 3 4
12 A 11 33 8 2 4
13 A 12 45 9 2 4
14 A 13 19 2 3 5
15 A 14 24 10 3 3
15 A 14 24 10 3 4
15 A 14 24 10 3 4
15 A 14 24 10 3 5
15 A 14 24 10 3 1
15 A 14 24 10 3 2
16 B 0 41 2 1 1
17 B 1 13 9 2 2
18 B 2 17 6 3 3
19 B 3 45 7 3 4
20 B 4 11 6 2 4
21 B 5 38 8 3 5
22 B 6 44 8 2 4
23 B 7 15 8 1 3
24 B 8 50 2 2 4
25 B 9 27 7 3 4
26 B 10 38 10 3 4
27 B 11 25 6 3 5
28 B 12 25 8 3 5
29 B 13 14 7 3 5
30 B 14 25 9 3 3
30 B 14 25 9 3 4
30 B 14 25 9 3 5
30 B 14 25 9 3 5
30 B 14 25 9 3 1
30 B 14 25 9 3 2
31 C 2 31 9 1 1
32 C 3 31 7 2 2
33 C 4 26 5 3 3
34 C 5 11 2 3 4
35 C 6 15 8 3 5
36 C 7 22 2 2 5
37 C 8 33 2 2 4
38 C 9 16 5 2 3
38 C 9 16 5 2 3
38 C 9 16 5 2 3
38 C 9 16 5 2 1
38 C 9 16 5 2 2
38 C 9 16 5 2 2
The output is based on the following input data:
data= {
'Account Name': ['A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'C', 'C', 'C', 'C', 'C', 'C', 'C', 'C'],
'DateID': [0, 1, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 2, 3, 4, 5, 6, 7, 8, 9],
'Value$': [10, 12, 38, 20, 25, 24, 27, 30, 47, 30, 17, 20, 33, 45, 19, 24, 41, 13, 17, 45, 11, 38, 44, 15, 50, 27, 38, 25, 25, 14, 25, 31, 31, 26, 11, 15, 22, 33, 16],
'Category': [1, 2, 1, 3, 3, 4, 8, 5, 7, 4, 7, 8, 8, 9, 2, 10, 2, 9, 6, 7, 6, 8, 8, 8, 2, 7, 10, 6, 8, 7, 9, 9, 7, 5, 2, 8, 2, 2, 5]
}
df= pd.DataFrame(data)

groupby, count and average in numpy, pandas in python

I have a dataframe that looks like this:
userId movieId rating
0 1 31 2.5
1 1 1029 3.0
2 1 3671 3.0
3 2 10 4.0
4 2 17 5.0
5 3 60 3.0
6 3 110 4.0
7 3 247 3.5
8 4 10 4.0
9 4 112 5.0
10 5 3 4.0
11 5 39 4.0
12 5 104 4.0
I need to get a dataframe which has unique userId, number of ratings by the user and the average rating by the user as shown below:
userId count mean
0 1 3 2.83
1 2 2 4.5
2 3 3 3.5
3 4 2 4.5
4 5 3 4.0
Can someone help?
df1 = df.groupby('userId')['rating'].agg(['count','mean']).reset_index()
print(df1)
userId count mean
0 1 3 2.833333
1 2 2 4.500000
2 3 3 3.500000
3 4 2 4.500000
4 5 3 4.000000
Drop movieId since we're not using it, groupby userId, and then apply the aggregation methods:
import pandas as pd
df = pd.DataFrame({'userId': [1,1,1,2,2,3,3,3,4,4,5,5,5],
'movieId':[31,1029,3671,10,17,60,110,247,10,112,3,39,104],
'rating':[2.5,3.0,3.0,4.0,5.0,3.0,4.0,3.5,4.0,5.0,4.0,4.0,4.0]})
df = df.drop('movieId', axis=1).groupby('userId').agg(['count','mean'])
print(df)
Which produces:
rating
count mean
userId
1 3 2.833333
2 2 4.500000
3 3 3.500000
4 2 4.500000
5 3 4.000000
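If you want the flat userId / count / mean layout from the question instead of the hierarchical columns, one way (a sketch) is to drop the top column level and reset the index:
df.columns = df.columns.droplevel(0)   # drop the 'rating' level
df = df.reset_index()                  # userId becomes a regular column again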
Here's a NumPy-based approach using the fact that the userId column appears to be sorted:
# factorize userId (integer labels per user) and count occurrences per user
unq, tags, count = np.unique(df.userId.values, return_inverse=1, return_counts=1)
# per-user sum of ratings (bincount with weights) divided by count gives the mean
mean_vals = np.bincount(tags, df.rating.values) / count
df_out = pd.DataFrame(np.c_[unq, count], columns=(('userID', 'count')))
df_out['mean'] = mean_vals
Sample run -
In [103]: df
Out[103]:
userId movieId rating
0 1 31 2.5
1 1 1029 3.0
2 1 3671 3.0
3 2 10 4.0
4 2 17 5.0
5 3 60 3.0
6 3 110 4.0
7 3 247 3.5
8 4 10 4.0
9 4 112 5.0
10 5 3 4.0
11 5 39 4.0
12 5 104 4.0
In [104]: df_out
Out[104]:
userID count mean
0 1 3 2.833333
1 2 2 4.500000
2 3 3 3.500000
3 4 2 4.500000
4 5 3 4.000000

How to perform conditional dataframe operations?

Given df
A = pd.DataFrame([[1, 5, 2, 1, 2], [2, 4, 4, 1, 2], [3, 3, 1, 1, 2], [4, 2, 2, 3, 0],
[5, 1, 4, 3, -4], [1, 5, 2, 3, -20], [2, 4, 4, 2, 0], [3, 3, 1, 2, -1],
[4, 2, 2, 2, 0], [5, 1, 4, 2, -2]],
columns=['a', 'b', 'c', 'd', 'e'],
index=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
How can I create a column 'f' that holds the last value of column 'e' before a change in value in column 'd', and keeps that value until the next change in column 'd'? The output would be:
a b c d e f
1 1 5 2 1 2 nan
2 2 4 4 1 2 nan
3 3 3 1 1 2 nan
4 4 2 2 3 0 2
5 5 1 4 3 -4 2
6 1 5 2 3 -20 2
7 2 4 4 2 0 -20
8 3 3 1 2 -1 -20
9 4 2 2 2 0 -20
10 5 1 4 2 -2 -20
Edit: @Noobie presented a solution that, when applied to real data, breaks down when column 'd' contains a value smaller than the previous one.
I think we should offer better native support for dealing with contiguous groups, but until then you can use the compare-cumsum-groupby pattern:
g = (A["d"] != A["d"].shift()).cumsum()
A["f"] = A["e"].groupby(g).last().shift().loc[g].values
which gives me
In [41]: A
Out[41]:
a b c d e f
1 1 5 2 1 2 NaN
2 2 4 4 1 2 NaN
3 3 3 1 1 2 NaN
4 4 2 2 2 0 2.0
5 5 1 4 2 -4 2.0
6 1 5 2 2 -20 2.0
7 2 4 4 3 0 -20.0
8 3 3 1 3 -1 -20.0
9 4 2 2 3 0 -20.0
10 5 1 4 3 -2 -20.0
This works because g is a count corresponding to each contiguous group of d values. Note that in this case, using the example you posted, g is the same as column "d", but that needn't be the case. Once we have g, we can use it to group column e:
In [55]: A["e"].groupby(g).last()
Out[55]:
d
1 2
2 -20
3 -2
Name: e, dtype: int64
and then
In [57]: A["e"].groupby(g).last().shift()
Out[57]:
d
1 NaN
2 2.0
3 -20.0
Name: e, dtype: float64
In [58]: A["e"].groupby(g).last().shift().loc[g]
Out[58]:
d
1 NaN
1 NaN
1 NaN
2 2.0
2 2.0
2 2.0
3 -20.0
3 -20.0
3 -20.0
3 -20.0
Name: e, dtype: float64
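An equivalent way to write the final assignment (my own sketch, not part of the original answer) is to map the shifted per-group values back through g:
A["f"] = g.map(A["e"].groupby(g).last().shift())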
Easy, my friend. Unleash the POWER OF PANDAS!
A.sort_values(by = 'd', inplace = True)
A['lag'] = A.e.shift(1)
A['output'] = A.groupby('d').lag.transform(lambda x : x.iloc[0])
A
Out[57]:
a b c d e lag output
1 1 5 2 1 2 NaN NaN
2 2 4 4 1 2 2.0 NaN
3 3 3 1 1 2 2.0 NaN
4 4 2 2 2 0 2.0 2.0
5 5 1 4 2 -4 0.0 2.0
6 1 5 2 2 -20 -4.0 2.0
7 2 4 4 3 0 -20.0 -20.0
8 3 3 1 3 -1 0.0 -20.0
9 4 2 2 3 0 -1.0 -20.0
10 5 1 4 3 -2 0.0 -20.0

Overwrite Value in Dataframe with checking Line before

So the DataFrame is:
1 28.3
2 27.9
3 22.4
4 18.1
5 15.5
6 7.1
7 5.1
8 12.0
9 15.1
10 10.1
Now I want to replace all values over 25 with HSE and all values below 8 with LSE. Everything else is "Middle". But I want to know whether it was over 25 or below 8 before it became "Middle": if it was over 25 before, I would replace the value with "fHtM", and if it was below 8 before, I would replace it with "fLtM".
Thank you in advance.
Desired output:
Maybe like this:
1 S4
2 S4
3 S4
4 dS3 (down to class S3)
5 dS3
6 dS2
7 dS1
8 uS2 (up to class S2)
9 uS3
10 dS2
You can use cut:
bins = [-np.inf, 6, 13, 19, np.inf]
labels=['S1','S2','S3','S4']
df['label'] = pd.cut(df['value'], bins=bins, labels=labels)
print (df)
a value label
0 1 28.3 S4
1 2 27.9 S4
2 3 22.4 S4
3 4 18.1 S3
4 5 15.5 S3
5 6 7.1 S2
6 7 5.1 S1
7 8 12.0 S2
8 9 15.1 S3
9 10 10.1 S2
And if you need to add the trend, use diff:
Explanation:
First take the second character of the label column with str[1], convert it to an integer and compute the diff. Consecutive duplicates give 0, so replace those with NaN and forward fill with ffill().
dif = (df.label.str[1].astype(int).diff().replace(0,np.nan).ffill())
print (dif)
0 NaN
1 NaN
2 NaN
3 -1.0
4 -1.0
5 -1.0
6 -1.0
7 1.0
8 1.0
9 -1.0
Name: label, dtype: float64
Then use numpy.where to create 'u' where the value is 1, 'd' where it is -1, and an empty string otherwise, which is prepended to the label column.
df['label1'] = dif.where(dif.isnull(), np.where(dif == 1.0, 'u','d')).fillna('') + df.label.astype(str)
print (df)
a value label
0 1 28.3 S4
1 2 27.9 S4
2 3 22.4 S4
3 4 18.1 dS3
4 5 15.5 dS3
5 6 7.1 dS2
6 7 5.1 dS1
7 8 12.0 uS2
8 9 15.1 uS3
9 10 10.1 dS2
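For the HSE / LSE / Middle wording in the original question (with fHtM / fLtM depending on the previous extreme), a rough sketch along the same forward-fill idea might look like this (the column name value is assumed, as in the output above):
import numpy as np
import pandas as pd

df = pd.DataFrame({'value': [28.3, 27.9, 22.4, 18.1, 15.5, 7.1, 5.1, 12.0, 15.1, 10.1]})

state = pd.Series('Middle', index=df.index, dtype=object)
state[df['value'] > 25] = 'HSE'
state[df['value'] < 8] = 'LSE'

# remember the last extreme state seen so far (NaN before any extreme occurred)
last_extreme = state.where(state != 'Middle').ffill()

# Middle rows get fHtM if the last extreme was high, fLtM if it was low
df['label'] = np.where(state != 'Middle', state,
                       last_extreme.map({'HSE': 'fHtM', 'LSE': 'fLtM'}))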
