Having a dataset like this:
y x size type total_neighbours res
113040 29 1204 15 3 2 0
66281 52 402 9 3 3 0
32296 21 1377 35 0 3 0
48367 3 379 139 0 4 0
33501 1 66 17 0 3 0
... ... ... ... ... ... ...
131230 39 1002 439 3 4 6
131237 40 1301 70 1 2 1
131673 26 1124 365 1 2 1
131678 27 1002 629 3 3 6
131684 28 1301 67 1 2 1
I would like to use the random forest algorithm to predict the value of the res column (res can only take integer values in the range [0, 6]).
I'm doing it like this:
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

labels = np.array(features['res'])
features = features.drop('res', axis=1)
features = np.array(features)

train_features, test_features, train_labels, test_labels = train_test_split(
    features, labels, test_size=0.25, random_state=42)

rf = RandomForestRegressor(n_estimators=1000, random_state=42)
rf.fit(train_features, train_labels)
predictions = rf.predict(test_features)
The predictions I get are the following:
array([1.045e+00, 4.824e+00, 4.608e+00, 1.200e-01, 5.982e+00, 3.660e-01,
4.659e+00, 5.239e+00, 5.982e+00, 1.524e+00])
I have no experience in this field, so I don't quite understand the predictions.
How do I interpret them?
Is there any way to limit the predictions to the res column's values (integers between 0 and 6)?
Thanks
As @MaxNoe said, I had a misconception about the model: I was using a regression model to predict a discrete variable.
RandomForestClassifier is giving the expected output.
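For reference, a minimal sketch of the classifier version, reusing the train/test split from the question (hyperparameters kept the same; the predicted values are now integer class labels in [0, 6]):
from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier(n_estimators=1000, random_state=42)
clf.fit(train_features, train_labels)
predictions = clf.predict(test_features)  # integers in [0, 6], one class per sample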
I want to apply two transformation techniques to a data frame: mean centering and standardization. How can I perform mean centering on my dataframe?
I have already performed standardization using StandardScaler() from sklearn.preprocessing.
from sklearn.preprocessing import StandardScaler

standard.iloc[:, 1:-1] = StandardScaler().fit_transform(standard.iloc[:, 1:-1])
I am expecting a transformed data frame that is mean-centered.
import pandas as pd

dataxx = {'Name': ['Tom', 'gik', 'Tom', 'Tom', 'Terry', 'Jerry', 'Abel', 'Dula', 'Abel'],
          'Age': [20, 21, 19, 18, 88, 89, 95, 96, 97],
          'gg': [1, 1, 1, 30, 30, 30, 40, 40, 40]}
dfxx = pd.DataFrame(dataxx)

# subtract the column mean from each value to mean-center it
dfxx["meancentered"] = dfxx.Age - dfxx.Age.mean()
Index  Name   Age  gg  meancentered
0      Tom    20   1   -40.333333
1      gik    21   1   -39.333333
2      Tom    19   1   -41.333333
3      Tom    18   30  -42.333333
4      Terry  88   30   27.666667
5      Jerry  89   30   28.666667
6      Abel   95   40   34.666667
7      Dula   96   40   35.666667
8      Abel   97   40   36.666667
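If you want to mean-center several columns at once, one option is StandardScaler itself with scaling disabled, which mirrors the standardization call from the question. A sketch on the toy frame above (with_std=False makes the scaler subtract the mean without dividing by the standard deviation):
from sklearn.preprocessing import StandardScaler

# with_std=False: subtract the column means but do not divide by the std,
# i.e. pure mean centering
dfxx[['Age', 'gg']] = StandardScaler(with_std=False).fit_transform(dfxx[['Age', 'gg']])

# equivalent plain-pandas version:
# dfxx[['Age', 'gg']] = dfxx[['Age', 'gg']] - dfxx[['Age', 'gg']].mean()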
I have the following dataframe:
A B C Result
0 232 120 9 91
1 243 546 1 12
2 12 120 5 53
I want to perform operations of the following kind:
A B C Result A-B/A+B A-C/A+C B-C/B+C
0 232 120 9 91 0.318182 0.925311 0.860465
1 243 546 1 12 -0.384030 0.991803 0.996344
2 12 120 5 53 -0.818182 0.411765 0.920000
which I am doing using
df['A-B/A+B']=(df['A']-df['B'])/(df['A']+df['B'])
df['A-C/A+C']=(df['A']-df['C'])/(df['A']+df['C'])
df['B-C/B+C']=(df['B']-df['C'])/(df['B']+df['C'])
which I believe is a very crude and ugly way to do it.
How can I do it in a more correct way?
You can do the following:
# take all columns except the last in a list
colnames = df.columns.tolist()[:-1]

# compute the pairwise ratios; each unordered pair is visited once
for i, c in enumerate(colnames):
    for k in range(i + 1, len(colnames)):
        df[c + '_' + colnames[k]] = (df[c] - df[colnames[k]]) / (df[c] + df[colnames[k]])
# check result
print(df)
A B C Result A_B A_C B_C
0 232 120 9 91 0.318182 0.925311 0.860465
1 243 546 1 12 -0.384030 0.991803 0.996344
2 12 120 5 53 -0.818182 0.411765 0.920000
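As a side note, the same pairing can be written a bit more compactly with itertools.combinations, which yields each unordered column pair exactly once (a sketch, equivalent to the loop above):
from itertools import combinations

for a, b in combinations(df.columns[:-1], 2):
    df[a + '_' + b] = (df[a] - df[b]) / (df[a] + df[b])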
This is a perfect case to use DataFrame.eval:
cols = ['A-B/A+B','A-C/A+C','B-C/B+C']
x = pd.DataFrame([df.eval(col).values for col in cols], columns=cols)
df.assign(**x)
A B C Result A-B/A+B A-C/A+C B-C/B+C
0 232 120 9 91 351.482759 786.753086 122.000000
1 243 546 1 12 240.961207 243.995885 16.583333
2 12 120 5 53 128.925000 546.998168 124.958333
The advantage of this method with respect to the other solution is that it does not depend on the operation signs that appear in the column names; rather, as mentioned in the documentation, it is used to:
Evaluate a string describing operations on DataFrame columns.
I have the following df:
group_id code amount date
1 100 20 2017-10-01
1 100 25 2017-10-02
1 100 40 2017-10-03
1 100 25 2017-10-03
2 101 5 2017-11-01
2 102 15 2017-10-15
2 103 20 2017-11-05
I'd like to group by group_id and then compute a score for each group based on the following features:
if the code values are all the same in a group, score 0, and 10 otherwise;
if the amount sum is > 100, score 20, and 0 otherwise;
sort by date in descending order and sum the differences between the dates; if the sum is < 5, score 30, otherwise 0.
so that the resulting df looks like:
group_id code amount date score
1 100 20 2017-10-01 50
1 100 25 2017-10-02 50
1 100 40 2017-10-03 50
1 100 25 2017-10-03 50
2 101 5 2017-11-01 10
2 102 15 2017-10-15 10
2 103 20 2017-11-05 10
Here are the functions that correspond to each feature above:
import numpy as np

def amount_score(df, amount_col, thold=100):
    if df[amount_col].sum() > thold:
        return 20
    else:
        return 0

def col_uniq_score(df, col_name):
    if df[col_name].nunique() == 1:
        return 0
    else:
        return 10

def date_diff_score(df, col_name):
    df.sort_values(by=[col_name], ascending=False, inplace=True)
    if df[col_name].diff().dropna().sum() / np.timedelta64(1, 'D') < 5:
        return 30
    else:
        return 0
I am wondering how to apply these functions to each group and calculate the sum of all the functions to give a score.
You can use groupby.transform, which returns a Series the same length as the original DataFrame, combined with numpy.where for the if-else logic on Series:
grouped = df.sort_values('date', ascending=False).groupby('group_id', sort=False)

a = np.where(grouped['code'].transform('nunique') == 1, 0, 10)
print(a)
[10 10 10 0 0 0 0]

b = np.where(grouped['amount'].transform('sum') > 100, 20, 0)
print(b)
[ 0 0 0 20 20 20 20]

c = np.where(grouped['date'].transform(lambda x: x.diff().dropna().sum()).dt.days < 5, 30, 0)
print(c)
[30 30 30 30 30 30 30]

df['score'] = a + b + c
print(df)
group_id code amount date score
0 1 100 20 2017-10-01 40
1 1 100 25 2017-10-02 40
2 1 100 40 2017-10-03 40
3 1 100 25 2017-10-03 50
4 2 101 5 2017-11-01 50
5 2 102 15 2017-10-15 50
6 2 103 20 2017-11-05 50
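Alternatively, if you would rather reuse the three functions from the question, a sketch of an equivalent approach (assuming pandas and the functions defined above) is to compute one total per group with groupby.apply and map it back onto the rows:
def total_score(g):
    # sum the three per-group rule scores
    return (col_uniq_score(g, 'code')
            + amount_score(g, 'amount')
            + date_diff_score(g, 'date'))

scores = df.groupby('group_id').apply(total_score)
df['score'] = df['group_id'].map(scores)  # broadcast each group's total to its rows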
My input data looks like this:
> x <- rnorm(10*9, sd = 10) %>% matrix(10) %>% round
> colnames(x) <- c(paste0(2014, c("a","b", "c")), paste0(2015, c("a","b", "c")), paste0(2016, c("a","b", "c")))
> x
2014a 2014b 2014c 2015a 2015b 2015c 2016a 2016b 2016c
[1,] 1 -11 3 3 6 5 17 5 15
[2,] 9 8 0 -1 10 8 -3 -11 6
[3,] -6 22 -3 1 -1 -4 -3 11 -9
[4,] 10 -15 0 -2 4 14 11 -11 3
[5,] 5 4 5 5 15 -9 2 5 1
[6,] -24 16 9 -7 2 -12 1 18 -2
[7,] 1 13 5 -14 1 -10 15 -1 14
[8,] -8 4 4 -15 -1 -20 -6 14 5
[9,] 10 19 -15 15 -4 3 -1 -11 8
[10,] 10 -11 -9 -1 16 3 24 -8 4
My outcome variable is continuous (i.e.: this is a regression problem).
I want to fit a model with an architecture that looks like this:
Basically, I've got granular data from separate years that aggregate to form a set of annual phenomena, which may themselves interact. If I had enough data, I could just fit a bunch of fully-connected layers. But those would be inefficient with my modest sample size.
This isn't exactly a conv net, because I don't want the "tiles" to overlap.
I also want to apply both dropout and a global L2 penalty.
I'm new to Keras, but not to neural nets. How can I implement this, and how is it referred to in Keras terminology?
You can use the functional API to have multiple inputs and create that computation graph. Something along the lines of:
from keras.layers import Input, Dense, concatenate

inputs = [Input(shape=(3,)) for _ in range(3)]
latents = []
for i in range(3):
    latent = Dense(3, activation='relu')(inputs[i])
    latent = Dense(3, activation='relu')(latent)
    latents.append(latent)

merged = concatenate(latents)
out = Dense(4, activation='relu')(merged)
out = Dense(4, activation='relu')(out)
out = Dense(1)(out)
Your architecture diagram assumes a fixed number of year inputs, in this case 3 years. If you have a variable number of years, you have to use shared Dense layers and the TimeDistributed wrapper to apply the Dense layers to every year before merging:
inp = Input(shape=(3, 3))  # this time we have a 2D array: 3 years x 3 features
latent = TimeDistributed(Dense(3, activation='relu'))(inp)  # apply the same Dense to every year
latent = TimeDistributed(Dense(3, activation='relu'))(latent)
merged = Flatten()(latent)
out = ...
This time the Dense layers are shared across years; they essentially have the same weights.
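The question also asks about dropout and a global L2 penalty, which the snippets above don't show. Keras has no single global L2 switch; the usual approach is to attach the same regularizer to each layer via kernel_regularizer and insert Dropout layers where needed. A minimal sketch (the 0.5 rate and 1e-4 strength are just illustrative placeholders):
from keras import regularizers
from keras.layers import Dense, Dropout

l2 = regularizers.l2(1e-4)  # illustrative penalty strength, applied per layer

# the same two-layer block as above, now with L2 on the weights and dropout between
latent = Dense(3, activation='relu', kernel_regularizer=l2)(inputs[0])
latent = Dropout(0.5)(latent)  # illustrative dropout rate
latent = Dense(3, activation='relu', kernel_regularizer=l2)(latent)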
My data:
y n Rh y2
1 1 1.166666667 1
-1 2 0.5 1
-1 3 0.333333333 1
-1 4 0.166666667 1
1 5 1.666666667 2
1 6 1.333333333 1
-1 7 0.333333333 1
-1 8 0.333333333 1
1 9 0.833333333 1
1 10 2.333333333 2
1 11 1 1
-1 12 0.166666667 1
1 13 0.666666667 1
1 14 0.833333333 1
1 15 0.833333333 1
-1 16 0.333333333 1
-1 17 0.166666667 1
1 18 2 2
1 19 0.833333333 1
1 20 1.333333333 1
1 21 1.333333333 1
-1 22 0.166666667 1
-1 23 0.166666667 1
-1 24 0.333333333 1
-1 25 0.166666667 1
-1 26 0.166666667 1
-1 27 0.333333333 1
-1 28 0.166666667 1
-1 29 0.166666667 1
-1 30 0.5 1
1 31 0.833333333 1
-1 32 0.166666667 1
-1 33 0.333333333 1
-1 34 0.166666667 1
-1 35 0.166666667 1
My code:
data = xlsread('btpdata.xlsx', 1);
A = data(1:end, 2:3);
B = data(1:end, 1);
svmStruct = svmtrain(A, B, 'showplot', true);
hold on
C = data(1:end, 2:3);
D = data(1:end, 4);
svmStruct = svmtrain(C, D, 'showplot', true);
hold off
How can I get the approximate equations of these black lines in the given MATLAB plot?
It depends on which package you used, but as it is a linear Support Vector Machine there are more or less two options:
Your trained svm contains the equation of the line in a property coefs (sometimes called w or weights) and b (or intercept), so your line is <coefs, X> + b = 0
Your svm contains alphas (dual coefficients, Lagrange multipliers), and then coefs = SUM_i alphas_i * y_i * SV_i, where SV_i is the i'th support vector (the ones in circles on your plot) and y_i is its label (-1 or +1). Sometimes the alphas are already multiplied by y_i, in which case coefs = SUM_i alphas_i * SV_i.
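For a 2D problem like this one, turning coefs and b into a plottable slope-intercept form is simple arithmetic; a sketch in plain numpy (generic SVM geometry, not tied to any package; the example values are the w = [0, 1], b = -0.6 read off the plot in the next paragraph):
import numpy as np

# illustrative values; take w (coefs) and b (intercept) from your trained SVM
w = np.array([0.0, 1.0])
b = -0.6

# decision boundary: w[0]*x + w[1]*y + b = 0  ->  y = m*x + c
m = -w[0] / w[1]
c = -b / w[1]
print("y = %.2f * x + %.2f" % (m, c))  # here: y = 0.00 * x + 0.60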
If you are trying to get the equation from the actual plot (image), then you can only read it off (and it is more or less y = 0.6, meaning that coefs = [0 1] and b = -0.6). An image-analysis-based approach (for an arbitrary plot of this kind) would require:
detecting the plot area in the image (object detection)
reading the ticks/scale (OCR + object detection) <- this would actually be the hardest part
filtering out everything non-black and performing linear regression on the points that are left, then transforming them through the scale detected earlier.
I have had the same problem. To build the linear equation (y = mx + b) of the decision boundary you need the gradient (m) and the y-intercept (b). SVMStruct.Bias is the b term. The gradient is determined by the SVM beta weights, which SVMStruct does not contain, so you need to calculate them from the alphas (which are included in SVMStruct):
alphas = SVMStruct.Alpha;          % dual coefficients (one per support vector)
SV = SVMStruct.SupportVectors;     % the support vectors themselves
betas = sum(alphas.*SV);           % beta weights: sum over alpha_i * SV_i
m = betas(1)/betas(2)              % gradient of the decision boundary
By the way, if your SVM has scaled the data, then I think you will need to unscale it.