I have trained and stored a random forest binary classification model. Now I'm trying to simulate processing new (out-of-sample) data with this model. My Python (Anaconda 3.6) code is:
import h2o
import pandas as pd
import sys
localH2O = h2o.init(ip = "localhost", port = 54321, max_mem_size = "8G", nthreads = -1)
h2o.remove_all()
model_path = "C:/sm/BottleRockets/rf_model/DRF_model_python_1501621766843_28117";
model = h2o.load_model(model_path)
new_data = h2o.import_file(path="C:/sm/BottleRockets/new_data.csv")
print(new_data.head(10))
predict = model.predict(new_data) # predict returns a data frame
print(predict.describe())
predicted = predict[0,0]
probability = predict[0,2] # probability the prediction is a "1"
print('prediction: ', predicted, ', probability: ', probability)
When I run this code I get:
>>> import h2o
>>> import pandas as pd
>>> import sys
>>> localH2O = h2o.init(ip = "localhost", port = 54321, max_mem_size = "8G", nthreads = -1)
Checking whether there is an H2O instance running at http://localhost:54321. connected.
-------------------------- ------------------------------
H2O cluster uptime: 22 hours 22 mins
H2O cluster version: 3.10.5.4
H2O cluster version age: 18 days
H2O cluster name: H2O_from_python_Charles_0fqq0c
H2O cluster total nodes: 1
H2O cluster free memory: 6.790 Gb
H2O cluster total cores: 8
H2O cluster allowed cores: 8
H2O cluster status: locked, healthy
H2O connection url: http://localhost:54321
H2O connection proxy:
H2O internal security: False
Python version: 3.6.1 final
-------------------------- ------------------------------
>>> h2o.remove_all()
>>> model_path = "C:/sm/BottleRockets/rf_model/DRF_model_python_1501621766843_28117";
>>> model = h2o.load_model(model_path)
>>> new_data = h2o.import_file(path="C:/sm/BottleRockets/new_data.csv")
Parse progress: |█████████████████████████████████████████████████████████| 100%
>>> print(new_data.head(10))
BoxRatio Thrust Velocity OnBalRun vwapGain
---------- -------- ---------- ---------- ----------
1.502 55.044 0.38 37 0.845
[1 row x 5 columns]
>>> predict = model.predict(new_data) # predict returns a data frame
drf prediction progress: |████████████████████████████████████████████████| 100%
>>> print(predict.describe())
Rows:1
Cols:3
predict p0 p1
------- --------- ------------------ -------------------
type enum real real
mins 0.8849431818181818 0.11505681818181818
mean 0.8849431818181818 0.11505681818181818
maxs 0.8849431818181818 0.11505681818181818
sigma 0.0 0.0
zeros 0 0
missing 0 0 0
0 1 0.8849431818181818 0.11505681818181818
None
>>> predicted = predict[0,0]
>>> probability = predict[0,2] # probability the prediction is a "1"
>>> print('prediction: ', predicted, ', probability: ', probability)
prediction: 1 , probability: 0.11505681818181818
>>>
I am confused by the contents of the "predict" data frame. Please tell me what the numbers in the columns labeled "p0" and "p1" mean. I hope they are probabilities, and as you can see by my code, I am trying to get the predicted classification (0 or 1) and a probability that this classification is correct. Does my code correctly do that?
Any comments will be greatly appreciated.
Charles
p0 is the probability (between 0 and 1) that class 0 is chosen.
p1 is the probability (between 0 and 1) that class 1 is chosen.
The thing to keep in mind is that the "prediction" is made by applying a threshold to p1. That threshold point is chosen depending on whether you want to reduce false positives or false negatives. It's not just 0.5.
The threshold used for the "predict" column is the max-F1 threshold (the one that maximizes F1 on the training/validation metrics). But you can extract p1 yourself and threshold it any way you like.
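For example, here is a minimal sketch of applying your own cutoff to p1 instead of the max-F1 threshold (the 0.3 value is purely illustrative):
# assuming `model` and `new_data` from the question above
pred = model.predict(new_data)
custom_threshold = 0.3                            # pick whatever cutoff suits your use case
pred["predict"] = pred["p1"] >= custom_threshold  # 1 if p1 >= cutoff, else 0
print(pred.head())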
Darren Cook asked me to post the first few lines of my training data. Here it is:
BoxRatio Thrust Velocity OnBalRun vwapGain Altitude
0 0.000 0.000 2.186 4.534 0.361 1
1 0.000 0.000 0.561 2.642 0.909 1
2 2.824 2.824 2.199 4.748 1.422 1
3 0.442 0.452 1.702 3.695 1.186 0
4 0.084 0.088 0.612 1.699 0.700 1
The response column is labeled "Altitude". Class 1 is what I want to see from new "out-of-sample" data. "1" is good, and it means that "Altitude" was reached (true positive). "0" means that "Altitude" was not reached (true negative). In the predict table above, "1" was predicted with a probability of 0.11505681818181818. This does not make sense to me.
Charles
I do not fully understand the logic behind sklearn's train_test_split and StratifiedKFold when it comes to obtaining splits that are balanced according to multiple "columns", not only according to the target distribution. I know the previous sentence is a bit obscure, so I hope the following code helps.
import numpy as np
import pandas as pd
import random
n_samples = 100
prob = 0.2
pos = int(n_samples * prob)
neg = n_samples - pos
target = [1] * pos + [0] * neg
cat = ["a"] * 50 + ["b"] * 50
random.shuffle(target)
random.shuffle(cat)
ds = pd.DataFrame()
ds["target"] = target
ds["cat"] = cat
ds["f1"] = np.random.random(size=(n_samples,))
ds["f2"] = np.random.random(size=(n_samples,))
print(ds.head())
This is a 100-example dataset whose target distribution is governed by prob; in this case we have 20% positive examples. There is also a binary categorical column cat, which is perfectly balanced. The output of the previous code is:
target cat f1 f2
0 0 a 0.970585 0.134268
1 0 a 0.410689 0.225524
2 0 a 0.638111 0.273830
3 0 b 0.594726 0.579668
4 0 a 0.737440 0.667996
With train_test_split(), stratifying on both target and cat and then checking the frequencies:
from sklearn.model_selection import train_test_split, StratifiedKFold
# with train_test_split
training, valid = train_test_split(range(n_samples),
                                   test_size=20,
                                   stratify=ds[["target", "cat"]])
print("---")
print("* training")
print(ds.loc[training, ["target", "cat"]].value_counts() / len(training)) # balanced
print("* validation")
print(ds.loc[valid, ["target", "cat"]].value_counts() / len(valid)) # balanced
we get this:
* dataset
0 0.8
1 0.2
Name: target, dtype: float64
target cat
0 a 0.4
b 0.4
1 a 0.1
b 0.1
dtype: float64
---
* training
target cat
0 a 0.4
b 0.4
1 a 0.1
b 0.1
dtype: float64
* validation
target cat
0 a 0.4
b 0.4
1 a 0.1
b 0.1
dtype: float64
It is perfectly stratified.
Now with StratifiedKFold:
# with stratified k-fold
skf = StratifiedKFold(n_splits=5)
try:
    for train, valid in skf.split(X=range(len(ds)), y=ds[["target", "cat"]]):
        pass
except:
    print("! does not work")

for train, valid in skf.split(X=range(len(ds)), y=ds.target):
    print("happily iterating")
output:
! does not work
happily iterating
happily iterating
happily iterating
happily iterating
happily iterating
How do I obtain what I got with train_test_split with StratifiedKFold? I know there might be data distributions not allowing such stratifications in k-fold cross validation, but I cannot understand why train_test_split accepts two or more columns and the other method does not.
This doesn't seem readily possible currently.
Multilabel isn't exactly what you're looking for, but related. That's been asked here before, and was an Issue on sklearn's github (not sure why it got closed).
As a bit of a hack, you should be able to just combine your two columns into a new one of ordered pairs and stratify on that, as in the sketch below.
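A minimal sketch of that workaround, reusing the ds frame and the StratifiedKFold import from the question (the "_"-joined string is just one way to build the composite key):
# build a composite stratification key from both columns
strat_key = ds["target"].astype(str) + "_" + ds["cat"]
skf = StratifiedKFold(n_splits=5)
for train_idx, valid_idx in skf.split(X=ds[["f1", "f2"]], y=strat_key):
    # each validation fold should now be (approximately) balanced on both columns
    print(ds.loc[valid_idx, ["target", "cat"]].value_counts() / len(valid_idx))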
I am building a classification model in H2O with DRF and GBM. I want to change the prediction threshold so that if p0 < 0.2 then predict = 0, else predict = 1.
Currently, you need to do this manually. It would be easier if we had a threshold argument for the predict() method, so I created a JIRA ticket to make this a bit more straightforward.
See the Python example below for how to do this manually.
import h2o
from h2o.estimators.gbm import H2OGradientBoostingEstimator
h2o.init()
# Import a sample binary outcome train/test set into H2O
train = h2o.import_file("https://s3.amazonaws.com/erin-data/higgs/higgs_train_10k.csv")
test = h2o.import_file("https://s3.amazonaws.com/erin-data/higgs/higgs_test_5k.csv")
# Identify predictors and response
x = train.columns
y = "response"
x.remove(y)
# For binary classification, response should be a factor
train[y] = train[y].asfactor()
test[y] = test[y].asfactor()
# Train and cross-validate a GBM
my_gbm = H2OGradientBoostingEstimator(distribution="bernoulli", seed=1)
my_gbm.train(x=x, y=y, training_frame=train)
# Predict on a test set using default threshold
pred = my_gbm.predict(test_data=test)
Look at the pred frame:
In [16]: pred.tail()
Out[16]:
predict p0 p1
--------- -------- --------
1 0.484712 0.515288
0 0.693893 0.306107
1 0.319674 0.680326
0 0.582344 0.417656
1 0.471658 0.528342
1 0.079922 0.920078
1 0.150146 0.849854
0 0.835288 0.164712
0 0.639877 0.360123
1 0.54377 0.45623
[10 rows x 3 columns]
Here's how to manually create the predictions you want. More info on how to slice H2OFrames is available in the H2O User Guide.
# Binary column which is 1 if >=0.2 and 0 if <0.2
newpred = pred["p1"] >= 0.2
newpred.tail()
Look at the binary column:
In [23]: newpred.tail()
Out[23]:
p1
----
1
1
1
1
1
1
1
0
1
1
[10 rows x 1 column]
Now you have the predictions you want. You could also replace the "predict" column with the new predicted labels.
pred["predict"] = newpred
Now re-examine the pred frame:
In [24]: pred.tail()
Out[24]:
predict p0 p1
--------- -------- --------
1 0.484712 0.515288
1 0.693893 0.306107
1 0.319674 0.680326
1 0.582344 0.417656
1 0.471658 0.528342
1 0.079922 0.920078
1 0.150146 0.849854
0 0.835288 0.164712
1 0.639877 0.360123
1 0.54377 0.45623
[10 rows x 3 columns]
I am new to H2O in Python. I am trying to model my data using a stacked ensemble, following the example code from H2O's web site. (http://docs.h2o.ai/h2o/latest-stable/h2o-docs/data-science/stacked-ensembles.html)
I applied GBM and RF as base models and then combined them with stacking into an ensemble model. In addition, in my training data I created one additional column named 'fold' to be used as fold_column = "fold".
I applied 10-fold CV and observed that I receive results from the first CV fold, but the predictions from the other 9 folds are empty. What am I missing here?
Here is my sample data:
code:
import h2o
from h2o.estimators.random_forest import H2ORandomForestEstimator
from h2o.estimators.gbm import H2OGradientBoostingEstimator
from h2o.estimators.stackedensemble import H2OStackedEnsembleEstimator
from h2o.grid.grid_search import H2OGridSearch
from __future__ import print_function
h2o.init(port=23, nthreads=6)
train = h2o.H2OFrame(ens_df)
test = h2o.H2OFrame(test_ens_eq)
x = train.drop(['Date','EQUITY','fold'],axis=1).columns
y = 'EQUITY'
cat_cols = ['A','B','C','D']
train[cat_cols] = train[cat_cols].asfactor()
test[cat_cols] = test[cat_cols].asfactor()
my_gbm = H2OGradientBoostingEstimator(distribution="gaussian",
                                      ntrees=10,
                                      max_depth=3,
                                      min_rows=2,
                                      learn_rate=0.2,
                                      keep_cross_validation_predictions=True,
                                      seed=1)
my_gbm.train(x=x, y=y, training_frame=train, fold_column = "fold")
Then when I check cv results with
my_gbm.cross_validation_predictions():
Plus, when I run the ensemble on the test set, I get the warning below:
# Train a stacked ensemble using the GBM and GLM above
ensemble = H2OStackedEnsembleEstimator(model_id="mlee_ensemble",
                                       base_models=[my_gbm, my_rf])
ensemble.train(x=x, y=y, training_frame=train)
# Eval ensemble performance on the test data
perf_stack_test = ensemble.model_performance(test)
pred = ensemble.predict(test)
pred
/mgmt/data/conda/envs/python3.6_4.4/lib/python3.6/site-packages/h2o/job.py:69: UserWarning: Test/Validation dataset is missing column 'fold': substituting in a column of NaN
warnings.warn(w)
Am I missing something about fold_column?
Here is an example of how to use a custom fold column (created from a list). This is a modified version of the example Python code in the Stacked Ensemble page in the H2O User Guide.
from __future__ import print_function  # __future__ imports must come before other imports when run as a script
import h2o
from h2o.estimators.random_forest import H2ORandomForestEstimator
from h2o.estimators.gbm import H2OGradientBoostingEstimator
from h2o.estimators.stackedensemble import H2OStackedEnsembleEstimator
from h2o.grid.grid_search import H2OGridSearch
h2o.init()
# Import a sample binary outcome training set into H2O
train = h2o.import_file("https://s3.amazonaws.com/erin-data/higgs/higgs_train_10k.csv")
# Identify predictors and response
x = train.columns
y = "response"
x.remove(y)
# For binary classification, response should be a factor
train[y] = train[y].asfactor()
# Add a fold column, generate from a list
# The list has 10 unique values, so there will be 10 folds
fold_list = list(range(10)) * 1000
train['fold_id'] = h2o.H2OFrame(fold_list)
# Train and cross-validate a GBM
my_gbm = H2OGradientBoostingEstimator(distribution="bernoulli",
                                      ntrees=10,
                                      keep_cross_validation_predictions=True,
                                      seed=1)
my_gbm.train(x=x, y=y, training_frame=train, fold_column="fold_id")
# Train and cross-validate a RF
my_rf = H2ORandomForestEstimator(ntrees=50,
                                 keep_cross_validation_predictions=True,
                                 seed=1)
my_rf.train(x=x, y=y, training_frame=train, fold_column="fold_id")
# Train a stacked ensemble using the GBM and RF above
ensemble = H2OStackedEnsembleEstimator(base_models=[my_gbm, my_rf])
ensemble.train(x=x, y=y, training_frame=train)
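If you then want to score the ensemble on new data, as in your question, a minimal sketch would look like the lines below (the test file simply mirrors the Higgs example used earlier in this thread; the warning about a missing 'fold' column should be harmless here, since the fold column only assigns CV folds during training and is not a predictor):
# import a matching test set and score the ensemble on it
test = h2o.import_file("https://s3.amazonaws.com/erin-data/higgs/higgs_test_5k.csv")
test[y] = test[y].asfactor()
perf_stack_test = ensemble.model_performance(test)
pred = ensemble.predict(test)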
To answer your second question about how to view the cross-validated predictions of a model: they are stored in two places, but the method you probably want is .cross_validation_holdout_predictions(). It returns a single H2OFrame of the cross-validated predictions, in the original order of the training observations:
In [11]: my_gbm.cross_validation_holdout_predictions()
Out[11]:
predict p0 p1
--------- -------- --------
1 0.323155 0.676845
1 0.248131 0.751869
1 0.288241 0.711759
1 0.407768 0.592232
1 0.507294 0.492706
0 0.6417 0.3583
1 0.253329 0.746671
1 0.289916 0.710084
1 0.524328 0.475672
1 0.252006 0.747994
[10000 rows x 3 columns]
The second method, .cross_validation_predictions(), returns a list that stores the predictions from each fold in an H2OFrame with the same number of rows as the original training frame; the rows that are not in that fold's holdout set have a value of zero. This is not usually the format people find most useful, so I'd recommend using the other method instead.
In [13]: type(my_gbm.cross_validation_predictions())
Out[13]: list
In [14]: len(my_gbm.cross_validation_predictions())
Out[14]: 10
In [15]: my_gbm.cross_validation_predictions()[0]
Out[15]:
predict p0 p1
--------- -------- --------
1 0.323155 0.676845
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
[10000 rows x 3 columns]
My Random Forest model code concludes with:
print('\nModel performance:')
performance = best_nn.model_performance(test_data = test)
accuracy = performance.accuracy()
precision = performance.precision()
F1 = performance.F1()
auc = performance.auc()
print(' accuracy.................', accuracy)
print(' precision................', precision)
print(' F1.......................', F1)
print(' auc......................', auc)
and this code produces the following output:
Model performance:
accuracy................. [[0.6622929108639558, 0.9078947368421053]]
precision................ [[0.6622929108639558, 1.0]]
F1....................... [[0.304835115538703, 0.5853658536585366]]
auc...................... 0.9103448275862068
Why am I getting two numbers for accuracy, precision and F1, and what do they mean?
Charles
PS: My environment is:
H2O cluster uptime: 6 mins 02 secs
H2O cluster version: 3.10.4.8
H2O cluster version age: 2 months and 9 days
H2O cluster name: H2O_from_python_Charles_wdmhb7
H2O cluster total nodes: 1
H2O cluster free memory: 21.31 Gb
H2O cluster total cores: 8
H2O cluster allowed cores: 4
H2O cluster status: locked, healthy
H2O connection url: http://localhost:54321
H2O connection proxy:
H2O internal security: False
Python version: 3.6.2 final
The two numbers are the threshold and the value of the metric at that threshold, respectively. Once the threshold is determined, the accuracy or precision metric can be calculated.
If you use model.confusion_matrix() you can see what threshold was used.
For example, in binary classification the "threshold" is the value (between 0 and 1) that determines the predicted class label. If your model predicts 0.2 for a particular test case and your threshold is 0.4, the predicted class label will be 0. If your threshold were 0.15, the predicted class label would be 1.
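As an illustration, here is a small sketch (assuming the performance object from your code) that prints the confusion matrix and requests metrics at an explicit threshold; the binomial metrics methods accept a thresholds list and again return [threshold, value] pairs:
print(performance.confusion_matrix())          # shows the threshold H2O picked (max-F1 by default)
print(performance.accuracy(thresholds=[0.5]))  # accuracy evaluated at a threshold you choose
print(performance.F1(thresholds=[0.5]))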
I have normalized my data and applied regression analysis to predict yield (y),
but my predicted output also comes out normalized (between 0 and 1).
I want my predicted answer in the original data units, not between 0 and 1.
Data:
Total_yield(y) Rain(x)
64799.30 720.1
77232.40 382.9
88487.70 1198.2
77338.20 341.4
145602.05 406.4
67680.50 325.8
84536.20 791.8
99854.00 748.6
65939.90 1552.6
61622.80 1357.7
66439.60 344.3
Next, I normalized the data using this code:
from sklearn.preprocessing import Normalizer
import pandas
import numpy
dataframe = pandas.read_csv('/home/desktop/yield.csv')
array = dataframe.values
X = array[:,0:2]
scaler = Normalizer().fit(X)
normalizedX = scaler.transform(X)
print(normalizedX)
Total_yield Rain
0 0.999904 0.013858
1 0.999782 0.020872
2 0.999960 0.008924
3 0.999967 0.008092
4 0.999966 0.008199
5 0.999972 0.007481
6 0.999915 0.013026
7 0.999942 0.010758
8 0.999946 0.010414
9 0.999984 0.005627
10 0.999967 0.008167
Next, I use these normalized values to calculate the R-squared with the following code:
array=normalizedX
data = pandas.DataFrame(array,columns=['Total_yield','Rain'])
import statsmodels.formula.api as smf
lm = smf.ols(formula='Total_yield ~ Rain', data=data).fit()
lm.summary()
Output :
<class 'statsmodels.iolib.summary.Summary'>
"""
OLS Regression Results
==============================================================================
Dep. Variable: Total_yield R-squared: 0.752
Model: OLS Adj. R-squared: 0.752
Method: Least Squares F-statistic: 1066.
Date: Thu, 09 Feb 2017 Prob (F-statistic): 2.16e-108
Time: 14:21:21 Log-Likelihood: 941.53
No. Observations: 353 AIC: -1879.
Df Residuals: 351 BIC: -1871.
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [95.0% Conf. Int.]
------------------------------------------------------------------------------
Intercept 1.0116 0.001 948.719 0.000 1.009 1.014
Rain -0.3013 0.009 -32.647 0.000 -0.319 -0.283
==============================================================================
Omnibus: 408.798 Durbin-Watson: 1.741
Prob(Omnibus): 0.000 Jarque-Bera (JB): 40636.533
Skew: -4.955 Prob(JB): 0.00
Kurtosis: 54.620 Cond. No. 10.3
==============================================================================
Now, R-squared = 0.75, and the regression model is:
y = b0 + b1 * x
Yield = b0 + b1 * Rain
Yield = intercept + (coefficient for Rain) * Rain
When I plug my raw Rain value into this equation, it gives:
Yield = 1.0116 + (-0.3013 * 720.1 mm) = -215.95
A yield of -215.95 is wrong, and when I use the normalized value for the rain data, the predicted yield comes out normalized, between 0 and 1.
I want to predict: if rainfall is 720.1 mm, how much yield will there be?
Can anyone help me get the predicted yield in the original units? I want to compare predicted yield vs. given yield.
First, you should not use Normalizer in this case. It does not scale each feature; it normalizes each sample (row) to unit norm, which is not what you want here.
Use MinMaxScaler or RobustScaler to scale each feature. See the preprocessing docs for more details.
Second, these classes have an inverse_transform() function which can convert the predicted y value back to the original units.
import numpy as np
from sklearn.preprocessing import RobustScaler

x = np.asarray([720.1, 382.9, 1198.2, 341.4, 406.4, 325.8,
                791.8, 748.6, 1552.6, 1357.7, 344.3]).reshape(-1, 1)
y = np.asarray([64799.30, 77232.40, 88487.70, 77338.20, 145602.05, 67680.50,
                84536.20, 99854.00, 65939.90, 61622.80, 66439.60]).reshape(-1, 1)

scalerx = RobustScaler()
x_scaled = scalerx.fit_transform(x)
scalery = RobustScaler()
y_scaled = scalery.fit_transform(y)
Call your statsmodels OLS on this scaled data.
While predicting, first transform your test data:
x_scaled_test = scalerx.transform([[720.1]])  # transform() expects a 2D array
Apply your regression model on this value and get the result. This result of y will be according to the scaled data.
Yield_scaled = b0 + b1 * x_scaled_test
So inverse transform it to get data in original units.
Yield_original = scalery.inverse_transform(Yield_scaled)
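Putting the pieces together, here is a minimal end-to-end sketch (using statsmodels OLS on the scaled arrays defined above; the variable names are just for illustration):
import statsmodels.api as sm
# fit OLS on the scaled data (add a constant column for the intercept)
X_design = sm.add_constant(x_scaled)
model = sm.OLS(y_scaled, X_design).fit()
# predict for rainfall = 720.1 mm, in scaled units first
x_scaled_test = scalerx.transform([[720.1]])
yield_scaled = model.predict(sm.add_constant(x_scaled_test, has_constant='add'))
# convert the prediction back to the original yield units
yield_original = scalery.inverse_transform(yield_scaled.reshape(-1, 1))
print(yield_original)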
But in my opinion, this linear model will not give much accuracy, as becomes clear when you plot your data: it will not be fitted well by a linear model. Use other techniques, or get more data.