Pipeline with XGBoost - Imputer and Scaler prevent Model from learning - scikit-learn

I'm trying to build a pipeline for data preprocessing for my XGBoost model. The data contains NaNs and needs to be scaled. This is the relevant code:
xgb_pipe = Pipeline(steps=[
    ('imputer', SimpleImputer(strategy='mean')),
    ('scaler', preprocessing.StandardScaler()),
    ('regressor', xgboost.XGBRegressor(n_estimators=100, eta=0.1, objective="reg:squarederror"))])

xgb_pipe.fit(train_x.values, train_y.values,
             regressor__early_stopping_rounds=20,
             regressor__eval_metric="rmse",
             regressor__eval_set=[[train_x.values, train_y.values], [test_x.values, test_y.values]])
The loss immediately increases and the training stops after 20 iterations.
If I remove the imputer and the scaler from the pipeline, it works and trains for the full 100 iterations. If I manually preprocess the data it also works as intended, so I know that the problem is not the data.
What am I missing?

The problem is that the preprocessing doesn't get applied to your eval sets, and so the model performs quite badly on them, and early stopping kicks in very early.
I'm not sure there's a simple way to do this that would keep everything in one pipeline, unfortunately. You need to apply the preprocessing steps of the pipeline to the eval sets, so those steps need to be fitted before you can build that parameter.
Separate preprocessing
As two objects it's no problem:
preproc = Pipeline(steps=[
    ('imputer', SimpleImputer(strategy='mean')),
    ('scaler', preprocessing.StandardScaler()),
])
reg = xgboost.XGBRegressor(n_estimators=100, eta=0.1, objective="reg:squarederror")

train_x_preproc = preproc.fit_transform(train_x.values, train_y.values)
test_x_preproc = preproc.transform(test_x.values)

# Note: since reg is fitted directly (not through a pipeline), the fit parameters
# are passed without the regressor__ prefix, and the training data is the
# preprocessed version.
reg.fit(train_x_preproc, train_y.values,
        early_stopping_rounds=20,
        eval_metric="rmse",
        eval_set=[[train_x_preproc, train_y.values], [test_x_preproc, test_y.values]],
)
After fitting you could put these now-fitted estimators together into a pipeline (pipelines don't clone their estimators) for prediction if you'd like.
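For example, a minimal sketch of that idea, reusing the already-fitted preproc and reg from above:

# Pipelines don't clone their estimators, so the fitted state is kept as long
# as you don't call fit() on the combined pipeline again.
fitted_pipe = Pipeline(steps=[('preproc', preproc), ('regressor', reg)])
predictions = fitted_pipe.predict(test_x.values)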
Custom estimator
There are a lot of ways to go about this, but inheriting from Pipeline means you can initialize it the same way you do in your current setup; we just assume the last step is an xgboost model, and the rest are preprocessing steps that need to be applied to the eval sets as well as to the fitting and predicting sets. I think everything else can be left to the inherited methods from Pipeline?
class PreprocEarlyStoppingXGB(Pipeline):
    def fit(self, X, y, eval_set):
        # All steps except the last are preprocessing; the last is the xgboost model.
        preproc = Pipeline(self.steps[:-1])
        X_preproc = preproc.fit_transform(X, y)
        # Apply the now-fitted preprocessing to every eval set as well.
        eval_preproc = []
        for eval_X, eval_y in eval_set:
            eval_preproc.append([preproc.transform(eval_X), eval_y])
        self.steps[-1][1].fit(X_preproc, y, eval_set=eval_preproc)
        return self
To your use case from the comments: what happens when you cross-validate with this object? On each training fold, the preprocessing steps are fitted. They are then applied to the training fold, to all eval sets (the entire training set as well as the external test set), and finally when scoring the test fold. The xgboost model trains on the preprocessed training fold and watches the score on the entire (preprocessed) training set and the external (preprocessed) test set, the latter being used for early stopping.
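For illustration, a hedged usage sketch of that cross-validation setup. It assumes an xgboost version recent enough to accept early_stopping_rounds and eval_metric in the constructor, since the custom fit() above only forwards eval_set:

from sklearn.model_selection import cross_val_score

cv_pipe = PreprocEarlyStoppingXGB(steps=[
    ('imputer', SimpleImputer(strategy='mean')),
    ('scaler', preprocessing.StandardScaler()),
    ('regressor', xgboost.XGBRegressor(n_estimators=100, eta=0.1,
                                       objective="reg:squarederror",
                                       early_stopping_rounds=20,
                                       eval_metric="rmse")),
])
# fit_params forwards eval_set to the custom fit() on every training fold.
scores = cross_val_score(
    cv_pipe, train_x.values, train_y.values, cv=5,
    fit_params={'eval_set': [(train_x.values, train_y.values),
                             (test_x.values, test_y.values)]},
)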

Related

Keras fitting setting in TensorFlow Extended (TFX)

I am trying to construct a TFX pipeline with a trainer component and a Keras model defined like this:
def run_fn(fn_args: components.FnArgs):
    transform_output = TFTransformOutput(fn_args.transform_output)

    train_dataset = input_fn(fn_args.train_files,
                             fn_args.data_accessor,
                             transform_output,
                             num_batch)
    eval_dataset = input_fn(fn_args.eval_files,
                            fn_args.data_accessor,
                            transform_output,
                            num_batch)

    history = model.fit(train_dataset,
                        epochs=num_epochs,
                        steps_per_epoch=fn_args.train_steps,
                        validation_data=eval_dataset,
                        validation_steps=fn_args.eval_steps)
This works. However, if I change fitting to the following, this doesn't work:
history = model.fit(train_dataset,
                    epochs=num_epochs,
                    batch_size=num_batch,
                    validation_split=0.1)
Now, I have two questions:
Why does fitting only work with steps_per_epoch? I couldn't find any explicit statement supporting this, but this is the only way that works. I conclude that it must be something TFX-specific (does TFX handle input data only in a generator-like way?).
Let's say my train_dataset contains 100 instances and steps_per_epoch=1000 (with epochs=1). Does that mean that my 100 input instances are fed 10x each in order to reach the defined 1000 steps? Isn't that counter-productive from a training perspective?

Cross Validation in Sklearn using a Custom CV

I am dealing with a binary classification problem.
I have 2 lists of indexes listTrain and listTest, which are partitions of the training set (the actual test set will be used only later). I would like to use the samples associated with listTrain to estimate the parameters and the samples associated with listTest to evaluate the error in a cross validation process (hold out set approach).
However, I have not been able to find the correct way to pass this to sklearn's GridSearchCV.
The documentation says that I should create "An iterable yielding (train, test) splits as arrays of indices". However, I do not know how to create this.
grid_search = GridSearchCV(estimator=model, param_grid=param_grid, cv=custom_cv,
                           n_jobs=-1, verbose=0, scoring=errorType)
So, my question is how to create custom_cv based on these indexes to be used in this method?
X and y are, respectively, the feature matrix and the vector of labels.
Example: Suppose that I only have one hyperparameter alpha that belongs to the set {1,2,3}. I would like to set alpha=1, estimate the parameters of the model (for instance the coefficients of a regression) using the samples associated with listTrain, and evaluate the error using the samples associated with listTest. Then I repeat the process for alpha=2 and finally for alpha=3, and choose the alpha that minimizes the error.
EDIT: Actual answer to the question. Try passing the cv argument a generator of the indices:
def index_gen(listTrain, listTest):
    yield listTrain, listTest

grid_search = GridSearchCV(estimator=model, param_grid=param_grid,
                           cv=index_gen(listTrain, listTest), n_jobs=-1,
                           verbose=0, scoring=errorType)
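Equivalently (a minimal sketch), cv also accepts a plain list with a single (train, test) pair of index arrays, matching the documented "iterable yielding (train, test) splits as arrays of indices":

import numpy as np

custom_cv = [(np.array(listTrain), np.array(listTest))]
grid_search = GridSearchCV(estimator=model, param_grid=param_grid, cv=custom_cv,
                           n_jobs=-1, verbose=0, scoring=errorType)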
EDIT: Original answer, before edits:
As mentioned in the comment by desertnaut, what you are trying to do is bad ML practice, and you will end up with a biased estimate of the generalisation performance of the final model. Using the test set in the manner you're proposing will effectively leak test set information into the training stage, and give you an overestimate of the model's capability to classify unseen data. What I suggest in your case:
grid_search = GridSearchCV(estimator=model, param_grid=param_grid, cv=5,
                           n_jobs=-1, verbose=0, scoring=errorType)
grid_search.fit(x[listTrain], y[listTrain])
Now, your training set will be split into 5 folds (you can choose the number here); the model is trained on 4 of those folds with a specific set of hyperparameters and tested on the fold that was left out. This is repeated 5 times, until each of your training examples has been part of a left-out fold. The whole procedure is repeated for each hyperparameter setting you are testing (5x3 = 15 fits in this case).
grid_search.best_params_ will give you a dictionary of the parameters that performed the best over all 5 folds. These are the parameters that you use to train your final classifier, using again only the training set:
clf = LogisticRegression(**grid_search.best_params_).fit(x[listTrain], y[listTrain])
Now, finally your classifier is tested on the test set and an unbiased estimate of the generalisation performance is given:
predictions = clf.predict(x[listTest])

Obtaining Same Result as Sklearn Pipeline without Using It

How would one correctly standardize the data without using a pipeline? I just want to make sure my code is correct and there is no data leakage.
So if I standardize the entire dataset once, right at the beginning of my project, and then go on to try different CV tests with different ML algorithms, will that be the same as creating a scikit-learn Pipeline and performing the same standardization in conjunction with each ML algorithm?
y = df['y']
X = df.drop(columns=['y', 'Date'])
scaler = preprocessing.StandardScaler().fit(X)
X_transformed = scaler.transform(X)
clf1 = DecisionTreeClassifier()
clf1.fit(X_transformed, y)
clf2 = SVC()
clf2.fit(X_transformed, y)
####Is this the same as the below code?####
pipeline1 = Pipeline([
    ('standardize', StandardScaler()),
    ('clf1', DecisionTreeClassifier()),
])
pipeline1.fit(X_transformed, y)

pipeline2 = Pipeline([
    ('standardize', StandardScaler()),
    ('clf2', SVC()),
])
pipeline2.fit(X_transformed, y)
Why would anybody choose the latter other than personal preference?
They are the same. It is possible that you may want one or the other from a maintainability standpoint, but the outcome of a test set prediction will be identical.
Edit: Note that this is only the case because the StandardScaler is idempotent. It is strange that you fit the pipeline on data that has already been scaled...
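For completeness, a minimal sketch of the pattern that note hints at: fit the pipeline on the raw features and let the StandardScaler inside it do the scaling:

pipeline1 = Pipeline([
    ('standardize', StandardScaler()),
    ('clf1', DecisionTreeClassifier()),
])
pipeline1.fit(X, y)  # raw features; the scaler step standardizes them inside the pipeline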

How to use cross_val_predict to predict probabilities for a new dataset?

I am using sklearn's cross_val_predict for training like so:
myprobs_train = cross_val_predict(LogisticRegression(), X=x_old, y=y_old, method='predict_proba', cv=10)
I am happy with the returned probabilities, and would like now to score up a brand-new dataset. I tried:
myprobs_test = cross_val_predict(LogisticRegression(), X=x_new, y=None, method='predict_proba', cv=10)
but this did not work; it complains about y having zero shape. Does that mean there's no way to apply the trained and cross-validated model from cross_val_predict to new data? Or am I just using it wrong?
Thank you!
You are looking at the wrong method. Cross-validation methods do not return a trained model; they return values that evaluate the performance of a model (logistic regression in your case). Your goal is to fit some data and then generate predictions for new data. The relevant methods are fit and predict of the LogisticRegression class. Here is the basic structure:
from sklearn import linear_model

logreg = linear_model.LogisticRegression()
logreg.fit(x_old, y_old)
predictions = logreg.predict(x_new)
I have the same concern as #user3490622. If we can only use cross_val_predict on training and testing sets, why is y (target) None as the default value? (sklearn page)
To partially achieve the desired result of multiple predicted probabilities, one could apply the fit-then-predict approach repeatedly to mimic cross-validation.
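A rough sketch of that idea (assuming x_old, y_old and x_new are NumPy arrays): fit one model per fold of the old data and average the predicted probabilities for the new data.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

fold_probs = []
for train_idx, _ in KFold(n_splits=10).split(x_old):
    fold_model = LogisticRegression().fit(x_old[train_idx], y_old[train_idx])
    fold_probs.append(fold_model.predict_proba(x_new))
myprobs_test = np.mean(fold_probs, axis=0)  # average over the 10 fold models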

How to use hidden layer activations to construct loss function and provide y_true during fitting in Keras?

Assume I have a model like this. M1 and M2 are two layers linking left and right sides of the model.
The example model: Red lines indicate backprop directions
During training, I hope M1 can learn a mapping from L2_left activation to L2_right activation. Similarly, M2 can learn a mapping from L3_right activation to L3_left activation.
The model also needs to learn the relationship between two inputs and the output.
Therefore, I should have three loss functions for M1, M2, and L3_left respectively.
I probably can use:
model.compile(optimizer='rmsprop',
              loss={'M1': 'mean_squared_error',
                    'M2': 'mean_squared_error',
                    'L3_left': 'mean_squared_error'})
But during training, we need to provide y_true, for example:
model.fit([input_1,input_2], y_true)
In this case, the y_true is the hidden layer activations and not from a dataset.
Is it possible to build this model and train it using its hidden layer activations?
If you have only one output, you must have only one loss function.
If you want three loss functions, you must have three outputs, and, of course, three Y vectors for training.
If you want loss functions in the middle of the model, you must take outputs from those layers.
Creating the graph of your model: (if the model is already defined, see the end of this answer)
#Here, all "SomeLayer(blabla)" could be replaced by a "SomeModel" if necessary
#Example of using a layer or a model:
#M1 = SomeLayer(blablabla)(L12)
#M1 = SomeModel(L12)
from keras.models import Model
from keras.layers import *
inLef = Input((shape1))
inRig = Input((shape2))
L1Lef = SomeLayer(blabla)(inLef)
L2Lef = SomeLayer(blabla)(L1Lef)
M1 = SomeLayer(blablaa)(L2Lef) #this is an output
L1Rig = SomeLayer(balbla)(inRig)
conc2Rig = Concatenate(axis=?)([L1Rig,M1]) #Or Add, or Multiply, however you're joining the models
L2Rig = SomeLayer(nlanlab)(conc2Rig)
L3Rig = SomeLayer(najaljd)(L2Rig)
M2 = SomeLayer(babkaa)(L3Rig) #this is an output
conc3Lef = Concatenate(axis=?)([L2Lef,M2])
L3Lef = SomeLayer(blabla)(conc3Lef) #this is an output
Creating your model with three outputs:
Now that you've got your graph ready and you know what the outputs are, you create the model:
model = Model([inLef,inRig], [M1,M2,L3Lef])
model.compile(loss='mse', optimizer='rmsprop')
If you want different losses for each output, then you create a list:
#example of custom loss function, if necessary
def lossM1(yTrue, yPred):
    return keras.backend.sum(keras.backend.abs(yTrue - yPred))
#compiling with three different loss functions
model.compile(loss = [lossM1, 'mse','binary_crossentropy'], optimizer =??)
But you've got to have three different yTraining too, for training with:
model.fit([input_1,input_2], [yTrainM1,yTrainM2,y_true], ....)
If your model is already defined and you don't create its graph like I did:
Then, you have to find in yourModel.layers[i] which ones are M1 and M2, so you create a new model like this:
M1 = yourModel.layers[indexForM1].output
M2 = yourModel.layers[indexForM2].output
newModel = Model([inLef,inRig], [M1,M2,yourModel.output])
If you want that two outputs be equal:
In this case, just subtract the two outputs in a lambda layer, and make that lambda layer be an output of your model, with expected values = 0.
Using the exact same vars as before, we'll just create two additional layers to subtract outputs:
diffM1L1Rig = Lambda(lambda x: x[0] - x[1])([L1Rig,M1])
diffM2L2Lef = Lambda(lambda x: x[0] - x[1])([L2Lef,M2])
Now your model should be:
newModel = Model([inLef,inRig],[diffM1L1Rig,diffM2L2Lef,L3Lef])
And training will expect those two differences to be zero:
yM1 = np.zeros((shapeOfM1Output))
yM2 = np.zeros((shapeOfM2Output))
newModel.fit([input_1,input_2], [yM1,yM2,y_true], ...)
Trying to answer the last part: how to make gradients only affect one side of the model.
...well... at first that sounds unfeasible to me. But if that is similar to "train only a part of the model", then it's totally OK: define models that only go up to a certain point and make part of the layers untrainable.
By doing that, nothing will affect those layers. If that's what you want, then you can do it:
#using the previous vars to define other models
modelM1 = Model([inLef,inRig],diffM1L1Rig)
This model above ends in diffM1L1Rig. Before compiling, you must set L2Right untrainable:
modelM1.layers[??].trainable = False
#to find which layer is the right one, you may define them using the "name" parameter, or check the shapes, types etc. in modelM1.summary()
modelM1.compile(.....)
modelM1.fit([input_1, input_2], yM1)
This suggestion makes you train only a single part of the model. You can repeat the procedure for M2, locking the layers you need before compiling.
You can also define a full model taking all layers and lock only the ones you want. But you won't be able (I think) to make half of the gradients pass through one side and half through the other side.
So I suggest you keep three models, the fullModel, the modelM1, and the modelM2, and you cycle them in training. One epoch each, maybe....
That should be tested....
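A rough, untested sketch of that cycling idea (num_cycles is hypothetical, and modelM2 is assumed to be defined and compiled analogously to modelM1):

fullModel = newModel   # the full three-output model defined above
for cycle in range(num_cycles):
    modelM1.fit([input_1, input_2], yM1, epochs=1)                    # partial model, some layers locked
    modelM2.fit([input_1, input_2], yM2, epochs=1)                    # partial model, defined like modelM1
    fullModel.fit([input_1, input_2], [yM1, yM2, y_true], epochs=1)   # full model with all three outputs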
