TL;DR: How to track and serve the input transformation for Keras-flavored MLflow models?
Neural network training usually involves preprocessing steps in which
continuous variables are scaled and shifted to have unit width and zero mean,
and categorical variables (integer or string) are transformed to a one-hot encoding.
When the model is applied to new data, the scaling weights and the category-to-index association need to be known.
In Keras there are generally two options for performing preprocessing:
Option 1: using preprocessing layers, or
Option 2: performing the transformation before training, when the dataset is loaded.
With Option 1, the transformation is part of the model and will be applied automatically when the network is used and served as an MLflow model.
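For illustration, with Option 1 the fitted statistics live inside the model graph; here is a minimal sketch using tf.keras.layers.Normalization (a stable layer from TF 2.6 on, earlier under tf.keras.layers.experimental.preprocessing):

import numpy as np
import tensorflow as tf

# Option 1: the scaler is part of the model, so it is serialized with it.
norm = tf.keras.layers.Normalization()
norm.adapt(np.random.rand(100, 3))  # learns mean/variance from (dummy) data
model = tf.keras.Sequential([norm, tf.keras.layers.Dense(1)])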
My question concerns Option 2: what is the recommended way
to keep track of the input transformation in MLflow across different experiments,
and how can the same transformation be applied when the model is served, e.g. with mlflow models serve?
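One possible pattern for Option 2 (a sketch, not necessarily the officially recommended way) is to log the fitted preprocessor as an artifact next to the network and wrap both in a custom mlflow.pyfunc.PythonModel, so that serving applies the same transformation; the file names scaler.joblib and model.h5 below are assumptions:

import mlflow.pyfunc

class PreprocessedKerasModel(mlflow.pyfunc.PythonModel):
    def load_context(self, context):
        import joblib
        from tensorflow import keras
        # Artifacts were logged together with the model (see log_model below).
        self.scaler = joblib.load(context.artifacts["scaler"])
        self.model = keras.models.load_model(context.artifacts["keras_model"])

    def predict(self, context, model_input):
        # Apply the stored transformation before the network.
        return self.model.predict(self.scaler.transform(model_input))

mlflow.pyfunc.log_model(
    artifact_path="model",
    python_model=PreprocessedKerasModel(),
    artifacts={"scaler": "scaler.joblib", "keras_model": "model.h5"},
)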
Related
I am working on a case study where I want to compare the performance of a standard LSTM model and a cascaded LSTM model, as shown in the block diagram below. I would like to know which function could be used to stack these models. It is worth mentioning that each output sequence is an input to the next block, i.e. the LSTM 1-hr blocks are cascaded with each other, and each output block was trained separately in a supervised manner while the weights of the input block were frozen. Each secondary block is initialized with the weights of the basic 1-hr model.
The image shows the block diagram of the models that I want to build.
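A minimal functional-API sketch of one way to wire such a cascade (layer sizes, sequence length, and the weight-initialization details are assumptions, since the exact architecture is only given in the picture):

from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(24, 1))  # hypothetical: 24 steps, 1 feature

def make_block(name):
    # One LSTM block that returns a sequence, so blocks can be chained.
    return keras.Sequential(
        [layers.LSTM(32, return_sequences=True), layers.Dense(1)], name=name)

block_1hr = make_block("lstm_1hr")  # assume pre-trained weights are loaded here
block_2hr = make_block("lstm_2hr")

seq_1hr = block_1hr(inputs)   # output sequence of the input block
seq_2hr = block_2hr(seq_1hr)  # cascaded: next block consumes the previous output

block_2hr.set_weights(block_1hr.get_weights())  # initialize from the 1-hr model
block_1hr.trainable = False                     # freeze the input block

cascade = keras.Model(inputs, [seq_1hr, seq_2hr])
cascade.compile(optimizer="adam", loss="mse")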
I want to evaluate different classifiers in performing the link-prediction task by using node embedding algorithms. More specifically, I want to evaluate if node embedding can improve the accuracy of different classifiers predicting new links between nodes.
My idea is the following:
I create a dataset containing both positive and negative samples (real links and non-existing links)
I split the dataset into a Development Set (DS) and an Evaluation Set (ES).
I use the DS to perform the Grid Search cross-validation (CV) to find the best model
I train the best model on the entire DS, and then I evaluate its performance on ES.
The problem is the following: I cannot run the node embedding algorithm on the entire dataset because, in that case, the ES would contain information about the original graph topology. Therefore, I need to extract node embeddings from the training and test sets generated during the Grid Search CV, but how can I do that with the sklearn.model_selection.GridSearchCV class?
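One pattern that fits GridSearchCV is a custom transformer placed first in a Pipeline, so the embedding is re-fit on the training folds only. The sketch below stubs the actual embedding with random vectors, since the real fit depends on your embedding library; the NodeEmbedder name and the element-wise edge features are assumptions:

import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

class NodeEmbedder(BaseEstimator, TransformerMixin):
    """Learns node embeddings from the training folds' edges only."""

    def __init__(self, dimensions=64):
        self.dimensions = dimensions

    def fit(self, X, y=None):
        # X: array of (source, target) node pairs from the training folds.
        # Build the graph from these edges and fit your embedding model here
        # (e.g. node2vec); the stub below uses random vectors for brevity.
        rng = np.random.default_rng(0)
        self.embedding_ = {n: rng.normal(size=self.dimensions)
                           for n in np.unique(X)}
        return self

    def transform(self, X):
        # Edge feature = element-wise product of the endpoint embeddings
        # (nodes unseen during fit fall back to zero vectors).
        zero = np.zeros(self.dimensions)
        return np.array([self.embedding_.get(u, zero) *
                         self.embedding_.get(v, zero) for u, v in X])

pipe = Pipeline([("embed", NodeEmbedder()), ("clf", LogisticRegression())])
grid = GridSearchCV(pipe, {"embed__dimensions": [32, 64], "clf__C": [0.1, 1.0]},
                    cv=5)
# grid.fit(edge_pairs, labels)  # embeddings are re-fit inside every CV fold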
How can I use the weights of a pre-trained network in my tensorflow project?
I know some of the theory behind this, but nothing about how to code it in TensorFlow.
As has been pointed out by @Matias Valdenegro in the comments, your first question does not make sense. For your second question, however, there are multiple ways to do so. The term you are searching for is Transfer Learning (TL). TL means transferring the "knowledge" (basically just the weights) from a pre-trained model into your model. There are several types of TL.
1) You transfer all the weights from a pre-trained model into your model and use that as a starting point for training your network.
This is done in a situation where you now have extra data to train your model with, but you don't want to start the training over from scratch. Therefore you just load the weights from your previous model and resume training.
2) You transfer only some of the weights from a pre-trained model into your new model.
This is done in a situation where you have a model trained to classify between, say, 5 classes of objects. Now you want to add or remove a class. You don't have to re-train the whole network from scratch if the new class that you're adding shares somewhat similar features with the existing classes. Therefore, you build another model with the exact same architecture as your previous model, except for the fully-connected layers, where you now have a different output size. In this case, you'll want to load the weights of the convolutional layers from the previous model and freeze them, while re-training only the fully-connected layers.
To perform these in TensorFlow:
1) The first type of TL can be performed by creating a model with the exact same architecture as the previous model, loading the weights with the tf.train.Saver().restore() method, and continuing the training.
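A minimal TF 1.x sketch of this (the checkpoint path is hypothetical, and the graph must be rebuilt with the same architecture before restoring):

import tensorflow as tf  # TF 1.x style, matching tf.train.Saver

# ... rebuild the exact same graph as the pre-trained model here ...
saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, "/path/to/model.ckpt")  # hypothetical checkpoint path
    # ... resume training with the restored weights ...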
2) The second type of TL can be performed by creating a model with the exact same architecture for the parts where you want to retain the weights, and then specifying the names of the weights you want to load from the pre-trained checkpoint. You can pass trainable=False to prevent TensorFlow from updating them.
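A TF 1.x sketch of the partial restore (layer names, shapes, and the checkpoint path are assumptions):

import tensorflow as tf  # TF 1.x style

# Same architecture for the convolutional part; trainable=False freezes it.
images = tf.placeholder(tf.float32, [None, 28, 28, 1])
conv1 = tf.layers.conv2d(images, 32, 3, name="conv1", trainable=False)
logits = tf.layers.dense(tf.layers.flatten(conv1), 6, name="new_fc")

# Restore only the convolutional weights from the pre-trained checkpoint.
conv_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope="conv1")
saver = tf.train.Saver(var_list=conv_vars)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.restore(sess, "/path/to/pretrained.ckpt")  # hypothetical path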
I hope this helps.
The tf.estimator package defines a lot of estimators, and I want to use them in Keras.
I checked the TF docs; there is only one conversion method, which converts a keras.Model to a tf.estimator, but no way to convert from an estimator to a Model.
For example, if we want to convert the following estimator:
tf.estimator.DNNLinearCombinedRegressor
How could it be converted into a Keras Model?
You cannot, because estimators can run arbitrary code in their model_fn functions, while Keras models must be much more structured: whether sequential or functional, they basically must consist of layers.
A Keras model is a very specific type of object that can therefore be easily wrapped and plugged into other abstractions.
Estimators are based on arbitrary Python code with arbitrary control flow and so it's quite tricky to force any structure onto them.
Estimators support 3 modes - train, eval and predict. Each of these could in theory have a completely independent flow, with different weights, architectures, etc. This is almost unthinkable in Keras and would essentially amount to 3 separate models.
Keras, in contrast, supports 2 modes - train and test (a distinction that is necessary for things like Dropout and Regularisation).
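For reference, the one supported direction mentioned in the question looks like this (a minimal sketch; it requires a TF version that still ships the Estimator API):

import tensorflow as tf

# Wrap a compiled Keras model as an Estimator - the reverse is not possible.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

estimator = tf.keras.estimator.model_to_estimator(keras_model=model)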
I was trying to find the features that dominate the output of my regression model. Following is my code.
import numpy as np
from keras.wrappers.scikit_learn import KerasRegressor
from sklearn.feature_selection import RFE
from sklearn.pipeline import Pipeline

# baseline_model, X_set and Y_set are defined elsewhere
seed = 7
np.random.seed(seed)
estimators = []
estimators.append(('mlp', KerasRegressor(build_fn=baseline_model, epochs=3,
                                         batch_size=20)))
pipeline = Pipeline(estimators)
rfe = RFE(estimator=pipeline, n_features_to_select=5)
fit = rfe.fit(X_set, Y_set)
But I get the following runtime error when running.
RuntimeError: The classifier does not expose "coef_" or "feature_importances_" attributes
How can I overcome this issue and select the best features for my model? If not, can I use algorithms like LogisticRegression(), which are provided and supported by RFE in scikit-learn, to find the best features for my dataset?
I assume your Keras model is some kind of neural network (NN). With NNs in general it is hard to see which input features are relevant and which are not. The reason is that each input feature has multiple coefficients linked to it - one for each node of the first hidden layer. Additional hidden layers make it even more complicated to determine how big an impact an input feature has on the final prediction.
For linear models, on the other hand, it is very straightforward, since each feature x_i has a corresponding weight/coefficient w_i whose magnitude directly determines how big an impact it has on the prediction (assuming that the features are scaled, of course).
The RFE (recursive feature elimination) estimator assumes that your prediction model has an attribute coef_ (linear models) or feature_importances_ (tree models) whose length equals the number of input features and whose values represent their relevance (in absolute terms).
My suggestion:
1) Feature selection:
Option a) Run the RFE on any linear/tree model to reduce the number of features to some desired number n_features_to_select.
Option b) Use regularized linear models like lasso/elastic net that enforce sparsity. The problem here is that you cannot directly set the actual number of selected features.
Option c) Use any other feature selection technique from here.
2) Neural network: use only the features from step 1) for your neural network (see the sketch below).
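A minimal sketch of option a), assuming the X_set and Y_set arrays from the question and an arbitrary Lasso alpha:

import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import Lasso

# Step 1: select features with a linear model that exposes coef_.
selector = RFE(estimator=Lasso(alpha=0.01), n_features_to_select=5)
selector.fit(X_set, Y_set)
X_selected = selector.transform(X_set)  # keeps only the 5 chosen columns

# Step 2: train the Keras network on the reduced feature set.
# model = baseline_model()  # the builder from the question
# model.fit(X_selected, Y_set, epochs=3, batch_size=20)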
Suggestion:
Perform the RFE algorithm with an sklearn-based estimator to observe the feature importances. Then use the most important features to train your Keras-based model.
To your question: Standardization is not required for logistic regression