PyTorch equivalent for Keras model

How to get the perfect copy of this Keras sequential network in PyTorch?
model = Sequential()
model.add(Masking(mask_value=-99., input_shape=(sequence_length, train_array.shape[2])))
model.add(LSTM(32, activation='tanh'))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')  # the model is recompiled to reset the optimizer
model.load_weights('simple_lstm_weights.h5')  # weights are reloaded to ensure reproducible results
history = model.fit(train_split_array, train_split_label,
                    validation_data=(val_split_array, val_split_label),
                    epochs=5,
                    batch_size=32)
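A minimal sketch of what a PyTorch counterpart could look like (an illustration, not a verified port). Note that PyTorch has no built-in Masking layer, so the padded steps (value -99.) would have to be handled separately, e.g. with torch.nn.utils.rnn.pack_padded_sequence:
import torch
import torch.nn as nn

class SimpleLSTM(nn.Module):
    def __init__(self, n_features):
        super().__init__()
        # nn.LSTM uses tanh by default, matching LSTM(32, activation='tanh')
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=32, batch_first=True)
        self.fc = nn.Linear(32, 1)

    def forward(self, x):
        # x: (batch, sequence_length, n_features)
        out, _ = self.lstm(x)
        # Keras LSTM(32) without return_sequences keeps only the last time step
        return self.fc(out[:, -1, :])

model = SimpleLSTM(n_features=train_array.shape[2])
criterion = nn.MSELoss()                          # loss='mean_squared_error'
optimizer = torch.optim.Adam(model.parameters())  # optimizer='adam'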

Related

How to get 90%+ test accuracy on IMDB data?

I was trying to train a model using IMDB data. I am getting the expected train accuracy of about 96%+, but I am not satisfied with the test accuracy. My expectation is to get 90%+ test accuracy on the test data. I tried several classifiers, but each time I get 84% to 89% accuracy on the test data. Below are some classifiers I have already tried. In most cases I did some parameter tuning by increasing the epochs or changing the optimizer. My concern is how I can increase the test accuracy to 90%+.
Classifiers I tried so far:
First:
model = Sequential()
model.add(Embedding(vocab_size, 32, input_length = max_words))
model.add(Bidirectional(LSTM(32, return_sequences = True)))
model.add(GlobalMaxPool1D())
model.add(Dense(20, activation="relu"))
model.add(Dropout(0.05))
model.add(Dense(1, activation="sigmoid"))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=10, batch_size=100)
Second:
model = Sequential([
    Embedding(vocab_size, 32, input_length=max_words),
    Dropout(0.2),
    ZeroPadding1D(padding=1),
    Convolution1D(64, 5, activation='relu'),
    Dropout(0.2),
    MaxPooling1D(),
    Flatten(),
    Dense(100, activation='relu'),
    Dropout(0.2),
    Dense(1, activation='sigmoid')
])
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=10, batch_size=100)
Judging by state-of-the-art analyses of the IMDB dataset, I don't think you can get to 90%+ with simple models like the ones you are using. However, you may try a pretrained embedding like GloVe instead of training your own embedding. Also, I found this repo has a BERT implementation in Keras, providing a demo of IMDB classification; it is able to get ~99% accuracy.
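For example, a minimal sketch of plugging pretrained GloVe vectors into the first model's Embedding layer, assuming embedding_matrix is a (vocab_size, 100) array you have already built from a GloVe file:
from keras.layers import Embedding

# embedding_matrix: assumed precomputed from a GloVe file, shape (vocab_size, 100)
embedding_layer = Embedding(vocab_size, 100,
                            weights=[embedding_matrix],
                            input_length=max_words,
                            trainable=False)  # keep the pretrained vectors frozen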

How can I get intermediate parameters after a training batch in keras?

I have trained a Keras LSTM model, but after training all I get are the final parameters of the model after 10 epochs with batch size 120. How can I get the intermediate parameters after each batch in Keras?
Example: after the 120 samples in each batch, I want the intermediate parameters at that step.
I have tried the callback method and the Keras backend, but I do not know how to get the intermediate parameters.
model = Sequential()
model.add(Embedding(max_features, 32))
#model.add(LSTM(32, return_sequences=True, input_shape=(1,texts.shape[0])))
model.add(LSTM(32))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
history_ltsm = model.fit(texts_train, y_train, epochs=10, batch_size=120, validation_split=0.2)
I expected to run the model step by step, batch by batch, so I could inspect the intermediate parameters, not just the results after all epochs.
Thank you very much!
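One common approach (a sketch, not taken from this thread) is a custom Callback whose on_batch_end hook copies the weights after every batch:
from keras.callbacks import Callback

class BatchWeights(Callback):
    # Collect a snapshot of all model weights after every batch
    def on_train_begin(self, logs=None):
        self.weights_per_batch = []

    def on_batch_end(self, batch, logs=None):
        self.weights_per_batch.append(self.model.get_weights())

tracker = BatchWeights()
history_ltsm = model.fit(texts_train, y_train, epochs=10, batch_size=120,
                         validation_split=0.2, callbacks=[tracker])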

Is it possible to train a CNN starting at an intermediate layer (in general and in Keras)?

I'm using mobilenet v2 to train a model on my images. I've frozen all but a few layers and then added additional layers for training. I'd like to be able to train from an intermediate layer rather than from the beginning. My questions:
1. Is it possible to provide the output of the last frozen layer as the input for training (it would be a tensor of (?, 7, 7, 1280))?
2. How does one specify training to start from that first trainable (non-frozen) layer? In this case, mbnetv2_conv.layer[153].
3. What is y_train in this case? I don't quite understand how y_train is being used during the training process. In general, when does the CNN refer back to y_train?
# Load mobilenet v2
image_size = 224
mbnetv2_conv = MobileNetV2(weights='imagenet', include_top=False, input_shape=(image_size, image_size, 3))
# Freeze all layers except the last 3 layers
for layer in mbnetv2_conv.layers[:-3]:
    layer.trainable = False
# Create the model
model = models.Sequential()
model.add(mbnetv2_conv)
model.add(layers.Flatten())
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(3, activation='softmax'))
model.summary()
# Build an array (?,224,224,3) from images
x_train = np.array(all_images)
# Get layer output
from keras import backend as K
get_last_frozen_layer_output = K.function([mbnetv2_conv.layers[0].input],
                                          [mbnetv2_conv.layers[152].output])
last_frozen_layer_output = get_last_frozen_layer_output([x_train])[0]
# Compile the model
from keras.optimizers import SGD
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['acc'])
# how to train from a specific layer and what should y_train be?
model.fit(last_frozen_layer_output, y_train, batch_size=2, epochs=10)
Yes, you can, in two different ways.
First, the hard way: build two new models, one with all your frozen layers and one with all your trainable layers, and add a Flatten() layer to the frozen-layers-only model. Copy the weights from MobileNetV2 layer by layer to populate the weights of the frozen-layers-only model. Then run your input images through the frozen-layers-only model, saving the output to disk in CSV or pickle form. This becomes the input for your trainable-layers model, which you train with model.fit() as you did above. Save the weights when you're done training. Finally, build the original model with both sets of layers, load the weights into each layer, and save the whole thing. You're done!
However, the easier way is to save the weights of your model separately from the architecture with:
model.save_weights(filename)
then modify the layer.trainable property of the layers in MobileNetV2 before you add it into a new empty model:
mbnetv2_conv = MobileNetV2(weights='imagenet', include_top=False, input_shape=(image_size, image_size, 3))
for layer in mbnetv2_conv.layers[:153]:
    layer.trainable = False
model = models.Sequential()
model.add(mbnetv2_conv)
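# Assumption: the same head layers as before must be added back here,
# since load_weights() below requires the architecture to match the saved weights
model.add(layers.Flatten())
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(3, activation='softmax'))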
then reload the weights with
model.load_weights(filename)
This lets you adjust which layers in your mbnetv2_conv model you will be training on the fly, and then just call model.fit() to continue training.

Find Most Important Input from a Neural Network

I trained a neural network with 37 inputs. It has around 85% accuracy. Is it possible for me to find out which input has the most effect? I tried this code, but I cannot figure out how to find the most important input:
weights = model.layers[0].get_weights()[0]
biases = model.layers[0].get_weights()[1]
One possible solution is to wrap your model with keras.wrappers.scikit_learn and then use Recursive Feature Elimination (RFE) in scikit-learn:
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.feature_selection import RFE
import matplotlib.pyplot as plt

def create_model():
    # create model
    model = Sequential()
    model.add(Dense(512, activation='relu'))
    model.add(Dense(512, activation='relu'))
    model.add(Dense(10, activation='softmax'))
    # Compile model
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

model = KerasClassifier(build_fn=create_model, epochs=100, batch_size=128, verbose=0)
rfe = RFE(estimator=model, n_features_to_select=1, step=1)
rfe.fit(X, y)  # X, y: your training data
ranking = rfe.ranking_.reshape(digits.images[0].shape)  # reshaping assumes image-shaped inputs
# Plot pixel ranking
plt.matshow(ranking, cmap=plt.cm.Blues)
plt.colorbar()
plt.title("Ranking of pixels with RFE")
plt.show()
If you need to visualize weights see here.

How does the input of an RNN model layer work?

I don't understand the input of my RNN model. Why does it show None before the node size in every layer? Why is it (None, 1), (None, 12)?
This is my code.
K.clear_session()
model = Sequential()
model.add(Dense(12, input_dim=1, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.summary()
This is not an RNN; it's just a fully connected network (FC or Dense).
The first dimension of every tensor in a Keras network is the batch_size, which represents the number of "samples" or "examples" you are passing to the model. The value is None because this dimension is not fixed; you can have batches of any size you want.
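A quick way to see this with the model above (a sketch): the same trained model accepts any batch size at prediction time.
import numpy as np

print(model.predict(np.zeros((1, 1))).shape)   # (1, 1)  -> batch of 1
print(model.predict(np.zeros((64, 1))).shape)  # (64, 1) -> batch of 64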
