How to get weights from another ONNX model - onnx

I have 2 ONNX models: model1 is (conv1-pooling1) and model2 is (conv2-pooling2-fc2). How can I generate a new model3, (conv1-pooling1-fc2), where the weights of conv1-pooling1 come from model1 and the weights of fc2 come from model2?
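One way to sketch the weight-copying half of this with the onnx Python API. This assumes you have already built model3's topology (for example by appending the fc node from model2 to a copy of model1's graph and rewiring its input to pooling1's output) and that model3's initializer names match those in the source models; all file paths are placeholders:

import onnx

# Load the source models and the target topology (paths are placeholders)
model1 = onnx.load('model1.onnx')          # conv1 -> pooling1
model2 = onnx.load('model2.onnx')          # conv2 -> pooling2 -> fc2
model3 = onnx.load('model3.onnx')          # conv1 -> pooling1 -> fc2, weights to fill

# Map initializer (weight) names to tensors in each source model
w1 = {init.name: init for init in model1.graph.initializer}
w2 = {init.name: init for init in model2.graph.initializer}

# Overwrite model3's weights with the matching source tensors
for init in model3.graph.initializer:
    src = w1.get(init.name) or w2.get(init.name)
    if src is not None:
        init.CopyFrom(src)                 # protobuf copy: dtype, shape, raw data

onnx.checker.check_model(model3)
onnx.save(model3, 'model3_merged.onnx')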

Related

Saving and applying a keras model to unseen data

I am building a model for my multiclass text classification problem which looks like this:
model = tf.keras.Sequential()
model.add(hub_layer)
for units in [128, 128, 64, 32]:  # automatically adding layers through a for loop
    model.add(tf.keras.layers.Dense(units, activation='relu'))
    model.add(tf.keras.layers.Dropout(0.3))
model.add(tf.keras.layers.Dense(18, activation='softmax'))
model.summary()

# Optimiser Function
model.compile(optimizer='adam',
              loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

history = model.fit(train_data_f,
                    epochs=20,
                    validation_data=test_data_f,
                    verbose=1,
                    class_weight=class_weights)
Once the build is done, I would like to do the following:
Save the model
Load the model and apply it to unseen text (for example, the column I would want to apply this model to is df['Unseen_Text'])
How can I achieve this?
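A minimal sketch of the save/load/predict cycle. The file name is a placeholder, and this assumes the model accepts raw strings (as a TF Hub text-embedding layer typically does); if hub_layer is a custom object, you may need to pass custom_objects to load_model or use the TensorFlow SavedModel format:

import tensorflow as tf

model.save('text_classifier')                           # SavedModel: architecture + weights
loaded = tf.keras.models.load_model('text_classifier')

unseen = df['Unseen_Text'].astype(str).values           # raw unseen texts
probs = loaded.predict(unseen)                          # shape: (n_samples, 18)
df['predicted_class'] = probs.argmax(axis=1)            # most likely class index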

How to save all the weights of an LSTM keras model in a 1D array

I have a saved LSTM model and its weights. I want to extract all the weights from each layer of this LSTM model and save them in a 1D array.
I tried
for layer in model.layers:
    l = layer.get_weights()
but it only returns the weights of the last layer.
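The loop above keeps only the last layer's weights because l is reassigned on every iteration. A minimal sketch that collects every weight tensor, flattens each one, and concatenates them into a single 1D array (the output file name is a placeholder):

import numpy as np

all_weights = []
for layer in model.layers:
    for w in layer.get_weights():       # each w is a NumPy array
        all_weights.append(w.flatten())
flat = np.concatenate(all_weights)      # single 1D array of all weights
np.save('lstm_weights_1d.npy', flat)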

Interpretation of predictions of sparse_categorical_crossentropy keras model

I am trying to train a model to classify news. I'm using the bbc-text database:
[image: Data]
I have transformed both the input and output variables with Tokenizer() so they can be used in a keras model.
Finally, I have set up the following model:
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(20000, 200, input_length=250),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(24, activation='relu'),
    tf.keras.layers.Dense(6, activation='softmax')])

model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['sparse_categorical_accuracy'])
model.fit(X_train_text, y_train_text, batch_size=32, epochs=50, validation_data=(X_test_text, y_test_text))
While I get a good accuracy for this model, I do not know how to interpret the predictions. I would have expected to get, for each input, a list of 5 probabilities (summing up to 1), as my output has 5 possible categories.
Instead, I get this:
[image: Predictions]
Any help please?
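For reference, a sketch of how the output of model.predict can be inspected here. Note the model above was built with Dense(6, activation='softmax'), so each prediction row has 6 entries (not 5) and sums to approximately 1:

import numpy as np

probs = model.predict(X_test_text)       # shape: (n_samples, 6), rows sum to ~1
pred_class = np.argmax(probs, axis=1)    # most likely class index per sample
print(probs[0], probs[0].sum())          # one distribution and its total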

How to reinitialize a layer in Keras but not the weights

I want to copy some layers of a given model into another new model (with all their attributes, like stride, padding, etc.). I want to keep the weights and attributes but not the inbound_node/outbound_node properties linked to the old model.
I don't think there is a function in Keras to do that, but it's very easy with the get_config(), get_weights() and set_weights() methods of Keras layers.
# Build a fresh layer of the same class from the old layer's config
new_fresh_layer = layer.__class__(**layer.get_config())
old_layer_weights = layer.get_weights()
# Calling the new layer on an input builds it, so the weights can then be set
x = new_fresh_layer(layer_inputs)
new_fresh_layer.set_weights(old_layer_weights)
It also works when adding layers sequentially to a model:
model1 = Sequential()
old_layer = Dense(10, input_shape=(10,))
model1.add(old_layer)                  # old_layer is built here

model2 = Sequential()
new_layer = old_layer.__class__(**old_layer.get_config())
model2.add(new_layer)                  # new_layer is built with the same config
new_layer.set_weights(old_layer.get_weights())

# kernel and bias now match exactly
assert (new_layer.get_weights()[0] == old_layer.get_weights()[0]).all()
assert (new_layer.get_weights()[1] == old_layer.get_weights()[1]).all()

Is it possible to train a CNN starting at an intermediate layer (in general and in Keras)?

I'm using mobilenet v2 to train a model on my images. I've frozen all but a few layers and then added additional layers for training. I'd like to be able to train from an intermediate layer rather than from the beginning. My questions:
Is it possible to provide the output of the last frozen layer as the input for training (it would be a tensor of shape (?, 7, 7, 1280))?
How does one specify training to start from that first trainable (non-frozen) layer? In this case, mbnetv2_conv.layers[153].
What is y_train in this case? I don't quite understand how y_train is used during the training process; in general, when does the CNN refer back to y_train?
# Load mobilenet v2
image_size = 224
mbnetv2_conv = MobileNetV2(weights='imagenet', include_top=False, input_shape=(image_size, image_size, 3))

# Freeze all layers except the last 3 layers
for layer in mbnetv2_conv.layers[:-3]:
    layer.trainable = False

# Create the model
model = models.Sequential()
model.add(mbnetv2_conv)
model.add(layers.Flatten())
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(3, activation='softmax'))
model.summary()

# Build an array (?, 224, 224, 3) from images
x_train = np.array(all_images)

# Get layer output
from keras import backend as K
get_last_frozen_layer_output = K.function([mbnetv2_conv.layers[0].input],
                                          [mbnetv2_conv.layers[152].output])
last_frozen_layer_output = get_last_frozen_layer_output([x_train])[0]

# Compile the model
from keras.optimizers import SGD
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['acc'])

# how to train from a specific layer and what should y_train be?
model.fit(last_frozen_layer_output, y_train, batch_size=2, epochs=10)
Yes, you can, in two different ways.
The hard way has you build two new models: one with all your frozen layers, one with all your trainable layers. Add a Flatten() layer to the frozen-layers-only model, and copy the weights from mobilenet v2 layer by layer to populate it. Then run your input images through the frozen-layers-only model, saving the output to disk in CSV or pickle form. That output is now the input for your trainable-layers model, which you train with model.fit() as you did above. Save the weights when you're done training. Finally, rebuild the original model with both sets of layers, load the weights into each layer, and save the whole thing. You're done!
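A minimal sketch of that pipeline. Instead of copying weights layer by layer, it slices the pretrained graph with Model(inputs, outputs), which yields the same frozen feature extractor; x_train and y_train are assumed from the question, and the feature file name is a placeholder:

import numpy as np
from keras.models import Model, Sequential
from keras.layers import Flatten, Dense, Dropout
from keras.applications.mobilenet_v2 import MobileNetV2

mbnetv2_conv = MobileNetV2(weights='imagenet', include_top=False,
                           input_shape=(224, 224, 3))

# Frozen feature extractor: everything up to the last frozen layer
frozen_model = Model(inputs=mbnetv2_conv.input,
                     outputs=mbnetv2_conv.layers[152].output)
features = frozen_model.predict(x_train)      # shape (?, 7, 7, 1280)
np.save('frozen_features.npy', features)      # precompute once, reuse every epoch

# Trainable head: consumes the precomputed features
head = Sequential([
    Flatten(input_shape=features.shape[1:]),
    Dense(16, activation='relu'),
    Dropout(0.5),
    Dense(3, activation='softmax'),
])
head.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['acc'])
head.fit(features, y_train, batch_size=2, epochs=10)

This also answers the y_train question: y_train is the array of (one-hot) labels for your images, and model.fit compares the head's softmax output against it to compute the loss on every batch.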
However, the easier way is to save the weights of your model separately from the architecture with:
model.save_weights(filename)
then modify the layer.trainable property of the layers in MobileNetV2 before you add it into a new empty model:
mbnetv2_conv = MobileNetV2(weights='imagenet', include_top=False, input_shape=(image_size, image_size, 3))
for layer in mbnetv2_conv.layers[:153]:
    layer.trainable = False
model = models.Sequential()
model.add(mbnetv2_conv)
then reload the weights with
model.load_weights(filename)
This lets you adjust which layers in your mbnetv2_conv model you will be training on the fly, and then just call model.fit() to continue training.
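One caveat worth noting: in Keras, changes to layer.trainable only take effect after the model is compiled again. A sketch of how the resumed run might look, assuming the head layers from the original model are re-added so the saved weights line up:

model.add(layers.Flatten())
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(3, activation='softmax'))
model.load_weights(filename)                        # restore all saved weights
model.compile(loss='categorical_crossentropy',      # recompile so the new
              optimizer='sgd', metrics=['acc'])     # trainable flags take effect
model.fit(x_train, y_train, batch_size=2, epochs=10)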
