Loading weights into Keras LSTM not working - keras

I am trying to load the weights from a Keras 1.0 model into a Keras 2.0 model I created. I am sure the model architecture is exactly the same. The issue I am having is that the load_weights() function is not loading all the weights.
When I print the weights to a text file from the original model (loaded via load_model) and from the new model after load_weights(), the latter is missing many entries, and the values that are present are actually different. This also shows up when making predictions, as the accuracy is lower.
This problem only occurs in my LSTM layers. The Embedding layer is fine and the Dense layer is also fine.
Any thoughts? I cannot use load_model(), as the original model was saved in Keras 1.0 and I need to use Keras 2.0.
EDIT:
I should note that I think the issue is the internal states not being loaded. Let me explain. When I use get_weights() on each layer and print it to the terminal or a file, the original model outputs a much larger matrix.
After using load_weights() and then get_weights(), the printed weight matrix is missing many elements. I'm thinking it's the internal states.
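Here is roughly how I am dumping the per-layer weights for comparison; the layer sizes and weight file name below are placeholders, not my real model:

from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense

# Stand-in for the architecture (Embedding -> LSTM -> Dense)
def build_model():
    model = Sequential()
    model.add(Embedding(input_dim=5000, output_dim=64))
    model.add(LSTM(128))
    model.add(Dense(1, activation="sigmoid"))
    return model

new_model = build_model()
new_model.load_weights("keras1_weights.h5")  # placeholder file name

# A missing or reshaped LSTM array shows up immediately when this dump
# is compared against the same dump from the original model.
for layer in new_model.layers:
    print(layer.name, [w.shape for w in layer.get_weights()])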

The problem was that parameters for a compiled graph were saved along with the weights. I think it's safe to just port over the weights and continue training to let the model catch up (maybe 1-2 epochs) if you can.
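A rough sketch of that fix, assuming old_model is the Keras 1.0 model (loaded via load_model), new_model is the Keras 2.0 rebuild with matching per-layer shapes, and x_train/y_train are your training data:

# Port the weights layer by layer, then fine-tune briefly to catch up.
for old_layer, new_layer in zip(old_model.layers, new_model.layers):
    new_layer.set_weights(old_layer.get_weights())

new_model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
new_model.fit(x_train, y_train, epochs=2)  # 1-2 epochs to catch up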
Good luck!

Related

PyTorch Lightning: load model from pre-trained; values of weights remain the same

I have a pre-trained model; by changing the final fc layers, I have created a model for a downstream task. Now I want to load the pre-trained weights into it. I tried self.model.load_from_checkpoint(self.pretrained_model_path). But when I print the weight values from the model's layers, they are exactly the same, which indicates the weights were not loaded/updated. Note that it does not give me any warning/error.
Edit:
self.model.backbone = self.model.load_from_checkpoint(self.pretrained_model_path).backbone
updates the parameters with the pre-trained weights. There might be a more optimal way, but I found this fix.
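The reason the original call did nothing: load_from_checkpoint is a classmethod that builds and returns a new module instance, so calling it without using the return value leaves the existing model untouched. A minimal sketch of the working pattern (the module and checkpoint path below are made up):

import torch.nn as nn
import pytorch_lightning as pl

class Net(pl.LightningModule):
    # Hypothetical minimal module: a backbone plus a downstream head.
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(8, 8)  # stand-in for the real backbone
        self.fc = nn.Linear(8, 2)        # new head for the downstream task

    def forward(self, x):
        return self.fc(self.backbone(x))

# load_from_checkpoint returns a NEW module; it never modifies in place.
pretrained = Net.load_from_checkpoint("pretrained.ckpt")  # made-up path
model = Net()
model.backbone = pretrained.backbone  # keep only the pre-trained backbone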

How to use a pre-trained object detection model in tensorflow?

How can I use the weights of a pre-trained network in my TensorFlow project?
I know some of the theory behind this, but not how to code it in TensorFlow.
As pointed out by @Matias Valdenegro in the comments, your first question does not make sense. For your second question, however, there are multiple ways to do it. The term you're searching for is Transfer Learning (TL). TL means transferring the "knowledge" (basically, just the weights) from a pre-trained model into your model. There are several types of TL.
1) You transfer all of the weights from a pre-trained model into your model and use that as a starting point to train your network.
This is done in a situation where you now have extra data to train your model but you don't want to start the training over from scratch. Therefore you just load the weights from your previous model and resume training.
2) You transfer only some of the weights from a pre-trained model into your new model.
This is done in a situation where you have a model trained to classify between, say, 5 classes of objects. Now you want to add or remove a class. You don't have to re-train the whole network from scratch if the new class you're adding has somewhat similar features to (an) existing class(es). Therefore, you build another model with the exact same architecture as your previous model, except for the fully-connected layers, where you now have a different output size. In this case, you'll want to load the weights of the convolutional layers from the previous model and freeze them, while re-training only the fully-connected layers.
To perform these in Tensorflow,
1) The first type of TL can be performed by creating a model with the exact same architecture as the previous model, simply restoring the weights with tf.train.Saver().restore(), and continuing the training.
2) The second type of TL can be performed by creating a model with the exact same architecture for the parts where you want to retain the weights, and then restoring only those named weights from the pre-trained checkpoint. You can use the parameter trainable=False to prevent TensorFlow from updating them (see the sketch after this list).
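A minimal sketch of the second approach in the TF 1.x style this answer describes; the architecture, variable scope names, and checkpoint path are all made up for illustration:

import tensorflow as tf  # TF 1.x API, matching the answer above

# Same conv architecture as the pre-trained model, but frozen
# (trainable=False); only the new head will be trained.
x = tf.placeholder(tf.float32, [None, 28, 28, 1])
conv = tf.layers.conv2d(x, 32, 3, name="conv1", trainable=False)
flat = tf.layers.flatten(conv)
logits = tf.layers.dense(flat, 10, name="new_head")  # new output size

# Restore only the variables that exist in the old checkpoint.
conv_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope="conv1")
saver = tf.train.Saver(var_list=conv_vars)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.restore(sess, "old_model.ckpt")  # hypothetical checkpoint path
    # ...continue training; only the "new_head" variables get updated.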
I hope this helps.

Keras predict returns different results despite loading model and loading weights

I trained and saved a Keras model in JSON format and saved the weights in H5 format. The only issue is that each time I run predict, I get different results. There is no randomness when loading the data: I simply load the data (without shuffling), load the Keras model and weights, and then run predict. I don't run the compile command, since it's not needed. Something I was wondering is whether the dropout layers I added to the model contribute to this, but I read that they are automatically disabled during prediction/evaluation. Any ideas on why this might be happening?
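For reference, here is roughly the load path I am using (file names are placeholders):

from keras.models import model_from_json

with open("model.json") as f:
    model = model_from_json(f.read())
model.load_weights("weights.h5")

preds = model.predict(x_test)  # no compile() needed for predict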

How to use a NN in a keras generator?

I am setting up fit_generator to train a DNN with Keras, but I don't know how to use a CNN inside this generator.
Basically, I have a pre-trained image generator built from fully-convolutional networks (let's call it GEN-NET). Now I want to use this fully-convolutional network in my fit_generator to generate an unlimited number of images to train another classifier (called CLASS-NET) in Keras. But it always crashes my training, and the error message is:
ValueError: Tensor Tensor("decoder/transform_output/mul:0", shape=(?, 128, 128, 1), dtype=float32) is not an element of this graph.
This "decoder/transform_output/mul:0" is the output of my CNN GEN-NET.
So my question is: can I use the CNN-based GEN-NET in my fit_generator to train CLASS-NET, or is this not permitted in Keras?
Keras does not really like running two separate models in a single session. You could use K.clear_session() after using the model, but this would produce a lot of overhead!
The best way to do this, IMHO, is to pre-generate these images and then load them using a generator, basically splitting your program into two separate programs (see the sketch below).
Otherwise, if you are using TensorFlow as the backend, there might be a way to do it by switching the default graph on the tf.Session. You could Google that, but I would not recommend it! :)
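A rough sketch of that two-program split; every file name, shape, and label array below is an assumption for illustration:

import numpy as np
from keras.models import load_model

# Program 1: run GEN-NET once and dump its output to disk.
gen_net = load_model("gen_net.h5")
noise = np.random.normal(size=(10000, 100))
images = gen_net.predict(noise, batch_size=64)
np.save("generated_images.npy", images)

# Program 2: feed the saved images to CLASS-NET from a plain generator.
def image_generator(path, labels, batch_size=32):
    data = np.load(path)
    while True:
        idx = np.random.randint(0, len(data), size=batch_size)
        yield data[idx], labels[idx]

# class_net.fit_generator(image_generator("generated_images.npy", y_train),
#                         steps_per_epoch=len(y_train) // 32, epochs=10)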
Seems like you might have things a bit mixed up! The CNN (convolutional neural network) needs to be trained on your data, unless you're using a pretrained network for predictions. If you're going to train the CNN, you can do that with either the fit() or the fit_generator() function. Use fit() if you're feeding data directly, and use fit_generator() if your data comes from Image Data Generators. If you've loaded a pre-trained model/weights only to make predictions, you don't need any fit function, since no training needs to be done.
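To make the two paths concrete, a small sketch (the model, directory, and image size below are hypothetical):

from keras.preprocessing.image import ImageDataGenerator

# 1) Data already in memory as arrays:
# class_net.fit(x_train, y_train, batch_size=32, epochs=5)

# 2) Data streamed from disk through an ImageDataGenerator:
datagen = ImageDataGenerator(rescale=1. / 255)
flow = datagen.flow_from_directory("data/train", target_size=(128, 128),
                                   batch_size=32, class_mode="binary")
# class_net.fit_generator(flow, steps_per_epoch=len(flow), epochs=5)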

Replicating TF model in Keras from pbtxt and ckpts

I'm not sure if this is actually possible with the information I have, but please let me know if that's the case.
From a previously trained Tensorflow model I have the following files:
graph.pbtxt, checkpoint, model.ckpt-10000.data-00000-of-00001, model.ckpt-10000.index, and model.ckpt-10000.meta
I was told that the input to this model was a Dense layer of size 5000 and the output was a Dense sigmoid binary classification layer, but I don't know how many layers were in between or what sizes they were. (I'm also not 100% positive that the input size is correct.)
From this information and associated files, is there a way to replicate the TF model with trained weights into a Keras functional model?
(The idea was that this small dense network was added onto the last FC layer of VGG-16, so that'll be my end goal.)
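The closest I've gotten is listing what the checkpoint actually stores, since the variable names and shapes should reveal the hidden layers that need rebuilding ("dense/kernel" below is just a hypothetical name):

import tensorflow as tf

ckpt = "model.ckpt-10000"  # prefix shared by the .index/.data files above
for name, shape in tf.train.list_variables(ckpt):
    print(name, shape)

# Each array can then be loaded and pushed into the matching Keras layer.
kernel = tf.train.load_variable(ckpt, "dense/kernel")
# keras_layer.set_weights([kernel, bias])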
