How to use a NN in a keras generator? - keras

I am setting up fit_generator to train a DNN in Keras, but I don't know how to use a CNN inside this generator.
Basically, I have a pre-trained image generator built as a fully convolutional network (let's call it GEN-NET). Now I want to use this fully convolutional network in my fit_generator to generate an unlimited number of images to train another classifier (call it CLASS-NET) in Keras. But it always crashes my training, and the error message is:
ValueError: Tensor Tensor("decoder/transform_output/mul:0", shape=(?, 128, 128, 1), dtype=float32) is not an element of this graph.
This "decoder/transform_output/mul:0" is the output of my CNN GEN-NET.
So my question is: can I use the CNN-based GEN-NET in my fit_generator to train CLASS-NET, or is this not permitted in Keras?

Keras does not really like running two separate models in a single session. You could call K.clear_session() after using the model, but this would produce a lot of overhead!
The best way to do this, IMHO, is to pre-generate the images and then load them using a generator, basically splitting your program into two separate programs.
Otherwise, if you are using TensorFlow as the back-end, there might be a way to do it by switching the default graph on the tf.Session. You could Google that, but I would not recommend it! :)
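For concreteness, here is a minimal sketch of that two-program split. The file names, latent-input shape, and placeholder labels are all assumptions standing in for your GEN-NET and CLASS-NET:

```python
import numpy as np
from keras.models import load_model

# --- Program 1 (run as its own script): pre-generate images with GEN-NET ---
gen_net = load_model("gen_net.h5")            # hypothetical saved generator
latent = np.random.normal(size=(10000, 100))  # assumed latent-input shape
images = gen_net.predict(latent, batch_size=64)
np.save("generated_images.npy", images)

# --- Program 2 (separate script/session): train CLASS-NET from the files ---
def image_generator(path, batch_size=32):
    data = np.load(path)
    while True:  # Keras generators are expected to loop forever
        idx = np.random.randint(0, len(data), batch_size)
        labels = np.zeros(batch_size)  # placeholder; supply your real targets
        yield data[idx], labels

class_net = load_model("class_net.h5")        # hypothetical classifier
class_net.fit_generator(image_generator("generated_images.npy"),
                        steps_per_epoch=100, epochs=5)
```

Because each script owns its own session and graph, the "not an element of this graph" error never comes up.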

Seems like you might have things a bit mixed up! The CNN (convolutional neural network) needs to be trained on your data, unless you're using a pretrained network purely for predictions. If you're going to train the CNN, you can do that with either the fit() or the fit_generator() function: use fit() if you're feeding data directly, and use fit_generator() if your data is handled by an ImageDataGenerator. If you've loaded a pre-trained model/weights only to make predictions, you don't need any fit function at all, since no training needs to be done.
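As a hedged illustration of those two entry points (the toy model and random data below are stand-ins, not anything from the question):

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Conv2D, Flatten, Dense
from keras.preprocessing.image import ImageDataGenerator

model = Sequential([
    Conv2D(16, 3, activation="relu", input_shape=(128, 128, 1)),
    Flatten(),
    Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")

x = np.random.rand(32, 128, 128, 1)            # stand-in images
y = np.eye(10)[np.random.randint(0, 10, 32)]   # stand-in one-hot labels

# Option 1: fit() with in-memory arrays
model.fit(x, y, epochs=1)

# Option 2: fit_generator() with an ImageDataGenerator
datagen = ImageDataGenerator(rescale=1.0 / 255)
model.fit_generator(datagen.flow(x, y, batch_size=8),
                    steps_per_epoch=4, epochs=1)
```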

Related

CNN with CTC loss

I want to extract features using a pretrained CNN model (ResNet50, VGG, etc.) and use the features with a CTC loss function.
I want to build it as a text recognition model.
Any ideas on how I can achieve this?
I'm not sure if you are looking to finetune the pretrained models or to use them for feature extraction. To do the latter, freeze the pretrained model's weights (in PyTorch, set requires_grad = False on its parameters; calling .eval() on the model additionally disables dropout and batch-norm updates) and feed the outputs of its last layer to your new output head. See the PyTorch tutorial here for a more in-depth guide.
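A minimal PyTorch sketch of that setup, with a frozen ResNet50 backbone feeding a per-timestep head trained with CTC. The input size, class count, and the choice to treat image width as the time axis are illustrative assumptions:

```python
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet50(pretrained=True)
for p in backbone.parameters():
    p.requires_grad = False                    # freeze pretrained weights
backbone = nn.Sequential(*list(backbone.children())[:-2])  # drop pool + fc

num_classes = 37                               # assumed: 36 symbols + blank
head = nn.Linear(2048, num_classes)            # per-timestep classifier
ctc = nn.CTCLoss(blank=0)

images = torch.randn(4, 3, 32, 256)            # dummy text-line crops (N,C,H,W)
feats = backbone(images)                       # (4, 2048, 1, 8)
feats = feats.squeeze(2).permute(2, 0, 1)      # (T=8, N=4, 2048): width = time
log_probs = head(feats).log_softmax(-1)        # CTC expects log-probabilities

targets = torch.randint(1, num_classes, (4, 5))        # dummy label sequences
input_lengths = torch.full((4,), 8, dtype=torch.long)
target_lengths = torch.full((4,), 5, dtype=torch.long)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
```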

Why cleverhans pytorch tutorial using log_softmax instead of logits as output

When generating adversarial examples, one typically uses the logits as the output of the neural network and then trains the network with cross-entropy.
However, I found that the cleverhans tutorial uses log_softmax, then converts the PyTorch model to a TensorFlow model, and finally trains the model.
https://github.com/tensorflow/cleverhans/blob/master/cleverhans_tutorials/mnist_tutorial_pytorch.py#L65
I am wondering if anyone has an idea about whether using logits instead of log_softmax makes any difference?
As you said, when we get logits from a neural network, we train it using CrossEntropyLoss. An alternative is to compute the log_softmax and then train the network by minimizing the negative log-likelihood (NLLLoss).
Both approaches are essentially the same when you are training a network for classification tasks. However, if you have a different objective function, you may find one of the two techniques particularly useful in your scenario.
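A quick numerical check of that equivalence (PyTorch; the shapes are arbitrary):

```python
import torch
import torch.nn as nn

logits = torch.randn(8, 10)              # batch of 8, 10 classes
targets = torch.randint(0, 10, (8,))

ce = nn.CrossEntropyLoss()(logits, targets)
nll = nn.NLLLoss()(torch.log_softmax(logits, dim=1), targets)

print(torch.allclose(ce, nll))           # True: CE == log_softmax + NLL
```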
Reference
CrossEntropyLoss
NLLLoss

Autoencoder with Transfer Learning?

Is there a way I can train an autoencoder model using a pre-trained model like ResNet?
I'm trying to train an autoencoder model with input as an image and output as a masked version of that image.
Is it possible to use weights from a pretrained model here?
Yes! You can definitely do transfer learning using a pre-trained network, e.g. ResNet50, as the encoder in an autoencoder. For reference, check out the following link: https://github.com/hsinyilin19/ResNetVAE
From what I know, there is no proven method for doing this. I'd train the autoencoder from scratch.
In theory, if you find a pre-trained CNN that does not use max pooling, you can use its weights and architecture for the encoder stage of your autoencoder. You can also extract features from a pre-trained model and concatenate/merge them into your autoencoder. But the added value is not clear, and the architecture might become overly complex.
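If you do want to try it, here is a hedged Keras sketch with ResNet50 as a frozen encoder and a small upsampling decoder producing a one-channel mask; the decoder architecture is purely an illustrative choice:

```python
from keras.applications import ResNet50
from keras.layers import Conv2D, UpSampling2D, Input
from keras.models import Model

inp = Input(shape=(224, 224, 3))
encoder = ResNet50(weights="imagenet", include_top=False, input_tensor=inp)
encoder.trainable = False                     # keep the transferred weights fixed

x = encoder.output                            # (7, 7, 2048) feature map
for filters in (256, 128, 64, 32, 16):        # 7 -> 224 via five 2x upsamplings
    x = UpSampling2D()(x)
    x = Conv2D(filters, 3, padding="same", activation="relu")(x)
out = Conv2D(1, 3, padding="same", activation="sigmoid")(x)  # predicted mask

autoencoder = Model(inp, out)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
```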

loading weights keras LSTM not working

I am trying to load the weights from a Keras 1.0 model into a Keras 2.0 model I created. I am sure the model architecture is exactly the same. The issue I am having is that the load_weights() function is not loading all the weights.
When I print the weights to a text file from the original model (loaded via load_model) and from the new model after load_weights(), the latter is missing many entries and the values are actually different. This also shows up when making predictions, as the accuracy is lower.
This problem only occurs in my LSTM layers. The embedding layer is fine and the Dense layer is also fine.
Any thoughts? I cannot use load_model(), as the original model was saved in Keras 1.0 and I need to use Keras 2.0.
EDIT:
I should note that I think the issue is the internal states not being loaded. Let me explain. When I use get_weights() on each layer and print it to the terminal or a file, the original model outputs a much larger matrix.
After using load_weights() and then get_weights(), the printed weight matrix is missing many elements. I'm thinking it's the internal states.
The problem was that parameters for a compiled graph had been saved along with the weights. I think it's safe to just port over the weights and continue training to let the model catch up (maybe 1-2 epochs) if you can.
Good luck!
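If you hit something similar, a diagnostic sketch like the following can show which per-layer weight arrays actually arrived after load_weights(); the architecture and file name here are illustrative, not taken from the question:

```python
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense

def build_model():                      # stand-in Keras 2 reconstruction
    m = Sequential()
    m.add(Embedding(10000, 128, input_length=50))
    m.add(LSTM(64))
    m.add(Dense(1, activation="sigmoid"))
    return m

model = build_model()
model.load_weights("keras1_weights.h5")   # hypothetical ported weights file

for layer in model.layers:
    shapes = [w.shape for w in layer.get_weights()]
    # In Keras 2 an LSTM reports kernel, recurrent kernel and bias;
    # missing or mismatched entries here point at a failed port.
    print(layer.name, shapes)
```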

Feed an unseen example to a pre-trained model made in Keras

I've implemented a neural network using Keras. Once trained and tested for final test accuracy, using a matrix with a bunch of rows containing features plus the corresponding labels, I have a model that I should be able to use for prediction.
How can I feed a single unseen example, i.e. a feature vector, to the model to obtain a class prediction?
I've looked at their documentation here but could not find a method for it.
What you want is the predict method. It takes a batch of input samples and produces predictions, which are the outputs computed by your network. To feed a single example, you can just wrap it in a numpy ndarray with a leading batch dimension.
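A minimal sketch (the tiny model and the feature values are stand-ins for your trained network and data):

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([Dense(3, activation="softmax", input_shape=(4,))])
model.compile(optimizer="adam", loss="categorical_crossentropy")

feature_vector = np.array([5.1, 3.5, 1.4, 0.2])   # one unseen example
batch = np.expand_dims(feature_vector, axis=0)    # predict() expects a batch
probs = model.predict(batch)                      # shape (1, 3)
predicted_class = int(np.argmax(probs, axis=1)[0])
print(predicted_class)
```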
