I am using model.train_on_batch in Keras so that I can handle different batches of input data differently; essentially, I cannot use model.fit.
But I need to store histograms and images of activations and weights in TensorBoard. Is there a way to do this?
You can do it manually by calling tf.summary.histogram and passing each layer's weights, as below:
with summary_writer.as_default():
    for layer in self.model.layers:
        for weight in layer.weights:
            # include the weight's own name so layers with several weight
            # tensors don't overwrite each other's histograms
            tf.summary.histogram('weights/{}/{}'.format(layer.name, weight.name),
                                 weight, step=your_step)
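Since the question also asks about images, here is a minimal sketch, assuming TF 2.x, of where summary_writer comes from and how to log images alongside the histograms. The names batches and 'logs/train' are placeholders for your own batch iterator and log directory:

import tensorflow as tf

summary_writer = tf.summary.create_file_writer('logs/train')

for step, (x_batch, y_batch) in enumerate(batches):
    model.train_on_batch(x_batch, y_batch)
    if step % 100 == 0:
        with summary_writer.as_default():
            # histograms as above; images expect a [batch, h, w, c] tensor
            # with float values in [0, 1] or uint8 values in [0, 255]
            tf.summary.image('inputs', x_batch, step=step, max_outputs=3)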
I've been trying to reshape to NCHW format using the tf.keras.layers.Reshape function, but in the final XML file there is a Transpose layer after the reshape to (1, 6, 26, 26), and the final shape comes out as (1, 26, 6, 26).
I'm not sure why the Transpose layer is there; I want the shape to be (1, 6, 26, 26).
What's the reason?
First and foremost, you need to understand the topology of your model's network. Then you can reshape and batch.
You can use the model optimizer to manipulate your input size.
This is how you can do it: Model Optimizer Advanced Reshape, Batching
I am setting up a fit_generator to train a DNN in Keras, but I don't know how to use a CNN inside this generator.
Basically, I have a pre-trained image generator built from fully-convolutional networks (call it GEN-NET). Now I want to use this fully-convolutional network in my fit_generator to generate an unlimited number of images to train another classifier (called CLASS-NET) in Keras. But it always crashes my training, and the error message is:
ValueError: Tensor Tensor("decoder/transform_output/mul:0", shape=(?, 128, 128, 1), dtype=float32) is not an element of this graph.
This "decoder/transform_output/mul:0" is the output of my CNN GEN-NET.
So my question is: can I use the CNN-based GEN-NET in my fit_generator to train CLASS-NET, or is that not permitted in Keras?
Keras does not really like running two separate models in a single session. You could call K.clear_session() after using the model, but this would produce a lot of overhead!
The best way to do this, IMHO, is to pre-generate these images and then load them using a generator, basically splitting your program into two separate programs.
Otherwise, if you are using TensorFlow as the back-end, there might be a way to do it by switching the default graph on the tf.Session. You could Google that, but I would not recommend it! :)
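A minimal sketch of the pre-generation step, assuming GEN-NET maps some input array to images; gen_net and latent_dim are placeholders for your own model and its input size:

import numpy as np

# generate a fixed pool of images once, then never touch GEN-NET again
inputs = np.random.normal(size=(10000, latent_dim))
images = gen_net.predict(inputs, batch_size=128)
np.save('generated_images.npy', images)

A second script can then load generated_images.npy and feed it to CLASS-NET without GEN-NET's graph ever entering the session.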
Seems like you might have things a bit mixed up! The CNN (convolutional neural network) needs to be trained on your data, unless you're using a pre-trained network for predictions. If you're going to train the CNN, you can do that with either the fit() or the fit_generator() function: use fit() if you're feeding data directly, and fit_generator() if your data is handled by an ImageDataGenerator. If you've loaded a pre-trained model/weights only to make predictions, you don't need to call any fit function, since no training needs to be done.
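For example, a rough sketch of feeding CLASS-NET from pre-generated images saved to a directory; the path, sizes, and class_net are placeholder assumptions, and grayscale matches the (?, 128, 128, 1) shape in the error:

from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rescale=1. / 255)
train_gen = datagen.flow_from_directory('generated/', target_size=(128, 128),
                                        color_mode='grayscale',
                                        class_mode='categorical')
class_net.fit_generator(train_gen, steps_per_epoch=100, epochs=10)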
I trained a LeNet architecture on a first dataset. I want to train a VGG architecture on another dataset, initializing the weights of VGG with the weights obtained from LeNet.
All initialization functions in Keras are predefined, and I do not see how to customize them. For example:
keras.initializers.Zeros()
Any idea how I can set the weights?
https://keras.io/layers/about-keras-layers/
According to the Keras documentation above:
layer.set_weights(weights): sets the weights of the layer from a list of NumPy arrays
layer.get_weights(): returns the weights of the layer as a list of NumPy arrays
So, you can do this as follows:
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(32))
# ... build the rest of the model's layers ...

# access any nth layer by calling model.layers[n]
model.layers[0].set_weights(your_weights_here)
Of course, you'll need to make sure the weights you set on each layer match the shapes that layer expects.
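For the LeNet-to-VGG case, a rough sketch, assuming lenet and vgg are already-built models; which layers are actually shape-compatible depends entirely on the two architectures:

# copy weights layer by layer wherever the shapes line up
for src, dst in zip(lenet.layers, vgg.layers):
    src_w, dst_w = src.get_weights(), dst.get_weights()
    if (len(src_w) == len(dst_w)
            and all(a.shape == b.shape for a, b in zip(src_w, dst_w))):
        dst.set_weights(src_w)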
Is there any way I can use the ImageNet weights for ResNet50 in my project, which has images of shape (224, 224, 4)? The images have R, G, B, and Y channels.
At the moment, I am simply using:
model = ResNet50(include_top=True, weights=None, input_tensor=None, input_shape=input_shape, pooling=None, classes=num_classes)
Now, if I need to use the ImageNet weights, I always have to set the number of classes to 1000. I tried doing that, then popping the last layer and adding my own Dense(num_classes) layer (sketched below). However, the number of channels is still an issue.
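Here is roughly what that attempt looks like; num_classes is a placeholder, and the comment marks where it breaks:

from keras.applications.resnet50 import ResNet50
from keras.layers import Dense
from keras.models import Model

# load the 1000-class ImageNet model, drop its head, attach a new one;
# the first conv layer still expects 3 channels, which is where the
# (224, 224, 4) input fails
base = ResNet50(include_top=True, weights='imagenet')
features = base.layers[-2].output   # output just before the 1000-way head
outputs = Dense(num_classes, activation='softmax')(features)
model = Model(base.input, outputs)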
Can anyone suggest a way to accommodate 4 channels in the model while using the ImageNet weights?
I am working on a project in which I need to edit individual weights and biases.
Is there any way to actually get access to a layer's weights and biases (from tf.layers.dense()) so I can edit them manually?
So far I have created my own model and stored the weights and biases outside it, like so:
hidden_layer, output_layer, population = [], [], []
for _ in range(population_size):
    hidden_layer.append(tf.Variable(tf.truncated_normal([11, 20])))
    output_layer.append(tf.Variable(tf.truncated_normal([20, 9])))
    population.append([hidden_layer, output_layer])
I am then trying to feed population into the model using feed_dict. It's turning out to be a real hell, because I cannot feed them into the model: the shapes of the Variables are not the same.
Is there any native support for getting the weights from the dense layer?
From the Keras docs:
All Keras layers have a number of methods in common:
layer.get_weights(): returns the weights of the layer as a list of Numpy arrays.
layer.set_weights(weights): sets the weights of the layer from a list of Numpy arrays (with the same shapes as the output of get_weights).
You can easily access all layers inside your model with yourmodel.layers.
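A minimal sketch of reading, editing, and writing back a layer's parameters, using the [11, 20] and [20, 9] shapes from the question (the model itself is a placeholder):

from keras.models import Sequential
from keras.layers import Dense

model = Sequential([Dense(20, input_shape=(11,)), Dense(9)])
weights, biases = model.layers[0].get_weights()  # kernel (11, 20), bias (20,)
weights[0, 0] = 0.5                              # edit an individual weight
biases[3] = 0.0                                  # edit an individual bias
model.layers[0].set_weights([weights, biases])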