Graph disconnected: cannot obtain value for tensor Tensor

I have to train a GAN network with Generator and Discriminator. My Generator Network is as below.
def Generator(image_shape=(512,512,3)):
    inputs = Input(image_shape)
    # 5 convolution layers
    # 5 deconvolution layers along with concatenation
    # output shape is (512,512,3)
    model = Model(inputs=inputs, outputs=outputs, name='Generator')
    return model, outputs
My Discriminator network is as below. The first step in the Discriminator network is to concatenate the input of the Discriminator with the output of the Generator.
def Discriminator(Generator_output, image_shape=(512,512,3)):
    inputs = Input(image_shape)
    concatenated_input = concatenate([Generator_output, inputs], axis=-1)
    # Now start applying convolution layers on concatenated_input
    # Deconvolution layers
    return Model(inputs=inputs, outputs=outputs, name='Discriminator')
Instantiating the architectures:
G, Generator_output = Generator(image_shape=(512,512,3))
G.summary()
D = Discriminator(Generator_output, image_shape=(512,512,3))
D.summary()
My problem is that when I pass concatenated_input to the convolution layers, I get the following error.
Graph disconnected: cannot obtain value for tensor Tensor("input_1:0", shape=(?, 512, 512, 3), dtype=float32) at layer "input_1". The following previous layers were accessed without issue: []
If I remove the concatenation layer it works perfectly, so why does it fail after the concatenation layer, even though inputs and Generator_output have the same shape, i.e. (512,512,3)?

The key insight that will help you here is that Models are just like layers in Keras, but self-contained. So to connect one model's output to another, you need to say that the second model receives an input of matching shape, rather than directly passing that tensor:
def Discriminator(gen_output_shape, image_shape=(512,512,3)):
    inputs = Input(image_shape)
    gen_output = Input(gen_output_shape)
    concatenated_input = concatenate([gen_output, inputs], axis=-1)
    # Now start applying convolution layers on concatenated_input
    # Deconvolution layers
    return Model(inputs=[inputs, gen_output], outputs=outputs, name='Discriminator')
And then you can use it like a layer:
G = Generator(image_shape=(512,512,3))   # assuming Generator now returns just the model
D = Discriminator((512,512,3), image_shape=(512,512,3))
generator_input = Input((512,512,3))
some_other_image_input = Input((512,512,3))
generated_image = G(generator_input)                                 # a model is used like a layer
discriminator_output = D([some_other_image_input, generated_image])  # so the output of G is connected to the input of D
D.summary()
gan = Model(inputs=[all,your,inputs], outputs=[outputs,for,training])
# you can still use G and D like separate models, save them, train them etc.
To train them together you can create another Model that has all the required inputs and calls the generator / discriminator. Think of it as a lock-and-key idea: every model has some inputs, and you can use models like layers in another Model as long as you provide the correct inputs. A concrete sketch of such a combined model is shown below.
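As a rough sketch (the compile settings here are an assumption about one common GAN training pattern, not part of the original answer), the tensors from the snippet above could be wrapped up like this:

gan = Model(inputs=[generator_input, some_other_image_input],
            outputs=discriminator_output, name='GAN')
# A common pattern: freeze the discriminator while the generator is trained
# through the combined model, and train D separately on real/fake batches.
D.trainable = False
gan.compile(optimizer='adam', loss='binary_crossentropy')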

Related

ValueError: Layer conv2d_41 was called with an input that isn't a symbolic tensor. All inputs to the layer should be tensors

I am trying transfer learning with a custom input to the backbone. (I cannot do transfer learning the usual way because my input shape is N*N*8, so I need to add a small network_1 to get down to N*N*3.)
model_1
|
|
add model_2
|
add some layer
My code:
model_1.add(model_2)
model_1 is my small network; model_2 is MobileNet, VGG16, DenseNet, etc.:
model_1 = Sequential()
model_1.add(InputLayer(input_shape=(size, size, F), name="InputLayer"))
model_1.add(Convolution2D(3, 128, padding = 'same'))
from keras.applications.densenet import DenseNet169
model_2=DenseNet169(weights='imagenet',include_top=False)
model_2.layers.pop(0) # remove input_layer of model_2
model_1.add(model_2) # output model_1 is input model_2?
model_1 = GlobalAveragePooling2D()(model_1)
model_1 = Dropout(0.2)(model_1)
model_1 = Dense(256*256, activation='softmax')(model_1)
model_1 = Reshape(256, 256)(model_1)
I got errors:
ValueError: Layer global_average_pooling2d_3 was called with an input that isn't a symbolic tensor. Received type: <class 'keras.engine.sequential.Sequential'>. Full input: [<keras.engine.sequential.Sequential object at 0x7f74f6621d68>]. All inputs to the layer should be tensors.
What is wrong in my code?
GlobalAveragePooling2D is a layer that performs an operation on a tensor, i.e. a multi-dimensional array. Passing it a whole model like DenseNet therefore throws an error, because the layer has no idea what it is looking at: a Model object is not a tensor.
To achieve what I think you're trying to achieve, call DenseNet on your tensor and pass the output of DenseNet into the rest of the model you're creating, instead of passing the model object itself. A sketch of this is below. Good luck!
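A minimal sketch of that idea using the functional API (the kernel size of the adapter convolution and the placeholder values for size and F are assumptions, not from the original question):

from keras.layers import Input, Convolution2D, GlobalAveragePooling2D, Dropout, Dense, Reshape
from keras.models import Model
from keras.applications.densenet import DenseNet169

size, F = 224, 8                                       # placeholder values; the question's input is N*N*8

inputs = Input(shape=(size, size, F))
x = Convolution2D(3, (3, 3), padding='same')(inputs)   # small adapter down to 3 channels
backbone = DenseNet169(weights='imagenet', include_top=False)
x = backbone(x)                                        # pass the tensor through DenseNet, not the model object
x = GlobalAveragePooling2D()(x)
x = Dropout(0.2)(x)
x = Dense(256 * 256, activation='softmax')(x)
outputs = Reshape((256, 256))(x)

model = Model(inputs=inputs, outputs=outputs)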

how to enforce feature orthogonality in keras

I am new to Keras and Tensorflow. I want to add a penalty to my categorical cross-entropy loss function based on some of the outputs in the network. Specifically, I decompose the outputs of a fully connected layer into 8 partitions and want these partitions to be orthogonal. So I append the activations to a list and convert it into a stack using the Keras backend. Here is how I partitioned the activations:
features = []
outs = []
for i in range(8):
    x_sub = Lambda(lambda x: x[:, i*128:i*128+128])(x)
    features.append(x_sub)
# convert the batch of feature lists into an 8 x (128*batch_size) Keras tensor
outs.append(Lambda(lambda x: K.reshape(K.stack(x, axis=0), (8, -1)))(features))
net = Model(inputs=[net.input], outputs=outs)
And then I define the loss as follows:
def OrthLoss(features):
    W = K.l2_normalize(features, axis=1)
    diff = K.dot(W, K.transpose(W)) - K.eye(8)
    return K.mean(diff)
However, this does not seem to converge. Is this the right way to accomplish this? I first tried to enforce the orthogonality as a regularizer on the weights, but as far as I understand, Keras applies regularization to each layer separately, and I found no way to define a regularizer over multiple weight matrices.

How to correctly get layer weights from Conv2D in keras?

I have a Conv2D layer defined as:
Conv2D(96, kernel_size=(5, 5),
       activation='relu',
       input_shape=(image_rows, image_cols, 1),
       kernel_initializer=initializers.glorot_normal(seed),
       bias_initializer=initializers.glorot_uniform(seed),
       padding='same',
       name='conv_1')
This is the first layer in my network.
Input dimensions are 64 by 160, image is 1 channel.
I am trying to visualize weights from this convolutional layer but not sure how to get them.
Here is how I am doing this now:
1. Call
layer.get_weights()[0]
This returns an array of shape (5, 5, 1, 96); the 1 is there because the images are 1-channel.
2. Take the 5-by-5 filters with
layer.get_weights()[0][:,:,:,j][:,:,0]
Very ugly, but I am not sure how to simplify this; any comments are appreciated.
I am not sure about these 5-by-5 squares. Are they actually the filters?
If not, could anyone please tell me how to correctly grab the filters from the model?
I tried to display the weights like this, only the first 25 of them. I have the same question that you do: is this the filter or something else? It doesn't seem to be the same kind of filters that are derived from deep belief networks or stacked RBMs.
Here are the untrained visualized weights:
And here are the trained weights:
Strangely, there is no change after training! If you compare them, they are identical.
And here are the DBN/RBM filters, layer 1 on top and layer 2 on the bottom:
If I set kernel_initializer="ones" then I get filters that look good, but the net loss never decreases despite many trial-and-error changes:
Here is the code to display the 2D Conv Weights / Filters.
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Conv2D, Activation

ann = Sequential()
x = Conv2D(filters=64, kernel_size=(5,5), input_shape=(32,32,3))
ann.add(x)
ann.add(Activation("relu"))
...
# weights for the first input channel, all 64 filters
x1w = x.get_weights()[0][:,:,0,:]
for i in range(1,26):
    plt.subplot(5,5,i)
    plt.imshow(x1w[:,:,i], interpolation="nearest", cmap="gray")
plt.show()
ann.fit(Xtrain, ytrain_indicator, epochs=5, batch_size=32)
x1w = x.get_weights()[0][:,:,0,:]
for i in range(1,26):
    plt.subplot(5,5,i)
    plt.imshow(x1w[:,:,i], interpolation="nearest", cmap="gray")
plt.show()
---------------------------UPDATE------------------------
So I tried it again with a learning rate of 0.01 instead of 1e-6 and used the images normalized between 0 and 1 instead of 0 and 255 by dividing the images by 255.0. Now the convolution filters are changing and the output of the first convolutional filter looks like so:
You'll notice the trained filters have changed (though not by much) with a reasonable learning rate:
Here is image seven of the CIFAR-10 test set:
And here is the output of the first convolution layer:
And if I take the output of the last convolution layer (no dense layers in between) and feed it to an untrained classifier, the accuracy is similar to classifying the raw images; but if I train the convolution layers, the output of the last convolution layer increases the accuracy of the classifier (a random forest).
So I would conclude the convolution layers are indeed filters as well as weights.
In layer.get_weights()[0][:,:,:,:], the four dimensions of [:,:,:,:] are: the x position of the weight, the y position of the weight, the n-th input channel to the corresponding conv layer (coming from the previous layer; note that for the first conv layer this number is 1, because only one channel is fed into it), and the k-th filter (kernel) of that layer, respectively. So the array shape returned by layer.get_weights()[0] can be read as: only one input channel is fed to the layer, and 96 filters of size 5x5 are generated. If you want to reach one of the filters, say the one at index 6, you can type:
print(layer.get_weights()[0][:,:,:,6].squeeze())
However, if you need the filters of the 2nd conv layer, notice that if, for example, it has 32 input channels and 64 filters, then each of the 64 filters holds a 2D kernel for each of the 32 input channels. If you want one particular kernel, for example the filter at index 4 for the input channel at index 8, you should type:
print(layer.get_weights()[0][:,:,8,4].squeeze())
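A small sketch of this indexing (the two-layer model and the layer names here are made up purely for illustration):

from keras.models import Sequential
from keras.layers import Conv2D

model = Sequential([
    Conv2D(32, (5, 5), padding='same', input_shape=(64, 160, 1), name='conv_1'),
    Conv2D(64, (3, 3), padding='same', name='conv_2'),
])

w1 = model.get_layer('conv_1').get_weights()[0]
print(w1.shape)                         # (5, 5, 1, 32): 5x5 kernels, 1 input channel, 32 filters
print(w1[:, :, :, 6].squeeze().shape)   # (5, 5): the filter at index 6

w2 = model.get_layer('conv_2').get_weights()[0]
print(w2.shape)                         # (3, 3, 32, 64): 32 input channels, 64 filters
print(w2[:, :, 8, 4].shape)             # (3, 3): filter index 4, input channel index 8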

Dimensions not matching in keras LSTM model

I want to use an LSTM neural network with Keras to forecast groups of time series, and I am having trouble making the model match what I want. The dimensions of my data are:
input tensor: (data length, number of series to train, time steps to look back)
output tensor: (data length, number of series to forecast, time steps to look ahead)
Note: I want to keep the dimensions exactly like that, no
transposition.
A dummy data code that reproduces the problem is:
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, TimeDistributed, LSTM
epoch_number = 100
batch_size = 20
input_dim = 4
output_dim = 3
look_back = 24
look_ahead = 24
n = 100
trainX = np.random.rand(n, input_dim, look_back)
trainY = np.random.rand(n, output_dim, look_ahead)
print('test X:', trainX.shape)
print('test Y:', trainY.shape)
model = Sequential()
# Add the first LSTM layer (The intermediate layers need to pass the sequences to the next layer)
model.add(LSTM(10, batch_input_shape=(None, input_dim, look_back), return_sequences=True))
# Add the second LSTM layer (the dimensions are only needed in the first layer)
model.add(LSTM(10, return_sequences=True))
# the TimeDistributed object allows a 3D output
model.add(TimeDistributed(Dense(look_ahead)))
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])
model.fit(trainX, trainY, nb_epoch=epoch_number, batch_size=batch_size, verbose=1)
This throws:
Exception: Error when checking model target: expected
timedistributed_1 to have shape (None, 4, 24) but got array with shape
(100, 3, 24)
The problem seems to be when defining the TimeDistributed layer.
How do I define the TimeDistributed layer so that it compiles and trains?
The error message is a bit misleading in your case. The output node of your network is called timedistributed_1 because that's the last node in your Sequential model. What the error message is trying to tell you is that the output of this node does not match the target your model is fitting to, i.e. your labels trainY.
Your trainY has a shape of (n, output_dim, look_ahead), i.e. (100, 3, 24), but the network is producing an output of shape (batch_size, input_dim, look_ahead), i.e. (None, 4, 24). The problem in this case is that output_dim != input_dim. If your time dimension changes, you may need padding or a network node that removes said timestep.
I think the problem is that you expect output_dim (!= input_dim) at the output of TimeDistributed, while that's not possible. This dimension is what it considers the time dimension, and it is preserved.
The input should be at least 3D, and the dimension of index one will
be considered to be the temporal dimension.
The purpose of TimeDistributed is to apply the same layer to each time step. You can only end up with the same number of time steps as you started with.
If you really need to bring this dimension down from 4 to 3, I think you will need to either add more layers at the end, or use something different from TimeDistributed; a sketch of the first option follows.
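For illustration only (this is an assumption about one possible fix, not something stated in the original answer): since a Dense layer in Keras 2 acts on the last axis of a 3D tensor, you could swap the two non-batch axes, map 4 down to 3, and swap back:

from keras.layers import Permute, Dense

# After TimeDistributed(Dense(look_ahead)) the output shape is (None, 4, 24).
model.add(Permute((2, 1)))    # -> (None, 24, 4)
model.add(Dense(output_dim))  # Dense acts on the last axis -> (None, 24, 3)
model.add(Permute((2, 1)))    # -> (None, 3, 24), matching trainY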
PS: one hint towards finding this issue was that output_dim is never used when creating the model, it only appears in the validation data. While it's only a code smell (there might not be anything wrong with this observation), it's something worth checking.

What is an Embedding in Keras?

The Keras documentation isn't clear about what this actually is. I understand we can use it to compress the input feature space into a smaller one. But how is this done from a neural design perspective? Is it an autoencoder, an RBM?
As far as I know, the Embedding layer is a simple matrix multiplication that transforms words into their corresponding word embeddings.
The weights of the Embedding layer are of shape (vocabulary_size, embedding_dimension). For each training sample, the input consists of integers, which represent certain words. The integers are in the range of the vocabulary size. The Embedding layer transforms each integer i into the i-th row of the embedding weights matrix.
In order to quickly do this as a matrix multiplication, the input integers are not stored as a list of integers but as a one-hot matrix. Therefore the input shape is (nb_words, vocabulary_size) with one non-zero value per line. If you multiply this by the embedding weights, you get the output in the shape
(nb_words, vocab_size) x (vocab_size, embedding_dim) = (nb_words, embedding_dim)
So with a simple matrix multiplication you transform all the words in a sample into the corresponding word embeddings.
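A tiny sketch of that matrix-multiplication view (plain NumPy, with made-up numbers purely for illustration):

import numpy as np

vocab_size, embedding_dim = 5, 3
W = np.arange(vocab_size * embedding_dim).reshape(vocab_size, embedding_dim)  # embedding weights

words = [2, 0, 4]                        # a sample of 3 word indices
one_hot = np.eye(vocab_size)[words]      # shape (nb_words, vocab_size), one non-zero per row

print(one_hot @ W)                       # shape (nb_words, embedding_dim)
print(W[words])                          # identical: the multiplication just picks rows of W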
The Keras Embedding layer does not perform any matrix multiplication; it only:
1. creates a weight matrix of (vocabulary_size)x(embedding_dimension) dimensions
2. indexes this weight matrix
It is always useful to have a look at the source code to understand what a class does. In this case, we will have a look at the class Embedding which inherits from the base layer class called Layer.
(1) - Creating a weight matrix of (vocabulary_size)x(embedding_dimension) dimensions:
This occurs in the build function of Embedding:
def build(self, input_shape):
    self.embeddings = self.add_weight(
        shape=(self.input_dim, self.output_dim),
        initializer=self.embeddings_initializer,
        name='embeddings',
        regularizer=self.embeddings_regularizer,
        constraint=self.embeddings_constraint,
        dtype=self.dtype)
    self.built = True
If you have a look at the base class Layer you will see that the function add_weight above simply creates a matrix of trainable weights (in this case of (vocabulary_size)x(embedding_dimension) dimensions):
def add_weight(self,
               name,
               shape,
               dtype=None,
               initializer=None,
               regularizer=None,
               trainable=True,
               constraint=None):
    """Adds a weight variable to the layer.

    # Arguments
        name: String, the name for the weight variable.
        shape: The shape tuple of the weight.
        dtype: The dtype of the weight.
        initializer: An Initializer instance (callable).
        regularizer: An optional Regularizer instance.
        trainable: A boolean, whether the weight should
            be trained via backprop or not (assuming
            that the layer itself is also trainable).
        constraint: An optional Constraint instance.

    # Returns
        The created weight variable.
    """
    initializer = initializers.get(initializer)
    if dtype is None:
        dtype = K.floatx()
    weight = K.variable(initializer(shape),
                        dtype=dtype,
                        name=name,
                        constraint=constraint)
    if regularizer is not None:
        with K.name_scope('weight_regularizer'):
            self.add_loss(regularizer(weight))
    if trainable:
        self._trainable_weights.append(weight)
    else:
        self._non_trainable_weights.append(weight)
    return weight
(2) - Indexing this weight matrix
This occurs in the call function of Embedding:
def call(self, inputs):
    if K.dtype(inputs) != 'int32':
        inputs = K.cast(inputs, 'int32')
    out = K.gather(self.embeddings, inputs)
    return out
This function returns the output of the Embedding layer, which is K.gather(self.embeddings, inputs). What tf.keras.backend.gather does is index the weight matrix self.embeddings (see the build function above) according to inputs, which should be lists of positive integers.
Such lists can be obtained, for example, by passing your text/word inputs to the one_hot function of Keras, which encodes a text into a list of word indexes of size n (despite its name, this is NOT one-hot encoding; see also this example for more info: https://machinelearningmastery.com/use-word-embedding-layers-deep-learning-keras/).
Therefore, that's all. There is no matrix multiplication.
On the contrary, the Keras Embedding layer is useful precisely because it avoids performing a matrix multiplication and hence economizes on computational resources.
Otherwise, you could just use a Keras Dense layer (after one-hot encoding your input data) to get a matrix of trainable weights (of (vocabulary_size) x (embedding_dimension) dimensions) and then simply do the multiplication to get an output that is exactly the same as the output of the Embedding layer. A sketch of this equivalence is shown below.
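A minimal sketch of that equivalence (a small made-up vocabulary is assumed; this is an illustration, not code from the original answer):

import numpy as np
from keras.layers import Input, Embedding, Dense
from keras.models import Model

vocab_size, embedding_dim, seq_len = 10, 4, 3

# Embedding model: integer indices in, embedding vectors out (a lookup).
idx_in = Input(shape=(seq_len,), dtype='int32')
emb_layer = Embedding(vocab_size, embedding_dim)
emb_model = Model(idx_in, emb_layer(idx_in))

# Dense model: one-hot vectors in, no bias, linear activation (a matrix multiplication).
oh_in = Input(shape=(seq_len, vocab_size))
dense_layer = Dense(embedding_dim, use_bias=False)
dense_model = Model(oh_in, dense_layer(oh_in))

# Give the Dense kernel the same values as the embedding matrix so they are comparable.
dense_layer.set_weights(emb_layer.get_weights())

words = np.array([[1, 5, 9]])
one_hot = np.eye(vocab_size)[words]      # shape (1, seq_len, vocab_size)

print(emb_model.predict(words))          # lookup
print(dense_model.predict(one_hot))      # matrix multiplication; same numbers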
In Keras, the Embedding layer is NOT a simple matrix multiplication layer, but a look-up table layer (see call function below or the original definition).
def call(self, inputs):
    if K.dtype(inputs) != 'int32':
        inputs = K.cast(inputs, 'int32')
    out = K.gather(self.embeddings, inputs)
    return out
What it does is map each known integer n in inputs to a trainable feature vector W[n], whose dimension is the so-called embedding feature length.
In simple words (from the functionality point of view), it is a one-hot encoder plus a fully-connected layer. The layer weights are trainable.
