How do I build a Fully Convolutional Neural Network in Keras?

I am working with Keras, and when attempting to implement an FCNN with an architecture similar to VGG, I get the error:
TypeError: img must be 4D tensor
Here is my code for the FCNN:
def buildmodel():
    model = Sequential()
    model.add(Convolution2D(4, 3, 3, border_mode='same', input_shape=(3, 64, 64)))
    model.add(LeakyReLU(alpha=0.3))
    model.add(MaxPooling2D((2, 2), strides=(2, 2), border_mode='valid', dim_ordering='th'))
    model.add(Convolution2D(8, 3, 3, border_mode='same'))
    model.add(LeakyReLU(alpha=0.3))
    model.add(MaxPooling2D((2, 2), strides=(2, 2), border_mode='valid', dim_ordering='th'))
    model.add(UpSampling2D((2, 2)))
    model.add(Convolution2D(16, 3, 3, border_mode='same', input_shape=x[0].shape))
    model.add(LeakyReLU(alpha=0.3))
    model.add(MaxPooling2D((2, 2), strides=(2, 2), border_mode='valid', dim_ordering='th'))
    model.add(Convolution2D(32, 3, 3, border_mode='same'))
    model.add(LeakyReLU(alpha=0.3))
    model.add(MaxPooling2D((2, 2), strides=(2, 2), border_mode='valid', dim_ordering='th'))
    model.add(UpSampling2D((2, 2)))
    model.add(Convolution2D(64, 3, 3, border_mode='same', input_shape=x[0].shape))
    model.add(LeakyReLU(alpha=0.3))
    model.add(Convolution2D(64, 3, 3, border_mode='same'))
    model.add(LeakyReLU(alpha=0.3))
    model.add(MaxPooling2D((2, 2), strides=(2, 2), border_mode='valid', dim_ordering='th'))
    model.add(UpSampling2D((2, 2)))
    model.add(Convolution2D(128, 3, 3, border_mode='same', input_shape=x[0].shape))
    model.add(LeakyReLU(alpha=0.3))
    model.add(Convolution2D(128, 3, 3, border_mode='same'))
    model.add(LeakyReLU(alpha=0.3))
    model.add(MaxPooling2D((2, 2), strides=(2, 2), border_mode='valid', dim_ordering='th'))
    model.add(UpSampling2D((2, 2)))
    model.add(Convolution2D(4, 3, 3, border_mode='same', input_shape=x[0].shape))
    model.add(LeakyReLU(alpha=0.3))
    model.add(Convolution2D(1, 3, 3, border_mode='same', activation='sigmoid'))
    return model
inputs = Input(shape=(len(x),3,64,64))
fcnn = buildmodel()
outputs = fcnn(inputs)
print(fcnn.inbound_nodes)
opt = RMSprop(lr=0.001, rho=0.9, epsilon=1e-6)
model = Model(input=inputs, output=outputs)
model.compile(loss='hinge', optimizer=opt)
model.fit(x)
x has shape (nsamples, 3, 64, 64). When using the VGG model verbatim in Keras with fully connected layers there seemed to be no problem, so I'm confused as to how the new architecture causes problems with the image shape.
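In Keras, Input(shape=...) excludes the batch dimension, so Input(shape=(len(x), 3, 64, 64)) declares a 5D tensor of shape (None, nsamples, 3, 64, 64), which the Theano convolution op then rejects as not being a 4D image batch. That is the most likely source of the error. A minimal sketch of the intended wiring, using the same Keras 1 API as the question (y here stands for the training targets, which fit also needs):
inputs = Input(shape=(3, 64, 64))  # one sample: (channels, height, width); the batch dim is implicit
fcnn = buildmodel()
outputs = fcnn(inputs)
model = Model(input=inputs, output=outputs)
model.compile(loss='hinge', optimizer=RMSprop(lr=0.001, rho=0.9, epsilon=1e-6))
model.fit(x, y)  # x: (nsamples, 3, 64, 64), y: matching targets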

Related

image augmentation went wrong

I was trying to apply image augmentation to see how it would affect the model, but for some reason I got this error:
TypeError: '>' not supported between instances of 'int' and 'ImageDataGenerator'
I'm using EfficientNetB4 and adding my own classifier layer.
augment = ImageDataGenerator(horizontal_flip=True, vertical_flip=True, rotation_range=30, validation_split=0.15)
train = augment.flow_from_directory(path, target_size=(380,380), batch_size=35, subset='training')
valid = augment.flow_from_directory(path, target_size=(380,380), batch_size=35, subset='validation')
base_model = keras.applications.EfficientNetB4(weights="imagenet",include_top=False, input_shape=(380, 380,3))
for layer in base_model.layers:
    layer.trainable = False
avg = keras.layers.GlobalAveragePooling2D()(base_model.output)
output = keras.layers.Dense(3, activation="softmax")(avg)
model = keras.Model(inputs=base_model.input, outputs=output)
earlystopping = keras.callbacks.EarlyStopping(monitor='loss', patience=3)
optimizer = keras.optimizers.SGD(learning_rate=0.001, momentum=0.9, decay=0.0001)
model.compile(loss="sparse_categorical_crossentropy",optimizer=optimizer,metrics=["accuracy"])
history = model.fit_generator(train, augment, validation_data=valid, epochs=25, verbose=2, callbacks=[earlystopping])
I think the problem is the batch_size I specified, but I couldn't understand why it caused this error.
Please check the documentation for fit_generator (https://faroit.com/keras-docs/1.2.0/models/model/):
fit_generator(self, generator, samples_per_epoch, nb_epoch, verbose=1, callbacks=[], validation_data=None, nb_val_samples=None, class_weight={}, max_q_size=10, nb_worker=1, pickle_safe=False, initial_epoch=0)
The second positional argument is samples_per_epoch (steps_per_epoch in Keras 2), which is an int, but you are passing an ImageDataGenerator instead, hence the error message. I don't see why you need to pass augment here. The following should work:
history = model.fit_generator(train, validation_data=valid, epochs=25, verbose=2, callbacks=[earlystopping])
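As a side note, fit_generator is deprecated in TensorFlow 2.x, where generators can be passed straight to fit; the equivalent modern call, using the same train/valid generators defined above, would be:
history = model.fit(train, validation_data=valid, epochs=25, verbose=2,
                    callbacks=[earlystopping])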

Error: "ANN Visualizer: Layer not supported for visualizing" in the function ann_viz()

So I am using ann_visualizer to show my Keras neural network model graphically. The model works properly, but it gives this error whenever I try to visualize it via ann_viz():
"ValueError: ANN Visualizer: Layer not supported for visualizing"
I searched the internet but couldn't find a valid solution.
This is the neural network model code:
model = keras.Sequential()
model.add(keras.layers.Flatten(input_shape=(28,28)))
model.add(keras.layers.Dense(128,activation=keras.activations.relu))
model.add(keras.layers.Dense(10,activation=keras.activations.softmax))
model.compile(
    optimizer="adam",
    loss=keras.losses.sparse_categorical_crossentropy,
    metrics=["accuracy"]
)
model.fit(train_data, train_labels, epochs=10)
test_loss, test_acc = model.evaluate(test_data, test_labels)
And this is the ann_viz() function call:
from ann_visualizer.visualize import ann_viz
ann_viz(model, title="Model")
Any idea how to make it work?
I also got the same error but was able to resolve it by removing the Flatten() layer.
#Flatten the input
X = X.reshape(X.shape[0], 28*28)
model = keras.Sequential()
#added flat input shape
model.add(keras.layers.Dense(128,activation=keras.activations.relu, input_shape=(28*28,)))
model.add(keras.layers.Dense(10,activation=keras.activations.softmax))
model.compile(
    optimizer="adam",
    loss=keras.losses.sparse_categorical_crossentropy,
    metrics=["accuracy"]
)
#now you can call the ann_viz
from ann_visualizer.visualize import ann_viz
ann_viz(model, title="Model")
Basically, I flattened the input myself, removed the Flatten layer, and added the flat input shape to the next layer. I don't know the exact reason this works, though.
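For completeness, ann_viz also accepts view and filename arguments (argument names as given in the ann_visualizer README) that control the Graphviz output; a small usage sketch:
from ann_visualizer.visualize import ann_viz

# view=True opens the rendered graph; filename sets the Graphviz .gv output path
ann_viz(model, view=True, filename="network.gv", title="Model")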

How to add names to layers of Keras sequential model

I use Keras in Tensorflow 2.0 to create a sequential model:
def create_model():
    model = keras.Sequential([
        keras.layers.Flatten(input_shape=(28,28), name="bla"),
        keras.layers.Dense(128, kernel_regularizer=keras.regularizers.l2(REGULARIZE), activation="relu"),
        keras.layers.Dropout(DROPOUT_RATE),
        keras.layers.Dense(128, kernel_regularizer=keras.regularizers.l2(REGULARIZE), activation="relu"),
        keras.layers.Dropout(DROPOUT_RATE),
        keras.layers.Dense(10, activation="softmax")
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
model = create_model()
# Checkpoint callback
cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path,
                                                 save_weights_only=True)
# Train model
model.fit(train_images,
          train_labels,
          epochs=EPOCHS,
          callbacks=[cp_callback])
If I extract the names after loading the model in a separate file, I get the following:
# Create model instance
model = create_model()
# Load weights of pre-trained model
model.load_weights(checkpoint_path)
output_names = [layer.name for layer in model.layers]
print(output_names)  # ['flatten', 'dense', 'dropout', 'dense_1', 'dropout_1', 'dense_2']
In this case, I would expect bla instead of flatten.
How can I add custom names to my layer?
You're doing it the right way. Straight from my Jupyter:
from tensorflow import keras
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28,28), name="bla"),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(10, activation="softmax")
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model
<tensorflow.python.keras.engine.sequential.Sequential at 0x2b6ecf083c18>
output_names = [layer.name for layer in model.layers]
output_names
['bla', 'dense', 'dropout', 'dense_1', 'dropout_1', 'dense_2']
Edit, adding the save/load part:
model.save('my_model.h5')
second_model = keras.models.load_model('my_model.h5')
output_names = [layer.name for layer in second_model.layers]
output_names
['bla', 'dense', 'dropout', 'dense_1', 'dropout_1', 'dense_2']
Can you add your whole code? The problem may lie elsewhere. Also, can you add your TensorFlow version?
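Once layers are named, they can also be looked up by name via get_layer; a small sketch reusing the model above (the layer names in the second model are just illustrative):
# Fetch a layer by its custom name (raises ValueError if no such layer exists)
flatten_layer = model.get_layer("bla")
print(flatten_layer.name)  # 'bla'

# Naming every layer avoids the auto-generated 'dense_1'-style names
named_model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28,28), name="input_flatten"),
    keras.layers.Dense(128, activation="relu", name="hidden_1"),
    keras.layers.Dense(10, activation="softmax", name="logits")
])
print([layer.name for layer in named_model.layers])
# ['input_flatten', 'hidden_1', 'logits']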

(deep learning, RNN, CNN) Image captioning with Keras

I followed an image captioning tutorial, but it doesn't work. Help me...
Image captioning is a model that describes the image a person inputs.
I don't have a GPU, so I have to build the same model as the tutorial and then load the weights from the tutorial directory.
I copied an image captioning tutorial.
Here is the tutorial's training model code:
image_model = Sequential([
    Dense(embedding_size, input_shape=(2048,), activation='relu'),
    RepeatVector(max_len)
])
caption_model = Sequential([
    Embedding(vocab_size, embedding_size, input_length=max_len),
    LSTM(256, return_sequences=True),
    TimeDistributed(Dense(300))
])
final_model = Sequential([
    Merge([image_model, caption_model], mode='concat', concat_axis=1),
    Bidirectional(LSTM(256, return_sequences=False)),
    Dense(vocab_size),
    Activation('softmax')
])
final_model.compile(loss='categorical_crossentropy', optimizer=RMSprop(),
                    metrics=['accuracy'])
However, it doesn't work. Some people say this code is written against the old Sequential Merge API, so I tried to change it to the functional API, but I don't know exactly how to convert it.
Here is my code:
embedding_size = 300
vocab_size = 8256
max_len = 40

image_model = Sequential([
    Dense(embedding_size, input_shape=(2048,), activation='relu'),
    RepeatVector(max_len)
])
caption_model = Sequential([
    Embedding(vocab_size, embedding_size, input_length=max_len),
    LSTM(256, return_sequences=True),
    TimeDistributed(Dense(300))
])

image_in = Input(shape=(2048,))
caption_in = Input(shape=(max_len, vocab_size))
merged = concatenate([image_model(image_in), caption_model(caption_in)],
                     axis=0)
latent = Bidirectional(LSTM(256, return_sequences=False))(merged)
out = Dense(vocab_size, activation='softmax')(latent)
model = Model([image_in(image_in), caption_in(caption_in)], out)
model.compile(loss='categorical_crossentropy', optimizer=RMSprop(),
              metrics=['accuracy'])
I got an error:
ValueError: "input_length" is 40, but received input has shape (None, 40, 8256)
Please help me... I have spent two weeks on this alone...
As the error message indicates, your input to the embedding layer has the wrong shape:
caption_in = Input(shape=(max_len, vocab_size))
Try changing it to:
caption_in = Input(shape=(max_len,))
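Two more lines in the posted code will also fail once that shape is fixed: the concatenation should run along the time axis (axis=1, matching concat_axis=1 in the original Merge), and Model must be given the Input tensors themselves, not calls like image_in(image_in). A minimal corrected sketch under those assumptions:
image_in = Input(shape=(2048,))
caption_in = Input(shape=(max_len,))  # integer word indices, as the Embedding layer expects
merged = concatenate([image_model(image_in), caption_model(caption_in)],
                     axis=1)  # both branches are (None, max_len, 300)
latent = Bidirectional(LSTM(256, return_sequences=False))(merged)
out = Dense(vocab_size, activation='softmax')(latent)
model = Model([image_in, caption_in], out)  # pass the Input tensors directly
model.compile(loss='categorical_crossentropy', optimizer=RMSprop(),
              metrics=['accuracy'])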

Is it possible to train using the same model with two inputs?

Hello, I have a question about Keras.
Currently I want to implement a network that uses the same CNN model with two images as the CNN's inputs, and provides the two CNN results to a Dense model.
For example:
def cnn_model():
    input = Input(shape=(None, None, 3))
    x = Conv2D(8, (3, 3), strides=(1, 1))(input)
    x = GlobalAvgPool2D()(x)
    model = Model(input, x)
    return model

def fc_model(cnn1, cnn2):
    input_1 = cnn1.output
    input_2 = cnn2.output
    input = concatenate([input_1, input_2])
    x = Dense(1, input_shape=(None, 16))(input)
    x = Activation('sigmoid')(x)
    model = Model([cnn1.input, cnn2.input], x)
    return model

def main():
    cnn1 = cnn_model()
    cnn2 = cnn_model()
    model = fc_model(cnn1, cnn2)
    model.compile(optimizer='adam', loss='mean_squared_error')
    model.fit(x=[image1, image2], y=[1.0, 1.0], batch_size=1, epochs=1)
I want to implement something like this model and train it, but I got an error message like the one below:
'All layer names should be unique'
Actually, I want to use only one CNN model as a feature extractor and finally use the two features to predict one float value in 0.0 ~ 1.0.
So the whole system: take two images, extract features from the same CNN model, and provide the features to a Dense model to get one floating-point value.
Please help me implement this system and train it.
Thank you
See the section of the Keras documentation on shared layers:
https://keras.io/getting-started/functional-api-guide/
A code snippet from the documentation above demonstrating this:
# tweet_a and tweet_b are Input tensors; the docs define them
# as tweet_a = Input(shape=(280, 256)) and likewise for tweet_b

# This layer can take as input a matrix
# and will return a vector of size 64
shared_lstm = LSTM(64)

# When we reuse the same layer instance
# multiple times, the weights of the layer
# are also being reused
# (it is effectively *the same* layer)
encoded_a = shared_lstm(tweet_a)
encoded_b = shared_lstm(tweet_b)

# We can then concatenate the two vectors:
merged_vector = keras.layers.concatenate([encoded_a, encoded_b], axis=-1)

# And add a logistic regression on top
predictions = Dense(1, activation='sigmoid')(merged_vector)

# We define a trainable model linking the
# tweet inputs to the predictions
model = Model(inputs=[tweet_a, tweet_b], outputs=predictions)
model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])
model.fit([data_a, data_b], labels, epochs=10)
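Applied to the question above, the same idea means building cnn_model() once and calling it on both image inputs, which sidesteps the 'All layer names should be unique' error. A minimal sketch under those assumptions, reusing cnn_model and the Keras imports from the question:
def build_shared_system():
    shared_cnn = cnn_model()  # built once, so both branches share the same layers and weights
    image_a = Input(shape=(None, None, 3))
    image_b = Input(shape=(None, None, 3))
    feat_a = shared_cnn(image_a)  # same CNN applied to each image
    feat_b = shared_cnn(image_b)
    merged = concatenate([feat_a, feat_b])
    score = Dense(1, activation='sigmoid')(merged)  # one float in 0.0 ~ 1.0
    return Model([image_a, image_b], score)

model = build_shared_system()
model.compile(optimizer='adam', loss='mean_squared_error')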
