DCGAN - Issue in understanding code - python-3.x

This is part of the code for a Deep Convolutional Generative Adversarial Network (DCGAN):
discriminator.trainable = False
ganInput = Input(shape=(100,))
# getting the output of the generator
# and then feeding it to the discriminator
# new model = D(G(input))
x = generator(ganInput)
ganOutput = discriminator(x)
gan = Model(input=ganInput, output=ganOutput)
gan.compile(loss='binary_crossentropy', optimizer=Adam())
Issue 1 - I do not understand what the line ganInput = Input(shape=(100,)) does. Clearly ganInput is a variable, but what is Input? Is it a function? If Input is a function, then what will ganInput contain?
Issue 2 - What is the role of the Model API? I read about it in the Keras documentation but failed to understand what it is doing here.
Please ask for any further clarification / details you need.
Keras with TensorFlow backend
COMPLETE SOURCE CODE :
https://github.com/yashk2810/DCGAN-Keras/blob/master/DCGAN.ipynb

The line ganInput = Input(shape=(100,)) defines the entry point of the combined model: Input() returns a symbolic Keras tensor (a placeholder, not actual data) whose per-sample shape is (100,), i.e. each input is a 100-dimensional noise vector.
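A minimal sketch of what Input() actually returns (the printed representation varies with the Keras/TensorFlow version, but the idea is the same):
from keras.layers import Input, Dense

# Input() holds no data; it returns a symbolic tensor that records the
# expected per-sample shape (the batch dimension is implicit / None).
ganInput = Input(shape=(100,))
print(ganInput)   # e.g. Tensor("input_1:0", shape=(?, 100), dtype=float32)

# Layers are then called on this tensor to build up a computation graph:
h = Dense(10)(ganInput)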
The model will include all layers required in the computation of output given input. In the case of multi-input or multi-output models, you can use lists as well:
model = Model(inputs=[ganInput1, ganInput2], outputs=[ganOutput1, ganOutput2, ganOutput3])
This means that to compute ganOutput1, ganOutput2 and ganOutput3, the Model API requires the
input layers ganInput1 and ganInput2.
Model traces the graph backwards from the outputs to the inputs, so it knows every layer in between and has everything it needs to compute the outputs (and to backpropagate through them).
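To make that concrete, here is a toy sketch (unrelated to the GAN itself) of how Model collects every layer on the path from the inputs to the outputs:
from keras.layers import Input, Dense
from keras.models import Model

inp = Input(shape=(100,))
h = Dense(32, activation='relu')(inp)          # hidden layer
out = Dense(1, activation='sigmoid')(h)        # output layer

# Model walks back from `out` to `inp`, so it contains both Dense layers;
# compiling and training `toy` trains exactly those layers.
toy = Model(inputs=inp, outputs=out)
toy.compile(loss='binary_crossentropy', optimizer='adam')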
This line loads the MNIST data: (X_train, Y_train), (X_test, Y_test) = mnist.load_data(). X_train and Y_train hold the training images and their corresponding target labels; X_test and Y_test hold the test images and their corresponding target labels.
# ======================================================
# Here the data is being loaded
# X_train = training data, Y_train = training targets
# X_test = testing data , Y_test = testing targets
# ======================================================
(X_train, Y_train), (X_test, Y_test) = mnist.load_data()
# ================================================
# Reshaping the training and testing data
# He has added one extra channel dimension, which is always 1 (grayscale images)
# ================================================
X_train = X_train.reshape(X_train.shape[0], 28, 28, 1)
X_test = X_test.reshape(X_test.shape[0], 28, 28, 1)
X_train = X_train.astype('float32')
# ================================================
# Initially pixel values are in the range 0-255
# he rescales them to the range -1 to 1, to match the generator's tanh output
#==================================================
X_train = (X_train - 127.5) / 127.5
X_train.shape
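A quick sanity check of the preprocessing (a sketch; for MNIST the extremes are exactly -1.0 and 1.0 because the raw pixel values span 0-255):
print(X_train.shape)                 # (60000, 28, 28, 1)
print(X_train.min(), X_train.max())  # -1.0 1.0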
# ======================================================================
# He builds the generator model over here
# 1] Dense layer with 128*7*7 neurons, taking 100 numbers as input
# 2] Batch Normalization
# 3] Reshape to (7, 7, 128)
# 4] UpSampling2D layer
# 5] Convolution layer with LeakyReLU activation
# 6] Batch Normalization
# 7] UpSampling2D layer
# 8] Convolution layer with tanh activation (the generated image)
# ======================================================================
generator = Sequential([
    Dense(128*7*7, input_dim=100, activation=LeakyReLU(0.2)),
    BatchNormalization(),
    Reshape((7,7,128)),
    UpSampling2D(),
    Convolution2D(64, 5, 5, border_mode='same', activation=LeakyReLU(0.2)),
    BatchNormalization(),
    UpSampling2D(),
    Convolution2D(1, 5, 5, border_mode='same', activation='tanh')
])
generator.summary()
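Note that the notebook uses the old Keras 1 layer signatures (Convolution2D(64, 5, 5, border_mode='same'), subsample=(2,2)). On Keras 2 the same generator would be written roughly as below; this is a sketch of the equivalent architecture, with LeakyReLU added as a separate layer rather than passed as an activation:
from keras.models import Sequential
from keras.layers import Dense, BatchNormalization, Reshape, UpSampling2D, Conv2D, LeakyReLU

generator = Sequential([
    Dense(128*7*7, input_dim=100),
    LeakyReLU(0.2),
    BatchNormalization(),
    Reshape((7, 7, 128)),
    UpSampling2D(),
    Conv2D(64, (5, 5), padding='same'),
    LeakyReLU(0.2),
    BatchNormalization(),
    UpSampling2D(),
    Conv2D(1, (5, 5), padding='same', activation='tanh')
])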
# ======================================================================
# He builds the discriminator model over here
# 1] Convolution layer which takes as input an image of shape (28, 28, 1)
# 2] Dropout layer
# 3] Convolution layer for down-sampling with LeakyReLU as activation
# 4] Dropout layer
# 5] Flatten layer to flatten the output
# 6] 1 output node with sigmoid activation
# ======================================================================
discriminator = Sequential([
    Convolution2D(64, 5, 5, subsample=(2,2), input_shape=(28,28,1), border_mode='same', activation=LeakyReLU(0.2)),
    Dropout(0.3),
    Convolution2D(128, 5, 5, subsample=(2,2), border_mode='same', activation=LeakyReLU(0.2)),
    Dropout(0.3),
    Flatten(),
    Dense(1, activation='sigmoid')
])
discriminator.summary()
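For orientation, these are the output shapes the two summaries should report (ignoring the batch dimension; a sketch, since exact parameter counts depend on the Keras version):
# Generator:      100 -> 6272 -> (7, 7, 128) -> (14, 14, 64) -> (28, 28, 1)
# Discriminator:  (28, 28, 1) -> (14, 14, 64) -> (7, 7, 128) -> 6272 -> 1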
generator.compile(loss='binary_crossentropy', optimizer=Adam())
discriminator.compile(loss='binary_crossentropy', optimizer=Adam())
discriminator.trainable = False
# =====================================================================
# Remember above generator takes 100 numbers as input in the first layer
# Dense(128*7*7, input_dim=100, activation=LeakyReLU(0.2))
# Input(shape=(100,)) returns a tensor of this shape (100,)
# ====================================================================
ganInput = Input(shape=(100,))
# getting the output of the generator
# and then feeding it to the discriminator
# new model = D(G(input))
# ===========================================================
# giving the input tensor of shape (100,) to generator model
# ===========================================================
x = generator(ganInput)
# ===========================================================
# the output of generator will be of shape (batch_size, 28, 28, 1)
# this output of generator will go to discriminator as input
# Remember we have defined discriminator input as shape (28, 28, 1)
# ===========================================================
ganOutput = discriminator(x)
# =========================================================================
# Now it is clear that the generator's output is needed as input to the discriminator
# You have to tell this to the Model API so it can backpropagate through both of them
# Your Model API model is the whole model you have built:
# it says that your model is a combination of the generator and discriminator models, where the data flows from the generator into the discriminator
# YOUR_Model = generator -> discriminator
# This is something like training the generator and discriminator as one single model rather than as two different models,
# while at the same time they are actually still trained individually (hope this makes sense)
# =========================================================================
gan = Model(input=ganInput, output=ganOutput)
gan.compile(loss='binary_crossentropy', optimizer=Adam())
gan.summary()
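One subtlety worth spelling out (this is the behaviour of the Keras versions this notebook targets; newer versions may warn about it): the trainable flag is captured when a model is compiled. discriminator was compiled while trainable was still True, so calling discriminator.train_on_batch later does update its weights; gan was compiled after trainable was set to False, so gan.train_on_batch only updates the generator. Using the variable names from the train() function below:
# compiled while discriminator.trainable was True -> updates discriminator weights
discriminator.train_on_batch(X, y_discriminator)

# gan was compiled while discriminator.trainable was False -> the discriminator
# is frozen inside gan, so only the generator's weights change here
gan.train_on_batch(noise_input, y_generator)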
def train(epoch=10, batch_size=128):
    batch_count = X_train.shape[0] // batch_size
    for i in range(epoch):
        for j in tqdm(range(batch_count)):
            # Input for the generator
            noise_input = np.random.rand(batch_size, 100)
            # getting random images from X_train of size=batch_size
            # these are the real images that will be fed to the discriminator
            image_batch = X_train[np.random.randint(0, X_train.shape[0], size=batch_size)]
            # these are the predicted images from the generator
            predictions = generator.predict(noise_input, batch_size=batch_size)
            # the discriminator takes in the real images and the generated images
            X = np.concatenate([predictions, image_batch])
            # labels for the discriminator
            y_discriminator = [0]*batch_size + [1]*batch_size
            # Let's train the discriminator
            discriminator.trainable = True
            discriminator.train_on_batch(X, y_discriminator)
            # Let's train the generator
            noise_input = np.random.rand(batch_size, 100)
            y_generator = [1]*batch_size
            discriminator.trainable = False
            gan.train_on_batch(noise_input, y_generator)
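A sketch of how the training loop is typically driven and how the generator can be inspected afterwards (the rescaling simply inverts the (x - 127.5) / 127.5 preprocessing):
train(epoch=10, batch_size=128)

# Sample a few images from random noise to inspect progress.
noise = np.random.rand(16, 100)
generated = generator.predict(noise)                        # shape (16, 28, 28, 1), values in [-1, 1]
generated = (generated * 127.5 + 127.5).astype('uint8')     # back to 0-255 for display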

Related

How to Determine the Output Shape of a Conv2DTranspose of an AutoEncoder

I am building an autoencoder for learning 28 ultrasound time signals of shape [262144, 2], i.e. 262144 pairs of time and voltage points concatenated to form a [262144 x 2] tensor as input data to a stacked convolutional encoder. The latent space is set to produce a vector of length 16. The problem arises in the decoder, where a for loop is used to stack two Conv2DTranspose layers (filter sizes 64 and 32, kernel size 3) on the latent-space output in order to reproduce the original input shape of [262144 x 2]. Instead, the decoder network gives a [262144 x 4] output tensor, which does not match the validation and input data shapes of [262144 x 2]. What model parameters (filters, kernel, strides and padding) should I use to get appropriate tensor dimensions? The code and output are shown below. Your assistance is greatly appreciated!
from keras.layers import Dense, Input
from keras.layers import Conv2D, Flatten
from keras.layers import Reshape, Conv2DTranspose
from keras.models import Model
from keras.datasets import mnist
from keras.utils import plot_model
from keras import backend as K
import numpy as np
import matplotlib.pyplot as plt
x_Train = Signals
x_Test = Signals1
Sig_size1 = x_Train.shape[1]
Sig_size2 = x_Train.shape[2]
Sig_size11 = x_Test.shape[1]
Sig_size22 = x_Test.shape[2]
x_Train = np.reshape(x_Train,[-1, Sig_size1, Sig_size2, 1])
x_Train = x_Train.astype('float32') / np.max(x_Train)
x_Test = np.reshape(x_Test,[-1, Sig_size11, Sig_size22, 1])
x_Test = x_Test.astype('float32') / np.max(x_Test)
# network parameters
input_shape = (Sig_size1, Sig_size2, 1)
batch_size = 32 # Was 32
kernel_size = 1 # Was 3
latent_dim = 16 # Was 16
# encoder/decoder number of filters per CNN layer
layer_filters = [2, 6]
# build the autoencoder model
# first build the encoder model
inputs = Input(shape=input_shape, name='encoder_input')
x = inputs
# stack of Conv2D(32)-Conv2D(64)
for filters in layer_filters:
    x = Conv2D(filters=filters,
               kernel_size=kernel_size,
               strides=2,
               activation='relu',
               padding='same')(x)
shape = K.int_shape(x)
# generate latent vector
x = Flatten()(x)
latent = Dense(latent_dim, name='latent_vector')(x)
# instantiate encoder model
encoder = Model(inputs, latent, name='encoder')
encoder.summary()
plot_model(encoder, to_file='encoder.png', show_shapes=True)
# build the decoder model
latent_inputs = Input(shape=(latent_dim,), name='decoder_input')
# use the shape (7, 7, 64) that was earlier saved
x = Dense(shape[1] * shape[2] * shape[3])(latent_inputs)
# from vector to suitable shape for transposed conv
x = Reshape((shape[1], shape[2], shape[3]))(x)
# stack of Conv2DTranspose(64)-Conv2DTranspose(32)
layer_filters = [1,8] ########change
kernel_size = 3 # Was 3
# for filters in layer_filters[::-1]:
for filters in layer_filters[::-1]:
    x = Conv2DTranspose(filters=filters,
                        kernel_size=kernel_size,
                        strides=2,
                        activation='relu',
                        padding='same')(x)
# layer_filters = [64, 32]
kernel_size = 3 # Was 3
# reconstruct the input
outputs = Conv2DTranspose(filters=1, #Was 1
kernel_size=kernel_size,
activation='sigmoid',
padding='same',
name='decoder_output')(x)
# instantiate decoder model
decoder = Model(latent_inputs, outputs, name='decoder')
decoder.summary()
plot_model(decoder, to_file='decoder.png', show_shapes=True)
# autoencoder = encoder + decoder
# instantiate autoencoder model
autoencoder = Model(inputs,
decoder(encoder(inputs)),
name='autoencoder')
autoencoder.summary()
plot_model(autoencoder,
to_file='autoencoder.png',
show_shapes=True)
# Mean Square Error (MSE) loss function, Adam optimizer
autoencoder.compile(loss='mse', optimizer='adam')
# train the autoencoder
autoencoder.fit(x_Train,
x_Train,
validation_data=(x_Test, x_Test),
epochs=1,
batch_size=batch_size)
# predict the autoencoder output from test data
x_decoded = autoencoder.predict(x_Test)
This code was adapted from Advanced Deep Learning with Keras by Rowel Atienza (Chapter 3, a denoising autoencoder for MNIST data).
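For reference, a rough shape walk-through under the usual Keras rules (Conv2D with padding='same' and strides=2 gives ceil(dim/2); Conv2DTranspose with padding='same' and strides=2 gives dim*2), assuming the input really is (262144, 2, 1):
# Encoder: two stride-2 Conv2D layers, padding='same'
#   (262144, 2) -> (131072, 1) -> (65536, 1)     since ceil(2/2)=1 and ceil(1/2)=1
# Decoder: two stride-2 Conv2DTranspose layers, padding='same'
#   (65536, 1) -> (131072, 2) -> (262144, 4)     each stride-2 layer doubles both dims
#
# The width axis collapses from 2 to 1 only once in the encoder but is doubled
# twice in the decoder, which is why the output comes out as [262144 x 4]
# instead of [262144 x 2].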

TensorFlow Keras model not accepting my Python generator output

I am following some tutorials on setting up my first convolutional NN for some image classification.
The tutorials load all the images into memory and pass them into model.fit(). I can't do that because my data set is too large.
I wrote this generator to "drip feed" preprocessed images to model.fit, but I am getting an error, and because I am a newbie I am having trouble diagnosing it.
The images are processed as greyscale only.
Here is the generator that I made...
# need to preprocess image in batches because memory
# tdata expects list of tuples<(string) file_path, (int) class_num)>
def image_generator(tdata, batch_size):
    start_from = 0
    while True:
        # Slice array into batch data
        batch = tdata[start_from:start_from+batch_size]
        # Keep track of position
        start_from += batch_size
        # Create batch lists
        batch_x = []
        batch_y = []
        # Read in each input, perform preprocessing and get labels
        for img_path, class_num in batch:
            # Read raw img data as np array
            # Returns as shape (600, 300, 1)
            img_arr = create_np_img_array(img_path)
            # Normalize img data (/255)
            img_arr = normalize_img_array(img_arr)
            # Add to the batch x data list
            batch_x.append(img_arr)
            # Add to the batch y classification list
            batch_y.append(class_num)
        yield (batch_x, batch_y)
Creating an instance of the generator:
img_gen = image_generator(training_data, 30)
Setting up my model like so:
# create the model
model = Sequential()
# input layer has the input_shape param which is the dimensions of the np array
model.add( Conv2D(256, (3, 3), activation='relu', input_shape = (600, 300, 1)) )
model.add( MaxPooling2D( (2,2)) )
# second hidden layer
model.add( MaxPooling2D((2, 2)) )
model.add( Conv2D(256, (3, 3), activation='relu') )
# third hidden layer
model.add( MaxPooling2D((2, 2)))
model.add( Conv2D(256, (3, 3), activation='relu') )
# fourth hidden layer
model.add( Flatten() )
model.add( Dense(64, activation='relu') )
# output layer
model.add( Dense(2) )
model.summary()
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
# pass generator
model.fit(img_gen, epochs=5)
Then model.fit() fails from trying to call shape on an int.
~\anaconda3\lib\site-packages\tensorflow\python\keras\engine\data_adapter.py in _get_dynamic_shape(t)
797
798 def _get_dynamic_shape(t):
--> 799 shape = t.shape
800 # Unknown number of dimensions, `as_list` cannot be called.
801 if shape.rank is None:
AttributeError: 'int' object has no attribute 'shape'
Any suggestions on what I've done wrong??
Converting the outputs from the generator to numpy arrays seems to have stopped the error.
np_x = np.array(batch_x)
np_y = np.array(batch_y)
Seems like it didn't like the classifications as a std list of ints.
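A sketch of the fixed end of image_generator (batch_x and batch_y are the lists built above; converting them to arrays gives Keras batches with a proper .shape):
# ... inside the while True loop, after the for loop over the batch:
np_x = np.array(batch_x)   # shape (batch_size, 600, 300, 1)
np_y = np.array(batch_y)   # shape (batch_size,)
yield (np_x, np_y)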

Keras multi-class prediction only returning 1 prediction with softmax and categorical_crossentropy

I'm using Keras and TensorFlow to train a model that predicts a matching font based on an image of some letters. My data folder contains the images of the letters in varying forms, organized into separate subfolders. My code for training the model looks like this:
LETTER_IMAGES_FOLDER = "datasets"
MODEL_FILENAME = "fonts_model.hdf5"
MODEL_LABELS_FILENAME = "model_labels.dat"
data = pd.read_csv('annotations.csv')
paths = list(data['Path'].values)
Y = list(data['Font'].values)
encoder = LabelEncoder()
encoder.fit(Y)
Y = encoder.transform(Y)
Y = np_utils.to_categorical(Y)
data = []
# loop over the input images
for image_file in paths:
    # Load the image and convert it to grayscale
    image = cv2.imread(image_file)
    image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Add a third channel dimension to the image to make Keras happy
    image = np.expand_dims(image, axis=2)
    # Add the letter image and its label to our training data
    data.append(image)
data = np.array(data, dtype="float") / 255.0
train_x, test_x, train_y, test_y = model_selection.train_test_split(data,Y,test_size = 0.1, random_state = 0)
# Save the mapping from labels to one-hot encodings.
# We'll need this later when we use the model to decode what its predictions mean
with open(MODEL_LABELS_FILENAME, "wb") as f:
    pickle.dump(encoder, f)
# Build the neural network!
model = Sequential()
# First convolutional layer with max pooling
model.add(Conv2D(20, (5, 5), padding="same", input_shape=(100, 100, 1), activation="relu"))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
# Second convolutional layer with max pooling
model.add(Conv2D(50, (5, 5), padding="same", activation="relu"))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Flatten())
model.add(Dense(500, activation="relu"))
print (len(encoder.classes_))
model.add(Dense(len(encoder.classes_), activation="softmax"))
# Ask Keras to build the TensorFlow model behind the scenes
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
# Train the neural network
model.fit(train_x, train_y, validation_data=(test_x, test_y), batch_size=32, epochs=2, verbose=1)
# Save the trained model to disk
model.save(MODEL_FILENAME)
Once the model has been created I'm predicting with it as follows:
predictions = model.predict(letter_image)
print (predictions) # this has the length of 1
The problem is that "predictions" is always an array of size 1 and I'm not sure why. I'm using softmax, categorical_crossentropy and my Dense value is greater than 1 in the last layer. Could someone please tell me why I'm not getting the top n predictions here?
I've also tried sigmoid with binary_crossentropy but get the same result. I think there's something more to it that I'm missing.
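One thing worth illustrating, independent of the loss choice: model.predict returns one row per input sample, so a single image gives an array of shape (1, n_classes), and the per-class scores live in predictions[0]. A sketch of inspecting the top-n classes (assuming letter_image is a single preprocessed (100, 100, 1) image and encoder is the fitted LabelEncoder from the training script):
import numpy as np

letter_image = np.expand_dims(letter_image, axis=0)    # add the batch dimension -> (1, 100, 100, 1)
predictions = model.predict(letter_image)
print(predictions.shape)                               # (1, n_classes)

top_n = 3
best = np.argsort(predictions[0])[::-1][:top_n]        # indices of the n highest scores
print(encoder.inverse_transform(best))                 # corresponding font labels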

Error in model performance metrics

Well, my neural network is as follows:
# Leaks data input is a 2-D vector of window*size*341 features
# Reshape to match picture format [Height x Width x Channel]
# Tensor input become 4-D: [Batch Size, Height, Width, Channel]
x = tf.reshape(x, shape= [-1, 16, 341, 2])
# Convolution Layer with 32 filters and a kernel size of 5
conv1 = tf.layers.conv2d(x, 6, 2, activation=tf.nn.relu)
# Max Pooling (down-sampling) with strides of 2 and kernel size of 2
conv1 = tf.layers.max_pooling2d(conv1, 2, 2)
# Convolution Layer with 64 filters and a kernel size of 3
conv2 = tf.layers.conv2d(conv1, 8, 3, activation=tf.nn.relu)
# Max Pooling (down-sampling) with strides of 2 and kernel size of 2
conv2 = tf.layers.max_pooling2d(conv2, 2, 2)
# Flatten the data to a 1-D vector for the fully connected layer
fc1 = tf.contrib.layers.flatten(conv2)
# Fully connected layer (in tf contrib folder for now)
fc1 = tf.layers.dense(fc1, 1024)
# Apply Dropout (if is_training is False, dropout is not applied)
fc1 = tf.layers.dropout(fc1, rate=dropout, training=is_training)
# 1-layer LSTM with n_hidden units.
out = tf.layers.dense(fc1, n_classes)
It predicts a multi-label classification vector of length 339. First I wanted to make sure that I am able to fully overfit a small sample of data, to check that everything works and is well defined.
I trained my neural network on 1700 samples; to measure my model's performance I added accuracy as follows:
logits_train = conv_net(features, num_classes, dropout, reuse=False,
is_training=True)
logits_test = conv_net(features, num_classes, dropout, reuse=True,
is_training=False)
# Predictions
pred_classes = tf.cast(tf.greater(logits_test,0.5), tf.float32)
pred_probas = tf.nn.sigmoid(logits_test)
# If prediction mode, early return
if mode == tf.estimator.ModeKeys.PREDICT:
    return tf.estimator.EstimatorSpec(mode, predictions=pred_classes)
# Define loss and optimizer
#tf.one_hot(tf.cast(labels,dtype=tf.int32),depth=2)
loss_op = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=tf.cast(labels,dtype=tf.float32),logits=logits_train))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss_op,global_step=tf.train.get_global_step())
# Evaluate the accuracy of the model
accuracy = tf.metrics.accuracy(labels=labels , predictions = pred_classes )
#correct_prediction = tf.equal(tf.round(tf.nn.sigmoid(logits_test)), tf.round(labels))
#accuracy1 = tf.metrics.mean(tf.cast(correct_prediction, tf.float32))
#acc_op = tf.metrics.mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=pred_classes,labels=labels))
# TF Estimators requires to return a EstimatorSpec, that specify
# the different ops for training, evaluating, ...
estim_specs = tf.estimator.EstimatorSpec(
mode=mode,
predictions=pred_probas,
loss=loss_op,
train_op=train_op,
eval_metric_ops={'accuracy': accuracy})
return estim_specs
The problem is that with few epochs the performance seems to be very good
for i in range(1,50):
    print('Epoch', (i+1))
    input_fn = tf.estimator.inputs.numpy_input_fn(x=curr_data_batch, y=curr_target_batch[:,:339], batch_size=96, shuffle=False)
    model.train(input_fn=input_fn)
    if (i+1) % 10:
        # eval the model
        eval_model = model.evaluate(input_fn=input_fn)
        print('Loss ,', eval_model['loss'])
        print('accuracy ,', eval_model['accuracy'])
Loss , 0.029562088
accuracy , 0.9958855
Epoch 3:
Loss , 0.028194984
accuracy , 0.99588597
Epoch 4:
Loss , 0.027557796
accuracy , 0.9958862
but when I try to predict on the same training data I get completely opposite metrics
loss = 0.65
accuracy = 0.33
I don't know where this issue comes from. Did I mis-define something or not?
Thanks
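One detail from the snippet above that is worth illustrating: pred_classes thresholds the raw logits at 0.5, while pred_probas applies the sigmoid. Thresholding logits at 0.5 corresponds to a probability threshold of sigmoid(0.5) ≈ 0.62 rather than 0.5; a small sketch of the difference:
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

logits = np.array([-1.0, 0.2, 0.7, 3.0])

on_logits = (logits > 0.5).astype(np.float32)           # [0., 0., 1., 1.]  (as in pred_classes)
on_probs = (sigmoid(logits) > 0.5).astype(np.float32)   # [0., 1., 1., 1.]  (threshold on probabilities, i.e. logits > 0)

print(on_logits, on_probs)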

How can I output the last layer in tensorflow using the Estimator class?

The code below implements a CNN in python using Tensorflow.
How do I output the features of the last layer?
I found code online for cases where tf.Session() is invoked, but the code below (adapted from TensorFlow tutorials) does not use tf.Session(). I tried with return pool2_flat in the cnn_model_fn, but I get the error:
ValueError: model_fn should return an EstimatorSpec.
import ...
def cnn_model_fn(features, labels, mode):
    # Input Layer
    input_layer = tf.reshape(features["x"], [-1, 6, 40, 1])
    # Convolutional Layer #1
    conv1 = tf.layers.conv2d(inputs=input_layer, filters=32, kernel_size=[3, 3],
                             padding="same", activation=tf.nn.relu)
    # Pooling Layer #1
    pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2], strides=2)
    # Flatten tensor
    pool2_flat = tf.reshape(pool1, [-1, 20 * 3 * 64])
    # Dense Layer with 1024 neurons
    dense = tf.layers.dense(inputs=pool2_flat, units=1024, activation=tf.nn.relu)
    # Add dropout operation; 0.6 probability that element will be kept
    dropout = tf.layers.dropout(
        inputs=dense, rate=0.4, training=mode == tf.estimator.ModeKeys.TRAIN)
    # Logits layer
    logits = tf.layers.dense(inputs=dropout, units=10)
    predictions = {"classes": tf.argmax(input=logits, axis=1),
                   "probabilities": tf.nn.softmax(logits, name="softmax_tensor")
                   }
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)
    # Calculate Loss (for both TRAIN and EVAL modes)
    onehot_labels = tf.one_hot(indices=tf.cast(labels, tf.int32), depth=10)
    loss = tf.losses.softmax_cross_entropy(
        onehot_labels=onehot_labels, logits=logits)
    # Configure the Training Op (for TRAIN mode)
    if mode == tf.estimator.ModeKeys.TRAIN:
        optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)
        train_op = optimizer.minimize(
            loss=loss,
            global_step=tf.train.get_global_step())
        return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)
    # Add evaluation metrics (for EVAL mode)
    eval_metric_ops = {
        "accuracy": tf.metrics.accuracy(
            labels=labels, predictions=predictions["classes"])}
    return tf.estimator.EstimatorSpec(mode=mode, loss=loss, eval_metric_ops=eval_metric_ops)

def main(unused_argv):
    # Load training and eval data
    ...
    # Create the Estimator
    convnetVoc_classifier = tf.estimator.Estimator(
        model_fn=cnn_model_fn, model_dir=TMP_FOLDER)
    # Train the model
    train_input_fn = tf.estimator.inputs.numpy_input_fn(
        x={"x": train_data},
        y=train_labels,
        num_epochs=None,
        shuffle=True)
    convnetVoc_classifier.train(
        input_fn=train_input_fn,
        steps=1)
    # Evaluate model's accuracy & print results.
    test_input_fn = tf.estimator.inputs.numpy_input_fn(
        x={"x": eval_data},
        y=eval_labels,
        num_epochs=1,
        shuffle=False)
    eval_results = convnetVoc_classifier.evaluate(input_fn=test_input_fn)
    print(eval_results)

if __name__ == "__main__":
    tf.app.run()
The EstimatorSpec has a field predictions, where you can put a dictionary from string name to any output of the model you want.
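A minimal sketch of that idea applied to the cnn_model_fn above: add the tensor you want (here pool2_flat, under a hypothetical key name "last_layer_features") to the predictions dictionary, and it comes back from Estimator.predict():
predictions = {
    "classes": tf.argmax(input=logits, axis=1),
    "probabilities": tf.nn.softmax(logits, name="softmax_tensor"),
    # any tensor computed in the model_fn can be exposed as an extra prediction
    "last_layer_features": pool2_flat,
}
if mode == tf.estimator.ModeKeys.PREDICT:
    return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)

# Each element yielded by predict() is then a dict with these keys:
# for p in convnetVoc_classifier.predict(input_fn=test_input_fn):
#     last_layer = p["last_layer_features"]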
