How to save and restore the convolution autoencoder neural network model - python-3.x

I use the convolutional autoencoder neural network method to train my model and then save it, but when I restore my model to reconstruct an image similar to the training images, the reconstruction result is very bad and the loss is large. I am not sure if I am doing something wrong when saving and reading the files.
Training the model and saving it:
#--------------------------------------------------------------------------
x = tf.placeholder(tf.float32, [None, dim], name = "X")
y = tf.placeholder(tf.float32, [None, dim], name = "Y")
keepprob = tf.placeholder(tf.float32, name = "K")
pred = cae(x, weights, biases, keepprob, imgsize)["out"]
cost = tf.reduce_sum(tf.square(cae(x, weights, biases, keepprob,imgsize)["out"] - tf.reshape(y, shape=[-1, imgsize, imgsize, 1])))
learning_rate = 0.01
optm = tf.train.AdamOptimizer(learning_rate).minimize(cost)
#--------------------------------------------------------------------------
sess = tf.Session()
save_model = os.path.join(PATH,'temp_saved_model')
saver = tf.train.Saver()
tf.add_to_collection("COST", cost)
tf.add_to_collection("PRED", pred)
sess.run(tf.global_variables_initializer())
mean_img = np.zeros((dim))
batch_size = 100
n_epochs = 1000
for epoch_i in range(n_epochs):
    for batch_i in range(ntrain // batch_size):
        trainbatch = np.array(train)
        trainbatch = np.array([img - mean_img for img in trainbatch])
        sess.run(optm, feed_dict={x: trainbatch, y: trainbatch, keepprob: 1.})
save_path = saver.save(sess, save_model)
print('Model saved in file: %s' %save_path)
sess.close()
Restoring the model and trying to reconstruct the image:
tf.reset_default_graph()
save_model = os.path.join(PATH + 'SaveModel/','temp_saved_model.meta')
imgsize = 64
dim = imgsize * imgsize
mean_img = np.zeros((dim))
with tf.Session() as sess:
    saver = tf.train.import_meta_graph(save_model)
    saver.restore(sess, tf.train.latest_checkpoint(PATH + 'SaveModel/'))
    cost = tf.get_collection("COST")[0]
    pred = tf.get_collection("PRED")[0]
    graph = tf.get_default_graph()
    x = graph.get_tensor_by_name("X:0")
    y = graph.get_tensor_by_name("Y:0")
    k = graph.get_tensor_by_name("K:0")
    for i in range(10):
        test_xs = np.array(data)
        test = load_image(test_xs, imgsize)
        test = np.array([img - mean_img for img in test])
        print ("[%02d/%02d] cost: %.4f" % (i, 10, sess.run(cost, feed_dict={x: test, y: test, K: 1.})))
The loss value in the training process is 1.321..., but the reconstruction loss is 16545.10441... Is there something wrong in my code?

First, make sure that your restore and save functions are in different files.
There are a few problems that I have debugged so far:
keepprob changes from 'K' to 'k' while rebuilding the graph after the restore.
You are feeding the same images as logits and labels (which doesn't make sense unless you are trying to learn an identity function).
You are calculating the training cost before saving the model and the validation/test cost after restoring the model (see the sketch after this list).
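A minimal sketch of the naming fix in the restore script (my own illustration; it assumes the placeholder names "X", "Y" and "K" given at training time):
graph = tf.get_default_graph()
x = graph.get_tensor_by_name("X:0")
y = graph.get_tensor_by_name("Y:0")
keepprob = graph.get_tensor_by_name("K:0")   # one Python name, used consistently below
test_cost = sess.run(cost, feed_dict={x: test, y: test, keepprob: 1.})   # not an undefined K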
Your code in the saver:
recon = sess.run(pred, feed_dict={x: testbatch, keepprob: 1.})
fig, axs = plt.subplots(2, n_examples, figsize=(15, 4))
for example_i in range(5):
    axs[0][example_i].matshow(np.reshape(testbatch[example_i, :], (imgsize, imgsize)), cmap=plt.get_cmap('gray'))
    axs[1][example_i].matshow(np.reshape(np.reshape(recon[example_i, ...], (dim,)) + mean_img, (imgsize, imgsize)), cmap=plt.get_cmap('gray'))
plt.show()
Your code in the restore function:
recon = sess.run(pred, feed_dict={x: test, k: 1.})
cost = sess.run(cost, feed_dict={x: test, y: test, k: 1.})
if (i % 2) == 0:
    fig, axs = plt.subplots(2, n_examples, figsize=(15, 4))
    for example_i in range(n_examples):
        axs[0][example_i].matshow(np.reshape(test[example_i, :], (imgsize, imgsize)), cmap=plt.get_cmap('gray'))
        axs[1][example_i].matshow(np.reshape(np.reshape(recon[example_i, ...], (dim,)) + mean_img, (imgsize, imgsize)), cmap=plt.get_cmap('gray'))
    plt.show()
Also, nowhere in your code are you printing/plotting the cost; even in your recovery module you are only plotting the recon variable.
If you are trying to test the encoder-decoder pair by regenerating the original image, your model is a bit too small (shallow). If that makes sense, try implementing a deeper one; if you are confused, check out this link: https://pgaleone.eu/neural-networks/deep-learning/2016/12/13/convolutional-autoencoders-in-tensorflow/
And in any case, feel free to add comments for further clarification.

Related

3-layer feedforward neural network not predicting regression values accurately

I'm pretty new to Tensorflow. Currently, I'm doing a 3-layer network, with 10 neurons in the hidden layer with ReLU, mini-batch gradient descent size of 8, L2 regularisation weight decay parameter (beta) of 0.001. The Tensorflow version I'm using is 1.14 and I'm on Python 3.6.
The issue that boggles my mind is that my predicted values and testing errors are absolutely off the charts.
For example, I plotted out the test errors and the predicted vs target values for a sample size of 50, and this is what came out.
As you can see, both plots are way off, and I don't have the slightest clue as to why.
Here's roughly how the dataset looks: the first column is discarded as it is just a counter value, and the last column is the target.
My code:
NUM_FEATURES = 7
num_neuron = 10
batch_size = 8
beta = 0.001
learning_rate = 0.001
epochs = 4000
seed = 10
np.random.seed(seed)
# read and divide data into test and train sets
total_dataset= np.genfromtxt('dataset_excel.csv', delimiter=',')
X_data, Y_data = total_dataset[1:, 1:8], total_dataset[1:, -1]
Y_data = Y_data.reshape(Y_data.shape[0], 1)
# shuffle input, ensure both are shuffled with the same order
shufflestate = np.random.get_state()
np.random.shuffle(X_data)
np.random.set_state(shufflestate)
np.random.shuffle(Y_data)
# 70% used for training, 30% used for testing
trainX = X_data[:280]
trainY = Y_data[:280]
testX = X_data[280:]
testY = Y_data[280:]
trainX = (trainX - np.mean(trainX, axis=0)) / np.std(trainX, axis=0)
# Create the model
x = tf.placeholder(tf.float32, [None, NUM_FEATURES])
y_ = tf.placeholder(tf.float32, [None, 1])
# get 50 samples for plotting of predicted vs target values
limited50testX = testX[:50]
limited50testY = testY[:50]
# Hidden
with tf.name_scope('hidden'):
    weight1 = tf.Variable(tf.truncated_normal([NUM_FEATURES, num_neuron], stddev=1.0, name='weight1'))
    bias1 = tf.Variable(tf.zeros([num_neuron]), name='bias1')
    hidden = tf.nn.relu(tf.matmul(x, weight1) + bias1)
# output
with tf.name_scope('linear'):
    weight2 = tf.Variable(tf.truncated_normal([num_neuron, 1], stddev=1.0 / np.sqrt(float(num_neuron))), name='weight2')
    bias2 = tf.Variable(tf.zeros([1]), name='bias2')
    logits = tf.matmul(hidden, weight2) + bias2
ridgeLoss = tf.square(y_ - logits)
regularisation = tf.nn.l2_loss(weight1) + tf.nn.l2_loss(weight2)
loss = tf.reduce_mean(ridgeLoss + beta * regularisation)
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
train_op = optimizer.minimize(loss)
error = tf.reduce_mean(tf.square(y_ - logits))
N = len(trainX)
idx = np.arange(N)
predicted=[]
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    train_err = []
    test_err_ = []
    for i in range(epochs):
        for batchStart, batchEnd in zip(range(0, trainX.shape[0], batch_size), range(batch_size, trainX.shape[0], batch_size)):
            train_op.run(feed_dict={x: trainX[batchStart:batchEnd], y_: trainY[batchStart:batchEnd]})
        err = error.eval(feed_dict={x: trainX, y_: trainY})
        train_err.append(err)
        if i % 100 == 0:
            print('iter %d: train error %g' % (i, train_err[i]))
        test_err = error.eval(feed_dict={x: testX, y_: testY})
        test_err_.append(test_err)
    predicted = sess.run(logits, feed_dict={x: limited50testX})
    print("predicted values: ", predicted)
    print("size of predicted values is", len(predicted))
    print("targets: ", limited50testY)
    print("size of target values is", len(limited50testY))
#plot predictions vs targets
numberList=np.arange(0, 50, 1).tolist()
predplot = plt.figure(1)
plt.plot(numberList, predicted, label='Predictions')
plt.plot(numberList, limited50testY, label='Targets')
plt.xlabel('50 samples')
plt.ylabel('Value')
plt.legend(loc='lower right')
predplot.show()
# plot training error
trainplot = plt.figure(2)
plt.plot(range(epochs), train_err)
plt.xlabel(str(epochs) + ' iterations')
plt.ylabel('Train Error')
trainplot.show()
#plot testing error
testplot = plt.figure(3)
plt.plot(range(epochs), test_err_)
plt.xlabel(str(epochs) + ' iterations')
plt.ylabel('Test Error')
testplot.show()
Not sure if that's it, but trainX is normalized whereas testX is not. You might want to use the same normalization on testX before predicting.
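A minimal sketch of that suggestion (my own illustration, reusing trainX/testX from the question and applying the training statistics to both splits):
train_mean = np.mean(trainX, axis=0)   # statistics from the training split only
train_std = np.std(trainX, axis=0)
trainX = (trainX - train_mean) / train_std
testX = (testX - train_mean) / train_std   # same transform before evaluating/predicting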

Saving Learning Progress of Neural Network in Tensorflow

In Python, Tensorflow: I trained and applied a neural network using Tensorflow, and now I need to save its progress to train it further at a later point in time.
I went through a lot of configurations using
saver = tf.train.Saver()
saver.restore()
tf.train.import_meta_graph('model/model.ckpt.meta')
tf.train.export_meta_graph('model/model.ckpt.meta')
etc...
but it always produces an error.
Here is my code. It is similar to the MNIST example code, but uses custom generated input and has a single, continuous output neuron.
x = tf.placeholder('float', [None, 4000]) # 4000 is my structure, just an example
y = tf.placeholder('float') # And I need a single, continuous output
def train_neural_network(x):
    testdata_images, testdata_labels = generate_training_data_batch(size = 10)
    #Generates test data
    data=[]
    for i in range(how_many_batches):
        data.append(generate_training_data_batch(size = 10))
    #Generates training data
    prediction = neural_network_model(x)
    # neural_network_model() is defined as a 4000x15x15x10x1 neural network
    cost = tf.reduce_mean( tf.square( tf.subtract(y, prediction) ) )
    optimizer = tf.train.AdamOptimizer(0.01).minimize(cost)
    hm_epochs = 10
    saver = tf.train.Saver()
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for epoch in range(hm_epochs):
            epoch_loss = 0
            for i in range(how_many_batches):
                epoch_x, epoch_y = data[i]
                _, c = sess.run([optimizer, cost], feed_dict = {x: epoch_x, y: epoch_y})
                epoch_loss += c
                accuracy1 = tf.subtract(y, prediction)
                result = sess.run(accuracy1, feed_dict={x: epoch_x, y: epoch_y})
                print(result)
                # This is just here so I can see what is going on
        saver.save(sess, 'model/model.ckpt')
        tf.train.export_meta_graph('model/model.ckpt.meta')
    tf.reset_default_graph()
Later in the same file I want to use the saved neural network to make some predictions with it:
train_neural_network(x)
X, Y = generate_training_data_batch(size = 1)
prediction = neural_network_model(x)
with tf.Session() as sess:
    tf.train.import_meta_graph('model/model.ckpt.meta')
    sess.run(tf.global_variables_initializer())
    thought = sess.run(prediction, feed_dict={x: X})
    print(Y, thought)
With this version I get the error message:
ValueError: Tensor("Variable:0", shape=(4000, 15), dtype=float32_ref) must be from the same graph as Tensor("Placeholder_35:0", shape=(?, 4000), dtype=float32).
I also got error messages like:
ValueError: At least two variables have the same name: Variable/Adam
I have been looking for a solution to this for a few weeks now, so I would be very relieved to finally get it sorted out.
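Not part of the original post, but a hypothetical sketch of the usual restore-by-name pattern that avoids mixing tensors from two graphs (the tensor names here are assumptions and would have to be set when building the model):
tf.reset_default_graph()
with tf.Session() as sess:
    saver = tf.train.import_meta_graph('model/model.ckpt.meta')   # rebuilds the saved graph
    saver.restore(sess, 'model/model.ckpt')                       # loads trained values, no re-initialising
    g = tf.get_default_graph()
    x_restored = g.get_tensor_by_name('input_x:0')         # assumed placeholder name
    pred_restored = g.get_tensor_by_name('prediction:0')   # assumed output name
    X, Y = generate_training_data_batch(size = 1)
    print(Y, sess.run(pred_restored, feed_dict={x_restored: X}))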

Tensorflow -- Iterating over training and validation sequentially

I have been going through the Dataset API of tensorflow to feed different datasets with ease to an RNN model.
I got everything working by following the few blogs available together with the docs on the tensorflow website. My working example did the following:
--- Train for X epochs on a training dataset -> validate on a validation dataset after all the training has concluded.
However, I'm unable to develop the following example:
--- Train for X epochs on a training dataset -> validate the training model with a validation dataset at each epoch (a bit like what Keras does).
The problem comes from the following piece of code:
train_dataset = tf.data.Dataset.from_tensor_slices((x,y)).batch(BATCH_SIZE, drop_remainder=True).repeat()
val_dataset = tf.data.Dataset.from_tensor_slices((x,y)).batch(BATCH_SIZE_VAL, drop_remainder=True).repeat()
itr = tf.data.Iterator.from_structure(train_dataset.output_types, train_dataset.output_shapes)
train_init_op = itr.make_initializer(train_dataset)
validation_init_op = itr.make_initializer(val_dataset)
When I create the iterator from_structure, I need to specify an output_shape. Obviously, the output shape of the train dataset and the validation dataset is not the same, as they have a different batch_size. However, the validation_init_op is throwing the following error, which seems counterintuitive because validation sets usually have a different batch_size:
TypeError: Expected output shapes compatible with (TensorShape([Dimension(256), Dimension(11), Dimension(74)]), TensorShape([Dimension(256), Dimension(3)])) but got dataset with output shapes (TensorShape([Dimension(28), Dimension(11), Dimension(74)]), TensorShape([Dimension(28), Dimension(3)])).
I want to use this second approach to evaluate my model and see the usual train and validation plots develop at the same time, so I can see how to improve it (stopping the learning early, etc.). With the first, simpler approach I don't get all this.
So, the question is: am I doing something wrong? Does my second approach have to be tackled differently? I can think of creating two iterators, but I don't know if that is the right approach. Also, this answer by #MatthewScarpino points to a feedable iterator, because switching between reinitializable ones makes them start all over again; however, the above error is not related to that part of the code. Maybe the reinitializable iterator is not intended to allow a different batch size for the validation set, and to only iterate over it once after training, whatever its size, without setting it in the .batch() method?
Any help is very much appreciated.
Full code for reference:
N_TIMESTEPS_X = xt.shape[0] ## The stack number
BATCH_SIZE = 256
#N_OBSERVATIONS = xt.shape[1]
N_FEATURES = xt.shape[2]
N_OUTPUTS = yt.shape[1]
N_NEURONS_LSTM = 128 ## Number of units in the LSTMCell
N_EPOCHS = 350
LEARNING_RATE = 0.001
### Define the placeholders and gather the data.
xt = xt.transpose([1,0,2])
xval = xval.transpose([1,0,2])
train_data = (xt, yt)
validation_data = (xval, yval)
N_BATCHES = train_data[0].shape[0] // BATCH_SIZE
print('The number of batches is: {}'.format(N_BATCHES))
BATCH_SIZE_VAL = validation_data[0].shape[0] // N_BATCHES
print('The validation batch size is: {}'.format(BATCH_SIZE_VAL))
## We define the placeholders as a trick so that we do not break into memory problems, associated with feeding the data directly.
'''As an alternative, you can define the Dataset in terms of tf.placeholder() tensors, and feed the NumPy arrays when you initialize an Iterator over the dataset.'''
batch_size = tf.placeholder(tf.int64)
x = tf.placeholder(tf.float32, shape=[None, N_TIMESTEPS_X, N_FEATURES], name='XPlaceholder')
y = tf.placeholder(tf.float32, shape=[None, N_OUTPUTS], name='YPlaceholder')
# Creating the two different dataset objects.
train_dataset = tf.data.Dataset.from_tensor_slices((x,y)).batch(BATCH_SIZE, drop_remainder=True).repeat()
val_dataset = tf.data.Dataset.from_tensor_slices((x,y)).batch(BATCH_SIZE_VAL, drop_remainder=True).repeat()
# Creating the Iterator type that permits to switch between datasets.
itr = tf.data.Iterator.from_structure(train_dataset.output_types, train_dataset.output_shapes)
train_init_op = itr.make_initializer(train_dataset)
validation_init_op = itr.make_initializer(val_dataset)
next_features, next_labels = itr.get_next()
After investigating the best way to do this, I came across this final implementation, which works well on my end. It is surely not the best one. In order to maintain the state, I used a feedable iterator.
AIM: This code is intended to be used when you want to train and validate at the same time, preserving the state of each iterator (i.e. validating with the newest model parameters). Together with that, the code saves the model and other things, like some information about the hyperparameters and the summaries to visualize the training and validation in Tensorboard.
Also, don't get confused: you don't need a different batch size for the training set and the validation set. That was a misconception I had. The batch sizes must be the same, AND you have to deal with the different number of batches, just passing when no more batches are left (see the short sketch after this paragraph). This is a requirement for creating the iterator, since both datasets must have the same data type and shape.
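A minimal sketch of that idea (my own illustration with the names from the question; the full code follows below):
# Both datasets use the SAME batch size, so from_structure sees compatible shapes;
# the validation set simply yields fewer batches.
train_dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(BATCH_SIZE, drop_remainder=True)
val_dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(BATCH_SIZE, drop_remainder=True)
itr = tf.data.Iterator.from_structure(train_dataset.output_types, train_dataset.output_shapes)
train_init_op = itr.make_initializer(train_dataset)
validation_init_op = itr.make_initializer(val_dataset)
# ... after training in each epoch, run the validation split until it is exhausted:
sess.run(validation_init_op, feed_dict={x: xval, y: yval})
try:
    while True:
        val_loss = sess.run(cost)
except tf.errors.OutOfRangeError:
    pass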
Hope it helps others. Just ignore the code that does not relate to your objectives. Many thanks to #kvish for all the help and time.
Code:
def RNNmodelTF(xt, yt, xval, yval, xtest, ytest):
    N_TIMESTEPS_X = xt.shape[0] ## The stack number
    BATCH_SIZE = 256
    #N_OBSERVATIONS = xt.shape[1]
    N_FEATURES = xt.shape[2]
    N_OUTPUTS = yt.shape[1]
    N_NEURONS_LSTM = 128 ## Number of units in the LSTMCell
    N_EPOCHS = 350
    LEARNING_RATE = 0.001
    ### Define the placeholders and gather the data.
    xt = xt.transpose([1,0,2])
    xval = xval.transpose([1,0,2])
    train_data = (xt, yt)
    validation_data = (xval, yval)
    N_BATCHES = train_data[0].shape[0] // BATCH_SIZE
    ## We define the placeholders as a trick so that we do not break into memory problems, associated with feeding the data directly.
    '''As an alternative, you can define the Dataset in terms of tf.placeholder() tensors, and feed the NumPy arrays when you initialize an Iterator over the dataset.'''
    batch_size = tf.placeholder(tf.int64)
    x = tf.placeholder(tf.float32, shape=[None, N_TIMESTEPS_X, N_FEATURES], name='XPlaceholder')
    y = tf.placeholder(tf.float32, shape=[None, N_OUTPUTS], name='YPlaceholder')
    # Creating the two different dataset objects.
    train_dataset = tf.data.Dataset.from_tensor_slices((x,y)).batch(BATCH_SIZE, drop_remainder=True).repeat()
    val_dataset = tf.data.Dataset.from_tensor_slices((x,y)).batch(BATCH_SIZE, drop_remainder=True).repeat()
    #################### Creating the Iterator type that permits to switch between datasets.
    handle = tf.placeholder(tf.string, shape = [])
    iterator = tf.data.Iterator.from_string_handle(handle, train_dataset.output_types, train_dataset.output_shapes)
    next_features, next_labels = iterator.get_next()
    train_val_iterator = tf.data.Iterator.from_structure(train_dataset.output_types, train_dataset.output_shapes)
    train_iterator = train_val_iterator.make_initializer(train_dataset)
    val_iterator = train_val_iterator.make_initializer(val_dataset)
    ###########################
    ### Create the graph
    cellType = tf.nn.rnn_cell.LSTMCell(num_units=N_NEURONS_LSTM, name='LSTMCell')
    inputs = tf.unstack(next_features, axis=1)
    '''inputs: A length T list of inputs, each a Tensor of shape [batch_size, input_size]'''
    RNNOutputs, _ = tf.nn.static_rnn(cell=cellType, inputs=inputs, dtype=tf.float32)
    out_weights = tf.get_variable("out_weights", shape=[N_NEURONS_LSTM, N_OUTPUTS], dtype=tf.float32, initializer=tf.contrib.layers.xavier_initializer())
    out_bias = tf.get_variable("out_bias", shape=[N_OUTPUTS], dtype=tf.float32, initializer=tf.zeros_initializer())
    predictionsLayer = tf.matmul(RNNOutputs[-1], out_weights) + out_bias
    ### Define the cost function, that will be optimized by the optimizer.
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=predictionsLayer, labels=next_labels, name='Softmax_plus_Cross_Entropy'))
    optimizer_type = tf.train.AdamOptimizer(learning_rate=LEARNING_RATE, name='AdamOptimizer')
    optimizer = optimizer_type.minimize(cost)
    ### Model evaluation
    correctPrediction = tf.equal(tf.argmax(predictionsLayer,1), tf.argmax(next_labels,1))
    accuracy = tf.reduce_mean(tf.cast(correctPrediction,tf.float32))
    confusionMatrix1 = tf.confusion_matrix(tf.argmax(next_labels,1), tf.argmax(predictionsLayer,1), num_classes=3, name='ConfMatrix')
    ## Saving variables so that we can restore them afterwards.
    saver = tf.train.Saver()
    save_dir = '/media/SecondDiskHDD/8classModels/DLmodels/tfModels/{}_{}'.format(cellType.__class__.__name__, datetime.now().strftime("%Y%m%d%H%M%S"))
    #save_dir = '/home/Desktop/tfModels/{}_{}'.format(cellType.__class__.__name__, datetime.now().strftime("%Y%m%d%H%M%S"))
    os.mkdir(save_dir)
    varDict = {'nTimeSteps': N_TIMESTEPS_X, 'BatchSize': BATCH_SIZE, 'nFeatures': N_FEATURES,
               'nNeuronsLSTM': N_NEURONS_LSTM, 'nEpochs': N_EPOCHS,
               'learningRate': LEARNING_RATE, 'optimizerType': optimizer_type.__class__.__name__}
    varDicSavingTxt = save_dir + '/varDict.txt'
    modelFilesDir = save_dir + '/modelFiles'
    os.mkdir(modelFilesDir)
    logDir = save_dir + '/TBoardLogs'
    os.mkdir(logDir)
    acc_summary = tf.summary.scalar('Accuracy', accuracy)
    loss_summary = tf.summary.scalar('Cost_CrossEntropy', cost)
    summary_merged = tf.summary.merge_all()
    with open(varDicSavingTxt, 'w') as outfile:
        outfile.write(repr(varDict))
    with tf.Session() as sess:
        tf.set_random_seed(2)
        sess.run(tf.global_variables_initializer())
        train_writer = tf.summary.FileWriter(logDir + '/train', sess.graph)
        validation_writer = tf.summary.FileWriter(logDir + '/validation')
        # initialise iterator with data
        train_val_string = sess.run(train_val_iterator.string_handle())
        cm1Total = None
        cm2Total = None
        print('Training starts!')
        for epoch in range(N_EPOCHS):
            batchAccList = []
            batchAccListVal = []
            tot_loss_train = 0
            tot_loss_validation = 0
            for batch in range(N_BATCHES):
                sess.run(train_iterator, feed_dict = {x : train_data[0], y: train_data[1], batch_size: BATCH_SIZE})
                optimizer_output, loss_value, summary, accBatch, cm1 = sess.run([optimizer, cost, summary_merged, accuracy, confusionMatrix1], feed_dict = {handle: train_val_string})
                npArrayPred = predictionsLayer.eval(feed_dict= {handle: train_val_string})
                predLabEnc = np.apply_along_axis(thresholdSet, 1, npArrayPred, value=0.5)
                npArrayLab = next_labels.eval(feed_dict= {handle: train_val_string})
                labLabEnc = np.argmax(npArrayLab, 1)
                cm2 = confusion_matrix(labLabEnc, predLabEnc)
                tot_loss_train += loss_value
                batchAccList.append(accBatch)
                try:
                    sess.run(val_iterator, feed_dict = {x: validation_data[0], y: validation_data[1], batch_size: BATCH_SIZE})
                    valLoss, valAcc, summary_val = sess.run([cost, accuracy, summary_merged], feed_dict = {handle: train_val_string})
                    tot_loss_validation += valLoss
                    batchAccListVal.append(valAcc)
                except tf.errors.OutOfRangeError:
                    pass
                if cm1Total is None and cm2Total is None:
                    cm1Total = cm1
                    cm2Total = cm2
                else:
                    cm1Total += cm1
                    cm2Total += cm2
                if batch % 10 == 0:
                    train_writer.add_summary(summary, batch)
                    validation_writer.add_summary(summary_val, batch)
            epochAcc = tf.reduce_mean(batchAccList)
            sess.run(train_iterator, feed_dict = {x : train_data[0], y: train_data[1], batch_size: BATCH_SIZE})
            epochAcc_num = sess.run(epochAcc, feed_dict = {handle: train_val_string})
            epochAccVal = tf.reduce_mean(batchAccListVal)
            sess.run(val_iterator, feed_dict = {x: validation_data[0], y: validation_data[1], batch_size: BATCH_SIZE})
            epochAcc_num_Val = sess.run(epochAccVal, feed_dict = {handle: train_val_string})
            if epoch % 10 == 0:
                print("Epoch: {}, Loss: {:.4f}, Accuracy: {:.3f}".format(epoch, tot_loss_train / N_BATCHES, epochAcc_num))
                print('Validation Loss: {:.4f}, Validation Accuracy: {:.3f}'.format(tot_loss_validation / N_BATCHES, epochAcc_num_Val))
        cmLogFile1 = save_dir + '/cm1File.txt'
        with open(cmLogFile1, 'w') as outfile:
            outfile.write(repr(cm1Total))
        cmLogFile2 = save_dir + '/cm2File.txt'
        with open(cmLogFile2, 'w') as outfile:
            outfile.write(repr(cm2Total))
        saver.save(sess, modelFilesDir + '/model.ckpt')

TensorFlow Trained Model Predicts Always Zero

I have a simple TensorFlow model and its accuracy is 1. But when I try to predict some new inputs it always returns zero (0).
import numpy as np
import tensorflow as tf
sess = tf.InteractiveSession()
# generate data
np.random.seed(10)
#inputs = np.random.uniform(low=1.2, high=1.5, size=[5000, 150]).astype('float32')
inputs = np.random.randint(low=50, high=500, size=[5000, 150])
label = np.random.uniform(low=1.3, high=1.4, size=[5000, 1])
# reverse_label = 1 - label
reverse_label = np.random.uniform(
low=1.3, high=1.4, size=[5000, 1])
reverse_label1 = np.random.randint(
low=80, high=140, size=[5000, 1])
#labels = np.append(label, reverse_label, 1)
#labels = np.append(labels, reverse_label1, 1)
labels = reverse_label1
print(inputs)
print(labels)
# parameters
learn_rate = 0.001
epochs = 100
n_input = 150
n_hidden = 15
n_output = 1
# set weights/biases
x = tf.placeholder(tf.float32, [None, n_input])
y = tf.placeholder(tf.float32, [None, n_output])
b0 = tf.Variable(tf.truncated_normal([n_hidden], stddev=0.2, seed=0))
b1 = tf.Variable(tf.truncated_normal([n_output], stddev=0.2, seed=0))
w0 = tf.Variable(tf.truncated_normal([n_input, n_hidden], stddev=0.2, seed=0))
w1 = tf.Variable(tf.truncated_normal([n_hidden, n_output], stddev=0.2, seed=0))
# step function
def returnPred(x, w0, w1, b0, b1):
    z1 = tf.add(tf.matmul(x, w0), b0)
    a2 = tf.nn.relu(z1)
    z2 = tf.add(tf.matmul(a2, w1), b1)
    h = tf.nn.relu(z2)
    return h # return the first response vector from the
y_ = returnPred(x, w0, w1, b0, b1) # predict operation
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
logits=y_, labels=y)) # calculate loss between prediction and actual
model = tf.train.AdamOptimizer(learning_rate=learn_rate).minimize(
loss) # apply gradient descent based on loss
init = tf.global_variables_initializer()
tf.Session = sess
sess.run(init) # initialize graph
for step in range(0, epochs):
    sess.run([model, loss], feed_dict={x: inputs, y: labels}) # train model
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={x: inputs, y: labels})) # print accuracy
inp = np.random.randint(low=50, high=500, size=[5, 150])
print(sess.run(tf.argmax(y_, 1), feed_dict={x: inp})) # predict some new inputs
All functions are working properly and my problem is with the last line of code. I tried just "y_" instead of "tf.argmax(y_, 1)", but that did not work either.
How can I fix that?
Regards,
There are multiple mistakes in your code.
Starting with these lines of code:
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={x: inputs, y: labels})) # print accuracy
You are performing linear regression, but you are checking accuracy with a classification (logistic-regression-style) methodology. If you want to see how your linear regression network is performing, print the loss and ensure that it is decreasing after each epoch of training (see the sketch below).
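A minimal sketch of that check (my own illustration, reusing the names from the question):
for step in range(epochs):
    _, loss_val = sess.run([model, loss], feed_dict={x: inputs, y: labels})
    if step % 10 == 0:
        print("epoch %d: loss %.4f" % (step, loss_val))   # should go down over time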
To see what that accuracy code is actually working with, run the following:
print(y_.get_shape()) # Outputs (?, 1)
There is only one output column, so both tf.argmax(y, 1) and tf.argmax(y_, 1) will always return [0, 0, ...]. As a result your accuracy will always be 1.0. Delete those three lines of code.
Next, to get the outputs, just run the following code:
print(sess.run(y_, feed_dict={x: inp}))
But since your data is random, don't expect a good set of outputs.

Why do I get a low accuracy in this neural network (tensorflow)?

I have made a convolutional neural network with tensorflow; I've trained it and tested it (about 98% accuracy)... I saved the model with
saver = tf.train.Saver()
saver.save(sess, 'model.ckpt')
Then I restored it with the saver, but I always get an accuracy lower than 50%... why?
Here's the code:
import tensorflow as tf
import matplotlib.pyplot as plt
import pickle
import numpy as np
with open('X_train.pickle', 'rb') as y:
    u = pickle._Unpickler(y)
    u.encoding = 'latin1'
    X_train = u.load()
with open('X_test.pickle', 'rb') as y:
    u = pickle._Unpickler(y)
    u.encoding = 'latin1'
    X_test = u.load()
X_test = np.array(X_test).reshape(-1, 2500)
with open('y_train.pickle', 'rb') as y:
    u = pickle._Unpickler(y)
    u.encoding = 'latin1'
    y_train = u.load()
with open('y_test.pickle', 'rb') as y:
    u = pickle._Unpickler(y)
    u.encoding = 'latin1'
    y_test = u.load()
n_classes = 3
batch_size = 100
x = tf.placeholder('float', [None, 2500])
y = tf.placeholder('float')
keep_rate = 0.8
keep_prob = tf.placeholder(tf.float32)
def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1,1,1,1], padding='SAME')
def maxpool2d(x):
    # size of window movement of window
    return tf.nn.max_pool(x, ksize=[1,2,2,1], strides=[1,2,2,1], padding='SAME')
def convolutional_neural_network(x):
    weights = {'W_conv1':tf.Variable(tf.random_normal([5,5,1,32])),
               'W_conv2':tf.Variable(tf.random_normal([5,5,32,64])),
               'W_fc':tf.Variable(tf.random_normal([13*13*64,1024])),
               'out':tf.Variable(tf.random_normal([1024, n_classes]))}
    biases = {'b_conv1':tf.Variable(tf.random_normal([32])),
              'b_conv2':tf.Variable(tf.random_normal([64])),
              'b_fc':tf.Variable(tf.random_normal([1024])),
              'out':tf.Variable(tf.random_normal([n_classes]))}
    x = tf.reshape(x, shape=[-1, 50, 50, 1])
    conv1 = tf.nn.relu(conv2d(x, weights['W_conv1']) + biases['b_conv1'])
    conv1 = maxpool2d(conv1)
    conv2 = tf.nn.relu(conv2d(conv1, weights['W_conv2']) + biases['b_conv2'])
    conv2 = maxpool2d(conv2)
    fc = tf.reshape(conv2,[-1, 13*13*64])
    fc = tf.nn.relu(tf.matmul(fc, weights['W_fc'])+biases['b_fc'])
    #fc = tf.nn.dropout(fc, keep_rate)
    output = tf.matmul(fc, weights['out'])+biases['out']
    return output
def use_neural_network(input_data):
    prediction = convolutional_neural_network(x)
    sess.run(tf.global_variables_initializer())
    result = (sess.run(tf.argmax(prediction.eval(feed_dict={x:[input_data]}),1)))
    correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
    print('Accuracy:',accuracy.eval({x:X_test, y:y_test}))
    return result
with tf.Session() as sess:
    c = convolutional_neural_network(x)
    saver = tf.train.Saver()
    saver.restore(sess, "model.ckpt")
    sample = X_train[432].reshape(2500)
    res = use_neural_network(sample)
    if res == [0]: print('Go straight')
    elif res == [1]: print('Turn right')
    else: print('Turn left')
    img = sample.reshape(50,50)
    plt.imshow(img)
    plt.show()
    sample = X_train[1222].reshape(2500)
    res = use_neural_network(sample)
    if res == [0]: print('Go straight')
    elif res == [1]: print('Turn right')
    else: print('Turn left')
    img = sample.reshape(50,50)
    plt.imshow(img)
    plt.show()
    sample = X_train[2986].reshape(2500)
    res = use_neural_network(sample)
    if res == [0]: print('Go straight')
    elif res == [1]: print('Turn right')
    else: print('Turn left')
    img = sample.reshape(50,50)
    plt.imshow(img)
    plt.show()
The problem can't be overfitting, since I'm testing it with elements of the training dataset...
I'm quite sure that the problem is the saver, but I can't figure out how to solve it...
When you train a model using tensorflow, make sure you are using tensorflow version 1.0 or above. Once you have trained a model with a recent version, three files will be created, named as follows:
modelname.data
This is the TensorBundle collection; it saves the values of all variables.
modelname.index
The .index file stores the list of the saved variable names and shapes.
modelname.meta
This file describes the saved graph structure and includes the GraphDef, SaverDef, and so on.
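Not part of the original answer, but for reference a minimal sketch of the plain tf.train.Saver calls that write and read exactly these three files (the 'modelname' prefix is hypothetical):
saver = tf.train.Saver()
saver.save(sess, 'modelname')   # writes modelname.data-00000-of-00001, modelname.index, modelname.meta
# later, in a fresh program:
saver = tf.train.import_meta_graph('modelname.meta')   # rebuilds the saved graph structure
saver.restore(sess, 'modelname')                        # loads the variable values from .index/.data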
To reload/restore your model, use model.load(modelname); it not only loads your model, but the accuracy also won't fluctuate.
Note: Please use TFLearn. TFLearn introduces a high-level API that makes neural network building and training fast and easy. For more details visit http://tflearn.org/getting_started/
The simple and generalized way of building and using a CNN with tensorflow is as follows:
Construct the network:
Here you will create n convolution and max-pool layers and fully connected layers, then apply whatever activation function you want and return your model object.
Train the model:
Fit your training data to your model using model.fit(X, Y).
Save the model:
Save your model using model.save(modelName).
Reload the model:
Reload your model using model.load(modelName).
This is the generic and simplified way to build and use a CNN (a minimal sketch follows below).
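A minimal TFLearn sketch of those four steps (my own illustration; the layer sizes and file name are made up):
import tflearn
# Construct the network
net = tflearn.input_data(shape=[None, 50, 50, 1])
net = tflearn.conv_2d(net, 32, 5, activation='relu')
net = tflearn.max_pool_2d(net, 2)
net = tflearn.fully_connected(net, 1024, activation='relu')
net = tflearn.fully_connected(net, 3, activation='softmax')
net = tflearn.regression(net, optimizer='adam', loss='categorical_crossentropy')
model = tflearn.DNN(net)
model.fit(X, Y, n_epoch=10, batch_size=100)   # Train the model
model.save('my_cnn.tflearn')                  # Save the model
model.load('my_cnn.tflearn')                  # Reload the model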
Hope it may help you :)
