PyTorch: using CUDA prevents optimization from working

I have a very simple optimization: a straight line. Here is the code:
import torch
import torch.nn as nn
import torch.optim as optim

use_gpu = torch.cuda.is_available()
learning_rate = 0.05
loss_function = nn.MSELoss()
train_inputs = torch.FloatTensor([1, 2, 3, 4, 5, 6]).T.unsqueeze(0)
y_truth = torch.FloatTensor([10, 15, 20, 25, 30, 35]).unsqueeze(0)
W = torch.nn.Parameter(torch.rand(1), requires_grad=True)
b = torch.nn.Parameter(torch.rand(1), requires_grad=True)
optimizer = optim.Adam([b, W], lr=learning_rate)
# if use_gpu:
#     y_truth = y_truth.cuda()
#     W = W.cuda()
#     b = b.cuda()
#     train_inputs = train_inputs.cuda()
for epoch in range(1000):
    optimizer.zero_grad()
    y_preds = b + W * train_inputs
    loss = loss_function(y_truth, y_preds)
    loss.backward()
    optimizer.step()
    if epoch % 100 == 0:
        print(loss.data, W.data, b.data)
That code works fine if I do not put the data on the GPU. If I uncomment the if use_gpu block, the code runs, but it does not minimize anything and the variables do not update.
I would expect the code to work similarly on the GPU or not. Any idea what is happening?
Thanks!

Any idea what is happening?
Yes: the parameters you are training, W and b, stayed on the host (CPU).
When you did
W = W.cuda()
b = b.cuda()
you only rebound the names W and b to GPU copies; the actual parameters you handed to the optimizer were left behind on the CPU, so the values you compute and print never reflect the optimization.
If you wish to use the GPU for this, you could try:
W = torch.nn.Parameter(torch.rand(1).cuda())
b = torch.nn.Parameter(torch.rand(1).cuda())
instead.
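For reference, a minimal GPU version of the whole snippet could look like the sketch below (an illustration of the same idea, not taken verbatim from the answer above: the parameters are created on the device before the optimizer is built, and the data tensors are moved there as well):

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

train_inputs = torch.FloatTensor([1, 2, 3, 4, 5, 6]).unsqueeze(0).to(device)
y_truth = torch.FloatTensor([10, 15, 20, 25, 30, 35]).unsqueeze(0).to(device)

# Create the parameters directly on the target device, then build the optimizer from them.
W = torch.nn.Parameter(torch.rand(1, device=device))
b = torch.nn.Parameter(torch.rand(1, device=device))
optimizer = optim.Adam([b, W], lr=0.05)
loss_function = nn.MSELoss()

for epoch in range(1000):
    optimizer.zero_grad()
    y_preds = b + W * train_inputs
    loss = loss_function(y_preds, y_truth)
    loss.backward()
    optimizer.step()
    if epoch % 100 == 0:
        print(loss.item(), W.item(), b.item())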

Related

Why is my DQN training with tensorflow becoming slower each iteration?

I'm trying to implement a DQN with experience replay in TensorFlow. It seems to be working, i.e. my loss is decreasing. However, as the training loop runs, I've noticed that each training iteration becomes slower and slower. It is as if my TensorFlow graph is growing bigger and bigger and slowing down the training. I cannot see myself what the problem with my code is. Can any TensorFlow guru out there point it out? I have made a scaled-down version of my code here which operates on random data but produces the same issue.
import numpy as np
import tensorflow as tf

# Function which initializes tensorflow weights for a feed-forward NN.
def InitWeights(LayerSizes):
    # Make tensorflow input/output placeholders.
    X = tf.placeholder(shape=(None, LayerSizes[0]), dtype=tf.float32, name='InputData')
    y = tf.placeholder(shape=(None, LayerSizes[-1]), dtype=tf.float32, name='OutputData')
    # Initialize dictionaries for weights and biases.
    W = {}
    b = {}
    for ii in range(len(LayerSizes) - 1):
        layername = 'layer%s' % ii
        with tf.variable_scope(layername):
            ny = LayerSizes[ii]
            nx = LayerSizes[ii + 1]
            # Weights (initialized with Xavier initialization).
            W['Weights_' + layername] = tf.get_variable(
                name='Weights_' + layername,
                shape=(ny, nx),
                initializer=tf.contrib.layers.xavier_initializer(),
                dtype=tf.float32
            )
            # Bias (initialized with Xavier initialization).
            b['Bias_' + layername] = tf.get_variable(
                name='Bias_' + layername,
                shape=(nx,),
                initializer=tf.contrib.layers.xavier_initializer(),
                dtype=tf.float32
            )
    return W, b, X, y
# Function which defines the feed-forward neural network operation.
def FeedForward(X, W, b):
    a = X
    # Loop over all layers of the network.
    for ii in range(len(W)):
        # Use the name of each layer as index.
        layername = 'layer%s' % ii
        # Weighted sum: z = input*W + b
        z = tf.add(tf.matmul(a, W['Weights_' + layername], name='WeightedSum_z_' + layername), b['Bias_' + layername])
        # Pass through activation fcn: a = h(z); the last layer stays linear.
        if ii == len(W) - 1:
            a = z
        else:
            a = tf.nn.relu(z, name='activation_a_' + layername)
    return a
# Function used for experience replay.
def ExperienceReplay(s, a, r, s_prime, gamma, TermState, X, y, yhat, yhatNN2, train_op, loss, sess):
    # Inputs:
    # s         - state(s)
    # a         - action(s)
    # r         - reward(s)
    # s_prime   - new state(s)
    # gamma     - discount factor
    # TermState - scalar of which action is terminating
    # X         - tensorflow placeholder for network inputs
    # y         - tensorflow placeholder for network outputs
    # yhat      - tensorflow operation for feed forward with NN 1
    # yhatNN2   - tensorflow operation for feed forward with NN 2
    # train_op  - tensorflow training operation
    # loss      - tensorflow fcn for calculating loss
    # sess      - tensorflow session
    # Forward pass through NN2 using s_prime to find max(Q(s',a',theta')).
    Q = sess.run(yhatNN2, feed_dict={X: s_prime})
    # Actions that NN1 thinks are best in the s_prime state.
    a_argmax = np.argmax(sess.run(yhat, feed_dict={X: s_prime}), axis=1)
    # Values from NN2's opinion about the actions NN1 picked.
    Qm = np.zeros(len(r))
    for obs in range(len(r)):
        Qm[obs] = Q[obs, a_argmax[obs]]
    # First make all targets equal to NN1's approximation of Q (so the error is 0 in all unobserved cases).
    Targets = sess.run(yhat, feed_dict={X: s})
    # If the action was experienced, change the target to either the real reward or the discounted future reward.
    for obs in range(len(r)):
        # If the action was episode-terminating, use only the reward as target.
        if int(a[obs]) == TermState:
            Targets[obs, int(a[obs])] = r[obs]
        # Otherwise use the discounted future reward.
        else:
            Targets[obs, int(a[obs])] = r[obs] + gamma * Qm[obs]
    # Gradient descent one step on NN1 weights.
    sess.run(train_op, feed_dict={X: s, y: Targets})
    # Calculate the losses.
    loss_val = sess.run(loss, feed_dict={X: s, y: Targets})
    meanloss = np.mean(loss_val)
    return loss_val, meanloss
if __name__ == "__main__":
    #### Hyperparameter settings
    N = 64           # Minibatch size during training
    gamma = 0.99     # Discount rate
    C = 100          # How many iterations between sync NN2 = NN1
    lr = 1e-7        # Learning rate of NN during training
    nstates = 256    # Number of possible states
    nactions = 256   # Number of possible actions
    TermState = 255  # Which state ends the episode
    """
    Initialize tensorflow session and create one NN with two sets of weights
    """
    # Initialize & configure action-value function Q with random weights theta.
    LayerSizes = [nstates, 1024, 1024, nactions]
    W, b, X, y = InitWeights(LayerSizes)
    # Define loss function to optimize. Here: quadratic loss fcn, (Outputdata-a)^2.
    yhat = FeedForward(X, W, b)
    loss = tf.reduce_sum(tf.square(y - yhat), reduction_indices=[0])
    # Define optimizer to use when minimizing the loss function.
    all_variables = tf.trainable_variables()
    optimizer = tf.train.AdamOptimizer(learning_rate=lr)
    train_op = optimizer.minimize(loss, var_list=all_variables)
    # Initialize target action-value function Qhat with random weights theta_ = theta.
    with tf.device('/gpu:0'):
        W2 = {}
        b2 = {}
        # Make a hard copy of the tensorflow weights and biases.
        for key in W:
            W2[key] = tf.Variable(W[key].initialized_value())
        for key in b:
            b2[key] = tf.Variable(b[key].initialized_value())
        yhatNN2 = FeedForward(X, W2, b2)
    # Start tf session and initialize variables.
    sess = tf.Session()
    sess.run(tf.global_variables_initializer())
    ## Generate random data representing state transitions <s,a,r,s'>.
    # Random states
    Ds = np.random.rand(100000, nstates) > 0.5
    Ds = Ds.astype(np.float32)
    # Random actions
    Da = np.random.randint(0, nstates, (100000, 1)).astype(np.float32)
    # Random rewards
    Dr = np.random.rand(100000, 1).astype(np.float32)
    # Random new states
    Ds_prime = np.random.rand(100000, nstates) > 0.5
    Ds_prime = Ds_prime.astype(np.float32)
    """
    Pretrain network and report time every C iterations
    """
    import time
    t0 = time.time()
    for i in range(100000):
        # Randomly pick a minibatch to use.
        MemsToUse = np.random.choice(len(Dr), N)
        s = Ds[MemsToUse, :]
        a = Da[MemsToUse, 0]
        r = Dr[MemsToUse, 0]
        sprime = Ds_prime[MemsToUse, :]
        # Experience replay.
        loss_val, meanloss = ExperienceReplay(s, a, r, sprime, gamma, TermState, X, y, yhat, yhatNN2, train_op, loss, sess)
        # Every C iterations copy NN2 = NN1.
        if (i % C) == 0:
            t1 = time.time()
            print('iter: %i meanloss: %0.5f iteration took %0.2f s' % (i, meanloss, t1 - t0))
            t0 = time.time()
            with tf.device('/gpu:0'):
                for key in W:
                    W2[key] = tf.Variable(W[key].initialized_value())
                for key in b:
                    b2[key] = tf.Variable(b[key].initialized_value())
Update: after timing individual segments of the code, it appears that the first line of my experience replay function:
Q = sess.run(yhatNN2, feed_dict={X : s_prime})
is what causes most, if not all, of the slowdown. I don't understand the logic behind why that is; there are several feed-forward passes in the program, but only this one seems to cause a problem.
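Editorial note (a hypothesis, not part of the original post): every C iterations the loop above creates brand-new tf.Variable objects for W2 and b2, so the graph keeps growing and the session has more and more nodes to extend and manage, which matches the observed per-iteration slowdown. A common pattern is to build explicit sync ops once, outside the loop, and run them whenever NN2 should be copied from NN1; a minimal sketch under that assumption:

# Build the sync ops once, after yhatNN2 is defined.
sync_ops = []
for key in W:
    sync_ops.append(tf.assign(W2[key], W[key]))
for key in b:
    sync_ops.append(tf.assign(b2[key], b[key]))
sync_op = tf.group(*sync_ops)

# ...then, inside the training loop, every C iterations:
# sess.run(sync_op)

With this approach W2 and b2 keep referring to the same variables that yhatNN2 was built from, so no new graph nodes are created during training.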

Not found: Key Variable_<x> not found in checkpoint

I am trying to save a trained model and use it later in another instance (function). But somehow this throws a "variable not found" error. After reading through SO and other forums, I understand the problem is the way I store it.
import random
import numpy as np
import tensorflow as tf
from tensorflow.contrib import rnn

# (build_dataset and training_data are defined earlier in the original code.)
dictionary, reverse_dictionary = build_dataset(training_data)
vocab_size = len(dictionary)
n_input = 3
n_hidden = 512
# RNN output node weights and biases
weights = {'out': tf.Variable(tf.random_normal([n_hidden, vocab_size]))}
biases = {'out': tf.Variable(tf.random_normal([vocab_size]))}
# tf Graph input
x = tf.placeholder("float", [None, n_input, 1])
y = tf.placeholder("float", [None, vocab_size])

# RNN implementation in Tensorflow
def RNN(x, weights, biases):
    x = tf.reshape(x, [-1, n_input])
    x = tf.split(x, n_input, 1)
    rnn_cell = rnn.BasicLSTMCell(n_hidden)
    outputs, states = rnn.static_rnn(rnn_cell, x, dtype=tf.float32)
    return tf.matmul(outputs[-1], weights['out']) + biases['out']

pred = RNN(x, weights, biases)
learning_rate = 0.001
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
optimizer = tf.train.RMSPropOptimizer(learning_rate=learning_rate).minimize(cost)
correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
# Initializing the variables
init = tf.global_variables_initializer()
training_iters = 1000
display_step = 500
saver = tf.train.Saver()
# Launch the graph
with tf.Session() as session:
    session.run(init)
    step = 0
    offset = random.randint(0, n_input + 1)
    end_offset = n_input + 1
    acc_total = 0
    loss_total = 0
    while step < training_iters:
        if offset > (len(training_data) - end_offset):
            offset = random.randint(0, n_input + 1)
        symbols_in_keys = [[dictionary[str(training_data[i])]] for i in range(offset, offset + n_input)]
        symbols_in_keys = np.reshape(np.array(symbols_in_keys), [-1, n_input, 1])
        symbols_out_onehot = np.zeros([vocab_size], dtype=float)
        symbols_out_onehot[dictionary[str(training_data[offset + n_input])]] = 1.0
        symbols_out_onehot = np.reshape(symbols_out_onehot, [1, -1])
        _, acc, loss, onehot_pred = session.run([optimizer, accuracy, cost, pred],
                                                feed_dict={x: symbols_in_keys, y: symbols_out_onehot})
        loss_total += loss
        acc_total += acc
        if (step + 1) % display_step == 0:
            print("Iter= " + str(step + 1) + ", Average Loss= " +
                  "{:.6f}".format(loss_total / display_step) + ", Average Accuracy= " +
                  "{:.2f}%".format(100 * acc_total / display_step))
            acc_total = 0
            loss_total = 0
            symbols_in = [training_data[i] for i in range(offset, offset + n_input)]
            symbols_out = training_data[offset + n_input]
            symbols_out_pred = reverse_dictionary[int(tf.argmax(onehot_pred, 1).eval())]
            print("%s - [%s] vs [%s]" % (symbols_in, symbols_out, symbols_out_pred))
        step += 1
        offset += (n_input + 1)
    saver.save(session, 'userLocation/Model')
The model files are generated, but when I try to restore the model using
saver = tf.train.Saver()
with tf.Session() as restored_session:
    saver.restore(restored_session, 'userLocation/Model')
Error
tensorflow.python.framework.errors_impl.NotFoundError: Key Variable_3 not found in checkpoint
[[Node: save_1/RestoreV2_7 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save_1/Const_0, save_1/RestoreV2_7/tensor_names, save_1/RestoreV2_7/shape_and_slices)]]
Any pointers as to what I am missing while saving?
I will explain this in 2 different parts:
When you save a model in TensorFlow, it saves the graph in one file (usually with the extension .meta) and the variable tensors in other files (the index and data files).
While importing, you have to do the same 2-step process: a) import the graph first, b) then create a session and restore the variables into it.
Here is some sample code:
import tensorflow as tf
import numpy as np

tf.set_random_seed(10)
# define graph location in a variable
meta_file = 'userLocation/Model.meta'
# import the graph
ns = tf.train.import_meta_graph(meta_file, clear_devices=True)
# create a session
with tf.Session().as_default() as sess:
    # restore the variables
    ns.restore(sess, meta_file[0:len(meta_file) - 5])
    # for example, if you have an 'x' tensor in the graph
    x = tf.get_default_graph().get_tensor_by_name("x:0")
    .
    .
    .
    # Further processing/prediction etc.
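As an additional debugging aid (not from the original answer): you can list what is actually stored in the checkpoint and compare it with the variables in the graph you are restoring into, for example:

import tensorflow as tf

# Variable names and shapes saved in the checkpoint.
for name, shape in tf.train.list_variables('userLocation/Model'):
    print(name, shape)

# Variables present in the current default graph.
for v in tf.global_variables():
    print(v.name, v.shape)

A "Key Variable_3 not found in checkpoint" error usually means the names on the two lists do not match, for example because extra, differently numbered variables were created before the second Saver was built.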

TensorFlow for one-hot classification: cost is always 0

This follows on from this post (not mine): TensorFlow for binary classification
I had a similar issue and converted my data to use one-hot encoding. However, I'm still getting a cost of 0. Interestingly, the accuracy is correct (90%) when I feed my training data back into it.
Code below:
import numpy as np
import tensorflow as tf

# (x_vals, y_vals, y_vals_onehot, x_vals_test, y_vals_test_onehot come from earlier data-prep code.)

# Set parameters
learning_rate = 0.02
training_iteration = 2
batch_size = int(np.size(y_vals) / 300)
display_step = 1
numOfFeatures = 20  # 784 if MNIST
numOfClasses = 2    # 10 if MNIST dataset

# TF graph input
x = tf.placeholder("float", [None, numOfFeatures])
y = tf.placeholder("float", [None, numOfClasses])

# Create a model
# Set model weights to random numbers: https://www.tensorflow.org/api_docs/python/tf/random_normal
W = tf.Variable(tf.random_normal(shape=[numOfFeatures, 1]))  # Weight vector
b = tf.Variable(tf.random_normal(shape=[1, 1]))              # Constant

# Construct a linear model
model = tf.nn.softmax(tf.matmul(x, W) + b)  # Softmax

# Minimize error using cross entropy
cost_function = -tf.reduce_sum(y * tf.log(model))
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost_function)

# Initializing the variables
init = tf.global_variables_initializer()

# Launch the graph
with tf.Session() as sess:
    sess.run(init)
    # Training cycle
    for iteration in range(training_iteration):
        avg_cost = 0.
        total_batch = int(len(x_vals) / batch_size)
        # Loop over all batches
        for i in range(total_batch):
            batch_xs = x_vals[i * batch_size:(i * batch_size) + batch_size]
            batch_ys = y_vals_onehot[i * batch_size:(i * batch_size) + batch_size]
            # Fit training using batch data
            sess.run(optimizer, feed_dict={x: batch_xs, y: batch_ys})
            # Compute average loss
            avg_cost += sess.run(cost_function, feed_dict={x: batch_xs, y: batch_ys}) / total_batch
        # Display logs per iteration step
        if iteration % display_step == 0:
            print("Iteration:", '%04d' % (iteration + 1), "cost=", "{:.9f}".format(avg_cost))
    print("Tuning completed!")

    # Evaluation function
    correct_prediction = tf.equal(tf.argmax(model, 1), tf.argmax(y, 1))
    #correct_prediction = tf.equal(model, y)
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
    # Test the model
    print("Accuracy:", accuracy.eval({x: x_vals_test, y: y_vals_test_onehot}))
Your output for cost is using:
"{:.9f}".format(avg_cost)
Therefore, maybe you can replace the 9 with a bigger number to check whether the cost is merely very small rather than exactly 0.
OK, here is what I found in the end.
Replace:
b = tf.Variable(tf.random_normal(shape=[1,1]))
with:
b = tf.Variable(tf.zeros([1]))
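As a side note (an editorial addition, not part of the original answers): with numOfClasses = 2 the weight matrix would normally have shape [numOfFeatures, numOfClasses], and a numerically safer formulation keeps the logits unnormalized and lets tf.nn.softmax_cross_entropy_with_logits apply the softmax internally, which avoids log(0) issues in the hand-written cross entropy. A minimal sketch under those assumptions:

# Hypothetical reshaping of the model for a proper 2-class softmax.
W = tf.Variable(tf.random_normal(shape=[numOfFeatures, numOfClasses]))
b = tf.Variable(tf.random_normal(shape=[numOfClasses]))

logits = tf.matmul(x, W) + b  # unnormalized class scores
cost_function = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost_function)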

ResourceExhaustedError using CNN

While trying to run my 3D convolutional neural network, I am getting the following error. What could be the reason?
ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[54080,1024]
[[Node: Variable_10/Adam/Assign = Assign[T=DT_FLOAT, _class=["loc:@Variable_10"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/gpu:0"](Variable_10/Adam, zeros_4)]]
This is the code I have used:
import tensorflow as tf
import numpy as np

IMG_SIZE_PX = 50
SLICE_COUNT = 20
n_classes = 2
batch_size = 10

x = tf.placeholder('float')
y = tf.placeholder('float')
keep_rate = 0.8

def conv3d(x, W):
    return tf.nn.conv3d(x, W, strides=[1, 1, 1, 1, 1], padding='SAME')

def maxpool3d(x):
    return tf.nn.max_pool3d(x, ksize=[1, 2, 2, 2, 1], strides=[1, 2, 2, 2, 1], padding='SAME')

def convolutional_neural_network(x):
    weights = {'W_conv1': tf.Variable(tf.random_normal([3, 3, 3, 1, 32])),
               'W_conv2': tf.Variable(tf.random_normal([3, 3, 3, 32, 64])),
               'W_fc': tf.Variable(tf.random_normal([54080, 1024])),
               'out': tf.Variable(tf.random_normal([1024, n_classes]))}
    biases = {'b_conv1': tf.Variable(tf.random_normal([32])),
              'b_conv2': tf.Variable(tf.random_normal([64])),
              'b_fc': tf.Variable(tf.random_normal([1024])),
              'out': tf.Variable(tf.random_normal([n_classes]))}
    x = tf.reshape(x, shape=[-1, IMG_SIZE_PX, IMG_SIZE_PX, SLICE_COUNT, 1])
    conv1 = tf.nn.relu(conv3d(x, weights['W_conv1']) + biases['b_conv1'])
    conv1 = maxpool3d(conv1)
    conv2 = tf.nn.relu(conv3d(conv1, weights['W_conv2']) + biases['b_conv2'])
    conv2 = maxpool3d(conv2)
    fc = tf.reshape(conv2, [-1, 54080])
    fc = tf.nn.relu(tf.matmul(fc, weights['W_fc']) + biases['b_fc'])
    fc = tf.nn.dropout(fc, keep_rate)
    output = tf.matmul(fc, weights['out']) + biases['out']
    return output

much_data = np.load('muchdata-50-50-20.npy')
# If you are working with the basic sample data, use maybe 2 instead of 100 here... you don't have enough data to really do this
train_data = much_data[:-100]
validation_data = much_data[-100:]

def train_neural_network(x):
    prediction = convolutional_neural_network(x)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y))
    optimizer = tf.train.AdamOptimizer(learning_rate=1e-3).minimize(cost)
    hm_epochs = 10
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        successful_runs = 0
        total_runs = 0
        for epoch in range(hm_epochs):
            epoch_loss = 0
            for data in train_data:
                total_runs += 1
                try:
                    X = data[0]
                    Y = data[1]
                    _, c = sess.run([optimizer, cost], feed_dict={x: X, y: Y})
                    epoch_loss += c
                    successful_runs += 1
                except Exception as e:
                    # I am passing for the sake of notebook space, but we are getting 1 shaping issue from one
                    # input tensor. Not sure why, will have to look into it. Guessing it's
                    # one of the depths that doesn't come to 20.
                    pass
                    #print(str(e))
            print('Epoch', epoch + 1, 'completed out of', hm_epochs, 'loss:', epoch_loss)
            correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
            accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
            print('Accuracy:', accuracy.eval({x: [i[0] for i in validation_data], y: [i[1] for i in validation_data]}))
        print('Done. Finishing accuracy:')
        print('Accuracy:', accuracy.eval({x: [i[0] for i in validation_data], y: [i[1] for i in validation_data]}))
        print('fitment percent:', successful_runs / total_runs)

train_neural_network(x)
I am running this with the tensorflow-gpu version. I am using a GTX 970M and have installed CUDA and the cuDNN files properly. When running the last command, I get the error shown above. Kindly help!
You have run out of memory for some reason.
It could be that some other application is using your GPU (for example, another TensorFlow session that is still active). Check whether that is the case (you can use nvidia-smi to monitor GPU memory).
If it is not, the cause is most likely the size of your model relative to your GPU's memory. What you can do is launch it in CPU mode, list all your variables (for example with tf.global_variables()), do the math on how much memory they represent, and see whether the model fits on your GPU.
Until you have done this, there is not much more advice I can provide.
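For illustration, a rough way to do that math (an editorial sketch, assuming float32 variables at 4 bytes per element; keep in mind that Adam adds roughly two extra slots per variable, and activations need memory too):

import numpy as np
import tensorflow as tf

total_bytes = 0
for v in tf.global_variables():
    # Number of elements times 4 bytes for float32.
    n_elems = int(np.prod(v.get_shape().as_list()))
    total_bytes += n_elems * 4
print('Approx. parameter memory: %.1f MB' % (total_bytes / 1e6))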

How to output a prediction in Tensorflow?

I am trying to use a TensorFlow DNN for a Kaggle competition. The data is about 100 columns of categorical data, 29 columns of numerical data, and 1 column for the output. I split it into training and testing sets with X and y using scikit-learn's train_test_split function, where X is a list of each row without the "id" or the value that needs to be predicted, and y is the value that needs to be predicted. I then built the model, shown below:
import tensorflow as tf
import numpy as np
import time
import pickle

with open('pickle.pickle', 'rb') as f:
    trainX, trainy, testX, testy = pickle.load(f)

trainX = np.array(trainX)
trainy = np.array(trainy)
trainy = trainy.reshape(trainy.shape[0], 1)
testX = np.array(testX)
testy = np.array(testy)
print(trainX.shape)
print(trainy.shape)
testX = testX.reshape(testX.shape[0], 130)
testy = testy.reshape(testy.shape[0], 1)
print(testX.shape)
print(testy.shape)

n_nodes_hl1 = 256
n_nodes_hl2 = 256
n_nodes_hl3 = 256
n_classes = 1
batch_size = 100

# Matrix = h X w
X = tf.placeholder('float', [None, len(trainX[0])])
y = tf.placeholder('float')

def model(data):
    hidden_1_layer = {'weights': tf.Variable(tf.random_normal([trainX.shape[1], n_nodes_hl1])),
                      'biases': tf.Variable(tf.random_normal([n_nodes_hl1]))}
    hidden_2_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl1, n_nodes_hl2])),
                      'biases': tf.Variable(tf.random_normal([n_nodes_hl2]))}
    hidden_3_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl2, n_nodes_hl3])),
                      'biases': tf.Variable(tf.random_normal([n_nodes_hl3]))}
    output_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl3, n_classes])),
                    'biases': tf.Variable(tf.random_normal([n_classes]))}
    # (input_data * weights) + biases
    l1 = tf.add(tf.matmul(data, hidden_1_layer['weights']), hidden_1_layer['biases'])
    l1 = tf.nn.sigmoid(l1)
    l2 = tf.add(tf.matmul(l1, hidden_2_layer['weights']), hidden_2_layer['biases'])
    l2 = tf.nn.sigmoid(l2)
    l3 = tf.add(tf.matmul(l2, hidden_3_layer['weights']), hidden_3_layer['biases'])
    l3 = tf.nn.sigmoid(l3)
    output = tf.matmul(l3, output_layer['weights']) + output_layer['biases']
    return output

def train(x):
    pred = model(x)
    #loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(pred, y))
    loss = tf.reduce_mean(tf.square(pred - y))
    optimizer = tf.train.AdamOptimizer(0.01).minimize(loss)
    epochs = 1
    with tf.Session() as sess:
        sess.run(tf.initialize_all_variables())
        print('Beginning Training \n')
        for e in range(epochs):
            timeS = time.time()
            epoch_loss = 0
            i = 0
            while i < len(trainX):
                start = i
                end = i + batch_size
                batch_x = np.array(trainX[start:end])
                batch_y = np.array(trainy[start:end])
                _, c = sess.run([optimizer, loss], feed_dict={x: batch_x, y: batch_y})
                epoch_loss += c
                i += batch_size
            done = time.time() - timeS
            print('Epoch', e + 1, 'completed out of', epochs, 'loss:', epoch_loss, "\nTime:", done, 'seconds\n')
        correct = tf.equal(tf.arg_max(pred, 1), tf.arg_max(y, 1))
        acc = tf.reduce_mean(tf.cast(correct, 'float'))
        print("Accuracy:", acc.eval({x: testX, y: testy}))

train(X)
Output for 1 epoch:
Epoch 1 completed out of 1 loss: 1498498282.5
Time: 1.3765859603881836 seconds
Accuracy: 1.0
I do realize that the loss is very high, and I am using 1 epoch just for testing purposes, and yes, I know my code is quite messy. But all I want to do is print out a prediction. How would I do that? I know that I need to feed a list of features for X, but I just don't understand how to do it. I also don't quite understand why my accuracy is at 1.0, so if you have any suggestions for that, or any ways to change my code, I would be more than happy to listen to any ideas.
Thanks in advance
To get a prediction you just have to evaluate pred, which is the operation that defines the output of the model.
How do you do it? With pred.eval(). But you need an input to evaluate its prediction, so you have to provide a feed_dict dictionary to eval() with the sample (or samples) you want to process.
The resulting code looks like:
predictions = pred.eval(feed_dict = {x:testX})
Notice how this is very similar to acc.eval({x:testX, y:testy}), because the idea is the same. You have an operation (acc in this case) which needs some input to be evaluated, and you can evaluate it either by calling acc.eval() or sess.run(acc) with the corresponding feed_dict with the necessary inputs.
The simplest way would be to use the existing session while training (between iterations):
print(sess.run(pred, {x: X_example}))
where X_example is some numpy example tensor.
The line below will give you scores for every class; for example, if you have 3 classes, it will give you an array of shape 1x3.
Considering you want the prediction for a single data point X_test, you can do the following:
output = sess.run(pred, {x: X_test})
The index of the maximum number in output is your prediction, so to get it directly we can modify the statement:
output = sess.run(tf.argmax(pred, 1), {x: X_test})
print("your prediction for X_test is:", output[0])
The other thing you can do is:
output = sess.run(pred, {x: X_test})
output = np.argmax(output)
print("your prediction for X_test is:", output)
