AttributeError: 'DataModuleClass' object has no attribute 'training_dataset' - pytorch

I am trying to learn PyTorch Lightning by writing a very simple DataModuleClass. After defining prepare_data() and setup(), I want to check that these functions work, so I try to get the training and validation datasets from setup(). But I get an error:
AttributeError: 'DataModuleClass' object has no attribute 'training_dataset'
Code
def prepare_data(self):
    x = np.random.uniform(0, 10, 10)
    e = np.random.normal(0, self.sigma, len(x))
    # Making target or labels
    y = x + e
    # Merging x and e for 2 features
    X = np.transpose(np.array([x, e]))
    # Converting numpy array to Tensor
    self.x_train_tensor = torch.from_numpy(X).float().to(device)
    self.y_train_tensor = torch.from_numpy(y).float().to(device)
    training_dataset = TensorDataset(self.x_train_tensor, self.y_train_tensor)
    self.training_dataset = training_dataset

def setup(self):
    data = self.training_dataset
    self.train_data, self.val_data = random_split(data, [8, 2])
    return self.train_data, self.val_data

def train_dataloader(self):
    return DataLoader(self.train_data)

def val_dataloader(self):
    return DataLoader(self.val_data)

obj = DataModuleClass()
print(obj.setup())
Could you tell me why I am getting this error?

From the way the code looks to me:
The attribute self.training_dataset of DataModuleClass is initialized in prepare_data(), and setup() needs it on its first line; but you called setup() without calling prepare_data() first.
If prepare_data() is expected to run every time a DataModuleClass object is created, then it is best to call it at the end of __init__, like so:
def __init__(self, other_params):
    # ..... all your code previously in __init__
    self.prepare_data()  # put this in the last line of this function
But if you don't need that, then you need to call prepare_data() before setup():
obj = DataModuleClass()
obj.prepare_data()
print(obj.setup())
Or call prepare_data() inside setup() itself:
def setup(self):
    self.prepare_data()
    data = self.training_dataset
    self.train_data, self.val_data = random_split(data, [8, 2])
    return self.train_data, self.val_data
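For completeness: when the DataModule is used with a Trainer rather than exercised by hand, Lightning calls these hooks for you, so neither needs to be invoked manually. A sketch, assuming a LightningModule named model exists (note that Lightning passes a stage argument to setup(), so the hook would need the signature setup(self, stage=None)):

import pytorch_lightning as pl

trainer = pl.Trainer(max_epochs=10)
# `model` is a hypothetical LightningModule; Lightning calls
# prepare_data() and setup() on the datamodule automatically.
trainer.fit(model, datamodule=DataModuleClass())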
Edit 1: See the actual value of self.train_data and self.val_data
The objects returned from setup() are torch.utils.data.dataset.Subset objects.
There are basically two ways to get the tensors out:
1. Treat them like lists
train_data, val_data = obj.setup()
print(train_data[0])
2. Use a for loop
train_data, val_data = obj.setup()
for data in train_data:
    print(data)
Note
I'm not sure whether you'll get the tensors or a TensorDataset. If it's the latter, then use the same trick again, like:
train_data, val_data = obj.setup()
train_tensor_data = train_data[0]
print(train_tensor_data[0])
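To make the note above concrete, here is a minimal, self-contained sketch (independent of the question's class) showing that indexing a Subset delegates to the underlying TensorDataset, which returns a (features, label) tuple of tensors directly:

import torch
from torch.utils.data import TensorDataset, random_split

X = torch.randn(10, 2)
y = torch.randn(10)
dataset = TensorDataset(X, y)
train_data, val_data = random_split(dataset, [8, 2])

# Each item of the Subset is a (features, label) tuple of tensors.
features, label = train_data[0]
print(features, label)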

Related

Working with large multiple datasets where each dataset contains multiple values - Pytorch

I'm training a neural network and have over 15 GB of data inside a folder. The folder contains multiple pickle files, each holding two lists, and each list holds multiple values.
This looks like the following:
dataset_folder/
    file.pickle
    file_2.pickle
    ...
    file_n.pickle
Each file_*.pickle contains two variable-length lists (list x and list y).
How can I load all the data to train the model without running into memory issues?
To implement a custom dataset class in PyTorch, we need to implement three methods so the PyTorch loader can work with the data:
__init__
__len__
__getitem__
Let's go through how to implement each of them separately.
__init__
def __init__(self):
    # Original data has the following format:
    """
    dict_object =
    {
        "x":[],
        "y":[]
    }
    """
    DIRECTORY = "data/raw"
    self.dataset_file_name = os.listdir(DIRECTORY)
    self.dataset_file_name_index = 0
    self.dataset_length = 0
    self.prefix_sum_idx = list()
    # Loop over each file and calculate the length of the overall dataset
    # (you might need to check that file_name is a file)
    for file_name in os.listdir(DIRECTORY):
        with open(f'{DIRECTORY}/{file_name}', "rb") as openfile:
            dict_object = pickle.load(openfile)
            curr_page_sum = len(dict_object["x"]) + len(dict_object["y"])
            self.prefix_sum_idx.append(curr_page_sum)
            self.dataset_length += curr_page_sum
    # Prefix sum, so we know which file each global index falls in.
    for i in range(1, len(self.prefix_sum_idx)):
        self.prefix_sum_idx[i] = self.prefix_sum_idx[i] + self.prefix_sum_idx[i - 1]
    assert self.prefix_sum_idx[-1] == self.dataset_length
    self.x = []
    self.y = []
As you can see above, the main idea is to use a prefix sum to treat the whole dataset as one: whenever you need to access a specific index later, you simply look into prefix_sum_idx to see which file that index appears in.
For example, say we need to access index 150. Thanks to the prefix sum, we now know that index 150 exists in the second .pickle file. We still need a fast mechanism to find where that index lies in prefix_sum_idx; this is explained in __getitem__.
__getitem__
def read_pickle_file(self, idx):
    file_name = self.dataset_file_name[idx]
    dict_object = dict()
    with open(f'{YOUR_DIRECTORY}/{file_name}', "rb") as openfile:
        dict_object = pickle.load(openfile)
    self.x = dict_object['x']
    self.y = ...  # some logic here
    # Some logic here....

def __getitem__(self, idx):
    # Similar to C++ std::upper_bound - O(log n)
    temp = bisect.bisect_right(self.prefix_sum_idx, idx)
    self.read_pickle_file(temp)
    # Offset within the file found above: subtract the previous file's cumulative length.
    local_idx = idx - (self.prefix_sum_idx[temp - 1] if temp > 0 else 0)
    return self.x[local_idx], self.y[local_idx]
Check the bisect_right() docs for details on how it works; in short, it returns the rightmost position in the sorted list at which the given element can be inserted while keeping the list sorted. In our approach, we're interested only in answering the question "which file should I access in order to get the appropriate data?". More importantly, it does so in O(log n).
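To make this concrete, here is a small, self-contained illustration (the file lengths are made up for the example) of how bisect_right maps a global index to a file and a local offset:

import bisect

# Hypothetical cumulative lengths of three pickle files.
prefix_sum_idx = [100, 250, 400]

idx = 150
file_idx = bisect.bisect_right(prefix_sum_idx, idx)  # -> 1, i.e. the second file
local_idx = idx - (prefix_sum_idx[file_idx - 1] if file_idx > 0 else 0)  # -> 50
print(file_idx, local_idx)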
__len__
def __len__(self):
    return self.dataset_length
To get the length of our dataset, we loop through each file and accumulate the results, as shown in __init__.
The full code sample goes like this:
import pickle
import os
import bisect
import torch
from torch.utils.data import Dataset, DataLoader

class dataset(Dataset):
    def __init__(self):
        # Original data has the following format:
        """
        dict_object =
        {
            "x":[],
            "y":[]
        }
        """
        DIRECTORY = "data/raw"
        self.dataset_file_name = os.listdir(DIRECTORY)
        self.dataset_file_name_index = 0
        self.dataset_length = 0
        self.prefix_sum_idx = list()
        # Loop over each file and calculate the length of the overall dataset
        # (you might need to check that file_name is a file)
        for file_name in os.listdir(DIRECTORY):
            with open(f'{DIRECTORY}/{file_name}', "rb") as openfile:
                dict_object = pickle.load(openfile)
                curr_page_sum = len(dict_object["x"]) + len(dict_object["y"])
                self.prefix_sum_idx.append(curr_page_sum)
                self.dataset_length += curr_page_sum
        # Prefix sum, so we know which file each global index falls in.
        for i in range(1, len(self.prefix_sum_idx)):
            self.prefix_sum_idx[i] = self.prefix_sum_idx[i] + self.prefix_sum_idx[i - 1]
        assert self.prefix_sum_idx[-1] == self.dataset_length
        self.x = []
        self.y = []

    def read_pickle_file(self, idx):
        file_name = self.dataset_file_name[idx]
        dict_object = dict()
        with open(f'{YOUR_DIRECTORY}/{file_name}', "rb") as openfile:
            dict_object = pickle.load(openfile)
        self.x = dict_object['x']
        self.y = ...  # some logic here
        # Some logic here....

    def __getitem__(self, idx):
        # Similar to C++ std::upper_bound - O(log n)
        temp = bisect.bisect_right(self.prefix_sum_idx, idx)
        self.read_pickle_file(temp)
        local_idx = idx - (self.prefix_sum_idx[temp - 1] if temp > 0 else 0)
        return self.x[local_idx], self.y[local_idx]

    def __len__(self):
        return self.dataset_length

large_dataset = dataset()
train_size = int(0.8 * len(large_dataset))
validation_size = len(large_dataset) - train_size
train_dataset, validation_dataset = torch.utils.data.random_split(large_dataset, [train_size, validation_size])
validation_loader = DataLoader(validation_dataset, batch_size=64, num_workers=4, shuffle=False)
train_loader = DataLoader(train_dataset, batch_size=64, num_workers=4, shuffle=False)
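One possible refinement, not part of the original answer: __getitem__ as written re-opens and re-reads a pickle file for every single item, even though consecutive indices usually fall in the same file. Caching the most recently loaded file (for instance, by repurposing the otherwise-unused dataset_file_name_index attribute) avoids most of that I/O. A minimal sketch, assuming y can be read directly from the dict:

def read_pickle_file(self, idx):
    # Skip the disk read when the requested file is already loaded.
    if idx == self.dataset_file_name_index and self.x:
        return
    self.dataset_file_name_index = idx
    file_name = self.dataset_file_name[idx]
    with open(f'data/raw/{file_name}', "rb") as openfile:
        dict_object = pickle.load(openfile)
    self.x = dict_object['x']
    self.y = dict_object['y']  # assumption: y is stored directly in the dict

With num_workers > 0, each DataLoader worker holds its own copy of the dataset object, so this per-instance cache still works; each worker simply caches its own file.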

How to apply a function to convert the paths to arrays using cv2 in tensorflow data pipeline?

I'm trying to load two lists containing image paths and their corresponding labels. Something like this:
p0 = ['a','b',....] #paths to images .tif format
p1 = [1,2,3,......] #paths to images .tif format
labels = [0,1,1,...] #corresponding labels w.r.t both the lists
I used tf.data in the following way:
def TFData(p_0, p_1, batch_size, labels=None, is_train=True):
    dset = tf.data.Dataset.from_tensor_slices((p_0, p_1))
    if labels is not None:
        label = tf.data.Dataset.from_tensor_slices(labels)
        AUTO = tf.data.experimental.AUTOTUNE
        final_dset = tf.data.Dataset.zip((dset, label))
        final_dset = final_dset.batch(batch_size, drop_remainder=is_train).prefetch(AUTO)
        return final_dset
This returns:
<PrefetchDataset shapes: (((64,), (64,)), (64,)), types: ((tf.string, tf.string), tf.int32)>
My question is: how do I apply a function that converts the paths to arrays using cv2, given that the images are .tif files, so that the result will be:
<PrefetchDataset shapes: (((64,256,256,3), (64,256,256,3)), (64,)), types: ((tf.float64, tf.float64), tf.int32)>
I'm using dataset.map. However, it's throwing an error:
def to_array(p_0):
    im_1 = cv2.imread(p_0, 1)
    #im = tfio.experimental.image.decode_tiff(paths)
    im_1 = cv2.resize(im_1, (img_w, img_h))  # img_w = img_h = 256
    im_1 = np.asarray(im_1, dtype=np.float64)
    im_1 /= 255
    return im_1

def parse_fn(p_0):
    [p_0,] = tf.py_function(to_array, [p_0], [tf.float64])
    return p_0

def TFData(p_0, p_1, batch_size, labels=None, is_train=True):
    dset_1 = tf.data.Dataset.from_tensor_slices(p_0)
    dset_1 = dset_1.map(parse_fn)
    dset_2 = tf.data.Dataset.from_tensor_slices(p_1)
    dset_2 = dset_2.map(parse_fn)
    if labels is not None:
        label = tf.data.Dataset.from_tensor_slices(labels)
        AUTO = tf.data.experimental.AUTOTUNE
        final_dset = tf.data.Dataset.zip((dset_1, dset_2, label))
        final_dset = final_dset.batch(batch_size, drop_remainder=is_train).prefetch(AUTO)
        return final_dset
print(train_data) #where train_data is defined as TFData()
<PrefetchDataset shapes: ((<unknown>, <unknown>), (64,)), types: ((tf.float64, tf.float64), tf.int32)>
This throws an error:
for (t, p), l in train_data.as_numpy_iterator():
    print(t)
    print(p)
    print(l)
    print(type(t))
    break
SystemError: <built-in function imread> returned NULL without setting an error
[[{{node EagerPyFunc}}]] [Op:IteratorGetNext]
Any help will be highly appreciated
I think your problem is in cv2.imread.
Have you checked, outside of these functions, that it reads and plots the data correctly?
Please try with -1 instead:
im_1 = cv2.imread(p_0, -1)
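A further note, offered as an assumption rather than a confirmed diagnosis: inside tf.py_function, to_array receives an eager string tensor, not a Python str, and cv2.imread cannot handle that, which would explain the "returned NULL without setting an error" message. If the flag change alone doesn't help, decoding the tensor to a string first may; a minimal sketch of to_array rewritten that way:

def to_array(p_0):
    # p_0 arrives as an eager tf.string tensor inside tf.py_function;
    # convert it to a plain Python string before handing it to OpenCV.
    path = p_0.numpy().decode("utf-8")
    im_1 = cv2.imread(path, -1)
    im_1 = cv2.resize(im_1, (img_w, img_h))  # img_w = img_h = 256, as in the question
    return np.asarray(im_1, dtype=np.float64) / 255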

How to implement TensorBoard v2 (tf.contrib.summary) with while_loop?

I want to log the loss and accuracy values evaluated inside the nested body function of a while_loop during training. This is the structure: I have a class, and a method of this class builds the graph using a while_loop (build_graph()). Another method calls build_graph() and then runs the session. It works, or it seems to work. However, I would like to use TensorBoard to check whether loss and accuracy are actually improving, but I'm not able to summarize those tensors. I've tried to define a tf.contrib.summary.create_file_writer('summary') and a graph and pass them to build_graph() as parameters, so that the body function can see them. I have checked the list coming from tf.contrib.summary.all_summary_ops() during graph execution, and it isn't empty. However, when I open TensorBoard I get "No dashboards are active for the current data set.", and no graph either. I am aware that tf.summary does not work in while_loop, but it seems that tf.contrib.summary does.
Here is a working example
import tensorflow as tf
import sys
import datamanagement

class myNet:
    def __init__(self):
        self.varlist = ["x", "y"]
        self.data = []
        self.hsize = [10, 10]
        self.batch_size = 10
        self.tr_mainsteps = 1000
        self.learnrate = 0.001
        self.sourcedatafile = "XYfit.csv"  # source file
        # Dataset parameters
        self.seq_params = {'dim': len(self.varlist),
                           'batch_size': self.batch_size,
                           'shuffle': True,
                           'filepath': self.sourcedatafile}
        # Dataset from CSV file
        self.dataset = datamanagement.CSVDataSet(**self.seq_params).finaldataset
        # Iterator on the CSV file
        self.dataiterator = self.dataset.make_initializable_iterator()
        # Optimizer
        self.optim = tf.train.RMSPropOptimizer(learning_rate=self.learnrate)
        # Official creation of the graph
        self.graph = tf.get_default_graph()
        with self.graph.as_default():
            # Writer creation
            self.writer = tf.contrib.summary.create_file_writer('./summary')
            with self.writer.as_default():
                tf.contrib.summary.always_record_summaries()

    def mymodel(self, Zinp, reuse=False):
        # This function builds the graph of the network
        with tf.variable_scope("mymod/net", reuse=reuse):
            h1 = tf.layers.dense(Zinp, self.hsize[0], activation=tf.nn.leaky_relu, name='h1')
            h2 = tf.layers.dense(h1, self.hsize[1], activation=tf.nn.leaky_relu, name='h2')
            out = tf.layers.dense(h2, len(self.varlist), activation=None, name='final')  # None means linear activation
        return out

    def _trainepoch(self, ind):
        with self.writer.as_default():
            # Real data tensor from CSV file
            self.realdata = self.dataiterator.get_next()
            # Random input vector
            self.Znoise = tf.random_uniform([self.batch_size, len(self.varlist)], minval=-1., maxval=1.)
            # Model and output tensor
            self.output = self.mymodel(self.Znoise, reuse=tf.AUTO_REUSE)
            # Loss
            self.loss = tf.losses.mean_squared_error(self.realdata, self.output)
            tf.contrib.summary.scalar("loss", self.loss)
            # Trainable variables
            t_vars = tf.trainable_variables()
            # Evaluation of the weight gradients
            grad = self.optim.compute_gradients(self.loss, var_list=t_vars)
            # Update weights based on gradients
            return self.optim.apply_gradients(grad), tf.contrib.summary.all_summary_ops()

    def _train_buildgraph(self):
        def body(ind, ops):
            train_up, ops = self._trainepoch(ind)
            # Ensure that the update is applied before continuing.
            with tf.control_dependencies([train_up]):
                ind = ind + 1
            return ind, ops

        def cond(ind, ops):
            return ind < self.tr_mainsteps

        return tf.while_loop(cond, body, [tf.constant(0), [tf.Variable(False)]])

    def config_run(self, trepoch=50, testNet=False):
        self.tr_mainsteps = trepoch  # Number of adversarial training epochs
        with self.graph.as_default():
            with self.writer.as_default():
                tr_loop, summary_ops = self._train_buildgraph()
        # Graph execution
        with self.graph.as_default():
            with self.writer.as_default():
                with tf.Session() as sess:
                    sess.run(tf.initializers.global_variables())
                    sess.run(self.dataiterator.initializer)
                    tf.contrib.summary.initialize(
                        graph=tf.get_default_graph()
                    )
                    sess.run([summary_ops, tr_loop, summary_ops])

def main(argv):
    hmodel = myNet()
    hmodel.config_run()

if __name__ == "__main__":
    main(sys.argv[1:])
Can someone help me?

How to get Tensorflow Served model to pull from passed in input and not local batch file?

I am currently trying to get a seq2seq model working with TF Serving. I thought I had it right; however, it seems I was mistaken. I originally trained the model via local text-file input, read in as batches. Now I want to pass in a sentence and have the summarization returned to me.
I have been successful in getting the model saved and served, and I am now able to view the prediction on my front-end page; however, the result is still pulled from my local text file and not from the sentence passed in as a query param.
My input is currently one sentence sent as a query param, but the result actually displayed is still pulled from my text file, even though I mapped batch_x to the value of my arg[1], which I have verified is the correct expected input.
Does anyone see what I am doing wrong? Clearly I have misunderstood the process I was supposed to take.
An important note to make here: if I modify the value of the argument passed in and call the Python file directly, I get the correct results. However, when I make the same call to the frozen model being served, I always get the same prediction response regardless of what is sent in.
This is how I am freezing the model (notice the mapping of inputs_dict "X" to batch_x; I believe the issue is something I am doing incorrectly here):
pickle_fn = 'args.pickle'
folder = os.path.dirname(os.path.abspath(__file__)) + '/pickle'
pickle_filepath = os.path.join(folder, pickle_fn)
with open(pickle_filepath, "rb") as f:
    args = pickle.load(f)
print("Loading dictionary...")
word_dict, reversed_dict, article_max_len, summary_max_len = build_dict("valid", args.toy)
print("Loading validation dataset...")
# The below call will pull from the arg passed when "serve" is used
valid_x, valid_y = build_dataset("serve", word_dict, article_max_len, summary_max_len, args.toy)
valid_x_len = list(map(lambda x: len([y for y in x if y != 0]), valid_x))
with tf.Session() as sess:
    print("Loading saved model...")
    model = Model(reversed_dict, article_max_len, summary_max_len, args, forward_only=True)
    saver = tf.train.Saver(tf.global_variables())
    ckpt = tf.train.get_checkpoint_state("./saved_model/")
    saver.restore(sess, ckpt.model_checkpoint_path)
    batches = batch_iter(valid_x, valid_y, args.batch_size, 1)
    #print(valid_x, file=open("art_working_inp.txt", "a"))
    print("Writing summaries to 'result.txt'...")
    for batch_x, batch_y in batches:
        batch_x_len = list(map(lambda x: len([y for y in x if y != 0]), batch_x))
        valid_feed_dict = {
            model.batch_size: len(batch_x),
            model.X: batch_x,
            model.X_len: batch_x_len,
        }
        prediction = sess.run(model.prediction, feed_dict=valid_feed_dict)
        prediction_output = list(map(lambda x: [reversed_dict[y] for y in x], prediction[:, 0, :]))
        # Save out our model
        cwd = os.getcwd()
        path = os.path.join(cwd, 'simple')
        inputs_dict = {
            "X": tf.convert_to_tensor(batch_x)
        }
        outputs_dict = {
            "prediction": tf.convert_to_tensor(prediction_output)
        }
        tf.saved_model.simple_save(
            sess, path, inputs_dict, outputs_dict
        )
        print('Model Saved')
        # End save model code
        # Save results to file
        with open("result.txt", "a") as f:
            for line in prediction_output:
                summary = list()
                for word in line:
                    if word == "</s>":
                        break
                    if word not in summary:
                        summary.append(word)
                print(" ".join(summary), file=f)
    print('Summaries are saved to "result.txt"...')
My call to the server for inference is below. Regardless of what I put into data, it always spits out the same prediction, which is the one I originally passed in when exporting the model.
def do_inference(hostport):
    """Tests PredictionService with concurrent requests.

    Args:
        hostport: Host:port address of the PredictionService.
    Returns:
        pred values, ground truth labels, processing time
    """
    # Connect to server
    host, port = hostport.split(':')
    channel = grpc.insecure_channel(hostport)
    stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)
    # Prepare request object
    request = predict_pb2.PredictRequest()
    request.model_spec.name = 'saved_model'
    # Get the input data from our arg
    jsn_inp = sys.argv[1]
    data = json.loads(jsn_inp)['tokenized']
    data = np.array(data)
    request.inputs['X'].CopyFrom(
        tf.contrib.util.make_tensor_proto(data, shape=data.shape, dtype=tf.int64))
    #print(request)
    result = stub.Predict(request, 10.0)  # 10 seconds
    return result
Should this be of any use, this is how the dataset is built. I modified the build_dataset function so it uses just the arg passed in, but this didn't resolve the problem either. I thought perhaps something similar to JavaScript closures was occurring, so I decided to pull the data in this way.
def build_dataset(step, word_dict, article_max_len, summary_max_len, toy=False):
    if step == "train":
        article_list = get_text_list(train_article_path, toy)
        title_list = get_text_list(train_title_path, toy)
    elif step == "valid":
        article_list = get_text_list(valid_article_path, toy)
        title_list = get_text_list(valid_title_path, toy)
    elif step == "serve":
        arg_to_use = sys.argv[1] if ("tokenized" in sys.argv[1]) else sys.argv[2]
        article_list = [json.loads(arg_to_use)["tokenized"]]
    else:
        raise NotImplementedError
    if step != "serve":
        x = list(map(lambda d: word_tokenize(d), article_list))
        x = list(map(lambda d: list(map(lambda w: word_dict.get(w, word_dict["<unk>"]), d)), x))
        x = list(map(lambda d: d[:article_max_len], x))
        x = list(map(lambda d: d + (article_max_len - len(d)) * [word_dict["<padding>"]], x))
        print(x, file=open("input_values.txt", "a"))
        y = list(map(lambda d: word_tokenize(d), title_list))
        y = list(map(lambda d: list(map(lambda w: word_dict.get(w, word_dict["<unk>"]), d)), y))
        y = list(map(lambda d: d[:(summary_max_len - 1)], y))
    else:
        x = article_list
        #x = list(map(lambda d: word_tokenize(d), article_list))
        #x = list(map(lambda d: list(map(lambda w: word_dict.get(w, word_dict["<unk>"]), d)), x))
        x = list(map(lambda d: d[:article_max_len], x))
        x = list(map(lambda d: d + (article_max_len - len(d)) * [word_dict["<padding>"]], x))
        y = list()
    return x, y
SignatureDef info (one thing that has me a bit concerned is the Const below, but I'm not sure that means anything; I'm going to look at that right now):
signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['X'] tensor_info:
        dtype: DT_INT64
        shape: (1, 50)
        name: Const:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['prediction'] tensor_info:
        dtype: DT_STRING
        shape: (1, 11)
        name: Const_1:0
  Method name is: tensorflow/serving/predict
OK, so it seems the Const issue was my problem, or rather it directed me to what the real issue was. The real source of my problem was that I was passing my values into tf.convert_to_tensor rather than the tf.placeholders themselves. Therefore, by modifying the logic to the entries below when saving out the model, I was able to get the proper response when sending my inputs in. As you can see, I also had to feed in the original batch_size and X_len as well. Hope others find this helpful.
inputs_dict = {
    "batch_size": tf.convert_to_tensor(model.batch_size),
    "X": tf.convert_to_tensor(model.X),
    "X_len": tf.convert_to_tensor(model.X_len),
}
outputs_dict = {
    "prediction": tf.convert_to_tensor(model.prediction)
}
This yielded a much better looking SignatureDef:
signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['X'] tensor_info:
        dtype: DT_INT32
        shape: (-1, 50)
        name: Placeholder:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['prediction'] tensor_info:
        dtype: DT_INT32
        shape: (-1, 10, -1)
        name: decoder/decoder/transpose_1:0
  Method name is: tensorflow/serving/predict

How to print the values of tensors inside a while loop?

I'm very new to tensorflow, and I couldn't figure this one out.
I have this while loop:
def process_tree_tf(n_child, reprs, weights, bias, embed_dim, activation=tf.nn.relu):
    n_child, reprs = n_child, reprs
    parent_idxs = generate_parents_numpy(n_child)
    loop_idx = reprs.shape[0] - 1
    loop_vars = loop_idx, reprs, parent_idxs, weights, embed_dim

    def loop_condition(loop_ind, *_):
        return tf.greater(0, loop_idx)

    def loop_body(loop_ind, reprs, parent_idxs, weights, embed_dim):
        x = reprs[loop_ind]
        x_expanded = tf.expand_dims(x, axis=-1)
        w = weights
        out = tf.squeeze(tf.add(tf.matmul(x_expanded, w, transpose_a=True), bias))
        activated = activation(out)
        par_idx = parent_idxs[loop_ind]
        reprs = update_parent(reprs, par_idx, embed_dim, activated)
        reprs = tf.Print(reprs, [reprs])  # This doesn't work
        loop_ind = loop_ind - 1
        return loop_ind, reprs, parent_idxs, weights, embed_dim

    return tf.while_loop(loop_condition, loop_body, loop_vars)
And I'm evaluating it this way:
embed_dim = 2
hidden_dim = 2
n_nodes = 4
batch = 2
reprs = np.ones((n_nodes, embed_dim + hidden_dim))
n_child = np.array([1, 1, 1, 0])
weights = np.ones((embed_dim + hidden_dim, hidden_dim))
bias = np.ones(hidden_dim)
with tf.Session() as sess:
    _, r, *_ = process_tree_tf(n_child, reprs, weights, bias, embed_dim, activation=tf.nn.relu)
    print(r.eval())
I want to check the value of reprs inside the while loop, but tf.Print doesn't seem to work, and print just tells me it's a tensor and gives me its shape.
How do I go about doing this?
Thank you so much!
Take a look at this webpage: https://www.tensorflow.org/api_docs/python/tf/Print
You can see that tf.Print is an identity operator with the side effect of printing data when evaluating. You should therefore use this line to print:
reprs = tf.Print(reprs, [reprs])
Hope this helps, and good luck!
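To illustrate, here is a minimal standalone sketch (TF 1.x graph mode, separate from the question's code) showing tf.Print emitting output from inside a tf.while_loop body once the loop's result is actually evaluated:

import tensorflow as tf

def body(i, acc):
    acc = acc + i
    # Identity op whose side effect prints to stderr on every iteration.
    acc = tf.Print(acc, [i, acc], message="i, acc = ")
    return i + 1, acc

def cond(i, acc):
    return i < 5

_, total = tf.while_loop(cond, body, [tf.constant(0), tf.constant(0)])

with tf.Session() as sess:
    print(sess.run(total))  # prints 10; the per-iteration lines go to stderr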
The approach suggested by rmeertens is the one I think is correct. I would just add (as a response to your comments) that if something prints Tensor("while/update_parent:0", ...), that implies that that value in the graph is not being evaluated.
You are likely seeing that as the output of your print(r.eval()) statement, not the tf.Print() statement.
Note that the output of tf.Print() appears in PyCharm (the IDE I am using) in red, while the output of a normal Python print appears in black. So the tf.Print() output looks like a warning message. It could be that it is indeed printing, but you are simply overlooking it.
