Not able to restore a TensorFlow graph using Python 3 multiprocessing

I want to load an existing TensorFlow network from several processes using the multiprocessing library, so I can run inference on different cores simultaneously.
def spawn_process(x):
    g = tf.Graph()
    sess = tf.Session(graph=g)
    with g.as_default():
        meta_graph = tf.train.import_meta_graph('model.meta')
        meta_graph.restore(sess, tf.train.latest_checkpoint(chkp_dir))
        x_ph = tf.get_collection('x')[0]     # placeholder tensor used to pass in new x's to run inference on
        pred = tf.get_collection('pred')[0]  # tensor responsible for computing the prediction given x
        prediction = sess.run(pred, feed_dict={x_ph: x})
    return prediction
This is essentially the function I want to pass to Pool.map so that inference runs in parallel.
The function works fine with plain map, and it looks like this:
predictions = list(map(spawn_process, range(10)))
predictions = [spawn_process(x) for x in range(10)]
Both of the above work as expected.
But when I try this instead, it fails: each process hangs right before the meta_graph.restore line, and I'm stumped.
p = multiprocessing.Pool(4)
predictions = p.map(spawn_process, range(10))
p.close()
p.join()
I don't know why this isn't working with TensorFlow; this pattern normally works for me for any kind of parallel computation. Execution stops right before the meta_graph.restore line, and all the processes hang there.
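One workaround that is often suggested for this kind of hang (not part of the original question) is to start the workers with the 'spawn' start method, so the child processes do not inherit TensorFlow's internal threads and locks from the parent process. A minimal sketch, assuming spawn_process is defined as above:

import multiprocessing as mp

if __name__ == '__main__':
    # 'spawn' launches fresh interpreters, so each worker imports and
    # initializes TensorFlow on its own instead of inheriting forked state.
    ctx = mp.get_context('spawn')
    with ctx.Pool(4) as p:
        predictions = p.map(spawn_process, range(10))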

Related

Forward-pass for PyTorch Module in Cython

I am creating a DeepSet-style neural network where each element in the set is itself a set. Since these sets are not vectorizable, I represent my input as lists of lists of torch.Tensors.
For the forward pass of my network I don't think there is any alternative to using for-loops/list comprehensions, and these loops can iterate a fairly large number of times. As a result, training my networks is very time-consuming because the loops run in Python.
I have tried TorchScript without a major improvement in running time (about 40% faster). I hope that rewriting my loops in Cython might yield better results, but I can't figure out how to combine PyTorch nn.Modules with Cython. Any suggestions?
Here is my module:
from typing import List

import torch
from torch import nn

class DSSN(nn.Module):
    def __init__(self, pd_rho, dh_rho, fc_network, device):
        super(DSSN, self).__init__()
        self.pers_lay1 = pd_rho
        self.pers_lay2 = dh_rho
        self.fc = fc_network

    def forward(self: nn.Module, x: List[List[torch.Tensor]]) -> torch.Tensor:
        x = torch.stack([torch.stack([self.pers_lay1(pd) for pd in sample]) for sample in x])
        x = self.pers_lay2(x)
        x = self.fc(x)
        return x
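If the bottleneck really is the Python-level looping, one alternative that is sometimes suggested instead of Cython (not part of the original question) is to pad the inner sets to a common size and run pers_lay1 once on a single batched tensor, masking out the padding afterwards. A rough sketch of the padding step, assuming each inner element is an (n_points, feat_dim) tensor:

import torch

def pad_and_stack(samples, feat_dim):
    # samples: list of lists of (n_points_i, feat_dim) tensors.
    # Returns a dense (batch, max_sets, max_points, feat_dim) tensor plus a
    # boolean mask marking which entries are real data rather than padding.
    max_sets = max(len(sample) for sample in samples)
    max_pts = max(pd.shape[0] for sample in samples for pd in sample)
    batch = torch.zeros(len(samples), max_sets, max_pts, feat_dim)
    mask = torch.zeros(len(samples), max_sets, max_pts, dtype=torch.bool)
    for i, sample in enumerate(samples):
        for j, pd in enumerate(sample):
            batch[i, j, :pd.shape[0]] = pd
            mask[i, j, :pd.shape[0]] = True
    return batch, mask

pers_lay1 would then need to accept the batched tensor and respect the mask (e.g. a masked sum or mean instead of a plain aggregation), so this is only worthwhile if that rewrite is feasible for your layers.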

How to use multiprocessing in PyTorch?

I'm trying to use PyTorch with a complex loss function. In order to accelerate the code, I hope I can use the PyTorch multiprocessing package.
As a first trial, I feed 10x1 features into the NN and get a 10x4 output.
After that, I want to pass the 10x4 values into a function to do some calculation. (The calculation will become more complex in the future.)
After calculating, the function returns a 10x1 array in total. This array is set as NN_energy and used to calculate the loss function.
Besides, I also want to know whether there is another way to create a differentiable (backward-able) array to store NN_energy, instead of using
NN_energy = net(Data_in)[0:10,0]
Thanks a lot.
Full Code:
import torch
import numpy as np
from torch.autograd import Variable
from torch import multiprocessing

def func(msg, BOP):
    ans = (BOP[msg][0] + BOP[msg][1] / BOP[msg][2]) * BOP[msg][3]
    return ans

class Net(torch.nn.Module):
    def __init__(self, n_feature, n_hidden_1, n_hidden_2, n_output):
        super(Net, self).__init__()
        self.hidden_1 = torch.nn.Linear(n_feature,  n_hidden_1)  # hidden layer
        self.hidden_2 = torch.nn.Linear(n_hidden_1, n_hidden_2)  # hidden layer
        self.predict  = torch.nn.Linear(n_hidden_2, n_output)    # output layer

    def forward(self, x):
        x = torch.tanh(self.hidden_1(x))  # activation function for hidden layer
        x = torch.tanh(self.hidden_2(x))  # activation function for hidden layer
        x = self.predict(x)               # linear output
        return x

if __name__ == '__main__':  # apply_async
    Data_in      = Variable(torch.from_numpy(np.asarray(list(range(0, 10))).reshape(10, 1)).float())
    Ground_truth = Variable(torch.from_numpy(np.asarray(list(range(20, 30))).reshape(10, 1)).float())

    net = Net(n_feature=1, n_hidden_1=15, n_hidden_2=15, n_output=4)  # define the network
    optimizer = torch.optim.Rprop(net.parameters())
    loss_func = torch.nn.MSELoss()  # this is for regression mean squared loss

    NN_output = net(Data_in)
    args = range(0, 10)
    pool = multiprocessing.Pool()
    return_data = pool.map(func, zip(args, NN_output))
    pool.close()
    pool.join()

    NN_energy = net(Data_in)[0:10, 0]
    for i in range(0, 10):
        NN_energy[i] = return_data[i]

    loss = torch.sqrt(loss_func(NN_energy, Ground_truth))  # must be (1. nn output, 2. target)
    print(loss)
Error messages:
  File "C:\ProgramData\Anaconda3\lib\site-packages\torch\multiprocessing\reductions.py", line 126, in reduce_tensor
    raise RuntimeError("Cowardly refusing to serialize non-leaf tensor which requires_grad, "
RuntimeError: Cowardly refusing to serialize non-leaf tensor which requires_grad, since autograd does not support crossing process boundaries. If you just want to transfer the data, call detach() on the tensor before serializing (e.g., putting it on the queue).
First of all, the Torch Variable API has been deprecated for a very long time; just don't use it.
Next, torch.from_numpy( np.asarray(list(range( 0,10))).reshape(10,1) ).float() is wrong on several levels: wrapping the list in np.asarray is pointless, since np.array takes a list by design and a copy is made anyway. Then, np.arange is available to return a range as a NumPy array, and the equivalent torch.arange exists in Torch. Also, specifying both dimensions for reshape is unnecessary and error-prone; you could simply do reshape((-1, 1)), or even better unsqueeze(-1).
Here is the simplified expression: torch.arange(10, dtype=torch.float32, requires_grad=True).unsqueeze(-1).
Using a multiprocessing pool is bad practice when batch processing is possible; batching is both more efficient and more readable. Indeed, performing N small algebraic operations in parallel is always slower than one larger algebraic operation, and even more so on a GPU. More importantly, computing the gradient is not supported by multiprocessing, hence the error you get. This is only partially true, though, because it is supported for tensors on CPU since 1.6.0; have a look at the official release changelog.
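For instance, the arithmetic in func above can be written as one batched operation on the whole network output, which stays inside autograd and needs no pool at all. A minimal sketch, assuming func is meant to act row-wise on NN_output as its indexing suggests:

# NN_output has shape (10, 4); work on whole columns at once instead of
# mapping func over rows in worker processes.
NN_energy = (NN_output[:, 0] + NN_output[:, 1] / NN_output[:, 2]) * NN_output[:, 3]
loss = torch.sqrt(loss_func(NN_energy.unsqueeze(-1), Ground_truth))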
Could you post a more representative example of what func might be, so we can make sure you really need multiprocessing?
NB: Distributed autograd, which is what you seem to be looking for, is now available in PyTorch as an experimental feature, in beta since 1.6.0. Have a look at the official documentation.

Keras: badalloc with custom generator

I am using Keras with a tensorflow-gpu backend on an Ubuntu 17.04 VM.
I have created a custom generator to read inputs and classes from pickle files, but it seems to hit the following error:
terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc
The code for loading data can be seen here:
def data_gen(self, pklPaths, batch_size=16):
    while True:
        data = []
        labels = []
        for i, pklPath in enumerate(pklPaths):
            # print(pklPath)
            image = pickle.load(open(pklPath, 'rb'))
            for i in range(batch_size):
                # Set a label
                data.append(image[0][0])
                labels.append(image[1][1])
            yield np.array(data), np.array(labels)
Then in the training section I'm using fit_generator:
vm_model.fit_generator(vm.data_gen(pkl_train), validation_data=vm.data_gen(pkl_validate),
                       epochs=15, verbose=2, steps_per_epoch=(5000/16),
                       validation_steps=(1000/16), callbacks=[tb])
The generator should have better memory behaviour than loading everything at once, but that doesn't seem to be the case. Any ideas?
OK, so I found the issue, and I'm answering my own question.
Basically, the previous version had an unnecessary inner loop and also kept growing data and labels, essentially loading the entire dataset into memory:
def data_gen(self, pklPaths, batch_size=16):
    while True:
        data = []
        labels = []
        for i, pklPath in enumerate(pklPaths):
            # load pickle
            image = pickle.load(open(pklPath, 'rb'))
            # append
            data.append(image[0][0])
            labels.append(image[1])
            # if batch is complete, yield data and labels and reset
            if i % batch_size == 0 and i != 0:
                yield np.array(data), np.array(labels)
                data.clear()
                labels.clear()
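As a side note (not part of the original answer), fit_generator then consumes this generator exactly as before; the step counts are best given as integers, since plain division returns floats in Python 3. A small sketch:

vm_model.fit_generator(vm.data_gen(pkl_train, batch_size=16),
                       validation_data=vm.data_gen(pkl_validate, batch_size=16),
                       epochs=15, verbose=2,
                       steps_per_epoch=5000 // 16,
                       validation_steps=1000 // 16,
                       callbacks=[tb])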

Memory leak evaluating CNN model for text classification

I've been doing some adaptation of the code in this blog post about CNNs for text classification:
http://www.wildml.com/2015/12/implementing-a-cnn-for-text-classification-in-tensorflow/
Everything works fine! But when I try to use the trained model to predict new instances, it consumes all available memory. It seems that no memory is freed during evaluation, and the model is loaded again and again. As far as I know, memory should be freed after every sess.run call.
Here is the part of the code I'm working with:
with graph.as_default():
    session_conf = tf.ConfigProto(
        allow_soft_placement=FLAGS.allow_soft_placement,
        log_device_placement=FLAGS.log_device_placement)
    sess = tf.Session(config=session_conf)
    with sess.as_default():
        # Load the saved meta graph and restore variables
        saver = tf.train.import_meta_graph("{}.meta".format(checkpoint_file))
        saver.restore(sess, checkpoint_file)

        # Get the placeholders from the graph by name
        input_x = graph.get_operation_by_name("input_x").outputs[0]
        # input_y = graph.get_operation_by_name("input_y").outputs[0]
        dropout_keep_prob = graph.get_operation_by_name("dropout_keep_prob").outputs[0]

        # Tensors we want to evaluate
        predictions = graph.get_operation_by_name("output/predictions").outputs[0]

        # Add a vector for probas
        probas = graph.get_operation_by_name("output/scores").outputs[0]

        # Generate batches for one epoch
        print("\nGenerating Batches...\n")
        gc.collect()
        # mem0 = proc.get_memory_info().rss
        batches = data_helpers.batch_iter(list(x_test), FLAGS.batch_size, 1, shuffle=False)
        # mem1 = proc.get_memory_info().rss
        print("\nBatches done...\n")
        # pd = lambda x2, x1: 100.0 * (x2 - x1) / mem0
        # print "Allocation: %0.2f%%" % pd(mem1, mem0)

        # Collect the predictions here
        all_predictions = []
        all_probas = []

        for x_test_batch in batches:
            # Calculate probability of prediction being good
            gc.collect()
            batch_probas = sess.run(tf.reduce_max(tf.nn.softmax(probas), 1),
                                    {input_x: x_test_batch, dropout_keep_prob: 1.0})
            batch_predictions = sess.run(predictions, {input_x: x_test_batch, dropout_keep_prob: 1.0})
            all_predictions = np.concatenate([all_predictions, batch_predictions])
            all_probas = np.concatenate([all_probas, batch_probas])

            # Add summary ops to collect data
            with tf.name_scope("eval") as scope:
                p_h = tf.histogram_summary("eval/probas", batch_probas)
                summary = sess.run(p_h)
            eval_summary_writer.add_summary(summary)
Any help will be much appreciated
Cheers
Your training loop creates new TensorFlow operations (tf.reduce_max(), tf.nn.softmax() and tf.histogram_summary()) in each iteration, which will lead to more memory being consumed over time. TensorFlow is most efficient when you run the same graph many times, because it can amortize the cost of optimizing the graph over multiple executions. Therefore,
to get the best performance, you should revise your program so that you create each of these operations once, before the for x_test_batch in batches: loop, and then re-use the same operations in each iteration.
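A sketch of that restructuring, reusing the variable names from the question (this is not the original poster's code):

# Build the extra ops once, before iterating over batches.
max_proba_op = tf.reduce_max(tf.nn.softmax(probas), 1)
with tf.name_scope("eval"):
    proba_hist_op = tf.histogram_summary("eval/probas", max_proba_op)

for x_test_batch in batches:
    feed = {input_x: x_test_batch, dropout_keep_prob: 1.0}
    # Run everything in a single sess.run call; no new ops are created here.
    batch_probas, batch_predictions, summary = sess.run(
        [max_proba_op, predictions, proba_hist_op], feed)
    all_predictions = np.concatenate([all_predictions, batch_predictions])
    all_probas = np.concatenate([all_probas, batch_probas])
    eval_summary_writer.add_summary(summary)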

Keras 1.0: getting intermediate layer output

I am currently trying to visualize the output of an intermediate layer in Keras 1.0 (which I could do with Keras 0.3), but it does not work anymore.
x = model.input
y = model.layers[3].output
f = theano.function([x], y)
But I get the following error:
MissingInputError: ("An input of the graph, used to compute DimShuffle{x,x,x,x}(keras_learning_phase), was not provided and not given a value.Use the Theano flag exception_verbosity='high',for more information on this error.", keras_learning_phase)
Prior to Keras 1.0, with my graph model, I could just do:
x = graph.inputs['input'].input
y = graph.nodes[layer].get_output(train=False)
f = theano.function([x], y, allow_input_downcast=True)
So I suspect the problem comes from the "train=False" parameter, which I don't know how to set in the new version.
Thank you for your help
Try this. First add the following imports:
from keras import backend as K
from theano import function
then:
get_3rd_layer_output = K.function([model.layers[0].input, K.learning_phase()],
                                  [model.layers[3].output])

# output in test mode = 0
layer_output = get_3rd_layer_output([X_test, 0])[0]

# output in train mode = 1
layer_output = get_3rd_layer_output([X_train, 1])[0]
This was just answered by François Chollet on github:
Your model apparently has a different behavior in training and test mode, and so needs to know what mode it should be using.
Use
iterate = K.function([input_img, K.learning_phase()], [loss, grads])
and pass 1 or 0 as value for the learning phase, based on whether you want the model in training mode or test mode.
https://github.com/fchollet/keras/issues/2417
