User-defined functions on the GPU in PyTorch (Python)

Is it possible to execute user-defined functions on the GPU?
I have tried using the PyTorch library to run them on CUDA. Here is an example of what I have tried:
def mymodule1(x):
    ......
    ......
    ......
    return y

def mymodule2(x):
    ......
    ......
    ......
    return y

def mymodule3(x):
    ......
    ......
    ......
    return y

def mymodule4(x):
    y = mymodule3(x)
    y = mymodule2(y)
    y = mymodule1(y)
    return y
My requirement is to run mymodule4 on the GPU. Should I assign each variable to CUDA?
Thank you.

A simple example of converting a CPU function to run on the GPU:
import torch

x = 1000
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
# move the input tensor to the GPU first, so mymodule1 computes on the GPU
y = torch.tensor(x, dtype=torch.float, device=DEVICE)
gpu_out = mymodule1(y)
out_cpu = gpu_out.cpu()   # bring the result back to the CPU
out = out_cpu.numpy()
print(out)
This code will execute on the GPU, but for a tiny input like this the performance will be poor compared to the CPU. Use the approach below when you work with multiple, larger tensors. Here is an example:
dtype = torch.cuda.FloatTensor  # use torch.FloatTensor instead to run on the CPU

N, D_in, H, D_out = 64, 1000, 100, 10

# Create random input and output data directly on the GPU
x = torch.randn(N, D_in).type(dtype)
y = torch.randn(N, D_out).type(dtype)

# Randomly initialize weights
w1 = torch.randn(D_in, H).type(dtype)
w2 = torch.randn(H, D_out).type(dtype)

gpu_out = mymodule1(x, y)   # all inputs are CUDA tensors, so the computation runs on the GPU
out_cpu = gpu_out.cpu()
out = out_cpu.numpy()
print(out)
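To make this concrete, here is a minimal sketch with made-up bodies for the modules (the real ones are elided in the question). Any function composed of PyTorch tensor operations runs on whichever device its input tensors live on, so it is enough to move the input of mymodule4 to CUDA once; there is no need to call .to(DEVICE) on every intermediate variable:

import torch

# hypothetical bodies -- the real mymodule1..3 are not shown in the question
def mymodule1(x):
    return torch.tanh(x) * 2.0

def mymodule2(x):
    return x ** 2 + 1.0

def mymodule3(x):
    return torch.relu(x - 0.5)

def mymodule4(x):
    y = mymodule3(x)
    y = mymodule2(y)
    y = mymodule1(y)
    return y

DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(64, 1000, device=DEVICE)  # input created directly on the GPU
out = mymodule4(x)                        # every intermediate tensor stays on the GPU
print(out.device, out.shape)

Note that the GPU only pays off for reasonably large tensors; for a single scalar, kernel-launch and transfer overhead usually make it slower than the CPU, which matches the observation above.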

Related

Tensorflow: why is the outcome of a single neural network node without an activation function different from my own calculation?

I created a single node with 3 inputs and one output, with bias 0 and no activation function.
As far as I understand, the only thing that happens here is a matrix multiplication between the input vector and the randomly initialized weights. But when I do the multiplication myself with the same inputs and weights, I get a different outcome. What am I missing or doing wrong?
Thanks in advance!
I base my calculation on some code provided here.
Here is the code:
def example_code(self):
    import tensorflow as tf
    data = [[1.0, 2.0, 3.0]]
    x = tf.placeholder(tf.float32, shape=[1, 3], name="mydata")
    node = tf.layers.Dense(units=1)
    y = node(x)
    init = tf.global_variables_initializer()
    sess = tf.Session()
    sess.run(init)
    print("input: " + str(data))
    outcome = sess.run(y, feed_dict={x: data})
    #print("outcome from tensorflow: " + str(outcome))
    weights = node.get_weights()[0]
    bias = node.get_weights()[1]
    print("weights: " + str(weights))
    print("bias: " + str(bias))
    print("outcome from tensorflow: " + str(outcome))
    outcome = tf.matmul(data, weights)
    print("manually calculated outcome: " + str(sess.run(outcome)))
output from code:
input: [[1.0, 2.0, 3.0]]
weights: [[ 0.72705185] [-0.70188504] [ 0.5336163 ]]
bias: [0.]
outcome from tensorflow: [[-1.3463312]]
manually calculated outcome: [[0.9241307]]
The problem is that tf.layers is not using your session sess. This in turn results in different initializations of the weights, hence the two different values. tf.layers ends up using tf.keras.backend.get_session() to retrieve the session used for initialization and retrieval of weights (node.get_weights()). tf.keras.backend.get_session() tries to use the default session if there is one, and if there is not, it creates its own session. In this case, sess is not configured as the default session (only tf.InteractiveSession gets automatically configured as the default session on construction). The simplest fix is to use tf.Session in the recommended way, as a context manager:
def example_code(self):
    import tensorflow as tf
    with tf.Session() as sess:
        data = [[1.0, 2.0, 3.0]]
        x = tf.placeholder(tf.float32, shape=[1, 3], name="mydata")
        node = tf.layers.Dense(units=1)
        y = node(x)
        init = tf.global_variables_initializer()
        sess.run(init)
        print("input: " + str(data))
        outcome = sess.run(y, feed_dict={x: data})
        #print("outcome from tensorflow: " + str(outcome))
        weights = node.get_weights()[0]
        bias = node.get_weights()[1]
        print("weights: " + str(weights))
        print("bias: " + str(bias))
        print("outcome from tensorflow: " + str(outcome))
        outcome = tf.matmul(data, weights)
        print("manually calculated outcome: " + str(sess.run(outcome)))
This will set sess as the default session, and it will also make sure its resources are freed when the function finishes (which was another issue in your code). If for whatever reason you want to use some session as the default but do not want to close it with the context manager, you can just use as_default():
def example_code(self):
    import tensorflow as tf
    sess = tf.Session()
    with sess.as_default():
        data = [[1.0, 2.0, 3.0]]
        x = tf.placeholder(tf.float32, shape=[1, 3], name="mydata")
        node = tf.layers.Dense(units=1)
        y = node(x)
        init = tf.global_variables_initializer()
        sess.run(init)
        print("input: " + str(data))
        outcome = sess.run(y, feed_dict={x: data})
        #print("outcome from tensorflow: " + str(outcome))
        weights = node.get_weights()[0]
        bias = node.get_weights()[1]
        print("weights: " + str(weights))
        print("bias: " + str(bias))
        print("outcome from tensorflow: " + str(outcome))
        outcome = tf.matmul(data, weights)
        print("manually calculated outcome: " + str(sess.run(outcome)))
    # You need to manually ensure that the session gets closed afterwards
    sess.close()
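Once the weights are read from the same session that ran the graph, the manual calculation should match the TensorFlow output. As a quick sanity check (a small sketch meant to sit inside the same block, after weights and bias have been fetched), you can also include the retrieved bias, even though it is zero here:

# inside the with-block, after weights/bias have been retrieved
manual = tf.matmul(data, weights) + bias
print("manually calculated outcome: " + str(sess.run(manual)))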

Creating a session in a graph that uses another graph and its session

Versions: I am using TensorFlow (version v1.1.0-13-g8ddd727, 1.1.0) with Python 3 (Python 3.4.3 (default, Nov 17 2016, 01:08:31) [GCC 4.8.4]) on Linux; it is installed from source and GPU-based (name: GeForce GTX TITAN X, major: 5, minor: 2, memoryClockRate (GHz) 1.076).
Context : Generative adversarial networks (GANs) learn to synthesise new samples from a high-dimensional distribution by passing samples drawn from a latent space through a generative network. When the high-dimensional distribution describes images of a particular data set, the network should learn to generate visually similar image samples for latent variables that are close to each other in the latent space. For tasks such as image retrieval and image classification, it may be useful to exploit the arrangement of the latent space by projecting images into it, and using this as a representation for discriminative tasks.
Context Problem: I am trying to invert a generator (compute the L2 norm between an input image from CIFAR-10 and an image g(z) from the generator, where z is a parameter to be trained with stochastic gradient descent in order to minimize this norm and find an approximation of the preimage of the input image).
Technical Issue: Therefore, I am building a new graph in a new session in TensorFlow, but I need to use a trained GAN that was trained in another session, which I cannot import because the two graphs are not the same. That is to say, when I use sess.run(), the variables are not found and therefore there is an error message.
The code is
import tensorflow as tf
from data import cifar10, utilities
from . import dcgan
import logging

logger = logging.getLogger("gan.test")

BATCH_SIZE = 1

random_z = tf.get_variable(name='z_to_invert', shape=[BATCH_SIZE, 100], initializer=tf.random_normal_initializer())
#random_z = tf.random_normal([BATCH_SIZE, 100], mean=0.0, stddev=1.0, name='random_z')

# Generate images with generator
generator = dcgan.generator(random_z, is_training=True, name='generator')

# Add summaries to visualise output images
generator_visualisation = tf.cast(((generator / 2.0) + 0.5) * 255.0, tf.uint8)
summary_generator = tf.summary.\
    image('summary/generator', generator_visualisation,
          max_outputs=8)

# Create one image to test inverting
test_image = map((lambda inp: (inp[0]*2. - 1., inp[1])),
                 utilities.infinite_generator(cifar10.get_train(), BATCH_SIZE))
inp, _ = next(test_image)
summary_inp = tf.summary.image('input_image', inp)
img_summary = tf.summary.merge([summary_generator, summary_inp])

with tf.name_scope('error'):
    error = inp - generator  # generator = g(z)
    # We set axis = None because norm(tensor, ord=ord) is equivalent to norm(reshape(tensor, [-1]), ord=ord)
    error_norm = tf.norm(error, ord=2, axis=None, keep_dims=False, name='L2Norm')
    summary_error = tf.summary.scalar('error_norm', error_norm)

with tf.name_scope('Optimizing'):
    optimizer = tf.train.AdamOptimizer(0.001).minimize(error_norm, var_list=random_z)

sv = tf.train.Supervisor(logdir="gan/invert_logs/", save_summaries_secs=None, save_model_secs=None)

batch = 0
with sv.managed_session() as sess:
    logwriter = tf.summary.FileWriter("gan/invert_logs/", sess.graph)
    while not sv.should_stop():
        if batch > 0 and batch % 100 == 0:
            logger.debug('Step {} '.format(batch))
        (_, s) = sess.run((optimizer, summary_error))
        logwriter.add_summary(s, batch)
        print('step %d: Patiente un peu poto!' % batch)
        img = sess.run(img_summary)
        logwriter.add_summary(img, batch)
        batch += 1
        print(batch)
I understood what the problem is: I am trying to run a session whose state was saved in gan/train_logs, but the graph does not contain the variables I am trying to run.
Therefore, I tried to implement this instead:
graph = tf.Graph()
tf.reset_default_graph()

with tf.Session(graph=graph) as sess:
    ckpt = tf.train.get_checkpoint_state('gan/train_logs/')
    saver = tf.train.import_meta_graph(ckpt.model_checkpoint_path + '.meta', clear_devices=True)
    saver.restore(sess, ckpt.model_checkpoint_path)
    logwriter = tf.summary.FileWriter("gan/invert_logs/", sess.graph)

    #inp, _ = next(test_image)
    BATCH_SIZE = 1

    # Create one image to test inverting
    test_image = map((lambda inp: (inp[0]*2. - 1., inp[1])),
                     utilities.infinite_generator(cifar10.get_train(), BATCH_SIZE))
    inp, _ = next(test_image)
    #M_placeholder = tf.placeholder(tf.float32, shape=cifar10.get_shape_input(), name='M_input')
    M_placeholder = inp
    zmar = tf.summary.image('input_image', inp)

    # Create sample noise from random normal distribution
    z = tf.get_variable(name='z', shape=[BATCH_SIZE, 100], initializer=tf.random_normal_initializer())

    # Function g(z) where z is randomly generated
    g_z = dcgan.generator(z, is_training=True, name='generator')
    generator_visualisation = tf.cast(((g_z / 2.0) + 0.5) * 255.0, tf.uint8)
    sum_generator = tf.summary.image('summary/generator', generator_visualisation)
    img_summary = tf.summary.merge([sum_generator, zmar])

    with tf.name_scope('error'):
        error = M_placeholder - g_z
        # We set axis = None because norm(tensor, ord=ord) is equivalent to norm(reshape(tensor, [-1]), ord=ord)
        error_norm = tf.norm(error, ord=2, axis=None, keep_dims=False, name='L2Norm')
        summary_error = tf.summary.scalar('error_norm', error_norm)

    with tf.name_scope('Optimizing'):
        optimizer = tf.train.AdamOptimizer(0.001).minimize(error_norm, var_list=z)

    sess.run(tf.global_variables_initializer())

    for i in range(10000):
        (_, s) = sess.run((optimizer, summary_error))
        logwriter.add_summary(s, i)
        print('step %d: Patiente un peu poto!' % i)
        img = sess.run(img_summary)
        logwriter.add_summary(img, i)

    print('Done Training')
This script runs, but I have checked on TensorBoard: the generator used here does not have the trained weights and only produces noise.
I think I am trying to run a session in a graph that uses another graph and its trained session. I have thoroughly read the Graphs and Sessions documentation on the TensorFlow website (https://www.tensorflow.org/versions/r1.3/programmers_guide/graphs), and I found an interesting function, tf.import_graph_def:
You can rebind tensors in the imported graph to tf.Tensor objects in the default graph by passing the optional input_map argument. For example, input_map enables you to take a graph fragment defined in a tf.GraphDef, and statically connect tensors in the graph you are building to tf.placeholder tensors in that fragment.
You can return tf.Tensor or tf.Operation objects from the imported graph by passing their names in the return_elements list.
But I don't know how to use this function; no example is given, and I only found these two links that might help me:
https://github.com/tensorflow/tensorflow/issues/7508
Tensorflow: How to use a trained model in a application?
It would be really nice to have your help on this topic. This should be straightforward for someone who has already used the tf.import_graph_def function... What I really need is to get the trained generator and apply it to a new variable z, which is to be trained in another session.
Thanks

Theano / Keras: set the first K values of a tensor to a value

I have a custom Keras layer, I use Theano as the backend, and I want to do the following operation:
Suppose we have a tensor of shape (N,). I want to set the first K values to a fixed value x (3 or whatever...). How do I do that? I assume I have to use argsort, but I don't know how to implement it using Theano.
For example, in a simple feed-forward layer, how can I set the first K values of the tensor a to a value x?
def call(self, x, mask=None):
    a = K.dot(x, self.W)
    if self.bias:
        a += self.b
    return a
For example, using numpy I can set the first 10 values of a to 1 like this:
a[a.argsort() <= 10] = 1
I want to do the same using Keras backend functions or Theano functions.
For a[a.argsort() <= 10] = 1 the equivalent code will be:
import numpy as np
from keras import backend as K
a = np.asarray([[3,4,2,44,22,4,5,6,77,86,3,2,3,23,44,21],
[3,4,22,44,2,4,54,6,77,8,3,2,36,23,4,2]], dtype=np.float)
a_t = K.variable(a)
a[a.argsort() <= 10] = 1
print a
arg_sort = K.T.argsort(a_t)
_cond = K.lesser_equal(arg_sort,10)
a_new = K.switch(_cond, 1, a_t)
print K.eval(a_new)
In a single statement it will be:
a_new = K.switch(K.lesser_equal(K.T.argsort(a_t),10), 1, a_t)
print K.eval(a_new)
I am not sure if this is exactly what you need.
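Applied inside the custom layer from the question, the same expression would look roughly like this (just a sketch, reusing the threshold 10 and the fill value 1 from the snippet above):

def call(self, x, mask=None):
    a = K.dot(x, self.W)
    if self.bias:
        a += self.b
    # same masking expression as in the snippet above, applied to the layer's pre-activation
    a = K.switch(K.lesser_equal(K.T.argsort(a), 10), 1., a)
    return a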

Keras 1.0: getting intermediate layer output

I am currently trying to visualize the output of an intermediate layer in Keras 1.0 (which I could do with Keras 0.3) but it does not work anymore.
x = model.input
y = model.layers[3].output
f = theano.function([x], y)
But I get the following error:
MissingInputError: ("An input of the graph, used to compute DimShuffle{x,x,x,x}(keras_learning_phase), was not provided and not given a value.Use the Theano flag exception_verbosity='high',for more information on this error.", keras_learning_phase)
Prior to Keras 1.0, with my graph model, I could just do:
x = graph.inputs['input'].input
y = graph.nodes[layer].get_output(train=False)
f = theano.function([x], y, allow_input_downcast=True)
So I suspect it comes from the "train=False" parameter, which I don't know how to set in the new version.
Thank you for your help
Try:
In the import statements, first add
from keras import backend as K
from theano import function
then
get_3rd_layer_output = K.function([model.layers[0].input, K.learning_phase()],
                                  [model.layers[3].output])
# output in test mode = 0
layer_output = get_3rd_layer_output([X_test, 0])[0]
# output in train mode = 1
layer_output = get_3rd_layer_output([X_train, 1])[0]
This was just answered by François Chollet on github:
Your model apparently has a different behavior in training and test mode, and so needs to know what mode it should be using.
Use
iterate = K.function([input_img, K.learning_phase()], [loss, grads])
and pass 1 or 0 as value for the learning phase, based on whether you want the model in training mode or test mode.
https://github.com/fchollet/keras/issues/2417
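Putting the pieces together, a minimal self-contained sketch might look like the following (the model architecture and data here are made up purely for illustration; pick whichever layer index you actually want to inspect):

import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras import backend as K

# toy model with a dropout layer, so training and test mode actually differ
model = Sequential([
    Dense(32, input_dim=8, activation='relu'),
    Dropout(0.5),
    Dense(16, activation='relu'),
    Dense(1),
])

# backend function from the model input (plus learning phase) to layer 2's output
get_layer_output = K.function([model.layers[0].input, K.learning_phase()],
                              [model.layers[2].output])

X_test = np.random.rand(4, 8).astype('float32')
layer_output = get_layer_output([X_test, 0])[0]   # 0 = test mode, 1 = train mode
print(layer_output.shape)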

LSTMLayer produces NaN values even before training it

I'm currently trying to construct an LSTM network with Lasagne to predict the next step of noisy sequences. I first trained a stack of 2 LSTM layers for a while, but had to use an abysmally small learning rate (1e-6) because of divergence issues (that ultimately produced NaN values). The results were kind of disappointing, as the network produced smooth, out-of-phase versions of the input.
I then came to the conclusion that I should use better parameter initialization than what is given by default. The goal was to start from a network that just mimics identity, since for a strongly auto-correlated signal it should be a good first estimate of the next step (x(t) ~ x(t+1)), and to sprinkle a bit of noise on top of it.
import theano, numpy, lasagne
from theano import tensor as T
from lasagne.layers.recurrent import LSTMLayer, InputLayer, Gate
from lasagne.layers import DropoutLayer
from lasagne.nonlinearities import sigmoid, tanh, leaky_rectify
from lasagne.layers import get_output
from lasagne.init import GlorotNormal, Normal, Constant

floatX = 'float32'

# function to create a lstm that ~ propagate the input from start to finish off the bat
# should be a good start for a predictive lstm with high one-step autocorrelation
def create_identity_lstm(input, shape, orig_inp=None, noiselvl=0.01, G=10., mask_input=None):
    inp, out = shape
    # orig_inp is used to limit the number of units that are actually used to pass the input
    # information from one layer to the other - the rest of the units should produce ~ 0 activation.
    if orig_inp is None:
        orig_inp = inp
    # input gate
    inputgate = Gate(
        W_in=GlorotNormal(noiselvl),
        W_hid=GlorotNormal(noiselvl),
        W_cell=Normal(noiselvl),
        b=Constant(0.),
        nonlinearity=sigmoid
    )
    # forget gate
    forgetgate = Gate(
        W_in=GlorotNormal(noiselvl),
        W_hid=GlorotNormal(noiselvl),
        W_cell=Normal(noiselvl),
        b=Constant(0.),
        nonlinearity=sigmoid
    )
    # cell gate
    cell = Gate(
        W_in=GlorotNormal(noiselvl),
        W_hid=GlorotNormal(noiselvl),
        W_cell=None,
        b=Constant(0.),
        nonlinearity=leaky_rectify
    )
    # output gate
    outputgate = Gate(
        W_in=GlorotNormal(noiselvl),
        W_hid=GlorotNormal(noiselvl),
        W_cell=Normal(noiselvl),
        b=Constant(0.),
        nonlinearity=sigmoid
    )
    lstm = LSTMLayer(input, out, ingate=inputgate, forgetgate=forgetgate, cell=cell,
                     outgate=outputgate, nonlinearity=leaky_rectify, mask_input=mask_input)
    # change matrices and biases
    # ingate - should return ~1 (matrices = 0, big bias)
    b_i = lstm.b_ingate.get_value()
    b_i[:orig_inp] += G
    lstm.b_ingate.set_value(b_i)
    # forgetgate - should return 0 (matrices = 0, big negative bias)
    b_f = lstm.b_forgetgate.get_value()
    b_f[:orig_inp] -= G
    b_f[orig_inp:] += G  # to help learning future features, I preserve a large bias on "unused" units to help it remember stuff
    lstm.b_forgetgate.set_value(b_f)
    # cell - should return x(t) (W_xc = identity, rest is 0)
    W_xc = lstm.W_in_to_cell.get_value()
    for i in xrange(orig_inp):
        W_xc[i, i] += 1.
    lstm.W_in_to_cell.set_value(W_xc)
    # outgate - should return 1 (same as ingate)
    b_o = lstm.b_outgate.get_value()
    b_o[:orig_inp] += G
    lstm.b_outgate.set_value(b_o)
    # done
    return lstm
I then use this lstm generation code to generate the following network:
# layers
#input + dropout
input = InputLayer((None, None, 7), name='input')
mask = InputLayer((None, None), name='mask')
drop1 = DropoutLayer(input, p=0.33)
#lstm1 + dropout
lstm1 = create_identity_lstm(drop1, (7, 1024), mask_input=mask)
drop2 = DropoutLayer(lstm1, p=0.33)
#lstm2 + dropout
lstm2 = create_identity_lstm(drop2, (1024, 128), orig_inp=7, mask_input=mask)
drop3 = DropoutLayer(lstm2, p=0.33)
#lstm3
lstm3 = create_identity_lstm(drop3, (128, 7), orig_inp=7, mask_input=mask)
# symbolic variables and prediction
x = input.input_var
ma = mask.input_var
ma_reshape = ma.dimshuffle((0,1,'x'))
yhat = get_output(lstm3, deterministic=False)
yhat_det = get_output(lstm3, deterministic=True)
y = T.ftensor3('y')
predict = theano.function([x, ma], yhat_det)
The problem is that, even without any training, this network produces garbage values and sometimes even a bunch of NaNs, right from the very first LSTM layer:
X = numpy.random.random((5, 10000, 7)).astype('float32')
Masks = numpy.ones(X.shape[:2], dtype='float32')
hid1 = get_output(lstm1, deterministic=True)
get_hid1 = theano.function([x, ma], hid1)
h1 = get_hid1(X, Masks)
print numpy.isnan(h1).sum(axis=1).sum(axis=1)
array([6379520, 6367232, 6377472, 6376448, 6378496])
# even the first output value is garbage!
print h1[:,0,0] - X[:,0,0]
array([-0.03898358, -0.10118812, 0.34877831, -0.02509735, 0.36689138], dtype=float32)
I don't get why; I checked each matrix and their values are fine, just as I wanted them to be. I even tried to recreate each gate activation and the resulting hidden activations using the actual numpy arrays, and they reproduce the input just fine. What did I do wrong here?
