I am new to Numba and I need to use it to speed up some PyTorch functions. But I find that even a very simple function does not work :(
import torch
import numba

@numba.njit()
def vec_add_odd_pos(a, b):
    res = 0.
    for pos in range(len(a)):
        if pos % 2 == 0:
            res += a[pos] + b[pos]
    return res

x = torch.tensor([3, 4, 5.])
y = torch.tensor([-2, 0, 1.])
z = vec_add_odd_pos(x, y)
But the following error appears:
def vec_add_odd_pos(a, b):
    res = 0.
    ^
This error may have been caused by the following argument(s):
- argument 0: cannot determine Numba type of <class 'torch.Tensor'>
- argument 1: cannot determine Numba type of <class 'torch.Tensor'>
Can anyone help me? A link with more examples would also be appreciated. Thanks.
PyTorch now exposes an interface on GPU tensors which can be consumed by Numba directly:
numba.cuda.as_cuda_array(tensor)
The test script provides a few usage examples: https://github.com/pytorch/pytorch/blob/master/test/test_numba_integration.py
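For instance, a minimal sketch of the round trip (assumes a CUDA-capable GPU; the kernel add_one is purely illustrative):

import torch
from numba import cuda

@cuda.jit
def add_one(arr):
    # each thread increments one element in place
    i = cuda.grid(1)
    if i < arr.size:
        arr[i] += 1.0

t = torch.zeros(16, device='cuda')  # the tensor lives on the GPU
a = cuda.as_cuda_array(t)           # zero-copy view consumable by Numba
add_one[1, 16](a)                   # launch 1 block of 16 threads
print(t)                            # the change is visible in the tensor

Since both objects share the same device memory, no copies are made in either direction.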
As others have mentioned, Numba currently doesn't support torch tensors, only NumPy arrays. However, there is TorchScript, which has a similar goal. Your function can then be rewritten as follows:
import torch

@torch.jit.script
def vec_add_odd_pos(a, b):
    res = 0.
    for pos in range(len(a)):
        if pos % 2 == 0:
            res += a[pos] + b[pos]
    return res

x = torch.tensor([3, 4, 5.])
y = torch.tensor([-2, 0, 1.])
z = vec_add_odd_pos(x, y)
Beware: although you said your code snippet was just a simple example, for loops are really slow and running TorchScript might not help you much; you should avoid them at all costs and only use them when no other solution exists. That said, here's how to implement your function in a more performant way:
def vec_add_odd_pos(a, b):
    evenids = torch.arange(len(a)) % 2 == 0
    return (a[evenids] + b[evenids]).sum()
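As a quick sanity check (reusing the x and y from the snippets above), the loop version and this vectorized one should agree:

x = torch.tensor([3, 4, 5.])
y = torch.tensor([-2, 0, 1.])
print(vec_add_odd_pos(x, y))  # tensor(7.)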
Numba supports NumPy arrays but not Torch's tensors. There is, however, a bridge, Tensor.numpy():
Returns self tensor as a NumPy ndarray. This tensor and the returned
ndarray share the same underlying storage. Changes to self tensor will
be reflected in the ndarray and vice versa.
That means you have to call jitted functions as:
...
z = vec_add_odd_pos(x.numpy(), y.numpy())
If z should be a torch.Tensor as well, torch.from_numpy is what we need:
Creates a Tensor from a numpy.ndarray.
The returned tensor and ndarray share the same memory. Modifications
to the tensor will be reflected in the ndarray and vice versa. The
returned tensor is not resizable.
...
For our code that means
...
z = torch.from_numpy(vec_add_odd_pos(x.numpy(), y.numpy()))
should be called. (One caveat: torch.from_numpy expects an ndarray, while this particular function returns a plain scalar, so torch.tensor(...) is the right wrapper in this exact case.)
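Putting the pieces together, a minimal end-to-end sketch (same function as in the question, with the scalar caveat above applied):

import torch
import numba

@numba.njit
def vec_add_odd_pos(a, b):
    res = 0.
    for pos in range(len(a)):
        if pos % 2 == 0:
            res += a[pos] + b[pos]
    return res

x = torch.tensor([3, 4, 5.])
y = torch.tensor([-2, 0, 1.])
# cross the bridge in both directions: tensor -> ndarray -> scalar -> tensor
z = torch.tensor(vec_add_odd_pos(x.numpy(), y.numpy()))
print(z)  # tensor(7.)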
Related
I tried using NumPy, SciPy and scikit-learn, but couldn't find what I need in any of them. Basically, I need to fit a curve to a dataset while restricting some of the coefficients to known values. I found how to do it in MATLAB, using fittype, but couldn't do it in Python.
In my case I have a dataset of X and Y and I need to find the best-fitting curve. I know it's a polynomial of second degree (ax^2 + bx + c) and I know the values of b and c, so I just need it to find the value of a.
The solution I found in MATLAB was https://www.mathworks.com/matlabcentral/answers/216688-constraining-polyfit-with-known-coefficients which is the same problem as mine, with the difference that their polynomial was of degree 5. How could I do something similar in Python?
To add some info: I need to fit a curve to a dataset, so things like scipy.optimize.curve_fit that expect a function won't work (at least as far as I tried).
The tools you have available usually expect functions that take only their parameters (a being the only unknown in your case), or that take their parameters plus some data (a, x, and y in your case).
SciPy's curve_fit handles that use case just fine, so long as we hand it a function that it understands. It expects x first and all your parameters as the remaining arguments:
from scipy.optimize import curve_fit
import numpy as np

b = 0
c = 0

def f(x, a):
    return c + x*(b + x*a)

x = np.linspace(-5, 5)
y = x**2

# params == [1.]
params, _ = curve_fit(f, x, y)
Alternatively, you can reach for your favorite minimization routine. The difference here is that you manually construct the error function so that it only takes the parameters you care about, and then you don't need to hand that data to scipy.
from scipy.optimize import minimize
import numpy as np

b = 0
c = 0
x = np.linspace(-5, 5)
y = x**2

def error(a):
    prediction = c + x*(b + x*a)
    return np.linalg.norm(prediction - y) / len(prediction)**.5

result = minimize(error, np.array([42.]))
assert result.success
# params == [1.]
params = result.x
I don't think scipy has a partially applied polynomial fit function built-in, but you could use either of the above ideas to easily build one yourself if you do that kind of thing a lot.
from scipy.optimize import curve_fit
import numpy as np

def polyfit(coefs, x, y):
    # build a mapping from null coefficient locations to locations in the
    # function coefficients we're passing to curve_fit
    #
    # idx[j]==i means that unknown_coefs[i] belongs in coefs[j]
    _tmp = [i for i, c in enumerate(coefs) if c is None]
    idx = {j: i for i, j in enumerate(_tmp)}

    def f(x, *unknown_coefs):
        # create the entire polynomial's coefficients by filling in the unknown
        # values in the right places, using the aforementioned mapping
        p = [(unknown_coefs[idx[i]] if c is None else c) for i, c in enumerate(coefs)]
        return np.polyval(p, x)

    # we're passing an initial value just so that scipy knows how many
    # parameters to use
    params, _ = curve_fit(f, x, y, np.zeros((sum(c is None for c in coefs),)))

    # return all the polynomial's coefficients, not just the few we just discovered
    return np.array([(params[idx[i]] if c is None else c) for i, c in enumerate(coefs)])

x = np.linspace(-5, 5)
y = x**2

# (unknown)x^2 + 0x + 0
# params == [1, 0, 0.]
params = polyfit([None, 0, 0], x, y)
Similar features exist in nearly every mainstream scientific library; you just might need to reshape your problem a bit to frame it in terms of the available primitives.
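As one example of such reframing: because the only unknown here enters the model linearly, the fit is itself a linear least-squares problem, so something like np.linalg.lstsq solves it directly without any iterative optimizer (a sketch, using the same b and c as above):

import numpy as np

b, c = 0, 0
x = np.linspace(-5, 5)
y = x**2

# move the known part of the model to the left-hand side:
#   y - (b*x + c) = a * x**2   is linear in the unknown a
A = (x**2).reshape(-1, 1)  # design matrix: one column per unknown
rhs = y - (b*x + c)
(a,), *_ = np.linalg.lstsq(A, rhs, rcond=None)
# a == 1.0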
I am trying to build my own loss function as follows:
import numpy as np
from keras import backend as K

def MyLoss(self, x_input, x_reconstruct):
    a = np.copy(x_reconstruct)
    a = np.asarray(a, dtype='float16')
    a = np.floor(4*a)/4
    return K.mean(K.square(a - x_input), axis=-1)
During compilation, it says:
ValueError: setting an array element with a sequence
Both x_input and x_reconstruct are [m, n, 1] np arrays. The last line of code is actually copied directly from Keras' built-in MSE loss function.
Also, I suppose the loss is calculated per sample. If the dimensions of the input and the reconstructed input are both [m, n, 1], the result of Keras' built-in loss will also be a matrix sized [m, n]. So why does it work properly?
I then tried to use NumPy's functions directly:
def MyLoss(self, x_input, x_reconstruct):
    a = np.copy(x_reconstruct)
    a = np.asarray(a, dtype=self.precision)
    a = np.floor(4*a)/4
    Diff = a - x_input
    xx = np.mean(np.square(Diff), axis=-1)
    yy = np.sum(xx)
    return yy
yet the error persists. What mistake did I make? How should I write the code?
Having borrowed the suggestion from Make a Custom loss function in Keras in detail, I tried the following:
def MyLoss(self, x_input, x_reconstruct):
    if self.precision == 'float16':
        K.set_floatx('float16')
        K.set_epsilon(1e-4)
    a = K.cast_to_floatx(x_input)
    a = K.round(a*4. - 0.5)/4.0
    return K.sum(K.mean(K.square(x_input - a), axis=-1))
But the same error happens
You cannot use NumPy arrays in your loss. You have to use TensorFlow or Keras backend operations. Try this maybe:
import tensorflow as tf
import keras.backend as K

def MyLoss(x_input, x_reconstruct):
    # quantize the reconstruction to quarter steps, as in the original loss
    a = tf.cast(x_reconstruct, dtype=tf.float16)
    a = tf.floor(4*a)/4
    return K.mean(K.square(a - x_input), axis=-1)
I found the answer myself; let me share it here.
If I write the code like this:
def MyLoss(self, y_true, y_pred):
    if self.precision == 'float16':
        K.set_floatx('float16')
        K.set_epsilon(1e-4)
    return K.mean(K.square(y_true - K.round(y_pred*4. - 0.5)/4.0), axis=-1)
It works. The trick is, I think, that I cannot use 'K.cast_to_floatx(y_true)'. Instead, simply use y_true directly. I still do not understand why...
I am working on a 2D matrix and finding the sum of its elements; below is my logic:
def calculateSum(a, x, y):
    s = 0
    for i in range(0, x+1):
        for j in range(0, y+1):
            s = s + a[i][j]
    print(s)
    return s

def check(a):
    arr = []
    x = 0
    y = 0
    for i in range(len(a)):
        row = []
        y = 0
        for j in range(len(a[i])):
            row.append(calculateSum(a, x, y))
            y = y + 1
        x = x + 1
        print(row)

check([[1, 2], [3, 4]])
calculateSum is the function that calculates the sum of the elements.
Now my question is: if the matrix size is huge, is there a way to improve the performance of the above program?
Update:
import numpy as np

def calculateSum(a, x, y):
    return np.sum(a[x:, y:])
After switching to NumPy I am getting the error TypeError: list indices must be integers or slices, not tuple.
As the matrix dimensions increase, efficiency falls. The efficient way to deal with this is to parallelize the task of summing the values, which is possible because addition is associative.
Luckily for you, this parallelization is already implemented in a library known as NumPy.
To get started with NumPy, use pip install numpy. To get an overview of the library, visit: https://www.geeksforgeeks.org/numpy-in-python-set-1-introduction/
And for your question, you will need the function numpy.sum().
Edit:
Also, as @Mad Physicist pointed out, NumPy has a packed memory layout and its routines are implemented in C, which boosts its speed even further.
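For example, a minimal sketch of the idea (note that the nested list has to be converted to an ndarray first; that is also what the TypeError in the update points at, since list indices cannot take the tuple that a 2-D slice produces):

import numpy as np

def calculateSum(a, x, y):
    # slicing a 2-D ndarray and summing runs in optimized C code
    return np.sum(a[:x+1, :y+1])

a = np.array([[1, 2], [3, 4]])  # convert the nested list first
print(calculateSum(a, 1, 1))    # 10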
Is it possible to run map_fn on a tensor with a single value?
The following works:
import tensorflow as tf
a = tf.constant(1.0, shape=[3])
tf.map_fn(lambda x: x+1, a)
#output: [2.0, 2.0, 2.0]
However this does not:
import tensorflow as tf
b = tf.constant(1.0)
tf.map_fn(lambda x: x+1, b)
#expected output: 2.0
Is it possible at all?
What am I doing wrong?
Any hints will be greatly appreciated!
Well, I see you accepted an answer, which correctly states that tf.map_fn() applies a function to the elements of a tensor, and a scalar tensor has no elements. But it's not impossible to do this for a scalar tensor; you just have to tf.reshape() it before and after, like this code (tested):
import tensorflow as tf

b = tf.constant(1.0)
if () == b.get_shape():
    c = tf.reshape(tf.map_fn(lambda x: x+1, tf.reshape(b, (1,))), ())
else:
    c = tf.map_fn(lambda x: x+1, b)
# expected output: 2.0
with tf.Session() as sess:
    print(sess.run(c))
will output:
2.0
as desired.
This way you can factor this into an agnostic function that can take both scalar and non-scalar tensors as argument.
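A possible sketch of such a wrapper (the name map_fn_any is just for illustration):

import tensorflow as tf

def map_fn_any(fn, t):
    # route scalar tensors through a length-1 reshape, pass the rest through
    if t.get_shape() == ():
        return tf.reshape(tf.map_fn(fn, tf.reshape(t, (1,))), ())
    return tf.map_fn(fn, t)

b = tf.constant(1.0)
a = tf.constant(1.0, shape=[3])
with tf.Session() as sess:
    print(sess.run(map_fn_any(lambda x: x + 1, b)))  # 2.0
    print(sess.run(map_fn_any(lambda x: x + 1, a)))  # [2. 2. 2.]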
No, this is not possible. As you probably saw, it throws an error:
ValueError: elems must be a 1+ dimensional Tensor, not a scalar
The point of map_fn is to apply a function to each element of a tensor, so it makes no sense to use this for a scalar (single-element) tensor.
As to "what you are doing wrong": This is difficult to say without knowing what you're trying to achieve.
I am trying to learn about solving ODEs in Python, and scipy seemed like a good starting point. I've become comfortable with using odeint and am now trying to learn how to use ode. I tried running the sample code in the scipy docs, but it returns an error. I've copied the code below alongside the error.
CODE
from scipy.integrate import ode

y0, t0 = [1.0j, 2.0], 0

def f(t, y, arg1):
    return [1j*arg1*y[0] + y[1], -arg1*y[1]**2]

def jac(t, y, arg1):
    return [[1j*arg1, 1], [0, -arg1*2*y[1]]]

r = ode(f, jac).set_integrator('zvode', method='bdf', with_jacobian=True)
r.set_initial_value(y0, t0).set_f_params(2.0).set_jac_params(2.0)
t1 = 10
dt = 1
while r.successful() and r.t < t1:
    r.integrate(r.t+dt)
    print("%g %g" % (r.t, r.y))
ERROR MESSAGE
print("%g %g" % (r.t, r.y))
TypeError: only length-1 arrays can be converted to Python scalars
r.y is a length-2 array, and one of complex numbers at that. Both are incompatible with a formatting instruction for a single float.
Implicit conversion to string, however, works:
print("%g %s" % (r.t, r.y))