RBDL: Is there a relatively easy way to pass torch tensor instead of numpy array into the InverseDynamics function and get torch tensor out? - pytorch

I'm trying to use the RBDL library (https://github.com/rbdl/rbdl/tree/master/python), specifically the InverseDynamics function defined in the python folder's rbdl-wrapper.pyx file:
def InverseDynamics (Model model,
        np.ndarray[double, ndim=1, mode="c"] q,
        np.ndarray[double, ndim=1, mode="c"] qdot,
        np.ndarray[double, ndim=1, mode="c"] qddot,
        np.ndarray[double, ndim=1, mode="c"] tau,
        np.ndarray[double, ndim=2, mode="c"] f_external = None):
    cdef vector[crbdl.SpatialVector] *f_ext = new vector[crbdl.SpatialVector]()
    if f_external is None:
        crbdl.InverseDynamicsPtr (model.thisptr[0],
                <double*>q.data,
                <double*>qdot.data,
                <double*>qddot.data,
                <double*>tau.data,
                NULL
                )
    else:
        for ele in f_external:
            f_ext.push_back(NumpyToSpatialVector(ele))
        crbdl.InverseDynamicsPtr (model.thisptr[0],
                <double*>q.data,
                <double*>qdot.data,
                <double*>qddot.data,
                <double*>tau.data,
                f_ext
                )
    del f_ext
It seems that the function only takes in numpy arrays, but I need to pass in torch tensors. Is there any way to go about doing this? I would appreciate any guidance on this.
I first tried simply passing torch tensors (for arguments q, qdot, qddot, and tau) into the function, but it threw this error:
rbdl.InverseDynamics(model, q, qdot, qddot, tau)
TypeError: Argument 'q' has incorrect type (expected numpy.ndarray, got Tensor)
I also imported torch and looked for a way to change the declared argument type
np.ndarray[double, ndim=1, mode="c"]
into a torch tensor type, but couldn't find one.
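One possible workaround (my sketch, not from the original post): a contiguous CPU tensor of dtype float64 can be viewed as a numpy array via Tensor.numpy() without copying, so you can bridge at the call site and convert the result back with torch.from_numpy. Note that gradients will not flow through the C++ call, and I'm assuming the wrapper's Model exposes qdot_size:

import numpy as np
import torch
import rbdl

def inverse_dynamics_torch(model, q, qdot, qddot):
    """Sketch: call RBDL's numpy-only InverseDynamics with torch tensors."""
    # detach/cpu/ascontiguousarray guard the assumptions stated above
    q_np = np.ascontiguousarray(q.detach().cpu().numpy(), dtype=np.float64)
    qdot_np = np.ascontiguousarray(qdot.detach().cpu().numpy(), dtype=np.float64)
    qddot_np = np.ascontiguousarray(qddot.detach().cpu().numpy(), dtype=np.float64)
    tau_np = np.zeros(model.qdot_size)          # RBDL writes the result here
    rbdl.InverseDynamics(model, q_np, qdot_np, qddot_np, tau_np)
    return torch.from_numpy(tau_np)             # shares memory with tau_np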

Related

Is there a way to compute the matrix logarithm of a Pytorch tensor?

I am trying to compute matrix logarithms in Pytorch, but I need to keep everything as tensors because I then apply gradients, which means I can't use numpy arrays.
Basically, I'm trying to do the equivalent of https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.logm.html but with Pytorch tensors.
Thank you.
Unfortunately the matrix logarithm (unlike the matrix exponential) is not implemented yet, but matrix powers are. This means that, in the meantime, you can approximate the matrix logarithm with its power series expansion, truncated once you reach sufficient accuracy.
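As a minimal sketch of that idea (my own illustration, not from the answer): the Mercator series log(A) = Σ_{k≥1} (−1)^{k+1} (A − I)^k / k converges when the spectral radius of A − I is below 1, and every step is a differentiable torch op:

import torch

def logm_series(A, terms=30):
    # truncated series: log(A) ≈ sum_{k=1}^{terms} (-1)**(k+1) * (A - I)**k / k,
    # valid for A close enough to the identity
    X = A - torch.eye(A.size(0), dtype=A.dtype, device=A.device)
    out = torch.zeros_like(A)
    for k in range(1, terms + 1):
        out = out + (-1.0) ** (k + 1) * torch.linalg.matrix_power(X, k) / k
    return out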
Alternatively, Lezcano proposes a (slow) differentiable matrix logarithm via the adjoint here. I'll quote their suggested solution:
import scipy.linalg
import torch

def adjoint(A, E, f):
    A_H = A.T.conj().to(E.dtype)
    n = A.size(0)
    M = torch.zeros(2*n, 2*n, dtype=E.dtype, device=E.device)
    M[:n, :n] = A_H
    M[n:, n:] = A_H
    M[:n, n:] = E
    return f(M)[:n, n:].to(A.dtype)

def logm_scipy(A):
    return torch.from_numpy(scipy.linalg.logm(A.cpu(), disp=False)[0]).to(A.device)

class Logm(torch.autograd.Function):
    @staticmethod
    def forward(ctx, A):
        assert A.ndim == 2 and A.size(0) == A.size(1)  # Square matrix
        assert A.dtype in (torch.float32, torch.float64, torch.complex64, torch.complex128)
        ctx.save_for_backward(A)
        return logm_scipy(A)

    @staticmethod
    def backward(ctx, G):
        A, = ctx.saved_tensors
        return adjoint(A, G, logm_scipy)

logm = Logm.apply
For a symmetric positive semi-definite matrix like a covariance matrix, this is easy to implement in Pytorch via the SVD:
import torch

a = torch.randn(5, 10)
cov = torch.cov(a)
u, s, v = torch.linalg.svd(cov)
log_cov = torch.matmul(torch.matmul(u, torch.diag_embed(torch.log(s))), v)
You can easily verify that log_cov and log_cov_np are the same:
log_cov_np = scipy.linalg.logm(cov.detach().numpy())
If cov is singular, you can regularize it to give it a good condition number before computing the matrix log.
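As one concrete regularization (my addition, not from the answer), add a small multiple of the identity to shift the eigenvalues away from zero before taking the log:

eps = 1e-6
cov_reg = cov + eps * torch.eye(cov.size(0))  # now well-conditioned for the matrix log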

How can I use Numba for Pytorch tensors?

I am new to Numba and I need to use Numba to speed up some Pytorch functions. But I find even a very simple function does not work :(
import torch
import numba

@numba.njit()
def vec_add_odd_pos(a, b):
    res = 0.
    for pos in range(len(a)):
        if pos % 2 == 0:
            res += a[pos] + b[pos]
    return res

x = torch.tensor([3, 4, 5.])
y = torch.tensor([-2, 0, 1.])
z = vec_add_odd_pos(x, y)
But the following error appears
def vec_add_odd_pos(a, b):
    res = 0.
    ^
This error may have been caused by the following argument(s):
argument 0: cannot determine Numba type of <class 'torch.Tensor'>
argument 1: cannot determine Numba type of <class 'torch.Tensor'>
Can anyone help me? A link with more examples would be also appreciated. Thanks.
Pytorch now exposes an interface on GPU tensors which can be consumed by numba directly:
numba.cuda.as_cuda_array(tensor)
The test script provides a few usage examples: https://github.com/pytorch/pytorch/blob/master/test/test_numba_integration.py
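A minimal usage sketch (mine, assuming a CUDA-capable GPU is available): a torch CUDA tensor is handed zero-copy to a numba CUDA kernel, which writes directly into the tensor's memory:

import torch
import numba.cuda

@numba.cuda.jit
def add_one(arr):
    i = numba.cuda.grid(1)          # absolute thread index
    if i < arr.size:
        arr[i] += 1.0

t = torch.zeros(16, device='cuda')
add_one[1, 32](numba.cuda.as_cuda_array(t))  # zero-copy view, no host transfer
torch.cuda.synchronize()
print(t)  # all ones: the kernel modified the tensor in place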
As others have mentioned, numba currently doesn't support torch tensors, only numpy arrays. However there is TorchScript, which has a similar goal. Your function can then be rewritten as:
import torch

@torch.jit.script
def vec_add_odd_pos(a, b):
    res = 0.
    for pos in range(len(a)):
        if pos % 2 == 0:
            res += a[pos] + b[pos]
    return res

x = torch.tensor([3, 4, 5.])
y = torch.tensor([-2, 0, 1.])
z = vec_add_odd_pos(x, y)
Beware: although you said your code snippet was just a simple example, for loops are really slow, and running TorchScript might not help you much. You should avoid them at all costs and only use them when no other solution exists. That being said, here's how to implement your function in a more performant way:
def vec_add_odd_pos(a, b):
    evenids = torch.arange(len(a)) % 2 == 0
    return (a[evenids] + b[evenids]).sum()
numba supports numpy arrays but not torch's tensors. There is, however, a bridge, Tensor.numpy():
Returns self tensor as a NumPy ndarray. This tensor and the returned
ndarray share the same underlying storage. Changes to self tensor will
be reflected in the ndarray and vice versa.
That means you have to call jitted functions as:
...
z = vec_add_odd_pos(x.numpy(), y.numpy())
If z should be a torch.Tensor as well, torch.from_numpy is what we need:
Creates a Tensor from a numpy.ndarray.
The returned tensor and ndarray share the same memory. Modifications
to the tensor will be reflected in the ndarray and vice versa. The
returned tensor is not resizable.
...
For our code that means
...
z = torch.from_numpy(vec_add_odd_pos(x.numpy(), y.numpy()))
should be called.

Display image in a PIL format from torch.Tensor

I’m quite new to Pytorch. I was wondering how I could convert my tensor of size torch.Size([1, 3, 224, 224]) into an image format for display in a Jupyter notebook. A PIL format or a CV2 format would be fine.
I tried using transforms.ToPILImage(x) but it resulted in a different format like this: ToPILImage(mode=ToPILImage(mode=tensor([[[[1.3034e-16, 1.3034e-16, 1.3034e-16, ..., 1.4475e-16,.
Maybe I’m doing something wrong :no_mouth:
Since your image is normalized, you need to unnormalize it: do the reverse of the operations you applied during normalization. One way is:
class UnNormalize(object):
    def __init__(self, mean, std):
        self.mean = mean
        self.std = std

    def __call__(self, tensor):
        """
        Args:
            tensor (Tensor): Normalized tensor image of size (C, H, W).
        Returns:
            Tensor: UnNormalized image.
        """
        for t, m, s in zip(tensor, self.mean, self.std):
            t.mul_(s).add_(m)
            # The normalize code -> t.sub_(m).div_(s)
        return tensor
To use this, you'll need the mean and standard deviation (which you used to normalize the image). Then,
unorm = UnNormalize(mean = [0.35675976, 0.37380189, 0.3764753], std = [0.32064945, 0.32098866, 0.32325324])
image = unorm(normalized_image)
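To then get the displayable image the question asks for (my sketch, where x stands for the [1, 3, 224, 224] tensor, assuming the unnormalized values land in [0, 1]): squeeze out the batch dimension, and note that ToPILImage must be instantiated before being called; passing the tensor to the constructor is what produced the strange ToPILImage(mode=...) output above:

from torchvision import transforms

img = unorm(x.squeeze(0).cpu())         # (1, 3, 224, 224) -> (3, 224, 224)
pil_img = transforms.ToPILImage()(img)  # instantiate first, then call on the tensor
pil_img                                 # the last line of a Jupyter cell renders a PIL image inline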

Errors when Building up a Custom Loss Function

I am trying to build up my own loss function as follows:
import numpy as np
from keras import backend as K

def MyLoss(self, x_input, x_reconstruct):
    a = np.copy(x_reconstruct)
    a = np.asarray(a, dtype='float16')
    a = np.floor(4*a)/4
    return K.mean(K.square(a - x_input), axis=-1)
In compilation, it says
ValueError: setting an array element with a sequence
Both x_input and x_reconstruct are [m, n, 1] np arrays. The last line of code is actually copied directly from Keras' built-in MSE loss function.
Also, I suppose loss is calculated per sample. If dimensions of the input and reconstructed input are both [m, n, 1], the result of Keras' built-in loss will also be a matrix sized [m, n]. So why does it work properly?
I then tried to use np's functions directly:
def MyLoss(self, x_input, x_reconstruct):
    a = np.copy(x_reconstruct)
    a = np.asarray(a, dtype=self.precision)
    a = np.floor(4*a)/4
    Diff = a - x_input
    xx = np.mean(np.square(Diff), axis=-1)
    yy = np.sum(xx)
    return yy
yet the error persists. What mistake did I make? How should I write the code?
Having borrowed the suggestion from Make a Custom loss function in Keras in detail, I tried the following:
def MyLoss(self, x_input, x_reconstruct):
    if self.precision == 'float16':
        K.set_floatx('float16')
        K.set_epsilon(1e-4)
    a = K.cast_to_floatx(x_input)
    a = K.round(a*4.-0.5)/4.0
    return K.sum(K.mean(K.square(x_input-a), axis=-1))
But the same error happens.
You cannot use numpy arrays in your loss. You have to use TensorFlow or Keras backend operations. Try this, maybe:
import tensorflow as tf
import keras.backend as K

def MyLoss(x_input, x_reconstruct):
    # cast the reconstruction; dtype must be a real dtype, not the string 'tf.float16'
    a = tf.cast(x_reconstruct, dtype=tf.float16)
    a = tf.floor(4*a)/4
    return K.mean(K.square(a - x_input), axis=-1)
I found the answer myself; let me share it here.
If I write the code like this:
def MyLoss(self, y_true, y_pred):
    if self.precision == 'float16':
        K.set_floatx('float16')
        K.set_epsilon(1e-4)
    return K.mean(K.square(y_true-K.round(y_pred*4.-0.5)/4.0), axis=-1)
It works. The trick, I think, is that I cannot use K.cast_to_floatx(y_true); instead, I simply use y_true directly. I still do not understand why...
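A plausible explanation (my addition, not from the original post): K.cast_to_floatx is a NumPy helper, essentially np.asarray(x, dtype=K.floatx()), so handing it a symbolic tensor is exactly what raises "setting an array element with a sequence". The symbolic counterpart is K.cast:

from keras import backend as K

def MyLoss(self, y_true, y_pred):
    y_true = K.cast(y_true, K.floatx())  # symbolic cast; safe on tensors
    return K.mean(K.square(y_true - K.round(y_pred*4. - 0.5)/4.0), axis=-1)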

RuntimeError: Unable to parse arguments while using scipy.optimize.bisect

I am having trouble with code that uses the scipy.optimize.bisect root finder. While using that routine I get the following error:
Traceback (most recent call last):
  File "project1.py", line 100, in <module>
    zero1=bisect(rho_function1,x[0],crit,maxiter=1e6,full_output=True)
  File "/home/irya/anaconda3/envs/hubble/lib/python3.6/site-packages/scipy/optimize/zeros.py", line 287, in bisect
    r = _zeros._bisect(f,a,b,xtol,rtol,maxiter,args,full_output,disp)
RuntimeError: Unable to parse arguments
The code I am using is the following:
import numpy as np
from scipy.optimize import bisect

gamma_default=-0.8
gamma_used=-2.0

def rho_function(x,P_0=5.0e3,B_0=3.0e-6,gamma=gamma_default):
    """function used to solve rho. Here, B_0[G] (default 3.0 uG), P_0[K/cm^3] and
    gamma are constants. The variable x is meant to be rho/rho_0"""
    kb=1.3806e-16 # boltzmann constant in erg/K
    f= B_0**2/(8.0*np.pi*kb)*(x**(4./3)-1) + P_0*(x**gamma-1)
    return f,P_0,B_0,gamma

def rho_function1(x):
    P_0=5.0e3
    B_0=3.0e-6
    gamma=gamma_default
    """function used to solve rho. Here, B_0[G] (default 3.0 uG), P_0[K/cm^3] and
    gamma are constants. The variable x is meant to be rho/rho_0"""
    kb=1.3806e-16 # boltzmann constant in erg/K
    f= B_0**2/(8.0*np.pi*kb)*(x**(4./3)-1) + P_0*(x**gamma-1)
    return f

def rho_prime_function(x,P_0=5.0e3,B_0=3.0e-6,gamma=gamma_default):
    """Derivative of the rho_function"""
    kb=1.3806e-16 # boltzmann constant in erg/K
    f=B_0**2/(6.0*np.pi*kb)*x**(1./3) + P_0*gamma*x**(gamma-1)
    return f

def rho_2nd_prime_function(x,P_0=5.0e3,B_0=3.0e-6,gamma=gamma_default):
    kb=1.3806e-16 # boltzmann constant in erg/K
    f=B_0**2/(18.0*np.pi*kb)*x**(-2./3) + P_0*gamma*(gamma-1)*x**(gamma-2)
    return f

def magnetic_term(x,B_0=3.0e-6):
    """Magnetic term of the rho function"""
    kb=1.3806e-16 # boltzmann constant in erg/K
    f=B_0**2/(8.0*np.pi*kb)*(x**(4./3)-1)
    return f

def TI_term(x,P_0=5.0e3,gamma=gamma_default):
    f=P_0*(x**gamma-1)
    return f

x=np.arange(0.8,2,0.01)
f,P_0_out,B_0_out,gamma_out=rho_function(x,gamma=gamma_used)
f_prime=rho_prime_function(x,gamma=gamma_used)
b_term=magnetic_term(x)
ti_term=TI_term(x,gamma=gamma_used)
kb=1.3806e-16
crit=(-B_0_out**2/(6.*np.pi*kb*P_0_out*gamma_out))**(3./(3.*gamma_out-4))
print("crit =",crit)
print("Using interval: a=",x[0],", b=",crit)

#using bisect
zero1=bisect(rho_function1,x[0],crit,maxiter=1e6,full_output=True)
I would like to know what is wrong with the way I am passing arguments to the bisect routine. Could you please help me?
Cheers,
The parameter maxiter must be an integer. 1e6 is a float. Use
maxiter=int(1e6)
or
maxiter=1000000
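With that fix, the failing call from the traceback becomes:

zero1 = bisect(rho_function1, x[0], crit, maxiter=int(1e6), full_output=True)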
