How to convert a cupy.ndarray to a scalar?

I'm working with CuPy at the moment. I've noticed something that seems to be a bug, but I'm not sure.
I've noticed that certain math functions return results with an unexpected shape. For example, the following NumPy call returns a scalar with no shape, as it should:
s8 = numpy.abs(2)
However, if I use CuPy:
s8 = cupy.abs(2)
It results in a cupy.ndarray, and evaluating s8.shape returns ().
The frustrating part is trying to convert the cupy.ndarray into a scalar with no shape. I've tried cupy.squeeze(s8) and s8.item() to return just the scalar; in both cases the value comes back but the shape remains. I've also tried float(s8) and int(s8), both of which return the appropriate value type, but the array still remains.
Any thoughts on how to convert this from a cupy.ndarray to a scalar?

As of 2022, a working link is https://docs.cupy.dev/en/stable/user_guide/difference.html#reduction-methods, but the answer is still to explicitly cast the value, e.g. int(cupy.abs(2)) or float(cupy.abs(2.0)).
Alternatively, .item() is used in some code, e.g.:
s8 = cupy.abs(2)
s8 = s8.item()

This is a known behavior in CuPy and is not a bug:
https://docs.cupy.dev/en/stable/reference/difference.html#reduction-methods
Unlike in NumPy (which makes the distinction between scalars and 0-d arrays), all scalars in CuPy are 0-d arrays, otherwise it would unavoidably lead to a data transfer by converting them to Python scalars, which compromises the performance.
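A short illustration of the behavior and the explicit cast (a minimal sketch; it assumes a machine with a CUDA GPU and CuPy installed):

import cupy

s8 = cupy.abs(2)           # 0-d cupy.ndarray living on the GPU
print(type(s8), s8.shape)  # <class 'cupy.ndarray'> ()

# Each of these forces a device-to-host transfer and yields a Python scalar:
print(int(s8), float(s8), s8.item())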

Related

Eigenvectors in Julia vs Numpy

I'm currently working to diagonalize a 5000x5000 Hermitian matrix, and I find that when I use Julia's eigen function in the LinearAlgebra module, which produces both the eigenvalues and eigenvectors, I get different results for the eigenvectors compared to when I solve the problem using numpy's np.linalg.eigh function. I believe both of them use BLAS, but I'm not sure what else they may be using that is different.
Has anyone else experienced this/knows what is going on?
numpy.linalg.eigh(a, UPLO='L') uses a different algorithm: it assumes the matrix is symmetric (or Hermitian) and, by default, reads only the lower triangle so it can compute the decomposition more efficiently.
The equivalent of Julia's LinearAlgebra.eigen() on a plain matrix is numpy.linalg.eig. You should get the same result if you wrap your matrix in Julia in Symmetric(A, :L) (or Hermitian(A, :L), since yours is Hermitian) before feeding it into LinearAlgebra.eigen().
Check out NumPy's docs on eig and eigh, and Julia's standard LinearAlgebra documentation. If you go down to the special matrices section, it details which specialized methods are used for each special matrix type thanks to multiple dispatch.
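To see the two NumPy routines side by side, here is a minimal sketch (the matrix, seed, and variable names are my own illustration): the eigenvalues agree, while the eigenvectors agree only up to a complex phase.

import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (M + M.conj().T) / 2      # random Hermitian matrix

w_h, v_h = np.linalg.eigh(H)  # Hermitian-specific algorithm
w_g, v_g = np.linalg.eig(H)   # general algorithm, like eigen() on a plain matrix

order = np.argsort(w_g.real)  # eigh returns sorted eigenvalues; eig does not
print(np.allclose(w_h, w_g[order].real))             # True: same eigenvalues

phases = np.sum(v_h.conj() * v_g[:, order], axis=0)  # <v_h_i, v_g_i> per column
print(np.allclose(np.abs(phases), 1.0))              # True: match up to phase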

Is there a way to supply a numerical function to JiTCODE’s function argument instead of symbolic one?

I am getting a function (a learned dynamical system) through a neural network and want to pass it to JiTCODE to calculate trajectories, Lyapunov exponents, etc. As per the JiTCODE documentation, the function f has to be a symbolic function. Is there any way to change this since ultimately JiTCODE is going to lambdify the symbolic function?
Basically, this is what I'm doing right now:
from jitcode import jitcode_lyap

# learns derivatives from the neural-net model;
# returns an array of numbers [\dot{x}, \dot{y}] for input [x, y]
learned_fn = lambda t, y0: NN_model(t, y0)
ODE = jitcode_lyap(learned_fn, n_lyap=2)
ODE.set_integrator("vode")
First, beware that JiTCODE does not take regular functions like your learned_fn as input. It takes either iterables of symbolic expressions or generator functions returning symbolic expressions. This is why your example code will likely produce an error.
What you are asking for
You can “inject” any derivative with the right signature into JiTCODE by changing the f property and telling it that compiling the actual derivative failed. Here is a minimal example doing this:
from jitcode import jitcode, y

ODE = jitcode([0])           # dummy symbolic derivative
ODE.f = lambda t, y: y[0]    # inject the actual (plain-Python) derivative
ODE.compile_attempt = False  # tell JiTCODE that compilation failed
ODE.set_integrator("dopri5")
ODE.set_initial_value([1], 0.0)
for time in range(30):
    print(time, *ODE.integrate(time))
Why you probably do not want to do this
Ignoring Lyapunov exponents for a second, the entire point of JiTCODE is to hard-code your derivative for you and pass it to SciPy's ode or solve_ivp, which perform the actual integration. Thus the above example code is just an overly complicated way of passing a function to one of SciPy's standard integrators (here ode), with no advantage. If your NN_model is efficiently implemented in the first place, you may not even gain a speed boost from JiTCODE's auto-compilation.
The main reason to use JiTCODE's Lyapunov-exponent capabilities is that it automatically obtains the Jacobian and the ODE for the tangent-vector evolution (needed for the Benettin method) from the symbolic representation of the derivative. Without a symbolic input, it cannot possibly do this. You could theoretically inject a tangent-vector ODE as well, but then you would leave little for JiTCODE to do, and you would probably be better off using SciPy's ode or solve_ivp directly.
What you probably need
If you want to use JiTCODE, you need to write a small piece of code that translates the output of your neural-network training into the symbolic representation of your ODE that JiTCODE needs. This is probably much less scary than it sounds: you just need to obtain the trained coefficients and insert them into the equations of the general form of the neural network, as in the sketch below.
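For instance, here is a minimal sketch of that translation for a hypothetical one-hidden-layer tanh network; the weights W1, b1, W2, b2 stand in for whatever trained coefficients your NN_model actually holds (none of these names come from the question):

from symengine import tanh
from jitcode import jitcode_lyap, y

n = 2                           # dimension of the ODE
W1 = [[0.5, -0.3], [0.1, 0.8]]  # hypothetical trained weights
b1 = [0.0, 0.1]
W2 = [[1.0, -0.5], [0.3, 0.2]]
b2 = [0.0, 0.0]

# Build the network's equations as symbolic expressions in y(0), ..., y(n-1):
state = [y(i) for i in range(n)]
hidden = [tanh(sum(W1[j][i] * state[i] for i in range(n)) + b1[j])
          for j in range(n)]
f_sym = [sum(W2[j][k] * hidden[k] for k in range(n)) + b2[j]
         for j in range(n)]

ODE = jitcode_lyap(f_sym, n_lyap=2)  # JiTCODE can now compile and differentiate it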
If you are lucky and your NN_model fully supports duck typing, you may do something like this:
from jitcode import t, y

n = 10  # dimension of your ODE
NN_input = [y(i) for i in range(n)]
learned_fn = NN_model(t, NN_input)[1]
The idea is that you feed NN_model once with abstract symbolic input (t and NN_input). NN_model then acts once on this abstract input, providing you an abstract result (this is where you need the duck-typing support). If I interpreted the output of your NN_model correctly, the second component of this result should be the abstract derivative that JiTCODE requires as input.
Note that your NN_model appears to expect dimensions to be indices, whereas JiTCODE's y expects dimensions to be function arguments. Thus you cannot just choose NN_input = y; you have to transform it as above.
To quote directly from the linked documentation:
JiTCODE takes an iterable (or generator function or dictionary) of symbolic expressions, which it translates to C code, compiles on the fly,
so there is no lambdification going on: the function is parsed, not just evaluated.
But in general that should be no problem; you just use the JiTCODE-provided symbolic vector y and symbol t instead of the function arguments t, y of the right-hand side of the ODE.

More elegant method to take user-input vector as a float64?

I'm reprogramming an orbital-analysis program, which I originally wrote in MATLAB, in Python 3.7. The initial velocity and position are queried as user input. The method I'm currently using feels clunky (I am a Python beginner), and I'm wondering if someone can suggest a more elegant way to take this input vector as a NumPy float64? I suspect this problem is trivial, but I haven't found a clear answer yet...
The current input is a vector with the syntax "i,k,j": no spaces, comma-delimited. Each component is converted to a float in a list via list(map(float, input)); I then have to convert that back to NumPy float64 in order to use the result as a vector later on.
v = np.float64(list(map(np.float64,input('Query Text').split(','))))
I'd say that's pretty elegant already. I'd do it like this if you like it better:

np.float64(
    [np.float64(i) for i in input("Query text").split(",")]
)

but I wouldn't say this is much more elegant; at least it does the same thing.
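A slightly more direct sketch (my own suggestion, not from the original answers): hand the split strings straight to np.array with an explicit dtype and let NumPy do the per-element conversion.

import numpy as np

raw = input("Query Text")                       # e.g. "1.0,2.5,-3.2"
v = np.array(raw.split(","), dtype=np.float64)  # parses each field to float64
print(v, v.dtype)                               # [ 1.   2.5 -3.2] float64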

Computation in fixed point or int

I am using fixed-point numbers within my network, which is based on the Keras framework. My concern is that when the network performs multiplications on Theano variables, the result is float32, even if the numbers supplied are in fixed point. Is there any intrinsic way to get the result in fixed-point format, or even int?
If not, what are alternative approaches?
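One possible direction, sketched here as my own assumption rather than an answer from the thread: Theano keeps integer dtypes under multiplication, and T.cast converts explicitly, which can emulate a crude fixed-point scheme by scaling before the cast.

import numpy as np
import theano
import theano.tensor as T

a = T.imatrix("a")
b = T.imatrix("b")
prod_int = a * b                    # int32 * int32 stays int32

x = T.fmatrix("x")
fixed = T.cast(x * 256.0, "int32")  # crude Q-format: scale by 2**8, then cast

f = theano.function([x], fixed)
print(f(np.array([[0.5]], dtype=np.float32)))  # [[128]]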

Can I avoid using `Theano.scan`?

I have a 3-dimensional tensor ("tensor3", an array of matrices), and I'd like to compute the determinant (theano.sandbox.linalg.det) of each matrix. Is there a way to compute each determinant without using theano.scan? When I try calling det directly on the tensor, I get the error
3-dimensional array given. Array must be two-dimensional.
But I read that scan is slow and doesn't parallelize well, and that one should use only tensor operations if possible. Is that so? Can I avoid using scan in this case?
I see three possibilities:
1. If you know the number of matrices in the tensor3 variable before compiling the Theano function, you could use the split() op or just call det() on each matrix in the tensor3.
2. If you don't know the shape, you can make your own op that loops over the input and calls the NumPy function; the Theano documentation has an example of how to make an op.
3. Use scan. It is easy to use for this case; see the sketch after this list, which applies det() to each matrix.
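A minimal sketch of option 3, using theano.sandbox.linalg.det as in the question (shapes and names are my own):

import numpy as np
import theano
import theano.tensor as T
from theano.sandbox.linalg import det

A = T.dtensor3("A")  # a stack of square matrices (float64)

# scan applies det() to each 2-D slice of A along the first axis
dets, _ = theano.scan(fn=lambda m: det(m), sequences=A)
batch_det = theano.function([A], dets)

stack = np.random.rand(5, 3, 3)
print(batch_det(stack))  # five determinants, one per matrix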
