I'm porting some scientific Python code to Rust as a learning exercise. In the Python version I use scipy.interpolate.interp1d for things like the following:
Given a sorted array x and an array y, compute new_y at the points new_x (with some choice of algorithm: linear, cubic, etc.).
https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.interp1d.html#scipy.interpolate.interp1d
Does anyone have nice Rust examples of this? I found the 'enterpolate' crate, but it seems mostly suited to interpolating to fixed interval x data. Is there anything in 'ndarray' that does interpolation?
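I can't point to a drop-in crate, but the linear case of what interp1d does is small enough to sketch in plain Python (function name is mine), written so each step maps directly onto Rust: the binary search corresponds to slice::partition_point, and the rest is a clamp plus a linear blend.

```python
import bisect

def interp_linear(x, y, new_x):
    """Linearly interpolate y over sorted x at the points new_x.

    Mirrors scipy.interpolate.interp1d(x, y)(new_x) for kind='linear',
    without extrapolation handling. In Rust, replace bisect_right with
    slice::partition_point and the list with a Vec.
    """
    out = []
    for xi in new_x:
        # Index of the first knot strictly greater than xi.
        j = bisect.bisect_right(x, xi)
        # Clamp so (x[j-1], x[j]) is always a valid bracketing pair.
        j = min(max(j, 1), len(x) - 1)
        t = (xi - x[j - 1]) / (x[j] - x[j - 1])
        out.append((1 - t) * y[j - 1] + t * y[j])
    return out
```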
I'm currently working to diagonalize a 5000x5000 Hermitian matrix. When I use Julia's eigen function from the LinearAlgebra module, which produces both the eigenvalues and eigenvectors, I get different eigenvectors than when I solve the same problem with numpy's np.linalg.eigh. I believe both of them use BLAS under the hood, but I'm not sure what else they may be doing differently.
Has anyone else experienced this/knows what is going on?
numpy.linalg.eigh(a, UPLO='L') is a different algorithm. It assumes the matrix is Hermitian (symmetric, in the real case) and by default reads only the lower triangle, which lets it compute the decomposition more efficiently.
The equivalent of Julia's LinearAlgebra.eigen() is numpy.linalg.eig. You should get the same results if you wrap your matrix in Julia as Hermitian(A, :L) (or Symmetric(A, :L) for a real matrix) before feeding it into LinearAlgebra.eigen(), so that Julia dispatches to the Hermitian-specific solver.
Check out numpy's docs on eig and eigh, and Julia's documentation for the standard LinearAlgebra module. If you go down to its special matrices section, it details which specialized methods are used for each special matrix type, thanks to multiple dispatch.
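A quick numpy-side sanity check of this (the 4x4 random matrix is just an illustration): the two routines agree on the spectrum once sorted, while the eigenvectors may differ in ordering and in sign/phase.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
A = A + A.T  # make it symmetric (real Hermitian)

# eigh: assumes Hermitian, reads one triangle, returns ascending eigenvalues.
w_h, v_h = np.linalg.eigh(A)
# eig: general algorithm, no symmetry assumption, unsorted output.
w_g, v_g = np.linalg.eig(A)

# Eigenvalues agree once sorted; eigenvectors may differ in order and sign.
assert np.allclose(np.sort(w_g.real), w_h)
```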
There are lots of different implementations of 2D Perlin noise in Python.
My question: is there a simple implementation of Perlin noise in Python that fits in one function or one class? Or is there an easier-to-implement 2D noise that is similar to Perlin noise?
Does it need to be integers, or is double floating point precision good enough? Can you use Cython? There is a Cython wrapper for FastNoiseLite here: https://github.com/tizilogic/PyFastNoiseLite . You can convert the integers to doubles, with plenty of precision left over.
I would also suggest using the OpenSimplex2 or OpenSimplex2S noise option, rather than Perlin. Perlin as a base noise is very grid-aligned looking. Simplex/OpenSimplex2(S) directly address that.
The simplest implementation of Perlin noise I have found is this one:
https://pypi.org/project/perlin-noise/
Once installed and initialised at the top of your code, calling noise(float) returns the value of the noise field at that point. Additionally, thanks to its "unlimited coordinate space", you can simply pass more values to the noise function, noise(float, float), to switch to a 2D, 3D, or higher-dimensional noise field.
They provide a couple of basic examples on the website which I found very helpful and sufficient to then be able to implement the library.
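Coming back to the literal question ("fits in 1 function"): classic 2D Perlin noise can be sketched in a single pure-Python function. This is an illustrative, unoptimized version that hashes each lattice corner to pick a gradient, so it needs no permutation table.

```python
import math

def perlin2d(x, y, seed=0):
    """Classic 2D Perlin noise in one function (pure Python).

    Gradients are chosen per lattice corner from 8 fixed directions via an
    integer hash. Output is roughly in [-1, 1] and is exactly 0 at integer
    lattice points, as in textbook Perlin noise.
    """
    def grad_dot(ix, iy, dx, dy):
        # Deterministic integer hash of the lattice corner (and seed).
        h = (ix * 374761393 + iy * 668265263 + seed * 144665) & 0xFFFFFFFF
        h ^= h >> 13
        h = (h * 1274126177) & 0xFFFFFFFF
        gx, gy = [(1, 0), (-1, 0), (0, 1), (0, -1),
                  (1, 1), (-1, 1), (1, -1), (-1, -1)][h & 7]
        return gx * dx + gy * dy

    x0, y0 = math.floor(x), math.floor(y)
    fx, fy = x - x0, y - y0
    # Quintic fade curve from Perlin's improved noise: 6t^5 - 15t^4 + 10t^3.
    u = fx * fx * fx * (fx * (fx * 6 - 15) + 10)
    v = fy * fy * fy * (fy * (fy * 6 - 15) + 10)
    # Dot products of corner gradients with the offset to (x, y).
    n00 = grad_dot(x0,     y0,     fx,     fy)
    n10 = grad_dot(x0 + 1, y0,     fx - 1, fy)
    n01 = grad_dot(x0,     y0 + 1, fx,     fy - 1)
    n11 = grad_dot(x0 + 1, y0 + 1, fx - 1, fy - 1)
    # Bilinear interpolation with the faded weights.
    nx0 = n00 + u * (n10 - n00)
    nx1 = n01 + u * (n11 - n01)
    return nx0 + v * (nx1 - nx0)
```

Sample it on a grid (e.g. perlin2d(i / 16, j / 16)) and sum a few octaves at doubled frequency and halved amplitude for the usual fractal look.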
My xarray Dataset has three dimensions,
x,y,t
and 2 variables,
foo, bar
I would like to apply a function baz() to every (x, y) coordinate pair's time series along t.
baz() will accept an array of foo-s and an array of bar-s for a given (x, y).
I'm having a tough time understanding whether built-in structures to handle/distribute this exist in xarray, pandas, numpy, or dask.
Any hints?
My current approach is writing a python array iterator, or exploring ufunc:
The issue with iterating in Python is that I have to deal with concurrency myself, and that the iteration itself has overhead.
The issue with ufuncs is that I don't understand them well enough; a ufunc seems to be applied to individual elements of the array, not to subsets along an axis.
The hopeful part of me is looking for something that is xarray native or daskable.
Sounds (vaguely) like you are looking for Dataset.reduce, with the reduction applied over the time dimension:
ds.reduce(baz, dim='t')
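On the ufunc worry: a plain NumPy ufunc is indeed elementwise, but the pattern here is "consume the whole t axis per (x, y)", which numpy can often express as a reduction over that axis. The shapes and the baz body below are made up for illustration.

```python
import numpy as np

# Toy (x, y, t) data; shapes and the baz body are illustrative only.
nx, ny, nt = 3, 4, 10
rng = np.random.default_rng(0)
foo = rng.standard_normal((nx, ny, nt))
bar = rng.standard_normal((nx, ny, nt))

def baz(f, b):
    # Stand-in for the real function of two 1-D time series.
    return (f * b).mean()

# Explicit per-(x, y) loop: what the question describes.
looped = np.empty((nx, ny))
for i in range(nx):
    for j in range(ny):
        looped[i, j] = baz(foo[i, j, :], bar[i, j, :])

# Axis-aware equivalent: when baz is built from ufuncs and reductions,
# the whole loop collapses to one reduction over the t axis.
vectorized = (foo * bar).mean(axis=-1)

assert np.allclose(looped, vectorized)
```

For a genuinely black-box baz, xarray's xr.apply_ufunc(baz, ds.foo, ds.bar, input_core_dims=[['t'], ['t']], vectorize=True) expresses the same "consume t per (x, y)" contract, and dask arrays slot in via dask='parallelized'.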
I don't want to use numpy because it isn't permitted in competitive programming. I want to take the derivative of, for instance, m = a(a**2 - 24a).
Now, a = 0 and a = 8 are critical points, and I want to find those values. Is it possible using any Python library function, excluding numpy?
m=a(a**2-24a) does not make sense as a Python expression, regardless of what a is. The first a( implies a function; a** implies a number, and 24a is I-don't-know-what.
The derivative is a mathematical concept that's implemented in sympy (symbolic math). numpy also implements it for polynomials, and provides a finite-difference approximation for arrays.
For a list of numbers (or two lists) you could calculate a finite-difference approximation yourself, without numpy.
But for a general black-box function, there is no derivative method in either plain Python or numpy.
Your question might make more sense if you give examples of the function or lists that you are working with.
You can check the implementation in the numpy repository and use that information to implement your own version. You should start with polyder. Probably you do not need half of the code in their implementation.
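For the polynomial case specifically, a no-numpy polyder equivalent is only a few lines; coefficients are highest power first, matching numpy's convention (function name and the example polynomial are mine).

```python
def polyder_list(coeffs):
    """Derivative of a polynomial given as [c_n, ..., c_1, c_0]
    (highest power first, numpy.polyder's convention), pure Python."""
    n = len(coeffs) - 1  # degree of the polynomial
    # Each coefficient is scaled by its power; the constant term drops out.
    return [c * (n - i) for i, c in enumerate(coeffs[:-1])] or [0]

# x**3 - 3*x + 5  ->  3*x**2 - 3
print(polyder_list([1, 0, -3, 5]))  # [3, 0, -3]
```

Critical points are then the roots of the derivative's coefficient list, which you can find with any root-finding method you're allowed to use.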
I'm an engineering student. Pretty much all math I have to do is something in R2 or R3 and concerns differential geometry. Naturally I really like sympy because it makes my calculations reusable and presentable.
What I found:
The thing in sympy that comes closest to what I know functions as (a mapping of scalar or vector values to scalar or vector values, with a name, tied to an expression) seems to be something of the form
functionname = sympy.Lambda(variables_tuple, expression)
or, as an example,
f = sympy.Lambda(x, x + 1)
I also found that sympy has the diffgeom module, which defines Manifolds and Patches and can then perform some operations on functions without explicit expressions or points, such as translating a point given in one coordinate system into a different, linked coordinate system.
I haven't found a way to perform those operations and transformations on functions like those above. Or to define something in the diffgeom context that performs like the Lambda function.
Examples of what I'd like to do:
scalar field f(x, y, z) = expression
grad(f) = (d expression / dx, d expression / dy, d expression / dz)^T
vector field v(x, y, z) = (expression1, expression2, expression3)^T
I'd then like to be able to integrate the vectorfield over bodies or curves.
Do these things exist and I haven't found them?
Are they doable with diffgeom and I didn't understand it?
Would I have to write this myself with the backbones that sympy already provides?
There is a differential geometry module within sympy:
http://docs.sympy.org/latest/modules/diffgeom.html
For more examples you can see http://blog.krastanov.org/pages/diff-geometry-in-python.html
To do what you suggest with the diffgeom module, just define your expression using the base coordinates of your manifold:
from sympy.diffgeom.rn import R2
scalar = (R2.r**2 - R2.x**2 - R2.y**2) # you can mix coordinate systems
gradient = (R2.e_x + R2.e_y).rcall(scalar)
There are various functions for change of coordinates, etc. Probably many things are missing, but it would take usage and bug reports (and help) for all this to get implemented.
You can see some other examples in the test files:
tested examples from a textbook: https://github.com/sympy/sympy/blob/master/sympy/diffgeom/tests/test_function_diffgeom_book.py
more tests https://github.com/sympy/sympy/blob/master/sympy/diffgeom/tests/test_diffgeom.py
However, for doing what your question suggests, going through differential geometry (while possible) would be overkill. You can just use the matrices module:
from sympy import Matrix

def gradient(expr, vars):
    return Matrix([expr.diff(v) for v in vars])
Fancier things like matrix Jacobians are implemented as well.
A final remark: using expressions instead of functions and lambdas will probably result in more readable and idiomatic sympy code (it is often more natural to use subs to substitute a symbol than to work with some kind of closure, lambda, or function call).
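As a sketch of the matrices-module suggestion in action (symbol names and the sample fields are my own):

```python
from sympy import Matrix, symbols

x, y, z = symbols('x y z')

def gradient(expr, vars):
    # Column vector of partial derivatives of a scalar field.
    return Matrix([expr.diff(v) for v in vars])

f = x**2 + y**2 + z**2            # a scalar field
grad_f = gradient(f, (x, y, z))   # column vector (2*x, 2*y, 2*z)^T

# Jacobian of a vector field via the built-in Matrix method:
v = Matrix([x*y, y*z, z*x])
J = v.jacobian(Matrix([x, y, z]))
```

Line and volume integrals of these fields then reduce to ordinary sympy integrate calls after substituting a parametrization with subs.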