Recently I've been trying to use JuicyPixels and hmatrix to process images.
However, I don't know how to calculate the gradient of a matrix (as an image gradient) using hmatrix. There seems to be no API for it, and the version I wrote myself is too slow to be usable.
In hmatrix, I discovered that the mapMatrix function is very useful, but it only handles element-wise transformations. Is there a more powerful function than mapMatrix that can iterate over a matrix efficiently?
hmatrix is not intended for image processing (see the mentioned repa-Devil and also easyVision), but you can try conv2.
I'm currently working on diagonalizing a 5000×5000 Hermitian matrix, and I find that when I use Julia's eigen function from the LinearAlgebra module, which produces both the eigenvalues and eigenvectors, I get different results for the eigenvectors compared to when I solve the problem using numpy's np.linalg.eigh function. I believe both of them call LAPACK under the hood, but I'm not sure what else they may be doing differently.
Has anyone else experienced this, or does anyone know what is going on?
numpy.linalg.eigh(a, UPLO='L') is a different algorithm. It assumes the matrix is symmetric (or Hermitian, in the complex case) and by default uses only its lower triangle, which allows a more efficient decomposition algorithm.
The equivalent to Julia's LinearAlgebra.eigen() is numpy.linalg.eig. You should get the same result if you wrap your matrix in Julia as Hermitian(A, :L) (or Symmetric for a real matrix) before feeding it into LinearAlgebra.eigen().
Check out numpy's docs on eig and eigh, and Julia's standard LinearAlgebra documentation. If you go down to the special matrices section, it details which specialized methods are used for each special matrix type, thanks to multiple dispatch.
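A quick numpy-only check makes the distinction concrete. This is a small sketch: eigenvectors are only defined up to a complex phase (or up to rotation within a degenerate eigenspace), so the two routines can legitimately disagree column by column while both being correct:

import numpy as np

rng = np.random.default_rng(0)
m = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
a = m + m.conj().T                       # small Hermitian test matrix

w_eig, v_eig = np.linalg.eig(a)          # general algorithm, any square matrix
w_eigh, v_eigh = np.linalg.eigh(a)       # symmetric/Hermitian-only algorithm

# Eigenvalues agree once sorted (eigh returns them ascending, eig in no
# particular order)...
np.testing.assert_allclose(np.sort(w_eig.real), w_eigh, atol=1e-10)

# ...but each eigenvector is only defined up to a complex phase, so the two
# eigenvector matrices need not match column for column. Both are valid:
np.testing.assert_allclose(a @ v_eigh, v_eigh * w_eigh, atol=1e-10)
np.testing.assert_allclose(a @ v_eig, v_eig * w_eig, atol=1e-10)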
There are lots of different implementations of 2D Perlin noise in Python.
My question: is there a simple implementation of Perlin noise in Python that fits in one function or one class? Or maybe there is an easier-to-implement 2D noise that is similar to Perlin noise?
Does it need to be integers, or is double floating point precision good enough? Can you use Cython? There is a Cython wrapper for FastNoiseLite here: https://github.com/tizilogic/PyFastNoiseLite . You can convert the integers to doubles, with plenty of precision left over.
I would also suggest using the OpenSimplex2 or OpenSimplex2S noise option, rather than Perlin. Perlin as a base noise is very grid-aligned looking. Simplex/OpenSimplex2(S) directly address that.
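And if you do want plain Perlin small enough to fit in one function, a minimal numpy sketch looks like the following. It is unoptimized, and for brevity it hashes each lattice corner to one of only four diagonal gradients, which is a simplification of Perlin's original gradient set:

import numpy as np

def perlin2d(x, y, seed=0):
    # Seeded permutation table, doubled so p[p[xi] + yi] never overflows.
    rng = np.random.default_rng(seed)
    p = rng.permutation(256)
    p = np.concatenate([p, p])

    xi = np.floor(x).astype(int) & 255      # lattice cell coordinates
    yi = np.floor(y).astype(int) & 255
    xf = x - np.floor(x)                    # position inside the cell
    yf = y - np.floor(y)

    def fade(t):                            # Perlin's quintic interpolant
        return t * t * t * (t * (t * 6 - 15) + 10)

    def grad(h, dx, dy):                    # dot(offset, hashed gradient)
        gx = np.where(h & 1, -1.0, 1.0)     # one of 4 diagonal gradients
        gy = np.where(h & 2, -1.0, 1.0)
        return gx * dx + gy * dy

    n00 = grad(p[p[xi] + yi], xf, yf)       # the four corner contributions
    n10 = grad(p[p[xi + 1] + yi], xf - 1, yf)
    n01 = grad(p[p[xi] + yi + 1], xf, yf - 1)
    n11 = grad(p[p[xi + 1] + yi + 1], xf - 1, yf - 1)

    u, v = fade(xf), fade(yf)               # blend corners bilinearly
    nx0 = n00 * (1 - u) + n10 * u
    nx1 = n01 * (1 - u) + n11 * u
    return nx0 * (1 - v) + nx1 * v

# Example: a 100x100 patch of noise sampled over [0, 5) x [0, 5).
ys, xs = np.mgrid[0:5:0.05, 0:5:0.05]
field = perlin2d(xs, ys, seed=42)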
The simplest implementation of Perlin noise I have found is this one:
https://pypi.org/project/perlin-noise/
Once it is installed and initialised at the top of your code, calling noise(float) returns the value of the noise field at that point. Additionally, thanks to its "unlimited coordinate space", you can simply pass more coordinates to noise(float, float) to move to a 2D, 3D, or higher-dimensional noise field.
They provide a couple of basic examples on the website, which I found very helpful and sufficient to start using the library.
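For reference, the 2D example from the PyPI page looks roughly like this (the octave and seed values are just illustrative):

from perlin_noise import PerlinNoise

noise = PerlinNoise(octaves=4, seed=1)

# 2D field: pass both coordinates in a single call. Dividing by the image
# size controls how much of the noise field the picture spans.
xpix, ypix = 100, 100
pic = [[noise([i / xpix, j / ypix]) for j in range(xpix)]
       for i in range(ypix)]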
I have a large matrix (right now about 450000 x 50, might be even larger) whose SVD I want to compute. The matrix isn't sparse, and numpy doesn't seem able to handle it: it exits with a MemoryError.
I tried using np.float16 and it didn't help. Python's tables package doesn't seem to help either (since I need to use the whole matrix later to find eigenvalues).
Do any of you have an idea how I can compute and work with such massive matrices?
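A common cause of exactly this MemoryError is that np.linalg.svd computes the full 450000 x 450000 U matrix by default (around 1.6 TB in float64). A minimal sketch of the usual fix, with an illustrative stand-in for the real data:

import numpy as np

# Illustrative stand-in for the real data (450000 x 50).
a = np.random.rand(450000, 50)

# full_matrices=False keeps U at 450000 x 50 instead of 450000 x 450000,
# which is the usual culprit behind the MemoryError.
u, s, vt = np.linalg.svd(a, full_matrices=False)

# If the eigenvalues needed later are those of a.T @ a, they are just the
# squared singular values -- no need to keep the whole matrix around.
eigvals = s ** 2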
I've got a matrix m of dimension 3329×3329 with lots of zero entries, and I want to calculate m^9.
After trying this with the matrix package (Data.Matrix is easy to use), I figured that a sparse matrix would be a better representation in terms of memory usage and possibly also computation speed, so I'm trying to figure out how to use the hmatrix package. I've already managed to create a sparse matrix:
module Example where

import Numeric.LinearAlgebra as LA

-- A sparse matrix is described by an association list of ((row, col), value).
assocExample :: AssocMatrix
assocExample = [((0,0), 1), ((3329,5), 1)]

-- mkSparse builds a general (sparse) GMatrix from the association list.
sparseExample :: GMatrix
sparseExample = LA.mkSparse assocExample
My problem at this point appears to be that I've got a GMatrix, but the multiplication operator (<>) needs a Matrix t instead.
Looking through the hmatrix documentation on Hackage, I didn't manage to figure out how to obtain a Matrix t here.
I've also had a quick look at the introduction to hmatrix, but the term "sparse" isn't even mentioned in it.
My hunch is that this should be easy enough to do, but I'm missing something simple.
Sparse matrices are, to my knowledge, a rather young feature in hmatrix. Looking through the docs, it seems there is no product of sparse matrices yet, so you would have to implement it yourself.
Edit: and if you have done so, comment here: https://github.com/albertoruiz/hmatrix/issues/162 (the issue also substantiates my statement above).
In my application I am computing the inverse of a block tridiagonal matrix A. Will Theano's matrix inverse account for that structure (by using a more efficient inversion algorithm)?
Further, I only need the diagonal and first off-diagonal blocks of the resulting inverse matrix. Is there a way of preventing Theano from computing the remaining blocks?
Generally, I'm curious whether it would be worth implementing a forward/backward block tridiagonal matrix inversion algorithm myself.
As of April 2015, Theano's matrix inverse function won't do it directly:
http://deeplearning.net/software/theano/library/tensor/nlinalg.html#theano.tensor.nlinalg.MatrixInverse
Theano does not have many optimizations or functions related to that type of method. It partially wraps what is in numpy.linalg (most of it) and some of scipy.linalg:
http://deeplearning.net/software/theano/library/tensor/slinalg.html
So in the short term you are better off doing it with numpy/scipy directly.
If you want to add those features to Theano, it can be done, but it needs someone with the time and willingness to do it.
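As for implementing the forward/backward recursion yourself: the standard two-sweep scheme for the diagonal (and first off-diagonal) blocks of the inverse fits comfortably in plain numpy. Below is a minimal sketch; the function name, the block-list layout, and the test sizes are all illustrative, and there is no error handling:

import numpy as np

def block_tridiag_inv_blocks(D, U, L):
    """Diagonal blocks of inv(A) for a block tridiagonal A.

    D: n diagonal blocks; U: n-1 superdiagonal blocks (block i -> i+1);
    L: n-1 subdiagonal blocks (block i+1 -> i). Two sweeps with O(n)
    small inversions instead of inverting the whole matrix.
    """
    n = len(D)
    # Forward sweep: g[i] is the inverse of the i-th left-connected
    # Schur complement (only blocks 0..i taken into account).
    g = [None] * n
    g[0] = np.linalg.inv(D[0])
    for i in range(1, n):
        g[i] = np.linalg.inv(D[i] - L[i - 1] @ g[i - 1] @ U[i - 1])
    # Backward sweep: fold in the blocks to the right of each position.
    G = [None] * n
    G[-1] = g[-1]
    for i in range(n - 2, -1, -1):
        G[i] = g[i] + g[i] @ U[i] @ G[i + 1] @ L[i] @ g[i]
        # First off-diagonal blocks, if needed:
        # G(i, i+1) = -g[i] @ U[i] @ G[i+1]
        # G(i+1, i) = -G[i+1] @ L[i] @ g[i]
    return G

# Check against the dense inverse on a random 3-block example.
rng = np.random.default_rng(0)
k, n = 4, 3
D = [rng.standard_normal((k, k)) + 5 * np.eye(k) for _ in range(n)]
U = [rng.standard_normal((k, k)) for _ in range(n - 1)]
L = [rng.standard_normal((k, k)) for _ in range(n - 1)]

A = np.zeros((n * k, n * k))
for i in range(n):
    A[i*k:(i+1)*k, i*k:(i+1)*k] = D[i]
for i in range(n - 1):
    A[i*k:(i+1)*k, (i+1)*k:(i+2)*k] = U[i]
    A[(i+1)*k:(i+2)*k, i*k:(i+1)*k] = L[i]

G = block_tridiag_inv_blocks(D, U, L)
Ainv = np.linalg.inv(A)
for i in range(n):
    assert np.allclose(G[i], Ainv[i*k:(i+1)*k, i*k:(i+1)*k])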