What is the cleanest way to turn a Vector of vectors of unboxed doubles into a repa Array U DIM2 Double?
The most obvious way is to concatenate all the unboxed vectors into one and pass it to fromUnboxed, but that feels a bit clumsy. Is there a better alternative?
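For what it's worth, the concatenate-then-fromUnboxed approach is just row-major flattening, which is exactly the layout a DIM2 repa array uses. A plain-Python sketch of that layout (illustrative only, not the repa API):

```python
# Illustrative only: concatenating the row vectors and indexing with
# i * n_cols + j is the row-major layout fromUnboxed expects.
rows = [[1.0, 2.0, 3.0],
        [4.0, 5.0, 6.0]]

n_rows, n_cols = len(rows), len(rows[0])
flat = [x for row in rows for x in row]  # the "concatenate" step

def at(i, j):
    # index into the flat vector the way a 2d row-major array does
    return flat[i * n_cols + j]
```

So the "clumsy" concatenation is doing no extra work beyond what any dense 2d representation needs.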
Given a vector such as
x = torch.rand(10000)
What is the most efficient way to compute the symmetric matrix
using pytorch? I can only think of doing matrix multiplications like
torch.matmul(x.view(-1,1), x.view(1,-1))
or
torch.matmul(x.unsqueeze(1),x.unsqueeze(0))
but I wonder if there are more efficient ways that make use of the symmetry of the resulting matrix to calculate it.
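If it helps, the first form is just the outer product x xᵀ, which PyTorch also spells `torch.outer` (assuming torch >= 1.8 here); as far as I know there is no built-in that fills only one triangle, so the dense forms above are about as good as it gets:

```python
import torch

x = torch.rand(100)

# All three spellings produce the same symmetric rank-1 matrix x xᵀ:
a = torch.outer(x, x)
b = torch.matmul(x.view(-1, 1), x.view(1, -1))
c = x.unsqueeze(1) * x.unsqueeze(0)   # broadcasting, no matmul call

assert torch.allclose(a, b) and torch.allclose(a, c)
assert torch.allclose(a, a.T)  # symmetric by construction
```

Exploiting symmetry would save at most a factor of two on a rank-1 product, which rarely beats letting the dense kernel run.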
I need to render a 2d gaussian and still be able to differentiate with respect to the 2d mean, which is float-valued. The standard deviation of the gaussian can be held constant, and so can the size of the generated matrix.
Any idea how to do this in tensorflow?
CLARIFICATION:
I need a function draw2dGaussian(mean2d) which returns a 2d matrix M showing a discretized 2d gaussian centered at the location mean2d. Note that mean2d is a pair of floats. M will be 0 at points far enough from mean2d.
The requirement on draw2dGaussian is that it must be differentiable with respect to mean2d.
I think openDR http://files.is.tue.mpg.de/black/papers/OpenDR.pdf might be able to offer such a function, but I was wondering if somebody had a simpler solution.
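Concretely, here is a plain-Python sketch of the computation I have in mind (grid size and sigma assumed constant, names are mine); expressing the same formula with framework ops on a constant meshgrid is what would make M differentiable in mean2d:

```python
import math

def draw2d_gaussian(mean2d, size=9, sigma=1.0):
    """Discretized, unnormalized 2d gaussian centered at mean2d.

    Plain-Python sketch of the computation only: every operation is
    smooth in (mx, my), so the same formula written with tf.exp or
    torch.exp differentiates automatically.
    """
    mx, my = mean2d
    return [[math.exp(-((i - mx) ** 2 + (j - my) ** 2) / (2 * sigma ** 2))
             for j in range(size)]
            for i in range(size)]

M = draw2d_gaussian((4.0, 4.0))  # peak value 1.0 at the center cell
```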
You are looking for the reparametrization trick. A one-dimensional gaussian N(mean, var) can be written as mean + sqrt(var) * N(0, 1). A similar construction applies to 2d gaussians, with a covariance matrix Σ instead of a scalar variance: factor Σ = L Lᵀ (e.g. via Cholesky decomposition) and write the sample as mean + L · z with z ~ N(0, I).
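A minimal numeric illustration of the 1d case: once the noise z is fixed, the sample depends on the parameters through plain differentiable arithmetic.

```python
import math
import random

def sample_gaussian(mean, var, z=None):
    """Reparametrized draw from N(mean, var): mean + sqrt(var) * z, z ~ N(0, 1).

    All the randomness lives in z, so the output is differentiable
    with respect to mean and var.
    """
    if z is None:
        z = random.gauss(0.0, 1.0)
    return mean + math.sqrt(var) * z

# With z fixed the dependence on the parameters is just arithmetic:
y = sample_gaussian(mean=3.0, var=4.0, z=1.0)  # 3 + 2*1 = 5.0
```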
I've got a 3329×3329 matrix m with lots of zero entries, and I want to calculate m^9.
After trying this with the matrix package (Data.Matrix is easy to use), I figured that a sparse matrix would be a better representation in terms of memory usage and possibly also computation speed. So I'm trying to figure out how to use the hmatrix package. I've already managed to create a sparse matrix:
module Example where
import Numeric.LinearAlgebra as LA
assocExample :: AssocMatrix
assocExample = [((0,0), 1),((3329,5),1)]
sparseExample :: GMatrix
sparseExample = LA.mkSparse assocExample
My problem at this point appears to be that I've got a GMatrix, but for the multiplication operator (<>) I need a Matrix t instead.
By looking through the hmatrix documentation on Hackage, I didn't manage to figure out how to obtain a Matrix t here.
I've also had a quick look at the introduction to hmatrix, but the term "sparse" isn't even mentioned in it.
My hunch is that this should be easy enough to do, but I'm missing something simple.
Sparse matrix support is, to my knowledge, fairly recent in hmatrix. Looking through the docs, it seems there is no product of sparse matrices, so you must implement it yourself.
Edit: And if you have done so, comment here: https://github.com/albertoruiz/hmatrix/issues/162 (which also substantiates my statement above).
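The product itself is short to implement; here is a language-agnostic plain-Python sketch over a dict-of-keys representation analogous to AssocMatrix (helper names are made up), including the repeated multiplication needed for m^9:

```python
from collections import defaultdict

def sparse_mul(a, b):
    """Product of sparse matrices given as {(row, col): value} dicts.

    Touches only nonzero entries: for each a[(i, k)] and b[(k, j)],
    accumulate v * w into c[(i, j)].
    """
    b_by_row = defaultdict(list)
    for (k, j), v in b.items():
        b_by_row[k].append((j, v))
    c = defaultdict(float)
    for (i, k), v in a.items():
        for j, w in b_by_row[k]:
            c[(i, j)] += v * w
    return dict(c)

def sparse_pow(m, n):
    # m^n by repeated multiplication; repeated squaring would cut
    # this to O(log n) products.
    out = m
    for _ in range(n - 1):
        out = sparse_mul(out, m)
    return out

m = {(0, 0): 1.0, (0, 1): 1.0, (1, 0): 1.0}  # tiny test matrix [[1,1],[1,0]]
```

Translating this to Haskell with Data.Map over the AssocMatrix pairs is mechanical.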
I have a matrix :: [[Int]] whose elements are all either zero or one.
How can I efficiently implement rref in GF(2)?
If LU decomposition can be used to calculate rref(matrix) in GF(2), any example or elaboration on the algorithm would be greatly appreciated.
I don't think it's possible to make an efficient GF(2) implementation using hmatrix; it was designed to handle "big" numbers, not bits.
You definitely don't want to use a Double to encode a bit: that's 64 times more memory than you actually need.
Have you searched for rref algorithms optimized for GF(2)? A generic Gaussian elimination or LU decomposition might not be the best solution over GF(2).
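To make the bit-packing point concrete: over GF(2), row addition is XOR, so an entire matrix row fits in one integer and each elimination step is a single XOR per row. A plain-Python sketch (illustrative only, nothing to do with hmatrix):

```python
def rref_gf2(rows, n_cols):
    """Reduced row echelon form over GF(2).

    Each row is an int whose bits are the entries (bit 0 = column 0).
    Row addition in GF(2) is XOR, so clearing a pivot column from
    every other row costs one XOR per row.
    """
    rows = list(rows)
    pivot_row = 0
    for col in range(n_cols):
        bit = 1 << col
        # find a row at or below pivot_row with a 1 in this column
        for r in range(pivot_row, len(rows)):
            if rows[r] & bit:
                rows[pivot_row], rows[r] = rows[r], rows[pivot_row]
                break
        else:
            continue  # column already all zero below the pivots
        # clear this column in every other row
        for r in range(len(rows)):
            if r != pivot_row and rows[r] & bit:
                rows[r] ^= rows[pivot_row]
        pivot_row += 1
    return rows

def pack(bits):          # [1, 0, 1] -> 0b101
    return sum(b << i for i, b in enumerate(bits))
```

The same bit-row trick carries over to Haskell directly with Integer (or unboxed Word vectors) and xor from Data.Bits.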
In most material on decision trees, each attribute is represented by a single value, and these values are concatenated into a feature vector. That makes sense, since the attributes are normally independent of each other.
In practice, however, some attributes can only be represented as a vector or matrix, for example a GPS coordinate (x, y) on a 2d map. If x and y are correlated (e.g. nonlinearly dependent), it is not a good solution to simply concatenate them with the other attributes. I wonder if there are better techniques for dealing with them?
thanks