How to create a dense matrix from uint[,] array - math.net

I am trying to create a DenseMatrix in Math.NET Numerics from a two-dimensional uint array.
uint[,] myarray = new uint[10, 10];
Matrix<uint> newarray = Matrix<uint>.Build.DenseOfArray(myarray);
Math.NET complains that this is not implemented yet; only floating-point matrix types are implemented. What I would like to do is this:
uint[,] myarray = new uint[10, 10];
Matrix<double> newarray = Matrix<double>.Build.DenseOfArray(myarray);
But this fails, because myarray is of a different type than the Matrix.
Is there a way of implicitly converting my uint array to double to solve this problem?
Thanks for any hints!

I don't think casting can work in this case, but there is a mechanism to construct a matrix from an arbitrary indexable source:
Matrix<double> newarray = Matrix<double>.Build.Dense(
    myarray.GetLength(0), myarray.GetLength(1), (i, j) => myarray[i, j]);
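For completeness, a minimal self-contained sketch of that approach (the sample data fill is only illustrative; the uint values are promoted to double by C#'s implicit numeric conversion inside the lambda):
using System;
using MathNet.Numerics.LinearAlgebra;

class Example
{
    static void Main()
    {
        var myarray = new uint[10, 10];
        for (int i = 0; i < 10; i++)
            for (int j = 0; j < 10; j++)
                myarray[i, j] = (uint)(i * 10 + j);

        // Build a double matrix from the indexable uint source.
        Matrix<double> m = Matrix<double>.Build.Dense(
            myarray.GetLength(0), myarray.GetLength(1), (i, j) => myarray[i, j]);

        Console.WriteLine(m);
    }
}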

Related

Multiply every element of matrix with a vector to obtain a matrix whose elements are vectors themselves

I need help in speeding up the following block of code:
import numpy as np
x = 100
pp = np.zeros((x, x))
M = np.ones((x,x))
arrayA = np.random.uniform(0,5,2000)
arrayB = np.random.uniform(0,5,2000)
for i in range(x):
    for j in range(x):
        y = np.multiply(arrayA, np.exp(-1j*(M[j,i])*arrayB))
        p = np.trapz(y, arrayB)  # numerically integrate y over arrayB
        pp[j,i] = abs(p**2)
Is there a function in numpy, or some other method, that would let me rewrite this piece of code so that the nested for-loops can be omitted? My idea is a function that multiplies every element of M with the vector arrayB, giving a 100 x 100 matrix in which each element is itself a vector. Each of those vectors would then be multiplied by arrayA with np.multiply(), again giving a 100 x 100 matrix whose elements are vectors. Finally, numerical integration of each of those vectors with np.trapz() would give a 100 x 100 matrix whose elements are scalars.
My problem, though, is that I don't know which functions could do this.
Thanks in advance for your help!
Edit:
Using broadcasting with
M = np.asarray(M)[..., None]
y = 1000*arrayA*np.exp(-1j*M*arrayB)
pp = np.trapz(y, arrayB)
works and I can omit the for-loops. However, this is not faster, but a little slower in my case; it might be a memory issue.
y = np.multiply(arrayA, np.exp(-1j*(M[j,i])*arrayB))
can be written as
y = arrayA * np.exp(-1j*M[:,:,None]*arrayB)
producing an (x, x, 2000) array.
But the next step,
np.trapz(y, arrayB)
may need adjustment; I'm not familiar with np.trapz.
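Putting the pieces together, here is a minimal self-contained sketch of the fully vectorized version (np.trapz integrates along the last axis by default, so the (x, x, 2000) array collapses to (x, x); the 1000 factor from the question's edit is left out so the result matches the original nested loops):
import numpy as np

x = 100
M = np.ones((x, x))
arrayA = np.random.uniform(0, 5, 2000)
arrayB = np.random.uniform(0, 5, 2000)

# Broadcasting: (x, x, 1) * (2000,) -> (x, x, 2000)
y = arrayA * np.exp(-1j * M[:, :, None] * arrayB)
p = np.trapz(y, arrayB)   # integrates over the last axis -> shape (x, x)
pp = np.abs(p**2)         # same values as the nested-loop version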

How to create a multi-diagonal square matrix in Theano?

Is there a better way to create a multi-diagonal square matrix in theano than the following,
A = theano.tensor.nlinalg.AllocDiag(offset=0)(x)
A += theano.tensor.nlinalg.AllocDiag(offset=1)(x[:-1])
A += theano.tensor.nlinalg.AllocDiag(offset=-1)(x[1:])
where x is the vector I want on the diagonals? Each time I call AllocDiag()(), a new Apply node is created, which is causing memory issues and inefficiencies.
I'm hoping there is a way similar to scipy where a list of vectors can be passed into the function with a corresponding list of offsets, see https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.sparse.diags.html.
Any assistance is much appreciated.
One way which doesn't require AllocDiag()() is to use theano.tensor.set_subtensor() with A[range(n),range(n)] to address the diagonal indices, where A is an n*n matrix. Something like the following:
A = tt.set_subtensor(A0[range(n),range(n)], x)
A = tt.set_subtensor(A[range(n-1),range(1,n)], x[:-1])
A = tt.set_subtensor(A[range(1,n),range(n-1)], x[1:])
where A0 is the initial matrix, for example a matrix of zeros.
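A minimal runnable sketch of that idea, assuming n is a concrete Python integer (the variable names are illustrative):
import numpy as np
import theano
import theano.tensor as tt

n = 5
x = tt.vector('x')
idx = np.arange(n)

A0 = tt.zeros((n, n))
A = tt.set_subtensor(A0[idx, idx], x)               # main diagonal
A = tt.set_subtensor(A[idx[:-1], idx[1:]], x[:-1])  # superdiagonal (offset +1)
A = tt.set_subtensor(A[idx[1:], idx[:-1]], x[1:])   # subdiagonal (offset -1)

f = theano.function([x], A)
print(f(np.arange(1, n + 1, dtype=theano.config.floatX)))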

Types with Math.NET Numerics

I am starting to use the Math.NET Numerics library and I can't find examples, so I'm running into a few issues.
As a simple example, I have two arrays of doubles. I want to divide one by the other and then calculate the moving average.
So, the code looks like this:
var VD1 = Vector<double>.Build.Dense(Data1.ToArray());
var VD2 = Vector<double>.Build.Dense(Data2.ToArray());
var R = VD1 / VD2;
var SMA = R.MovingAverage(15);
The problem is that the data type changes along the way. It starts as two Vectors, the division result is still a Vector, but the SMA result is not: it's an IEnumerable<double>.
So, now if I want to plug that result into more functions, for example multiply it by another array, I can't. I have to rebuild a Vector from the result.
Am I somehow doing this wrong? I can't imagine that the API would bounce back and forth between different but similar types.
You are doing it right. That is how MathNet is designed. E.g., var R = VD1 / VD2; calls
// Summary: Pointwise divides two Vectors.
public static Vector<T> operator /(Vector<T> dividend, Vector<T> divisor);
and returns Vector<T>.
var SMA = R.MovingAverage(15); calls
public static IEnumerable<double> MovingAverage(this IEnumerable<double> samples, int windowSize);
and returns IEnumerable<double>.
You can call MovingAverage with Vector<double> R, because Vector<double> implements IEnumerable<double>, so the call compiles without an explicit cast. But MovingAverage does not know its argument is a Vector<double>; it's designed to return IEnumerable<double>.
And that makes sense. As far as I remember from college, a moving average is about time series and has no explicit relationship to vectors.
But there are workarounds. For example, your own overload of MovingAverage:
using System.Linq;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;
using MathNet.Numerics.Statistics;

static class VectorHelper
{
    // Wraps the IEnumerable<double> MovingAverage and rebuilds a dense vector.
    public static Vector<double> MovingAverage(this Vector<double> samples, int windowSize)
    {
        return DenseVector.OfEnumerable(samples.AsEnumerable().MovingAverage(windowSize));
    }
}
Then var SMA = R.MovingAverage(15); is Vector<double>.
Anyway, building a new instance of Vector is the right and logical way.
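With that extension in scope, the whole pipeline from the question stays in Vector<double> (a small usage sketch; the final PointwiseMultiply is only there to show that further vector operations keep working):
var VD1 = Vector<double>.Build.Dense(Data1.ToArray());
var VD2 = Vector<double>.Build.Dense(Data2.ToArray());
var R = VD1 / VD2;              // Vector<double>
var SMA = R.MovingAverage(15);  // Vector<double>, via the extension above
var product = SMA.PointwiseMultiply(SMA);  // further vector math just works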

HLSL mul and D3DXMATRIX order mismatch

I'm trying to multiply the transformation matrix in the shader with vectors directly, without doing an unnecessary transposition. According to HLSL's mul documentation:
mul(x, y) Multiplies x and y using matrix math. The inner dimension x-columns and y-rows must be equal.
x [in] The x input value. If x is a vector, it is treated as a row vector.
y [in] The y input value. If y is a vector, it is treated as a column vector.
I have in the C++ code:
const D3DXMATRIX viewProjection = view * projection;
...
const D3DXMATRIX modelViewProjection = model * viewProjection;
where modelViewProjection is a row-major matrix that is copied to a constant buffer without being transposed. However, for this to work in HLSL I need to multiply the transformation matrix with the position vector as:
output.position = mul(transformation, position);
which is the opposite of what the mul documentation is saying.
Can someone explain where is the mismatch here?
The deprecated D3DXMath library and the more modern DirectXMath use row-major matrix order. The HLSL language defaults to column-major matrix order, as it's slightly more efficient for multiplies. Therefore, most code that sets constant buffer constants transposes the matrix data. In almost all cases, any 'cost' of transposing the matrix here is completely hidden by all the other latencies in the system.
You can of course tell HLSL to use row-major matrix order instead, but then the HLSL mul needs an extra instruction on every vertex, which is why it's usually worth doing the transpose on the CPU once per update instead.
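For example, here is a minimal sketch of the usual pattern with the question's variables (assuming the shader keeps HLSL's default column-major packing; the constant-buffer copy itself is elided):
// C++: build the combined matrix in row-major order, then transpose it once
// per update before copying it into the constant buffer.
const D3DXMATRIX modelViewProjection = model * viewProjection;
D3DXMATRIX mvpForShader;
D3DXMatrixTranspose(&mvpForShader, &modelViewProjection);
// ...copy mvpForShader into the constant buffer...
//
// HLSL: with the transposed data, the conventional row-vector form applies:
//   output.position = mul(position, transformation);
// Without the transpose, HLSL reads the data as the transposed matrix, which is
// why mul(transformation, position) happened to produce correct results.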
See MSDN

Define swizzling programmatically (as in GLSL)

How would one make swizzling a defined behaviour in a programming language (swizzling members of vectors and matrices, as in GLSL)? If I wanted to make a programming language that allows defining swizzling on some members, what would be a good way to do it? For example, I could do this:
struct
{
    swizzable
    {
        float x, float y, float z, float w
    }
}
But this is missing a lot. For example, it does not define what should be returned when I swizzle more or fewer elements, assign to a subset, or just list the elements backwards. In GLSL I can do v.xyz to create a Vec3 from a Vec4 called v, or I can assign to a subset of members in any order: v.zyx = ...
So this swizzable sub-struct is not a solution (or at least it is too limited). Another way would be to return an array of the swizzled members; an implicit cast (via a constructor) would then generate the wanted type:
struct Vec2
{
    swizzable { float x, float y }
    Vec2(float[2] elements)
    { x = elements[0]; y = elements[1]; }
}
struct Vec3
{
    swizzable { float x, float y, float z }
}
So if I accessed a Vec3's x and y via swizzling, I would get a float[2], and because Vec2 has a matching constructor, I could assign this array to a Vec2 (implicitly instantiating one).
This looks like a better solution but still: How could one do better?
Edit: Sorry I didn't specify the question: I want to implement a programming language that supports this kind of thing.
I'm not sure how to give a good, detailed answer, so here is just one idea.
If I understand right, swizzling is mainly a syntactic convenience. The page https://www.opengl.org/wiki/GLSL_Optimizations gives the following example of swizzling:
gl_FragColor = mycolor.xyzw * constantList.xxxy + constantList.yyyx;
This could simply be syntactic shorthand for something like:
gl_FragColor = Vector(mycolor.x, mycolor.y, mycolor.z, mycolor.w)
* Vector(constantList.x, constantList.x, constantList.x, constantList.y)
+ Vector(constantList.y, constantList.y, constantList.y, constantList.x);
So, one step may be to figure out how to parse the shorter syntax and interpret it as meaning something similar to the longer syntax.
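As a rough illustration of that desugaring step (a hypothetical sketch, not tied to any real compiler; Vector3/Vector4 are placeholder constructor names), a front end could rewrite a read swizzle into an ordinary constructor call over the individual components:
# Hypothetical rewrite performed by the front end:
#   "mycolor.xzy"  ->  "Vector3(mycolor.x, mycolor.z, mycolor.y)"
def desugar_swizzle(expr, swizzle):
    components = list(swizzle)               # e.g. "xzy" -> ['x', 'z', 'y']
    ctor = "Vector%d" % len(components)      # choose the result type by arity
    args = ", ".join("%s.%s" % (expr, c) for c in components)
    return "%s(%s)" % (ctor, args)

print(desugar_swizzle("mycolor", "xzy"))     # Vector3(mycolor.x, mycolor.z, mycolor.y)
An assignment swizzle such as v.zyx = e could be desugared the same way into component-wise assignments (v.z = e.x; v.y = e.y; v.x = e.z), so the compiler only needs to know the component names, not a special declaration syntax.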
I don't see why it would be necessary to declare the struct as anything more complicated than struct myStruct { float x, float y, float z, float w }. The language itself should be able to handle all the details of how to implement this swizzling.

Resources