How to reshape an array of shape (m,) to (m,1) - python-3.x

I have an array as below.
X1=np.array([[0,0],[0,0]])
X1[:,0].shape gives me (2,).
How do I convert X1[:,0] to a shape of (2,1)?

Thanks for asking. What you have is a two-by-two array, so X1[:,0] is a one-dimensional slice of shape (2,). Note that reshape returns a new array rather than modifying the original in place, so you need to assign the result:
new_x = X1[:, 0]
new_x = new_x.reshape(2, 1)
I hope this works.
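A minimal sketch of a few equivalent ways to get the (2, 1) column shape:

```python
import numpy as np

X1 = np.array([[0, 0], [0, 0]])

col1 = X1[:, 0].reshape(-1, 1)   # reshape; -1 infers the row count
col2 = X1[:, 0][:, np.newaxis]   # insert a new trailing axis
col3 = X1[:, 0:1]                # slicing with a range keeps the dimension

print(col1.shape)  # (2, 1)
```

Slicing with X1[:, 0:1] is often the most convenient, since it preserves the second dimension in one step instead of dropping and re-adding it.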

Related

Place two 2D matrices (numpy arrays) one after the other on a channel for CNN input

I am trying to create the data for my CNN network. The desired input shape for my CNN is 36 x 36 x 2; that is, I have two different 2D matrices, each of size 36 x 36.
Using these two matrices, I want to get an output of 36 x 36 x 2.
I have tried the code below.
arr1 = np.random.rand(36,36)
arr2 = np.random.rand(36,36)
res = np.stack((arr1, arr2), axis=2)
The output should look like the matrices in the image:
I want my input shaped as described in the picture: the first matrix should be arr1, the second matrix should be arr2, and the two matrices should be placed one after the other.
However, I am quite confused by the result I got from res. It shows the shape (36, 36, 2), but when I print res I am not able to see my first and second matrices properly; elements from my first matrix arr1 appear inside the other matrix.
I am not sure whether this process gives me the correct output or I am doing something wrong.
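A sketch that checks the layout directly: with axis=2 the two matrices live in the last dimension, so printing res walks over the first axis and shows 36 x 2 slices that pair corresponding elements of arr1 and arr2. That is why the output looks interleaved even though the stacking is correct.

```python
import numpy as np

arr1 = np.random.rand(36, 36)
arr2 = np.random.rand(36, 36)
res = np.stack((arr1, arr2), axis=2)

print(res.shape)                           # (36, 36, 2)
print(np.array_equal(res[:, :, 0], arr1))  # True: channel 0 is arr1
print(np.array_equal(res[:, :, 1], arr2))  # True: channel 1 is arr2
```

If you would rather see the two matrices printed whole, stacking with axis=0 gives shape (2, 36, 36); which one a CNN expects depends on whether it uses channels-last or channels-first input.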

How to create a 1D sparse tensors from given list of indices and values?

I have a list of indices and values. I want to create a sparse tensor of size 30000 from these indices and values, as follows.
indices = torch.LongTensor([1,3,4,6])
values = torch.FloatTensor([1,1,1,1])
So, I want to build a 30k dimensional sparse tensor in which the indices [1,3,4,6] are ones and the rest are zeros. How can I do that?
I want to store sequences of such sparse tensors efficiently.
In general the indices tensor needs to have shape (sparse_dim, nnz) where nnz is the number of non-zero entries and sparse_dim is the number of dimensions for your sparse tensor.
In your case nnz = 4 and sparse_dim = 1 since your desired tensor is 1D. All we need to do to make your indices work is to insert a unitary dimension at the front of indices to make it shape (1, 4).
t = torch.sparse_coo_tensor(indices.unsqueeze(0), values, (30000,))
or equivalently
t = torch.sparse.FloatTensor(indices.unsqueeze(0), values, (30000,))
Keep in mind that only a limited number of operations are supported on sparse tensors. To convert a tensor back to its dense (inefficient) representation, you can use the to_dense method:
t_dense = t.to_dense()
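A quick round-trip check (a sketch, assuming PyTorch is installed) confirms that the dense form has ones exactly at the given indices:

```python
import torch

indices = torch.LongTensor([1, 3, 4, 6])
values = torch.FloatTensor([1, 1, 1, 1])

# unsqueeze(0) turns shape (4,) into (1, 4), i.e. (sparse_dim, nnz)
t = torch.sparse_coo_tensor(indices.unsqueeze(0), values, (30000,))

dense = t.to_dense()
print(dense.shape)        # torch.Size([30000])
print(dense[indices])     # tensor([1., 1., 1., 1.])
print(dense.sum().item()) # 4.0
```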

ValueError regarding dimensions when declaring a PyTorch tensor

I'm currently trying to convert a list of values into a PyTorch tensor and am facing some difficulties.
The exact code that's causing the error is:
input_tensor = torch.cuda.FloatTensor(data)
Here, data is a list with two elements: The first element is another list of NumPy arrays and the second element is a list of tuples. The sizes of both lists differ, and I believe this is causing the following error:
*** ValueError: expected sequence of length x at dim 2 (got y)
Usually y is larger than x. I've tried playing around with an IPython terminal to see what's wrong, and it appears that trying to convert data of this format directly into PyTorch tensors doesn't work. Taking each individual element of the data list and converting those into tensors works, though.
Does anybody know why this doesn't work and perhaps also be able to provide some feedback on how to achieve my original goal? Thanks in advance.
Let's say that the first sublist of data contains n 1D arrays, each of size m, and the second sublist contains k tuples, each of size p.
When calling torch.FloatTensor(data), each sublist is converted to a 2D tensor, of shape (n, m) and of shape (k, p) respectively; then they are stacked together to form a 3D tensor. This is possible only if n=k and m=p -- think of a 3D tensor as a cuboid.
This is quite obvious I think, so I guess you have m = p and want to create a 2D tensor of shape (n+k, m) by simply concatenating the two sublists:
torch.FloatTensor(np.concatenate(data))
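To illustrate the fix with NumPy alone (a minimal sketch with hypothetical data, assuming the inner lengths match, i.e. m = p):

```python
import numpy as np

# hypothetical data: 3 arrays of length 4 plus 2 tuples of length 4
arrays = [np.array([0.0, 1.0, 2.0, 3.0]) for _ in range(3)]
tuples = [(4.0, 5.0, 6.0, 7.0), (8.0, 9.0, 10.0, 11.0)]
data = [arrays, tuples]

# np.concatenate merges the two sublists into one (3 + 2, 4) array,
# which a tensor constructor can then accept without a shape mismatch
flat = np.concatenate(data)
print(flat.shape)  # (5, 4)
```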

Efficient way to find unique numpy arrays from a large set having the same shape and dtype

I have a large set (~ 10000) of numpy arrays, (a1, a2, a3,...,a10000). Each array has the same shape (10, 12) and all are of dtype = int. In any row of any array, the 12 values are unique.
Now, there are many doubles, triples, etc. I suspect only about a tenth of the arrays are actually unique (ie: having the same values in the same positions).
Could I get some advice on how I might isolate the unique arrays? I suspect numpy.array_equal will be involved, but I'm new enough to the language that I'm struggling with how to implement it.
numpy.unique can be used to find the unique elements of an array. Supposing your data is contained in a list: first, stack the arrays to generate a 3D array, then call np.unique with axis=0 to find the unique 2D arrays:
import numpy as np
# dummy list of numpy array to simulate your data
list_of_arrays = [np.stack([np.random.permutation(12) for i in range(10)]) for i in range(10000)]
# stack arrays to form a 3D array
arr = np.stack(list_of_arrays)
# find unique arrays
unq = np.unique(arr, axis=0)
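A small sketch with deliberately duplicated arrays (hypothetical data) shows this working, and return_counts additionally reports how many copies of each unique array were present:

```python
import numpy as np

rng = np.random.default_rng(0)
# three distinct (10, 12) arrays, each row a permutation of 0..11
distinct = [np.stack([rng.permutation(12) for _ in range(10)]) for _ in range(3)]
# simulate a set with doubles and triples: 10 arrays drawn from the 3
list_of_arrays = [distinct[i % 3] for i in range(10)]

arr = np.stack(list_of_arrays)  # shape (10, 10, 12)
unq, counts = np.unique(arr, axis=0, return_counts=True)
print(unq.shape)  # (3, 10, 12)
```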

How to extract the 3 vectors from a given sparse matrix?

I can use the vectors data, row_ind and col_ind to create a sparse matrix with the function sparse.csr_matrix as follows:
sparse.csr_matrix((data,(row_ind,col_ind)),[shape=(M, N)])
However, assuming that I have a sparse Matrix A as my input; how do I extract the data, row_ind and col_ind vectors?
Thanks in advance
Just figured it out. The sparse matrix has to be created with sparse.dok_matrix(), and then the values can be extracted with the method values().
https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.dok_matrix.html
It does not work with sparse.csr_matrix() or sparse.csc_matrix(), since the method values() is not available for them.
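As an alternative sketch (not part of the original answer): any scipy sparse matrix, including CSR, can be converted to COO format, which exposes the three vectors directly through its row, col and data attributes:

```python
import numpy as np
from scipy import sparse

# hypothetical example matrix
data = np.array([1.0, 2.0, 3.0])
row_ind = np.array([0, 1, 2])
col_ind = np.array([0, 2, 1])
A = sparse.csr_matrix((data, (row_ind, col_ind)), shape=(3, 3))

coo = A.tocoo()
print(coo.row)   # [0 1 2]
print(coo.col)   # [0 2 1]
print(coo.data)  # [1. 2. 3.]
```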
