I would like to select some slices from a 3D volume by indexing with this instruction:
ini=p[0,0]
fin=p[0,:-1]
aux=img[:,:,ini.astype(int):fin.astype(int)]
Where "ini" and "fin" come from a numpy array containing the slices of interest.
The error that Python throws is:
TypeError: only integer scalar arrays can be converted to a scalar index
When I try the same but with the numbers directly, i.e.:
aux=img[:,:,20:30]
It works.
Could you help me please?
Thank you very much in advance :)
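A likely cause (a sketch, assuming `p` holds the slice bounds): `p[0,:-1]` is an array, not a scalar, so `fin.astype(int)` is still an array and cannot be used as a slice bound. Selecting a single element (e.g. `p[0,-1]`) and converting with `int()` avoids the `TypeError`. The arrays below are hypothetical stand-ins for the ones in the question:

```python
import numpy as np

# hypothetical stand-ins for img and p from the question
img = np.zeros((5, 5, 40))
p = np.array([[20, 30]])

ini = p[0, 0]    # scalar: first slice index
fin = p[0, -1]   # scalar: last slice index (note: p[0, :-1] would be an array, not a scalar)

# plain Python ints work as slice bounds
aux = img[:, :, int(ini):int(fin)]
print(aux.shape)  # (5, 5, 10)
```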
I'm converting some odd code over to be Numba compatible using parallel=True. It has a problematic array assignment that I can't quite figure out how to rewrite in a way Numba can handle. I've tried to decode what the error means, but I get quite lost. The only thing that's clear is that it doesn't like this line:
Averaging_price_3D[leg, :, expired_loc] = last_non_expired_values.T
The error is pretty long; it's included for reference here:
TypingError: No implementation of function Function(<built-in function setitem>) found for signature:
setitem(array(float64, 3d, C), Tuple(int64, slice<a:b>, array(int64, 1d, C)), array(float64, 2d, F))
There are 16 candidate implementations:
- Of which 14 did not match due to:
Overload of function 'setitem': File: <numerous>: Line N/A.
With argument(s): '(array(float64, 3d, C), Tuple(int64, slice<a:b>, array(int64, 1d, C)), array(float64, 2d, F))':
No match.
- Of which 2 did not match due to:
Overload in function 'SetItemBuffer.generic': File: numba\core\typing\arraydecl.py: Line 176.
With argument(s): '(array(float64, 3d, C), Tuple(int64, slice<a:b>, array(int64, 1d, C)), array(float64, 2d, F))':
Rejected as the implementation raised a specific error:
NumbaNotImplementedError: only one advanced index supported
And here is a short code segment to reproduce the error:
import numpy as np
import numba as nb

@nb.jit(nopython=True, parallel=True, nogil=True)
def main(Averaging_price_3D, expired_loc, last_non_expired_values):
    for leg in range(Averaging_price_3D.shape[0]):
        # line below causes the numba error:
        Averaging_price_3D[leg, :, expired_loc] = last_non_expired_values.T
    return Averaging_price_3D

if __name__ == "__main__":
    Averaging_price_3D = np.random.rand(2, 8192, 11) * 100   # shape (2, 8192, 11) 3D array, float64
    expired_loc = np.arange(4, 10).astype(np.int64)          # shape (6,) 1D array, int64
    last_non_expired_values = Averaging_price_3D[1, :, 0:expired_loc.shape[0]].copy()  # shape (8192, 6) 2D array, float64
    result = main(Averaging_price_3D, expired_loc, last_non_expired_values)
Now the best I can interpret this error is that "numba doesn't know how to set values in a 3D matrix using array indexing with values from a 2D array." But I searched online quite a bit and can't find another way to accomplish the same thing, without numba crashing on it.
In other cases like this I've resorted to flattening the arrays with a .reshape(-1) before indexing, but I'm having trouble figuring out how to do that in this specific case (it was easy with a 3D array indexed by another 3D array, since they both flatten in the same order)... Any help is appreciated!
Well, interestingly enough, I looked at the indices passed to the 3D array (since the error said "only one advanced index supported," I chose to examine my indexing):
3Darray[int, :, 1Darray]
Seeing that numba is quite picky, I tried rewriting it a little so that a 1D array wasn't used as an index (apparently this counts as an "advanced index," so an int index should be used instead). Reading up on numba errors and their solutions, people tend to add loops, so I tried that here: instead of passing a 1D array as an index, I looped over the elements of the 1D array:
import numpy as np
import numba as nb

@nb.jit(cache=True, nopython=True, parallel=True, nogil=True)
def main(Averaging_price_3D, expired_loc, last_non_expired_values):
    for leg in nb.prange(Averaging_price_3D.shape[0]):
        # change the indexing for numba to get rid of the 1D array index
        for i in nb.prange(expired_loc.shape[0]):
            # now we assign values 3Darray[int, :, int] = 1Darray
            Averaging_price_3D[leg, :, expired_loc[i]] = last_non_expired_values[:, i]
    return Averaging_price_3D

if __name__ == "__main__":
    Averaging_price_3D = np.random.rand(2, 8192, 11) * 100   # shape (2, 8192, 11) 3D array, float64
    expired_loc = np.arange(4, 10).astype(np.int64)          # shape (6,) 1D array, int64
    last_non_expired_values = Averaging_price_3D[1, :, 0:expired_loc.shape[0]].copy()  # shape (8192, 6) 2D array, float64
    result = main(Averaging_price_3D, expired_loc, last_non_expired_values)
Now it works with no problem at all. So it appears that if you want to index into a 3D array with numba, you should do it with either ints or : slices. It doesn't seem to like 1D array indexing, so replace it with a loop and it should run in parallel.
I have a list of indices and values. I want to create a sparse tensor of size 30000 from these indices and values as follows.
indices = torch.LongTensor([1,3,4,6])
values = torch.FloatTensor([1,1,1,1])
So, I want to build a 30k dimensional sparse tensor in which the indices [1,3,4,6] are ones and the rest are zeros. How can I do that?
I want to store sequences of such sparse tensors efficiently.
In general the indices tensor needs to have shape (sparse_dim, nnz) where nnz is the number of non-zero entries and sparse_dim is the number of dimensions for your sparse tensor.
In your case nnz = 4 and sparse_dim = 1 since your desired tensor is 1D. All we need to do to make your indices work is to insert a unitary dimension at the front of indices to make it shape (1, 4).
t = torch.sparse_coo_tensor(indices.unsqueeze(0), values, (30000,))
or equivalently
t = torch.sparse.FloatTensor(indices.unsqueeze(0), values, (30000,))
Keep in mind that only a limited number of operations are supported on sparse tensors. To convert a tensor back to its dense (inefficient) representation you can use the to_dense method:
t_dense = t.to_dense()
I have a large set (~ 10000) of numpy arrays, (a1, a2, a3,...,a10000). Each array has the same shape (10, 12) and all are of dtype = int. In any row of any array, the 12 values are unique.
Now, there are many doubles, triples, etc. I suspect only about a tenth of the arrays are actually unique (i.e., having the same values in the same positions).
Could I get some advice on how I might isolate the unique arrays? I suspect numpy.array_equal will be involved, but I'm new enough to the language that I'm struggling with how to implement it.
numpy.unique can be used to find the unique elements of an array. Supposing your data is contained in a list: first, stack it to form a 3D array; then use np.unique with axis=0 to find the unique 2D arrays:
import numpy as np
# dummy list of numpy array to simulate your data
list_of_arrays = [np.stack([np.random.permutation(12) for i in range(10)]) for i in range(10000)]
# stack arrays to form a 3D array
arr = np.stack(list_of_arrays)
# find unique arrays
unq = np.unique(arr, axis = 0)
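Since the question mentions doubles and triples, it may be worth noting that np.unique can also report how many times each unique array occurs, via return_counts=True. A small self-contained sketch with dummy data:

```python
import numpy as np

# dummy data: 4 arrays of shape (2, 3), three of which are identical
a = np.array([[1, 2, 3], [4, 5, 6]])
b = np.array([[7, 8, 9], [1, 0, 2]])
list_of_arrays = [a, b, a.copy(), a.copy()]

arr = np.stack(list_of_arrays)  # shape (4, 2, 3)
unq, counts = np.unique(arr, axis=0, return_counts=True)

print(len(unq))  # 2 unique arrays
print(counts)    # occurrence count for each unique array
```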
I can use the vectors data, row_ind and col_ind to create a sparse matrix with the function sparse.csr_matrix as follows:
sparse.csr_matrix((data,(row_ind,col_ind)),[shape=(M, N)])
However, assuming that I have a sparse Matrix A as my input; how do I extract the data, row_ind and col_ind vectors?
Thanks in advance
Just figured it out. The sparse matrix has to be created with sparse.dok_matrix(), and then the values can be extracted with the values() method:
https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.dok_matrix.html
It does not work with sparse.csr_matrix() or sparse.csc_matrix(), since the values() method is not available for them.
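An alternative worth noting (not from the original answer): converting any scipy sparse matrix to COO format exposes exactly the three vectors the question asks for, via the .row, .col and .data attributes, and scipy.sparse.find returns the same triple in one call. A minimal sketch:

```python
import numpy as np
from scipy import sparse

# build a small CSR matrix from data / row_ind / col_ind
data = np.array([1.0, 2.0, 3.0])
row_ind = np.array([0, 1, 2])
col_ind = np.array([0, 2, 1])
A = sparse.csr_matrix((data, (row_ind, col_ind)), shape=(3, 3))

# converting to COO recovers the three vectors directly
coo = A.tocoo()
print(coo.row)   # row indices of the nonzero entries
print(coo.col)   # column indices of the nonzero entries
print(coo.data)  # the values themselves

# sparse.find(A) yields the same (row, col, value) triple in one call
r, c, v = sparse.find(A)
```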