Numpy Selecting Elements given row and column index arrays - python-3.x

I have row indices as a 1d numpy array and a list of numpy arrays (the list has the same length as the row indices array). I want to extract the values corresponding to these indices. How can I do it?
This is an example of what I want as output given the input
A = np.array([[2, 1, 1, 0, 0],
[3, 0, 2, 1, 1],
[0, 0, 2, 1, 0],
[0, 3, 3, 3, 0],
[0, 1, 2, 1, 0],
[0, 1, 3, 1, 0],
[2, 1, 3, 0, 1],
[2, 0, 2, 0, 2],
[3, 0, 3, 1, 2]])
row_ind = np.array([0,2,4])
col_ind = [np.array([0, 1, 2]), np.array([2, 3]), np.array([1, 2, 3])]
Now, I want my output as a list of numpy arrays or list of lists as
[np.array([2, 1, 1]), np.array([2, 1]), np.array([1, 2, 1])]
My biggest concern is the efficiency. My array A is of dimension 20K x 10K.

As @hpaulj commented, you likely won't be able to avoid looping - e.g.
import numpy as np
A = np.array([[2, 1, 1, 0, 0],
[3, 0, 2, 1, 1],
[0, 0, 2, 1, 0],
[0, 3, 3, 3, 0],
[0, 1, 2, 1, 0],
[0, 1, 3, 1, 0],
[2, 1, 3, 0, 1],
[2, 0, 2, 0, 2],
[3, 0, 3, 1, 2]])
row_ind = np.array([0,2,4])
col_ind = [np.array([0, 1, 2]), np.array([2, 3]), np.array([1, 2, 3])]
# make sure the following code is safe...
assert row_ind.shape[0] == len(col_ind)
# 1) select row (A[r, :]), then select elements (cols) [col_ind[i]]:
output = [A[r, :][col_ind[i]] for i, r in enumerate(row_ind)]
# output
# [array([2, 1, 1]), array([2, 1]), array([1, 2, 1])]
Another way to do this could be to use np.ix_ (still requires looping). Use with caution though for very large arrays; np.ix_ uses advanced indexing - in contrast to basic slicing, it creates a copy of the data instead of a view - see the docs.
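For reference, a minimal sketch of the np.ix_ variant (using the same A, row_ind and col_ind as above); each A[np.ix_(...)] call returns a 1 x k copy, so it is flattened with ravel:
# np.ix_ builds an open mesh from the row and column index arrays
output_ix = [A[np.ix_([r], col_ind[i])].ravel() for i, r in enumerate(row_ind)]
# output_ix
# [array([2, 1, 1]), array([2, 1]), array([1, 2, 1])]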

Related

repeating specific rows in array n times

I have a huge numpy array with 15413 rows and 70 columns. The first column represents the weight, so if the first element in a row is n, that row should be repeated n times.
I tried with numpy.repeat but I don’t think it’s giving me the correct answer because np.sum(chain[:,0]) is not equal to len(ACT_chain)
ACT_chain = []
for i in range(len(chain[:, 0])):
    chain_row = chain[i]
    ACT_chain.append(chain_row)
    if int(chain[:, 0][i]) > 1:
        chain_row = np.repeat(chain[i], chain[:, 0][i], axis=0)
        ACT_chain.append(chain_row)
For example, running this code with this sample array
chain = np.array([[1, 5, 3], [2, 2, 1], [3, 0, 1]])
gives
[array([1, 5, 3]), array([2, 2, 1]), array([[2, 2, 1],
[2, 2, 1]]), array([3, 0, 1]), array([[3, 0, 1],
[3, 0, 1],
[3, 0, 1]])]
but the output I expect is
array([[1, 5, 3], [2, 2, 1], [2, 2, 1], [3, 0, 1], [3, 0, 1], [3, 0, 1]])
You can use repeat here, without the iteration.
np.repeat(chain, chain[:, 0], axis=0)
array([[1, 5, 3],
[2, 2, 1],
[2, 2, 1],
[3, 0, 1],
[3, 0, 1],
[3, 0, 1]])
I solved the TypeError by converting the whole array to int:
chain = chain.astype(int)
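Putting it together, a small self-contained sketch using the sample chain from the question (the cast to int is only needed when the weight column is stored as floats, which is what triggers the TypeError):
import numpy as np

chain = np.array([[1, 5, 3], [2, 2, 1], [3, 0, 1]])
# the repeats argument of np.repeat must be integer-valued
weights = chain[:, 0].astype(int)
ACT_chain = np.repeat(chain, weights, axis=0)
# ACT_chain
# array([[1, 5, 3],
#        [2, 2, 1],
#        [2, 2, 1],
#        [3, 0, 1],
#        [3, 0, 1],
#        [3, 0, 1]])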

Upsampling xarray DataArray similar to np.repeat()?

I'm hoping to upsample values in a large 2-dimensional DataArray (below). Is there an xarray tool similar to np.repeat() which can be applied in each dimension (x and y)? In the example below, I would like to duplicate each array entry in both x and y.
import xarray as xr
import numpy as np
x = np.arange(3)
y = np.arange(3)
x_mesh,y_mesh = np.meshgrid(x, y)
arr = x_mesh*y_mesh
df = xr.DataArray(arr, coords={'x':x, 'y':y}, dims=['x','y'])
Input:
array([[0, 0, 0],
[0, 1, 2],
[0, 2, 4]])
Desired output:
array([[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
[0, 0, 1, 1, 2, 2],
[0, 0, 1, 1, 2, 2],
[0, 0, 2, 2, 4, 4],
[0, 0, 2, 2, 4, 4]])
I am aware of the xesmf regridding tools, but they seem more complicated than necessary for the application I have in mind.
There is a simple solution for this with np.kron.
>>> arr
array([[0, 0, 0],
[0, 1, 2],
[0, 2, 4]])
>>> np.int_(np.kron(arr, np.ones((2,2))))
array([[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
[0, 0, 1, 1, 2, 2],
[0, 0, 1, 1, 2, 2],
[0, 0, 2, 2, 4, 4],
[0, 0, 2, 2, 4, 4]])
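As an alternative sketch (not part of the original answer), the same upsampling can be done by applying np.repeat once per axis, which keeps the integer dtype without the np.int_ cast; the result can then be wrapped back into an xr.DataArray with suitably repeated coordinates if needed:
# repeat each row twice, then each column twice
upsampled = np.repeat(np.repeat(arr, 2, axis=0), 2, axis=1)
# array([[0, 0, 0, 0, 0, 0],
#        [0, 0, 0, 0, 0, 0],
#        [0, 0, 1, 1, 2, 2],
#        [0, 0, 1, 1, 2, 2],
#        [0, 0, 2, 2, 4, 4],
#        [0, 0, 2, 2, 4, 4]])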

How to initialize columns in hybrid sparse tensor

How do I initialize a hybrid tensor with torch.sparse_coo_tensor in PyTorch (one dimension is sparse and the other is not) that has the following dense representation?
array([[1, 0, 5, 0],
[2, 0, 6, 0],
[3, 0, 7, 0],
[4, 0, 8, 0]])
What should I put into the indices argument?
How to initialize
Something like this:
import torch
indices = torch.tensor([[0, 0, 1, 1, 2, 2, 3, 3], [0, 2, 0, 2, 0, 2, 0, 2]])
tensor = torch.sparse_coo_tensor(
    indices, torch.tensor([1, 5, 2, 6, 3, 7, 4, 8]), size=(4, 4)
)
Given above:
indices - the first row gives the row coordinate and the second row the column coordinate of each non-zero value. Read column-wise, they form the pairs (0, 0), (0, 2), (1, 0), (1, 2)... and so on
values - the values placed at those pairs, so 1 goes at coordinate (0, 0), 5 at (0, 2), and so it goes.
size - the total size of the matrix; it is optional and could be inferred from the input in this case
8 pairs, 8 values; there are other ways to specify it as well, but the idea holds.
And a quick check:
print(tensor)
print(tensor.to_dense())
Gives us:
tensor(indices=tensor([[0, 0, 1, 1, 2, 2, 3, 3],
                       [0, 2, 0, 2, 0, 2, 0, 2]]),
       values=tensor([1, 5, 2, 6, 3, 7, 4, 8]),
       size=(4, 4), nnz=8, layout=torch.sparse_coo)
tensor([[1, 0, 5, 0],
        [2, 0, 6, 0],
        [3, 0, 7, 0],
        [4, 0, 8, 0]])
Whether to initialize
If your actual data is only 50% sparse, you probably shouldn't use a COO tensor at all.
It may save some memory, but operations will be much slower, so keep that in mind.
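Since the question asks about a hybrid tensor (one sparse and one dense dimension), here is a minimal sketch of that layout as well. It assumes dim 0 is the sparse one, because in the COO format the sparse dimensions must come first, and each stored value is then an entire dense row; note that for this particular matrix every row contains non-zeros, so nothing is actually saved:
import torch

# indices has shape (sparse_dim, nnz) = (1, 4): one entry per stored row
indices = torch.tensor([[0, 1, 2, 3]])
# values has shape (nnz, 4): the full dense row belonging to each index
values = torch.tensor([[1, 0, 5, 0],
                       [2, 0, 6, 0],
                       [3, 0, 7, 0],
                       [4, 0, 8, 0]])
hybrid = torch.sparse_coo_tensor(indices, values, size=(4, 4))
print(hybrid.sparse_dim(), hybrid.dense_dim())  # 1 1
print(hybrid.to_dense())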

How to reshape an array with numpy like this:

I have this:
array([[0, 0, 1, 1, 2, 2, 3, 3],
[0, 0, 1, 1, 2, 2, 3, 3]])
And I would like to reshape my array like this:
array([[0, 0, 1, 1],
[0, 0, 1, 1],
[2, 2, 3, 3],
[2, 2, 3, 3]])
How do I do it using python numpy?
You can just split and concatenate:
a = np.array([[0, 0, 1, 1, 2, 2, 3, 3],
[0, 0, 1, 1, 2, 2, 3, 3]])
cols = a.shape[1] // 2
np.concatenate((a[:,:cols], a[:,cols:]))
#[[0 0 1 1]
# [0 0 1 1]
# [2 2 3 3]
# [2 2 3 3]]
You can simply swap rows after reshaping it.
a= np.array([[0, 0, 1, 1, 2, 2, 3, 3],
[0, 0, 1, 1, 2, 2, 3, 3]]).reshape(4,4)
a[[1,2]] = a[[2,1]]
Output:
array([[0, 0, 1, 1],
[0, 0, 1, 1],
[2, 2, 3, 3],
[2, 2, 3, 3]])
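A more general pattern (a sketch, not taken from either answer above) is to split the columns into blocks with reshape and move the block axis forward with transpose, which also works for other block counts and sizes:
import numpy as np

a = np.array([[0, 0, 1, 1, 2, 2, 3, 3],
              [0, 0, 1, 1, 2, 2, 3, 3]])
# view the 2x8 array as 2 rows x 2 column-blocks x 4 columns,
# bring the block axis to the front, then flatten back to 4x4
b = a.reshape(2, 2, 4).transpose(1, 0, 2).reshape(4, 4)
# array([[0, 0, 1, 1],
#        [0, 0, 1, 1],
#        [2, 2, 3, 3],
#        [2, 2, 3, 3]])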

How to get padding mask from input ids?

Consider a batch of pre-processed sentences (tokenized, numericalized and padded) shown below:
batch = torch.tensor([
[1, 2, 0, 0],
[4, 0, 0, 0],
[3, 5, 6, 7]
])
where 0 stands for the [PAD] token.
What would be an efficient way to generate a padding mask tensor of the same shape as the batch, with zeros at the [PAD] positions and ones at the other positions (the sentence tokens)?
In the example above it would be something like:
padding_masking=
tensor([
[1, 1, 0, 0],
[1, 0, 0, 0],
[1, 1, 1, 1]
])
The following is tested on pytorch 1.3.1.
pad_token_id = 0
batch = torch.tensor([
[1, 2, 0, 0],
[4, 0, 0, 0],
[3, 5, 6, 7]
])
pad_mask = ~(batch == pad_token_id)
print(pad_mask)
Output
tensor([[1, 1, 0, 0],
[1, 0, 0, 0],
[1, 1, 1, 1]], dtype=torch.uint8)
You can get your desired result with
padding_masking = batch > 0
If you want ints instead of booleans, use
padding_masking.type(torch.int)
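For completeness, a small sketch combining both answers on a recent PyTorch version, where comparisons return a bool tensor that can be cast to int if a 0/1 mask is required:
import torch

pad_token_id = 0
batch = torch.tensor([[1, 2, 0, 0],
                      [4, 0, 0, 0],
                      [3, 5, 6, 7]])
# the comparison yields a bool mask; cast to int for a 0/1 tensor
padding_mask = (batch != pad_token_id).int()
# tensor([[1, 1, 0, 0],
#         [1, 0, 0, 0],
#         [1, 1, 1, 1]], dtype=torch.int32)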
