Here is my code to demonstrate:
import torch
import concurrent.futures

if __name__ == '__main__':
    x = torch.tensor([[1, 2, 3], [4, 5, 6]])
    args = [(i, x) for i in range(3)]
    with concurrent.futures.ProcessPoolExecutor(3) as executor:
        executor.map(print, args)

    x = x.to(torch.device('cuda'))
    args = [(i, x) for i in range(3)]
    print(args)
    with concurrent.futures.ProcessPoolExecutor(3) as executor:
        executor.map(print, args)
and the result:
(0, tensor([[1, 2, 3],
        [4, 5, 6]]))
(1, tensor([[1, 2, 3],
        [4, 5, 6]]))
(2, tensor([[1, 2, 3],
        [4, 5, 6]]))
[(0, tensor([[1, 2, 3],
        [4, 5, 6]], device='cuda:0')), (1, tensor([[1, 2, 3],
        [4, 5, 6]], device='cuda:0')), (2, tensor([[1, 2, 3],
        [4, 5, 6]], device='cuda:0'))]
(0, tensor([[0, 0, 0],
        [0, 0, 0]], device='cuda:0'))
(1, tensor([[0, 0, 0],
        [0, 0, 0]], device='cuda:0'))
(2, tensor([[0, 0, 0],
        [0, 0, 0]], device='cuda:0'))
As we can see, the args were passed as expected on CPU but failed on GPU: all the inputs arrived in the child processes zeroed out.
What is the right way to do this? Or do I have to move the tensors to the GPU inside the child processes?
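One likely fix, sketched below as an assumption rather than a confirmed answer from this thread: the default fork start method does not play well with CUDA (a forked child cannot safely use the parent's CUDA context), so give the executor a 'spawn' context, e.g. via torch.multiprocessing, and keep everything under the __main__ guard. Alternatively, pass CPU tensors and call .cuda() inside the workers.

import torch
import torch.multiprocessing as mp   # drop-in wrapper around multiprocessing
import concurrent.futures

def show(item):
    i, t = item
    # With 'spawn', each worker initializes its own CUDA context,
    # so the tensor should arrive with its values intact.
    print(i, t)

if __name__ == '__main__':
    x = torch.tensor([[1, 2, 3], [4, 5, 6]]).to(torch.device('cuda'))
    args = [(i, x) for i in range(3)]
    ctx = mp.get_context('spawn')     # avoid fork + CUDA
    with concurrent.futures.ProcessPoolExecutor(3, mp_context=ctx) as executor:
        list(executor.map(show, args))   # consume the iterator so errors surface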
I have a huge numpy array with 15413 rows and 70 columns. The first column represents the weight (so if the first element in a row is n, that row should be repeated n times).
I tried numpy.repeat, but I don't think it's giving me the correct answer, because np.sum(chain[:,0]) is not equal to len(ACT_chain):
ACT_chain = []
for i in range(len(chain[:, 0])):
    chain_row = chain[i]
    ACT_chain.append(chain_row)
    if int(chain[:, 0][i]) > 1:
        chain_row = np.repeat(chain[i], chain[:, 0][i], axis=0)
        ACT_chain.append(chain_row)
For example, running this code with this sample array
chain = np.array([[1, 5, 3], [2, 2, 1], [3, 0, 1]])
gives
[array([1, 5, 3]), array([2, 2, 1]), array([[2, 2, 1],
[2, 2, 1]]), array([3, 0, 1]), array([[3, 0, 1],
[3, 0, 1],
[3, 0, 1]])]
but the output I expect is
array([[1, 5, 3], [2, 2, 1], [2, 2, 1], [3, 0, 1], [3, 0, 1], [3, 0, 1]])
You can use repeat here, without the iteration.
np.repeat(chain, chain[:, 0], axis=0)
array([[1, 5, 3],
[2, 2, 1],
[2, 2, 1],
[3, 0, 1],
[3, 0, 1],
[3, 0, 1]])
I solved the TypeError by converting the whole array to int:
chain = chain.astype(int)
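If the weights are stored as floats, the cast can presumably also be limited to the repeat counts themselves (a variation I'm adding here, not part of the original answer), since np.repeat only requires the repeats argument to be integral:

# cast only the repeat counts; the data itself can stay float
np.repeat(chain, chain[:, 0].astype(int), axis=0)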
I am kind of new to numpy and torch, and I am struggling to understand what seem to me the most basic operations.
For instance, given this tensor:
A = tensor([[[6, 3, 8, 3],
[1, 0, 9, 9]],
[[4, 9, 4, 1],
[8, 1, 3, 5]],
[[9, 7, 5, 6],
[3, 7, 8, 1]]])
And this other tensor:
B = tensor([1, 0, 1])
I would like to use B as indexes for A so that I get a 3 by 4 tensor that looks like this:
[[1, 0, 9, 9],
[4, 9, 4, 1],
[3, 7, 8, 1]]
Thanks!
Ok, my mistake was to assume this:
A[:, B]
is equal to this:
A[[0, 1, 2], B]
Or more generally the solution I wanted is:
A[range(B.shape[0]), B]
Alternatively, you can use torch.gather:
>>> indexer = B.view(-1, 1, 1).expand(-1, -1, 4)
>>> indexer
tensor([[[1, 1, 1, 1]],
        [[0, 0, 0, 0]],
        [[1, 1, 1, 1]]])
>>> A.gather(1, indexer).view(len(B), -1)
tensor([[1, 0, 9, 9],
[4, 9, 4, 1],
[3, 7, 8, 1]])
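For completeness, here is a self-contained sketch (my own, using the tensors from the question) checking that the advanced-indexing form and the gather form agree:

import torch

A = torch.tensor([[[6, 3, 8, 3],
                   [1, 0, 9, 9]],
                  [[4, 9, 4, 1],
                   [8, 1, 3, 5]],
                  [[9, 7, 5, 6],
                   [3, 7, 8, 1]]])
B = torch.tensor([1, 0, 1])

out1 = A[torch.arange(len(B)), B]        # advanced indexing: one row index per batch entry
out2 = A.gather(1, B.view(-1, 1, 1).expand(-1, 1, A.size(-1))).squeeze(1)  # gather along dim 1
print(torch.equal(out1, out2))           # True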
I have this:
array([[0, 0, 1, 1, 2, 2, 3, 3],
[0, 0, 1, 1, 2, 2, 3, 3]])
And I would like to reshape my array like this:
array([[0, 0, 1, 1],
[0, 0, 1, 1],
[2, 2, 3, 3],
[2, 2, 3, 3]])
How do I do it using Python and NumPy?
You can just split and concatenate:
a = np.array([[0, 0, 1, 1, 2, 2, 3, 3],
[0, 0, 1, 1, 2, 2, 3, 3]])
cols = a.shape[1] // 2
np.concatenate((a[:,:cols], a[:,cols:]))
#[[0 0 1 1]
# [0 0 1 1]
# [2 2 3 3]
# [2 2 3 3]]
You can simply swap rows after reshaping it:
a = np.array([[0, 0, 1, 1, 2, 2, 3, 3],
              [0, 0, 1, 1, 2, 2, 3, 3]]).reshape(4, 4)
a[[1, 2]] = a[[2, 1]]   # swap the two middle rows
Output:
array([[0, 0, 1, 1],
[0, 0, 1, 1],
[2, 2, 3, 3],
[2, 2, 3, 3]])
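If the goal is the general "stack the column blocks vertically" pattern rather than this exact 2x8 case (my reading, not stated in either answer), a reshape/swapaxes round trip also works:

a = np.array([[0, 0, 1, 1, 2, 2, 3, 3],
              [0, 0, 1, 1, 2, 2, 3, 3]])
# split each row into 2 blocks of 4 columns, then move the block axis in front of the row axis
a.reshape(2, 2, 4).swapaxes(0, 1).reshape(4, 4)
# array([[0, 0, 1, 1],
#        [0, 0, 1, 1],
#        [2, 2, 3, 3],
#        [2, 2, 3, 3]])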
I want to ask how to remove columns from a NumPy array in a batch, using a list of indices.
The indices to remove are different for each entry of the batch.
I know this can be solved with a for loop, but that is too slow ...
Can anyone give me an idea of how to speed it up?
array (batch size = 3):
[[0, 1, 2, 3, 4, 5, 6], [0, 1, 2, 3, 4, 5, 6], [0, 1, 2, 3, 4, 5, 6]]
indices to remove, per row (batch size = 3):
[[2, 3, 4], [1, 2, 6], [0, 1, 5]]
output:
[[0, 1, 5, 6], [0, 3, 4, 5], [2, 3, 4, 6]]
Assuming the array is 2d and the indexing removes an equal number of elements per row, we can remove items with a boolean mask:
In [289]: arr = np.array([[0, 1, 2, 3, 4, 5, 6], [0, 1, 2, 3, 4, 5, 6], [0, 1, 2, 3, 4, 5, 6]]
...: )
In [290]: idx = np.array([[2, 3, 4], [1, 2, 6], [0, 1, 5]])
In [291]: mask = np.ones_like(arr, dtype=bool)
In [292]: mask[np.arange(3)[:,None], idx] = False
In [293]: arr[mask]
Out[293]: array([0, 1, 5, 6, 0, 3, 4, 5, 2, 3, 4, 6])
In [294]: arr[mask].reshape(3,-1)
Out[294]:
array([[0, 1, 5, 6],
[0, 3, 4, 5],
[2, 3, 4, 6]])
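Wrapped into a small helper (the function name and shape handling are my own addition, not from the answer above), the same masking idea works for any batch size:

import numpy as np

def remove_per_row(arr, idx):
    # For each row of arr, drop the columns listed in the matching row of idx.
    # Assumes every row of idx removes the same number of distinct columns.
    mask = np.ones_like(arr, dtype=bool)
    mask[np.arange(arr.shape[0])[:, None], idx] = False
    return arr[mask].reshape(arr.shape[0], -1)

arr = np.array([[0, 1, 2, 3, 4, 5, 6]] * 3)
idx = np.array([[2, 3, 4], [1, 2, 6], [0, 1, 5]])
print(remove_per_row(arr, idx))
# [[0 1 5 6]
#  [0 3 4 5]
#  [2 3 4 6]]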
I know I can do np.subtract.outer(x, x). If x has shape (n,), then I end up with an array with shape (n, n). However, I have an x with shape (n, 3). I want to output something with shape (n, n, 3). How do I do this? Maybe np.einsum?
You can use broadcasting: extend the dimensions with None/np.newaxis to form a 3D version of x, then subtract the original 2D array from it, like so -
x[:, np.newaxis, :] - x
Sample run -
In [6]: x
Out[6]:
array([[6, 5, 3],
[4, 3, 5],
[0, 6, 7],
[8, 4, 1]])
In [7]: x[:,None,:] - x
Out[7]:
array([[[ 0, 0, 0],
[ 2, 2, -2],
[ 6, -1, -4],
[-2, 1, 2]],
[[-2, -2, 2],
[ 0, 0, 0],
[ 4, -3, -2],
[-4, -1, 4]],
[[-6, 1, 4],
[-4, 3, 2],
[ 0, 0, 0],
[-8, 2, 6]],
[[ 2, -1, -2],
[ 4, 1, -4],
[ 8, -2, -6],
[ 0, 0, 0]]])
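As a quick sanity check (my addition), the broadcasted result has shape (n, n, 3) and satisfies diff[i, j] == x[i] - x[j], matching the convention of np.subtract.outer in the 1-D case:

import numpy as np

x = np.array([[6, 5, 3],
              [4, 3, 5],
              [0, 6, 7],
              [8, 4, 1]])
diff = x[:, None, :] - x                        # shape (4, 4, 3)
print(diff.shape)                               # (4, 4, 3)
print(np.array_equal(diff[0, 1], x[0] - x[1]))  # True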