Add new elements to a PyTorch tensor

I have a tensor that looks like this:
I would like to add a 0 to each inner tensor, so we have something like [0.1111, 0.6667, 0].
Any suggestion would be appreciated.

You can do:
z = torch.zeros((10, 1))
torch.cat((your_tensor, z), 1)
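For instance, a minimal runnable sketch (assuming your_tensor has shape (10, 2), which the zeros shape suggests; the values below are stand-ins):

import torch

your_tensor = torch.rand(10, 2)        # stand-in for the tensor in the question
z = torch.zeros((10, 1))
out = torch.cat((your_tensor, z), 1)   # concatenate along dim 1
print(out.shape)                       # torch.Size([10, 3]); each row now ends in 0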

Related

Modify the rows of a tensor at specific indices given by a list (Pytorch)

I have a tensor X with shape (N,M) and a list of indices idx_list.
I would like to modify X but only at the rows given by idx_list.
So I'd like to do something like this:
X[idx_list, :] = Y
where Y is a tensor with shape (len(idx_list), M).
The solution is mentioned in your question. The usual slicing notation used with NumPy arrays works well with PyTorch tensors too.
X[idx_list, :] = Y
Your approach, posted as an answer, would work too:
X[(torch.tensor(idx_list),)] = Y
However, you do not need to complicate things by converting idx_list into a tensor; slicing works fine with standard Python lists.
This works pretty well:
X[(torch.tensor(idx_list),)] = Y
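A minimal runnable sketch of the plain-list version (the shapes here are arbitrary, for illustration only):

import torch

X = torch.zeros(5, 3)             # N=5, M=3
idx_list = [0, 2, 4]
Y = torch.ones(len(idx_list), 3)
X[idx_list, :] = Y                # only rows 0, 2 and 4 change
print(X)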

How to assign a subset of a tensor but still keep the original tensor?

I want to zero the main diagonal of the tensor exp_scaled_pcs without changing the original tensor; the result of the assignment should go into a new tensor zero_sim_exp_scaled_pcs.
By doing:
zero_sim_exp_scaled_pcs = torch.diagonal(exp_scaled_pcs, 0).zero_()
I noticed that this changes my original tensor as well. How can I keep it intact?
You could use Tensor.index_put, which is out-of-place (your snippet mutates the original because torch.diagonal returns a view and zero_() modifies it in place):
import torch

x = torch.randn(10, 10)
d = min(x.shape)                               # length of the main diagonal
diag_indices = [torch.arange(d)] * x.dim()
y = x.index_put(diag_indices, torch.zeros(d))  # x is left untouched
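Alternatively (my own suggestion, not from the original answer), you can clone the tensor first and zero the diagonal of the copy in place with Tensor.fill_diagonal_:

import torch

exp_scaled_pcs = torch.randn(10, 10)
zero_sim_exp_scaled_pcs = exp_scaled_pcs.clone()
zero_sim_exp_scaled_pcs.fill_diagonal_(0)  # mutates the clone only; the original is unchanged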

How to reshape an array of shape (m,) to (m,1)

I have an array as below.
X1=np.array([[0,0],[0,0]])
X1[:,0].shape gives me (2,).
How do I convert X1[:,0] to a shape of (2,1)?
Thanks for asking. X1[:, 0] is a one-dimensional view of the first column, and reshaping it does not cause an error; reshape simply returns a new two-dimensional array. You can do it like this:
new_x = X1[:, 0].reshape(2, 1)
Note that reshape returns the result rather than modifying the array in place, so you need to keep the returned value. I hope this works.
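For reference, a minimal runnable sketch showing the reshape and the equivalent np.newaxis idiom:

import numpy as np

X1 = np.array([[0, 0], [0, 0]])
col = X1[:, 0].reshape(-1, 1)    # -1 lets numpy infer the row count
# equivalent: col = X1[:, 0][:, np.newaxis]
print(col.shape)                 # (2, 1)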

Reshape and concatenate 4D arrays to 3D

I'm working with 4D arrays in numpy. I would like to append the data along the fourth dimension, as follows:
1)
inputs:
data_1_1 with shape (2, 4, 130, 10)
data_1_2 with shape (2, 4, 130, 10)
expected output:
data_1 with shape (2, 4, 130, 20)
2) reduce a 4D array to a 3D array
input:
data_2_1 with shape (3, 5, 130, 20)
expected output:
data_2_1 with shape (15, 130, 20)
Sorry for my newbie question.
Thank you for your help.
What I have tried:
1)
data_1 = np.concatenate((data_1_1[..., np.newaxis], data_1_2[..., np.newaxis]), axis=2)
I'm wondering whether this does the right job, since I would like to concatenate on the last dimension. In which order is it done? Is it correct?
2) For this case I don't have any idea.
For the first part, you need to specify the axis you want to work on:
>>> x = np.arange(2*4*130*10).reshape(2, 4, 130, 10)
>>> np.concatenate((x, x), axis=3).shape
(2, 4, 130, 20)
And for the second part, it sounds like you want a reshape:
>>> y = np.arange(3*5*130*20).reshape(3, 5, 130, 20)
>>> y.reshape(15, 130, 20).shape
(15, 130, 20)
The numpy terms you need to acquaint yourself with are axis and shape; a good read on these will help you a lot.
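As a side note (my own addition, not part of the original answer), axis=-1 always refers to the last dimension regardless of the array's rank, so you don't have to hard-code the axis index:

import numpy as np

x = np.arange(2*4*130*10).reshape(2, 4, 130, 10)
print(np.concatenate((x, x), axis=-1).shape)   # (2, 4, 130, 20), same as axis=3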

How do I remove the first element of a tensor in keras as a layer?

For example:
inp = Input(shape=(100, 1), name='input')  # Input takes shape=; Conv1D needs a 3D input
layer = Conv1D(97, kernel_size=10, strides=10)(inp)
layer = <something that removes the first element>(layer)
layer = Flatten()(layer)
model = Model(inp, layer)
This model would have 97*9 outputs: the Conv1D layer has 97 filters, each producing 10 steps, and the first of those steps would be removed by the layer I am looking for. Since the conv layer's output has shape (batch_size, 10, 97), I am looking for a way to remove the first element along axis=1.
How would I go about doing this? I tried using the Lambda layer but I can't quite figure out how to make this work.
Edit:
I'm asking this question because what I want to do is transform a layer of shape (batch_size, x, y) into the shape (batch_size, 0.5x, 2y), in such a way that if x is, for example, 10, the elements 0,2,4,6,8 and 1,3,5,7,9 are stacked on top of each other. Right now I generate 0,2,4,6,8 with MaxPooling1D(pool_size=1, strides=2). To generate 1,3,5,7,9, I'd have to remove one element from the start, in the manner explained above, before applying the max-pooling layer. Thank you so much for your time!
Because @NadavB wants to know the answer:
inp = Input(shape=(100, 1), name='input')
layer = Conv1D(97, kernel_size=10, strides=10)(inp)
layer = Lambda(lambda x: x[:, 1:, :])(layer)  # drop the first step along axis 1
layer = Flatten()(layer)
model = Model(inp, layer)
I hope this helps you out :)
If your only goal is to get 9 values instead of 10 out of the convolution, why don't you try this:
layer = Conv1D(97, kernel_size=11, strides=11)(layer)
Because if you remove the first element, it means you don't care about the first 10 values of your sequence, so you might as well feed sequences of 90 values instead of 100...
If you do care about those first 10 values and just want a smaller output, then use a bigger kernel :-)
Does that help? Otherwise we can figure out a Lambda layer that does the trick.
If it's a time sequence, I do it this way:
def firstElementRemoved(inputTensor):
    return inputTensor[:, 1:]  # drop the first timestep, keep the batch dimension

LSTMMinusFirstElement = Lambda(firstElementRemoved)(LSTM)
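Following up on the edit in the question, the even/odd stacking itself can be done in a single Lambda layer. A minimal sketch, assuming TensorFlow-backed Keras, an even number of steps, and a helper name of my own choosing:

import tensorflow as tf
from tensorflow.keras.layers import Lambda

# (batch, x, y) -> (batch, x/2, 2y): pair each even-indexed step with the odd one after it
def stack_even_odd(t):
    even = t[:, 0::2, :]  # steps 0, 2, 4, ...
    odd = t[:, 1::2, :]   # steps 1, 3, 5, ...
    return tf.concat([even, odd], axis=-1)

layer = Lambda(stack_even_odd)(layer)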
