reshape output of tf.gather(), tensorflow - reshape

I have a tensor of shape [None, 100, 5, 50] from tf.gather(W, X), and I want to reshape it to [None, 100, 250],
but tf.reshape (even combined with tf.pack) gives me an error about Dimension(None) because of the dynamic aspect of the graph.
Is there any way to first reshape the inner part of the tensor ([100, 5, 50]), since those dimensions are known in the graph, and then use set_shape to get [None, 100, 250]?

The tf.reshape() op doesn't understand partial shapes (i.e. those with a None for one or more dimensions), because they can be ambiguous: e.g. there could be many possible concrete shapes for a partial shape [None, None, 50].
However, tf.reshape() also allows you to specify one of the dimensions as a wildcard, which will be chosen automatically, so you can use this in your case. To specify the wildcard, use -1 as one of the dimensions:
input = ...
print input.get_shape() ==> [None, 100, 5, 50]
reshaped = tf.reshape(input, [-1, 100, 250])
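For reference, here is a minimal runnable sketch of the wildcard reshape (assuming TF 2.x eager mode, with an arbitrary batch size of 7 standing in for the unknown dimension):
import tensorflow as tf

gathered = tf.random.normal([7, 100, 5, 50])     # stand-in for the output of tf.gather(W, X)
reshaped = tf.reshape(gathered, [-1, 100, 250])  # -1 lets TensorFlow infer the batch dimension
print(reshaped.shape)  # (7, 100, 250)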

Related

How to lower the last dimension of a Tensor?

I have a naive question.
For example, I have a tensor of size torch.Size([2, 1, 80, 64]).
I need to turn it into another tensor of size torch.Size([2, 1, 80, 16]).
Are there any good ways to achieve that?
Many operations can achieve this kind of dimensionality reduction; the following are some examples:
randomly select 16 out of the 64 features
take the mean of every four features (64/4=16)
use a dimensionality reduction technique like PCA
apply a linear transformation
apply a convolution function
To give a satisfying answer, more information about why and what you want to do is necessary.
Answered by: #ptrblck_de
slice the tensor
y = x[..., :16]
print(y.shape)
# torch.Size([2, 1, 80, 16])
index it with a stride of 4
y = x[..., ::4]
print(y.shape)
# torch.Size([2, 1, 80, 16])
use any pooling (max, avg, etc.) layer (the same would also work using adaptive pooling layers)
pool = nn.MaxPool2d(kernel_size=(1, 2), stride=(1, 4))
y = pool(x)
print(y.shape)
# torch.Size([2, 1, 80, 16])
pool = nn.AdaptiveAvgPool2d(output_size=(80, 16))
y = pool(x)
print(y.shape)
# torch.Size([2, 1, 80, 16])
or manually reduce the last dimension with any reduction op (sum, mean, max, etc.)
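As a complement to the snippets above, here is a self-contained sketch (with a made-up example tensor x) of the "mean of every four features" and "linear transformation" options from the list above:
import torch
import torch.nn as nn

x = torch.randn(2, 1, 80, 64)  # example tensor; the values are arbitrary

# mean over every group of four consecutive features (64 / 4 = 16)
y_mean = x.view(2, 1, 80, 16, 4).mean(dim=-1)
print(y_mean.shape)  # torch.Size([2, 1, 80, 16])

# learned linear projection of the last dimension from 64 to 16
proj = nn.Linear(64, 16)
y_proj = proj(x)
print(y_proj.shape)  # torch.Size([2, 1, 80, 16])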

How to use affine_grid for batch tensor in pytorch?

Following the official tutorial on affine_grid, this line (inside function stn()):
grid = F.affine_grid(theta, x.size())
gives me an error:
RuntimeError: Expected size for first two dimensions of batch2 tensor to be: [10, 3] but got: [4000, 3].
using the input as follows:
from torchinfo import summary
model = Net()
summary(model, input_size=(10, 1, 256, 256))
theta is of size (4000, 2, 3) and x is of size (10, 1, 256, 256). How do I properly reshape theta to the correct dimensions for use with a batch tensor?
EDIT: I honestly don't see any mistake here, and the dimensions actually follow the affine_grid docs. Has something changed in the function itself?
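The error suggests that theta's leading dimension has to match the batch size of x. For reference, here is a minimal sketch of the shapes F.affine_grid expects, using a hypothetical identity transform for a batch of 10:
import torch
import torch.nn.functional as F

theta = torch.zeros(10, 2, 3)  # one 2x3 affine matrix per sample in the batch
theta[:, 0, 0] = 1.0           # identity transform
theta[:, 1, 1] = 1.0

grid = F.affine_grid(theta, size=(10, 1, 256, 256), align_corners=False)
print(grid.shape)  # torch.Size([10, 256, 256, 2])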

(Torch tensor) Subtracting matrices with different dimensions

matrice1 = temp.unsqueeze(0)
print(M.shape)                                # torch.Size([1, 10, 3, 256])
matrice2 = M.permute(1, 0, 2, 3)
print(matrice2.shape)                         # torch.Size([10, 1, 3, 256])
print(torch.abs(matrice1 - matrice2).shape)   # torch.Size([10, 10, 3, 256])
I got the outcome above. I am wondering why the subtraction of two tensors with different dimensions produces a tensor of shape [10, 10, 3, 256].
According to PyTorch's broadcasting semantics, the two tensors are "broadcastable" in your case, so they are automatically expanded to the common size of torch.Size([10, 10, 3, 256]).
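A minimal sketch reproducing the broadcast with random stand-in tensors of the same shapes:
import torch

a = torch.randn(1, 10, 3, 256)   # same shape as matrice1
b = torch.randn(10, 1, 3, 256)   # same shape as matrice2

# each size-1 dimension is expanded to match the other tensor
diff = torch.abs(a - b)
print(diff.shape)  # torch.Size([10, 10, 3, 256])

# equivalent explicit expansion
print(torch.abs(a.expand(10, 10, 3, 256) - b.expand(10, 10, 3, 256)).shape)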

Need help transforming a pytorch tensor

I am getting this error when trying to run a version of U-Net. I cannot just alter the shape of the tensor; will I need to change the actual model itself?
Expected 5-dimensional input for 5-dimensional weight [16, 1, 5, 5, 5], but got 4-dimensional input of size [4, 320, 320, 24] instead
I think your model was trained on 3-D data, expecting a batch of 1-channel 3-D volumes. I think your input data "squeezed" the 1-channel dimension from the 4-sample batch of shape 4x320x320x24 that you have. Try unsqueezing the missing dimension:
x = x.unsqueeze(dim=1) # for x of shape [4, 320, 320, 24]
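A quick sketch, using a hypothetical nn.Conv3d layer that matches the reported weight shape [16, 1, 5, 5, 5], shows that the unsqueezed input goes through:
import torch
import torch.nn as nn

conv = nn.Conv3d(in_channels=1, out_channels=16, kernel_size=5)  # weight shape [16, 1, 5, 5, 5]

x = torch.randn(4, 320, 320, 24)  # 4-D input from the error message
x = x.unsqueeze(dim=1)            # -> torch.Size([4, 1, 320, 320, 24])
out = conv(x)
print(out.shape)                  # torch.Size([4, 16, 316, 316, 20])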

the meaning of (-1,)+x.shape[2:] where x is a torch.tensor

I once saw a code segment using torch.reshape as follows:
for name, value in samples.items():
    value_flat = torch.reshape(value, (-1,) + value.shape[2:])
What does (-1,) + value.shape[2:] mean here? Here value is of type torch.Tensor.
(-1,) is a one element tuple containing -1.
value.shape[2:] selects the elements of value.shape (the shape of tensor value) from the third one onward.
All in all, what happens is the tuple gets concatenated with the torch.Size object to make a new tuple. Let's take an example tensor:
>>> x = torch.rand(2, 3, 64, 64)
>>> x.shape[2:]
torch.Size([64, 64])
>>> (-1,) + x.shape[2:]
(-1, 64, 64)
When using -1 in torch.reshape, it indicates to 'put the rest of the dimensions on that axis'. Here it will essentially flatten the first and second axes (batch and channel) together.
In our example, the shape will go from (2, 3, 64, 64) to (6, 64, 64), i.e. if the tensor has four dimensions, the operation is equivalent to
value.reshape(value.size(0)*value.size(1), value.size(2), value.size(3))
but it is certainly clumsy to write it this way.
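A quick check with an example tensor confirms that the two spellings give the same result:
import torch

value = torch.rand(2, 3, 64, 64)
a = torch.reshape(value, (-1,) + value.shape[2:])
b = value.reshape(value.size(0) * value.size(1), value.size(2), value.size(3))
print(a.shape)            # torch.Size([6, 64, 64])
print(torch.equal(a, b))  # True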
