Cannot feed value of shape (32, 2, 2, 2) for Tensor 'real_input_3:0', which has shape '(?, 8)'. How do I solve this with a reshape?

I am unable to match the dimensions of the two arrays.
I tried np.reshape(), but could not solve the issue.
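A hedged sketch of the usual fix, assuming the placeholder really expects 8 features per sample: keep the batch axis and flatten the rest (2 * 2 * 2 = 8). The array below is a random stand-in for your data.
import numpy as np

batch = np.random.rand(32, 2, 2, 2).astype(np.float32)   # stand-in for the real batch
flat = batch.reshape(batch.shape[0], -1)                 # (32, 8): batch kept, rest flattened
print(flat.shape)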

Related

unable to use `torch.gather` with 3D index and 3D input

I have an idx tensor with shape torch.Size([2, 80000, 3]), which corresponds to batch, points, and indices of 3 elements (from 0 to 512) from the feat tensor with shape torch.Size([2, 513, 2]).
I can't seem to find a way to use torch.gather to index feat with idx so that the resulting tensor has shape [2, 80000, 3, 2].
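A hedged sketch with the shapes from the question (feat and idx below are random stand-ins): torch.gather requires the index tensor to have the same number of dimensions as the input, so either reshape idx to fit gather or sidestep it with broadcasted advanced indexing.
import torch

feat = torch.randn(2, 513, 2)
idx = torch.randint(0, 513, (2, 80000, 3))

# Option 1: broadcasted advanced indexing
batch = torch.arange(feat.size(0)).view(-1, 1, 1)    # (2, 1, 1), broadcasts against idx
out = feat[batch, idx]                               # (2, 80000, 3, 2)

# Option 2: torch.gather after flattening and expanding the index
idx_flat = idx.reshape(2, -1, 1).expand(-1, -1, feat.size(-1))   # (2, 240000, 2)
out2 = feat.gather(1, idx_flat).reshape(2, 80000, 3, -1)
print(torch.equal(out, out2))    # True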

How do I solve a ValueError in TensorFlow?

I am running a CNN in Google Colab using TensorFlow with Keras. However, I received this error:
Negative dimension size caused by subtracting 3 from 2 for '{{node conv2d_11/Conv2D}} = Conv2D[T=DT_FLOAT, data_format="NHWC", dilations=[1, 1, 1, 1], explicit_paddings=[], padding="VALID", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true](Placeholder, conv2d_11/Conv2D/ReadVariableOp)' with input shapes: [?,2,2,394], [3,3,394,394].
Call arguments received:
• inputs=tf.Tensor(shape=(None, 2, 2, 394), dtype=float32)
Does this have to do with my input data or with my parameters? Thanks.
Try inserting a number instead of using None when specifying the shape. As noted in the documentation, you run into not-fully-specified shapes when using None.
In this case your model contains many MaxPool or AvgPool layers, so as the images pass through the layers, their spatial size keeps decreasing. Setting the padding parameter of the convolution layers to 'same' should help; for more details, read about the strides and padding parameters of conv layers.
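A minimal sketch of that suggestion, with the layer sizes copied from the error message: padding='same' keeps the 2x2 feature map from underflowing the 3x3 kernel.
import tensorflow as tf

x = tf.random.normal((1, 2, 2, 394))
conv = tf.keras.layers.Conv2D(394, kernel_size=3, padding="same")
print(conv(x).shape)    # (1, 2, 2, 394) instead of a negative dimension error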

How to use affine_grid for batch tensor in pytorch?

Following the official tutorial on affine_grid, this line (inside function stn()):
grid = F.affine_grid(theta, x.size())
gives me an error:
RuntimeError: Expected size for first two dimensions of batch2 tensor to be: [10, 3] but got: [4000, 3].
using the input as follow:
from torchinfo import summary
model = Net()
summary(model, input_size=(10, 1, 256, 256))
theta is of size (4000, 2, 3) and x is of size (10, 1, 256, 256). How do I properly manipulate theta so its dimensions are correct for a batch tensor?
EDIT: I honestly don't see any mistake here, and the dimensions are actually in accordance with the affine_grid docs. Has something changed in the function itself?
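One hedged guess, assuming the tutorial's Net was kept as-is apart from the input size: the localization head flattens with xs.view(-1, 10 * 3 * 3), which is sized for 28x28 MNIST inputs. With 256x256 inputs the localization conv stack produces a 10 x 60 x 60 feature map, so that view folds the batch axis into extra rows (10 samples * 36000 elements / 90 = 4000), which is exactly where theta's (4000, 2, 3) comes from. Resizing the flatten and the first linear layer of fc_loc keeps the batch at 10:
# sketch in the context of the tutorial's stn(); sizes assume the tutorial's
# localization layers applied to 256x256 inputs
xs = xs.view(-1, 10 * 60 * 60)           # (10, 36000): batch axis preserved
theta = self.fc_loc(xs)                  # fc_loc's first layer: nn.Linear(10 * 60 * 60, 32)
theta = theta.view(-1, 2, 3)             # (10, 2, 3)
grid = F.affine_grid(theta, x.size())    # batch dimensions now agree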

Understanding matrix dimensions when printing CNN heatmaps (Class Activation Maps (CAM))

This question is related to computing the Class Activation Map (CAM) visualization.
The source code is at In [24] onwards; the relevant lines are quoted below.
From model.summary(): the last convolutional layer is block5_conv3, with output dimensions (14, 14, 512), and the model predicts 1000 classes.
My questions are about the following lines of code.
In this line of code:
african_elephant_output = model.output[:, 386]
This model was trained on 1000 classes (the last line in the output of model.summary()). To understand the gradient calculation at a later step, I first want to understand how to print the length of the vector african_elephant_output and also the actual values in this feature vector.
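A sketch of one way to inspect both, assuming TF1-style graph mode (as the K.gradients call below implies) and a preprocessed input image x of shape (1, 224, 224, 3):
from keras import backend as K

print(K.int_shape(african_elephant_output))    # (None,): one scalar score per sample
score_fn = K.function([model.input], [african_elephant_output])
print(score_fn([x])[0])                        # the concrete class-386 score for x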
last_conv_layer = model.get_layer('block5_conv3')
grads = K.gradients(african_elephant_output, last_conv_layer.output)[0]
The dimensions of last_conv_layer's output are (14, 14, 512). But to understand the dot product calculated via K.gradients, I need to know the dimensions of african_elephant_output. Should the dimensions of african_elephant_output be (1, 1, 512), so that we can first broadcast african_elephant_output and then calculate the dot product of the corresponding channels? How can I print the dimensions of african_elephant_output and the dimensions and values of grads?
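The same graph-mode trick works for grads, reusing the hypothetical input x from the sketch above:
print(K.int_shape(grads))              # (None, 14, 14, 512), matching last_conv_layer.output
grads_fn = K.function([model.input], [grads])
print(grads_fn([x])[0].shape)          # concrete gradient values for x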
What does axis = (0, 1, 2) refer to in this line of code:
pooled_grads = K.mean(grads, axis=(0, 1, 2))
I am assuming the grads vector in #2 above has shape (14, 14, 512), so the axis values 0, 1, 2 refer to width (0), height (1), and the channel dimension (2), and the mean is calculated along the width and height, giving a vector of shape (512,)?
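Nearly: as defined above, grads keeps a leading batch axis, so its shape is (None, 14, 14, 512), and axis=(0, 1, 2) averages over batch, height, and width, leaving (512,). A NumPy stand-in shows the reduction:
import numpy as np

g = np.random.rand(1, 14, 14, 512)     # stand-in for grads evaluated on one image
pooled = g.mean(axis=(0, 1, 2))
print(pooled.shape)                    # (512,)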

How can I select single indices over a dimension in pytorch?

Assume I have a tensor sequences of shape [8, 12, 2]. Now I would like to make a selection of that tensor for each first dimension which results in a tensor of shape [8, 2]. The selection over dimension 1 is specified by indices stored in a long tensor indices of shape [8].
I tried the following, but it selects each index in indices for every entry of the first dimension of sequences, instead of just one per entry.
sequences[:, indices]
How can I make this query without a slow and ugly for loop?
sequences[torch.arange(sequences.size(0)), indices]
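A self-contained check of that one-liner, with shapes from the question:
import torch

sequences = torch.randn(8, 12, 2)
indices = torch.randint(0, 12, (8,))
result = sequences[torch.arange(sequences.size(0)), indices]
print(result.shape)    # torch.Size([8, 2])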
torch.index_select solves your problem more easily than torch.gather, since you don't have to adapt the dimensions of the indices. The indices must be a tensor. For your case:
import torch

indices = torch.tensor([0, 2], dtype=torch.long)
a = torch.rand(3, 3, 3)
print(torch.index_select(a, dim=1, index=indices))   # selects entries 0 and 2 along dim 1
This should be doable with torch.gather, but you need to convert your index tensor first:
unsqueeze it to match the number of dimensions of your input tensor
repeat_interleave it to match the size of the last dimension
Here is an example based on your description:
# original indices dimension: [8]
# after first unsqueeze, dimension is [8, 1]
indices = torch.unsqueeze(indices, 1)
# after second unsqueeze, dimension is [8, 1, 1]
indices = torch.unsqueeze(indices, 2)
# after repeat_interleave, dimension is [8, 1, 2]
indices = torch.repeat_interleave(indices, 2, dim=2)
# now indices has the right dimensions for torch.gather;
# squeeze(1) drops the redundant middle dimension afterwards
# result has dimension [8, 2]
result = torch.gather(sequences, 1, indices).squeeze(1)
It can be done using torch.Tensor.gather:
import torch

sequences = torch.randn(8, 12, 2)
# define the indices as a 1D tensor of random integers,
# then reshape it for use with Tensor.gather
indices = torch.randint(sequences.size(1), (sequences.size(0),))
indices = indices.unsqueeze(-1).unsqueeze(-1).repeat(1, 1, sequences.size(-1))
# indices shape: (8, 1, 2)
output = sequences.gather(1, indices).squeeze(1)
# output shape: (8, 2)