Need help transforming a pytorch tensor - pytorch

Getting this error when trying to run a version of U-Net. I can't just alter the shape of the tensor; will I need to change the actual model itself?
Expected 5-dimensional input for 5-dimensional weight [16, 1, 5, 5, 5], but got 4-dimensional input of size [4, 320, 320, 24] instead

I think your model was trained on 3-D data, expecting a batch of 1-channel 3-D volumes. Your input data seems to have had the 1-channel dimension "squeezed" out of the 4-sample batch of shape 4x320x320x24. Try unsqueezing the missing dimension:
x = x.unsqueeze(dim=1) # for x of shape [4, 320, 320, 24]
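For example (a minimal sketch with a random stand-in tensor):
import torch

x = torch.randn(4, 320, 320, 24)    # the 4-sample batch from the question, channel dim missing
x = x.unsqueeze(dim=1)              # insert the channel dimension at position 1
print(x.shape)                      # torch.Size([4, 1, 320, 320, 24]), 5-D, as Conv3d expects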

Related

How to add a channel layer for a 3D image for 3D CNN

I am working on a medical dataset of 3D images. image.shape gives me (512, 512, 241); I assume this is height, width, depth. But when I run a 3D CNN I get this error: Given groups=1, weight of size [32, 3, 3, 3, 3], expected input[1, 1, 241, 512, 512] to have 3 channels, but got 1 channels instead. What should be done? I am using PyTorch.
Thank you in advance.
Your network expects each slice of the 3D volume to have three channels (RGB). You can simply convert grayscale (single-channel) data to "fake" RGB by duplicating the single channel you have:
x_gray = ... # your tensor of shape batch-1-241-512-512
x_fake_rgb = x_gray.expand(-1, 3, -1, -1, -1)
See expand for more details.
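Putting it together for the shapes in the error message (a minimal sketch; the random tensor stands in for the actual volume):
import torch

x_gray = torch.randn(1, 1, 241, 512, 512)        # batch-1-241-512-512, one channel
x_fake_rgb = x_gray.expand(-1, 3, -1, -1, -1)    # view that repeats the channel, no copy
print(x_fake_rgb.shape)                          # torch.Size([1, 3, 241, 512, 512])
Note that expand returns a view without allocating new memory; if you later need to write to the tensor, follow it with .contiguous() or use repeat instead.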

How to use affine_grid for batch tensor in pytorch?

Following the official tutorial on affine_grid, this line (inside function stn()):
grid = F.affine_grid(theta, x.size())
gives me an error:
RuntimeError: Expected size for first two dimensions of batch2 tensor to be: [10, 3] but got: [4000, 3].
using the following input:
from torchinfo import summary
model = Net()
summary(model, input_size=(10, 1, 256, 256))
theta is of size (4000, 2, 3) and x is of size (10, 1, 256, 256). How do I properly manipulate theta to get the correct dimensions for use with a batch tensor?
EDIT: I honestly don't see any mistake here, and the dimensions actually follow the affine_grid doc. Has something changed in the function itself?
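For reference, F.affine_grid requires theta's first dimension to match the batch size of x. A minimal sketch of the expected shapes (the numbers mirror the question):
import torch
import torch.nn.functional as F

x = torch.randn(10, 1, 256, 256)       # batch of 10 single-channel images
theta = torch.randn(10, 2, 3)          # one 2x3 affine matrix per sample; dim 0 must equal x.size(0)
grid = F.affine_grid(theta, x.size(), align_corners=False)
out = F.grid_sample(x, grid, align_corners=False)
print(grid.shape, out.shape)           # torch.Size([10, 256, 256, 2]) torch.Size([10, 1, 256, 256])
A theta of shape (4000, 2, 3) therefore suggests the localization head produced the wrong batch dimension, for example a view(-1, 2, 3) applied to a tensor that was not of shape (10, 6).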

pytorch modifying the input data to forward to make it suitable to my model

Here is what I want to do.
I have an individual data of shape (20,20,20) where 20 tensors of shape (1,20,20) will be used as an input for 20 separate CNN. Here's the code I have so far.
class MyModel(torch.nn.Module):
    def __init__(self, ...):
        ...
        self.features = nn.ModuleList([nn.Sequential(
            nn.Conv2d(1, 10, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(10, 14, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(14, 18, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(28*28*18, 256)
        ) for _ in range(20)])
        self.fc_module = nn.Sequential(
            nn.Linear(256*n_selected, cnn_output_dim),
            nn.Softmax(dim=n_classes)
        )

    def forward(self, input_list):
        concat_fusion = cat([cnn(x) for x,cnn in zip(input_list,self.features)], dim = 0)
        output = self.fc_module(concat_fusion)
        return output
The shape of the input_list in forward function is torch.Size([100, 20, 20, 20]), where 100 is the batch size.
However, there's an issue with
concat_fusion = cat([cnn(x) for x,cnn in zip(input_list,self.features)], dim = 0)
as it results in this error.
RuntimeError: Expected 4-dimensional input for 4-dimensional weight [10, 1, 3, 3], but got 3-dimensional input of size [20, 20, 20] instead
First off, I wonder why it expects a 4-dimensional weight [10, 1, 3, 3]. I've seen
"RuntimeError: Expected 4-dimensional input for 4-dimensional weight 32 3 3, but got 3-dimensional input of size [3, 224, 224] instead"?
but I'm not sure where those specific numbers are coming from.
I have an input_list which is a batch of 100 samples. I'm not sure how to handle an individual sample of shape (20, 20, 20) so that I can separate it into 20 pieces and feed each as an independent input to one of the 20 CNNs.
why it expects me to give 4-dimensional weight [10,1,3,3].
Note that the following log means the nn.Conv2d has a weight (kernel) of shape (10, 1, 3, 3), i.e. 10 output channels, 1 input channel, and a 3x3 kernel, and therefore requires a 4-dimensional input of shape (N, 1, H, W).
RuntimeError: Expected 4-dimensional input for 4-dimensional weight [10, 1, 3, 3]
How to separate the input into 20 pieces along channels:
Iterating over input_list of shape (100, 20, 20, 20) produces 100 tensors of shape (20, 20, 20), one per sample, not 20 per-channel pieces. If you want to split the input along the channel dimension, slice input_list along its second dimension instead:
concat_fusion = torch.cat([cnn(input_list[:, i:i+1]) for i, cnn in enumerate(self.features)], dim = 1)
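A quick check of the slicing (a minimal sketch; the random tensor stands in for the real batch):
import torch

input_list = torch.randn(100, 20, 20, 20)   # batch of 100 samples, 20 channels each
piece = input_list[:, 0:1]                  # slice one channel, keeping the dim
print(piece.shape)                          # torch.Size([100, 1, 20, 20]), a valid 4-D Conv2d input
Each of the 20 CNNs then maps (100, 1, 20, 20) to (100, 256), and concatenating along dim=1 gives (100, 256*20) for the fully connected head, assuming the Linear layer's input size matches the flattened feature map (20*20*18 for a 20x20 input, not 28*28*18 as written above).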

Input and Output of the Convolutional Neural Network is a 3D matrix

I am writing code for 3D point-cloud object detection and classification. First I labeled all my raw point clouds, and now I have converted them to voxels, so at every timestamp I get a 3D matrix with 4 features. I expect my output to be the same 3D matrix with labels. In programming terms:
My input to the neural network is [no. of entries, 4 (features), 10, 20, 15].
My output from the network is [no. of entries, 1, 10, 20, 15],
and due to the one-hot vector for the 8 classes, the shape becomes [no. of entries, 10, 20, 15, 8].
So which input and output layers should I use in Keras so that it does not give me any error?
I use this layer as the input for the network:
model.add(Convolution3D(64, 3, 3, 3, activation='relu',
                        border_mode='same', name='conv1',
                        subsample=(1, 1, 1),
                        input_shape=(4, 10, 20, 15)))
But I am not able to figure this out: by adding pooling and other layers, the size is reduced. Which layer should I use as the output, and how do I maintain the same size in the middle layers?
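The usual way to keep the output the same size as the input is a fully convolutional stack with 'same' padding and no pooling, ending in a 1x1x1 convolution over the classes. A minimal sketch with the modern Keras Conv3D API (the Convolution3D/border_mode/subsample syntax above is from an old Keras version); the channels-last layout is an assumption chosen so the output matches the one-hot shape [entries, 10, 20, 15, 8]:
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(10, 20, 15, 4)),                       # channels-last: 10x20x15 voxels, 4 features
    # 'same' padding with stride 1 keeps the 10x20x15 size at every layer
    layers.Conv3D(64, 3, padding='same', activation='relu'),
    layers.Conv3D(64, 3, padding='same', activation='relu'),
    # a 1x1x1 convolution maps each voxel to 8 class scores; softmax runs over the last axis
    layers.Conv3D(8, 1, activation='softmax'),
])
model.summary()                                               # final output shape: (None, 10, 20, 15, 8)
Training against the one-hot labels would then use a categorical_crossentropy loss.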

reshape output or tf.gather(), tensorflow

I have a tensor of shape [None, 100, 5, 50] from tf.gather(W, X), and I want to reshape it to [None, 100, 250],
but using tf.reshape (even with tf.pack) gives me a Dimension(None) error because of the dynamic aspect of the graph.
Is there any way to first reshape the inner [100, 5, 50] part, since those dimensions are known in the graph, and then use set_shape to get [None, 100, 250]?
The tf.reshape() op doesn't understand partial shapes (i.e. those with a None for one or more dimensions), because they can be ambiguous: e.g. there could be many possible concrete shapes for a partial shape [None, None, 50].
However, tf.reshape() also allows you to specify one of the dimensions as a wildcard, which will be chosen automatically, so you can use this in your case. To specify the wildcard, use -1 as one of the dimensions:
input = ...
print(input.get_shape())                      # ==> [None, 100, 5, 50]
reshaped = tf.reshape(input, [-1, 100, 250])  # -1 is the wildcard for the batch dimension
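A quick eager-mode check with modern TensorFlow (a minimal sketch; the concrete batch size 7 stands in for the unknown None dimension):
import tensorflow as tf

x = tf.random.uniform([7, 100, 5, 50])        # stand-in for the gathered tensor; 7 plays the None batch dim
reshaped = tf.reshape(x, [-1, 100, 250])      # 100*5*50 elements per sample become 100*250
print(reshaped.shape)                         # (7, 100, 250)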
