Convolutional Neural Networks - Non-square inputs

I have been doing some reading on convolutional neural networks and I realized that the inputs are always square images. So I was wondering how I can train a CNN to detect people (upright), since the ground truth (bounding box) would be rectangular. Would it be OK to resize the cropped image to a square?
Thanks.
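For concreteness, this is roughly the kind of resize I have in mind (TensorFlow, with made-up shapes):
import tensorflow as tf

# Hypothetical rectangular person crop: height 128, width 48, RGB
crop = tf.random.uniform((128, 48, 3))

# Squash the rectangular crop to a square network input.
# This distorts the aspect ratio; padding to a square first would preserve it.
square_input = tf.image.resize(crop, (64, 64))
print(square_input.shape)  # (64, 64, 3)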

Related

my ground truth is a series of gray scale images

I aim at training a U-Net whereby my ground truth is a set of grey-scale images (240,240,1) associated with a collection of input medical images MRI T1/T2/T1Contrast/FLAIR (240,240,4). I wonder how to configure the last layer of my U-Net (softmax? sigmoid?), and how to make the model understand that the grey-scale values of the ground truth are to be used to calculate the loss. The last layer should deliver e.g. 256 grey levels (?) ... can I consider my problem as a segmentation problem with 256 labels?
You will want either a 'linear' activation if your ground truth is scaled 0 to 255, or no activation (None) if you scale it 0 to 1.
And no, this is not a segmentation problem; it is a regression problem, which means you will need to use regression losses. You can start with mean squared error. Check out the regression losses in the Keras documentation.
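For example, a last layer and compile step along these lines (a rough sketch; the single conv layer below just stands in for your real U-Net body, and the 1x1 output conv is one common choice, not the only one):
from tensorflow.keras import layers, Model

# Stand-in for the real U-Net body, just to make the example runnable
inp = layers.Input((240, 240, 4))
x = layers.Conv2D(16, (3, 3), padding='same', activation='relu')(inp)

# One regression value per pixel; 'linear' keeps the raw output
out = layers.Conv2D(1, (1, 1), activation='linear')(x)

model = Model(inp, out)
model.compile(optimizer='adam', loss='mse')  # mean squared error, a regression loss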

Output of CNN should be image

I am pretty new to deep learning, so I got one question:
Assume an input grayscale image of shape (128,128,1). The target (output) is likewise a (128,128,1) sized image, e.g. for segmentation, depth prediction etc. Usually with valid padding the size of the image shrinks after several convolution layers.
What are decent (maybe not the toughest) ways to keep the size or predict a same-sized image? Is it via same padding? Is it via transpose convolution or upsampling? Should I use an FCN at the end and reshape it to the image size? I am using PyTorch. I would be glad for any hints, because I didn't find much on the internet.
Best
TLDR; You want to look at Deconv networks (Convolution transpose) that help regenerate an image using convolution operations. You want to build an encoder-decoder convolution architecture that compresses an image to a latent representation using convolutions and then decodes an image from this compressed representation. For image segmentation, a popular architecture is U-net.
NOTE: I can't answer for PyTorch, so I will be sharing the TensorFlow equivalent. Please feel free to ignore the code, but since you are looking for the concept, I can help you with what you need to solve this.
You are trying to generate an image as the output of the network.
A series of convolution operations helps to downsample an image. Since you need an output 2D matrix (grey-scale image), you want to upsample as well. Such a network is called a deconv network.
The first series of layers convolves over the input, 'flattening' it into a vector of channels. The next set of layers uses 2D Conv Transpose or Deconv operations to turn the channels back into a 2D matrix (grey-scale image).
Here is sample code that shows how you can take a (128,128,1) image, downsample it with convolutions, and bring it back to a (128,128,1) image using a deconv net.
The equivalent layer in PyTorch is torch.nn.ConvTranspose2d.
from tensorflow.keras import layers, Model, utils
inp = layers.Input((128,128,1)) ##
x = layers.Conv2D(2, (3,3))(inp) ## Convolution part
x = layers.Conv2D(4, (3,3))(x) ##
x = layers.Conv2D(6, (3,3))(x) ##
##########
x = layers.Conv2DTranspose(6, (3,3))(x)   ##
x = layers.Conv2DTranspose(4, (3,3))(x)   ## Deconvolution part
out = layers.Conv2DTranspose(1, (3,3))(x) ##
model = Model(inp, out)
utils.plot_model(model, show_shapes=True, show_layer_names=False)
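If you prefer PyTorch, here is a rough equivalent of the same idea using torch.nn.ConvTranspose2d (layer widths are arbitrary; this is an illustrative sketch, not a drop-in replacement):
import torch
from torch import nn

# Convolutions with valid padding shrink 128 -> 122,
# transposed convolutions grow 122 -> 128 again
model = nn.Sequential(
    nn.Conv2d(1, 2, 3), nn.ReLU(),
    nn.Conv2d(2, 4, 3), nn.ReLU(),
    nn.Conv2d(4, 6, 3), nn.ReLU(),
    nn.ConvTranspose2d(6, 6, 3), nn.ReLU(),
    nn.ConvTranspose2d(6, 4, 3), nn.ReLU(),
    nn.ConvTranspose2d(4, 1, 3),
)

x = torch.randn(1, 1, 128, 128)   # (batch, channels, height, width)
print(model(x).shape)             # torch.Size([1, 1, 128, 128])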
Also, if you are looking for tried and tested architectures in this domain, check out U-net; U-Net: Convolutional Networks for Biomedical Image Segmentation. This is an encoder-decoder (conv2d, conv2d-transpose) architecture that uses a concept called skip connections to avoid information loss and generate better image segmentation masks.

How do I crop a Landsat image into smaller chunks for training and then predict on the original image

I am looking at using Landsat imagery to train a CNN for unsupervised pixel-wise semantic segmentation. That said, I have been unable to find a method that allows me to crop images from the larger Landsat image for training and then predict on the original image. Essentially here is what I am trying to do:
Original Landsat image (5,000 x 5,000 - this is an arbitrary size, not exactly sure of the actual dimensions off-hand) -> crop the image into (100 x 100) chunks -> train the model on these cropped images -> output a prediction for each pixel in the original (uncropped) image.
That said, I am not sure if I should predict on the cropped images and stitch them together after they are predicted or if I can predict on the original image.
Any clarification/code examples would be greatly appreciated. For reference, I use both pytorch and tensorflow.
Thank you!
Lance D
Borrowing from Ronneberger et al., what we have been doing is to split the input Landsat scene and corresponding ground truth mask into overlapping tiles. Take the original image and pad it by the overlap margin (we use reflection for the padding) then split into tiles. Here is a code snippet using scikit-image:
from skimage.util import view_as_windows

# image has shape (height, width, channels); each tile keeps a margin of overlap
# on every side, and the stride equals the un-padded tile size
patches = view_as_windows(
    image,
    (self.tile_height + 2*self.image_margin,
     self.tile_width + 2*self.image_margin,
     raster_value['channels']),
    step=(self.tile_height, self.tile_width, raster_value['channels']))
I don't know what you are using for a loss function for unsupervised segmentation. In our case with supervised learning, we crop the final segmentation prediction to match the ground truth output shape. In the Ronneberger paper they relied on shrinkage due to the use of valid padding.
For predictions you would do the same (split into overlapping tiles) and stitch the result.
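As a rough illustration of the padding, tiling and stitching (the tile size, margin, image shape and predict_fn below are placeholders, not our actual values):
import numpy as np

tile, margin = 100, 10                  # hypothetical tile size and overlap margin
image = np.random.rand(500, 500, 7)     # small stand-in for a Landsat scene

def predict_fn(window):
    # Placeholder for the real model: returns one value per pixel of the window
    return window.mean(axis=-1)

# Pad by the margin with reflection so border tiles also get full context
padded = np.pad(image, ((margin, margin), (margin, margin), (0, 0)), mode='reflect')

prediction = np.zeros(image.shape[:2])
for r in range(0, image.shape[0], tile):
    for c in range(0, image.shape[1], tile):
        window = padded[r:r + tile + 2*margin, c:c + tile + 2*margin]
        pred = predict_fn(window)
        # Keep only the central tile-by-tile block and stitch it into place
        prediction[r:r + tile, c:c + tile] = pred[margin:margin + tile, margin:margin + tile]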

VGG16 trained on grayscale imagenet

I have found the VGG16 network pre-trained on the (color) imagenet database (as .npy). Is there a VGG16 network pre-trained on a gray-scale version of the imagenet database available?
(The usual 'tricks' for using the 3-channel filters of the conv1.1 layer on the gray 1-channel input are not enough for me. I am looking at incremental improvements of the network performance, so I need to see how the transfer learning behaves when the pre-trained model was 'looking' at gray-scale input).
Thanks!
Yes, there's this one:
https://github.com/DaveRichmond-/grayscale-imagenet
A greyscale-ImageNet-trained model, and also a version of it that's fine-tuned on X-rays. They showed that ImageNet performance barely drops, by the way.
@GrimSqueaker gave you the code for this paper: https://openaccess.thecvf.com/content_eccv_2018_workshops/w33/html/Xie_Pre-training_on_Grayscale_ImageNet_Improves_Medical_Image_Classification_ECCVW_2018_paper.html
However, the model trained in it is Inception v3, not VGG16.
You have two options:
Use a color pre-trained VGG16 model and duplicate the grayscale channel across the three input channels (see the sketch below)
Train your own VGG16 model on a grayscaled ImageNet dataset
You may find this link useful:
https://github.com/zzangho/VGG16_grayscale
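For the first option, a minimal Keras sketch of duplicating the grey channel to the three channels the pre-trained VGG16 expects (the 224x224 shape is illustrative):
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16

inp = layers.Input((224, 224, 1))              # grey-scale input
x = layers.Concatenate()([inp, inp, inp])      # duplicate the channel -> (224, 224, 3)
backbone = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
out = backbone(x)
model = Model(inp, out)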

Low gradient values while training convolutional neural network

I am training a neural network with 3 convolutional layers and 1 fully connected layer, which is further connected to a regression layer. My training data set is TID, for an image quality estimation task. I am pre-processing images by doing local contrast normalization on a small 6 x 6 window, which results in pixel values ranging from approximately -4 to +4. I am using the ReLU activation function in all the layers.
The problem is that the gradient values for the different layers produced by my network are very small, somewhere around 2e-6 to 2e-7, which I guess is not ideal for good convergence, as the network is not producing the expected results. I have already tried altering the initialization of the network weights and biases, changing learning rates, employing L1 and L2 regularization, etc., but nothing seems to help alleviate this problem.
So my first question is: is it actually a problem, or is it common to have such small gradient values in a convolutional network? If it is a problem, then what could be an appropriate solution for it?
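For reference, the local contrast normalization I am doing is roughly like this (a simplified sketch with SciPy, not my exact code):
import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast_normalize(img, size=6, eps=1e-6):
    # Subtract the local mean and divide by the local standard deviation,
    # both computed over a size x size neighbourhood of each pixel
    img = img.astype(np.float64)
    mean = uniform_filter(img, size)
    sq_mean = uniform_filter(img**2, size)
    std = np.sqrt(np.maximum(sq_mean - mean**2, 0.0))
    return (img - mean) / (std + eps)

img = np.random.randint(0, 256, (64, 64)).astype(np.float64)
out = local_contrast_normalize(img)
print(out.min(), out.max())   # roughly zero-mean, locally normalized values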
