TensorFlow Keras: problems handling variable-length input using a generator - python-3.x

We want to train our model on inputs of varying dimensions: every input, both within a batch and across batches, has different dimensions.

We cannot resize our inputs (we would lose our microscopic features), and since we cannot resize them, converting them directly into batches of NumPy arrays is impossible. To handle this, I have put the inputs into a list where each element has shape (height, width, 1); the height varies and the width is constant.

Sometimes my inputs are excessively large, so I plan to use model.fit_generator(). In the generator, we find the maximum height among the inputs in a batch and pad every other input with zeros, so that every input in the batch has equal dimensions. We can then easily convert the batch to a NumPy array or a tensor and pass it to fit_generator(). The model should learn to ignore the zeros and extract features from the intended portion of the padded input. This way every batch internally has equal input dimensions, but each batch has a different shape (because the maximum height differs across batches).
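A minimal sketch of such a generator, assuming the inputs live in a Python list of (height, width, 1) arrays with constant width (the class and variable names here are mine, not from any library):

import numpy as np
from tensorflow.keras.utils import Sequence

class PaddedBatchGenerator(Sequence):
    # Zero-pads every image in a batch to that batch's maximum height.

    def __init__(self, images, labels, batch_size=8):
        self.images = images        # list of arrays, each (height, width, 1)
        self.labels = labels
        self.batch_size = batch_size

    def __len__(self):
        return int(np.ceil(len(self.images) / self.batch_size))

    def __getitem__(self, idx):
        imgs = self.images[idx * self.batch_size:(idx + 1) * self.batch_size]
        ys = self.labels[idx * self.batch_size:(idx + 1) * self.batch_size]
        max_h = max(img.shape[0] for img in imgs)     # width is constant here
        batch = np.zeros((len(imgs), max_h, imgs[0].shape[1], 1),
                         dtype=imgs[0].dtype)
        for i, img in enumerate(imgs):
            batch[i, :img.shape[0], :, :] = img       # zero-pad below the image
        return batch, np.asarray(ys)

# model.fit_generator(PaddedBatchGenerator(train_images, train_labels))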
Up to here I have described what I have learned and what I plan to do with variable-size input data. But I am stuck on the following questions:

1- I plan to use a CNN first and then an LSTM on top of it. I am using TensorFlow Keras, which provides facilities for padding and masking. As far as I know, an LSTM can work with masking and padding and ignore 0-padded values. However, I am concerned about the CNN (does a CNN ignore 0-padded values?), because my padded input will be fed to the CNN first. I have seen some discussion in the following links:
How to apply masking layer to sequential CNN model in Keras?
https://github.com/keras-team/keras/issues/411
In these links, it is mentioned that unfortunately masking is not yet supported by the Keras Conv layers. However, there has since been a lot of development and advancement, specifically in TensorFlow Keras. So I am wondering: does TensorFlow Keras now support masking the input?

2- To feed the data, we can use a custom Keras generator. For that I went through a very good tutorial and made up my mind to use it. But I am wondering: is there any more advanced built-in facility in TensorFlow Keras for using a generator that would save me from writing a custom one?
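On question 2: for what it's worth, recent TensorFlow versions ship the tf.data API, whose padded_batch method does exactly the per-batch zero padding described above without a hand-written generator. A sketch, assuming images is a list of variable-height float32 arrays, labels a list of ints, WIDTH the constant width, and model a compiled model whose Input uses shape=(None, WIDTH, 1) (output_signature needs TF 2.4+; older versions use output_types/output_shapes instead):

import tensorflow as tf

def gen():
    for img, y in zip(images, labels):
        yield img, y

dataset = tf.data.Dataset.from_generator(
    gen,
    output_signature=(
        tf.TensorSpec(shape=(None, WIDTH, 1), dtype=tf.float32),  # variable height
        tf.TensorSpec(shape=(), dtype=tf.int32),
    ),
)

# padded_batch zero-pads every example to the largest shape in its batch.
dataset = dataset.padded_batch(8)
model.fit(dataset, epochs=10)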

Related

Extracting hidden representations for each token - PyTorch LSTM

I am currently working on an NLP project involving recurrent neural networks. I implemented an LSTM with PyTorch, following the tutorial here.
For my project, I need to extract the hidden representation of every token of an input text. I thought the easiest way would be to test using a batch size and sequence length of 1, but when I do that the loss gets orders of magnitude larger than in the training phase (during training I used a batch size of 64 and a sequence length of 35).
Is there any other way I can easily access these word-level hidden representations? Thank you.
Yes, that is possible with nn.LSTM as long as it is a single-layer LSTM. If you check the documentation (here), you can see that for its output an LSTM returns a tensor and a tuple of tensors. The tuple contains the hidden and cell states for the last sequence step. What each dimension of the output tensor means depends on how you initialized your network: either the first or the second dimension is the batch dimension, and the rest is the sequence of per-token hidden states you want.
If you use a packed sequence as input, it is a bit of a different story.
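A minimal sketch of what this looks like (all sizes are arbitrary):

import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=100, hidden_size=256, batch_first=True)

x = torch.randn(64, 35, 100)           # (batch, seq_len, embedding_dim)
output, (h_n, c_n) = lstm(x)

# With batch_first=True, output is (batch, seq_len, hidden_size), so
# output[b, t] is the hidden representation of token t in example b.
print(output.shape)                    # torch.Size([64, 35, 256])

# Sanity check: for a single layer, the last timestep of output equals h_n.
assert torch.allclose(output[:, -1, :], h_n[0])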

Training/Predicting with CNN / ResNet on all classes each iteration - concatenation of input data + Hungarian algorithm

So I've got a simple PyTorch example of how to train a ResNet CNN to learn MNIST labeling from this link:
https://zablo.net/blog/post/using-resnet-for-mnist-in-pytorch-tutorial/index.html
It's working great, but I want to hack it a bit so that it does 2 things. First, instead of predicting digits, it predicts animal shapes/colors for a project I'm working on. That is already working quite well and I am happy with it.
Second, I'd like to hack the training (and possibly the layers) so that prediction is done in parallel on multiple images at a time. In the MNIST example, prediction (or output) would basically be done on an image containing 10 digits at a time, concatenated by me. For clarity, each 10-image input will contain each of the digits 0-9 exactly once. The key here is that each of the 10 digits gets a unique class/label from the CNN/ResNet, and each class gets assigned exactly once, so that digits predicted with high confidence prevent digits with lower confidence from using that label (a Hungarian-algorithm type of approach).
So in my use case I want to train on concatenated images (not single images), as in Fig A below, and force the classifier to learn to predict the best unique label for each of the concatenated images, all at once. Such an approach should outperform single-image classification - and it's particularly useful for my animal classification, because otherwise the CNN can sometimes return the same ID for multiple animals, which is impossible in my application.
I can already predict in series, as in Fig B below. Indeed, looking at the confidence of each prediction, I am able to implement a Hungarian-algorithm-like approach post-prediction to assign the best (most confident) unique IDs within each batch of 4 animals. But this doesn't always work, and I'm wondering whether ResNet can also learn the greedy Hungarian assignment.
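For reference, the post-prediction assignment described here can be solved exactly (rather than greedily) with scipy.optimize.linear_sum_assignment; a toy sketch with made-up confidences:

import numpy as np
from scipy.optimize import linear_sum_assignment

# Softmax confidences for a group of 4 animals over 4 candidate IDs.
probs = np.array([[0.70, 0.10, 0.10, 0.10],
                  [0.60, 0.20, 0.10, 0.10],   # also most confident in ID 0
                  [0.05, 0.05, 0.80, 0.10],
                  [0.10, 0.10, 0.10, 0.70]])

# Maximize total confidence subject to each ID being used exactly once;
# the solver minimizes cost, so negate the confidences.
rows, cols = linear_sum_assignment(-probs)
print(cols.tolist())                   # [0, 1, 2, 3]: animal i gets ID cols[i]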
In particular, it's not clear that simply augmenting the inputs and labels in the training set will implement A automatically, because I don't know how to penalize or disallow returning the same label twice within a group of images. For now I can generate these training datasets like this:
print(train_loader.dataset.data.shape)     # torch.Size([60000, 28, 28])
print(train_loader.dataset.targets.shape)  # torch.Size([60000])
And I guess I would want the targets to be [60000, 10], and each input image to be [1, 28, 28, 10]? But I'm not sure what the correct approach would be.
Any advice or available links?
I think this is a specific type of training, but I forgot the name.

Calculation of Keras layers output dimensions

I am currently trying to implement the GoogLeNet architecture (InceptionV1) in Keras with the Theano backend, as I want to generate features for the CUB dataset using a GoogLeNet model.
I found an implementation in Keras here.
However, it is based on an earlier version of Keras, and I had to change the layers for Keras version 2.
Now the model builds correctly. However, the predict() function fails with the error:
ValueError: CorrMM images and kernel must have the same stack size
So I started looking at the original paper and correlating the layers mentioned in the paper with the implemented ones.
Here I found that the first layer produces the expected output of 112x112x64 from a 224x224x3 input.
However, when I calculate the expected output dimensions using the formula given on the Stanford University tutorial page, output height or width = ((input height or width - filter size + 2 * padding) / stride) + 1, the result differs from the actual output I get from the Keras code, even though the Keras output is what the GoogLeNet paper expects.
As per the above equation, the output dimension comes out as a fraction, which is not valid; to get the expected dimension from the formula, the input would need to be 227x227x3. In Keras, however, that input produces an output of 114x114x64.
Does Keras calculate the output dimensions in some different way or am I missing out on something?
Somehow I made it work yesterday by removing a few lines of code from the model that were changing the dimensions. (Possibly they were required by the earlier version of Keras and Theano.)
Also, contrary to what is mentioned in the paper, I changed the patch size of the MaxPooling2D() calls from 3x3 to 2x2, which was the only way to achieve the desired output dimensions of the GoogLeNet architecture: with a 224x224 input, max pooling with a 2x2 patch and a 2x2 stride halves the dimensions, giving the desired output shape.
I am still not sure why the equation for the output dimensions in terms of input size, filter size, padding, and stride does not apply here.
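For what it's worth, the discrepancy usually comes from Keras's padding='same' mode, which computes output = ceil(input / stride) regardless of the filter size (padding asymmetrically when needed); the Stanford formula assumes you pick an explicit symmetric padding P. A quick check of GoogLeNet's first two layers under that assumption (using tf.keras here for the sketch; the layer-level shape rule is the same across backends):

from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D
from tensorflow.keras.models import Model

inp = Input(shape=(224, 224, 3))
x = Conv2D(64, (7, 7), strides=(2, 2), padding='same')(inp)   # -> (112, 112, 64)
x = MaxPooling2D((3, 3), strides=(2, 2), padding='same')(x)   # -> (56, 56, 64)
Model(inp, x).summary()

# padding='same': output = ceil(input / stride), so 224 -> 112 -> 56,
# matching the paper even with the 3x3 pooling patch.
# padding='valid': the Stanford formula with P = 0, e.g.
# floor((224 - 7) / 2) + 1 = 109.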

When do you use Input shape vs batch_shape in Keras?

I can't find API documentation that explains Keras's Input.
When should you use the shape argument vs the batch_shape argument?
From the Keras source code:
Arguments
shape: A shape tuple (integer), not including the batch size.
For instance, `shape=(32,)` indicates that the expected input
will be batches of 32-dimensional vectors.
batch_shape: A shape tuple (integer), including the batch size.
For instance, `batch_shape=(10, 32)` indicates that
the expected input will be batches of 10 32-dimensional vectors.
`batch_shape=(None, 32)` indicates batches of an arbitrary number
of 32-dimensional vectors.
The batch size is how many examples you process together in one training step.
You can use either. Personally, I have never used batch_shape. When you use shape, your batch can be any size; you don't have to care about it.
shape=(32,) means exactly the same as batch_shape=(None,32)
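A quick illustration (standalone Keras takes batch_shape directly as a named argument; tf.keras still accepts it as a legacy keyword):

from tensorflow.keras.layers import Input

# Equivalent declarations: both accept batches of any size.
a = Input(shape=(32,))              # batch dimension is implicitly None
b = Input(batch_shape=(None, 32))   # batch dimension written out explicitly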
To expand on Daniel's answer, one case I've found where it's necessary to specify batch_shape instead of shape for an Input layer is when you are using stateful LSTMs in the functional API. It's described well in Phillipe Remy's blog. In short, the stateful mode lets an LSTM keep its hidden state values across batches (they normally get reset every batch, under the default stateful=False). That means the model needs to know the batch size in order to shape everything properly. If you don't do this, it yells at you:
ValueError: If a RNN is stateful, it needs to know its batch size. Specify the batch size of your input tensors:
- If using a Sequential model, specify the batch size by passing a `batch_input_shape` argument to your first layer.
- If using the functional API, specify the batch size by passing a `batch_shape` argument to your Input layer.
The second point is the relevant one here: if you use an LSTM with stateful=True in the functional API, you need to set batch_shape on your Input layers.
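A minimal functional-API sketch (all sizes are arbitrary):

from tensorflow.keras.layers import Input, LSTM, Dense
from tensorflow.keras.models import Model

# A stateful LSTM must know the batch size up front, so the Input layer
# gets batch_shape=(batch, timesteps, features) instead of shape.
inp = Input(batch_shape=(10, 35, 100))
x = LSTM(256, stateful=True)(inp)
out = Dense(1)(x)
model = Model(inp, out)

# The hidden state now carries over between batches; reset it yourself
# when appropriate, e.g. at the end of each epoch:
model.reset_states()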

Exclude zero-padded regions from network loss - Keras 2.0 Theano backend (segmentation network)

I'm training a segmentation network in Keras with the Theano backend, using ImageDataGenerator with flow_from_directory.
My images have variable sizes. To use flow_from_directory, though, you have to specify a fixed size (target_size), and while reading the images the function automatically fills in the points outside the boundaries of the original image.
Currently I set this size to a value larger than my largest image: say my largest image is 300x400, then I fix target_size to 400x400 and use fill_mode='constant' with cval=0 to pad the points outside the original image with zeros.
Now my problem is as follows: during training, I do not want these padded regions to contribute to my loss function at all. Does anyone have an idea how to do that?
Masking could help you.
Masks a sequence by using a mask value to skip timesteps.
For each timestep in the input tensor (dimension #1 in the tensor), if all values in the input tensor at that timestep are equal to mask_value, then the timestep will be masked (skipped) in all downstream layers (as long as they support masking).
If any downstream layer does not support masking yet receives such an input mask, an exception will be raised.
It's technically made for time series, but it should work for images too with some tweaking. Here you can find some attempts to do so (and some alternatives).
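An alternative that sidesteps Masking entirely is a custom loss that zeroes out the padded pixels' contribution. A sketch, assuming you pad the label images with -1 (e.g. cval=-1 for the mask generator) so that padding is distinguishable from genuine background zeros:

from keras import backend as K

def masked_binary_crossentropy(y_true, y_pred):
    # 1.0 where the ground truth is a real pixel, 0.0 where it is padding.
    valid = K.cast(K.not_equal(y_true, -1), K.floatx())
    y_true = K.clip(y_true, 0.0, 1.0)          # map the -1 padding back to 0
    bce = K.binary_crossentropy(y_true, y_pred)
    # Average only over the valid (non-padded) pixels.
    return K.sum(bce * valid) / K.maximum(K.sum(valid), 1.0)

# model.compile(optimizer='adam', loss=masked_binary_crossentropy)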
