PyTorch training with batches of different lengths? - pytorch

Is it possible to train a model with batches of unequal length during an epoch? I am new to PyTorch.

If you take a look at the DataLoader documentation, you'll see a drop_last parameter, which exists precisely because when the dataset size is not divisible by the batch size, the last batch has a different size. So basically the answer is yes: it is possible, it happens often, and it does not affect (too much) the training of a neural network.
However, you must be a bit careful: some PyTorch layers deal poorly with very small batch sizes. For example, if you have BatchNorm layers and you get a batch of size 1, you'll get errors because batch norm at some point divides by len(batch) - 1. More generally, training a network that has batch norms generally requires batches of significant size, say at least 16 (the literature generally aims for 32 or 64). So if you happen to have variable-size batches, take the time to check whether your layers have requirements in terms of batch size for optimal training and convergence. But except in particular cases, your network will train anyway, no worries.
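As a quick, hedged illustration of the batch-of-size-1 issue (not part of the original answer): in training mode, a BatchNorm layer cannot estimate a variance from a single sample, so PyTorch raises an error.

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(4)
bn.train()  # training mode: batch statistics are computed from the batch itself

try:
    bn(torch.randn(1, 4))        # batch of size 1 -> cannot compute a batch variance
except ValueError as err:
    print("Batch of 1 failed:", err)

print(bn(torch.randn(16, 4)).shape)  # a batch of 16 works fine: torch.Size([16, 4])
```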
As for how to make your batches with custom sizes, I suggest you look at, and take inspiration from, the PyTorch implementation of DataLoader and Sampler. You may want to implement something similar to BatchSampler and use the batch_sampler argument of DataLoader.
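A minimal sketch of that suggestion, assuming you simply want to hand the DataLoader a schedule of batch sizes (the toy dataset and the size list below are made up for illustration):

```python
import torch
from torch.utils.data import DataLoader, Dataset, Sampler

class ToyDataset(Dataset):
    """Hypothetical dataset used only for illustration."""
    def __init__(self, n=100):
        self.data = torch.randn(n, 8)
    def __len__(self):
        return len(self.data)
    def __getitem__(self, idx):
        return self.data[idx]

class VariableBatchSampler(Sampler):
    """Yields lists of indices whose lengths follow a user-provided schedule."""
    def __init__(self, dataset_len, batch_sizes):
        self.dataset_len = dataset_len
        self.batch_sizes = batch_sizes   # e.g. [32, 32, 16, 16, 4]
    def __iter__(self):
        indices = torch.randperm(self.dataset_len).tolist()
        start = 0
        for bs in self.batch_sizes:
            if start >= self.dataset_len:
                break
            yield indices[start:start + bs]
            start += bs
    def __len__(self):
        return len(self.batch_sizes)

dataset = ToyDataset(100)
sampler = VariableBatchSampler(len(dataset), batch_sizes=[32, 32, 16, 16, 4])
loader = DataLoader(dataset, batch_sampler=sampler)

for batch in loader:
    print(batch.shape)   # batch sizes vary: 32, 32, 16, 16, 4
```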

Related

How do I prevent a lack of VRAM halfway through training a Huggingface Transformers (Pegasus) model?

I'm using a pre-trained Pegasus model through Huggingface Transformers (specifically google/pegasus-cnn_dailymail, via the PyTorch backend) and I want to fine-tune it on my own data. This is however quite a large dataset, and I've run into the problem of running out of VRAM halfway through training. Because of the size of the dataset, that can happen days after training started, which makes a trial-and-error approach very inefficient.
I'm wondering how I can make sure ahead of time that it doesn't run out of memory. I would think that the memory usage of the model is in some way proportional to the size of the input, so I've passed truncation=True, padding=True, max_length=1024 to my tokenizer, which if my understanding is correct should make all the outputs of the tokenizer the same size per line. Considering that the batch size is also constant, I would think that the amount of VRAM in use should be stable. So I should just be able to cut up the dataset into manageable parts, look at the RAM/VRAM use of the first run, and infer that it will run smoothly from start to finish.
However, the opposite seems to be true. I've been observing the amount of VRAM used at any time and it can vary wildly, from ~12GB at one time to suddenly requiring more than 24GB and crashing (because I don't have more than 24GB).
So, how do I make sure that the amount of VRAM in use will stay within reasonable bounds for the full duration of the training process, and avoid it crashing due to a lack of VRAM when I'm already days into training?
padding=True actually doesn't pad to max_length, but to the longest sample in the list you pass to the tokenizer. To pad to max_length you need to set padding='max_length'.
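A short sketch of the difference (the checkpoint name follows the question; any Huggingface tokenizer behaves the same way):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/pegasus-cnn_dailymail")
texts = ["a short example", "a somewhat longer example sentence to pad against"]

# padding=True pads only up to the longest sample in this particular list,
# so tensor sizes (and memory use) vary from batch to batch
dynamic = tokenizer(texts, truncation=True, padding=True, max_length=1024,
                    return_tensors="pt")

# padding='max_length' pads every sample to max_length, so every batch has the
# same shape and the memory footprint becomes predictable
fixed = tokenizer(texts, truncation=True, padding="max_length", max_length=1024,
                  return_tensors="pt")

print(dynamic["input_ids"].shape)  # (2, length of the longest sample in the list)
print(fixed["input_ids"].shape)    # (2, 1024)
```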

If Keras is forcing me to use a large batch size for prediction, can I simply fill in a bunch of fake values and only look at the predictions I need

...or is there a way to circumvent this?
In stateful LSTMs I have to define a batch size, and Keras forces me to use the same batch size for prediction as for training; however, my modeling problem depends a lot on having larger batch sizes to see good performance.

How is a minibatch processed by the GPU in PyTorch?

I am trying to understand how PyTorch actually performs a forward pass over a minibatch. When a minibatch is processed by a network, is each example in the minibatch (e.g. each image) sent forwards individually, one after the other? Or are all examples in the minibatch sent forwards at the same time?
When an example is sent forwards through a network, the additional memory requirement is the activations at each layer. And as long as the network does not take up the entire GPU, then it seems that multiple instantiations of these activations could be stored at the same time. Each instantiation could then be used to store the activations for one example in the minibatch. And therefore, multiple examples could be sent through the network simultaneously. However, I'm unsure whether this is actually done in practice.
I have done some simple experiments, and the time for a forward pass is roughly proportional to the minibatch size. This suggests that the examples are sent through one after the other. If so, then why is it that people say that training is faster when the minibatch size is larger? It seems that the processing time for an entire epoch would not be dependent on the minibatch size.
I am trying to understand how PyTorch actually performs a forward pass over a minibatch. When a minibatch is processed by a network, is each example in the minibatch (e.g. each image) sent forwards individually, one after the other? Or are all examples in the minibatch sent forwards at the same time?
All at the same time. To do so, it relies on batch processing, broadcasting, element-wise vectorization for non-linear operations (basically, a highly optimized for-loop, sometimes run in parallel) and matrix linear algebra. The latter is much more efficient than a for-loop, since it can leverage dedicated hardware components designed for parallel linear algebra (this is true for both CPUs and GPUs, but GPUs are especially well suited for it).
Each instantiation could then be used to store the activations for one example in the minibatch. And therefore, multiple examples could be sent through the network simultaneously. However, I'm unsure whether this is actually done in practice.
This is not how it works. Torch keeps track of "operations", each of which has a backward function used to compute the gradients with respect to its inputs from the gradients with respect to its outputs. The whole machinery is designed to support batch processing and vectorization, so that a batch of samples is processed at once, in a single backward pass.
I have done some simple experiments, and the time for a forward pass is roughly proportional to the minibatch size.
This is not true. It may be because you are already eating up 100% of the available resources (CPU or GPU), or because you are not doing the profiling properly (which is not so easy to do). If you post an example, someone can try to help you on this point.
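A rough timing sketch of how one might profile this more carefully (the model and sizes are arbitrary; torch.cuda.synchronize() matters because GPU kernels are launched asynchronously, so timing without it measures launch overhead rather than execution):

```python
import time
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 10)).to(device)

for batch_size in (1, 8, 64, 512):
    x = torch.randn(batch_size, 1024, device=device)

    # warm-up so one-time initialization does not pollute the measurement
    for _ in range(5):
        model(x)
    if device.type == "cuda":
        torch.cuda.synchronize()

    start = time.perf_counter()
    for _ in range(100):
        model(x)
    if device.type == "cuda":
        torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

    print(f"batch_size={batch_size}: {elapsed:.4f}s for 100 forward passes")
```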

value of steps per epoch passed to keras fit generator function

What is the need for setting the steps_per_epoch value when calling fit_generator(), when ideally it should be the total number of samples / batch size?
Keras' generators are infinite.
Because of this, Keras cannot know by itself how many batches the generators should yield to complete one epoch.
When you have a static number of samples, it makes perfect sense to use samples//batch_size for one epoch. But you may want to use a generator that performs random data augmentation, for instance. And because of the random process, you will never have two identical training epochs; there isn't a clear limit then.
So, these parameters in fit_generator allow you to control the yields per epoch as you wish, although in standard cases you'll probably keep to the most obvious option: samples//batch_size.
Without data augmentation, the number of samples is static, as Daniel mentioned.
Then, the number of samples seen during training is steps_per_epoch * batch_size.
By using ImageDataGenerator in Keras, we generate additional training data through data augmentation. Therefore, you can set the effective number of training samples yourself.
If you want twice the training data, just set steps_per_epoch to (original sample size * 2) / batch_size.
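A hedged sketch of the two choices above, with made-up sizes and a hypothetical data/train directory; model.fit is used because recent Keras versions accept generators directly, with the same steps_per_epoch argument that fit_generator takes:

```python
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.image import ImageDataGenerator

n_samples = 2000          # hypothetical size of the original training set
batch_size = 32

# Tiny illustrative model; the Dense layer assumes 10 classes in data/train
model = models.Sequential([
    layers.Input((128, 128, 3)),
    layers.Conv2D(8, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")

# Generator performing random augmentation; "data/train" is a hypothetical path
datagen = ImageDataGenerator(rotation_range=15, horizontal_flip=True)
train_gen = datagen.flow_from_directory("data/train",
                                        target_size=(128, 128),
                                        batch_size=batch_size)

# Standard choice: one pass over the original samples per epoch
model.fit(train_gen, steps_per_epoch=n_samples // batch_size, epochs=10)

# "Twice the training data" per epoch, relying on random augmentation
model.fit(train_gen, steps_per_epoch=(n_samples * 2) // batch_size, epochs=10)
```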

How to set batch size and epoch value in Keras for infinite data set?

I want to feed images to a Keras CNN. The program randomly feeds either an image downloaded from the net, or an image of random pixel values. How do I set batch size and epoch number? My training data is essentially infinite.
Even if your dataset is infinite, you have to set both batch size and number of epochs.
For batch size, you can use the largest batch size that fits into your GPU/CPU RAM, by just trial and error. For example you can try power of two batch sizes like 32, 64, 128, 256.
For the number of epochs, this is a parameter that always has to be tuned for the specific problem. You can use a validation set and train until the validation loss stops improving, or until the training loss is almost constant (it converges). Make sure to use a different part of the dataset to decide when to stop training. Then you can report final metrics on yet another set (the test set).
It is because implementations are vectorised for faster and more efficient execution. When the data is large, all of it cannot fit in memory, hence we use batches to still get some vectorisation.
In my opinion, one should use a batch size as large as your machine can handle.
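A hedged sketch tying these points together, with an artificial infinite generator and illustrative sizes: steps_per_epoch defines what one "epoch" means for infinite data, and an EarlyStopping callback stops training once the validation loss stops improving.

```python
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.callbacks import EarlyStopping

def infinite_batches(batch_size=64):
    """Stand-in for an endless data source: yields random images and labels."""
    while True:
        x = np.random.rand(batch_size, 32, 32, 3).astype("float32")
        y = np.random.randint(0, 2, size=(batch_size, 1))
        yield x, y

model = models.Sequential([
    layers.Input((32, 32, 3)),
    layers.Conv2D(8, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Held-out validation data used only to decide when to stop
x_val = np.random.rand(256, 32, 32, 3).astype("float32")
y_val = np.random.randint(0, 2, size=(256, 1))

model.fit(infinite_batches(64),
          steps_per_epoch=100,           # defines an "epoch" for infinite data
          epochs=1000,                   # upper bound; early stopping cuts it short
          validation_data=(x_val, y_val),
          callbacks=[EarlyStopping(monitor="val_loss", patience=5)])
```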
