Using CNN with Dataset that has different depths between volumes - pytorch

I am working with medical images: I have 130 patient volumes, where each volume consists of N DICOM images/slices.
The problem is that the number of slices N varies between volumes.
About 50% of the volumes have 20 slices; the rest differ by 3 or 4 slices, and some by more than 10 (so much so that interpolating to equalize the number of slices across volumes is not feasible).
I am able to use Conv3d for volumes where the depth N (number of slices) is the same, but I need to use the entire dataset for the classification task. So how do I incorporate the entire dataset and feed it to my network model?

If I understand your question, you have 130 3-dimensional images, which you need to feed into a 3D ConvNet. I'll assume your batches, if N was the same for all of your data, would be tensors of shape (batch_size, channels, N, H, W), and your problem is that your N varies between different data samples.
So there are two problems. First, there's the problem of your model needing to handle data with different values of N. Second, there's the more implementation-related problem of batching data of different lengths.
Both problems come up in video classification models. For the first, I don't think there's a way of getting around having to interpolate SOMEWHERE in your model (unless you're willing to pad/cut/sample) -- if you're doing any kind of classification task, you pretty much need a constant-sized layer at your classification head. However, the interpolation doesn't have to happen right at the beginning. For example, if for an input tensor of size (batch, 3, 20, 256, 256), your network conv-pools down to (batch, 1024, 4, 1, 1), then you can perform an adaptive pool (e.g. https://pytorch.org/docs/stable/nn.html#torch.nn.AdaptiveAvgPool3d) right before the output to downsample everything larger to that size before prediction.
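To make the adaptive-pooling idea concrete, here is a minimal sketch; the shapes are hypothetical, not taken from your model:

import torch
import torch.nn as nn

# Once the conv stack reduces the input to (batch, 1024, D, H', W') with a
# depth D that varies, an adaptive pool collapses it to a fixed size.
pool = nn.AdaptiveAvgPool3d((4, 1, 1))

x = torch.randn(2, 1024, 7, 3, 3)    # hypothetical feature map, D = 7
print(pool(x).shape)                 # torch.Size([2, 1024, 4, 1, 1])

x = torch.randn(2, 1024, 11, 3, 3)   # a deeper volume, D = 11
print(pool(x).shape)                 # same fixed output: [2, 1024, 4, 1, 1]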
The other option is padding and/or truncating and/or resampling the images so that all of your data is the same length. For videos, sometimes people pad by looping the frames, or you could pad with zeros. What's valid depends on whether your length axis represents time, or something else.
For the second problem, batching: If you're familiar with pytorch's dataloader/dataset pipeline, you'll need to write a custom collate_fn which takes a list of outputs of your dataset object and stacks them together into a batch tensor. In this function, you can decide whether to pad or truncate or whatever, so that you end up with a tensor of the correct shape. Different batches can then have different values of N. A simple example of implementing this pipeline is here: https://github.com/yunjey/pytorch-tutorial/blob/master/tutorials/03-advanced/image_captioning/data_loader.py
Something else that might help with batching is putting your data into buckets depending on their N dimension. That way, you might be able to avoid lots of unnecessary padding.
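To make the collate_fn idea concrete, here is a minimal sketch; it assumes (hypothetically) that each dataset item is a (volume, label) pair with the volume shaped (channels, N, H, W):

import torch
import torch.nn.functional as F

def collate_pad(batch):
    # Zero-pad each volume along the depth axis to the batch maximum.
    volumes, labels = zip(*batch)
    max_n = max(v.shape[1] for v in volumes)
    # F.pad takes pad widths in pairs starting from the last dim:
    # (W_left, W_right, H_top, H_bottom, N_front, N_back).
    padded = [F.pad(v, (0, 0, 0, 0, 0, max_n - v.shape[1])) for v in volumes]
    return torch.stack(padded), torch.tensor(labels)

# loader = torch.utils.data.DataLoader(dataset, batch_size=4,
#                                      collate_fn=collate_pad)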

You'll need to flatten the dataset: treat every individual slice as a separate input to the CNN. For any accompanying variables, encode categorical ones as boolean Yes/No flags; for numerical ones, use the equivalent of none (usually 0) where a value is missing.

Related

How to use data with different length for cnn with conv1 as first layer?

I have a list of arrays of different shapes, i.e.
data = [array([1, 2, 3], dtype=int16),
        array([1, 2, 3, 4, 5], dtype=int16),
        array([1, 2], dtype=int16),
        array([1, 2, 3, 4, 5, 6, 7, 8, 9], dtype=int16)]
I want to use these data as input to a CNN in which the first layer is Conv1D. How should I transform the data to make this work? Should I fill the arrays with zeros? The data is a signal coming from a heart device.
As you know, every sample has to have the same shape to feed any Keras model, so you have to make all the samples' shapes the same. One way to do so is to leverage sklearn.impute.SimpleImputer to fill in dummy numbers.
SimpleImputer has several options for the fill value; please refer to:
https://scikit-learn.org/stable/modules/generated/sklearn.impute.SimpleImputer.html
If the smallest sample shape (e.g. [1, 2]) is still relatively big, you can instead prune the other samples' data to fit the smallest shape. If the smallest one is too small, you should consider whether to omit that sample.
Given that the data is heartbeat waveform data, I would try zero-padding first, as you mentioned.
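As a rough sketch of the zero-padding route (here using keras.preprocessing's pad_sequences rather than SimpleImputer; either can work):

import numpy as np
from tensorflow.keras.preprocessing.sequence import pad_sequences

signals = [np.array([1, 2, 3], dtype=np.int16),
           np.array([1, 2, 3, 4, 5], dtype=np.int16),
           np.array([1, 2], dtype=np.int16),
           np.array([1, 2, 3, 4, 5, 6, 7, 8, 9], dtype=np.int16)]

# Pad every signal with trailing zeros up to the longest length.
X = pad_sequences(signals, padding='post', dtype='int16')
X = X[..., np.newaxis]  # Conv1D expects (samples, timesteps, channels)
print(X.shape)          # (4, 9, 1)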

Random Index from a Tensor (Sampling with Replacement from a Tensor)

I'm trying to manipulate individual weights of different neural nets to see how their performance degrades. As part of these experiments, I'm required to sample randomly from their weight tensors, which I've come to understand as sampling with replacement (in the statistical sense). However, since it's high-dimensional, I've been stumped by how to do this in a fair manner. Here are the approaches and research I've put into considering this problem:
This was previously implemented by selecting a random layer and then selecting a random weight in that layer (ignore the implementation of picking a random weight). Since layers are different sizes, we discovered that weights were being sampled unevenly.
I considered what would happen if we sampled according to the numpy.shape of the tensor; however, I realize now that this encounters the same problem as above.
Consider what happens to a rank 2 tensor like this:
[[*, *, *],
[*, *, *, *]]
Selecting a row randomly and then a value from that row results in an unfair selection. This method could work if you're able to assert that this scenario never occurs, but it's far from a general solution.
Note that this possible duplicate actually implements it in this fashion.
I found people suggesting flattening the tensor and using numpy.random.choice to select randomly from a 1D array. That's a simple solution, except I have no idea how to map the flat index back into the tensor's original shape. Further, flattening millions of weights would be a somewhat slow implementation.
I found someone discussing tf.random.multinomial here, but I don't understand enough of it to know whether it's applicable or not.
I ran into this paper about reservoir sampling, but again, it went over my head.
I found another paper which specifically discusses tensors and sampling techniques, but it went even further over my head.
A teammate found this other paper which talks about random sampling from a tensor, but it's only for rank 3 tensors.
Any help understanding how to do this? I'm working in Python with Keras, but I'll take an algorithm in any form that it exists. Thank you in advance.
Before I forget to document the solution we arrived at, I'll talk about the two different paths I see for implementing this:
Use a total ordering on the scalar elements of the tensor. This effectively enumerates your elements, i.e. flattens them; however, you can do it while maintaining the original shape. Consider this sketch:
from typing import Tuple

def count_scalars(tensor) -> int:
    """Counts the scalar elements in a ragged nested-list tensor."""
    if not isinstance(tensor, list):
        return 1
    return sum(count_scalars(sub) for sub in tensor)

def sample_tensor(tensor, chosen_index: int) -> Tuple[int, ...]:
    """Maps a chosen random number to its index in the given tensor.
    Args:
        tensor: A ragged-array n-tensor (nested lists with scalar leaves).
        chosen_index: An integer in [0, num_scalar_elements_in_tensor).
    Returns:
        The tuple of indices that accesses this element in the tensor.
    NOTE: Entirely untested, expect it to be fundamentally flawed.
    """
    remaining = chosen_index
    for i, sub in enumerate(tensor):
        if isinstance(sub, list):
            size = count_scalars(sub)
            if remaining < size:
                # The element lives inside this sub-list; descend.
                return (i,) + sample_tensor(sub, remaining)
            remaining -= size
        else:  # a scalar leaf
            if remaining == 0:
                return (i,)
            remaining -= 1
    raise IndexError("chosen_index out of range")
First of all, I'm aware this isn't a sound algorithm. The idea is to count down until you reach your element, with bookkeeping for indices.
We need to make crucial assumptions here. 1) All lists eventually contain only scalars. 2) By consequence, if a list contains lists, assume that it doesn't also contain scalars at the same level. (Stop and convince yourself of (2).)
A critical note as well: we cannot cheaply measure the number of scalars in a given list unless the list consists homogeneously of scalars (where len suffices); anything nested requires a full recursive count. To avoid measuring this magnitude at every step, the algorithm above should be refactored to descend first and subtract later.
This algorithm has some consequences:
It's about as fast as this style of approach can be. Any function f: [0, total_elems) -> Tuple[int] must know the number of scalar elements preceding each position in the tensor's total ordering, which is effectively bounded at Theta(l), where l is the number of lists in the tensor (since we can call len on a list of scalars).
Even so, it's slow compared to sampling well-formed tensors that have a defined shape.
This raises the question: can we do better? See the next solution.
Use a probability distribution in conjunction with numpy.random.choice. The idea here is that if we know ahead of time what the distribution of scalars is already like, we can sample fairly at each level of descending the tensor. The hard problem here is building this distribution.
I won't write pseudocode for this, but lay out some objectives:
This can be called only once to build the data structure.
The algorithm needs to combine iterative and recursive techniques to a) build distributions for sibling lists and b) build distributions for descendants, respectively.
The algorithm will need to map indices to a probability distribution over sibling lists (note the assumptions discussed above). This does require knowing the number of elements in an arbitrary sub-tensor.
At lower levels where lists contain only scalars, we can simplify by just storing the number of elements in said list (as opposed to storing probabilities of selecting scalars randomly from a 1D array).
You will likely need 2-3 functions: one that utilizes the probability distribution to return an index, a function that builds the distribution object, and possibly a function that just counts elements to help build the distribution.
Sampling with it is also faster, O(n) where n is the rank of the tensor. I'm convinced this is the fastest possible algorithm, but I lack the time to try to prove it.
You might choose to store the distribution as an ordered dictionary that maps a probability to either another dictionary or the number of elements in a 1D array. I think this might be the most sensible structure.
Note that (2) is truly the same as (1), but we pre-compute knowledge about the densities of the tensor.
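A minimal sketch of this precompute-then-descend idea, assuming the ragged tensor is plain nested Python lists (the helper names are mine, not from any library):

import random

def build_counts(tensor):
    # Precompute, for every node, how many scalars live beneath it.
    if not isinstance(tensor, list):
        return (1, None)  # a scalar leaf
    children = [build_counts(sub) for sub in tensor]
    return (sum(count for count, _ in children), children)

def sample_index(counts):
    # Draw a uniformly random index tuple using the precomputed counts.
    total, children = counts
    if children is None:
        return ()
    pick = random.randrange(total)  # fair: each scalar equally likely
    for i, (size, sub) in enumerate(children):
        if pick < size:
            return (i,) + sample_index((size, sub))
        pick -= size

# counts = build_counts([[1, 2, 3], [4, 5, 6, 7]])  # built once
# print(sample_index(counts))                        # e.g. (1, 2)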
I hope this helps.

What should be the batch for Keras LSTM CNN to process image sequence

I want to use image sequence to predict 1 output.
training data:
[(x_img1, y1), (x_img2, y2), ..., (x_img10, y10)]
Color image dimension:
(100, 120, 3)
Output dimension: (1)
Model implemented in Keras:
img_sequence_length = 3
model = Sequential()
model.add(TimeDistributed(Convolution2D(24, 5, 5, subsample=(2, 2), border_mode="same", activation='relu', name='conv1'),
                          input_shape=(img_sequence_length, 100, 120, 3)))
….
model.add(LSTM(64, return_sequences=True, name='lstm_1'))
model.add(LSTM(10, return_sequences=False, name='lstm_2'))
model.add(Dense(256))
model.add(Dense(1, name='output'))
The batch should be:
A)
[ [(x_img1, y1), (x_img2, y2), (x_img3, y3)],
[(x_img2, y2), (x_img3, y3), (x_img4, y4)],
…
]
Or
B)
[ [(x_img1, y1), (x_img2, y2), (x_img3, y3)],
[(x_img4, y4), (x_img5, y5), (x_img6, y6)],
…
]
Why?
This choice really depends on what you want to achieve. Understanding what your data is totally influences the decision. (Not only the shape and type of the data, but what it means and what you want from it. Is it a video? Many videos? Do you want the name of a character in a little segment of a video? Or to know the state of the plot continuously along the video?)
In option A:
This option is used when all your images form a single long sequence and you want to predict the next element in the sequence by knowing a specific number of previous images.
Each group of 3 images in that batch is completely independent.
The layer doesn't keep a memory between them, and the actual memory is length 3.
The simulation of a long sequence happens because you are repeating images in each batch, like a sliding window. But there is no connection or memory transfer from one group to another.
You use this if the sequence has any logical possibility of being predicted from 3 images.
Imagine you have one long video, but you watch only 3 seconds of it and try to deduce something from those 3 seconds. Then your memory is completely washed away before you watch another 3 seconds. When you watch these 3 new seconds, you will not be able to remember what you watched before, and you will not be able to say you watched 4 seconds. All you learn will be confined to 3-second segments.
In option B:
In this option, each group of 3 images has absolutely no connection at all to the others. You can use this as if every group of 3 images were a different sequence (not belonging to a long sequence).
Imagine you have lots of videos, and they are about different things. One is Titanic, another is The Avengers, and so on.
This batch may also be used for a case similar to the one proposed in A, but with a sliding window of step 3. That would make training faster, but the model would also learn less.
Other options:
You can take a look at this question, its answer and the comments to have more ideas.
Some hints on splitting the data:
First, input and output data must be separate:
X = [item[0] for item in training_data]
Y = [item[1] for item in training_data]
Then you must separate the sequences properly.
As you defined in the input_shape, X must follow the same shape.
X.shape must be (numberOfSequences, img_sequence_length, 100, 120, 3)
So if it's a list of images, you must make sure that every image is a numpy array (transform them if necessary), and that you will later convert X to numpy:
X = np.asarray(X_with_numpy_images)
And if you have only one Y for each sequence, you may have it shaped as:
Y.shape must be (numberOfSequences, 1)
You would probably build it by taking values in steps of 3:
Y = [Y[(i+1)*3 - 1] for i in range(numberOfSequences)]
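As a rough sketch of building the windows themselves (make_sequences, images and labels are hypothetical names, not from the question): step=1 yields the overlapping windows of option A, while step=img_sequence_length yields the independent groups of option B:

import numpy as np

def make_sequences(images, labels, seq_len=3, step=1):
    # Group consecutive images into fixed-length input sequences.
    X, Y = [], []
    for start in range(0, len(images) - seq_len + 1, step):
        X.append(images[start:start + seq_len])
        Y.append(labels[start + seq_len - 1])  # label of the last frame
    return np.asarray(X), np.asarray(Y)

# X, Y = make_sequences(images, labels, seq_len=3, step=3)  # option B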
Now it's important to understand if each sequence of 3 images is independent of the other sequences, or if you have just one huge sequence divided in small parts.
In case one, use LSTM(..., stateful=False); in case two, use LSTM(..., stateful=True).
And you will also probably need to reshape the tensors properly in the transition from the convolutional layers to the LSTM layers, because LSTM will require inputs shaped as (NumberOfSequences, SequenceLength, Features)
A suggestion is to use reshape layers:
model.add(Reshape((img_sequence_length,100*120*3)))
#of course the last dimension may be different if you don't use `padding='same'` in the convolutions or if you use pooling.

Why a CNN learns different feature maps

I understand (and please correct me if my understanding is wrong) that the primary purpose of a CNN is to reduce the number of parameters from what you would need if you were to use a fully connected NN. And CNN achieves this by extracting "features" of images.
CNN can do this because in a natural image, there are small features such as lines and elementary curves that may occur in an "invariant" fashion, and constitute the image much like elementary building blocks.
My question is: when we create a layer of feature maps, say 5 of them, by sliding a window of size, say, 5x5 over an image of, say, 100x100 pixels, these feature maps are initially random weight matrices whose weights are progressively adjusted by gradient descent, right? But then, if we obtain these feature maps using exactly the same sized windows, sliding in exactly the same way (same starting point and same stride), over exactly the same image, how can these maps learn different features of the image? Won't they all come out the same, say, a line or a curve?
Is it due to the different initial values of the weight matrices? (I.e. some weight matrices are more receptive to learning a certain particular feature than others?)
Thanks!! I wrote my 4 questions/opinions and indexed them for ease of addressing them separately!

Plotting Hidden Weights

I've had an interest in neural networks for a while now and have just started following the deep learning tutorials. I have what I hope is a relatively straightforward question that I am hoping someone may answer.
In the multilayer perceptron tutorial, I am interested in seeing the state of the network at different layers (something similar to what is seen in this paper: http://www.iro.umontreal.ca/~lisa/publications2/index.php/publications/show/247 ). For instance, I am able to write out the weights of the hidden layer using:
W_open = open('mlp_w_pickle.pkl', 'wb')  # binary mode for pickle protocol -1
cPickle.dump(classifier.hiddenLayer.W.get_value(borrow=True), W_open, -1)
When I plot this using the utils.py tile plotting, I get the following pretty plot [edit: pretty plot removed as I don't have enough rep].
If I wanted to plot the weights at the logRegressionLayer, such that
cPickle.dump(classifier.logRegressionLayer.W.get_value(borrow=True), W_open, -1)
what would I actually have to do? The above doesn't seem to work: it returns a 2D array of shape (500, 10). I understand that the 500 relates to the number of hidden units. The paragraph on the Miscellaneous page:
Plotting the weights is a bit more tricky. We have n_hidden hidden units, each of them corresponding to a column of the weight matrix. A column has the same shape as the visible, where the weight corresponding to the connection with visible unit j is at position j. Therefore, if we reshape every such column, using numpy.reshape, we get a filter image that tells us how this hidden unit is influenced by the input image.
confuses me a little. I am unsure exactly how I would string it together.
Thanks to all - sorry if the question is confusing!
You could plot them just like the weights in the first layer, but they will not necessarily make much sense.
Consider the weights in the first layer of a neural network. If the inputs have size 784 (e.g. MNIST images) and there are 2000 hidden units in the first layer then the first layer weights are a matrix of size 784x2000 (or maybe the transpose depending on how it's implemented). Those weights can be plotted as either 784 patches of size 2000 or, more usually, 2000 patches of size 784. In this latter case each patch can be plotted as a 28x28 image which directly ties back to the original inputs and thus is interpretable.
For your higher-level regression layer, you could plot 10 tiles, each of size 500 (e.g. patches of size 22x23 with some padding to make them rectangular), or 500 patches of size 10. Either might illustrate some patterns that are being found, but it may be difficult to tie those patterns back to the original inputs.
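For what it's worth, here is a minimal sketch of the tiling idea using matplotlib rather than the tutorial's utils.py; it assumes W is the (784, 2000) first-layer weight matrix pulled out with get_value as above:

import matplotlib.pyplot as plt

def plot_filters(W, n_rows=4, n_cols=8, img_shape=(28, 28)):
    # Plot the first n_rows * n_cols hidden units as image patches.
    fig, axes = plt.subplots(n_rows, n_cols, figsize=(n_cols, n_rows))
    for i, ax in enumerate(axes.flat):
        # Each column of W is one hidden unit's weights over all pixels.
        ax.imshow(W[:, i].reshape(img_shape), cmap='gray')
        ax.axis('off')
    plt.show()

# plot_filters(classifier.hiddenLayer.W.get_value(borrow=True))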
