Difference between DepthwiseConv2D and SeparableConv2D - keras

From the documentation, I know SeparableConv2D is a combination of a depthwise and a pointwise operation. However, when I call

SeparableConv2D(100, 5, input_shape=(416, 416, 10))
# total parameters: 1350

versus

model.add(DepthwiseConv2D(5, input_shape=(416, 416, 10)))
model.add(Conv2D(100, 1))
# total parameters: 1360

the counts differ by 10. Does this mean SeparableConv2D does not use a bias in the depthwise phase by default?
Thanks.

Correct. Checking the source code (I did this for tf.keras, but I suppose it is the same for standalone Keras) shows that in SeparableConv2D the depthwise and pointwise steps use only filters, no biases, and a single bias vector is added at the end. The second version, on the other hand, has biases for both DepthwiseConv2D and Conv2D.
Given that convolution is a linear operation and there is no non-linearity in between the depthwise and the 1x1 convolution, I would suppose that having two biases is unnecessary in this case, similar to how you don't use biases in a layer that is followed by batch normalization, for example. As such, the extra 10 parameters wouldn't actually improve the model (nor should they really hurt either).
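For reference, here is a minimal sketch (using tf.keras, as above) that reproduces both counts; the comments break down where the extra 10 bias terms come from:

from tensorflow.keras import layers, models

# Variant 1: SeparableConv2D -- filters only in the two convolution steps,
# plus a single bias vector of size 100 at the end.
m1 = models.Sequential([
    layers.SeparableConv2D(100, 5, input_shape=(416, 416, 10)),
])
# depthwise 5*5*10 = 250, pointwise 10*100 = 1000, bias 100 -> 1350
print(m1.count_params())

# Variant 2: DepthwiseConv2D + Conv2D -- each layer carries its own bias.
m2 = models.Sequential([
    layers.DepthwiseConv2D(5, input_shape=(416, 416, 10)),
    layers.Conv2D(100, 1),
])
# depthwise 250 + 10 bias = 260, pointwise 1000 + 100 bias = 1100 -> 1360
print(m2.count_params())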

Related

Basic question in trainable parameters in CNN convolution

I am learning how trainable parameters are calculated for CNNs in Keras. I just wonder why we count the filter values as trainable parameters, since the convolution itself is a fixed calculation (i.e. matrix multiplication) and it seems there is nothing to update (train). I know there is a formula, but why do we treat these as trainable parameters? For example: in the first Conv2D, with an image of size 10x10x1, a 3x3 filter, and 1 filter, Keras reports 10 parameters (3x3+1).
In your convolution layer there is a 3x3(x1) kernel (the x1 because your image has only a single channel). The values in that kernel are learned parameters. In addition to the kernel itself, the convolution layer usually has an (optional) learnable bias parameter; that's the +1 in your formula. So in your setup, a single 3x3 kernel plus its bias gives 3x3+1 = 10 learned parameters; if you asked for N filters, each with its own bias, it would be N*(3x3+1).
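If it helps, here is a minimal sketch matching the numbers in the question (a 10x10x1 input and a single 3x3 filter), which you can run to see the count:

from tensorflow.keras import layers, models

model = models.Sequential([
    # one learned 3x3 kernel (9 weights) plus one learned bias = 10 parameters
    layers.Conv2D(1, (3, 3), input_shape=(10, 10, 1)),
])
model.summary()  # the Conv2D layer reports Param # = 10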

TensorFlow 2.4.0 - Parameters associated with BatchNorm and Activation

I am printing the summary of a tensorflow.keras.Model instance. Its type is tensorflow.python.keras.engine.functional.Functional.
This model has layers with associated activations and batch normalization. When I print the list of parameters, I see
weights
bias
4 items co-dimensional with the bias
These four items are (I guess) the batch normalization and activations.
My question is: why do we have parameters associated with batch normalization and activations? And what could be the other two items?
My aim is to port this Keras model to a PyTorch counterpart, so I need to know the order of the parameters and what they represent.
There are no parameters associated with activations; those are simply element-wise nonlinear functions, so no matter how many activations you have, they don't add to the parameter count. However, your guess about batch normalization is right: each BatchNorm layer has 2 trainable parameters per channel (gamma and beta) and also tracks 2 non-trainable statistics (the moving mean and moving variance). Those four per-channel vectors are exactly the four items you see co-dimensional with the bias, so the BatchNorm layers do add additional parameters to your network.
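When porting to PyTorch, rather than relying on positional order alone you can print the variable names directly; a minimal sketch with a toy Dense/BatchNorm model (the layer sizes are arbitrary):

from tensorflow.keras import layers, models

# Toy model, only for illustration: Dense -> BatchNorm -> Activation
model = models.Sequential([
    layers.Dense(8, input_shape=(4,)),
    layers.BatchNormalization(),
    layers.Activation("relu"),
])

# Print every variable with its name and shape. The Activation layer
# contributes nothing; BatchNormalization contributes four vectors:
# gamma, beta (trainable) and moving_mean, moving_variance (non-trainable).
for w in model.weights:
    print(w.name, w.shape)

On the PyTorch side, gamma/beta correspond to the BatchNorm module's weight/bias, and moving_mean/moving_variance to its running_mean/running_var buffers.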

Using LSTM for large text

I have a dataset for detecting fake news that I got from Kaggle (https://www.kaggle.com/c/fake-news/data).
I want to use an LSTM for the classification.
The mean length of a single article is about 750 words. I have tried removing punctuation, stop words, and numbers, but preprocessing the text also takes a very long time.
I'd like a method to feed large texts into the LSTM using Keras. What should I do to reduce computation time without losing much accuracy?
There are some things you could try to speed things up:
1. Use the cuDNN version of LSTM
It is usually faster. Check the available layers: keras.layers.CuDNNLSTM is what you are after.
2. Use Conv1D to create features
You can use a 1-dimensional convolution with kernel_size specifying how many words should be taken into account and strides specifying the jump of the moving window. With kernel_size=3, strides=3, and padding="same", it would reduce the sequence length by a factor of three (a rough sketch follows at the end of this list).
You may stack more convolutional layers.
On top of that you can still employ LSTM normally.
3. Drop the LSTM altogether
You may go with 1D convolutions and pooling for classification; RNNs are not the only way.
On the upside, you will not encounter vanishing gradients (which could also be mitigated a little by a bidirectional LSTM).
On the downside, you will lose strict dependence between words, though that shouldn't be much of a problem for binary classification (which I suppose is your goal).
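As promised, a rough sketch of option 2 with an LSTM on top (the vocabulary size, embedding dimension, and layer sizes are placeholders, not values taken from your dataset):

from tensorflow.keras import layers, models

vocab_size, embed_dim, max_len = 20000, 128, 750  # placeholder values

model = models.Sequential([
    layers.Embedding(vocab_size, embed_dim, input_length=max_len),
    # kernel_size=3 with strides=3: each output step summarizes 3 words,
    # so the LSTM sees a sequence roughly 3x shorter (750 -> 250)
    layers.Conv1D(64, kernel_size=3, strides=3, padding="same", activation="relu"),
    layers.LSTM(64),
    layers.Dense(1, activation="sigmoid"),  # binary real/fake output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])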

How should I change the model if accuracy is very low?

I am new to deep learning and I want to build an image classifier using a CNN (Keras). I have built a model with 2 convolution layers (filters = 32, kernel = 3x3) followed by a MaxPooling layer (2x2), this repeated 2 times, and finally 2 fully connected layers. I am getting an accuracy of 50%. My question is: how do we choose the model to begin with? For example, how do we decide that there should be 2 convolution layers followed by a MaxPooling layer rather than 1 convolution and 1 MaxPooling layer? Also, how do we choose the number of filters in each convolution layer and the kernel size?
If my model is not working, how do I decide what changes to make to it?
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(280, 280, 3), activation='relu'))
model.add(Conv2D(32, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
#model.add(Dropout(0.25))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
#model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(5, activation='softmax'))
I am getting an accuracy of 50% after 5 epochs. What changes should I make to my model?
Let us start with the more straightforward part: the number of input and output layers and the number of their neurons. Every network has a single input layer and a single output layer. The number of neurons in the input layer equals the number of input variables in the data being processed. The number of neurons in the output layer equals the number of outputs associated with each input.
The challenge is knowing the number of hidden layers and their neurons.
The answer is that you cannot analytically calculate the number of layers or the number of nodes per layer that an artificial neural network needs to address a specific real-world predictive modeling problem.
The number of layers and the number of nodes in each layer are model hyperparameters that you must specify and tune.
You must discover the answer using a robust test harness and controlled experiments. Regardless of the heuristics you might encounter, all answers come back to the need for careful experimentation to see what works best for your specific dataset.
Likewise, the filter size is another such hyperparameter you must specify before training your network.
For an image recognition problem, if you think that a large region of pixels is necessary for the network to recognize the object, use large filters (such as 11x11 or 9x9). If you think what differentiates objects are small, local features, use small filters (3x3 or 5x5).
These are some tips, but there are no hard rules.
There are many tricks to increase the accuracy of a deep learning model; kindly refer to this link: Improve deep learning model performance.
Hope this helps.
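To make the "careful experimentation" above concrete, here is a rough sketch of a small grid search over filter counts and kernel sizes; build_model and the random dummy data are only placeholders standing in for your own architecture and data pipeline:

import numpy as np
from tensorflow.keras import layers, models, utils

# Dummy data standing in for your real images and labels.
x = np.random.rand(40, 280, 280, 3).astype('float32')
y = utils.to_categorical(np.random.randint(0, 5, size=40), num_classes=5)

def build_model(filters, kernel_size):
    # Same overall shape as the model in the question, with the filter count
    # and kernel size exposed as hyperparameters.
    model = models.Sequential([
        layers.Conv2D(filters, kernel_size, activation='relu', input_shape=(280, 280, 3)),
        layers.Conv2D(filters, kernel_size, activation='relu'),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(filters * 2, kernel_size, activation='relu'),
        layers.Conv2D(filters * 2, kernel_size, activation='relu'),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(256, activation='relu'),
        layers.Dropout(0.5),
        layers.Dense(5, activation='softmax'),
    ])
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
    return model

for filters in (16, 32):
    for kernel_size in (3, 5):
        model = build_model(filters, kernel_size)
        history = model.fit(x, y, validation_split=0.25, epochs=2, verbose=0)
        print(filters, kernel_size, max(history.history['val_accuracy']))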

Using Dropout with Keras and LSTM/GRU cell

In Keras you can specify a dropout layer like this:
model.add(Dropout(0.5))
But with a GRU cell you can specify the dropout as a parameter in the constructor:
model.add(GRU(units=512,
return_sequences=True,
dropout=0.5,
input_shape=(None, features_size,)))
What's the difference? Is one preferable to the other?
In the Keras documentation, dropout is added as a separate layer (see "Sequence classification with LSTM").
The recurrent layers perform the same repeated operation over and over.
In each timestep, it takes two inputs:
Your inputs (a step of your sequence)
Internal inputs (can be states and the output of the previous step, for instance)
Note that the dimensions of the input and output may not match, which means that "your input" dimensions will not match "the recurrent input (previous step/states)" dimensions.
Then in every recurrent timestep there are two operations with two different kernels:
One kernel is applied to "your inputs" to process and transform them into a compatible dimension.
Another (called the recurrent kernel by Keras) is applied to the inputs of the previous step.
Because of this, Keras also uses two dropout operations in the recurrent layers (dropouts that will be applied to every step):
A dropout for the first conversion of your inputs
A dropout for the application of the recurrent kernel
So, in fact there are two dropout parameters in RNN layers:
dropout, applied to the first operation on the inputs
recurrent_dropout, applied to the other operation on the recurrent inputs (previous output and/or states)
You can see this implemented in GRUCell and LSTMCell, for instance, in the source code.
What is correct?
This is open to creativity.
You can use a Dropout(...) layer; it's not "wrong", but it will possibly drop "timesteps" too (unless you set noise_shape properly or use SpatialDropout1D, which at the time of writing is not yet documented).
Maybe you want that, maybe you don't. If you use the parameters in the recurrent layer, you will be applying dropouts only to the other dimensions, without dropping a single step. This seems healthy for recurrent layers, unless you want your network to learn how to deal with sequences containing gaps (that last sentence is a supposition).
Also, with the dropout parameters you really drop parts of the recurrent computation, since the operations are dropped at every step, while using a separate layer lets your RNN perform its internal operations without dropout, because the dropout then affects only the final output.
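To put the two options side by side, a minimal sketch (the unit counts and dropout rates are arbitrary):

from tensorflow.keras import layers, models

features_size = 64  # arbitrary

# Option 1: dropout built into the recurrent layer, applied at every timestep.
model_a = models.Sequential([
    layers.GRU(512, return_sequences=True,
               dropout=0.5,            # applied to the input transformation
               recurrent_dropout=0.2,  # applied to the recurrent transformation
               input_shape=(None, features_size)),
])

# Option 2: dropout as a separate layer, applied only to the GRU's output.
# SpatialDropout1D drops whole feature channels (the same channels at every
# timestep) instead of independent elements; a plain Dropout layer with a
# suitable noise_shape would behave similarly.
model_b = models.Sequential([
    layers.GRU(512, return_sequences=True, input_shape=(None, features_size)),
    layers.SpatialDropout1D(0.5),
])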
