How to embed 3D input in Keras?

I am trying to make an Embedding layer in Keras.
My input is 3D, with shape (batch, 8, 6), and I want to have an embedding for the last dimension.
So the embedding should work as (batch*8, 6) -> embedding output.
But I don't want to keep this reshaped batch size for the whole learning process, just for the embedding layer.
I think one solution is separating the 8 inputs and applying the embedding to each one.
But then this embedding layer is not the same as one big embedding layer.
Is there any possible solution? Thanks!

The solution is very simple:
input_shape = (8,6)
and pass it through the Embedding layer; you will get exactly what you want.
A complete working example:
from keras.layers import Input, Embedding
from keras.models import Model

ins = Input((8, 6))           # integer indices, shape (batch, 8, 6)
out = Embedding(10, 15)(ins)  # one 15-dim vector per index
model = Model(ins, out)       # output shape: (batch, 8, 6, 15)
model.summary()
Here 10 is the dictionary size (the number of words or similar tokens) and 15 is the embedding size (the dimension of each resulting vector).
Resulting summary:
Model: "model_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) (None, 8, 6) 0
_________________________________________________________________
embedding_1 (Embedding) (None, 8, 6, 15) 150
=================================================================
Total params: 150
Trainable params: 150
Non-trainable params: 0
_________________________________________________________________
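Note that the 150 parameters are exactly the 10 x 15 lookup table. As a quick sanity check (a minimal sketch; the random integer input below is purely illustrative), you can feed a dummy batch through the model built above:
import numpy as np

# 4 samples, each an (8, 6) grid of indices into the size-10 dictionary
x = np.random.randint(0, 10, size=(4, 8, 6))
print(model.predict(x).shape)  # (4, 8, 6, 15): one 15-dim vector per index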

Related

How to visualize input and neurons in Keras embedding layer?

I realized I'm having problems understanding/visualizing the Embedding layer in the form of inputs, edges, and nodes representing neurons and the connections between them. I don't entirely get what the number of inputs and neurons in the layer is.
Consider this toy example used for sentiment analysis, where my vocab size is 18 and the maximum sentence length is 4:
model = Sequential()
model.add(Embedding(input_dim=vocab_size, output_dim=embedding_size,
                    input_length=max_seq_len, name='embedding_lay'))
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))
model.summary()
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_lay (Embedding) (None, 4, 4) 76
_________________________________________________________________
flatten_1 (Flatten) (None, 16) 0
_________________________________________________________________
dense_1 (Dense) (None, 1) 17
=================================================================
Total params: 93
Trainable params: 93
Non-trainable params: 0
I always thought that my embedding comes from 18 inputs (the vocab size) times 4 neurons in the first hidden layer, plus 4 biases, hence 18*4 + 4 = 76, and that the embeddings are the weights between the input and the 4 neurons. Is this correct? If it is, what happens with Flatten so that I arrive at 'dense_1' with my 4x4 flattened to 16, plus 1 (the bias), hence 17 trainable parameters? To be clear, from the matrix-math perspective I more or less understand this, but I just can't map it all onto the very basic neuron-connection (node/edge) schema that is used, for instance, to visualize how a Dense layer works.
Thanks in advance for any help.
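For what it's worth, a Keras Embedding layer stores a single (input_dim, output_dim) lookup matrix and has no bias term, so 76 parameters with output_dim 4 implies input_dim 19 (often len(word_index) + 1, with index 0 reserved), not 18. A minimal sketch to check the arithmetic yourself on the model above:
for layer in model.layers:
    for w in layer.get_weights():
        print(layer.name, w.shape)
# embedding_lay (19, 4)  <- 19 rows of 4 weights, no bias: 76 params
# dense_1 (16, 1)        <- 16 flattened features to 1 unit
# dense_1 (1,)           <- the single bias: 16 + 1 = 17 params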

Keras LSTM Layer ValueError: Dimensions must be equal, but are 17 and 2

I'm working on a basic RNN model for a multiclass task and I'm facing some issues with output dimensions.
These are my input and output shapes:
input.shape = (50000, 2, 5) # (samples, features, feature_len)
output.shape = (50000, 17, 185) # (samples, features, feature_len) <-- one hot encoded
input[0].shape = (2, 5)
output[0].shape = (17, 185)
This is my model, using Keras functional API:
inp = tf.keras.Input(shape=(2, 5,))
x = tf.keras.layers.LSTM(128, input_shape=(2, 5,), return_sequences=True, activation='relu')(inp)
out = tf.keras.layers.Dense(185, activation='softmax')(x)
model = tf.keras.models.Model(inputs=inp, outputs=out)
This is my model.summary():
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 2, 5)] 0
_________________________________________________________________
lstm (LSTM) (None, 2, 128) 68608
_________________________________________________________________
dense (Dense) (None, 2, 185) 23865
=================================================================
Total params: 92,473
Trainable params: 92,473
Non-trainable params: 0
_________________________________________________________________
Then I compile the model and run fit():
model.compile(optimizer='adam',
              loss=tf.nn.softmax_cross_entropy_with_logits,
              metrics='accuracy')
model.fit(x=input, y=output, epochs=5)
And I'm getting a dimension error:
ValueError: Dimensions must be equal, but are 17 and 2 for '{{node Equal}} = Equal[T=DT_INT64, incompatible_shape_error=true](ArgMax, ArgMax_1)' with input shapes: [?,17], [?,2].
The error is clear: the model outputs sequences of length 2 while my target has length 17. Although I understand the issue, I can't find a way of fixing it. Any ideas?
I think the problem is that your model's output shape per sample is not (17, 185), as in "output[0].shape = (17, 185)", but (2, 185), as the "dense (Dense) (None, 2, 185)" line of the summary shows.
You need to change your target shape or change your layer structure.
When you specify return_sequences=True, the LSTM outputs the whole list of encoder outputs, one per input timestep. Hence, I suggest using just the last item of the encoder outputs as the input to your Dense layer; see the example section of this link to the documentation, it may help you.
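If the goal really is to produce 17 output steps from 2 input steps, one common pattern is an encoder-decoder with RepeatVector. This is only a sketch under that assumption, reusing the sizes from the question:
inp = tf.keras.Input(shape=(2, 5))
x = tf.keras.layers.LSTM(128)(inp)                         # encode: (None, 128)
x = tf.keras.layers.RepeatVector(17)(x)                    # (None, 17, 128)
x = tf.keras.layers.LSTM(128, return_sequences=True)(x)   # decode: (None, 17, 128)
out = tf.keras.layers.Dense(185, activation='softmax')(x) # (None, 17, 185)
model = tf.keras.models.Model(inputs=inp, outputs=out)
# softmax is already applied, so use categorical_crossentropy rather than
# a with_logits loss, which expects raw scores
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])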

How to see keras.engine.sequential.Sequential

I am new to Keras and deep learning and was working with MNIST on Keras. When I created a model using
model = models.Sequential()
model.add(layers.Dense(512,activation = 'relu',input_shape=(28*28,)))
model.add(layers.Dense(32,activation ='relu'))
model.add(layers.Dense(10,activation='softmax'))
and then I printed it
print(model)
output is
<keras.engine.sequential.Sequential at 0x7f3d554f6710>
My question is: is there any way to see a better description of the model? Meaning, if I print the model I want to see that I have 3 hidden layers, with the first hidden layer having 512 hidden units and 784 input units, the 2nd hidden layer having 512 input units and 32 hidden units, and so on.
You can also try plot_model()
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(512,activation = 'relu',input_shape=(28*28,)))
model.add(tf.keras.layers.Dense(32,activation ='relu'))
model.add(tf.keras.layers.Dense(10,activation='softmax'))
model.summary()
from keras.utils.vis_utils import plot_model
plot_model(model, show_shapes=True, show_layer_names=True)
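Note that plot_model() renders the graph to an image, so it needs the pydot package and the Graphviz binaries installed; with show_shapes=True each box also lists the layer's input and output shapes, which gives exactly the units-per-layer view you are after.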
model.summary() will print the entire model for you.
model = Sequential()
model.add(Dense(512,activation = 'relu',input_shape=(28*28,)))
model.add(Dense(32,activation ='relu'))
model.add(Dense(10,activation='softmax'))
model.summary()
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense (Dense) (None, 512) 401920
_________________________________________________________________
dense_1 (Dense) (None, 32) 16416
_________________________________________________________________
dense_2 (Dense) (None, 10) 330
=================================================================
Total params: 418,666
Trainable params: 418,666
Non-trainable params: 0
_________________________________________________________________

What is the difference between SeparableConv2D and Conv2D layers?

I didn't find a clear answer to this question online (sorry if it exists).
I would like to understand the differences between the two layers (SeparableConv2D and Conv2D), step by step, using for example an input of shape (3,3,3) (an RGB image).
Running this script, based on Keras with the TensorFlow backend:
import numpy as np
from keras.layers import Conv2D, SeparableConv2D, Input
from keras.models import Model

# One constant value per channel so each channel's contribution is visible
red = np.array([1]*9).reshape((3,3))
green = np.array([100]*9).reshape((3,3))
blue = np.array([10000]*9).reshape((3,3))
img = np.stack([red, green, blue], axis=-1)
img = np.expand_dims(img, axis=0)

inputs = Input((3,3,3))
conv1 = SeparableConv2D(filters=1,
                        strides=1,
                        padding='valid',
                        activation='relu',
                        kernel_size=2,
                        depth_multiplier=1,
                        depthwise_initializer='ones',
                        pointwise_initializer='ones',
                        bias_initializer='zeros')(inputs)
conv2 = Conv2D(filters=1,
               strides=1,
               padding='valid',
               activation='relu',
               kernel_size=2,
               kernel_initializer='ones',
               bias_initializer='zeros')(inputs)

model1 = Model(inputs, conv1)
model2 = Model(inputs, conv2)

print("Model 1 prediction: ")
print(model1.predict(img))
print("Model 2 prediction: ")
print(model2.predict(img))
print("Model 1 summary: ")
model1.summary()
print("Model 2 summary: ")
model2.summary()
I get the following output:
Model 1 prediction:
[[[[40404.]
   [40404.]]

  [[40404.]
   [40404.]]]]
Model 2 prediction:
[[[[40404.]
   [40404.]]

  [[40404.]
   [40404.]]]]
Model 1 summary:
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) (None, 3, 3, 3) 0
_________________________________________________________________
separable_conv2d_1 (Separabl (None, 2, 2, 1) 16
=================================================================
Total params: 16
Trainable params: 16
Non-trainable params: 0
_________________________________________________________________
Model 2 summary:
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) (None, 3, 3, 3) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 2, 2, 1) 13
=================================================================
Total params: 13
Trainable params: 13
Non-trainable params: 0
I understand how Keras computes the Conv2D prediction of model 2 thanks to this post, but can someone explain the SeparableConv2D computation of model 1's prediction, and its number of parameters (16)?
As Keras uses TensorFlow, you can check the difference in TensorFlow's API documentation.
Conv2D is the traditional convolution: you have an image, with or without padding, and a filter that slides over the image with a given stride.
SeparableConv2D, on the other hand, is a variation of the traditional convolution that was proposed to make it faster to compute.
It performs a depthwise spatial convolution followed by a pointwise convolution that mixes the resulting output channels together. MobileNet, for example, uses this operation to compute its convolutions faster.
I could explain both operations here; however, this post has a very good explanation, with images and videos, that I strongly recommend reading.
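For the concrete numbers in the question, the parameter counts work out as follows (my own accounting of the standard formulas, worth double-checking):
# Conv2D: one 2x2 kernel spanning all 3 input channels, plus 1 bias
conv2d_params = 2*2*3*1 + 1                   # = 13

# SeparableConv2D: one 2x2 depthwise kernel per input channel
# (depth_multiplier=1), then a 1x1 pointwise kernel mixing the
# 3 depthwise outputs into 1 filter, plus 1 bias
depthwise = 2*2*3*1                           # = 12
pointwise = 3*1*1                             # = 3
separable_params = depthwise + pointwise + 1  # = 16
With all-ones initializers the two layers also end up summing the same 12 pixels: each 2x2 depthwise window sums to 4, 400, and 40000 for the red, green, and blue channels, and the all-ones pointwise step adds them, 4 + 400 + 40000 = 40404, which is why both models print identical predictions.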

How to model Convolutional recurrent network ( CRNN ) in Keras

I was trying to port a CRNN model to Keras,
but I got stuck while connecting the output of a Conv2D layer to an LSTM layer.
The output from the CNN part has shape (batch_size, 512, 1, width_dash), where the first dimension depends on the batch size and the last on the input width (this model can accept variable-width input).
For example, an input with shape [2, 1, 32, 829] results in an output with shape (2, 512, 1, 208).
Now, as per the PyTorch model, we have to do squeeze(2) followed by permute(2, 0, 1),
which results in a tensor with shape [208, 2, 512].
I was trying to implement this in Keras, but I was not able to, because in Keras we cannot alter the batch_size dimension in a keras.models.Sequential model.
Can someone please guide me on how to port this part of the model to Keras?
Current state of ported CNN layer
You don't need to permute the batch axis in Keras. In a PyTorch model you need to do it because a PyTorch LSTM expects an input of shape (seq_len, batch, input_size), whereas in Keras the LSTM layer expects (batch, seq_len, input_size).
So after defining the CNN and squeezing out axis 2, you just need to permute the last two axes. As a simple example (in 'channels_first' Keras image format):
model = Sequential()
model.add(Conv2D(512, 3, strides=(32, 4), padding='same',
                 input_shape=(1, 32, None)))  # -> (512, 1, None)
model.add(Reshape((512, -1)))                 # squeeze out the size-1 height axis
model.add(Permute((2, 1)))                    # -> (seq_len, 512) for the LSTM
model.add(LSTM(32))
You can verify the shapes with model.summary():
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_4 (Conv2D) (None, 512, 1, None) 5120
_________________________________________________________________
reshape_3 (Reshape) (None, 512, None) 0
_________________________________________________________________
permute_4 (Permute) (None, None, 512) 0
_________________________________________________________________
lstm_3 (LSTM) (None, 32) 69760
=================================================================
Total params: 74,880
Trainable params: 74,880
Non-trainable params: 0
_________________________________________________________________
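Note that the stride of 32 along the height axis collapses the full 32-pixel height to 1 in a single convolution, which is what lets Reshape((512, -1)) play the role of PyTorch's squeeze(2); the width dimension stays None throughout, so the variable-width property of the original CRNN is preserved.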
