Understanding keras model.summary() - python-3.x

I am trying to understand model.summary() in Keras. I have the following code:
model = Sequential([
    Dense(3, activation='relu', input_shape=(6,)),
    Dense(3, activation='relu'),
    Dense(1),
])
model.compile(optimizer='adam',
              loss='mean_squared_error',
              metrics=['mae', 'mape', 'mse', 'cosine'])
And when I run print(model.summary()) I get this output:
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense_16 (Dense)             (None, 3)                 21
_________________________________________________________________
dense_17 (Dense)             (None, 3)                 12
_________________________________________________________________
dense_18 (Dense)             (None, 1)                 4
=================================================================
Total params: 37
Trainable params: 37
Non-trainable params: 0
_________________________________________________________________
None
I cannot understand what dense_16, dense_17, and dense_18 mean with respect to the layers I defined in my model.

Those are just the names of the layers, autogenerated by Keras. To name layers manually, pass the keyword argument name='my_custom_name' to each layer that you want to name. Note that layer names must be unique inside a model.
Layer names are useful for debugging and for getting specific layers in code, for example using model.get_layer(layer_name).
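For example, a minimal sketch of naming your model's layers (the names hidden_1, hidden_2, and output are illustrative):
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense

model = Sequential([
    Dense(3, activation='relu', input_shape=(6,), name='hidden_1'),
    Dense(3, activation='relu', name='hidden_2'),
    Dense(1, name='output'),
])
print(model.get_layer('hidden_2').name)  # fetch a layer by its name -> hidden_2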

These are just the names of your layers. If you do not explicitly specify layer names, they are named and numbered automatically.
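The numbering starts at dense_16 rather than dense or dense_1 because the counter is global to the Keras session: every Dense layer created since the session started bumps it. A quick sketch, assuming TF 2.x, to reset the counters:
from tensorflow.keras import backend as K

K.clear_session()  # resets the automatic layer-name counters
# rebuilding the same model now yields dense, dense_1, dense_2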

Related

Keras-rl ValueError: "Model has more than one output. DQN expects a model that has a single output"

Is there any way to get around this error? I have a model with a 15x15 input grid, which leads to two outputs. Each output has 15 possible values, which are x or y coordinates. I did this because it is significantly simpler than having 225 separate outputs for every location on the grid.
The problem is that when I try to train the model using this code:
def build_agent(model, actions):
    policy = BoltzmannQPolicy()
    memory = SequentialMemory(limit=100000, window_length=1)
    dqn = DQNAgent(model=model, memory=memory, policy=policy, nb_actions=actions,
                   nb_steps_warmup=100, target_model_update=1e-2)
    return dqn

dqn = build_agent(model, np.array([15, 15]))
dqn.compile(Adam(learning_rate=0.01), metrics=['mae'])
dqn.fit(env, nb_steps=10000, action_repetition=1, visualize=False, verbose=1,
        nb_max_episode_steps=10000)
plt.show()
I get the error: "Model has more than one output. DQN expects a model that has a single output".
The model summary is below so you can see there are 2 output layers.
Model: "model_1"
__________________________________________________________________________________________________
Layer (type)             Output Shape          Param #   Connected to
==================================================================================================
input_2 (InputLayer)     [(None, 1, 15, 15)]   0         []
conv2d_2 (Conv2D)        (None, 12, 13, 13)    120       ['input_2[0][0]']
conv2d_3 (Conv2D)        (None, 10, 11, 3)     354       ['conv2d_2[0][0]']
flatten_1 (Flatten)      (None, 330)           0         ['conv2d_3[0][0]']
dropout_1 (Dropout)      (None, 330)           0         ['flatten_1[0][0]']
dense_2 (Dense)          (None, 15)            4965      ['dropout_1[0][0]']
dense_3 (Dense)          (None, 15)            4965      ['dropout_1[0][0]']
==================================================================================================
Total params: 10,404
Trainable params: 10,404
Non-trainable params: 0
__________________________________________________________________________________________________
Standard Keras allows a model with multiple outputs via the functional API, but from the error message I assume that feature is simply not supported by Keras-rl? If that's true, is there any way to get around this issue?
The solution was to use a single output of 225. This didn't work great, but it was the best I could find. Two separate outputs will not work with keras-rl, so this was all I could think of. Another possibility would be a different library such as Stable Baselines, but that would be completely different from the already-built code.
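A sketch of that workaround (the tensor name x and the decoding step are illustrative, not from the original code): collapse the two 15-way heads into a single 225-way head and recover the grid coordinates afterwards with divmod.
# one 225-way output head (one Q-value per grid cell) instead of two 15-way heads
out = Dense(15 * 15, activation='linear')(x)

# decoding an action index chosen by the agent back into grid coordinates
action = 112                   # example index in [0, 224]
row, col = divmod(action, 15)  # -> (7, 7)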

Keras model.fit() IndexError: list index out of range

I need some help. I keep hitting a strange situation where fitting my Keras model raises an IndexError.
print(np.array(train_x).shape)
print(np.array(train_y).shape)
Returns:
(731, 42)
(731,)
Then:
normalizer = Normalization(input_shape=[42,], axis=None)
normalizer.adapt(train_x[0])
linear_model = Sequential([
    normalizer,
    Dense(units=1)
])
linear_model.summary()
Shows:
Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
normalization_5 (Normalizati (None, 42)                3
_________________________________________________________________
dense_1 (Dense)              (None, 1)                 43
=================================================================
Total params: 46
Trainable params: 43
Non-trainable params: 3
_________________________________________________________________
Then:
linear_model.compile(
    optimizer=tf.optimizers.Adam(learning_rate=0.1),
    loss='mean_absolute_error')
linear_model.fit(
    train_x,
    train_y,
    epochs=100)
Which results in an IndexError: list index out of range. It looks like my inputs are in the right shape. Any idea what could be causing this?
train_x and train_y needed to be NumPy arrays, not Python lists.
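A sketch of the fix, converting the lists before calling fit():
import numpy as np

train_x = np.array(train_x)  # shape (731, 42)
train_y = np.array(train_y)  # shape (731,)
linear_model.fit(train_x, train_y, epochs=100)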

model.summary() and plot_model() showing nothing from the built model in tensorflow.keras

I am testing something which includes building a fully connected network dynamically. The idea is to build the number of layers and their neurons from a given list, and the dummy code is:
neurons = [10, 20, 30]  # first Dense has 10 neurons, 2nd has 20, and third has 30
inputs = keras.Input(shape=(1024,))
x = Dense(10, activation='relu')(inputs)
for n in neurons:
    x = Dense(n, activation='relu')(x)
out = Dense(1, activation='sigmoid')(x)
model = Model(inputs, out)
model.summary()
keras.utils.plot_model(model, 'model.png')
for layer in model.layers:
    print(layer.name)
To my surprise, it shows nothing. I even compiled the model and ran the functions again, and nothing came out.
model.summary() always shows the number of trainable and non-trainable params, but not the model structure and layer names. Why is this happening? Or is this normal?
About model.summary(): don't mix tf 2.x and standalone keras at the same time. If I run your model in tf 2.x, I get the expected results.
from tensorflow.keras.layers import *
from tensorflow.keras import Model
from tensorflow import keras
# your code ...
model.summary()
Model: "model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         [(None, 1024)]            0
_________________________________________________________________
dense (Dense)                (None, 10)                10250
_________________________________________________________________
dense_1 (Dense)              (None, 10)                110
_________________________________________________________________
dense_2 (Dense)              (None, 20)                220
_________________________________________________________________
dense_3 (Dense)              (None, 30)                630
_________________________________________________________________
dense_4 (Dense)              (None, 1)                 31
=================================================================
Total params: 11,241
Trainable params: 11,241
Non-trainable params: 0
_________________________________________________________________
About plotting the model, there are a couple of options you can use while plotting your Keras model. Here is one example:
keras.utils.plot_model(model, show_dtype=True,
                       show_layer_names=True, show_shapes=True,
                       to_file='model.png')

How to see keras.engine.sequential.Sequential

I am new to Keras and deep learning and was working with MNIST on Keras. When I created a model using
model = models.Sequential()
model.add(layers.Dense(512, activation='relu', input_shape=(28*28,)))
model.add(layers.Dense(32, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))
and then I printed it
print(model)
output is
<keras.engine.sequential.Sequential at 0x7f3d554f6710>
My question is: is there any way to see a more descriptive view of the model, so that when I print it I can see that I have 3 hidden layers, with the first hidden layer having 512 hidden units and 784 input units, the 2nd hidden layer having 512 input units and 32 hidden units, and so on?
You can also try plot_model()
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(512, activation='relu', input_shape=(28*28,)))
model.add(tf.keras.layers.Dense(32, activation='relu'))
model.add(tf.keras.layers.Dense(10, activation='softmax'))
model.summary()
from keras.utils.vis_utils import plot_model
plot_model(model, show_shapes=True, show_layer_names=True)
model.summary() will print the entire model for you.
model = Sequential()
model.add(Dense(512, activation='relu', input_shape=(28*28,)))
model.add(Dense(32, activation='relu'))
model.add(Dense(10, activation='softmax'))
model.summary()
Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense (Dense)                (None, 512)               401920
_________________________________________________________________
dense_1 (Dense)              (None, 32)                16416
_________________________________________________________________
dense_2 (Dense)              (None, 10)                330
=================================================================
Total params: 418,666
Trainable params: 418,666
Non-trainable params: 0
_________________________________________________________________
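Each Param # in that summary is (input units + 1 for the bias) * output units: (784 + 1) * 512 = 401,920 for the first layer, (512 + 1) * 32 = 16,416 for the second, and (32 + 1) * 10 = 330 for the output layer.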

How is the number of parameters associated with the BatchNormalization layer 2048?

I have the following code.
x = keras.layers.Input(batch_shape=(None, 4096))
hidden = keras.layers.Dense(512, activation='relu')(x)
hidden = keras.layers.BatchNormalization()(hidden)
hidden = keras.layers.Dropout(0.5)(hidden)
predictions = keras.layers.Dense(80, activation='sigmoid')(hidden)
mlp_model = keras.models.Model(input=[x], output=[predictions])
mlp_model.summary()
And this is the model summary:
____________________________________________________________________________________________________
Layer (type)                     Output Shape    Param #     Connected to
====================================================================================================
input_3 (InputLayer)             (None, 4096)    0
____________________________________________________________________________________________________
dense_1 (Dense)                  (None, 512)     2097664     input_3[0][0]
____________________________________________________________________________________________________
batchnormalization_1 (BatchNorma (None, 512)     2048        dense_1[0][0]
____________________________________________________________________________________________________
dropout_1 (Dropout)              (None, 512)     0           batchnormalization_1[0][0]
____________________________________________________________________________________________________
dense_2 (Dense)                  (None, 80)      41040       dropout_1[0][0]
====================================================================================================
Total params: 2,140,752
Trainable params: 2,139,728
Non-trainable params: 1,024
____________________________________________________________________________________________________
The size of the input to the BatchNormalization (BN) layer is 512. According to the Keras documentation, the shape of the output of a BN layer is the same as its input, which here is 512.
Then how is the number of parameters associated with the BN layer 2048?
These 2048 parameters are in fact [gamma weights, beta weights, moving_mean (non-trainable), moving_variance (non-trainable)], each having 512 elements (the size of the input layer).
The batch normalization in Keras implements this paper.
As you can read there, in order to make batch normalization work during training, they need to keep track of the distribution of each normalized dimension. To do so, since you are in mode=0 by default, they compute 4 parameters per feature of the previous layer. Those parameters make sure that you properly propagate and backpropagate the information.
So 4 * 512 = 2048, which should answer your question.
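A quick sketch to verify the breakdown, using TF 2.x imports (the question itself uses the older standalone Keras API):
from tensorflow.keras.layers import BatchNormalization

bn = BatchNormalization()
bn.build((None, 512))  # create the layer's variables for 512 features

for w in bn.weights:
    print(w.name, w.shape)
# gamma (512,) and beta (512,) are trainable: 2 * 512 = 1024
# moving_mean (512,) and moving_variance (512,) are not: 2 * 512 = 1024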
