Structure a Keras TensorBoard graph

When I create a simple Keras model

from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(10, activation='tanh', input_dim=1))
model.add(Dense(1, activation='linear'))
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mean_squared_error'])

and attach a TensorBoard callback

from keras.callbacks import TensorBoard

tensorboard = TensorBoard(log_dir='c:/temp/tensorboard/run1', histogram_freq=1, write_graph=True, write_images=False)
model.fit(x, y, epochs=1000, batch_size=1, callbacks=[tensorboard])
The graph that TensorBoard renders (screenshot not reproduced here) is a complete mess.
Is there anything I can do to make the graph output look more structured?
How can I create histograms of weights with Keras and TensorBoard?

You can group layers in your model under a common name scope using with K.name_scope('scope_name'):, where K is the Keras backend (from keras import backend as K).
Example:

from keras import backend as K

with K.name_scope('CustomLayer'):
    # add the first layer in the new scope
    x = GlobalAveragePooling2D()(x)
    # add a second, fully connected layer
    x = Dense(1024, activation='relu')(x)
Thanks to
https://github.com/fchollet/keras/pull/4233#issuecomment-316954784
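Applied to the model from the question, a minimal sketch using the functional API might look like this (the scope names and structure are my illustration, not from the original post):

from keras import backend as K
from keras.layers import Input, Dense
from keras.models import Model

inputs = Input(shape=(1,))
with K.name_scope('hidden'):
    h = Dense(10, activation='tanh')(inputs)
with K.name_scope('output'):
    out = Dense(1, activation='linear')(h)

model = Model(inputs=inputs, outputs=out)
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mean_squared_error'])

Each name scope then shows up as a collapsible box in the TensorBoard graph. As for the weight histograms asked about above: the histogram_freq=1 argument already passed to the TensorBoard callback enables them; they appear under the Histograms tab.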

Related

What to expect from model.predict in Keras?

I am new to Keras and am writing my first program. I want to understand what model.predict should return. Consider the simple model below.

from tensorflow import keras

model = keras.Sequential()
model.add(keras.layers.Dense(12, input_dim=232, activation='relu'))
model.add(keras.layers.Dense(232, activation='relu'))
model.add(keras.layers.Dense(1, activation='sigmoid'))
# compile the keras model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# fit the keras model on the dataset
model.fit(vSignal, vLabels, epochs=15, batch_size=100)
# evaluate the keras model
_, accuracy = model.evaluate(vSignal, vLabels)
print('Accuracy: %.2f' % (accuracy * 100))
pred = model.predict(vSignalT)
Suppose we train the model with vSignal and vLabels as shown above, and model.evaluate then reports 100% accuracy. If we now pass the same data vSignal to model.predict, should we get vLabels back?
pred = model.predict(vSignalT) returns a NumPy array of predictions; each row holds the model's prediction for the corresponding input sample. Since the output layer uses a sigmoid, these are probabilities between 0 and 1, not hard 0/1 labels. For more information, see the Keras documentation for predict.
Also, save the return value of the fit function:
hist = model.fit(vSignal, vLabels, epochs=15, batch_size=100)
and then inspect the training history:
hist.history["accuracy"]
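To turn those probabilities into the 0/1 labels the question asks about, a common approach (my addition, not part of the original answer) is to threshold at 0.5:

import numpy as np

pred = model.predict(vSignalT)      # shape (n_samples, 1), values in [0, 1]
labels = (pred > 0.5).astype(int)   # hard 0/1 labels from sigmoid probabilities

If the model really fits the training data perfectly, thresholding the predictions for vSignal this way should reproduce vLabels.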

How to use TensorBoard with TensorFlow 2.0?

The following seems to be the new way of creating a FileWriter, but I am not sure how to call add_graph or do whatever else is needed to get the model graph to show up in TensorBoard.
train_writer = tf.summary.create_file_writer('logs/1/train')
You can do it by using tf.keras.callbacks.TensorBoard('/path/to/log/dir') in the following way.
1. Create a build-model function:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

def build_model():
    model = keras.Sequential([
        layers.Dense(64, activation='relu',
                     input_shape=[len(train_dataset[col_to_norm].keys())]),
        layers.Dense(64, activation='relu'),
        layers.Dense(1)
    ])
    optimizer = tf.keras.optimizers.RMSprop(0.001)
    model.compile(loss='mse',
                  optimizer=optimizer,
                  metrics=['mae', 'mse'])
    return model

2. Assign it to a variable:

model = build_model()

3. Fit the model, passing the TensorBoard callback:

model.fit(
    normalized_train_data, train_labels,
    epochs=EPOCHS, validation_split=0.2, verbose=0,
    callbacks=[tf.keras.callbacks.TensorBoard('logs/1/train')])
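If you specifically want the FileWriter-style API from the question, the sketch below is one way to export the graph in TF 2.x via tf.function tracing (the dummy input shape is an assumption for illustration; model is the one built above):

import tensorflow as tf

writer = tf.summary.create_file_writer('logs/1/train')

@tf.function
def forward(x):
    return model(x)

tf.summary.trace_on(graph=True)
forward(tf.zeros([1, 10]))  # dummy input; replace 10 with your feature count
with writer.as_default():
    tf.summary.trace_export(name='model_graph', step=0)

Either way, start TensorBoard with tensorboard --logdir logs/1/train and open the Graphs tab.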

Find Most Important Input from a Neural Network

I trained a neural network with 37 inputs; it reaches around 85% accuracy. Is it possible to find out which input has the most effect? I tried the code below, but I cannot figure out how to identify the most important input from it.
weights = model.layers[0].get_weights()[0]  # first-layer weights, shape (n_inputs, n_units)
biases = model.layers[0].get_weights()[1]   # first-layer biases, shape (n_units,)
One possible solution is to wrap your model with keras.wrappers.scikit_learn.KerasClassifier and then use recursive feature elimination (RFE) from scikit-learn:
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.feature_selection import RFE
import matplotlib.pyplot as plt

def create_model():
    # create model
    model = Sequential()
    model.add(Dense(512, activation='relu'))
    model.add(Dense(512, activation='relu'))
    model.add(Dense(10, activation='softmax'))
    # compile model
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

model = KerasClassifier(build_fn=create_model, epochs=100, batch_size=128, verbose=0)
rfe = RFE(estimator=model, n_features_to_select=1, step=1)
rfe.fit(X, y)

# this example ranks the pixels of sklearn's digits dataset; adapt the reshape to your data
ranking = rfe.ranking_.reshape(digits.images[0].shape)

# plot pixel ranking
plt.matshow(ranking, cmap=plt.cm.Blues)
plt.colorbar()
plt.title("Ranking of pixels with RFE")
plt.show()
If you need to visualize weights see here.
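Alternatively, starting from the first-layer weights the question already extracts, a rough heuristic (my suggestion, not part of the original answer) is to rank inputs by mean absolute weight:

import numpy as np

weights = model.layers[0].get_weights()[0]  # shape (n_inputs, n_units)
importance = np.abs(weights).mean(axis=1)   # one score per input feature
print("Inputs ranked by first-layer weight magnitude:", np.argsort(importance)[::-1])

Note this ignores interactions in deeper layers, so treat it as a quick sanity check rather than a definitive ranking.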

Is there any method to plot vector (matrix) values after each NN layer

I modified an existing activation function and am using it in a convolutional layer of my neural network. I would like to know how it performs compared to the original activation function. Is there a method/function to plot the results (the matrix values) after each neural network layer, so that I can tune my activation function according to those values for better results?
model = Sequential()
e = Embedding(vocab_size, 100, weights=[embedding_matrix], input_length=max_length, trainable=False)
model.add(e)
model.add(Conv1D(64,kernel_size,padding='valid',activation=newactivation,strides=1))
model.add(MaxPooling1D(pool_size=pool_size))
model.add(Conv1D(256,kernel_size,padding='valid',activation=newactivation,strides=1))
model.add(MaxPooling1D(pool_size=pool_size))
model.add(Bidirectional(GRU(gru_output_size, dropout=0.2, recurrent_dropout=0.2, return_sequences=True)))  # return_sequences=True so the stacked LSTM below receives 3D input
model.add(Bidirectional(LSTM(lstm_output_size)))
model.add(Dense(nclass, activation='softmax'))
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
print(model.summary())
model.fit(padded_docs,y_train, epochs=epoch_size, verbose=0)
loss, accuracy = model.evaluate(tpadded_docs, y_test, verbose=0)
I cannot comment yet so I post this as an answer:
Refer to the Keras FAQ: "How can I obtain the output of an intermediate layer?"
It shows how to access the output of each layer. If you use the variant based on a Keras backend function, you can even choose the learning phase when querying outputs (useful if your model contains layers that behave differently during training vs. testing, such as Dropout).
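For example, a minimal sketch of the FAQ's approach applied to the model above (the layer index is my assumption; pick whichever layer you want to inspect):

from keras.models import Model

# helper model whose output is the activations of the first Conv1D layer
intermediate = Model(inputs=model.input,
                     outputs=model.layers[1].output)  # index 1 = first Conv1D above
activations = intermediate.predict(padded_docs)
print(activations.shape)  # matrix values after that layer

You can then plot activations (e.g. with matplotlib histograms) for your custom activation and for the stock one, and compare.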

ValueError when Fine-tuning Inception_v3 in Keras

I am trying to fine-tune a pre-trained InceptionV3 in Keras for a multi-label (17 labels) prediction problem.
Here's the code:
from keras.applications.inception_v3 import InceptionV3
from keras.layers import Dense
from keras.models import Model
from keras.optimizers import SGD

# create the base pre-trained model
base_model = InceptionV3(weights='imagenet', include_top=False)

# add a new top layer
x = base_model.output
predictions = Dense(17, activation='sigmoid')(x)

# this is the model we will train
model = Model(inputs=base_model.input, outputs=predictions)

# we need to recompile the model for these modifications to take effect;
# we use SGD with a low learning rate
model.compile(loss='binary_crossentropy',  # binary is needed here, since categorical_crossentropy L1-normalizes the output before calculating the loss
              optimizer=SGD(lr=0.0001, momentum=0.9))

# fit the model (keep the history so that it may be saved)
history = model.fit(x_train, y_train,
                    batch_size=128,
                    epochs=1,
                    verbose=1,
                    callbacks=callbacks_list,
                    validation_data=(x_valid, y_valid))
But I get the following error message and have trouble deciphering what it is saying:
ValueError: Error when checking target: expected dense_1 to have 4
dimensions, but got array with shape (1024, 17)
It seems to have something to do with my one-hot encoded labels as the target. But how would I produce a 4-dimensional target?
It turns out that the code copied from https://keras.io/applications/ does not run out of the box.
The following post has helped me:
Keras VGG16 fine tuning
The changes I needed to make are the following:
1. Add the input shape to the model definition:
base_model = InceptionV3(weights='imagenet', include_top=False, input_shape=(299, 299, 3))
2. Add a Flatten() layer to flatten the tensor output:
x = base_model.output
x = Flatten()(x)
predictions = Dense(17, activation='sigmoid')(x)
Then the model works for me!
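Putting both fixes together, a minimal sketch of the corrected model head (assuming the default 299x299 RGB input size for InceptionV3):

from keras.applications.inception_v3 import InceptionV3
from keras.layers import Dense, Flatten
from keras.models import Model

base_model = InceptionV3(weights='imagenet', include_top=False,
                         input_shape=(299, 299, 3))
x = Flatten()(base_model.output)  # collapse the 4D convolutional output to 2D
predictions = Dense(17, activation='sigmoid')(x)
model = Model(inputs=base_model.input, outputs=predictions)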
