When I read the Keras documentation I find a term called the functional API.
What is the meaning of the functional API in Keras?
Could anyone help me to understand the basic and significant terms in Keras?
Thanks
It is a way to create your models. One option is to use a Sequential model (tutorial here).
For example:
from keras.models import Sequential
from keras.layers import Dense, Activation
model = Sequential([
    Dense(32, input_shape=(784,)),
    Activation('relu'),
    Dense(10),
    Activation('softmax'),
])
And you can call it with input features. The second approach is the functional API (tutorial here). There you call each layer on the output of the previous one, which gives you more flexibility when creating your model. For example:
from keras.layers import Input, Dense
from keras.models import Model
# This returns a tensor
inputs = Input(shape=(784,))
# a layer instance is callable on a tensor, and returns a tensor
output_1 = Dense(64, activation='relu')(inputs)
# chain layers by calling each one on the previous output
output_2 = Dense(64, activation='relu')(output_1)
predictions = Dense(10, activation='softmax')(output_2)
# This creates a model that includes
# the Input layer and three Dense layers
model = Model(inputs=inputs, outputs=predictions)
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(data, labels)  # starts training (data and labels are your training arrays)
You could also subclass Model, similar to what Chainer or PyTorch provide to the user, but it's not used that heavily in Keras.
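For completeness, here is a minimal sketch of the subclassing style, assuming a Keras version that supports Model subclassing (2.2+); the layer sizes simply mirror the functional example above:
import keras
from keras.layers import Dense

class MyModel(keras.Model):
    def __init__(self):
        super(MyModel, self).__init__()
        # same stack as the functional example above
        self.dense_1 = Dense(64, activation='relu')
        self.dense_2 = Dense(64, activation='relu')
        self.classifier = Dense(10, activation='softmax')

    def call(self, inputs):
        x = self.dense_1(inputs)
        x = self.dense_2(x)
        return self.classifier(x)

model = MyModel()
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])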
Related
I am using the Keras functional API to build a classifier, and I am using the training flag in the dropout layer to enable dropout when predicting new instances (in order to get an estimate of the uncertainty). To get the expected response one needs to repeat this prediction several times, with Keras randomly dropping units in the dense layer, which is of course computationally expensive. Therefore, I would also like to have the option to not use dropout at the prediction phase, i.e., to use all of the network's links. Does anyone know how I can do this? Below is a sample of what I am doing. I looked to see whether predict has any relevant parameter, but it does not seem to. I could technically train the same model without the training flag at the dropout layer, but I do not want to do this (or rather, I want a cleaner solution, instead of keeping two different models).
from sklearn.datasets import make_circles
from keras.models import Sequential
from keras.utils import to_categorical
from keras.layers import Dense
from keras.layers import Dropout
import numpy as np
import keras
# generate a 2d classification sample dataset
X, y = make_circles(n_samples=100, noise=0.1, random_state=1)
n_train = 30
trainX, testX = X[:n_train, :], X[n_train:, :]
trainy, testy = y[:n_train], y[n_train:]
trainy = to_categorical(trainy)
testy = to_categorical(testy)
inputlayer = keras.layers.Input((2,))
d = keras.layers.Dense(500, activation='relu')(inputlayer)
d1 = keras.layers.Dropout(rate=0.3)(d, training=True)
out = keras.layers.Dense(2, activation='softmax')(d1)
model = keras.Model(inputs=inputlayer, outputs=out)
model.compile(loss='categorical_crossentropy', metrics=['accuracy'], optimizer='adam')
model.fit(x=trainX, y=trainy, validation_data=(testX, testy), epochs=1000, verbose=1)
# prediction on a specific sample
print(model.predict(testX[0:1,:]))
# another prediction on the same sample
print(model.predict(testX[0:1,:]))
Running the above example I get the following output:
[[0.9230819 0.07691813]]
[[0.8222245 0.17777553]]
which is as expected: different class probabilities for the same input, since the dropout layer randomly (de)activates links.
Any suggestions on how I can enable/disable dropout at the prediction phase with the functional API?
Sure, you do not need to set the training flag when building the Dropout layer. After training your model, you define this function:
import keras.backend as K
mc_func = K.function([model.input, K.learning_phase()],
                     [model.output])
Then you call mc_func with your input and flag 1 to enable dropout, or 0 to disable it:
stochastic_pred = mc_func([some_input, 1])
deterministic_pred = mc_func([some_input, 0])
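For the uncertainty estimate you mention, a minimal sketch of repeating the stochastic pass and aggregating the results (the number of passes, 100, is an arbitrary choice):
import numpy as np
# run several stochastic forward passes (dropout enabled) and aggregate
preds = np.array([mc_func([some_input, 1])[0] for _ in range(100)])
mean_pred = preds.mean(axis=0)   # averaged class probabilities
uncertainty = preds.std(axis=0)  # spread as a rough uncertainty measure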
I studied Keras and created my first neural network model, as follows:
from keras.layers import Dense
import keras
from keras import Sequential
from sklearn.metrics import accuracy_score
tr_X, tr_y = getTrainingData()
# NN Architecture
model = Sequential()
model.add(Dense(16, input_dim=tr_X.shape[1]))
model.add(keras.layers.advanced_activations.PReLU())
model.add(Dense(16))
model.add(keras.layers.advanced_activations.PReLU())
model.add(Dense(1, activation='sigmoid'))
# Compile the Model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Fit the Model
model.fit(tr_X, tr_y, epochs=1000, batch_size=200, validation_split=0.2)
# ----- Evaluate the Model (Using UNSEEN data) ------
ts_X, ts_y = getTestingData()
yhat_classes = model.predict_classes(ts_X, verbose=0)[:, 0]
accuracy = accuracy_score(ts_y, yhat_classes)
print(accuracy)
I am not sure about the last portion of my code, i.e., the model evaluation using model.predict_classes(), where new data are loaded via a custom method getTestingData(). My goal is to test the final model on new, UNSEEN data to evaluate its predictions. My question is about this part: am I evaluating the model correctly?
Thank you,
Yes, that is correct. You can use predict or predict_classes to get the predictions on the test data. If you need the loss and metrics directly, you can use the evaluate method by feeding it ts_X and ts_y.
y_pred = model.predict(ts_X)                 # predicted probabilities
loss, accuracy = model.evaluate(ts_X, ts_y)  # loss and metrics on the test set
https://keras.io/models/model/#predict
https://keras.io/models/model/#evaluate
Difference between predict & predict_classes: What is the difference between "predict" and "predict_class" functions in keras?
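To make the difference concrete, a minimal sketch (shapes assume the binary model above):
probs = model.predict(ts_X)            # sigmoid outputs in [0, 1], shape (n, 1)
classes = model.predict_classes(ts_X)  # thresholded 0/1 labels, shape (n, 1)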
I trained a model with Resnet3D and I want to extract the outputs of a layer (the features). I plan to use them with an SVM classifier. How can I extract these features and put them into a numpy array?
Load the weights with Keras:
model = Resnet3DBuilder.build_resnet_18((128, 96, 96, 3), nClass[0])
model.load_weights('drive/app/models/3d_resnet_modelq.hdf5')
Extract a layer's output:
dns = model.layers[-1].output
Now what should I do?
If you just want to visualise the features, in pure Keras you can define a Model with the desired layer as output:
from keras.models import Model
model_cut = Model(inputs=model.inputs, outputs=model.layers[-1].output)
features = model_cut.predict(x) # Assuming you have your images in x
Note that in order for this to work, model must have been compiled at least once.
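From there, feeding the extracted features to an SVM is straightforward; a minimal sketch with scikit-learn (y stands for your labels, which are not defined above):
from sklearn.svm import SVC
features = model_cut.predict(x)                 # layer activations as a numpy array
features = features.reshape(len(features), -1)  # flatten in case the layer output is not 1-D
clf = SVC()
clf.fit(features, y)                            # y: your class labels (assumed available)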
I am trying to use transfer learning (fine-tuning) with InceptionV3: removing the last layer, keeping training off for all the layers, and adding a single dense layer. When I look at the summary again, I do not see my added layer, and I get an exception.
RuntimeError: You tried to call count_params on dense_7, but the
layer isn't built. You can build it manually via:
dense_7.build(batch_input_shape).
from keras import applications
pretrained_model = applications.inception_v3.InceptionV3(weights = "imagenet", include_top=False, input_shape = (299, 299, 3))
from keras.layers import Dense
for layer in pretrained_model.layers:
layer.trainable = False
pretrained_model.layers.pop()
layer = (Dense(2, activation='sigmoid'))
pretrained_model.layers.append(layer)
Looking at the summary again gives the above exception.
pretrained_model.summary()
I wanted to compile and fit the model, but
pretrained_model.compile(optimizer=RMSprop(lr=0.0001),
loss = 'sparse_categorical_crossentropy', metrics = ['acc'])
The above line gives this error:
Could not interpret optimizer identifier:
You are using pop to remove the fully connected layer (Dense) at the end of the network, but this is already accomplished by the argument include_top=False. So you just need to initialize Inception with include_top=False and add the final Dense layer. In addition, since it's InceptionV3, I suggest you add GlobalAveragePooling2D() after the output of InceptionV3 to reduce overfitting. Here is the code:
from keras import applications
from keras.models import Model
from keras.layers import Dense, GlobalAveragePooling2D
pretrained_model = applications.inception_v3.InceptionV3(weights = "imagenet", include_top=False, input_shape = (299, 299, 3))
x = pretrained_model.output
x = GlobalAveragePooling2D()(x)  # highly recommended
layer = Dense(2, activation='sigmoid')(x)
model = Model(inputs=pretrained_model.input, outputs=layer)
for layer in pretrained_model.layers:
layer.trainable = False
model.summary()
This should give you the desired model to fine tune.
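As for the compile error in the question, "Could not interpret optimizer identifier" often means RMSprop was never imported (or that keras and tf.keras were mixed). A minimal sketch, assuming plain Keras throughout and mirroring the question's settings:
from keras.optimizers import RMSprop
model.compile(optimizer=RMSprop(lr=0.0001),
              loss='sparse_categorical_crossentropy',
              metrics=['acc'])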
I'm new to Keras.
My neural network structure is shown here:
[image: neural network structure]
My idea is:
import keras.backend as KBack
import tensorflow as tf
#...some code here
model = Sequential()
hidden_units = 4
layer1 = Dense(
hidden_units,
input_dim=len(InputIndex),
activation='sigmoid'
)
model.add(layer1)
# layer1_bias = layer1.get_weights()[1][0]
layer2 = Dense(
1, activation='sigmoid',
use_bias=False
)
model.add(layer2)
# KBack.bias_add(model.output, layer1_bias[0])
I know this is not working because layer1_bias[0] is not a tensor, but I have no idea how to fix it. Or maybe somebody has another solution.
Thanks.
You get the error because bias_add expects a Tensor and you are passing it a float (the actual value of the bias). Also, be aware that your hidden layer actually has 3 biases (one for each node). If you want to add the bias of the first node to your output layer, this should work:
import keras.backend as K
from keras.layers import Dense, Activation
from keras.models import Sequential
model = Sequential()
layer1 = Dense(3, input_dim=2, activation='sigmoid')
layer2 = Dense(1, activation=None, use_bias=False)
activation = Activation('sigmoid')
model.add(layer1)
model.add(layer2)
K.bias_add(model.output, layer1.bias[0:1]) # slice like this to not lose a dimension
model.add(activation)
print(model.summary())
Note that, to be 'correct' (according to the definition of what a dense layer does), you should add the bias first, then the activation.
Also, your code is not really in line with the picture of your network. In the picture, one single shared bias is added to each of the nodes in the network. You can do this with the functional API. The idea is to disable the use of biases in the hidden layer and the output layers, and to manually add a bias variable that you define yourself and that will be shared by the layers. I'm using tensorflow for tf.add() since that supports broadcasting:
from keras.layers import Dense, Lambda, Input
from keras.models import Model
import keras.backend as K
import tensorflow as tf
# Define the shared bias as a custom keras variable
shared_bias = K.variable(value=[0], name='shared_bias')
input_layer = Input(shape=(2,))
# Disable biases in the hidden layer
dense_1 = Dense(units=3, use_bias=False, activation=None)(input_layer)
# Manually add the shared bias
dense_1 = Lambda(lambda x: tf.add(x, shared_bias))(dense_1)
# Disable bias in output layer
output_layer = Dense(units=1, use_bias=False)(dense_1)
# Manually add the bias variable
output_layer = Lambda(lambda x: tf.add(x, shared_bias))(output_layer)
model = Model(inputs=input_layer, outputs=output_layer)
print(model.summary())
This assumes that your shared bias is not trainable though.
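If you do want the shared bias to be trainable, one option (my suggestion, not part of the original answer) is a small custom layer built with add_weight; instantiating it once and applying it after both Dense layers keeps the weight shared:
from keras.layers import Layer

class SharedBias(Layer):
    # custom layer: adds one trainable scalar bias, broadcast over all units
    def build(self, input_shape):
        self.bias = self.add_weight(name='shared_bias', shape=(1,),
                                    initializer='zeros', trainable=True)
        super(SharedBias, self).build(input_shape)

    def call(self, inputs):
        return inputs + self.bias

add_bias = SharedBias()  # one instance, so both calls share the same weight
dense_1 = Dense(units=3, use_bias=False, activation=None)(input_layer)
dense_1 = add_bias(dense_1)
output_layer = Dense(units=1, use_bias=False)(dense_1)
output_layer = add_bias(output_layer)
model = Model(inputs=input_layer, outputs=output_layer)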