Making predictions with a TensorFlow model on new data - python-3.x

I am new to TensorFlow and am trying to build a two-layer fully-connected neural network for regression. The answer in the following Stack Overflow discussion shows how to make predictions with a single-layer neural network model, but I feel that approach will become unwieldy with more layers.
Making predictions with a TensorFlow model
Could anyone please let me know how I can make predictions with my TensorFlow model?
I have defined my network as shown below:
x = tf.placeholder(tf.float32,[None,n_input])
y = tf.placeholder(tf.float32,[None])
weights_h1 = tf.Variable(tf.truncated_normal([n_input, n_hidden_1]))
weights_h2 = tf.Variable(tf.truncated_normal([n_hidden_1, n_hidden_2]))
weights_out = tf.Variable(tf.truncated_normal([n_hidden_2, n_output]))
bias_b1 = tf.Variable(tf.truncated_normal([n_hidden_1]))
bias_b2 = tf.Variable(tf.truncated_normal([n_hidden_2]))
bias_out = tf.Variable(tf.truncated_normal([n_output]))
# Hidden layer with RELU activation
layer_1 = tf.add(tf.matmul(x, weights_h1), bias_b1)
layer_1 = tf.nn.relu(layer_1)
# Hidden layer with RELU activation
layer_2 = tf.add(tf.matmul(layer_1, weights_h2), bias_b2)
layer_2 = tf.nn.relu(layer_2)
# Output layer with linear activation
out_layer = tf.add(tf.matmul(layer_2, weights_out), bias_out)
Using the following command, I can train my neural network model.
_, training_error = sess.run([optimizer, loss], feed_dict={x: batch_x, y: batch_y})
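For reference, a minimal sketch of the kind of prediction step I have in mind, assuming new_x is a NumPy array of unseen input samples (new_x is hypothetical, not defined above); I am not sure this is the intended approach for a deeper network:
# Run the trained output tensor on new data; y is not needed for prediction
predicted_values = sess.run(out_layer, feed_dict={x: new_x})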

Related

Keras Tuner and Specifying Advanced Activation Layers

I have the following Keras (Python) code for using the Keras Tuner for a single hidden layer neural network.
def build_model(hp):
    # Initialize sequential model
    model = keras.Sequential()
    # Tune the number of units in the first Dense layer
    hp_units = hp.Int('units', min_value=25, max_value=250, step=25)
    model.add(keras.layers.Dense(units=hp_units, input_shape=(train_pca_scaled.shape[1],),
                                 kernel_regularizer=keras.regularizers.L1(l1=hp.Choice('L1_value', [1e-2, 1e-3, 1e-4]))))
    # Batch normalization (before activation)
    model.add(keras.layers.BatchNormalization())
    # Activation
    activation = hp.Choice("activation", ["relu", "elu"])
    model.add(keras.layers.Activation(activation))
    # Tune the dropout rate
    hp_dropout = hp.Float('dropout_rate', min_value=0, max_value=0.5, step=0.1)
    model.add(keras.layers.Dropout(rate=hp_dropout))
    # Final output layer
    model.add(keras.layers.Dense(units=1))
    # Compile
    lr = hp.Choice("learning_rate", values=[1e-2, 1e-3, 1e-4])
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=lr),
                  loss=keras.losses.MeanSquaredError(),
                  metrics=[keras.metrics.RootMeanSquaredError()])
    return model
My issue is that I would also like to have the advanced activation layers LeakyReLU and PReLU as activation options (along with relu and elu), but they are called differently. An advanced activation layer is added as, for example, keras.layers.LeakyReLU(), which differs from the keras.layers.Activation(activation) form I use above.
How can I change the code above so that all 4 of the mentioned activation layers are included in the hyperparameter search?
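One approach I am considering (an untested sketch; the labels "leaky_relu" and "prelu" are just names I made up for the choice) is to branch on the sampled activation name when adding the layer:
activation = hp.Choice("activation", ["relu", "elu", "leaky_relu", "prelu"])
if activation == "leaky_relu":
    model.add(keras.layers.LeakyReLU())
elif activation == "prelu":
    model.add(keras.layers.PReLU())
else:
    # plain string activations keep the original Activation(...) form
    model.add(keras.layers.Activation(activation))
Would something like this interact correctly with the tuner, or is there a more idiomatic way?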
Thanks.

Evaluate TensorFlow Keras vs. KerasRegressor Neural Network

I'm attempting to find variable importance on a neural network I've built. With TensorFlow, it seems you can build the model either the tensorflow.keras way or the KerasRegressor way. Admittedly, I have been reading documentation and Stack Overflow for hours and am confused about the differences; they seem to perform similarly but have slightly different pros and cons.
One issue I'm running into: when I use tf.keras to build the model, I can clearly compare my training data to my validation/testing data and get an accuracy score, but when using KerasRegressor I cannot.
The difference here is the .evaluate() function, which KerasRegressor doesn't seem to have.
Questions:
How can I evaluate the performance of a KerasRegressor model with the same output as tf.keras's .evaluate()?
KerasRegressor code:
K.clear_session()

def base_model():
    # 1- Instantiate model
    modelNEW = keras.Sequential()
    # 2- Specify shape of first layer
    modelNEW.add(layers.Dense(512, activation='relu', input_shape=ourInputShape))
    # 3- Add the layers
    # softmax returns an array of probability scores; here we predict one of
    # CSCANCEL, MEMBERCANCEL, ACTIVE
    modelNEW.add(layers.Dense(3, activation='softmax'))
    modelNEW.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
    return modelNEW

# *** THIS IS SUPPOSED TO PREVENT OVERFITTING ***
from tensorflow.keras.callbacks import EarlyStopping
callbacks = [
    EarlyStopping(patience=2)
]

yTrain = keras.utils.to_categorical(yTrain, 3)
yValidation = keras.utils.to_categorical(yValidation, 3)

currentModel = KerasRegressor(build_fn=base_model, epochs=100, batch_size=50, shuffle=True)
history = currentModel.fit(xTrain, yTrain)
Now if I want to test the accuracy, I have to use .predict()
prediction = currentModel.predict(xValidation)
# print(prediction)
# train_error = np.abs(yValidation - prediction)
# mean_error = np.mean(train_error)
# min_error = np.min(train_error)
# max_error = np.max(train_error)
# std_error = np.std(train_error)
tf.keras neural network:
modelNEW = keras.Sequential()
modelNEW.add(layers.Dense(512, activation='relu', input_shape=ourInputShape))
# softmax returns an array of probability scores; here we predict one of
# CSCANCEL, MEMBERCANCEL, ACTIVE
modelNEW.add(layers.Dense(3, activation='softmax'))
modelNEW.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])

# *** THIS IS SUPPOSED TO PREVENT OVERFITTING ***
from tensorflow.keras.callbacks import EarlyStopping
callbacks = [
    EarlyStopping(patience=2)
]

yTrain = keras.utils.to_categorical(yTrain, 3)
yValidation = keras.utils.to_categorical(yValidation, 3)

history = modelNEW.fit(xTrain, yTrain, epochs=100, batch_size=50, shuffle=True)
This is the evaluation I need to see, and cannot get with KerasRegressor:
# 6- Model evaluation with test data
test_loss, test_acc = modelNEW.evaluate(xValidation, yValidation)
print('test_acc:', test_acc)
Possible workaround, which still errors:
# predictionTrain = currentModel.predict(xTrain)
predictionValidation = currentModel.predict(xValidation)
# print('Train Accuracy = ',accuracy_score(yTrain,np.argmax(pred_train, axis=1)))
print('Test Accuracy = ',accuracy_score(yValidation,np.argmax(predictionValidation, axis=1)))
The error: Classification metrics can't handle a mix of multilabel-indicator and binary targets
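If I'm reading the error right, yValidation is still one-hot encoded (a multilabel-indicator) while np.argmax(predictionValidation, axis=1) yields plain class indices. A minimal sketch of how the two could be aligned, assuming accuracy_score is sklearn.metrics.accuracy_score:
import numpy as np
from sklearn.metrics import accuracy_score
# Compare class indices on both sides instead of one-hot labels vs. indices
predicted_classes = np.argmax(predictionValidation, axis=1)
true_classes = np.argmax(yValidation, axis=1)  # undo to_categorical
print('Test Accuracy = ', accuracy_score(true_classes, predicted_classes))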

Adding an activation layer to Keras Add() layer and using this layer as output to model

I am trying to apply a softmax activation to the output of an Add() layer and make that layer the output of my model, but I am running into a few problems.
It seems the Add() layer doesn't allow the use of activations, and if I do something like this:
predictions = Add()([x,y])
predictions = softmax(predictions)
model = Model(inputs = model.input, outputs = predictions)
I get:
ValueError: Output tensors to a Model must be the output of a Keras `Layer` (thus holding past layer metadata). Found: Tensor("Softmax:0", shape=(?, 6), dtype=float32)
This has nothing to do with the Add layer: you are applying softmax as a plain function to Keras tensors, and this won't work; you need an actual layer. You can use the Activation layer for this:
from keras.layers import Activation
predictions = Add()([x,y])
predictions = Activation("softmax")(predictions)
model = Model(inputs = model.input, outputs = predictions)

Is it possible to train using same model with two inputs?

Hello, I have a question about Keras.
I currently want to implement a network that uses the same CNN model on two input images and feeds the two CNN outputs into a Dense model.
For example:
def cnn_model():
    input = Input(shape=(None, None, 3))
    x = Conv2D(8, (3, 3), strides=(1, 1))(input)
    x = GlobalAvgPool2D()(x)
    model = Model(input, x)
    return model

def fc_model(cnn1, cnn2):
    input_1 = cnn1.output
    input_2 = cnn2.output
    input = concatenate([input_1, input_2])
    x = Dense(1, input_shape=(None, 16))(input)
    x = Activation('sigmoid')(x)
    model = Model([cnn1.input, cnn2.input], x)
    return model

def main():
    cnn1 = cnn_model()
    cnn2 = cnn_model()
    model = fc_model(cnn1, cnn2)
    model.compile(optimizer='adam', loss='mean_squared_error')
    model.fit(x=[image1, image2], y=[1.0, 1.0], batch_size=1, epochs=1)
I want to implement a model like this and train it, but I get an error message like the one below:
'All layer names should be unique'
Actually, I want to use only one CNN model as a feature extractor and finally use the two features to predict one float value in the range 0.0 ~ 1.0.
So the whole system is: extract features from two images with the same CNN model, and feed those features to a Dense model to get one floating-point value.
Please help me implement this system and tell me how to train it.
Thank you
See the section of the Keras documentation on shared layers:
https://keras.io/getting-started/functional-api-guide/
A code snippet from the documentation above demonstrating this:
# This layer can take as input a matrix
# and will return a vector of size 64
shared_lstm = LSTM(64)
# When we reuse the same layer instance
# multiple times, the weights of the layer
# are also being reused
# (it is effectively *the same* layer)
encoded_a = shared_lstm(tweet_a)
encoded_b = shared_lstm(tweet_b)
# We can then concatenate the two vectors:
merged_vector = keras.layers.concatenate([encoded_a, encoded_b], axis=-1)
# And add a logistic regression on top
predictions = Dense(1, activation='sigmoid')(merged_vector)
# We define a trainable model linking the
# tweet inputs to the predictions
model = Model(inputs=[tweet_a, tweet_b], outputs=predictions)
model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])
model.fit([data_a, data_b], labels, epochs=10)
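Applied to the question's setup, a minimal sketch (the names input_a, input_b are illustrative, not from the original code) builds one CNN instance and calls it on both inputs, so the weights and layer names are shared:
shared_cnn = cnn_model()  # a single instance, reused for both images
input_a = Input(shape=(None, None, 3))
input_b = Input(shape=(None, None, 3))
features_a = shared_cnn(input_a)  # calling a Model like a layer reuses its weights
features_b = shared_cnn(input_b)
merged = concatenate([features_a, features_b])
output = Dense(1, activation='sigmoid')(merged)
model = Model(inputs=[input_a, input_b], outputs=output)
model.compile(optimizer='adam', loss='mean_squared_error')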

ValueError when Fine-tuning Inception_v3 in Keras

I am trying to fine-tune pre-trained Inceptionv3 in Keras for a multi-label (17) prediction problem.
Here's the code:
# create the base pre-trained model
base_model = InceptionV3(weights='imagenet', include_top=False)
# add a new top layer
x = base_model.output
predictions = Dense(17, activation='sigmoid')(x)
# this is the model we will train
model = Model(inputs=base_model.input, outputs=predictions)
# we need to recompile the model for these modifications to take effect
# we use SGD with a low learning rate
from keras.optimizers import SGD
model.compile(loss='binary_crossentropy',  # We NEED binary here, since categorical_crossentropy L1-normalizes the output before calculating loss.
              optimizer=SGD(lr=0.0001, momentum=0.9))
# Fit the model (add history so that the history may be saved)
history = model.fit(x_train, y_train,
                    batch_size=128,
                    epochs=1,
                    verbose=1,
                    callbacks=callbacks_list,
                    validation_data=(x_valid, y_valid))
But I get the following error message and have trouble deciphering what it is saying:
ValueError: Error when checking target: expected dense_1 to have 4
dimensions, but got array with shape (1024, 17)
It seems to have something to do with the fact that it doesn't like my one-hot encoding of the labels as the target. But how do I get a 4-dimensional target?
It turns out that the code copied from https://keras.io/applications/ would not run out-of-the-box.
The following post has helped me:
Keras VGG16 fine tuning
The changes I need to make are the following:
Add in the input shape to the model definition base_model = InceptionV3(weights='imagenet', include_top=False, input_shape=(299,299,3)), and
Add a Flatten() layer to flatten the tensor output (with include_top=False the base model outputs a 4D feature map of shape (batch, height, width, channels), which is why Keras expected a 4-dimensional target):
x = base_model.output
x = Flatten()(x)
predictions = Dense(17, activation='sigmoid')(x)
Then the model works for me!
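Put together, a sketch of the corrected definition (the import lines are my assumption of what the original script used):
from keras.applications.inception_v3 import InceptionV3
from keras.layers import Dense, Flatten
from keras.models import Model
# Base model with an explicit input shape, as noted above
base_model = InceptionV3(weights='imagenet', include_top=False, input_shape=(299, 299, 3))
x = base_model.output
x = Flatten()(x)  # collapse the 4D feature map to 2D
predictions = Dense(17, activation='sigmoid')(x)
model = Model(inputs=base_model.input, outputs=predictions)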
