Save two models in two different folders - python-3.x

I'm using two different sequential neural network models in my Python program.
One RNN model, defined like this:
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, CuDNNLSTM
ModelRNN = Sequential()
ModelRNN.add(CuDNNLSTM(150, return_sequences=True, batch_size=None, input_shape=(None,10)))
ModelRNN.add(CuDNNLSTM(150, return_sequences=True))
ModelRNN.add(Dense(100, activation='relu'))
ModelRNN.add(Dense(10, activation='relu'))
optimizer = tf.keras.optimizers.Adam(lr=0.001)
ModelRNN.compile(loss='mean_squared_error', optimizer=optimizer, metrics=['accuracy'])
One Dense model, defined like this:
ModelDense = Sequential()
ModelDense.add(Dense(380, batch_size=None, input_shape=(1,380), activation='elu'))
ModelDense.add(Dense(380, activation='elu'))
ModelDense.add(Dense(380, activation='elu'))
ModelDense.add(Dense(380, activation='elu'))
optimizer = tf.keras.optimizers.Adam(lr=0.00025)
ModelDense.compile(loss='mean_squared_error', optimizer=optimizer, metrics=['accuracy'])
So my problem is that the two networks work together, so I have to run both of them in the same TensorFlow session, BUT I want to save them in two different folders.
I don't really know whether this is possible, because I've never really looked into how TensorFlow graphs work; I only know that when I use a TensorFlow saver, I only pass it my session and a path.
So my question is: how can I separate the storage of my models into two folders?
I want to do this so that I can easily change my RNN without having to retrain both networks and without overwriting my trained RNN.
If I'm not clear, please ask me for more details.

So my question is: how can I separate the storage of my models into two folders?
The save method of a Keras model accepts a file path, so a different folder can be given for each model.
ModelRNN.save('folder1/<filename.h5>')
ModelDense.save('folder2/<filename.h5>')
https://www.tensorflow.org/tutorials/keras/save_and_restore_models#save_the_entire_model
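Since each model ends up in its own folder, you can later retrain and re-save the RNN without touching the Dense model, and reload them independently. A minimal sketch (the file names below are placeholders, not from the question):
import tensorflow as tf
# Reload each model from its own folder; overwriting one file never affects the other
ModelRNN = tf.keras.models.load_model('folder1/model_rnn.h5')
ModelDense = tf.keras.models.load_model('folder2/model_dense.h5')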

Related

Shall I use a Sequential model or the Functional API to model a neural network with two 2D-matrix inputs?

Good morning,
I tried to use a Sequential model to create my neural network, which has multiple (concatenated) inputs. But I want to know whether I should use the Keras Functional API to create my model instead.
import numpy as np
from numpy import loadtxt
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.optimizers import Adadelta

in1 = loadtxt('in1.csv', delimiter=',')  # 2D matrix
in2 = loadtxt('in2.csv', delimiter=',')  # 2D matrix
y = loadtxt('y.csv', delimiter=',')      # 2D matrix (output labels)
X_train = np.hstack((in1, in2))
y_train = y
model = Sequential()
model.add(Dense(nbinneuron, input_dim=2*nx, activation='tanh', kernel_initializer='normal'))
model.add(Dropout(0.5))
# output layer
model.add(Dense(2, activation='tanh'))
opt = Adadelta(lr=0.01)
model.compile(loss='mean_squared_error', optimizer=opt, metrics=['mse'])
# fit the keras model on the dataset
history = model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=500, verbose=0)
...
Thanks in advance.
A Sequential Model can only have one input and one output. To build a model with multiple inputs (and/or multiple outputs), you need to use the Functional API.
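For illustration, a minimal two-input sketch with the Functional API (the layer sizes and the concatenate merge below are assumptions, not taken from the question):
from keras.models import Model
from keras.layers import Input, Dense, Dropout, concatenate

nx = 100          # number of columns in each input matrix (placeholder value)
nbinneuron = 64   # hidden-layer size (placeholder value)

# One Input per matrix instead of stacking them with np.hstack
in1 = Input(shape=(nx,))
in2 = Input(shape=(nx,))
merged = concatenate([in1, in2])
hidden = Dense(nbinneuron, activation='tanh', kernel_initializer='normal')(merged)
hidden = Dropout(0.5)(hidden)
out = Dense(2, activation='tanh')(hidden)

model = Model(inputs=[in1, in2], outputs=out)
model.compile(loss='mean_squared_error', optimizer='adadelta', metrics=['mse'])
# model.fit([in1_data, in2_data], y_train, ...)  # each input is now passed separately
Whether this beats simply stacking the matrices depends on whether the two inputs should flow through separate branches before being merged.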

Multi-input with shared weights in Keras

I'm trying to build a network like this:
[image: the network]
My question is how to implement the beginning with the shared weights, because it contains FC+BN+ReLU (3 layers), and I have multiple input vectors (M (~25) vectors of length F).
I tried the Functional API in Keras, and I had some difficulty with this.
Thanks.
You can try using a TimeDistributed over each layer.
For example:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (TimeDistributed, GlobalAveragePooling2D,
                                     CuDNNLSTM, Dense, Dropout)
from tensorflow.keras.applications import MobileNetV2

# n_sequence, dim, n_channels and n_output are defined elsewhere in the linked file
model = Sequential()
# The MobileNetV2 feature extractor (and its weights) is shared across all frames
model.add(TimeDistributed(MobileNetV2(weights='imagenet', include_top=False),
                          input_shape=(n_sequence, *dim, n_channels)))
model.add(TimeDistributed(GlobalAveragePooling2D()))
model.add(CuDNNLSTM(64, return_sequences=False))
model.add(Dense(64, activation='relu'))
model.add(Dropout(.5))
model.add(Dense(24, activation='relu'))
model.add(Dropout(.5))
model.add(Dense(n_output, activation='softmax'))
Code has been taken from https://github.com/peachman05/action-recognition-tutorial/blob/master/model_ML.py
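Applied to the question's setting (M vectors of length F sharing one FC+BN+ReLU block), a rough sketch along the same lines could look like this; M, F and the layer sizes are placeholder values, not taken from the question:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (TimeDistributed, Dense, BatchNormalization,
                                     Activation, Flatten)

M, F = 25, 128  # number of input vectors and their length (placeholders)

model = Sequential()
# The same Dense/BatchNorm/ReLU weights are applied to each of the M vectors
model.add(TimeDistributed(Dense(64), input_shape=(M, F)))
model.add(TimeDistributed(BatchNormalization()))
model.add(TimeDistributed(Activation('relu')))
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))  # the downstream layers depend on the task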

Adding a dropout layer in the presence of an advanced_activations layer

I have the following NN architecture using Keras:
from keras import Sequential
from keras.layers import Dense
import keras
model = Sequential()
model.add(Dense(16, input_dim=32))
model.add(keras.layers.advanced_activations.PReLU())
model.add(Dense(8))
model.add(keras.layers.advanced_activations.PReLU())
model.add(Dense(4))
model.add(Dense(1, activation='sigmoid'))
I wonder if it makes any difference whether I add model.add(Dropout(0.5)) before advanced_activations.PReLU() or after it. In other words, where is the correct place to add a dropout layer when an advanced_activations layer is present?
Thank you.
It does not really matter whether you put it before or after the activation: since most activations satisfy f(0) = 0, placing dropout before or after them produces the same result.
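For example, in the architecture above these two orderings behave the same way in practice (the placement of Dropout is the only difference):
from keras import Sequential
from keras.layers import Dense, Dropout, PReLU

# Dropout after the activation ...
model_a = Sequential()
model_a.add(Dense(16, input_dim=32))
model_a.add(PReLU())
model_a.add(Dropout(0.5))
model_a.add(Dense(1, activation='sigmoid'))

# ... or before it: since PReLU(0) = 0, dropped units stay zero either way
model_b = Sequential()
model_b.add(Dense(16, input_dim=32))
model_b.add(Dropout(0.5))
model_b.add(PReLU())
model_b.add(Dense(1, activation='sigmoid'))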

Keras: init Sequential model layers from a Model's layers

I'm trying to build an app using transfer learning. I want to use VGG16, so I've done something like this:
vgg16_model = keras.applications.vgg16.VGG16()
but I want to transfer the layers from VGG16 to my model:
model = Sequential(layers=vgg16_model.layers) (I've seen this here)
but it leads to this error:
TypeError: The added layer must be an instance of class Layer. Found: tensorflow.python.keras.engine.input_layer.InputLayer
How can I initialize my Sequential model with the VGG16 layers?
Thanks in advance.
Try this:
from keras.applications.vgg16 import VGG16
from keras.models import Sequential

vgg = VGG16()
model = Sequential()
model.add(vgg)
model.add(...)  # add additional layers
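A slightly fuller variation of the same idea for transfer learning (using include_top=False and a frozen base is an assumption about the intent, not part of the original answer):
from keras.applications.vgg16 import VGG16
from keras.models import Sequential
from keras.layers import Dense

# VGG16 convolutional base without its 1000-class head, pooled to a flat vector
vgg = VGG16(include_top=False, input_shape=(224, 224, 3), pooling='avg')
vgg.trainable = False  # freeze the pre-trained weights

model = Sequential()
model.add(vgg)                              # the whole base is added as one layer
model.add(Dense(10, activation='softmax'))  # placeholder head for a 10-class task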

Fine-tune a pre-trained InceptionResNetV2

Steps for fine-tuning a network are as follows:
1. Add your custom network on top of an already-trained base network.
2. Freeze the base network.
3. Train the part you added.
4. Unfreeze some layers in the base network.
5. Jointly train both these layers and the part you added.
Now, if the network architecture is as simple as VGG16's, we can simply unfreeze the base network from block5_conv1 (Conv2D) onward and retrain it.
[image: VGG16 architecture]
But when the architecture is as complex as InceptionResNetV2's, where do we start? Does anyone have any practical experience? Run the following code in Python to see the model:
from keras.applications import InceptionResNetV2

conv_base = InceptionResNetV2(weights='imagenet',
                              include_top=False,
                              input_shape=(299, 299, 3))
conv_base.summary()

from keras.utils import plot_model
plot_model(conv_base, to_file='model.png')
A very basic fine-tuning of a model with InceptionResNetV2 will look like this:
# InceptionResNetV2 here is imported from the repository linked below
from inception_resnet_v2 import InceptionResNetV2
from keras.layers import Dense
from keras.models import Model

# ImageNet classification
model = InceptionResNetV2()
model.predict(...)

# Fine-tuning on another 100-class dataset
base_model = InceptionResNetV2(include_top=False, pooling='avg')
# The first argument of the Dense layer below is the number of classes
outputs = Dense(100, activation='softmax')(base_model.output)
model = Model(base_model.inputs, outputs)
model.compile(...)
model.fit(...)
This is a good place to start: github.com/yuyang-huang/keras-inception-resnet-v2
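As a rough sketch of the freeze/unfreeze steps using the stock Keras InceptionResNetV2 (the number of layers to unfreeze and the 100-class head are placeholders, not recommendations from the answer):
from keras.applications import InceptionResNetV2
from keras.layers import Dense
from keras.models import Model

base_model = InceptionResNetV2(weights='imagenet', include_top=False, pooling='avg')
outputs = Dense(100, activation='softmax')(base_model.output)
model = Model(base_model.inputs, outputs)

# Steps 1-3: freeze the whole base and train only the new head
for layer in base_model.layers:
    layer.trainable = False
# model.compile(...); model.fit(...)

# Steps 4-5: unfreeze the last few layers and train jointly with a low learning rate
for layer in base_model.layers[-20:]:
    layer.trainable = True
# model.compile(...)  # re-compile after changing trainable flags
# model.fit(...)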
