I have generated a tf.data.Dataset which can be accessed as a class attribute using window.train. This dataset is not batched and its dimensions can be displayed as follows:
for inputs, labels in window.train.take(1):
    print(inputs.shape, labels.shape)
which yields:
(46, 27)
(46, 1)
Alternatively, if we were to batch the dataset and take the first batch:
for inputs, labels in window.train.batch(3).take(1):
    print(inputs.shape, labels.shape)
which yields:
(3, 46, 27)
(3, 46, 1)
I then created an elementary model that takes a linear combination of the inputs to predict the labels. Since I intend to gradually make this model more sophisticated, I set it up as a Keras Tuner HyperModel.
class LinearModel(kt.HyperModel):
    def build(self, hp):
        model = tf.keras.Sequential()
        model.add(tf.keras.layers.Dense(units=1, activation=None))
        model.compile(tf.keras.optimizers.Adam(), loss='mse')
        return model
While I understand how to set up the build method, I am not sure how to set up the fit method, which I need in order to find the optimal batch size. I tried something like the following:
def fit(self, hp, model, x, validation_data, **kwargs):
    hp_batch_size = hp.Choice('batch_size', [100, 200])
    x = window.train.batch(hp_batch_size)
    validation_data = window.valid.batch(hp_batch_size)
    return model.fit(x, validation_data, **kwargs)
This doesn't work. In particular, I do not understand what arguments I should pass to fit and why. Can someone help me understand how to define fit? I am familiar with the examples provided in the Keras documentation, but they were not helpful. Importantly, my model is then tuned as follows:
model_tuner = kt.Hyperband(hypermodel=LinearModel(), objective='val_loss')
model_tuner.search(x=window.train, validation_data=window.valid, epochs=500)
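For reference, here is a minimal sketch of the custom-fit pattern from the Keras Tuner "getting started" guide, adapted to batch the datasets inside fit. This assumes keras-tuner >= 1.1, where HyperModel.fit receives hp, the built model, and whatever was passed to search():

class LinearModel(kt.HyperModel):
    def build(self, hp):
        model = tf.keras.Sequential()
        model.add(tf.keras.layers.Dense(units=1, activation=None))
        model.compile(tf.keras.optimizers.Adam(), loss='mse')
        return model

    def fit(self, hp, model, x, validation_data=None, **kwargs):
        # x and validation_data arrive unbatched, exactly as passed to search();
        # batch them here with the hyperparameter under tuning and return the
        # History object so the tuner can read the objective from it.
        batch_size = hp.Choice('batch_size', [100, 200])
        return model.fit(
            x.batch(batch_size),
            validation_data=validation_data.batch(batch_size),
            **kwargs,
        )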
I need to visualize the output of a VGG16 model which classifies 14 different classes.
I loaded the trained model and replaced the classifier layer with an Identity() layer, but it doesn't categorize the output.
Here is the snippet:
The number of samples here is 1000 images.
epoch = 800
PATH = 'vgg16_epoch{}.pth'.format(epoch)
checkpoint = torch.load(PATH)
model.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
epoch = checkpoint['epoch']
class Identity(nn.Module):
    def __init__(self):
        super(Identity, self).__init__()

    def forward(self, x):
        return x
model.classifier._modules['6'] = Identity()
model.eval()
logits_list = numpy.empty((0,4096))
targets = []
with torch.no_grad():
    for step, (t_image, target, classess, image_path) in enumerate(test_loader):
        t_image = t_image.cuda()
        target = target.cuda()
        target = target.data.cpu().numpy()
        targets.append(target)

        logits = model(t_image)
        print(logits.shape)
        logits = logits.data.cpu().numpy()
        print(logits.shape)
        logits_list = numpy.append(logits_list, logits, axis=0)
        print(logits_list.shape)
tsne = TSNE(n_components=2, verbose=1, perplexity=10, n_iter=1000)
tsne_results = tsne.fit_transform(logits_list)
target_ids = range(len(targets))
plt.scatter(tsne_results[:,0],tsne_results[:,1],c = target_ids ,cmap=plt.cm.get_cmap("jet", 14))
plt.colorbar(ticks=range(14))
plt.legend()
plt.show()
Here is what this script produced: I am not sure why each cluster shows all of the colors!
The VGG16 outputs over 25k features to the classifier. I believe that's too many for t-SNE. It's a good idea to include a new nn.Linear layer to reduce this number, so that t-SNE may work better. In addition, I'd recommend two different ways to get the features from the model:
The best way to get it regardless of the model is by using the register_forward_hook method. You may find a notebook here with an example.
If you don't want to use hooks, I'd suggest this one. After loading your model, you may use the following class to extract the features:
class FeatNet(nn.Module):
    def __init__(self, vgg):
        super(FeatNet, self).__init__()
        self.features = nn.Sequential(*list(vgg.children())[:-1])

    def forward(self, img):
        return self.features(img)
Now, you just need to instantiate FeatNet with your loaded VGG model and call it on an image to get the features.
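For example (a hypothetical usage, with model being the loaded VGG16 from the question):

featnet = FeatNet(model)           # wrap the loaded VGG16
featnet.eval()
with torch.no_grad():
    features = featnet(t_image)    # (batch, 512, 7, 7) feature maps, before flattening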
To include the feature reducer, as I suggested before, you need to retrain your model doing something like:
class FeatNet(nn.Module):
    def __init__(self, vgg):
        super(FeatNet, self).__init__()
        self.features = nn.Sequential(*list(vgg.children())[:-1])
        self.feat_reducer = nn.Sequential(
            nn.Linear(25088, 1024),
            nn.BatchNorm1d(1024),
            nn.ReLU()
        )
        self.classifier = nn.Linear(1024, 14)

    def forward(self, img):
        x = self.features(img)
        x = torch.flatten(x, 1)  # flatten the (batch, 512, 7, 7) maps to (batch, 25088)
        x_r = self.feat_reducer(x)
        return self.classifier(x_r)
Then, you can run your model returning x_r, that is, the reduced features. As I said, 25k features are too many for t-SNE. Another way to reduce this number is to use PCA instead of nn.Linear. In this case, you feed the 25k features to PCA and then train t-SNE on PCA's output. I prefer using nn.Linear, but you need to test both to see which one gives you a better result.
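A minimal sketch of the PCA route with scikit-learn (assuming logits_list holds the raw features, as in the question) could look like this:

from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

# Compress the raw features to 50 principal components first,
# then run t-SNE on the compressed representation.
pca = PCA(n_components=50)
reduced = pca.fit_transform(logits_list)

tsne = TSNE(n_components=2, verbose=1, perplexity=10, n_iter=1000)
tsne_results = tsne.fit_transform(reduced)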
I am playing with variational autoencoders and would like to adapt a Keras example found on GitHub.
Basically, the example is very simple and based on the MNIST dataset, and I would like to apply it to a more difficult, more realistic dataset.
Code I'm trying to modify:
vae_dfc.fit(
    x_train,
    epochs=epochs,
    steps_per_epoch=train_size // batch_size,
    validation_data=(x_val),
    validation_steps=val_size // batch_size,
    verbose=1
)
With more complex datasets it is nearly impossible to load everything on memory so we need to use fit_generator() to train the model. But it doesn't seem able to handle this:
image_generator = image.ImageDataGenerator(
    rescale=1./255,
    validation_split=0.2
)

train_generator = image_generator.flow_from_directory(
    dir,
    class_mode=None,
    color_mode='rgb',
    target_size=(ORIGINAL_SHAPE[0], ORIGINAL_SHAPE[1]),
    batch_size=BATCH_SIZE,
    subset='training'
)

vae.fit_generator(
    train_generator,
    epochs=EPOCHS,
    steps_per_epoch=train_generator.samples // BATCH_SIZE,
    validation_data=validation_generator,
    validation_steps=validation_generator.samples // BATCH_SIZE
)
My understanding is that class_mode=None produces an output similar to the original simple example, but fit_generator() is unable to handle it. Are there any workarounds for the fit_generator error?
Configurations:
tensorflow-gpu==1.12.0
Python 3.6
Windows 10
Cuda 9.0
Full error:
File "xxx\venv\lib\site-packages\tensorflow\python\keras\engine\training.py",
line 2177, in fit_generator
initial_epoch=initial_epoch)
File "xxx\venv\lib\site-packages\tensorflow\python\keras\engine\training_generator.py",
line 162, in fit_generator
'or (x, y). Found: ' + str(generator_output)) ValueError: Output of generator should be a tuple (x, y, sample_weight) or (x, y).
Found: [[[[0.48627454 0.34901962 0.2901961 ] ....]]]
An autoencoder needs outputs = inputs. It's different from not having outputs.
I believe you can try class_mode='input'.
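If I'm not mistaken, that only changes one argument in your generator; class_mode='input' makes flow_from_directory yield (x, x) pairs, which is what an autoencoder expects:

train_generator = image_generator.flow_from_directory(
    dir,
    class_mode='input',   # yields (input, input) pairs instead of bare inputs
    color_mode='rgb',
    target_size=(ORIGINAL_SHAPE[0], ORIGINAL_SHAPE[1]),
    batch_size=BATCH_SIZE,
    subset='training'
)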
If this doesn't work, you can create a wrapper generator for outputting both:
class AutoencGenerator(keras.utils.Sequence):
    def __init__(self, originalGenerator):
        self.generator = originalGenerator

    def __len__(self):
        return len(self.generator)

    def __getitem__(self, i):
        x = self.generator[i]
        return x, x

    def on_epoch_end(self):
        self.generator.on_epoch_end()  # only if the original generator has an on_epoch_end

train_autoenc_generator = AutoencGenerator(train_generator)
Both options require that your model has outputs, of course. If the model was created without outputs (unusual), make it output the results and move the loss function into model.compile(loss=the_loss).
Example of VAE
inputs = Input(shape)
means, sigmas = encoder(inputs)

def encode(x):
    means, sigmas = x
    randomSamples = tf.random_normal(K.shape(means))  # samples
    encoded = (sigmas * randomSamples) + means
    return encoded

encodings = Lambda(encode)([means, sigmas])
outputs = decoder(encodings)

kl_loss = some_tensor_function(means, sigmas)

VAE = Model(inputs, outputs)
VAE.add_loss(kl_loss)
VAE.compile(loss='mse', optimizer='adam')
Train with the generator:
VAE.fit_generator(train_autoenc_generator, ...)
I am trying to create a ResNet50 model for a regression problem, with an output value ranging from -1 to 1.
I omitted the classes argument, and in my preprocessing step I resize my images to (224, 224, 3).
I try to create the model with
def create_resnet(load_pretrained=False):
    if load_pretrained:
        weights = 'imagenet'
    else:
        weights = None

    # Get base model
    base_model = ResNet50(weights=weights)

    optimizer = Adam(lr=1e-3)
    base_model.compile(loss='mse', optimizer=optimizer)

    return base_model
and then create the model, print the summary and use the fit_generator to train
history = model.fit_generator(batch_generator(X_train, y_train, 100, 1),
                              steps_per_epoch=300,
                              epochs=10,
                              validation_data=batch_generator(X_valid, y_valid, 100, 0),
                              validation_steps=200,
                              verbose=1,
                              shuffle=1)
I get an error though that says
ValueError: Error when checking target: expected fc1000 to have shape (1000,) but got array with shape (1,)
Looking at the model summary, this makes sense, since the final Dense layer has an output shape of (None, 1000)
fc1000 (Dense) (None, 1000) 2049000 avg_pool[0][0]
But I can't figure out how to modify the model. I've read through the Keras documentation and looked at several examples, but pretty much everything I see is for a classification model.
How can I modify the model so it is formatted properly for regression?
Your code is throwing the error because you're using the original fully-connected top layer that was trained to classify images into one of 1000 classes. To make the network work, you need to replace this top layer with your own, which should have a shape compatible with your dataset and task.
Here is a small snippet I was using to create an ImageNet pre-trained model for the regression task (face landmarks prediction) with Keras:
NUM_OF_LANDMARKS = 136

def create_model(input_shape, top='flatten'):
    if top not in ('flatten', 'avg', 'max'):
        raise ValueError('unexpected top layer type: %s' % top)

    # connects base model with new "head"
    BottleneckLayer = {
        'flatten': Flatten(),
        'avg': GlobalAveragePooling2D(),
        'max': GlobalMaxPooling2D()
    }[top]

    base = InceptionResNetV2(input_shape=input_shape,
                             include_top=False,
                             weights='imagenet')

    x = BottleneckLayer(base.output)
    x = Dense(NUM_OF_LANDMARKS, activation='linear')(x)
    model = Model(inputs=base.inputs, outputs=x)
    return model
In your case, I guess you only need to replace InceptionResNetV2 with ResNet50. Essentially, you are creating a pre-trained model without top layers:
base = ResNet50(input_shape=input_shape, include_top=False)
And then attaching your custom layer on top of it:

x = Flatten()(base.output)
x = Dense(NUM_OF_LANDMARKS, activation='sigmoid')(x)
model = Model(inputs=base.inputs, outputs=x)

That's it. (One note: since your target ranges from -1 to 1, a tanh activation would match that range better than sigmoid, which outputs values in [0, 1].)
You can also check this link from the Keras repository that shows how ResNet50 is constructed internally. I believe it will give you some insight into the functional API and layer replacement.
Also, I would say that both regression and classification tasks are not that different if we're talking about fine-tuning pre-trained ImageNet models. The type of task mostly depends on your loss function and the top layer's activation function. Otherwise, you still have a fully-connected layer with N outputs but they are interpreted in a different way.
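Putting the pieces together for your case, an untested sketch (swapping in ResNet50, a single tanh output for the [-1, 1] target, and your optimizer settings) might look like this:

def create_resnet(load_pretrained=False):
    weights = 'imagenet' if load_pretrained else None

    # Pre-trained base without the 1000-class top
    base = ResNet50(input_shape=(224, 224, 3), include_top=False, weights=weights)

    x = Flatten()(base.output)
    x = Dense(1, activation='tanh')(x)  # single regression output in [-1, 1]
    model = Model(inputs=base.inputs, outputs=x)

    model.compile(loss='mse', optimizer=Adam(lr=1e-3))
    return model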
Inspired by tf.keras.Model subclassing, I created a custom model.
I can train it and get successful results, but I can't save it.
I use Python 3.6 with TensorFlow v1.10 (or v1.9).
Minimal complete code example here:
import tensorflow as tf
from tensorflow.keras.datasets import mnist


class Classifier(tf.keras.Model):
    def __init__(self):
        super().__init__(name="custom_model")

        self.batch_norm1 = tf.layers.BatchNormalization()
        self.conv1 = tf.layers.Conv2D(32, (7, 7))
        self.pool1 = tf.layers.MaxPooling2D((2, 2), (2, 2))

        self.batch_norm2 = tf.layers.BatchNormalization()
        self.conv2 = tf.layers.Conv2D(64, (5, 5))
        self.pool2 = tf.layers.MaxPooling2D((2, 2), (2, 2))

    def call(self, inputs, training=None, mask=None):
        x = self.batch_norm1(inputs)
        x = self.conv1(x)
        x = tf.nn.relu(x)
        x = self.pool1(x)

        x = self.batch_norm2(x)
        x = self.conv2(x)
        x = tf.nn.relu(x)
        x = self.pool2(x)

        return x


if __name__ == '__main__':
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    x_train = x_train.reshape(*x_train.shape, 1)[:1000]
    y_train = y_train.reshape(*y_train.shape, 1)[:1000]
    x_test = x_test.reshape(*x_test.shape, 1)
    y_test = y_test.reshape(*y_test.shape, 1)

    y_train = tf.keras.utils.to_categorical(y_train)
    y_test = tf.keras.utils.to_categorical(y_test)

    model = Classifier()
    inputs = tf.keras.Input((28, 28, 1))
    x = model(inputs)
    x = tf.keras.layers.Flatten()(x)
    x = tf.keras.layers.Dense(10, activation="sigmoid")(x)
    model = tf.keras.Model(inputs=inputs, outputs=x)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=1, shuffle=True)

    model.save("./my_model")
Error message:
1000/1000 [==============================] - 1s 1ms/step - loss: 4.6037 - acc: 0.7025
Traceback (most recent call last):
  File "/home/user/Data/test/python/mnist/mnist_run.py", line 62, in <module>
    model.save("./my_model")
  File "/home/user/miniconda3/envs/ml3.6/lib/python3.6/site-packages/tensorflow/python/keras/engine/network.py", line 1278, in save
    save_model(self, filepath, overwrite, include_optimizer)
  File "/home/user/miniconda3/envs/ml3.6/lib/python3.6/site-packages/tensorflow/python/keras/engine/saving.py", line 101, in save_model
    'config': model.get_config()
  File "/home/user/miniconda3/envs/ml3.6/lib/python3.6/site-packages/tensorflow/python/keras/engine/network.py", line 1049, in get_config
    layer_config = layer.get_config()
  File "/home/user/miniconda3/envs/ml3.6/lib/python3.6/site-packages/tensorflow/python/keras/engine/network.py", line 1028, in get_config
    raise NotImplementedError
NotImplementedError
Process finished with exit code 1
I looked into the error line and found out that the get_config method checks self._is_graph_network.
Has anybody dealt with this problem?
Thanks!
Update 1:
On Keras 2.2.2 (not tf.keras), in file keras/engine/network.py, function get_config, I found this comment about model saving:

# Subclassed networks are not serializable
# (unless serialization is implemented by
# the author of the subclassed network).

So, obviously, it won't work...
I wonder why they don't point this out in the documentation (like: "Use subclassing, but without the ability to save!").
Update 2:
Found in keras documentation:
In subclassed models, the model's topology is defined as Python code
(rather than as a static graph of layers). That means the model's
topology cannot be inspected or serialized. As a result, the following
methods and attributes are not available for subclassed models:
model.inputs and model.outputs.
model.to_yaml() and model.to_json()
model.get_config() and model.save().
So, there is no way to save a model when using subclassing.
It's only possible to use Model.save_weights().
TensorFlow 2.2
Thanks to #cal for pointing out that the new TensorFlow supports saving custom models!
Use model.save to save the whole model and load_model to restore a previously stored subclassed model. The following code snippet shows how.
class ThreeLayerMLP(keras.Model):
    def __init__(self, name=None):
        super(ThreeLayerMLP, self).__init__(name=name)
        self.dense_1 = layers.Dense(64, activation='relu', name='dense_1')
        self.dense_2 = layers.Dense(64, activation='relu', name='dense_2')
        self.pred_layer = layers.Dense(10, name='predictions')

    def call(self, inputs):
        x = self.dense_1(inputs)
        x = self.dense_2(x)
        return self.pred_layer(x)

def get_model():
    return ThreeLayerMLP(name='3_layer_mlp')

model = get_model()

# Save the model
model.save('path_to_my_model', save_format='tf')

# Recreate the exact same model purely from the file
new_model = keras.models.load_model('path_to_my_model')
See: Save and serialize models with Keras - Part II: Saving and Loading of Subclassed Models
TensorFlow 2.0
TL;DR:
do not use model.save() for custom subclass keras model;
use save_weights() and load_weights() instead.
With the help of the TensorFlow team, it turns out that the best practice for saving a custom subclassed Keras model is to save its weights and load them back when needed.
The reason we cannot simply save a Keras custom subclassed model is that it contains custom code, which cannot be serialized safely. However, the weights can be saved/loaded without any problem as long as we have the same model structure and custom code.
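A minimal sketch of that round trip, reusing the ThreeLayerMLP class from the snippet above (the checkpoint path and training data are placeholders):

model = ThreeLayerMLP(name='3_layer_mlp')
model.compile(optimizer='adam',
              loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True))
model.fit(x_train, y_train, epochs=1)
model.save_weights('ckpt/my_weights', save_format='tf')

# Later: rebuild the same architecture, run it once so the variables
# are created, then load the saved weights into it.
restored = ThreeLayerMLP(name='3_layer_mlp')
restored.predict(x_train[:1])  # builds the layers
restored.load_weights('ckpt/my_weights')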
There is a great tutorial written by Francois Chollet, the author of Keras, on how to save/load Sequential/Functional/Keras/custom subclassed models in TensorFlow 2.0, available as a Colab here. The Saving Subclassed Models section says:
Sequential models and Functional models are datastructures that represent a DAG of layers. As such, they can be safely serialized and deserialized.
A subclassed model differs in that it's not a datastructure, it's a
piece of code. The architecture of the model is defined via the body
of the call method. This means that the architecture of the model
cannot be safely serialized. To load a model, you'll need to have
access to the code that created it (the code of the model subclass).
Alternatively, you could be serializing this code as bytecode (e.g.
via pickling), but that's unsafe and generally not portable.
This will be fixed in an upcoming release according to the 1.13 pre-release patch notes:
Keras & Python API:
Subclassed Keras models can now be saved through tf.contrib.saved_model.save_keras_model.
EDIT:
It seems this is not quite as finished as the notes suggest. The docs for that function for v1.13 state:
Model limitations: - Sequential and functional models can always be saved. - Subclassed models can only be saved when serving_only=True. This is due to the current implementation copying the model in order to export the training and evaluation graphs. Because the topology of subclassed models cannot be determined, the subclassed models cannot be cloned. Subclassed models will be entirely exportable in the future.
TensorFlow 2.1 allows saving subclassed models in the SavedModel format.
Since I started using TensorFlow, I have always been a fan of model subclassing; I feel this way of building models is more Pythonic and collaboration-friendly. But saving the model was always a pain point with this approach.
Recently I started to update my knowledge and came across the following information, which seems to hold for TensorFlow 2.1:
Subclassed Models
I found this:

The second approach is to use model.save to save the whole model and load_model to restore a previously stored subclassed model.

This saves the model, the weights, and other stuff into a SavedModel file.
And lastly, the confirmation:
Saving custom objects:
If you are using the SavedModel format, you can skip this section. The key difference between HDF5 and SavedModel is that HDF5 uses object configs to save the model architecture, while SavedModel saves the execution graph. Thus, SavedModels are able to save custom objects like subclassed models and custom layers without requiring the original code.
I tested this personally, and indeed, model.save() for subclassed models generates a SavedModel. There is no longer any need to use model.save_weights() or related functions; they now serve more specific use cases.
This is supposed to be the end of this painful path for all of us interested in model subclassing.
I found a way to solve it. Create a new model and load the weights from the saved .h5 model. This way is not preferred, but it works with Keras 2.2.4 and TensorFlow 1.12.
class MyModel(keras.Model):
    def __init__(self, inputs, *args, **kwargs):
        outputs = func(inputs)
        super(MyModel, self).__init__(inputs=inputs, outputs=outputs, *args, **kwargs)

def get_model():
    return MyModel(inputs, *args, **kwargs)

model = get_model()
model.save('file_path.h5')

model_new = get_model()
model_new.compile(optimizer=optimizer, loss=loss, metrics=metrics)
model_new.load_weights('file_path.h5')
model_new.evaluate(x_test, y_test, **kwargs)
UPDATE: Jul 20
Recently I also tried to create my own subclassed layers and model. Writing your own get_config() function can be difficult, so I used model.save_weights(path_to_model_weights) and model.load_weights(path_to_model_weights). When you want to load the weights, remember to create a model with the same architecture first, then call model.load_weights(). See the TensorFlow guide for more details.
Old Answer (Still correct)
Actually, the TensorFlow documentation says:
In order to save/load a model with custom-defined layers, or a subclassed model, you should overwrite the get_config and optionally from_config methods. Additionally, you should register the custom object so that Keras is aware of it.
For example:
class Linear(keras.layers.Layer):
    def __init__(self, units=32, **kwargs):
        super(Linear, self).__init__(**kwargs)
        self.units = units

    def build(self, input_shape):
        self.w = self.add_weight(
            shape=(input_shape[-1], self.units),
            initializer="random_normal",
            trainable=True,
        )
        self.b = self.add_weight(
            shape=(self.units,), initializer="random_normal", trainable=True
        )

    def call(self, inputs):
        return tf.matmul(inputs, self.w) + self.b

    def get_config(self):
        config = super(Linear, self).get_config()
        config.update({"units": self.units})
        return config

layer = Linear(64)
config = layer.get_config()
print(config)
new_layer = Linear.from_config(config)
The output is:
{'name': 'linear_8', 'trainable': True, 'dtype': 'float32', 'units': 64}
You can play with this simple code. For example, remove config.update() in get_config() and see what happens. See this and this for more details; they are the Keras guides on the TensorFlow website.
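Once get_config is in place, a model containing the custom layer can be loaded back by passing the class via custom_objects (the file name here is just a placeholder):

# Make Keras aware of the custom layer when deserializing
model = keras.models.load_model(
    'model_with_linear.h5',
    custom_objects={'Linear': Linear},
)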
use model.predict before tf.saved_model.save
Actually, recreating the model with keras.models.load_model('path_to_my_model') didn't work for me.
First, we have to call save_weights on the built model:

model.save_weights('model_weights', save_format='tf')

Then we have to create a new instance of the subclassed model, compile it, run train_on_batch with one record, and call load_weights with the saved weights:
loaded_model = ThreeLayerMLP(name='3_layer_mlp')
loaded_model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
loaded_model.train_on_batch(x_train[:1], y_train[:1])
loaded_model.load_weights('model_weights')
This works perfectly in TensorFlow 2.2.0.
Based on this post, I need some basic implementation help. Below you see my model, which uses a Dropout layer. When using the noise_shape parameter, it happens that the last batch does not match the batch size, creating an error (see the other post).
Original model:
def LSTM_model(X_train, Y_train, dropout, hidden_units, MaskWert, batchsize):
    model = Sequential()
    model.add(Masking(mask_value=MaskWert, input_shape=(X_train.shape[1], X_train.shape[2])))
    model.add(Dropout(dropout, noise_shape=(batchsize, 1, X_train.shape[2])))
    model.add(Dense(hidden_units, activation='sigmoid', kernel_constraint=max_norm(max_value=4.)))
    model.add(LSTM(hidden_units, return_sequences=True, dropout=dropout, recurrent_dropout=dropout))
Now, Alexandre Passos suggested getting the runtime batch size with tf.shape. I tried to implement the runtime batch-size idea in Keras in different ways, but it never worked.
import keras.backend as K

def backend_shape(x):
    return K.shape(x)

def LSTM_model(X_train, Y_train, dropout, hidden_units, MaskWert, batchsize):
    batchsize = backend_shape(X_train)
    model = Sequential()
    ...
    model.add(Dropout(dropout, noise_shape=(batchsize[0], 1, X_train.shape[2])))
    ...
But that just gave me the static input tensor shape, not the runtime input tensor shape.
I also tried to use a Lambda layer:

def output_of_lambda(input_shape):
    return (input_shape)

def LSTM_model_2(X_train, Y_train, dropout, hidden_units, MaskWert, batchsize):
    model = Sequential()
    model.add(Lambda(output_of_lambda, outputshape=output_of_lambda))
    ...
    model.add(Dropout(dropout, noise_shape=(outputshape[0], 1, X_train.shape[2])))

And different variants. But as you already guessed, that did not work at all.
Is the model definition actually the correct place for this?
Could you give me a tip, or better, just tell me how to obtain the runtime batch size of a Keras model? Thanks so much.
The current implementation does adjust the noise shape according to the runtime batch size. From the Dropout layer implementation code:
symbolic_shape = K.shape(inputs)
noise_shape = [symbolic_shape[axis] if shape is None else shape
               for axis, shape in enumerate(self.noise_shape)]
So if you pass noise_shape=(None, 1, features), the shape will be (runtime_batchsize, 1, features), following the code above.
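In your model definition that would amount to a one-line change (a sketch of the same idea, using the variables from your LSTM_model):

# None in the batch axis is replaced by the runtime batch size at execution time
model.add(Dropout(dropout, noise_shape=(None, 1, X_train.shape[2])))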