How can I use tensorboard with tf.estimator.Estimator - python-3.x

I am considering moving my code base to tf.estimator.Estimator, but I cannot find an example of how to use it in combination with tensorboard summaries.
MWE:
import numpy as np
import tensorflow as tf

tf.logging.set_verbosity(tf.logging.INFO)

# Declare list of features, we only have one real-valued feature
def model(features, labels, mode):
    # Build a linear model and predict values
    W = tf.get_variable("W", [1], dtype=tf.float64)
    b = tf.get_variable("b", [1], dtype=tf.float64)
    y = W * features['x'] + b
    loss = tf.reduce_sum(tf.square(y - labels))
    # Summaries to display for TRAINING and TESTING
    tf.summary.scalar("loss", loss)
    tf.summary.image("X", tf.reshape(tf.random_normal([10, 10]), [-1, 10, 10, 1]))  # dummy, my inputs are images
    # Training sub-graph
    global_step = tf.train.get_global_step()
    optimizer = tf.train.GradientDescentOptimizer(0.01)
    train = tf.group(optimizer.minimize(loss), tf.assign_add(global_step, 1))
    return tf.estimator.EstimatorSpec(mode=mode, predictions=y, loss=loss, train_op=train)

estimator = tf.estimator.Estimator(model_fn=model, model_dir='/tmp/tf')

# define our data set
x = np.array([1., 2., 3., 4.])
y = np.array([0., -1., -2., -3.])
input_fn = tf.contrib.learn.io.numpy_input_fn({"x": x}, y, 4, num_epochs=1000)

for epoch in range(10):
    # train
    estimator.train(input_fn=input_fn, steps=100)
    # evaluate our model
    estimator.evaluate(input_fn=input_fn, steps=10)
How can I display my two summaries in tensorboard? Do I have to register a hook in which I use a tf.summary.FileWriter or something else?

EDIT:
Upon testing (in v1.1.0, and probably in later versions as well), it is apparent that tf.estimator.Estimator will automatically write summaries for you. I confirmed this using OP's code and tensorboard.
(Some poking around r1.4 leads me to conclude that this automatic summary writing occurs due to tf.train.MonitoredTrainingSession.)
Ultimately, the automatic summarizing is accomplished with the use of hooks, so if you wanted to customize the Estimator's default summarizing, you could do so using hooks. Below are the (edited) details from the original answer.
You'll want to use hooks, formerly known as monitors. (Linked is a conceptual/quickstart guide; the short of it is that the notion of hooking into / monitoring training is built into the Estimator API. A bit confusingly, though, it doesn't seem like the deprecation of monitors for hooks is really documented except in a deprecation annotation in the actual source code...)
Based on your usage, it looks like r1.2's SummarySaverHook fits your bill.
summary_hook = tf.train.SummarySaverHook(
    SAVE_EVERY_N_STEPS,
    output_dir='/tmp/tf',
    summary_op=tf.summary.merge_all())
You may want to customize the hook's initialization parameters, such as by providing an explicit SummaryWriter or writing every N seconds instead of every N steps.
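For instance, a minimal variant that writes every 120 seconds rather than every N steps could look like this (save_secs and save_steps are mutually exclusive parameters of SummarySaverHook; 120 is just an illustrative choice):

# Save summaries by wall-clock time instead of by step count.
summary_hook = tf.train.SummarySaverHook(
    save_secs=120,
    output_dir='/tmp/tf',
    summary_op=tf.summary.merge_all())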
If you pass this into the EstimatorSpec, you'll get your customized Summary behavior:
return tf.estimator.EstimatorSpec(mode=mode, predictions=y, loss=loss,
                                  train_op=train,
                                  training_hooks=[summary_hook])
EDIT NOTE:
A previous version of this answer suggested passing the summary_hook into estimator.train(input_fn=input_fn, steps=5, hooks=[summary_hook]). This does not work because tf.summary.merge_all() has to be called in the same context as your model graph.

For me this worked without adding any hooks or merge_all calls. I just added some tf.summary.image(...) in my model_fn and when I train the model they magically appear in tensorboard. Not sure what the exact mechanism is, however. I'm using TensorFlow 1.4.

estimator = tf.estimator.Estimator(model_fn=model, model_dir='/tmp/tf')
model_dir='/tmp/tf' means the estimator writes all logs to /tmp/tf. Then run tensorboard --logdir=/tmp/tf and open your browser at http://localhost:6006 to see the graphs.

You can create a SummarySaverHook with tf.summary.merge_all() as the summary_op in the model_fn itself. Pass this hook to the training_hooks param of the EstimatorSpec constructor in your model_fn.
I don't think what @jagthebeetle said is exactly applicable here: the hooks that you pass to the estimator.train method cannot be used for the summaries that you define in your model_fn, since those summaries won't be added to the merge_all op; they remain bound to the scope of model_fn.
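A minimal sketch of that suggestion, reusing names from the question (the ellipsis stands for the model-building code shown there, and save_steps=100 is just an illustrative choice):

def model(features, labels, mode):
    ...  # build y, loss and train exactly as in the question
    # merge_all() is called inside model_fn, so it sees this graph's summaries
    summary_hook = tf.train.SummarySaverHook(
        save_steps=100,
        output_dir='/tmp/tf',
        summary_op=tf.summary.merge_all())
    return tf.estimator.EstimatorSpec(mode=mode, predictions=y, loss=loss,
                                      train_op=train,
                                      training_hooks=[summary_hook])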

Related

ValueError: Tensor Tensor("dense_4/Sigmoid:0", shape=(?, 1025), dtype=float32) is not an element of this graph

Today I suddenly started getting this error for no apparent reason while running model.fit(). It used to work before; I am using TF 2.3.0, more specifically its Keras module.
The function is called on validation inside a generator, which is fed into model.predict().
Basically, I load a checkpoint, I resume training the network, and I make a prediction on validation.
The error keeps occurring even when training a model from scratch and erasing all the related data. It's as if something had been hardcoded somewhere, since I was able to run model.fit() up until a few hours ago.
I saw several solutions like THIS, but none of these variations really work for me, as they lead to more tricky error messages.
I even tried installing a different version of TF, thinking that this was due to some old version, but the error still occurs.
I will answer my own question, as this one was particularly tricky and none of the solutions I found on the internet worked for me, probably because they are outdated.
I'll write down just the relevant part to add to the code; feel free to add more technical explanations.
I like using args for passing variables, but it can work without it:
import tensorflow as tf
from tensorflow.python.keras.backend import set_session
from tensorflow.keras.models import load_model
import generator  # custom generator

def main(args):
    # open new session and define TF graph
    args.sess = tf.compat.v1.Session()
    args.graph = tf.compat.v1.get_default_graph()
    set_session(args.sess)
    # define training generator
    train_generator = generator(args.train_data)
    # load model
    args.model = load_model(args.model_path)
    args.model.fit(train_generator)
Then, in the model prediction function:
# In my specific case, the predict_output() function is
# called inside the generator function
def predict_output(args, x):
    with args.graph.as_default():
        set_session(args.sess)
        y = args.model.predict(x)
    return y

Why are input data transferred to Variable type?

I'm reading code that implements YOLOv3 in PyTorch, and I came across a line like this:
for batch_i, (_, imgs, targets) in enumerate(dataloader):
    batches_done = len(dataloader) * epoch + batch_i
    imgs = Variable(imgs.to(device))  # ??
    targets = Variable(targets.to(device), requires_grad=False)
imgs is the input data, and I can't understand why this transform exists: Variable(imgs.to(device)).
Does this mean that the input data should be trained (since the default option is requires_grad=True), or is there another reason?
As Natthaphon pointed out in his comment, I don't really see how the calls to Variable make any sense in this scenario.
Technically, a Variable automatically becomes part of the computational graph, so maybe it was written by someone coming over from TensorFlow or with visualization of the complete computational graph in mind.
If you read the docs here, the Variable API has been deprecated.
Hence, we should not bother wrapping tensors in Variable anymore. The wrapper still works in the latest torch versions, but it simply returns a plain tensor, so you can drop it entirely.
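As a minimal sketch of the modern equivalent (assuming PyTorch >= 0.4, where Tensor and Variable were merged), the loop from the question reduces to:

for batch_i, (_, imgs, targets) in enumerate(dataloader):
    batches_done = len(dataloader) * epoch + batch_i
    imgs = imgs.to(device)        # a plain tensor; no Variable wrapper needed
    targets = targets.to(device)  # requires_grad is already False by default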

example of doing simple prediction with pytorch-lightning

I have an existing model where I load some pre-trained weights and then do prediction (one image at a time) in pytorch. I am trying to basically convert it to a pytorch lightning module and am confused about a few things.
So currently, my __init__ method for the model looks like this:
self._load_config_file(cfg_file)
# just creates the pytorch network
self.create_network()
self.load_weights(weights_file)
self.cuda(device=0) # assumes GPU and uses one. This is probably suboptimal
self.eval() # prediction mode
From what I can gather from the lightning docs, I can pretty much do the same, except without the cuda() call. So something like:
self.create_network()
self.load_weights(weights_file)
self.freeze() # prediction mode
So, my first question is: is this the correct way to use lightning? How would lightning know if it needs to use the GPU? I am guessing this needs to be specified somewhere.
Now, for the prediction, I have the following setup:
def infer(frame):
    img = transform(frame)  # apply some transformation to the input
    img = torch.from_numpy(img).float().unsqueeze(0).cuda(device=0)
    with torch.no_grad():
        output = self.__call__(Variable(img)).data.cpu().numpy()
    return output
This is the bit that has me confused. Which functions do I need to override to make a lightning compatible prediction?
Also, at the moment, the input comes as a numpy array. Is that something that would be possible from the lightning module or do things always have to use some sort of a dataloader?
At some point I want to extend this model implementation to do training as well, so I want to make sure I do it right. But while most examples focus on training models, a simple example of just doing prediction at production time on a single image/data point would be useful.
I am using pytorch-lightning 0.7.5 with pytorch 1.4.0 on GPU with cuda 10.1.
LightningModule is a subclass of torch.nn.Module so the same model class will work for both inference and training. For that reason, you should probably call the cuda() and eval() methods outside of __init__.
Since it's just an nn.Module under the hood, once you've loaded your weights you don't need to override any methods to perform inference; simply call the model instance. Here's a toy example you can use:
import torchvision.models as models
from pytorch_lightning.core import LightningModule

class MyModel(LightningModule):
    def __init__(self):
        super().__init__()
        self.resnet = models.resnet18(pretrained=True, progress=False)

    def forward(self, x):
        return self.resnet(x)

model = MyModel().eval().cuda(device=0)
And then to actually run inference you don't need a method, just do something like:
for frame in video:
    img = transform(frame)
    img = torch.from_numpy(img).float().unsqueeze(0).cuda(0)
    output = model(img).data.cpu().numpy()
    # Do something with the output
The main benefit of PyTorch Lightning is that you can also use the same class for training by implementing training_step(), configure_optimizers() and train_dataloader() on that class. You can find a simple example of that in the PyTorch Lightning docs.
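A hedged sketch of those additions, extending the toy class above (my_dataset, the batch size and the learning rate are placeholder choices; the method names are the actual Lightning hooks, and in recent versions training_step can simply return the loss tensor):

import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader

class MyModel(LightningModule):
    # __init__ and forward as in the toy example above

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = F.cross_entropy(self(x), y)  # classification loss on the resnet logits
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

    def train_dataloader(self):
        return DataLoader(my_dataset, batch_size=32)  # my_dataset: your Dataset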
Even though the above answer suffices, note the following line:
img = torch.from_numpy(img).float().unsqueeze(0).cuda(0)
One has to put both the model and the image on the right GPU. On a multi-GPU inference machine, this becomes a hassle.
To solve this, .predict was also recently introduced; see more at https://pytorch-lightning.readthedocs.io/en/stable/deploy/production_basic.html
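A brief sketch of that route, assuming a recent Lightning version where Trainer.predict is available (predict_loader is a placeholder for your DataLoader):

import pytorch_lightning as pl

# The Trainer moves the model and each batch to the right device for you.
trainer = pl.Trainer(accelerator="gpu", devices=1)
predictions = trainer.predict(model, dataloaders=predict_loader)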

Adversarial images in TensorFlow

I am reading an article that explains how to trick neural networks into predicting any image you want. I am using the mnist dataset.
The article provides a relatively detailed walk through but the person who wrote it is using Caffe.
Anyway, my first step was to create a logistic regression model using TensorFlow, trained on the mnist dataset. So, once I restore the logistic regression model, I can use it to predict any image. For example, I feed the number 7 to the following model...
with tf.Session() as sess:
    saver.restore(sess, "/tmp/model.ckpt")
    # number 7
    x_in = np.expand_dims(mnist.test.images[0], axis=0)
    classification = sess.run(tf.argmax(pred, 1), feed_dict={x: x_in})
    print(classification)
>>> [7]
This prints out the number [7] which is correct.
Now the article explains that in order to break a neural network, we need to calculate the gradient of the neural network, i.e. the derivative of the network's output with respect to the input image.
The article states that to calculate the gradient, we first need to pick an intended outcome to move towards, and set the output probability list to be 0 everywhere, and 1 for the intended outcome. Backpropagation is an algorithm for calculating the gradient.
Then there's code provided in Caffe as to how to calculate the gradient...
def compute_gradient(image, intended_outcome):
    # Put the image into the network and make the prediction
    predict(image)
    # Get an empty set of probabilities
    probs = np.zeros_like(net.blobs['prob'].data)
    # Set the probability for our intended outcome to 1
    probs[0][intended_outcome] = 1
    # Do backpropagation to calculate the gradient for that outcome
    # and the image we put in
    gradient = net.backward(prob=probs)
    return gradient['data'].copy()
Now, my issue is that I'm having a hard time understanding how this function is able to get the gradient by feeding just the image and the probabilities. Because I do not fully understand this code, I am having a hard time translating this logic to TensorFlow.
I think I am confused as to how the Caffe framework works because I've never seen/used it before. If someone could explain how this logic works step-by-step that would be great.
I already know the basics of Backpropagation so you may assume I already know how it works.
Here is a link to the article itself...https://codewords.recurse.com/issues/five/why-do-neural-networks-think-a-panda-is-a-vulture
I'm going to show you how to do the basics of generating an adversarial image in TF; to apply that to an already-learned model you might need some adaptations.
The code blocks work well as blocks in a Jupyter notebook if you want to try this out interactively. If you don't use a notebook, you'll need to add plt.show() calls for the plots to show and remove the matplotlib inline statement. The code is basically the simple MNIST tutorial from the TF documentation, I'll point out the important differences.
First block is just setup, nothing special ...
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
# if you're not using jupyter notebooks then comment this out
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf
Get MNIST data (it is down from time to time so you might need to download it from web.archive.org manually and put it into that directory). We're not using one hot encoding like in the tutorial because by now TF has nicer functions to calculate the loss that don't need the one hot encoding anymore.
mnist = input_data.read_data_sets('/tmp/tensorflow/mnist/input_data')
In the next block we are doing something "special". The input image tensor is defined as a variable because later we want to optimize with regard to the input image. Usually you would have a placeholder here. It does limit us a bit here because we need a definite shape so we only feed in one example at a time. Not something you want to do in production, but for teaching purposes it's fine (and you can get around it with a little more code). Labels are placeholders like normal.
input_images = tf.get_variable("input_image", shape=[1,784], dtype=tf.float32)
input_labels = tf.placeholder(shape=[1], name='input_label', dtype=tf.int32)
Our model is a standard logistic regression model like in the tutorial. We only use the softmax for visualization of results, the loss function takes plain logits.
W = tf.get_variable("weights", shape=[784, 10], dtype=tf.float32, initializer=tf.random_normal_initializer())
b = tf.get_variable("biases", shape=[1, 10], dtype=tf.float32, initializer=tf.zeros_initializer())
logits = tf.matmul(input_images, W) + b
softmax = tf.nn.softmax(logits)
The loss is standard cross entropy. What's to note in the training step is that there is an explicit list of variables passed in - we have defined the input image as a training variable but we don't want to try optimizing the image while training the logistic regression, just weights and biases - so we explicitly state that.
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=input_labels, name='xentropy')
mean_loss = tf.reduce_mean(loss)
train_step = tf.train.AdamOptimizer(learning_rate=0.1).minimize(mean_loss, var_list=[W, b])
Start the session ...
sess = tf.Session()
sess.run(tf.global_variables_initializer())
Training is slower than it should be because of batch size 1. Like I said, not something you want to do in production, but this is just for teaching the basics ...
for step in range(10000):
    batch_xs, batch_ys = mnist.train.next_batch(1)
    loss_v, _ = sess.run([mean_loss, train_step],
                         feed_dict={input_images: batch_xs, input_labels: batch_ys})
At this point we should have a model that is good enough to demonstrate how to generate an adversarial image. First, we get an image that has label '2' because these are easy so even our suboptimal classifier should get them right (if it doesn't, run this cell again ;) this step is random so I can't guarantee that it'll work).
We're setting our input image variable to that example.
sample_label = -1
while sample_label != 2:
    sample_image, sample_label = mnist.test.next_batch(1)
sample_label
plt.imshow(sample_image.reshape(28, 28),cmap='gray')
# assign image to var
sess.run(tf.assign(input_images, sample_image));
sess.run(softmax) # now using the variable as input, no feed dict
# should show something like
# array([[ 0., 0., 1., 0., 0., 0., 0., 0., 0., 0.]], dtype=float32)
# With the third entry being the highest by far.
Now we are going to "break" the classification. We want to change the image to make it look more like another number, in the eyes of the network, without changing the network itself. To do that, the code looks basically identical to what we had before. We define a "fake" label, the same loss as before (cross entropy) and we get an optimizer to minimize the fake loss, but this time with a var_list consisting of only the input image - so we won't change the logistic regression weights:
fake_label = tf.placeholder(tf.int32, shape=[1])
fake_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=fake_label)
adversarial_step = tf.train.GradientDescentOptimizer(learning_rate=1e-3).minimize(fake_loss, var_list=[input_images])
The next block is intended to be run interactively multiple times, while you see the image and the scores changing (here moving towards a label of 8):
sess.run(adversarial_step, feed_dict={fake_label:np.array([8])})
plt.imshow(sess.run(input_images).reshape(28,28),cmap='gray')
sess.run(softmax)
The first time you run this block, the scores will probably still heavily point towards 2, but it will change over time and after a couple runs you should see something like the following image - note that the image still looks like a 2 with some noise in the background, but the score for "2" is at around 3% while the score for "8" is at over 96%.
Note that we never actually computed the gradient explicitly - we don't need to, the TF optimizer takes care of computing gradients and applying updates to the variables. If you want to get the gradient, you can do so by using tf.gradients(fake_loss, input_images).
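For example, a short sketch using the tensors defined above:

# Gradient of the fake loss with respect to the input image variable.
grad = tf.gradients(fake_loss, [input_images])[0]
grad_value = sess.run(grad, feed_dict={fake_label: np.array([8])})
print(grad_value.shape)  # (1, 784): one entry per input pixel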
The same pattern works for more complicated models, but what you'll want to do is train your model as normal, using placeholders with bigger batches or a pipeline with TF readers, and when you want the adversarial image, recreate the network with the input image variable as an input. As long as all the variable names remain the same (which they should if you use the same functions to build the network), you can restore from your network checkpoint and then apply the steps from this post to get an adversarial image. You might need to play around with learning rates and such.
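As a rough sketch of that recreate-and-restore step (build_network is a hypothetical function that rebuilds your graph, and the checkpoint path is a placeholder):

# Rebuild the network with the image as a variable instead of a placeholder.
adv_image = tf.get_variable("input_image", shape=[1, 784], dtype=tf.float32)
logits = build_network(adv_image)  # hypothetical; must reuse the trained variable names
# Restore only the trained weights; the new image variable is not in the checkpoint.
restore_vars = [v for v in tf.global_variables() if v is not adv_image]
saver = tf.train.Saver(var_list=restore_vars)
saver.restore(sess, "/path/to/checkpoint")  # placeholder path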

Using a transformer (estimator) to transform the target labels in sklearn.pipeline

I understand that one can chain several estimators that implement the transform method to transform X (the feature set) in sklearn.pipeline. However, I have a use case where I would also like to transform the target labels (like transforming the labels to [1, K] instead of [0, K-1]), and I would love to do that as a component in my pipeline. Is it possible to do that at all using sklearn.pipeline?
There is now a nicer way to do this built into scikit-learn: using compose.TransformedTargetRegressor.
When constructing these objects you give them a regressor and a transformer. When you .fit() them they transform the targets before regressing, and when you .predict() them they transform their predicted targets back to the original space.
It's important to note that you can pass them a pipeline object, so they should interface nicely with your existing setup. For example, take the following setup where I train a ridge regression to predict 1 target given 2 features:
# Imports
import numpy as np
from sklearn import compose, linear_model, metrics, pipeline, preprocessing

# Generate some training and test features and targets
X_train = np.random.rand(200).reshape(100, 2)
y_train = 1.2 * X_train[:, 0] + 3.4 * X_train[:, 1] + 5.6
X_test = np.random.rand(20).reshape(10, 2)
y_test = 1.2 * X_test[:, 0] + 3.4 * X_test[:, 1] + 5.6

# Define my model and scalers
ridge = linear_model.Ridge(alpha=1e-2)
scaler = preprocessing.StandardScaler()
minmax = preprocessing.MinMaxScaler(feature_range=(-1, 1))

# Construct a pipeline using these methods
pipe = pipeline.make_pipeline(scaler, ridge)

# Construct a TransformedTargetRegressor using this pipeline
# ** So far the set-up has been standard **
regr = compose.TransformedTargetRegressor(regressor=pipe, transformer=minmax)

# Fit and predict with regr like you would a pipeline
regr.fit(X_train, y_train)
y_pred = regr.predict(X_test)
print("MAE: {}".format(metrics.mean_absolute_error(y_test, y_pred)))
This still isn't quite as smooth as I'd like it to be. For example, you can access the regressor contained by a TransformedTargetRegressor using .regressor_, but the coefficients stored there are untransformed. This means there are some extra hoops to jump through if you want to work your way back to the equation that generated the data.
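For example, a short sketch using the objects defined above:

# The fitted clone of `pipe` is stored on the TransformedTargetRegressor.
inner_pipe = regr.regressor_
ridge_fitted = inner_pipe.named_steps['ridge']  # step name assigned by make_pipeline
print(ridge_fitted.coef_, ridge_fitted.intercept_)  # still in the scaled spaces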
No, pipelines will always pass y through unchanged. Do the transformation outside the pipeline.
(This is a known design flaw in scikit-learn, but it's never been pressing enough to change or extend the API.)
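A minimal sketch of doing that outside the pipeline, reusing the names from the earlier answer and the label-shift example from the question:

# Shift labels from [0, K-1] to [1, K] before fitting, and shift back after.
y_shifted = y_train + 1
pipe.fit(X_train, y_shifted)
y_pred = pipe.predict(X_test) - 1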
You could add the label column to the end of the training data, apply your transformation, and then delete that column before training your model. That's not very professional, but it's enough.
