FailedPreconditionError: Resource localhost/_AnonymousVar404/N10tensorflow3VarE does not exist

I'm a beginner in deep learning and Python.
I tried to run a Keras R-FCN implementation in Google Colab and got an error like this:
tensorflow.python.framework.errors_impl.FailedPreconditionError: 2 root error(s) found.
(0) Failed precondition: Error while reading resource variable _AnonymousVar404 from Container: localhost. This could mean that the variable was uninitialized. Not found: Resource localhost/_AnonymousVar404/N10tensorflow3VarE does not exist.
[[node regr_vote/crop_to_bounding_box_7/stack/ReadVariableOp_1 (defined at usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:3009) ]]
(1) Cancelled: Function was cancelled before it was started
0 successful operations.
0 derived errors ignored. [Op:__inference_keras_scratch_graph_16906]
I think something used by the crop_to_bounding_box function is uninitialized.
This is the code that calls crop_to_bounding_box:
# position-sensitive ROI pooling + classify
score_map_bins = []
for channel_step in range(self.k * self.k):
    bin_x = K.variable(int(channel_step % self.k) * self.pool_shape, dtype='int32')
    print(bin_x)
    bin_y = K.variable(int(channel_step / self.k) * self.pool_shape, dtype='int32')
    channel_indices = K.variable(
        list(range(channel_step * self.channel_num,
                   (channel_step + 1) * self.channel_num)),
        dtype='int32')
    croped = tf.image.crop_to_bounding_box(
        tf.gather(pooled, indices=channel_indices, axis=-1),
        bin_y, bin_x, self.pool_shape, self.pool_shape)
    # [pool_shape, pool_shape, channel_num] ==> [1, 1, channel_num] ==> [1, channel_num]
    croped_mean = K.pool2d(croped, (self.pool_shape, self.pool_shape),
                           strides=(1, 1), padding='valid',
                           data_format="channels_last", pool_mode='avg')
    # [batch * num_rois, 1, 1, channel_num] ==> [batch * num_rois, 1, channel_num]
    croped_mean = K.squeeze(croped_mean, axis=1)
    score_map_bins.append(croped_mean)
I'm using tensorflow-GPU v2.1.0 and Keras 2.3.1
UPDATE:
This is probably because I create K.variable inside the loop; if I comment out the K.variable calls, the code works perfectly, but I need the value of K.variable to change based on channel_step.
How can I define K.variable outside the loop and update its value inside the loop based on channel_step?
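One workaround sketch (untested against this repo, and it sidesteps the variable rather than updating it in place): since bin_x, bin_y and channel_indices depend only on the Python-level loop counter, they can be graph constants instead of variables, which avoids the uninitialized-variable read entirely:
# Sketch: K.constant instead of K.variable; these values are fixed per
# iteration and never trained or reassigned, so no initialization is needed.
for channel_step in range(self.k * self.k):
    bin_x = K.constant(int(channel_step % self.k) * self.pool_shape, dtype='int32')
    bin_y = K.constant(int(channel_step / self.k) * self.pool_shape, dtype='int32')
    channel_indices = K.constant(
        list(range(channel_step * self.channel_num,
                   (channel_step + 1) * self.channel_num)),
        dtype='int32')
    # ... rest of the loop body unchanged ...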

I found the solution to this problem: I just needed to downgrade my Keras and TensorFlow versions. Now I'm using tensorflow-gpu 1.15.0 and Keras 2.2.4. Apparently, this code does not support TensorFlow 2 and later.
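For reference, the downgrade in Colab is a single pip call (versions as above):
pip install tensorflow-gpu==1.15.0 keras==2.2.4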

I fixed this by changing the load function. If you are loading your model from a .h5 or .pb file,
change it from this:
model = tf.saved_model.load(self.filename, [tf.saved_model.SERVING])
to this:
from tensorflow.python.keras.models import load_model
model = load_model(self.filename)
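On TF 2.x the same loader is also exposed through the public tf.keras API, which avoids importing from the private tensorflow.python package (equivalent call, assuming self.filename points at a Keras .h5 model):
import tensorflow as tf
model = tf.keras.models.load_model(self.filename)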

Related

caffe2 inference of an ONNX model raises IndexError: Input 475 is undefined

Error:
This happens when I use caffe2 with a pretrained model. The model is from https://github.com/onnx/models/blob/master/vision/classification/vgg/model/vgg16-7.onnx
The model I use is the pretrained model; I do not change the model and do not use https://github.com/onnx/optimizer
The code is:
import onnx
import caffe2.python.onnx.backend

model = onnx.load("vgg16-7.onnx")
prepared_backend = caffe2.python.onnx.backend.prepare(model)
then an error happend:
WARNING:root:This caffe2 python run failed to load cuda module:No module named 'caffe2.python.caffe2_pybind11_state_gpu',and AMD hip module:No module named 'caffe2.python.caffe2_pybind11_state_hip'.Will run in CPU only mode.
WARNING: ONNX Optimizer has been moved to https://github.com/onnx/optimizer.
All further enhancements and fixes to optimizers will be done in this new repo.
The optimizer code in onnx/onnx repo will be removed in 1.9 release.
Traceback (most recent call last):
File "test.py", line 20, in
init_net, predict_net = c2.onnx_graph_to_caffe2_net(onnx_model_proto)
File "/home/eeodev/.local/lib/python3.6/site-packages/caffe2/python/onnx/backend.py", line 921, in onnx_graph_to_caffe2_net
return cls._onnx_model_to_caffe2_net(model, device=device, opset_version=opset_version, include_initializers=True)
File "/home/eeodev/.local/lib/python3.6/site-packages/caffe2/python/onnx/backend.py", line 876, in _onnx_model_to_caffe2_net
onnx_model = onnx.utils.polish_model(onnx_model)
File "/usr/local/lib64/python3.6/site-packages/onnx/utils.py", line 24, in polish_model
model = onnx.optimizer.optimize(model)
File "/usr/local/lib64/python3.6/site-packages/onnx/optimizer.py", line 55, in optimize
optimized_model_str = C.optimize(model_str, passes)
IndexError: Input 475 is undefined!
Who can tell me the solution?
Also, if it is a PyTorch model, when converting to an ONNX model we can use torch.onnx.export(model, input, 'model.onnx', verbose=True, keep_initializers_as_inputs=True); with keep_initializers_as_inputs=True, loading the model with caffe2 does not hit this error. But the model I use is a pretrained ONNX model; how can I apply this method?
I believe it is related to the IR gap issue: https://github.com/onnx/onnx/issues/2902.
Currently the deprecated ONNX optimizer in the ONNX repo cannot deal with an ONNX model whose IR_VERSION >= 4 if the initializers are not included in the model's input.
The workaround is to use the following script to let your model include inputs from initializers (contributed by @TMVector on GitHub):
def add_value_info_for_constants(model: onnx.ModelProto):
    """
    Currently onnx.shape_inference doesn't use the shape of initializers, so add
    that info explicitly as ValueInfoProtos.
    Mutates the model.
    Args:
        model: The ModelProto to update.
    """
    # All (top-level) constants will have ValueInfos before IRv4 as they are all inputs
    if model.ir_version < 4:
        return

    def add_const_value_infos_to_graph(graph: onnx.GraphProto):
        inputs = {i.name for i in graph.input}
        existing_info = {vi.name: vi for vi in graph.value_info}
        for init in graph.initializer:
            # Check it really is a constant, not an input
            if init.name in inputs:
                continue

            # The details we want to add
            elem_type = init.data_type
            shape = init.dims

            # Get existing or create new value info for this constant
            vi = existing_info.get(init.name)
            if vi is None:
                vi = graph.value_info.add()
                vi.name = init.name

            # Even though it would be weird, we will not overwrite info even if it doesn't match
            tt = vi.type.tensor_type
            if tt.elem_type == onnx.TensorProto.UNDEFINED:
                tt.elem_type = elem_type
            if not tt.HasField("shape"):
                # Ensure we set an empty list if the const is scalar (zero dims)
                tt.shape.dim.extend([])
                for dim in shape:
                    tt.shape.dim.add().dim_value = dim

        # Handle subgraphs
        for node in graph.node:
            for attr in node.attribute:
                # Ref attrs refer to other attrs, so we don't need to do anything
                if attr.ref_attr_name != "":
                    continue

                if attr.type == onnx.AttributeProto.GRAPH:
                    add_const_value_infos_to_graph(attr.g)
                if attr.type == onnx.AttributeProto.GRAPHS:
                    for g in attr.graphs:
                        add_const_value_infos_to_graph(g)

    return add_const_value_infos_to_graph(model.graph)
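A usage sketch under the same setup as the question (file name from above), patching the model before handing it to caffe2:
import onnx
import caffe2.python.onnx.backend

model = onnx.load("vgg16-7.onnx")
add_value_info_for_constants(model)  # expose initializer shapes/types as value infos
prepared_backend = caffe2.python.onnx.backend.prepare(model)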

Cannot export PyTorch model to ONNX

I am trying to convert a pre-trained torch model to ONNX, but receive the following error:
RuntimeError: step!=1 is currently not supported
I'm trying this on a pre-trained colorization model: https://github.com/richzhang/colorization
Here is the code I ran in Google Colab:
!git clone https://github.com/richzhang/colorization.git
cd colorization/
import torch
import colorizers
model = colorizer_siggraph17 = colorizers.siggraph17(pretrained=True).eval()
input_names = ["input"]
output_names = ["output"]
dummy_input = torch.randn(1, 1, 256, 256, device='cpu')
torch.onnx.export(model, dummy_input, "test_converted_model.onnx", verbose=True,
                  input_names=input_names, output_names=output_names)
I appreciate any help :)
UPDATE 1: @Proko's suggestion solved the ONNX export issue. Now I have a new, possibly related problem when I try to convert the ONNX model to TensorRT. I get the following error:
[TensorRT] ERROR: Network must have at least one output
Here is the code I used:
import torch
import pycuda.driver as cuda
import pycuda.autoinit
import tensorrt as trt
import onnx

TRT_LOGGER = trt.Logger()

def build_engine(onnx_file_path):
    # initialize TensorRT engine and parse ONNX model
    builder = trt.Builder(TRT_LOGGER)
    builder.max_workspace_size = 1 << 25
    builder.max_batch_size = 1
    if builder.platform_has_fast_fp16:
        builder.fp16_mode = True
    network = builder.create_network()
    parser = trt.OnnxParser(network, TRT_LOGGER)

    # parse ONNX
    with open(onnx_file_path, 'rb') as model:
        print('Beginning ONNX file parsing')
        parser.parse(model.read())
    print('Completed parsing of ONNX file')

    # generate TensorRT engine optimized for the target platform
    print('Building an engine...')
    engine = builder.build_cuda_engine(network)
    context = engine.create_execution_context()
    print("Completed creating Engine")
    return engine, context

ONNX_FILE_PATH = 'siggraph17.onnx'  # Exported using the code above
engine, _ = build_engine(ONNX_FILE_PATH)
I tried to force the build_engine function to use the output of the network with:
network.mark_output(network.get_layer(network.num_layers - 1).get_output(0))
but it did not work.
I appreciate any help!
Like I have mentioned in a comment, this is because slicing in torch.onnx supports only step = 1, but there is 2-step slicing in the model:
self.model2(conv1_2[:,:,::2,::2])
Your only option as of now is to rewrite the slicing as other ops. You can do it by using range and reshape to obtain proper indices. Consider the following "step-less-arange" function (I hope it is generic enough for anyone with a similar problem):
def sla(x, step):
    diff = x % step
    x += (diff > 0) * (step - diff)  # add length to be able to reshape properly
    return torch.arange(x).reshape((-1, step))[:, 0]
usage:
>>> sla(11, 3)
tensor([0, 3, 6, 9])
Now you can replace every slice like this:
conv2_2 = self.model2(conv1_2[:,:,self.sla(conv1_2.shape[2], 2),:][:,:,:, self.sla(conv1_2.shape[3], 2)])
NOTE: you should optimize it. Indices are calculated for every call so it might be wise to pre-compute it.
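For instance, a hedged sketch of that pre-computation using a small cache (sla itself is from above; the cache wrapper is illustrative):
from functools import lru_cache

@lru_cache(maxsize=None)
def sla_cached(x, step):
    # indices are built once per (length, step) pair and then reused
    return sla(x, step)

conv2_2 = self.model2(
    conv1_2[:, :, sla_cached(conv1_2.shape[2], 2), :]
           [:, :, :, sla_cached(conv1_2.shape[3], 2)])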
I have tested it with my fork of the repo and I was able to save the model:
https://github.com/prokotg/colorization
What worked for me was to add opset_version=11 to torch.onnx.export.
First I tried opset_version=10, but the API suggested 11, and that works.
So your call should be:
torch.onnx.export(model, dummy_input, "test_converted_model.onnx", verbose=True,
                  opset_version=11, input_names=input_names, output_names=output_names)

Is it possible to replace keras Lambda layers with native CoreML ones to convert the model?

For the first time I faced the need to convert my Keras model to Core ML. This can be done via the coremltools package:
import coremltools
import keras

model = Model(...)  # keras
coreml_model = coremltools.converters.keras.convert(
    model,
    input_names="input_image_NHWC",
    output_names="output_image_NHWC",
    image_scale=1.0,
    model_precision='float32',
    use_float_arraytype=True,
    custom_conversion_functions={"Lambda": convert_lambda},
    input_name_shape_dict={'input_image_NHWC': [None, 384, 384, 3]}
)
However, I have two Lambda layers, where the first one is depth-to-space (pixel shuffle) and the other one scales its input:
def tf_upsampler(x):
    return tf.nn.depth_to_space(x, 4)

def mulfunc(x, beta=0.2):
    return beta * x
...
x = Lambda(tf_upsampler)(x)
...
x = Lambda(mulfunc)(x)
The only advice I found was, as far as I understand, to use custom layers, which then need to be implemented in Swift code later. Something like this, with MyPixelShuffle and MyScaleLayer to be implemented somehow as classes in the Xcode project (?):
def convert_lambda(layer):
    # Only convert this Lambda layer if it wraps one of our functions.
    if layer.function == tf_upsampler:
        params = NeuralNetwork_pb2.CustomLayerParams()
        # The name of the Swift or Obj-C class that implements this layer.
        params.className = "MyPixelShuffle"
        # The description is shown in Xcode's mlmodel viewer.
        params.description = "pixelshuffle"
        params.parameters["blockSize"].intValue = 4
        return params
    elif layer.function == mulfunc:
        # https://stackoverflow.com/questions/47987777/custom-layer-with-two-parameters-function-on-core-ml
        params = NeuralNetwork_pb2.CustomLayerParams()
        # The name of the Swift or Obj-C class that implements this layer.
        params.className = "MyScaleLayer"
        # HERE!! This is important.
        params.parameters["scale"].doubleValue = 0.2
        # The description is shown in Xcode's mlmodel viewer.
        params.description = "multiplication by constant"
        return params
However, I found that Core ML actually has the layers I need; they can be found as ScaleLayer and ReorganizeDataLayer.
How can I use those native layers to replace the Lambdas in the Keras model? Is it possible to edit the Core ML protobuf for the network? Or, if there are Swift/Obj-C classes for them, what are they called?
Can it be done by deleting/adding layers with coremltools.models.neural_network.NeuralNetworkBuilder?
UPDATE:
I found that the Keras converter actually invokes the neural network builder to add different layers. The builder has the layer builder.add_reorganize_data that I need. Now the question is how to replace the custom layers in the model. I can load the model into the builder and inspect the layers:
coreml_model_path = 'mymodel.mlmodel'
spec = coremltools.models.utils.load_spec(coreml_model_path)
builder = coremltools.models.neural_network.NeuralNetworkBuilder(spec=spec)
builder.inspect_layers(last=10)
[Id: 417], Name: lambda_10 (Type: custom)
Updatable: False
Input blobs: ['up1_output']
Output blobs: ['lambda_10_output']
It's much simpler to do something like this:
def convert_lambda(layer):
    if layer.function == tf_upsampler:
        params = NeuralNetwork_pb2.ReorganizeDataLayerParams()
        params.fillInTheOtherPropertiesHere = someValue
        return params
    # ...etc...
In other words, you don't have to return a custom layer if some existing layer type already does what you want.
Okay, it seems I have found a way to go. I created a virtual environment with a separate copy of coremltools and edited the _convert() method in _keras2_converter.py by adding the following code:
for iter, layer in enumerate(graph.layer_list):
    keras_layer = graph.keras_layer_map[layer]
    print("%d : %s, %s" % (iter, layer, keras_layer))
    if isinstance(keras_layer, _keras.layers.wrappers.TimeDistributed):
        keras_layer = keras_layer.layer
    converter_func = _get_layer_converter_fn(keras_layer, add_custom_layers)
    input_names, output_names = graph.get_layer_blobs(layer)
    # this may be none if we're using custom layers
    if converter_func:
        converter_func(builder, layer, input_names, output_names,
                       keras_layer, respect_trainable)
    else:
        if _is_activation_layer(keras_layer):
            import six
            if six.PY2:
                layer_name = keras_layer.activation.func_name
            else:
                layer_name = keras_layer.activation.__name__
        else:
            layer_name = type(keras_layer).__name__
        if layer_name in custom_conversion_functions:
            custom_spec = custom_conversion_functions[layer_name](keras_layer)
        else:
            custom_spec = None

        if layer.find('tf_up') != -1:
            print('TF_UPSCALE found')
            builder.add_reorganize_data(layer, input_names[0], output_names[0],
                                        mode='DEPTH_TO_SPACE', block_size=4)
        elif layer.find('mulfunc') != -1:
            print('SCALE found')
            builder.add_scale(layer, W=0.2, b=0, has_bias=False,
                              input_name=input_names[0], output_name=output_names[0])
        else:
            builder.add_custom(layer, input_names, output_names, custom_spec)
The trigger is the layer name. In Keras I use model.load_weights(by_name=True) and the following naming of my Lambdas:
x = Lambda(mulfunc, name=scope + 'mulfunc')(x)
x = Lambda(tf_upsampler, name='tf_up')(x)
Now the model at least has the layers I need:
[Id: 417], Name: tf_up (Type: reorganizeData)
Updatable: False
Input blobs: ['up1_output']
Output blobs: ['tf_up_output']
Now it's time to validate on my VBox MacOS, will post what I've got.
UPDATE:
I don't see errors regarding my replaced Lambda layers, but I have another one that does not allow me to predict:
Layer 'concatenate_1' type 320 has 1 inputs but expects at least 2
I think it's related to the fact that Keras passes a single input to the concatenate layer, which is a list of inputs. Looking into how to fix this.
UPDATE 2:
I tried to hotfix this by using the builder's add_concat_nd(self, name, input_names, output_name, axis) function for my concatenate layers, but got an error during inference that this layer type is unsupported (?!):
if converter_func:
    if layer.find('concatenate') != -1:
        print('CONCATENATE FOUND')
        builder.add_concat_nd(layer, input_names, output_names[0], axis=3)
    else:
        converter_func(builder, layer, input_names, output_names,
                       keras_layer, respect_trainable)
Unsupported layer type (CoreML.Specification.NeuralNetworkLayer)
UPDATE 4:
I found a fix for this and changed the way the builder is initialized:
builder = _NeuralNetworkBuilder(input_features, output_features, mode=mode,
                                use_float_arraytype=use_float_arraytype,
                                disable_rank5_shape_mapping=True)
Now the error message is gone, but I have problems with the Xcode version: the model has version 4, while my Xcode supports 3. I will look into updating my VM.
The "CoreML survival guide" PDF suggests in this case:
pip install -U git+https://github.com/apple/coremltools.git
For example, if loading a model using coremltools gives an error such as the following, then
try installing the latest coremltools directly from the GitHub repo.
Error compiling model: "Error reading protobuf spec. validator error:
The .mlmodel supplied is of version 3, intended for a newer version of
Xcode. This version of Xcode supports model version 2 or earlier.
UPDATE:
Updated from git. Now I have this error:
No module named 'coremltools.libcoremlpython'
Looks like the latest git is broken :(
Damn, it seems I need macOS 10.15 and Xcode 11.
UPDATE 5:
Still fighting the bug on 10.15. I found that coremltools somehow deduplicates inputs to the Concatenate layer, so if you have something like Concatenate()([x, x]) in your Keras code, you will get a concatenate layer with one input in Core ML, and an error. To fix it, I further modified the code above:
if layer.find('concatenate') != -1:
    print('CONCATENATE FOUND', len(input_names))
    if len(input_names) == 1:
        input_names = [input_names[0], input_names[0]]
    builder.add_concat_nd(layer, input_names, output_names[0], axis=3)
Then I faced this error: input layer 'conv_in' of type 'Convolution' has input rank 3 but expects rank at least 4 (see: coremltools: how to properly use NeuralNetworkMultiArrayShapeRange?), which seems to be caused by Core ML making the input 3-dimensional CHW, while it must be 4-dimensional NHWC (?). Currently playing with the following:
spec = coreml_model._spec
# fixing input shape
# https://stackoverflow.com/questions/59765376/coreml-model-convert-imagetype-model-input-to-multiarray
spec.description.input[0].type.multiArrayType.shape.extend([1, 384, 384, 3])
del spec.description.input[0].type.multiArrayType.shape[0]
del spec.description.input[0].type.multiArrayType.shape[0]
del spec.description.input[0].type.multiArrayType.shape[0]
coremltools.utils.save_spec(spec, "my.mlmodel")
Getting invalid blob shape from model internals

Cannot clone object <tensorflow.python.keras.wrappers.scikit_learn.KerasClassifier object

This is with regard to TF 2.0.
Please find below my code, which performs grid search along with cross-validation using sklearn.model_selection.GridSearchCV on the MNIST dataset, and works perfectly fine.
# Build function to create the model, required by KerasClassifier
def create_model(optimizer_val='RMSprop', hidden_layer_size=16, activation_fn='relu',
                 dropout_rate=0.1, regularization_fn=tf.keras.regularizers.l1(0.001),
                 kernel_initializer_fn=tf.keras.initializers.glorot_uniform,
                 bias_initializer_fn=tf.keras.initializers.zeros):
    model = tf.keras.models.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(units=hidden_layer_size, activation=activation_fn,
                              kernel_regularizer=regularization_fn,
                              kernel_initializer=kernel_initializer_fn,
                              bias_initializer=bias_initializer_fn),
        tf.keras.layers.Dropout(dropout_rate),
        tf.keras.layers.Dense(units=hidden_layer_size, activation='softmax',
                              kernel_regularizer=regularization_fn,
                              kernel_initializer=kernel_initializer_fn,
                              bias_initializer=bias_initializer_fn)
    ])
    optimizer_val_final = optimizer_val
    model.compile(optimizer=optimizer_val, loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model

# Create the model with the wrapper
model = tf.keras.wrappers.scikit_learn.KerasClassifier(build_fn=create_model,
                                                       epochs=100, batch_size=10,
                                                       verbose=2)

# Initialize the parameter grid
nn_param_grid = {
    'epochs': [10],
    'batch_size': [128],
    'optimizer_val': ['Adam', 'SGD'],
    'hidden_layer_size': [128],
    'activation_fn': ['relu'],
    'dropout_rate': [0.2],
    'regularization_fn': ['l1', 'l2', 'L1L2'],
    'kernel_initializer_fn': ['glorot_normal', 'glorot_uniform'],
    'bias_initializer_fn': [tf.keras.initializers.zeros]
}

# Perform GridSearchCV
grid = GridSearchCV(estimator=model, param_grid=nn_param_grid, verbose=2, cv=3,
                    scoring=precision_custom, return_train_score=False, n_jobs=-1)
grid_result = grid.fit(x_train, y_train)
My idea is to pass different optimizers with different learning rates, say Adam with learning rates 0.1, 0.01 and 0.001. I also want to try out SGD with different learning rates and momentum values.
In that case, when I pass 'optimizer_val': [tf.keras.optimizers.Adam(0.1)], I get the error given below:
Cannot clone object <tensorflow.python.keras.wrappers.scikit_learn.KerasClassifier object at 0x7fe08b210e10>, as the constructor either does not set or modifies parameter optimizer_val
Please advise how I can rectify this error.
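A common workaround, sketched under the assumption that you can change create_model's signature: keep the grid values as plain strings and numbers, and build the optimizer inside create_model, so GridSearchCV only ever has to clone picklable primitives (learning_rate and momentum here are hypothetical parameters added for illustration):
def create_model(optimizer_val='Adam', learning_rate=0.001, momentum=0.0,
                 hidden_layer_size=16, activation_fn='relu', dropout_rate=0.1):
    model = tf.keras.models.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(units=hidden_layer_size, activation=activation_fn),
        tf.keras.layers.Dropout(dropout_rate),
        tf.keras.layers.Dense(units=10, activation='softmax')
    ])
    # Build the optimizer from primitives here instead of passing an instance in.
    if optimizer_val == 'Adam':
        optimizer = tf.keras.optimizers.Adam(learning_rate)
    else:
        optimizer = tf.keras.optimizers.SGD(learning_rate, momentum=momentum)
    model.compile(optimizer=optimizer, loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model

nn_param_grid = {
    'optimizer_val': ['Adam', 'SGD'],
    'learning_rate': [0.1, 0.01, 0.001],
    'momentum': [0.0, 0.9],
}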
This is a sklearn bug. You should downgrade your sklearn version:
conda install scikit-learn==0.21.2
Then it works.
You can fix the issue by changing the lists into tuples.
If a parameter has only a single value, you can keep using a list.
# Initialize the parameter grid
nn_param_grid = {
    'epochs': [10],
    'batch_size': [128],
    'optimizer_val': ('Adam', 'SGD'),
    'hidden_layer_size': [128],
    'activation_fn': ['relu'],
    'dropout_rate': [0.2],
    'regularization_fn': ('l1', 'l2', 'L1L2'),
    'kernel_initializer_fn': ('glorot_normal', 'glorot_uniform'),
    'bias_initializer_fn': [tf.keras.initializers.zeros]
}
Found this comment online and it helped!
For those who are getting the following error due to the above statement:
Cannot clone object <keras.wrappers.scikit_learn.KerasClassifier object at 0x7f93ddc5d1d0>, as the constructor either does not set or modifies parameter layers
Change the layers from an array of lists to an array of tuples:
layers => [(20,), (45, 30, 15), (40, 20)]
Don't forget to add the comma after (20,), otherwise another error/warning will appear: FitFailedWarning: Estimator fit failed. The score on this train-test partition for these parameters will be set to nan. Details: TypeError: 'int' object is not iterable. This is because a single value in parentheses without a comma is treated as an int.
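A quick illustration of that last point in a Python REPL:
>>> type((20))   # parentheses alone do not make a tuple
<class 'int'>
>>> type((20,))  # the trailing comma does
<class 'tuple'>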
Only installing TensorFlow 2.8 helped with this issue. Note that it is available only via pip (Anaconda offers TensorFlow 2.7 vs. PyPI's TensorFlow 2.8).
To check your version of TensorFlow, type conda list tensorflow:
(base) C:\Users\User> conda list tensorflow-gpu
# Name            Version    Build         Channel
tensorflow-gpu    2.4.1      pyhd8ed1ab_3  conda-forge
To uninstall, type conda uninstall tensorflow, and to install version 2.8 type:
pip install tensorflow-gpu==2.8

TensorFlow: How to get a tensor name from TensorBoard?

I downloaded ssd_mobilenet_v2_coco from the TensorFlow detection model zoo, and I used import_pb_to_tensorboard.py to show the structure in TensorBoard.
I found a node named 'image_tensor'; this is the node as shown in TensorBoard.
I want to use the function get_tensor_by_name() to feed in a new image and get the outputs. However, it failed.
I tried get_operation_by_name(); it didn't work either.
Here is the code:
import tensorflow as tf

def one_image(im_path, model_path):
    sess = tf.Session()
    with sess.as_default():
        image_tensor = tf.image.decode_jpeg(tf.read_file(im_path), channels=3)
        saver = tf.train.import_meta_graph(model_path + "/model.ckpt.meta")
        saver.restore(sess, tf.train.latest_checkpoint(model_path))
        graph = tf.get_default_graph()
        # x = graph.get_tensor_by_name("import/image_tensor:0")
        # out_put = graph.get_tensor_by_name("import/detection_classes:0")
        x = graph.get_operation_by_name("import/image_tensor").outputs[0]
        outputs = graph.get_operation_by_name("import/detection_classes").outputs[0]
        out_put = sess.run(outputs, feed_dict={x: image_tensor.eval()})
        print(out_put)
    sess.close()

if __name__ == "__main__":
    one_image("testimg-4-resize.jpg", "ssd_mobilenet_v2_coco_2018_03_29")
And here is the KeyError:
KeyError: "The name 'import/image_tensor' refers to an Operation not in the graph."
I am wondering how to get the tensor name from TensorBoard, and whether there is another way to load the model from 'only-ckpts'.
'only-ckpts' means the files only include 'model.ckpt.data-00000-of-00001', 'model.ckpt.index' and 'model.ckpt.meta'.
Any advice will be appreciated. Thank you in advance.
The tool import_pb_to_tensorboard.py uses tf.import_graph_def to import the graph with the default name argument, which is "import" as documented.
Your code imports the graph through tf.train.import_meta_graph and uses the default import_scope argument, which will not prefix imported tensor or operation names. So you have two options to correct this error:
Do the following in place of your import_meta_graph line:
saver = tf.train.import_meta_graph(model_path + "/model.ckpt.meta",
import_scope='import')
Remove the import/ prefix when getting a tensor or operation by name, like this:
x = graph.get_tensor_by_name("image_tensor:0")
out_put = graph.get_tensor_by_name("detection_classes:0")
x = graph.get_operation_by_name("image_tensor").outputs[0]
outputs = graph.get_operation_by_name("detection_classes").outputs[0]
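As for loading the model another way: the model zoo archive for ssd_mobilenet_v2_coco also ships a frozen_inference_graph.pb, so (a sketch, assuming the standard archive layout) you can load that directly with tf.import_graph_def, which is exactly what the TensorBoard import tool does:
import tensorflow as tf

graph_def = tf.GraphDef()
with tf.gfile.GFile("ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    # name="import" reproduces the prefix you saw in TensorBoard
    tf.import_graph_def(graph_def, name="import")

x = graph.get_tensor_by_name("import/image_tensor:0")
out_put = graph.get_tensor_by_name("import/detection_classes:0")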
