I want to use the Xception model to classify images, but I am getting a ValueError.
import keras
from keras.models import Sequential

xception = keras.applications.xception.Xception(include_top=False, input_shape=(71, 71, 3))
classifier = Sequential()
for layer in xception.layers:
    classifier.add(layer)
I am getting this error:
ValueError: Input 0 is incompatible with layer conv2d_1: expected axis -1 of input shape to have value 64 but got shape (None, 33, 33, 128)
I also get this error when using ResNet, but I don't get it when using VGG16 or VGG19. Can anyone explain how to use it?
You can use the functional API. Xception and ResNet contain branching residual connections, so their layers cannot be replayed one by one into a Sequential model; VGG16 and VGG19 work because they are purely linear stacks. Here is one possible example of a classifier:
import keras
from keras.layers import Input, Dense
from keras.models import Model

# Base model: Xception
xception = keras.applications.xception.Xception(include_top=False, input_shape=(71, 71, 3))
# Input of your model
inputs = Input(shape=(71, 71, 3))
# Add the Xception base model to your model
y = xception(inputs)
...
# Other layers, each applied to the previous output
y = Dense(...)(y)
# Define the model
model = Model(inputs, y)
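For instance, a concrete classifier head might look like this (a sketch; the pooling layer and the 10-class softmax are my assumptions, not part of the original answer):

from keras.layers import GlobalAveragePooling2D

y = xception(inputs)
y = GlobalAveragePooling2D()(y)          # collapse the spatial dimensions
y = Dense(10, activation="softmax")(y)   # assumption: 10 target classes
model = Model(inputs, y)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])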
I wanted to use the multilingual-codesearch model, but the code doesn't work and outputs the following error, which suggests that it cannot be loaded with only weights:
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("ncoop57/multilingual-codesearch")
model = AutoModel.from_pretrained("ncoop57/multilingual-codesearch")
ValueError: Unrecognized model in ncoop57/multilingual-codesearch. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: gpt_neo, big_bird, speech_to_text, vit, wav2vec2, m2m_100, convbert, led, blenderbot-small, retribert, ibert, mt5, t5, mobilebert, distilbert, albert, bert-generation, camembert, xlm-roberta, pegasus, marian, mbart, mpnet, bart, blenderbot, reformer, longformer, roberta, deberta-v2, deberta, flaubert, fsmt, squeezebert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm-prophetnet, prophetnet, xlm, ctrl, electra, encoder-decoder, funnel, lxmert, dpr, layoutlm, rag, tapas
Then I downloaded the PyTorch bin file, but it only contains the weight dictionary (the state dictionary, as mentioned here), which means that if I want to use the model I have to initialize the right architecture and then load the weights.
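Concretely, the usual PyTorch pattern looks like this (a minimal sketch; SomeMatchingArchitecture is hypothetical and stands for whatever class the checkpoint was trained with):

import torch

model = SomeMatchingArchitecture()  # hypothetical: must match the checkpoint's architecture
state_dict = torch.load("pytorch_model.bin", map_location="cpu")
model.load_state_dict(state_dict)
model.eval()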
But how am I supposed to find the architecture fitting the weights of a model that complex? I saw that some methods could recover the model from the weight dictionary, but I didn't manage to make them work.
How can one recover the architecture from a weight dictionary in order to make the model work? Is it even possible?
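One thing that does work is inspecting what the checkpoint expects, by printing the state-dictionary keys and tensor shapes (a sketch using only standard PyTorch calls):

import torch

state_dict = torch.load("pytorch_model.bin", map_location="cpu")
for key, tensor in state_dict.items():
    print(key, tuple(tensor.shape))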
In Android Studio, I'm not able to view the model metadata, even though I added the metadata manually in Python. I get the error:
This model is not supported: input tensor 0 does not have a name.
My attempts at fixing it:
I added the layer name to the tensorflow input layer, using:
img_input = Input(shape=input_shape, batch_size=1, name="input_image")
I even checked it in Netron, and the input layer showed up as input_image as expected.
I fixed it by copying one line from the documentation: I needed to set a specific property on the TensorMetadataT object, input_meta.name. This does not come from the model layer name, but needs to be set manually:
input_meta.name = "image_input"
Weirdly, this wasn't a problem before; for some reason my previous code broke and I needed this change.
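For context, the fix sits in the usual tflite-support metadata flow, roughly like this (a sketch; everything except TensorMetadataT and input_meta.name is illustrative):

from tflite_support import metadata_schema_py_generated as _metadata_fb

input_meta = _metadata_fb.TensorMetadataT()
input_meta.name = "image_input"  # must be set manually; not taken from the model's layer name
input_meta.description = "Input image to be classified."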
I trained an object detection model in PyTorch and exported it to an ONNX file.
Now I want to convert it to a Caffe2 model:
import onnx
import numpy as np
import caffe2.python.onnx.backend as onnx_caffe2_backend

# Load the ONNX ModelProto object. model is a standard Python protobuf object.
model = onnx.load("CPU4export.onnx")

# Prepare the Caffe2 backend for executing the model. This converts the ONNX model into a
# Caffe2 NetDef that can execute it. Other ONNX backends, like one for CNTK, will be
# available soon.
prepared_backend = onnx_caffe2_backend.prepare(model)

# Run the model in Caffe2.
# Construct a map from input names to Tensor data.
# The graph of the model itself contains inputs for all weight parameters, after the input image.
# Since the weights are already embedded, we just need to pass the input image.
# Set the first input (x is the example input tensor used when exporting from PyTorch).
W = {model.graph.input[0].name: x.data.numpy()}

# Run the Caffe2 net:
c2_out = prepared_backend.run(W)[0]

# Verify numerical correctness up to 3 decimal places
# (torch_out is the output of the PyTorch model on x).
np.testing.assert_almost_equal(torch_out.data.cpu().numpy(), c2_out, decimal=3)
print("Exported model has been executed on Caffe2 backend, and the result looks good!")
I always get this error:
RuntimeError: ONNX conversion failed, encountered 1 errors:
Error while processing node: input: "90"
input: "91"
output: "92"
op_type: "Resize"
attribute {
name: "mode"
s: "nearest"
type: STRING
}
. Exception: Don't know how to translate op Resize
How can I solve it?
The problem is that the Caffe2 ONNX backend does not yet support the Resize operator.
Please raise an issue on the Caffe2 / PyTorch GitHub; there's an active community of developers who should be able to address this use case.
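As a possible workaround (my assumption, not part of the original answer): Resize was introduced in ONNX opset 10 as a replacement for the older Upsample op, so re-exporting at opset 9 makes PyTorch emit Upsample instead, which the Caffe2 backend does understand, provided your model's interpolation can be expressed that way:

import torch

# pytorch_model and dummy_input are hypothetical placeholders for your
# trained detector and an example input tensor of the right shape.
torch.onnx.export(pytorch_model, dummy_input, "CPU4export.onnx", opset_version=9)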
So, I'm using the TensorFlow SSD-MobileNet V1 model trained on the COCO dataset, which I have further trained on my own data. But when I try to convert it to OpenVINO IR to run it on a Raspberry Pi with a Movidius chip, I get an error:
➜ utils sudo python3 summarize_graph.py --input_model ssd.pb
WARNING: Logging before flag parsing goes to stderr.
W0722 17:17:05.565755 4678620608 __init__.py:308] Limited tf.compat.v2.summary API due to missing TensorBoard installation.
W0722 17:17:06.696880 4678620608 deprecation_wrapper.py:119] From ../../mo/front/tf/loader.py:35: The name tf.GraphDef is deprecated. Please use tf.compat.v1.GraphDef instead.
W0722 17:17:06.697348 4678620608 deprecation_wrapper.py:119] From ../../mo/front/tf/loader.py:109: The name tf.MetaGraphDef is deprecated. Please use tf.compat.v1.MetaGraphDef instead.
W0722 17:17:06.697680 4678620608 deprecation_wrapper.py:119] From ../../mo/front/tf/loader.py:235: The name tf.NodeDef is deprecated. Please use tf.compat.v1.NodeDef instead.
1 input(s) detected:
Name: image_tensor, type: uint8, shape: (-1,-1,-1,3)
7 output(s) detected:
detection_boxes
detection_scores
detection_multiclass_scores
detection_classes
num_detections
raw_detection_boxes
raw_detection_scores
When I try to convert ssd.pb (the frozen model) to OpenVINO IR:
➜ model_optimizer sudo python3 mo_tf.py --input_model ssd.pb
Password:
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: /opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer/ssd.pb
- Path for generated IR: /opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer/.
- IR output name: ssd
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: Not specified, inherited from the model
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Operations to offload: None
- Patterns to offload: None
- Use the config file: None
Model Optimizer version: 2019.1.1-83-g28dfbfd
WARNING: Logging before flag parsing goes to stderr.
E0722 17:24:22.964164 4474824128 infer.py:158] Shape [-1 -1 -1 3] is not fully defined for output 0 of "image_tensor". Use --input_shape with positive integers to override model input shapes.
E0722 17:24:22.964462 4474824128 infer.py:178] Cannot infer shapes or values for node "image_tensor".
E0722 17:24:22.964554 4474824128 infer.py:179] Not all output shapes were inferred or fully defined for node "image_tensor".
For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #40.
E0722 17:24:22.964632 4474824128 infer.py:180]
E0722 17:24:22.964720 4474824128 infer.py:181] It can happen due to bug in custom shape infer function <function tf_placeholder_ext.<locals>.<lambda> at 0x12ab64bf8>.
E0722 17:24:22.964787 4474824128 infer.py:182] Or because the node inputs have incorrect values/shapes.
E0722 17:24:22.964850 4474824128 infer.py:183] Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
E0722 17:24:22.965915 4474824128 infer.py:192] Run Model Optimizer with --log_level=DEBUG for more information.
E0722 17:24:22.966033 4474824128 main.py:317] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "image_tensor" node.
For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.
How do you think we should fix this?
I updated to the OpenVINO toolkit R2 2019, and using the command below I was able to generate the IR file:
python3 ~/intel/openvino/deployment_tools/model_optimizer/mo_tf.py --input_model frozen_inference_graph.pb --tensorflow_use_custom_operations_config ~/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/ssd_support_api_v1.14.json --tensorflow_object_detection_api_pipeline_config pipeline.config -b 1 --data_type FP16 --reverse_input_channels
When you try to convert ssd.pb (your frozen model), you are passing only the input-model parameter to the mo_tf.py script. To convert an Object Detection API model to IR, go to the model optimizer directory and run the mo_tf.py script with the following required parameters:
--input_model :
File with a pre-trained model (binary or text .pb file after freezing)
--tensorflow_use_custom_operations_config :
Configuration file that describes rules to convert specific TensorFlow* topologies.
For the models downloaded from the TensorFlow* Object Detection API model zoo, you can find the configuration files in the <INSTALL_DIR>/deployment_tools/model_optimizer/extensions/front/tf directory.
You can use ssd_v2_support.json or ssd_support.json for frozen SSD topologies from the model zoo; both are available in the above-mentioned directory.
--tensorflow_object_detection_api_pipeline_config :
A special configuration file that describes the topology hyper-parameters and structure of the TensorFlow Object Detection API model.
For the models downloaded from the TensorFlow* Object Detection API zoo, the configuration file is named pipeline.config.
If you plan to train a model yourself, you can find templates for these files in the models repository.
--input_shape (optional):
A custom input image shape; pass these values based on the pretrained model you used.
The model takes an input image in the format [1 H W C], where the values refer to batch size, height, width, and channels respectively.
Model Optimizer does not accept negative values for the batch, height, width, or channel number.
So you need to pass a valid set of four positive integers via the --input_shape parameter if the input image dimensions of the model (SSD MobileNet) are known in advance.
If they are not known in advance, you don't need to pass the input shape.
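For example, assuming the common 300x300 input size of SSD MobileNet (an assumption; confirm against your pipeline.config), the flag would be:
--input_shape [1,300,300,3]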
An example mo_tf.py command using the SSD-MobileNet-v2-COCO model, downloaded with the Model Downloader that ships with OpenVINO, is shown below.
python mo_tf.py ^
  --input_model "c:\Program Files (x86)\IntelSWTools\openvino_2019.1.087\deployment_tools\tools\model_downloader\object_detection\common\ssd_mobilenet_v2_coco\tf\ssd_mobilenet_v2_coco.frozen.pb" ^
  --tensorflow_use_custom_operations_config "c:\Program Files (x86)\IntelSWTools\openvino_2019.1.087\deployment_tools\model_optimizer\extensions\front\tf\ssd_v2_support.json" ^
  --tensorflow_object_detection_api_pipeline_config "c:\Program Files (x86)\IntelSWTools\openvino_2019.1.087\deployment_tools\tools\model_downloader\object_detection\common\ssd_mobilenet_v2_coco\tf\ssd_mobilenet_v2_coco.config" ^
  --data_type FP16 ^
  --log_level DEBUG
For more details, refer to https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Object_Detection_API_Models.html
Hope it helps.
For conversion of a MobileNetV2 SSD, add "Postprocessor/Cast_1" to the "start_points" list in the original ssd_v2_support.json, then use the command below; it should work fine.
"instances": {
"end_points": [
"detection_boxes",
"detection_scores",
"num_detections"
],
"start_points": [
"Postprocessor/Shape",
"Postprocessor/scale_logits",
"Postprocessor/Tile",
"Postprocessor/Reshape_1",
"Postprocessor/Cast_1"
]
},
Then use the following command:
#### object detection conversion (run as a Jupyter notebook cell; the ! line shells out)
import platform
is_win = 'windows' in platform.platform().lower()

# Paths below assume a default Linux install of OpenVINO.
mo_tf_path = '/opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py'
json_file = '/opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json'
pb_file = 'model/frozen_inference_graph.pb'
pipeline_file = 'model/pipeline.config'
output_dir = 'output/'

img_height = 300
input_shape = [1, img_height, img_height, 3]
input_shape_str = str(input_shape).replace(' ', '')
input_shape_str  # notebook cell output; pass via --input_shape if you need a fixed shape
!python3 {mo_tf_path} --input_model {pb_file} --tensorflow_object_detection_api_pipeline_config {pipeline_file} --tensorflow_use_custom_operations_config {json_file} --output="detection_boxes,detection_scores,num_detections" --output_dir {output_dir} --reverse_input_channels --data_type FP16 --log_level DEBUG
I'm trying to take the last layer of a model (the old model) and make a new model consisting of only that one layer (the new model), with exactly the same parameters as the last layer of the old model. I want to do this in a way that is agnostic to what the last layer of the old model happens to be. I'm trying to do it with this code, but I'm getting an error.
newModel = Sequential()
newModel.add(type(oldModel.layers[-1])(oldModel.layers[-1].output_shape,
activation=oldModel.layers[-1].activation,
input_shape=oldModel.layers[-1].input_shape))
That yields the following error:
TypeError: __init__() missing 1 required positional argument: 'output_dim'
If I check the last layer in oldModel, it shows me this:
full_model.model.layers[-1]
>>>> <keras.layers.core.Dense at 0x7fe22010e128>
I tried adding output_dim to the list of parameters I'm copying in this way, but that didn't seem to help. It gave me this error instead when I did that:
Exception: Input 0 is incompatible with layer dense_8: expected ndim=2, found ndim=3
Any idea what I'm doing wrong here?
Found the answer myself. If, instead of making the input_shape the same as the input_shape of the last layer of the old model, I make it the output_shape of the penultimate layer of the old model and take only the [1:] slice of that shape tuple (dropping the batch dimension), it works. Code that works is as follows:
newModel.add(type(oldModel.layers[-1])(oldModel.layers[-1].output_shape,
activation=oldModel.layers[-1].activation,
input_shape=oldModel.layers[-2].output_shape[1:]))
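A more general approach (my sketch, not part of the original answer) is to rebuild the layer from its own config and copy the trained weights explicitly, which avoids passing constructor arguments by hand:

from keras.models import Sequential

last = oldModel.layers[-1]
newModel = Sequential()
# Recreate the layer from its config, then transfer the trained weights.
newModel.add(type(last).from_config(last.get_config()))
newModel.build(input_shape=oldModel.layers[-2].output_shape)
newModel.layers[-1].set_weights(last.get_weights())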