Using the Python APIs, I have successfully uploaded and deployed an ML-based custom model on the DataRobot platform.
Now, how do I get the accuracy metric for the deployed model?
NOTE: the ACCURACY_METRIC used is LOGLOSS.
I tried accuracy_over_time.metric; it gave "LogLoss" as the output.
But how do I get the value of this LogLoss metric?
I tried accuracy_over_time.metric_values, and it says there is no attribute 'metric_values'.
You're looking for get_accuracy():
accuracy = deployment.get_accuracy(
    start_time=datetime(2019, 8, 1, hour=15),
    end_time=datetime(2019, 8, 1, 15, 0)
)
You'll need to define the time span if you want accuracy over time, like so:
rmse = deployment.get_accuracy_over_time(
    start_time=datetime(2019, 8, 1),
    end_time=datetime(2019, 8, 3),
    bucket_size=construct_duration_string(days=1),
    metric=ACCURACY_METRIC.RMSE
)
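To read off the LogLoss number itself, something along these lines should work. Treat it as a sketch: the metrics and bucket_values attributes and the ACCURACY_METRIC import path are assumptions from my reading of the DataRobot client docs, so check them against the client version you have installed.

from datetime import datetime
from datarobot.enums import ACCURACY_METRIC  # assumed import path

# Aggregate accuracy over a period; `metrics` is assumed to be a dict keyed by metric name.
accuracy = deployment.get_accuracy(
    start_time=datetime(2019, 8, 1),
    end_time=datetime(2019, 8, 8),
)
print(accuracy.metrics["LogLoss"])

# Per-bucket values over time; `bucket_values` is likewise an assumption.
over_time = deployment.get_accuracy_over_time(
    start_time=datetime(2019, 8, 1),
    end_time=datetime(2019, 8, 3),
    metric=ACCURACY_METRIC.LOGLOSS,
)
print(over_time.metric)         # "LogLoss"
print(over_time.bucket_values)  # timestamped LogLoss values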
I built a model using BERT for an NLI problem and the algorithm ran without issues. However, when I adapted it to RoBERTa and used strategy.scope(), it raised an error that I don't know how to solve. I'd appreciate any pointers.
```
max_len1 = 515  # 128*4 for the premise plus 128*4 for the hypothesis

def build_model1():
    input_word_ids = tf.keras.Input(shape=(max_len1,), dtype=tf.int32, name="input_word_ids")
    input_mask = tf.keras.Input(shape=(max_len1,), dtype=tf.int32, name="input_mask")
    input_type_ids = tf.keras.Input(shape=(max_len1,), dtype=tf.int32, name="input_type_ids")
    embedding = model([input_word_ids, input_mask, input_type_ids])[0]
    output = tf.keras.layers.Dense(3, activation='softmax')(embedding[:, 0, :])
    model3 = tf.keras.Model(inputs=[input_word_ids, input_mask, input_type_ids], outputs=output)
    model3.compile(tf.keras.optimizers.Adam(lr=1e-5),
                   loss='sparse_categorical_crossentropy', metrics=['accuracy'])
    return model3

with strategy.scope():
    model3 = build_model1()
    model3.summary()
WARNING:tensorflow:The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model. They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
WARNING:tensorflow:AutoGraph could not transform <bound method Socket.send of <zmq.sugar.socket.Socket object at 0x7f2425631d00>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: module, class, method, function, traceback, frame, or code object was expected, got cython_function_or_method
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING:tensorflow:AutoGraph could not transform <function wrap at 0x7f243c214d40> and will run it as-is.
Cause: while/else statement not yet supported
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
(the warnings above are repeated several times in the output)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-24-e91a2e7e4b41> in <module>()
1 with strategy.scope():
----> 2 model3 = build_model1()
3 model3.summary()
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py in _validate_compile(self, optimizer, metrics, **kwargs)
2533 'with strategy.scope():\n'
2534 ' model=_create_model()\n'
-> 2535 ' model.compile(...)' % (v, strategy))
2536
2537 # Model metrics must be created in the same distribution strategy scope
ValueError: Variable (<tf.Variable 'tfxlm_roberta_model/roberta/encoder/layer_._0/attention/self/query/kernel:0' shape=(1024, 1024) dtype=float32, numpy=array([...], dtype=float32)>) was not created in the distribution strategy scope
of (<tensorflow.python.distribute.tpu_strategy.TPUStrategy object at 0x7f21fcbbb210>). It is most
likely due to not all layers or the model or optimizer being created outside the distribution
strategy scope. Try to make sure your code looks similar to the following.
with strategy.scope():
model=_create_model()
model.compile(...)
```
The same code, as I said above, works perfectly for BERT; for RoBERTa I only changed the tokenizer and the way the model is loaded.
I managed to solve it. After investigating, I found that getting RoBERTa to work involved more than just calling the model the same way.
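The error itself points at the usual fix: every variable, including the pretrained transformer's weights, has to be created inside strategy.scope(). A minimal sketch, assuming a Hugging Face TF checkpoint (the class and checkpoint name below are placeholders for whatever you actually load):

import tensorflow as tf
from transformers import TFXLMRobertaModel  # placeholder class; use the one matching your checkpoint

with strategy.scope():
    # Load the pretrained weights inside the scope so their variables belong to the
    # TPUStrategy, then build and compile the Keras wrapper in the same scope.
    model = TFXLMRobertaModel.from_pretrained("jplu/tf-xlm-roberta-large")  # placeholder checkpoint
    model3 = build_model1()

model3.summary()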
I built and trained a PyTorch v1.4 model that predicts a sin() value (based on an example found on the web). Inference works. I then tried to compile it with TVM v0.8dev0 and LLVM 10 on Ubuntu with an x86 CPU. I followed the TVM setup guide and ran some of the ONNX tutorials, which do work.
I mainly used the existing TVM tutorials to figure out the procedure below. Note that I'm not an ML or data science engineer. These were my steps:
import torch
import tvm
from tvm import relay

state = torch.load("/home/dude/tvm/tst_state.pt")  # load the trained pytorch state

import tst
m = tst.Net()
m.load_state_dict(state)  # init the model with its trained state
m.eval()

sm = torch.jit.trace(m, torch.tensor([3.1415 / 4]))  # convert to a scripted model

# the model only takes 1 input for inference hence [("input0", (1,))]
mod, params = relay.frontend.from_pytorch(sm, [("input0", (1,))])
print(mod.astext())  # prints a small Relay(?) script

with tvm.transform.PassContext(opt_level=1):
    lib = relay.build(mod, target="llvm", target_host="llvm", params=params)
The last line gives me the error below, which I don't know how to solve, and I don't see where I went wrong. I hope someone can pinpoint my mistake ...
... removed some lines here ...
[bt] (3) /home/dude/tvm/build/libtvm.so(TVMFuncCall+0x5f) [0x7f5cd65660af]
[bt] (2) /home/dude/tvm/build/libtvm.so(+0xb4f8a7) [0x7f5cd5f318a7]
[bt] (1) /home/dude/tvm/build/libtvm.so(tvm::GenericFunc::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const+0x1ab) [0x7f5cd5f315cb]
[bt] (0) /home/tvm/build/libtvm.so(+0x1180cab) [0x7f5cd6562cab]
File "/home/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 81, in cfun
rv = local_pyfunc(*pyargs)
File "/home/tvm/python/tvm/relay/op/strategy/x86.py", line 311, in dense_strategy_cpu
m, _ = inputs[0].shape
ValueError: not enough values to unpack (expected 2, got 1)
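Not a definitive answer, but the last frame shows dense_strategy_cpu doing `m, _ = inputs[0].shape`, i.e. TVM's dense op expects a 2-D (batch, features) input while the traced model was fed a 1-D tensor of shape (1,). A sketch of one workaround, assuming the network's first layer is an nn.Linear that also accepts a batch dimension, is to trace and convert with a (1, 1) input:

import torch
import tvm
from tvm import relay

# Trace with a 2-D (batch, features) input so Relay's dense op sees a matrix.
example = torch.tensor([[3.1415 / 4]])  # shape (1, 1) instead of (1,)
sm = torch.jit.trace(m, example)

mod, params = relay.frontend.from_pytorch(sm, [("input0", (1, 1))])
with tvm.transform.PassContext(opt_level=1):
    lib = relay.build(mod, target="llvm", params=params)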
I'm studying Azure ML RL with the example code.
I could run the cartpole example (cartpole_ci.ipynb), which trains the PPO model on a compute instance.
I tried SAC instead of PPO by changing training_algorithm = "PPO" to training_algorithm = "SAC",
but it failed with the message below.
ray.rllib.utils.error.UnsupportedSpaceException: Action space Discrete(2) is not supported for SAC.
Has anyone tried the SAC algorithm on Azure ML RL, and did it work?
Azure ML RL does support SAC with discrete actions, but not parametric actions; I confirmed this in the RLlib feature compatibility matrix: https://docs.ray.io/en/latest/rllib-algorithms.html#feature-compatibility-matrix
Are you following the code sample?
from azureml.contrib.train.rl import ReinforcementLearningEstimator, Ray

training_algorithm = "PPO"
rl_environment = "CartPole-v0"

script_params = {
    # Training algorithm
    "--run": training_algorithm,
    # Training environment
    "--env": rl_environment,
    # Algorithm-specific parameters
    "--config": '\'{"num_gpus": 0, "num_workers": 1}\'',
    # Stop conditions
    "--stop": '\'{"episode_reward_mean": 200, "time_total_s": 300}\'',
    # Frequency of taking checkpoints
    "--checkpoint-freq": 2,
    # If a checkpoint should be taken at the end - optional argument with no value
    "--checkpoint-at-end": "",
    # Log directory
    "--local-dir": './logs'
}

training_estimator = ReinforcementLearningEstimator(
    # Location of source files
    source_directory='files',
    # Python script file
    entry_script='cartpole_training.py',
    # A dictionary of arguments to pass to the training script specified in ``entry_script``
    script_params=script_params,
    # The Azure Machine Learning compute target set up for Ray head nodes
    compute_target=compute_target,
    # Reinforcement learning framework. Currently must be Ray.
    rl_framework=Ray()
)
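If you are, then in principle only the algorithm name needs to change for SAC; whether the discrete CartPole action space is then accepted depends on the Ray/RLlib version the estimator pulls in. A hypothetical tweak, reusing the script_params above:

# Swap only the algorithm; everything else in script_params stays the same.
training_algorithm = "SAC"
script_params["--run"] = training_algorithm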
Update #1 (original question and details below):
As per the suggestion of @MatthijsHollemans below, I've tried running this after removing dynamic_axes from the initial create_onnx step below. This removed both:
Description of image feature 'input_image' has missing or non-positive width 0.
and
Input 'input_image' of layer '63' not found in any of the outputs of the preceeding layers.
Unfortunately this opens up two sub-questions:
I still want to have a functional ONNX model. Is there a more appropriate way to make H and W dynamic? Or should I be saving two versions of the ONNX model, one without dynamic_axes for the CoreML conversion, and one with for use as a valid ONNX model?
Although this solves the compilation error in Xcode (specified below), it introduces the following runtime issues:
Finalizing CVPixelBuffer 0x282f4c5a0 while lock count is 1.
[espresso] [Espresso::handle_ex_plan] exception=Invalid X-dimension 1/480 status=-7
[coreml] Error binding image input buffer input_image: -7
[coreml] Failure in bindInputsAndOutputs.
I am calling this the same way I was calling the fixed-size model, which still works fine. The image dimensions are 640 x 480.
As specified below, the model should accept any image 64x64 or larger.
For flexible-shape models, do I need to provide the input differently in Xcode?
Original Question (parts still relevant)
I have been slowly working on converting a style transfer model from PyTorch > ONNX > CoreML. One of the issues that has been a struggle is flexible/dynamic input and output shapes.
This method (besides i/o renaming) has worked well on iOS 12 & 13 when using a static input shape.
I am using the following code to do the ONNX > CoreML conversion:
import coremltools
from coremltools.models.neural_network import flexible_shape_utils
from onnx_coreml import convert  # imports assumed from the onnx-coreml / coremltools packages used below

def create_coreml(name):
    mlmodel = convert(
        model="onnx/" + name + ".onnx",
        preprocessing_args={'is_bgr': True},
        deprocessing_args={'is_bgr': True},
        image_input_names=['input_image'],
        image_output_names=['stylized_image'],
        minimum_ios_deployment_target='13'
    )

    spec = mlmodel.get_spec()

    # Allow any height/width of 64 or more on both the input and output images.
    img_size_ranges = flexible_shape_utils.NeuralNetworkImageSizeRange()
    img_size_ranges.add_height_range((64, -1))
    img_size_ranges.add_width_range((64, -1))
    flexible_shape_utils.update_image_size_range(
        spec,
        feature_name='input_image',
        size_range=img_size_ranges)
    flexible_shape_utils.update_image_size_range(
        spec,
        feature_name='stylized_image',
        size_range=img_size_ranges)

    mlmodel = coremltools.models.MLModel(spec)
    mlmodel.save("mlmodel/" + name + ".mlmodel")
Although the conversion 'succeeds' there are a couple of warnings (spaces added for readability):
Translation to CoreML spec completed. Now compiling the CoreML model.
/usr/local/lib/python3.7/site-packages/coremltools/models/model.py:111:
RuntimeWarning: You will not be able to run predict() on this Core ML model. Underlying exception message was:
Error compiling model:
"Error reading protobuf spec. validator error: Description of image feature 'input_image' has missing or non-positive width 0.".
RuntimeWarning)
Model Compilation done.
/usr/local/lib/python3.7/site-packages/coremltools/models/model.py:111:
RuntimeWarning: You will not be able to run predict() on this Core ML model. Underlying exception message was:
Error compiling model:
"compiler error: Input 'input_image' of layer '63' not found in any of the outputs of the preceeding layers.
".
RuntimeWarning)
If I ignore these warnings and try to compile the model for the latest target (13.0), I get the following error in Xcode:
coremlc: Error: compiler error: Input 'input_image' of layer '63' not found in any of the outputs of the preceeding layers.
Here is what the problematic area appears to look like in netron:
My main question is: how can I get these two warnings out of the way?
Happy to provide any other details.
Thanks for any advice!
Below is my PyTorch > ONNX conversion:
import torch
import transformer  # the project module that defines TransformerNetwork


def create_onnx(name):
    prior = torch.load("pth/" + name + ".pth")
    model = transformer.TransformerNetwork()
    model.load_state_dict(prior)

    dummy_input = torch.zeros(1, 3, 64, 64)  # I wasn't sure what I should set H and W to here?

    # torch.onnx.export writes the .onnx file directly.
    torch.onnx.export(model, dummy_input, "onnx/" + name + ".onnx",
                      verbose=True,
                      opset_version=10,
                      input_names=["input_image"],      # These are being renamed from garbled originals.
                      output_names=["stylized_image"],  # ^
                      dynamic_axes={'input_image': {2: 'height', 3: 'width'},
                                    'stylized_image': {2: 'height', 3: 'width'}}
                      )
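For what it's worth, here is a sketch of the "two versions" idea from sub-question 1 above: export once without dynamic_axes for the Core ML conversion and once with them as the standalone ONNX model. The helper name and file naming are hypothetical; everything else mirrors the export call above.

import torch
import transformer  # same project module as above


def create_onnx_variants(name):
    # Hypothetical helper: a static export to feed the ONNX > CoreML conversion,
    # and a dynamic-axes export kept as the general-purpose ONNX model.
    prior = torch.load("pth/" + name + ".pth")
    model = transformer.TransformerNetwork()
    model.load_state_dict(prior)
    dummy_input = torch.zeros(1, 3, 64, 64)

    common = dict(verbose=True, opset_version=10,
                  input_names=["input_image"], output_names=["stylized_image"])

    # Static H/W: used only for the Core ML conversion.
    torch.onnx.export(model, dummy_input, "onnx/" + name + "_static.onnx", **common)

    # Dynamic H/W: kept as a valid, flexible ONNX model.
    torch.onnx.export(model, dummy_input, "onnx/" + name + "_dynamic.onnx",
                      dynamic_axes={'input_image': {2: 'height', 3: 'width'},
                                    'stylized_image': {2: 'height', 3: 'width'}},
                      **common)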