We are trying to use Azure Machine Learning to interpret a model with the Azure ML interpretability libraries, namely azureml-interpret and azureml-sdk[explain].
Our model is a RandomForestRegressor from sklearn.ensemble.
import lightgbm
from interpret.ext.blackbox import PFIExplainer
# from interpret.ext.glassbox import DecisionTreeExplainableModel
from azureml.contrib.interpret.explanation.explanation_client import ExplanationClient

model = train_model(X_train_df, y_train_df)

explainer = PFIExplainer(model, features=feature_names)
global_explanation = explainer.explain_global(X_test_df[0:50], true_labels=y_test_df[0:50])

explain_client = ExplanationClient.from_run(run)
explain_client.upload_model_explanation(global_explanation)
We are getting the following error:
Traceback (most recent call last):
File "training/train.py", line 83, in <module>
explain_client.upload_model_explanation(global_explanation)
File "/azureml-envs/azureml_d5d57a45ca9af991b8408524822c201f/lib/python3.6/site-packages/azureml/interpret/_internal/explanation_client.py", line 793, in upload_model_explanation
asset_type=History.ASSET_TYPE
TypeError: create_asset() got an unexpected keyword argument 'asset_type'
We have also tried TabularExplainer and MimicExplainer (with DecisionTreeExplainableModel), but all of them result in the same error.
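For reference, this is roughly how we constructed the alternative explainers (a minimal sketch; the exact arguments in our script may differ slightly):

# Sketch of the alternatives we tried; the imports match the ones already used above
from interpret.ext.blackbox import TabularExplainer, MimicExplainer
from interpret.ext.glassbox import DecisionTreeExplainableModel

# TabularExplainer picks a suitable SHAP-based explainer under the hood
tab_explainer = TabularExplainer(model, X_train_df, features=feature_names)

# MimicExplainer trains a surrogate model (here a decision tree) to mimic the original model
mimic_explainer = MimicExplainer(model, X_train_df, DecisionTreeExplainableModel,
                                 features=feature_names)

global_explanation = tab_explainer.explain_global(X_test_df[0:50])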
I used this standard code for speech recognition:
import sys
sys.path.append('/home/nagy/.local/lib/python3.9/site-packages')

import speech_recognition
import pyttsx3

r = speech_recognition.Recognizer()

while True:
    with speech_recognition.Microphone() as mic:
        r.adjust_for_ambient_noise(mic, duration=0.2)
        audio = r.listen(mic)
        words = r.recognize_sphinx(audio)
        words = words.lower()
        print(words)
This is the error I receive:
Traceback (most recent call last):
File "speech.py", line 3, in <module>
import speech_recognition
File "/home/nagy/.local/lib/python3.9/site-packages/speech_recognition/__init__.py", line 1513
    endpoint = f"https://api.assemblyai.com/v2/transcript/{transciption_id}"
                                                                            ^
SyntaxError: invalid syntax
I'm not sure exactly what the error is, but I know it's in the speech_recognition library.
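A small diagnostic I can run (an assumption on my part: the SyntaxError could mean the script is being executed by an interpreter older than 3.6, which does not support f-strings, even though sys.path points at the Python 3.9 site-packages):

import sys

print(sys.executable)  # which interpreter is actually running this script
print(sys.version)     # f-strings require Python >= 3.6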
I am following this tutorial. All my TensorFlow and CUDA setup is complete and tested to be fully working.
My test of the setup:
import torch
import tensorflow as tf
import tensorflow.keras as ks
print(tf)
print(ks)
print(torch.cuda.is_available())
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
X_train = torch.FloatTensor([0., 1., 2.])
X_train = X_train.to(device)
print(X_train.is_cuda)
print(torch.cuda.current_device())
print(torch.cuda.device_count())
print(torch.cuda.get_device_name(0))
Gives
<module 'tensorflow' from 'C:\\Venv\\time_series_forecast\\lib\\site-packages\\tensorflow\\__init__.py'>
<module 'tensorflow.keras' from 'C:\\Venv\\time_series_forecast\\lib\\site-packages\\keras\\api\\_v2\\keras\\__init__.py'>
True
True
0
1
NVIDIA GeForce GTX 1070 Ti
In my main.py file, I have these imports
import torch
import tensorflow as tf
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
from tensorflow.keras.models import load_model
But when I try to load the saved classifier, the following lines with load_model give an error:
classifier_path = '../saved_classifiers/bury_pnas_21/len500/best_model_1_1_len500.pkl'
classifier = load_model(classifier_path)
I am getting this error
Traceback (most recent call last):
File "G:\My Drive\Working_Dir\\project_and_initiative\forecast\code_v8\_test_p9.py", line 46, in <module>
classifier = load_model(final_classifier_path)
File "C:\Venv\time_series_forecast\lib\site-packages\keras\utils\traceback_utils.py", line 70, in error_handler
raise e.with_traceback(filtered_tb) from None
File "C:\Venv\time_series_forecast\lib\site-packages\tensorflow\python\saved_model\load.py", line 948, in load_partial
raise FileNotFoundError(
FileNotFoundError: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for G:\My Drive\Working_Dir\project_and_initiative\forecast\code_v8\saved_dl_classifiersv\bury_pnas_21\len500\best_model_1_1_len500.pkl\variables\variables
You may be trying to load on a different device from the computational device. Consider setting the `experimental_io_device` option in `tf.saved_model.LoadOptions` to the io_device such as '/job:localhost'.
Process finished with exit code 1
I downloaded the .pkl from here, including its contents from the variables folder, and placed it in the same folder structure as suggested in the tutorial. What is the cause, and what should I try?
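One thing I can check first (a minimal diagnostic sketch, not part of the tutorial): print the resolved path and its contents before calling load_model, since the directory in the traceback is not the same as the one used in my code.

from pathlib import Path

classifier_path = Path('../saved_classifiers/bury_pnas_21/len500/best_model_1_1_len500.pkl')
print(classifier_path.resolve())   # the absolute path load_model will actually see
print(classifier_path.exists())    # does anything exist at that path?
if classifier_path.is_dir():
    # a TensorFlow SavedModel directory should contain 'saved_model.pb' and 'variables'
    print(sorted(p.name for p in classifier_path.iterdir()))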
I'm creating a new testing framework. I started to implement my own functions in *.py files, but when I try to run a test, I get the following stack trace:
(venv) PLAMWS0024:OAT user$ robot -v CONFIG_FILE:"/variables-config.robot" ./catalog/tests/test1.robot
Traceback (most recent call last):
File "/Users/user/PycharmProjects/OAT/venv/bin/robot", line 5, in <module>
from robot.run import run_cli
File "/Users/user/PycharmProjects/OAT/venv/lib/python3.8/site-packages/robot/__init__.py", line 44, in <module>
from robot.rebot import rebot, rebot_cli
File "/Users/user/PycharmProjects/OAT/venv/lib/python3.8/site-packages/robot/rebot.py", line 45, in <module>
from robot.run import RobotFramework
File "/Users/user/PycharmProjects/OAT/venv/lib/python3.8/site-packages/robot/run.py", line 44, in <module>
from robot.running.builder import TestSuiteBuilder
File "/Users/user/PycharmProjects/OAT/venv/lib/python3.8/site-packages/robot/running/__init__.py", line 98, in <module>
from .builder import TestSuiteBuilder, ResourceFileBuilder
File "/Users/user/PycharmProjects/OAT/venv/lib/python3.8/site-packages/robot/running/builder/__init__.py", line 16, in <module>
from .builders import TestSuiteBuilder, ResourceFileBuilder
File "/Users/user/PycharmProjects/OAT/venv/lib/python3.8/site-packages/robot/running/builder/builders.py", line 20, in <module>
from robot.parsing import SuiteStructureBuilder, SuiteStructureVisitor
File "/Users/user/PycharmProjects/OAT/venv/lib/python3.8/site-packages/robot/parsing/__init__.py", line 380, in <module>
from .model import ModelTransformer, ModelVisitor
File "/Users/user/PycharmProjects/OAT/venv/lib/python3.8/site-packages/robot/parsing/model/__init__.py", line 18, in <module>
from .statements import Statement
File "/Users/user/PycharmProjects/OAT/venv/lib/python3.8/site-packages/robot/parsing/model/statements.py", line 453, in <module>
class Error(Statement, Exception):
TypeError: multiple bases have instance lay-out conflict
I suspect it's because in one of my files I'm trying to get variables from Robot Framework's built-in functionality, and I'm thinking it's because I'm using protected methods, but I am not sure (a sketch of an alternative is shown after the helper code below).
I found the issue TypeError: multiple bases have instance lay-out conflict, and it suggests there might be a mismatch in naming conventions (or am I wrong?), but my project is quite small, so the only other option is that Robot can't see the function itself.
What could I be missing?
Some code:
Test itself:
*** Settings ***
Documentation     TO BE CHANGED
...               SET IT TO CORRECT DESCRIPTION
Library           ${EXECDIR}/file.py
Library           String

*** Test Cases ***
User can do stuff
    foo bar
from datetime import datetime
from robot.api import logger
from robot.libraries.BuiltIn import _Variables
from robot.parsing.model.statements import Error
import json
import datetime

from catalog.resources.utils.clipboardContext import get_value_from_clipboard

Vars = _Variables()


def foo_bar(params):
    # Get all variables
    country = get_value_from_clipboard('${COUNTRY}')
    address = get_value_from_clipboard('${ADDRESS}')
    city = get_value_from_clipboard('${CITY}')
    postcode = get_value_from_clipboard('${POSTALCODE}')
And calling Vars:
from robot.libraries.BuiltIn import _Variables
from robot.parsing.model.statements import Error

Vars = _Variables()


def get_value_from_clipboard(name):
    """
    Returns the value saved inside variables passed in Robot Framework
    :param name: name of the variable, needs to include the ${} part,
                 for example: ${var} passed in a config file
    :return: the value itself, passed as a string
    """
    try:
        return Vars.get_variable_value(name)
    except Error as e:
        raise Error('Missing parameter in the clipboard, stack: ' + str(e))
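As mentioned above, a minimal sketch of the alternative I'm considering: reading the variables through the public BuiltIn keyword library instead of the protected _Variables class, and raising a plain exception instead of the Error class imported from robot.parsing (the helper name here is mine, purely illustrative):

from robot.libraries.BuiltIn import BuiltIn


def get_variable_or_fail(name):
    # e.g. get_variable_or_fail('${COUNTRY}')
    value = BuiltIn().get_variable_value(name)
    if value is None:
        raise RuntimeError('Missing parameter in the clipboard: ' + name)
    return value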
What fixed the issue:
uninstalling all requirements from the requirements.txt file and installing them one by one.
Additional steps I tried:
commented out all files one by one and ran only the robot command - failed, got the same errors
cleaned the venv as described here: How to reset virtualenv and pip? (failed)
checked whether any variable has the same name as something in python3.8/site-packages/robot/parsing/model/statements.py - none
So it looks like there was some clash when the requirements were installed by the PyCharm IDE.
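For anyone hitting the same thing, a quick way to see which robotframework distribution the venv actually has installed, without importing the (broken) package itself (a minimal sketch using only the standard library):

import importlib.metadata as md  # Python 3.8+

print(md.version('robotframework'))   # the installed Robot Framework version
for dist in md.distributions():
    name = (dist.metadata['Name'] or '')
    if 'robot' in name.lower():
        print(name, dist.version)     # any other robot-related packages that might clash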
I am trying to make a gRPC call to split deep learning inference between layers in PyTorch. For this, I need to send a Tensor to my cloud for processing using a gRPC call. I am currently trying to use the following as the message:
message Tensor {
    google.protobuf.Any out = 1;
}
and in my PyTorch code:
from google.protobuf.any_pb2 import Any
...
request = Any()
...
request.Pack(out)
send_grpc_msg(request)
Here, type(out) is <class 'torch.Tensor'>. However, this code is giving the following error:
Traceback (most recent call last):
File "vgg_inference.py", line 95, in <module>
out = detect_images(Image.open("images/dog.jpg"))
File "vgg_inference.py", line 88, in detect_images
request.Pack(out)
File "/home/rohit/.local/lib/python3.6/site-packages/google/protobuf/internal/well_known_types.py", line 78, in Pack
self.type_url = '%s%s' % (type_url_prefix, msg.DESCRIPTOR.full_name)
AttributeError: 'Tensor' object has no attribute 'DESCRIPTOR'
I am also unable to find a good resource on using Any in Python.
Thanks.
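For context on what I may try instead: since Any.Pack() only accepts protobuf message objects and torch.Tensor is not one, a workaround would be to serialize the tensor to raw bytes and send those in a plain bytes field instead of Any (a minimal sketch; the helper names are mine and purely illustrative):

import io

import torch


def tensor_to_bytes(t):
    # serialize the tensor (moved to CPU) into a bytes blob for the request payload
    buf = io.BytesIO()
    torch.save(t.detach().cpu(), buf)
    return buf.getvalue()


def bytes_to_tensor(b):
    # reconstruct the tensor on the receiving side
    return torch.load(io.BytesIO(b))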
I am trying to load a TensorFlow meta graph from a saved checkpoint using TensorFlow 1.15, to convert it to a SavedModel for TensorFlow Serving. It is a speech recognition model with local attention and a unidirectional LSTM, implemented using the Returnn toolkit with the TensorFlow backend. I am using the following code.
import tensorflow as tf
from tensorflow.python.saved_model import signature_constants
from tensorflow.python.saved_model import tag_constants
import sys

if len(sys.argv) != 2:
    print("Usage: " + sys.argv[0] + " save_dir")
    exit(1)

export_dir = sys.argv[1]
builder = tf.compat.v1.saved_model.builder.SavedModelBuilder(export_dir)
sigs = {}

with tf.Session(graph=tf.Graph()) as sess:
    new_saver = tf.train.import_meta_graph("./serv_test/model.238.meta")
    new_saver.restore(sess, tf.train.latest_checkpoint("./serv_test"))
    graph = tf.get_default_graph()
    input_audio = graph.get_tensor_by_name('inference/default/wav:0')
    output_hyps = graph.get_tensor_by_name('inference/default/Reshape_7:0')
    sigs[signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY] = \
        tf.saved_model.signature_def_utils.predict_signature_def(
            {"in": input_audio}, {"out": output_hyps})
    builder.add_meta_graph_and_variables(sess, [tag_constants.SERVING],
                                         signature_def_map=sigs)

builder.save()
But I am getting the following error in the import_meta_graph line:
Traceback (most recent call last):
File "xport.py", line 16, in <module>
new_saver=tf.train.import_meta_graph("./serv_test/model.238.meta")
File "/home/ubuntu/tf1.15/lib/python3.6/site-packages/tensorflow_core/python/training/saver.py", line 1453, in import_meta_graph
**kwargs)[0]
File "/home/ubuntu/tf1.15/lib/python3.6/site-packages/tensorflow_core/python/training/saver.py", line 1477, in _import_meta_graph_with_return_elements
**kwargs))
File "/home/ubuntu/tf1.15/lib/python3.6/site-packages/tensorflow_core/python/framework/meta_graph.py", line 809, in import_scoped_meta_graph_with_return_elements
return_elements=return_elements)
File "/home/ubuntu/tf1.15/lib/python3.6/site-packages/tensorflow_core/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/home/ubuntu/tf1.15/lib/python3.6/site-packages/tensorflow_core/python/framework/importer.py", line 405, in import_graph_def
producer_op_list=producer_op_list)
File "/home/ubuntu/tf1.15/lib/python3.6/site-packages/tensorflow_core/python/framework/importer.py", line 501, in _import_graph_def_internal
graph._c_graph, serialized, options) # pylint: disable=protected-access
tensorflow.python.framework.errors_impl.NotFoundError: Op type not registered
'NativeLstm2' in binary running on ip-10-1-21-241. Make sure the Op and Kernel
are registered in the binary running in this process. Note that if you are loading a
saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler`
should be done before importing the graph, as contrib ops are lazily registered when
the module is first accessed.
Is there any way to get around this error? Is it because of the custom-built layers used in Returnn? Is there any way to make a Returnn model TensorFlow-servable?
Thanks.
You should remove the graph=tf.Graph(), otherwise your import_meta_graph will import it into the wrong graph.
Just look at some official TF examples of how to use import_meta_graph.
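A minimal sketch of what this suggests (letting import_meta_graph work on the session's default graph instead of a freshly created one; the paths are the ones from the question):

import tensorflow as tf

with tf.compat.v1.Session() as sess:
    # no graph=tf.Graph() here, so the meta graph is imported into the session's own graph
    saver = tf.compat.v1.train.import_meta_graph("./serv_test/model.238.meta")
    saver.restore(sess, tf.compat.v1.train.latest_checkpoint("./serv_test"))
    graph = tf.compat.v1.get_default_graph()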