The latest version of the Azure Anomaly Detector Cognitive Service supports multivariate anomaly detection; we need to train a model and then consume it.
The quickstart documentation for Python references imports that fail:
from azure.ai.anomalydetector.models import DetectionRequest, ModelInfo
Both of these imports throw errors.
How can we use the multivariate Anomaly Detector service with the Python SDK?
This error occurred with azure-ai-anomalydetector==3.0.0b2; it has been addressed in azure-ai-anomalydetector==3.0.0b3.
The problem is caused by a recent change in the response format. To fix it, change the failing line to:
model_status = self.ad_client.get_multivariate_model(trained_model_id).model_info.status
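For context, here is a minimal polling sketch built around that corrected call, assuming ad_client is an AnomalyDetectorClient from azure-ai-anomalydetector 3.0.0b3, trained_model_id was returned by the training request, and the READY/FAILED status values follow the quickstart sample:

    import time

    # Poll until training either succeeds or fails; the status values are
    # assumed to match the multivariate quickstart sample (READY / FAILED).
    model_status = None
    while model_status not in ("READY", "FAILED"):
        model_status = ad_client.get_multivariate_model(trained_model_id).model_info.status
        time.sleep(10)  # wait before checking the training status again

    print(f"Training finished with status: {model_status}")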
Basically the title. The Azure documentation for v2 is constantly being updated, and as of now I have found no resource explaining how to register a pre-trained SentenceTransformers model on AzureML for later use in endpoints. The library is based on PyTorch, but so far I've had no luck using MLflow (mentioned in the docs) to register it.
I don't have much code to show, so any help whatsoever would be appreciated.
With MLflow, you first have to save or log your model before you can register it, but with log_model you can do both in one step:
mlflow.pytorch.log_model(model, "my_model_path", registered_model_name="fancy")
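As a fuller illustration, here is a minimal sketch of logging and registering a SentenceTransformers model in one step, assuming the sentence-transformers and mlflow packages are installed and the AzureML workspace is configured as the MLflow tracking backend; the model name and registered name below are placeholders:

    import mlflow
    import mlflow.pytorch
    from sentence_transformers import SentenceTransformer

    # A SentenceTransformer is a torch.nn.Module underneath, so the pytorch flavor works.
    model = SentenceTransformer("all-MiniLM-L6-v2")

    with mlflow.start_run():
        # Saves the model as a run artifact and registers it under the given name.
        mlflow.pytorch.log_model(
            model,
            artifact_path="sentence_encoder",
            registered_model_name="sentence-encoder",
        )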
Then it is easiest to deploy it from AzureML Studio.
I have been trying to import a model in ONNX format to use with PyTorch, and I am finding it difficult to find an example, since most resources on the internet cover exporting a PyTorch model to ONNX.
I found that torch.onnx can only export a model; the import direction hasn't been implemented yet. Installing the onnx library directly lets me do onnx.load("model_name.onnx"), but how do I use this model with PyTorch? I am not able to move the model to the GPU with model.to(device="GPU").
PyTorch doesn't currently support importing ONNX models. As of writing this answer, it's an open feature request.
While not guaranteed to work, a potential solution is to use a tool developed by Microsoft called MMdnn (no, it's not Windows-only!), which supports conversion to and from various frameworks. Unfortunately, ONNX can only be the target of a conversion, not a source. That said, you may be able to import your model into another framework and then use MMdnn to convert from that framework to PyTorch. Obviously this isn't ideal, and the potential for success will depend on how other frameworks consume ONNX, which may not be amenable to the way MMdnn works.
Update August 2022
Unfortunately, it appears the feature request was rejected and MMdnn has been abandoned. There are some more recent third-party tools that provide some ability to import ONNX into PyTorch, like onnx2pytorch and onnx-pytorch. Neither of these tools appears to be actively developed, though PyTorch and ONNX are relatively stable at this point, so hopefully these tools remain relevant in the future (official support would be better, IMO). Note that both of these tools have unaddressed issues, so it may be necessary to try both if one doesn't work for you.
Update September 2022
Based on the comment from @DanNissenbaum, there is a newer third-party tool, onnx2torch, that is being actively developed and maintained.
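For illustration, here is a minimal sketch using onnx2torch, assuming it is installed (pip install onnx2torch) and that model_name.onnx is the file from the question:

    import torch
    from onnx2torch import convert

    # convert() accepts a path (or an onnx.ModelProto) and returns a regular torch.nn.Module
    torch_model = convert("model_name.onnx")
    torch_model = torch_model.to("cuda")  # moving the model to the GPU now works the usual way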
I am getting this error when training my model on Google Cloud while trying to run TensorFlow Object Detection:
gapic-google-cloud-logging-v2 0.91.3 has requirement google-gax<0.16dev,>=0.15.7, but you'll have google-gax 0.12.5 which is incompatible.
Any help on how to fix it?
I trained a model in Azure ML. Now I want to use that model in my iOS app to predict the output.
How do I download the model from Azure and use it in my Swift code?
As far as I know, the model can only run in Azure Machine Learning Studio. It seems that you are unable to download it; the model can do nothing outside of Azure ML.
Here is a similar post for you to refer to. I have also tried @Ahmet's method, but the result is as @mrjrdnthms says.
I trained a deep-learning model in Python using the TensorFlow library and saved it in a pickle file.
My question: is there a way to load this file with Firebase Cloud Functions on the Node.js runtime?
Thanks.
An official JavaScript version of TensorFlow was released a few weeks ago.
With tfjs-converter it is possible to convert pretrained models for use from JavaScript.
Check out https://github.com/tensorflow/tfjs-converter and https://js.tensorflow.org/
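For example, here is a minimal sketch of the Python side of the conversion, assuming the model can be reloaded as a Keras model (rather than from the pickle file) and that the tensorflowjs package is installed; the paths below are placeholders:

    import tensorflowjs as tfjs
    from tensorflow import keras

    # Load the trained model, then write a TensorFlow.js-compatible model.json
    # plus binary weight shards that can be served to the Node.js runtime.
    model = keras.models.load_model("my_model")
    tfjs.converters.save_keras_model(model, "tfjs_model/")

The exported directory can then be loaded from the Cloud Function with the TensorFlow.js library linked above.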