I have a pickle file parameters.pkl containing the parameters and values of a model. Is there a way to store it in Microsoft Azure Machine Learning Studio as an endpoint, so that we can access the parameters and their values through an API at a later stage?
The pickle file has been created through the following process:
import pickle

params = {'scaler': scaler,
          'features': z_tags,
          'Z_reconstruction_loss': Z_reconstruction_loss}

with open('parameters.pkl', 'wb') as f:
    pickle.dump(params, f)
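Since pickle round-trips the whole dictionary, the stored parameters can be read back the same way at a later stage. A minimal sketch of the round trip, using placeholder values standing in for the real scaler, z_tags, and Z_reconstruction_loss objects:

```python
import pickle

# Placeholder values standing in for the real objects.
scaler = {"mean": 0.0, "std": 1.0}
z_tags = ["tag_1", "tag_2"]
Z_reconstruction_loss = 0.42

params = {'scaler': scaler,
          'features': z_tags,
          'Z_reconstruction_loss': Z_reconstruction_loss}

# Write the dictionary out...
with open('parameters.pkl', 'wb') as f:
    pickle.dump(params, f)

# ...and load it back, e.g. inside a scoring script later on.
with open('parameters.pkl', 'rb') as f:
    loaded = pickle.load(f)
```

Serving this through an Azure ML endpoint would additionally require registering the file as a model and wrapping the load in a scoring script's init()/run() functions.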
Related
I am trying to train a model in Databricks with MLflow and RoBERTa transformers. I am able to register the model, but when I call it for testing, I get the following error:
OSError: We couldn't connect to 'https://huggingface.co/' to load this model and it looks like dbfs:/databricks/mlflow-tracking/3638851642935524/cd4eae6034684211933b97b178e5f062/artifacts/checkpoint-36132/artifacts/checkpoint-36132 is not the path to a directory containing a config.json file.
However, when I check the saved model, I can see that config.json and the other files are present in the mentioned artifact location, but with the following error:
Couldn't load model information due to an error.
For more details: I followed the guide at the following link for using MLflow with transformers on Databricks:
https://gitlab.com/juliensimon/huggingface-demos/-/tree/main/mlflow
I am trying to register a model inside one of my Azure ML experiments. I am able to register it via Model.register, but not via run_context.register_model.
These are the two calls I use; the run_context.register_model call is the one that fails:
learn.path = Path('./outputs').absolute()
Model.register(run_context.experiment.workspace, "outputs/login_classification.pkl", "login_classification", tags=metrics)
run_context.register_model("login_classification", "outputs/login_classification.pkl", tags=metrics)
I receive the following error:
Message: Could not locate the provided model_path outputs/login_classification.pkl
But the model is stored in this path:
Before calling run_context.register_model(), make sure you have obtained the run context with run_context = Run.get_context().
I was able to fix the problem by explicitly uploading the model into the run history record before trying to register it:
run.upload_file("output/model.pickle", "output/model.pickle")
Check the documentation for the "Could not locate the provided model_path" message, and the Run class reference, for more details.
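Putting the answer's two steps together, the sequence inside a run would look roughly like this (a sketch assuming the azureml-core SDK; paths follow the question's outputs/ folder):

```python
from azureml.core import Run

# Get the context of the currently executing run.
run_context = Run.get_context()

# Upload the model file into the run history record first...
run_context.upload_file("outputs/login_classification.pkl",
                        "outputs/login_classification.pkl")

# ...then register it from the run.
run_context.register_model(model_name="login_classification",
                           model_path="outputs/login_classification.pkl")
```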
How do you access the WebhookData object in Azure Automation webhooks using the Python programming language? I have read the documentation on this, but it is in PowerShell, which does not help in my case. My Azure webhook URL endpoint is successfully receiving data from a custom external application. I would like to read the received data and run logic driven by it. As shown in the screenshot below, I am receiving the data in Azure.
This is the error message I am getting when I attempt to access the WEBHOOKDATA input parameter:
Traceback (most recent call last):
  File "C:\Temp\rh0xijl1.ayb\3b9ba51c-73e7-44ba-af36-3c910e659c71", line 7, in <module>
    received_data = WEBHOOKDATA
NameError: name 'WEBHOOKDATA' is not defined
This is the code producing the error message:
#!/usr/bin/env python3
import json
# Here is where my question is. How do I get this in Python?
# Surely, I should be able to access this easily. But how.
# Powershell does have a concept of param in the documentation - but I want to do this in Python.
received_data = WEBHOOKDATA
# convert the received object to a JSON string
received_as_text = json.dumps(received_data)
print(received_as_text)
You access runbook input parameters with sys.argv. See Tutorial: Create a Python 3 runbook.
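For a webhook-triggered Python runbook, the WEBHOOKDATA value arrives as one of those positional arguments, typically as a JSON string. A minimal sketch (the parse_webhook_data helper is hypothetical, and the exact argument layout should be checked against your runbook's parameters):

```python
#!/usr/bin/env python3
import json
import sys

def parse_webhook_data(argv):
    """Decode the first runbook argument as JSON; return None if absent."""
    if len(argv) > 1:
        return json.loads(argv[1])
    return None

if __name__ == "__main__":
    # In an Azure Automation Python runbook, input parameters
    # (including WEBHOOKDATA) are passed on the command line.
    received_data = parse_webhook_data(sys.argv)
    print(json.dumps(received_data))
```

From the decoded object you can then pull out fields such as the request body and drive your logic from them.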
With a sagemaker.estimator.Estimator, I want to re-deploy a model after retraining (calling fit with new data).
When I call this
estimator.deploy(initial_instance_count=1, instance_type='ml.m5.xlarge')
I get an error
botocore.exceptions.ClientError: An error occurred (ValidationException)
when calling the CreateEndpoint operation: Cannot create already existing
endpoint "arn:aws:sagemaker:eu-east-1:1776401913911:endpoint/zyx".
Apparently I want to use functionality like UpdateEndpoint. How do I access that functionality from this API?
Yes, under the hood model.deploy creates a model, an endpoint configuration, and an endpoint. When you call the method again from an already-deployed, trained estimator it will raise an error because a similarly-configured endpoint already exists. What I encourage you to try:
use the update_endpoint=True parameter. From the SageMaker SDK doc:
"Additionally, it is possible to deploy a different endpoint configuration, which links to your model, to an already existing
SageMaker endpoint. This can be done by specifying the existing
endpoint name for the endpoint_name parameter along with the
update_endpoint parameter as True within your deploy() call."
Alternatively, if you want to create a separate model, you can specify a new model_name in your deploy() call.
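With an SDK version that still supports this parameter, the call would look something like the following sketch (the endpoint name 'zyx' is taken from the error message above):

```python
predictor = estimator.deploy(
    initial_instance_count=1,
    instance_type='ml.m5.xlarge',
    endpoint_name='zyx',    # the already existing endpoint
    update_endpoint=True    # deploy the new config to it instead of creating
)
```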
update_endpoint has since been deprecated (AFAIK). To re-create the UpdateEndpoint functionality from this API itself and deploy a newly fitted training job to an existing endpoint, we can do something like this (the example uses the SageMaker sklearn API):
from datetime import datetime
import logging

import boto3
from sagemaker.sklearn.estimator import SKLearn

logger = logging.getLogger(__name__)

sklearn_estimator = SKLearn(
    entry_point='model.py',
    instance_type=<instance_type>,
    framework_version=<framework_version>,
    role=<role>,
    dependencies=[
        <comma separated names of files>
    ],
    hyperparameters={
        'key_1': value,
        'key_2': value,
        ...
    }
)
sklearn_estimator.fit()

sm_client = boto3.client('sagemaker')

# Create the model
sklearn_model = sklearn_estimator.create_model()

# Define a fresh, timestamped endpoint config name; a temporary
# endpoint of the same name is created below
endpoint_config_name = 'endpoint-' + datetime.utcnow().strftime("%Y%m%d%H%M%S")
current_endpoint = endpoint_config_name

# From the model: create the endpoint config and the (temporary) endpoint
sklearn_model.deploy(
    initial_instance_count=<count>,
    instance_type=<instance_type>,
    endpoint_name=current_endpoint
)

# Update the existing endpoint if it exists, or create a new one
try:
    sm_client.update_endpoint(
        EndpointName=DESIRED_ENDPOINT_NAME,  # the prod/existing endpoint name
        EndpointConfigName=endpoint_config_name
    )
except Exception:
    try:
        sm_client.create_endpoint(
            EndpointName=DESIRED_ENDPOINT_NAME,  # the prod endpoint name
            EndpointConfigName=endpoint_config_name
        )
    except Exception as e:
        logger.info(e)

# Clean up the temporary endpoint created by deploy()
sm_client.delete_endpoint(EndpointName=current_endpoint)
I was trying to register an ONNX model to Azure Machine Learning service workspace in two different ways, but I am getting errors I couldn't solve.
First method: via Jupyter Notebook and a Python script
model = Model.register(model_path = MODEL_FILENAME,
model_name = "MyONNXmodel",
tags = {"onnx":"V0"},
description = "test",
workspace = ws)
The error is : HttpOperationError: Operation returned an invalid status code 'Service invocation failed!Request: GET https://cert-westeurope.experiments.azureml.net/rp/workspaces'
Second method: Via Azure Portal
Can anyone help, please?
Error 413 means the payload is too large. Using the Azure portal, you can only upload a model up to 25 MB in size. Please use the Python SDK to upload models larger than 25 MB.