How to deploy the bert-base-NER model with the pipeline function on SageMaker, with aggregation_strategy?
I want to deploy an NER model with an aggregation strategy, but I have only found this:
from sagemaker.huggingface.model import HuggingFaceModel

hub = {
    'HF_MODEL_ID': 'dslim/bert-base-NER',
    'HF_TASK': 'token-classification'}

huggingface_model = HuggingFaceModel(
    env=hub,
    role=role,
    transformers_version="4.6",  # Transformers version used
    pytorch_version="1.7",       # PyTorch version used
    py_version='py36',           # Python version used
)
But what I want to deploy is the equivalent of this:
from transformers import pipeline
token_classifier = pipeline(model='dslim/bert-base-NER', aggregation_strategy="simple")
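One possible approach (a hedged sketch, not part of the original post): the SageMaker Hugging Face inference toolkit forwards a "parameters" object from the request body as keyword arguments to the underlying pipeline call, so aggregation_strategy can often be set per request without a custom inference script:

# deploy the model above, then pass pipeline kwargs in the request payload
predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",  # instance type is an assumption
)

result = predictor.predict({
    "inputs": "My name is Wolfgang and I live in Berlin",
    "parameters": {"aggregation_strategy": "simple"},  # forwarded to the token-classification pipeline
})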
Related
I have an XGBoost model currently in production on AWS SageMaker, serving real-time inferences. After a while, I would like to update the model with a newer one trained on more data, and keep everything else as is (e.g. same endpoint, same inference procedure, so really no changes aside from the model itself).
The current deployment procedure is the following:
from sagemaker.xgboost.model import XGBoostModel
from sagemaker.xgboost.model import XGBoostPredictor

xgboost_model = XGBoostModel(
    model_data=<S3 url>,
    role=<sagemaker role>,
    entry_point='inference.py',
    source_dir='src',
    code_location=<S3 url of other dependencies>,
    framework_version='1.5-1',
    name=model_name)

xgboost_model.deploy(
    instance_type='ml.c5.large',
    initial_instance_count=1,
    endpoint_name=model_name)
Now that I have updated the model a few weeks later, I would like to re-deploy it. I am aware that the .deploy() method creates an endpoint and an endpoint configuration, so it does it all. I cannot simply re-run my script, since I would encounter an error.
In previous versions of SageMaker I could have updated the model with an extra argument passed to the .deploy() method, update_endpoint=True, but in sagemaker >= 2.0 this is a no-op. Instead, in sagemaker >= 2.0, I need to use the predictor object as stated in the documentation. So I try the following:
predictor = XGBoostPredictor(model_name)
predictor.update_endpoint(model_name=model_name)
This actually updates the endpoint according to a new endpoint configuration. However, I do not know what it is updating... I do not specify anywhere in those two lines that the new xgboost_model trained on more data should be used, so where do I tell the update to pick up the more recent model?
Thank you!
Update
I believe that I need to be looking at production variants, as stated in the documentation here. However, that whole tutorial is based on the Amazon SDK for Python (boto3), which has artifacts that are hard to manage when I have different entry points for each model variant (e.g. different inference.py scripts).
Since I found an answer to my own question, I will post it here for those who encounter the same problem.
I ended up re-coding my entire deployment script using the boto3 SDK rather than the sagemaker SDK (or a mix of both, as some documentation suggests).
Here is the whole script. It shows how to create a SageMaker model object, an endpoint configuration, and an endpoint to deploy the model on for the first time. In addition, it shows how to update the endpoint with a newer model (which was my main question).
Here is the code to do all three, in case you want to bring your own model and update it safely in production using SageMaker:
import boto3
import time
from datetime import datetime

import sagemaker
from sagemaker import image_uris
from fileManager import *  # this is a local script for helper functions

# names of the zipped model and the zipped inference code
CODE_TAR = 'your_inference_code_and_other_artifacts.tar.gz'
MODEL_TAR = 'your_saved_xgboost_model.tar.gz'

# sagemaker params
smClient = boto3.client('sagemaker')
smRole = <your_sagemaker_role>
bucket = sagemaker.Session().default_bucket()
region = boto3.Session().region_name  # region used to retrieve the inference container
INSTANCE_TYPE = 'ml.c5.large'         # instance type used in the endpoint configuration
# deploy algorithm
class Deployer:

    def __init__(self, modelName, deployRetrained=False):
        self.modelName = modelName
        self.deployRetrained = deployRetrained
        self.prefix = <S3_model_path_prefix>

    def deploy(self):
        '''
        Main method to create a sagemaker model, create an endpoint configuration and deploy the model.
        If the deployRetrained param is set to True, this method will update an already existing endpoint.
        '''
        # define model name and endpoint name to be used for model deployment/update
        model_name = self.modelName + <any_suffix>
        endpoint_config_name = self.modelName + '-%s' % datetime.now().strftime('%Y-%m-%d-%HH%M')
        endpoint_name = self.modelName

        # deploy model for the first time
        if not self.deployRetrained:
            print('Deploying for the first time')

            # here you should copy and zip the model dependencies that you may have
            # (such as preprocessors, inference code, config code...); mine were zipped into the file called CODE_TAR

            # upload model and model artifacts needed for inference to S3
            uploadFile(list_files=[MODEL_TAR, CODE_TAR], prefix=self.prefix)

            # create sagemaker model and endpoint configuration
            self.createSagemakerModel(model_name)
            self.createEndpointConfig(endpoint_config_name, model_name)

            # deploy model and wait while endpoint is being created
            self.createEndpoint(endpoint_name, endpoint_config_name)
            self.waitWhileCreating(endpoint_name)

        # update model
        else:
            print('Updating existing model')

            # upload model and model artifacts needed for inference (here the old ones are replaced)
            # make sure to make a backup in S3 if you would like to keep the older models;
            # we replace the old ones and keep the same names to avoid having to recreate a sagemaker model with a different name for the update!
            uploadFile(list_files=[MODEL_TAR, CODE_TAR], prefix=self.prefix)

            # create a new endpoint config that takes the new model
            self.createEndpointConfig(endpoint_config_name, model_name)

            # update endpoint
            self.updateEndpoint(endpoint_name, endpoint_config_name)

            # wait while the endpoint updates, then delete outdated endpoint configs once it is InService
            self.waitWhileCreating(endpoint_name)
            self.deleteOutdatedEndpointConfig(model_name, endpoint_config_name)
    def createSagemakerModel(self, model_name):
        '''
        Create a new sagemaker Model object with an xgboost container and an entry point for inference, using the boto3 API.
        '''
        # retrieve the inference image (container)
        docker_container = image_uris.retrieve(region=region, framework='xgboost', version='1.5-1')

        # relative S3 path to the pre-trained model, used to build the S3 model URI
        model_s3_key = f'{self.prefix}/' + MODEL_TAR

        # combine bucket name, model file name and relative S3 path to create the S3 model URI
        model_url = f's3://{bucket}/{model_s3_key}'

        # S3 path to the necessary inference code
        code_url = f's3://{bucket}/{self.prefix}/{CODE_TAR}'

        # create a sagemaker Model object with all its artifacts
        smClient.create_model(
            ModelName=model_name,
            ExecutionRoleArn=smRole,
            PrimaryContainer={
                'Image': docker_container,
                'ModelDataUrl': model_url,
                'Environment': {
                    'SAGEMAKER_PROGRAM': 'inference.py',  # inference.py is at the root of my zipped CODE_TAR
                    'SAGEMAKER_SUBMIT_DIRECTORY': code_url,
                }
            }
        )
    def createEndpointConfig(self, endpoint_config_name, model_name):
        '''
        Create an endpoint configuration (only for the boto3 sdk procedure) and set production variant parameters.
        Each retraining procedure will induce a new variant name based on the endpoint configuration name.
        '''
        smClient.create_endpoint_config(
            EndpointConfigName=endpoint_config_name,
            ProductionVariants=[
                {
                    'VariantName': endpoint_config_name,
                    'ModelName': model_name,
                    'InstanceType': INSTANCE_TYPE,
                    'InitialInstanceCount': 1
                }
            ]
        )
    def createEndpoint(self, endpoint_name, endpoint_config_name):
        '''
        Deploy the model to an endpoint.
        '''
        smClient.create_endpoint(
            EndpointName=endpoint_name,
            EndpointConfigName=endpoint_config_name)
    def deleteOutdatedEndpointConfig(self, name_check, current_endpoint_config):
        '''
        Automatically detect and delete endpoint configurations whose name contains the string 'name_check'. This method can be used
        after a retrain procedure to delete all previous endpoint configurations while keeping the current one, 'current_endpoint_config'.
        '''
        # get a list of all available endpoint configurations
        all_configs = smClient.list_endpoint_configs()['EndpointConfigs']

        # keep only endpoint configs that contain name_check in their name
        names_list = []
        for config_dict in all_configs:
            endpoint_config_name = config_dict['EndpointConfigName']
            if name_check in endpoint_config_name:
                names_list.append(endpoint_config_name)

        # remove the current endpoint configuration from the list (we do not want to delete this one since it is live)
        names_list.remove(current_endpoint_config)

        for name in names_list:
            try:
                smClient.delete_endpoint_config(EndpointConfigName=name)
                print('Deleted endpoint configuration for %s' % name)
            except Exception:
                print('INFO : no endpoint configuration was found for %s' % name)
    def updateEndpoint(self, endpoint_name, endpoint_config_name):
        '''
        Update an existing endpoint with a new retrained model.
        '''
        smClient.update_endpoint(
            EndpointName=endpoint_name,
            EndpointConfigName=endpoint_config_name,
            RetainAllVariantProperties=True)
    def waitWhileCreating(self, endpoint_name):
        '''
        While the endpoint is being created or updated, sleep for 60 seconds between status checks.
        '''
        # wait while creating or updating endpoint
        status = smClient.describe_endpoint(EndpointName=endpoint_name)['EndpointStatus']
        print('Status: %s' % status)
        while status != 'InService' and status != 'Failed':
            time.sleep(60)
            status = smClient.describe_endpoint(EndpointName=endpoint_name)['EndpointStatus']
            print('Status: %s' % status)

        # in case of a deployment failure raise an error
        if status == 'Failed':
            raise ValueError('Endpoint failed to deploy')


if __name__ == "__main__":
    deployer = Deployer('MyDeployedModel', deployRetrained=True)
    deployer.deploy()
Final comments:
- The sagemaker documentation mentions all of this but fails to state that you can provide an 'entry_point' to the create_model method, as well as a 'source_dir' for inference dependencies (e.g. normalization artifacts). It can be done as seen in the PrimaryContainer argument.
- My fileManager.py script just contains basic functions to make tar files and to upload to and download from my S3 paths. To simplify the class, I have not included them.
- The deleteOutdatedEndpointConfig method may seem like overkill, with unnecessary loops and checks. I do this because I have multiple endpoint configurations to handle and wanted to remove the ones that weren't live AND that contain the string name_check (I do not know the exact name of each configuration since there is a datetime suffix). Feel free to simplify it or remove it altogether.
Hope it helps.
In your model_name you specify the name of a SageMaker Model object, in which you can specify the image_uri, model_data, etc.
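In other words (a hedged sketch with hypothetical names; role_arn and image_uri are assumed to be defined): to make the update pick up a more recent model, create a new Model object whose ModelDataUrl points at the new artifact, reference it from a new endpoint configuration, and update the endpoint with that configuration:

import boto3

sm = boto3.client('sagemaker')

# 1. a new Model object pointing at the retrained artifact
sm.create_model(
    ModelName='my-model-v2',  # hypothetical name
    ExecutionRoleArn=role_arn,
    PrimaryContainer={'Image': image_uri,
                      'ModelDataUrl': 's3://my-bucket/models/v2/model.tar.gz'},
)

# 2. a new endpoint configuration that references the new model
sm.create_endpoint_config(
    EndpointConfigName='my-endpoint-config-v2',
    ProductionVariants=[{'VariantName': 'v2',
                         'ModelName': 'my-model-v2',
                         'InstanceType': 'ml.c5.large',
                         'InitialInstanceCount': 1}],
)

# 3. point the live endpoint at the new configuration
sm.update_endpoint(EndpointName='my-endpoint',
                   EndpointConfigName='my-endpoint-config-v2')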
I have trained a BERT model on SageMaker and now I want to get it ready for making predictions, i.e. inference.
I used PyTorch to train the model, and the model is saved to an S3 bucket after training.
Here is the structure inside the model.tar.gz file that is present in the S3 bucket.
Now, I do not understand how I can make predictions on it. I have tried to follow many guides but still could not understand.
Here is something which I have tried:
inference_image_uri = sagemaker.image_uris.retrieve(
    framework='pytorch',
    version='1.7.1',
    instance_type=inference_instance_type,
    region=aws_region,
    py_version='py3',
    image_scope='inference'
)

sm.create_model(
    ModelName=model_name,
    ExecutionRoleArn=role,
    PrimaryContainer={
        'ModelDataUrl': model_s3_dir,
        'Image': inference_image_uri
    }
)

sm.create_endpoint_config(
    EndpointConfigName=endpoint_config_name,
    ProductionVariants=[
        {
            "VariantName": "variant1",                # The name of the production variant.
            "ModelName": model_name,
            "InstanceType": inference_instance_type,  # Specify the compute instance type.
            "InitialInstanceCount": 1                 # Number of instances to launch initially.
        }
    ]
)

sm.create_endpoint(
    EndpointName=endpoint_name,
    EndpointConfigName=endpoint_config_name
)
from sagemaker.predictor import Predictor
from sagemaker.serializers import JSONLinesSerializer
from sagemaker.deserializers import JSONLinesDeserializer

inputs = [
    {"inputs": ["I have a question [EOT] Hey Manish Mittal ! I'm OneAssist bot. I'm here to answer your queries. [SEP] thanks"]},
    # {"features": ["OK, but not great."]},
    # {"features": ["This is not the right product."]},
]

predictor = Predictor(
    endpoint_name=endpoint_name,
    serializer=JSONLinesSerializer(),
    deserializer=JSONLinesDeserializer(),
    sagemaker_session=sess
)

predicted_classes = predictor.predict(inputs)

for predicted_class in predicted_classes:
    print("Predicted class {} with probability {}".format(predicted_class['predicted_label'], predicted_class['probability']))
I can see that the endpoint is created, but while predicting it gives me this error:
ModelError: An error occurred (ModelError) when calling the
InvokeEndpoint operation: Received server error (0) from primary with
message "Your invocation timed out while waiting for a response from
container primary. Review the latency metrics for each container in
Amazon CloudWatch, resolve the issue, and try again."
I do not understand how to make it work. Also, do I need to give an entry script to the inference, and if yes, where?
Here's detailed documentation on deploying PyTorch models - https://sagemaker.readthedocs.io/en/stable/frameworks/pytorch/using_pytorch.html#deploy-pytorch-models
If you're using the default model_fn provided by the estimator, you'll need to have the model as model.pt.
To write your own inference script and deploy the model, see the section on Bring your own model. The pytorch_model.deploy function will deploy it to a real-time endpoint, and then you can use the predictor.predict function on the resulting endpoint variable.
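For illustration, here is a minimal sketch of the bring-your-own-model route (hedged: the file name model.pth and the request handling are assumptions; adapt them to the actual contents of your model.tar.gz):

# inference.py -- hypothetical entry script packaged alongside the model
import os
import torch

def model_fn(model_dir):
    # called once at container start-up to load the model from the extracted model.tar.gz
    model = torch.load(os.path.join(model_dir, 'model.pth'), map_location='cpu')
    model.eval()
    return model

def predict_fn(data, model):
    # called for each request with the deserialized input
    with torch.no_grad():
        return model(**data)

Deploying through the framework class (rather than raw boto3 calls) packages the entry script for you:

from sagemaker.pytorch.model import PyTorchModel

pytorch_model = PyTorchModel(
    model_data=model_s3_dir,   # s3://.../model.tar.gz from the code above
    role=role,
    entry_point='inference.py',
    framework_version='1.7.1',
    py_version='py3',
)
predictor = pytorch_model.deploy(initial_instance_count=1,
                                 instance_type=inference_instance_type)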
I have a .pkl file that is the result of a trained model, and I want to create a SageMaker endpoint from it to be able to consume the predictions. I have already managed to read the file from S3, but I can't find exact documentation on how to expose the "compiled" model as an API.
import pickle

import boto3

s3 = boto3.resource('s3')
bucket = s3.Bucket("sagemake-models-workshop").Object(
    "pikle-file/contatos/xgb_contratos_mensual_RandomizedSearchLinux.pkl").get()['Body'].read()
bucket_pickle = pickle.loads(bucket)
Output of bucket_pickle:
XGBRegressor(base_score=0.5, booster='gbtree', colsample_bylevel=1,
colsample_bynode=1, colsample_bytree=0.1, gamma=0, gpu_id=-1,
importance_type='gain', interaction_constraints='',
learning_rate=0.33, max_delta_step=0, max_depth=3,
min_child_weight=1, missing=nan, monotone_constraints='()',
n_estimators=150, n_jobs=0, num_parallel_tree=1, random_state=0,
reg_alpha=0, reg_lambda=1, scale_pos_weight=1, subsample=1,
tree_method='exact', validate_parameters=1, verbosity=None)
If your model is an XGBoost model, you can look at deploying it using the XGBoost framework container. Please see this link for details on Bring Your Own Model for XGBoost.
I work for AWS and my opinions are my own.
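For illustration, a hedged sketch of that route (file names and S3 paths are assumptions): repackage the pickled model into a model.tar.gz, give the XGBoost framework container a small inference.py whose model_fn un-pickles it, and deploy with the SageMaker SDK:

# inference.py -- hypothetical entry script packaged with the model
import os
import pickle

def model_fn(model_dir):
    # the .pkl from the tarball is loaded once per container start
    with open(os.path.join(model_dir, 'model.pkl'), 'rb') as f:
        return pickle.load(f)

Then deploy:

from sagemaker.xgboost.model import XGBoostModel

xgb_model = XGBoostModel(
    model_data='s3://sagemake-models-workshop/model.tar.gz',  # assumed repackaged artifact
    role=role,
    entry_point='inference.py',
    framework_version='1.5-1',
)
predictor = xgb_model.deploy(instance_type='ml.c5.large',
                             initial_instance_count=1)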
Can anyone provide an example of deploying a PyTorch model using SageMaker Pipelines?
I've used the MLOps template (MLOps template for model building, training and deployment) of SageMaker Studio to build an MLOps project.
The template uses SageMaker Pipelines to build a pipeline for preprocessing, training and registering the model.
The deployment script is implemented in a YAML file and uses CloudFormation to run; it is triggered automatically when the model is registered.
The template uses an xgboost model to train on the data and deploy the model. I want to use PyTorch and deploy that instead.
I successfully replaced xgboost with PyTorch: the data is preprocessed and the model is trained and registered. But I did not include an inference.py in my model, so I get an error at model deployment.
The error log when updating the endpoint is:
FileNotFoundError: [Errno 2] No such file or directory: '/opt/ml/model/code/inference.py'
I tried to find an example of using inference.py for a PyTorch model, but I couldn't find any example that uses SageMaker Pipelines and RegisterModel.
Any help would be appreciated.
Below you can see a part of the pipeline for training and registering the model.
from sagemaker.pytorch.estimator import PyTorch
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import (
    ProcessingStep,
    TrainingStep,
)
from sagemaker.workflow.step_collections import RegisterModel

pytorch_estimator = PyTorch(entry_point=os.path.join(BASE_DIR, 'train.py'),
                            instance_type="ml.m5.xlarge",
                            instance_count=1,
                            role=role,
                            framework_version='1.8.0',
                            py_version='py3',
                            hyperparameters={'epochs': 5, 'batch-size': 64, 'learning-rate': 0.1})

step_train = TrainingStep(
    name="TrainModel",
    estimator=pytorch_estimator,
    inputs={
        "train": sagemaker.TrainingInput(
            s3_data=step_process.properties.ProcessingOutputConfig.Outputs[
                "train_data"
            ].S3Output.S3Uri,
            content_type="text/csv",
        ),
        "dev": sagemaker.TrainingInput(
            s3_data=step_process.properties.ProcessingOutputConfig.Outputs[
                "dev_data"
            ].S3Output.S3Uri,
            content_type="text/csv"
        ),
        "test": sagemaker.TrainingInput(
            s3_data=step_process.properties.ProcessingOutputConfig.Outputs[
                "test_data"
            ].S3Output.S3Uri,
            content_type="text/csv"
        ),
    },
)

step_register = RegisterModel(
    name="RegisterModel",
    estimator=pytorch_estimator,
    model_data=step_train.properties.ModelArtifacts.S3ModelArtifacts,
    content_types=["text/csv"],
    response_types=["text/csv"],
    inference_instances=["ml.t2.medium", "ml.m5.large"],
    transform_instances=["ml.m5.large"],
    model_package_group_name=model_package_group_name,
    approval_status=model_approval_status,
)

pipeline = Pipeline(
    name=pipeline_name,
    parameters=[
        processing_instance_type,
        processing_instance_count,
        training_instance_type,
        model_approval_status,
        input_data,
    ],
    steps=[step_process, step_train, step_register],
    sagemaker_session=sagemaker_session,
)
The PyTorch API uses the base PyTorch images. When the sagemaker.pytorch deploy method is called, SageMaker runs '/opt/ml/model/code/inference.py', but your base image does not have that file.
So if you want to use the deploy method, write an 'inference.py' in the SageMaker style (one that can execute in the SageMaker container) and build and push the image.
Then you can use the deploy method!
Here is some sample code:
https://sagemaker-workshop.com/custom/containers.html
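For reference, here is a minimal sketch of the handler contract that the SageMaker PyTorch serving container looks for in inference.py (the four function names are the toolkit's convention; the model file name and tensor handling are assumptions):

# inference.py -- placed under code/ inside model.tar.gz
import json
import os
import torch

def model_fn(model_dir):
    # load the model produced by train.py
    model = torch.load(os.path.join(model_dir, 'model.pth'), map_location='cpu')
    model.eval()
    return model

def input_fn(request_body, content_type='application/json'):
    # deserialize the request body into model input
    return torch.tensor(json.loads(request_body))

def predict_fn(data, model):
    with torch.no_grad():
        return model(data)

def output_fn(prediction, accept='application/json'):
    # serialize the prediction for the response
    return json.dumps(prediction.tolist())

Depending on your SDK version, RegisterModel may also accept entry_point and source_dir keyword arguments (an assumption; check your SDK version), in which case the pipeline repacks the inference code into the model artifact for you and the FileNotFoundError goes away.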
I have a .pkl package saved in my Azure DevOps repository.
Using the code below, it searches for the package in the workspace.
How do I provide the package saved in the repository?
ws = Workspace.get(
    name=workspace_name,
    subscription_id=subscription_id,
    resource_group=resource_group,
    auth=cli_auth)

image_config = ContainerImage.image_configuration(
    execution_script="score.py",
    runtime="python-slim",
    conda_file="conda.yml",
    description="Image with ridge regression model",
    tags={"area": "ml", "type": "dev"},
)

image = Image.create(
    name=image_name, models=[model], image_config=image_config, workspace=ws
)
image.wait_for_creation(show_output=True)

if image.creation_state != "Succeeded":
    raise Exception(f"Image creation status: {image.creation_state}")

print(
    "{}(v.{} [{}]) stored at {} with build log {}".format(
        image.name,
        image.version,
        image.creation_state,
        image.image_location,
        image.image_build_log_uri,
    )
)

# Writing the image details to /aml_config/image.json
image_json = {}
image_json["image_name"] = image.name
image_json["image_version"] = image.version
image_json["image_location"] = image.image_location
with open("aml_config/image.json", "w") as outfile:
    json.dump(image_json, outfile)
I tried to provide the path to the model, but it fails saying the package was not found:
models = $(System.DefaultWorkingDirectory)/package_model.pkl
Register model:
Register a file or folder as a model by calling Model.register().
In addition to the content of the model file itself, your registered model will also store model metadata -- model description, tags, and framework information -- that will be useful when managing and deploying models in your workspace. Using tags, for instance, you can categorize your models and apply filters when listing models in your workspace.
model = Model.register(workspace=ws,
                       model_name='',  # Name of the registered model in your workspace.
                       model_path='',  # Local file to upload and register as a model.
                       model_framework=Model.Framework.SCIKITLEARN,  # Framework used to create the model.
                       model_framework_version=sklearn.__version__,  # Version of scikit-learn used to create the model.
                       sample_input_dataset=input_dataset,
                       sample_output_dataset=output_dataset,
                       resource_configuration=ResourceConfiguration(cpu=1, memory_in_gb=0.5),
                       description='Ridge regression model to predict diabetes progression.',
                       tags={'area': 'diabetes', 'type': 'regression'})

print('Name:', model.name)
print('Version:', model.version)
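To connect this to the original question (a hedged sketch; the model name is hypothetical): inside an Azure DevOps pipeline, $(System.DefaultWorkingDirectory) is exposed to scripts as the SYSTEM_DEFAULTWORKINGDIRECTORY environment variable, so you can point model_path at the .pkl checked out from the repository instead of at a model already in the workspace:

import os

from azureml.core.model import Model

# path to the .pkl that the pipeline checked out from the repository
pkl_path = os.path.join(os.environ["SYSTEM_DEFAULTWORKINGDIRECTORY"], "package_model.pkl")

model = Model.register(workspace=ws,
                       model_name="package-model",  # hypothetical name
                       model_path=pkl_path)         # local file uploaded from the repo checkout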
Deploy machine learning models to Azure: https://learn.microsoft.com/en-us/azure/machine-learning/how-to-deploy-and-where?tabs=python
To troubleshoot remote model deployment, please follow the document.