How to configure `backend-store-uri` with huggingface Trainer - mlflow

When configuring a Hugging Face TrainingArguments (https://huggingface.co/transformers/v4.8.0/main_classes/trainer.html) you can set the logging_dir and output_dir.
There is also the mlruns directory, which according to https://mlflow.org/docs/latest/tracking.html#backend-stores you can configure with the --backend-store-uri option. That is an MLflow doc, though, not a Hugging Face doc.
What is the best way to specify a different mlruns directory programmatically when setting up the Hugging Face Trainer?
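One way to do this programmatically, sketched below under the assumption that you are using the default file-based backend store: set the MLFLOW_TRACKING_URI environment variable before the Trainer (and its MLflow callback) is created, since MLflow reads it when it initializes tracking. The ./custom_mlruns path is a placeholder, not anything prescribed by either library.

```python
import os

# Redirect MLflow's backend store away from the default ./mlruns directory.
# This must run before the Trainer and its MLflow callback are constructed.
os.environ["MLFLOW_TRACKING_URI"] = "file:./custom_mlruns"

# Equivalent explicit call, if mlflow is already imported in your script:
#   import mlflow
#   mlflow.set_tracking_uri("file:./custom_mlruns")
```

Either approach changes where runs are stored without touching logging_dir or output_dir, which control the Trainer's own TensorBoard logs and checkpoints rather than MLflow's store.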

Related

Error: That model does not exist (OpenAI)

When using a model I fine-tuned for GPT-3 via the openai CLI, it stopped working, and I get an error with the message "That model does not exist".
But this is a model I have used before, so it should exist.
Sometimes with new versions, old fine-tuned models stop working.
To check your current fine-tuned models, besides running openai api fine_tunes.list, you should verify that the fine-tuned model shows up in the Playground alongside the other models.

Register Sentencetransformer model on Azure ML

Basically the title. The Azure documentation for v2 is constantly getting updated, and as of now I have no resource explaining how to register a pre-trained model from SentenceTransformers on AzureML for future use in endpoints. The library is based on PyTorch, but so far I've had no luck using MLflow (mentioned in the docs) to register it.
I don't have much code to show, so any help whatsoever would be appreciated.
With MLflow, you have to save or log your model before you can register it, but log_model does both in one step:
mlflow.pytorch.log_model(model, "my_model_path", registered_model_name="fancy")
Then it is easiest to deploy it from the AzureML Studio.

OpenVino toolkit training repo and pre-trained models

Under the openvinotoolkit organization is a repo for training deep-object-reid models, with a number of projects such as training a model for person attributes.
Is this the codebase that was used to train the pre-trained models provided with OpenVino?
Is it possible to provide a script to show how to obtain the pre-trained models using the code?
The person attribute uses a dataset with more attributes than provided by the pre-trained model. Is there an updated model using this repo?
Deep-object-reid uses Torchreid, a library for deep-learning person re-identification written in PyTorch. The OpenVINO toolkit then allows developers to deploy the resulting pre-trained deep learning models through a high-level C++ Inference Engine API integrated with application logic.
You can download the pretrained models from Open Model Zoo by going to the directory:
openvino/deployment_tools/open_model_zoo/tools/downloader
Then, run the following command to download pre-trained models:
./downloader.py --name <model_name>
For more details, please visit this page.
This repo is forked from KaiyangZhou/deep-person-reid.

Download pre-trained BERT model locally

I am using the SentenceTransformers library (https://pypi.org/project/sentence-transformers/#pretrained-models) to create sentence embeddings with the pretrained model bert-base-nli-mean-tokens. I have an application that will be deployed to a device without internet access. How can I save this model locally so that when I call it, it loads from disk rather than attempting to download from the internet? As the library maintainers make clear, the SentenceTransformer constructor downloads the model from the internet, and I cannot find a method for saving the model locally.
Hugging Face usage
You can download the model locally using the Hugging Face transformers library:
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/bert-base-nli-mean-tokens")
model = AutoModel.from_pretrained("sentence-transformers/bert-base-nli-mean-tokens")
tokenizer.save_pretrained('./local_directory/')
model.save_pretrained('./local_directory/')
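Once saved, the same directory can be passed back to from_pretrained so the offline device never touches the network. A sketch, reusing the ./local_directory/ path from above (guarded so it only runs once the directory actually exists):

```python
import os
from transformers import AutoTokenizer, AutoModel

local_dir = "./local_directory/"
# On the offline device, load from the saved directory instead of the hub.
# local_files_only=True makes transformers fail fast rather than try to download.
if os.path.isdir(local_dir):
    tokenizer = AutoTokenizer.from_pretrained(local_dir, local_files_only=True)
    model = AutoModel.from_pretrained(local_dir, local_files_only=True)
```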
After instantiating the SentenceTransformer (which downloads the model), you can save it to any path of your choosing with the save() method:
model = SentenceTransformer('distilbert-base-nli-stsb-mean-tokens')
model.save('/my/local/directory/for/models/')
The accepted answer doesn't work as-is: the saved files lack the encapsulating folder structure and config.json that SentenceTransformer is looking for.

How to use the trained model developed in AZURE ML

I trained a model in Azure ML. Now I want to use that model in my iOS app to predict the output.
How do I download the model from Azure and use it in my Swift code?
As far as I know, the model runs inside Azure Machine Learning Studio. It seems that you are unable to download it; the model can do nothing outside of Azure ML.
Here is a similar post for you to refer to. I have also tried @Ahmet's method, but the result is as @mrjrdnthms says.