OpenVINO toolkit training repo and pre-trained models - openvino

Under the openvinotoolkit organization is a repo for training deep-object-reid models, with a number of projects such as training a model for person attributes.
Is this the codebase that was used to train the pre-trained models provided with OpenVino?
Is it possible to provide a script to show how to obtain the pre-trained models using the code?
The person attribute uses a dataset with more attributes than provided by the pre-trained model. Is there an updated model using this repo?

Deep-object-reid uses Torchreid, a library for deep-learning person re-identification written in PyTorch. The OpenVINO toolkit allows developers to deploy pre-trained deep learning models through a high-level C++ Inference Engine API integrated with application logic.
You can download the pretrained models from Open Model Zoo by going to the directory:
openvino/deployment_tools/open_model_zoo/tools/downloader
Then, run the following command to download pre-trained models:
./downloader.py --name <model_name>
For more details, please visit this page.
This repo is forked from KaiyangZhou/deep-person-reid.

Related

Error: That model does not exist (OpenAI)

When using a model I fine-tuned for GPT-3 via the openai CLI, it stopped working and I get an error with this message: "That model does not exist".
But this is a model I have used before, so it should exist.
Sometimes with new API versions, old fine-tuned models stop working.
To check your current fine-tuned models, besides running openai api fine_tunes.list, you should verify that the fine-tuned model shows up in the Playground alongside the other models.

How to configure `backend-store-uri` with huggingface Trainer

When configuring a Hugging Face TrainingArguments https://huggingface.co/transformers/v4.8.0/main_classes/trainer.html you can set the logging_dir and output_dir.
There is also the mlruns directory, which according to https://mlflow.org/docs/latest/tracking.html#backend-stores you can configure using --backend-store-uri. Though that is an MLflow doc, not a Hugging Face doc.
What is the best way to specify a different mlruns directory programmatically when setting up the Hugging Face Trainer?
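One minimal sketch of doing this programmatically: MLflow consults the MLFLOW_TRACKING_URI environment variable for its tracking backend, so setting it before the Trainer's MLflow callback starts a run redirects logging away from the default ./mlruns. The path "file:./my_mlruns" below is a hypothetical example, not a required name:

```python
import os

# MLflow resolves its tracking backend from MLFLOW_TRACKING_URI when it
# is set; export it before constructing TrainingArguments / Trainer.
# "file:./my_mlruns" is a hypothetical local path, not a required name.
os.environ["MLFLOW_TRACKING_URI"] = "file:./my_mlruns"

# ... then build TrainingArguments / Trainer as usual; the MLflow
# integration should log to the directory above instead of ./mlruns.
```

This avoids the --backend-store-uri CLI flag entirely, which only applies when launching the mlflow server/UI from the command line.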

MLkit tflite file configuration

I am developing using ML Kit, but there is a path problem. My setup is the same as the example, so what is wrong with it?
If you plan to use ML Kit, you can ignore that message and everything should work fine.
ML Model Binding is another feature that uses codegen to generate a wrapper Java class for your model. It works best when your model has metadata inside.
There is a list of models with metadata if you want to try it.

Download pre-trained BERT model locally

I am using the SentenceTransformers library (here: https://pypi.org/project/sentence-transformers/#pretrained-models) for creating embeddings of sentences using the pretrained model bert-base-nli-mean-tokens. I have an application that will be deployed to a device that does not have internet access. How can I save this model locally so that when I call it, it loads the model locally, rather than attempting to download from the internet? As the library maintainers make clear, the method SentenceTransformer downloads the model from the internet (see here: https://pypi.org/project/sentence-transformers/#pretrained-models) and I cannot find a method for saving the model locally.
Hugging face usage
You can download the models locally by using the Hugging Face transformers library:
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/bert-base-nli-mean-tokens")
model = AutoModel.from_pretrained("sentence-transformers/bert-base-nli-mean-tokens")
tokenizer.save_pretrained('./local_directory/')
model.save_pretrained('./local_directory/')
After instantiating the SentenceTransformer via download, you can then save it to any path of your choosing with the 'save()' method.
model = SentenceTransformer('distilbert-base-nli-stsb-mean-tokens')
model.save('/my/local/directory/for/models/')
The accepted answer doesn't work, as it doesn't produce the encapsulating folder and config.json that SentenceTransformer is looking for.
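If you want to sanity-check the saved directory before shipping it to the offline device, one heuristic is to look for the pipeline descriptor that SentenceTransformer.save() writes. This is a sketch under the assumption that the save layout includes a modules.json (which recent sentence-transformers versions write); has_st_layout is a hypothetical helper name:

```python
import json
import tempfile
from pathlib import Path

def has_st_layout(model_dir: str) -> bool:
    """Heuristic: a SentenceTransformer save directory is assumed to
    contain a modules.json listing the pipeline modules."""
    return (Path(model_dir) / "modules.json").is_file()

# Usage sketch: a temporary directory stands in for model.save() output.
with tempfile.TemporaryDirectory() as d:
    (Path(d) / "modules.json").write_text(json.dumps([]))
    print(has_st_layout(d))  # True
```

A plain save_pretrained() dump of the tokenizer and model alone would fail this check, which matches the complaint above that SentenceTransformer cannot load such a directory directly.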

How to use a trained model developed in Azure ML

I trained a model in Azure ML. Now I want to use that model in my iOS app to predict the output.
How do I download the model from Azure and use it in my Swift code?
As far as I know, the model runs in Azure Machine Learning Studio. It seems that you are unable to download it; the model can do nothing outside of Azure ML.
Here is a similar post for you to refer to. I have also tried @Ahmet's method, but the result is as @mrjrdnthms says.

Resources