Code for Best Run and Model from previous experiment [closed] - azure-machine-learning-service

Closed 2 years ago. This question needs to be more focused; it is not currently accepting answers.
Looking for Python code to retrieve the best run and fitted model from a previously run experiment.

Below is the code you can reuse
https://github.com/microsoft/MLOpsPython/blob/master/diabetes_regression/evaluate/evaluate_model.py
Assuming in each previous experiment run, a model was registered with a tag that contains a metric of interest (test_mae for example), below is the code to retrieve the version with lowest mae.
from azureml.core.model import Model

model_name = "YOUR_MODEL_NAME"
model_path = "LOCAL_PATH"

# Collect (version, test_mae) pairs across all registered versions of the model
model_version_list = [(model.version, float(model.tags["test_mae"]))
                      for model in Model.list(workspace=ws, name=model_name)]

# Sort by the MAE value (the second tuple element), not by the version number
model_version_list.sort(key=lambda a: a[1])
lowest_mae_version = model_version_list[0][0]
print("best version is {} with mae at {}".format(lowest_mae_version, model_version_list[0][1]))

model = Model(name=model_name, workspace=ws, version=lowest_mae_version)
model.download(model_path, exist_ok=True)
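The version selection above hinges on sorting by the metric rather than by the version number. A minimal self-contained sketch of that step, using made-up (version, MAE) pairs in place of the values retrieved from the workspace:

```python
# Hypothetical (version, test_mae) pairs, as produced by the list
# comprehension over Model.list(...) above
model_version_list = [(1, 4.2), (2, 3.1), (3, 5.0)]

# Sort by the metric (second element), ascending, so the best model is first
model_version_list.sort(key=lambda a: a[1])

lowest_mae_version = model_version_list[0][0]
print(lowest_mae_version)  # → 2, since version 2 has the lowest MAE
```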
Follow-up: when the models have not been registered, e.g. models produced inside an AutoML run, how can I retrieve all of them and compare the results by featurization, method used, and metrics, also across other data sets? The models are all inside the workspace; in the GUI you can see them and download them by hand.

Related

Hello World aka. MNIST with feed forward gets less accuracy in comparison of plain with DistributedDataParallel (DDP) model with only one node [closed]

This is a cross-post to my question in the Pytorch forum.
When using DistributedDataParallel (DDP) from PyTorch on only one node I expect it to be the same as a script without DistributedDataParallel.
I created a simple MNIST training setup with a three-layer feed-forward neural network. It gives significantly lower accuracy (around 10 percentage points lower) when trained with the same hyperparameters, the same epochs, and generally the same code, except for the use of the DDP library.
I created a GitHub repository demonstrating my problem.
I hope it is a usage error of the library, but I do not see where the problem would be, and colleagues of mine have already audited the code. I also tried it on macOS with a CPU and on three different GPU/Ubuntu combinations (one with a 1080 Ti, one with a 2080 Ti, and a cluster with P100s), all giving the same results. Seeds are fixed for reproducibility.
You are using different batch sizes in your two experiments: batch_size=128 for mnist-distributed.py and batch_size=32 for mnist-plain.py. That alone means the two trainings will not produce the same results.
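A quick illustration of why this matters with DDP: each process draws its own mini-batch, so the effective global batch size scales with the number of processes. A minimal sketch (the world size and per-process batch sizes below are made-up numbers for illustration, not taken from the linked repository):

```python
# With DDP, each of the N processes draws its own mini-batch, so one
# optimizer step effectively averages gradients over N * batch_size samples.
def effective_batch_size(per_process_batch: int, world_size: int) -> int:
    return per_process_batch * world_size

# Plain script: a single process with batch_size=32
plain = effective_batch_size(32, 1)

# DDP script: e.g. 4 processes, each with batch_size=128
ddp = effective_batch_size(128, 4)

print(plain, ddp)  # 32 vs 512: very different optimization dynamics
```

To make the comparison fair, either use the same per-process batch size with world_size=1, or divide the global batch size by the world size in the DDP script.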

ASIC design for a specific Fully-connected nn or for a CNN [closed]

My question is: suppose I have a trained fully-connected network and I want to implement it in hardware (an ASIC). How do I use the weights and biases from the trained model in Verilog?
Should I create a RAM and store the values in it, or is there some other way?
I need these values (weights and biases) so I can feed them to the MAC units.
The weights and biases need to be converted into a specific number format (say, fixed point) and then stored in RAM.
The values are then fetched from RAM and fed to the MAC units.
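As a sketch of the conversion step, here is a minimal fixed-point quantization of float weights in Python (the Q-format, fractional bit count, and example weights are illustrative assumptions; the resulting integers are what you would write into the RAM initialization file for the Verilog design):

```python
# Convert a float to a signed fixed-point integer with `frac_bits`
# fractional bits (e.g. Q7.8 in a 16-bit word for frac_bits=8).
def to_fixed_point(value: float, frac_bits: int = 8) -> int:
    return round(value * (1 << frac_bits))

# Recover the (quantized) float value from the fixed-point integer
def from_fixed_point(raw: int, frac_bits: int = 8) -> float:
    return raw / (1 << frac_bits)

weights = [0.5, -1.25, 0.0039]
fixed = [to_fixed_point(w) for w in weights]
print(fixed)  # → [128, -320, 1]
print([from_fixed_point(f) for f in fixed])  # quantized approximations
```

The choice of integer and fractional bit widths trades off RAM size against quantization error, so it is worth checking the quantized network's accuracy in software before committing to a format.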

How to retrain AdaBoostClassifier with new data in python? [closed]

Scenario: today I trained an AdaBoostClassifier on the past week's data, and next week I need to train the existing classifier with the new week's data.
For RandomForest I use warm_start=True, but AdaBoostClassifier does not support this directly.
https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.AdaBoostClassifier.html
It seems you want to perform incremental learning. In sklearn, it is not possible to perform it using AdaBoost.
The algorithms that have implemented incremental learning are listed here. You will notice these algorithms implement the method partial_fit().
If you want to keep using the AdaBoostClassifier, you will have to retrain the model on all the data (the past week's plus the new week's).
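A minimal sketch of that retrain-on-all-data approach, using synthetic data in place of the real weekly batches (the feature shape, labels, and hyperparameters below are made up for illustration):

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)

# Week 1 data: the model was originally trained on this
X_week1 = rng.normal(size=(100, 4))
y_week1 = (X_week1[:, 0] > 0).astype(int)

# Week 2 data arrives later
X_week2 = rng.normal(size=(100, 4))
y_week2 = (X_week2[:, 0] > 0).astype(int)

# AdaBoost has neither partial_fit nor warm_start,
# so refit a fresh model on the union of both weeks
X_all = np.vstack([X_week1, X_week2])
y_all = np.concatenate([y_week1, y_week2])

clf = AdaBoostClassifier(n_estimators=50, random_state=0)
clf.fit(X_all, y_all)
print(clf.score(X_all, y_all))
```

If retraining from scratch is too costly, switching to one of the estimators that implement partial_fit() (e.g. SGDClassifier) is the usual alternative.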

Testing dump classifier with unseen data [closed]

I trained a classifier and dumped it along with its vectorizer. Now I want to test it on unseen data by loading the classifier and the vectorizer back. Can someone help?
Check out: http://scikit-learn.org/stable/modules/model_persistence.html
First load your vectorizer and classifier back from file:
import joblib  # on very old scikit-learn versions this was: from sklearn.externals import joblib
vectorizer = joblib.load('your_vectorizer.pkl')
clf = joblib.load('your_classifier.pkl')
Then they work the same as before you dumped them to file. I.e.:
vectorized_data = vectorizer.transform(unseen_data)
predictions = clf.predict(vectorized_data)
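A self-contained round-trip sketch of the dump/load workflow above, with a toy vectorizer and classifier (the texts, labels, and file names are made-up stand-ins for your own artifacts):

```python
import joblib
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Train a toy vectorizer + classifier on made-up data
texts = ["good movie", "great film", "bad movie", "awful film"]
labels = [1, 1, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

# Dump both, then load them back as in the answer above
joblib.dump(vectorizer, "your_vectorizer.pkl")
joblib.dump(clf, "your_classifier.pkl")

vectorizer2 = joblib.load("your_vectorizer.pkl")
clf2 = joblib.load("your_classifier.pkl")

# Apply the reloaded pair to unseen data
unseen = ["good film"]
pred = clf2.predict(vectorizer2.transform(unseen))
print(pred)
```

The key point is that the same vectorizer used at training time must transform the unseen data; fitting a new vectorizer on the test texts would produce a mismatched feature space.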

Best architecture for object recognition [closed]

I'm evaluating the options of using HTM (hierarchical temporal memory) and CNN (convolutional neural network) for object recognition. Which architecture (model) would be most appropriate in this case?
Convolutional neural networks and their variants are the best tools for object recognition.
You can try AlexNet, VGGNet, or ResNet, together with techniques such as batch normalization and dropout.
Always prefer pretrained models and transfer learning first in these cases. You can check out the implementations of Inception V3 and similar models on the TensorFlow website and use them for transfer learning in your project.
