Loading PyTorch model doesn't seem to work

I want to load a pretrained model from PyTorch. Specifically, I want to run the SAT-Speaker-with-emotion-grounding (431MB) model from this repo. However, I don't seem to be able to load it. When I download the model and run the script below, I get a dictionary and not the model.
Loading the model:
model_emo = torch.load('best_model.pt', map_location=torch.device('cpu'))
Running the model:
model_emo(image)
The error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-12-e80ccbe8b6ed> in <module>
----> 1 model_emo(image)
TypeError: 'dict' object is not callable
Now, the docs say that I should instantiate the model class and then load the checkpoint data into it. However, I don't know which model class this checkpoint belongs to, and the documentation doesn't say. Does anyone have any advice on how to proceed? Thanks.
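In general, torch.load on a checkpoint file returns whatever was saved, which is often a plain dict of weights (a state_dict) rather than a callable model; the weights have to be loaded into an already-instantiated model class. A minimal sketch of that pattern, using a hypothetical placeholder module since the repo's actual Speaker class isn't named here (in practice you would import the real class from the repo's source code):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the repo's actual model class.
class SpeakerModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 2)

    def forward(self, x):
        return self.fc(x)

# Simulate the downloaded checkpoint: saving a state_dict produces a
# plain dict of tensors, which is why calling the loaded object fails.
torch.save(SpeakerModel().state_dict(), 'best_model.pt')

checkpoint = torch.load('best_model.pt', map_location=torch.device('cpu'))
# Some checkpoints wrap the weights under a key such as 'state_dict';
# inspect checkpoint.keys() to find out what yours contains.
state_dict = checkpoint.get('state_dict', checkpoint)

model_emo = SpeakerModel()               # instantiate the class first
model_emo.load_state_dict(state_dict)    # then load the weights into it
model_emo.eval()
out = model_emo(torch.randn(1, 10))      # now the model is callable
```

The key step is finding the model class definition in the repo's source code; the checkpoint file alone does not contain it.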

Related

Bert Model not loading with pickle

I trained a BERT model for NER. It worked fine (though it took time to train). I saved the model with pickle as:
with open('model_pkl', 'wb') as file:
    pickle.dump(model, file)
When I try to load this saved model I get the following error: AttributeError: Can't get attribute 'BertModel' on <module '__main__' from '<input>'>. This method works for lists, dictionaries, etc., but produces this error for a PyTorch model. I am using Python 3.8.10.
Try using torch.save and torch.load.
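For context on why the AttributeError happens: pickle stores classes by module-qualified name rather than by value, so unpickling looks up BertModel in __main__, and in a fresh interpreter session that name doesn't exist. A stdlib-only sketch of the mechanism, using a trivial stand-in class:

```python
import pickle

class BertModel:  # stand-in for the trained model's class
    pass

payload = pickle.dumps(BertModel())
# The pickle stream records only a reference to the class (its module
# and name), not the class code itself, so loading it in another
# session requires that the class be importable under the same name.
assert b'BertModel' in payload

restored = pickle.loads(payload)  # works here because BertModel exists
```

Note that torch.save on a whole model object also uses pickle underneath and has the same constraint; saving model.state_dict() and loading it into a freshly constructed model (with the class importable from your own module) is the more robust pattern.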

Upgraded sklearn makes my previous OneHotEncoder fail to transform

I stored one of my previous ML models in a pickle file and planned to use it later in production.
Everything worked fine for quite a while. Months later, I upgraded my sklearn; now when I load the model I get this warning:
> c:\programdata\miniconda3\lib\site-packages\sklearn\base.py:318:
> UserWarning: Trying to unpickle estimator OneHotEncoder from version
> 0.20.1 when using version 0.22.2.post1. This might lead to breaking code or invalid results. Use at your own risk. UserWarning)
When I use it to transform, I get this error:
model_pipeline["ohe"].transform(df)
Error says:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-8-72436472fbb4> in <module>
----> 1 model_pipeline["ohe"].transform(df_merge[['CATEGORY']][:])
c:\programdata\miniconda3\lib\site-packages\sklearn\preprocessing\_encoders.py in transform(self, X)
392 n_samples, n_features = X_int.shape
393
--> 394 if self.drop is not None:
395 to_drop = self.drop_idx_.reshape(1, -1)
396
AttributeError: 'OneHotEncoder' object has no attribute 'drop'
This model pipeline was very expensive to train. Is there any way for me to fix it without retraining everything? Thanks!
I've also encountered the same problem. In my case it was caused by loading and using an encoder created with a prior version of scikit-learn. After I re-created the encoder and saved it again under the new version, the problem on loading disappeared.
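If retraining the whole pipeline is off the table, one option is to re-fit just the encoder under the currently installed scikit-learn and swap it back into the pipeline. This assumes the original categorical training data (or at least the full set of categories) is still available; the toy categories below are placeholders for your df[['CATEGORY']]:

```python
import joblib
from sklearn.preprocessing import OneHotEncoder

# Stand-in for the original categorical column used at training time.
categories = [["A"], ["B"], ["C"]]

# Re-fit only the encoder under the current scikit-learn version,
# then persist it again so future loads match the installed version.
ohe = OneHotEncoder(handle_unknown="ignore")
ohe.fit(categories)
joblib.dump(ohe, "ohe_refit.joblib")

ohe_loaded = joblib.load("ohe_refit.joblib")
encoded = ohe_loaded.transform([["B"]]).toarray()
```

Alternatively, pinning scikit-learn in the production environment to the version used at training time (0.20.1 here) avoids the mismatch entirely, at the cost of staying on an old release.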

UnboundLocalError: local variable 'photoshop' referenced before assignment

I am working on a Dog Breed classifier, and I get the following error when I run the training code for my model.
I tried downgrading the Pillow version, but I am still facing the same issue.
The traceback points to the line that trains model_scratch:
---------------------------------------------------------------------------
UnboundLocalError Traceback (most recent call last)
<ipython-input-12-360fef19693f> in <module>
1 #training the model
2 model_scratch = train(5, loaders_scratch, model_scratch, optimizer_scratch,
----> 3 criterion_scratch)
<ipython-input-11-c90fddb93f0d> in train(n_epochs, loaders, model, optimizer, criterion)
9 #train model
10 model.train()
---> 11 for batch_idx, (data,target) in enumerate(loaders['train']):
12
13 # zero the parameter (weight) gradients
UnboundLocalError: local variable 'photoshop' referenced before assignment
This is a known issue in Pillow 6.0.0. Based on the line number information in your linked full stack trace, I think your downgrade didn't succeed, and you are still using 6.0.0. Either downgrading to 5.4.1, or building from the latest source, should fix this problem, although the latter option is probably a little difficult for the average user.
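One quick way to confirm which Pillow version the notebook is actually using (a downgrade doesn't take effect until the kernel is restarted) is a stdlib version check:

```python
from importlib.metadata import version, PackageNotFoundError

# Report the Pillow version installed in the active environment.
try:
    pillow_version = version("Pillow")
except PackageNotFoundError:
    pillow_version = "not installed"

# If this still says 6.0.0 after the downgrade, the downgrade did not
# take effect, or the notebook kernel was not restarted afterwards.
print("Pillow:", pillow_version)
```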

How can I fix "Unknown layer: DenseFeatures" while importing pre-trained keras model?

I was following the tutorial for classifying structured data from here. I trained the model as described on my own data, which consists only of numeric values.
I have trained and optimised my model on google colab and downloaded it locally to test it on some new data.
However, when I load the model using this snippet:
from keras.models import load_model
model = load_model("my_model.h5")
I get the following error:
Unknown layer: DenseFeatures
...along with the trace.
I have tried setting custom_objects while loading the model like this:
model = load_model("my_model.h5", custom_objects={'DenseFeatures': tf.keras.layers.DenseFeatures})
But I still get the following error:
__init__() takes at least 2 arguments (3 given)
What could I be doing wrong? I went through the documentation and GitHub issues but couldn't find anything helpful.
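One likely cause, assuming the model was built with tf.keras as in that tutorial, is mixing the standalone keras package with a tf.keras-saved model: the two register layers separately, so standalone keras neither recognizes DenseFeatures nor reconstructs it correctly through custom_objects. Loading with tf.keras directly may resolve it; a sketch (the actual load is commented out since it needs your my_model.h5 file):

```python
import tensorflow as tf

# Use tf.keras (not the standalone keras package) to load a model that
# was built and saved with tf.keras; DenseFeatures is registered there.
load_model = tf.keras.models.load_model
# model = load_model("my_model.h5")  # path to your downloaded model
assert callable(load_model)
```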

How to get the cluster ID from the Bisecting K-means method in PySpark

I've tried
from numpy import array
from pyspark.mllib.clustering import BisectingKMeans, BisectingKMeansModel
I'm using the iris.data set:
iris_model.transform(iris)
but I get this error:
AttributeError                            Traceback (most recent call last)
<ipython-input-241-59b5e8c1e068> in <module>()
----> 1 iris_model.transform(iris)
AttributeError: 'BisectingKMeansModel' object has no attribute 'transform'
I can get the clusterCenters and the resulting array, but I need the cluster to which each case belongs.
Thanks
You are probably mixing the Spark ML and MLlib APIs.
MLlib was the original package; developers later started building a new package, ML, which works with DataFrames.
Change your import to pyspark.ml.clustering and you will get the new version, which has a transform function and works with DataFrames and the new ML Pipelines. I suggest you build a Pipeline once you have the algorithm working :)