UnboundLocalError: local variable 'photoshop' referenced before assignment - conv-neural-network

I am working on a Dog Breed classifier, and I am getting the following error when I run the training code for my model.
I tried downgrading the Pillow version, but I am still facing the same issue.
The error appears at the line creating model_scratch:
---------------------------------------------------------------------------
UnboundLocalError Traceback (most recent call last)
<ipython-input-12-360fef19693f> in <module>
1 #training the model
2 model_scratch = train(5, loaders_scratch, model_scratch, optimizer_scratch,
----> 3 criterion_scratch)
<ipython-input-11-c90fddb93f0d> in train(n_epochs, loaders, model, optimizer, criterion)
9 #train model
10 model.train()
---> 11 for batch_idx, (data,target) in enumerate(loaders['train']):
12
13 # zero the parameter (weight) gradients
UnboundLocalError: local variable 'photoshop' referenced before assignment

This is a known issue in Pillow 6.0.0. Based on the line number information in your linked full stack trace, I think your downgrade didn't succeed, and you are still using 6.0.0. Either downgrading to 5.4.1, or building from the latest source, should fix this problem, although the latter option is probably a little difficult for the average user.
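If the downgrade silently failed, or the notebook kernel is still holding the old import, a quick check of the version the kernel actually sees can confirm it. This is just a diagnostic sketch:

```python
# Diagnostic sketch: print the Pillow version the running kernel imports.
# If this still shows 6.0.0 after a downgrade, pip likely installed into a
# different environment than the notebook uses, or the kernel needs a restart.
import PIL

print(PIL.__version__)
```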

Related

Loading a PyTorch model doesn't seem to work

I want to load a pretrained model from PyTorch. Specifically, I want to run the SAT-Speaker-with-emotion-grounding (431MB) model from this repo. However, I don't seem to be able to load it. When I download the model and run the script below, I get a dictionary and not the model.
Loading the model:
model_emo = torch.load('best_model.pt', map_location=torch.device('cpu'))
Running the model:
model_emo(image)
The error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-12-e80ccbe8b6ed> in <module>
----> 1 model_emo(image)
TypeError: 'dict' object is not callable
Now, the docs say that I should instantiate the model class and then load the checkpoint data. However, I don't know what model class this belongs to, and the documents don't say. Does anyone have any advice on how to proceed with this issue? Thanks.
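The docs are describing the usual two-step pattern: the file holds only a state_dict (a dict of weight tensors), so you must build the model object first and then load the weights into it. Here is a minimal sketch of that pattern; TinyModel is a stand-in for whatever class the repo actually defines, not the real SAT-Speaker model:

```python
import io
import torch
import torch.nn as nn

# Stand-in model class; the real repo's class would go here instead.
class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

# Save a state_dict, as the checkpoint in the question was presumably saved.
buffer = io.BytesIO()
torch.save(TinyModel().state_dict(), buffer)
buffer.seek(0)

# torch.load returns a plain dict of tensors; calling it like a model
# raises "TypeError: 'dict' object is not callable", as in the question.
checkpoint = torch.load(buffer, map_location=torch.device('cpu'))
print(type(checkpoint))

# The fix: instantiate the model class first, then load the weights into it.
model = TinyModel()
model.load_state_dict(checkpoint)
model.eval()
output = model(torch.randn(1, 4))
print(output.shape)  # torch.Size([1, 2])
```

The remaining problem, as the question notes, is identifying which class in the repo to instantiate; that has to come from the repo's own code or README.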

ValueError: too many dimensions 'str' in PyTorch

I keep getting the error:
ValueError: too many dimensions 'str'
I've attached my Colab notebook for you to have a look at. I haven't found anything online yet that helps me solve the problem.
link:
https://colab.research.google.com/drive/1ikol2D8mmiIPKhNHbcFlTfVpuU_Gf9BZ?usp=sharing
I've seen this error as well in my Jupyter notebook. I can reproduce it with the following simple code:
Input:
tensor(['a'])
Output:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-55-3bdb0dbfafc2> in <module>
----> 1 tensor(['a'])
...(stacktrace)...
ValueError: too many dimensions 'str'
Apparently PyTorch tensors differ from NumPy arrays in that they don't work with strings, only integers, floats, and booleans.
The error above indicates that too many strings were passed to the tensor (even one string is too many). When I change the code to the following, it works fine:
Input:
tensor([1])
Output:
tensor([1])
I haven't checked your notebook yet, but I just solved the same error in mine. Double-check that all training data and labels are converted to numeric values or tensors. If your dataframe has multiple columns, remove any that don't need to be fed into the training loop.
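A minimal sketch of that conversion, using made-up labels: map each string label to an integer code before building the tensor.

```python
import torch

# Placeholder string labels standing in for a label column.
labels = ['cat', 'dog', 'cat', 'bird']

# Build a string-to-integer mapping, then encode the labels.
label_to_idx = {lab: i for i, lab in enumerate(sorted(set(labels)))}
encoded = [label_to_idx[lab] for lab in labels]

# Now the tensor constructor accepts the data.
target = torch.tensor(encoded)
print(target)  # tensor([1, 2, 1, 0])
```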

Upgraded sklearn makes my previous OneHotEncoder fail to transform

I stored one of my previous ML models as a pickle, planning to use it later in production.
Everything worked fine for quite a while. Months later, I upgraded my sklearn; now when I load the model I get this warning:
> c:\programdata\miniconda3\lib\site-packages\sklearn\base.py:318:
> UserWarning: Trying to unpickle estimator OneHotEncoder from version
> 0.20.1 when using version 0.22.2.post1. This might lead to breaking code or invalid results. Use at your own risk. UserWarning)
When I use it to transform, I get this error:
model_pipeline["ohe"].transform(df)
Error says:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-8-72436472fbb4> in <module>
----> 1 model_pipeline["ohe"].transform(df_merge[['CATEGORY']][:])
c:\programdata\miniconda3\lib\site-packages\sklearn\preprocessing\_encoders.py in transform(self, X)
392 n_samples, n_features = X_int.shape
393
--> 394 if self.drop is not None:
395 to_drop = self.drop_idx_.reshape(1, -1)
396
AttributeError: 'OneHotEncoder' object has no attribute 'drop'
This model pipeline was very expensive to train. Is there any way for me to fix it without retraining everything? Thanks!
I've also encountered the same problem. In my case it was caused by loading an encoder created with an earlier version of scikit-learn. After I re-created the encoder with the current version and saved it again, the problem disappeared.
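A sketch of that fix, assuming you still have the original training data (the DataFrame here is placeholder data; only the `CATEGORY` column name comes from the question):

```python
import pandas as pd
from sklearn.preprocessing import OneHotEncoder

# Placeholder training data standing in for the original 'CATEGORY' column.
df_train = pd.DataFrame({'CATEGORY': ['A', 'B', 'C', 'B']})

# Re-fit a fresh encoder under the currently installed scikit-learn...
ohe = OneHotEncoder(handle_unknown='ignore')
ohe.fit(df_train[['CATEGORY']])

# ...then swap it into the pipeline in place of the stale, unpickled one,
# e.g. by rebuilding the Pipeline or editing model_pipeline.steps.

print(ohe.transform(pd.DataFrame({'CATEGORY': ['B']})).toarray())  # [[0. 1. 0.]]
```

Re-fitting only the encoder is cheap compared to retraining the downstream model, as long as the categories it was originally fitted on are known.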

Converting a singular variable regression to multi-variable regression in Tensorflow

Note: I didn't write the original code; here is the code I'm trying to modify, and
my repository is here.
I'm currently trying to convert a single-variable regression into a multi-variable DNN regression. Line 76 in regressor_full.py always raises an error about an incompatible shape. I've changed the input to a shape of 11, but I don't know how to change the output layer to 11 as well.
Line 76 : feature_columns = [tf.feature_column.numeric_column('X', shape=(11,))]
However, I'm sure the input and output tensors aren't the only two things that have to be changed. Could you help me adapt the repo? Thank you.
To adapt the regressor for 11 targets/outputs, change the value of label_dimension to 11 as follows [Line #97 in your code]:
regressor = skflow.DNNRegressor(feature_columns=feature_columns,
label_dimension=11,
hidden_units=hidden_layers,
model_dir=MODEL_PATH,
dropout=dropout,
config=test_config)
With that change, training completes successfully.

Gensim: KeyedVectors.train()

I downloaded Wikipedia word vectors from here. I loaded the vectors with:
model_160 = KeyedVectors.load_word2vec_format(wiki_160_path, binary=False)
and then want to train them with:
model_160.train()
I get the error back:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-11-22a9f6312119> in <module>()
----> 1 model.train()
AttributeError: 'KeyedVectors' object has no attribute 'train'
My question is now:
It seems KeyedVectors has no train function, but I want to continue training the vectors on my own sentences instead of just using the Wikipedia vectors. How can I do that?
Thanks in advance, Jan
You can't use KeyedVectors for that.
From the documentation:
Word vector storage and similarity look-ups.
The word vectors are considered read-only in this class.
And also:
The word vectors can also be instantiated from an existing file on disk in the word2vec C format as a KeyedVectors instance.
[...]
NOTE: It is impossible to continue training the vectors loaded from the C format
because hidden weights, vocabulary frequency and the binary tree is
missing.