I build a model and save its configuration as follows:
def checkpoint(state, ep, filename='./Risultati/checkpoint.pth'):
    if ep == (n_epoch - 1):
        print('Saving state...')
        torch.save(state, filename)

checkpoint({'state_dict': rnn.state_dict()}, ep)
and then I want to load this configuration:
state_dict= torch.load('./Risultati/checkpoint.pth')
rnn.state_dict(state_dict)
When I try, this is the error:
Traceback (most recent call last):
File "train.py", line 288, in <module>
rnn.state_dict(state_dict)
File "/home/marco/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 593, in state_dict
destination._metadata[prefix[:-1]] = dict(version=self._version)
AttributeError: 'dict' object has no attribute '_metadata'
Where did I go wrong?
Thanks in advance.
You need to restore the state dict stored under the 'state_dict' key of the dictionary you loaded, using load_state_dict rather than state_dict:
rnn.load_state_dict(state_dict['state_dict'])
See the load_state_dict method for more information.
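For completeness, here is a minimal sketch of the full save/load round trip, using a stand-in LSTM so it runs on its own (in the question, rnn and n_epoch already exist, and the ./Risultati directory must exist for the save path to work):

import torch
import torch.nn as nn

rnn = nn.LSTM(input_size=10, hidden_size=20, num_layers=2)  # stand-in for the question's model
n_epoch = 5

def checkpoint(state, ep, filename='./Risultati/checkpoint.pth'):
    # save only at the final epoch, as in the question
    if ep == (n_epoch - 1):
        print('Saving state...')
        torch.save(state, filename)

# saving: wrap the parameters under a 'state_dict' key
checkpoint({'state_dict': rnn.state_dict()}, n_epoch - 1)

# loading: read the dict back and restore the weights with load_state_dict
ckpt = torch.load('./Risultati/checkpoint.pth')
rnn.load_state_dict(ckpt['state_dict'])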
Related
I'm using Jupyter to learn and practice machine learning. I created a Pipeline object with many classes from Scikit Learn and custom classes that I wrote. After that I saved this Pipeline object to a file 'classif_pipeline.pkl.z' using
joblib.dump(pipeline, 'classif_pipeline.pkl.z').
The problem is that when I try to load this file on a different computer I get the error message below.
Code first:
import joblib
full_pipeline = joblib.load('classif_pipeline.pkl.z')
The error message is below. I also have the same versions of Scikit Learn and joblib on this PC.
Traceback (most recent call last):
File "/media/backup/programming/python/jupyter/classification/main.py", line 3, in <module>
full_pipeline = joblib.load('classif_pipeline.pkl.z')
File "/home/guilherme/.local/lib/python3.10/site-packages/joblib/numpy_pickle.py", line 658, in load
obj = _unpickle(fobj, filename, mmap_mode)
File "/home/guilherme/.local/lib/python3.10/site-packages/joblib/numpy_pickle.py", line 577, in _unpickle
obj = unpickler.load()
File "/usr/lib/python3.10/pickle.py", line 1213, in load
dispatch[key[0]](self)
File "/usr/lib/python3.10/pickle.py", line 1538, in load_stack_global
self.append(self.find_class(module, name))
File "/usr/lib/python3.10/pickle.py", line 1582, in find_class
return _getattribute(sys.modules[module], name)[0]
File "/usr/lib/python3.10/pickle.py", line 331, in _getattribute
raise AttributeError("Can't get attribute {!r} on {!r}"
AttributeError: Can't get attribute 'DependentsImputer' on <module '__main__' from '/media/backup/programming/python/jupyter/classification/main.py'>
DependentsImputer is one of the many other classes I implemented in the Jupyter notebook.
How can I load this file?
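The thread has no accepted fix, but the traceback points at the usual pickle constraint: the custom classes were defined in the notebook (which runs as __main__), so at load time pickle looks for them on __main__ of the loading script. A minimal sketch of one way around that, assuming the custom transformers can be copied into an importable file; the module name custom_transformers.py and the class body are hypothetical:

# custom_transformers.py -- shipped alongside main.py, containing the same
# class definitions that were used when the pipeline was dumped
from sklearn.base import BaseEstimator, TransformerMixin

class DependentsImputer(BaseEstimator, TransformerMixin):
    def fit(self, X, y=None):
        return self

    def transform(self, X):
        # the real imputation logic from the notebook goes here
        return X

# main.py -- importing the class into this script (which runs as __main__)
# lets pickle's lookup of '__main__.DependentsImputer' succeed
import joblib
from custom_transformers import DependentsImputer  # repeat for every custom class

full_pipeline = joblib.load('classif_pipeline.pkl.z')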
I got the following error:
Traceback (most recent call last):
File "/home/miranda9/diversity-for-predictive-success-of-meta-learning/div_src/diversity_src/experiment_mains/main2_distance_sl_vs_maml.py", line 790, in <module>
main_data_analyis()
File "/home/miranda9/diversity-for-predictive-success-of-meta-learning/div_src/diversity_src/experiment_mains/main2_distance_sl_vs_maml.py", line 597, in main_data_analyis
args.mdl2 = get_sl_learner(args)
File "/raid/projects/miranda9/diversity-for-predictive-success-of-meta-learning/div_src/diversity_src/data_analysis/common.py", line 195, in get_sl_learner
model = load_original_rfs_ckpt(args, path_to_checkpoint=args.path_2_init_sl)
File "/raid/projects/miranda9/diversity-for-predictive-success-of-meta-learning/div_src/diversity_src/data_analysis/common.py", line 168, in load_original_rfs_ckpt
ckpt = torch.load(path_to_checkpoint, map_location=args.device)
File "/home/miranda9/miniconda3/envs/meta_learning_a100/lib/python3.9/site-packages/torch/serialization.py", line 608, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "/home/miranda9/miniconda3/envs/meta_learning_a100/lib/python3.9/site-packages/torch/serialization.py", line 794, in _legacy_load
deserialized_objects[key]._set_from_file(f, offset, f_should_read_directly)
RuntimeError: storage has wrong size: expected 0 got 64
Why and how does one fix it?
related: https://discuss.pytorch.org/t/runtimeerror-storage-has-wrong-size/88109/4
This issue was resolved for me by simply re-uploading the model ckpt file to the cluster.
For another point of view, see: https://discuss.pytorch.org/t/runtimeerror-storage-has-wrong-size/88109/4
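Since re-uploading the file fixed it, the copy on the cluster had presumably been truncated or corrupted in transfer. A minimal sketch for checking that, assuming you can run the same checksum on the local and the cluster copy and compare the digests (the path below is a placeholder):

import hashlib

def md5sum(path, chunk_size=1 << 20):
    # stream the file in chunks so large checkpoints don't need to fit in memory
    digest = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            digest.update(chunk)
    return digest.hexdigest()

# run on both machines; differing digests mean the upload was corrupted
print(md5sum('path/to/checkpoint.pt'))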
I'm trying to convert this t7 model to PyTorch, to Caffe, to Caffe2, or to any other format.
This is what I get when converting to PyTorch with the code from:
https://github.com/clcarwin/convert_torch_to_pytorch
Has anyone had this error, or does anyone know what to do with it?
roy@roy-Lenovo:~/convert_torch_to_pytorch$ python convert_torch.py -m model_a.t7
Traceback (most recent call last):
File "convert_torch.py", line 314, in <module>
torch_to_pytorch(args.model,args.output)
File "convert_torch.py", line 262, in torch_to_pytorch
slist = lua_recursive_source(lnn.Sequential().add(model))
File "/home/roy/.local/lib/python2.7/site-packages/torch/legacy/nn/Sequential.py", line 15, in add
self.output = module.output
File "/home/roy/.local/lib/python2.7/site-packages/torch/utils/serialization/read_lua_file.py", line 99, in __getattr__
return self._obj.get(k)
AttributeError: 'list' object has no attribute 'get'
Traceback (most recent call last):
File "dac.py", line 87, in <module>
X_train=load_create_padded_data(X_train=X_train,savetokenizer=False,isPaddingDone=False,maxlen=sequence_length,tokenizer_path='./New_Tokenizer.tkn')
File "/home/dpk/Downloads/DAC/New_Utils.py", line 92, in load_create_padded_data
X_train=tokenizer.texts_to_sequences(X_train)
File "/home/dpk/anaconda2/envs/venv/lib/python2.7/site-packages/keras_preprocessing/text.py", line 278, in texts_to_sequences
return list(self.texts_to_sequences_generator(texts))
File "/home/dpk/anaconda2/envs/venv/lib/python2.7/site-packages/keras_preprocessing/text.py", line 296, in texts_to_sequences_generator
oov_token_index = self.word_index.get(self.oov_token)
AttributeError: 'Tokenizer' object has no attribute 'oov_token'
Probably this one:
You can manually set tokenizer.oov_token = None to fix this.
Pickle is not a reliable way to serialize objects since it assumes that the underlying Python code/modules you're importing have not changed. In general, DO NOT use pickled objects with a different version of the library than what was used at pickling time. That's not a Keras issue, it's a generic Python/pickle issue.
https://github.com/keras-team/keras/issues/9099
To fix this I manually set
self.oov_token = None
inside the Tokenizer class, not
tokenizer.oov_token = None
on the loaded instance.
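For reference, a minimal sketch of the instance-level workaround (tokenizer.oov_token = None) suggested above, assuming the tokenizer was unpickled from the ./New_Tokenizer.tkn path that appears in the traceback; if that is not enough, the class-level change just described may be needed instead:

import pickle

# load a Tokenizer that was pickled with an older keras_preprocessing version
with open('./New_Tokenizer.tkn', 'rb') as f:
    tokenizer = pickle.load(f)

# old pickles lack the oov_token attribute that newer versions read,
# so give the instance a value before calling texts_to_sequences
if not hasattr(tokenizer, 'oov_token'):
    tokenizer.oov_token = None

X_train = tokenizer.texts_to_sequences(['an example sentence'])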
I have a problem when using word2vec and LSTM; the code is:
def input_transform(string):
    words = jieba.lcut(string)
    words = np.array(words).reshape(1, -1)
    model = Word2Vec.load('lstm_datamodel.pkl')
    combined = create_dictionaries(model, words)
    return combined

def lstm_predict(string):
    print('loading model......')
    with open('lstm_data.yml', 'r') as f:
        yaml_string = yaml.load(f)
    model = model_from_yaml(yaml_string)
    print('loading weights......')
    model.load_weights('lstm_data.h5')
    model.compile(loss='binary_crossentropy',
                  optimizer='adam', metrics=['accuracy'])
    data = input_transform(string)
    data.reshape(1, -1)
    # print data
    result = model.predict_classes(data)
    if result[0][0] == 1:
        print(string, ' positive')
    else:
        print(string, ' negative')
and the error is:
Traceback (most recent call last):
File "C:\Python36\lib\site-packages\gensim\models\word2vec.py", line 1312, in load
model = super(Word2Vec, cls).load(*args, **kwargs)
File "C:\Python36\lib\site-packages\gensim\models\base_any2vec.py", line 1244, in load
model = super(BaseWordEmbeddingsModel, cls).load(*args, **kwargs)
File "C:\Python36\lib\site-packages\gensim\models\base_any2vec.py", line 603, in load
return super(BaseAny2VecModel, cls).load(fname_or_handle, **kwargs)
File "C:\Python36\lib\site-packages\gensim\utils.py", line 423, in load
obj._load_specials(fname, mmap, compress, subname)
AttributeError: 'dict' object has no attribute '_load_specials'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:/GitHub/reviewsentiment/veclstm.py", line 211, in <module>
lstm_predict(string)
File "C:/GitHub/reviewsentiment/veclstm.py", line 191, in lstm_predict
data=input_transform(string)
File "C:/GitHub/reviewsentiment/veclstm.py", line 177, in input_transform
model=Word2Vec.load('lstm_datamodel.pkl')
File "C:\Python36\lib\site-packages\gensim\models\word2vec.py", line 1323, in load
return load_old_word2vec(*args, **kwargs)
File "C:\Python36\lib\site-packages\gensim\models\deprecated\word2vec.py", line 153, in load_old_word2vec
old_model = Word2Vec.load(*args, **kwargs)
File "C:\Python36\lib\site-packages\gensim\models\deprecated\word2vec.py", line 1618, in load
model = super(Word2Vec, cls).load(*args, **kwargs)
File "C:\Python36\lib\site-packages\gensim\models\deprecated\old_saveload.py", line 88, in load
obj._load_specials(fname, mmap, compress, subname)
AttributeError: 'dict' object has no attribute '_load_specials'
I am sorry for including so much code.
This is my first time asking on StackOverflow, and I have tried my very best to find the answer on my own, but failed. Can you help me? Thank you very much!
The error is occurring on the line...
model=Word2Vec.load('lstm_datamodel.pkl')
...so all the other/later code you've supplied is irrelevant and superfluous.
The suffix of your filename, lstm_datamodel.pkl, suggests it may have been created via Python's pickle() facility. The gensim Word2Vec.load() method only expects to load models that were saved by the module's own save() routine, not any pickled object.
The gensim native save() does make use of pickle for some of its saving, but not all, and thus wouldn't expect a fully-pickled object in the file provided.
This might be the cause of your problem. You could instead try a load based entirely on Python's pickle:
with open('lstm_datamodel.pkl', 'rb') as f:
    model = pickle.load(f)
Alternatively, if you can reconstruct the model, be sure to save it via the native gensim model.save(filename); loading that file with Word2Vec.load() should then work.
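A minimal sketch of that second option, assuming the Word2Vec model can be retrained or otherwise reconstructed (the training sentences below are placeholders):

from gensim.models import Word2Vec

# placeholder corpus; substitute whatever the original model was trained on
sentences = [['this', 'is', 'an', 'example'], ['another', 'example', 'sentence']]
model = Word2Vec(sentences, min_count=1)

# save with gensim's native routine ...
model.save('lstm_datamodel.model')

# ... so that Word2Vec.load() can read it back without the _load_specials error
model = Word2Vec.load('lstm_datamodel.model')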