I am trying to run an Azure ML pipeline. The pipeline trains a model, saves it as a pickle file, and then tries to unpickle it in the next step. When unpickling, I intermittently hit the error below on random runs:
Traceback (most recent call last):
File "batch_scoring.py", line 199, in
clf = joblib.load(open(model_path, 'rb'))
File "/azureml-envs/azureml_347514cea2002d6bd71b42aceb1e4eeb/lib/python3.6/site-packages/joblib/numpy_pickle.py", line 595, in load
obj = _unpickle(fobj)
File "/azureml-envs/azureml_347514cea2002d6bd71b42aceb1e4eeb/lib/python3.6/site-packages/joblib/numpy_pickle.py", line 529, in _unpickle
obj = unpickler.load()
File "/azureml-envs/azureml_347514cea2002d6bd71b42aceb1e4eeb/lib/python3.6/pickle.py", line 1048, in load
raise EOFError
EOFError
Has anyone faced this issue before?
I got the same error when trying to unpickle the model from the output folder / model registry. In my case the .pkl file was not properly formed during the experiment. Try re-running the experiment (I did it without changing a single line and it worked for me). In my case the first, corrupted pickle was also smaller than the good one. Hope this helps :)
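For reference, a defensive sketch along these lines (the path and error handling are placeholders, not from the pipeline above): fail early with a clear message if the pickle written by the previous step is empty or truncated.

import os
import joblib

model_path = "outputs/model.pkl"  # assumed location of the pickled model
size = os.path.getsize(model_path)
print(f"{model_path}: {size} bytes")
try:
    clf = joblib.load(model_path)  # joblib.load also accepts a plain path
except EOFError:
    raise RuntimeError(
        f"{model_path} appears truncated ({size} bytes); "
        "re-run the training step that produced it."
    )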
Related
I downloaded YOLOv7 from the GitHub page, trained it on a custom dataset, took the weights, and trained it again on a different custom dataset. Both trainings worked as expected.
Now I want to use the final weights for detection in new videos, but I get an error when YOLO tries to load the weights (see the message below).
Edit: I found the answer to my question. The problem was that the path to my weights contained upper-case characters. It turns out they are converted to lower case, so the file could not be found anymore.
Thanks #gspr for pointing me in the right direction.
Traceback (most recent call last):
File "detect.py", line 195, in <module>
detect()
File "detect.py", line 34, in detect
model = attempt_load(weights, map_location=device) # load FP32 model
File "/home/mahler/yolo/yolov7/models/experimental.py", line 241, in attempt_load
attempt_download(w)
File "/home/mahler/yolo/yolov7/utils/google_utils.py", line 31, in attempt_download
tag = subprocess.check_output('git tag', shell=True).decode().split()[-1]
IndexError: list index out of range
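A minimal check in the spirit of the fix from the edit above (the path here is just a placeholder, kept lower case):

import os

weights = "runs/train/exp/weights/best.pt"  # placeholder weights path, all lower case
assert os.path.isfile(weights), f"weights not found: {weights}"

In the traceback, the IndexError surfaces inside attempt_download, which only falls back to the git tag lookup when it cannot find the weights file locally, so verifying the path first exposes the real cause.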
I'm trying to train the research model ssd_mobilenet_v1_fpn_640x640_coco17_tpu-8 using MultiWorkerMirroredStrategy (by setting --num_workers=2 in the invocation of model_main_tf2.py), training across two workers (0 and 1), each with a single GPU. However, when I attempt this I get the following error, always on worker 1:
Traceback (most recent call last):
File "C:\Users\JS\.conda\envs\tensor2\lib\site-packages\tensorflow\python\distribute\input_lib.py", line 553, in __next__
return self.get_next()
File "C:\Users\JS\.conda\envs\tensor2\lib\site-packages\tensorflow\python\distribute\input_lib.py", line 610, in get_next
return self._get_next_no_partial_batch_handling(name)
File "C:\Users\JS\.conda\envs\tensor2\lib\site-packages\tensorflow\python\distribute\input_lib.py", line 642, in _get_next_no_partial_batch_handling
replicas.extend(self._iterators[i].get_next_as_list(new_name))
File "C:\Users\JS\.conda\envs\tensor2\lib\site-packages\tensorflow\python\distribute\input_lib.py", line 1594, in get_next_as_list
return self._format_data_list_with_options(self._iterator.get_next())
File "C:\Users\JS\.conda\envs\tensor2\lib\site-packages\tensorflow\python\data\ops\multi_device_iterator_ops.py", line 580, in get_next
result.append(self._device_iterators[i].get_next())
File "C:\Users\JS\.conda\envs\tensor2\lib\site-packages\tensorflow\python\data\ops\iterator_ops.py", line 889, in get_next
return self._next_internal()
File "C:\Users\JS\.conda\envs\tensor2\lib\site-packages\tensorflow\python\data\ops\iterator_ops.py", line 819, in _next_internal
ret = gen_dataset_ops.iterator_get_next(
File "C:\Users\JS\.conda\envs\tensor2\lib\site-packages\tensorflow\python\ops\gen_dataset_ops.py", line 2922, in iterator_get_next
_ops.raise_from_not_ok_status(e, name)
File "C:\Users\JS\.conda\envs\tensor2\lib\site-packages\tensorflow\python\framework\ops.py", line 7186, in raise_from_not_ok_status
raise core._status_to_exception(e) from None # pylint: disable=protected-access
tensorflow.python.framework.errors_impl.OutOfRangeError: End of sequence [Op:IteratorGetNext]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\JS\Desktop\Tensorflow\models\research\object_detection\model_main_tf2.py", line 114, in <module>
tf.compat.v1.app.run()
File "C:\Users\JS\.conda\envs\tensor2\lib\site-packages\tensorflow\python\platform\app.py", line 36, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "C:\Users\JS\.conda\envs\tensor2\lib\site-packages\absl\app.py", line 312, in run
_run_main(main, args)
File "C:\Users\JS\.conda\envs\tensor2\lib\site-packages\absl\app.py", line 258, in _run_main
sys.exit(main(argv))
File "C:\Users\JS\Desktop\Tensorflow\models\research\object_detection\model_main_tf2.py", line 105, in main
model_lib_v2.train_loop(
File "C:\Users\JS\.conda\envs\tensor2\lib\site-packages\object_detection\model_lib_v2.py", line 605, in train_loop
load_fine_tune_checkpoint(
File "C:\Users\JS\.conda\envs\tensor2\lib\site-packages\object_detection\model_lib_v2.py", line 401, in load_fine_tune_checkpoint
_ensure_model_is_built(model, input_dataset, unpad_groundtruth_tensors)
File "C:\Users\JS\.conda\envs\tensor2\lib\site-packages\object_detection\model_lib_v2.py", line 161, in _ensure_model_is_built
features, labels = iter(input_dataset).next()
File "C:\Users\JS\.conda\envs\tensor2\lib\site-packages\tensorflow\python\distribute\input_lib.py", line 549, in next
return self.__next__()
File "C:\Users\JS\.conda\envs\tensor2\lib\site-packages\tensorflow\python\distribute\input_lib.py", line 555, in __next__
raise StopIteration
StopIteration
Worker 0 eventually fails after detecting that worker 1 has gone down.
This error happens regardless of the physical machines on which the two workers run. In other words, I see it whether I'm running both workers on a single machine (using localhost) or on different machines on the same network.
Based on the trace in the error messages, the error appears to be occurring whenever the training loop attempts to iterate over the training data generated by strategy.experimental_distribute_datasets_from_function. Note that if I change the strategy to MirroredStrategy it runs fine on a single machine (no other changes made). I'm not sure if I'm doing something wrong or if there is a bug in the object detection API.
My setup on both machines is identical (I basically followed the setup instructions on the object detection web-site):
Windows 10
Tensorflow 2.8.0
Cuda Toolkit 11.2
cudnn 8.1
Has anyone ever seen this error before? If so, is there a way around it?
OK, I think I understand the issue. In the object detection library there is a file called dataset_builder.py that builds the training dataset from the TFRecord stored in the file specified in the pipeline.config file (in the input_path item of the tf_record_input_reader). The function that actually reads the TFRecord file is _read_dataset_internal. This function treats the input_path of the pipeline config as a LIST OF FILES and then applies a sharding function (passed as an argument) to divide the files between the replicas doing the training (one replica per worker). Since my input_path only specified a single TFRecord file, it was assigned to the first replica and the other replicas were given empty filenames! Thus only the first replica actually had an input dataset to work with, hence the crash.
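To illustrate that file-level sharding behavior, here is a small, hypothetical sketch (plain tf.data, not the object detection code itself) showing why a one-file list leaves the second worker with nothing to iterate:

import tensorflow as tf

files = tf.data.Dataset.from_tensor_slices(["train.record"])  # only one input file
num_workers = 2
for worker_index in range(num_workers):
    shard = files.shard(num_shards=num_workers, index=worker_index)
    print(worker_index, list(shard.as_numpy_iterator()))
# worker 0 -> [b'train.record'], worker 1 -> []  (empty, so its iterator ends immediately)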
The solution was to split the training data across two files (two TFRecords) and then set the input_path in pipeline.config to be a list of paths rather than a single path. Once I did this it appears as though the model trained successfully (at least it didn't crash).
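A minimal sketch of one way to do the split (the file names are just examples, not the ones from my config):

import tensorflow as tf

src = "train.record"  # the original single TFRecord (example name)
shards = ["train-00000-of-00002.record", "train-00001-of-00002.record"]
writers = [tf.io.TFRecordWriter(p) for p in shards]
for i, rec in enumerate(tf.data.TFRecordDataset(src)):
    writers[i % len(writers)].write(rec.numpy())  # round-robin the serialized examples
for w in writers:
    w.close()

With both shard files listed under input_path in pipeline.config, each worker gets at least one file to read.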
I'm not sure if this is a bug in the object detection code or not. I assumed that if I only had one training record (visible to both workers) that both workers would use it and just batch the data accordingly. I'm just not sure if the assumption itself is wrong or if the assumption is correct and the code is wrong.
Anyway, I hope this helps anyone who might be wrestling with the same issue.
I am trying to use the pretrained model from the following repository and need help resolving the error.
RuntimeError: unexpected EOF, expected 3302200 more bytes. The file might be corrupted.
I tried to run the pretrained CANNet model from the following repo on Google Colab and followed all the steps (Prerequisites, Cloning, Data Preparation, and Testing):
https://github.com/gjy3035/NWPU-Crowd-Sample-Code.git
The detailed error is given below
Traceback (most recent call last):
File "test.py", line 118, in
main()
File "test.py", line 46, in main
test(lines, model_path)
File "test.py", line 55, in test
net.load_state_dict(torch.load(model_path))
File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 593, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 779, in _legacy_load
deserialized_objects[key]._set_from_file(f, offset, f_should_read_directly)
RuntimeError: unexpected EOF, expected 3302200 more bytes. The file might be corrupted.
Check out this GitHub link: https://github.com/huggingface/transformers/issues/1491
It proposes using the force_download arg, which is equivalent to force_reload assuming you're using torch.hub.load to load the pretrained model. The other proposed option, applicable to Windows users, is to delete the downloaded model and download it again.
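A quick sketch of that route, assuming the weights come through torch.hub (the repo and model names below are placeholders, not the ones from the question):

import torch

# force_reload=True discards the cached download and fetches the weights again
model = torch.hub.load('pytorch/vision:v0.10.0', 'resnet18',
                       pretrained=True, force_reload=True)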
I have the same issue, but setting force_reload=True hasn't cleared it for me. I'm thinking I have space problems, but I think it's worth a shot on your end.
I also faced the same problem while evaluating my trained model on Google Colab. The model was taking a long time to fully upload to the machine, and I was testing with the incompletely uploaded model. Once I ensured the model had been fully uploaded and then ran the test, it worked.
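A minimal sanity check in that spirit (the checkpoint path is a placeholder): confirm the file is fully there before handing it to torch.load.

import os
import torch

model_path = "exp/model_best.pth"  # placeholder checkpoint name
print(os.path.getsize(model_path), "bytes on disk")  # compare against the size at the source
state_dict = torch.load(model_path, map_location="cpu")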
I'm trying to load an already trained word2vec model downloaded from here by using the following code, as suggested by the aforementioned website:
from gensim.models import Word2Vec
model=Word2Vec.load('wiki_iter=5_algorithm=skipgram_window=10_size=300_neg-samples=10.m')
When I try to execute that code, I get the following error:
UserWarning: detected Windows; aliasing chunkize to chunkize_serial
warnings.warn("detected Windows; aliasing chunkize to chunkize_serial")
Traceback (most recent call last):
File "d:\DavideV\documents\visual studio 2017\Projects\tesi\tesi\tesi.py", line 112, in <module>
model=Word2Vec.load('wiki_iter=5_algorithm=skipgram_window=10_size=300_neg-samples=10.m')
File "C:\Users\admin\Anaconda3\lib\site-packages\gensim\models\word2vec.py", line 979, in load
return load_old_word2vec(*args, **kwargs)
File "C:\Users\admin\Anaconda3\lib\site-packages\gensim\models\deprecated\word2vec.py", line 155, in load_old_word2vec
'size': old_model.vector_size,
AttributeError: 'Word2Vec' object has no attribute 'vector_size'
I suppose this is because the model was probably trained with a previous version of gensim, but I would prefer to avoid retraining it.
How can I solve this problem? Thanks.
I have trained a Word2Vec model with PySpark and saved it. When I load the model, the .findSynonyms method does not work.
model = word2vec.fit(text)
model.save(sc, 'w2v_model')
new_model = Word2VecModel.load(sc, 'w2v_model')
new_model.findSynonyms('word', 4)
Getting the following error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/spark/python/pyspark/mllib/feature.py", line 487, in findSynonyms
words, similarity = self.call("findSynonyms", word, num)
ValueError: too many values to unpack
I found the following, but I'm not sure how the issue was fixed: https://issues.apache.org/jira/browse/SPARK-12016
Please let me know if there are any workarounds!
Many thanks.
Looks like it's fixed in 1.6.1 but not in 1.5.2.
The error is not about findSynonyms but about Word2VecModel.load.
I checked that it works on 1.6.1; there is no error while loading the model and calling the findSynonyms method.
I guess v1.5.2 is not fixed yet.
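A small sketch, assuming a cluster on Spark 1.6.1 or later and the same sc and 'w2v_model' path as in the question:

from pyspark.mllib.feature import Word2VecModel

print("Spark version:", sc.version)  # expect 1.6.1+, where SPARK-12016 is reported fixed
new_model = Word2VecModel.load(sc, 'w2v_model')
print(new_model.findSynonyms('word', 4))  # list of (word, similarity) pairs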