PyTorch - Torchvision - BrokenPipeError: [Errno 32] Broken pipe - python-3.x

I'm trying to work through the tutorial named "Training a Classifier" with PyTorch.
When trying to debug this part of the code:
import matplotlib.pyplot as plt
import numpy as np

# functions to show an image
def imshow(img):
    img = img / 2 + 0.5     # unnormalize
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))

# get some random training images
dataiter = iter(trainloader)
images, labels = dataiter.next()

# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))
I get this error message:
Files already downloaded and verified
Files already downloaded and verified
Files already downloaded and verified
Files already downloaded and verified
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "D:\Anaconda\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "D:\Anaconda\lib\multiprocessing\spawn.py", line 114, in _main
prepare(preparation_data)
File "D:\Anaconda\lib\multiprocessing\spawn.py", line 225, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "D:\Anaconda\lib\multiprocessing\spawn.py", line 277, in
_fixup_main_from_path
run_name="__mp_main__")
File "D:\Anaconda\lib\runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "D:\Anaconda\lib\runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "D:\Anaconda\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "d:\Yggdrasil\Programmation\PyTorch\TutorialCIFAR10.py", line 36, in <module>
dataiter = iter(trainloader)
File "D:\Anaconda\lib\site-packages\torch\utils\data\dataloader.py", line 451, in __iter__
return _DataLoaderIter(self)
File "D:\Anaconda\lib\site-packages\torch\utils\data\dataloader.py", line 239, in __init__
w.start()
File "D:\Anaconda\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "D:\Anaconda\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "D:\Anaconda\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "D:\Anaconda\lib\multiprocessing\popen_spawn_win32.py", line 33, in __init__
prep_data = spawn.get_preparation_data(process_obj._name)
File "D:\Anaconda\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
_check_not_importing_main()
File "D:\Anaconda\lib\multiprocessing\spawn.py", line 136, in
_check_not_importing_main
is not going to be frozen to produce an executable.)
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
Traceback (most recent call last):
File "d:\Yggdrasil\Programmation\PyTorch\TutorialCIFAR10.py", line 36, in <module>
dataiter = iter(trainloader)
File "D:\Anaconda\lib\site-packages\torch\utils\data\dataloader.py", line 451, in __iter__
return _DataLoaderIter(self)
File "D:\Anaconda\lib\site-packages\torch\utils\data\dataloader.py", line 239, in __init__
w.start()
File "D:\Anaconda\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "D:\Anaconda\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "D:\Anaconda\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "D:\Anaconda\lib\multiprocessing\popen_spawn_win32.py", line 65, in
__init__
reduction.dump(process_obj, to_child)
File "D:\Anaconda\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
BrokenPipeError: [Errno 32] Broken pipe
All the previous lines in the tutorial work perfectly.
Does anyone know how to solve this, please?
Thanks a lot in advance.

This happens because, on Windows, the DataLoader cannot start worker processes from code that runs at module import, so it fails whenever num_workers is greater than 0. You can see where the trainloader comes from:
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=2)
We need to change num_workers to 0, like this:
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=0)
Every DataLoader in the script needs to be changed like this.

Got the same error. The following workaround works for me:
def run():
    # code goes here
    ...

if __name__ == '__main__':
    run()
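Concretely, for the tutorial script that pattern looks like this (a minimal sketch; the CIFAR-10 setup is copied from the tutorial, and only the first few steps are shown):

import torch
import torchvision
import torchvision.transforms as transforms

def run():
    transform = transforms.Compose(
        [transforms.ToTensor(),
         transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
    trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                            download=True, transform=transform)
    trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                              shuffle=True, num_workers=2)
    # The workers are only spawned here, inside the guarded function, so the
    # child processes can re-import this module without re-running this code.
    images, labels = next(iter(trainloader))
    print(images.shape, labels)

if __name__ == '__main__':
    run()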

This doesn't look like a PyTorch-specific problem. Try executing the code in a Jupyter notebook, and try other environment troubleshooting.

You need to add an if-clause protection, as stated in the PyTorch docs on Windows multiprocessing:
https://pytorch.org/docs/stable/notes/windows.html#usage-multiprocessing

Related

PyCharm shows a Korean file name which should be in English

My system language is English (USA). I downloaded MJSynth from the Oxford website. When I start to train a model, it shows [Errno 2] No such file or directory: '/media/cvpr/CM_22/mjsynth/mnt/ramdisk/max/90kDICT32px/신_1174986.jpg', which is extremely strange to me. I am sure there is no problem with the batch size; I already checked that.
The last line shows 신_1174986.jpg, which is true. How can I change the language to English?
Traceback (most recent call last):
File "/media/cvpr/CM_22/doctr/references/recognition/train_pytorch.py", line 485, in <module>
main(args)
File "/media/cvpr/CM_22/doctr/references/recognition/train_pytorch.py", line 396, in main
fit_one_epoch(model, train_loader, batch_transforms, optimizer, scheduler, mb, amp=args.amp)
File "/media/cvpr/CM_22/doctr/references/recognition/train_pytorch.py", line 118, in fit_one_epoch
for images, targets in progress_bar(train_loader, parent=mb):
File "/home/cvpr/anaconda3/envs/pytesseract/lib/python3.9/site-packages/fastprogress/fastprogress.py", line 50, in __iter__
raise e
File "/home/cvpr/anaconda3/envs/pytesseract/lib/python3.9/site-packages/fastprogress/fastprogress.py", line 41, in __iter__
for i,o in enumerate(self.gen):
File "/home/cvpr/anaconda3/envs/pytesseract/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 628, in __next__
data = self._next_data()
File "/home/cvpr/anaconda3/envs/pytesseract/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1333, in _next_data
return self._process_data(data)
File "/home/cvpr/anaconda3/envs/pytesseract/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1359, in _process_data
data.reraise()
File "/home/cvpr/anaconda3/envs/pytesseract/lib/python3.9/site-packages/torch/_utils.py", line 543, in reraise
raise exception
FileNotFoundError: Caught FileNotFoundError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/cvpr/anaconda3/envs/pytesseract/lib/python3.9/site-packages/torch/utils/data/_utils/worker.py", line 302, in _worker_loop
data = fetcher.fetch(index)
File "/home/cvpr/anaconda3/envs/pytesseract/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 58, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/cvpr/anaconda3/envs/pytesseract/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 58, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/media/cvpr/CM_22/doctr/doctr/datasets/datasets/base.py", line 48, in __getitem__
img, target = self._read_sample(index)
File "/media/cvpr/CM_22/doctr/doctr/datasets/datasets/pytorch.py", line 37, in _read_sample
else read_img_as_tensor(os.path.join(self.root, img_name), dtype=torch.float32)
File "/media/cvpr/CM_22/doctr/doctr/io/image/pytorch.py", line 52, in read_img_as_tensor
pil_img = Image.open(img_path, mode="r").convert("RGB")
File "/home/cvpr/anaconda3/envs/pytesseract/lib/python3.9/site-packages/PIL/Image.py", line 2912, in open
fp = builtins.open(filename, "rb")
FileNotFoundError: [Errno 2] No such file or directory: '/media/cvpr/CM_22/mjsynth/mnt/ramdisk/max/90kDICT32px/신_1174986.jpg'
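Since the exception is a plain FileNotFoundError rather than an encoding problem, one way to narrow it down (a sketch; img_names is a hypothetical placeholder for however your label file lists the image paths) is to check which referenced files actually exist on disk:

import os

root = '/media/cvpr/CM_22/mjsynth/mnt/ramdisk/max/90kDICT32px'
# Hypothetical placeholder: load the relative paths from your label file here.
img_names = ['신_1174986.jpg']

missing = [n for n in img_names if not os.path.exists(os.path.join(root, n))]
print(f'{len(missing)} of {len(img_names)} referenced images are missing')
for n in missing[:10]:
    print(' ', n)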

attribute lookup s3.ServiceResource on boto3.resources.factory failed

I want to try my model. The data is stored in AWS S3. I use boto3 simply, like:
self.s3_img = S3Images(boto3.resource('s3'))
self.s3_obj = S3GetObjects()
I met this error when I fed the data and model into the PyTorch training pipeline.
The code looks like:
import pytorch_lightning as pl
from pytorch_lightning import Trainer

trainer = Trainer(
    checkpoint_callback=checkpoint_callback,
    callbacks=get_callbacks(chkpt_path),
    fast_dev_run=False,
    max_epochs=100,
    resume_from_checkpoint=checkpoint_path
)
trainer.fit(model)
The error is:
File "main.py", line 191, in <module>
train()
File "/Users/admin/opt/anaconda3/envs/kk/lib/python3.8/site-packages/hydra/main.py", line 20, in decorated_main
run_hydra(
File "/Users/admin/opt/anaconda3/envs/kk/lib/python3.8/site-packages/hydra/_internal/utils.py", line 171, in run_hydra
hydra.run(
File "/Users/admin/opt/anaconda3/envs/kk/lib/python3.8/site-packages/hydra/_internal/hydra.py", line 82, in run
return run_job(
File "/Users/admin/opt/anaconda3/envs/kk/lib/python3.8/site-packages/hydra/plugins/common/utils.py", line 109, in run_job
ret.return_value = task_function(task_cfg)
File "main.py", line 176, in train
trainer.fit(model)
File "/Users/admin/opt/anaconda3/envs/kk/lib/python3.8/site-packages/pytorch_lightning/trainer/states.py", line 48, in wrapped_fn
result = fn(self, *args, **kwargs)
File "/Users/admin/opt/anaconda3/envs/kk/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1084, in fit
results = self.accelerator_backend.train(model)
File "/Users/admin/opt/anaconda3/envs/kk/lib/python3.8/site-packages/pytorch_lightning/accelerators/cpu_backend.py", line 39, in train
results = self.trainer.run_pretrain_routine(model)
File "/Users/admin/opt/anaconda3/envs/kk/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1224, in run_pretrain_routine
self._run_sanity_check(ref_model, model)
File "/Users/admin/opt/anaconda3/envs/kk/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1257, in _run_sanity_check
eval_results = self._evaluate(model, self.val_dataloaders, max_batches, False)
File "/Users/admin/opt/anaconda3/envs/kk/lib/python3.8/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 305, in _evaluate
for batch_idx, batch in enumerate(dataloader):
File "/Users/admin/opt/anaconda3/envs/kk/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 352, in __iter__
return self._get_iterator()
File "/Users/admin/opt/anaconda3/envs/kk/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 294, in _get_iterator
return _MultiProcessingDataLoaderIter(self)
File "/Users/admin/opt/anaconda3/envs/kk/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 801, in __init__
w.start()
File "/Users/admin/opt/anaconda3/envs/kk/lib/python3.8/multiprocessing/process.py", line 121, in start
self._popen = self._Popen(self)
File "/Users/admin/opt/anaconda3/envs/kk/lib/python3.8/multiprocessing/context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "/Users/admin/opt/anaconda3/envs/kk/lib/python3.8/multiprocessing/context.py", line 284, in _Popen
return Popen(process_obj)
File "/Users/admin/opt/anaconda3/envs/kk/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/Users/admin/opt/anaconda3/envs/kk/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/Users/admin/opt/anaconda3/envs/kk/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 47, in _launch
reduction.dump(process_obj, fp)
File "/Users/admin/opt/anaconda3/envs/kk/lib/python3.8/multiprocessing/reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <class 'boto3.resources.factory.s3.ServiceResource'>: attribute lookup s3.ServiceResource on boto3.resources.factory failed
Can anyone tell me what this error means and how to solve it? Thanks for any suggestions and help!
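No accepted answer is recorded here, but the last traceback line says what went wrong: when the DataLoader starts a worker with the spawn method, everything the worker needs is pickled, and a boto3 resource cannot be pickled. A common workaround (a sketch; S3ImageDataset, keys, and bucket are hypothetical stand-ins for the real classes) is to create the resource lazily inside the worker instead of storing it on the object up front:

import boto3
from torch.utils.data import Dataset

class S3ImageDataset(Dataset):
    """Hypothetical dataset reading raw objects from S3."""

    def __init__(self, keys, bucket):
        self.keys = keys
        self.bucket = bucket
        self._s3 = None  # nothing unpicklable is stored at construction time

    @property
    def s3(self):
        # Created on first access, which happens inside the worker process,
        # so the boto3 resource itself never has to be pickled.
        if self._s3 is None:
            self._s3 = boto3.resource('s3')
        return self._s3

    def __len__(self):
        return len(self.keys)

    def __getitem__(self, idx):
        obj = self.s3.Object(self.bucket, self.keys[idx])
        return obj.get()['Body'].read()

Alternatively, setting num_workers=0 on the DataLoader avoids the pickling step entirely, at the cost of loading data in the main process.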

How to create a custom keras generator to fit multiple outputs and use workers

I have one input and multiple outputs, like a multi-label classification, but I chose to try another approach to see if I get any improvement.
I have these generators; I'm using flow_from_dataframe because I have a huge dataset (200k images):
self.train_generator = datagen.flow_from_dataframe(
    dataframe=train,
    directory='dataset',
    x_col='Filename',
    y_col=columns,
    batch_size=BATCH_SIZE,
    color_mode='rgb',
    class_mode='raw',
    shuffle=True,
    target_size=(HEIGHT, WIDTH))

self.test_generator = datatest.flow_from_dataframe(
    dataframe=test,
    directory='dataset',
    x_col='Filename',
    y_col=columns,
    batch_size=BATCH_SIZE,
    color_mode='rgb',
    class_mode='raw',
    target_size=(HEIGHT, WIDTH))
I'm passing them to fit using this function:
def generator(self, generator):
    while True:
        X, y = generator.next()
        y = [y[:, x] for x in range(len(columns))]
        yield X, [y]
If I fit like this:
self.h = self.model.fit_generator(self.generator(self.train_generator),
                                  steps_per_epoch=self.STEP_SIZE_TRAIN,
                                  validation_data=self.generator(self.test_generator),
                                  validation_steps=self.STEP_SIZE_TEST,
                                  epochs=50,
                                  verbose=1,
                                  workers=2,
                                  )
I get:
RuntimeError: Your generator is NOT thread-safe. Keras requires a thread-safe generator when `use_multiprocessing=False, workers > 1`.
Using use_multiprocessing=True:
self.h = self.model.fit_generator(self.generator(self.train_generator),
                                  steps_per_epoch=self.STEP_SIZE_TRAIN,
                                  validation_data=self.generator(self.test_generator),
                                  validation_steps=self.STEP_SIZE_TEST,
                                  epochs=50,
                                  verbose=1,
                                  workers=2,
                                  use_multiprocessing=True,
                                  )
Results in:
File "C:\ProgramData\Anaconda3\lib\threading.py", line 932, in _bootstrap_inner
self.run()
File "C:\ProgramData\Anaconda3\lib\threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\utils\data_utils.py", line 877, in _run
with closing(self.executor_fn(_SHARED_SEQUENCES)) as executor:
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\utils\data_utils.py", line 867, in pool_fn
pool = get_pool_class(True)(
File "C:\ProgramData\Anaconda3\lib\multiprocessing\context.py", line 119, in Pool
return Pool(processes, initializer, initargs, maxtasksperchild,
File "C:\ProgramData\Anaconda3\lib\multiprocessing\pool.py", line 212, in __init__
self._repopulate_pool()
File "C:\ProgramData\Anaconda3\lib\multiprocessing\pool.py", line 303, in _repopulate_pool
return self._repopulate_pool_static(self._ctx, self.Process,
File "C:\ProgramData\Anaconda3\lib\multiprocessing\pool.py", line 326, in _repopulate_pool_static
w.start()
File "C:\ProgramData\Anaconda3\lib\multiprocessing\process.py", line 121, in start
self._popen = self._Popen(self)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\context.py", line 327, in _Popen
return Popen(process_obj)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 93, in __init__
reduction.dump(process_obj, to_child)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
TypeError: cannot pickle 'generator' object
File "C:\ProgramData\Anaconda3\lib\threading.py", line 932, in _bootstrap_inner
self.run()
File "C:\ProgramData\Anaconda3\lib\threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\utils\data_utils.py", line 877, in _run
with closing(self.executor_fn(_SHARED_SEQUENCES)) as executor:
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\utils\data_utils.py", line 867, in pool_fn
pool = get_pool_class(True)(
File "C:\ProgramData\Anaconda3\lib\multiprocessing\context.py", line 119, in Pool
return Pool(processes, initializer, initargs, maxtasksperchild,
File "C:\ProgramData\Anaconda3\lib\multiprocessing\pool.py", line 212, in __init__
self._repopulate_pool()
File "C:\ProgramData\Anaconda3\lib\multiprocessing\pool.py", line 303, in _repopulate_pool
return self._repopulate_pool_static(self._ctx, self.Process,
File "C:\ProgramData\Anaconda3\lib\multiprocessing\pool.py", line 326, in _repopulate_pool_static
w.start()
File "C:\ProgramData\Anaconda3\lib\multiprocessing\process.py", line 121, in start
self._popen = self._Popen(self)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\context.py", line 327, in _Popen
return Popen(process_obj)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 93, in __init__
reduction.dump(process_obj, to_child)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
TypeError: cannot pickle 'generator' object
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\ProgramData\Anaconda3\lib\multiprocessing\spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\spawn.py", line 126, in _main
self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
Now I'm stuck. How do I solve this?
According to the documentation (https://keras.io/api/preprocessing/image/), the argument class_mode can be set to "multi_output", so you don't need to create a custom generator (see the sketch after the list):
class_mode: one of "binary", "categorical", "input", "multi_output", "raw", "sparse" or None. Default: "categorical". Mode for yielding the targets:
- "binary": 1D numpy array of binary labels,
- "categorical": 2D numpy array of one-hot encoded labels. Supports multi-label output.
- "input": images identical to input images (mainly used to work with autoencoders),
- "multi_output": list with the values of the different columns,
- "raw": numpy array of values in y_col column(s),
- "sparse": 1D numpy array of integer labels,
- None, no targets are returned (the generator will only yield batches of image data, which is useful to use in model.predict()).
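For example, a minimal sketch of the question's training generator switched over (parameter values copied from the question; adjust as needed):

self.train_generator = datagen.flow_from_dataframe(
    dataframe=train,
    directory='dataset',
    x_col='Filename',
    y_col=columns,                  # one DataFrame column per output head
    batch_size=BATCH_SIZE,
    color_mode='rgb',
    class_mode='multi_output',      # yields a list of arrays, one per column
    shuffle=True,
    target_size=(HEIGHT, WIDTH))

The generator can then be passed to fit_generator directly, without the custom wrapper that was tripping up the worker processes.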
I am now able to use workers > 1, but I am not seeing performance improvements.

PyTorch Getting Started example not working

I followed this tutorial in the Getting Started section on the PyTorch website: "Deep Learning with PyTorch: A 60 Minute Blitz". I downloaded the code for "Training a Classifier" from the bottom of the page and ran it, and it's not working for me. I'm using the CPU version of PyTorch, if that makes a difference. I'm new to Python and am basically learning it for PyTorch. Here's the error message (Ctrl+K isn't formatting it for me; I think the editing interface is different for the first few posts and Stack Overflow needs to fix it, or it could just be my browser):
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\ProgramData\Anaconda3\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\spawn.py", line 114, in _main
prepare(preparation_data)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\spawn.py", line 225, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "C:\ProgramData\Anaconda3\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
run_name="__mp_main__")
File "C:\ProgramData\Anaconda3\lib\runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "C:\ProgramData\Anaconda3\lib\runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "C:\ProgramData\Anaconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\Anonymous\PycharmProjects\pytorchHelloWorld\train_network.py", line 100, in <module>
dataiter = iter(trainloader)
File "C:\ProgramData\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 819, in __iter__
return _DataLoaderIter(self)
File "C:\ProgramData\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 560, in __init__
w.start()
File "C:\ProgramData\Anaconda3\lib\multiprocessing\process.py", line 112, in start
self._popen = self._Popen(self)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 33, in __init__
prep_data = spawn.get_preparation_data(process_obj._name)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
_check_not_importing_main()
File "C:\ProgramData\Anaconda3\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main
is not going to be frozen to produce an executable.''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
Traceback (most recent call last):
File "C:/Users/Anonymous/PycharmProjects/pytorchHelloWorld/train_network.py", line 100, in <module>
dataiter = iter(trainloader)
File "C:\ProgramData\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 819, in __iter__
return _DataLoaderIter(self)
File "C:\ProgramData\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 560, in __init__
w.start()
File "C:\ProgramData\Anaconda3\lib\multiprocessing\process.py", line 112, in start
self._popen = self._Popen(self)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
reduction.dump(process_obj, to_child)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
BrokenPipeError: [Errno 32] Broken pipe
The error is likely due to DataLoader multiprocessing on Windows, since the tutorial uses num_workers=2. The Python 3 documentation shares a guideline on this:
Make sure that the main module can be safely imported by a new Python interpreter without causing unintended side effects (such as starting a new process).
You can either set num_workers=0, or you need to wrap your code within if __name__ == '__main__':
# Safe DataLoader multiprocessing with Windows
if __name__ == '__main__':
    # Code to load the data with num_workers > 1
Check this reply on the PyTorch forum for more details, and this issue on GitHub.

HDF5 reading and fit_generator multiprocessing error

I'm trying to use multiprocessing with fit_generator.
These are the problems that I face:
trainable_model.fit_generator(
    load_random_cached_bottlenecks(BATCH_SIZE, label_map, training_addr_label_map,
                                   train_npy_dir, 'h5py', h5py_file_train),
    epochs=EPOCHS,
    steps_per_epoch=iterations_per_epoch_t,
    validation_data=load_random_cached_bottlenecks(BATCH_SIZE, label_map,
                                                   validation_addr_label_map,
                                                   val_npy_dir, 'h5py', h5py_file_val),
    validation_steps=iterations_per_epoch_v,
    workers=1,
    callbacks=callback_list,
    use_multiprocessing=True,
    max_queue_size=32)
The main arguments causing the problem are workers and use_multiprocessing.
With workers=1, use_multiprocessing=True/False runs with no problem.
With workers=5 and use_multiprocessing=True, it throws errors. The weird thing is that it runs, but at some random iteration I get errors like
KeyError: 'Unable to open object (bad local heap signature)'
or
KeyError: 'Unable to open object (wrong B-tree signature)'
I'm using h5py to read the files. I have written a custom generator for this purpose.
def load_random_cached_bottlenecks(batch_size, label_map,
                                   addr_label_map, dirs, comp_type='h5py', hdf5_file=None):
    '''
    Parameters
    ----------
    batch_size: Number of bottlenecks to be loaded along with the labels
    label_map: The dictionary that maps the class_names and the index
    addr_label_map: The dictionary that maps addrs of bottlenecks and the labels
    hdf5_file: This is the hdf5 file object with reading enabled.

    Returns
    -------
    batch: (bottlenecks_train, bottlenecks_labels) a batch of them which is equal to batch_size
    '''
    while True:
        chosen_h5py = np.random.choice(dirs, size=batch_size)
        # chosen_h5py = [dirs[i] for i in batch_index]
        labels_for_chosen_h5py = [label_map[addr_label_map[i]] for i in chosen_h5py]
        h5py_data = np.array([hdf5_file[i] for i in chosen_h5py])
        h5py_onehot = to_categorical(labels_for_chosen_h5py, num_classes=LABEL_LENGTH)
        # print(h5py_data.shape)
        yield (h5py_data, h5py_onehot)
I have referred here, but couldn't solve my problem.
Traceback (most recent call last):
File "/opt/anaconda3/lib/python3.6/site-packages/keras/utils/data_utils.py", line 677, in _data_generator_task
generator_output = next(self._generator)
File "general_model.py", line 263, in load_random_cached_bottlenecks
h5py_data = np.array([hdf5_file[i] for i in chosen_h5py])
File "/opt/anaconda3/lib/python3.6/site-packages/keras/utils/data_utils.py", line 677, in _data_generator_task
generator_output = next(self._generator)
File "general_model.py", line 263, in load_random_cached_bottlenecks
h5py_data = np.array([hdf5_file[i] for i in chosen_h5py])
File "general_model.py", line 263, in <listcomp>
h5py_data = np.array([hdf5_file[i] for i in chosen_h5py])
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "general_model.py", line 263, in <listcomp>
h5py_data = np.array([hdf5_file[i] for i in chosen_h5py])
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "/opt/anaconda3/lib/python3.6/site-packages/h5py/_hl/group.py", line 177, in __getitem__
oid = h5o.open(self.id, self._e(name), lapl=self._lapl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5o.pyx", line 190, in h5py.h5o.open
File "/opt/anaconda3/lib/python3.6/site-packages/h5py/_hl/group.py", line 177, in __getitem__
oid = h5o.open(self.id, self._e(name), lapl=self._lapl)
KeyError: 'Unable to open object (wrong B-tree signature)'
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5o.pyx", line 190, in h5py.h5o.open
KeyError: 'Unable to open object (bad symbol table node signature)'
Traceback (most recent call last):
File "general_model.py", line 437, in <module>
train_with_bottlenecks(args, label_map, trainable_model, non_trainable_model, iterations_per_epoch_t, iterations_per_epoch_v)
File "general_model.py", line 326, in train_with_bottlenecks
validation_steps=iterations_per_epoch_v, workers = 4, callbacks = callback_list, use_multiprocessing = True, max_queue_size = 32)
File "/opt/anaconda3/lib/python3.6/site-packages/keras/legacy/interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "/opt/anaconda3/lib/python3.6/site-packages/keras/engine/training.py", line 2194, in fit_generator
generator_output = next(output_generator)
File "/opt/anaconda3/lib/python3.6/site-packages/keras/utils/data_utils.py", line 793, in get
six.reraise(value.__class__, value, value.__traceback__)
File "/opt/anaconda3/lib/python3.6/site-packages/six.py", line 693, in reraise
raise value
KeyError: 'Unable to open object (wrong B-tree signature)'
Any help is appreciated! Thanks in advance!
This is not a solution per se, but this solved the problem for me.
I got a similar error, OSError: Can't read data (wrong B-tree signature), when trying to use fit_generator while it reads data from an hdf5_file, also inside an anaconda3 virtual env.
In my case I created a new virtual environment and re-installed the needed dependencies at the specific versions with which it was supposed to work; with this my code ran smoothly.
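Another workaround often suggested for these HDF5 signature errors (a sketch, not part of the original answer; it assumes each worker may safely open its own read-only handle, and it takes a file path instead of the shared file object) is to open the file lazily inside the generator, so that a single handle is never shared across worker processes:

import h5py
import numpy as np
from keras.utils import to_categorical

def load_random_cached_bottlenecks(batch_size, label_map, addr_label_map,
                                   dirs, hdf5_path, label_length):
    hdf5_file = None
    while True:
        if hdf5_file is None:
            # Opened on first use, i.e. inside the worker process, so each
            # worker gets its own handle; sharing one handle across forked
            # processes is a common cause of corrupted B-tree/heap reads.
            hdf5_file = h5py.File(hdf5_path, 'r')
        chosen_h5py = np.random.choice(dirs, size=batch_size)
        labels = [label_map[addr_label_map[i]] for i in chosen_h5py]
        h5py_data = np.array([hdf5_file[i][()] for i in chosen_h5py])
        yield h5py_data, to_categorical(labels, num_classes=label_length)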
