Creating and accessing datasets in an HDF5 file - python-3.x

I am trying to create an HDF5 file with two datasets, 'data' and 'label'. When I try to read the file back, however, I get the following error:
Traceback (most recent call last):
File "C:\Program Files\JetBrains\PyCharm Community Edition 2018.1.4\helpers\pydev\pydevd.py", line 1664, in <module>
main()
File "C:\Program Files\JetBrains\PyCharm Community Edition 2018.1.4\helpers\pydev\pydevd.py", line 1658, in main
globals = debugger.run(setup['file'], None, None, is_module)
File "C:\Program Files\JetBrains\PyCharm Community Edition 2018.1.4\helpers\pydev\pydevd.py", line 1068, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "C:\Program Files\JetBrains\PyCharm Community Edition 2018.1.4\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "C:/pycharm/Input_Pipeline.py", line 140, in <module>
data_h5 = f['data'][:]
File "h5py\_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py\_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "C:\Users\u20x47\PycharmProjects\PCL\venv\lib\site-packages\h5py\_hl\group.py", line 177, in __getitem__
oid = h5o.open(self.id, self._e(name), lapl=self._lapl)
File "h5py\_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py\_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py\h5o.pyx", line 190, in h5py.h5o.open
ValueError: Not a location (invalid object ID)
Code used to create the datasets:
h5_file.create_dataset('data', data=data_x, compression='gzip', compression_opts=4, dtype='float32')
h5_file.create_dataset('label', data=label, compression='gzip', compression_opts=1, dtype='uint8')
data_x is an array of arrays; each element of data_x is a 3D array of 1024 elements.
label is also an array of arrays; each element is a 1D array holding a single value.
Code used to read the file back:
f = h5_file
data_h5 = f['data'][:]
label_h5 = f['label'][:]
print (data_h5, label_h5)
How can I fix this? Is this a syntax error or a logical one?

I was unable to reproduce the error.
Maybe you forgot to close the file, or you changed the contents of the HDF5 file during execution.
You can also use print(h5_file.items()) to check the contents of your HDF5 file.
Tested code:
import h5py
import numpy as np
h5_file = h5py.File('test.h5', 'w')
# bogus data with the correct size
data_x = np.random.rand(16,8,8)
label = np.random.randint(100, size=(1,1),dtype='uint8')
#
h5_file.create_dataset('data', data=data_x, compression='gzip', compression_opts=4, dtype='float32')
h5_file.create_dataset('label', data=label, compression='gzip', compression_opts=1, dtype='uint8')
h5_file.close()
h5_file = h5py.File('test.h5', 'r')
f = h5_file
print(f.items())
data_h5 = f['data'][...]
label_h5 = f['label'][...]
print (data_h5, label_h5)
h5_file.close()
Produces
[(u'data', <HDF5 dataset "data": shape (16, 8, 8), type "<f4">), (u'label', <HDF5 dataset "label": shape (1, 1), type "|u1">)]
(array([[[4.36837107e-01, 8.05664659e-01, 3.34415197e-01, ...,
8.89135897e-01, 1.84097692e-01, 3.60782951e-01],
[8.86442482e-01, 6.07181549e-01, 2.42844030e-01, ...,
[4.24369454e-01, 6.04596496e-01, 5.56676507e-01, ...,
7.22884715e-01, 2.45932683e-01, 9.18777227e-01]]], dtype=float32), array([[25]], dtype=uint8))
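If the file might have been left open or only partially written, the context-manager form makes the close hard to forget. A minimal sketch with the same bogus data (this is an addition, not part of the original answer):
import h5py
import numpy as np
data_x = np.random.rand(16, 8, 8)
label = np.random.randint(100, size=(1, 1), dtype='uint8')
# the with-block flushes and closes the file even if an error occurs
with h5py.File('test.h5', 'w') as f:
    f.create_dataset('data', data=data_x, compression='gzip', compression_opts=4, dtype='float32')
    f.create_dataset('label', data=label, compression='gzip', compression_opts=1, dtype='uint8')
with h5py.File('test.h5', 'r') as f:
    data_h5 = f['data'][...]
    label_h5 = f['label'][...]
print(data_h5.shape, label_h5)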

Related

pytorch: Merge three datasets with predefined and custom datasets

I am training an AI model to recognize handwritten Hangul characters along with English characters and numbers, which means I require three datasets: a custom Korean character dataset plus two others.
I have the three datasets and I am merging them, but when I print the train_set path it shows MJSynth only, which is wrong.
긴장_1227682.jpg is in my custom Korean dataset, not in MJSynth.
Code
custom_train_set = RecognitionDataset(
    parts[0].joinpath("images"),
    parts[0].joinpath("labels.json"),
    img_transforms=Compose(
        [
            T.Resize((args.input_size, 4 * args.input_size), preserve_aspect_ratio=True),
            # Augmentations
            T.RandomApply(T.ColorInversion(), 0.1),
            ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3, hue=0.02),
        ]
    ),
)
if len(parts) > 1:
    for subfolder in parts[1:]:
        custom_train_set.merge_dataset(
            RecognitionDataset(subfolder.joinpath("images"), subfolder.joinpath("labels.json"))
        )
train_set = MJSynth(
    train=True,
    img_folder='/media/cvpr/CM_22/mjsynth/mnt/ramdisk/max/90kDICT32px',
    label_path='/media/cvpr/CM_22/mjsynth/mnt/ramdisk/max/90kDICT32px/imlist.txt',
    img_transforms=T.Resize((args.input_size, 4 * args.input_size), preserve_aspect_ratio=True),
)
_train_set = SynthText(
    train=True,
    recognition_task=True,
    download=True,  # NOTE: download can take really long depending on your bandwidth
    img_transforms=T.Resize((args.input_size, 4 * args.input_size), preserve_aspect_ratio=True),
)
train_set.data.extend([(np_img, target) for np_img, target in _train_set.data])
train_set.data.extend([(np_img, target) for np_img, target in custom_train_set.data])
Traceback
Traceback (most recent call last):
File "/media/cvpr/CM_22/doctr/references/recognition/train_pytorch.py", line 485, in <module>
main(args)
File "/media/cvpr/CM_22/doctr/references/recognition/train_pytorch.py", line 396, in main
fit_one_epoch(model, train_loader, batch_transforms, optimizer, scheduler, mb, amp=args.amp)
File "/media/cvpr/CM_22/doctr/references/recognition/train_pytorch.py", line 118, in fit_one_epoch
for images, targets in progress_bar(train_loader, parent=mb):
File "/home/cvpr/anaconda3/envs/pytesseract/lib/python3.9/site-packages/fastprogress/fastprogress.py", line 50, in __iter__
raise e
File "/home/cvpr/anaconda3/envs/pytesseract/lib/python3.9/site-packages/fastprogress/fastprogress.py", line 41, in __iter__
for i,o in enumerate(self.gen):
File "/home/cvpr/anaconda3/envs/pytesseract/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 628, in __next__
data = self._next_data()
File "/home/cvpr/anaconda3/envs/pytesseract/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1333, in _next_data
return self._process_data(data)
File "/home/cvpr/anaconda3/envs/pytesseract/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1359, in _process_data
data.reraise()
File "/home/cvpr/anaconda3/envs/pytesseract/lib/python3.9/site-packages/torch/_utils.py", line 543, in reraise
raise exception
FileNotFoundError: Caught FileNotFoundError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/cvpr/anaconda3/envs/pytesseract/lib/python3.9/site-packages/torch/utils/data/_utils/worker.py", line 302, in _worker_loop
data = fetcher.fetch(index)
File "/home/cvpr/anaconda3/envs/pytesseract/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 58, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/cvpr/anaconda3/envs/pytesseract/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 58, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/media/cvpr/CM_22/doctr/doctr/datasets/datasets/base.py", line 48, in __getitem__
img, target = self._read_sample(index)
File "/media/cvpr/CM_22/doctr/doctr/datasets/datasets/pytorch.py", line 37, in _read_sample
else read_img_as_tensor(os.path.join(self.root, img_name), dtype=torch.float32)
File "/media/cvpr/CM_22/doctr/doctr/io/image/pytorch.py", line 52, in read_img_as_tensor
pil_img = Image.open(img_path, mode="r").convert("RGB")
File "/home/cvpr/anaconda3/envs/pytesseract/lib/python3.9/site-packages/PIL/Image.py", line 2912, in open
fp = builtins.open(filename, "rb")
FileNotFoundError: [Errno 2] No such file or directory: '/media/cvpr/CM_22/mjsynth/mnt/ramdisk/max/90kDICT32px/긴장_1227682.jpg'
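The traceback points at the cause: _read_sample joins self.root with the stored filename (doctr/datasets/datasets/pytorch.py, line 37), so every sample copied into train_set.data is resolved against the MJSynth image folder. One way around this, sketched here under the assumption that the doctr datasets are ordinary map-style datasets (it is not a fix confirmed in this thread), is to leave each dataset intact and combine them with torch.utils.data.ConcatDataset, so each sample is still read through its own dataset's root:
from torch.utils.data import ConcatDataset, DataLoader
# each dataset keeps its own root, so 긴장_1227682.jpg is read from the
# custom Korean folder and MJSynth images from the MJSynth folder
combined = ConcatDataset([train_set, _train_set, custom_train_set])
train_loader = DataLoader(combined, batch_size=32, shuffle=True)  # a collate_fn may be needed depending on what the datasets return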

Repeated error while running face recognition in VS Code

I am new to deep learning and my first project is facial emotion recognition.
I am trying to use the DeepFace library but seem to be stuck at the moment. Can anyone help?
import cv2
from deepface import DeepFace
import matplotlib.pyplot as plt
img = cv2.imread(r'Images\happy\happy_001.jpg')  # raw string avoids backslash-escape surprises
# plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
# plt.show()
predictions = DeepFace.analyze(img, actions=['age', 'gender', 'race', 'emotion'])
and the error I am getting is:
Traceback (most recent call last):
File "C:\Users\asus\OneDrive - Graphic Era University\Desktop\ML AND AI\FACE RECOG\test.py", line 11, in <module>
predictions=DeepFace.analyze(img, actions = ['age', 'gender', 'race', 'emotion'])
File "C:\Python 3.9\lib\site-packages\deepface\DeepFace.py", line 355, in analyze
models['gender'] = build_model('Gender')
File "C:\Python 3.9\lib\site-packages\deepface\DeepFace.py", line 61, in build_model
model = model()
File "C:\Python 3.9\lib\site-packages\deepface\extendedmodels\Gender.py", line 49, in loadModel
gender_model.load_weights(home+'/.deepface/weights/gender_model_weights.h5')
File "C:\Python 3.9\lib\site-packages\keras\utils\traceback_utils.py", line 67, in error_handler
raise e.with_traceback(filtered_tb) from None
File "C:\Python 3.9\lib\site-packages\h5py\_hl\files.py", line 507, in __init__
fid = make_fid(name, mode, userblock_size, fapl, fcpl, swmr=swmr)
File "C:\Python 3.9\lib\site-packages\h5py\_hl\files.py", line 220, in make_fid
fid = h5f.open(name, flags, fapl=fapl)
File "h5py\_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py\_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py\h5f.pyx", line 106, in h5py.h5f.open
OSError: Unable to open file (truncated file: eof = 232972459, sblock->base_addr = 0, stored_eof = 537149760)
I certainly don't know how to solve this. Can anyone help?
I am using VS Code with Python 3.9.6.
Check this frame: File "h5py\h5f.pyx", line 106, in h5py.h5f.open. I guess your issue arises from there: the truncated file message means the weights file on disk is incomplete. Try to re-install the libs. Also, your problem was discussed and probably solved here: https://github.com/keras-team/keras/issues/6221
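As a supplement not stated in the answer above: DeepFace downloads its model weights to ~/.deepface/weights/ on first use, and an interrupted download leaves a truncated .h5 file behind. Under that assumption, deleting the partial file (the path appears in the traceback) forces a clean re-download on the next run:
import os
# path taken from the traceback above
weights = os.path.expanduser('~/.deepface/weights/gender_model_weights.h5')
if os.path.exists(weights):
    os.remove(weights)  # DeepFace will re-download it on the next analyze() call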

python Tensorflow 2.4.0 'input must be 4-dimensional[1,1,371,300,3]' ERROR

I'm running Nicholas Renotte's TFODCourse.
When I execute the model evaluation code:
python Tensorflow\models\research\object_detection\model_main_tf2.py --model_dir=Tensorflow\workspace\models\my_ssd_mobnet --pipeline_config_path=Tensorflow\workspace\models\my_ssd_mobnet\pipeline.config --checkpoint_dir=Tensorflow\workspace\models\my_ssd_mobnet
this error occurs:
Traceback (most recent call last):
File "Tensorflow\models\research\object_detection\model_main_tf2.py", line 115, in <module>
tf.compat.v1.app.run()
File "C:\Users\All_Nighter\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\platform\app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "C:\Users\All_Nighter\miniconda3\envs\TF\lib\site-packages\absl\app.py", line 303, in run
_run_main(main, args)
File "C:\Users\All_Nighter\miniconda3\envs\TF\lib\site-packages\absl\app.py", line 251, in _run_main
sys.exit(main(argv))
File "Tensorflow\models\research\object_detection\model_main_tf2.py", line 82, in main
model_lib_v2.eval_continuously(
File "C:\Users\All_Nighter\miniconda3\envs\TF\lib\site-packages\object_detection-0.1-py3.8.egg\object_detection\model_lib_v2.py", line 1151, in eval_continuously
eager_eval_loop(
File "C:\Users\All_Nighter\miniconda3\envs\TF\lib\site-packages\object_detection-0.1-py3.8.egg\object_detection\model_lib_v2.py", line 928, in eager_eval_loop
for i, (features, labels) in enumerate(eval_dataset):
File "C:\Users\All_Nighter\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\data\ops\iterator_ops.py", line 761, in __next__
return self._next_internal()
File "C:\Users\All_Nighter\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\data\ops\iterator_ops.py", line 744, in _next_internal
ret = gen_dataset_ops.iterator_get_next(
File "C:\Users\All_Nighter\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\ops\gen_dataset_ops.py", line 2727, in iterator_get_next
_ops.raise_from_not_ok_status(e, name)
File "C:\Users\All_Nighter\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\framework\ops.py", line 6897, in raise_from_not_ok_status
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: input must be 4-dimensional[1,1,371,300,3]
[[{{node ResizeImage/resize/ResizeBilinear}}]] [Op:IteratorGetNext]
I can't understand what input must be 4-dimensional[1,1,371,300,3] means.
I tried labeling again and downgraded TF to 2.4.0, but it still happens.
The ssd_mobilenet model expects as input:
A three-channel image of variable size - the model does NOT support batching. The input tensor is a tf.uint8 tensor with shape [1, height, width, 3] with values in [0, 255].
In this case you are giving it a 5-dimensional input, [1,1,371,300,3].
Reshape your input data to [1,371,300,3].
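For illustration only (the array here is a stand-in with the shape from the error message; where the extra axis comes from depends on your input pipeline), dropping the stray dimension can look like this:
import numpy as np
batch = np.zeros((1, 1, 371, 300, 3), dtype=np.uint8)  # hypothetical 5-D input
fixed = np.squeeze(batch, axis=1)  # equivalently: batch.reshape(1, 371, 300, 3)
print(fixed.shape)  # (1, 371, 300, 3)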

HDF5 reading and fit_generator multiprocessing error

I'm trying to run fit_generator with multiprocessing. These are the problems that I face.
trainable_model.fit_generator(
    load_random_cached_bottlenecks(BATCH_SIZE, label_map, training_addr_label_map, train_npy_dir, 'h5py', h5py_file_train),
    epochs=EPOCHS,
    steps_per_epoch=iterations_per_epoch_t,
    validation_data=load_random_cached_bottlenecks(BATCH_SIZE, label_map, validation_addr_label_map, val_npy_dir, 'h5py', h5py_file_val),
    validation_steps=iterations_per_epoch_v,
    workers=1,
    callbacks=callback_list,
    use_multiprocessing=True,
    max_queue_size=32)
The main arguments causing the problem are workers and use_multiprocessing.
With workers=1, use_multiprocessing=True/False runs with no problem.
With workers=5 and use_multiprocessing=True it throws errors. The weird thing is that it runs, but at some random iteration I get errors like
KeyError: 'Unable to open object (bad local heap signature)'
or
KeyError: 'Unable to open object (wrong B-tree signature)'
I'm using h5py to read the files. I have written a custom generator for this purpose.
def load_random_cached_bottlenecks(batch_size, label_map,
        addr_label_map, dirs, comp_type='h5py', hdf5_file=None):
    '''
    Parameters
    ----------
    batch_size: Number of bottlenecks to be loaded along with the labels
    label_map: The dictionary that maps the class_names and the index
    addr_label_map: The dictionary that maps addrs of bottlenecks and the labels
    hdf5_file: This is the hdf5 file object with reading enabled.

    Returns
    -------
    batch: (bottlenecks_train, bottlenecks_labels) a batch of them which is equal to batch_size
    '''
    while True:
        chosen_h5py = np.random.choice(dirs, size=batch_size)
        # chosen_h5py = [dirs[i] for i in batch_index]
        labels_for_chosen_h5py = [label_map[addr_label_map[i]] for i in chosen_h5py]
        h5py_data = np.array([hdf5_file[i] for i in chosen_h5py])
        h5py_onehot = to_categorical(labels_for_chosen_h5py, num_classes=LABEL_LENGTH)
        # print(h5py_data.shape)
        yield (h5py_data, h5py_onehot)
I have referred here, but couldn't solve my problem.
Traceback (most recent call last):
File "/opt/anaconda3/lib/python3.6/site-packages/keras/utils/data_utils.py", line 677, in _data_generator_task
generator_output = next(self._generator)
File "general_model.py", line 263, in load_random_cached_bottlenecks
h5py_data = np.array([hdf5_file[i] for i in chosen_h5py])
File "/opt/anaconda3/lib/python3.6/site-packages/keras/utils/data_utils.py", line 677, in _data_generator_task
generator_output = next(self._generator)
File "general_model.py", line 263, in load_random_cached_bottlenecks
h5py_data = np.array([hdf5_file[i] for i in chosen_h5py])
File "general_model.py", line 263, in <listcomp>
h5py_data = np.array([hdf5_file[i] for i in chosen_h5py])
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "general_model.py", line 263, in <listcomp>
h5py_data = np.array([hdf5_file[i] for i in chosen_h5py])
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "/opt/anaconda3/lib/python3.6/site-packages/h5py/_hl/group.py", line 177, in __getitem__
oid = h5o.open(self.id, self._e(name), lapl=self._lapl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5o.pyx", line 190, in h5py.h5o.open
File "/opt/anaconda3/lib/python3.6/site-packages/h5py/_hl/group.py", line 177, in __getitem__
oid = h5o.open(self.id, self._e(name), lapl=self._lapl)
KeyError: 'Unable to open object (wrong B-tree signature)'
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5o.pyx", line 190, in h5py.h5o.open
KeyError: 'Unable to open object (bad symbol table node signature)'
Traceback (most recent call last):
File "general_model.py", line 437, in <module>
train_with_bottlenecks(args, label_map, trainable_model, non_trainable_model, iterations_per_epoch_t, iterations_per_epoch_v)
File "general_model.py", line 326, in train_with_bottlenecks
validation_steps=iterations_per_epoch_v, workers = 4, callbacks = callback_list, use_multiprocessing = True, max_queue_size = 32)
File "/opt/anaconda3/lib/python3.6/site-packages/keras/legacy/interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "/opt/anaconda3/lib/python3.6/site-packages/keras/engine/training.py", line 2194, in fit_generator
generator_output = next(output_generator)
File "/opt/anaconda3/lib/python3.6/site-packages/keras/utils/data_utils.py", line 793, in get
six.reraise(value.__class__, value, value.__traceback__)
File "/opt/anaconda3/lib/python3.6/site-packages/six.py", line 693, in reraise
raise value
KeyError: 'Unable to open object (wrong B-tree signature)'
Any help is appreciated! Thanks in advance!
This is not a solution per se, but it solved the problem for me.
I got a similar error, OSError: Can't read data (wrong B-tree signature), when using fit_generator reading data from an HDF5 file, also inside an anaconda3 virtual environment.
In my case I created a new virtual environment and re-installed the needed dependencies at the specific versions with which it was supposed to work; with that, my code ran smoothly.
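A commonly suggested alternative, sketched here as an assumption rather than a fix confirmed in this thread: HDF5 handles are not safe to share across processes, so with use_multiprocessing=True the generator should receive the file path and open its own handle, rather than an already-open hdf5_file from the parent process:
import h5py
import numpy as np
from keras.utils import to_categorical

LABEL_LENGTH = 10  # stand-in for the question's global

def load_random_cached_bottlenecks(batch_size, label_map, addr_label_map, dirs, h5py_path):
    # the file is opened inside the generator, so every fit_generator
    # worker process gets its own HDF5 handle
    with h5py.File(h5py_path, 'r') as hdf5_file:
        while True:
            chosen = np.random.choice(dirs, size=batch_size)
            labels = [label_map[addr_label_map[i]] for i in chosen]
            data = np.array([hdf5_file[i][()] for i in chosen])
            yield data, to_categorical(labels, num_classes=LABEL_LENGTH)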

PyTorch - Torchvision - BrokenPipeError: [Errno 32] Broken pipe

I'm trying to carry out the tutorial named "Training a classifier" with PyTorch.
When trying to debug this part of the code:
import matplotlib.pyplot as plt
import numpy as np

# functions to show an image
def imshow(img):
    img = img / 2 + 0.5  # unnormalize
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))

# get some random training images
dataiter = iter(trainloader)
images, labels = dataiter.next()

# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))
I get this error message:
Files already downloaded and verified Files already downloaded and verified
Files already downloaded and verified Files already downloaded and verified Traceback (most recent call last):
File "<string>", line 1, in <module>
File "D:\Anaconda\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "D:\Anaconda\lib\multiprocessing\spawn.py", line 114, in _main
prepare(preparation_data)
File "D:\Anaconda\lib\multiprocessing\spawn.py", line 225, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "D:\Anaconda\lib\multiprocessing\spawn.py", line 277, in
_fixup_main_from_path
run_name="__mp_main__")
File "D:\Anaconda\lib\runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "D:\Anaconda\lib\runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "D:\Anaconda\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "d:\Yggdrasil\Programmation\PyTorch\TutorialCIFAR10.py", line 36, in <module>
dataiter = iter(trainloader)
File "D:\Anaconda\lib\site-packages\torch\utils\data\dataloader.py", line 451, in __iter__
return _DataLoaderIter(self)
File "D:\Anaconda\lib\site-packages\torch\utils\data\dataloader.py", line 239, in __init__
w.start()
File "D:\Anaconda\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "D:\Anaconda\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "D:\Anaconda\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "D:\Anaconda\lib\multiprocessing\popen_spawn_win32.py", line 33, in __init__
prep_data = spawn.get_preparation_data(process_obj._name)
File "D:\Anaconda\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
_check_not_importing_main()
File "D:\Anaconda\lib\multiprocessing\spawn.py", line 136, in
_check_not_importing_main
is not going to be frozen to produce an executable.)
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
Traceback (most recent call last):
File "d:\Yggdrasil\Programmation\PyTorch\TutorialCIFAR10.py", line 36, in <module>
dataiter = iter(trainloader)
File "D:\Anaconda\lib\site-packages\torch\utils\data\dataloader.py", line 451, in __iter__
return _DataLoaderIter(self)
File "D:\Anaconda\lib\site-packages\torch\utils\data\dataloader.py", line 239, in __init__
w.start()
File "D:\Anaconda\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "D:\Anaconda\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj) File "D:\Anaconda\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "D:\Anaconda\lib\multiprocessing\popen_spawn_win32.py", line 65, in
__init__
reduction.dump(process_obj, to_child)
File "D:\Anaconda\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
BrokenPipeError: [Errno 32] Broken pipe
All the previous lines in the tutorial work perfectly.
Does anyone know how to solve this, please?
Thanks a lot in advance.
This happens because, without further precautions, Windows cannot run this DataLoader with 'num_workers' greater than 0.
Look at where the trainloader comes from:
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=2)
Change 'num_workers' to 0, like this:
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=0)
Every trainloader needs to be changed like this.
Got the same error. The following workaround works for me:
def run():
    # code goes here

if __name__ == '__main__':
    run()
This doesn't look like a PyTorch problem. Try executing the code in a Jupyter notebook, and do some other environment troubleshooting.
You need to add an if-clause protection, as stated in the PyTorch docs:
https://pytorch.org/docs/stable/notes/windows.html#usage-multiprocessing
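Putting the answers together, a minimal sketch of the guarded entry point for the tutorial (the dataset setup is abbreviated; see the tutorial for the full transform and classes):
import torch
import torchvision
import torchvision.transforms as transforms

def main():
    transform = transforms.ToTensor()  # the tutorial adds normalization as well
    trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                            download=True, transform=transform)
    trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                              shuffle=True, num_workers=2)
    images, labels = next(iter(trainloader))
    print(images.shape)  # torch.Size([4, 3, 32, 32])

# on Windows, DataLoader workers are started with spawn, which re-imports
# this module; the guard keeps the top-level code from re-running in them
if __name__ == '__main__':
    main()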
