I am iterating over a torch.utils.data.DataLoader with its associated torch.utils.data.Dataset. I notice that when I change one line in the __getitem__ method of the dataset, I get the following error:
RuntimeError: DataLoader worker (pid 10666) is killed by signal: Bus error. It is possible that dataloader's workers are out of shared memory. Please try to raise your shared memory limit.
ERROR: Unexpected bus error encountered in worker. This might be caused by insufficient shared memory (shm).
The __getitem__ looks like this before:
def __getitem__(self, idx):
    datafilename = os.path.join(self.root_dir, self.labelfile.iloc[idx, 2])
    X = np.loadtxt(datafilename, delimiter=',', dtype=np.int32)
    X = torch.tensor(X, dtype=torch.float)
    return X
and like this after:
def __getitem__(self, idx):
    datafilename = os.path.join(self.root_dir, self.labelfile.iloc[idx, 2])
    with open(datafilename, 'r') as f:
        X = [int(x) for x in f.readline().split(',')]
    X = torch.tensor(X, dtype=torch.float)
    return X
I am running with the VSCode debugger, if that makes any difference. The behavior persists even with num_workers=1, and I have tried it on two separate machines with the same error. I believe this is not due to hardware but perhaps a memory leak. Also, the second version is about 7x faster, so I would prefer to use that version.
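For reference, the workarounds I have seen suggested, besides raising the shared-memory limit itself (a larger /dev/shm, or --shm-size when running in Docker), involve switching the sharing strategy used by the worker processes. A minimal sketch, untested in my setup:

import torch.multiprocessing

# Switch from the default file-descriptor strategy to file-system backed sharing;
# whether this actually helps depends on where the temporary files land on the machine.
torch.multiprocessing.set_sharing_strategy('file_system')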
Related
I ran into a memory leak running detect_image(...), which is provided by darknet.py, while detecting objects in an endless while loop. I'm using Ubuntu 20.04, Python 3.8.10, OpenCV 4.5.2 and CUDA 10.2.
darknet.py already has a function to take care of this, namely free_image(image). For some reason, it isn't called inside detect_image(...). I have added it directly underneath free_detections(detections, num) and the memory leak is taken care of. Here's the exact code:
def detect_image(network, class_names, image_path, thresh=.5, hier_thresh=.5, nms=.45):
    """
    Returns a list with highest confidence class and their bbox
    """
    pnum = pointer(c_int(0))
    image = load_image(image_path, 0, 0)
    predict_image(network, image)
    detections = get_network_boxes(network, image.w, image.h,
                                   thresh, hier_thresh, None, 0, pnum, 0)
    num = pnum[0]
    if nms:
        do_nms_sort(detections, num, len(class_names), nms)
    predictions = remove_negatives(detections, class_names, num)
    predictions = decode_detection(predictions)
    free_detections(detections, num)
    free_image(image)  # this was missing...
    return sorted(predictions, key=lambda x: x[1])
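For context, this is roughly how I call the patched function in the endless loop; the config/weights paths and grab_next_frame are placeholders for my actual setup:

network, class_names, _ = load_network('yolov4.cfg', 'obj.data', 'yolov4.weights')

while True:
    frame_path = grab_next_frame()  # placeholder: however the next image path is produced
    predictions = detect_image(network, class_names, frame_path)
    # With free_image(image) added above, memory usage now stays flat across iterations.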
When training a model using a custom dataset in PyTorch 1.4, the following error is thrown after a seemingly random number of epochs.
RuntimeError: Couldn't open shared file mapping: <torch_15324_2327643205>, error code: <1455>
The dataset is wrapped in a torch.utils.data.DataLoader and uses 4 workers, equal to the number of physical cores.
class TSNDataSet(data.Dataset):
    def __init__(self, pickle_file_paths, transforms):
        self.pickle_file_paths = pickle_file_paths  # list with file paths to pickle files
        self.dataset_size = len(pickle_file_paths)

    def __getitem__(self, index):
        with open(self.pickle_file_paths[index], 'rb') as f:
            mffs = pickle.load(f)
        return mffs, index

    def __len__(self):
        return self.dataset_size
It would be helpful to know what the error means and what the possible solutions are.
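For reference, the loader is created roughly like this (batch size and arguments are placeholders); one mitigation I can try is lowering num_workers, since each worker maps its own shared files for the tensors it returns:

from torch.utils.data import DataLoader

dataset = TSNDataSet(pickle_file_paths, transforms)  # placeholder arguments
loader = DataLoader(dataset, batch_size=8, shuffle=True, num_workers=4)

# Mitigation to try: fewer workers (worst case num_workers=0) to reduce the
# number of shared file mappings open at once.
debug_loader = DataLoader(dataset, batch_size=8, shuffle=True, num_workers=0)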
How to use tfrecord with pytorch?
I have downloaded the "Youtube8M" dataset with video-level features, but it is stored in tfrecord format.
I tried to read some samples from these files to convert them to numpy and then load them in PyTorch, but it failed.
reader = YT8MAggregatedFeatureReader()
files = tf.gfile.Glob("/Data/youtube8m/train*.tfrecord")
filename_queue = tf.train.string_input_producer(
    files, num_epochs=5, shuffle=True)
training_data = [
    reader.prepare_reader(filename_queue) for _ in range(1)
]
unused_video_id, model_input_raw, labels_batch, num_frames = tf.train.shuffle_batch_join(
    training_data,
    batch_size=1024,
    capacity=1024 * 5,
    min_after_dequeue=1024,
    allow_smaller_final_batch=True,
    enqueue_many=True)
with tf.Session() as sess:
    label_numpy = labels_batch.eval()
    print(type(label_numpy))
But this step produces no result; it just gets stuck for a long while without any response.
One workaround is to use TensorFlow 1.1x eager mode or TensorFlow 2+ to loop through the dataset (so you can use variable-length features and bucketing windows), and then use torch.as_tensor(val.numpy()).to(device) to bring the values into torch.
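A minimal sketch of that idea with TF 2.x eager execution; the feature spec below is illustrative, not the actual YT8M schema:

import tensorflow as tf
import torch

# Illustrative schema only; replace with the real video-level feature names/types.
feature_spec = {
    "mean_rgb": tf.io.FixedLenFeature([1024], tf.float32),
    "labels": tf.io.VarLenFeature(tf.int64),
}

def parse_example(serialized):
    return tf.io.parse_single_example(serialized, feature_spec)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
ds = tf.data.TFRecordDataset(tf.io.gfile.glob("/Data/youtube8m/train*.tfrecord"))

for example in ds.map(parse_example).take(4):
    rgb = torch.as_tensor(example["mean_rgb"].numpy()).to(device)
    labels = torch.as_tensor(tf.sparse.to_dense(example["labels"]).numpy()).to(device)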
You can use the DALI library to load the tfrecords directly in a PyTorch code.
You can find out how to do it in their documentation.
Maybe this can help you: TFRecord reader for PyTorch
I cooked up this:
class LiTS(torch.utils.data.Dataset):
    def __init__(self, filenames):
        self.filenames = filenames

    def __len__(self):
        return len(self.filenames)

    def __getitem__(self, idx):
        volume, segmentation = None, None
        if idx >= len(self):
            raise IndexError()
        # read_tfrecord is the parsing function that decodes one serialized example
        ds = tf.data.TFRecordDataset(self.filenames[idx:idx + 1])
        for x, y in ds.map(read_tfrecord):
            volume = torch.from_numpy(x.numpy())
            segmentation = torch.from_numpy(y.numpy())
        return volume, segmentation
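Usage then looks like any other Dataset; a sketch (tfrecord_paths is a placeholder list of files), keeping num_workers=0 because the TF ops run inside __getitem__:

from torch.utils.data import DataLoader

dataset = LiTS(tfrecord_paths)
loader = DataLoader(dataset, batch_size=1, num_workers=0)

for volume, segmentation in loader:
    pass  # feed the batch to the PyTorch model here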
I am using Keras with a tensorflow-gpu backend on an Ubuntu 17.04 VM.
I have created a custom generator to read inputs and classes from pickle files, but I get the following error:
terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc
The code for loading the data can be seen here:
def data_gen(self, pklPaths, batch_size=16):
    while True:
        data = []
        labels = []
        for i, pklPath in enumerate(pklPaths):
            # print(pklPath)
            image = pickle.load(open(pklPath, 'rb'))
            for i in range(batch_size):
                # Set a label
                data.append(image[0][0])
                labels.append(image[1][1])
            yield np.array(data), np.array(labels)
Then in the training section I'm using fit_generator:
vm_model.fit_generator(vm.data_gen(pkl_train), validation_data=vm.data_gen(pkl_validate), epochs=15, verbose=2,
steps_per_epoch=(5000/16), validation_steps=(1000/16), callbacks=[tb])
The generator should have better memory management than loading everything at once; however, that doesn't seem to be the case. Any ideas?
OK, so I found the issue, so I'm answering my own question.
Basically, the previous version had an unnecessary loop and also kept increasing the size of data and labels, essentially loading the entire dataset into memory:
def data_gen(self, pklPaths, batch_size=16):
    while True:
        data = []
        labels = []
        for i, pklPath in enumerate(pklPaths):
            # load pickle
            image = pickle.load(open(pklPath, 'rb'))
            # append
            data.append(image[0][0])
            labels.append(image[1])
            # if batch is complete yield data and labels and reset
            if i % batch_size == 0 and i != 0:
                yield np.array(data), np.array(labels)
                data.clear()
                labels.clear()
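One more detail worth noting: the modulo check above yields its first batch one item late and silently drops any remainder after the last full batch. A variant that avoids both (a sketch, not tested against my pipeline):

def data_gen(self, pklPaths, batch_size=16):
    while True:
        data, labels = [], []
        for pklPath in pklPaths:
            with open(pklPath, 'rb') as f:
                image = pickle.load(f)
            data.append(image[0][0])
            labels.append(image[1])
            if len(data) == batch_size:
                yield np.array(data), np.array(labels)
                data, labels = [], []
        if data:  # flush the final partial batch
            yield np.array(data), np.array(labels)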
I want to be able to load an existing TensorFlow network simultaneously from several processes using the multiprocessing library, so I can do inference on different cores in parallel.
def spawn_process(x):
    g = tf.Graph()
    sess = tf.Session(graph=g)
    with g.as_default():
        meta_graph = tf.train.import_meta_graph('model.meta')
        meta_graph.restore(sess, tf.train.latest_checkpoint(chkp_dir))
        x_ph = tf.get_collection('x')[0]    # placeholder tensor used to pass in new x's to do inference on
        pred = tf.get_collection('pred')[0]  # tensor responsible for computing the prediction given x
        prediction = sess.run(pred, feed_dict={x_ph: x})
        return prediction
This is basically the function I want to pass to Pool.map to run inference in parallel.
The function above works with a plain map call or a list comprehension, like this:
predictions = list(map(spawn_process, range(10)))
predictions = [spawn_process(x) for x in range(10)]
Both of the above work as expected.
But when I try to do this, it fails: each process just hangs right before the meta_graph.restore line, and I'm stumped.
p = multiprocess.Pool(4)
predictions = p.map(spawn_process, range(10))
p.close()
p.join()
I don't know why this isn't working with TensorFlow; this approach normally works for me for any sort of parallel computation. It stops right before the meta_graph.restore line and all the processes hang there.
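A workaround that is commonly suggested for this kind of hang (I have not verified it against this exact model) is to avoid fork-inheriting any TensorFlow state by using the 'spawn' start method, so each worker process builds its graph and session from scratch:

import multiprocessing as mp

if __name__ == '__main__':
    # 'spawn' starts clean interpreter processes instead of forking the parent,
    # so no TensorFlow graph/session state is inherited by the workers.
    ctx = mp.get_context('spawn')
    with ctx.Pool(4) as p:
        predictions = p.map(spawn_process, range(10))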