pytorch dataloader default_collate argument use with to(device)

I've been trying to integrate to(device) into my DataLoader via the collate_fn, as seen in https://github.com/pytorch/pytorch/issues/11372
I defined it on FashionMNIST in the following way:
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
batch_size = 32
trainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/',
                                 download=True,
                                 train=True,
                                 transform=transforms.ToTensor())
train_loader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=False, collate_fn=lambda x: default_collate(x).to(device))
But i get the following error:
AttributeError: 'list' object has no attribute 'to'
It seems that the output of default_collate is a list of length 2, with the first element being the image tensor and the second the labels tensor (since that is the output of next(iter(train_loader)) when collate_fn=None), so I tried the following function:
def to_device_list(l, device):
    return [l[0].to(device), l[1].to(device)]
train_loader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=False, collate_fn=lambda x: to_device_list(x, device))
And I got the following error:
AttributeError: 'tuple' object has no attribute 'to'
Any help on how to do this, please?

The Fashion-MNIST dataset returns a tuple of (img, target), where img is a tensor and target is an int class label.
Your DataLoader draws batch_size samples from the dataset class to get a list of samples. Note that this list of samples now has type List[Tuple[Tensor, int]] (using typing annotations here). It then calls the collate function to convert the List[Tuple[Tensor, int]] into a List[Tensor], where this list holds 2 tensors: the first is a stacked array of images of size [32, 1, 28, 28] (where 32 is the batch size), and the second is a tensor of the int class labels.
The default_collate function just converts an array of structures into a structure of arrays.
Now, when you use collate_fn=lambda x: default_collate(x).to(device), notice that default_collate returns a list of tensors, so calling .to on the list won't work; it has to be called on each element of the list.
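A tiny illustration of that conversion (a minimal sketch; the two-sample batch and the Fashion-MNIST-sized shapes are made up for demonstration):

import torch
from torch.utils.data.dataloader import default_collate

# a batch as the DataLoader hands it to collate_fn: List[Tuple[Tensor, int]]
batch = [(torch.rand(1, 28, 28), 3), (torch.rand(1, 28, 28), 7)]
images, labels = default_collate(batch)
print(images.shape)  # torch.Size([2, 1, 28, 28])
print(labels)        # tensor([3, 7])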
Solution
Use
collate_fn=lambda x: list(map(lambda t: t.to(device), default_collate(x)))
The map function transfers each element of the list (from default_collate) to CUDA, and the final list call is needed because map is evaluated lazily in Python 3.
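Putting it together, a minimal runnable sketch with the dataset from the question (note this assumes num_workers=0, the default, since creating CUDA tensors inside DataLoader worker processes is problematic):

import torch
from torch.utils.data import DataLoader
from torch.utils.data.dataloader import default_collate
from torchvision import datasets, transforms

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
batch_size = 32
trainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True,
                                 train=True, transform=transforms.ToTensor())
train_loader = DataLoader(trainset, batch_size=batch_size, shuffle=False,
                          collate_fn=lambda x: list(map(lambda t: t.to(device), default_collate(x))))

images, labels = next(iter(train_loader))
print(images.device, labels.device)  # both report cuda:0 when a GPU is available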

Related

could not determine the shape of object type 'Data' - converting pytorch list to cuda?

I want to convert a list into a format that can be fed to a model on CUDA.
I have this:
print(type(train_dataset))
train_dataset = torch.tensor(train_dataset, device='cuda:0')
print(type(train_dataset))
The output is:
<class 'list'>
Traceback (most recent call last):
File "test_pytorch_test_gpu2.py", line 882, in <module>
train_dataset = torch.tensor(train_dataset, device='cuda:0')
ValueError: could not determine the shape of object type 'Data'
Could someone explain how to convert a list to a format for CUDA, or what is wrong with what I did?
A Dataset is an object wrapping your training data; its main function is to organize the data, the labels, and possibly their augmentations.
On top of a Dataset one usually uses a DataLoader: this object collects training samples from the underlying Dataset and puts them into tensors representing mini-batches for training.
Your code should look something like this:
from torch.utils.data import DataLoader

# one-time setup of training data
train_dataset = ...  # your code that constructs the Dataset
train_loader = DataLoader(train_dataset, batch_size=4, shuffle=True, num_workers=2)

# training loop
for e in range(num_epochs):
    # each epoch represents one pass over all the training samples
    for x, y in train_loader:  # iterate over the data one _batch_ at a time
        # move training tensors, extracted from the Dataset, to GPU
        x = x.to(device)
        y = y.to(device)
        # rest of your training code here:
        pred = model(x)
        # ...

how do I multiply each layer of a tensor with another tensor?

I am trying to multiply each layer of a tensor with the first layer of the tensor.
x1 = bert_model_1([x1_in, x2_in])
x1_begin = Lambda(lambda x: x[:,0])(x1) #obtain the first layer of the bert tensor
x1_begin = Lambda(keras.layers.multiply(x11, x1_begin) for x11 in x1)([x1, x1_begin])
When I run the code above, I keep getting the following error:
<generator object build_corrector. . at 0x00000249DB234C48> is not a callable object.
The error seems to happen in the last line. How do I iterate over each layer of the tensor?
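For what it's worth, a hedged sketch of one way to express this without a Python generator (not from the original post; it assumes x1 has shape (batch, seq_len, hidden) and that "layer" means a slice along axis 1). Broadcasting lets a single Lambda multiply every slice by the first one:

import tensorflow as tf
from tensorflow.keras.layers import Lambda

x1_begin = Lambda(lambda x: x[:, 0])(x1)  # shape (batch, hidden)
# expand to (batch, 1, hidden) so it broadcasts over every position of x1
x1_scaled = Lambda(lambda t: t[0] * tf.expand_dims(t[1], axis=1))([x1, x1_begin])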

class_weight in fit_generator as np.array or dictionary?

I am trying to recreate a binary image using a UNet, but the classification labels are extremely skewed (~10% are 0s and the rest are 1s in the image). So I computed the weights for each class using sklearn. This is what I do:
wts = np.array([5.76901408, 0.54744721])
class_wts = dict(enumerate(wts))
Now, when I put this in fit_generator:
history_sgd = model.fit_generator(training_generator, validation_data=valid_generator, steps_per_epoch=train_steps, validation_steps=valid_steps, epochs=epochs, verbose=1, class_weight = class_wts)
I get the following error:
ValueError: class_weight not supported for 3+ dimensional targets.
Could this be because of the generator shape? That would be of the shape (batch_size, <image_size>).
Moreover, when I use class_weight as a numpy array instead of a dictionary, the code works. Why does that happen? I checked a lot of resources online but I cannot figure out the difference between the two, or why the numpy array format works but the dictionary does not.
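A hedged sketch of a common workaround (not from the original post; it assumes tf.keras, integer labels of shape (batch, H, W), and a fit that accepts per-pixel sample weights of the same shape as the labels): since class_weight is rejected for 3+ dimensional targets, turn the class weights into a per-pixel sample_weight yielded as a third element from the generator:

import numpy as np

wts = np.array([5.76901408, 0.54744721])

def with_sample_weights(generator):
    # wrap an existing (x, y) generator so it yields (x, y, sample_weight),
    # looking up each pixel's class weight from its integer label
    for x, y in generator:
        yield x, y, wts[y.astype(int)]

history_sgd = model.fit_generator(with_sample_weights(training_generator),
                                  steps_per_epoch=train_steps, epochs=epochs)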

how to save and restore tf.estimator model

I would like to save and restore my tf.estimator model. Although I tried to follow other related questions on Stack Overflow, I could not get it to work. The following input_fn provides the data to be predicted, but I do not know how to use it to save and restore the model for prediction.
By the way, the dataset it returns has shape [batch_size, dim], where the dtype is float32.
def predict_input_fn(path, dim, batch_size):
    dataset = ds.get_dataset(path, dim)
    dataset = dataset.batch(batch_size)
    dataset = dataset.prefetch(1)
    return dataset
What I have tried so far is the following, but it did not work as expected. Could you please help me save and restore such a model?
Trial
def serving_input_receiver_fn():
    features = tf.placeholder(
        dtype=tf.float32, shape=[None, batch_size])
    fn = lambda x: predict_input_fn(path, dim, batch_size)
    mapped_fn = tf.map_fn(fn, features)
    return tf.estimator.export.ServingInputReceiver(mapped_fn, features)

estimator.export_savedmodel(model_save_path, serving_input_receiver_fn)
Error:
Failed to convert object of type <class 'tensorflow.python.data.ops.dataset_ops.PrefetchDataset'> to Tensor. Contents: <PrefetchDataset shapes: (?, 1024), types: tf.float32>. Consider casting elements to a supported type
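For reference, a hedged sketch of the usual TF 1.x export/restore pattern (not from the original post; the feature name 'x' and using dim as the feature width are assumptions): the serving input receiver exposes plain placeholders rather than mapping a Dataset over them, and the exported SavedModel is reloaded with tf.contrib.predictor:

import tensorflow as tf

def serving_input_receiver_fn():
    # plain placeholder for the serving-time feature batch; no Dataset involved
    features = {'x': tf.placeholder(dtype=tf.float32, shape=[None, dim], name='x')}
    return tf.estimator.export.ServingInputReceiver(features, features)

export_dir = estimator.export_savedmodel(model_save_path, serving_input_receiver_fn)

# later: restore the SavedModel and run prediction
predict_fn = tf.contrib.predictor.from_saved_model(export_dir)
predictions = predict_fn({'x': my_batch})  # my_batch: float32 array of shape [batch_size, dim]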

Training a `BoostedTreesClassifier` in Tensorflow?

I am learning TensorFlow and am trying to train a BoostedTreesClassifier (premade estimator). However, I cannot get it to work with my bucketized columns. Below is my bucketized column:
age_bucket_column = tf.feature_column.bucketized_column(tf.feature_column.numeric_column(key='age'), [20, 40, 60])
Here is my train input function (note features is a Pandas DataFrame):
def train_input_fn(features, labels, batch_size):
    dataset = tf.data.Dataset.from_tensor_slices((dict(features), labels))
    dataset = dataset.shuffle(buffer_size=1000).repeat(count=None).batch(batch_size)
    return dataset.make_one_shot_iterator().get_next()
Here is my estimator:
boosted_trees_classifier = tf.estimator.BoostedTreesClassifier(
    feature_columns=[age_bucket_column],
    n_batches_per_layer=100
)
And here is my code to train it:
boosted_trees_classifier.train(
    input_fn=lambda: train_input_fn(train_X, train_y, 100),
    steps=1000
)
However, when I run it, I get the following error:
ValueError: Tensor conversion requested dtype float32 for Tensor with dtype int64: 'Tensor("IteratorGetNext:13", shape=(?,), dtype=int64, device=/device:CPU:0)'
Note that when I run the same code but with another model (say a LinearClassifier or DNNClassifier) it works perfectly. What am I doing wrong? Thank you in advance!
This is probably because your labels are of type int64. Cast them to float32:
train_y = pd.Series(train_y , index=np.array(range(1, train_y.shape[0] + 1)), dtype=np.float32)
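Alternatively (an equivalent sketch), cast inside the input function so the original Series is left untouched:

def train_input_fn(features, labels, batch_size):
    # cast the integer labels to float32 before building the dataset
    dataset = tf.data.Dataset.from_tensor_slices(
        (dict(features), labels.astype(np.float32)))
    dataset = dataset.shuffle(buffer_size=1000).repeat(count=None).batch(batch_size)
    return dataset.make_one_shot_iterator().get_next()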
