How to add a new sample to CIFAR10 in torchvision? - pytorch

Hi, I want to add my own images to the CIFAR10 dataset in torchvision. How can I do that?
train_data = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=train_transform)
train_data.add # or a workaround!
thanks

You can either create a custom dataset from the raw CIFAR-10 images, or you can wrap the existing CIFAR10 dataset inside your new custom dataset and add your logic in its __getitem__() method.
This is a simple example to get you going:
import torch
import torchvision

class CIFAR10_2(torch.utils.data.Dataset):
    def __init__(self, dataset_path='/cifar10', transformations=None, should_download=True):
        self.dataset_train = torchvision.datasets.CIFAR10(dataset_path, download=should_download)
        self.transformations = transformations

    def __getitem__(self, index):
        # do as you wish, add your logic here
        (img, label) = self.dataset_train[index]
        # for transformations, for example
        if self.transformations is not None:
            return self.transformations(img), label
        return img, label

    def __len__(self):
        return len(self.dataset_train)
You can get fancy and add logic for test, validation, etc. and do whatever you like.
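If you literally want to append your own samples to CIFAR10, one way is to keep a list of extra (image, label) pairs next to the wrapped dataset and map indices past the end of CIFAR10 into that list. A minimal sketch, assuming the extra images are PIL images (or arrays) of the same 32x32 size and that extra_samples is a plain Python list you build yourself:

import torch
import torchvision

class CIFAR10WithExtras(torch.utils.data.Dataset):
    def __init__(self, root='./data', extra_samples=None, transform=None):
        # extra_samples: hypothetical list of (PIL image, label) pairs to append
        self.base = torchvision.datasets.CIFAR10(root=root, train=True, download=True)
        self.extra = extra_samples or []
        self.transform = transform

    def __len__(self):
        return len(self.base) + len(self.extra)

    def __getitem__(self, index):
        # indices past the end of CIFAR10 map into the extra samples
        if index < len(self.base):
            img, label = self.base[index]
        else:
            img, label = self.extra[index - len(self.base)]
        if self.transform is not None:
            img = self.transform(img)
        return img, label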

Related

Change image labels when using pytorch

I am loading an image dataset with pytorch as seen below:
dataset = datasets.ImageFolder('...', transform=transform)
loader = DataLoader(dataset, batch_size=args.batchsize)
The dataset is in a folder with the structure seen below:
dataset/
    class_1/
    class_2/
    class_3/
As a result, each image in the class_1 folder has a label of 0, etc.
However, I would like to change these labels and randomly assign a label to each image in the dataset. What I tried is:
new_labels = [random.randint(0, 3) for i in range(len(dataset.targets))]
dataset.targets = new_labels
This, however, does not change the labels as I wanted, judging by errors later in model training.
Is this the correct way to do it, or is there a more appropriate one?
You can have a transformation for the labels:
import random

class rand_label_transform(object):
    def __init__(self, num_labels):
        self.num_labels = num_labels

    def __call__(self, labels):
        # generate new random label
        new_label = random.randint(0, self.num_labels - 1)
        return new_label
dataset = datasets.ImageFolder('...', transform=transform, target_transform=rand_label_transform(num_labels=3))
See ImageFolder for more details.
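As a side note on why overwriting dataset.targets alone had no effect: in recent torchvision versions ImageFolder reads each item from dataset.samples (a list of (path, class_index) pairs) inside __getitem__, so the targets list is not consulted when loading. If you want random labels that stay fixed across epochs, a small sketch (assuming the 3-class setup above) is to rewrite both attributes once:

import random

new_labels = [random.randint(0, 2) for _ in range(len(dataset))]  # 3 classes -> labels 0..2
dataset.targets = new_labels
# __getitem__ loads from dataset.samples, so the labels stored there must change as well
dataset.samples = [(path, new_labels[i]) for i, (path, _) in enumerate(dataset.samples)]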

Applying a simple transformation to get a binary image using pytorch

I'd like to binarize the image before passing it to the dataloader. I have created a dataset class which works well, but in the __getitem__() method I'd like to threshold the image:
def __getitem__(self, idx):
    # Open image, apply transforms and return with label
    img_path = os.path.join(self.dir, self.filelist[idx])
    image = Image.open(img_path)
    label = self.x_data.iloc[idx]["label"]
    # Applying transformation to the image
    if self.transforms is not None:
        image = self.transforms(image)
    # applying threshold here:
    my_threshold = 240
    image = image.point(lambda p: p < my_threshold and 255)
    image = torch.tensor(image)
    return image, label
And then I tried to invoke the dataset:
data_transformer = transforms.Compose([
    transforms.Resize((10, 10)),
    transforms.Grayscale()
    # transforms.ToTensor()
])
train_set = MyNewDataset(data_path, data_transformer, rows_train)
Since I applied the threshold to a PIL object, I need to convert it to a tensor afterwards, but for some reason it crashes. Can somebody please assist me?
Why not apply the binarization after the conversion from PIL.Image to torch.Tensor?
class ThresholdTransform(object):
    def __init__(self, thr_255):
        self.thr = thr_255 / 255.  # input threshold for [0..255] gray level, convert to [0..1]

    def __call__(self, x):
        return (x > self.thr).to(x.dtype)  # do not change the data type
Once you have this transformation, you simply add it:
data_transformer = transforms.Compose([
    transforms.Resize((10, 10)),
    transforms.Grayscale(),
    transforms.ToTensor(),
    ThresholdTransform(thr_255=240)
])
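With this transform at the end of the pipeline you can drop the manual image.point(...) and torch.tensor(...) lines from __getitem__; using the names from the question, the dataset is then built as before:

train_set = MyNewDataset(data_path, data_transformer, rows_train)
img, label = train_set[0]  # img is a grayscale tensor whose values are already 0. or 1.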

Torchtext 0.7 shows Field is being deprecated. What is the alternative?

Looks like the previous paradigm of declaring Fields, Examples and using BucketIterator is deprecated and will move to legacy in 0.8. However, I don't seem to be able to find an example of the new paradigm for custom datasets (as in, not the ones included in torch.datasets) that doesn't use Field. Can anyone point me at an up-to-date example?
Reference for deprecation:
https://github.com/pytorch/text/releases
It took me a little while to find the solution myself. The new paradigm is like so for prebuilt datasets:
from torchtext.experimental.datasets import AG_NEWS
train, test = AG_NEWS(ngrams=3)
or like so for custom built datasets:
from torch.utils.data import DataLoader

def collate_fn(batch):
    texts, labels = [], []
    for label, txt in batch:
        texts.append(txt)
        labels.append(label)
    return texts, labels

# `train` here is the dataset built above (e.g. from AG_NEWS)
dataloader = DataLoader(train, batch_size=8, collate_fn=collate_fn)
for idx, (texts, labels) in enumerate(dataloader):
    print(idx, texts, labels)
I've copied the examples from the Source
Browsing through torchtext's GitHub repo I stumbled over the README in the legacy directory, which is not documented in the official docs. The README links a GitHub issue that explains the rationale behind the change as well as a migration guide.
If you just want to keep your existing code running with torchtext 0.9.0, where the deprecated classes have been moved to the legacy module, you have to adjust your imports:
# from torchtext.data import Field, TabularDataset
from torchtext.legacy.data import Field, TabularDataset
Alternatively, you can import the whole torchtext.legacy module as torchtext as suggested by the README:
import torchtext.legacy as torchtext
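With that alias in place, existing Field/TabularDataset/BucketIterator code should keep running unchanged. A sketch under the assumption of a classic CSV workflow (the file names and field names are placeholders):

import torch
import torchtext.legacy as torchtext

TEXT = torchtext.data.Field(tokenize='spacy', lower=True)
LABEL = torchtext.data.LabelField(dtype=torch.float)

train, test = torchtext.data.TabularDataset.splits(
    path='data', train='train.csv', test='test.csv', format='csv',
    fields=[('text', TEXT), ('label', LABEL)])

TEXT.build_vocab(train)
LABEL.build_vocab(train)

train_iter, test_iter = torchtext.data.BucketIterator.splits(
    (train, test), batch_size=32, sort_key=lambda x: len(x.text))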
There is a post regarding this. Instead of the deprecated Field and BucketIterator classes, it uses the TextClassificationDataset along with the collator and other preprocessing. It reads a txt file and builds a dataset, followed by a model. Inside the post, there is a link to a complete working notebook. The post is at: https://mmg10.github.io/pytorch/2021/02/16/text_torch.html. But you need the 'dev' (or nightly build) of PyTorch for it to work.
From the link above:
After tokenization and building vocabulary, you can build the dataset as follows
def data_to_dataset(data, tokenizer, vocab):
    data = [(text, label) for (text, label) in data]
    text_transform = sequential_transforms(tokenizer.tokenize,
                                           vocab_func(vocab),
                                           totensor(dtype=torch.long))
    label_transform = sequential_transforms(lambda x: 1 if x == '1' else (0 if x == '0' else x),
                                            totensor(dtype=torch.long))
    transforms = (text_transform, label_transform)
    dataset = TextClassificationDataset(data, vocab, transforms)
    return dataset
The collator is as follows (the class wrapper is implied in the linked post; the name Collator is assumed here):
class Collator:
    def __init__(self, pad_idx):
        self.pad_idx = pad_idx

    def collate(self, batch):
        text, labels = zip(*batch)
        labels = torch.LongTensor(labels)
        text = nn.utils.rnn.pad_sequence(text, padding_value=self.pad_idx, batch_first=True)
        return text, labels
Then, you can build the dataloader with the typical torch.utils.data.DataLoader using the collate_fn argument.
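Putting it together, a minimal sketch (train_data, tokenizer and vocab are placeholders from the steps above, and a '<pad>' token in the vocabulary is assumed):

from torch.utils.data import DataLoader

dataset = data_to_dataset(train_data, tokenizer, vocab)
collator = Collator(pad_idx=vocab['<pad>'])  # assumes the vocabulary contains a '<pad>' token
loader = DataLoader(dataset, batch_size=32, shuffle=True, collate_fn=collator.collate)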
Well, it seems like the pipeline could look like this:
import torchtext as TT
import torch
from collections import Counter
from torchtext.vocab import Vocab
from torch.utils.data import DataLoader

# read the data
with open('text_data.txt', 'r') as f:
    data = f.readlines()
with open('labels.txt', 'r') as f:
    labels = f.readlines()

tokenizer = TT.data.utils.get_tokenizer('spacy', 'en')  # can remove 'spacy' and use a simple built-in tokenizer
train_iter = zip(labels, data)
counter = Counter()
for (label, line) in train_iter:
    counter.update(tokenizer(line))
vocab = TT.vocab.Vocab(counter, min_freq=1)

text_pipeline = lambda x: [vocab[token] for token in tokenizer(x)]
# this is data-specific - adapt for your data
label_pipeline = lambda x: 1 if x == 'positive\n' else 0

class TextData(torch.utils.data.Dataset):
    '''
    very basic dataset for processing text data
    '''
    def __init__(self, labels, text):
        super(TextData, self).__init__()
        self.labels = labels
        self.text = text

    def __getitem__(self, index):
        return self.labels[index], self.text[index]

    def __len__(self):
        return len(self.labels)

def tokenize_batch(batch, max_len=200):
    '''
    collate function to use in DataLoader
    takes a batch of the text dataset and produces a tensor batch, converting text and labels through the
    global text_pipeline and label_pipeline functions
    max_len is a fixed length; if the text is shorter than max_len it is left-padded with ones (the pad number),
    if the text is longer than max_len it is truncated, keeping the end of the string
    '''
    labels_list, text_list = [], []
    for _label, _text in batch:
        labels_list.append(label_pipeline(_label))
        text_holder = torch.ones(max_len, dtype=torch.int32)  # fixed-size tensor of length max_len
        processed_text = torch.tensor(text_pipeline(_text), dtype=torch.int32)
        pos = min(max_len, len(processed_text))
        text_holder[-pos:] = processed_text[-pos:]
        text_list.append(text_holder.unsqueeze(dim=0))
    return torch.FloatTensor(labels_list), torch.cat(text_list, dim=0)

train_dataset = TextData(labels, data)
train_loader = DataLoader(train_dataset, batch_size=2, shuffle=False, collate_fn=tokenize_batch)
lbl, txt = next(iter(train_loader))

pytorch: how can I use picture as label in dataloader?

I want to do some image reconstruction using autoencoders in PyTorch; however, I didn't find a way to use an image as the label for an input image (the label image is different from the original one).
I've tried the ImageFolder method, but I think that's for classification and I am currently unable to come up with a solution. Should I create a custom dataset for this...
Thanks in advance!
Write your custom Dataset; below is a simple example.
from torch.utils.data import Dataset, DataLoader

class CustomDataset(Dataset):
    def __init__(self, input_imgs, label_imgs, transform):
        self.input_imgs = input_imgs
        self.label_imgs = label_imgs
        self.transform = transform

    def __len__(self):
        return len(self.input_imgs)

    def __getitem__(self, idx):
        input_img, label_img = self.input_imgs[idx], self.label_imgs[idx]
        return self.transform(input_img), self.transform(label_img)
And then, pass an instance of it to DataLoader:
dataloader = DataLoader(CustomDataset(input_imgs, label_imgs, transform))
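In the reconstruction setting each batch from this loader yields the input images together with their target (label) images; a short usage sketch, where autoencoder and criterion are placeholder names for your model and loss (e.g. nn.MSELoss()), not part of the original answer:

for input_batch, target_batch in dataloader:
    output = autoencoder(input_batch)       # autoencoder: your reconstruction model (placeholder)
    loss = criterion(output, target_batch)  # reconstruction loss against the label image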

load multi-modal data with pytorch

I'm trying to load multi-modal data (e.g. text and image) in PyTorch for image classification. I do not know how to load them simultaneously, as in the following code.
def __init__(self, img_path, txt_path, transform=None, loader=default_loader):
    ...
def __len__(self):
    return len(self.img_name)
def __getitem__(self, item):
    ...
Can anyone help me?
In __getitem__, you can use a dictionary or a tuple to represent one sample of your data. Later, during training, when you create a dataloader from the dataset, PyTorch will automatically create batches of dictionaries or tuples.
If you want to assemble the batches in a more customized way, check out collate_fn in PyTorch.
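For the collate_fn route, a minimal sketch of batching a dict-per-sample dataset (the keys 'text', 'image', 'label' and the variable names are assumptions, not from the original question):

import torch
from torch.utils.data import DataLoader

def multimodal_collate(batch):
    # batch is a list of the per-sample dicts returned by __getitem__
    texts = [sample['text'] for sample in batch]                 # variable-length text stays a Python list
    images = torch.stack([sample['image'] for sample in batch])  # image tensors stacked along a new batch dim
    labels = torch.tensor([sample['label'] for sample in batch])
    return texts, images, labels

loader = DataLoader(dataset, batch_size=8, collate_fn=multimodal_collate)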
The method __getitem__(self, item) would help you do this.
For example:
def __getitem__(self, item):  # item can be thought of as an index
    text = textList[item]  # textList would be a list containing the text you want to input into the model for element 'item'
    img = imgList[item]  # imgList would be a list containing the images you want to input into the model for element 'item'
    input = [text, img]
    y = labels[item]  # labels would be a list containing the label for the text/img input. This is your target.
    return input, y
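Filling in the skeleton from the question, a more complete sketch could look like the following. The file layout (one text line per image, images sorted by file name) and all helper names are assumptions, not from the original question:

import os
from PIL import Image
import torch
from torch.utils.data import Dataset

class MultiModalDataset(Dataset):
    def __init__(self, img_dir, txt_path, labels, transform=None):
        # txt_path: a file with one text sample per line (assumption)
        with open(txt_path) as f:
            self.texts = [line.strip() for line in f]
        self.img_names = sorted(os.listdir(img_dir))  # assumes images align with the text lines
        self.img_dir = img_dir
        self.labels = labels
        self.transform = transform

    def __len__(self):
        return len(self.img_names)

    def __getitem__(self, index):
        image = Image.open(os.path.join(self.img_dir, self.img_names[index])).convert('RGB')
        if self.transform is not None:
            image = self.transform(image)
        return {'text': self.texts[index], 'image': image, 'label': self.labels[index]}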
