Combine multiple DataLoaders sequentially - pytorch

I'm interested in how I'd go about combining multiple DataLoaders sequentially for training. I understand I can use ConcatDataset to combine datasets first, but this does not work for my use case. I have a custom collate_fn that is passed to each dataloader, and this function depends on an attribute of the underlying Dataset. So, I'll have a set of custom DataLoaders like the following:
import functools
import torch

def custom_collate(sample, ref):
    data = clean_sample(torch.stack([x[0] for x in sample]), ref)
    labels = torch.tensor([x[1] for x in sample])
    return data, labels

class CollateLoader(torch.utils.data.DataLoader):
    def __init__(self, ref, *args, **kwargs):
        collate_fn = functools.partial(custom_collate, ref=ref)
        super().__init__(*args, collate_fn=collate_fn, **kwargs)
Where ref is a property of the custom Dataset class and is passed on initialization of a CollateLoader. Also, I know transforms can be applied in the Dataset, but in my case it must be done batch-wise.
So, how would I go about combining multiple DataLoaders? In the PyTorch-Lightning LightningDataModule, we can do something like
def train_dataloader(self):
    return [data_loader_1, data_loader_2]
But this will return a list of batches, not the batches sequentially.

I ran into the same problem and found a workaround. I overrode the epoch training loop using the Loops API from PyTorch Lightning, defining a class CustomLoop which inherits from pytorch_lightning.loops.TrainingEpochLoop and overriding the advance() method. I copy-pasted the source code from pytorch_lightning and replaced the relevant lines with:
if not hasattr(self, 'dataloader_idx'):
    self.dataloader_idx = 0
if not isinstance(data_fetcher, DataLoaderIterDataFetcher):
    batch_idx = self.batch_idx + 1
    batch = next(data_fetcher.dataloader.loaders[self.dataloader_idx])
    self.dataloader_idx += 1
    if self.dataloader_idx == len(data_fetcher.dataloader.loaders):
        self.dataloader_idx = 0
else:
    batch_idx, batch = next(data_fetcher)
That way, instead of iterating over the CombinedLoader, I make it iterate over one dataloader at a time.
Then, to make use of this custom loop you have to replace the default loop in the Trainer:
trainer.fit_loop.replace(epoch_loop=CustomLoop)
trainer.fit(my_model)

You can return [train_dataloader, train_2_dataloader] from train_dataloader(); each training step then receives a list with one batch per dataloader, so you can loop over them and sum the losses, as in the sketch below.
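A minimal, hedged sketch of what that could look like in the LightningModule; the exact structure of batch depends on your Lightning version and the CombinedLoader mode, and loss_fn and the forward pass are placeholders for your own code:

import pytorch_lightning as pl

class MyModel(pl.LightningModule):
    def training_step(self, batch, batch_idx):
        # with a list of dataloaders, `batch` is a list with one
        # (data, labels) pair per dataloader
        total_loss = 0.0
        for data, labels in batch:
            preds = self(data)                                  # forward pass defined elsewhere
            total_loss = total_loss + self.loss_fn(preds, labels)  # hypothetical loss_fn
        return total_loss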

Related

Pythonic way of reducing the subclasses

Background: I am working on an NLP problem where I need to extract different types of features from different types of text documents. I currently have a setup with a FeatureExtractor base class, which is subclassed once per document type; each subclass calculates a different set of features and returns a pandas DataFrame as output.
All these subclasses are invoked by a wrapper class called FeatureExtractionRunner, which runs every extractor on the documents and returns the output for all document types.
Problem: this pattern leads to lots of subclasses. I currently have 14 of them, one per document type, and the number may grow further; that is too many classes to maintain. Is there an alternative way of doing this with less subclassing?
Here is some representative sample code of what I described:
from abc import ABCMeta, abstractmethod

class FeatureExtractor(metaclass=ABCMeta):
    # base feature extractor class
    def __init__(self, document):
        self.document = document

    @abstractmethod
    def doc_to_features(self):
        return NotImplemented

class ExtractorTypeA(FeatureExtractor):
    # do some feature calculations.....
    def _calculate_shape_features(self, document):
        return None

    def _calculate_size_features(self, document):
        return None

    def doc_to_features(self):
        # calls all the fancy feature calculation methods like
        f1 = self._calculate_shape_features(self.document)
        f2 = self._calculate_size_features(self.document)
        # do some calculations on the document and return a pandas dataframe by merging them (merge f1, f2....etc)
        data = "dataframe-1"
        return data

class ExtractorTypeB(FeatureExtractor):
    # do some feature calculations.....
    def _calculate_some_fancy_features(self, document):
        return None

    def _calculate_some_more_fancy_features(self, document):
        return None

    def doc_to_features(self):
        # calls all the fancy feature calculation methods
        f1 = self._calculate_some_fancy_features(self.document)
        f2 = self._calculate_some_more_fancy_features(self.document)
        # do some calculations on the document and return a pandas dataframe (merge f1, f2 etc)
        data = "dataframe-2"
        return data

class ExtractorTypeC(FeatureExtractor):
    # do some feature calculations.....
    def doc_to_features(self):
        # do some calculations on the document and return a pandas dataframe
        data = "dataframe-3"
        return data

class FeatureExtractionRunner:
    # a class to call all types of feature extractors
    def __init__(self, document, *args, **kwargs):
        self.document = document
        self.type_a = ExtractorTypeA(self.document)
        self.type_b = ExtractorTypeB(self.document)
        self.type_c = ExtractorTypeC(self.document)
        # more of these extractors would be there

    def call_all_type_of_extractors(self):
        type_a_features = self.type_a.doc_to_features()
        type_b_features = self.type_b.doc_to_features()
        type_c_features = self.type_c.doc_to_features()
        # more such extractors would be there....
        return [type_a_features, type_b_features, type_c_features]

all_type_of_features = FeatureExtractionRunner("some document").call_all_type_of_extractors()
Answering the question first: you could avoid subclassing entirely at the cost of writing the __init__ method each time, you could get rid of the classes entirely and convert them to a bunch of functions, or you could even merge all the classes into a single one. Note that none of these options will make the code simpler or more maintainable; they just change its shape to some extent.
IMHO this situation is a perfect example of inherent problem complexity, by which I mean that the domain (NLP) and the particular use case (document feature extraction) are complex in and of themselves.
For example, featureX and featureY are likely to be totally different things that cannot be calculated together, so you end up with one method for each. Similarly, the procedure to merge these features into a DataFrame might differ from the one used to merge the fancy features. Having lots of functions/classes in this situation seems entirely reasonable to me, and keeping them separate is both logical and maintainable.
That said, real code reduction might be possible if you can combine some feature calculation methods into a more generic function, though I can't say for sure whether that is feasible here.
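For illustration only, here is a minimal sketch of the "bunch of functions" option mentioned above: each document type maps to a list of plain feature functions in a registry dict, so adding a new type means adding an entry rather than a subclass. The names (shape_features, size_features, DOC_FEATURES, extract_features) and the placeholder calculations are my own, not from your code:

import pandas as pd

def shape_features(document):
    return pd.DataFrame({"shape": [len(document)]})          # placeholder calculation

def size_features(document):
    return pd.DataFrame({"size": [len(document.split())]})   # placeholder calculation

# one registry entry per document type, instead of one subclass per type
DOC_FEATURES = {
    "type_a": [shape_features, size_features],
    "type_b": [size_features],
}

def extract_features(document, doc_type):
    frames = [fn(document) for fn in DOC_FEATURES[doc_type]]
    return pd.concat(frames, axis=1)

all_features = extract_features("some document", "type_a")

Whether this is actually nicer than the subclass version depends on how much per-type merging logic each extractor really needs.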

TFBertMainLayer gets less accuracy compared to TFBertModel

I had a problem with saving the weights of a TFBertModel wrapped in Keras; the problem is described in a GitHub issue and in a Stack Overflow question. The solution proposed in both cases is to use
config = BertConfig.from_pretrained(transformer_model_name)
bert = TFBertMainLayer(config=config,trainable=False)
instead of
bert = TFBertModel.from_pretrained(transformer_model_name, trainable=False)
The problem is that when I change my model to the former code, the accuracy decreases by 10 percent, even though the parameter count is the same in both cases. I wonder what the reason is and how it can be prevented?
It seems like the performance regression in the code snippet that instantiates MainLayer directly occurs because the pre-trained weights are not being loaded. You can load the weights by either:
(1) Calling TFBertModel.from_pretrained and grabbing the MainLayer from the loaded TFBertModel
(2) Creating the MainLayer directly, then loading the weights in a similar way to from_pretrained
Why This Happens
When you call TFBertModel.from_pretrained, it uses the function TFPreTrainedModel.from_pretrained (via inheritance) which handles a few things, including downloading, caching, and loading the model weights.
class TFPreTrainedModel(tf.keras.Model, TFModelUtilsMixin, TFGenerationMixin):
    ...
    @classmethod
    def from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs):
        ...
        # Load model
        if pretrained_model_name_or_path is not None:
            if os.path.isfile(os.path.join(pretrained_model_name_or_path, TF2_WEIGHTS_NAME)):
                # Load from a TF 2.0 checkpoint
                archive_file = os.path.join(pretrained_model_name_or_path, TF2_WEIGHTS_NAME)
            ...
            resolved_archive_file = cached_path(
                archive_file,
                cache_dir=cache_dir,
                force_download=force_download,
                proxies=proxies,
                resume_download=resume_download,
                local_files_only=local_files_only,
            )
        ...
        model.load_weights(resolved_archive_file, by_name=True)
(If you read the actual code, a lot has been ...'ed out above).
However, when you instantiate TFBertMainLayer directly, it doesn't do any of this set up work.
@keras_serializable
class TFBertMainLayer(tf.keras.layers.Layer):
    config_class = BertConfig

    def __init__(self, config, **kwargs):
        super().__init__(**kwargs)
        self.num_hidden_layers = config.num_hidden_layers
        self.initializer_range = config.initializer_range
        self.output_attentions = config.output_attentions
        self.output_hidden_states = config.output_hidden_states
        self.return_dict = config.use_return_dict
        self.embeddings = TFBertEmbeddings(config, name="embeddings")
        self.encoder = TFBertEncoder(config, name="encoder")
        self.pooler = TFBertPooler(config, name="pooler")
    # ... rest of the class
Essentially, you need to make sure these weights are being loaded.
Solutions
(1) Using TFAutoModel.from_pretrained
You can rely on transformers.TFAutoModel.from_pretrained to load the model, then just grab the MainLayer field from the specific subclass of TFPreTrainedModel. For example, if you wanted to access a distilbert main layer, it would look like:
model = transformers.TFAutoModel.from_pretrained("distilbert-base-uncased")
assert isinstance(model, TFDistilBertModel)
main_layer = model.distilbert
You can see in modeling_tf_distilbert.html that the MainLayer is a field of the model.
This is less code and less duplication, but it has a few disadvantages. It is harder to change which pre-trained model you use, because you now depend on the field name: if you change the model type, you have to change the field name (for example, in TFAlbertModel the MainLayer field is called albert). In addition, this doesn't seem to be the intended way to use Hugging Face, so it could change under your nose and your code could break with Hugging Face updates.
class TFDistilBertModel(TFDistilBertPreTrainedModel):
    def __init__(self, config, *inputs, **kwargs):
        super().__init__(config, *inputs, **kwargs)
        self.distilbert = TFDistilBertMainLayer(config, name="distilbert")  # Embeddings

    @add_start_docstrings_to_callable(DISTILBERT_INPUTS_DOCSTRING)
    @add_code_sample_docstrings(
        tokenizer_class=_TOKENIZER_FOR_DOC,
        checkpoint="distilbert-base-uncased",
        output_type=TFBaseModelOutput,
        config_class=_CONFIG_FOR_DOC,
    )
    def call(self, inputs, **kwargs):
        outputs = self.distilbert(inputs, **kwargs)
        return outputs
(2) Re-implementing the weight loading logic from from_pretrained
You can do this by essentially copy/pasting the parts of from_pretrained that are relevant to loading weights. This also has serious disadvantages: you'll be duplicating logic that can fall out of sync with the Hugging Face libraries, though you could likely write it in a way that is more flexible and robust to underlying model name changes. A rough sketch of the idea is below.
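For illustration only, here is a minimal, hedged sketch of that idea. Rather than copying the full download/caching logic, it loads a TFBertModel once and copies its main-layer weights into a standalone TFBertMainLayer; it assumes both layers are built from the same config so the variable ordering returned by get_weights() matches. Treat it as a starting point, not the library's documented API:

import tensorflow as tf
from transformers import BertConfig, TFBertModel, TFBertMainLayer
# depending on the transformers version, TFBertMainLayer may instead need to be
# imported from transformers.models.bert.modeling_tf_bert

model_name = "bert-base-uncased"  # assumed checkpoint name
config = BertConfig.from_pretrained(model_name)

# standalone main layer, frozen as in the question
bert = TFBertMainLayer(config=config, trainable=False)
bert(tf.constant([[101, 102]]))   # one dummy call so the layer's variables are created

# load the full pretrained model once and copy its main-layer weights across
pretrained = TFBertModel.from_pretrained(model_name)
bert.set_weights(pretrained.bert.get_weights())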
Conclusion
Ideally this is something that will get fixed internally by the huggingface team, either by providing a standard function to create a MainLayer, wrapping the weight loading logic into its own function that can be called, or by supporting serialization on the model class.

scipy.optimize.minimize() constraints depend on cost function

I'm running a constrained optimisation with scipy.optimize.minimize(method='COBYLA').
In order to evaluate the cost function, I need to run a relatively expensive simulation to compute a dataset from the input variables, and the cost function is one (cheap to compute) property of that dataset. However, two of my constraints are also dependent on that expensive data.
So far, the only way I have found to constrain the optimisation is to have each of the constraint functions recompute the same dataset that the cost function already has calculated (simplified quasi-code):
def costfun(x):
    data = expensive_fun(x)
    return(cheap_fun1(data))

def constr1(x):
    data = expensive_fun(x)
    return(cheap_fun2(data))

def constr2(x):
    data = expensive_fun(x)
    return(cheap_fun3(data))

constraints = [{'type': 'ineq', 'fun': constr1},
               {'type': 'ineq', 'fun': constr2}]

# initial guess
x0 = np.ones((6,))

opt_result = minimize(costfun, x0, method='COBYLA',
                      constraints=constraints)
This is clearly not efficient because expensive_fun(x) is called three times for every x.
I could change this slightly to include a universal "evaluate some cost" function which runs the expensive computation, and then evaluates whatever criterion it has been given. But while that saves me from having to write the "expensive" code several times, it still runs three times for every iteration of the optimizer:
# universal cost function evaluator
def criterion_from_x(x, cfun):
    data = expensive_fun(x)
    return(cfun(data))

def costfun(data):
    return(cheap_fun1(data))

def constr1(data):
    return(cheap_fun2(data))

def constr2(data):
    return(cheap_fun3(data))

constraints = [{'type': 'ineq', 'fun': criterion_from_x, 'args': (constr1,)},
               {'type': 'ineq', 'fun': criterion_from_x, 'args': (constr2,)}]

# initial guess
x0 = np.ones((6,))

opt_result = minimize(criterion_from_x, x0, method='COBYLA',
                      args=(costfun,), constraints=constraints)
I have not managed to find any way to set something up where x is used to generate data at each iteration, and data is then passed to both the objective function as well as the constraint functions.
Does something like this exist? I've noticed the callback argument to minimize(), but that is a function which is called after each step. I'd need some kind of preprocessor which is called on x before each step, whose results are then available to the cost function and constraint evaluation. Maybe there's a way to sneak it in somehow? I'd like to avoid writing my own optimizer.
One, more traditional, way to solve this would be to evaluate the constraints in the cost function (which has all the data it needs for that), have it add a penalty for violated constraints to the main cost function, and run the optimizer without the explicit constraints. But I've tried this before and found that the main cost function can become somewhat chaotic in cases where the constraints are violated, so an optimizer might get stuck in some place which violates the constraints and not find its way out again.
Another approach would be to set some kind of global variable in the cost function and have the constraint evaluation use that global variable, but that could be very dangerous if multithreading/multiprocessing gets involved, or if the name I choose for the global variable collides with a name used anywhere else in the code:
def costfun(x):
    global data
    data = expensive_fun(x)
    return(cheap_fun1(data))

def constr1(x):
    global data
    return(cheap_fun2(data))

def constr2(x):
    global data
    return(cheap_fun3(data))
I know that some people use file I/O for cases where the cost function involves running a large simulation which produces a bunch of output files. After that, the constraint functions can just access those files -- but my problem is not that big.
I'm currently using Python v3.9 and scipy 1.9.1.
You could write a decorator class in the same vein as scipy's MemoizeJac that caches the return values of the expensive function each time it is called:
import numpy as np

class MemoizeData:
    def __init__(self, obj_fun, exp_fun, constr_fun):
        self.obj_fun = obj_fun
        self.exp_fun = exp_fun
        self.constr_fun = constr_fun
        self._data = None
        self.x = None

    def _compute_if_needed(self, x, *args):
        if not np.all(x == self.x) or self._data is None:
            self.x = np.asarray(x).copy()
            self._data = self.exp_fun(x)

    def __call__(self, x, *args):
        self._compute_if_needed(x, *args)
        return self.obj_fun(self._data)

    def constraint(self, x, *args):
        self._compute_if_needed(x, *args)
        return self.constr_fun(self._data)
This way, the expensive function is only evaluated once per iteration. Then, after writing all your constraints into one constraint function, you could use it like this:
from scipy.optimize import minimize

def all_constrs(data):
    return np.hstack((cheap_fun2(data), cheap_fun3(data)))

obj = MemoizeData(cheap_fun1, expensive_fun, all_constrs)
constr = {'type': 'ineq', 'fun': obj.constraint}
x0 = np.ones(6)
opt_result = minimize(obj, x0, method="COBYLA", constraints=constr)
While Joni was writing their answer, I found another one, which is admittedly more hacky. I prefer theirs, but for the sake of completeness, I wanted to post this one, too.
It's derived from the material at https://mdobook.github.io/ and the accompanying video tutorials from the BYU FLOW Lab.
The trick is to use variables outside the functions (module-level variables here, or nonlocal variables inside a closure) to keep a cache of the last evaluation of the expensive function:
import numpy as np

# module-level cache of the last evaluation; inside a closure these would be nonlocal
last_x = None
last_data = None

def compute_data(x):
    data = expensive_fun(x)
    return(data)

def get_last_data(x):
    global last_x, last_data
    if not np.array_equal(x, last_x):
        last_data = compute_data(x)
        last_x = x
    return(last_data)

def costfun(x):
    data = get_last_data(x)
    return(cheap_fun1(data))

def constr1(x):
    data = get_last_data(x)
    return(cheap_fun2(data))

def constr2(x):
    data = get_last_data(x)
    return(cheap_fun3(data))
...and then everything can progress as in my original code in the question.
Reasons why I prefer Joni's class-based version:
variable scopes are clearer than with the global/nonlocal caching trick
If some of the functions allow calculation of their Jacobian, or there are other things worth buffering, the added complexity is held in check better than with a growing collection of module-level cache variables.
Having a class instance do all the work also allows you to do other interesting things, like keeping a record of all past evaluations and the path taken by the optimizer, without having to use a separate callback function (see the sketch at the end of this answer). This is very useful for debugging or tweaking convergence if the optimizer won't converge or takes too long, but also for visualizing or otherwise investigating the objective function.
The same ability might actually be really cool for things like constructing a response surface model from the results of previous function evaluations. That could be used to establish a starting guess in case the expensive function is some numerical method that benefits from a good starting point.
Both approaches allow the use of "cheap" constraints which don't require the expensive function to be evaluated, by simply providing them as separate functions. Not sure whether that would help much with compute times, though. I suppose that would depend on the algorithm used by the optimizer.
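As an illustration of the record-keeping idea mentioned above, here is a hedged sketch extending Joni's MemoizeData with a history list; the subclass and the history attribute are my own invention, not part of scipy:

class MemoizeDataWithHistory(MemoizeData):
    def __init__(self, obj_fun, exp_fun, constr_fun):
        super().__init__(obj_fun, exp_fun, constr_fun)
        self.history = []  # list of (x, objective value) pairs, in evaluation order

    def __call__(self, x, *args):
        value = super().__call__(x, *args)
        self.history.append((np.asarray(x).copy(), value))
        return value

After minimize() returns, obj.history holds the path the optimizer took and can be plotted or analysed.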

How do I know whether an instance is stored on GPU with PyTorch?

I've been learning PyTorch recently, and this question came up.
For example, suppose I have a net inheriting from torch.nn.Module:
import torch

class Net(torch.nn.Module):
    def __init__(self, something):
        super(Net, self).__init__()
        self.p1 = something

    def forward(self, x):
        pass

net1 = Net(123)
net1.cuda()  # Here I can't see what is changed.
Then how can I know whether net1 (and that something) is stored on the GPU?
I've read how .cuda() works; it seems to make all the "children" run .cuda() as well. I tried to see what the "children" are, and it seems the net1 above has no children.
To check a simple tensor, you can check the is_cuda attribute. For example:
x = torch.zeros(100).cuda()
y = torch.zeros(100)
print(x.is_cuda) # True
print(y.is_cuda) # False
To check a model, I think the easiest way is to use the parameters() method, which returns all the trainable parameters of your model:
next(model.parameters()).is_cuda
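This checks only the first parameter, which is usually enough because .cuda() moves the whole module. Note that the Net in the question registers no parameters (self.p1 is a plain int), so its parameters() iterator is empty; the small hedged example below therefore uses a plain nn.Linear as a stand-in and checks every parameter and buffer:

import torch

model = torch.nn.Linear(10, 2).cuda()

# True only if every parameter and buffer lives on the GPU
on_gpu = all(p.is_cuda for p in model.parameters()) and \
         all(b.is_cuda for b in model.buffers())
print(on_gpu)  # True, assuming a CUDA device is available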

Tensorflow Merge Datasets Alternatively

So I am writing a GAN in tensorflow, and need the discriminator and generator to be objects. Now I am having problems with creating the training dataset for the discriminator.
Currently the relevant part of my code looks like this:
self.dataset=tf.data.Dataset.from_tensor_slices((self.y_,self.x_)) #creates dataset
self.fake_dataset=tf.data.Dataset.from_tensor_slices((self.x_fake_)) #creates dataset
self.dataset=self.dataset.shuffle(buffer_size=BUFFER_SIZE) #shuffles
self.fake_dataset=self.fake_dataset.shuffle(buffer_size=BUFFER_SIZE) #shuffles
self.dataset=self.dataset.repeat().batch(self.batch_size) #batches
self.fake_dataset=self.fake_dataset.repeat().batch(self.batch_size) #batches
self.iterator=tf.data.Iterator.from_structure(self.dataset.output_types,self.dataset.output_shapes) #creates iterators
self.fake_iterator=tf.data.Iterator.from_structure(self.fake_dataset.output_types,self.fake_dataset.output_shapes) #creates iterators
self.x=self.iterator.get_next()
self.x_fake=self.fake_iterator.get_next()
self.dataset_init_op = self.iterator.make_initializer(self.dataset,name=self.name+'_dataset_init')
self.fake_dataset_init_op=self.fake_iterator.make_initializer(self.fake_dataset,name=self.name+'_dataset_init')
What I need is for the function to alternately give one batch of self.x, followed by one batch of self.x_fake.
Is there an easy way to do this, or will I have to resort to a counter and an if statement?
Not sure if I'm understanding exactly what you need, but if you want to use the different iterators alternately, note that the get_next() call is defined at graph construction time, so you can use plain Python logic to choose which iterator to draw from. For example:
def __init__(self):
    # Make graph and iterators...
    self._use_fake_batch = False

def next_batch(self):
    it = self.fake_iterator if self._use_fake_batch else self.iterator
    self._use_fake_batch = not self._use_fake_batch
    return it.get_next()
Or without an additional variable, using itertools:
from itertools import chain, repeat

def __init__(self):
    # Make graph and iterators...
    self._iterators = chain.from_iterable(repeat((self.iterator, self.fake_iterator)))

def next_batch(self):
    return next(self._iterators).get_next()
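A hypothetical usage sketch, assuming disc is an instance of your class with the graph built in TF 1.x style as in the question; successive calls to next_batch() alternate between the two iterators, and the resulting ops only need to be built once before being run in a session:

import tensorflow as tf

x_real = disc.next_batch()  # op drawing from self.iterator
x_fake = disc.next_batch()  # op drawing from self.fake_iterator

with tf.Session() as sess:
    sess.run([disc.dataset_init_op, disc.fake_dataset_init_op])
    real_batch = sess.run(x_real)
    fake_batch = sess.run(x_fake)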
