I am trying to code a specific version of a Random Forest and to make both the training and the prediction computations parallel using joblib Parallel.
Suppose that I have written a TreeEstimator with .fit and .predict methods. The .fit method returns the constructed tree and the .predict method returns an array of values. My code then looks like this.
from joblib import Parallel, delayed
import numpy as np

class RandomForest:
    def __init__(self, trees, n_jobs):
        ...

    def fit(self, dataset):
        # Fit every tree in parallel
        self.roots = Parallel(n_jobs=self.n_jobs)(delayed(tree.fit)(dataset)
                                                  for tree in self.trees)

    def predict(self, dataset):
        # Outer parallel loop over the rows of the test dataset
        preds = Parallel(n_jobs=self.n_jobs)(delayed(self.parallel_pred)(row)
                                             for row in dataset)
        return np.array(preds)

    def parallel_pred(self, row):
        # Inner (nested) parallel loop over the trees for a single row
        pred = Parallel(n_jobs=self.n_jobs)(delayed(tree.predict_row)(self.roots[i], row)
                                            for i, tree in enumerate(self.trees))
        return pred
The .fit method works just fine: all my CPUs are used at a good percentage. However, the .predict method seems to run mostly on a single CPU, with a few other CPUs occasionally used at ~2-5% at most. I have also tried a plain for loop over all the rows of the dataset, applying the .parallel_pred method to each row; it does not help, i.e. it still runs mostly on one CPU. I have also tried
Parallel()(delayed(tree_preds)(row) for row in dataset), with the tree_preds function being a for loop over all trees that collects each prediction one by one, but it still runs mostly on one CPU.
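For clarity, that last variant looks roughly like the sketch below (predict_alt is just an illustrative name, not my actual method):
def predict_alt(self, dataset):
    # Single Parallel loop over the rows; each task loops over the trees sequentially
    def tree_preds(row):
        return [tree.predict_row(self.roots[i], row)
                for i, tree in enumerate(self.trees)]
    return np.array(Parallel(n_jobs=self.n_jobs)(delayed(tree_preds)(row)
                                                 for row in dataset))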
To sum up, I want one loop over my test dataset and one loop over my tree estimators, so that for each row of the dataset I get the predictions of all trees, and I would like to do this in a parallelizable way.
I am using Python 3.x on Ubuntu 20.x.
I am working on a binary classification problem. I have ~1.5 million data points, and the dimensionality of the feature space is 1 million. This dataset is stored as a sparse array, with a density of ~0.0001. For this post, I'll limit the scope to assume that the model is a shallow feedforward neural network, and also assume that the dimensionality has already been optimized (so cannot be reduced below 1 million). Naive approaches to creating mini-batches out of this data to feed into the network take a lot of time (as an example, a basic approach of creating a TensorDataset (map style) from a torch.sparse.FloatTensor representation of the input array, and wrapping a DataLoader around it, means ~20s to get a mini-batch of 32 to the network, as opposed to ~0.1s to perform the actual training). I am looking for ways to speed this up.
What I've tried
I first figured that reading from such a large sparse array in every iteration of the DataLoader was computationally intensive, so I broke this sparse array down into smaller sparse arrays.
For the DataLoader to read from these multiple sparse arrays in an iterative fashion, I replaced the map style dataset that I had inside the DataLoader with an IterableDataset, and streamed these smaller sparse arrays into this IterableDataset like so:
from itertools import chain
from scipy import sparse
import torch

class SparseIterDataset(torch.utils.data.IterableDataset):
    def __init__(self, fpaths):
        super().__init__()
        self.fpaths = fpaths

    def read_from_file(self, fpath):
        # Densify one small shard at a time and yield individual samples
        data = sparse.load_npz(fpath).toarray()
        for d in data:
            yield torch.Tensor(d)

    def get_stream(self, fpaths):
        return chain.from_iterable(map(self.read_from_file, fpaths))

    def __iter__(self):
        return self.get_stream(self.fpaths)
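For completeness, I consume it roughly like this (the shard file names here are just placeholders):
from torch.utils.data import DataLoader

fpaths = ['shard_000.npz', 'shard_001.npz']  # placeholder shard files
loader = DataLoader(SparseIterDataset(fpaths), batch_size=32)

for batch in loader:
    ...  # each batch is a dense float tensor of shape (32, n_features)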
With this approach, I was able to bring the time down from the naive base case of ~20s to ~0.2s per mini-batch of 32. However, given that my dataset has ~1.5 million samples, this still implies a lot of time spent even making one pass through the dataset. (As a comparison, even though it's slightly apples to oranges, running a logistic regression in scikit-learn on the original sparse array takes about ~6s per iteration through the whole dataset. With PyTorch, with the approach I just outlined, it would take ~3000s just to load all the mini-batches in an epoch.)
One thing which I am aware of but have yet to try is multi-process data loading by setting the num_workers argument in the DataLoader. I believe this has its own caveats in the case of iterable-style datasets though. Plus, even a 10x speedup would still mean ~300s per epoch spent loading mini-batches. I feel I'm being inordinately slow! Are there any other approaches/improvements/best practices that you could suggest?
Your dataset in un-sparsified form would be 1.5M x 1M x 1 byte = 1.5TB as uint8, or 1.5M x 1M x 4 bytes = 6TB as float32. Simply reading 6TB from memory to the CPU could take 5-10 minutes on a modern machine (depending on the architecture), and transfer speeds from CPU to GPU would be a bit slower than that (an NVIDIA V100 on PCIe has 32GB/s theoretical bandwidth).
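For reference, that back-of-the-envelope arithmetic as a quick sanity check:
n_samples, n_features = 1_500_000, 1_000_000
print(n_samples * n_features * 1 / 1e12)  # ~1.5 TB as uint8
print(n_samples * n_features * 4 / 1e12)  # ~6.0 TB as float32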
Approaches:
Benchmark everything individually, e.g. in Jupyter:
data = sparse.load_npz(fpath)
dense = data.toarray()
%timeit sparse.load_npz(fpath)   # load the sparse array
%timeit data.toarray()           # un-sparsify for comparison
%timeit torch.tensor(dense)      # probably about the same as the line above
Also print out the shapes and datatypes of everything to make sure they are as expected. I haven't tried running your code but I am pretty sure that (a) sparse.load_npz is extremely fast and unlikely to be a bottleneck, but (b) torch.tensor(data) produces a dense tensor and is also quite slow here
Use torch.sparse. I think torch sparse tensors can be used as regular tensors in most cases. You'd have to do some data prep to convert from scipy.sparse to torch.sparse:
A sparse tensor is represented as a pair of dense tensors: a tensor of values and a 2D tensor of indices. A sparse tensor can be constructed by providing these two tensors, as well as the size of the sparse tensor.
You mention torch.sparse.FloatTensor but I'm pretty sure you're not making sparse tensors in your code - there is no reason to expect those would be constructed simply from passing a scipy.sparse array to a regular tensor constructor, since that's not how they're usually made.
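As a rough sketch of what that data prep could look like (not tested against your data; scipy_to_torch_sparse is just an illustrative helper name):
import numpy as np
import torch
from scipy import sparse

def scipy_to_torch_sparse(mat):
    # Convert a scipy.sparse matrix into a torch sparse COO tensor
    coo = mat.tocoo()
    indices = torch.tensor(np.vstack((coo.row, coo.col)), dtype=torch.long)
    values = torch.tensor(coo.data, dtype=torch.float32)
    return torch.sparse_coo_tensor(indices, values, size=coo.shape)

# Toy example with roughly the density from the question
x = sparse.random(1000, 1000, density=0.0001, format='csr')
t = scipy_to_torch_sparse(x)
print(t.shape)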
If you figure out a good way to do this, I recommend posting it as a project or repo on GitHub, it would be quite useful.
If torch.sparse doesn't work out, think of other ways to either convert the data to dense only on the GPU, or avoid converting it entirely.
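For instance, a small sketch of densifying only on the GPU (assumes a CUDA device is available; the toy batch here is made up):
import torch

# Toy batch: 32 rows in a 1M-dimensional space, built directly as a sparse COO tensor
indices = torch.stack([torch.arange(32), torch.randint(0, 1_000_000, (32,))])
values = torch.ones(32)
sparse_batch = torch.sparse_coo_tensor(indices, values, size=(32, 1_000_000))

# Move the small sparse representation to the GPU first, densify there
dense_on_gpu = sparse_batch.to('cuda').to_dense()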
See also:
https://towardsdatascience.com/sparse-matrices-in-pytorch-be8ecaccae6
https://github.com/rusty1s/pytorch_sparse
Suppose there are variables of different shapes:
tvs = model.trainable_variables
In case one wants to apply some custom optimization operations to these variables, each variable should have its own graph:
ops = []
for idx, tv in enumerate(tvs):
    @tf.function
    def op(xs, idx=idx, tv=tv):  # bind the loop variables per iteration
        # Some operations
        tv.assign_add(xs[idx])
    ops.append(op)
and then apply these operations somewhere else (e.g. in a training loop):
@tf.function
def step_ops(xs):
    for op in ops:
        op(xs)
in which step_ops takes a list of tensors (e.g., the gradients of the variables) as input.
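For concreteness, a hypothetical call site could look like this (compute_loss is a placeholder for whatever produces the loss):
with tf.GradientTape() as tape:
    loss = compute_loss()            # placeholder loss computation
grads = tape.gradient(loss, tvs)     # one gradient tensor per trainable variable
step_ops(grads)                      # apply each per-variable op to its gradient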
The question is: will the ops inside ops run in parallel within step_ops?
If not, how can I get them to run in parallel? Or is there a better approach to achieve this?
Many thanks.
I have a script that performs a Gatys-like neural style transfer. It uses a style loss and a total variation loss. I'm using GradientTape() to compute my gradients. The losses that I have implemented seem to work fine, but a new loss that I added isn't being properly accounted for by the GradientTape(). I'm using TensorFlow with eager execution enabled.
I suspect it has something to do with how I compute the loss based on the input variable. The input is a 4D tensor (batch, h, w, channels). At the most basic level, the input is a floating point image, and in order to compute this new loss I need to convert it to a binary image to compute the ratio of one pixel color to another. I don't want to actually change the image like that during every iteration, so I just make a copy of the tensor (in numpy form) and operate on that to compute the loss. I do not understand the limitations of the GradientTape, but I believe it is "losing the thread" of how the input variable is used to get to the loss when it is converted to a numpy array.
Could I make a copy of the image tensor and perform binarizing operations & loss computation using that? Or am I asking tensorflow to do something that it just can not do?
My new loss function:
def compute_loss(self, **kwargs):
    loss = 0
    image = self.model.deprocess_image(kwargs['image'].numpy())
    binarized_image = self.image_decoder.binarize_image(image)
    volume_fraction = self.compute_volume_fraction(binarized_image)
    loss = np.abs(self.volume_fraction_target - volume_fraction)
    return loss
My implementation using the GradientTape:
def compute_grads_and_losses(self, style_transfer_state):
    """
    Computes gradients with respect to input image
    """
    with tf.GradientTape() as tape:
        loss = self.loss_evaluator.compute_total_loss(style_transfer_state)
        total_loss = loss['total_loss']
    return tape.gradient(total_loss, style_transfer_state['image']), loss
An example that I believe might illustrate my confusion. The strangest thing is that my code doesn't have any problem running; it just doesn't seem to minimize the new loss term whatsoever. But this example won't even run due to an attribute error: AttributeError: 'numpy.float64' object has no attribute '_id'.
Example:
import tensorflow.contrib.eager as tfe
import tensorflow as tf

def compute_square_of_value(x):
    a = turn_to_numpy(x['x'])
    return a**2

def turn_to_numpy(arg):
    return arg.numpy()  # just return arg to eliminate the error

tf.enable_eager_execution()
x = tfe.Variable(3.0, dtype=tf.float32)
data_dict = {'x': x}

with tf.GradientTape() as tape:
    tape.watch(x)
    y = compute_square_of_value(data_dict)

dy_dx = tape.gradient(y, x)  # Will compute to 6.0
print(dy_dx)
Edit:
From my current understanding, the issue is that my use of the .numpy() operation makes the GradientTape lose track of the variable it needs to compute the gradient from. My original reason for doing this is that my loss computation requires me to physically change values of the tensor, and I don't want to actually change the values of the tensor that is being optimized, hence the numpy() copy to work on in order to compute the loss properly. Is there any way around this? Or shall I consider my loss calculation impossible to implement because of this constraint of having to perform essentially non-reversible operations on the input tensor?
The first issue here is that GradientTape only traces operations on tf.Tensor objects. When you call tensor.numpy() the operations executed there fall outside the tape.
The second issue is that your first example never calls tape.watch on the image you want to differentiate with respect to.
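To illustrate, a minimal sketch of the toy example rewritten so the tape can trace it (same TF 1.x eager setup as in the question):
import tensorflow as tf
import tensorflow.contrib.eager as tfe

tf.enable_eager_execution()
x = tfe.Variable(3.0, dtype=tf.float32)

with tf.GradientTape() as tape:
    # Stay in TensorFlow ops the whole way; no .numpy() round-trip.
    # Variables are watched automatically, so tape.watch(x) is not required here.
    y = tf.square(x)

print(tape.gradient(y, x))  # tf.Tensor(6.0, shape=(), dtype=float32)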
I am using multiple datasets. I have multiple losses, each of which must be evaluated on a subset of these datasets. I want to generate a batch from each dataset, and evaluate each loss on all of its appropriate batches. Some of the losses are pairwise (need to load pairs of corresponding datapoints) whereas others are computed on single datapoints. I need to design this in such a way that is open to easily adding new datasets. Is there any pytorch builtin that would help with this? What is the best way to design this in pytorch? Thanks in advance.
It's not clear from your question what exactly your setting is.
However, you can have multiple Dataset instances, one for each of your datasets.
On top of your datasets, you can implement a "tagged dataset", a dataset that adds a "tag" for all samples:
class TaggedDataset(data.Dataset):
    def __init__(self, dataset, tag):
        super(TaggedDataset, self).__init__()
        self.ds_ = dataset
        self.tag_ = tag

    def __len__(self):
        return len(self.ds_)

    def __getitem__(self, index):
        return self.ds_[index], self.tag_
Give a different tag to each dataset, concat all of them into a single ConcatDataset, and wrap a regular DataLoader around it.
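For illustration, with two hypothetical datasets ds_a and ds_b that each return (input, label) pairs, the wiring could look roughly like this:
from torch.utils.data import ConcatDataset, DataLoader

tagged = ConcatDataset([TaggedDataset(ds_a, tag=0),
                        TaggedDataset(ds_b, tag=1)])
my_tagged_loader = DataLoader(tagged, batch_size=32, shuffle=True)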
Now, in your training code
for (input, label), tag in my_tagged_loader:
    # process each input according to the dataset tag it got
I am attempting to implement a Lambda layer that will produce a custom loss function. In the layer, I need to be able to compare every element in a batch to every other element in the batch in order to calculate the cost. Ideally, I want code that looks something like this:
for el_1 in zip(y_pred, y_true):
    for el_2 in zip(y_pred, y_true):
        if el_1[1] == el_2[1]:
            ...  # Perform a calculation
        else:
            ...  # Perform a different calculation
When I try this, I get:
TypeError: TensorType does not support iteration.
I am using Keras version 2.0.2 with a Theano version 0.9.0 backend. I understand that I need to use Keras tensor functions in order to do this, but I can't figure out any tensor functions that do what I want.
Also, I am having difficulty understanding precisely what my Lambda function should return. Is it a tensor of the total cost for each sample, or is it just a total cost for the batch?
I have been beating my head against this for days. Any help is deeply appreciated.
A tensor in Keras commonly has at least 2 dimensions, the batch dimension and the neuron/unit/node/... dimension. A dense layer with 128 units trained with a batch size of 64 would therefore yield a tensor with shape (64, 128).
Your Lambda layer processes tensors like any other layer does; plugging it in after your dense layer from before will give you a tensor with shape (64, 128) to process. Processing a tensor works similarly to calculations on numpy arrays (or any other vector processing library, really): you specify one operation that is broadcast over all elements in the data structure.
For example, if your custom cost is the difference between the two inputs for each value in the batch, you would implement it like so:
from keras.layers import Lambda

# The layer is called on a list of two tensors, so the function receives that list
cost_layer = Lambda(lambda x: x[0] - x[1])
The - operation is broadcast over the two inputs and will return a suitable result provided the dimensions match. The takeaway is that you really can only specify one operation for every value. If you want to do something more complex, for example a computation that depends on the values themselves, you need a single operation that takes two sub-expressions and applies the correct one element-wise, i.e. the switch operation.
The syntax for K.switch is
K.switch(condition, then_expression, else_expression)
For example, if you want to subtract both values when a != b but add them when they are equal, you would write:
import keras.backend as K
from keras.layers import Lambda

cost_layer = Lambda(lambda x: K.switch(K.not_equal(x[0], x[1]),
                                       x[0] - x[1],
                                       x[0] + x[1]))
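For reference, a hedged usage sketch, assuming y_pred and y_true are two same-shaped tensors available in your graph:
# The Lambda layer receives its two inputs as a list
element_wise_cost = cost_layer([y_pred, y_true])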