Using PyTorch for scientific computation - pytorch

I would like to use PyTorch as a scientific computation package. It has much to recommend it in that respect - its Tensors are basically GPU-accelerated numpy arrays, and its autograd mechanism is potentially useful for a lot of things besides neural networks.
However, the available tutorials and documentation seem strongly geared towards quickly getting people up and running using it for machine learning. Although there is lots of good information available on the Tensor and Variable classes (and I understand that material reasonably well), the nn and optim packages always seem to be introduced by example rather than by explaining the API, which makes it hard to figure out exactly what's going on.
My main question at this point is whether I can use the optim package without also using the nn package, and if so how to do so. Of course I can always implement my simulations as subclasses of nn.Module even though they are not neural networks, but I would like to understand what happens under the hood when I do this, and what benefits/drawbacks it would give for my particular application.
More broadly, I would appreciate pointers to any resource that gives more of a logical overview of the API (for nn and optim specifically), rather than just presenting examples.

This is a partial self-answer to the specific question about using optim without using nn. The answer is, yes, you can do that. In fact, from looking at the source code, the optim package doesn't know anything about nn and only cares about Variables and tensors.
The documentation gives the following incomplete example:
optimizer = optim.Adam([var1, var2], lr = 0.0001)
and then later:
for input, target in dataset:
    optimizer.zero_grad()
    output = model(input)
    loss = loss_fn(output, target)
    loss.backward()
    optimizer.step()
The function model isn't defined anywhere and looks like it might be something to do with nn, but in fact it can just be a Python function that computes output from input using var1 and var2 as parameters, as long as all the intermediate steps are done using Variables so that it can be differentiated. The call to optimizer.step() will update the values of var1 and var2 automatically.
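As a concrete illustration, here is a minimal sketch of that pattern (the function and tensor names below are mine, not from the docs; on recent PyTorch versions a tensor created with requires_grad=True plays the role of a Variable):

import torch

# two parameters to be optimized; requires_grad=True makes them leaf Variables
var1 = torch.randn(3, requires_grad=True)
var2 = torch.randn(3, requires_grad=True)

optimizer = torch.optim.Adam([var1, var2], lr=0.0001)

def model(x):
    # any differentiable computation built from torch operations will do
    return var1 * x + var2

def loss_fn(output, target):
    return ((output - target) ** 2).mean()

# a toy "dataset": any iterable of (input, target) pairs works
dataset = [(torch.randn(3), torch.randn(3)) for _ in range(10)]

for input, target in dataset:
    optimizer.zero_grad()
    output = model(input)
    loss = loss_fn(output, target)
    loss.backward()      # fills in var1.grad and var2.grad
    optimizer.step()     # updates var1 and var2 in place

No nn.Module is involved anywhere; optim only needs the list of parameters it is asked to update.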
In terms of the structure of PyTorch overall, it seems that optim and nn are independent of one another, with nn being basically just a convenient way to chain differentiable functions together, along with a library of such functions that are useful in machine learning. I would still appreciate pointers to a good technical overview of the whole package, though.

Related

How does PyTorch implement forward for a quantized linear layer?

I have a quantized model in PyTorch, and now I want to extract the parameters of the quantized linear layer and implement the forward pass manually.
I searched the source code but only found this function:
def forward(self, x: torch.Tensor) -> torch.Tensor:
    return torch.ops.quantized.linear(
        x, self._packed_params._packed_params, self.scale, self.zero_point)
But nowhere can I find how torch.ops.quantized.linear is defined.
Can someone give me a hint as to how the forward of the quantized linear layer is defined?
In answer to the question of where torch.ops.quantized.linear is defined: I was looking for the same thing but was never able to find it. I believe it's probably somewhere in ATen (the C++ backend). I did, however, find some useful PyTorch-based implementations in the NVIDIA TensorRT repo below. It's quite possible these are the ones actually called by PyTorch via some DLLs. If you're trying to add quantization to a custom layer, these implementations walk you through it.
You can find the docs here and the GitHub page here.
For the linear layer specifically, see the QuantLinear layer here
Under the hood, this calls TensorQuantFunction.apply() for post-training quantization or FakeTensorQuantFunction.apply() for quantization-aware training.
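Separately, if all you need is to reproduce the output of the quantized forward pass numerically (rather than the fused integer kernel itself), a rough sketch, assuming a torch.nn.quantized.Linear module and a quantized input tensor, is to dequantize, run a float linear, and re-quantize with the layer's output scale and zero point:

import torch

def manual_quantized_linear(qx, qlinear):
    # qx: quantized input tensor, qlinear: torch.nn.quantized.Linear module
    x = qx.dequantize()                   # float32 input
    w = qlinear.weight().dequantize()     # unpack and dequantize the weight
    b = qlinear.bias()                    # bias is kept in float32
    out = torch.nn.functional.linear(x, w, b)
    # re-quantize with the layer's output scale / zero_point
    return torch.quantize_per_tensor(out, qlinear.scale, qlinear.zero_point,
                                     torch.quint8)

This only approximates what the packed integer kernel does (it skips the integer-only arithmetic), but it follows the same scale/zero-point bookkeeping as the forward shown above.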

Custom Loss Function with Spacy Textcat

I've been looking around for a while now. I would like to know if it's possible to modify/customize the loss function of the spaCy text categorizer.
What I mean is: when you want to distill a model (for instance BERT) and add a regression component to the loss being optimized (so that it targets the probabilities of each class instead of only the labels), I don't understand where I should look. I tried to explore some of the spaCy code, but there is only a function to get the loss.
If someone knows where to look to see the loss function and change it (by writing a subclass, for instance), that would be nice!
spaCy is ultimately built on top of Thinc, so if you want to do custom work you should tinker with Thinc, not spaCy. spaCy typically allows you to initialize a pipe with a raw Thinc model.
This is especially true since spaCy's philosophy is to provide one implementation that works well, not necessarily a highly customizable framework.
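That said, one place to start experimenting is the component's get_loss method. A rough sketch, assuming the spaCy v3 API and an illustrative convention for where the teacher probabilities live (none of the distillation specifics below come from spaCy itself):

import numpy
from spacy.pipeline import TextCategorizer

class SoftLabelTextCategorizer(TextCategorizer):
    # Hypothetical subclass: regress on teacher probabilities (distillation)
    # instead of the default objective.
    def get_loss(self, examples, scores):
        # assume each reference doc carries teacher probabilities in user_data;
        # this key is an illustrative convention, not part of spaCy's API
        truths = self.model.ops.asarray(
            numpy.asarray([eg.reference.user_data["teacher_probs"]
                           for eg in examples], dtype="float32"))
        d_scores = scores - truths           # gradient of a squared-error loss
        loss = float((d_scores ** 2).sum())
        return loss, d_scores

You would then register the subclass as its own pipeline factory and train as usual; anything deeper, such as changing the model architecture itself, is where Thinc comes in.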

XGBoost classifier

I am new to XGBoost and I am currently working on a project where we have built an XGBoost classifier. Now we want to run some feature selection techniques. Is the backward elimination method a good idea for this? I have used it in regression, but I am not sure if/how to use it in a classification problem. Any leads will be greatly appreciated.
Note: I have already tried permutation importance and it has yielded good results! Looking for another method to evaluate the features in the model.
Consider asking your question on Cross Validated since feature selection is more about theory/practice than code.
What is your concern? Removing "noisy" features that drive down your results, or obtaining a sparse model? Backward selection is one way to do this, of course. That being said, in case you are not aware of it, XGBoost computes its own "variable importance" values, for example:
# plot feature importance using the built-in function
from xgboost import XGBClassifier
from xgboost import plot_importance
from matplotlib import pyplot

# X, y: your training features and labels
model = XGBClassifier()
model.fit(X, y)

# plot feature importance
plot_importance(model)
pyplot.show()
Something like this. This importance is based on how many times a feature is used to make a split. You can then define, for instance, a threshold below which you do not keep the variables (see the sketch below). However, do not forget that:
this variable importance has been obtained on the training data only;
the removal of a variable with high importance may not affect your prediction error, e.g. if it is correlated with another highly important variable. Other subtleties of this kind may exist.
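A minimal sketch of the threshold idea, assuming scikit-learn is available (the threshold value is arbitrary and would need tuning):

from sklearn.feature_selection import SelectFromModel

# keep only features whose importance in the fitted model exceeds the threshold
selector = SelectFromModel(model, threshold=0.01, prefit=True)
X_reduced = selector.transform(X)

# refit on the reduced feature set and compare validation performance
model_reduced = XGBClassifier()
model_reduced.fit(X_reduced, y)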

I want to customise the last layer of the VGG-19 architecture for classification. Which will be more useful, Keras or PyTorch?

I want to customise the last layer of the VGG-19 architecture for a classification problem. Which will be more useful, Keras or PyTorch?
It heavily depends on what you want to do with it.
While Keras offers different backends, such as TensorFlow or Theano (which in turn can offer you a little more flexibility), and transfers better to production systems,
PyTorch is definitely also easy to implement. Additionally, it offers great scaling on (multi-)GPU systems, since it is trivial to spread your computations across devices in a PyTorch model. I do not know how easy that is in Keras (I have never done it, so I genuinely cannot judge).
If you just want to play around with one of the frameworks, it usually boils down to personal preference. I personally prefer PyTorch, due to its more "python-esque" approach to things, but I know many people that prefer Keras because of its clear and simple layout and documentation.
Providing a little more information, or your context, can also potentially increase the quality of the answers you receive.
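For what it's worth, the customisation itself is small in either framework. A minimal PyTorch sketch, assuming torchvision's pretrained VGG-19 and a hypothetical number of target classes:

import torch.nn as nn
from torchvision import models

num_classes = 10                           # hypothetical number of target classes
vgg19 = models.vgg19(pretrained=True)

# optionally freeze the convolutional backbone and only train the new head
for p in vgg19.features.parameters():
    p.requires_grad = False

# replace the final 4096 -> 1000 classifier layer with a new one
vgg19.classifier[6] = nn.Linear(vgg19.classifier[6].in_features, num_classes)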

Custom operation implementation for RBM/DBN with tensorflow?

Since Google released TensorFlow, it has become something of a trend in current deep learning work.
I'd like to run some experiments with RBMs/DBNs (Restricted Boltzmann Machines / Deep Belief Networks). I have made an attempt myself and managed to implement them reasonably well by combining the available TensorFlow APIs. See the code and a previous answer.
So, if running performance does not matter, here is a working RBM/DBN implementation with TensorFlow.
But running performance has to be considered for the future. Because of the particular structure of the CD (Contrastive Divergence) algorithm, I think it works against the framework (the data flow graph) used by TensorFlow, which is why my code looks weird.
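For context, here is a rough sketch of what a single CD-1 update built only from standard TF 1.x ops (sigmoid, matmul, random_uniform) can look like; the sizes and learning rate are illustrative, and the updates are applied manually with assign_add rather than through an optimizer:

import tensorflow as tf

n_visible, n_hidden, lr = 784, 256, 0.01       # illustrative sizes
X = tf.placeholder(tf.float32, [None, n_visible])
W = tf.Variable(tf.random_normal([n_visible, n_hidden], stddev=0.01))
hb = tf.Variable(tf.zeros([n_hidden]))
vb = tf.Variable(tf.zeros([n_visible]))

def sample(probs):
    # Bernoulli sampling from activation probabilities
    return tf.cast(probs > tf.random_uniform(tf.shape(probs)), tf.float32)

h0_prob = tf.sigmoid(tf.matmul(X, W) + hb)
h0 = sample(h0_prob)
v1_prob = tf.sigmoid(tf.matmul(h0, tf.transpose(W)) + vb)
h1_prob = tf.sigmoid(tf.matmul(v1_prob, W) + hb)

# manual CD-1 updates: positive phase minus negative phase, averaged over the batch
batch = tf.cast(tf.shape(X)[0], tf.float32)
dW = (tf.matmul(tf.transpose(X), h0_prob)
      - tf.matmul(tf.transpose(v1_prob), h1_prob)) / batch
train_op = [W.assign_add(lr * dW),
            vb.assign_add(lr * tf.reduce_mean(X - v1_prob, 0)),
            hb.assign_add(lr * tf.reduce_mean(h0_prob - h1_prob, 0))]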
So, a custom operation should be implemented for acceleration. I've followed the current documentation about adding custom ops.
REGISTER_OP("NaiveRbm")
.Input("visible: float32")
.Input("weights: float32")
.Input("h_bias: float32")
.Input("v_bias: float32")
.Output("hidden: float32")
.Doc(R"doc(
Naive Rbm for seperate training use. DO NOT mix up with other operations
)doc");
In my design, NaiveRbm should be an operation that takes visible, weights, h_bias, v_bias as inputs, but whose output is computed from only the first three (simply sigmoid(X*W + hb)); its gradient should return gradients for at least the last three Variables.
Imagine example pseudo code like this:
X = tf.placeholder()
W1, hb1, vb1 = tf.Variable()
W2, hb2, vb2 = tf.Variable()

rbm1 = NaiveRbm(X, W1, hb1, vb1)
train_op = tf.train.MomentumOptimizer(0.01, 0.5).minimize(rbm1)

rbm2 = NaiveRbm(tf.stop_gradient(rbm1), W2, hb2, vb2)
train_op2 = tf.train.MomentumOptimizer(0.01, 0.5).minimize(rbm2)

with tf.Session() as sess:
    for batch in batches:
        sess.run(train_op, feed_dict={X: batch})
    for batch in batches:
        sess.run(train_op2, feed_dict={X: batch})
But the TensorFlow library is too complex for me, and after spending a lot of time searching for how to implement the equivalents of the existing operations (sigmoid, matmul, ma_add, relu, random_uniform) inside a custom op, I have not found a solution myself.
So, I'd like to ask whether someone could help me complete the remaining work.
PS: before getting any ideas, I'd like to dive into Theano, since it implements RBMs/DBNs already. In my opinion Caffe is not really suitable for RBMs/DBNs because of its framework.
Update: after working through the Theano tutorials, I found that the key reason Theano has an RBM/DBN implementation while TensorFlow does not is its scan mechanism. So we may have to wait for TensorFlow to provide an equivalent of scan before an RBM/DBN implementation becomes practical.
