How to find built-in function source code in PyTorch

I am trying to do research on batch normalization and had to make some modifications to the PyTorch BN code. I dug into the PyTorch code and got stuck at torch.nn.functional.batch_norm, which references torch.batch_norm.
The problem is that torch.batch_norm cannot be found anywhere else in the torch library. Is there any way I can find the source code of this built-in function and re-implement it? Thanks!

It's there, but it's not defined in Python. It's defined in C++ in the aten/ directories.
For CPU, the implementation (one of them, it depends on whether or not the input is contiguous) is here: https://github.com/pytorch/pytorch/blob/420b37f3c67950ed93cd8aa7a12e673fcfc5567b/aten/src/ATen/native/Normalization.cpp#L61-L126
For CUDA, the implementation is here: https://github.com/pytorch/pytorch/blob/7aae51cdedcbf0df5a7a8bf50a947237ac4b3ee8/aten/src/ATen/native/cudnn/BatchNorm.cpp#L52-L143
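If the goal is to modify BN behavior rather than patch the C++ kernels, it is often easier to experiment with a pure-Python re-implementation first. Below is a minimal sketch (my own reconstruction, not the actual ATen code) of what torch.nn.functional.batch_norm computes:

    import torch

    def batch_norm_ref(x, running_mean, running_var, weight=None, bias=None,
                       training=False, momentum=0.1, eps=1e-5):
        # Normalize over every dim except the channel dim (dim 1).
        reduce_dims = [d for d in range(x.dim()) if d != 1]
        if training:
            mean = x.mean(dim=reduce_dims)
            var = x.var(dim=reduce_dims, unbiased=False)
            with torch.no_grad():
                n = x.numel() / x.size(1)
                running_mean.mul_(1 - momentum).add_(momentum * mean)
                # Running stats track the unbiased variance estimate.
                running_var.mul_(1 - momentum).add_(momentum * var * n / (n - 1))
        else:
            mean, var = running_mean, running_var
        shape = [1, -1] + [1] * (x.dim() - 2)
        out = (x - mean.view(shape)) / torch.sqrt(var.view(shape) + eps)
        if weight is not None:
            out = out * weight.view(shape)
        if bias is not None:
            out = out + bias.view(shape)
        return out

Comparing this against the built-in on random inputs is a quick way to validate a modified variant before touching the C++ kernels.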

Related

Can JAGS (Just Another Gibbs Sampler) deal with ordinary differential equations?

I searched the JAGS manual and one post (here, from 2012) about ordinary differential equations (ODEs). I assumed JAGS could handle them because it's similar to WinBUGS (which does so through the WBdiff interface). However, when I feed JAGS my ODE code, it cannot even recognize the D(y[...], t) expression.
Can JAGS deal with ODEs? Maybe I missed a plug-in for JAGS like WBdiff?
While WinBUGS, OpenBUGS, and JAGS have almost equivalent syntax and feature sets, there are a few differences between them: one of these is that no ODE solver is included as part of a standard JAGS installation.
However, JAGS is extensible using user-specified modules (like the plug-ins you mentioned), which provide new functions/distributions via C++ code that can then be used within JAGS when that module is loaded. It would certainly be possible to implement an ODE solver this way, using e.g. the ODE solvers included in the Boost C++ library. To do so you will need the following:
Familiarity with C++
Instructions for how to build a module for JAGS
I can't help you with the former, so this may be a dead end for you if you have never used C++ before. But there is a tutorial available on how to build a JAGS module: https://pubmed.ncbi.nlm.nih.gov/23959766/ That article shows how to build a standalone module, but if you are happy to accept the limitation of using JAGS from R (as most people do), then it is MUCH easier to build a JAGS module within an R package. You could follow the code in the runjags package as an example: https://cran.r-project.org/package=runjags
If you are thinking of trying to do this yourself then I could potentially help with a few pointers along the way. Of course, it is also possible that someone else has already done this, but if so then I am not aware of it.

'Doc2Vec' object has no attribute 'neg_labels' when trying to use pretrained model

So I'm trying to use a pretrained Doc2Vec model for my semantic search project. I tried this one, https://github.com/jhlau/doc2vec (English Wikipedia DBOW), with the forked version of Gensim (0.12.4) and Python 2.7.
It works fine when I use most_similar, but when I try to use infer_vector I get this error:
AttributeError: 'Doc2Vec' object has no attribute 'neg_labels'
What can I do to make this work?
For the reasons given in this other answer, I'd recommend against using a many-years-old custom fork of Gensim. I also find those particular pre-trained models a little fishy: their file sizes seem too small to actually contain all the purported per-article vectors.
But also: that error resembles a very old bug which only showed up if Gensim was not fully installed with the necessary Cython-optimized routines for fast training/inference operations. (That caused some older, seldom-run code to execute which had a dependency on the missing neg_labels. Newer versions of Gensim have eliminated that slow code path entirely.)
My comment on an old Gensim issue has more details, and a workaround that might help. But really, the much better thing to do for quality results and speedy code is to use a current Gensim and train your own model.
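For example, here is a minimal sketch of that last suggestion: training a fresh Doc2Vec model with a current Gensim release (the toy corpus is purely illustrative):

    from gensim.models.doc2vec import Doc2Vec, TaggedDocument

    # Toy corpus; replace with your own tokenized documents.
    docs = [
        TaggedDocument(words=["semantic", "search", "example"], tags=[0]),
        TaggedDocument(words=["another", "training", "document"], tags=[1]),
    ]

    model = Doc2Vec(vector_size=50, min_count=1, epochs=40)
    model.build_vocab(docs)
    model.train(docs, total_examples=model.corpus_count, epochs=model.epochs)

    # infer_vector works out of the box on a proper, current install.
    vec = model.infer_vector(["semantic", "search", "query"])
    print(model.dv.most_similar([vec]))  # use model.docvecs on Gensim < 4.0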

Pytorch-Forecasting N-Beats model with SELU() activation function?

I am working on time-series forecasting, and I've been using the PyTorch library pytorch-forecasting lately. If you don't know it, try it. It's great.
I am interested in the SELU activation function for Self-Normalizing Networks (SNNs; see, e.g., the docs). As I couldn't find any N-Beats implementation adapted to use SELU and its requirements (i.e., AlphaDropout and proper weight initialization), I made an implementation myself.
It would be great if any of you with experience in these concepts (the N-Beats architecture, pytorch-forecasting, or SELU) could review whether everything is right in my implementation.
My implementation here: https://gist.github.com/pnmartinez/fef1f488497fa85a2cc1626af2a5b4bd
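For reviewers who haven't used SNNs before, the requirements mentioned above boil down to three ingredients. A minimal sketch of one block (illustrative names, not the gist's actual code):

    import torch.nn as nn

    class SNNBlock(nn.Module):
        # One fully connected SNN layer: SELU + AlphaDropout + LeCun init.
        def __init__(self, in_features, out_features, dropout=0.1):
            super().__init__()
            self.linear = nn.Linear(in_features, out_features)
            self.act = nn.SELU()
            # AlphaDropout preserves SELU's self-normalizing property,
            # unlike plain Dropout.
            self.drop = nn.AlphaDropout(p=dropout)
            # LeCun-normal init, i.e. N(0, 1/fan_in): kaiming_normal_
            # with gain 1 (nonlinearity='linear') is equivalent.
            nn.init.kaiming_normal_(self.linear.weight, nonlinearity='linear')
            nn.init.zeros_(self.linear.bias)

        def forward(self, x):
            return self.drop(self.act(self.linear(x)))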

How to find the source code of torch.solve?

I am concerned with whether torch.solve() examines the condition of the coefficient matrix of a linear system and employs desirable preconditioning; thus I am curious about its implementation details. I have read through several answers trying to track down the source file, but in vain. I hope somebody can help me locate its definition in the ATen library.
I think it just uses LAPACK for CPU and CUBLAS for GPU, since torch.solve is listed under "BLAS and LAPACK Operations" in the official docs.
Then we're looking for wrapper code, which I believe is this part.
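If the underlying worry is conditioning: a plain LAPACK-style LU solve applies no preconditioning, but you can check the condition number yourself before solving. A sketch using the newer torch.linalg API (which has since replaced torch.solve):

    import torch

    A = torch.randn(4, 4, dtype=torch.float64)
    b = torch.randn(4, 1, dtype=torch.float64)

    # 2-norm condition number, computed via SVD.
    cond = torch.linalg.cond(A).item()
    if cond > 1e8:
        print(f"Warning: ill-conditioned system (cond ~ {cond:.2e})")

    # Direct LU-based solve (LAPACK gesv-family on CPU).
    x = torch.linalg.solve(A, b)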

Where is "conv1d" implemented in PyTorch?

I wanted to see how the conv1d module is implemented
https://pytorch.org/docs/stable/_modules/torch/nn/modules/conv.html#Conv1d. So I looked at functional.py but still couldn’t find the looping and cross-correlation computation.
Then I searched GitHub by the keyword 'conv1d' and checked conv.cpp (https://github.com/pytorch/pytorch/blob/eb5d28ecefb9d78d4fff5fac099e70e5eb3fbe2e/torch/csrc/api/src/nn/modules/conv.cpp), but still couldn't locate where the computation is happening.
My question is two-fold.
Where is the source code in which "conv1d" is implemented?
In general, if I want to check how the modules are implemented, where is the best place to look? Any pointer to the documentation will be appreciated. Thank you.
It depends on the backend (GPU, CPU, distributed, etc.), but in the most interesting case of GPU it's pulled from cuDNN, which is released in binary format, so you can't inspect its source code. It's a similar story for MKL-DNN on CPU. I am not aware of any place where PyTorch would "handroll" its own convolution kernels, but I may be wrong. EDIT: indeed, I was wrong, as pointed out in an answer below.
It's difficult without knowing how PyTorch is structured. A lot of code is actually autogenerated from various markup files, as explained here. Figuring this out requires a lot of jumping around. For instance, the conv.cpp file you're linking uses torch::conv1d, which is defined here and uses at::convolution, which in turn uses at::_convolution, which dispatches to multiple variants, for instance at::cudnn_convolution. at::cudnn_convolution is, I believe, created here via a markup file and plugs directly into the cuDNN implementation (though I cannot pinpoint the exact point in the code where that happens).
Below is an answer that I got from pytorch discussion board:
I believe the "handrolled" convolution is defined here: https://github.com/pytorch/pytorch/blob/master/aten/src/THNN/generic/SpatialConvolutionMM.c
The NN module implementations are here: https://github.com/pytorch/pytorch/tree/master/aten/src
The GPU version is in THCUNN and the CPU version is in THNN.
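For what it's worth, the computation all of those backends implement is plain cross-correlation. A naive Python sketch (assuming groups=1 and dilation=1; orders of magnitude slower than the real kernels):

    import torch
    import torch.nn.functional as F

    def conv1d_ref(x, weight, bias=None, stride=1, padding=0):
        # x: (N, C_in, L), weight: (C_out, C_in, K)
        C_out, _, K = weight.shape
        x = F.pad(x, (padding, padding))
        L_out = (x.shape[-1] - K) // stride + 1
        out = x.new_zeros(x.shape[0], C_out, L_out)
        for i in range(L_out):
            window = x[:, :, i * stride : i * stride + K]  # (N, C_in, K)
            # Cross-correlation: multiply and sum over C_in and K.
            out[:, :, i] = torch.einsum('nck,ock->no', window, weight)
        if bias is not None:
            out = out + bias.view(1, -1, 1)
        return out

You can check it against torch.nn.functional.conv1d on random inputs with torch.allclose.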
