Converting a PyTorch module to a networkx graph?

What's the simplest way to convert a PyTorch module (e.g. an RNN, LSTM) into a Python networkx graph?
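One possible approach, assuming a recent PyTorch version: torch.fx can trace a module into a graph IR whose nodes map naturally onto a networkx DiGraph. Recurrent modules such as nn.LSTM show up as single call_module nodes rather than being unrolled. A minimal sketch with a toy feed-forward module (TinyNet is an illustrative name, not from the question):

```python
import networkx as nx
import torch
import torch.nn as nn
from torch.fx import symbolic_trace

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(4, 8)
        self.fc2 = nn.Linear(8, 2)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

# Trace the module into a torch.fx GraphModule.
traced = symbolic_trace(TinyNet())

# Rebuild the fx graph as a networkx DiGraph: one node per fx node,
# one edge per data dependency.
G = nx.DiGraph()
for node in traced.graph.nodes:
    G.add_node(node.name, op=node.op, target=str(node.target))
    for inp in node.all_input_nodes:
        G.add_edge(inp.name, node.name)

print(G.nodes(data=True))
```

From there, all of networkx's analysis tools (topological sort, shortest paths, drawing) apply to the model graph.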

Related

Setting up onnx to parse an onnx graph in C++

I'm trying to load an onnx file and print all the tensor dimensions in the graph (which requires running shape inference). I can do this in Python simply with from onnx import shape_inference. Is there any documentation on setting up onnx for use in a C++ program?
If anyone is looking for a solution, I've created a project here which uses libonnx.so to perform shape inference.

Python 3: Geopandas dataframe with CRS coordinates into Graph to find connected components and other graph properties?

I have a geopandas dataframe and I would like to use some graph theory package to find graph properties such as connected components.
How can I find graph-theoretic properties conveniently from a Geopandas dataframe?
You can use pysal to generate a spatial weights matrix (which is internally a graph) - http://pysal.org/notebooks/lib/libpysal/weights.html. All weights classes have a from_dataframe option.
The spatial weights can then be exported to a networkx Graph object for further graph-based analysis.
import libpysal
import geopandas
df = geopandas.read_file(geopandas.datasets.get_path('naturalearth_lowres'))
W = libpysal.weights.Queen.from_dataframe(df) # generate spatial weights
G = W.to_networkx() # get networkx.Graph
Notice that for some things (like components), you can use the weights object directly - see the attributes in the docs: https://pysal.org/libpysal/generated/libpysal.weights.W.html#libpysal.weights.W.
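Once you have the networkx Graph, connected components (the property asked about) come straight from networkx. A toy sketch on a hand-built graph, standing in for the exported W.to_networkx() result:

```python
import networkx as nx

# Two components: {0, 1, 2} and {3, 4}.
G = nx.Graph([(0, 1), (1, 2), (3, 4)])

comps = list(nx.connected_components(G))
print(len(comps))           # number of components
print(sorted(map(sorted, comps)))
```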

How to create a dataset for a CNN from MRI NIfTI files?

I have data in NIfTI format: 3 axial images of an animal's brain. How do I create a dataset for training a convolutional neural network to segment brain regions, using Python 3?
You can use the nibabel library to load NIfTI files with nibabel.load(path). From each loaded image you can get a numpy array, and then combine all the arrays into a dataset, either kept as numpy arrays or converted to HDF5 (via h5py) as you choose.

sklearn: Regression models on sparse data?

Does python's scikit-learn have any regression models that work well with sparse data?
I was poking around and found this "sparse linear regression" module, but it seems outdated. (It's so old that scikit-learn was still called 'scikits.learn' at the time, I think.)
Most scikit-learn regression models, linear ones such as Ridge, Lasso and ElasticNet as well as non-linear ones such as RandomForestRegressor, support both dense and sparse input data in recent versions of scikit-learn (0.16.0 is the latest stable version at the time of writing).
Edit: if you are unsure, check the docstring of the fit method of the class of interest.
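For instance, Ridge accepts a scipy.sparse matrix directly, with no densification needed; a quick sketch:

```python
import numpy as np
from scipy import sparse
from sklearn.linear_model import Ridge

# A random sparse design matrix in CSR format (10% non-zero entries).
X = sparse.random(100, 20, density=0.1, format="csr", random_state=0)
y = np.random.default_rng(0).normal(size=100)

model = Ridge().fit(X, y)   # fit runs on the sparse matrix as-is
pred = model.predict(X)
```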

Convert scikit-learn SVM model to LibSVM

I have trained an SVM (SVC) using scikit-learn on over half a terabyte of data. The model works fine, and I need to port it to C, but I don't want to re-train the SVM from scratch because that takes far too long. Is there a way to easily export the model generated by scikit-learn and import it into LibSVM? Internally scikit-learn uses LibSVM, so in theory it should be possible, but I haven't found anything in the documentation. Any suggestions?
Is there a way to easily export the model generated by scikit-learn and import it into LibSVM?
No. The scikit-learn version of LIBSVM has been hacked up severely to fit it into the Python environment and the model is stored as NumPy/SciPy data structures.
Your best shot is to study the SVM decision function and reimplement it in C. The support vectors can be obtained from the SVC object as NumPy arrays, which are easily translated to C arrays.
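To make that concrete: for an RBF-kernel binary SVC, the decision function is sum_i dual_coef[i] * exp(-gamma * ||sv_i - x||^2) + intercept, and every quantity in it is exposed on the fitted estimator. A sketch that reimplements it in pure NumPy and can be checked against scikit-learn before translating to C (gamma is set explicitly here so the sketch doesn't depend on any private attribute):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=0)
clf = SVC(kernel="rbf", gamma=0.1).fit(X, y)

def rbf_decision(clf, x):
    """Recompute the SVC decision value for one sample from its stored arrays."""
    # RBF kernel between every support vector and x.
    k = np.exp(-clf.gamma * np.sum((clf.support_vectors_ - x) ** 2, axis=1))
    # Weighted sum over support vectors plus the bias term.
    return float(clf.dual_coef_[0] @ k + clf.intercept_[0])
```

Once this matches clf.decision_function on held-out samples, clf.support_vectors_, clf.dual_coef_ and clf.intercept_ can be dumped and translated to plain C arrays, and the loop above becomes a handful of lines of C.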
