I have been looking for a version of Stable Diffusion that can run on IPUs. So far I can only find CUDA-based ones, presumably because CUDA hardware is so widely available.
Now I wonder if there is a way to run CUDA-based scripts/trainers/training loops etc. on an IPU — for example, some translation layer in between.
I doubt there is, and I bet that since I cannot find an IPU version I'll have to modify the scripts :(.
There is the Hugging Face Optimum library, which acts as the interoperability layer for running Transformers models on IPUs. You can find Stable Diffusion there.
For other models that are not supported in the library, there's a guide on how you could modify your script to make it IPU-compatible here.
I am doing meta-learning research and am using the MAML optimization provided by learn2learn. However, as one of the baselines, I would like to test a non-meta-learning approach, i.e. traditional training + testing.
Due to Lightning's internal usage of optimizers, it seems difficult to make learn2learn's MAML work in Lightning, so I couldn't use Lightning in my meta-learning setup. For my baseline, however, I would really like to use Lightning, in that it provides many handy functionalities like DeepSpeed or DDP out of the box.
Here is my question: other than setting up two separate folders/repos, how could I mix vanilla PyTorch (learn2learn) with PyTorch Lightning (baseline)? What is the best practice?
Thanks!
I decided to answer my own question. I ended up using Lightning's manual optimization so that I can customize the optimization step. This way both approaches use the same framework, which I think is better than maintaining two separate repos.
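What manual optimization buys you here is the freedom to run a custom inner loop inside a training step. As a minimal sketch (plain PyTorch, illustrative names, no learn2learn), the core MAML-style inner/outer step you would place inside a Lightning `training_step` with `automatic_optimization = False` looks like this:

```python
import torch

# MAML-style inner/outer step on a toy linear model (illustrative only).
# In Lightning you would set self.automatic_optimization = False, put this
# logic in training_step, and call self.manual_backward / opt.step yourself.

def inner_adapt(w, x, y, inner_lr=0.1):
    """One inner-loop SGD step; create_graph=True keeps it differentiable
    so the outer loss can backpropagate through the adaptation."""
    loss = ((x @ w - y) ** 2).mean()
    (grad,) = torch.autograd.grad(loss, w, create_graph=True)
    return w - inner_lr * grad

torch.manual_seed(0)
w = torch.randn(3, 1, requires_grad=True)        # meta-parameters
opt = torch.optim.SGD([w], lr=0.01)              # outer-loop optimizer

x_support, y_support = torch.randn(8, 3), torch.randn(8, 1)
x_query, y_query = torch.randn(8, 3), torch.randn(8, 1)

opt.zero_grad()
w_adapted = inner_adapt(w, x_support, y_support)             # inner loop
outer_loss = ((x_query @ w_adapted - y_query) ** 2).mean()   # query loss
outer_loss.backward()              # gradients flow through the inner step
opt.step()                         # outer (meta) update
```

The baseline model can then share the same LightningModule skeleton and simply keep automatic optimization on.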
I wanted to see how the conv1d module is implemented: https://pytorch.org/docs/stable/_modules/torch/nn/modules/conv.html#Conv1d. So I looked at functional.py but still couldn't find the looping and cross-correlation computation.
Then I searched GitHub for the keyword 'conv1d' and checked conv.cpp (https://github.com/pytorch/pytorch/blob/eb5d28ecefb9d78d4fff5fac099e70e5eb3fbe2e/torch/csrc/api/src/nn/modules/conv.cpp), but still couldn't locate where the computation is happening.
My question is two-fold.
Where is the source code in which conv1d is implemented?
In general, if I want to check how a module is implemented, where is the best place to look? Any pointer to the documentation will be appreciated. Thank you.
It depends on the backend (GPU, CPU, distributed, etc.), but in the most interesting case, GPU, it's pulled from cuDNN, which is released in binary format, so you can't inspect its source code. It's a similar story for MKL-DNN on CPU. I am not aware of any place where PyTorch would "handroll" its own convolution kernels, but I may be wrong. EDIT: indeed, I was wrong, as pointed out in an answer below.
It's difficult without knowing how PyTorch is structured. A lot of code is actually being autogenerated based on various markup files, as explained here. Figuring this out requires a lot of jumping around. For instance, the conv.cpp file you're linking uses torch::conv1d, which is defined here and uses at::convolution which in turn uses at::_convolution, which dispatches to multiple variants, for instance at::cudnn_convolution. at::cudnn_convolution is, I believe, created here via a markup file and just plugs in directly to cuDNN implementation (though I cannot pinpoint the exact point in code when that happens).
Below is an answer that I got from pytorch discussion board:
I believe the "handroll"-ed convolution is defined here: https://github.com/pytorch/pytorch/blob/master/aten/src/THNN/generic/SpatialConvolutionMM.c
The NN module implementations are here: https://github.com/pytorch/pytorch/tree/master/aten/src
The GPU version is in THCUNN and the CPU version in THNN
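Whichever backend ends up doing the work, the operation they all implement is a cross-correlation (no kernel flip). A quick sanity check against a hand-rolled loop makes the semantics concrete:

```python
import torch
import torch.nn.functional as F

# conv1d computes a cross-correlation: slide the kernel over the input
# and take dot products at each offset (no kernel flip, unlike a true
# mathematical convolution).
torch.manual_seed(0)
x = torch.randn(1, 1, 8)   # (batch, in_channels, length)
k = torch.randn(1, 1, 3)   # (out_channels, in_channels, kernel_size)

# Hand-rolled: output length = 8 - 3 + 1 = 6
manual = torch.stack([(x[0, 0, i:i + 3] * k[0, 0]).sum() for i in range(6)])
builtin = F.conv1d(x, k)[0, 0]

print(torch.allclose(manual, builtin, atol=1e-5))  # True
```

The dispatched backend kernels (cuDNN, MKL-DNN, the THNN fallback) are all heavily optimized versions of exactly this loop.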
I am trying to run some simulations in Python for a social network in which agents play a coalition bargaining game. Which package is the most suitable for my needs? Are there examples that I could use when constructing my own code?
The documentation for Mesa is a good place to start. Also, their GitHub has a solid number of examples that you can pull from. I have found that the developers of Mesa are super responsive on their GitHub issues as well (almost always responding within a matter of hours), so that has been helpful to me as I've found things that needed fixing in the tutorials.
I have also found it helpful to go off some of the example models included with NetLogo when you install it (see https://ccl.northwestern.edu/netlogo/models/). It is not in Python, of course, but it is helpful to see how they set things up, and it is relatively easy to implement their ideas in Python with Mesa.
As for which package would be most suitable, I think it depends on how large a simulation you are hoping to run. Mesa has been good for small-to-medium-scale simulations, but if you are hoping to run something huge, you may need to look elsewhere.
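Before committing to a framework, it can help to see how small the core of an agent-based model really is. A pure-Python sketch (no Mesa; the agents and the "bargaining" rule are purely illustrative) of agents repeatedly pairing up to split a surplus:

```python
import random

# Toy agent-based loop: agents hold a running payoff and each round pair
# up to split a fixed surplus at a random ratio -- a crude stand-in for a
# coalition bargaining step. Frameworks like Mesa layer scheduling, data
# collection, and visualization on top of exactly this kind of loop.

class Agent:
    def __init__(self, agent_id):
        self.id = agent_id
        self.payoff = 0.0

def step(agents, surplus=1.0, rng=random):
    """One round: shuffle agents, pair neighbours, split the surplus."""
    rng.shuffle(agents)
    for a, b in zip(agents[::2], agents[1::2]):
        share = rng.random()
        a.payoff += share * surplus
        b.payoff += (1 - share) * surplus

random.seed(0)
agents = [Agent(i) for i in range(10)]
for _ in range(100):
    step(agents)

total = sum(a.payoff for a in agents)
print(round(total))  # 5 pairs * 1.0 surplus * 100 rounds = 500
```

If a loop like this is all you need, plain Python may suffice; Mesa earns its keep once you want schedulers, grids/networks, and built-in data collectors.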
Update:
After some extra searching, I think I am misusing scikit-learn. If I want production ML tooling, I should use something like Mahout, which is built on Hadoop. scikit-learn is more of a toy tool for experimenting with ideas.
I am new to scikit-learn. I am trying to use scikit-learn to train a model, and I want to experiment with different feature combinations and data pre-processing techniques. Each experiment takes a few hours (to minimize error, I run every experiment 10 times with different train-test splits), so I wrote some Python scripts to run the experiments one by one automatically; when an experiment is done, it sends me an email.
It works well. Today I found another server that is available to run my experiments, so it seems reasonable to write a script that can run experiments in a distributed fashion. There are big-data platforms like Hadoop, but I find that it is not for Python and scikit-learn (please point it out to me if my understanding of Hadoop is wrong).
Because scikit-learn is a mature library, I think there should be existing libraries with the capabilities I want. Or am I heading in the wrong direction with scikit-learn?
I tried googling "scikit-learn task management", but nothing I wanted turned up. Other keywords to search for are also very welcome.
See "Experimentation frameworks" at http://scikit-learn.org/dev/related_projects.html
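For the single-machine case, the "run each experiment 10 times with different splits" loop parallelizes directly with joblib, which scikit-learn itself uses internally. A sketch (dataset and model are placeholders for your own):

```python
from joblib import Parallel, delayed
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Run 10 repeats of a train/test experiment in parallel. Distributing
# across multiple servers needs something more (e.g. dask.distributed),
# but the per-experiment function stays the same.

X, y = load_iris(return_X_y=True)

def one_run(seed):
    """One experiment: a fresh split, fit, and held-out score."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=seed)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return model.score(X_te, y_te)

scores = Parallel(n_jobs=2)(delayed(one_run)(seed) for seed in range(10))
print(len(scores))  # 10
```

The email notification and feature-combination sweep can wrap this same pattern; each `one_run` just takes more parameters.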
Usually I develop image processing or recognition programs on Windows, but I have a customer who requires me to implement one on a Linux platform.
Because his platform is an embedded system, I don't know for sure whether OpenCV will be available. Could anyone give me some clues on getting started?
You can package OpenCV with your application.
The word 'embedded' makes me nervous - image recognition can be very computationally expensive. You may need to roll your own code to fit the target constraints.
The starting point for your own code is likely a Haar-like recogniser, which is of course what you'd probably be using OpenCV for anyway. A more ambitious recogniser is HOG. Here's a nice comparison of the two.
OpenCV is in the standard repositories for Ubuntu and Debian Linux, so it should run on many processors, including ARM. If the device runs a full Debian, it is a matter of apt-cache search opencv, then installing the modules you want via apt-get install.
The big gotcha is the embedded part. If it doesn't run a full Linux, then you may end up compiling for a very long time. Cross your fingers that it runs a full Linux (like Debian).
AdaBoost should be a good fit as a learning algorithm. Paul Viola and Michael Jones have an interesting paper on efficient face detection using AdaBoost and Haar classifiers. There's a lot of math there, but it's worth reading.