When is it advantageous to use Theano's scan function - theano

I've been reading the Theano documentation on scan and find myself confused by two seemingly contradictory statements.
On http://deeplearning.net/software/theano/tutorial/loop.html#scan, one of the advantages of scan is listed as:
Slightly faster than using a for loop in Python with a compiled Theano function.
But, on http://deeplearning.net/software/theano/library/scan.html#lib-scan, in a section on optimizing the use of scan, it says:
Scan makes it possible to define simple and compact graphs that can do
the same work as much larger and more complicated graphs. However, it
comes with a significant overhead. As such, **when performance is the
objective, a good rule of thumb is to perform as much of the computation
as possible outside of Scan**. This may have the effect of increasing
memory usage but can also reduce the overhead introduced by using Scan.
My reading of 'performance' here is as a synonym for speed. So I'm left confused as to when, if ever, scan will lead to shorter runtimes once compilation has completed.

If your expression intrinsically needs a for-loop, then you sometimes have two options:
Build the expression using a Python for loop
Build the expression using scan
Option 1 only works if you know the length of your for-loop in advance. It can happen that the length of the loop depends on a symbolic variable that is not available at script-writing time; in that case you need to use scan. That said, you can often formulate the problem either way (note the absence of scan in TensorFlow). A sketch of both options follows below.
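As a minimal sketch of the two options, roughly following the power-of-a-vector example from the scan tutorial (variable names are illustrative):

import theano
import theano.tensor as T

A = T.vector("A")

# Option 1: a Python for loop. Only possible because the number of
# iterations (here 5) is fixed when the graph is built.
n_fixed = 5
out = T.ones_like(A)
for _ in range(n_fixed):
    out = out * A
power_fixed = theano.function([A], out)

# Option 2: scan. Works even when the number of iterations is a
# symbolic variable that is only known at run time.
k = T.iscalar("k")
result, updates = theano.scan(
    fn=lambda prior, A: prior * A,
    outputs_info=T.ones_like(A),
    non_sequences=A,
    n_steps=k,
)
power_sym = theano.function([A, k], result[-1], updates=updates)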
As for time performance, there are a number of results showing that which one is faster really depends on the problem.
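On that note, the documentation's advice quoted above about keeping computation outside of Scan can be illustrated with a small sketch (reusing the imports above; X and W are illustrative symbolic matrices): a per-row dot product written inside scan can usually be replaced by one matrix product built outside of it.

X = T.matrix("X")
W = T.matrix("W")

# Inside scan: one small dot product per row of X (extra overhead per step).
results_inner, _ = theano.scan(
    fn=lambda x_t, W: T.dot(x_t, W),
    sequences=X,
    non_sequences=W,
)

# Outside scan: the same work expressed as a single matrix product.
results_outer = T.dot(X, W)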

Related

How to make a Python "Rust extension" module that behaves exactly like C extensions in terms of call overhead and processing speed?

The closest option I have found is pyo3, but it isn't clear to me whether it adds any extra overhead compared to traditional C extensions.
From here it seems like such C-extension behavior is possible through borrowed objects (I still have to understand this concept in detail).
Part of my question comes from the fact that the build process (section "Python with Rust" here) is entirely managed by cargo, which references both the cpython and pyo3 crates.
For an example of an approach that adds some overhead, though not Rust-based, see this comparison.
A related question is about portability, since it seems there is a kind of overhead-portability tradeoff.
For those who prefer to know the concrete case: it is about small hash-like operations that are used millions of times in unpredictable order, so neither a pure-Python nor a batch native approach is going to help here. Additionally, a first attempt using a C extension already shows gains over pure Python. Now I'd like to implement it in Rust before writing the remaining functions.
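For reference, a sketch like the following could be used to compare the per-call overhead of the pure-Python baseline against a compiled module; myext and fast_hash are hypothetical names for the extension (classic C extension or pyo3) once it is built:

import timeit

def hash_py(x):
    # pure-Python baseline for the small hash-like operation
    return (x * 2654435761) & 0xFFFFFFFF

# 'myext' is a hypothetical compiled extension exposing the same operation;
# uncomment these lines once it has been built.
# import myext

n = 1_000_000
print("pure Python:", timeit.timeit("hash_py(12345)", globals=globals(), number=n))
# print("extension  :", timeit.timeit("myext.fast_hash(12345)", globals=globals(), number=n))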

Multi-thread usage in Dymola slows down solution speed

Does using the multi-core functionality in Dymola 2020x always speed up the solution? My observation with Advanced.ParallelizeCode=true for a model with ~23k degrees of freedom is that compilation time is comparable to the single-threaded case, but the solution time with the default solver is slower.
Any comments are appreciated!
Multi-core functionality of a single model does not always speed up execution.
There are a number of possible explanations:
There are so many dependencies that it isn't possible to parallelize at all. (Look at the translation log; this is fairly clear.)
It's only possible to parallelize a small part of the model. (Look at the translation log; this takes more time to see.)
The model uses many external functions (or FMUs), and by default Dymola treats them as critical sections. (See the release notes and manual on __Dymola_ThreadSafe and __Dymola_CriticalRegion.)
In versions before Dymola 2020x you might have to set the environment variable OMP_WAIT_POLICY=PASSIVE. (Shouldn't be needed in your version.)
Using decouple as described in https://www.claytex.com/tech-blog/decouple-blocks-and-model-parallelisation/ can help for the first two.
Note that an alternative to parallelization within the model is to parallelize a sweep of parameters (if that is your scenario). That is done automatically for parameter sweeps, and without any of these drawbacks.

OpenMDAO version 2.x File Variable Workaround

I'm new to OpenMDAO and started off with the newest version (version 2.3.1 at the time of this post).
I'm working on the setup to a fairly complicated aero-structural optimization using several external codes, specifically NASTRAN and several executables (compiled C++) that post process NASTRAN results.
Ideally I would like to break these down into multiple components to generate my model, run NASTRAN, post process the results, and then extract my objective and constraints from text files. All of my existing interfaces are through text file inputs and outputs. According to the GitHub page, the file variable feature that existed in an old version (v1.7.4) has not yet been implemented in version 2.
https://github.com/OpenMDAO/OpenMDAO
Is there a good workaround for this until the feature is added?
So far the best solution I've come up with is to group everything into one large component that maps input variables to final outputs by running the whole chain, instead of using multiple smaller components that break up the process.
Thanks!
File variables themselves are no longer implemented in OpenMDAO. They caused a lot of headaches and didn't fundamentally offer useful functionality, because they required serializing the whole file into memory and passing it around as string buffers. The whole process was duplicative and inefficient, since the files were ultimately written to and read from disk far more times than necessary.
In your case, since you're setting up an aerostructural problem, you really wouldn't want to use them anyway. You will want access to either analytic or at least semi-analytic total derivatives for efficient execution. What that means is that the boundary of each component must be composed of only floating-point variables or arrays of floating-point variables.
What you want to do is wrap your analysis tools using ExternalCodeImplicitComp, which tells OpenMDAO that the underlying analysis is actually implicit. Then, even if you use finite differences to compute the partial derivatives, you only need to FD across the residual evaluation. For NASTRAN this might be a bit tricky to set up, since I don't know if it directly exposes the residual evaluation, but if you can get to the stiffness matrix then you should be able to compute it. You'll be rewarded for your efforts with greatly improved efficiency and accuracy.
Inside each wrapper you can use the built-in file-wrapping tools to read through the files that were written and pull out the numerical values, which you then push into the outputs vector. For NASTRAN you might consider using pyNASTRAN, instead of the file-wrapping tools, to save yourself some work.
If you can't expose the residual evaluation, then you can use ExternalCodeComp instead and treat the analysis as if it were explicit. This will make your FD more costly and less accurate, but for linear analyses you should be OK (still not ideal, but better than nothing).
The key idea here is that you're not asking OpenMDAO to pass around file objects. You are wrapping each component with only numerical data at its boundaries. This has the advantage of allowing OpenMDAO's automatic derivatives features to work (even if you use FD to compute the partial derivatives). It also has a secondary advantage that if you (hopefully) graduate to in-memory wrappers for your codes then you won't have to update your models. Only the component's internal code will change.
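As a rough sketch of the explicit variant (the class is named ExternalCodeComp in recent 2.x releases; check your exact version), assuming a hypothetical post-processing executable postproc that reads one text file and writes a single number to another; the ExternalCodeImplicitComp version follows the same pattern but parses residuals as well:

from openmdao.api import ExternalCodeComp

class PostProcComp(ExternalCodeComp):
    def setup(self):
        # only floating-point data crosses the component boundary
        self.add_input('thickness', val=1.0)
        self.add_output('stress', val=0.0)

        self.input_file = 'postproc_in.dat'
        self.output_file = 'postproc_out.dat'
        self.options['external_input_files'] = [self.input_file]
        self.options['external_output_files'] = [self.output_file]
        # 'postproc' is a placeholder for the actual executable
        self.options['command'] = ['postproc', self.input_file, self.output_file]

        # finite-difference partials across the external code
        self.declare_partials(of='stress', wrt='thickness', method='fd')

    def compute(self, inputs, outputs):
        # write the numerical inputs to the text file the code expects
        with open(self.input_file, 'w') as f:
            f.write('%.16f\n' % float(inputs['thickness']))

        # the parent class runs the external command
        super(PostProcComp, self).compute(inputs, outputs)

        # pull the numerical result back out of the output file
        with open(self.output_file, 'r') as f:
            outputs['stress'] = float(f.read())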

Why is devectorization in Julia encouraged?

Seems like writing devectorized code is encouraged in Julia.
There is even a package that tries to do that for you.
My question is why?
First of all, speaking from the user-experience standpoint, vectorized code is more concise (less code, hence less likelihood of bugs), clearer (hence easier to debug), and a more natural way of writing code (at least for someone who comes from a scientific computing background, whom Julia tries to cater to). Being able to write something like vector'vector or vector'Matrix*vector is very important, because it corresponds to the actual mathematical representation, and this is how scientific computing people think of it in their heads (not as nested loops). And I hate the fact that this is not the best way to write it, and that reparsing it into loops will be faster.
At the moment it seems like there is a conflict between the goal of writing the code that is fast and the code that is concise/clear.
Secondly, what is the technical reason for this? OK, I understand that vectorized code creates extra temporaries, etc., but vectorized functions (for example, broadcast(), map(), etc.) have the potential to be multithreaded, and I think the benefit of multithreading can outweigh the overhead of temporaries and the other disadvantages of vectorized functions, making them faster than regular for loops.
Do current implementations of vectorized functions in Julia do implicit multithreading under the hood?
If not, is there work / plans to add implicit concurrency to vectorized functions and to make them faster than loops?
For easy reading I decided to turn my comment marathon above into an answer.
The core development statement behind Julia is "we are greedy". The core devs want it to do everything, and do it fast. In particular, note that the language is supposed to solve the "two-language problem", and at this stage, it looks like it will accomplish this by the time v1.0 hits.
In the context of your question, this means that everything you are asking about is either already a part of Julia, or planned for v1.0.
In particular, this means that if your programming problem lends itself to vectorized code, then write vectorized code. If it is more natural to use loops, use loops.
By the time v1.0 hits, most vectorized code should be as fast, or faster, than equivalent code in Matlab. In many cases, this development goal has already been achieved, since many vector/matrix operations in Julia are sent to the appropriate BLAS routines by the compiler.
Regarding multi-threading, native multi-threading is currently being implemented for Julia, and I believe an experimental set of routines is already available on the master branch. The relevant issue page is here. Implicit multithreading for some vector/matrix operations is already in theory available in Julia, since Julia calls BLAS. I'm not sure if it is switched on by default.
Be aware, though, that many vectorized operations will still (currently) be much faster in MATLAB, since MathWorks has been writing specialised multi-threaded C libraries for years and calling them under the hood. Once Julia has native multi-threading, I expect Julia to overtake MATLAB, since at that point the entire dev community can scour the standard Julia packages and upgrade them to take advantage of native multi-threading wherever possible.
In contrast, MATLAB does not have native multi-threading, so you are relying on Mathworks to provide specialised multi-threaded routines in the form of underlying C libraries.
You can and should write vector'*matrix*vector (or perhaps dot(vector, matrix*vector) if you prefer a scalar output). For things like matrix multiplication, you're much better off using vectorized notation, because it calls the underlying BLAS libraries which are more heavily optimized than code produced by basically any language/compiler combination.
In other places, as you say you can benefit from devectorization by avoiding temporary intermediates: for example, if x is a vector, the expression
y = exp(x).*x + 5
creates 3 temporary vectors: one for a = exp(x), one for b = a.*x and one for y = b + 5. In contrast,
y = [exp(z)*z+5 for z in x]
creates no temporary intermediates. Since loops and comprehensions in Julia are fast, there is no disadvantage to writing the devectorized version, and in fact it should perform slightly better (especially with performance annotations like @simd, where appropriate).
The arrival of threads may change things (making a vectorized exp faster than a "naive" exp), but in general I'd say you should regard this as an "orthogonal" issue: Julia will likely make multithreading so easy to use that you yourself might write operations using multiple threads, and consequently the vectorized "library" routine still has no advantage over code you might write yourself. In other words, you might use multiple threads but still write devectorized code to avoid those temporaries.
In the longer term, a "sufficiently smart compiler" may avoid temporaries by automatically devectorizing some of these operations, but this is a much harder route, with potential traps for the unwary, than it may seem.
Your statement that "vectorized code is always more concise, and easier to understand" is, however, not true: many times while writing Matlab code, you have to go to extremes to come up with a vectorized way of writing what are actually simple operations when thought of in terms of loops. You can search the mailing lists for countless examples; one that I recall on SO is How to find connected components in a matrix using Julia.

Expression trees vs IL.Emit for runtime code specialization

I recently learned that it is possible to generate C# code at runtime and I would like to put this feature to use. I have code that does some very basic geometric calculations, like computing line-plane intersections, and I think I could gain some performance by generating specialized code for some of the methods, because many of the calculations are performed for the same plane or the same line over and over again.
The problem is that I'm not sure where to begin. From reading a few blog posts and browsing MSDN documentation I've come across two possible strategies for generating code at runtime: Expression trees and IL.Emit. Using expression trees seems much easier because there is no need to learn anything about OpCodes and various other MSIL related intricacies but I'm not sure if expression trees are as fast as manually generated MSIL. So are there any suggestions on which method I should go with?
The performance of both is generally the same, as expression trees are internally traversed and emitted as IL using the same underlying system functions that you would be using yourself. It is theoretically possible to emit more efficient IL using the low-level functions, but I doubt there would be any practically important performance gain. That would depend on the task, but I have not come across any practical optimisation of hand-emitted IL compared to IL emitted from expression trees.
I highly suggest getting the tool called ILSpy, which decompiles CLR assemblies. With it you can look at the code that actually traverses expression trees and emits IL.
Finally, a caveat. I have used expression trees in a language parser, where function calls are bound to grammar rules that are compiled from a file at runtime. Compiled is the key word here. For many problems I came across, when what you want to achieve is known at compile time, you would not gain much performance from runtime code generation. Some CLR JIT optimizations might also be unavailable to dynamic code. This is only an opinion from my own practice, and your domain will be different, but if performance is critical, I would rather look at native, highly optimized libraries. Some of the work I have done would be snail-slow without LAPACK/MKL. But that is only a piece of advice you didn't ask for, so take it with a grain of salt.
If I were in your situation, I would try alternatives from high level to low level, in order of increasing time and effort and decreasing reusability, and I would stop as soon as the performance is good enough for the time being, i.e.:
first, I'd check whether Math.NET, LAPACK or some similar numeric library already has similar functionality, or whether I can adapt/extend the code to my needs;
second, I'd try Expression Trees;
third, I'd check Roslyn Project (even though it is in prerelease version);
fourth, I'd think about writing common routines with unsafe C code;
[fifth, I'd think about quitting and starting a new career in a different profession :) ],
and only if none of these work out, would I be so hopeless to try emitting IL at run time.
But perhaps I'm biased against low level approaches; your expertise, experience and point of view might be different.

Resources