I am new to the subject of "modeling of physical systems". I have read some basic literature and done some tutorials in Modelica and Simulink/Simscape. I wanted to ask whether I understand the following content correctly:
Symbolic manipulation is the process of transforming a differential-algebraic system of equations (physical model: DAE) into a system of differential equations (ODE) that can be solved by standard solvers (Runge-Kutta, BDF, ...).
There are also solvers that can solve DAEs directly. But Modelica (OpenModelica, Dymola) and Simscape transform the system into an ODE (why is this approach better than direct DAE solvers?).
A "flat Modelica code" is the result (= the ODE) of the transformation.
Thank you very much for your answers.
Symbolic processing for Modelica includes:
remove the object-oriented structure and obtain a hybrid DAE (flat Modelica)
perform matching, index reduction, and causalization to get an ODE
perform optimizations (tearing, common subexpression elimination, etc.)
generate code for a particular solver
OpenModelica can also solve the system in DAE mode without transforming it to ODE and I guess other Modelica tools can also do that.
A "flat Modelica code" is Modelica code where the object orientation is removed, connect equations are expanded to normal equations. The result is a hybrid DAE.
See Modelica Spec 3.3 for more info about all this (for example Appendix C):
https://modelica.org/documents/ModelicaSpec33Revision1.pdf
So I think your understanding of the terminology is very good too.
Due to the declarative (as opposed to imperative) way of programming in Modelica, we immediately get very large numbers of algebraic equations. Solving these (partly) symbolically has, above all, these essential advantages:
Speed. Without eliminating algebraic loops, Modelica would not be practically usable for any real-world problem, and even then only in simple cases do no algebraic equations remain. It would be too slow and would force you to do the transformations manually yourself in Modelica too (as in imperative languages, e.g. in C/C++ or Simulink). Even today Modelica can still be slower than manually transformed and optimized solutions.
Moreover, Modelica applications often need simulations in real-time.
Correctness. Symbolic transformations are based on proofs, and Modelica applications are often in the area of safety-critical or cyber-physical systems.
One additional consideration is that there are different forms of DAEs, and modeling often leads to high-index DAEs that are complicated to solve numerically (*). (Note "high" means index greater than 1, typically 2 - but sometimes even higher.)
Symbolic transformations can reduce high-index DAEs to semi-explicit index 1 DAEs, and then by (numerically) solving the systems of equations they are transformed into ODEs.
Thus even if a tool solves DAEs directly it is normally the semi-explicit index 1 DAEs that are solved, not the original high index DAE.
(I know this answer is late. The hybrid part for the symbolic transformations is more complicated, still working on that.)
For more information see https://en.wikipedia.org/wiki/Differential-algebraic_system_of_equations
(*): There are some solvers for high index DAEs (in particular index 2), but typically they rely on a specific structure of the model and finding that structure requires similar techniques as reducing the index to 1.
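To make the index-reduction idea concrete, here is a small hand-worked sketch in Python (my own illustration, not what any Modelica tool literally generates): the planar pendulum in Cartesian coordinates is an index-3 DAE; differentiating the length constraint twice lets you solve for the multiplier symbolically, and the remaining ODE can be handed to a standard solver. Real tools also add drift stabilization (e.g. dummy derivatives), which this sketch omits.

import numpy as np
from scipy.integrate import solve_ivp

g, L = 9.81, 1.0

def pendulum_ode(t, state):
    # Original index-3 DAE: x'' = -lam*x, y'' = -lam*y - g, x^2 + y^2 = L^2.
    # Differentiating the constraint twice gives lam = (vx^2 + vy^2 - g*y)/L^2,
    # i.e. the algebraic variable has been eliminated symbolically.
    x, y, vx, vy = state
    lam = (vx**2 + vy**2 - g * y) / L**2
    return [vx, vy, -lam * x, -lam * y - g]

# start at rest, slightly off-centre, on the constraint manifold
state0 = [0.1, -np.sqrt(L**2 - 0.1**2), 0.0, 0.0]
sol = solve_ivp(pendulum_ode, (0.0, 10.0), state0, rtol=1e-8, atol=1e-10)
print(sol.y[0, -1], sol.y[1, -1])   # x(10), y(10); constraint drift is not controlled here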
I have a Gurobi licence and I am after a good MILP/LP modelling language, which should be
free/open source
intuitive, i.e. something that looks like (taken from MiniZinc)
var int: x;
constraint x >= 0.5;
solve minimize x;
fast: the time to build the model and send it to Gurobi should be of similar order to the best ones (AMPL GAMS etc.)
flexible/powerful (ability to deal with 3D+ arrays, activate/deactivate constraints easily, provide initial solutions to the solver, etc.)
Of course, and correct me if I'm wrong, AMPL and GAMS fail at 1), and Python and R fail at 2) (and perhaps at 3)?).
How about GLPK, MiniZinc, ZIMPL etc.? They satisfy 1) and 2), but what about 3) and 4)? Are they as good as AMPL in this regard? If not, is there a modelling language satisfying 1-4?
I've used AMPL with Gurobi for mid-sized MIPs (~ 100k-1m variables?) and MiniZinc, mostly with Gecode, for smaller combinatorial problems. I've seen some Gurobi work done with R and Python, but haven't used it that way myself.
I'm less familiar with the other options. My understanding is that GAMS is quite similar to AMPL and much of what I have to say about AMPL may also be valid for GAMS, but I can't vouch for it.
Of course, and correct me if I'm wrong, AMPL and GAMS fail at 1),
Yes, generally. There is an exception which probably isn't helpful for your specific requirements but might be useful to others: you can get free use of AMPL, Gurobi, and many other optimisation products, by using the NEOS web service. This is restricted to academic non-commercial purposes and you have to grant NEOS certain rights in relation to the problems you send them; definitely read those terms of service before using it. It also requires waiting for an available server, so if speed is a high priority this probably isn't the solution for you.
Python and R fail at 2) (and perhaps at 3)?).
In my limited experience, yes for (2). AMPL, GAMS, and MiniZinc are designed specifically for defining optimisation problems, so it's unsurprising that their syntax is more user-friendly for that purpose than languages like Python and R.
The flip-side to this is that if you want to do just about anything other than defining an optimisation problem with these languages, Python/R/etc. will probably be better for that purpose.
On speed: for the problems I usually work with, AMPL takes maybe a couple of seconds to build and presolve a MIP model which takes Gurobi a couple of minutes to solve. Obviously this is going to vary somewhat with hardware and details of the problem, but in general I would expect build time to be small compared to solve time for any of the solutions under discussion. Even with a good solver like Gurobi, big MIPs are hard. Many of the serious optimisation programmers I've met do use Python, so I presume the performance side is good enough.
However, that doesn't mean the choice of language/platform is irrelevant to speed. One of the nice features of AMPL (and also GAMS) is presolve, which attempts to reduce the problem size before sending it to the solver. My standard problems have a lot of redundant variables and constraints; AMPL identifies and eliminates many of these, reducing the problem size by about 80% and giving a noticeable improvement in solver time (as compared to runs where I switch off presolve, which I sometimes do for debugging-related reasons). This might be a consideration if you expect a lot of redundancy.
flexible/powerful (ability to deal with 3D+ arrays, activate/deactivate constraints easily, provide initial solutions to the solver, etc.)
MiniZinc handles up to 6D arrays, which may or may not be enough depending on your applications.
It's more flexible than AMPL in some areas and less so in others. AMPL has a lot of set-based functionality that I find useful (e.g. I can define a variable whose index set is something like "pairs of non-identical cities separated by no more than 500 km") and MiniZinc doesn't have this. OTOH, MiniZinc seems to be better than AMPL for solver-hopping, e.g. if I write a MZ model with a combinatorial constraint like "alldifferent" but then try to run it on a solver that doesn't recognise such constraints, MZ will translate it into something the solver can deal with.
I haven't tried deactivating constraints in MZ other than by commenting them out, so I can't help there, and similarly on providing initial solutions.
Overall, MiniZinc is a good choice to consider. Some pluses and minuses relative to AMPL ("free" being a big plus!) but it fills a similar niche.
IMHO, there is no such system if you consider the Python interfaces/modeling environments to SCIP or Gurobi too complicated:
# PySCIPOpt syntax shown; the Gurobi interface (gurobipy) looks very similar
from pyscipopt import Model

model = Model()
x = model.addVar()                 # continuous variable
y = model.addVar(vtype="INTEGER")  # integer variable
model.setObjective(x + y)
model.addCons(2*x - y*y >= 0)      # quadratic constraint
model.optimize()
To me this looks quite natural and straightforward. The immense benefit of using an actual programming language instead of a modeling language is that you can do anything in it, while there will always be boundaries in the latter.
If you are a looking for a modeling GUI, you should check out LITIC. It can be used almost entirely with drag-and-drop operations: https://litic.com/showcase.html
I've used a lot of the options mentioned, and some not yet mentioned:
GAMS
GAMS' Python API
GAMS' MATLAB API
AMPL
FICO Xpress Mosel
FICO Xpress Mosel's Python API
IBM ILOG OPL
Gurobi's Python API
PuLP (Python)
Pyomo (Python)
Python-MIP
JuMP (Julia)
MATLAB Optimization Toolbox
Google OR-Tools
Based on your requirements, I'd suggest trying Python-MIP, PuLP or JuMP. They are free and have easy syntax with no limit on array dimensionality.
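To give a flavour of the syntax, here is the MiniZinc example from the question rewritten with PuLP (just a sketch; it runs on the bundled CBC solver by default, and a Gurobi backend can be plugged in instead):

import pulp

prob = pulp.LpProblem("example", pulp.LpMinimize)
x = pulp.LpVariable("x", cat=pulp.LpInteger)   # var int: x;
prob += x >= 0.5                               # constraint x >= 0.5;
prob += x                                      # solve minimize x;
prob.solve()
print(pulp.LpStatus[prob.status], x.value())   # expect: Optimal 1.0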
Take a look at Google OR-Tools. I'm not sure if providing an initial solution to the solver is available in all of its interfaces, but if you use it in Python, it should probably satisfy all of 1-4.
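For what it's worth, the CP-SAT solver in OR-Tools does accept solution hints from its Python interface; a minimal sketch (my own illustration, not checked against the other interfaces):

from ortools.sat.python import cp_model

model = cp_model.CpModel()
x = model.NewIntVar(0, 100, "x")
y = model.NewIntVar(0, 100, "y")
model.Add(2 * x + 3 * y >= 12)
model.Minimize(x + y)
model.AddHint(x, 6)   # suggest a starting point to warm-start the search
model.AddHint(y, 0)

solver = cp_model.CpSolver()
status = solver.Solve(model)
if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print(solver.Value(x), solver.Value(y))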
Is it possible for a generalisation in UML to be implemented in Simatic SCL code (or Structured text code)?
The definition of a Generalisation in UML:
A generalisation is a relationship between a more general classifier and a
more specific classifier. Each instance of the specific classifier is also an
indirect instance of the general classifier. Thus, the specific classifier
inherits the features of the more general classifier.
Features specified for instances of the general classifier are implicitly
specified for instances of the specific classifier. Any constraint applying
to instances of the general classifier also applies to instances of the
specific classifier.
In general the answer to this is no, not really. All means of programming PLCs (ladder, ST, FBD, etc.) are generally only very lightly abstracted from the actual machine code. They are closer to assembly wrappers than to anything we would think of as a modern development language. Structured Text is closer to very primitive Pascal - it lacks almost any sort of object-oriented features.
The reality is that PLCs and PLC programmers have long been used to an approach of extreme micromanagement when it comes to developing programs for them. The reasons for this are many - some more valid than others. Scott Whitlock wrote a good bit here outlining some of those reasons. A big one is that maintenance guys on the factory floor are often the ones trying to troubleshoot the machines, and having clear, non-abstract, state-machine information available to them is much more valuable than an elegant, minimal formulation that strokes the ego of the system developer.
PLC programming is a ruthlessly practical industry. If you have the choice between something 10% more practical and something 90% more elegant, the practical solution will always win.
With that said - there are some who are playing in this area. I suggest a quick read of this article for some examples of trying to make ST work a bit like you are suggesting. Still, I would be cautious before putting anything like this to work in a real factory with real machines that need to be both safe and reliably making money.
I recently learned that it is possible to generate C# code at runtime and I would like to put this feature to use. I have code that does some very basic geometric calculations like computing line-plane intersections, and many of the calculations are performed for the same plane or the same line over and over again, so I think I could gain some performance by generating specialized code for some of the methods.
The problem is that I'm not sure where to begin. From reading a few blog posts and browsing MSDN documentation I've come across two possible strategies for generating code at runtime: Expression trees and IL.Emit. Using expression trees seems much easier because there is no need to learn anything about OpCodes and various other MSIL related intricacies but I'm not sure if expression trees are as fast as manually generated MSIL. So are there any suggestions on which method I should go with?
The performance of both is generally the same, as expression trees are internally traversed and emitted as IL using the same underlying system functions that you would be using yourself. It is theoretically possible to emit more efficient IL using the low-level functions, but I doubt there would be any practically important performance gain. That would depend on the task, but I have not come across any practical optimisation of hand-emitted IL compared to IL emitted from expression trees.
I highly suggest getting the tool called ILSpy, which decompiles CLR assemblies. With it you can look at the code that actually traverses the expression trees and emits the IL.
Finally, a caveat. I have used expression trees in a language parser, where function calls are bound to grammar rules that are compiled from a file at runtime. Compiled is the key word here. For many problems I came across, when what you want to achieve is known at compile time, you would not gain much performance from runtime code generation. Some CLR JIT optimizations might also be unavailable to dynamic code. This is only an opinion from my practice, and your domain may be different, but if performance is critical, I would rather look at native, highly optimized libraries. Some of the work I have done would be snail-slow if not for LAPACK/MKL. But that is advice you didn't ask for, so take it with a grain of salt.
If I were in your situation, I would try alternatives from high level to low level, in increasing "needed time & effort" and decreasing reusability order, and I would stop as soon as the performance is good enough for the time being, i.e.:
first, I'd check to see if Math.NET, LAPACK or some similar numeric library already has similar functionality, or I can adapt/extend the code to my needs;
second, I'd try Expression Trees;
third, I'd check Roslyn Project (even though it is in prerelease version);
fourth, I'd think about writing common routines with unsafe C code;
[fifth, I'd think about quitting and starting a new career in a different profession :) ],
and only if none of these work out, would I be so hopeless to try emitting IL at run time.
But perhaps I'm biased against low level approaches; your expertise, experience and point of view might be different.
I have planned to develop a tool that converts a program written in one programming language (e.g. Java) into a common markup language (e.g. XML), and then converts that markup code into another language (e.g. C#).
In simple words, it is a programming language converter that converts a program written in one language into another language.
I think it is possible, but I don't know where to start. I want to know the possibilities for doing so, and about some existing systems.
What you are trying to do is extremely hard, but if you want to know what you are up for I've listed the steps you need to follow below:
First the hard bit:
First you obtain or derive an operational semantics for your source and target languages.
Then you enhance the semantics to capture your source and target memory models.
Then you need to unify the two enhanced-semantics within a common operational model.
Then you need to define a mapping from your source languages onto the common operational model.
Then you need to define a mapping from your operational model to your target language
Step 4, as you pointed out in your question, is trivial.
Step 1 is difficult, as most languages do not have sufficiently formal semantics specified; but I recommend checking out http://lucacardelli.name/TheoryOfObjects.html as this is the best starting point for building a traditional OO semantics.
Step 2 is almost certainly impossible in general, but may be merely obscenely difficult if you are willing to sacrifice some efficiency.
Step 3 will depend on how clean the result of step 1 turned out, but is going to be anything from delicate and tricky to impossible.
Step 5 is not going to be trivial; it is effectively writing a compiler.
Ultimately, what you propose to do is impossible in general, due to the difficulties inherent in steps 1 and 2. However, it should be difficult but doable if you are willing to: severely restrict the source language constructs supported; pretty much forget about handling threads correctly; and pick two languages with sufficiently similar semantics (i.e. Java and C# are OK, but C++ and anything-else is not).
It depends on what languages you want to support, but in general this is a huge & difficult task unless you plan to only support a very small subset of each language.
The real problem is that each programming language has different features (with some areas that overlap and others that don't) and different ways of solving the same problems -- and it's pretty tricky to detect the problem the programmer is trying to solve and convert that to a new idiom. :) And think about the differences between GUIs created in different languages....
See http://xmlvm.org/ as an example (a project aimed at converting between source code of many different languages, with an XML middle-point) -- the site covers in some depth the challenges they are tackling and the compromises they take, and (if you still have any interest in this kind of project...) ask more specific followup questions.
Notice specifically what the output source code looks like -- it's not at all readable, maintainable, efficient, etc..
It is "technically easy" to produce XML for any single langauge: build a parser, construct and abstract syntax tree, and dump out that tree as XML. (I build tools that do this off-the-shelf for many languages). By technically easy, I mean that the community knows how to do this (see any compiler textbook, e.g., Aho&Ullman Dragon book). I do not mean this is a trivial exercise in terms of effort, because real languages are complicated and messy; there have been many attempts to build C++ parsers and few successes. (I have one of the successes, and it was expensive to get right).
What is really hard (and I don't try to do) is produce XML according to a single schema in which the language semantics are exposed. And without that, it will be essentially impossible to write a translator from a generic XML to an arbitrary target language. This is known as the UNCOL problem and people have been looking since 1958 for the answer. I note that the Wikipedia article seems to indicate the problem is solved, but you can't find many references to UNCOL in the literature since 1961.
The closest attempt I've seen to this is the OMG's "ASTM" model (http://www.omg.org/spec/ASTM/1.0/Beta1/); it exports XMI, which is XML. But the ASTM model has lots of escapes built into it to allow languages that it doesn't model perfectly (AFAIK, that means every language) to extend the XMI in arbitrary ways so that language-specific information can be encoded. Consequently each language parser produces a custom version of the XMI, and thus each reader has to pretty much know about the extensions, and full generality vanishes.
Today I read that there is a software called WinCalibra (scroll a bit down) which can take a text file with properties as input.
This program can then optimize the input properties based on the output values of your algorithm. See this paper or the user documentation for more information (see link above; sadly doc is a zipped exe).
Do you know of other software which can do the same and which runs under Linux? (preferably open source)
EDIT: Since I need this for a Java application: should I invest my research in Java libraries like gaul or watchmaker? The problem is that I don't want to roll my own solution, nor do I have the time to do so. Do you have pointers to out-of-the-box applications like Calibra? (Internet searches weren't successful; I only found libraries.)
I decided to give away the bounty (otherwise no one would have a benefit) although I didn't find a satisfactory solution :-( (out-of-the-box application)
Some kind of (Metropolis-algorithm-like) probability-selected random walk is a possibility in this instance. Perhaps with simulated annealing to improve the final selection. Though the timing parameters you've supplied are not optimal for getting a really great result this way.
It works like this:
You start at some point. Use your existing data to pick one that looks promising (like the highest value you've got). Set o to the output value at this point.
You propose a randomly selected step in the input space, assign the output value there to n.
Accept the step (that is update the working position) if 1) n>o or 2) the new value is lower, but a random number on [0,1) is less than f(n/o) for some monotonically increasing f() with range and domain on [0,1).
Repeat steps 2 and 3 as long as you can afford, collecting statistics at each step.
Finally compute the result. In your case an average of all points is probably sufficient.
Important frill: This approach has trouble if the space has many local maxima with deep dips between them, unless the step size is big enough to get past the dips; but big steps make the whole thing slow to converge. To fix this you do two things:
Do simulated annealing (start with a large step size and gradually reduce it, thus allowing the walker to move between local maxima early on, but trapping it in one region later to accumulate precise results).
Use several (many if you can afford it) independent walkers so that they can get trapped in different local maxima. The more you use, and the bigger the difference in output values, the more likely you are to get the best maxima.
This is not necessary if you know that you only have one, big, broad, nicely behaved local extreme.
Finally, the selection of f(). You can just use f(x) = x, but you'll get optimal convergence if you use f(x) = exp(-(1/x)).
Again, you don't have enough time for a great many steps (though if you have multiple computers, you can run separate instances to get the multiple walkers effect, which will help), so you might be better off with some kind of deterministic approach. But that is not a subject I know enough about to offer any advice.
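For the record, a compact sketch of the walk described above, in Python. Here evaluate is a stand-in for the expensive algorithm being tuned (hypothetical), and the acceptance rule assumes positive output values so that n/o lies in [0,1) when the proposed step is worse.

import math
import random

def metropolis_walk(evaluate, start, n_steps=200, step_hi=1.0, step_lo=0.05):
    x = list(start)
    o = evaluate(x)                                       # output at current point
    visited = [(list(x), o)]
    for i in range(n_steps):
        # simulated annealing: geometrically shrink the step size over time
        step = step_hi * (step_lo / step_hi) ** (i / max(n_steps - 1, 1))
        cand = [xi + random.gauss(0.0, step) for xi in x]  # random proposed step
        n = evaluate(cand)
        # accept if better, or with probability f(n/o) = exp(-o/n) if worse
        if n > o or (n > 0 and random.random() < math.exp(-o / n)):
            x, o = cand, n
        visited.append((list(x), o))
    return visited

# toy run with a made-up positive objective of two parameters
history = metropolis_walk(lambda p: math.exp(-(p[0] - 3)**2 - (p[1] + 1)**2),
                          start=[0.0, 0.0], n_steps=500)
print(max(history, key=lambda pair: pair[1]))             # best point seen

Running several independent walkers, as suggested above, is then just several calls to metropolis_walk with different starting points.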
There is a lot of genetic-algorithm-based software that can do exactly that. I wrote a PhD about it a decade or two ago.
A Google search for genetic algorithms Linux shows a load of starting points.
Intrigued by the question, I did a bit of poking around, trying to get a better understanding of the nature of CALIBRA, its standing in academic circles and the existence of similar software of projects, in the Open Source and Linux world.
Please be kind (and, please, edit directly or suggest edits) for the likely instances where my assertions are incomplete, inexact or even flat-out incorrect. While working in related fields, I'm by no means an Operational Research (OR) authority!
The [algorithm] parameter tuning problem is a relatively well-defined problem, typically framed as a solution-search problem whereby the combination of all possible parameter values constitutes a solution space, and the aim of the parameter-tuning logic is to "navigate" [portions of] this space in search of an optimal (or locally optimal) set of parameters.
The optimality of a given solution is measured in various ways and such metrics help direct the search. In the case of the Parameter Tuning problem, the validity of a given solution is measured, directly or through a function, from the output of the algorithm [i.e. the algorithm being tuned not the algorithm of the tuning logic!].
Framed as a search problem, the discipline of algorithm parameter tuning doesn't differ significantly from other solution-search problems where the solution space is defined by something other than the parameters to a given algorithm. But because it works on algorithms which are in themselves solutions of sorts, this discipline is sometimes referred to as Metaheuristics or Metasearch. (A metaheuristics approach can be applied to various algorithms.)
Certainly there are many specific features of the parameter tuning problem as compared to the other optimization applications but with regard to the solution searching per-se, the approaches and problems are generally the same.
Indeed, while well defined, the search problem is generally still broadly unsolved, and is the object of active research in very many different directions, for many different domains. Various approaches offer mixed success depending on the specific conditions and requirements of the domain, and this vibrant and diverse mix of academic research and practical applications is a common trait to Metaheuristics and to Optimization at large.
So... back to CALIBRA...
From its own authors' admission, Calibra has several limitations
Limit of 5 parameters, maximum
Requirement of a range of values for [some of ?] the parameters
Works better when the parameters are relatively independent (but... wait, when that is the case, isn't the whole search problem much easier ;-) )
CALIBRA is based on a combination of approaches, which are repeated in a sequence. A mix of guided search and local optimization.
The paper where CALIBRA was presented is dated 2006. Since then, there have been relatively few references to this paper and to CALIBRA at large. Its two authors have since published several other papers in various disciplines related to Operational Research (OR).
This may be indicative that CALIBRA hasn't been perceived as a breakthrough.
State of the art in that area ("parameter tuning", "algorithm configuration") is the SPOT package in R. You can connect external fitness functions using a language of your choice. It is really powerful.
I am working on adapters for e.g. C++ and Java that simplify the experimental setup, which requires some getting used to in SPOT. The project goes under the name InPUT, and a first version of the tuning part will be up soon.