Timetabling/Scheduling Library - constraint-programming

I'm looking for a library to help me solve a constraint-based logic problem where I need to schedule a number of different events of varying duration. The events have different attributes associated with them, and my main issue is that I need to encode "preferences" based on these attributes. These preferences aren't hard constraints, but I would like to maximise how well they are satisfied in the solution. There are also preferences with competing priorities.
I've taken a look at a few constraint solvers (Sat4j, clasp, Glucose, GlueMiniSat, etc.) but from what I've seen they all seem to only deal with fixed constraints, and setting up preferences would be non-trivial.
I don't care too much about what technology/language it's in - I'm happy to write a wrapper around it.

Absolutely, Choco Solver is a powerful Java constraint solver that is often used for scheduling and planning.
Let's take the following example:
"it would be nice if x = 10"
You can encode preferences in different ways.
1) through variables and constraints.
1.1) reify the constraint with a binary variable
ICF.arithm(x,"=",10).reifyWith(b);
it basically means b = 1 <=> x = 10 (so the constraint may or may not be satisfied); then you can maximise b (possibly with a weight)
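To make 1.1 concrete, here is a small self-contained sketch in the newer Choco 4 Model API (ICF above is from the older Choco 3 factory); the second variable y, the weights 3 and 1, and the bounds are invented for illustration:

import org.chocosolver.solver.Model;
import org.chocosolver.solver.variables.BoolVar;
import org.chocosolver.solver.variables.IntVar;

public class PreferenceDemo {
    public static void main(String[] args) {
        Model model = new Model("soft preferences");
        IntVar x = model.intVar("x", 0, 20);
        IntVar y = model.intVar("y", 0, 20);
        // b1 = 1 <=> x = 10, b2 = 1 <=> y = 5 (two competing preferences)
        BoolVar b1 = model.arithm(x, "=", 10).reify();
        BoolVar b2 = model.arithm(y, "=", 5).reify();
        // weighted sum encodes the priorities: x = 10 is worth 3, y = 5 is worth 1
        IntVar satisfaction = model.intVar("satisfaction", 0, 4);
        model.scalar(new IntVar[]{b1, b2}, new int[]{3, 1}, "=", satisfaction).post();
        model.setObjective(Model.MAXIMIZE, satisfaction);
        while (model.getSolver().solve()) {
            System.out.println(x + " " + y + " -> " + satisfaction.getValue());
        }
    }
}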
1.2) through gap variables
solver.post(ICF.arithm(x, "-", gap, "=", 10));
then you can minimise the absolute value of gap (possibly applying a weight to the constraint).
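A rough sketch of the same idea in Choco 4 syntax; the bounds and the preferred value 10 are illustrative:

import org.chocosolver.solver.Model;
import org.chocosolver.solver.variables.IntVar;

public class GapDemo {
    public static void main(String[] args) {
        Model model = new Model("gap preference");
        IntVar x = model.intVar("x", 0, 20);
        // gap = x - 10, so gap is 0 exactly when the preference "x = 10" holds
        IntVar gap = model.intVar("gap", -10, 10);
        model.arithm(x, "-", gap, "=", 10).post();
        // minimise |gap|: the further x lands from 10, the worse the objective
        model.setObjective(Model.MINIMIZE, model.intAbsView(gap));
        while (model.getSolver().solve()) {
            System.out.println("x = " + x.getValue() + ", gap = " + gap.getValue());
        }
    }
}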
2) through search: when solving the problem, ask the search strategy to try x = 10 before trying another value. There is no optimality proof, but it works quite well in practice.
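A rough sketch of 2) in Choco 4, reusing the Model and IntVar x from the first sketch above (falling back to the lower bound when 10 is no longer in the domain is an arbitrary choice):

import org.chocosolver.solver.search.strategy.Search;
import org.chocosolver.solver.search.strategy.selectors.values.IntValueSelector;
import org.chocosolver.solver.search.strategy.selectors.variables.InputOrder;

// try the preferred value 10 first whenever it is still in the domain
IntValueSelector preferTen = v -> v.contains(10) ? 10 : v.getLB();
model.getSolver().setSearch(Search.intVarSearch(new InputOrder<>(model), preferTen, x));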
Hope this helps. Please feel free to contact us for more support on Choco Solver: www.cosling.com
best,

I think OptaPlanner is a tool that can help you solve this problem; check this:
OptaPlanner is a constraint satisfaction solver. It optimizes business
resource planning. Every organization faces scheduling puzzles: assign
a limited set of constrained resources (employees, assets, time and
money) to provide products or services to customers. OptaPlanner
optimizes such planning problems to do more business with less
resources. Use cases include Vehicle Routing, Employee Rostering, Job
Scheduling, Bin Packing and many more.
OptaPlanner is a lightweight, embeddable planning engine. It enables
normal Java™ programmers to solve optimization problems efficiently.
Constraints apply on plain domain objects and can reuse existing code.
There’s no need to input difficult mathematical equations. Under the
hood, OptaPlanner combines sophisticated optimization heuristics and
metaheuristics (such as Tabu Search, Simulated Annealing and Late
Acceptance) with very efficient score calculation.
OptaPlanner is open source software, released under the Apache
Software License. It is written in 100% pure Java™, runs on any JVM
and is available in the Maven Central repository too.
Source:
http://www.optaplanner.org/
It's part of Drools, which has other interesting tools:
http://www.drools.org/
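Since the core need here is soft preferences with competing priorities: in OptaPlanner these map naturally to weighted soft constraints, with hard constraints kept inviolable. Below is a rough sketch using the Constraint Streams API of recent OptaPlanner 8 releases; the Event planning entity, its getters (getRoom, getStart, getEnd, getSlot, getPreferredSlot, getPriority) and the weights are all invented for illustration:

import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore;
import org.optaplanner.core.api.score.stream.Constraint;
import org.optaplanner.core.api.score.stream.ConstraintFactory;
import org.optaplanner.core.api.score.stream.ConstraintProvider;
import org.optaplanner.core.api.score.stream.Joiners;

public class EventConstraintProvider implements ConstraintProvider {
    @Override
    public Constraint[] defineConstraints(ConstraintFactory factory) {
        return new Constraint[] {
                // hard constraint: two events in the same room must not overlap in time
                // (Event is a hypothetical @PlanningEntity, not a real OptaPlanner class)
                factory.forEachUniquePair(Event.class,
                                Joiners.equal(Event::getRoom),
                                Joiners.overlapping(Event::getStart, Event::getEnd))
                        .penalize(HardSoftScore.ONE_HARD)
                        .asConstraint("Room conflict"),
                // soft constraint (preference): reward events placed in their
                // preferred slot, weighted by each event's own priority
                factory.forEach(Event.class)
                        .filter(e -> e.getSlot().equals(e.getPreferredSlot()))
                        .reward(HardSoftScore.ONE_SOFT, Event::getPriority)
                        .asConstraint("Preferred slot")
        };
    }
}

The solver then never trades a hard violation for soft gains: it first satisfies all hard constraints, then maximises the weighted soft score, which is exactly the "preferences with competing priorities" behaviour asked for.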

Another actively-maintained library is "choco-solver".
website: https://choco-solver.org/
github: https://github.com/chocoteam/choco-solver

Another alternative is the Gecode toolkit. It is a modern, open-source constraint programming solver.

Related

Is Alloy Analyzer "a falsifier"?

In my community, we have recently been making active use of the term "falsification" of a formal specification. The term appears in, for instance:
https://www.cs.huji.ac.il/~ornak/publications/cav05.pdf
I wonder whether Alloy Analyzer does falsification. It seems true for me, but I'm not sure. Is it correct? If not, what is the difference?
Yes, Alloy is a falsifier. Alloy's primary novelty when it was introduced 20 years ago was to argue that falsification was often more important than verification, since most designs are not correct, so the role of an analyzer should be to find the errors, not to show that they are not present. For a discussion of this issue, see Section 1.4, Verification vs. Refutation in Software analysis: A roadmap (Jackson and Rinard, 2000); Section 5.1.1, Instance Finding and Undecidability Compromises in Software Abstractions (Jackson 2006).
In Alloy's case though, there's another aspect, which is the argument that scope-complete analysis is actually quite effective from a verification standpoint. This claim is what we called the "small scope hypothesis" -- that most bugs can be found in small scopes (that is, analyses that are bounded by a small fixed number of elements in each basic type).
BTW, Alloy was one of the earliest tools to suggest using SAT for bounded verification. See, for example, Boolean Compilation of Relational Specifications (Daniel Jackson, 1998), a tech report that was known to the authors of the first bounded model checking paper, which discusses Alloy's predecessor, Nitpick, in the following terms:
The hypothesis underlying Nitpick is a controversial one. It is that,
in practice, small scopes suffice. In other words, most errors can be
demonstrated by counterexamples within a small scope. This is a purely
empirical hypothesis, since the relevant distribution of errors cannot
be described mathematically: it is determined by the specifications
people write.
Our hope is that successful use of the Nitpick tool will justify the
hypothesis. There is some evidence already for its plausibility. In
our experience with Nitpick to date, we have not gained further
information by increasing the scope beyond 6.
A similar notion of scope is implicit in the context of model checking
of hardware. Although the individual state machines are usually
finite, the design is frequently parameterized by the number of
machines executing in parallel. This metric is analogous to scope; as
the number of machines increases, the state space increases
exponentially, and it is rarely possible to analyze a system involving
more than a handful of machines. Fortunately, however, it seems that
only small configurations are required to find errors. The celebrated
analysis of the Futurebus+ cache protocol [C+95], which perhaps marked
the turning point in model checking’s industrial reputation, was
performed for up to 8 processors and 3 buses. The reported flaws,
however, could be demonstrated with counterexamples involving at most
3 processors and 2 buses.
From my understanding of what is meant by falsification, yes, Alloy does it.
It becomes quite apparent when you look at the motivation behind the creation of Alloy, as formulated in the Software Abstractions book:
This book is the result of a 10-year effort to bridge this gap, to develop a language (Alloy) that captures the essence of software abstractions simply and succinctly, with an analysis that is fully automatic, and can expose the subtlest of flaws.

Best modelling language for modelling LP/MILP? (NOT solver)

I have a Gurobi licence and I am after a good MILP/LP modelling language, which should be
1) free/open source
2) intuitive, i.e. something that looks like this (taken from MiniZinc):
var int: x;
constraint x >= 0.5;
solve minimize x;
3) fast: the time to build the model and send it to Gurobi should be of a similar order to the best ones (AMPL, GAMS, etc.)
4) flexible/powerful (ability to deal with 3D+ arrays, activate/deactivate constraints easily, provide initial solutions to the solver, etc.)
Of course, and correct me if I'm wrong, AMPL and GAMS fail at 1), Python and R fail at 2) (and perhaps at 3)?).
How about GLPK, MiniZinc, ZIMPL, etc.? They satisfy 1) and 2), but what about 3) and 4)? Are they as good as AMPL in this regard? If not, is there a modelling language satisfying 1)-4)?
I've used AMPL with Gurobi for mid-sized MIPs (~ 100k-1m variables?) and MiniZinc, mostly with Gecode, for smaller combinatorial problems. I've seen some Gurobi work done with R and Python, but haven't used it that way myself.
I'm less familiar with the other options. My understanding is that GAMS is quite similar to AMPL and much of what I have to say about AMPL may also be valid for GAMS, but I can't vouch for it.
Of course, and correct me if I'm wrong, AMPL and GAMS fail at 1),
Yes, generally. There is an exception which probably isn't helpful for your specific requirements but might be useful to others: you can get free use of AMPL, Gurobi, and many other optimisation products, by using the NEOS web service. This is restricted to academic non-commercial purposes and you have to grant NEOS certain rights in relation to the problems you send them; definitely read those terms of service before using it. It also requires waiting for an available server, so if speed is a high priority this probably isn't the solution for you.
Python and R fail at 2) (and perhaps at 3)?).
In my limited experience, yes for (2). AMPL, GAMS, and MiniZinc are designed specifically for defining optimisation problems, so it's unsurprising that their syntax is more user-friendly for that purpose than languages like Python and R.
The flip-side to this is that if you want to do just about anything other than defining an optimisation problem with these languages, Python/R/etc. will probably be better for that purpose.
On speed: for the problems I usually work with, AMPL takes maybe a couple of seconds to build and presolve a MIP model which takes Gurobi a couple of minutes to solve. Obviously this is going to vary somewhat with hardware and details of the problem, but in general I would expect build time to be small compared to solve time for any of the solutions under discussion. Even with a good solver like Gurobi, big MIPs are hard. Many of the serious optimisation programmers I've met do use Python, so I presume the performance side is good enough.
However, that doesn't mean the choice of language/platform is irrelevant to speed. One of the nice features of AMPL (and also GAMS) is presolve, which attempts to reduce the problem size before sending it to the solver. My standard problems have a lot of redundant variables and constraints; AMPL identifies and eliminates many of these, reducing the problem size by about 80% and giving a noticeable improvement in solver time (as compared to runs where I switch off presolve, which I sometimes do for debugging-related reasons). This might be a consideration if you expect a lot of redundancy.
flexible/powerful (ability to deal with 3D+ arrays, activate/deactivate constraints easily, provide initial solutions to the solver, etc.)
MiniZinc handles up to 6D arrays, which may or may not be enough depending on your applications.
It's more flexible than AMPL in some areas and less so in others. AMPL has a lot of set-based functionality that I find useful (e.g. I can define a variable whose index set is something like "pairs of non-identical cities separated by no more than 500 km") and MiniZinc doesn't have this. OTOH, MiniZinc seems to be better than AMPL for solver-hopping, e.g. if I write a MZ model with a combinatorial constraint like "alldifferent" but then try to run it on a solver that doesn't recognise such constraints, MZ will translate it into something the solver can deal with.
I haven't tried deactivating constraints in MZ other than by commenting them out, so I can't help there, and similarly on providing initial solutions.
Overall, MiniZinc is a good choice to consider. Some pluses and minuses relative to AMPL ("free" being a big plus!) but it fills a similar niche.
IMHO, there is no such system if you consider the Python interfaces/modeling environments to SCIP or Gurobi too complicated:
from pyscipopt import Model  # PySCIPOpt, SCIP's Python interface

model = Model()
x = model.addVar()
y = model.addVar(vtype="INTEGER")
model.setObjective(x + y)
model.addCons(2*x - y*y >= 0)
model.optimize()
To me this looks quite natural and straightforward. The immense benefit of using an actual programming language instead of a modeling language is that you can do anything in there, while there will always be boundaries in the latter.
If you are looking for a modeling GUI, you should check out LITIC. It can be used almost entirely with drag-and-drop operations: https://litic.com/showcase.html
I've used a lot of the options mentioned, and some not yet mentioned:
GAMS
GAMS' Python API
GAMS' MATLAB API
AMPL
FICO Xpress Mosel
FICO Xpress Mosel's Python API
IBM ILOG OPL
Gurobi's Python API
PuLP (Python)
Pyomo (Python)
Python-MIP
JuMP (Julia)
MATLAB Optimization Toolbox
Google OR-Tools
Based on your requirements, I'd suggest trying Python-MIP, PuLP or JuMP. They are free and have easy syntax with no limit on array dimensionality.
Take a look at Google OR-Tools. I'm not sure if passing an initial solution to the solver is available in all of its interfaces, but if you use it in Python, it should probably satisfy all of 1)-4).

How to implement a "Generalisation" in SCL

Is it possible for a generalisation in UML to be implemented in Simatic SCL code (or Structured text code)?
The definition of a Generalisation in UML:
A generalisation is a relationship between a more general classifier and a
more specific classifier. Each instance of the specific classifier is also an
indirect instance of the general classifier. Thus, the specific classifier
inherits the features of the more general classifier.
Features specified for instances of the general classifier are implicitly
specified for instances of the specific classifier. Any constraint applying
to instances of the general classifier also applies to instances of the
specific classifier.
In general the answer to this is no, not really. All means of programming PLCs (ladder, ST, FBD, etc.) are generally only very lightly abstracted from the actual machine code. They are closer to assembly wrappers than to anything we would think of as a modern development language. Structured Text is closer to very primitive Pascal - it lacks almost any sort of object-oriented features.
The notion is that PLCs and PLC programmers have long been used to an approach of extreme micromanagement when it comes to developing programs for them. The reasons for this are many - some more valid than others. Scott Whitlock wrote a good bit here outlining some of those reasons. A big one is that maintenance guys on the factory floor are often the ones trying to troubleshoot the machines, and having clear, non-abstract, state-machine information available to them is much more valuable than the need for an elegant, minimal formulation to stroke the ego of the system developer.
PLC programming is a ruthlessly practical industry. If you have the choice between something 10% more practical and something 90% more elegant, the practical solution will always win.
With that said - there are some who are playing in this area. I suggest a quick read of this article for some examples of trying to make ST work a bit like you are suggesting. Still, I would be cautious before putting anything like this to work in a real factory with real machines that need to be both safe and reliably making money.

Opinions on and Experiences with Excel 2010's Evolutionary Solver method

Microsoft has augmented the existing Simplex (Linear) and Gradient (Non-linear) solver engines of the standard Solver Add-In by an Evolutionary solver engine aiming at non-smooth discontinuous problems where global optimal solutions are generally hard (or most of the time even impossible) to find with the other engines. In fact, it is one of the solvers that was previously only available through Frontline's Premium Solver product line, so I think it can be considered a generous addition to the standard solver that ships with Excel.
I haven't heard a lot about people using this new engine and guess that most solver users haven't noticed this recent addition by Microsoft. I became aware of it here: http://office.microsoft.com/en-us/excel-help/what-s-new-in-excel-2010-HA010369709.aspx
I would therefore like to hear about your opinions and experiences with it, also with respect to reasonable settings as it seems to take a lot more time to converge than the other methods.
I've used the Evolutionary solver engine to develop an MPS (master production schedule), trying to involve most aspects and get an optimal solution. I've found this:
Sometimes it gives the optimal solution, but sometimes you have to give it some kind of hints, such as moving some variables at will and seeing what happens. Therefore, I wouldn't recommend it for final decisions, but give it a chance!

Software to Tune/Calibrate Properties for Heuristic Algorithms

Today I read that there is a software called WinCalibra (scroll a bit down) which can take a text file with properties as input.
This program can then optimize the input properties based on the output values of your algorithm. See this paper or the user documentation for more information (see link above; sadly doc is a zipped exe).
Do you know other software which can do the same and which runs under Linux? (preferably open source)
EDIT: Since I need this for a Java application: should I invest my research in Java libraries like gaul or watchmaker? The problem is that I don't want to roll my own solution, nor do I have time to do so. Do you have pointers to out-of-the-box applications like Calibra? (internet searches weren't successful; I only found libraries)
I decided to give away the bounty (otherwise no one would have a benefit) although I didn't find a satisfactory solution :-( (out-of-the-box application)
Some kind of (Metropolis algorithm-like) probability-selected random walk is a possibility in this instance. Perhaps with simulated annealing to improve the final selection. Though the timing parameters you've supplied are not optimal for getting a really great result this way.
It works like this:
You start at some point. Use your existing data to pick one that looks promising (like the highest value you've got). Set o to the output value at this point.
You propose a randomly selected step in the input space, assign the output value there to n.
Accept the step (that is, update the working position) if 1) n > o, or 2) the new value is lower, but a random number on [0,1) is less than f(n/o) for some monotonically increasing f() with range and domain on [0,1).
Repeat steps 2 and 3 as long as you can afford, collecting statistics at each step.
Finally compute the result. In your case an average of all points is probably sufficient.
Important frill: This approach has trouble if the space has many local maxima with deep dips between them unless the step size is big enough to get past the dips; but big steps make the whole thing slow to converge. To fix this you do two things:
Do simulated annealing (start with a large step size and gradually reduce it, thus allowing the walker to move between local maxima early on, but trapping it in one region later to accumulate precise results).
Use several (many if you can afford it) independent walkers so that they can get trapped in different local maxima. The more you use, and the bigger the difference in output values, the more likely you are to find the best maximum.
This is not necessary if you know that you only have one, big, broad, nicely behaved local extreme.
Finally, the selection of f(). You can just use f(x) = x, but you'll get optimal convergence if you use f(x) = exp(-(1/x)).
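To tie the steps together, here is a compact sketch of this walk in Java; the function type, the 0.999 annealing factor, and the assumption that output values are positive (so that f(n/o) = exp(-o/n) is well defined) are my own illustrative choices:

import java.util.Random;
import java.util.function.ToDoubleFunction;

public class MetropolisTuner {
    // evaluate: runs the algorithm being tuned and returns its (positive) output
    public static double[] walk(ToDoubleFunction<double[]> evaluate, double[] start,
                                double stepSize, int steps, long seed) {
        Random rng = new Random(seed);
        double[] current = start.clone();
        double o = evaluate.applyAsDouble(current);   // output at the working position
        double[] best = current.clone();
        double bestOut = o;
        for (int i = 0; i < steps; i++) {
            // step 2: propose a randomly selected step in the input space
            double[] prop = current.clone();
            for (int d = 0; d < prop.length; d++) {
                prop[d] += (rng.nextDouble() * 2 - 1) * stepSize;
            }
            double n = evaluate.applyAsDouble(prop);
            // step 3: accept if n > o, or with probability f(n/o) = exp(-o/n) otherwise
            if (n > o || rng.nextDouble() < Math.exp(-o / n)) {
                current = prop;
                o = n;
            }
            if (o > bestOut) { bestOut = o; best = current.clone(); }
            stepSize *= 0.999;   // simulated annealing: shrink the step size gradually
        }
        return best;   // per the advice above, averaging visited points is an alternative
    }
}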
Again, you don't have enough time for a great many steps (though if you have multiple computers, you can run separate instances to get the multiple walkers effect, which will help), so you might be better off with some kind of deterministic approach. But that is not a subject I know enough about to offer any advice.
There is a lot of genetic-algorithm-based software that can do exactly that. I wrote a PhD about it a decade or two ago.
A Google search for "genetic algorithms Linux" shows a load of starting points.
Intrigued by the question, I did a bit of poking around, trying to get a better understanding of the nature of CALIBRA, its standing in academic circles and the existence of similar software or projects in the open-source and Linux world.
Please be kind (and, please, edit directly, or suggest edits) for the likely instances where my assertions are incomplete, inexact or even flat-out incorrect. While working in related fields, I'm by no means an Operational Research (OR) authority!
The [algorithm] parameter tuning problem is relatively well defined: it is typically framed as a solution-search problem whereby the combination of all possible parameter values constitutes a solution space, and the parameter tuning logic's aim is to "navigate" [portions of] this space in search of an optimal (or locally optimal) set of parameters.
The optimality of a given solution is measured in various ways and such metrics help direct the search. In the case of the Parameter Tuning problem, the validity of a given solution is measured, directly or through a function, from the output of the algorithm [i.e. the algorithm being tuned not the algorithm of the tuning logic!].
Framed as a search problem, the discipline of algorithm parameter tuning doesn't differ significantly from other solution-search problems where the solution space is defined by something other than the parameters to a given algorithm. But because it works on algorithms which are in themselves solutions of sorts, this discipline is sometimes referred to as metaheuristics or metasearch. (A metaheuristics approach can be applied to various algorithms.)
Certainly there are many specific features of the parameter tuning problem as compared to other optimization applications, but with regard to the solution searching per se, the approaches and problems are generally the same.
Indeed, while well defined, the search problem is generally still broadly unsolved, and is the object of active research in very many different directions, for many different domains. Various approaches offer mixed success depending on the specific conditions and requirements of the domain, and this vibrant and diverse mix of academic research and practical applications is a common trait to Metaheuristics and to Optimization at large.
So... back to CALIBRA...
By its own authors' admission, CALIBRA has several limitations:
Limit of 5 parameters, maximum
Requirement of a range of values for [some of ?] the parameters
Works better when the parameters are relatively independent (but... wait, when that is the case, isn't the whole search problem much easier ;-) )
CALIBRA is based on a combination of approaches, which are repeated in a sequence. A mix of guided search and local optimization.
The paper where CALIBRA was presented is dated 2006. Since then, there have been relatively few references to this paper and to CALIBRA at large. Its two authors have since published several other papers in various disciplines related to Operational Research (OR).
This may be indicative that CALIBRA hasn't been perceived as a breakthrough.
State of the art in that area ("parameter tuning", "algorithm configuration") is the SPOT package in R. You can connect external fitness functions using a language of your choice. It is really powerful.
I am working on adapters for e.g. C++ and Java that simplify the experimental setup, which requires some getting used to in SPOT. The project goes under the name InPUT, and a first version of the tuning part will be up soon.
