Traveling Salesman (TSP) no return

Does anyone know of an implementation of a variation of TSP that does not require returning to the start point, but keeps the 'must-pass' requirement (with fixed start and end points)? Thanks in advance! :)

When using genetic algorithms to solve the TSP, you can apply this restriction to the genomes (which are usually realized as arrays containing the visiting sequence of the cities):
Spawn the genomes with a fixed start and end point.
Make sure the start and end point are not altered by mutation and crossover.
Whether or not you return to the start point is then a matter for the fitness function, which interprets and rates the fitness (inverse route length) of your genomes.
This guarantees that any solution produced follows your requirements.
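A minimal sketch of what that can look like in Python (the genome layout, distance matrix and swap mutation below are illustrative assumptions, not a complete GA):

import random

def spawn_genome(start, end, interior_cities):
    # A genome is the full route: fixed start, a shuffled permutation of the interior cities, fixed end.
    middle = interior_cities[:]
    random.shuffle(middle)
    return [start] + middle + [end]

def mutate(genome):
    # Swap two interior positions only; positions 0 and -1 (start and end) are never touched.
    i, j = random.sample(range(1, len(genome) - 1), 2)
    child = genome[:]
    child[i], child[j] = child[j], child[i]
    return child

def fitness(genome, dist):
    # Inverse length of the open route; note there is no closing edge from the end back to the start.
    length = sum(dist[a][b] for a, b in zip(genome, genome[1:]))
    return 1.0 / length

Crossover follows the same rule: recombine only the interior slice and leave positions 0 and -1 untouched.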

Related

How would I construct an integer optimization model corresponding to a graph

Suppose we're given some kind of graph that defines the feasible region of our optimization problem. For example: [image showing two disjoint polygonal feasible regions in the first quadrant of the plane]
How would I go about constructing these constraints in an integer optimization problem? Does anyone have any tips? Thanks!
Mate, I agree with the others that you should be a little more specific than that paint-ish picture ;). In particular, you are neither specifying an objective (or objective direction) nor giving any context about what in this graph should involve integer variables, apart from the existence of disjunctive feasible sets, which can be modeled with MIP techniques. It seems like your real problem is formalizing what you have conceptualized. However, in case you are just being lazy and are simply interested in modelling disjunctive regions, you should look into disjunctive programming techniques such as "big-M" (note: big-M reformulations can be numerically problematic). You should aim for a convex-hull reformulation if you can attain one fairly easily.
Back to your picture, it is quite clear that you have a problem in two real dimensions (let's say in R^2), where the constraints bounding the feasible set are linear (the lines making up the feasible polygons).
So you know that you have two dimensions and need two continuous variables, say x[1] and x[2], to formulate each of your linear constraints (a[i,1]*x[1]+a[i,2]*x[2]<=rhs[i] for each index i, one per line in your graph). Additionally, your variables seem to be constrained to the first orthant, so x[1]>=0 and x[2]>=0 should hold. Now, to add disjunctions you want constraints that only hold when a certain condition is true. Therefore, you can add two binary decision variables, say y[1] and y[2], and an additional constraint y[1]+y[2]=1, to express that only one set of constraints can be active at a time. You can implement this with big-M by reformulating the constraints as follows:
If you bound things from above with your line:
a[i,1]*x[1]+a[i,2]*x[2]-rhs[i]<=M*(1-y[1]) if i corresponds to the one polygon,
a[i,1]*x[1]+a[i,2]*x[2]-rhs[i]<=M*(1-y[2]) if i corresponds to the other polygon,
and if your line bounds things from below:
rhs[i]-a[i,1]*x[1]-a[i,2]*x[2]<=M*(1-y[1]) if i corresponds to the one polygon,
rhs[i]-a[i,1]*x[1]-a[i,2]*x[2]<=M*(1-y[2]) if i corresponds to the other polygon.
It is important that M is sufficiently large, but not so large that it causes numerical issues.
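If it helps, here is a rough sketch of that big-M pattern in Pyomo; the line coefficients, polygon assignments and the value of M are made up purely for illustration, and only upper-bounding lines are shown:

import pyomo.environ as pyo

m = pyo.ConcreteModel()
m.x = pyo.Var([1, 2], within=pyo.NonNegativeReals)  # continuous variables in the first orthant
m.y = pyo.Var([1, 2], within=pyo.Binary)            # one indicator per polygon
M = 1000.0                                          # big-M; in practice derive it from known bounds on x

# exactly one polygon is active
m.pick_one = pyo.Constraint(expr=m.y[1] + m.y[2] == 1)

# made-up data: line i has coefficients (a1, a2), right-hand side rhs, and belongs to a polygon
lines = {1: (1.0, 1.0, 4.0, 1), 2: (-1.0, 2.0, 3.0, 1),
         3: (2.0, 1.0, 10.0, 2), 4: (1.0, -1.0, 2.0, 2)}

def bigm_rule(m, i):
    a1, a2, rhs, poly = lines[i]
    # the line only binds when its polygon's indicator is 1; otherwise M*(1-y) relaxes it
    return a1 * m.x[1] + a2 * m.x[2] - rhs <= M * (1 - m.y[poly])
m.disjunction = pyo.Constraint(list(lines.keys()), rule=bigm_rule)

m.obj = pyo.Objective(expr=m.x[1] + m.x[2], sense=pyo.maximize)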
That being said, I am by no means an expert on these disjunctive programming techniques, so feel free to chime in, add corrections or make things clearer.
Also, a more elaborate question typically yields more elaborate and satisfying answers ;) If you had gone to the effort of putting together a small concrete example problem, you would likely have gotten a full formulation of your problem, or even an executable piece of code, in no time.

Pyomo Optimization of minimum cost using binary variables

I have an optimization problem where I want to minimize the total cost of a system, so I write an objective function that is the sum of my different costs. The problem involves using one of three machines, each with a different cost and a different usage threshold. I define each machine (model.Machine#) as a binary variable and declare the cost parameter of each machine (model.Cost#). I am trying to get the total cost so that I can minimize it, but when I write the constraint:
model.Cost1*model.Machine1 + model.Cost2*model.Machine2 + model.Cost3*model.Machine3 == model.MachineCost
Where I also write:
model.Machine1 + model.Machine2 + model.Machine3 == 1
Gurobi tells me that it can't handle a quadratic function, referring to the first constraint mentioned above. However, since these are parameters multiplied by binary variables, there shouldn't be anything quadratic.
I know the question is vague and part of a larger problem but I hope you can understand what I am referring to and help me!
Thank you so much for your assistance!
What is model.MachineCost? Is it an Expression component with some kind of quadratic expression stored inside of it?
If not, can you start commenting out things in your model until you get down to a minimal working example (that causes this error) and post that? Otherwise, we can't be sure that there are not other quadratic pieces of the model that you are not showing.
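For comparison, here is a stripped-down Pyomo sketch with made-up cost numbers showing how a constraint of that shape stays linear when the costs are Params rather than Vars. If model.Cost# were themselves variables, or model.MachineCost were an Expression that already contains products of variables, Gurobi would indeed see a quadratic term.

import pyomo.environ as pyo

m = pyo.ConcreteModel()
m.Machines = pyo.Set(initialize=[1, 2, 3])
m.Cost = pyo.Param(m.Machines, initialize={1: 100.0, 2: 250.0, 3: 400.0})  # data, not variables
m.Machine = pyo.Var(m.Machines, within=pyo.Binary)
m.MachineCost = pyo.Var(within=pyo.NonNegativeReals)

# parameter * binary variable is linear, so Gurobi accepts this
m.cost_link = pyo.Constraint(
    expr=sum(m.Cost[i] * m.Machine[i] for i in m.Machines) == m.MachineCost)
m.one_machine = pyo.Constraint(expr=sum(m.Machine[i] for i in m.Machines) == 1)

m.obj = pyo.Objective(expr=m.MachineCost, sense=pyo.minimize)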

Does Julia have a way to solve for unknown variables

Is there a function in Julia similar to the Solver function in Excel, where I can provide an equation and it will solve for the unknown variable? If not, does anybody know the math behind Excel's Solver?
I am not expecting anybody to solve the equation, but if it helps:
Price = (Earnings_1/(1+r)^1)+(Earnings_2/(1+r)^2)+(Earnings_3/(1+r)^3)+(Earnings_4/(1+r)^4)+(Earnings_5/(1+r)^5)+(((Earnings_5)(RiskFreeRate))/((1+r)^5)(1-RiskFreeRate))
The known variables are: Price, All Earnings, and RiskFreeRate. I am just trying to figure out how to solve for r.
Write this instead as an equation f(r) = 0 by moving Price over to the other side. Now it's a rootfinding problem. If you only have one variable you're solving for (which looks to be the case), then Roots.jl is a good choice.
fzero(f, a::Real, b::Real)
will search for a solution between a and b, for example, and the docs list further algorithm choices for when you don't know a bracketing range and can only supply an initial guess.
In addition, KINSOL from Sundials.jl is good when you know you're starting close to a multidimensional root. For multidimensional problems where you need some robustness to the initial condition, I'd recommend NLsolve.jl.
There's nothing like that out of the box, no. Root finding is a science in itself.
Luckily for you, your function has an analytic first derivative with respect to r. That means you can use Newton-Raphson, which will be extremely stable for your function.
I'm sure you're aware your function behaves badly around r = -1.
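To illustrate the idea, here is a bare-bones Newton-Raphson iteration, written in Python only because it is quick to write down; the same few lines translate directly to Julia. The earnings and price are made up, and the terminal-value term from the question is left out; it would be added to f and f_prime in the same way.

# Solve f(r) = 0 where f(r) is the present value of the earnings minus Price
earnings = [10.0, 11.0, 12.0, 13.0, 14.0]  # hypothetical Earnings_1 .. Earnings_5
price = 48.0                               # hypothetical known Price

def f(r):
    return sum(e / (1 + r) ** t for t, e in enumerate(earnings, start=1)) - price

def f_prime(r):
    # analytic derivative of each Earnings_t/(1+r)^t term
    return sum(-t * e / (1 + r) ** (t + 1) for t, e in enumerate(earnings, start=1))

r = 0.1  # initial guess, safely away from r = -1
for _ in range(50):
    step = f(r) / f_prime(r)
    r -= step
    if abs(step) < 1e-12:
        break
print(r)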

Functional alternative to caching known "answers"

I think the best way to frame this question is with an example... so, the actual reason I decided to ask about this is Problem 55 on Project Euler. The problem asks for the number of Lychrel numbers below 10,000. In an imperative language, I would collect the list of numbers leading up to the final palindrome and push them to a list outside of my function. I would then check each incoming number to see if it was part of that list, and if so, simply stop the test and conclude that the number is NOT a Lychrel number. I would do the same thing with non-Lychrel numbers and their preceding numbers.
I've done this before and it has worked out nicely. However, it seems like a big hassle to implement this in Haskell without adding a bunch of extra arguments to my functions to hold the predecessors, plus a top-level parent function to hold all of the numbers I need to store.
I'm just wondering if there is some kind of tool I'm missing here, or a standard way to do this? I've read that Haskell kind of "naturally caches" values (for example, if I defined the odd numbers as odds = filter odd [1..], I could refer to that whenever I wanted to), but it seems to get complicated when I need to dynamically add elements to a list.
Any suggestions on how to tackle this?
Thanks.
PS: I'm not asking for an answer to the Project Euler problem, I just want to get to know Haskell a bit better!
I believe you're looking for memoization. There are a number of ways to do this. One fairly simple way is with the MemoTrie package. Alternatively, if you know your input domain is a bounded set of numbers (e.g. [0, 10000)), you can create an Array whose values are the results of your computation, and then just index into the array with your input. The Array approach won't work here, though, because even though your input numbers are below 10,000, subsequent iterations can trivially grow larger than 10,000.
That said, when I solved Problem 55 in Haskell, I didn't bother doing any memoization whatsoever. It turned out to just be fast enough to run (up to) 50 iterations on all input numbers. In fact, running that right now takes 0.2s to complete on my machine.
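Just to show the shape of the memoization idea in compact form (written in Python because it is short; MemoTrie plays roughly the same role in Haskell). Note the cache here keys on (number, remaining budget), so it is a sketch of the mechanism rather than the exact predecessor-sharing scheme described in the question, and the 50-step cutoff is the one the problem statement specifies.

from functools import lru_cache

def reverse_add(n):
    return n + int(str(n)[::-1])

def is_palindrome(n):
    s = str(n)
    return s == s[::-1]

@lru_cache(maxsize=None)
def becomes_palindrome(n, budget):
    # True if n reaches a palindrome within `budget` reverse-and-add steps.
    if budget == 0:
        return False
    m = reverse_add(n)
    return is_palindrome(m) or becomes_palindrome(m, budget - 1)

print(sum(1 for n in range(1, 10000) if not becomes_palindrome(n, 50)))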

A reverse inference engine (find a random X for which foo(X) is true)

I am aware that languages like Prolog allow you to write things like the following:
mortal(X) :- man(X). % All men are mortal
man(socrates). % Socrates is a man
?- mortal(socrates). % Is Socrates mortal?
yes
What I want is something like this, but backwards. Suppose I have this:
mortal(X) :- man(X).
man(socrates).
man(plato).
man(aristotle).
I then ask it to give me a random X for which mortal(X) is true (thus it should give me one of 'socrates', 'plato', or 'aristotle' according to some random seed).
My questions are:
Does this sort of reverse inference have a name?
Are there any languages or libraries that support it?
EDIT
As somebody pointed out below, you can simply ask mortal(X) and it will return all X, from which you can then pick one at random. What if, however, that list were very large, perhaps in the billions? Obviously in that case it wouldn't do to generate every possible result before picking one.
To see how this would be a practical problem, imagine a simple grammar that generated a random sentence of the form "adjective1 noun1 adverb transitive_verb adjective2 noun2". If the lists of adjectives, nouns, verbs, etc. are very large, you can see how the combinatorial explosion is a problem. If each list had 1000 words, you'd have 1000^6 possible sentences.
Instead of Prolog's standard depth-first search, a randomized depth-first search strategy could easily be implemented. All that is required is to randomize the program flow at choice points, so that every time a disjunction is reached a random branch of the search tree (= the Prolog program) is selected instead of the first one.
Note, though, that this approach does not guarantee that all solutions are equally probable. To guarantee that, you would need to know in advance how many solutions each branch will generate, so that the randomization can be weighted accordingly.
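A small sketch of that weighting idea in Python, with a toy hard-coded choice tree standing in for the Prolog search tree: each branch is picked with probability proportional to the number of solutions beneath it, so every individual solution ends up equally likely, which plain uniform branch-picking would not achieve. The catch, as noted above, is that the per-branch counts (computed here by brute force) have to be available cheaply in advance.

import random

# Toy search tree: an inner node is a list of child branches, a string is a leaf solution.
tree = [["socrates"], [["plato"], ["aristotle"]]]

def count_solutions(node):
    if isinstance(node, str):
        return 1
    return sum(count_solutions(child) for child in node)

def sample_uniform(node):
    # Descend the tree, weighting each branch by how many solutions it contains.
    if isinstance(node, str):
        return node
    weights = [count_solutions(child) for child in node]
    child = random.choices(node, weights=weights, k=1)[0]
    return sample_uniform(child)

print(sample_uniform(tree))  # socrates, plato and aristotle each come up with probability 1/3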
I've never used Prolog or anything similar, but judging by what Wikipedia says on the subject, asking
?- mortal(X).
should list everything for which mortal is true. After that, just pick one of the results.
So to answer your questions,
I'd go with "a query with a variable in it"
From what I can tell, Prolog itself should support it quite fine.
I don't think you can calculate the nth solution directly, but you can calculate the first n solutions (with n randomly picked) and take the last one. Of course this would be problematic if n = 10^(big_number)...
You could also do something like
mortal(X) :- man(X).
man(X) :- random(1,4,ID), man(ID,X).
man(1,socrates).
man(2,plato).
man(3,aristotle).
but the problem is that if not every man were mortal, for example if only 1 out of 1,000,000 were, you would have to search a lot. It would be like searching for solutions to an equation by trying random numbers until you find one.
You could develop some sort of heuristic to find a solution close to the number but that may affect (negatively) the randomness.
I suspect that there is no way to do it more efficiently: either you calculate the full set of solutions and pick one, or you keep picking members of a superset of the solutions until you hit one that is actually a solution. But don't take my word for it xd
