OR-Tools CVRP with Multiple Trips

I'm trying to use OR-Tools' Routing Solver to solve a Multi-Trip Capacitated VRP. What I need is:
- one depot, where every route starts and ends
- one unloading location (different from the depot)
- a time window and a demand for each node
So the vehicles should pick up goods from the nodes until their capacity is filled, then go to the unloading location, unload all their weight, and keep collecting demand from nodes until a time limit is reached or all the goods are collected, then return to the depot.
The CVRP Reload example seems very close, but in my case a vehicle must visit the unloading location before returning to the depot at the end of its route. In other words, a vehicle cannot arrive at the depot (the start/end location) with load.
Example:
0: Depot
1: Unloading Location
2, 3, 4, 5, 6, 7: Nodes to pick up demand
Desired: 0 > 2 > 3 > 4 > 1 (unload) > 5 > 6 > 7 > 1 (unload) > 0
What cvrp_reload returns: 0 > 2 > 3 > 4 > 1 (unload) > 5 > 6 > 7 > 0
I'm fairly new to OR-Tools and trying to figure this out. Can you help me if you have any ideas?
I'm using Python and OR-Tools v8.2.
This is a cross-post from GitHub.
I think it's possible to implement a precedence constraint before the last point (unload before depot) by using a count dimension, but I couldn't figure out how.

Simply add:
# Vehicles must be empty upon arrival
capacity_dimension = routing.GetDimensionOrDie("Capacity")
for v in range(manager.GetNumberOfVehicles()):
    print(f"vehicle {v}")
    end = routing.End(v)
    # routing.solver().Add(capacity_dimension.CumulVar(end) == 0)  # see comment below
    capacity_dimension.SetCumulVarSoftUpperBound(end, 0, 100_000)
possible output:
./2442_unload.py
...
I0417 23:53:02.181640 17437 search.cc:260] Solution #317 (372696, objective maximum = 4838552, time = 2969 ms, branches = 3223, failures = 940, depth = 33, MakeInactiveOperator, neighbors = 265820, filtered neighbors = 317, accepted neighbors = 317, memory used = 14.93 MB, limit = 99%)
I0417 23:53:20.290527 17437 search.cc:260] Solution #318 (469816, objective minimum = 372696, objective maximum = 4838552, time = 2987 ms, branches = 3239, failures = 945, depth = 33, MakeInactiveOperator, neighbors = 267395, filtered neighbors = 318, accepted neighbors = 318, memory used = 14.93 MB, limit = 99%)
I0417 23:53:21.045410 17437 search.cc:260] Solution #319 (469816, objective minimum = 372696, objective maximum = 4838552, time = 2988 ms, branches = 3247, failures = 947, depth = 33, MakeActiveOperator, neighbors = 267415, filtered neighbors = 319, accepted neighbors = 319, memory used = 14.93 MB, limit = 99%)
I0417 23:53:22.304931 17437 search.cc:260] Solution #320 (372696, objective maximum = 4838552, time = 2989 ms, branches = 3253, failures = 949, depth = 33, MakeActiveOperator, neighbors = 267464, filtered neighbors = 320, accepted neighbors = 320, memory used = 14.93 MB, limit = 99%)
I0417 23:53:30.987548 17437 search.cc:260] Finished search tree (time = 2998 ms, branches = 3259, failures = 982, neighbors = 268318, filtered neighbors = 320, accepted neigbors = 320, memory used = 14.93 MB)
I0417 23:53:31.046630 17437 search.cc:260] End search (time = 2998 ms, branches = 3259, failures = 982, memory used = 14.93 MB, speed = 1087 branches/s)
Objective: 372696
dropped orders: [25]
dropped reload stations: [3, 5]
Route for vehicle 0:
0 Load(0) Time(0,0) -> 20 Load(0) Time(75,506) -> 12 Load(3) Time(94,525) -> 14 Load(6) Time(119,550) -> 13 Load(10) Time(140,700) -> 8 Load(13) Time(159,1000) -> 0 Load(0) Time(237,1500)
Distance of the route: 2624m
Load of the route: 0
Time of the route: 237min
Route for vehicle 1:
1 Load(0) Time(0,0) -> 19 Load(0) Time(2,182) -> 24 Load(3) Time(20,200) -> 26 Load(7) Time(42,400) -> 4 Load(15) Time(89,770) -> 7 Load(15) Time(92,773) -> 11 Load(0) Time(169,850) -> 17 Load(3) Time(188,959) -> 10 Load(11) Time(229,1000) -> 1 Load(0) Time(307,1500)
Distance of the route: 2648m
Load of the route: 0
Time of the route: 307min
Route for vehicle 2:
2 Load(0) Time(0,0) -> 23 Load(0) Time(15,63) -> 22 Load(4) Time(37,85) -> 21 Load(7) Time(85,101) -> 9 Load(10) Time(104,120) -> 18 Load(0) Time(184,200) -> 16 Load(8) Time(226,600) -> 15 Load(12) Time(248,800) -> 6 Load(15) Time(268,1000) -> 2 Load(0) Time(346,1500)
Distance of the route: 2624m
Load of the route: 0
Time of the route: 346min
Total Distance of all routes: 7896m
Total Load of all routes: 0
Total Time of all routes: 890min
Note: I use the soft constraint since otherwise the solver prefers to drop all nodes and never manages to escape from that point of the solution search space.
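For reference, the "dropped orders" and "dropped reload stations" lines in the log come from making visits optional with disjunctions, as in the cvrp_reload example this builds on. A minimal sketch, where the node range and penalty value are assumptions:
# hypothetical sketch: let the solver drop a visit at a fixed penalty
penalty = 100_000  # assumed value
for node in range(1, len(data['demands'])):  # assumed data layout
    routing.AddDisjunction([manager.NodeToIndex(node)], penalty)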

Related

How to select a good sample size of nodes from a graph

I have a network that has a node attribute labeled as 0 or 1. I want to find how the distance between nodes with the same attribute differs from the distance between nodes with different attributes. As it is computationally difficult to find the distance between all combinations of nodes, I want to select a sample of nodes. How should I select a sample of nodes? I am working in Python with networkx.
You've not given many details, so I'll invent some data and make assumptions in the hope it's useful.
Start by importing packages and sampling a dataset:
import random
import networkx as nx

# human social networks tend to be "scale-free"
G = nx.generators.scale_free_graph(1000)
# set labels to either 0 or 1
for i, attr in G.nodes.data():
    attr['label'] = 1 if random.random() < 0.2 else 0
Next, calculate the shortest paths between random pairs of nodes:
results = []
# I had to use 100,000 pairs to get the CI small enough below
for _ in range(100000):
    a, b = random.sample(list(G.nodes), 2)
    try:
        n = nx.algorithms.shortest_path_length(G, a, b)
    except nx.NetworkXNoPath:
        # no path between nodes found
        n = -1
    results.append((a, b, n))
Finally, here is some code to summarise the results and print them out:
from collections import Counter
from scipy import stats

# somewhere to count pairs: both 0, both 1, different labels
c_0 = Counter()
c_1 = Counter()
c_d = Counter()
# accumulate distances into the above counters
node_data = {i: a['label'] for i, a in G.nodes.data()}
cc = {(0, 0): c_0, (0, 1): c_d, (1, 0): c_d, (1, 1): c_1}
for a, b, n in results:
    cc[node_data[a], node_data[b]][n] += 1

# code to display the results nicely
def show(c, title):
    s = sum(c.values())
    print(f'{title}, n={s}')
    for k, n in sorted(c.items()):
        # calculate some sort of CI over the Monte Carlo error
        lo, hi = stats.beta.ppf([0.025, 0.975], 1 + n, 1 + s - n)
        print(f'{k:5}: {n:5} = {n/s:6.2%} [{lo:6.2%}, {hi:6.2%}]')

show(c_0, 'both 0')
show(c_1, 'both 1')
show(c_d, 'different')
The above prints out:
both 0, n=63930
-1: 60806 = 95.11% [94.94%, 95.28%]
1: 107 = 0.17% [ 0.14%, 0.20%]
2: 753 = 1.18% [ 1.10%, 1.26%]
3: 1137 = 1.78% [ 1.68%, 1.88%]
4: 584 = 0.91% [ 0.84%, 0.99%]
5: 334 = 0.52% [ 0.47%, 0.58%]
6: 154 = 0.24% [ 0.21%, 0.28%]
7: 50 = 0.08% [ 0.06%, 0.10%]
8: 3 = 0.00% [ 0.00%, 0.01%]
9: 2 = 0.00% [ 0.00%, 0.01%]
both 1, n=3978
-1: 3837 = 96.46% [95.83%, 96.99%]
1: 6 = 0.15% [ 0.07%, 0.33%]
2: 34 = 0.85% [ 0.61%, 1.19%]
3: 34 = 0.85% [ 0.61%, 1.19%]
4: 31 = 0.78% [ 0.55%, 1.10%]
5: 30 = 0.75% [ 0.53%, 1.07%]
6: 6 = 0.15% [ 0.07%, 0.33%]
To save space I've cut off the section where the labels differ. The proportions in square brackets are the 95% CI of the Monte Carlo error. Using more iterations reduces this error, while obviously taking more CPU time.
This is more or less an extension of my discussion with Sam Mason; I only want to give you some timing numbers, because, as discussed, retrieving all distances may be feasible and may even be faster. Based on the code in Sam Mason's answer, I tested both variants: for 1000 nodes, retrieving all distances is much faster than sampling 100,000 pairs. The main advantage is that all retrieved distances are used.
import random
import networkx as nx
import time

# human social networks tend to be "scale-free"
G = nx.generators.scale_free_graph(1000)
# set labels to either 0 or 1
for i, attr in G.nodes.data():
    attr['label'] = 1 if random.random() < 0.2 else 0

def timing(f):
    def wrap(*args, **kwargs):
        time1 = time.time()
        ret = f(*args, **kwargs)
        time2 = time.time()
        print('{:s} function took {:.3f} ms'.format(f.__name__, (time2 - time1) * 1000.0))
        return ret
    return wrap

@timing
def get_sample_distance():
    results = []
    # I had to use 100,000 pairs to get the CI small enough below
    for _ in range(100000):
        a, b = random.sample(list(G.nodes), 2)
        try:
            n = nx.algorithms.shortest_path_length(G, a, b)
        except nx.NetworkXNoPath:
            # no path between nodes found
            n = -1
        results.append((a, b, n))

@timing
def get_all_distances():
    # dict() materializes the generator so the work is actually done here
    all_distances = dict(nx.shortest_path_length(G))

get_sample_distance()
# get_sample_distance function took 2338.038 ms
get_all_distances()
# get_all_distances function took 304.247 ms
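If you retrieve all distances, they can be aggregated into the same counters as in the sampling answer; a sketch, assuming G and its labels from the code above:
from collections import Counter

# mirror the counters from the sampling answer
c_0, c_1, c_d = Counter(), Counter(), Counter()
cc = {(0, 0): c_0, (0, 1): c_d, (1, 0): c_d, (1, 1): c_1}
node_data = {i: a['label'] for i, a in G.nodes.data()}
# unreachable pairs simply never appear here (no -1 bucket needed)
for a, lengths in nx.shortest_path_length(G):
    for b, n in lengths.items():
        if a != b:
            cc[node_data[a], node_data[b]][n] += 1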

OR-Tools for VRP without constraints wrongly grouped

I am using OR-Tools to solve a VRP without any constraints. Here is the source code:
# imports implied by the snippet below
from django.http import HttpResponse
from ortools.constraint_solver import routing_enums_pb2, pywrapcp

def create_data_model():
    """Stores the data for the problem."""
    data = {}
    data['distance_matrix'] = [
        [0, 20079, 2613, 8005, 19277, 12468, 13701],
        [0, 0, 21285, 16012, 32574, 35394, 28806],
        [0, 18233, 0, 5392, 19965, 19650, 13064],
        [0, 15013, 5639, 0, 22883, 22570, 15982],
        [0, 32991, 19256, 21815, 0, 18414, 9112],
        [0, 34348, 16976, 23122, 15678, 0, 14647],
        [0, 27652, 13917, 16476, 8043, 14820, 0],
    ]
    data['time_matrix'] = [
        [0, 1955, 508, 1331, 1474, 1427, 1292],
        [0, 0, 1795, 1608, 2057, 2410, 2036],
        [0, 1485, 0, 823, 1370, 1541, 1100],
        [0, 1402, 924, 0, 1533, 1637, 1263],
        [0, 2308, 1663, 1853, 0, 1766, 1104],
        [0, 2231, 1373, 1660, 1441, 0, 1554],
        [0, 1998, 1353, 1543, 764, 1550, 0],
    ]
    data['num_vehicles'] = 6
    data['depot'] = 0
    return data
def print_solution(data, manager, routing, solution):
    """Prints solution on console."""
    max_route_distance = 0
    for vehicle_id in range(data['num_vehicles']):
        index = routing.Start(vehicle_id)
        plan_output = 'Route for vehicle {}:\n'.format(vehicle_id)
        route_distance = 0
        while not routing.IsEnd(index):
            plan_output += ' {} -> '.format(manager.IndexToNode(index))
            previous_index = index
            index = solution.Value(routing.NextVar(index))
            route_distance += routing.GetArcCostForVehicle(
                previous_index, index, vehicle_id)
        plan_output += '{}\n'.format(manager.IndexToNode(index))
        plan_output += 'Distance of the route: {}m\n'.format(route_distance)
        print(plan_output)
        max_route_distance = max(route_distance, max_route_distance)
    print('Maximum of the route distances: {}m'.format(max_route_distance))
def test(request):
    # Instantiate the data problem.
    data = create_data_model()
    # Create the routing index manager.
    manager = pywrapcp.RoutingIndexManager(len(data['distance_matrix']),
                                           data['num_vehicles'], data['depot'])
    # Create Routing Model.
    routing = pywrapcp.RoutingModel(manager)

    def distance_callback(from_index, to_index):
        """Returns the distance between the two nodes."""
        # Convert from routing variable Index to distance matrix NodeIndex.
        from_node = manager.IndexToNode(from_index)
        to_node = manager.IndexToNode(to_index)
        return data['distance_matrix'][from_node][to_node]

    transit_callback_index = routing.RegisterTransitCallback(distance_callback)
    routing.SetArcCostEvaluatorOfAllVehicles(transit_callback_index)
    dimension_name = 'Distance'
    routing.AddDimension(
        transit_callback_index,
        0,  # no slack
        1000000000,  # vehicle maximum travel distance
        True,  # start cumul to zero
        dimension_name)
    distance_dimension = routing.GetDimensionOrDie(dimension_name)
    distance_dimension.SetGlobalSpanCostCoefficient(35394)
    # Setting first solution heuristic.
    search_parameters = pywrapcp.DefaultRoutingSearchParameters()
    search_parameters.first_solution_strategy = (
        routing_enums_pb2.FirstSolutionStrategy.PATH_CHEAPEST_ARC)
    # Solve the problem.
    solution = routing.SolveWithParameters(search_parameters)
    # Print solution on console.
    if solution:
        print_solution(data, manager, routing, solution)
    return HttpResponse('')
Everything above was copy-pasted from Google's example, except for the following:
- my own distance matrix & number of vehicles
- a very large vehicle maximum travel distance in the AddDimension call
- SetGlobalSpanCostCoefficient(35394)
I followed "OR-Tools solve traveling salesman (TSP) without returning to the home node" to set the distance from every node back to 0 (the depot) to 0.
The result for the above code is shown below:
Route for vehicle 0:
0 -> 1 -> 0
Distance of the route: 20079m
Route for vehicle 1:
0 -> 5 -> 0
Distance of the route: 12468m
Route for vehicle 2:
0 -> 4 -> 0
Distance of the route: 19277m
Route for vehicle 3:
0 -> 2 -> 3 -> 0
Distance of the route: 8005m
Route for vehicle 4:
0 -> 6 -> 0
Distance of the route: 13701m
Route for vehicle 5:
0 -> 0
Distance of the route: 0m
Maximum of the route distances: 20079m
To verify the above output, I marked the points on Google Maps. The numbering order is the same as the order of the distance matrix.
[Image: coords on map]
The depot (starting point) is in Watthana, which can be seen near marker B.
Clearly, from Watthana, the cheapest option is to visit 2, 3 and 1 in a single trip. But OR-Tools returns this as two trips (as seen in the routes for vehicles 0 and 3). This can also be verified by manually adding the distances:
Dist from home to 2 to 3 to 1 = 2613+5392+15013 = 23018m
Dist of vehicles 0 and 3 = 20079+8005 = 28084m
What am I doing wrong? How can I get the solver to not separate out point 1? Also note that ideally points E, F, D could have been grouped as well, but they were not.
Thanks in advance!
From the question, I think what you want is to reduce the cumulative distance traveled by all vehicles.
distance_dimension.SetGlobalSpanCostCoefficient(35394)
This line works against that goal: it adds a global span cost to the objective, weighted by 35394, which pushes the solver to balance the routes rather than minimize their total length:
global_span_cost = coefficient * (Max(dimension end value) - Min(dimension start value))
In your case route balancing is not a high priority, so the solution is to comment out that line or reduce the coefficient to a small value like 1 or 2. With a coefficient of 35394, even a 1 km spread between routes adds over 35 million to the objective, dwarfing the route distances themselves.
Read more about GSpanCoeff.
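For instance (the small value here is arbitrary):
# keep a small span cost so routes stay roughly balanced without it
# dominating the total-distance objective
distance_dimension.SetGlobalSpanCostCoefficient(1)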
Now the solution should be
Route for vehicle 0:
0 -> 2 -> 3 -> 1 -> 0
Distance of the route: 23018m
Route for vehicle 1:
0 -> 6 -> 4 -> 0
Distance of the route: 21744m
Route for vehicle 2:
0 -> 5 -> 0
Distance of the route: 12468m
Maximum of the route distances: 23018m
Sum of the route distances: 57230m

Recursive memoization solution to solve "Counting Change"

I am trying to solve the "Counting Change" problem with memoization.
Consider the following problem: How many different ways can we make change of $1.00, given half-dollars, quarters, dimes, nickels, and pennies? More generally, can we write a function to compute the number of ways to change any given amount of money using any set of currency denominations?
And the intuitive solution with recursion:
The number of ways to change an amount a using n kinds of coins equals
the number of ways to change a using all but the first kind of coin, plus
the number of ways to change the smaller amount a - d using all n kinds of coins, where d is the denomination of the first kind of coin.
#+BEGIN_SRC python :results output
# cache = {} # add cache
def count_change(a, kinds=(50, 25, 10, 5, 1)):
    """Return the number of ways to change amount a using coin kinds."""
    if a == 0:
        return 1
    if a < 0 or len(kinds) == 0:
        return 0
    d = kinds[0]  # d for denomination
    return count_change(a, kinds[1:]) + count_change(a - d, kinds)

print(count_change(100))
#+END_SRC
#+RESULTS:
: 292
I tried to take advantage of memoization:
Signature: count_change(a, kinds=(50, 25, 10, 5, 1))
Source:
def count_change(a, kinds=(50, 25, 10, 5, 1)):
    """Return the number of ways to change amount a using coin kinds."""
    if a == 0:
        return 1
    if a < 0 or len(kinds) == 0:
        return 0
    d = kinds[0]
    cache[a] = count_change(a, kinds[1:]) + count_change(a - d, kinds)
    return cache[a]
It works properly for small numbers like
In [17]: count_change(120)
Out[17]: 494
but fails on big numbers:
In [18]: count_change(11000)
---------------------------------------------------------------------------
RecursionError Traceback (most recent call last)
<ipython-input-18-52ba30c71509> in <module>
----> 1 count_change(11000)
/tmp/ipython_edit_h0rppahk/ipython_edit_uxh2u429.py in count_change(a, kinds)
9 return 0
10 d = kinds[0]
---> 11 cache[a] = count_change(a, kinds[1:]) + count_change(a - d, kinds)
12 return cache[a]
... last 1 frames repeated, from the frame below ...
/tmp/ipython_edit_h0rppahk/ipython_edit_uxh2u429.py in count_change(a, kinds)
9 return 0
10 d = kinds[0]
---> 11 cache[a] = count_change(a, kinds[1:]) + count_change(a - d, kinds)
12 return cache[a]
RecursionError: maximum recursion depth exceeded in comparison
What's the problem with the memoization solution?
In the memoized version, count_change has to take into account the highest index of coin you can use when making the recursive call, so that the already calculated values can be reused:
def count_change(n, k, kinds):
    if n < 0:
        return 0
    if (n, k) in cache:
        return cache[n, k]
    if k == 0:
        v = 1
    else:
        v = count_change(n - kinds[k], k, kinds) + count_change(n, k - 1, kinds)
    cache[n, k] = v
    return v
You can try:
cache = {}
count_change(120, 4, [1, 5, 10, 25, 50])
gives 494, while:
cache = {}
count_change(11000, 4, [1, 5, 10, 25, 50])
outputs: 9930221951
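As an aside (my sketch, not part of the original answer): a bottom-up table avoids deep recursion entirely, so no recursion limit is hit even for large amounts:
def count_change_iter(amount, kinds=(50, 25, 10, 5, 1)):
    # ways[i] = number of ways to make amount i with the coins seen so far
    ways = [1] + [0] * amount
    for d in kinds:
        for i in range(d, amount + 1):
            ways[i] += ways[i - d]
    return ways[amount]

print(count_change_iter(100))    # 292
print(count_change_iter(11000))  # 9930221951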

How can I generate random numbers in slabs of a range in Python3 or in a specific interval of range in Python3?

Suppose I want to generate 10 random numbers between 1 and 100, but picking one number from each slab: 1-10, 11-20, 21-30, .... So the result should not come out like this: 1, 7, 26, 29, 51, 56, 59, 89, 92, 95.
I want to pick numbers like this: 7, 14, 22, 39, 45, 58, 64, 76, 87, 93.
I have created a code sample. But I can't figure out later part of the problem.
import random

def getInteger(prompt):
    result = int(input(prompt))
    return result

range1 = getInteger("Please Enter Initial Range: ")
range2 = getInteger("Please Enter Ending Range: ")
range3 = getInteger("Please Enter the Range size: ")
myList = random.sample(range(range1, range2), range3)
myList.sort()
print("Random List is here: ", myList)
I am new to programming. I googled about it, but did not find any solution. Thank you guys in advance...
In your case, you need to pick a random number between 0 and 9 ten times, adding 10*i at step i.
import random

random_numbers = []
for i in range(0, 10):
    random_number = random.randrange(10)  # pick a number between 0-9
    random_number += 10 * i  # add 10 in each iteration
    random_numbers.append(random_number)
print(random_numbers)
EDIT: if you want to set your own values, this could work:
import random

random_numbers = []
begin = 100
end = 200
interval = 10
for i in range(0, round((end - begin) / interval)):
    random_number = random.randrange(interval)
    random_number += round(interval) * i + begin
    random_numbers.append(random_number)
print(random_numbers)
Consider using random.choice and a for loop:
>>> for i in range(1, 100, 10):
... print(random.choice(range(i, i + 10)))
...
10
19
21
34
45
51
68
74
88
98
>>> for i in range(1, 100, 10):
... print(random.choice(range(i, i + 10)))
...
6
14
30
37
50
56
65
79
85
94
You could use this:
import random

start = 1
stop = 100
interval = 10
ran = [random.choice(range(start + i*interval, start + (i+1)*interval - 1))
       for i in range(len(range(start, stop, interval)))]
print(ran)
Explanation:
i runs from 0 up to the number of intervals you get when stepping from start to stop by interval; for this example that is 10, so i takes the values 0 to 9.
random.choice uses this i to chunk the whole number range from start to stop into sub-ranges of interval size; choice then draws one of the numbers in this sub-range for your resulting list.
range(start + i*interval, start + (i+1)*interval-1)
# evaluates to
# i = 0: 1+0, 1+(0+1)*10-1 = 1,10
# i = 1: 1+10, 1+(1+1)*10-1 = 11,20
# etc.
Edit: this might overshoot on the upper limit, which is fixable by using
ran = [random.choice(range(start + i*interval, min(stop, start + (i+1)*interval - 1)))
       for i in range(len(range(start, stop, interval)))]
which limits the upper bound with min(stop, calculated end).
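As a side note, a compact equivalent using random.randint (which is inclusive on both ends), assuming the fixed slabs 1-10 through 91-100 from the question:
import random

# one draw per slab: 1-10, 11-20, ..., 91-100
numbers = [random.randint(lo, lo + 9) for lo in range(1, 100, 10)]
print(numbers)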

Path finding: A star not same length from A to B as from B to A

I am implementing the A star algorithm with the Manhattan distance for the 8-puzzle. [The solution is in spiral form]
1 2 3
8 0 4
7 6 5
In some cases, going from A to B will not take the same number of steps as going from B to A.
I think this is because, when several states on the open list have the same cost, it does not pick the same one, and thus does not expand the same nodes.
From
7 6 4
1 0 8
2 3 5
(A -> B)
7 6 4
1 8 0
2 3 5
(B -> A)
7 6 4
1 3 8
2 0 5
Which both have the same value using Manhattan distance.
Should I explore all paths with the same value?
Or should I change the heuristic to add some kind of tie-breaker?
Here is the relevant part of the code
def solve(self):
    cost = 0
    priority = 0
    self.parents[str(self.start)] = (None, 0, 0)
    open = p.pr()  # priority queue
    open.add(0, self.start, cost)
    while open:
        current = open.get()
        if current == self.goal:
            return self.print_solution(current)
        parent = self.parents[str(current)]
        cost = self.parents[str(current)][2] + 1
        for new_state in self.get_next_states(current):
            if str(new_state[0]) not in self.parents or cost < self.parents[str(new_state[0])][2]:
                priority = self.f(new_state) + cost
                open.add(priority, new_state[0], cost)
                self.parents[str(new_state[0])] = (current, priority, cost)
After wasting so much time rewriting my solve function many different ways, for nothing, I finally found the problem.
def get_next_states(self, mtx, direction):
    n = self.n
    pos = mtx.index(0)
    if direction != 1 and pos < self.length and (pos + 1) % n:
        yield (self.swap(pos, pos + 1, mtx), pos, 3)
    if direction != 2 and pos < self.length - self.n:
        yield (self.swap(pos, pos + n, mtx), pos, 4)
    if direction != 3 and pos > 0 and pos % n:
        yield (self.swap(pos, pos - 1, mtx), pos, 1)
    if direction != 4 and pos > n - 1:
        yield (self.swap(pos, pos - n, mtx), pos, 2)
It was in this function. The last if used to be "if direction != 4 and pos > n:".
So there were unexplored states.
2 days for a "-1".
It will teach me to do more unit testing.
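As an aside on the tie-breaking question raised above (not part of the original fix): a common A* tie-breaker is to prefer the larger g among equal f values, so the search commits to deeper, more promising paths. A tiny standalone illustration with heapq:
import heapq

open_heap = []
# (f, -g, state): among equal f values, the larger g pops first
for f, g, state in [(5, 2, 'a'), (5, 4, 'b'), (6, 1, 'c')]:
    heapq.heappush(open_heap, (f, -g, state))
print(heapq.heappop(open_heap))  # (5, -4, 'b') -- the deeper node wins the tie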
