Clarification on `failures` solver statistic in MiniZinc - constraint-programming

I have been playing around with a simple n-queens model in MiniZinc:
include "globals.mzn";
int: n_queens = 8;
array[1..n_queens] of var 1..n_queens: queens;
constraint alldifferent(queens);
constraint alldifferent(i in 1..n_queens) (queens[i] + i);
constraint alldifferent(i in 1..n_queens) (queens[i] - i);
solve satisfy;
The MiniZinc handbook mentions failures as the "number of leaf nodes that were failed". Following are the statistics after running the model:
%%%mzn-stat: initTime=0.000576
%%%mzn-stat: solveTime=0.000822
%%%mzn-stat: solutions=1
%%%mzn-stat: variables=24
%%%mzn-stat: propagators=19
%%%mzn-stat: propagations=1415
%%%mzn-stat: nodes=47
%%%mzn-stat: failures=22
%%%mzn-stat: restarts=0
%%%mzn-stat: peakDepth=5
%%%mzn-stat-end
There were 22 failures. Being a beginner to constraint programming, my understanding was that the entire purpose of the paradigm is to prune and avoid leaf nodes as much as possible. I am extra confused since the peak depth of the search tree is reported as 5 (not 8).
Am I interpreting these statistics right? If yes, why are there leaf node failures in the model? Will I create a better model by trying to reduce these failures?

Those values depend on the search strategy. Sometimes you cannot avoid a failed leaf node because it hasn't been pruned: nothing before it told the solver that the node was going to be a failure. Modeling the problem in a different way can prevent some failures, and it can also prevent the exploration of suboptimal solutions in the case of optimization problems.
These are the first three nodes that were evaluated in the search tree of MiniZinc's default search strategy. I labeled them in the image of the Search Tree in the order they were evaluated, and added the labels 4 and 5 to show the arrival at a feasible solution.
In the image, the blue dots are nodes where there is still uncertainty, the red squares are failures, the white dots are unevaluated nodes, the large triangles are whole branches where the search resulted only in failures, the green diamond marks a feasible solution, and orange diamonds mark feasible-but-not-best solutions (only in optimization problems).
Here is the explanation of each labeled node.
0: Root node: all variables are unassigned
Nothing has happened, these are all the decision variables and their full domains
queens = array1d(1..8, [[1..8], [1..8], [1..8], [1..8], [1..8], [1..8], [1..8], [1..8]]);
1: First decision
Then it picked the smallest value in the domain of the last variable and made the first split: the solver considered either queens[8] = 1 (left child of the root) or queens[8] = [2..8] (right child of the root). It evaluates queens[8] = 1 first, which brings the first node into existence,
queens = array1d(1..8, [[2..7], {2..6,8}, {2..5,7..8}, {2..4,6..8}, {2..3,5..8}, {2,4..8}, [3..8], 1]);
where the decision queens[8] = 1 has already been propagated to the other variables, removing values from their domains.
2: The search continues
Then it splits again, at queens[7]. This is the left child node, where queens[7] = 3 (the minimum value in the domain of that variable), after the propagation of that decision to the other variables.
queens = array1d(1..8, [{2,4..7}, {2,4..6}, {2,4..5,8}, {2,4,7..8}, {2,6..8}, [5..8], 3, 1]);
In hindsight (more like cheating, by looking at the image of the Search Tree) we know that this whole branch of the search will result in failures, but we cannot know that while searching: there is still uncertainty in some variables, and to rule the branch out we would have to evaluate all of its possibilities, which might still turn out feasible. Hopefully we find a satisfying solution before that. Before carrying on with the search, though, notice that some pruning has already been done, in the form of nodes that will never exist. For example, queens[4] can only take the values 2, 4, 7, 8 at this point, and we haven't made any decision on it; it's just the solver eliminating values from the variable that it knows would certainly result in failures. If we were doing a brute-force search, this variable would still have the same domain as in the root node, [1..8], because we haven't made a decision on it yet. So by propagating the constraints we are making a smarter search.
3: First Failure: but we carry on
Carrying on with the same strategy, it makes a split for queens[6], again with the minimum value: queens[6] = 5. But when this decision is propagated to the undecided variables, there is no assignment that satisfies all the constraints (here the value 8 was given to two queens), so this is a dead end and the solver must backtrack.
queens = array1d(1..8, [7, 2, 4, 8, 8, 5, 3, 1]); ---> Failure
So the very first three nodes of the search led to a failure.
The search continues like that: since the choice queens[6] = 5 caused a failure, it moves on to queens[6] = [6..8]; that part of the search also results in the failures that are encircled in red in the image of the Search Tree.
As you can probably guess by now, the search strategy is something like: go through the variables in order, and split the domain of the current variable by picking the smallest available value and putting the rest of the domain in another node. In MiniZinc search annotations these choices are called input_order and indomain_min.
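If it helps to see this mechanically, here is a toy Python re-implementation of that strategy (input_order plus indomain_min), using plain forward checking as the propagation. This is emphatically not Gecode's engine: real alldifferent propagators prune more, and the trace above branched on the last variable first, so the counts will not match the reported statistics. It only shows how failed leaves arise even with pruning.

def queens_stats(n=8):
    stats = {"nodes": 0, "failures": 0}
    domains = [set(range(1, n + 1)) for _ in range(n)]

    def propagate(doms, i, v):
        # Forward checking: queens[i] = v removes from every other queen
        # the same value and the two diagonal values.
        new = [set(d) for d in doms]
        new[i] = {v}
        for j in range(n):
            if j != i:
                new[j] -= {v, v + (j - i), v - (j - i)}
                if not new[j]:
                    return None  # wiped-out domain: a failed node
        return new

    def search(doms, i):
        if i == n:  # every variable assigned: a solution
            return [min(d) for d in doms]
        for v in sorted(doms[i]):        # indomain_min: smallest value first
            stats["nodes"] += 1
            child = propagate(doms, i, v)
            if child is None:
                stats["failures"] += 1   # pruning could not foresee this leaf
                continue
            sol = search(child, i + 1)   # input_order: next variable in order
            if sol:
                return sol
        return None

    return search(domains, 0), stats

print(queens_stats())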
Now we fast forward the search to the node labeled 4.
4: Prelude to a solution: are we there yet?
Here you can see that queens[8] = 1 (it remains the same) and queens[7] = 5, while in node 2 it was queens[7] = 3. That means all the possibilities with queens[8] = 1 and queens[7] = [3..4] were either evaluated or pruned, and all led to failures.
queens = array1d(1..8, [{2,4,6..7}, {2..3,6}, {2..4,7}, {3..4,7}, {2,6}, 8, 5, 1]);
Then this node split into queens[5] = 2 (left child), which led to more failures, and queens[5] = 6 (right child).
5: We struck gold: a feasible solution!
queens[5] = 6 propagated, and the result satisfied all the constraints, so we have a solution and we stop the search.
queens = array1d(1..8, [4, 2, 7, 3, 6, 8, 5, 1]);
Pruning
Arriving at the solution required only 47 nodes of the gigantic whole search tree. The area inside the blue line is the search tree where the nodes labeled 0, 1, 2, 3, 4, 5 are. It is gigantic even pruned, for this relatively small instance of 8 decision variables with domains of cardinality 8, and even with global constraints, which certainly reduce the span of the search tree a lot since they communicate the domains of the variables to each other much more effectively than the solver's constraint store. The whole search tree has only 723 nodes in total (internal nodes and leaves), of which only 362 are leaves, while a brute-force search could generate all 8^8 = 16,777,216 possible leaf nodes directly (again, it might not, but it could); it's like 8 octal digits, since there are 8 variables with domain cardinality 8. That is a big saving when you compare them: of the 16,777,216 combinations, only 362 made some sense to the solver, and only 92 were feasible. That's less than 0.001% of the combinations of the whole search space you would face by, for example, generating a candidate at random as 8 random digits in the range [1..8] and evaluating its feasibility afterwards. Talk about a needle in a haystack.
Pruning basically means reducing the search space. Anything better than evaluating ALL the combinations, even removing one single possibility, counts as a pruned search space. Since this was a satisfaction problem rather than an optimization one, the pruning just removes infeasible values from the domains of the variables.
In optimization problems there are two types of pruning: the satisfaction pruning described above, which eliminates infeasible values, and the pruning done by the bounds of the objective function. When the bounds of the objective function can be determined before all the variables have reached a value, and the branch is determined to be "worse" than the current "best" value found so far (i.e., in a minimization, the smallest value the objective could take in a branch is larger than the smallest value already found in a feasible solution), you can prune that branch, which surely contains feasible (but not as good) solutions as well as infeasible ones, and save some work. Note that you still have to prune or evaluate the whole tree if you want to find the optimal solution and prove that it is optimal.
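To make that second kind of pruning concrete, here is a minimal branch-and-bound sketch in Python on a made-up 3x3 assignment instance (the cost matrix and the bound function are mine, purely for illustration): a branch is abandoned as soon as an optimistic bound on the objective is no better than the incumbent.

costs = [[4, 7, 3],
         [2, 6, 5],
         [8, 1, 9]]          # made-up instance: each row picks a distinct column
best = float("inf")

def lower_bound(partial_cost, next_row, used_cols):
    # Optimistic bound: cost so far plus the cheapest available cost per row.
    rest = sum(min(c for j, c in enumerate(row) if j not in used_cols)
               for row in costs[next_row:])
    return partial_cost + rest

def branch_and_bound(partial_cost=0, row=0, used_cols=frozenset()):
    global best
    if lower_bound(partial_cost, row, used_cols) >= best:
        return                    # bound pruning: cannot beat the incumbent
    if row == len(costs):
        best = partial_cost       # feasible and strictly better: new incumbent
        return
    for j, c in enumerate(costs[row]):
        if j not in used_cols:    # satisfaction pruning: columns must differ
            branch_and_bound(partial_cost + c, row + 1, used_cols | {j})

branch_and_bound()
print(best)  # 6 (rows pick columns 2, 0, 1)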
To explore search trees like the ones in the images, you can run your model with the gecode-gist solver in the MiniZinc IDE, or use minizinc --solver gecode-gist <modelFile> <dataFile> on the command line. Double-clicking on one of the nodes shows the state of the decision variables, just like the ones in this post.
And to go even further, use solve :: int_search(queens, varChoice, valChoice, complete) satisfy; to test different search strategies:
% variable selections:
ann : varChoice
% = input_order
% = first_fail
% = smallest
% = largest
;
% value selections:
ann : valChoice
% = indomain_min
% = indomain_max
% = indomain_median
% = indomain_random
% = indomain_split
% = indomain_reverse_split
;
Just paste this into your model and uncomment one varChoice annotation and one valChoice annotation to test that combination of variable selection and value selection, and see whether one strategy finds the solution with fewer failures, fewer nodes, or fewer propagations. You can read more about them in the MiniZinc documentation.
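If you want to compare all the combinations automatically, a hedged sketch of one way to script it is below. It assumes the minizinc binary with a Gecode solver is on PATH (-s is MiniZinc's statistics flag); the file name queens.mzn and the {STRATEGY} placeholder inside it are my invention, not part of the model above.

import itertools
import subprocess

template = open("queens.mzn").read()   # model text with a {STRATEGY} placeholder

var_choices = ["input_order", "first_fail", "smallest", "largest"]
val_choices = ["indomain_min", "indomain_max", "indomain_median",
               "indomain_random", "indomain_split", "indomain_reverse_split"]

for var_c, val_c in itertools.product(var_choices, val_choices):
    with open("queens_tmp.mzn", "w") as f:
        f.write(template.replace(
            "{STRATEGY}",
            f"solve :: int_search(queens, {var_c}, {val_c}, complete) satisfy;"))
    out = subprocess.run(["minizinc", "--solver", "gecode", "-s", "queens_tmp.mzn"],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        # statistics lines look like "%%%mzn-stat: failures=22"
        if line.startswith(("%%%mzn-stat: failures", "%%%mzn-stat: nodes")):
            print(f"{var_c:12} {val_c:22} {line.split(': ')[1]}")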

Related

Reduce Time complexity

Question at hand: Complete the function minimumSwaps in the editor below. It must return an integer representing the minimum number of swaps required to sort the array.
My Approach:
def minimumSwaps(arr):
    count = 0
    temp = [None] * len(arr)  # unused
    res1 = sorted(arr)
    while res1 != arr:
        for i in range(len(arr)):
            if res1[i] != arr[i]:
                y = res1.index(arr[i])
                arr[y], arr[i] = arr[i], arr[y]
                count = count + 1
    return count
The code gives the required output for the majority of the cases, but fails a few due to time-limit-exceeded errors. Could someone suggest a few changes to reduce the time complexity and make the code more efficient? If possible, please try not to change the code in its entirety; I want to learn to make code more efficient rather than trying a whole new approach altogether.
Link to one of the huge test cases
To me, this is a graph problem. Maybe it's possible with a simpler solution, but I don't think so.
You can observe that to get the minimum number of swaps, you just have to move every element into its sorted position. You can figure out where each element is supposed to be by sorting the array and keeping an array indexed by element (or a dictionary, for that matter) that maps to the sorted index.
Now, build a graph by making each item its own node, and connecting with a directed edge to the place it needs to be. We can observe that for a cycle of length k, we will need k-1 swaps to solve it. This is because we just need to swap each item forward, but the last swap actually solves two items rather than one. Thus, the answer is the sum of k-1 for each cycle, which can be reduced to n-c where c is the number of cycles.
To see why this works, consider the case of [2,3,1]. The sorted version of this array is [1,2,3]. Now, build the graph, where index 0 points to index 1 (since 2 needs to be in index 1), index 1 points to index 2, and index 2 points to index 0. We can run a search algorithm through the graph and find the number of cycles or components, and find that there is 1 cycle of length 3. So, the answer we produce is 3-1 = 2. As we can observe, this is indeed correct.
The problem gets a little more complicated if the array can contain duplicates, but it's not so bad; you'd just have to think a little harder. Maybe this isn't the intended solution, but it'll certainly work in O(n). Best of luck!
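For what it's worth, here is that idea in Python. It assumes the elements are distinct (as in the HackerRank task); apart from the initial sort, the cycle walk is linear.

def minimumSwaps(arr):
    # Map each value to its index in the sorted order.
    target = {v: i for i, v in enumerate(sorted(arr))}
    visited = [False] * len(arr)
    swaps = 0
    for i in range(len(arr)):
        if visited[i]:
            continue
        # Walk one cycle; a cycle of length k costs k - 1 swaps.
        j, length = i, 0
        while not visited[j]:
            visited[j] = True
            j = target[arr[j]]
            length += 1
        swaps += length - 1
    return swaps

print(minimumSwaps([2, 3, 1]))  # 2, matching the worked example above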

Transportation problem to minimize the cost using genetic algorithm

I am new to genetic algorithms and here is a simple part of what I am working on:
There are factories (1, 2, 3) and they can serve any of the following customers (A, B, C); the transportation costs are given in the table below. There are also fixed costs for A, B, C (2, 4, 1).
   A  B  C
1  5  2  3
2  2  4  6
3  8  5  5
How can I solve this transportation problem to minimize the cost using a genetic algorithm?
First of all, you should understand what a genetic algorithm is and why it is called that: it mimics biological evolution, applying crossovers and mutations to reach a better state.
So, you need to design your chromosome first. In your situation, take one side, customers or factories. Let's take customers. Your solution will look like
1 -> A
2 -> B
3 -> C
So, your example chromosome is "ABC". Then create another chromosome ("BCA", for example).
Now you need a fitness function which you wish to minimize or maximize.
This function will determine your chromosomes' breeding chances. In your situation, that will be the total cost.
Write a function that calculates the cost for a given factory and a given customer.
Now, what you're going to do is:
Pick 2 chromosomes weighted-randomly (the weights are calculated by the fitness function).
Pick an index into the 2 chromosomes and create new chromosomes by exchanging the parts on either side of it.
If the new chromosomes have invalid parts (such as "ABA" in your situation), make a fixing move (turn one of the "A"s into a "C", for example). We call this a "mutation".
Add your new chromosome to the chromosome set if it wasn't there before.
Go back to the first step.
You'll do this for some number of iterations. You may end up with thousands of chromosomes. When you think "it's enough", stop the process and sort the chromosome set ascending/descending by fitness. The first chromosome will be your result.
I'm aware that this makes the process dependent on time and on the chromosome set. I'm aware that you may or may not find an optimum (fittest, according to biology) chromosome if you do not run it long enough. But that's what a genetic algorithm is. Even your first run and second run may or may not produce the same results, and that's fine.
Just for your situation, the set of possible chromosomes is very small, so I guarantee that you will find an optimum in a second or two, because the entire chromosome set is ["ABC", "BCA", "CAB", "BAC", "CBA", "ACB"].
In summary, you need 3 pieces of information to apply a genetic algorithm:
How should my chromosomes be represented? (And what is the initial chromosome set?)
What is my fitness function?
How do I make crossovers between my chromosomes?
There are some other things to take care of in this problem:
Without mutation, a genetic algorithm can get stuck in a local optimum. It can still be used for optimization problems with constraints, though.
Even if a chromosome has a very low chance of being picked for crossover, you shouldn't sort and truncate the chromosome set until the end of the iterations. Otherwise, you may get stuck at a local extremum or, worse, you may end up with an ordinary solution candidate instead of the global optimum.
To speed up the process, pick dissimilar initial chromosomes. Without a sufficient mutation rate, finding the global optimum can be a real pain.
As mentioned in nejdetckenobi's answer, the solution search space is very small in this case: only 6 feasible solutions, ["ABC", "BCA", "CAB", "BAC", "CBA", "ACB"]. I assume this is only a simplified version of your problem, and that your actual problem contains more factories and customers (but with the numbers of factories and customers equal). In that case, you can just make use of special mutation and crossover operators to avoid infeasible solutions with repeated customers, e.g. "ABA", "CCB", etc.
For mutation, I suggest using a swap mutation, i.e. randomly pick two customers and swap their corresponding factories (positions), as in the sketch after the examples:
ABC mutate to ACB
ABC mutate to CBA
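Putting both answers together, here is a small, hedged sketch in Python of what such a GA could look like for this instance. The mutation rate, population handling and fitness weighting are arbitrary choices of mine, not part of either answer.

import random

# Transportation costs from the question's table, plus the fixed costs for
# A, B, C (constant across assignments here, but included for completeness).
COSTS = {(1, "A"): 5, (1, "B"): 2, (1, "C"): 3,
         (2, "A"): 2, (2, "B"): 4, (2, "C"): 6,
         (3, "A"): 8, (3, "B"): 5, (3, "C"): 5}
FIXED = {"A": 2, "B": 4, "C": 1}

def cost(chrom):
    # Chromosome "BCA" means factory 1 serves B, factory 2 serves C, ...
    return sum(COSTS[(i + 1, c)] + FIXED[c] for i, c in enumerate(chrom))

def crossover(p1, p2):
    # One-point crossover, then repair duplicates (the "fixing move").
    cut = random.randrange(1, len(p1))
    child = list(p1[:cut] + p2[cut:])
    missing = [c for c in "ABC" if c not in child]
    seen = set()
    for i, c in enumerate(child):
        if c in seen:
            child[i] = missing.pop()
        seen.add(child[i])
    return "".join(child)

def swap_mutation(chrom):
    # Swap the factories of two randomly chosen positions.
    i, j = random.sample(range(len(chrom)), 2)
    c = list(chrom)
    c[i], c[j] = c[j], c[i]
    return "".join(c)

population = ["ABC", "BCA"]
for _ in range(200):
    weights = [1.0 / cost(c) for c in population]  # cheaper -> more likely
    p1, p2 = random.choices(population, weights, k=2)
    child = crossover(p1, p2)
    if random.random() < 0.2:
        child = swap_mutation(child)
    if child not in population:
        population.append(child)

print(min(population, key=cost))  # best chromosome found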

Bitwise operations Python

This is a first run-in with not only bitwise ops in python, but also strange (to me) syntax.
for i in range(2**len(set_)//2):
    parts = [set(), set()]
    for item in set_:
        parts[i&1].add(item)
        i >>= 1
For context, set_ is just a list of 4 letters.
There's a bit to unpack here. First, I've never seen [set(), set()]. I must be using the wrong keywords, as I couldn't find it in the docs. It looks like it creates a matrix in pythontutor, but I cannot say for certain. Second, while parts[i&1] is a slicing operation, I'm not entirely sure why a bitwise operation is required. For example, 0&1 should be 1 and 1&1 should be 0 (carry the one), so binary 10 (or 2 in decimal)? Finally, the last bitwise operation is completely bewildering. I believe a right shift is the same as dividing by two (I hope), but why i>>=1? I don't know how to interpret that. Any guidance would be sincerely appreciated.
[set(), set()] creates a list consisting of two empty sets.
0&1 is 0, 1&1 is 1. There is no carry in bitwise operations. parts[i&1] therefore refers to the first set when i is even, the second when i is odd.
i >>= 1 shifts right by one bit (which is indeed the same as dividing by two), then assigns the result back to i. It's the same basic concept as using i += 1 to increment a variable.
The effect of the inner loop is to partition the elements of set_ into two subsets, based on the bits of i. If the limit in the outer loop had been simply 2 ** len(set_), the code would generate every possible such partitioning. But since that limit was divided by two, only half of the possible partitions get generated - I couldn't guess what the point of that might be without more context.
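A quick way to see all of this at once is to run the loop on a small set and print what lands in each part. One plausible reading of the //2: a bit pattern and its complement describe the same two-subset split with the parts swapped, so iterating over half the patterns yields each unordered partition exactly once.

set_ = ["a", "b", "c", "d"]
for i in range(2 ** len(set_) // 2):
    parts = [set(), set()]
    for item in set_:
        parts[i & 1].add(item)   # bit 0 of i decides which part gets item
        i >>= 1                  # move the next bit into position
    print(parts)
# prints 8 lines: every way to split {a, b, c, d} into two subsets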
I've never seen [set(), set()]
This isn't anything interesting, just a list with two new sets in it. So you have seen it before, because it's not new syntax: just a list literal and two constructor calls.
parts[i&1]
This tests the least significant bit of i and selects either parts[0] (if the lsb was 0) or parts[1] (if the lsb was 1). Nothing fancy like slicing, just plain old indexing into a list. The thing you get out is a set, .add(item) does the obvious thing: adds something to whichever set was selected.
but why i>>=1? I don't know how to interpret that
Take the bits in i and move them one position to the right, dropping the old lsb and keeping the sign. Sort of like this: 00101101 becomes 00010110.
Except of course that in Python you have arbitrary-precision integers, so it's however long it needs to be instead of 8 bits.
For positive numbers, the part about copying the sign is irrelevant.
You can think of a right shift by 1 as a flooring division by 2 (this is different from truncation: negative numbers are rounded towards negative infinity, e.g. -1 >> 1 == -1), but that interpretation is usually more complicated to reason about.
Anyway, the way it is used here is just a way to loop through the bits of i, testing them one by one from low to high, but instead of changing which bit it tests it moves the bit it wants to test into the same position every time.

Neo4j query for shortest path gets stuck (does not work) if I have 2-way relationships between graph nodes and the nodes are interrelated

I made a relationship graph with two-way relationships: if A knows B then B knows A. Every node has a unique Id and Name along with other properties. So my graph looks like this.
If I trigger a simple query
MATCH (p1:SearchableNode {name: "Ishaan"}), (p2:SearchableNode {name: "Garima"}),path = (p1)-[:NAVIGATE_TO*]-(p2) RETURN path
it does not give any response and consumes 100% of the machine's CPU and RAM.
UPDATED
As I read through posts and the comments on this post, I simplified the model and relationships. It now ends up as follows.
Each relationship has a different weight; to simplify, consider horizontal connections to have weight 1, vertical connections weight 1, and diagonal relations weight 1.5.
In my database there are more than 85,000 nodes and 0.3 million relationships.
The shortest-path query does not come up with a result; it gets stuck in processing and the CPU goes to 100%.
I'm afraid you won't be able to do much here. Your graph is very specific, having relations only to the closest nodes. That's unfortunate, because Neo4j is fine playing around the starting point, a few relations away, not over the whole graph with each query.
It means that once you are 2 nodes away, the computational complexity rises to:
8 relationships per node
distance 2
8 + 8^2
In general, the top complexity for a distance n is
O(8 + 8^n) // in case all affected nodes have 8 connections
You say you have ~80,000 nodes. This means (correct me if I'm wrong) a longest distance of ~280 (from √80000 ≈ 283). Let's suppose your nodes
(p1:SearchableNode {name: "Ishaan"}),
(p2:SearchableNode {name: "Garima"}),
are only 140 hops away. This will create a complexity of 8^140 ≈ 10^126; I'm not sure any computer in the world can handle this.
Sure, not all nodes have 8 connections, only those "in the middle"; in our example graph that would be ~500,000 relationships. You have ~300,000, which is maybe 2 times less, so let's suppose the overall complexity, for an average distance of 70 (out of 140 - a very relaxed lower estimate) and nodes having 4 relationships on average (down from 8; 80,000 × 4 = 320,000), to be
O(4 + 4^70) ≈ 10^42
One 1 GHz CPU able to evaluate ~1,000,000 nodes per second would need 10^42 / 10^6 = 10^36 seconds. Let's suppose we have a cluster of 100 servers with 10 GHz CPUs, 1000 GHz in total: that's still 10^42 / 10^9 = 10^33 seconds.
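If you want to sanity-check those orders of magnitude, the exponents are easy to verify in Python (the node and relationship counts above are rough guesses; the arithmetic is not):

import math
print(math.log10(8 ** 140))  # ~126.4, so 8^140 is about 10^126
print(math.log10(4 ** 70))   # ~42.1, so 4^70 is about 10^42
print(10 ** 42 // 10 ** 6)   # 10^36 seconds at a million evaluations/second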
I would suggest keeping away from allShortestPaths and looking only for the first available path. Using Gremlin instead of Cypher, it is possible to implement your own algorithms with some heuristics, so you can actually cut the time down to maybe seconds or less.
Example: using one direction only cuts it down to ~10^16 seconds.
An example heuristic: check the IDs of the nodes; the higher the difference node2.id - node1.id, the higher the actual distance is likely to be (considering the node creation order, nodes with similar IDs should be close together). In that case you can either skip the query or jump a few relations ahead with something like MATCH n1-[:RELATED*..5]->q-[:RELATED*]->n2 (I forgot the exact syntax for bounding the relationship count), which will (should) instantly skip to nodes 5 distances away that are closer to the n2 node, cutting the complexity from 4^70 down to 4^65. So if you can estimate the distance exactly from the node IDs, you could even match ... [:RELATED*..65] ..., which would cut the complexity to 4^5, and that's just a matter of milliseconds for the CPU.
It's possible I'm completely wrong here; it has been some time since I was in school, and it would be nice to ask a mathematician (graph theory) to confirm this.
Let's consider what your query is doing:
MATCH (p1:SearchableNode {name: "Ishaan"}),
(p2:SearchableNode {name: "Garima"}),
path = (p1)-[:NAVIGATE_TO*]-(p2)
RETURN path
If you run this query in the console with EXPLAIN in front of it, the DB will give you its plan for how it will answer. When I did this, the query compiler warned me:
If a part of a query contains multiple disconnected patterns, this
will build a cartesian product between all those parts. This may
produce a large amount of data and slow down query processing. While
occasionally intended, it may often be possible to reformulate the
query that avoids the use of this cross product, perhaps by adding a
relationship between the different parts or by using OPTIONAL MATCH
You have two issues going on with your query. First, you're matching p1 and p2 independently of one another, possibly creating this cartesian product. Second, because all of the links in your graph go both ways and you're asking for an undirected connection, you're making the DB work twice as hard: it could traverse what you're asking for either way. To make matters worse, because all of the links go both ways, you have many cycles in your graph, so as Cypher explores the paths it can take, many of them will loop back around to where they started. This means the query engine will spend a lot of time chasing its own tail.
You can probably immediately improve the query by doing this:
MATCH p=shortestPath((p1:SearchableNode {name:"Ishaan"})-[:NAVIGATE_TO*]->(p2:SearchableNode {name:"Garima"}))
RETURN p;
Two modifications here: first, p1 and p2 are bound to each other immediately; you don't match them separately. Second, notice the [:NAVIGATE_TO*]-> part, with that last arrow ->: we're matching the relationship ONE WAY ONLY. Since you have so many reflexive links in your graph, either way would work fine, but whichever way you choose, you cut the work the DB has to do in half. :)
This may still not perform so well, because traversing that graph will still involve a lot of cycles, which send the DB chasing its tail trying to find the best path. Given your modeling choices, you usually shouldn't have relationships going both ways unless you need separate properties on each relationship. A relationship can be traversed in both directions, so it doesn't make sense to have two (one in each direction) unless the information each relationship captures is semantically different.
Often you'll find with query performance that you can do better by reformulating the query and thinking about it, but there's major interplay between graph modeling and overall performance. With the graph set up with so many bi-directional links, there will only be so much you can do to optimize path-finding.
MATCH (p1:SearchableNode {name: "Ishaan"}), (p2:SearchableNode {name: "Garima"}),path = (p1)-[:NAVIGATE_TO*]->(p2) RETURN path
Or:
MATCH (p1:SearchableNode {name: "Ishaan"}), (p2:SearchableNode {name: "Garima"}), (p1)-[path:NAVIGATE_TO*]->(p2) RETURN path

Quadtree object movement

So I need some help brainstorming, from a theoretical standpoint. Right now I have some code that just draws some objects. The objects lie in the leaves of a quadtree. Now as the objects move I want to keep them placed in the correct leaf of the quadtree.
Right now I am just reconstructing the quadtree over the objects after I change their positions. I was trying to figure out a way to correct the tree without rebuilding it completely. All I can think of is keeping a bunch of pointers to adjacent leaf nodes.
Does anyone have an idea of how to figure out the node into which an object moves without just having a ton of pointers everywhere or a link to articles on this? All I could find was different ways to build the quadtree, nothing about updating it.
If I understand your question, you want some way of mapping between spatial coordinates and leaves of the quadtree.
Here's one possible solution I've been looking at:
For simplicity, let's do the 1D case first, where the tree is binary rather than four-way. Let's assume we have 32 grid points in x. Every grid point then corresponds to some leaf of a tree of depth five (depth 0 = the whole grid, depth 1 = 2 cells, depth 2 = 4 cells, ..., depth 5 = 32 cells).
Each leaf can be represented by the branch indices leading to it. At each level there are two branches, which we can label A and B. So a particular leaf might be labeled BBABB, which would mean: go down the B branch, then the B branch, then the A branch, then the B branch, and then the B branch.
So, how do you map e.g. BBABB to an x grid point between 0 and 31? Just convert it to binary, so that BBABB -> 11011 = 27. Thus, the mapping from leaf node to grid point is simply a matter of translating the letters A and B into 0s and 1s and then interpreting the result as a binary number.
For the 2D case, it's only slightly more complicated. Now we have four branches from each node, so we can label each branch path using a four-letter alphabet, e.g. starting from the root and taking the 3rd branch and then the fourth branch and then the first branch and then the second branch and then the second branch again we would generate the string CDABB.
Now to convert the string (e.g. 'CDABB') into a pair of gridvalues (x,y).
Let's assume A is lower-left, B is lower right, C is upper left and D is upper right. Then, symbolically, we could write, A.x=0, A.y=0 / B.x=1, B.y=0 / C.x=0, C.y=1 / D.x=1, D.y=1.
Taking the example CDABB, we first look at its x values: (CDABB).x = (01011) = 11, which gives us the x grid point. And similarly for y.
Finally, if you want to find out e.g. the node immediately to the right of CDABB, then simply convert it to a pair of binary numbers in x and y, add +1 to the x value and convert the new pair of binary numbers back into a string.
I'm sure this has all been discovered, but I haven't yet found this information on the web.
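Here is a small Python sketch of that mapping, using the answer's A/B/C/D convention (the function names are mine):

# A = lower-left, B = lower-right, C = upper-left, D = upper-right,
# so each letter contributes one x bit and one y bit.
XBIT = {"A": "0", "B": "1", "C": "0", "D": "1"}
YBIT = {"A": "0", "B": "0", "C": "1", "D": "1"}

def path_to_xy(path):
    x = int("".join(XBIT[c] for c in path), 2)
    y = int("".join(YBIT[c] for c in path), 2)
    return x, y

def xy_to_path(x, y, depth):
    xs, ys = format(x, f"0{depth}b"), format(y, f"0{depth}b")
    return "".join({"00": "A", "10": "B", "01": "C", "11": "D"}[xb + yb]
                   for xb, yb in zip(xs, ys))

print(path_to_xy("CDABB"))        # (11, 24): x bits 01011, y bits 11000
x, y = path_to_xy("CDABB")
print(xy_to_path(x + 1, y, 5))    # the leaf immediately to the right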
If you have the spatial data necessary to insert an element into the quad-tree in the first place (ex: its point or rectangle), then you have the same data needed to remove it.
An easy way: before you move an element, remove it from the quad-tree using the same data you used to originally insert it, then move it, then re-insert it.
Removal from the quad-tree can first remove the element from the leaf node(s), then if the leaf nodes become empty, remove them from their parents. If the parents become empty, remove them from their parents, and so forth.
This simple method is efficient enough for a complex world of objects moving every frame, as long as you implement the quad-tree efficiently (ex: use a free list for the nodes). There shouldn't be a heap allocation per node when inserting, nor a heap deallocation involved in removing every single node. Most node allocations/deallocations should be simple constant-time operations involving, say, the manipulation of a couple of integers or pointers.
You can also make this a little more complex if you like. You can start off storing the previous position of an object and then move it. If the new position occupies nodes other than the previous position, then remove the object from the nodes it no longer occupies and insert it to the new ones. Otherwise just keep it in the same node(s).
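For illustration, here is a minimal point-quadtree sketch in Python of that remove-then-reinsert update. The capacity, the collapse rule and the class layout are arbitrary simplifications of mine, not a tuned implementation (no free list, for instance):

class Quad:
    CAPACITY = 4

    def __init__(self, x, y, size):
        self.x, self.y, self.size = x, y, size  # square region [x, x+size) etc.
        self.points = []                        # leaf payload
        self.kids = None                        # four children once split

    def _child(self, px, py):
        half = self.size / 2
        qx, qy = px >= self.x + half, py >= self.y + half
        return self.kids[2 * qy + qx]           # SW, SE, NW, NE

    def insert(self, p):
        if self.kids is None:
            self.points.append(p)
            if len(self.points) > Quad.CAPACITY and self.size > 1:
                half = self.size / 2            # split and redistribute
                self.kids = [Quad(self.x, self.y, half),
                             Quad(self.x + half, self.y, half),
                             Quad(self.x, self.y + half, half),
                             Quad(self.x + half, self.y + half, half)]
                for q in self.points:
                    self._child(*q).insert(q)
                self.points = []
        else:
            self._child(*p).insert(p)

    def remove(self, p):
        if self.kids is None:
            self.points.remove(p)
        else:
            self._child(*p).remove(p)
            if all(k.kids is None and not k.points for k in self.kids):
                self.kids = None                # collapse empty children

    def move(self, old, new):
        self.remove(old)                        # same data used to insert...
        self.insert(new)                        # ...is all we need to update

tree = Quad(0, 0, 64)
for p in [(1, 1), (2, 3), (50, 9), (9, 50), (30, 30)]:
    tree.insert(p)
tree.move((1, 1), (40, 41))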
Update
I usually try to avoid linking my previous answers, but in this case I ended up doing a pretty comprehensive write up on the topic which would be hard to replicate anywhere else. Here it is: https://stackoverflow.com/a/48330314/4842163
