Find all non-collinear points - graphics

Given a set of points in the 2-D plane, is it possible to find the set of all possible non-collinear points in the plane?
Time complexity doesn't matter at present; I just need the solution to be correct.

What you're asking for might need clarification: any two points are collinear with each other, since there is a line through both. To make the question answerable, I'll interpret it as follows: if three or more points are all collinear with each other, our set of non-collinear points may contain only one of them.
Given this, we can do the following:
For each pair of points, calculate the slope between them as (y - y') / (x - x'). If x = x', simply note that slope as V (vertical).
Next, for each pair of points and its slope, check every other pair involving either point of the pair and see whether the corresponding slope is the same as the one being checked. Take all the points so determined, add them to a collection, and add that collection to a collection of collections.
Once you have finished, you will have a collection of collections, and the points within each inner collection are all collinear with each other.
Now the problem is the following: find the largest number of points that can be chosen such that no two chosen points appear together in any one collection. If you picked two points from the same collection, you would have picked two points that are collinear together with at least one other point.
At this point, we can simply try all 2^n subsets of n points and check each one to see if it is acceptable (in that the intersection of the subset and any collection has size at most one).
Example:
p = (1, 1), q = (2, 2), r = (3, 3), s = (2, 3)
        p  q  r  s
m = p [ -  1  1  2 ]
    q [ 1  -  1  V ]
    r [ 1  1  -  0 ]
    s [ 2  V  0  - ]
(p, q): r yes, s no; collection (p, q, r)
(p, r): q yes, s no; collection (p, q, r)
(p, s): q no, r no; no collection
(q, p): r yes, s no; collection (p, q, r)
(q, r): p yes, s no; collection (p, q, r)
(q, s): p no, r no; no collection
(r, p): q yes, s no; collection (p, q, r)
(r, q): p yes, s no; collection (p, q, r)
(r, s): p no, q no; no collection
(s, p): q no, r no; no collection
(s, q): p no, r no; no collection
(s, r): p no, q no; no collection
Try candidate (p, q, r, s): intersection with (p, q, r) has size > 1, reject
Try candidate (p, q, r): intersection with (p, q, r) has size > 1, reject
Try candidate (p, q, s): intersection with (p, q, r) has size > 1, reject
…
Try candidate (p, s): intersection with (p, q, r) has size = 1, accept
This is clearly exponential in time and space, but it will solve the problem.
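For concreteness, here is a brute-force Python sketch of the procedure above (all names are my own; it uses a cross-product test instead of slopes, which avoids the vertical special case):

from itertools import combinations

def collinear_groups(points):
    # for each pair, gather every point on the line through that pair;
    # keep only groups of 3 or more (frozensets deduplicate shared lines)
    groups = set()
    for p, q in combinations(points, 2):
        on_line = frozenset(
            r for r in points
            if (q[0] - p[0]) * (r[1] - p[1]) == (q[1] - p[1]) * (r[0] - p[0])
        )
        if len(on_line) >= 3:
            groups.add(on_line)
    return groups

def max_noncollinear(points):
    groups = collinear_groups(points)
    # try subsets from largest to smallest; accept the first one whose
    # intersection with every collinear group has size at most one
    for k in range(len(points), 0, -1):
        for subset in combinations(points, k):
            chosen = set(subset)
            if all(len(chosen & g) <= 1 for g in groups):
                return chosen
    return set()

print(max_noncollinear([(1, 1), (2, 2), (3, 3), (2, 3)]))  # {(1, 1), (2, 3)}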


How to multiply three arrays with different dimensions in PyTorch

L's dimensions are (d, a), B's are (a, a, N), and R's are (a, d). By multiplying these arrays I have to get an array of size (d, d, N). How could I implement this in PyTorch?
A possible and straightforward approach is to apply torch.einsum:
>>> torch.einsum('ij,jkn,kl->iln', L, B, R)
where j and k are the reduced dimensions of L and R respectively, and n is the "batch" dimension of B.
The first matrix multiplication will reduce L @ B (let this intermediate result be o):
ij,jkn->ikn
The second matrix multiplication will reduce o @ R:
ikn,kl->iln
Which overall sums up to the following form:
ij,jkn,kl->iln
This looks like batch matrix multiplication, i.e. result[:, :, i] = L @ B[:, :, i] @ R. You can use:
B = B.permute([2,0,1])
result = torch.matmul(torch.matmul(L, B), R).permute([1,2,0])
N seems to be the batch dimension; let's ignore it at first.
It is simple chained matrix multiplication:
d, a = 3, 5
L = torch.randn(d, a)
B = torch.randn(a, a)
R = torch.randn(a, d)
L.matmul(B).shape # (d, a)
L.matmul(B).matmul(R).shape # (d, d)
Now let's add the batch dimension N.
Everything is almost the same, but PyTorch works with batch dim first whereas your data is batch dim last, so a bit of movedim is required.
N = 7
B = torch.randn(a, a, N)
L.matmul(B.movedim(-1, 0)).shape # (N, d, a)
L.matmul(B.movedim(-1, 0)).matmul(R).shape # (N, d, d)
L.matmul(B.movedim(-1, 0)).matmul(R).movedim(0, -1).shape # (d, d, N)
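For what it's worth, a quick sanity check (with made-up sizes) that the einsum, permute, and movedim answers all agree:

import torch

d, a, N = 3, 5, 7
L, B, R = torch.randn(d, a), torch.randn(a, a, N), torch.randn(a, d)

out_einsum  = torch.einsum('ij,jkn,kl->iln', L, B, R)
out_permute = torch.matmul(torch.matmul(L, B.permute([2, 0, 1])), R).permute([1, 2, 0])
out_movedim = L.matmul(B.movedim(-1, 0)).matmul(R).movedim(0, -1)

print(out_einsum.shape)                                    # torch.Size([3, 3, 7])
print(torch.allclose(out_einsum, out_permute, atol=1e-5))  # True
print(torch.allclose(out_einsum, out_movedim, atol=1e-5))  # True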

What is the maximum number of faces in a triangular mesh of n vertices in 3 dimensions?

In 2D, the maximum number of faces for n vertices in a 'perfect' (non-overlapping) mesh is f = 2n - 4. Is there an equivalent result for 3 dimensions?
The Euler characteristic chi is defined as:
chi = V - E + F
where V, E, and F are the numbers of vertices, edges and faces, respectively.
For closed triangular meshes, we know that each edge has two incident faces and each face has three incident edges. Therefore:
3 * F = 2 * E
E = 3/2 * F
Hence,
chi = V - 3/2 * F + F
    = V - 1/2 * F
F = 2 * (V - chi)
In the 2D case for planar graphs, chi is 2, resulting in your definition F = 2 * V - 4.
For any surface in 3D, the Euler characteristic can be calculated from its genus: the more handles the surface has, the smaller its Euler characteristic. Hence, chi (and with it F) is not bounded in general. However, for a fixed surface topology, the number of faces relative to the number of vertices is fixed.
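Concretely, for a closed orientable surface of genus g (g handles), chi = 2 - 2 * g, so:

F = 2 * (V - chi) = 2 * V - 4 + 4 * g

A sphere-like mesh (g = 0) gives F = 2 * V - 4, matching the planar result, while a torus (g = 1) gives F = 2 * V.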

Doing something on the final result of a recursive function, within the same function

Easier to show than to explain. I have this tiny function to do base conversion from base 10:
demode 0 _ = []
demode n b = m : demode d b
  where (d, m) = divMod n b
So, if we want to see how we would write 28 in base 9, demode 28 9 = [1,3].
But, of course, we then have to reverse the list so that it reads like 31.
This could easily be done with a function that calls demode and then reverses its result, but with Haskell being so cool and all, there's probably a more elegant way of saying "in the end case (demode 0 _), append everything to a list and then reverse the list".
Note that base conversion is just an example I'm using to illustrate the question, the real question is how to apply a final transformation to the last result of a recursive function.
Nope. Your only hope is to use a helper function. Note that Haskell does allow you to define functions in where clauses (at least for now), so that doesn't have to be a 'separate function' in the sense of a separate top-level definition. You have basically two choices:
Add an accumulator and do whatever work you want to do in the end:
demode n b = w n [] where
  w 0 xn = reverse xn
  w n xn = w d (xn ++ [m]) where
    (d, m) = divMod n b
Hopefully you can follow how that would work, but note that, in this case, you are far better off saying
demode n b = w n [] where
  w 0 xn = xn
  w n xn = w d (m : xn) where
    (d, m) = divMod n b
which builds the list in reversed order and returns that.
Push the regular definition down to a helper function, and wrap that function in whatever work you want:
demode n b = reverse (w n) where
  w 0 = []
  w n = m : w d where
    (d, m) = divMod n b
(I've used the term w as a short-hand for 'worker' in all three examples).
Either case can generally benefit from learning to do your recursions using higher-order functions, instead.
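For instance, demode is an unfold in disguise; with Data.List.unfoldr the explicit recursion disappears entirely (a sketch, producing the reversed, most-significant-digit-first list as above):

import Data.List (unfoldr)

demode :: Integral a => a -> a -> [a]
demode n b = reverse (unfoldr step n)
  where
    step 0 = Nothing                                  -- no digits left
    step m = let (d, r) = divMod m b in Just (r, d)   -- emit digit r, continue with quotient d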
In general, it's somewhat bad style in Haskell to try to do 'everything in one function'; Haskell style is built around dividing a problem into multiple parts, solving those with separate functions, and composing the resulting functions together; especially if those functions will be useful elsewhere as well (which happens more often than you might naively expect).

How to ensure correct edges in graph

I was trying to make a datatype for a graph in Haskell as follows:
type Vertex a = a
type Edge a = (Vertex a, Vertex a)
data Graph a = Combine [Vertex a] [Edge a]
This is a representation that worked for what I wanted to do, but I realized there could be edges for vertices which are not in the vertex-list.
My question is thus whether there is a possibility to assure every edge only contains vertices from the vertex-list?
I thought a lot about it already, but the best idea I got thus far was some function that makes a new graph with all missing vertices from the edge-list added to the vertex-list. Something like:
fix_graph :: Graph a -> Graph a
fix_graph (Combine v e) =
  Combine (removeDuplicates (v ++ [x | (x, _) <- e] ++ [y | (_, y) <- e])) e

removeDuplicates :: Eq t => [t] -> [t]
...
Because this idea did not really satisfy me (also because I didn't take the time to implement it well), I wondered whether it would be possible for the data constructor itself to immediately add any vertices from the edge-list that are not yet in the vertex-list.
I've already read through the answers here, but I'm not really fond of the adjacency-representation used there. I know I am being annoying, but I would just like to get to know whether there aren't any other possibilities to solve this problem.
If anybody could help me with a solution or with getting rid of my illusions, it would be helpful...
Thanks in advance
So there are a couple different options:
Tying the knot
There are lots of ways to encode graphs in Haskell. The absolute simplest is to use a process called "tying the knot" to create circularity in a tree data structure. So for example to encode this graph:
               .--.
A -- B -- C -- D -'
|    |    |    |
E -- F    G -- H
|              |
+--------------+
You can simply write a node as its name and list of children:
data Node = Node String [Node]

instance Eq Node where
  Node a _ == Node b _ = a == b

my_graph = [a, b, c, d, e, f, g, h] where
  a = Node "A" [b, e]
  b = Node "B" [a, c, f]
  c = Node "C" [b, d, g]
  d = Node "D" [c, d, h]
  e = Node "E" [a, f, h]
  f = Node "F" [b, e]
  g = Node "G" [c, h]
  h = Node "H" [d, e, g]
This has a lot of convenience: you can now walk through the data structure like any other Haskell data structure, with tail-recursive functions. To terminate on cycles, you tack the current item onto a path variable as you recurse, and the first thing your recursive logic should say is | node `elem` path = ... to handle the cycles however you want.
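As a sketch of that traversal idea (reachableNames is my own helper; it may repeat a name when a node is reachable along several acyclic paths):

reachableNames :: Node -> [String]
reachableNames = go []
  where
    go path node@(Node name children)
      | node `elem` path = []   -- looped back onto the current path: stop
      | otherwise        = name : concatMap (go (node : path)) children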
The flip side is that your minor consistency issues have blown up a bit into really thorny consistency issues. Consider for example the difference between these two:
-- A has one child, which is itself; B has one child, which is A.
let a = Node "A" [a]; b = Node "B" [a] in [a, b]

-- this looks almost the same, but if you descend far enough on B you find an
-- impostor node with the wrong children.
let a = Node "A" [a]
    impostor = Node "A" [b]
    b = Node "B" [Node "A" [Node "A" [impostor]]]
in [a, b]
So that sucks and the only real answer I have for it is, "normalize by converting to one of the below...".
Anyway, the above trick also goes by the names mutual recursion and letrec, and basically means that within a where or let clause, all of the definitions that you put there can "see each other". It is not a property of laziness; you can make the above data structure in a strict language too -- but the language design for a functional strict language which understands mutually-recursive definitions this way might be a little difficult. (With a non-functional language, you just create the pointers as you need.)
Explicit numerology
Now think about how you'd take such a graph as we've got above, and convert it to your representation. The easiest way would involve going through a middle-man step which contains an Array:
import From.Above.Code (Node)
import Data.Array

type Graph = Array Int [Int]

graph :: [Node] -> Maybe Graph
graph nodes = fmap (array (1, length nodes)) . sequence $ map format nodes where
  indices = zip nodes [1..]
  pair x y = (x, y)
  format node@(Node _ children) = do -- in the Maybe monad
    index <- lookup node indices
    index_list <- sequence $ map (flip lookup indices) children
    return (index, index_list)
Now, this has a lot fewer consistency issues, which can now all be alleviated programmatically. However, those consistency issues can serve a purpose if you want to programmatically create such a graph with the State monad, and you want to temporarily leave the data structure in an inconsistent state until the proper node is read. The only disadvantage is that when you write the graph into your file, it looks a bit harder to understand because numbers are less friendly than strings:
array (1, 8) [
  (1, [2, 5]),
  (2, [1, 3, 6]),
  (3, [2, 4, 7]),
  (4, [3, 4, 8]),
  (5, [1, 6, 8]),
  (6, [2, 5]),
  (7, [3, 8]),
  (8, [4, 5, 7])]
You can solve this with, say, a Map String [String], with the tradeoff that accesses become O(log n). In any case you should learn that representation: you will want to convert to an IntMap [Int] and back when you do the "completeness checks" you're proposing.
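A minimal sketch of that round-trip, assuming the Array Int [Int] representation from above:

import Data.Array (Array, array, assocs)
import qualified Data.IntMap as IntMap

toIntMap :: Array Int [Int] -> IntMap.IntMap [Int]
toIntMap = IntMap.fromList . assocs

fromIntMap :: IntMap.IntMap [Int] -> Array Int [Int]
fromIntMap m = array (fst (IntMap.findMin m), fst (IntMap.findMax m))
                     (IntMap.toList m)   -- assumes a non-empty map with contiguous keys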
Once you've got these, it turns out that you can use a backing Array Int Node to create a recursive [Node] as above:
nodesFromArray arr = nodes where
  makeNode index children = Node (show index) [backingArray ! c | c <- children]
  backingArray = array (bounds arr) [(i, makeNode i c) | (i, c) <- assocs arr]
  nodes = elems backingArray   -- the knot-tied nodes, read back out of the array
Lists of edges
Once you've got the above lists (either Map.toList or Array.assocs), lists of edges become very easy to make:
edges_from_array = concatMap (uncurry (fmap . pair)) . assocs
  where pair x y = (x, y)
The flip-side is a little more complicated and accomplishes what you're trying to do directly:
import Data.Map (Map)
import Data.Set (Set)
import qualified Data.Map as Map
import qualified Data.Set as Set
makeGraphMap vertices edges = add edges (Map.fromList $ blankGraph vertices) where
  blankGraph verts = zip verts (repeat Set.empty)
  setInsert x Nothing = Just $ Set.singleton x
  setInsert x (Just set) = Just $ Set.insert x set

  add [] graphMap = fmap Set.toList graphMap
  add ((a, b) : es) graphMap = add es (Map.alter (setInsert b) a graphMap)
That is, we walk the list of edges with a map which maps keys to their sets of children; we initialize this to the list of vertices mapping to empty sets (so that we can have disconnected single nodes in our graphs), then walk through the edges by inserting the value to the set at the key, creating that set if we don't see the key.
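A hypothetical GHCi session, with an isolated vertex "C" to show why seeding the map from the vertex list matters:

ghci> makeGraphMap ["A", "B", "C"] [("A", "B"), ("B", "A"), ("A", "C")]
fromList [("A",["B","C"]),("B",["A"]),("C",[])]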

Haskell: tail recursion version of depth of binary tree

First of all, I have two different implementations that I believe are correct, and having profiled them, I think they are about the same in performance:
depth :: Tree a -> Int
depth Empty = 0
depth (Branch b l r) = 1 + max (depth l) (depth r)

depthTailRec :: Tree a -> Int
depthTailRec = depthTR 0 where
  depthTR d Empty = d
  depthTR d (Branch b l r) = let dl = depthTR (d+1) l; dr = depthTR (d+1) r in max dl dr
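(Both versions assume the usual binary tree definition, which the question leaves implicit:)

data Tree a = Empty | Branch a (Tree a) (Tree a)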
I was just wondering: aren't people always talking about how tail recursion can be beneficial for performance? A lot of questions are jumping into my head:
How can you make the depth function faster?
I read about something about how Haskell's laziness can reduce the need of tail recursion, is that true?
Is it true that every recursive function can be converted into a tail-recursive one?
Finally, tail recursion can be faster and more space-efficient because it can be turned into a loop, reducing the need to push and pop the stack; is my understanding right?
1. Why isn't your function tail recursive?
For a recursive function to be tail recursive, all the recursive calls must be in tail position. A call is in tail position if it is the last thing the function does before returning. In your first example you have
depth (Branch _ l r) = 1 + max (depth l) (depth r)
which is equivalent to
depth (Branch _ l r) = (+) 1 (max (depth l) (depth r))
The last function called before the function returns is (+), so this is not tail recursive. In your second example you have
depthTR d (Branch _ l r) = let dl = depthTR (d+1) l
                               dr = depthTR (d+1) r
                           in max dl dr
which is equivalent (once you've inlined the let bindings and rearranged a bit) to
depthTR d (Branch _ l r) = max (depthTR (d+1) r) (depthTR (d+1) l)
Now the last function called before returning is max, which means that this is not tail recursive either.
2. How could you make it tail recursive?
You can make a tail recursive function using continuation-passing style. Instead of re-writing your function to take a state or an accumulator, you pass in a function (called the continuation) that is an instruction for what to do with the value computed -- i.e. instead of immediately returning to the caller, you pass whatever value you have computed to the continuation. It's an easy trick for turning any function into a tail-recursive function -- even functions that need to call themselves multiple times, as depth does. It looks something like this
depth t = go t id
  where
    go Empty k = k 0
    go (Branch _ l r) k = go l $ \dl ->
                          go r $ \dr ->
                          k (1 + max dl dr)
Now you see that the last function called in go before it returns is itself go, so this function is tail recursive.
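A quick check, assuming the Tree definition from the question:

ghci> depth (Branch 'a' (Branch 'b' Empty Empty) Empty)
2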
3. Is that it, then?
(NB this section draws from the answers to this previous question.)
No! This "trick" only pushes the problem back somewhere else. Instead of a non-tail recursive function that uses lots of stack space, we now have a tail-recursive function that eats thunks (unapplied functions) which could potentially be taking up a lot of space themselves. Fortunately, we don't need to work with arbitrary functions - in fact, there are only three kinds
\dl -> go r (\dr -> k (1 + max dl dr)) (which uses the free variables r and k)
\dr -> k (1 + max dl dr) (with free variables k and dl)
id (with no free variables)
Since there are only a finite number of functions, we can represent them as data
data Fun a = FunL (Tree a) (Fun a) -- the fields are 'r' and 'k'
           | FunR Int (Fun a)      -- the fields are 'dl' and 'k'
           | FunId
We'll have to write a function eval as well, which tells us how to evaluate these "functions" at particular arguments. Now you can re-write the function as
depth t = go t FunId
  where
    go Empty k = eval k 0
    go (Branch _ l r) k = go l (FunL r k)

    eval (FunL r k) d = go r (FunR d k)
    eval (FunR dl k) d = eval k (1 + max dl d)
    eval FunId d = d
Note that both go and eval have calls to either go or eval in tail position -- therefore they are a pair of mutually tail recursive functions. So we've transformed the version of the function that used continuation-passing style into a function that uses data to represent continuations, and uses a pair of mutually recursive functions to interpret that data.
4. That sounds really complicated
Well, I guess it is. But wait! We can simplify it! If you look at the Fun a data type, you'll see that it's actually just a list, where each element is either a Tree a that we're going to compute the depth of, or it's an Int representing a depth that we've computed so far.
What's the benefit of noticing this? Well, this list actually represents the call stack of the chain of continuations from the previous section. Pushing a new item onto the list is pushing a new argument onto the call stack! So you could write
depth t = go t []
  where
    go Empty k = eval k 0
    go (Branch _ l r) k = go l (Left r : k)

    eval (Left r : k) d = go r (Right d : k)
    eval (Right dl : k) d = eval k (1 + max dl d)
    eval [] d = d
Each new argument you push onto the call stack is of type Either (Tree a) Int, and as the functions recurse, they keep pushing new arguments onto the stack, which are either new trees to be explored (whenever go is called) or the maximum depth found so far (whenever eval is called).
This call strategy represents a depth-first traversal of the tree, as you can see by the fact that the left tree is always explored first by go, while the right tree is always pushed onto the call stack to be explored later. Arguments are only ever popped off the call stack (in eval) when an Empty branch has been reached and can be discarded.
5. Alright... anything else?
Well, once you've noticed that you can turn the continuation-passing algorithm into a version that mimics the call stack and traverses the tree depth first, you might start to wonder whether there's a simpler algorithm that traverses the tree depth first, keeping track of the maximum depth encountered so far.
And indeed, there is. The trick is to keep a list of branches that you haven't yet explored, together with their depths, and keep track of the maximum depth you've seen so far. It looks like this
depth t = go 0 [(0, t)]
  where
    go depth [] = depth
    go depth (t : ts) = case t of
      (d, Empty)        -> go (max depth d) ts
      (d, Branch _ l r) -> go (max depth d) ((d+1, l) : (d+1, r) : ts)
I think that's about as simple as I can make this function within the constraints of ensuring that it's tail-recursive.
6. So that's what I should use?
To be honest, your original, non tail-recursive version is probably fine. The new versions aren't any more space efficient (they always have to store the list of trees that you're going to process next) but they do have the advantage of storing the trees to be processed next on the heap, rather than on the stack - and there's lots more space on the heap.
You might want to look at the partially tail-recursive function in Ingo's answer, which will help in the case when your trees are extremely unbalanced.
A partially tail recursive version would be this:
depth d Empty = d
depth d (Branch _ l Empty) = depth (d+1) l
depth d (Branch _ Empty r) = depth (d+1) r
depth d (Branch _ l r) = max (depth (d+1) l) (depth (d+1) r)
Note that tail recursion in this case (as opposed to the more complex full case in Chris's answer) is done only to skip over the incomplete branches.
But this should be enough under the assumption that the depth of your trees is at most some double-digit number. In fact, if you properly balance your tree, this will be fine. If, on the other hand, your trees tend to degenerate into lists, this already helps to avoid stack overflow (a hypothesis I haven't proved, but it is certainly true for a totally degenerate tree that has no branch with two non-empty children).
Tail recursion is not a virtue in and of itself. It is only important if we would otherwise explode the stack with what would be a simple loop in imperative programming languages.
to your 3., yes, e.g. by use of the CPS technique (as shown in Chris's answer);
to your 4., correct.
to your 2., with lazy corecursive breadth-first tree traversal we naturally get a solution similar to Chris's last (i.e. his #5., depth-first traversal with explicated stack), even without any calls to max:
treedepth :: Tree a -> Int
treedepth tree = fst $ last queue
  where
    queue = (0, tree) : gen 1 queue
    gen 0 _ = []
    gen len ((d, Empty) : p) = gen (len - 1) p
    gen len ((d, Branch _ l r) : p) = (d+1, l) : (d+1, r) : gen (len + 1) p
Though both variants have space complexity of O(n) in the worst case, the worst cases themselves are different, and opposite to each other: the most degenerate trees are the worst case for depth-first traversal (DFT) and the best case (space-wise) for breadth-first (BFT); and similarly the most balanced trees are the best case for DFT and the worst for BFT.
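As a final sanity check, the variants in this thread should all agree; for instance, with the original depth and this treedepth:

ghci> let t = Branch 1 (Branch 2 Empty Empty) (Branch 3 Empty (Branch 4 Empty Empty))
ghci> (depth t, treedepth t)
(3,3)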
