If I have a sequence of values, how would I find the index of an element based on a predicate function? For example, if I had the following seq:
let values = @["pie", "cake", "ice cream"]
How would I find the index of the first element with four characters? I know of find, but it seems to only find an index by equality, and does not allow passing a predicate. I could implement this myself, but it feels as if it should be in the standard library if find is.
A simple solution would be to use map from sequtils to map the predicate over the input sequence, and then to use find to get the index of the first true value in the result. This returns -1 when no element of the sequence satisfies the predicate:
import sequtils
proc isLen4(s: string): bool =
  len(s) == 4
echo map(@["pie", "cake", "ice cream"], isLen4).find(true) #--> 1
This works, but is bad for large sequences since map processes the entire sequence. Thus, even when the first element satisfies the predicate, the entire sequence is processed. It would be better to write a findIf procedure that returns the current index as soon as the predicate is satisfied, instead of continuing to process the rest of the input:
proc findIf[T](s: seq[T], pred: proc(x: T): bool): int =
  result = -1 # return -1 if no items satisfy the predicate
  for i, x in s:
    if pred(x):
      result = i
      break
echo @["pie", "cake", "ice cream"].findIf(isLen4) #--> 1
TL;DR: How can I generate a graph while constraining it to be subisomorphic to every graph in a positive list while being non-subisomorphic to every graph in a negative list?
I have a list of directed heterogeneous attributed graphs labeled as positive or negative. I would like to find the smallest list of patterns (graphs with special values) such that:
Every input graph has a pattern that matches (= 'P is subisomorphic to G, and the mapped nodes have the same attribute values')
A positive pattern can only match a positive graph
A positive pattern does not match any negative graph
A negative pattern can only match a negative graph
A negative pattern does not match any positive graph
Example:
Input: g1(+), g2(-), g3(+), g4(+), g5(-), g6(+)
Acceptable solution: p1(+), p2(+), p3(-), where p1(+) matches g1(+) and g4(+); p2(+) matches g3(+) and g6(+); and p3(-) matches g2(-) and g5(-)
Non-acceptable solution: p1(+), p2(-), where p1(+) matches g1(+), g2(-), g3(+); p2(-) matches g4(+), g5(-), g6(+)
Currently, I'm able to generate graphs matching every graph in a list, but I can't manage to enforce the constraint 'A positive pattern does not match any negative graph'. I made a predicate 'matches', which takes as input a pattern and a graph, and uses a local array of variables 'mapping' to try and map nodes together. But when I try to use that predicate in a negative context, the following error is returned: MiniZinc: flattening error: free variable in non-positive context.
How can I bypass that limitation? I tried to code the opposite predicate 'not_matches' but I've not yet found how to specify 'for all node mappings, the isomorphism is invalid'. I also can't define the mapping outside the predicate, because a pattern can match a graph more than once and I need to be able to get all mappings.
Here is a reproducible example:
include "globals.mzn";
predicate p(array [1..5] of var 0..10:arr1, array [1..5] of 1..10:arr2)=
let{array [1..5] of var 1..5: mapping; constraint all_different(mapping)} in (forall(i in 1..5)(arr1[i]=0\/arr1[i]=arr2[mapping[i]]));
array [1..5] of var 0..10:arr;
constraint p(arr,[1,2,3,4,5]);
constraint p(arr,[1,2,3,4,6]);
constraint not p(arr,[1,2,3,5,6]);
solve satisfy;
For that example, the decision variable is an array, and the predicate p is true if a mapping exists such that the values of the array are mapped together. One or more elements of the array can also be 0, used here as a wildcard.
[1,2,3,4,0] is an acceptable solution
[0,0,0,0,0] is not acceptable: it matches anything, and the solution should not match [1,2,3,5,6]
[1,2,3,4,7] is not acceptable: it doesn't match anything (as there is no 7 in the parameter arrays)
Thanks in advance! =)
Edit: Added non-acceptable solutions
It is probably good to note that MiniZinc's limitation is not coincidental. When the creation of a free variable is negated, rather than finding a valid assignment for the variable, the model would instead have to prove that no such valid assignment exists. This is a much harder problem that would bring MiniZinc into the field of quantified constraint programming. The only general solution (to still receive the same flattened constraint model) would be to iterate over all possible values for each variable and enforce the negated constraints. Since the number of possibilities quickly explodes and the chance of getting a good model is small, MiniZinc does not do this automatically and throws this error instead.
This technique would work in your case as well. In the not_matches version of your predicate, you can iterate over all possible permutations (the possible mappings) and enforce that they are not correct (partial) mappings. This would be a correct way to enforce the constraint, but would quickly explode. I believe, however, that there is a different way to enforce this constraint that will work better.
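To make the brute-force route concrete, here is a sketch for the 5-element example from the question (not_matches is the name mentioned in the question; every candidate mapping is enumerated as parameters, so nothing is free under the negation):
predicate not_matches(array [1..5] of var 0..10: arr1, array [1..5] of 1..10: arr2) =
  % enumerate every candidate mapping as parameters; the where clause keeps only permutations
  forall (m1, m2, m3, m4, m5 in 1..5 where card({m1, m2, m3, m4, m5}) = 5) (
    let { array [1..5] of int: m = [m1, m2, m3, m4, m5] } in
      not forall (i in 1..5) (arr1[i] = 0 \/ arr1[i] = arr2[m[i]])
  );
As noted, this enumerates all 120 mappings and will not scale to real graph sizes.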
My idea stems from the fact that, although the most natural way to describe a permutation from one array to another is to actually create the assignment from the first to the second, when dealing with discrete variables you can instead enforce that each array has the exact same number of occurrences of each possible value. As such, a predicate that enforces X is a permutation of Y might be written as:
predicate is_perm(array[int] of var $$E: X, array[int] of var $$E: Y) =
  let {
    array[int] of int: vals = [i | i in (dom_array(X) union dom_array(Y))]
  } in global_cardinality(X, vals) = global_cardinality(Y, vals);
Notably, this predicate can be negated because it doesn't contain any free variables. All new variables (the resulting values of global_cardinality) are functionally defined. When negated, only the relation = has to be changed to !=.
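For illustration, the negated counterpart might look like this (not_is_perm is just a name chosen here; it mirrors the predicate above with = replaced by !=):
predicate not_is_perm(array[int] of var $$E: X, array[int] of var $$E: Y) =
  let {
    array[int] of int: vals = [i | i in (dom_array(X) union dom_array(Y))]
  } in global_cardinality(X, vals) != global_cardinality(Y, vals);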
In your model, we are not just considering full permutations, but rather partial permutations, with the dummy value 0 marking the unused positions. As such, the p predicate might also be written as:
predicate p(array [int] of var 0..10: X, array [int] of var 1..10: Y) =
  let {
    set of int: vals = lb_array(Y)..ub_array(Y); % must not include dummy value
    array[vals] of var int: countY = global_cardinality(Y, [i | i in vals]);
    array[vals] of var int: countX = global_cardinality(X, [i | i in vals]);
  } in forall(i in vals) (countX[i] <= countY[i]);
Again this predicate does not contain any free variables, and can be negated. In this case, the forall can be changed into an exists with a negated body.
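As a sketch, assuming the same formulation as above, that negated version (named not_p here) could look like:
predicate not_p(array [int] of var 0..10: X, array [int] of var 1..10: Y) =
  let {
    set of int: vals = lb_array(Y)..ub_array(Y); % must not include dummy value
    array[vals] of var int: countY = global_cardinality(Y, [i | i in vals]);
    array[vals] of var int: countX = global_cardinality(X, [i | i in vals]);
  } in exists(i in vals) (countX[i] > countY[i]); % negated body of the forall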
There are a few things that we can still do to optimise p for this use case. First, it seems that global_cardinality is only defined for variables, but since Y is guaranteed par, we can rewrite it and have the correct counts during MiniZinc's compilation. Second, it can be seen that lb_array(Y)..ub_array(Y) gives the tightest possible set; in your example, however, this means that a slightly different version of the global cardinality function is evaluated for each call, where they could have been shared through common subexpression elimination (CSE) if the declared values of Y were used instead:
predicate p(array [1..5] of var 0..10: X, array [1..5] of 1..10: Y) =
  let {
    % CHANGE: Use declared values of Y to ensure CSE will reuse `global_cardinality` result values.
    set of int: vals = 1..10; % do not include dummy value
    % CHANGE: parameter evaluation of global_cardinality
    array[vals] of int: countY = [count(j in index_set(Y)) (i = Y[j]) | i in vals];
    array[vals] of var int: countX = global_cardinality(X, [i | i in 1..10]);
  } in forall(i in vals) (countX[i] <= countY[i]);
Regarding the example: one approach might be to rewrite the not p(...) constraint into a specific not_p(...) constraint, but I'm not sure how that would be formulated.
Here's an example but it's probably not correct:
predicate not_p(array [1..5] of var 0..10: arr1, array [1..5] of 1..10: arr2) =
  let {
    array [1..5] of var 1..5: mapping;
    constraint all_different(mapping)
  } in
  exists(i in 1..5) (
    arr1[i] != 0
    /\
    arr1[i] != arr2[mapping[i]]
  );
This gives 500 solutions, such as:
arr = [1, 0, 0, 0, 0];
----------
arr = [2, 0, 0, 0, 0];
----------
arr = [3, 0, 0, 0, 0];
...
----------
arr = [2, 0, 0, 3, 4];
----------
arr = [2, 0, 1, 3, 4];
----------
arr = [2, 1, 0, 3, 4];
Update
I added not before the exists loop.
I have the following Scala code:
val res = for {
  i <- 0 to 3
  j <- 0 to 3
  if (similarity(z(i), z(j)) < threshold) && (i <= j)
} yield z(j)
Here z is an Array[String], and similarity(z(i), z(j)) calculates the similarity between two strings.
The idea is that similarity is calculated between the 1st string and all the other strings, then between the 2nd string and all other strings except the first, then for the 3rd string, and so on.
My requirement is that if the 1st string matches the 3rd, 4th, and 8th strings, then these three strings shouldn't participate in the loop any further; the loop should jump to the 2nd string, then the 5th string, the 6th string, and so on.
I am stuck at this step and don't know how to proceed further.
I am presuming that your intent is to keep the first String of two similar Strings (e.g. if the 1st String is too similar to the 3rd, 4th, and 8th Strings, keep only the 1st String out of these similar strings).
I have a couple of ways to do this. They both work, in a sense, in reverse: for each String, if it is too similar to any later Strings, then that current String is filtered out (not the later Strings). If you first reverse the input data before applying this process, you will find that the desired outcome is produced (although in the first solution below the resulting list is itself reversed - so you can just reverse it again, if order is important):
1st way (likely easier to understand):
def filterStrings(z: Array[String]) = {
  val revz = z.reverse
  val filtered = for {
    i <- 0 until revz.length if !revz.drop(i + 1).exists(zz => similarity(zz, revz(i)) < threshold)
  } yield revz(i)
  filtered.reverse // re-reverses output if order is important
}
The 'drop' call is to ensure that each String is only checked against later Strings.
2nd option (fully functional, but harder to follow):
val filtered = z.reverse.foldLeft((List.empty[String], z.reverse)) { case ((acc, zt), zz) =>
  (if (zt.tail.exists(tt => similarity(tt, zz) < threshold)) acc else zz :: acc, zt.tail)
}._1
I'll try to explain what is going on here (in case you - or any readers - aren't used to following folds):
This uses a fold over the reversed input data, starting from an empty List (to accumulate results) and the (reverse of the) remaining input data (to compare against - I labeled it zt for "z-tail").
The fold then cycles through the data, checking each entry against the tail of the remaining data (so it doesn't get compared to itself or any earlier entry).
If there is a match, just the existing accumulator (labelled acc) will be allowed through; otherwise, the current entry (zz) is added to the accumulator. This updated accumulator is paired with the tail of the "remaining" Strings (zt.tail), to ensure a reducing set to compare against.
Finally, we end up with a pair of lists: the required remaining Strings, and an empty list (no Strings left to compare against), so we take the first of these as our result.
If I understand correctly, you want to loop through the elements of the array, comparing each element to later elements, and removing ones that are too similar as you go.
You can't (easily) do this within a simple loop. You'd need to keep track of which items had been filtered out, which would require another array of booleans, which you update and test against as you go. It's not a bad approach and is efficient, but it's not pretty or functional.
So you need to use a recursive function, and this kind of thing is best done using an immutable data structure, so let's stick to List.
def removeSimilar(xs: List[String]): List[String] = xs match {
  case Nil => Nil
  case y :: ys => y :: removeSimilar(ys filter { x => similarity(y, x) < threshold })
}
It's a simple-recursive function. Not much to explain: if xs is empty, it returns the empty list, else it adds the head of the list to the function applied to the filtered tail.
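For reference, here is a minimal sketch of how this might be driven; the similarity function and threshold below are hypothetical stand-ins, since the question doesn't show their definitions:
// Hypothetical stand-ins for the question's similarity and threshold.
def similarity(a: String, b: String): Int = (a.toSet intersect b.toSet).size
val threshold = 2

val z = Array("pie", "pier", "cake", "cape")
val kept = removeSimilar(z.toList) // each element is compared only against later elements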
I am starting my journey with Elixir and am looking for some advice on how best to approach a particular problem.
I have a data set that needs to be searched as quickly as possible. The data consists of two numbers that form an enclosed band, plus some metadata associated with each band.
For example:
From,To,Data
10000,10999,MetaData1
11000,11999,MetaData2
12000,12499,MetaData3
12500,12999,MetaData4
This data set could have upwards of 100,000 entries.
I have a struct defined that models the data, along with a parser that creates an in-memory representation as an Elixir list.
defmodule Band do
  defstruct from: 0, to: 0, metadata: 0
end
The parser returns a list of Band structs. I defined a find function that uses a list comprehension:
defp find_metadata(bands, number) do
  match? = fn(x) -> x.from <= number and x.to >= number end
  [match | _] = for band <- bands, match?.(band), do: band
  {:find, match}
end
Based on my newbie knowledge, using the list comprehension will require a full traversal of the list. To avoid scanning the full list, I have used search trees in other languages.
Is there an algorithm/mechanism/approach available in Elixir that would be more efficient for this type of search problem?
Thank you.
If the bands are mutually exclusive you could structure them into a tree sorted by from. Searching through that tree should take log(n) time. Something like the following should work:
defmodule Tree do
  defstruct left: nil, right: nil, key: nil, value: nil

  def empty do
    nil
  end

  def insert(tree, value = {key, _}) do
    cond do
      tree == nil -> %Tree{left: empty(), right: empty(), key: key, value: value}
      key < tree.key -> %{tree | left: insert(tree.left, value)}
      true -> %{tree | right: insert(tree.right, value)}
    end
  end

  def find_interval(tree, value) do
    cond do
      tree == nil -> nil
      value < tree.key -> find_interval(tree.left, value)
      between(tree.value, value) -> tree.value
      true -> find_interval(tree.right, value)
    end
  end

  def between({left, right}, value) do
    value >= left and value <= right
  end
end
Note that you can also use Ranges to store the "bands" as you call them. Also note that the tree isn't balanced. A simple scheme to (probably) achieve a balanced tree is to shuffle the intervals before inserting them. Otherwise you'd need a more complex implementation that balances the tree. You can look at Erlang's gb_trees for inspiration.
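For illustration, a minimal usage sketch, using the band boundaries from the question as {from, to} tuples (metadata omitted for brevity):
bands = [{10_000, 10_999}, {11_000, 11_999}, {12_000, 12_499}, {12_500, 12_999}]

# Shuffle before inserting to (probably) keep the naive tree roughly balanced.
tree =
  bands
  |> Enum.shuffle()
  |> Enum.reduce(Tree.empty(), fn band, acc -> Tree.insert(acc, band) end)

Tree.find_interval(tree, 12_600) # => {12_500, 12_999}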
Is there a way that I can write a predicate function that will compare two strings and see which one is greater? Right now I have
def helper1(x, y):
    return x > y
However, I'm trying to use the function in this way,
new_tuple = divide(helper1(some_value, l[0]), l[1:])
Please note that the above function call is probably wrong because my helper1 is incomplete. But the gist is that I'm trying to compare two items to see if one is greater than the other, and the items being compared are the elements of l[1:] against l[0].
Divide is a function that, given a predicate and a list, divides that list into a tuple that has two lists, based on what the predicate comes out as. Divide is very long, so I don't think I should post it on here.
So given that a predicate should only take one parameter, how should I write it so that it will take one parameter?
You should write a closure.
def helper(x):
    def cmp(y):
        return x > y
    return cmp
...
new_tuple = divide(helper(l[0]), l[1:])
...
In the Go blog, this is how to print a map in order:
http://blog.golang.org/go-maps-in-action
import "sort"
var m map[int]string
var keys []int
for k := range m {
keys = append(keys, k)
}
sort.Ints(keys)
for _, k := range keys {
fmt.Println("Key:", k, "Value:", m[k])
}
But what if I have string keys, like var m map[string]string?
I can't figure out how to print out the strings in order (not sorted, but in the order in which they were added to the map).
The example is at my playground http://play.golang.org/p/Tt_CyATTA3
As you can see, it keeps printing the strings in a jumbled order, so I tried mapping integer values to map[string]string, but I still could not figure out how to map each element of map[string]string.
http://play.golang.org/p/WsluZ3o4qd
Well, the blog mentions that iteration order is randomized:
"...When iterating over a map with a range loop, the iteration order is not specified and is not guaranteed to be the same from one iteration to the next"
The solution is kind of trivial: you keep a separate slice with the keys ordered as you need:
"...If you require a stable iteration order you must maintain a separate data structure that specifies that order."
So, to make it work as you expect, create an extra slice with the correct order, then iterate over that slice and print in that order.
order := []string{"i", "we", "he", ....}
func String(result map[string]string) string {
    s := ""
    for _, k := range order { // keys present in result, printed in the predefined order
        if v, ok := result[k]; ok {
            s += k + ": " + v + "\n"
            delete(result, k) // consume it so only the non-defined keys remain
        }
    }
    for k, v := range result { // print any keys not listed in order at the end
        s += k + ": " + v + "\n"
    }
    return s
}
See it running here: http://play.golang.org/p/GsDLXjJ0-E