How can I pass a set of instances to a function or predicate in Alloy Analyzer's Evaluator?

BLUF: I have a predicate which takes as arguments an instance of a signature and a set of instances of the same signature. Upon generating instances of the model, I'd like to pass instances of the signature to the predicate, but am at a loss for how to pass a set of instances, if it is even possible.
Alloy's Evaluator does not appear to be well-documented, unless I've missed it. I have Daniel Jackson's book, have done the tutorial, and found various other resources online, but none of them really addresses how to use the Evaluator.
I've tried notation like:
myPred[instance$0,set(instance$1,instance$2)]
and
myPred[instance$0,set[instance$1,instance$2]]
and
myPred[instance$0,(instance$1,instance$2)]
and
myPred[instance$0,[instance$1,instance$2]]
The Evaluator doesn't like any of them. Is it possible to pass a set of instances? If so, how do I do it? Thanks for the help!

So, in usual fashion for me, almost as soon as I asked the question, I realized the answer (or at least a way to do what I wanted). I simply used the union operator "+" to pass the set.
myPred[instance$0, instance$1 + instance$2]
Sorry for the inconvenience, but maybe this will help someone else!

Related

Sigilless class attribute

Although you can itemize any kind of data structure to fit it into a scalar variable, sigilless variables might have some intrinsic value as class attributes, since they are actually shapeless. However, there seems to be no way to do so: has \.a does not work, and neither does has .\a. A has $!a can probably cover most of what we could achieve with sigilless variables, but is there an actual way to use them as attributes?
There's currently no way to have a sigilless attribute. It's also not, so far as I'm aware, currently under active consideration for inclusion in a future Raku version. The most obvious design considerations, were it to be proposed, would be:
The semantics of my \foo = ... are single static assignment. It's quite clear what that means on a lexically scoped symbol. It's less clear what it would mean in a class declaration, and how it would interact with the instantiation workflow. For example, there'd be no equivalent of the :$!foo signature syntax that can be convenient in BUILD/TWEAK.
The use of a twigil implies that it follows a sigil; without a sigil, the twigil forms are potentially ambiguous, depending on context. Further, it would be odd to allow the . case only for the purpose of declaring that we want a sigilless attribute to get an accessor.
It would break the rule that you can always find access to the instance state by looking for a ! twigil, which would be a pity.

Custom constraint OR-Tools // Constraint programming

I am working on a geometry problem with the OR-Tools constraint programming tools.
Could one of you tell me the procedure to create a custom constraint?
I don't really understand demons, model visitors, and their behavior...
Also, can any type of constraint be inserted?
Thank you in advance
To write a constraint, you need to understand that during search, variables are not instantiated (that is, their domain has not been reduced to a single value). Therefore, calling Value() does not work.
You can access the current domain (min, max, list of possible values), and then you can write deduction rules from there.
See https://github.com/google/or-tools/blob/stable/examples/cpp/dobble_ls.cc.
Now, the CP solver is replaced by the CP-SAT solver, which does not allow writing custom constraints. In that case, maybe you can express your constraints with Boolean logic and arithmetic operators.

Strict map with custom reading method or type with custom read instance

So, it is not a problem as such, but I would like an opinion on which would be the better way. I need to read data from an outside source (TCP) that comes in basically this format:
key: value
okey: enum
stuff: 0.12240
amazin: 1020
And I need to parse it into a Haskell-accessible format. The two solutions I thought about were either to parse it into a strict String-to-String map, or to use a record-syntax type declaration.
Initially I thought of making a type synonym for my String-to-String map and writing extractor functions like amazin :: NiceSynonym -> Int, doing the necessary treatment and parsing within the function, but that felt sketchy at the time. Then I tried an actual type declaration with record syntax and a custom Read instance. That was a nightmare, because there are a lot of enums and keys with different types, and it felt... disappointing. It simply wraps the arguments and creates accessor functions, not much different from the original: amazin :: TypeDeclaration -> Int.
Now I'm kind of regretting not going with reader functions as I initially envisioned. So, is there anything else I'm forgetting to consider? Any pros and cons of either side to take note of? Is one objectively better than the other?
P.S.: Some considerations that may make one or the other better:
Once read, I won't need to change it at all; it's basically a status report
No need to compare, add, etc.; again, it's just a status report, so there's no point
Not really a need for performance; I won't be reading hundreds per second or anything
TL;DR: Given that input example, what's the best way to turn it into a Haskell-readable format? A map, a data constructor, a dependent map...
Both ways are quite valid in their own respects, but since I was also making an API to interact with the protocol, I preferred the record syntax so I could cover all the properties more easily. Also, I wasn't really going to do any checking or treatment in the getter functions, and no matter how boring writing the Read instance for my type might have seemed, I bet writing all the get functions manually would have been worse. Parsing stuff manually is inherently boring; I guess I was just looking for a magical functional one-liner to do all the work for me.
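For illustration, here is a minimal sketch of both approaches, assuming the report arrives as plain key: value lines. The field names come from the example above, while the type and function names (Fields, parseFields, Report, toReport) are made up for this sketch, and read is used without any error handling:

import qualified Data.Map.Strict as M
import Data.Maybe (fromMaybe)

-- Approach 1: a strict String-to-String map plus extractor functions.
type Fields = M.Map String String

parseFields :: String -> Fields
parseFields = M.fromList . map splitLine . lines
  where
    splitLine l = case break (== ':') l of
      (k, ':' : v) -> (k, dropWhile (== ' ') v)
      (k, _)       -> (k, "")

-- Extractor in the style of amazin :: NiceSynonym -> Int from the question.
amazin :: Fields -> Int
amazin = maybe 0 read . M.lookup "amazin"

-- Approach 2: a record, built from the same map instead of a custom Read instance.
data Report = Report
  { reportKey    :: String
  , reportOkey   :: String
  , reportStuff  :: Double
  , reportAmazin :: Int
  } deriving Show

toReport :: Fields -> Report
toReport fs = Report
  { reportKey    = get "key"
  , reportOkey   = get "okey"
  , reportStuff  = maybe 0 read (M.lookup "stuff" fs)
  , reportAmazin = maybe 0 read (M.lookup "amazin" fs)
  }
  where
    get k = fromMaybe "" (M.lookup k fs)

Either way, the raw lines are parsed once in parseFields; the record approach just adds toReport on top, which is roughly the trade-off discussed above.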

What is the typed hole exploration development style?

While doing the CIS194 (Spring 2013) homework 10, I got stuck on the Applicative instance of a Parser type. I sought help from Google and came across this Reddit post. The user ephrion gave an answer, which was also an example of the typed-hole exploration method, which I didn't quite understand. In the comments section of his answer he also said this:
It's extremely useful and one of the things that makes Haskell development so nice.
So the question is: what exactly is this method, and is there an explicit order of steps to it?
I still consider myself a beginner when it comes to Haskell, and googling the subject didn't turn up a very clear explanation of how this kind of development style can be used.
Almost anywhere on the right hand side of an assignment in Haskell, you can write an underscore (optionally followed by other characters) instead of a value (constant or function). Instead of compiling, GHC will then tell you which type of value you might want to replace the underscore with, and list which identifiers in scope are of that type.
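As a concrete illustration, here is a minimal sketch (a made-up example, not the CIS194 Parser): leave an underscore where you do not yet know what to write, and GHC reports the type the hole must have, plus (in GHC 8.6 and later) a list of in-scope "valid hole fits".

{-# OPTIONS_GHC -fdefer-typed-holes #-}
-- With -fdefer-typed-holes the hole is reported as a warning instead of an
-- error, so the rest of the module still compiles while you explore.
module HoleDemo where

-- Compiling this reports something like
--   Found hole: _ :: Int -> Int -> Int
-- together with valid hole fits such as (+), (*), max, ...
sumList :: [Int] -> Int
sumList = foldr _ 0

-- Replacing the hole with one of the suggested fits finishes the definition:
sumList' :: [Int] -> Int
sumList' = foldr (+) 0

The order of steps is essentially: write the skeleton, put a hole where you are unsure, read the reported hole type and the suggested fits, replace the hole (possibly with an expression containing new, smaller holes), and repeat until no holes remain.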
Matthías Páll Gissurarson is expanding the list of hints from GHC to include compound expressions.

IComparer or IEquatable or neither?

I have two different types, TypeA and TypeB, both of which are relatable in the sense that they describe the same concept but from different viewpoints, though there exists a loose temporal relationship.
I want to create a utility method which tests whether the two instances I pass into it are in fact related.
The objects are not directly equatable, but I thought they might be comparable (which they are). However, when I look at the description of IComparable, it suggests that it should be implemented to assist sorting of arrays.
Is there a better interface to use, or should I just create my own comparer routine which does not implement any interface? This is what I am going with for now, but I thought there might be a more pattern-specific solution, hence the question.
I would go for my own solution, as neither IComparer nor IEquatable matches your requirement.
Maybe just a bool IsRelatedTo() instance method would be enough?
