Use topological sort with Alloy 4.2

I'm trying to use topological sort to find two different sequential schedules that follow their prereqs. When I execute the code no instances are found and I'm not sure why. Here's my code:
open util/relation
abstract sig Course {
prereq: set Course, -- c->d in prereq if c is a prerequisite of d
s1, s2: set Course -- two sequential course schedules
}
one sig cs1121, cs1122, cs1141, cs2311, cs2321,
cs3000, cs3141, cs3311, cs3331, cs3411, cs3421, cs3425 extends Course { }
fact {
no prereq.cs1121
prereq.cs1122 = cs1121
prereq.cs1141 = cs1122
prereq.cs2311 = cs1121
prereq.cs2321 = cs1122
prereq.cs3000 = cs3141
prereq.cs3141 = cs2311
prereq.cs3141 = cs2321
prereq.cs3311 = cs2311
prereq.cs3331 = cs1141
prereq.cs3331 = cs2311
prereq.cs3331 = cs2321
prereq.cs3411 = cs1141
prereq.cs3411 = cs3421
prereq.cs3421 = cs1122
prereq.cs3425 = cs2311
prereq.cs3425 = cs2321
}
-- is the given schedule a topological sort of the prereq relation?
pred topoSort [schedule: Course->Course] {
(all c: Course | lone c.schedule and lone schedule.c) -- no branching in the schedule
and totalOrder[*schedule, Course] -- and it's a total order
and prereq in ^schedule -- and it obeys the prerequisite graph
}
pred show {
s1.irreflexive and s2.irreflexive -- no retaking courses!
s1.topoSort and s2.topoSort -- both schedules are topological sorts of the prereq relation
s1 != s2 -- the schedules are different
}
run show

Switch the solver (under the Options menu) to MiniSAT with Unsat Core, then look at the core. You'll see it highlights
prereq.cs3141 = cs2311
prereq.cs3141 = cs2321
These two facts are contradictory on their own: together they force cs2311 = cs2321, yet those are distinct singleton sigs, so the fact block can never be satisfied and no instance can be found.
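Presumably the intent is that cs3141 (and likewise cs3331, cs3411 and cs3425) requires all of the listed courses. In that case each prerequisite set should be stated once, as a union, e.g.:
fact {
  no prereq.cs1121
  prereq.cs1122 = cs1121
  prereq.cs1141 = cs1122
  prereq.cs2311 = cs1121
  prereq.cs2321 = cs1122
  prereq.cs3000 = cs3141
  prereq.cs3141 = cs2311 + cs2321
  prereq.cs3311 = cs2311
  prereq.cs3331 = cs1141 + cs2311 + cs2321
  prereq.cs3411 = cs1141 + cs3421
  prereq.cs3421 = cs1122
  prereq.cs3425 = cs2311 + cs2321
}
With that change the fact is at least satisfiable, so the analyzer has a chance of finding two different schedules.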

Related

Predicate not consistent

I have to build an application that lets a user reserve a seat in a store. There are no syntax errors, but I don't understand why the analyzer replies "predicate not consistent". Can you help me?
This is my code:
open util/integer
sig Email{}
sig CF{}
sig Time{
hour: one Int,
minute:one Int,
second: one Int
}
abstract sig RegisteredUser{
email: one Email
}
sig User extends RegisteredUser{
cf: one CF
}
sig SM extends RegisteredUser{
cf: one CF,
store: one Store
}
sig Location{}
sig Date{}
abstract sig Status {}
//only one of this can be true
one sig Waiting extends Status {}
one sig Expired extends Status{}
one sig Pending extends Status{}
sig Ticket{
owner:one User,
date: one Date,
time: one Time,
status : one Status,
}
sig Visit{
owner:one User,
date: one Date,
time: one Time,
duration:one Time,
status : one Status,
products: some Product,
category: some Category,
}
//tickets are assumed to have a default duration of 15 min and one entrance per time slot, so 4*8=32 visits per day
sig TicketQueue {
max_visit:one Int,
ticket: some Ticket,
manager: one SM
}
sig VisitQueue{
category: some Category,
visit: some Visit,
manager: one SM
}
sig Store{
max_simultaneous: one Int,
location: one Location,
visitqueue: one VisitQueue,
ticketqueue: one TicketQueue,
product_category: some Category
}
sig Category{
simultaneous_seats: one Int,
}
sig Product{
category: one Category
}
// Constraints
// Registration data for the system are unique(Unique username and Fiscal Code)
fact registrationDataUniqueness {
no disjoint u1, u2: RegisteredUser | u1.email = u2.email
no disjoint u1,u2: User | u1.cf = u2.cf
no disjoint s1,s2: SM | s1.cf = s2.cf
no disjoint s:SM, u:User | s.cf=u.cf
}
//Ticket and Visit can have only one of status' values defined before
fact requestConsistency {
all s: Status | (s = Waiting && s != Expired && s !=Pending ) || (s != Waiting && s = Expired && s != Pending) || (s !=Waiting && s != Expired && s = Pending)
}
//SM can manage only one Store
fact StoreUniqueness{
no disjoint s1,s2: SM |s1.store=s2.store
}
// the same ticket or visit cannot be of two or more different user
fact UserUniqueness{
no disjoint t1,t2: Ticket | t1.owner=t2.owner
no disjoint v1,v2: Visit | v1.owner=v2.owner
}
//different products can not be of the same category
fact CategoryUniqueness{
no disjoint p1,p2: Product | p1.category=p2.category
}
//different stores can't have the same ticket or visit queue
fact QueueUniqueness{
no disjoint s1,s2: Store | s1.ticketqueue=s2.ticketqueue
no disjoint s1,s2: Store | s1.visitqueue=s2.visitqueue
}
//different tickets cannot have the same time
fact TimeUniqueness{
no disjoint t1,t2: Ticket | t1.time = t2.time
}
//for simplicity, in the visit/ticket queue we consider "active" only visits/tickets with Waiting status
fact StatusQueue{
all tq:TicketQueue | tq.ticket.status=Waiting
all vq:VisitQueue | vq.visit.status=Waiting
}
//one ticket/visit can not belong to two or more different queue
fact ReservationUniqueness{
no disjoint tq1,tq2:TicketQueue | tq1.ticket=tq2.ticket
no disjoint vq1,vq2:VisitQueue | vq1.visit=vq2.visit
}
//SM is unique for a specific store, and its related ticket/visit queue
fact SMUniqueness{
no disjoint s1,s2: Store | s1.ticketqueue.manager=s2.ticketqueue.manager
and s1.visitqueue.manager=s2.visitqueue.manager
and s1.ticketqueue.manager=s2.visitqueue.manager
and s1.visitqueue.manager=s2.ticketqueue.manager
}
//different tickets/visits associated with the same user have different time
fact UniqueTimeUser {
all u: User, t1, t2: Ticket, v1,v2:Visit | ((u in t1.owner) and (u in t2.owner) and (u in v1.owner) and (u in v2.owner)
and (t1 != t2) and (v1 !=v2))
implies (t1.time != t2.time)
and (v1.time != v2.time)
}
//different ticket and visit associated with the same user have different time
fact UniqueTimeUser2 {
all u: User, t: Ticket, v:Visit | ((u in t.owner) and (u in v.owner) )
implies (v.time != t.time)
}
//Maximum number of visit with the same time for category
fact MaxTime{
all v:Visit, c: Category |(( c in v.category)) implies #v.time< c.simultaneous_seats
}
//Considering meaningful integer values
fact PossibleValues{
all c: Category | c.simultaneous_seats>1 and c.simultaneous_seats<5
all s: Store | s.max_simultaneous >0 and s.max_simultaneous<50
all t: Ticket | t.time.hour>7 and t.time.hour<21 and ((t.time.minute=15) || (t.time.minute=30) || (t.time.minute=45)) and t.time.second=0
all v: Visit | v.time.hour>7 and v.time.hour<21 and ((v.time.minute=15) || (v.time.minute=30) || (v.time.minute=45)) and v.time.second=0 and v.duration.hour=0 and v.duration.minute>0 and v.duration.minute=<30 and v.duration.second=0
all tq: TicketQueue | tq.max_visit=32
}
pred addTicket[ t :Ticket,ti,ti':Time, tq, tq':TicketQueue]{
//precondition
ti' not in Ticket.time
#tq.ticket< tq.max_visit //seats available
//postconditions
tq'.manager= tq.manager
tq'.ticket= tq.ticket+t
t in tq'.ticket
#tq'.ticket< tq.max_visit
all t': Ticket | t' in tq.ticket implies t' in tq'.ticket
}
pred show{
#User = 4
#Store = 2
#SM = 2
#Category = 3
#TicketQueue = 2
#VisitQueue= 2
#Category = 5
some v, v': Visit | v.time != v'.time
#Ticket = 5
#Visit=5
}
run show for 4
This is the complete output from the Alloy Analyzer:
Executing "Run show for 4"
Solver=sat4j Bitwidth=4 MaxSeq=4 SkolemDepth=1 Symmetry=20
15098 vars. 840 primary vars. 17697 clauses. 28ms.
No instance found. Predicate may be inconsistent. 1ms.
Have you run unsat core (using the MiniSAT with Unsat Core solver)? That might give you a hint. Another thought: I would simplify the model first and then only add features as you need them. I'm not sure that you really need to split the times into hours, minutes and seconds. That adds a lot of solver complexity, perhaps needlessly.
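For instance, here is a sketch of one possible simplification (mine, not a drop-in replacement for your model): make Time an ordered atom and compare times with the ordering utility, instead of doing integer arithmetic on hours, minutes and seconds.
open util/ordering[Time]
sig Time {}
// "t1 is earlier than t2" becomes lt[t1, t2] (or t1 in prevs[t2]),
// with no integer bitwidth to worry about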
General observations:
Some inconsistencies in your show predicate:
#Category = 3
#Category = 5
Your scope is definitely too small. You can't expect 5 Visit elements if you have a general scope of 4.
I'd rewrite your show predicate and run command as follows:
pred show{
some v, v': Visit | v.time != v'.time
}
run show for 10 but 7 Int, exactly 4 User, exactly 2 Store, exactly 2 SM, exactly 3 Category, exactly 2 TicketQueue, exactly 2 VisitQueue, exactly 5 Ticket, exactly 5 Visit
Notice the 7 Int.
It means that your model will contain int type atoms that represent all the integers you can express with a bitwidth of 7 (interval is [-64,63]).
As Daniel wrote, I'd also advise staying as abstract as possible when you model in Alloy, and thus preferring concepts over quantitative values.
I concur with the previous advice of simplifying and would add to use check statements to test as you go. I'm not an expert with Alloy, so I take it slow and test early and often.
One thing that sticks out to me immediately is that your "show" predicate seems to be specifying multiplicities for some signatures that are greater than those allowed by the run statement ("for 4"). I've never specified multiplicity in a predicate that way, so I don't know how Alloy handles that. Offhand, those seem like contradictory constraints.
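A quick toy sketch (mine, not your model) of what Alloy does with that: the cardinality demanded in the predicate simply cannot be met within the scope, so no instance is found.
sig A {}
pred big { #A = 5 }
run big for 4 // no instance found: the scope caps A at 4 atoms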
Also, I figured this out the hard way, but Alloy counts instances (atoms) of extended signatures as instances (atoms) of the parent(s). So, be aware of that.
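A tiny illustration of that last point (again a toy example of mine):
abstract sig Animal {}
sig Cat, Dog extends Animal {}
// every Cat and every Dog atom is also an Animal atom, so a scope of 2 Animal
// leaves no room for 2 Cats plus 2 Dogs
run { #Cat = 2 and #Dog = 2 } for 2 Animal // no instance found
run { #Cat = 2 and #Dog = 2 } for 4 Animal // instance found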

Alloy Analyzer element comparison from set

Some background: my project is to make a compiler that compiles from a C-like language to Alloy. The input language, which has C-like syntax, must support contracts. For now, I am trying to implement if statements that support pre- and post-condition statements, similar to the following:
int x=2
if_preCondition(x>0)
if(x == 2){
x = x + 1
}
if_postCondition(x>0)
The problem is that I am a bit confused with the results of Alloy.
sig Number{
arg1: Int,
}
fun addOneConditional (x : Int) : Number{
{ v : Number |
v.arg1 = ((x = 2 ) => x.add[1] else x)
}
}
assert conditionalSome {
all n: Number| (n.arg1 = 2 ) => (some field: addOneConditional[n.arg1] | { field.arg1 = n.arg1.add[1] })
}
assert conditionalAll {
all n: Number| (n.arg1 = 2 ) => (all field: addOneConditional[n.arg1] | { field.arg1 = n.arg1.add[1] })
}
check conditionalSome
check conditionalAll
In the above example, conditionalAll does not generate any counterexample. However, conditionalSome generates counterexamples. If I understand the all and some quantifiers correctly, then there is a mistake, because from mathematical logic we have ∀x expr(x) => ∃x expr(x) (i.e. if expression expr(x) is true for all values of x, then there exists some x for which expr(x) is true).
The first thing is that you need to model your pre-conditions, post-conditions and operations. Functions are terrible for that because they cannot return something that indicates failure. You therefore need to model the operation as a predicate: the value of the predicate indicates whether the pre/post conditions are satisfied, and the arguments and return values can be modeled as parameters.
This is as far as I can understand your operation:
pred add[ x : Int, x' : Int ] {
x > 0 -- pre condition
x = 2 =>
x'=x.plus[1]
else
x'=x
x' > 0 -- post condition
}
Alloy has no writable variables (Electrum has) so you need to model the before and after state with a prime (') character.
We can now use this predicate to calculate the set of solutions to your problem:
fun solutions : set Int {
{ x' : Int | some x : Int | add[ x,x' ] }
}
We create a set with integers for which we have a result. The prime character is nothing special in Alloy, it is only a convention for the post-state. I am abusing it slightly here.
This is more than enough Alloy source to make mistakes so let's test this.
run { some solutions }
If you run this then you'll see in the Txt view:
skolem $solutions={1, 3, 4, 5, 6, 7}
This is as expected. The add operation only works for positive numbers. Check. If the input is 2, the result is 3. Ergo, 2 can never be a solution. Check.
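You can also let the Analyzer confirm those two observations instead of eyeballing the skolem set (the check names below are mine):
check onlyPositiveResults { all x' : solutions | x' > 0 }
check noTwoInResults { no x' : solutions | x' = 2 }
Neither check should produce a counterexample.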
I admit I am slightly confused by what you're doing in your asserts. I've tried to replicate them faithfully, although I've removed things that seemed unnecessary, or at least I think they're unnecessary. First, your some case: your code quantified over all Numbers but then selected on 2, so I removed the outer quantification and hardcoded 2.
check somen {
some x' : solutions | 2.plus[1] = x'
}
This indeed does not give us any counterexample. Since solutions was {1, 3, 4, 5, 6, 7}, 2+1=3 is in the set, i.e. the some condition is satisfied.
check alln {
all x' : solutions | 2.plus[1] = x'
}
However, not all solutions have 3 as the answer. If you check this, I get the following counter-example:
skolem $alln_x'={7}
skolem $solutions={1, 3, 4, 5, 6, 7}
Conclusion: Daniel Jackson advises not to learn Alloy with Ints. Looking at your Number sig, you took him literally: you still base your problem on Ints, you just hide them in a field. What he meant was to avoid Ints altogether, not to sweep them under the carpet inside a field. I understand where Daniel is coming from, but Ints are very attractive since we're so familiar with them. However, if you do use Ints, let them appear in their full glory and don't hide them.
Hope this helps.
And the whole model:
pred add[ x : Int, x' : Int ] {
x > 0 -- pre condition
x = 2 =>
x'=x.plus[1]
else
x'=x
x' > 0 -- post condition
}
fun solutions : set Int { { x' : Int | some x : Int | add[ x,x' ] } }
run { some solutions }
check somen { some x' : solutions | x' = 3 }
check alln { all x' : solutions | x' = 3 }

Alloy error signature

I have to run an example from the book "Logic in Computer Science" by Michael Huth and Mark Ryan. The example is in section 2.7.3, and it's the following:
module PDS
open std/ord -- opens specification template for linear order
sig Component {
name: Name,
main: option Service,
export: set Service,
import: set Service,
version: Number
}{ no import & export }
sig PDS {
components: set Component,
schedule: components -> Service ->? components,
requires: components -> components
}{ components.import in components.export }
fact SoundPDSs {
all P : PDS |
all c : components, s : Service | --1
let c' = c.schedule[s] {
(some c' iff s in c.import) && (some c' => s in c'.export)
}
all c : components | c.requieres = c.schedule[Service] --2
}
sig Name, Number, Service {}
fun AddComponent(P, P': PDS, c: Component) {
not c in P.components
P'.components = P.components + c
} run AddComponent for 3 but 2 PDS
fun RemoveComponent(P, P' : PDS, c: Component) {
c in P.components
P'.components = P.components - c
} run RemoveComponents for 3 but 2 PDS
fun HighestVersionPolicy(P: PDS) {
all s : Service, c : components, c' : c.schedule[s],
c'' : components - c' {
s in c''.export && c''.name = c'.name => c''.version in c'version.^(Ord[Number].prev)
}
} run HighestVersionPolicy for 3 but 1 PDS
fun AGuideSimulation(P, P', P'' : PDS, c1, c2 : Component) {
AddComponent(P, P', c1) RemoveComponent(P, P'', c2)
HighestVersionPolicy(P) HigjestVersionPolicy(P') HighestVersionPolicy(P'')
} run AGuideSimulation for 3
assert AddingIsFunctionalForPDSs {
all P, P', P'' : PDS, c : Component {
AddComponent(P, P', c) && AddComponent(P, P'', c) => P' = P''
}
}
check AddingIsFunctionalForPDSs for 3
I have to run it on MIT's Alloy Analyzer (http://alloy.mit.edu/alloy/), and when I execute this code I get the following error:
Syntax error at line 7 column 15:
There are 1 possible tokens that can appear here:
}
I've searched some reference books and forums and haven't found anything useful. If someone has worked with this tool and knows how to solve this problem, I would be very grateful.
Thanks in advance.
Your primary problem is that the second edition of the Huth / Ryan book appears to have been published in 2004 and to use (unsurprisingly) the syntax accepted by the Alloy Analyzer at that time, which is (also unsurprisingly) not quite the same as the syntax accepted by current versions of the Alloy Analyzer.
So to run this in a current version of the Analyzer, you are going to have to understand (a) what they are trying to say (b) the then-current Alloy syntax in which they are trying to say it, and (c) the current Alloy syntax, well enough to translate the model into current syntax. (Or else find someone who has done this already.) Fortunately, Huth and Ryan explain the Alloy syntax they use in some detail, so it's not a difficult exercise for someone familiar with Alloy 4 to translate the model into the Alloy 4 syntax.
Good luck!
[Postscript] On the theory that the goal of your assignment is to let you get familiar with the Alloy Analyzer, and not to drop you in the deep end by requiring a translation from the old Alloy syntax to the new Alloy syntax, I append a rough translation of the Huth / Ryan PDS model into Alloy 4 syntax, with some interspersed comments. (I say 'rough' because I haven't spent a lot of time on it, and I may have missed some nuances, especially in the predicate HighestVersionPolicy, where the authors get a little tricky.) If the goal of your assignment is precisely to force you to fight your way through the thickets of syntax with only your native ingenuity to use as a machete, then I apologize for messing up that experience.
At the top of the module, the main change is to the way the ordering library module is invoked.
module PDS
open util/ordering[Number]
In Component, the keyword 'option' is replaced by the current keyword 'lone', and I've transcribed some of Huth and Ryan's comments, to try to help myself understand what is going on better.
sig Component {
name: Name, // name of the component
main: lone Service, // component may have a 'main' service
export: set Service, // services the component exports
import: set Service, // services the component imports
version: Number // version number of the component
}{
no import & export // imports and exports are disjoint
// sets of services
}
In the sig for PDS, the main change is again the change to cardinality syntax: ->? becomes -> lone.
// Package Dependency System
// components is the set of Component in this PDS
// schedule assigns to each component in the PDS
// and any of its import services
// a component in the PDS that provides that service
// (see SoundPDSs, below)
// requires expresses the component dependencies
// entailed by the schedule
sig PDS {
components: set Component,
schedule: components -> Service -> lone components,
// any component / Service pair maps to at most
// one component
requires: components -> components
}{
// for every component in the system,
// the services it imports are supplied (exported)
// by some (other) component in the system
components.import in components.export
}
In the fact SoundPDSs, the authors use a with P construct which I don't remember ever seeing in versions of Alloy I've used. So I took it out, and reformulated the expressions a bit for clarity, since the authors explain that clarity is their main motive for using the with P construct. It will be well worth your while to be sure you understand Alloy's box notation, and why P.schedule[c][s] is another way of writing c.(P.schedule)[s] or s.(c.(P.schedule)).
fact SoundPDSs {
all P : PDS | {
all c : P.components, s : Service |
let c' = P.schedule[c][s] {
(some c' iff s in c.import)
// c and s require c' only iff c imports s
&&
(some c' => s in c'.export)
// c and s require c' only if c' exports s
}
all c : P.components | P.requires[c]= P.schedule[c][Service]
// c requires precisely those components
// that schedule says c depends on for some service
}
}
sig Name, Number, Service {}
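(An aside of mine, not part of the translation: if the box notation mentioned above is unfamiliar, a throwaway check can make the equivalence concrete; the Analyzer should find no counterexample, since e[x] is by definition x.e.)
check boxNotation {
  all P : PDS, c : P.components, s : Service |
    P.schedule[c][s] = s.(c.(P.schedule))
} for 3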
The big change from here on out is that Huth and Ryan use fun for their definitions of various properties, where Alloy 4 uses pred -- the keyword fun is still legal, but it means a function (an expression which when evaluated returns a value, not a Boolean) not a predicate.
pred AddComponent(P, P': PDS, c: Component) {
not c in P.components
P'.components = P.components + c
} run AddComponent for 3 but 2 PDS
pred RemoveComponent(P, P' : PDS, c: Component) {
c in P.components
P'.components = P.components - c
} run RemoveComponent for 3 but 2 PDS
In HighestVersionPolicy, I've again introduced box notation to try to make the expression clearer. Note that prev is not defined here -- it's one of the relations imported by the import instruction (open ...) at the top of the module, from the library module for ordering.
pred HighestVersionPolicy(P: PDS) {
all s : Service,
c : P.components,
c' : P.schedule[c][s],
c'' : P.components - c' {
s in c''.export && c''.name = c'.name
=>
c''.version in ^prev[c'.version]
}
} run HighestVersionPolicy for 3 but 1 PDS
pred AGuideSimulation(P, P', P'' : PDS, c1, c2 : Component) {
AddComponent[P, P', c1]
RemoveComponent[P, P'', c2]
HighestVersionPolicy[P]
HighestVersionPolicy[P']
HighestVersionPolicy[P'']
} run AGuideSimulation for 3
assert AddingIsFunctionalForPDSs {
all P, P', P'' : PDS, c : Component {
AddComponent[P, P', c] && AddComponent[P, P'', c]
=> P' = P''
}
}
check AddingIsFunctionalForPDSs for 3
The version of their model given by Huth and Ryan on the web doesn't include the StructurallyEqual predicate they describe in their text; I added it to help make sure my translation of the model worked.
pred StructurallyEqual(P, P' : PDS) {
P.components = P'.components
P.schedule = P'.schedule
P.requires = P'.requires
}
run StructurallyEqual for 2
And similarly, they don't include the fix to AddingIsStructurallyFunctional -- the intent is presumably for the student to do it dynamically, in Alloy.
assert AddingIsStructurallyFunctionalForPDSs {
all P, P', P'' : PDS, c : Component {
AddComponent[P, P', c] && AddComponent[P, P'', c]
=>
StructurallyEqual[P',P'']
}
}
check AddingIsStructurallyFunctionalForPDSs for 3

Performance difference in toString.map and toString.toArray.map

While coding Euler problems, I ran across what I think is bizarre:
The method toString.map is slower than toString.toArray.map.
Here's an example:
def main(args: Array[String])
{
def toDigit(num : Int) = num.toString.map(_ - 48) //2137 ms
def toDigitFast(num : Int) = num.toString.toArray.map(_ - 48) //592 ms
val startTime = System.currentTimeMillis;
(1 to 1200000).map(toDigit)
println(System.currentTimeMillis - startTime)
}
Shouldn't the method map on String fall back to a map over the array? Why is there such a noticeable difference? (Note that increasing the number even causes a stack overflow in the non-array case.)
Original
Could be because toString.map uses the WrappedString implicit, while toString.toArray.map uses the WrappedArray implicit to resolve map.
Let's see map, as defined in TraversableLike:
def map[B, That](f: A => B)(implicit bf: CanBuildFrom[Repr, B, That]): That = {
val b = bf(repr)
b.sizeHint(this)
for (x <- this) b += f(x)
b.result
}
WrappedString uses a StringBuilder as builder:
def +=(x: Char): this.type = { append(x); this }
def append(x: Any): StringBuilder = {
underlying append String.valueOf(x)
this
}
The String.valueOf call for Any uses Java Object.toString on the Char instances, possibly getting boxed first. These extra ops might be the cause of the speed difference, versus the supposedly shorter code paths of the Array builder.
This is a guess though, would have to measure.
Edit
After revising, the general point still stands, but I referred to the wrong implicits, since the toDigit methods return an Int sequence (or the like), not a transformed String as I had misread.
toDigit uses LowPriorityImplicits.fallbackStringCanBuildFrom[T]: CanBuildFrom[String, T, immutable.IndexedSeq[T]], with T = Int, which just defers to a general IndexedSeq builder.
toDigitFast uses a direct Array implicit of type CanBuildFrom[Array[_], T, Array[T]], which is unarguably faster.
Passing the following CBF for toDigit explicitly makes the two methods on par:
import scala.collection.generic.CanBuildFrom

object FastStringToArrayBuild {
  def canBuildFrom[T : ClassManifest] = new CanBuildFrom[String, T, Array[T]] {
    private def newBuilder = scala.collection.mutable.ArrayBuilder.make[T]()
    def apply(from: String) = newBuilder
    def apply() = newBuilder
  }
}
You're being fooled by running out of memory. The toDigit version does create more intermediate objects, but if you have plenty of memory then the GC won't be heavily impacted (and it'll all run faster). For example, if instead of creating 1.2 million numbers, I create 12k 100x in a row, I get approximately equal times for the two methods. If I create 1.2k 5-digit numbers 1000x in a row, I find that toDigit is about 5% faster.
Given that the toDigit method produces an immutable collection, which is better when all else is equal since it is easier to reason about, and given that all else is equal for all but highly demanding tasks, I think the library is as it should be.
When trying to improve performance, of course one needs to keep all sorts of tricks in mind; one of these is that arrays have better memory characteristics for collections of known length than do the fancy collections in the Scala library. Also, one needs to know that map isn't the fastest way to get things done; if you really wanted this to be fast you should
final def toDigitReallyFast(num: Int, accum: Long = 0L, iter: Int = 0): Array[Byte] = {
if (num==0) {
val ans = new Array[Byte](math.max(1,iter))
var i = 0
var ac = accum
while (i < ans.length) {
ans(ans.length-i-1) = (ac & 0xF).toByte
ac >>= 4
i += 1
}
ans
}
else {
val next = num/10
toDigitReallyFast(next, (accum << 4) | (num-10*next), iter+1)
}
}
which on my machine is at 4x faster than either of the others. And you can get almost 3x faster yet again if you leave everything in a Long and pack the results in an array instead of using 1 to N:
final def toDigitExtremelyFast(num: Int, accum: Long = 0L, iter: Int = 0): Long = {
if (num==0) accum | (iter.toLong << 48)
else {
val next = num/10
toDigitExtremelyFast(next, accum | ((num-10*next).toLong<<(4*iter)), iter+1)
}
}
// loop, instead of 1 to N map, for the 1.2k number case
{
var i = 10000
val a = new Array[Long](1201)
while (i<=11200) {
a(i-10000) = toDigitExtremelyFast(i)
i += 1
}
a
}
As with many things, performance tuning is highly dependent on exactly what you want to do. In contrast, library design has to balance many different concerns. I do think it's worth noticing where the library is sub-optimal with respect to performance, but this isn't really one of those cases IMO; the flexibility is worth it for the common use cases.

Truly declarative language?

Does anyone know of a truly declarative language? The behavior I'm looking for is kind of what Excel does, where I can define variables and formulas, and have the formula's result change when the input changes (without having to set the answer again myself).
The behavior I'm looking for is best shown with this pseudo code:
X = 10 // define and assign two variables
Y = 20;
Z = X + Y // declare a formula that uses these two variables
X = 50 // change one of the input variables
?Z // asking for Z should now give 70 (50 + 20)
I've tried this in a lot of languages like F#, Python, MATLAB, etc., but every time I tried it they come up with 30 instead of 70. That's correct from an imperative point of view, but I'm looking for a more declarative behavior, if you know what I mean.
And this is just a very simple calculation. When things get more difficult it should handle stuff like recursion and memoization automagically.
The code below would obviously work in C# but it's just so much code for the job, I'm looking for something a bit more to the point without all that 'technical noise'
class BlaBla{
public int X {get;set;} // this used to be even worse before 3.0
public int Y {get;set;}
public int Z {get{return X + Y;}}
}
static void Main(){
BlaBla bla = new BlaBla();
bla.X = 10;
bla.Y = 20;
// can't define anything here
bla.X = 50; // bit pointless here but I'll do it anyway.
Console.WriteLine(bla.Z);// 70, hurray!
}
This just seems like so much code, curly braces and semicolons that add nothing.
Is there a language/application (apart from Excel) that does this? Maybe I'm not doing it right in the mentioned languages, or I've completely missed an app that does just this.
I prototyped a language/application that does this (along with some other stuff) and am thinking of productizing it. I just can't believe it's not there yet. Don't want to waste my time.
Any Constraint Programming system will do that for you.
Examples of CP systems that have an associated language are ECLiPSe, SICStus Prolog / CP package, Comet, MiniZinc, ...
It looks like you just want to make Z store a function instead of a value. In C#:
var X = 10; // define and assign two variables
var Y = 20;
Func<int> Z = () => X + Y; // declare a formula that uses these two variables
Console.WriteLine(Z());
X = 50; // change one of the input variables
Console.WriteLine(Z());
So the equivalent of your ?-prefix syntax is a ()-suffix, but otherwise it's identical. A lambda is a "formula" in your terminology.
Behind the scenes, the C# compiler builds almost exactly what you presented in your C# conceptual example: it makes X into a field in a compiler-generated class, and allocates an instance of that class when the code block is entered. So congratulations, you have re-discovered lambdas! :)
In Mathematica, you can do this:
x = 10; (* # assign 10 to the variable x *)
y = 20; (* # assign 20 to the variable y *)
z := x + y; (* # assign the expression x+y to the variable z *)
Print[z];
(* # prints 30 *)
x = 50;
Print[z];
(* # prints 70 *)
The operator := (SetDelayed) is different from = (Set). The former binds an unevaluated expression to a variable, the latter binds an evaluated expression.
Wanting to have two definitions of X is inherently imperative. In a truly declarative language you have a single definition of a variable in a single scope. The behavior you want from Excel corresponds to editing the program.
Have you seen Resolver One? It's like Excel with a real programming language behind it.
Here is Daniel's example in Python, since I noticed you said you tried it in Python.
x = 10
y = 10
z = lambda: x + y
# Output: 20
print z()
x = 20
# Output: 30
print z()
Two things you can look at are the cells lisp library, and the Modelica dynamic modelling language, both of which have relation/equation capabilities.
There is a Lisp library with this sort of behaviour:
http://common-lisp.net/project/cells/
JavaFX will do that for you if you use bind instead of = for Z
react is an OCaml FRP library. Contrary to naive emulations with closures, it recalculates values only when needed.
Objective Caml version 3.11.2
# #use "topfind";;
# #require "react";;
# open React;;
# let (x,setx) = S.create 10;;
val x : int React.signal = <abstr>
val setx : int -> unit = <fun>
# let (y,sety) = S.create 20;;
val y : int React.signal = <abstr>
val sety : int -> unit = <fun>
# let z = S.Int.(+) x y;;
val z : int React.signal = <abstr>
# S.value z;;
- : int = 30
# setx 50;;
- : unit = ()
# S.value z;;
- : int = 70
You can do this in Tcl, somewhat. In Tcl you can set a trace on a variable so that whenever it is accessed, a procedure is invoked. That procedure can recalculate the value on the fly.
Following is a working example that does more or less what you ask:
proc main {} {
set x 10
set y 20
define z {$x + $y}
puts "z (x=$x): $z"
set x 50
puts "z (x=$x): $z"
}
proc define {name formula} {
global cache
set cache($name) $formula
uplevel trace add variable $name read compute
}
proc compute {name _ op} {
global cache
upvar $name var
if {[info exists cache($name)]} {
set expr $cache($name)
} else {
set expr $var
}
set var [uplevel expr $expr]
}
main
Groovy and the magic of closures.
def (x, y) = [ 10, 20 ]
def z = { x + y }
assert 30 == z()
x = 50
assert 70 == z()
def f = { n -> n + 1 } // define another closure
def g = { x + f(x) } // ref that closure in another
assert 101 == g() // x=50, x + (x + 1)
f = { n -> n + 5 } // redefine f()
assert 105 == g() // x=50, x + (x + 5)
It's possible to add automagic memoization to functions too but it's a lot more complex than just one or two lines. http://blog.dinkla.net/?p=10
In F#, a little verbosely:
let x = ref 10
let y = ref 20
let z () = !x + !y
z();;
y := 40
z();;
You can mimic it in Ruby:
x = 10
y = 20
z = lambda { x + y }
z.call # => 30
x = 50
z.call # => 70
Not quite the same as what you want, but pretty close.
Not sure how well metapost(1) would work for your application, but it is declarative.
Lua 5.1.4 Copyright (C) 1994-2008 Lua.org, PUC-Rio
x = 10
y = 20
z = function() return x + y; end
x = 50
= z()
70
It's not what you're looking for, but Hardware Description Languages are, by definition, "declarative".
This F# code should do the trick. You can use lazy evaluation (System.Lazy object) to ensure your expression will be evaluated when actually needed, not sooner.
let mutable x = 10;
let y = 20;
let z = lazy (x + y);
x <- 30;
printf "%d" z.Value
