Alloy integer comparison semantics using "Forbid Overflow: Yes"

I have the following Alloy module and run command:
sig A { x : set A }
run {all a: A| #a.x<3 and #a.x>1} for exactly 2 A, 2 int
With "Forbid Overflow: No" the Alloy Analyzer 4.2 (Build date: 2012-09-25) does not find an instance. I believe the reason is that due to the overflow of the constant 3 the run predicate reads {all a: A| #a.x<-1 and #a.x>1}.
With "Forbid Overflow: Yes" the Alloy Analyzer finds an instance.
---INSTANCE---
integers={-2, -1, 0, 1}
univ={-1, -2, 0, 1, A$0, A$1}
Int={-1, -2, 0, 1}
seq/Int={0}
String={}
none={}
this/A={A$0, A$1}
this/A<:x={A$0->A$0, A$0->A$1, A$1->A$0, A$1->A$1}
The Alloy Evaluator tells me that the predicate {all a: A| #a.x<3 and #a.x>1} used in the run command evaluates to false.
Could somebody please explain this behavior? Is there a difference in the semantics of integer comparisons between the Evaluator and the Analyzer?
Edit: I noticed that the behavior is different in the latest experimental version: Alloy 4.2_2014-03-07. It does not find an instance. This behavior is as expected.

You already provided all the right answers in your question, so I can only quickly reiterate them:
the expected behavior (no instance) is the "correct" behavior for that model;
version 4.2 has some known bugs in its overflow handling, which is why it finds an instance (those bugs should be fixed in the latest version, hence the correct behavior for this model);
under the "wraparound" semantics of integers (overflow detection turned off), there is still no instance, exactly for the reason you described (3 wrapping around to -1);
the evaluator still has some issues (even in the latest version), so sometimes it will just fall back to the wraparound semantics.
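For what it's worth, a quick way to sidestep the wraparound in this example is to give the integer scope enough bits to represent the constant 3. A minimal sketch (with 3 Int the range is -4..3, so the comparison no longer overflows and an instance exists, e.g. x = A -> A with #a.x = 2):
sig A { x : set A }
// with a scope of 3 Int, integers range over -4..3, so 3 is representable
run {all a: A| #a.x < 3 and #a.x > 1} for exactly 2 A, 3 Int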

Related

Ada 2012 - replace Fixed_Decimal_Type'Round() with customised version?

Recently I had joyous success providing my own replacement 'Write procedure for a custom record type, such as...
type Pixel_Format is
   record
      -- blah blah
   end record;

procedure Pixel_Format_Write (
   Stream : not null access Root_Stream_Type'Class;
   Item   : in Pixel_Format);

for Pixel_Format'Write use Pixel_Format_Write;
I was using this to convert certain record members from little-endian to big-endian when writing to a network stream. Worked beautifully.
By the same thinking, I wondered if it is possible to replace the 'Round function of decimal fixed point types, so I attempted a quick and dirty test...
-- This is a "Decimal Fixed Point" type
type Money_Dec_Type is delta 0.001 digits 14;
-- ...
function Money_CustomRound(X : in Money_Dec_Type)
return Money_Dec_Type'Base;
for Money_Dec_Type'Round use Money_CustomRound; -- COMPILER COMPLAINS HERE
-- ...
function Money_CustomRound(X : in Money_Dec_Type)
return Money_Dec_Type'Base is
begin
return 0.001;
end Money_CustomRound;
Alas, GNAT finds this offensive:
attribute "Round" cannot be set with definition clause
Question:
Am I attempting the impossible? Or is there a way to change the default 'Round attribute, in the same way that changing 'Write is possible?
Context to the question:
I have a set of about 15 different ways of rounding currency values that change from one project to the next (sometimes within the same project!). Examples include:
Round halves away from zero (Ada's default, it seems)
Round halves towards zero
Statistical (a re-entrant type that requires global housekeeping)
Round towards evens OR odds
Round towards +INF / -INF
...
It would be a powerful tool to have this kind of functionality be transparent to the programmer, with the rounding method defined at the generic package level.
The angel on my other shoulder suggests I'm asking for something completely insane.
I wonder this because the documentation (both the ALRM and Barnes 2012) gives a function specification for the default rounding. Why would they do that if one couldn't replace it with another of one's own design?
No, you cannot redefine the Round attribute. Most attributes can only be queried (see RM K.2); only the small set of specifiable attributes, 'Write among them, can be set with an attribute definition clause, and 'Round is not in that set. Aspects, by contrast, can be (re)defined using an aspect specification (see RM K.1; some exceptions apply). The RM gives specifications of the functions behind the attributes only to clarify their signatures for the reader.
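In the spirit of your generic-package idea, a common workaround is to pass the rounding policy as a generic formal function instead. A minimal sketch (the package and function names here are hypothetical, not from the RM):
generic
   type Money is delta <> digits <>;  -- any decimal fixed point type
   with function Policy (X : Money'Base) return Money'Base;
package Currency_Rounding is
   -- route all explicit rounding through this renaming instead of 'Round
   function Round (X : Money'Base) return Money'Base renames Policy;
end Currency_Rounding;
Each project then instantiates the generic with its preferred policy (truncation, round-half-to-even, and so on), and the choice stays transparent to code written against the package.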

Can I use Alloy to solve linear programming like problems?

I want to find a solution for a set of numerical equations, and I am wondering whether Alloy could be used for that.
I've found limited information on Alloy that seems to suggest (to me, at least) that it could be done, but I've found no examples of similar problems.
It certainly isn't easy, so before investing time and some money in literature I'd like to know if this is doable or not.
Simplified example:
(1) a + b = c, (2) a > b, (3) a > 0, (4) b > 0, (5) c > 0
One solution would be
a = 2, b = 1, c = 3
Any insights on the usability of Alloy or better tools / solutions would be greatly appreciated.
Kind regards,
Paul.
Daniel Jackson discourages using Alloy for numeric problems. The reason is that Alloy uses a SAT solver, which does not scale well for arithmetic and therefore severely limits the range of available integers. By default Alloy uses 4 bits for an integer, i.e. the range -8..7. (This can be enlarged in the run command, but that will of course slow down finding an answer.) This number-averse mindset also influenced the syntax: there are no nice operators for numbers, so addition is written 5.plus[6].
That said, your problem would look like:
pred f[a, b, c : Int] {
   a.plus[b] = c
   a > b
   a > 0
   b > 0
   c > 0
}
run f for 4 int
The answer can be found in the evaluator or text view. The first answer I got was a=4, b=1, c=5.
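If you need values outside -8..7, raise the integer bit width in the run command. A one-line sketch (5 int gives the range -16..15, at the price of a larger SAT problem):
run f for 5 int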
Alloy predates the rise of SMT solvers, which work similarly to SAT solvers but can also handle numeric problems. I think Alloy could be made to use those solvers. That would be nice, because the language is incredibly pleasant to work with, and the lack of numbers is a real miss.
Update: I added a constraint puzzle at https://github.com/AlloyTools/models/blob/master/puzzle/einstein/einstein-wikipedia.als
Alloy is specialized as a relational constraint solver. While it can do very simple linear programming, you might want to look at a specialized tool like MiniZinc instead.

Change (0, 1] to (0, 1) without branching

I have a random number generator that outputs values from (0, 1], but I need to give the output to a function that returns infinity at 0 or 1. How can I post-process the generated number to be in (0, 1) without any branches, as this is intended to execute on a GPU?
I suppose one way is to add a tiny constant and then take the value mod 1. In other words, generate from (ɛ, 1 + ɛ], which gets turned into [ɛ, 1). Is there a better way? What should the ɛ be?
Update 1
In Haskell, you can find ɛ by using floatRange. The C++ portion below applies otherwise.
Note: the answer below was written before the OP indicated that the answer should be for Haskell.
You don't state the implementation language in the question, so I'm going to assume C++ here.
Take a look at std::nextafter.
It gives you the next representable value after (or before) a bound, letting you nudge an interval endpoint by exactly one floating-point step so your code behaves as if the bound were inclusive.
As for the branching, you could overload the function to avoid the branch. However, this leads to code duplication.
I'd recommend allowing the branch and letting the compiler make such micro-optimizations unless you really need the performance and can provide a more specialised implementation than the standard one (see Pascal Cuoq's comment).
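To make that concrete, here is a minimal branch-free sketch (the function name and the scaling trick are mine, not from the answer above; it assumes the default round-to-nearest mode):
#include <cmath>

// Map u from (0, 1] into (0, 1) without branching: scale by the largest
// double strictly below 1.0. u == 1.0 maps to 1.0 - 2^-53, and a positive
// u cannot round down to 0 under round-to-nearest.
double open_unit_interval(double u) {
    static const double below_one = std::nextafter(1.0, 0.0);
    return u * below_one;
}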

Alloy 4, Software Abstractions 2E, and the seq keyword

Not long ago I acquired the second edition of Software Abstractions, and when I needed to refresh my memory on how to spell the name of the elems function I thought "Oh, good, I can check the new edition instead of trying to read my illegible handwritten notes in the end-papers of the first edition."
But I can't find "seq" or "elems" or the names of any of the other helper functions in the index, nor do I see any mention of the seq keyword in the language reference in Appendix B.
One or more of the following seems likely to be the case; which?
I am missing something. (What? where?)
The seq keyword is not covered in Appendix B because it's not strictly speaking a keyword in the way that set and the other unary operators are. (Please expound!)
The support for sequences was added to Alloy 4 after the second edition went to press, and so the book needs to be augmented by reference to the discussion of new features in Alloy 4 in the Quick Guide and the Alloy 4 grammar on the Web site. (Ah, OK. Pages are slow, bits are fast.)
Other ...
I guess, to try to put a generally useful question here, that I'm asking: what exactly is the relation between the language implemented by the Alloy Analyzer 4.2 (or any 4.*) and the language defined in Software Abstractions, second edition?
The current implementation corresponds to this online documentation.
Sequences are really not part of the language; x: seq A can be seen as syntactic sugar for x: Int -> A, and all the utility functions (e.g., first, last, elems) are library-defined (in util/sequence). The actual implementation is a little more complicated (just so that we can let the user write something like x.elems and keep the type checker happy at the same time), but conceptually that's what's going on.
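A small sketch of the sugar in practice (the signature names here are made up for illustration):
sig A {}
one sig Log { entries: seq A }
-- entries desugars to an Int -> A relation whose indices run 0..(#entries - 1);
-- first, last, and elems are helper functions over that relation
run { some Log.entries.elems and Log.entries.first != Log.entries.last } for 3 but 3 seq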

How to make gcc on SUN calculate floating points the same way as in Linux

I have a project where I have to perform some mathematical calculations on double variables.
The problem is that I get different results on SUN Solaris 9 and on Linux.
There are plenty of ways (explained here and on other forums) to make Linux behave like SUN, but not the other way around.
I cannot touch the Linux code, so it is only the SUN side I can change.
Is there any way to make SUN behave like Linux?
The code I run (compiled with gcc on both systems):
#include <stdio.h>

int hash_func(char *long_id)
{
    double product, lnum = 0.0, gold;  /* lnum must start at zero */

    /* accumulate the decimal digits of long_id into a double */
    while (*long_id)
        lnum = lnum * 10.0 + (*long_id++ - '0');
    printf("lnum => %20.20f\n", lnum);

    lnum = lnum * 10.0E-8;  /* scale by 1e-7 */
    printf("lnum => %20.20f\n", lnum);

    gold = 0.6125423371582974;
    product = lnum * gold;
    printf("product => %20.20f\n", product);
    ...
}
If the input is 339886769243483, the output on Linux is:
lnum => 339886769243483.00000000000000000000
lnum => 33988676.92434829473495483398
product => 20819503.60015859827399253845
and on SUN:
lnum => 339886769243483.00000000000000000000
lnum => 33988676.92434830218553543091
product => 20819503.60015860199928283691
Note: the results are not always different; most of the time they are identical. Only about 10 of the 60,000 15-digit inputs show this problem.
Please help!!!
The real answer here is another question: why do you think you need this? There may be a better way to accomplish what you're trying to do that doesn't depend on intricate details of platform floating-point. Having said that...
It's unfortunate that you can't change the Linux code, since it's really the Linux results that are deficient here. The SUN results are as good as they could possibly be: they're correctly rounded; each multiplication gives the unique (in this case) C double that's closest to the result. In contrast, the first Linux multiplication does not give a correctly rounded result.
Your Linux results come from a 32-bit system on x86 hardware, right? The results you show are consistent with, and likely caused by, the phenomenon of 'double rounding': the result of the first multiplication is first rounded to 64-bit precision (the precision used internally by the Intel x87 FPU), and then re-rounded to the usual 53-bit precision of a double. Most of the time (around 1999 times out of 2000 or so on average) this double round has the same effect as a single round to 53-bit precision would have had, but occasionally it can produce a different result, and that's what you're seeing here.
As you say, there are ways to fix the Linux results to match the Solaris ones: one of these is to use appropriate compiler flags (e.g. -mfpmath=sse -msse2) to force the use of SSE2 instructions for floating-point operations where possible. The recent 4.5 release of gcc also addresses the difference by means of a new -fexcess-precision flag, though the fix may impact performance when SSE2 is not used.
[Edit: after several rereads of the gcc manuals, the gcc-patches mailing list thread at http://gcc.gnu.org/ml/gcc-patches/2008-11/msg00105.html, and the related gcc bug report, it's still not clear to me whether use of -fexcess-precision=standard does in fact eliminate double rounding on x87 systems; I think the answer depends on the value of FLT_EVAL_METHOD. I don't have a 32-bit Linux/x86 machine handy to test this on.]
But I don't know how you'd fix the Solaris results to match the Linux ones, and I'm not sure why you'd want to: you'd be making the Solaris results less accurate instead of making the Linux results more accurate.
[Edit: caf has a good suggestion here. On Solaris, try deliberately using long double for intermediate results, then forcing back to double. If done right, this should reproduce the double rounding effect that you're seeing in Linux.]
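A sketch of that suggestion (untested; note that SPARC's long double is a quad format with a 113-bit significand rather than the x87's 64-bit format, so bit-for-bit agreement with the Linux results is not guaranteed):
#include <stdio.h>

int main(void)
{
    double lnum = 33988676.92434830218553543091;
    double gold = 0.6125423371582974;

    /* round once to long double precision, then again to double,
       deliberately mimicking the x87's two-step rounding */
    volatile long double wide = (long double)lnum * (long double)gold;
    volatile double product = (double)wide;

    printf("product => %20.20f\n", product);
    return 0;
}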
See David Monniaux's excellent paper The pitfalls of verifying floating-point computations for a good explanation of double rounding. It's essential reading after the Goldberg article mentioned in an earlier answer.
Two things:
you need to read this (Goldberg's "What Every Computer Scientist Should Know About Floating-Point Arithmetic"): http://docs.sun.com/source/806-3568/ncg_goldberg.html
you need to decide what your requirements are for numerical precision in your application
These values differ by less than one part in 2^52. A double-precision number has just 52 bits after the radix point, so the two results differ only in the last bit. It may be that one machine isn't always rounding correctly, but since you're doing two multiplications, the answer can accumulate that much error anyway.
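A quick check (a sketch; the decimal literals are the two product lines above, which parse back to the original doubles) confirms that the two results are adjacent doubles, i.e. exactly one ulp apart:
#include <math.h>
#include <stdio.h>

int main(void)
{
    double linux_product = 20819503.60015859827399253845;
    double sun_product   = 20819503.60015860199928283691;

    /* stepping one ulp up from the Linux value should land exactly
       on the SUN value if the two are adjacent doubles */
    printf("%s\n", nextafter(linux_product, sun_product) == sun_product
                       ? "one ulp apart" : "further apart");
    return 0;
}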
