Use module overriding to implement a hash function in TLA+

The module overriding mechanism is explained in the Tower of Hanoi sample. It enables you to implement TLA+ operators in Java for improved model-checking performance.
I've struggled for a while to define a useful hash function in TLA+ (no, the identity function does not work for my purposes), and I am thinking module overriding might be the way to do it. The hash function would accept a TLA+ object (a record, for example) and use Java's hashCode() method on the object's string representation to deterministically derive its hash value. This value would be returned to the TLA+ spec.
Is this possible? What would the Java override code look like? Do any other module override code samples exist?

import tlc2.value.impl.IntValue;
import tlc2.value.impl.Value;

// Java override for the TLA+ module TLCHash below. TLC matches the
// override to the module by name: the class must be called TLCHash and
// the method must match the operator's name and arity.
public class TLCHash {

    public static Value Hash(Value v) {
        // Value#hashCode delegates to Value#fingerprint, so equal TLA+
        // values hash equally regardless of their internal representation.
        return IntValue.gen(v.hashCode());
    }
}
------------------------------ MODULE TLCHash ------------------------------
EXTENDS Integers

\* This definition is replaced by the Java override above when TLC runs.
Hash(val) == CHOOSE n \in Int : TRUE

ASSUME Hash(<<"a","b","c">>) = Hash(<<"a","b","c">>)
ASSUME Hash(<<"a","b","c">>) # Hash(<<"c","b","a">>)
ASSUME Hash({"a","b","c"}) = Hash({"b","c","a","c"})
=============================================================================
Note that TLC's implementation of Value#hashCode delegates to Value#fingerprint, and thus should work the way you want it to (from my understanding of your question). Also note that the Value class hierarchy was moved from the package tlc2.value to tlc2.value.impl in version 1.5.8.
https://gist.github.com/lemmy/1eaf4bec8910b25e206d070b0bc80754 shows a real-world application of module overrides and might inspire a solution. Lastly, the only aspect of TLC's built-in standard modules that actually turns them into standard modules is the tlc2.module package/namespace.
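If you specifically want the hash to be derived from the value's string representation, as the question proposes, a variant along these lines should also work. This is a minimal sketch; it assumes Value#toString produces a deterministic textual form of the value, and the compiled class must be on TLC's classpath next to TLCHash.tla:

import tlc2.value.impl.IntValue;
import tlc2.value.impl.Value;

public class TLCHash {
    public static Value Hash(Value v) {
        // Hash the string representation instead of the fingerprint.
        // String#hashCode is stable across JVM runs, so the result is
        // deterministic as long as toString() is.
        return IntValue.gen(v.toString().hashCode());
    }
}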

Related

Is it safe to use a class-level Predicate in a multithreaded application?

I am trying to understand whether there could be any issues with a Predicate defined at class level in a multithreaded application. We have defined such predicates in our services and use them in different methods of the same class. We have not seen any issue so far, but I am curious how our class-level Predicate object is going to behave: will there be any inconsistency?
eg:
class SomeClass {
    Predicate<String> check = (value) -> value.contains("SomeString");
    // remaining implementation of the class
}
The predicate in your example is categorically thread-safe: it is calling a method on an intrinsically thread-safe (and immutable) object, a String.
This does not generalize to all predicates, though. For example,
Predicate<StringBuilder> check = (value) -> value.indexOf("SomeString") >= 0;
is not thread-safe. Another thread could mutate the contents of the StringBuilder argument while this predicate is checking it. The predicate could also be vulnerable to memory-model-related inconsistencies.
(The StringBuilder class is not thread-safe; see javadoc.)
It is not clear what you mean by "class level". Your example shows a predicate declared as a regular field, not a static (class level) field.
With a variable declared as a (mutable) instance field, it is difficult to reason about the thread-safety of the field in isolation. This can be solved by declaring the field as final.
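For instance, here is a minimal sketch of the safe pattern (the surrounding class and method names are illustrative):

import java.util.function.Predicate;

class SomeClass {
    // final guarantees safe publication: every thread sees the fully
    // constructed Predicate. The lambda itself holds no mutable state,
    // and String is immutable, so concurrent calls are safe.
    private final Predicate<String> check =
            value -> value.contains("SomeString");

    boolean matches(String input) {
        return check.test(input); // safe to call from any thread
    }
}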

Program against an interface in node.js?

Dependency injection and programming against an interface are standard in C# and Java. I am wondering whether this can be achieved in Node.js too. For example, I am writing a memory cache module, and ideally it should expose an interface, say with two methods getKey and putKey. I could have one implementation based on ioredis and another using a plain in-memory cache, but my application would depend only on the memory cache interface.
JavaScript uses duck typing
In computer programming with object-oriented programming languages, duck typing is a style of dynamic typing in which an object's current set of methods and properties determines the valid semantics, rather than its inheritance from a particular class or implementation of a specific interface.
from Wikipedia - Duck typing
This means that you can make a function expect an object as an argument and assume it has certain specific functions or properties, e.g.:
function addAreas(s1, s2){
    // We can't know for sure that s1 and s2 have a getArea() function
    // until the code is executed
    return s1.getArea() + s2.getArea();
}
To test whether an object has a specific function, we can do the following:
function isShape(obj){
    return typeof obj.getArea === "function"
        && typeof obj.getPerimeter === "function";
}
Both can be combined into the following:
function addAreas(s1, s2){
    if (isShape(s1) && isShape(s2))
        return s1.getArea() + s2.getArea();
    throw new Error("s1 or s2 is not a shape!");
}
In your example, both of your implementations should provide getKey and putKey functions, and the code that expects an object with those functions can test whether they are implemented before calling them.

Proper Uses for Abstracts

Just yesterday, I decided to begin learning the Haxe programming language after having used Actionscript 3 for the past few years. Today I have been exploring abstract types, and I have come to realize that they seem quite different from abstract classes in Java. I am beginning to grasp some of what they do, but I am unsure of what abstracts are used for. What constitutes the proper use of abstracts in Haxe, and when ought I to favor them over classes?
For instance, below is an incomplete definition for a complex number type using an abstract type. Ought I to prefer this or just an ordinary class?
abstract Complex({real:Float, imag:Float}) {
    public function new(real:Float, imag:Float) {
        this = { real: real, imag: imag };
    }

    public function real():Float { return this.real; }
    public function imag():Float { return this.imag; }

    @:op(A + B)
    public static function add(lhs:Complex, rhs:Complex):Complex {
        return new Complex(lhs.real() + rhs.real(), lhs.imag() + rhs.imag());
    }

    public function toString():String {
        return real() + " + " + imag() + "i";
    }
}
Indeed, abstracts are not at all like abstract classes in Java. Abstract types in Haxe are powerful and interesting. Their main characteristic is that they exist only at compile time: at runtime they are entirely replaced by the wrapped type, and methods are transformed into static functions. In the case you described, all of your instances will be replaced by anonymous objects with the two fields real and imag. Is that a good use case? Probably yes, since a Complex type is not meant to be extended and you probably want to define some operator overloading (as you did for the addition).
To keep it even more light-weight you could use an Array<Float> as the wrapped type where the first element is the real part and the second the imaginary one.
So what is good about abstract types?
- They add semantics to types (particularly primitive types). For example, you could define an abstract RGB(Int) {} to always output very efficient color encoding with the benefit of methods and properties. Or you could have an abstract Path(String) {} to conveniently deal with path concatenation, relative paths and the like.
- You can define operator overloading. In the RGB case above you could do something like white + black and get something meaningful out of it.
- Similarly to operator overloading, abstracts can define implicit casts from and to other types. For the RGB abstract above you could easily define a method fromString() to parse a hex string into an Int representing a color. With the implicit cast you could write: var color : RGB = "#669900";. thx.color defines a lot of abstracts for color handling.
- They are ideal for wrapping the very powerful Enums in Haxe. With an abstract you can add methods and properties to enumerations (which natively support neither).
- They are ideal for wrapping optimized code. Abstract methods can be inlined, and the wrapped type ensures that you are not adding any additional layer of indirection when executing your code.
What is not so good? Or rather, what should we know about abstracts?
- Since they are just a compile-time artifact, you cannot use runtime checks (e.g., no Std.is(value, MyAbstract)).
- Abstracts are not classes, so no inheritance.

What overloaded method is chosen by Groovy when null is passed as a parameter?

class SimpleTest {
    void met(Object a) {
        println "Object"
    }
    void met(String b) {
        println "String"
    }
    static main(args) {
        SimpleTest i = new SimpleTest()
        i.met(null)
    }
}
This code produces the output "Object". Groovy does not choose the most specialized version of the method here: although String is more specialized than Object, that rule does not apply when the argument is null.
Groovy uses a distance-calculation approach. Imagine the classes and interfaces as nodes in a graph, connected by their inheritance relationships. We then look for the distance from the given argument type (the runtime type of the argument) to the parameter type (the static type the method parameter has). The connections have different weights: going to the superclass has, I think, a distance of 3, going to an interface 1, and wrapping a primitive is also 1. Varargs wrapping has its own weight as well (and cannot really be represented in the graph anymore, so sorry for the slightly failing image).
In the case of null this cannot work, of course. Here we look at the distance from the parameter type to Object instead. While in the non-null case we take the most specific method possible, for null we take the most general one instead. In Java you would normally have the static type, or use a cast, to ensure which overload is selected. In Groovy we don't have a static type, and what is most specific often cannot be decided correctly, so we settled on the most general approach for that case. It works really well in general.
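For contrast, the same class in plain Java (an illustrative sketch) resolves the overload at compile time and picks the most specific applicable method, so the null call goes to the String overload:

class SimpleTest {
    void met(Object a) { System.out.println("Object"); }
    void met(String b) { System.out.println("String"); }

    public static void main(String[] args) {
        SimpleTest i = new SimpleTest();
        i.met(null);          // most specific overload wins: prints "String"
        i.met((Object) null); // a cast forces the Object overload
    }
}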
Object then acts as a kind of fallback that allows for central null handling. In future versions we may allow the use of an explicit null type, which would then be preferred over Object if present.
While you can often see the distance approach directly for classes, it is a bit more complicated for interfaces. Basically the algorithm goes like this: if the current class directly implements the interface we are looking for, it is a match with distance 1. If any of the interfaces the class implements has the interface we are looking for as a parent, we count the "hops" until we reach it. We always look for the shortest distance, so we also search the superclass in the same way; any result from there has its distance increased by 1 (for the superclass hop). If the superclass search gives a shorter distance than the search through the implemented interfaces, the superclass result is taken instead.
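The search just described can be sketched roughly like this (an illustrative reconstruction in Java, not Groovy's actual implementation; every hop is weighted 1, as in the description above):

// Shortest distance from a type to a target interface, searching both
// the implemented interfaces and the superclass chain.
static int distance(Class<?> type, Class<?> target) {
    if (type == null) return Integer.MAX_VALUE; // dead end on this path
    if (type == target) return 0;
    int best = Integer.MAX_VALUE;
    for (Class<?> iface : type.getInterfaces()) {
        int d = distance(iface, target);            // search implemented interfaces
        if (d != Integer.MAX_VALUE) best = Math.min(best, d + 1);
    }
    int d = distance(type.getSuperclass(), target); // +1 for the superclass hop
    if (d != Integer.MAX_VALUE) best = Math.min(best, d + 1);
    return best;                                    // the shortest path wins
}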
As for handling null with interfaces: the distance to Object is 1 if the interface does not extend another interface. If it does, it is the distance of the parent interface plus 1. If multiple interfaces are extended, it is the shortest path again.
Let us look at List and Integer as parameter types for a null argument.
List extends Collection, Collection extends Iterable, and Iterable has no parent. That makes a distance of 1 for Iterable, 2 for Collection, and finally 3 for List.
Integer extends Number, and Number extends Object. Since we hop twice through superclasses, we have a distance of 6 here (2 x 3), much bigger than in the other case. Yes, that means in general we prefer interfaces. We do that for practical reasons, since this approach proved to be closest to actual programming practice.

Is there a language where types can take content of fields into account?

I had this crazy idea and was wondering if such a thing exists:
Usually, in a strongly typed language, types are mainly concerned with memory layout or membership in an abstract 'class'. So class Foo {int a;} and class Bar {int a; int b;} are distinct, but so is class Baz {int a; int b;} (although it has the same layout, it's a different type). So far, so good.
I was wondering if there is a language that allows one to specify more fine-grained constraints as to what makes a type. For example, I'd like to have:
class Person {
    //...
    int height;
}
class RollercoasterSafe : Person (where .height > 140) {}
void ride(RollercoasterSafe p) { /* ... */ }
and the compiler would make sure that it's impossible to have p.height < 140 in ride. This is just a stupid example, but I'm sure there are use cases where this could really help. Is there such a thing?
It depends on whether the predicate is checked statically or dynamically. In either case the answer is yes, but the resulting systems look different.
On the static end: PL researchers have proposed the notion of a refinement type, which consists of a base type together with a predicate: http://en.wikipedia.org/wiki/Program_refinement. I believe the idea of refinement types is that the predicates are checked at compile time, which means that you have to restrict the language of predicates to something tractable.
It's also possible to express constraints using dependent types, which are types parameterized by run-time values (as opposed to polymorphic types, which are parameterized by other types).
There are other tricks that you can play with powerful type systems like Haskell's, but IIUC you would have to change height from int to something whose structure the type checker could reason about.
On the dynamic end: SQL has something called domains, as in CREATE DOMAIN: http://developer.postgresql.org/pgdocs/postgres/sql-createdomain.html (see the bottom of the page for a simple example), which again consist of a base type and a constraint. The domain's constraint is checked dynamically whenever a value of that domain is created. In general, you can solve the problem by creating a new abstract data type and performing the check whenever you create a new value of the abstract type. If your language allows you to define automatic coercions from and to your new type, then you can use them to essentially implement SQL-like domains; if not, you just live with plain old abstract data types instead.
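Here is a minimal sketch of that abstract-data-type approach in Java (Person and height come from the question; everything else is illustrative):

class Person {
    int height;
}

// The predicate is checked once, at construction, so every instance of
// RollercoasterSafe satisfies it by design.
final class RollercoasterSafe {
    final Person person;

    private RollercoasterSafe(Person person) {
        this.person = person;
    }

    // The only way to obtain an instance is through this factory.
    static RollercoasterSafe of(Person person) {
        if (person.height <= 140) {
            throw new IllegalArgumentException("height must be > 140");
        }
        return new RollercoasterSafe(person);
    }
}

class Rides {
    // No re-check needed: the type itself carries the invariant.
    static void ride(RollercoasterSafe p) { /* ... */ }
}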
Then there are contracts, which are not thought of as types per se but can be used in some of the same ways, such as constraining the arguments and results of functions/methods. Simple contracts include predicates (e.g., "accepts a Person object with height > 140"), but contracts can also be higher-order (e.g., "accepts a Person object whose makeSmallTalk() method never returns null"). Higher-order contracts cannot be checked immediately, so they generally involve creating some kind of proxy. Contract checking does not create a new type of value or tag existing values, so the dynamic check is repeated every time the contract is encountered. Consequently, programmers often put contracts along module boundaries to minimize redundant checks.
An example of a language with such capabilities is Spec#. From the tutorial documentation available on the project site:
Consider the method ISqrt in Fig. 1, which computes the integer square root of a given integer x. It is possible to implement the method only if x is non-negative, so
int ISqrt(int x)
    requires 0 <= x;
    ensures result*result <= x && x < (result+1)*(result+1);
{
    int r = 0;
    while ((r+1)*(r+1) <= x)
        invariant r*r <= x;
    {
        r++;
    }
    return r;
}
In your case, you could probably do something like this (note that I haven't tried it; I'm just reading the docs):
void ride(Person p)
    requires p.height > 140;
{
    //...
}
There may be a way to roll that requires clause up into a type declaration such as the RollercoasterSafe type you have suggested.
Your idea sounds somewhat like C++0x's concepts, though not identical. However, concepts have been removed from the C++0x standard.
I don't know any language that supports that kind of thing, but I don't find it really necessary.
I'm pretty convinced that simply applying validation in the property setters can give you all the necessary restrictions.
In your RollercoasterSafe class example, you may throw an exception when the height property is set to a value less than 140. It is runtime checking, but polymorphism can make compile-time checking impossible.
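A minimal sketch of that setter-based validation (the class name comes from the question; the rest is illustrative):

class RollercoasterSafe {
    private int height;

    void setHeight(int height) {
        // Validate at the only point where the field can change, so the
        // invariant holds for the lifetime of the object.
        if (height <= 140) {
            throw new IllegalArgumentException("height must be > 140");
        }
        this.height = height;
    }

    int getHeight() { return height; }
}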
