I noticed that both
#eval have x : Nat := 3 ; x*2
and
#eval let x : Nat := 3 ; x*2
work in the same way. Same goes when proving theorems.
Are those equivalent? What is the difference between them?
The difference is that let "remembers" the definition and have forgets it.
So for example, the following works with let but not have.
example : {x : nat // x = 0} :=
let x := 0 in ⟨x, rfl⟩
In general, have is used for proofs and let for everything else. In tactic mode you can use dsimp [x] to unfold the definition introduced by a let x := ...
This is what have does
Internally, the expression have h : p := s; t produces the term (fun (h : p) => t) s. In other words, s is a proof of p, t is a proof of the desired conclusion assuming h : p, and the two are combined by a lambda abstraction and application.
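Concretely, a minimal check using the #eval from the question: the have-expression and its elaborated form are the same term.
#eval have x : Nat := 3 ; x*2          -- 6
#eval (fun (x : Nat) => x*2) 3         -- 6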
This is what let does
The let construct is a stronger means of abbreviation, and there are expressions of the form let a := t1; t2 that cannot be expressed as (fun a => t2) t1. As an exercise, try to understand why the definition of foo below type checks, but the definition of bar does not.
def foo := let a := Nat; fun x : a => x + 2
/-
def bar := (fun a => fun x : a => x + 2) Nat
-/
Attempt #1:
def sget' {α : Type} {n : ℕ} (i : ℕ) {h1 : n > 0} {h2 : i < n} (s : sstack α n) : α :=
begin
  cases n with n0 nn,
  begin
    have f : false, from nat.lt_asymm h1 h1,
    tauto,
  end,
  induction s,
  cases s_val,
  begin
    have : stack.empty.size = 0, from stack_size_0 α,
    tauto,
  end,
  cases i with i0 ri,
  exact s_val_x,
  exact sget' (pred i) s_val_s,
end
Attempt #2:
def sget' {α : Type} {n : ℕ} (i : ℕ) {h1 : n > 0} {h2 : i < n} (s: sstack α n) : α :=
match i, s with
| 0, ⟨stack.push x s, _⟩ := x
| i, ⟨stack.push _ s, _⟩ := sget' (pred i) ⟨s, _⟩
| _, ⟨stack.empty, _⟩ := sorry -- just ignore this
Lean throws an unknown identifier sget' error in both cases. I know that I can call sget' recursively from, ehh, pattern guards (I am not sure what they are properly called), but is there any way to do something like that with tactics and/or match expressions?
You can do recursive calls if you use the equation compiler
def map (f : α → β) : list α → list β
| [] := []
| (a :: l) := f a :: map l
Otherwise you should use induction tactic or one of the explicit recursor functions (like nat.rec).
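For the recursor route, a minimal sketch of my own (Lean 3), using nat.rec_on rather than the equation compiler:
def double (n : ℕ) : ℕ :=
nat.rec_on n 0 (λ _ ih, ih + 2)
-- #eval double 3 gives 6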
I was studying references in SML.
I wrote the following code:
let
val f = (fn (s) => s := ref((!(!s)) + 2))
val x = ref (5)
val y = ref x
in
(f y ; !x)
end;
I'm trying to get val it = 7 : int, but my program prints val it = 5 : int. I am sure the problem is in the f function, but I can't understand why.
What I'm trying to do: the f function should update the argument y to be ref(ref(7)) so that x becomes ref(7). But for some reason it doesn't work. What is the problem?
Updating y to point to a new ref does not update x. There's a new reference created during the call to f, let's call it z. Before the call we have:
x -> 5
y -> x
where -> is "points to". After the call it is:
x -> 5
y -> z
z -> 7
Edit: One possible way to actually update x is by defining f as follows:
val f = fn r => !r := 7
When invoking f y, this updates the reference pointed to by y, which is x. But whether that is the "right" solution depends on what you actually want to achieve.
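For a quick check (a sketch reusing the bindings from the question), the original let-expression with this f now evaluates to 7:
let
    val f = fn r => !r := 7
    val x = ref 5
    val y = ref x
in
    (f y ; !x)   (* val it = 7 : int *)
end;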
As Andreas Rossberg suggests, val f = fn r => !r := 7 could be one way to update the int of an int ref ref to 7. But instead of 7 you could write anything. If, instead, you want to increase by two the int being pointed to indirectly, a hybrid between your attempt and Andreas's suggestion could be
fun f r = !r := !(!r) + 2
Here, !r := ... means "dereference r to get the int ref it points to, and update that int ref so that it instead points to ...", and !(!r) + 2 means "dereference r twice to get the int it indirectly points to, and add two to it." At this point, you have not changed what r points to (like you do with s := ref ...), and you're using the value it points to indirectly using the double-dereference !(!r).
A test program for this could be:
val x = ref 5
val y = ref x
fun f r = !r := !(!r) + 2
fun debug str =
print ( str ^ ": x points to " ^ Int.toString (!x) ^ " and "
^ "y points indirectly to " ^ Int.toString (!(!y)) ^ ".\n" )
val _ = debug "before"
val _ = f y
val _ = debug "after"
Running this test program yields:
before: x points to 5 and y points indirectly to 5.
after: x points to 7 and y points indirectly to 7.
I have a big Excel file, which I read with the Excel type provider in F#.
The rows should be grouped by some column. Processing crashes with an OutOfMemoryException. I am not sure whether the Seq.groupBy call or the Excel type provider is to blame.
To simplify things, I use a 3D Point here as a row.
type Point = { x : float; y: float; z: float; }
let points = seq {
    for x in 1 .. 1000 do
        for y in 1 .. 1000 do
            for z in 1 .. 1000 ->
                {x = float x; y = float y; z = float z}
}
let groups = points |> Seq.groupBy (fun point -> point.x)
The rows are already ordered by the grouping column, e.g. 10 points with x = 10, then 20 points with x = 20, and so on. Instead of grouping them I just need to split the rows into chunks until the value changes. Is there some way to enumerate the sequence just once and get a sequence of rows split, not grouped, by some column value or some f(row) value?
If the rows are already ordered then this chunkify function will return a seq<'a list>. Each list will contain all the points with the same x value.
let chunkify pred s = seq {
    let values = ref []
    for x in s do
        match !values with
        | h::t ->
            if pred h x then
                values := x::!values
            else
                yield !values
                values := [x]
        | [] -> values := [x]
    yield !values
}
let chunked = points |> chunkify (fun x y -> x.x = y.x)
Here chunked has a type of
seq<Point list>
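As a usage sketch (heavyweight here, since grouping by x makes every chunk a million points):
chunked
|> Seq.take 2
|> Seq.iter (fun chunk -> printfn "x = %f, count = %d" chunk.Head.x chunk.Length)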
Another solution, along the same lines as Kevin's
module Seq =
    let chunkBy f src =
        seq {
            let chunk = ResizeArray()
            let mutable key = Unchecked.defaultof<_>
            for x in src do
                let newKey = f x
                if (chunk.Count <> 0) && (newKey <> key) then
                    yield chunk.ToArray()
                    chunk.Clear()
                key <- newKey
                chunk.Add(x)
        }
// returns 2 arrays, each with 1000 elements
points |> Seq.chunkBy (fun pt -> pt.y) |> Seq.take 2
Here's a purely functional approach, which is surely slower, and much harder to understand.
module Seq =
    let chunkByFold f src =
        src
        |> Seq.scan (fun (chunk, (key, carry)) x ->
            let chunk = defaultArg carry chunk
            let newKey = f x
            if List.isEmpty chunk then [x], (newKey, None)
            elif newKey = key then x :: chunk, (key, None)
            else chunk, (newKey, Some([x]))) ([], (Unchecked.defaultof<_>, None))
        |> Seq.filter (snd >> snd >> Option.isSome)
        |> Seq.map fst
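A quick usage sketch, mirroring the chunkBy call above (note that this variant returns each chunk as a list in reverse input order):
points |> Seq.chunkByFold (fun pt -> pt.y) |> Seq.take 2 |> Seq.map List.length |> Seq.toList
// expected: [1000; 1000]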
Let's start with the input:
let count = 1000
type Point = { x : float; y: float; z: float; }
let points = seq {
    for x in 1 .. count do
        for y in 1 .. count do
            for z in 1 .. count ->
                {x = float x; y = float y; z = float z}
}
val count : int = 1000
type Point =
  {x: float;
   y: float;
   z: float;}
val points : seq<Point>
If we try to evaluate points then we get an OutOfMemoryException:
points |> Seq.toList
System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
at Microsoft.FSharp.Collections.FSharpList`1.Cons(T head, FSharpList`1 tail)
at Microsoft.FSharp.Collections.SeqModule.ToList[T](IEnumerable`1 source)
at <StartupCode$FSI_0011>.$FSI_0011.main#()
Stopped due to error
It might be the same reason that groupBy fails, but I'm not sure. It does tell us that we have to use seq and yield to return the groups. So we get this implementation:
let group groupBy points =
    let mutable lst = [ ]
    seq { for p in points do match lst with | [] -> lst <- [p] | p'::lst' when groupBy p' p -> lst <- p::lst | lst' -> lst <- [p]; yield lst' }
val group : groupBy:('a -> 'a -> bool) -> points:seq<'a> -> seq<'a list>
It is not the most easily read code. It takes each point from the points sequence and prepends it to an accumulator list while the groupBy function is satisfied. If the groupBy function is not satisfied then a new accumulator list is generated and the old one is yielded. Note that the order of the accumulator list is reversed.
Testing the function:
for g in group (fun p' p -> p'.x = p.x ) points do
    printfn "%f %i" g.[0].x g.Length
Terminates nicely (after some time).
Here is another implementation, with a bug fix and better formatting.
let group (groupBy : 'a -> 'b when 'b : equality) points =
    let mutable lst = []
    seq {
        yield! seq {
            for p in points do
                match lst with
                | [] -> lst <- [ p ]
                | p' :: lst' when (groupBy p') = (groupBy p) -> lst <- p :: lst
                | lst' ->
                    lst <- [ p ]
                    yield (groupBy lst'.Head, lst')
        }
        yield (groupBy lst.Head, lst)
    }
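A usage sketch for this fixed version, assuming the points sequence defined earlier in the thread:
points
|> group (fun p -> p.x)
|> Seq.take 2
|> Seq.iter (fun (key, pts) -> printfn "x = %f, count = %d" key pts.Length)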
It seems there is no one-line purely functional solution or predefined Seq function that I have overlooked.
Therefore, as an alternative, here is my own imperative solution. It is comparable to Kevin's answer but better matches my needs. The ref cell contains:
The group key, which is calculated just once for each row
The current chunk list (it could be a seq to conform to Seq.groupBy), which contains the elements, in input order, for which f(x) equals the stored group key (requires equality).
let splitByChanged f xs =
    let acc = ref (None, [])
    seq {
        for x in xs do
            match !acc with
            | None, _ ->
                acc := Some (f x), [x]
            | Some key, chunk when key = f x ->
                acc := Some key, x::chunk
            | Some key, chunk ->
                let group = chunk |> Seq.toList |> List.rev
                yield key, group
                acc := Some (f x), [x]
        match !acc with
        | None, _ -> ()
        | Some key, chunk ->
            let group = chunk |> Seq.toList |> List.rev
            yield key, group
    }
points |> splitByChanged (fun point -> point.x)
The function has the following signature:
val splitByChanged :
f:('a -> 'b) -> xs:seq<'a> -> seq<'b * 'a list> when 'b : equality
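A sketch that actually consumes the first two groups (each x-group holds a million points, so this takes a moment):
points
|> splitByChanged (fun point -> point.x)
|> Seq.take 2
|> Seq.map (fun (key, group) -> key, group.Length)
|> Seq.toList
// expected: [(1.0, 1000000); (2.0, 1000000)]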
Corrections and even better solutions are welcome.
I found a great Haskell solution (source) for generating a Hofstadter sequence:
hofstadter = unfoldr (\(r:s:ss) -> Just (r, r+s:delete (r+s) ss)) [1..]
Now, I am trying to write such a solution in F#, too. Unfortunately (I am not really familiar with F#) I have had no success so far.
My problem is that when I use a sequence in F#, it does not seem to be possible to remove an element (as is done in the Haskell solution).
Other data structures like arrays, lists or sets, which do allow removing elements, cannot represent an infinite sequence; they operate on a finite number of elements only.
So my question: Is it possible in F# to generate an infinite sequence, where elements are deleted?
Some stuff I tried so far:
Infinite sequence of numbers:
let infinite =
    Seq.unfold( fun state -> Some( state, state + 1) ) 1
Hofstadter sequence (not working, because there is no del function and there are more syntax errors):
let hofstadter =
    Seq.unfold( fun (r :: s :: ss) -> Some( r, r+s, del (r+s) ss)) infinite
I thought about using Seq.filter, but found no solution there either.
I think you need more than a delete function on sequences. Your example requires pattern matching on infinite collections, which sequences don't support.
The F# counterpart of a Haskell list is LazyList from the F# PowerPack. LazyList is also potentially infinite and it supports pattern matching, which helps you to implement delete easily.
Here is a faithful translation:
open Microsoft.FSharp.Collections.LazyList
let delete x xs =
    let rec loop x xs = seq {
        match xs with
        | Nil -> yield! xs
        | Cons(x', xs') when x = x' -> yield! xs'
        | Cons(x', xs') ->
            yield x'
            yield! loop x xs'
    }
    ofSeq (loop x xs)

let hofstadter =
    1I |> unfold (fun state -> Some(state, state + 1I))
       |> unfold (function
                  | (Cons(r, Cons(s, ss))) -> Some(r, cons (r+s) (delete (r+s) ss))
                  | _ -> None)
       |> toSeq
There are a few interesting things here:
Use sequence expression to implement delete to ensure that the function is tail-recursive. A non-tail-recursive version should be easy.
Use BigInteger; if you don't need too many elements, using int and Seq.initInfinite is more efficient.
Add a case returning None to ensure exhaustive pattern matching.
At last I convert LazyList to sequence. It gives better interoperability with .NET collections.
Implementing delete on sequence is uglier. If you are curious, take a look at Remove a single non-unique value from a sequence in F# for reference.
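A quick check, assuming the F# PowerPack LazyList module is referenced as above; the first ten numbers are the expected Figure-Figure values:
hofstadter |> Seq.take 10 |> Seq.toList
// [1I; 3I; 7I; 12I; 18I; 26I; 35I; 45I; 56I; 69I]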
pad's solution is nice but, likely due to the way LazyList is implemented, stack overflows somewhere between 3-4K numbers. For curiosity's sake I wrote a version built around a generator function (unit -> 'a) which is called repeatedly to get the next element (to work around the unwieldiness of IEnumerable). I was able to get the first 10K numbers (haven't tried beyond that).
let hofstadter() =
    let delete x f =
        let found = ref false
        let rec loop() =
            let y = f()
            if not !found && x = y then
                found := true
                loop()
            else y
        loop
    let cons x f =
        let first = ref true
        fun () ->
            if !first then
                first := false
                x
            else f()
    let next =
        let i = ref 0
        fun () ->
            incr i
            !i
    Seq.unfold (fun next ->
        let r = next()
        let s = next()
        Some(r, (cons (r+s) (delete (r+s) next)))) next
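A usage sketch for this version; the values should match the list shown in the next answer:
hofstadter() |> Seq.take 10 |> Seq.toList
// [1; 3; 7; 12; 18; 26; 35; 45; 56; 69]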
In fact, you can use filter and a design that follows the Haskell solution (but, as pad says, you don't have pattern matching on sequences, so I used Lisp-style destructuring):
let infinite = Seq.initInfinite (fun i -> i+1)
let generator = fun ss ->
    let (r, st) = (Seq.head ss, Seq.skip 1 ss)
    let (s, stt) = (Seq.head st, Seq.skip 1 st)
    let srps = seq [ r + s ]
    let filtered = Seq.filter (fun t -> (r + s) <> t) stt
    Some (r, Seq.append srps filtered)
let hofstadter = Seq.unfold generator infinite
let t10 = Seq.take 10 hofstadter |> Seq.toList
// val t10 : int list = [1; 3; 7; 12; 18; 26; 35; 45; 56; 69]
I make no claims about efficiency though!
I'm working on an experimental programming language that has global polymorphic type inference.
I recently got the algorithm working sufficiently well to correctly type the bits of sample code I'm throwing at it. I'm now looking for something more complex that will exercise the edge cases.
Can anyone point me at a source of really gnarly and horrible code fragments that I can use for this? I'm sure the functional programming world has plenty. I'm particularly looking for examples that do evil things with function recursion, as I need to check to make sure that function expansion terminates correctly, but anything's good --- I need to build a test suite. Any suggestions?
My language is largely imperative, but any ML-style code ought to be easy to convert.
My general strategy is actually to approach it from the opposite direction -- ensure that it rejects incorrect things!
That said, here are some standard "confirmation" tests I usually use:
The eager fixed-point combinator (unashamedly stolen from here):
datatype 'a t = T of 'a t -> 'a
val y = fn f => (fn (T x) => (f (fn a => x (T x) a)))
                (T (fn (T x) => (f (fn a => x (T x) a))))
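A quick sanity check of my own (not from the original answer): y should infer the type (('a -> 'b) -> 'a -> 'b) -> 'a -> 'b, so a factorial can be tied through it:
val fact = y (fn self => fn n => if n = 0 then 1 else n * self (n - 1))
val result = fact 5   (* 120 *)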
Obvious mutual recursion:
fun f x = g (f x)
and g x = f (g x)
Check out those deeply nested let expressions too:
val a = let
    val b = let
        val c = let
            val d = let
                val e = let
                    val f = let
                        val g = let
                            val h = fn x => x + 1
                        in h end
                    in g end
                in f end
            in e end
        in d end
    in c end
in b end
Deeply nested higher order functions!
fun f g h i j k l m n =
fn x => fn y => fn z => x o g o h o i o j o k o l o m o n o x o y o z
I don't know if you have to have the value restriction in order to incorporate mutable references. If so, see what happens:
fun map' f [] = []
| map' f (h::t) = f h :: map' f t
fun rev' [] = []
| rev' (h::t) = rev' t @ [h]
val x = map' rev'
You might need to implement map and rev in the standard way :)
Then with actual references lying around (stolen from here):
val stack =
let val stk = ref [] in
{push = fn x => stk := x :: !stk,
pop = fn () => stk := tl (!stk),
top = fn () => hd (!stk)}
end
Hope these help in some way. Make sure to try to build a set of regression tests you can re-run in some automatic fashion to ensure that all of your type inference behaves correctly through all changes you make :)