Often a <= b is defined as a < b || a == b. Do any languages make <= the more "base" operator and define a < b in terms of <=, e.g. as a <= b && !(a == b)?
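(Not an answer, just an illustration: below is a minimal Haskell sketch of the arrangement the question asks about, using made-up names MyOrd, (<=.) and (<.), where (<=.) is the only primitive and (<.) is derived exactly as suggested.)
-- Hypothetical class (not a real library): (<=.) is the sole primitive.
class Eq a => MyOrd a where
  (<=.) :: a -> a -> Bool

-- (<.) is defined in terms of (<=.), as proposed in the question.
(<.) :: MyOrd a => a -> a -> Bool
x <. y = (x <=. y) && not (x == y)

instance MyOrd Int where
  (<=.) = (<=)

main :: IO ()
main = print (3 <. (4 :: Int))   -- True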
Is there an infix notation that would allow rewriting these proofs
example (h₀ : a ≤ b) (h₁ : b < c) (h₂ : c ≤ d) (h₃ : d < e) :
a < e := lt_trans (lt_of_lt_of_le (lt_of_le_of_lt (h₀) h₁) h₂ ) h₃
example (h₀ : a ≤ b) (h₁ : b < c) (h₂ : c ≤ d) (h₃ : d < e) :
a < e := by
{ apply lt_trans,
  { apply lt_of_lt_of_le,
    { apply lt_of_le_of_lt, apply h₀, apply h₁ },
    apply h₂ },
  apply h₃ }
more naturally as
example (h₀ : a ≤ b) (h₁ : b < c) (h₂ : c ≤ d) (h₃ : d < e) :
a < e := ((h₀ `lt_of_le_of_lt` h₁) `lt_of_lt_of_le` h₂) `lt_trans` h₃
You can use Lean's dot notation to make an infix version of the argument, for example
example (h₀ : a ≤ b) (h₁ : b < c) (h₂ : c ≤ d) (h₃ : d < e) :
a < e := (h₀.trans_lt h₁).trans (h₂.trans_lt h₃)
What's going on here is that the type of h₀ is le a b (in the has_le namespace), so h₀.trans_lt means le.trans_lt h₀, and you can see in order.basic in mathlib that le.trans_lt is just an abbreviation for lt_of_le_of_lt.
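As a quick sanity check of that equivalence, here is a small sketch (assuming Lean 3 with mathlib, so that order.basic is available); both examples elaborate to the same proof:
import order.basic

example {α : Type*} [preorder α] {a b c : α} (h₀ : a ≤ b) (h₁ : b < c) :
  a < c :=
h₀.trans_lt h₁        -- dot notation, resolves to has_le.le.trans_lt h₀ h₁

example {α : Type*} [preorder α] {a b c : α} (h₀ : a ≤ b) (h₁ : b < c) :
  a < c :=
lt_of_le_of_lt h₀ h₁  -- the underlying lemma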
I am trying to prove the following lemma. It seems really trivial, but I can't manage to prove it. Thank you in advance!
lemma test(x : seq<int>)
// if the values of x are confined to an interval,
// the values are all different,
// and the number of values equals the size of the interval,
// then every value of the interval appears in x
requires forall i :: 0 <= i < |x| ==>
0 <= x[i] < |x|;
requires forall i :: 0 <= i < |x| ==>
forall i' :: 0 <= i' < |x| && i != i' ==>
x[i] != x[i'];
ensures forall v :: 0 <= v < |x| ==>
exists i :: 0 <= i < |x| && x[i] == v;
{
}
https://rise4fun.com/Dafny/d8VK
Here's one way to do it, using some facts about set cardinalities.
lemma test(x : seq<int>)
// if the values of x are confined to an interval,
// the values are all different,
// and the number of values equals the size of the interval,
// then every value of the interval appears in x
requires forall i :: 0 <= i < |x| ==>
0 <= x[i] < |x|;
requires forall i :: 0 <= i < |x| ==>
forall i' :: 0 <= i' < |x| && i != i' ==>
x[i] != x[i'];
ensures forall v :: 0 <= v < |x| ==> v in x
{
var L: set<int>, R: set<int> := {}, RangeSet(0, |x|);
var i := 0;
CardinalityRangeSet(0, |x|);
while i < |x|
invariant 0 <= i <= |x|
invariant L == set j | 0 <= j < i :: x[j]
invariant forall v | v in L :: v in x
invariant forall v | 0 <= v < |x| :: v in L || v in R
invariant |R| == |x| - i
{
L, R := L + {x[i]}, R - {x[i]};
i := i + 1;
}
}
predicate InRange(lo: int, hi: int, i: int)
{
lo <= i < hi
}
function RangeSet(lo: int, hi: int): set<int>
{
set i | lo <= i < hi && InRange(lo, hi, i)
}
lemma CardinalityRangeSet(lo: int, hi: int)
decreases hi - lo
ensures |RangeSet(lo, hi)| == if lo >= hi then 0 else hi - lo
{
if lo < hi {
assert RangeSet(lo, hi) == {lo} + RangeSet(lo + 1, hi);
CardinalityRangeSet(lo + 1, hi);
}
}
I changed your specification slightly to use the Dafny syntax v in x, which is equivalent to what you wrote, and a little easier for Dafny to reason about.
The basic idea of the proof is to start with the range R of elements 0..|x|, and then iteratively remove elements x[i] from R and add them to L. This maintains the invariant that every number in the range 0..|x| is either in L or R, while the cardinality of R decreases on every iteration. Thus, at the end of the loop R is empty, so every number in the range must be in L, and therefore in x.
I also used one helper lemma proved by induction to show that RangeSet has the expected size.
(Edited to get rid of the "No terms found to trigger on" warning in RangeSet. Introducing the predicate InRange gives it something to trigger on, but you still need to include the explicit range in RangeSet because otherwise it can't figure out that the set is finite.)
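As a small usage sketch (the Client method below is hypothetical, not from the original post), a caller that satisfies the two preconditions gets the conclusion just by invoking the lemma:
method Client(x: seq<int>)
  requires forall i :: 0 <= i < |x| ==>
    0 <= x[i] < |x|
  requires forall i :: 0 <= i < |x| ==>
    forall i' :: 0 <= i' < |x| && i != i' ==>
      x[i] != x[i']
{
  // Calling the lemma brings its postcondition into scope here.
  test(x);
  assert forall v :: 0 <= v < |x| ==> v in x;
}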
ltl p {(Xq) || (Fp)}
What is the formal language accepted by this LTL formula?
For example:
ltl p { p && (Xq)}
{ w = a0a1a2... | p ∈ a0 && q ∈ a1 }
The property X q || F p accepts words of the language
{ w = w0w1w2... | q ∈ w1 or ∃i ≥ 0. p ∈ wi }
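This is just the standard LTL semantics of the disjunction, stated here for completeness: a word satisfies X q || F p exactly when it satisfies X q or satisfies F p, so the language is the union
{ w = w0w1w2... | q ∈ w1 } ∪ { w = w0w1w2... | ∃i ≥ 0. p ∈ wi }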
Another way to look at the accepted language is to draw the Büchi automaton corresponding to the LTL formula X q || F p:
Credits: image generated with ltl2ba
I'm writing a routine to determine whether the high 16 bits of a 32-bit integer have more bits set than the low 16 bits.
In C, I would write this:
bool more_high_bits(int a) {
if ((a >> 16) == 0) return false; // no high bits
if ((a & 0xFFFF) == 0) return true; // no low bits
// clear one high bit and one low bit, and ask again
return more_high_bits(a&(a - 0x10001));
}
So in Haskell, I'm trying this:
more_high_bits a=if (a `shiftR` 16) /= 0 then 0 else
if ((.&.) a 65535) /= 0 then 1 else
more_high_bits((.&.) a (a-65537))
But it just times out.
What am I doing wrong? What's the more idiomatic way to do this? Please don't code away the shift or the & because I'd like to know how I "should" be using these.
Addendum: I tried this code out on an online Haskell compiler:
http://www.tutorialspoint.com/compile_haskell_online.php
import Data.Bits
g a=if (a `shiftR` 16) == 0 then 0 else
if ((.&.) a 65535) == 0 then 1 else
g((.&.) a (a-65537))
main = print (g(237))
But it tells me "No instance for (Bits a0) arising from a use of 'g'
The type variable 'a0' is ambiguous"
What is "a0"??
Here's a pretty direct translation of your C code to Haskell:
import Data.Word
import Data.Bits
more_high_bits :: Word32 -> Bool
more_high_bits a
| (a `shiftR` 16) == 0 = False
| (a .&. 0xFFFF) == 0 = True
| otherwise = more_high_bits (a .&. (a - 0x10001))
Your attempt has /= where the C version has ==, which inverts the condition.
a0 is the type variable that the type checker automatically created for your use of g 237. It doesn't know which type you mean because 237 could be any numeric type at all, and g works with all numbers that support bitwise operations and equality. The list of types you could have meant includes (but is not limited to) Int, Integer, Word, ...
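For example, one way to make the addendum compile (a sketch that keeps the 0/1 convention from your g) is to pin the type with an explicit signature, so there is no ambiguous a0 left for the type checker to complain about:
import Data.Bits
import Data.Word (Word32)

-- Same logic as g above, but with a concrete signature, so the type
-- checker no longer has to guess which Bits instance is meant.
g :: Word32 -> Int
g a = if (a `shiftR` 16) == 0 then 0 else
        if (a .&. 65535) == 0 then 1 else
          g (a .&. (a - 65537))

main :: IO ()
main = print (g 237)   -- prints 0: 237 has no bits set above bit 15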