x ([: u v) y expands to u (x v y), but so does x u@:v y. @: strictly supersedes [: in Special Codes. Is there any reason to use [: over @:?
Some people prefer ([: f g) to f@:g for readability, perhaps because it is more spread out, although f @: g accomplishes the same spacing without parentheses.
I am pretty sure that I have seen cases where it made a difference to the outcome, but I can't remember them now. Perhaps others will be able to come up with examples where they differ.
When left with an expression of the form ite ("a"="b") x y, which involves a decidable equality between two distinct string literals, it appears that simp on its own does not reduce this expression to y. This is in contrast to the case of ite ("a"="a") x y, which simp does reduce to x. So I find myself doing a case analysis cases decidable.em ("a"="b") with H H, then handling one case using exfalso and dec_trivial and the other using simp [H]. I can move forward this way, but I was wondering if there is a more idiomatic and shorter way to achieve the same result.
Either rw [if_neg (show "a" ≠ "b", from dec_trivial)] or simp [if_neg (show "a" ≠ "b", from dec_trivial)] is the easiest way I know.
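For instance, a minimal Lean 3 sketch (the goal and the variables x, y are placeholders of my own, not from the question):

```lean
example (x y : ℕ) : (if "a" = "b" then x else y) = y :=
by rw [if_neg (show "a" ≠ "b", from dec_trivial)]
```

The rw rewrites the ite to its else-branch using the proof of disequality, and the resulting y = y goal closes by reflexivity.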
I wrote a program in Scheme a while ago that would evolve an L-system using macros. Essentially there are rules about how tokens are expanded which would be run recursively. For example, given the rules:
F => F F
X => X < F > F
> => identity (stay >)
< => identity (stay <)
If we start with X, we get:
// after 0 iterations
X
// after 1 iteration
X < F > F
// after 2 iterations
X < F > F < F F > F F
// after 3 iterations
X < F > F < F F > F F < F F F F > F F F F
et cetera. In Scheme this was a charm to do. Super simple matching and recursive macro definitions. The call looked something like this:
; macro name iters starting tokens
(evolve-lsys-n 5 X F X)
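(For concreteness, and independent of any macro system, here is a sketch of the same token rewriting in plain Haskell, with L and R standing in for < and >; the names are my own, not from the Scheme program.)

```haskell
-- tokens of the L-system; L and R stand in for < and >
data Tok = X | F | L | R deriving (Show, Eq)

-- one generation: expand each token according to the rules above
step :: Tok -> [Tok]
step F = [F, F]
step X = [X, L, F, R, F]
step t = [t]          -- < and > stay as they are

-- run n generations from a starting word
evolve :: Int -> [Tok] -> [Tok]
evolve n = (!! n) . iterate (concatMap step)
```

Here evolve 2 [X] yields [X,L,F,R,F,L,F,F,R,F,F], matching the X < F > F < F F > F F line above.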
But I'm really struggling to do this with Rust.
Standard macro_rules! macros have the advantage of pattern matching, which is really nice. But unfortunately there's no unquoting/quasiquoting as far as I can tell, so I can't actually tail-recurse (I think?).
Procedural macros seem like the way to go, but I'm also struggling with how to do this.
If the input is the same as what I had in Scheme (evolve!(X F X)), how do I go about actually looping through these tokens?
With Rust being more powerful, I'm also hoping I can have a more expressive input without additional spaces. For example, evolve!(XFX) would be nice. Is this possible? This doesn't seem like much of a benefit here, but when defining the expansion rules they can actually get pretty big, so it would be nice to avoid spaces.
Finally, in Scheme I was also able to implement parametric macros. This means that some "tokens" would have parameters. A call would look something like this: evolve!(X F(10) X), and the expansion for F(10) would take the parameter and do something with it, for example F(t) F(t * 2), so that F(10) would expand to F(10) F(20).
Obviously I can do all of this without macros. I really like the idea of using macros for all of this though, as it's just an interesting exercise and comes from the idea of "defining your own grammar" which is the most attractive part of Scheme and lisp-like languages to me.
Thanks,
I want to put the operation that takes all the items in a list that are greater than 2 into a point-free (as in not explicitly capturing the argument in a variable) function in J. I wanted to do this by using ~ with a hook, like f =: ((> & 2) #)~ but it seems that neither that nor ((> & 2) #~) works.
My reasoning was that my function has the form (f y) g y where y is the list, f is (> & 2), and g is #. I would appreciate any help!
Everything is OK except that you mixed up the order of the hook. It's y f (g y), so you want
(#~ (>&2)) y
Hooks have the form f g and the interpretation, when applied to a single argument (i.e. monadically), is (unaltered input) f (g input). So, as Eelvex noted, you'd phrase this as a hook like hook =: #~ >&2 . Also, as kaledic noted, the idiom (#~ filter) is extremely common in J, so much so that it's usually read as a cohesive whole: keep-items-matching-filter.*
If you wanted a point-free phrasing of the operation which looks similar, notationally, to the original noun-phrase (y > 2) # y , you might like to use the fork >&2 # ] where ] means "the unaltered input" (i.e. the identity function), or even (] > 2:) # ] or some variation.
(*) In fact, the pattern (f~ predicate) defines an entire class of idioms, like (<;.1~ frets) for cutting an array into partitions and (</.~ categories) for classifying the items of an array into buckets.
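For readers more at home in Haskell, the monadic hook and the (#~ filter) idiom can be sketched like this (all names are mine, chosen for illustration):

```haskell
-- J's monadic hook (f g) y computes y f (g y)
hook :: (a -> b -> c) -> (a -> b) -> a -> c
hook f g y = f y (g y)

-- J's # (copy) keeps the items whose boolean flag is true; here in the
-- argument order of #~, i.e. items on the left, flags on the right
copy :: [a] -> [Bool] -> [a]
copy ys bs = [y | (y, b) <- zip ys bs, b]

-- the hook (#~ >&2): keep the items greater than 2
keepGT2 :: [Int] -> [Int]
keepGT2 = hook copy (map (> 2))
```

So keepGT2 [1,0,3,4] gives [3,4], just as (#~ >&2) 1 0 3 4 gives 3 4 in J.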
I have defined the following functions for beta reduction, but I'm not sure how to handle the case where free variables become captured.
data Term = Variable Char | Lambda Char Term | Pair Term Term deriving (Show, Eq)

-- substitution: subst t x m replaces free occurrences of x in t with m
subst :: Term -> Char -> Term -> Term
subst (Variable s) x m = if s == x then m else Variable s
subst (Pair a b)   x m = Pair (subst a x m) (subst b x m)
subst (Lambda y b) x m
  | y == x    = Lambda y b
  | otherwise = Lambda y (subst b x m) -- unsound when y occurs free in m (capture)

-- beta reduction
reduce :: Term -> Term
reduce (Pair (Lambda x b) m) = subst b x m
reduce (Pair l1 l2)          = Pair (reduce l1) (reduce l2)
reduce t                     = t
The link hammar gave in the comment describes the solution in detail.
I'd just like to offer a different solution. Nicolaas Govert de Bruijn, a Dutch mathematician, invented an alternative notation for lambda terms. The idea is that instead of using symbols for variables, we use numbers: each variable is replaced by the number of lambdas we need to cross, going outward, until we find the abstraction that binds it. Abstractions then don't need to carry any information at all. For example:
λx. λy. x
is converted to
λ λ 2
or
λx. λy. λz. (x z) (y z)
is converted to
λ λ λ 3 1 (2 1)
This notation has several considerable advantages. Notably, since there are no variable names, there is no renaming of variables and no α-conversion. Although we have to renumber the indices accordingly when substituting, we don't have to check for name conflicts or do any renaming. The Wikipedia article on De Bruijn indices gives an example of how a β-reduction works in this notation.
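As a sketch of how index-based substitution works (indices are 1-based, as in the examples above; the shifting scheme follows the standard presentation, and all names are my own):

```haskell
-- de Bruijn terms: Var n refers to the n-th enclosing lambda
data DB = Var Int | Lam DB | App DB DB deriving (Show, Eq)

-- shift free indices >= c by d (c tracks how many binders we've crossed)
shift :: Int -> Int -> DB -> DB
shift d c (Var k)   = Var (if k >= c then k + d else k)
shift d c (Lam t)   = Lam (shift d (c + 1) t)
shift d c (App a b) = App (shift d c a) (shift d c b)

-- substitute s for index j in t, renumbering as we go under binders
subst :: Int -> DB -> DB -> DB
subst j s (Var k)   = if k == j then s else Var k
subst j s (Lam t)   = Lam (subst (j + 1) (shift 1 1 s) t)
subst j s (App a b) = App (subst j s a) (subst j s b)

-- one beta step on a redex (λ t) s: no α-conversion anywhere
beta :: DB -> DB
beta (App (Lam t) s) = shift (-1) 1 (subst 1 (shift 1 1 s) t)
beta t               = t
```

For example, beta applied to (λ λ 2) (λ 1), i.e. App (Lam (Lam (Var 2))) (Lam (Var 1)), gives Lam (Lam (Var 1)), that is λ λ 1, which is the expected λy. λz. z.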
In Haskell there are two functions that allow one to perform an operation on a list of items in order to reduce it to a single value. (There are more than two, of course, but these are the two I'm interested in.) They are foldl1 and foldr1. If the operation to be performed is associative (such as addition), it doesn't matter which of these you use. The result will be the same. However, if the operation is not associative (e.g., subtraction), then the two produce very different results. For example:
foldr1 (-) [1..9]
foldl1 (-) [1..9]
The answer to the first one is 5 and to the second, -43. The J equivalent of foldr1 is the insert adverb, /, e.g.,
-/ 1+i.9
which is the equivalent of foldr1 (-) [1..9]. I want to create an adverb in J that works like the insert adverb, but folds left instead of right. The best I could come up with is the following:
foldl =: 1 : 'u~/@|.'
Thus, one could say:
- foldl 1+i.9
and get -43 as the answer, which is what is expected from a left fold.
Is there a better way to do this in J? For some reason, reversing the y argument does not seem efficient to me. Perhaps there is a way to do this without having to resort to that.
I don't think there is a better way to fold left than the one you describe:
(v~) / (|. list)
It is a very natural way, an almost "literal" implementation of the definition. The cost of reversing the list is very small (imo).
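In Haskell terms, the same identity reads as follows (the function name is mine, for illustration); it mirrors J's u~/ applied to |. y exactly:

```haskell
-- a left fold is a right fold of the flipped operation over the reversed list
foldl1ViaReverse :: (a -> a -> a) -> [a] -> a
foldl1ViaReverse f = foldr1 (flip f) . reverse
```

So foldl1ViaReverse (-) [1..9] gives -43, agreeing with foldl1 (-) [1..9].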
The other obvious way of implementing the left fold is to set
new_list = (first v second) , rest
eg:
foldl_once =: 1 :'(u / 0 1 { y), (2}. y)'
foldl =: 1 :'(u foldl_once)^:(<:#y) y'
so:
- foldl >:i.9
_43
but your way performs much better than this both in space and time.
($:@}:-{:)^:(1<#) 1+i.9
_43
No idea if it's any more (or less) efficient.