I wrote a program in Scheme a while ago that would evolve an L-system using macros. Essentially there are rules describing how each token expands, and those rules are applied recursively. For example, given the rules:
F => F F
X => X < F > F
> => identity (stay >)
< => identity (stay <)
If we start with X, we get:
// after 0 iterations
X
// after 1 iteration
X < F > F
// after 2 iterations
X < F > F < F F > F F
// after 3 iterations
X < F > F < F F > F F < F F F F > F F F F
et cetera. In Scheme this was a charm to do. Super simple matching and recursive macro definitions. The call looked something like this:
; macro name iters starting tokens
(evolve-lsys-n 5 X F X)
But I'm really struggling to do this with Rust.
Standard macro_rules! macros have the advantage of pattern matching, which is really nice. But unfortunately there's no unquoting/quasiquoting as far as I can tell, so I can't actually tail-recurse (I think?).
Procedural macros seem like the way to go, but I'm also struggling with how to do this.
If the input is the same as what I had in Scheme (evolve!(X F X)), how do I go about actually looping through these tokens?
With Rust being more powerful, I'm also hoping I can have a more expressive input without additional spaces. For example, evolve!(XFX) would be nice. Is this possible? This doesn't seem like much of a benefit here, but when defining the expansion rules they can actually get pretty big, so it would be nice to avoid spaces.
Finally, in Scheme I was also able to implement parametric macros. This means that some "tokens" would have parameters. A call would look something like this: evolve!(X F(10) X), and the expansion for F(10) would take the parameter and do something with it, for example F(t) F(t * 2), so that F(10) would expand to F(10) F(20).
Obviously I can do all of this without macros. I really like the idea of using macros for all of this though, as it's just an interesting exercise and comes from the idea of "defining your own grammar" which is the most attractive part of Scheme and lisp-like languages to me.
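For reference, the rewriting itself, done without macros, amounts to a step function applied repeatedly. A minimal sketch, written here in Haskell only to pin down the expansion described above (all token and function names are mine):
-- One rewrite step per token; < and > are left unchanged.
data Token = X | F | LT' | GT' deriving (Show, Eq)

step :: Token -> [Token]
step F = [F, F]
step X = [X, LT', F, GT', F]
step t = [t]

-- Apply the step to every token, n times over.
evolve :: Int -> [Token] -> [Token]
evolve 0 ts = ts
evolve k ts = evolve (k - 1) (concatMap step ts)

main :: IO ()
main = mapM_ (print . (`evolve` [X])) [0 .. 3]
With these rules, evolve 2 [X] reproduces the "after 2 iterations" line above.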
Thanks,
Related
Say I have a Haskell function f n l = filter (n<) l that takes an integer n and a list l and returns all of the integers in l greater than n.
I'm trying to figure out how to best write this function in a language like Joy. I've generally had good luck with converting the haskell function to pointfree form f = filter . (<) and then trying to rewrite it in Joy from there. But I can't figure out how to simulate partial function application in a concatenative language.
So far, I've tried to do something like swap [[>] dip] filter, but it seems like there must be a better/cleaner way to write this.
Also, I'm experimenting with writing my own concatenative language and was wondering if lazy-evaluation could be compatible with concatenative languages.
swap [[>] dip] filter won’t work because it assumes n is accessible for each call to the quotation by which you’re filtering; that implies filter can’t leave any intermediate values on the stack while it’s operating, and > doesn’t consume n. You need to capture the value of n in that quotation.
First “eta”-reduce the list parameter:
l n f = l [ n > ] filter
n f = [ n > ] filter
Then capture n by explicitly quoting it and composing it with >:
n f = n quote [ > ] compose filter
(Assuming quote : a -> (-> a), a.k.a. unit, which takes a value and wraps it in a quotation, and compose : (A -> B) (B -> C) -> (A -> C), a.k.a. cat, which concatenates two quotations.)
Then just “eta”-reduce n:
f = quote [ > ] compose filter
I put “eta” in scare quotes because it’s a little more general than in lambda calculus, working for any number of values on the stack, not just one.
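For comparison, the same derivation in Haskell terms is just sectioning and composition; this paraphrase is mine, not part of the original answer:
f1, f2, f3 :: Ord a => a -> [a] -> [a]
f1 n l = filter (\x -> n < x) l   -- pointwise: keep elements greater than n
f2 n   = filter (\x -> n < x)     -- "eta"-reduce the list parameter
f3     = filter . (<)             -- capture n with the section (<), so f3 n = filter (n <)

main :: IO ()
main = print (f1 2 [1 .. 5], f2 2 [1 .. 5], f3 2 [1 .. 5])   -- all three are [3,4,5]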
You can of course factor out partial application into its own definition, e.g. the papply combinator in Cat, which is already defined as swons (swap cons) in Joy, but can also be defined like so:
DEFINE
papply (* x [F] -- [x F] *)
== [unit] dip concat ;
f (* xs n -- xs[>n] *)
== [>] papply filter .
In Kitten this could be written in a few different ways, according to preference:
// Point-free
function \> compose filter
// Local variable and postfix
-> n; { n (>) } filter
// Local variable and operator section
-> n; \(n <) filter
Any evaluation strategy compatible with functional programming is also compatible with concatenative programming—popr is a lazy concatenative language.
Here's the code.
largestDivisible :: (Integral a) => a
largestDivisible = head (filter p [100000,99999..])
where p x = x `mod` 3829 == 0
I am a little bit confused. What is p in this case? Also, I do not understand the where clause in this particular example: we have two names, p and x, on the left-hand side and a single right-hand side, which is actually a Boolean.
I would appreciate it if someone could explain the above code to me.
p is a function that accepts an argument x and returns True only if x is divisible by 3829. You can use where to define local functions just as you define local "values", using the same f x = y syntax you use to define top-level functions.
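To make that concrete, here is the same definition next to an equivalent one with the local function replaced by a lambda (specialized to Integer so the snippet is self-contained; the primed name and main are mine):
largestDivisible :: Integer
largestDivisible = head (filter p [100000, 99999 ..])
  where
    p x = x `mod` 3829 == 0   -- p is an ordinary function, defined locally

-- Exactly the same computation, with p inlined as a lambda:
largestDivisible' :: Integer
largestDivisible' = head (filter (\x -> x `mod` 3829 == 0) [100000, 99999 ..])

main :: IO ()
main = print (largestDivisible, largestDivisible')   -- (99554,99554)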
I am trying to understand currying by reading various blogs and Stack Overflow answers, and I think I have understood it somewhat. In Haskell every function is curried, which means that when you have a function like f x y = x + y
it really is ((f x) y)
Here, the function first takes the parameter x; partially applying f to it returns a function of y, which then takes just the single parameter y and applies the function. In both cases each function takes only one parameter, and the process of reducing a function to one that takes a single parameter at a time is called 'currying'. Correct me if my understanding is wrong here.
So if it is correct, could you please tell me if the functions 'two' and 'three' are curried functions?
three x y z = x + y + z
two = three 1
same = two 1
In this case I have two specialized functions, 'two' and 'same', each obtained by supplying only one parameter, so are they curried?
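For concreteness, here is a runnable version of those definitions with types pinned to Int (the type signatures and the main check are mine):
three :: Int -> Int -> Int -> Int
three x y z = x + y + z

two :: Int -> Int -> Int
two = three 1        -- behaves like \y z -> 1 + y + z

same :: Int -> Int
same = two 1         -- behaves like \z -> 1 + 1 + z

main :: IO ()
main = print (three 1 2 3, two 2 3, same 4)   -- (6,6,6)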
Let's look at two first.
It has a signature of
two :: Num a => a -> a -> a
Forget the Num a for now (it's only a constraint on a; you can read Int here).
Surely this too is a curried function.
The next one is more interesting:
same :: Num a => a -> a
(btw: nice name - it's the same but not exactly id ^^)
TBH: I don't know for sure.
The best definition I know of a curried function is this:
A curried function is a function of N arguments returning another function of (N-1) arguments.
(if you want, you can extend this to fully curried functions, of course)
This will only fit if you define constants as functions with 0 parameters - which you surely can.
So I would say yes(?), this too is a curried function, but only in a mathy, borderline way (just as the sum of 0 numbers is defined to be 0).
Best just think about this equationally. The following are all equivalent definitions:
f x y z = x+y+z
f x y = \z -> x+y+z
f x = \y -> (\z -> x+y+z)
f = \x -> (\y -> (\z -> x+y+z))
Partial application is only tangentially relevant here. Most often you don't want the partial application to actually be performed and a lambda object created in memory; you hope instead that the compiler will use, and better optimize, the full definition at the final point of full application.
The presence of the functions curry/uncurry is yet another confusing issue. Both f (x,y) = ... and f x y = ... are curried in Haskell, of course, but in our heads we tend to think about the first as a function of two arguments, so the functions translating between the two forms are named curry and uncurry, as a mnemonic.
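A small sketch of the standard curry and uncurry functions translating between the two forms (the example names are mine):
addPair :: (Int, Int) -> Int
addPair (x, y) = x + y

addCurried :: Int -> Int -> Int
addCurried = curry addPair       -- curry :: ((a, b) -> c) -> a -> b -> c

addPair' :: (Int, Int) -> Int
addPair' = uncurry addCurried    -- uncurry :: (a -> b -> c) -> (a, b) -> c

main :: IO ()
main = print (addPair (2, 3), addCurried 2 3, addPair' (2, 3))   -- (5,5,5)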
You could think of the three function, written with anonymous functions, as:
three = \x -> (\y -> (\z -> x + y + z))
I want to turn the operation that takes all the items in a list greater than 2 into a point-free (as in not explicitly capturing the argument in a variable) function in J. I wanted to do this by using ~ with a hook, like f =: ((> & 2) #)~, but it seems that neither that nor ((> & 2) #~) works.
My reasoning was that my function has the form (f y) g y where y is the list, f is (> & 2), and g is #. I would appreciate any help!
Everything is OK except that you mixed up the order of the hook. It's y f (g y), so you want
(#~ (>&2)) y
Hooks have the form f g and the interpretation, when applied to a single argument (i.e. monadically), is (unaltered input) f (g input). So, as Eelvex noted, you'd phrase this as a hook like hook =: #~ >&2 . Also, as kaledic noted, the idiom (#~ filter) is extremely common in J, so much so that it's usually read as a cohesive whole: keep-items-matching-filter.*
If you wanted a point-free phrasing of the operation which looks similar, notationally, to the original noun-phrase (y > 2) # y , you might like to use the fork >&2 # ] where ] means "the unaltered input" (i.e. the identity function), or even (] > 2:) # ] or some variation.
(*) In fact, the pattern (f~ predicate) defines an entire class of idioms, like (<;.1~ frets) for cutting an array into partitions and (</.~ categories) for classifying the items of an array into buckets.
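If it helps to see the hook pattern outside of J, here is a rough Haskell rendering of (f g) y = y f (g y) and of the (#~ >&2) example; the helper names are mine and this is only an analogy:
-- hook f g applied to y computes y `f` (g y).
hook :: (a -> b -> c) -> (a -> b) -> a -> c
hook f g y = y `f` g y

-- Dyadic # copies items according to a boolean mask; >&2 compares each item with 2.
copyByMask :: [a] -> [Bool] -> [a]
copyByMask xs mask = [x | (x, keep) <- zip xs mask, keep]

keepGT2 :: [Int] -> [Int]
keepGT2 = hook copyByMask (map (> 2))

main :: IO ()
main = print (keepGT2 [1, 5, 2, 7])   -- [5,7]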
I've been asking a few questions about strictness, but I think I've missed the mark before. Hopefully this is more precise.
Lets say we have:
n = 1000000
f z = foldl' (\(x1, x2) y -> (x1 + y, y - x2)) z [1..n]
Without changing f, what should I set
z = ...
so that f z does not overflow the stack (i.e. runs in constant space regardless of the size of n)?
It's okay if the answer requires GHC extensions.
My first thought is to define:
g (a1, a2) = (!a1, !a2)
and then
z = g (0, 0)
But I don't think g is valid Haskell.
So your strict foldl' is only going to evaluate the result of your lambda at each step of the fold to weak head normal form (WHNF), i.e. it is only strict in the outermost constructor. Thus the tuple will be evaluated, but the additions inside the tuple may build up as thunks. This in-depth answer actually seems to address your exact situation here.
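A tiny illustration of "only to WHNF" (the example is mine): forcing a pair forces the (,) constructor but not the thunks stored inside it.
pair :: (Int, Int)
pair = (1 + 2, error "never forced")

main :: IO ()
main = pair `seq` print (fst pair)   -- prints 3; the error in the second slot is never evaluated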
W/R/T your g: you are thinking of the BangPatterns extension, which would look like
g (!a1, !a2) = (a1, a2)
and which evaluates a1 and a2 to WHNF before returning them in the tuple.
What you want to be concerned about is not your initial accumulator, but rather your lambda expression. This would be a nice solution:
f z = foldl' (\(!x1, !x2) y -> (x1 + y, y - x2)) z [1..n]
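Spelled out as a full program, that suggestion needs the BangPatterns pragma (the module layout and main driver are mine):
{-# LANGUAGE BangPatterns #-}
import Data.List (foldl')

n :: Int
n = 1000000

-- The bangs force both components of the accumulator at every step of the fold,
-- so no chain of (+)/(-) thunks can build up.
f :: (Int, Int) -> (Int, Int)
f z = foldl' (\(!x1, !x2) y -> (x1 + y, y - x2)) z [1 .. n]

main :: IO ()
main = print (f (0, 0))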
EDIT: After noticing your other questions I see I didn't read this one very carefully. Your goal is to have "strict data" so to speak. Your other option, then, is to make a new tuple type that has strictness tags on its fields:
data Tuple a b = Tuple !a !b
Then when you pattern match on Tuple a b, a and b will be evaluated.
You'll need to change your function regardless.
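And for completeness, the strict-data variant as a full program; note that f itself has to be rewritten to use the new type, which is exactly the point of the remark above (again, the layout is mine):
import Data.List (foldl')

data Tuple a b = Tuple !a !b deriving Show

n :: Int
n = 1000000

-- The strict fields are evaluated whenever a Tuple is constructed and forced,
-- so foldl' keeps the accumulator fully evaluated at each step.
f :: Tuple Int Int -> Tuple Int Int
f z = foldl' (\(Tuple x1 x2) y -> Tuple (x1 + y) (y - x2)) z [1 .. n]

main :: IO ()
main = print (f (Tuple 0 0))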
There is nothing you can do without changing f. If f were overloaded in the type of the pair you could use strict pairs, but as it stands you're locked in to what f does. There's some small hope that the compiler (strictness analysis and transformations) can avoid the stack growth, but nothing you can count on.