Emacs: fill all text except indicated regions

Using Elisp within GNU Emacs, I'd like to be able to fill all text in the buffer except for text that is indicated with special identifiers. The identifiers can be just about anything, but for the sake of this question, let's just assume it's any text that falls between [nofill] and [/nofill] tags.
For example, assume that my buffer looks like this:
Now is the time
for all good
men to come to the aid
of their party. Now is
the time for all good
men to come to the aid
of their party.
[nofill]
The quick
brown fox
jumped over the
lazy sleeping dog
[/nofill]
When in the course of
human events, it becomes
it becomes necessary for one
people to dissolve the
political bands
[nofill]
baa-baa
black sheep,
have you
any wool
[/nofill]
After the kind of filling I'm looking for, I want the buffer to appear as follows:
Now is the time for all good men to come to the aid of their
party. Now is the time for all good men to come to the aid of
their party.
[nofill]
The quick
brown fox
jumped over the
lazy sleeping dog
[/nofill]
When in the course of human events, it becomes it becomes
necessary for one people to dissolve the political bands
[nofill]
baa-baa
black sheep,
have you
any wool
[/nofill]
I know elisp and I could write something which does this. However, before I attempt to "reinvent the wheel", I'm wondering if anyone knows of any existing elisp modules which might already provide this functionality.
Thank you in advance.

You can just justify everything between [/nofill] and [nofill] (or possibly beginning/end of buffer).
(defun fill-special () "fill special"
  (interactive)
  (goto-char (point-min))
  (while (< (point) (point-max))
    (let ((start (point)))
      (if (search-forward "[nofill]" nil 1)
          (forward-line -1))
      (fill-region start (point) 'left)
      (if (search-forward "[/nofill]" nil 1)
          (forward-line 1)))))

This seems overly complicated compared to the other answer, but basically, I mark the current point, search forward for a tag (which could be parameterized), and fill the region. Then, I recursively call fill-region-ignore-tags-helper, using the first non-whitespace character after the closing [/nofill] tag as the start of the next region, and the next [nofill] tag as its end. This continues until the entire buffer is filled. It seems to work on some random trivial cases, although there may be edge cases that aren't covered.
(defun fill-region-ignore-tags ()
  (interactive)
  (save-excursion
    (goto-char (point-min)) ; search from the top, not from wherever point is
    (fill-region-ignore-tags-helper (point-min)
                                    (search-forward "[nofill]" nil t))))

(defun fill-region-ignore-tags-helper (begin end)
  (let ((cur-point begin)
        (next-point end))
    (if (eq next-point nil)
        nil
      (progn
        (fill-region cur-point next-point)
        (fill-region-ignore-tags-helper
         (progn
           (search-forward "[/nofill]")
           (re-search-forward "\\S-") ; first non-whitespace after the tag
           (point))
         ;; NOERROR search: returns nil after the last [nofill],
         ;; which makes the recursion stop
         (when (search-forward "[nofill]" nil t)
           (forward-line -1)
           (point)))))))

Related

Clojure concurrency: Let an agent act on a java object / deftype

Working my way up learning Clojure, I arrived at the following problem:
Setup: a class for a graph data structure, created with deftype and definterface, which has an addNode [id data] member function. It works as expected when called directly, like (.addNode graph "anItem" 14).
Idea: since string tokenizing and updating the graph both consume considerable amounts of time (at least for millions of lines), I would like to read and tokenize the file serially and push the token lists to an agent which will execute the (.addNode graph id data) part.
Problem: I can't seem to find the right syntax to make the agent accept a class instance's member function as its update function.
Simplified code (dropped namespaces here, may contain typos!):
; from graph.clj
(definterface IGraph
  (addNode [id data])
  (addNode2 [_ id data]))

(deftype Graph [^:volatile-mutable nodes] ; expects an empty map, else further calls fail horribly
  IGraph
  (addNode [this id data] (set! nodes (assoc nodes id data)) this)
  (addNode2 [this _ id data] (.addNode this id data) this))
; core.clj
(def g (Graph. {}))
(def smith (agent g)) ; agent smith shall do the dirty work

(send smith .addNode "x" 42)                     ; unable to resolve symbol
(send smith (.addNode @smith) "x" 42)            ; IllegalArgumentException (arity?)
(send smith (.addNode2 @smith) "x" 42)           ; same as above. Not arity after all?
(send smith #(.addNode @smith) "x" 42)           ; ArityException of eval (3)
(send smith (partial #(.addNode @smith)) "x" 42) ; the same

; agent smith, the president is ashamed...
The five lines won't work for various reasons, while a simple
(def jones (agent 0))
(send jones + 1)
; agent jones, this nation is in your debt
successfully executes. This should be possible, so what am I doing wrong?
Your direct issue is that .addNode isn't a function, but some sugar around the . special form. You can't pass special forms around this way, so you'll need to wrap it in a function that the agent knows how to call - #(.addNode %1 %2 %3) or something similar. The special form is then only evaluated once all the arguments are there, and it can see that there is an addNode method on the graph in its first argument.
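For illustration, a sketch of a wrapper that should work with the Graph deftype from the question (untested against it; the agent passes its current state as the first argument, followed by the arguments given to send):

(send smith (fn [g id data] (.addNode g id data)) "x" 42)

;; or, with the anonymous-function reader syntax:
(send smith #(.addNode %1 %2 %3) "x" 42)

;; .addNode returns this, so the graph instance remains the agent's state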
Still, James Sharp's answer has a good point - this is a pretty imperative and OO way to treat this problem. From your code so far it looks like you're intending to feed tokens from the list serially into smith with send, who will then update his graph by assoc-ing each into it. This is a classic reduce operation - take the empty graph and assoc into that, take the result of that and assoc into that and so on, until the input runs out. Having an agent perform STM things between each step of this process doesn't seem very necessary.
If you're looking to use ^:volatile-mutable for performance reasons, you could also try using transients and reducing with assoc! - or just using (into {} ...), which handles the transients for you (although it behaves like conj, not assoc, and for maps expects vectors of [key value] rather than separate key and value arguments).
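A minimal sketch of that reduce, where tokens stands in for the tokenizer's output as a sequence of [id data] pairs (the name is made up for illustration):

(def graph
  (reduce (fn [m [id data]] (assoc m id data)) {} tokens))

;; or, letting into manage the transients for you:
(def graph (into {} tokens))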
What you are trying to do could be made to work, but IMHO it is not idiomatic; you are still thinking in an OO way. As the docs say,
Agent should be itself immutable (preferably an instance of one of Clojure's persistent collections)
You can model a tree with a Map; have a look at this example.
Generally speaking, 4clojure is a good place to get started writing idiomatic Clojure solutions.
Edited: more complete and idiomatic example

Why is the argument position of split and join in clojure.string mixed up?

I wanted to do this:
(-> string
    (str/split #"\s")
    (modification-1)
    (modification-2)
    …
    (modification-n)
    (str/join "\n"))
But no: split takes [s regex] and join takes [separator coll].
Is there any apparent reason for this madness (read: What is the design decision behind this)?
As of Clojure 1.5, you can also use one of the new threading macros.
clojure.core/as->
([expr name & forms])
Macro
Binds name to expr, evaluates the first form in the lexical context
of that binding, then binds name to that result, repeating for each
successive form, returning the result of the last form.
It's quite a new construct, so I'm not sure how to use it idiomatically yet, but I guess something like this would do:
(as-> "test test test" s
  (str/split s #" ")
  (modification-1 s)
  (modification-2 s)
  ...
  (modification-n s)
  (str/join "\n" s))
Edit
As for why the argument position is different, I'm in no place to say, but I think Arthur's suggestion makes sense:
Some functions clearly operate on collections (map, reduce, etc). These tend to consistently take the collection as the last argument, which means they work well with ->>
Some functions don't operate on collections and tend to take the most important argument (is that a thing?) as the first argument. For example, when using / we expect the numerator to come first. These functions work best with ->
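A quick REPL illustration of the two conventions, using / from the example above:

(-> 10 (/ 2))   ;=> 5    ; threads into the first slot: (/ 10 2)
(->> 10 (/ 2))  ;=> 1/5  ; threads into the last slot: (/ 2 10)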
The thing is - some functions are ambiguous. They might take a collection and produce a single value, or take a single value and produce a collection. string/split is one example (disregarding for the moment the additional confusion that a string could be thought of as both a single value and a collection). Concatenation/reducing operations will also do it - they will mess up your pipeline!
Consider, for instance:
(->> (range 1 5)
     (map inc)
     (reduce +)
     ;; at this point we have a single value and might want to...
     (- 4)
     (/ 2))
;; but we're threading in the last position
;; and unless we're very careful, we'll misread this arithmetic
In those cases, I think something like as-> is really helpful.
I think in general the guideline to use ->> when operating on collections and -> otherwise is sound - and it's just in these borderline/ambiguous cases, as-> can make the code a little neater, a little clearer.
I also run into this sort of (minor) threading headache fairly regularly.
(-> string
    (str/split #"\s")
    (modification-1)
    (modification-2)
    …
    (modification-n)
    (#(str/join "\n" %)))
and often create an anonymous function to make the ordering match. My guess as to why is that some functions were intended to be used with thread-first ->, some with thread-last ->>, and for some threading was not a design goal - though this is just a guess.
You can use partial to fix the separator argument for str/join.
(-> string
    (str/split #"\s")
    (modification-1)
    (modification-2)
    ;;
    (modification-n)
    ((partial str/join "\n")))
There is nothing wrong with threading your threaded expression through another threading macro, like this:
(-> string
    (str/split #"\s")
    modification-1
    modification-2
    modification-n
    (->> (str/join "\n")))
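This works because -> threads the accumulated expression into the first argument slot of the (->> ...) form, which then threads it into the last position of str/join. A REPL check:

(macroexpand '(-> s (->> (str/join "\n"))))
;=> (str/join "\n" s)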

Testing for non standard ascii character in common lisp

I need to test a string to see if it contains any characters with codes above decimal 127 (extended ASCII codes) or below 32. Is there any really nice way to do this, or will I just have to iterate through the whole string and compare char-codes of the characters? I am using the Common Lisp implementation CCL.
The portable way is, as you suggested yourself,
(defun string-standard-p (string &key (min 32) (max 127))
  (every (lambda (c) (<= min (char-code c) max)) string))
There may be an implementation-specific way, e.g., in CLISP, you can do
(defun string-encodable-p (string encoding)
  (every (lambda (c) (typep c encoding)) string))

(string-encodable-p "foo" charset:ascii)
==> T
although it will actually accept all ASCII characters, not just 32:127.
(I am sorry, I am not familiar with CCL).
However, I am pretty sure that you will not find a nicer solution than the one you suggested in your question.

To what extent are macros "functions in reverse?"

I'm writing a Lisp in Haskell (code at GitHub) as a way of learning more about both languages.
The newest feature that I'm adding is macros. Not hygienic macros or anything fancy - just plain vanilla code transformations. My initial implementation had a separate macro environment, distinct from the environment that all other values live in. Between the read and eval functions I interspersed another function, macroExpand, which walked the code tree and performed the appropriate transformations whenever it found a keyword in the macro environment, before the final form was passed on to eval to be evaluated. A nice advantage of this was that macros had the same internal representation as other functions, which reduced some code duplication.
Having two environments seemed clunky though, and it annoyed me that if I wanted to load a file, eval had to have access to the macro environment in case the file contained macro definitions. So I decided to introduce a macro type, store macros in the same environment as functions and variables, and incorporate the macro expansion phase into eval. I was at first at a bit of a loss for how to do it, until I figured that I could just write this code:
eval env (List (function : args)) = do
    func <- eval env function
    case func of
        (Macro {}) -> apply func args >>= eval env
        _          -> mapM (eval env) args >>= apply func
It works as follows:
If you are passed a list containing an initial expression and a bunch of other expressions...
Evaluate the first expression
If it's a macro, then apply it to the arguments, and evaluate the result
If it's not a macro, then evaluate the arguments and apply the function to the result
It's as though macros are exactly the same as functions, except the order of eval/apply is switched.
Is this an accurate description of macros? Am I missing something important by implementing macros in this way? If the answers are "yes" and "no", then why have I never seen macros explained this way before?
The answers are "no" and "yes".
It looks like you've started with a good model of macros, where the macro level and the runtime level live in separate worlds. In fact, this is one of the main points behind Racket's macro system. You can read some brief text about it in the Racket guide, or see the original paper that describes this feature and why it's a good idea to do that. Note that Racket's macro system is a very sophisticated one, and it's hygienic -- but phase separation is a good idea regardless of hygiene. To summarize the main advantage, it makes it possible to always expand code in a reliable way, so you get benefits like separate compilation, and you don't depend on code loading order and such problems.
Then, you moved into a single environment, which loses that. In most of the Lisp world (e.g., in CL and in Elisp), this is exactly how things are done -- and obviously, you run into the problems that are described above. ("Obvious" since phase separation was designed to avoid these; you just happened to get your discoveries in the opposite order from how they happened historically.) In any case, to address some of these problems, there is the eval-when special form, which can specify that some code is evaluated at run-time or at macro-expansion-time. In Elisp you get that with eval-when-compile, but in CL you get much more hair, with a few other "*-time"s. (CL also has read-time, and having that share the same environment as everything else is triple the fun.) Even if it seems like a good idea, you should read around and see how some lispers lose hair because of this mess.
And in the last step of your description you step even further back in time and discover something that is known as FEXPRs. I won't even put any pointers for that; you can find a ton of texts about it: why some people think that it's a really bad idea, and why some other people think that it's a really good idea. Practically speaking, those two "some"s are "most" and "few" respectively -- though the few remaining FEXPR strongholds can be vocal. To translate all of this: it's explosive stuff... Asking questions about it is a good way to get long flamewars. (As a recent example of a serious discussion you can see the initial discussion period for the R7RS, where FEXPRs came up and led to exactly these kinds of flames.) No matter which side you choose to sit at, one thing is obvious: a language with FEXPRs is extremely different from a language without them. [Coincidentally, working on an implementation in Haskell might affect your view, since you have a place to go to for a sane static world for code, so the temptation in "cute" super-dynamic languages is probably bigger...]
One last note: since you're doing something similar, you should look into a similar project of implementing a Scheme in Haskell -- IIUC, it even has hygienic macros.
Not quite. Actually, you've pretty concisely described the difference between "call by name" and "call by value"; a call-by-value language reduces arguments to values before substitution, a call-by-name language performs substitution first, then reduction.
The key difference is that macros allow you to break referential transparency; in particular, the macro can examine the code, and thus can differentiate between (3 + 4) and 7, in a way that ordinary code can't. That's why macros are both more powerful and also more dangerous; most programmers would be upset if they found that (f 7) produced one result and (f (+ 3 4)) produced a different result.
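To make that concrete, here is a small Clojure sketch (the macro name is made up): the macro receives the unevaluated form, so it can tell (+ 3 4) apart from 7, which no ordinary function can do.

(defmacro shows-form [x]
  `(str "got the form: " '~x))

(shows-form 7)        ;=> "got the form: 7"
(shows-form (+ 3 4))  ;=> "got the form: (+ 3 4)"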
Background Rambling
What you have there is very late binding macros. This is a workable approach, but it is inefficient, because repeated executions of the same code will repeatedly expand the macros.
On the positive side, this is friendly for interactive development. If the programmer changes a macro, and then re-invokes some code which uses it, such as a previously defined function, the new macro instantly takes effect. This is an intuitive "do what I mean" behavior.
Under a macro system which expands macros earlier, the programmer has to redefine all of the functions that depend on a macro when that macro changes, otherwise the existing definitions continue to be based on the old macro expansions, oblivious to the new version of the macro.
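A sketch of that staleness in Clojure, which expands macros when the calling code is compiled (names made up for illustration):

(defmacro greeting [] "hello")
(defn greet [] (greeting))
(greet) ;=> "hello"

(defmacro greeting [] "goodbye")
(greet) ;=> still "hello" - greet keeps the old expansion
        ;   until its defn is re-evaluated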
A reasonable approach is to have this late binding macro system for interpreted code, but a "regular" (for lack of a better word) macro system for compiled code.
Expanding macros does not require a separate environment. It should not, because local macros should be in the same namespace as variables. For instance, in Common Lisp, if we do this: (let (x) (symbol-macrolet ((x 'foo)) ...)), the inner symbol macro shadows the outer lexical variable. The macro expander has to be aware of the variable binding forms. And vice versa! If there is an inner let for the variable x, it shadows an outer symbol-macrolet. The macro expander cannot just blindly substitute all occurrences of x that occur in the body.

So, in other words, Lisp macro expansion has to be aware of the full lexical environment in which macros and other kinds of bindings coexist. Of course, during macro expansion, you don't instantiate the environment in the same way: if there is a (let ((x (function))) ...), (function) is not called and x is not given a value. But the macro expander is aware that there is an x in this environment, and so occurrences of x are not macros.
So when we say one environment, what we really mean is that there are two different manifestations or instantiations of a unified environment: the expansion-time manifestation and then the evaluation-time manifestation. Late-binding macros simplify the implementation by merging these two times into one, as you have done, but it does not have to be that way.
Also note that Lisp macros can accept an &environment parameter. This is needed if the macros need to call macroexpand on some piece of code supplied by the user. Such a recursion back into the macro expander through a macro has to pass the proper environment so the user's code has access to its lexically surrounding macros and gets expanded properly.
Concrete Example
Suppose we have this code:
(symbol-macrolet ((x (+ 2 2)))
  (print x)
  (let ((x 42)
        (y 19))
    (print x)
    (symbol-macrolet ((y (+ 3 3)))
      (print y))))
The effect of this is to print 4, 42 and 6. Let's use the CLISP implementation of Common Lisp and expand this using CLISP's implementation-specific function called system::expand-form. We cannot use the regular, standard macroexpand because that will not recurse into the local macros:
(system::expand-form
 '(symbol-macrolet ((x (+ 2 2)))
    (print x)
    (let ((x 42)
          (y 19))
      (print x)
      (symbol-macrolet ((y (+ 3 3)))
        (print y)))))
-->
(LOCALLY ;; this code was reformatted by hand to fit your screen
  (PRINT (+ 2 2))
  (LET ((X 42) (Y 19))
    (PRINT X)
    (LOCALLY (PRINT (+ 3 3)))))
(Now firstly, about these locally forms. Why are they there? Note that they correspond to places where we had a symbol-macrolet. This is probably for the sake of declarations. If the body of a symbol-macrolet form has declarations, they have to be scoped to that body, and locally will do that. If the expansion of symbol-macrolet does not leave behind this locally wrapping, then declarations will have the wrong scope.)
From this macro expansion you can see what the task is. The macro expander has to walk the code and recognize all binding constructs (all special forms, really), not only binding constructs having to do with the macro system.
Notice how one of the instances of (print x) is left alone: the one which is in the scope of the (let ((x ..)) ...). The other became (print (+ 2 2)), in accordance with the symbol macro for x.
Another thing we can learn from this is that macro expansion just substitutes the expansion and removes the symbol-macrolet forms. So the environment that remains is the original one, minus all of the macro material which is scrubbed away in the expansion process. The macro expansion honors all of the lexical bindings, in one big "Grand Unified" environment, but then graciously vaporizes, leaving behind just the code like (print (+ 2 2)) and other vestiges like the (locally ...), with just the non-macro binding constructs resulting in a reduced version of the original environment.
Thus now when the expanded code is evaluated, just the reduced environment's run-time personality comes into play. The let bindings are instantiated and stuffed with initial values, etc. During expansion, none of that was happening; the non-macro bindings just lie there asserting their scope, and hinting at a future existence in the run time.
What you're missing is that this symmetry breaks down when you separate analysis from evaluation, which is what all practical Lisp implementations do. Macro expansion would occur during the analysis phase so eval can be kept simple.
I really recommend having some of the Lisp books handy. For example, Christian Queinnec's Lisp in Small Pieces; the book is about the implementation of Scheme.
http://pagesperso-systeme.lip6.fr/Christian.Queinnec/WWW/LiSP.html
Chapter 9 is about macros: http://pagesperso-systeme.lip6.fr/Christian.Queinnec/WWW/chap9.html
For what it's worth, the Scheme R5RS section Binding constructs for syntactic keywords has this to say about it:
Let-syntax and letrec-syntax are analogous to let and letrec, but they bind syntactic keywords to macro transformers instead of binding variables to locations that contain values.
See: http://www.schemers.org/Documents/Standards/R5RS/HTML/r5rs-Z-H-7.html#%_sec_4.3.1
This seems to imply a separate strategy should be used, at least for the syntax-rules macro system.
You can write some... interesting code in a Scheme that uses separate "places" for macros. It does not make much sense to mix macros and variables of the same name in any "real" code but if you just want to try it out, consider this example from Chicken Scheme:
#;1> let
Error: unbound variable: let
#;1> (define let +)
#;2> (let ((talk "hello!")) (write talk))
"hello!"
#;3> let
#<procedure C_plus>
#;4> (let 1 2)
Error: (let) not a proper list: (let 1 2)
Call history:
<syntax> (let 1 2) <--
#;4> (define a let)
#;5> (a 1 2)
3

Is functional Clojure or imperative Groovy more readable?

OK, no cheating now.
No, really, take a minute or two and try this out.
What does "positions" do?
Edit: simplified according to cgrand's suggestion.
(defn redux [[current next] flag] [(if flag current next) (inc next)])

(defn positions [coll]
  (map first (reductions redux [1 2] (map = coll (rest coll)))))
Now, how about this version?
def positions(coll) {
    def (current, next) = [1, 1]
    def previous = coll[0]
    coll.collect {
        current = (it == previous) ? current : next
        next++
        previous = it
        current
    }
}
I'm learning Clojure and I'm loving it, because I've always enjoyed functional programming. It took me longer to come up with the Clojure solution, but I enjoyed having to think of an elegant solution. The Groovy solution is alright, but I'm at the point where I find this type of imperative programming boring and mechanical. After 12 years of Java, I feel in a rut and functional programming with Clojure is the boost I needed.
Right, get to the point. Well, I have to be honest and say that I wonder if I'll understand the Clojure code when I go back to it months later. Sure I could comment the heck out of it, but I don't need to comment my Java code to understand it.
So my question is: is it a question of getting more used to functional programming patterns? Are functional programming gurus reading this code and finding it a breeze to understand? Which version did you find easier to understand?
Edit: what this code does is calculate the positions of players according to their points, while keeping track of those who are tied. For example:
Pos Points
1. 36
1. 36
1. 36
4. 34
5. 32
5. 32
5. 32
8. 30
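For reference, both versions above reproduce this table; e.g. the Clojure one at a REPL:

(positions [36 36 36 34 32 32 32 30])
;=> (1 1 1 4 5 5 5 8)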
I don't think there's any such thing as intrinsic readability. There's what you're used to, and what you aren't used to. I was able to read both versions of your code OK. I could actually read your Groovy version more easily, even though I don't know Groovy, because I too spent a decade looking at C and Java and only a year looking at Clojure. That doesn't say anything about the languages, it only says something about me.
Similarly I can read English more easily than Spanish, but that doesn't say anything about the intrinsic readability of those languages either. (Spanish is actually probably the "more readable" language of the two in terms of simplicity and consistency, but I still can't read it). I'm learning Japanese right now and having a heck of a hard time, but native Japanese speakers say the same about English.
If you spent most of your life reading Java, of course things that look like Java will be easier to read than things that don't. Until you've spent as much time looking at Lispy languages as looking at C-like languages, this will probably remain true.
To understand a language, among other things you have to be familiar with:
syntax ([vector] vs. (list), hyphens-in-names)
vocabulary (what does reductions mean? How/where can you look it up?)
evaluation rules (does treating functions as objects work? It's an error in most languages.)
idioms, like (map first (some set of reductions with extra accumulated values))
All of these take time and practice and repetition to learn and internalize. But if you spend the next 6 months reading and writing lots of Clojure, not only will you be able to understand that Clojure code 6 months from now, you'll probably understand it better than you do now, and maybe even be able to simplify it. How about this:
(use 'clojure.contrib.seq-utils) ;;'

(defn positions [coll]
  (mapcat #(repeat (count %) (inc (ffirst %)))
          (partition-by second (indexed coll))))
Looking at Clojure code I wrote a year ago, I'm horrified at how bad it is, but I can read it OK. (Not saying your Clojure code is horrible; I had no trouble reading it at all, and I'm no guru.)
I agree with Timothy: you introduce too many abstractions. I reworked your code and ended up with:
(defn positions [coll]
  (map first
       (reductions (fn [[_ prev-score :as prev] [_ score :as curr]]
                     (if (= prev-score score) prev curr))
                   (map vector (iterate inc 1) coll))))
About your code,
(defn use-prev [[a b]] (= a b))
(defn pairs [coll] (partition 2 1 coll))
(map use-prev (pairs coll))
can be simply refactored as:
(map = coll (rest coll))
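A quick REPL illustration of that refactoring, comparing each element with its successor (hypothetical scores):

(map = [36 36 34] (rest [36 36 34]))
;=> (true false)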
edit: may not be relevant anymore.
The Clojure one is convoluted to me. It contains more abstractions which need to be understood. This is the price of using higher-order functions: you have to know what they mean. So in an isolated case, imperative requires less knowledge. But the power of abstractions is in their means of combination. Every imperative loop must be read and understood, whereas sequence abstractions allow you to remove the complexity of a loop and combine powerful operations.
I would further argue that the Groovy version is at least partially functional, as it uses collect, which is really map, a higher-order function. It has some state in it also.
Here is how I would write the Clojure version:
(defn positions2 [coll]
  (let [current (atom 1)
        if-same #(if (= %1 %2) @current (reset! current (inc %3)))]
    (map if-same (cons (first coll) coll) coll (range (count coll)))))
This is quite similar to the Groovy version in that it uses a mutable "current", but differs in that it doesn't have a next/prev variable - instead using immutable sequences for those. As Brian eloquently put it, readability is not intrinsic. This version is my preference for this particular case, and seems to sit somewhere in the middle.
The Clojure one is more convoluted at first glance, though it may be more elegant.
OO is the result of trying to make languages more "relatable" at a higher level.
Functional languages seem to have a more "algorithmic" (primitive/elementary) feel to them.
That's just what I feel at the moment.
Maybe that will change when I have more experience working with Clojure.
I'm afraid that we are descending into the game of which language can be the most concise or solve a problem in the fewest lines of code.
The issue is two-fold for me:
How easy is it, at first glance, to get a feel for what the code is doing?
This is important for code maintainers.
How easy is it to guess at the logic behind the code?
Is it too verbose/long-winded? Too terse?
"Make everything as simple as possible, but not simpler."
Albert Einstein
I too am learning Clojure and loving it. But at this stage of my development, the Groovy version was easier to understand. What I like about Clojure though is reading the code and having the "Aha!" experience when you finally "get" what is going on. What I really enjoy is the similar experience that happens a few minutes later when you realize all of the ways the code could be applied to other types of data with no changes to the code. I've lost count of the number of times I've worked through some numerical code in Clojure and then, a little while later, thought of how that same code could be used with strings, symbols, widgets, ...
The analogy I use is about learning colors. Remember when you were introduced to the color red? You understood it pretty quickly -- there's all this red stuff in the world. Then you heard the term magenta and were lost for a while. But again, after a little more exposure, you understood the concept and had a much more specific way to describe a particular color. You have to internalize the concept, hold a bit more information in your head, but you end up with something more powerful and concise.
Groovy supports various styles of solving this problem too:
coll.groupBy{it}.inject([]){ c, n -> c + [c.size() + 1] * n.value.size() }
definitely not refactored to be pretty but not too hard to understand.
I know this is not an answer to the question, but I will be able "understand" the code much better if there are tests, such as:
assert positions([1]) == [1]
assert positions([2, 1]) == [1, 2]
assert positions([2, 2, 1]) == [1, 1, 3]
assert positions([3, 2, 1]) == [1, 2, 3]
assert positions([2, 2, 2, 1]) == [1, 1, 1, 4]
That will tell me, one year from now, what the code is expected to do. Much better than any excellent version of the code I have seen here.
Am I really off topic?
The other thing is, I think "readability" depends on the context. It depends who will maintain the code. For example, in order to maintain the "functional" version of the Groovy code (however brilliant), it will take not only Groovy programmers, but functional Groovy programmers...
The other, more relevant, example is: if a few lines of code make it easier to understand for "beginner" Clojure programmers, then the code will overall be more readable because it will be understood by a larger community: no need to have studied Clojure for three years to be able to grasp the code and make edits to it.
