Is there a programming language that reads right to left? [closed] - programming-languages

I'm quite sure that modern, industrial-strength programming languages all read left to right.
But out of the whole ecosystem of programming languages, are there examples of languages that read right to left?
Apart from better alignment with human languages that are written this way, would there be any advantage in such a language?
Examples:
int a = 5; // read left to right
tfel ot thgir daer // ;5 = a tni

APL is a language which overcomes the inconsistencies of traditional mathematical notation (and its difficulty in typing / parsing / executing) by unifying certain constructs, such as:
multi-dimensional array reduction (where the reduce construct was first named as such)
+⌿2 4 6 ⍝ Sum of a list
12
×⌿2 4 6 ⍝ Product of a list
48
2 3⍴⍳6 ⍝ A 2 row, 3 column matrix
1 2 3
4 5 6
+⌿2 3⍴⍳6 ⍝ Sum down the columns
5 7 9
or the inner product:
2 3 4+.×1 0 ¯1 ⍝ Vector inner (dot) product
¯2
+/2 3 4×1 0 ¯1 ⍝ Equivalent expression for vectors
¯2
(2 2⍴1 0 1)(2 2⍴2 5 4 3) ⍝ Two matrices
┌───┬───┐
│1 0│2 5│
│1 1│4 3│
└───┴───┘
(2 2⍴1 0 1)+.×(2 2⍴2 5 4 3) ⍝ Matrix inner product
2 5
6 8
1 2 3 ∧.= 1 2 2 ⍝ Are all elements equal?
0
Since it is inspired by traditional mathematics, it follows the f(g(x)) convention:
f g x means apply g to x and then apply f to the result of that, hence right to left. f(g(x)) is also valid APL.
The easiest way to demonstrate how this affects things is:
84 - 12 - 1 - 13 - 28 - 9 - 6 - 15
70
which gives 0 in traditional mathematical execution order. With parentheses:
84 - (12 - (1 - (13 - (28 - (9 - (6 - 15)))))) ⍝ APL
70
(((((((84 - 12) - 1) - 13) - 28) - 9) - 6) - 15) ⍝ Traditional maths
0
APL is a general-purpose programming language, used for applications from finance to 3D graphics. You can see more comparisons with mathematical notation on the APL Wiki: https://aplwiki.com/wiki/Comparison_with_traditional_mathematics
And having said all that, APL is still generally "read" (by humans) basically left to right, even though it is parsed and executed "right-to-left", and even then it's more of "functions have a long right scope and short left scope". For example, using the statement separator ⋄ to have multiple statements in a single line:
2+2 ⋄ 3 4 5 ⋄ ⎕A⍳'APL'
4
3 4 5
1 16 12
The left-most statements are executed first, but each individual statement is parsed as described above.

Befunge is such a language :)
"!dlrow olleH">:#,_#
is a standard Hello World program; however, this one actually evaluates left to right.
<v_@#:<"Hello world!"
 >,>>>^
This one evaluates right to left, with some top-to-bottom and bottom-to-top movement too. You can run this program at https://www.bedroomlan.org/tools/befunge-playground/#prog=hello,mode=edit .
Unefunge is a variant without up-down movement, but it is still Turing-complete. If you want strictly right-to-left programs, you can write in Unefunge as long as your program starts with < to send the instruction pointer in the right direction.
For more information, Stack Overflow has a befunge tag.

There was a language for children called LogoWriter, which was later translated into Hebrew as תמלילוגו; since Hebrew is written right to left, so was the translated language.
תמלילוגו was taught as part of the CS syllabus in Israeli high schools in 2008:
http://www.csit.org.il/TSTBAG/2008/899122.pdf

A bit late to the party, but there is now Avsha, a whole new language based on Hebrew, or ChavaScript, which uses Hebrew words and translates them to JS in English (including variable names, which are translated when found in the dictionary, or otherwise transliterated letter by letter into English).

tfel ot thgir daer // ;5 = a tni
AFAIK, there are no such languages, at least not serious ones (thanks to tohava for the interesting example), simply because it would be mostly pointless.
You can pick any existing language, write a one-liner script that reverses every line, and you have a converter (and back-converter) for this "new language". You can incorporate it into your build environment: transform before compilation, reverse the lines of the compiler error messages, and so on...
But really, what's the point? The only people who would benefit from this are the ones who use a right-to-left script for their native language (and AFAIK, none of those scripts use the Latin alphabet). They have to learn something completely new anyway, and it makes a lot more sense to do what the rest of the world does. All the blogs, tutorials, Stack Overflow, everything is written in "left-to-right" programming languages.
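For illustration only (this is my own sketch, not part of the original answer, and obviously not a one-liner), such a converter could be as simple as a filter that reverses every line of its input; running it twice gives the original text back:

#include <stdio.h>
#include <string.h>

/* Toy "right-to-left language" converter: echo each input line reversed.
   Applying it twice restores the original file. */
int main(void) {
    char line[4096];

    while (fgets(line, sizeof line, stdin) != NULL) {
        size_t len = strlen(line);

        /* Leave the newline where it is; reverse only the visible text. */
        if (len > 0 && line[len - 1] == '\n')
            len--;

        for (size_t i = len; i > 0; i--)
            putchar(line[i - 1]);
        putchar('\n');
    }
    return 0;
}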

Related

In Haskell (max 3 5) * 2 => 10 [closed]

This is my first day with Haskell. Can you please explain how this works?
I assume the compiler should first find the max value among 3 and 5 and then multiply the result by 2.
Whereas Haskell, it seems, multiplies 5*2 and compares the result with 3 to find the max.
Syntax
max (3 5) * 2
This doesn't mean what you seem to think it means. In the above, the function 3 is applied to the argument 5. Consider instead:
(max 3 5) * 2
Or equivalently:
max 3 5 * 2
Terminology
Let's also keep the terminology straight: the compiler doesn't perform any evaluation, it just produces binaries.
Answer
The first thing to consider is the function *. There is no guaranteed order of evaluation here. To evaluate max 3 5, the max function is applied to its two arguments and the result is 5. The second argument, 2, is already in normal form. So now we just have 5*2, which produces 10.
You have your parentheses inexplicably backwards.
Haskell multiplies 5*2 and compare the result with 3 and finding the max.
so you want
max 3 (5 * 2)
Your code as written tries to coerce the literal 3 into a function Num a => a -> b and then apply it to 5. It can't do that, of course, so it stops.

Can dynamic programming problems always be represented as DAG

I am trying to draw a DAG for Longest Increasing Subsequence {3,2,6,4,5,1} but cannot break this into a DAG structure.
Is it possible to represent this in a tree like structure?
As far as I know, the answer to the actual question in the title is, "No, not all DP programs can be reduced to DAGs."
Reducing a DP to a DAG is one of my favorite tricks, and when it works, it often gives me key insights into the problem, so I find it always worth trying. But I have encountered some that seem to require at least hypergraphs, and this paper and related research seem to bear that out.
This might be an appropriate question for the CS Stack Exchange, meaning the abstract question about graph reduction, not the specific question about longest increasing subsequence.
Assuming the following sequence, S = {3,2,6,4,5,1,7,8}, and R = the root node, your tree or DAG will look like:
R
3 2 4 1
6 5 7
8
And your result is the longest path (from root to the node with the maximum depth) in the tree (result = {r,1,7,8}).
The result above shows the longest increasing sequence in S. The tree for the longest increasing subsequence in S looks as follows:
R
3 2 6 4 5 1 7 8
6 4 7 5 7 7 8
7 5 8 7 8 8
8 7 8
8
And again the result is the longest path (from root to the node with the maximum depth) in the tree (result = {r,2,4,5,7,8}).
The answer to this question should be YES.
I'd like to cite the following from here: The Soul of Dynamic Programming Formulations and Implementations.
A DP must have a corresponding DAG (most of the time implicit), otherwise we cannot find a valid order for computation.
For your case, Longest Increasing Subsequence can be represented as a DAG in which each element of the sequence is a node and there is an edge from each element to every later, larger element. The task then amounts to finding the longest path in that DAG. For more information please refer to section 6.2 of Algorithms, Dynamic programming.
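A minimal sketch of that view (my own illustration, in C, not from the cited book): the DP value len[j] is just the length of the longest path ending at node j, computed in left-to-right (topological) order.

#include <stdio.h>

/* Longest increasing subsequence as a longest path in an implicit DAG:
   node i has an edge to node j whenever i < j and a[i] < a[j].
   len[j] = length of the longest path ending at node j. */
int main(void) {
    int a[] = {3, 2, 6, 4, 5, 1};   /* the sequence from the question */
    int n = (int)(sizeof a / sizeof a[0]);
    int len[6];
    int best = 0;

    for (int j = 0; j < n; j++) {
        len[j] = 1;                              /* the path {a[j]} alone */
        for (int i = 0; i < j; i++)              /* every edge i -> j */
            if (a[i] < a[j] && len[i] + 1 > len[j])
                len[j] = len[i] + 1;
        if (len[j] > best)
            best = len[j];
    }

    printf("length of LIS = %d\n", best);        /* prints 3 */
    return 0;
}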
Yes, it is possible to represent the Longest Increasing Subsequence problem as a DAG.
The solution is to find the longest path (a path that contains the maximum number of nodes) from every node to the last node reachable from that particular node.
Here, S is the starting node, E is the ending node, and C is the count of nodes on the path from S to E.
S E C
3 5 3
2 5 3
6 6 1
4 5 2
5 5 1
1 1 1
So the answer is 3, and it is very easy to generate the solution itself, as we only have to traverse the nodes.
I think it might help you.
Reference: https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-006-introduction-to-algorithms-fall-2011/lecture-videos/lecture-20-dynamic-programming-ii-text-justification-blackjack/

Why do prevailing programming languages like C use array starting from 0? [duplicate]

Possible duplicate: Why does the indexing start with zero in 'C'?
Why do prevailing programming languages like C use arrays starting from 0? I know some programming languages like Pascal have arrays starting from 1. Are there any good reasons for doing so? Or is it merely a historical reason?
Because you access array elements by their offset relative to the beginning of the array.
The first element is at offset 0.
Later, more complex array data structures appeared (such as SAFEARRAY) that allowed an arbitrary lower bound.
In C, the name of an array is essentially a pointer, a reference to a memory location, and so the expression array[n] refers to a memory location n elements away from the starting element. This means that the index is used as an offset. The first element of the array is exactly contained in the memory location that array refers to (0 elements away), so it should be denoted as array[0]. Most programming languages have been designed this way, so indexing from 0 is pretty much inherent to the language.
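As a small illustration of that offset view (my own sketch, not part of the original answer), the following shows that array[i] and *(array + i) denote the same element, with the first element at offset 0:

#include <stdio.h>

int main(void) {
    double arr[3] = {1.5, 2.5, 3.5};

    /* arr[i] is defined as *(arr + i): the index is an offset, in elements,
       from the start of the array, so the first element sits at offset 0. */
    for (int i = 0; i < 3; i++)
        printf("arr[%d] = %.1f   *(arr + %d) = %.1f   byte offset = %zu\n",
               i, arr[i], i, *(arr + i),
               (size_t)((char *)&arr[i] - (char *)arr));

    return 0;
}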
However, Dijkstra explains why we should index from 0. This is a problem of how to denote a subsequence of natural numbers, say, for example, 1, 2, 3, ..., 10. We have four solutions available:
a. 0 < i < 11
b. 1<= i < 11
c. 0 < i <= 10
d. 1 <= i <= 10
Dijkstra argues that the proper notation should be able to denote naturally the two following cases:
The subsequence includes the smallest natural number, 0
The subsequence is empty
Requirement 1 leaves out a. and c., since they would have the form -1 < i, which uses a number not lying in the natural number set (Dijkstra says this is ugly). So we are left with b. and d. Now requirement 2 leaves out d., since for a set including 0 that is shrunk to the empty one, d. takes the form 0 <= i <= -1, which is a little messed up! Subtracting the bounds in b. also gives the sequence length, which is another plus. Hence we are left with b., which is by far the most widely used notation in programming now.
Now you know. So, remember and take pride in the fact that each time you write something like
for( i=0; i<N; i++ ) {
sum += a[i];
}
you are not just following the rules of language notation. You are also promoting mathematical beauty!
(Dijkstra's argument is from his note EWD831, "Why numbering should start at zero".)
In assembly and C, arrays were implemented as memory pointers. There the first element was stored at offset 0 from the pointer.
In C, arrays are tied to pointers. The array index is a number that you add to the pointer to the array's initial element. This is tied to one of the addressing modes of the PDP-11, where you could specify a base address and place an offset to it in a register to simulate an array. By the way, this is the same place ++ and -- came from: the PDP-11 provided so-called auto-increment and auto-decrement addressing modes.
P.S. I think Pascal used 1 by default; generally, you were allowed to specify the range of your array explicitly, so you could start it at -10 and end at +20 if you wanted.
Suppose you can store only two bits. That gives you four combinations:
00 10 01 11
Now, assign integers to those 4 values. Two reasonable mappings are:
00->0
01->1
10->2
11->3
and
11->-2
10->-1
00->0
01->1
(Another idea is to use signed magnitude and use the mapping:
11->-1 10->-0 00->+0 01->+1)
It simply does not make sense to use 00 to represent 1 and use 11 to represent 4. Counting from 0 is natural. Counting from 1 is not.

Programming languages where indexing starts at 1? [duplicate]

The C programming language is known as a zero-indexed array language: the first item in an array is accessed using 0. For example, given double arr[2] = {1.5, 2.5}, the first item in array arr is at position 0, so arr[0] == 1.5. Which programming languages use 1-based indexes?
I've heard that these languages start at 1 instead of 0 for array access: Algol, Matlab, Action!, Pascal, Fortran, Cobol. Is this complete?
Specifically, a 1-based array would access the first item with 1, not zero.
A list can be found on Wikipedia.
ALGOL 68
APL
AWK
CFML
COBOL
Fortran
FoxPro
Julia
Lua
Mathematica
MATLAB
PL/I
Ring
RPG
Sass
Smalltalk
Wolfram Language
XPath/XQuery
Fortran starts at 1. I know that because my Dad used to program Fortran before I was born (I am 33 now) and he really criticizes modern programming languages for starting at 0, saying it's unnatural, not how humans think, unlike maths, and so on.
However, I find things starting at 0 quite natural; my first real programming language was C and *(ptr+n) wouldn't have worked so nicely if n hadn't started at zero!
A pretty big list of languages is on Wikipedia under Comparison of Programming Languages (array) under "Array system cross-reference list" table (Default base index column)
This has a good discussion of 1- vs. 0-indexing and subscripts in general.
To quote from the blog:
EWD831 by E.W. Dijkstra, 1982.
When dealing with a sequence of length N, the elements of which we wish to distinguish by subscript, the next vexing question is what subscript value to assign to its starting element. Adhering to convention a) yields, when starting with subscript 1, the subscript range 1 ≤ i < N+1; starting with 0, however, gives the nicer range 0 ≤ i < N. So let us let our ordinals start at zero: an element's ordinal (subscript) equals the number of elements preceding it in the sequence. And the moral of the story is that we had better regard —after all those centuries!— zero as a most natural number.
Remark: Many programming languages have been designed without due attention to this detail. In FORTRAN subscripts always start at 1; in ALGOL 60 and in PASCAL, convention c) has been adopted; the more recent SASL has fallen back on the FORTRAN convention: a sequence in SASL is at the same time a function on the positive integers. Pity! (End of Remark.)
Fortran, Matlab, Pascal, Algol, Smalltalk, and many many others.
You can do it in Perl
$[ = 1; # set the base array index to 1
You can also make it start with 42 if you feel like that. This also affects string indexes.
Actually using this feature is highly discouraged.
Also in Ada you can define your array indices as required:
A : array(-5..5) of Integer; -- defines an array with 11 elements
B : array(-1..1, -1..1) of Float; -- defines a 3x3 matrix
Someone might argue that user-defined array index ranges will lead to maintenance problems. However, it is normal to write Ada code in a way which does not depend on the array indices. For this purpose, the language provides element attributes, which are automatically defined for all defined types:
A'first -- this has the value -5
A'last -- this has the value +5
A'range -- returns the range -5..+5 which can be used e.g. in for loops
JDBC (not a language, but an API)
String x = resultSet.getString(1); // the first column
Erlang's tuples and lists index starting at 1.
Lua - disappointingly
Found one - Lua (programming language)
Check the Arrays section, which says:
"Lua arrays are 1-based: the first index is 1 rather than 0 as it is for many other programming languages (though an explicit index of 0 is allowed)"
VB Classic, at least through
Option Base 1
Strings in Delphi start at 1.
(Static arrays must have lower bound specified explicitly. Dynamic arrays always start at 0.)
ColdFusion - even though it is Java under the hood
Ada and Pascal.
PL/SQL. An upshot of this is that when using languages that index from 0 and interacting with Oracle, you need to handle the 0/1 conversion yourself for array access by index. In practice, if you use a construct like foreach over rows or access columns by name, it's not much of an issue, but you might want the leftmost column, for example, which will be column 1.
Indexes start at one in CFML.
The entire Wirthian line of languages including Pascal, Object Pascal, Modula-2, Modula-3, Oberon, Oberon-2 and Ada (plus a few others I've probably overlooked) allow arrays to be indexed from whatever point you like including, obviously, 1.
Erlang indexes tuples and arrays from 1.
I think—but am no longer positive—that Algol and PL/1 both index from 1. I'm also pretty sure that Cobol indexes from 1.
Basically most high level programming languages before C indexed from 1 (with assembly languages being a notable exception for obvious reasons – and the reason C indexes from 0) and many languages from outside of the C-dominated hegemony still do so to this day.
There is also Smalltalk
Visual FoxPro, FoxPro and Clipper all use arrays where element 1 is the first element of an array... I assume that is what you mean by 1-indexed.
I see that the knowledge of Fortran here is still at the '66 version.
Fortran allows both the lower and the upper bounds of an array to vary.
Meaning, if you declare an array like:
real, dimension (90) :: x
then 1 will be the lower bound (by default).
If you declare it like
real, dimension(0:89) :: x
then, however, it will have a lower bound of 0.
If on the other hand you declare it like
real, allocatable :: x(:,:)
then you can allocate it to whatever you like. For example
allocate(x(0:np,0:np))
means the array will have the elements
x(0, 0), x(0, 1), x(0, 2), ..., x(0, np)
x(1, 0), x(1, 1), ...
.
.
.
x(np, 0) ...
There are also some more interesting combinations possible:
real, dimension(:, :, 0:) :: d
real, dimension(9, 0:99, -99:99) :: iii
which are left as homework for the interested reader :)
These are just the ones I remembered off the top of my head. Since one of Fortran's main strengths is its array-handling capabilities, it is clear that there are a lot of other ins and outs not mentioned here.
Nobody mentioned XPath.
Mathematica and Maxima, besides other languages already mentioned.
Informix, besides other languages already mentioned.
Basic - not just VB, but all the old 1980s era line numbered versions.
FoxPro used arrays starting at index 1.
dBASE used arrays starting at index 1.
Arrays (Beginning) in dBASE
RPG, including modern RPGLE
Although C is by design 0-indexed, it is possible to arrange for an array in C to be accessed as if it were 1-indexed (or indexed from any other value). Not something you would expect a normal C coder to do often, but it sometimes helps.
Example:
#include <stdio.h>

int main(void) {
    int zero_based[10];
    int *one_based;
    int i;

    /* Point one_based one element before the array, so that one_based[1]
       is zero_based[0]. Note that forming this out-of-bounds pointer is
       technically undefined behaviour in standard C, even though it works
       on typical implementations. */
    one_based = zero_based - 1;

    for (i = 1; i <= 10; i++) one_based[i] = i;
    for (i = 10; i >= 1; i--) printf("one_based[%d] = %d\n", i, one_based[i]);
    return 0;
}

Best strategies for reading J code

I've been using J for a few months now, and I find that reading unfamiliar code (e.g. code that I didn't write myself) is one of the most challenging aspects of the language, particularly when it's written in tacit style. After a while, I came up with this strategy:
1) Copy the code segment into a word document
2) Take each operator from (1) and place it on a separate line, so that it reads vertically
3) Replace each operator with its verbal description in the Vocabulary page
4) Do a rough translation from J syntax into English grammar
5) Use the translation to identify conceptually related components and separate them with line breaks
6) Write a description of what each component from (5) is supposed to do, in plain English prose
7) Write a description of what the whole program is supposed to do, based on (6)
8) Write an explanation of why the code from (1) can be said to represent the design concept from (7).
Although I learn a lot from this process, I find it to be rather arduous and time-consuming -- especially if someone designed their program using a concept I never encountered before. So I wonder: do other people in the J community have favorite ways to figure out obscure code? If so, what are the advantages and disadvantages of these methods?
EDIT:
An example of the sort of code I would need to break down is the following:
binconv =: +/@ ((|.@(2^i.@#@])) * ]) @ ((3&#.)^:_1)
I wrote this one myself, so I happen to know that it takes a numerical input, reinterprets it as a ternary array and interprets the result as the representation of a number in base-2 with at most one duplication. (e.g., binconv 5 = (3^1)+2*(3^0) -> 1 2 -> (2^1)+2*(2^0) = 4.) But if I had stumbled upon it without any prior history or documentation, figuring out that this is what it does would be a nontrivial exercise.
Just wanted to add to Jordan's answer: if you don't have box display turned on, you can format things this way explicitly with 5!:2
f =. <.@-:@#{/:~
5!:2 < 'f'
┌───────────────┬─┬──────┐
│┌─────────┬─┬─┐│{│┌──┬─┐│
││┌──┬─┬──┐│@│#││ ││/:│~││
│││<.│@│-:││ │ ││ │└──┴─┘│
││└──┴─┴──┘│ │ ││ │      │
│└─────────┴─┴─┘│ │      │
└───────────────┴─┴──────┘
There's also a tree display:
5!:4 <'f'
              ┌─ <.
        ┌─ @ ─┴─ -:
  ┌─ @ ─┴─ #
──┼─ {
  └─ ~ ─── /:
See the vocabulary page for 5!: Representation and also 9!: Global Parameters for changing the default.
Also, for what it's worth, my own approach to reading J has been to retype the expression by hand, building it up from right to left, and looking up the pieces as I go, and using identity functions to form temporary trains when I need to.
So for example:
/:~ i.5
0 1 2 3 4
NB. That didn't tell me anything
/:~ 'hello'
ehllo
NB. Okay, so it sorts. Let's try it as a train:
[ { /:~ 'hello'
┌─────┐
│ehllo│
└─────┘
NB. Whoops. I meant a train:
([ { /:~) 'hello'
|domain error
| ([{/:~)'hello'
NB. Not helpful, but the dictionary says
NB. "{" ("From") wants a number on the left.
(0: { /:~) 'hello'
e
(1: { /:~) 'hello'
h
NB. Okay, it's selecting an item from the sorted list.
NB. So f is taking the ( <.@-:@# )th item, whatever that means...
<. -: # 'hello'
2
NB. ??!?....No idea. Let's look up the words in the dictionary.
NB. Okay, so it's the floor (<.) of half (-:) the length (#)
NB. So the whole phrase selects an item halfway through the list.
NB. Let's test to make sure.
f 'radar' NB. should return 'd'
d
NB. Yay!
addendum:
NB. just to be clear:
f 'drara' NB. should also return 'd' because it sorts first
d
Try breaking the verb up into its components first, and then see what they do. And rather than always referring to the vocab, you could simply try out a component on data to see what it does, and see if you can figure it out. To see the structure of the verb, it helps to know what parts of speech you're looking at, and how to identify basic constructions like forks (and of course, in larger tacit constructions, separate by parentheses). Simply typing the verb into the ijx window and pressing enter will break down the structure too, and probably help.
Consider the following simple example: <.@-:@#{/:~
I know that <. -: # { and /: are all verbs, ~ is an adverb, and @ is a conjunction (see the parts of speech link in the vocab). Therefore I can see that this is a fork structure with left verb <.@-:@# , right verb /:~ , and dyad { . This takes some practice to see, but there is an easier way: let J show you the structure by typing it into the ijx window and pressing enter:
<.@-:@#{/:~
+---------------+-+------+
|+---------+-+-+|{|+--+-+|
||+--+-+--+|@|#|| ||/:|~||
|||<.|@|-:|| | || |+--+-+|
||+--+-+--+| | || |      |
|+---------+-+-+| |      |
+---------------+-+------+
Here you can see the structure of the verb (or, you will be able to after you get used to looking at these). Then, if you can't identify the pieces, play with them to see what they do.
10?20
15 10 18 7 17 12 19 16 4 2
/:~ 10?20
1 4 6 7 8 10 11 15 17 19
<.@-:@# 10?20
5
You can break them down further and experiment as needed to figure them out (this little example is a median verb).
J packs a lot of code into a few characters and big tacit verbs can look very intimidating, even to experienced users. Experimenting will be quicker than your documenting method, and you can really learn a lot about J by trying to break down large complex verbs. I think I'd recommend focusing on trying to see the grammatical structure and then figure out the pieces, building it up step by step (since that's how you'll eventually be writing tacit verbs).
(I'm putting this in the answer section instead of editing the question because the question looks long enough as it is.)
I just found an excellent paper on the jsoftware website that works well in combination with Jordan's answer and the method I described in the question. The author makes some pertinent observations:
1) A verb modified by an adverb is a verb.
2) A train of more than three consecutive verbs is a series of forks, which may have a single verb or a hook at the far left-hand side depending on how many verbs there are.
This speeds up the process of translating a tacit expression into English, since it lets you group verbs and adverbs into conceptual units and then use the nested fork structure to quickly determine whether an instance of an operator is monadic or dyadic. Here's an example of a translation I did using the refined method:
d28=: [:+/\{.@],>:@[#(}.-}:)@]%>:@[
[: +/\
{.@] ,
>:@[ #
(}.-}:)@] %
>:@[
cap (plus infix prefix)
(head atop right argument) ravel
(increment atop left argument) tally
(behead minus curtail) atop right argument, divided by
increment atop left argument
the partial sums of the sequence defined by the first item of the right argument, raveled together with (one plus the left argument) copies of (all but the first element) minus (all but the last element) of the right argument, divided by (one plus the left argument).
the partial sums of the sequence defined by starting with the same initial point, and appending consecutive copies of points derived from the right argument by subtracting each predecessor from its successor and dividing the result by the number of copies to be made
Interpolating x-many values between the items of y
I just want to talk about how I read:
<.@-:@#{/:~
First off, I knew that if it was a function, from the command line, it had to be entered (for testing) as
(<.@-:@#{/:~)
Now I looked at the stuff in the parentheses. I saw a /:~, which returns a sorted list of its arguments; { , which selects an item from a list; # , which returns the number of items in a list; -: , half; and <. , floor... and I started to think that it might be median - half of the number of items in the list, rounded down - but how did # get its arguments? I looked at the @ signs and realized that there were three verbs there, so this is a fork. The list comes in at the right and is sorted; then, at the left, the fork passes the list to # to get the number of items, and then takes the floor of half of that. So now we have the execution sequence:
sort, and pass the output to the middle verb as the right argument.
Take the floor of half of the number of elements in the list, and that becomes the left argument of the middle verb.
Do the middle verb.
That is my approach. I agree that sometimes the phrases have too many odd things, and you need to look them up, but I am always figuring this stuff out at the J instant command line.
Personally, I think of J code in terms of what it does -- if I do not have any example arguments, I rapidly get lost. If I do have examples, it's usually easy for me to see what a sub-expression is doing.
And, when it gets hard, that means I need to look up a word in the dictionary, or possibly study its grammar.
Reading through the prescriptions here, I get the idea that this is not too different from how other people work with the language.
Maybe we should call this 'Test Driven Comprehension'?
