Understanding trains with literal values in J

I'm sure this is obvious but I'm a little unclear about it. Suppose I wanted to make a function do something like f(x) = 3x+1. Knowing the rule for forks, I expect to see something like this: [: 1&+ 3&* which is not that beautiful to me, but I guess is nicer looking than (1&+) @: (3&*) with the extra parentheses. Instead, if I query with 13 : I get this:
13 : '1+3*y'
1 + 3 * ]
Way more beautiful, but I don't understand how it is possible. ] is the identity function, * and + are doing their usual thing, but how are the literals working here? Why is J not attempting to "call" 1 and 3 with arguments as if they are functions? I notice that this continues to do the right thing if I replace any of the constants with [ or ], so I think it is interpreting this as a train of some kind, but I'm not sure.

When J was first described, forks were all verbs (V V V), but then it was decided to let nouns be in the left tine position and return their face value. So (N V V) is seen as a fork as well. In some older code you can see the left tine of the fork show up as a 'verbified' noun such as 1: or 'a'"_ which act as verbs that return their face value when given any argument.
The (N V V) configuration is described in the dictionary: "The train N g h (a noun followed by two verbs) is equivalent to N"_ g h." http://www.jsoftware.com/help/dictionary/dictf.htm
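For example, both the noun fork and an explicitly 'verbified' spelling of it compute 3*10+1:
(1 + 3 * ]) 10
31
(1"_ + 3"_ * ]) 10
31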

Creating a random float array of shape y in J

I am trying to create a random float array of shape y, and this is what I have right now:
input_dim =: 2
hidden_dim =: 16
0 ?#$ ~ (input_dim, hidden_dim)
0.838135 0.96131 0.766721 0.420625 0.640265 0.683779 0.683311 0.427981 0.281479 0.305607 0.385446 0.898389 0.24596 0.452391 0.739534 0.973384
0.914155 0.172582 0.146184 0.624908 0.333564 0.132774 0.475515 0.802788 0.277571 0.146896 0.40596 0.735201 0.943969 0.259493 0.442858 0.374871
It seems like this code returns exactly what I want, so I tried to make a function like the one below:
rand =: 0 ?#$ ~
but rand (input_dim, hidden_dim) gives me a syntax error...
I think I am missing one very important part, but I am not sure what that is.
Any advice would be appreciated!
The only thing missing from your verb is ].
That is:
rand =: 0 ?#$~ ]
rand 2 3
0.891663 0.888594 0.716629
0.9962 0.477721 0.946355
Potentially your confusion arose because you wanted to create a fork of the form (noun verb verb). However, ~ is an adverb, and so it combines with the verb to its left to create a new verb (in your case ?#$~). Your rand therefore had the form (0 ?#$~), i.e. (noun verb), which J does not recognise - hence the syntax error.
It makes sense to use the combination ?#$ when possible because it is supported by special code and does not actually build the intermediate array x $ y.
Without the argument, the syntax of 0 ?#$ ~ is ambiguous and the interpreter misclassifies the parenthesization (or, more accurately, the correct parenthesization is not the one you think it is).
The easiest way around this is to define rand as:
rand =: 3 :'0 ?#$ ~ y'
Of course, any other way of removing the syntactic ambiguity will also work:
rand =: [: ? 0 $~ ]
rand =: ?#(0$~])
rand =: ?#(0&($~))
...
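Whichever variant you choose, a quick sanity check (shown here with the capped-fork form) confirms that the verb builds an array of the requested shape:
rand =: [: ? 0 $~ ]
$ rand 2 16
2 16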

Sequence of legal pairs of parentheses using recursion

I have a problem to solve using recursion in Python.
I'm simply bad at recursion and don't know how to start, so please guide me.
We say that a string contains n legal pairs of parentheses if the string consists only of the characters '(' and ')' and the sequence could appear in a mathematical formula (that is, every opened parenthesis is eventually closed, and no parenthesis is closed before it is opened). More precisely: in every prefix of the string the number of '(' is greater than or equal to the number of ')', and both characters occur equally often in the whole string. Implement a function that receives a positive integer n and returns a list containing every legal string of n pairs of parentheses.
I have tried to start at least by thinking of a base case, but what is my base case here?
I tried to think of a base case where I am given the minimal n, which is 1, and then I think I have to return a list ['(', ')']. But even doing that I am having difficulty...
def parentheses(n):
    if n == 1:
        return combine_parent(n)

def combine_parent(n):
    parenth_lst = []
    for i in range(n):
        parenth_lst +=
Please explain to me how to approach solving problems like this recursively.
Thank you!
Maybe it's helpful to look at a simple case of the problem:
n = 2
(())
()()
So we start with n=2: we produce a sequence of n '(' characters followed by n ')' characters and put that in the list. Then we recursively do the same with n-1. When we reach 1 we have hit the base case, which is to return the string "()" repeated n times (not 1, but the original n=2).
n = 3
((()))
(())()
()()()
Same pattern for n=3.
The above examples are helpful to understand how the problem can be solved recursively.
def legal_parentheses(n, nn=None):
    if nn == 1:
        return ["()" * n]
    else:
        if not nn:
            nn = n
        # This will produce nn ( followed by nn ) (i.e. nn=2 -> (()) )
        string = "".join(["(" * nn, ")" * nn])
        if nn < n:
            # Then here we want to produce () n-nn times.
            string += "()" * (n - nn)
        return [string] + legal_parentheses(n, nn - 1)

print(legal_parentheses(3))
print(legal_parentheses(4))
print(legal_parentheses(5))
For n = 3:
['((()))', '(())()', '()()()']
For n = 4:
['(((())))', '((()))()', '(())()()', '()()()()']
For n = 5:
['((((()))))', '(((())))()', '((()))()()', '(())()()()', '()()()()()']
This is one way of solving the problem.
The way to think about solving a problem recursively, in my opinion, is to first pick the simplest example of your problem (in your case, n=2) and then write down what you expect as a result. In this case, you are expecting the following output:
"(())", "()()"
Now, you are trying to find a strategy to break down the problem such that you can produce each of the strings. I start by thinking of the base case. The trivial part of the result is ()(): such an element is just () repeated n times. If n=2 I should expect ()(), when n=3 I should expect ()()(), and there should be only one such element in the sequence (so it should be produced only once), hence it becomes the base case. The question is how we calculate the (()) part of the result. The pattern shows that we just have to put n '(' followed by n ')' -> (()) for n=2. This looks like a good strategy. Now you need to think about a slightly harder instance and see if the strategy still holds.
So let's think of n=3. What do we expect as a result?
'((()))', '(())()', '()()()'
Ok, we see that the base case should still produce the ()()() part, and the first step should still produce the ((())) part. What about the (())() part? It looks like we need a slightly different approach: we need to generate n ( followed by n ), then n-1 ( followed by n-1 ), and so on, until we reach the base case n=1, where we just produce the () part n times. But if we were to call the function each time with n-1, then by the time we reached the base case we would have no clue what the original n was, and hence we could not produce the () part, as we would not know how many repetitions we want (if the original n was 3 we need ()()(), but if n itself shrinks on every call, that information is gone). Hence, in my solution, I introduce a second variable nn that is reduced on each call, while n is left unmodified so we always know the original value.
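As an aside, this strategy enumerates one string per nesting level, which is a subset of all legal strings: for n=3 it yields 3 of the 5 possible strings, missing (()()) and ()(()). The standard fully recursive enumeration tracks how many opening and closing parentheses remain unused; here is a minimal sketch (the helper all_legal_parentheses and its parameter names are mine, not part of the answer above):

def all_legal_parentheses(n, opens=None, closes=None, prefix=""):
    # opens/closes count how many '(' and ')' we may still emit
    if opens is None:
        opens, closes = n, n
    if opens == 0 and closes == 0:
        return [prefix]
    result = []
    if opens > 0:  # we may always open another parenthesis
        result += all_legal_parentheses(n, opens - 1, closes, prefix + "(")
    if closes > opens:  # we may only close one that is already open
        result += all_legal_parentheses(n, opens, closes - 1, prefix + ")")
    return result

print(all_legal_parentheses(3))
# ['((()))', '(()())', '(())()', '()(())', '()()()']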

Essence of Laziness. Haskell [closed]

Closed. This question needs to be more focused. It is not currently accepting answers. Closed 6 years ago.
I have been learning Haskell for a few days, and laziness keeps coming up as a buzzword. Because I am not familiar with laziness (I have been working mainly with non-functional languages), it is not an easy concept for me.
So, I am asking for any exercise / example which shows me what laziness is in fact.
Thanks in advance ;)
In Haskell you can create an infinite list. For instance, all natural numbers:
[1,2..]
If Haskell loaded all the items in memory at once that wouldn't be possible. To do so you would need infinite memory.
Laziness allows you to get the numbers as you need them.
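For example, take forces only as many elements as it needs, so this terminates immediately even though the list is conceptually infinite:

take 5 [1,2..]   -- [1,2,3,4,5]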
Here's something interesting: dynamic programming, the bane of every intro. algorithms student, becomes simple and natural when written in a lazy and functional language. Take the example of string edit distance. This is the problem of measuring how similar two DNA strands are or how many bytes changed between two releases of a binary executable or just how 'different' two strings are. The dynamic programming algorithm, expressed mathematically, is simple:
let:
• d_{i,j} be the edit distance between the first i characters of the first string (which has length m) and the first j characters of the second string (which has length n)
• a_i be the i-th character of the first string
• b_j be the j-th character of the second string
define:
d_{i,0} = i   (0 <= i <= m)
d_{0,j} = j   (0 <= j <= n)
d_{i,j} = d_{i-1,j-1}             if a_i == b_j
d_{i,j} = min { d_{i-1,j}   + 1   (delete)    if a_i != b_j
              , d_{i,j-1}   + 1   (insert)
              , d_{i-1,j-1} + 1   (modify)
              }
return d_{m,n}
And the algorithm, expressed in Haskell, follows the same shape:
import Data.Array ((!))
import qualified Data.Array as Array

distance a b = d m n
  where (m, n) = (length a, length b)
        a' = Array.listArray (1, m) a
        b' = Array.listArray (1, n) b
        -- d i j follows the recurrence above...
        d i 0 = i
        d 0 j = j
        d i j
          | a' ! i == b' ! j = ds ! (i - 1, j - 1)
          | otherwise = minimum [ ds ! (i - 1, j) + 1
                                , ds ! (i, j - 1) + 1
                                , ds ! (i - 1, j - 1) + 1
                                ]
        -- ...but reads its recursive calls back out of the lazy array ds
        ds = Array.listArray bounds
               [d i j | (i, j) <- Array.range bounds]
        bounds = ((0, 0), (m, n))
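For example (with the imports above in scope), the classic test case evaluates as expected:

distance "kitten" "sitting"   -- 3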
In a strict language we wouldn't be able to define it so straightforwardly because the cells of the array would be strictly evaluated. In Haskell we're able to have the definition of each cell reference the definitions of other cells because Haskell is lazy – the definitions are only evaluated at the very end when d m n asks the array for the value of the last cell. A lazy language lets us set up a graph of standing dominoes; it's only when we ask for a value that we need to compute the value, which topples the first domino, which topples all the other dominoes. (In a strict language, we would have to set up an array of closures, doing the work that the Haskell compiler does for us automatically. It's easy to transform implementations between strict and lazy languages; it's all a matter of which language expresses which idea better.)
The blog post does a much better job of explaining all this.
So, I am asking for any exercise / example which shows me what laziness is in fact.
Click on Lazy on haskell.org to get the canonical example. There are many other examples like it that illustrate the concept of delayed evaluation, which benefits from not executing some parts of the program logic. Lazy is certainly not slow; it is simply the opposite of the eager evaluation common to most imperative programming languages.
Laziness is a consequence of non-strict function evaluation. Consider the "infinite" list of 1s:
ones = 1:ones
At the time of definition, the (:) function isn't evaluated; ones is just a promise to do so when it is necessary. Such a time would be when you pattern match:
myHead :: [a] -> a
myHead (x:rest) = x
When myHead ones is called, x and rest are needed, but the pattern match against 1:ones simply binds x to 1 and rest to ones; we don't need to evaluate ones any further at this time, so we don't.
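So myHead ones returns 1 immediately: the pattern match forces only the first (:) cell, and the infinite tail is never demanded.

myHead ones   -- 1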
The syntax for infinite lists, using the .. "operator" for arithmetic sequences, is sugar for calls to enumFrom and enumFromThen. That is
-- An infinite list of ones
ones = [1,1..] -- enumFromThen 1 1
-- The natural numbers
nats = [1..] -- enumFrom 1
so again, laziness just comes from the non-strict evaluation of enumFrom.
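You can check this against the desugared calls directly; take demands only three steps of the unfolding either way:

take 3 (enumFrom 1)   -- [1,2,3]
take 3 nats           -- [1,2,3], the same list via the sugar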
Unlike with other languages, Haskell decouples the creation and definition of an object.... You can easily watch this in action using Debug.Trace.
You can define a variable like this
aValue = 100
(the value on the right hand side could include a complicated evaluation, but let's keep it simple)
To see if this code ever gets called, you can wrap the expression in Debug.Trace.trace like this
import Debug.Trace
aValue = trace "evaluating aValue" 100
Note that this doesn't change the definition of aValue, it just forces the program to output "evaluating aValue" whenever this expression is actually created at runtime.
(Also note that trace is considered unsafe for production code; it should only be used for debugging.)
Now, try two experiments.... Write two different mains
main = putStrLn $ "The value of aValue is " ++ show aValue
and
main = putStrLn "'sup"
When run, you will see that the first program actually creates aValue (you will see the "evaluating aValue" message), while the second does not.
This is the idea of laziness.... You can put as many definitions in a program as you want, but only those that are used will be actually created at runtime.
The real use of this can be seen with objects of infinite size. Many lists, trees, etc. have an infinite number of elements. Your program will use only some finite number of values, but you don't want to muddy the definition of the object with this messy fact. Take for instance the infinite lists given in other answers here....
[1..] -- = [1,2,3,4,....]
You can again see laziness in action here using trace, although you will have to write out a variant of [1..] in an expanded form to do this.
f :: Int -> [Int]
f x = trace ("creating " ++ show x) (x : f (x+1))  -- remember, the trace part doesn't change the expression; it is just used for debugging
Now you will see that only the elements you use are created.
main = putStrLn $ "the list is " ++ show (take 4 $ f 1)
yields
creating 1
creating 2
creating 3
creating 4
the list is [1,2,3,4]
and
main = putStrLn "yo"
will not show any item being created.

G-machine, (non-)strict contexts - why case expressions need special treatment

I'm currently reading Implementing functional languages: a tutorial by SPJ and the (sub)chapter I'll be referring to in this question is 3.8.7 (page 136).
The first remark there is that a reader following the tutorial has not yet implemented C scheme compilation (that is, of expressions appearing in non-strict contexts) of ECase expressions.
The solution proposed is to transform a Core program so that ECase expressions simply never appear in non-strict contexts. Specifically, each such occurrence creates a new supercombinator with exactly one variable whose body corresponds to the original ECase expression, and the occurrence itself is replaced with a call to that supercombinator.
Below I present a (slightly modified) example of such a transformation from [1]:
t a b = Pack{2,1} ;
f x = Pack{2,2} (case t x 7 6 of
                   <1> -> 1;
                   <2> -> 2) Pack{1,0} ;
main = f 3

== transformed into ==>

t a b = Pack{2,1} ;
f x = Pack{2,2} ($Case1 (t x 7 6)) Pack{1,0} ;
$Case1 x = case x of
             <1> -> 1;
             <2> -> 2 ;
main = f 3
I implemented this solution and it works like a charm; the output is Pack{2,2} 2 Pack{1,0}.
However, what I don't understand is: why all that trouble? I hope it's not just me, but my first thought for solving the problem was to just implement compilation of ECase expressions in the C scheme. And I did it by mimicking the rule for compilation in the E scheme (page 134 in [1], but I present that rule here for completeness): so I used
E[[case e of alts]] p = E[[e]] p ++ [Casejump D[[alts]] p]
and wrote
C[[case e of alts]] p = C[[e]] p ++ [Eval] ++ [Casejump D[[alts]] p]
I added [Eval] because Casejump needs an argument on top of the stack in weak head normal form (WHNF) and C scheme doesn't guarantee that, as opposed to E scheme.
But then the output changes to the enigmatic Pack{2,2} 2 6.
The same applies when I use the same rule as for the E scheme, i.e.
C[[case e of alts]] p = E[[e]] p ++ [Casejump D[[alts]] p]
So I guess that my "obvious" solution is inherently wrong - and I can see that from the outputs. But I'm having trouble stating a formal argument as to why that approach was bound to fail.
Can someone provide me with such an argument/proof, or some intuition as to why the naive approach doesn't work?
The purpose of the C scheme is to not perform any computation, but just to delay everything until an EVAL happens (which may or may not ever happen). What are you doing in your proposed code generation for case? You're calling EVAL! And the whole purpose of C is to not call EVAL on anything, so you've now evaluated something prematurely.
The only way you could generate code directly for case in the C scheme would be to add some new instruction to perform the case analysis once it's evaluated.
But we (Thomas Johnsson and I) decided it was simpler to just lift out such expressions. The exact historical details are lost in time though. :)

Why doesn't my Haskell function accept negative numbers?

I am fairly new to Haskell but do get most of the basics. However there is one thing that I just cannot figure out. Consider my example below:
example :: Int -> Int
example (n+1) = .....
The (n+1) part of this example somehow prevents the input of negative numbers, but I cannot understand how. For example, if the input were (-5), I would expect n to just be (-6), since (-6) + 1 is (-5). The output when testing is as follows:
Program error: pattern match failure: example (-5)
Can anyone explain to me why this does not accept negative numbers?
That's just how n+k patterns are defined to work:
Matching an n+k pattern (where n is a variable and k is a positive integer literal) against a value v succeeds if v >= k, resulting in the binding of n to v - k, and fails otherwise.
The point of n+k patterns is to perform induction, so you need to complete the example with a base case (k-1, or 0 in this case), and decide whether a parameter less than that would be an error or not. In your case, matching -5 against (n+1) would require -5 >= 1, which fails; hence the pattern-match failure. Like this:
example (n+1) = ...
example 0 = ...
The semantics that you're essentially asking for would be fairly pointless and redundant; you could just say
example n = let n' = n-1 in ...
to achieve the same effect. The point of a pattern is to fail sometimes.
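For completeness, a minimal sketch of the inductive style that n+k patterns were meant for; note that n+k patterns were removed in Haskell 2010, so on modern GHC this needs the NPlusKPatterns extension:

{-# LANGUAGE NPlusKPatterns #-}

-- Factorial by induction: (n+1) binds n to the predecessor.
-- A negative argument falls through both patterns, giving the same
-- pattern-match failure discussed above.
factorial :: Integer -> Integer
factorial 0 = 1
factorial (n+1) = (n+1) * factorial n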
