Accessing namespace script variable - scope

Consider the following namespace script in Dyalog APL:
:Namespace Test
    x ← 0
    ∇ F
      ##.Test.x ← 1
    ∇
    ∇ G; x
      x ← 0
      F
    ∇
:EndNamespace
If I run Test.G and then Test.x, I get the output zero. How come? How do I set Test.x in Test.F?

Tradfns (traditional functions using ∇ and a header, etc.) use dynamic scoping, which means that they "see" the environment of the place they are called from. (This is in contrast to dfns, which use lexical scoping: they see the environment in which they were defined.) See the documentation for details.
Now, when G calls F, the global x is invisible to F: x is localised in G, and that localisation shadows the global x.
Notice that ##.Test. doesn't change which namespace we're working in. x is still shadowed.
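(For readers more at home in Haskell, which is lexically scoped like dfns, here is a rough analogue of the scoping-rule difference only; the names x, f and g below are made up for this sketch, and it says nothing about the assignment mechanics of ##.Test.x.)

-- Haskell resolves names lexically: f uses the x in scope where f is
-- *defined*, so a caller's local x cannot shadow it.
x :: Int
x = 0

f :: () -> Int
f () = x                  -- refers to the top-level x above

g :: Int
g = let x = 99 in f ()    -- this local x does not affect f

main :: IO ()
main = print g            -- prints 0; under dynamic scoping it would print 99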
If instead you had used dfns, you would see the behaviour you want:
:Namespace Test
    x ← 0
    F←{
        ##.Test.x←1
    }
    G←{
        x←0
        F ⍬
    }
:EndNamespace

Related

Can I use where in Haskell to find function parameter given the function output?

This is my program:
modify :: Integer -> Integer
modify a = a + 100
x = x where modify(x) = 101
In GHCi, this compiles successfully, but when I try to print x the terminal gets stuck. Is it not possible to find an input from a function's output in Haskell?
x = x where modify(x) = 101
is valid syntax but is equivalent to
x = x where f y = 101
where x = x is a recursive definition, which will get stuck in an infinite loop (or generate a <<loop>> exception), and f y = 101 is a definition of a local function, completely unrelated to the modify function defined elsewhere.
If you turn on warnings you should get a message saying "warning: the local definition of modify shadows the outer binding", pointing at the issue.
Further, there is no way to invert a function like you'd like to do. First, the function might not be injective. Second, even if it were, there is no easy way to invert an arbitrary function. We could try all the possible inputs, but that would be extremely inefficient.
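If you really do want to search for an input that produces a given output, the closest you can get is an explicit brute-force search, as sketched below (the helper invertOn is invented for illustration; it only terminates if a preimage exists among the candidates, and it is exactly as inefficient as described above):

modify :: Integer -> Integer
modify a = a + 100

-- Try candidate inputs in order and return the first one that maps to the
-- desired output, if any is found.
invertOn :: [Integer] -> (Integer -> Integer) -> Integer -> Maybe Integer
invertOn candidates f target =
  case [c | c <- candidates, f c == target] of
    (c:_) -> Just c
    []    -> Nothing

main :: IO ()
main = print (invertOn [0 ..] modify 101)   -- Just 1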

What is this expression in Haskell, and how do I interpret it?

I'm learning basic Haskell so I can configure Xmonad, and I ran into this code snippet:
newKeys x = myKeys x `M.union` keys def x
Now I understand what the M.union in backticks is and means. Here's how I'm interpreting it:
newKeys(x) = M.union(myKeys(x),???)
I don't know what to make of the keys def x. Is it like keys(def(x))? Or keys(def,x)? Or is def some sort of other keyword?
It's keys(def,x).
This is basic Haskell syntax for function application: first the function itself, then its arguments separated by spaces. For example:
f x y = x + y
z = f 5 6
-- z = 11
However, it is not clear what def is without larger context.
In response to your comment: no, def couldn't be a function that takes x as argument, and then the result of that is passed to keys. This is because function application is left-associative, which basically means that in any bunch of things separated by spaces, only the first one is the function being applied, and the rest are its arguments. In order to express keys(def(x)), one would have to write keys (def x).
If you want to be super technical, then the right way to think about it is that all functions have exactly one parameter. When we declare a function of two parameters, e.g. f x y = x + y, what we really mean is that it's a function of one parameter, which returns another function, to which we can then pass the remaining parameter. In other words, f 5 6 means (f 5) 6.
This idea is one of the core features of Haskell (and any ML offshoot) syntax. It's so important that it has its own name - "currying" (after Haskell Curry, the mathematician).
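A small self-contained illustration of left-associativity and partial application (add and addFive are made-up names, unrelated to the Xmonad snippet):

add :: Int -> Int -> Int
add x y = x + y

-- Partial application: add 5 is itself a function of one argument.
addFive :: Int -> Int
addFive = add 5

main :: IO ()
main = do
  print (add 5 6)     -- 11; parsed as (add 5) 6
  print (addFive 6)   -- 11 as well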

Trying to understand behavior of python scope and global keyword

def func():
    def nested():
        global x
        x = 1
    x = 3
    nested()
    print("func:", x)

x = 2
func()
print("main:", x)
Output:
func: 3
main: 1
I'm new to programming. I want to know where I'm going wrong. As I'm new to stack exchange, please let me know if there are any issues with my question or how to improve it.
The way I am reading this code is:
x is assigned the integer 2.
the func() function is called.
x is assigned the integer 3.
the nested() function is called.
x is declared a global variable? #not really clear of the implications
x is assigned the integer 1.
print("func":x) #because x was made a global variable within nested() I expected the output to be 1.
print("main": x) #I believe this is because x was made a global variable?
I'm not clear on why the output is 3 in the first print command?
In short, there are two different identifiers, x, being referenced here: one at the module level, and a different one local to func.
Your steps should read:
The identifier x at the top level (i.e. the module level) is associated with the value 2.
the func function is called.
The completely different identifier x which is local to func is associated with the value 3.
the nested function is called.
Python is told that, within the scope of nested, the name x refers to the top-level (module-level) x. Note that global doesn't mean "next level up"; it means "the top/global level" - the same x affected by step 1, not the one affected by step 3.
The global (top level) x is associated with the value 1.
etc.
General hint: Every time you think you need to use global you almost certainly don't.
The question was asked perfectly fine. The reason is that global x is declared only inside nested, not in func, so func's local x and the module-level x are two different variables.

G-machine, (non-)strict contexts - why case expressions need special treatment

I'm currently reading Implementing functional languages: a tutorial by SPJ and the (sub)chapter I'll be referring to in this question is 3.8.7 (page 136).
The first remark there is that a reader following the tutorial has not yet implemented C scheme compilation (that is, of expressions appearing in non-strict contexts) of ECase expressions.
The solution proposed is to transform a Core program so that ECase expressions simply never appear in non-strict contexts. Specifically, each such occurrence creates a new supercombinator with exactly one variable, whose body corresponds to the original ECase expression, and the occurrence itself is replaced with a call to that supercombinator.
Below I present a (slightly modified) example of such a transformation from the book:
t a b = Pack{2,1} ;
f x = Pack{2,2} (case t x 7 6 of
        <1> -> 1 ;
        <2> -> 2) Pack{1,0} ;
main = f 3
== transformed into ==>
t a b = Pack{2,1} ;
f x = Pack{2,2} ($Case1 (t x 7 6)) Pack{1,0} ;
$Case1 x = case x of
        <1> -> 1 ;
        <2> -> 2 ;
main = f 3
I implemented this solution and it works like a charm, that is, the output is Pack{2,2} 2 Pack{1,0}.
However, what I don't understand is: why all that trouble? I hope it's not just me, but my first thought for solving the problem was to just implement compilation of ECase expressions in the C scheme. I did that by mimicking the rule for compilation in the E scheme (page 134 of the book, but I present that rule here for completeness): so I used
E[[case e of alts]] p = E[[e]] p ++ [Casejump D[[alts]] p]
and wrote
C[[case e of alts]] p = C[[e]] p ++ [Eval] ++ [Casejump D[[alts]] p]
I added [Eval] because Casejump needs an argument on top of the stack in weak head normal form (WHNF), and the C scheme, as opposed to the E scheme, doesn't guarantee that.
But then the output changes to the enigmatic Pack{2,2} 2 6.
The same applies when I use the same rule as for E scheme, i.e.
C[[case e of alts]] p = E[[e]] p ++ [Casejump D[[alts]] p]
So I guess that my "obvious" solution is inherently wrong - and I can see that from the outputs. But I'm having trouble stating formal arguments as to why that approach was bound to fail.
Can someone provide me with such argument/proof or some intuition as to why the naive approach doesn't work?
The purpose of the C scheme is to not perform any computation, but just delay everything until an EVAL happens (which it might or might not). What are you doing in your proposed code generation for case? You're calling EVAL! And the whole purpose of C is to not call EVAL on anything, so you've now evaluated something prematurely.
The only way you could generate code directly for case in the C scheme would be to add some new instruction to perform the case analysis once it's evaluated.
But we (Thomas Johnsson and I) decided it was simpler to just lift out such expressions. The exact historical details are lost in time though. :)
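To see why evaluating prematurely is observably wrong, here is a rough source-level analogue in Haskell rather than G-machine code (pair and loop are made-up names): the case expression sits in a lazy argument position and must remain an unevaluated thunk while the surrounding structure is built.

-- Under lazy evaluation this prints 2: the first component of the pair,
-- and hence the case expression inside it, is never forced. A compiler
-- that evaluated the case while *constructing* the pair would diverge.
pair :: (Int, Int)
pair = ((case loop of () -> 1), 2)
  where
    loop :: ()
    loop = loop            -- diverges if ever forced

main :: IO ()
main = print (snd pair)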

How to specify tab width for Alex lexer?

Alex documentation (Chapter 5) says:
You might want Alex to keep track of the line and column number in the
input text, or you might wish to do it yourself (perhaps you use a
different tab width from the standard 8-columns, for example)
But changing the tab width from 8 to 4 in the Alex position tracker is harder than it sounds. The code for this is hidden deep inside the Alex-generated routines:
-- this function is used by `alexGetByte`, which is used by `alex_scan_tkn`, which is
-- used by `alexScanUser` and `alexRightContext`,
-- which is used by `alex_accept` etc etc...
alexMove :: AlexPosn -> Char -> AlexPosn
alexMove (AlexPn a l c) '\t' = AlexPn (a+1) l (((c+7) `div` 8)*8+1)
alexMove (AlexPn a l c) '\n' = AlexPn (a+1) (l+1) 1
alexMove (AlexPn a l c) _ = AlexPn (a+1) l (c+1)
One idea is to create your own wrapper which defines alexMove the way you want it.
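For instance, a custom wrapper could carry its own copy of alexMove with the tab width changed from 8 to 4. This is only a sketch based on the generated code quoted above; the AlexPosn definition may differ between Alex versions:

-- AlexPosn as defined in the standard wrappers: absolute offset, line, column.
data AlexPosn = AlexPn !Int !Int !Int
  deriving (Eq, Show)

-- Same as the generated alexMove, but with tab stops every 4 columns instead of 8.
alexMove :: AlexPosn -> Char -> AlexPosn
alexMove (AlexPn a l c) '\t' = AlexPn (a+1) l (((c+3) `div` 4)*4+1)
alexMove (AlexPn a l c) '\n' = AlexPn (a+1) (l+1) 1
alexMove (AlexPn a l c) _    = AlexPn (a+1) l (c+1)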
On my Mac wrappers are installed in /Library/Haskell/ghc-7.6.3/lib/alex-3.0.5/share/
Look for where the files named "AlexWrapper-monad", "AlexWrapper-monad-bytestring", ... reside on your system.
The "-t" command-line option tells Alex where to look for templates, but it might also pertain to wrappers, since it appears that wrappers and templates reside in the same directory.

Resources