Is it safe to use bindings that depend on each other in let? For example:
let x = 1
    y = x + 2
in y
Is it possible that they are evaluated in parallel? My GHCi shows that it is evaluated correctly, but will this always be the case?
Haskell is lazily evaluated. This means that expressions are only evaluated when their values are needed. Let's start with your example.
let x = 1
    y = x + 2
in y
The system looks at the y part (the expression) and says, "Hey, I know what y equals. It equals x + 2." But it can only evaluate x + 2 if it has a definition for x, so it does the same thing for x and decides that y is 1 + 2, which is of course 3. Now, this is only a small portion of the power of lazy evaluation. The next example shows it more fully.
let x = 0 : y
    y = 1 : x
in take 50 x
This expression will evaluate correctly, yielding the first fifty elements of the list x, which will be 0 and 1 alternating. Both x and y are infinite lists that depend on each other, and in a strict language these definitions would never finish evaluating. However, the lazy evaluation rules let the system look at only the parts it needs, which in this example is the first fifty elements.
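You can check this directly in GHCi; here's a quick sketch, with the result that the lazy evaluation rules should give you shown under the prompt:

ghci> let x = 0 : y; y = 1 : x in take 10 x
[0,1,0,1,0,1,0,1,0,1]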
So to answer your question about evaluation order: the system evaluates what it needs. If the expression is supposed to return y, it evaluates y and then, as necessary, x. If you had written in x in your first example, it would have evaluated x and left y alone. For example, this code will not raise an error:
let x = 1
    y = error "I'm an error message!"
in x
This is because the binding for y is never needed, so the piece of code that would crash the program is never even looked at.
In Haskell (regardless of whether you use a single let, multiple lets, case, where or function parameters) an expression is evaluated when the evaluation of another expression depends on its value.
So in your case, as soon as y's value is required (which of course depends on the surrounding program), y will be evaluated and x will be evaluated as part of y's evaluation.
Another way to think of it is this: whenever you use the value of a variable, it will be evaluated at that point at the latest. It might have been evaluated earlier (if it was needed earlier), or it might be evaluated now, but as long as you actually use the value, it is guaranteed to be evaluated. So, except for performance reasons, there's no need to worry about when a variable will be evaluated.
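If you want to watch this happen, here's a small standalone sketch using trace from Debug.Trace (a module in base), which prints a message at the exact moment a value is forced. Running it should print the y message, then the x message, then 3, and finally 1, with the error never firing:

import Debug.Trace (trace)

main :: IO ()
main = do
    -- forcing y also forces x, so both messages print before the result
    print (let x = trace "x evaluated" (1 :: Int)
               y = trace "y evaluated" (x + 2)
           in y)
    -- y is never needed here, so the error is never even triggered
    print (let x = 1 :: Int
               y = error "I'm an error message!"
           in x)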
A query regarding this code:
from functools import reduce
def sum_even(it):
    return reduce(lambda x, y: x + y if not y % 2 else x, it, 0)

print(sum_even([1, 2, 3, 4]))
Why does leaving out the third parameter of reduce() cause the first odd element of the list to be added to the sum?
If you don't pass an initial element explicitly, then the first two elements of the input become the arguments to the first call to the reducer, so x would be 1 and y would be 2. Since your test only excludes odd ys, not xs, the x gets preserved, and all future xs are sums based on that initial 1. Using 0 as an explicit initial value means only 0 and (later) the accumulated total of that 0 and other even numbers is ever passed as x, so an odd number is never passed as x.
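To watch the difference, here's a quick sketch (the helper name noisy is my own) that logs every call to the reducer, with and without the explicit initial value:

from functools import reduce

def noisy(x, y):
    # same test as the original lambda, but printing each call's arguments
    print(f"x={x}, y={y}")
    return x + y if not y % 2 else x

print(reduce(noisy, [1, 2, 3, 4]))     # no initializer: x starts out as 1, total is 7
print(reduce(noisy, [1, 2, 3, 4], 0))  # initializer 0: x is only ever 0 or an even sum, total is 6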
Note that this is kind of a silly way to do this. It's much simpler to build this operation from parts, using one tool to filter to even, and another to sum the surviving elements from the filtering operation. For example:
def sum_even(it):
    return sum(x for x in it if not x % 2)
is shorter, clearer, and (likely) faster than the reduce you wrote.
I'm starting to wrap my head around Haskell and do some exciting experiments. And there's one thing I just seem unable to comprehend (maybe my previous "imperativist" experience is talking). Recently, I set out to implement an integer division function as if there were no multiply/divide operations. An immensely interesting brain-teaser, which led to great confusion:
divide x y =
    if x < y then 0
    else 1 + divide (x - y) y
I compiled it and it... works(!). That's mind-blowing. However, I was told (and was sure) that variables are immutable in Haskell. How come, with each recursive step, variable x takes its value from the previous step? Or is my glorious compiler lying to me? Why does it work at all?
Your x here doesn't change during one function call (i.e., after creation) - that's exactly what immutable means. What does change is the value of x across multiple (recursive) calls. Within a single stack frame (function call), the value of x is constant.
An example of the execution of your code, for a simple case:
call divide 8 3 -- (x = 8, y = 3), stack: divide 8 3
step 1: x < y ? NO
step 2: 1 + divide 5 3
call divide 5 3 -- (x = 5, y = 3), stack: divide 8 3, divide 5 3
step 1: x < y ? NO
step 2: 1 + divide 2 3
call divide 2 3 -- (x = 2, y = 3), stack: divide 8 3, divide 5 3, divide 2 3
step 1: x < y ? YES
return 0 -- unwinding bottom call
return 1 + 0 -- stack: divide 8 3, divide 5 3; unwinding middle call
return 1 + 1 + 0 -- stack: divide 8 3; unwinding top call, final result 2
I am aware that the notation above is not formal in any way, but I hope it helps you see what recursion is about: x can have different values in different calls, because each call is a separate instance of the whole function, and thus has its own instance of x.
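If the pending 1 + ... additions bother you, here's a tail-recursive sketch of the same function (my own variant, not the poster's code) that threads the running count through an extra accumulator parameter. Each recursive call still only binds fresh values; nothing is ever mutated:

-- divide, with the count carried in an accumulator
divide :: Int -> Int -> Int
divide = go 0
  where
    -- acc counts how many times y has been subtracted so far
    go acc x y
        | x < y     = acc                    -- nothing left to subtract: acc is the quotient
        | otherwise = go (acc + 1) (x - y) y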
x is actually not a variable, but a parameter, and isn't that different from parameters in imperative languages.
Maybe it'd look more obvious with explicit return statements?
-- for illustrative purposes only, doesn't actually work
divide x y =
    if x < y
        then return 0
        else return 1 + divide (x - y) y
You're not mutating x, just stacking up several function calls to calculate your desired result with the values they return.
Here's the same function in Python:
def divide(x, y):
    if x < y:
        return 0
    else:
        return 1 + divide(x - y, y)
Looks familiar, right? You can translate this to any language that allows recursion, and none of them would require you to mutate a variable.
Other than that: yes, your compiler is lying to you. Because you're not allowed to directly mutate values, the compiler can make a lot of extra assumptions based on your code, which helps it translate your program to efficient machine code, and at that level there's no escaping mutability. The major benefit is that compilers are far less likely to introduce mutability-related bugs than we mortals are.
I have some code in GNU MathProg for an energy model:
s.t. EBa1_RateOfFuelProduction1{r in REGION, l in TIMESLICE, f in FUEL, t in TECHNOLOGY, m in MODE_OF_OPERATION, y in YEAR: OutputActivityRatio[r,t,f,m,y] <> 0}:
    RateOfActivity[r,l,t,m,y] * OutputActivityRatio[r,t,f,m,y] = RateOfProductionByTechnologyByMode[r,l,t,m,f,y];

s.t. EBa4_RateOfFuelUse1{r in REGION, l in TIMESLICE, f in FUEL, t in TECHNOLOGY, m in MODE_OF_OPERATION, y in YEAR: InputActivityRatio[r,t,f,m,y] <> 0}:
    RateOfActivity[r,l,t,m,y] * InputActivityRatio[r,t,f,m,y] = RateOfUseByTechnologyByMode[r,l,t,m,f,y];
I want to merge these two constraints into one, and I am thinking of inserting two conditional expressions (if). The first if would apply to the technologies (t) and fuels (f) where OutputActivityRatio <> 0, and the second would, for the same technology (t), run through the fuels (f) again to check whether InputActivityRatio <> 0. Like this:
s.t. RateOfProduction{r in REGION, l in TIMESLICE, f in FUEL, t in TECHNOLOGY, m in MODE_OF_OPERATION, y in YEAR: OutputActivityRatio[r,t,f,m,y] <> 0}:
    RateOfActivity[r,l,t,m,y] * OutputActivityRatio[r,t,f,m,y] = RateOfProductionByTechnologyByMode[r,l,t,m,f,y]
    if InputActivityRatio[r,t,ff,m,y] <> 0 then
        RateOfActivity[r,l,t,m,y] * InputActivityRatio[r,t,f,m,y] = RateOfUseByTechnologyByMode[r,l,t,m,f,y]
    else 0
    else 0;
My question is: is it possible to have two ifs in series (nested ifs) with an equation between them? How can I write something like that?
Thank you very much!
As described in your other question (regarding nested if-then-else in MathProg), there are no if-then-else statements in MathProg. The workaround with conditional for-loops is no solution to your problem either, since you can only use those in the pre- or post-processing of your data (you can't use them in your constraints!).
But there are still ways to merge your constraints. I think something like the following would work, if your condition is that either the input ratio or the output ratio is always 0.
s.t. RateOfProduction{r in REGION, l in TIMESLICE, f in FUEL, t in TECHNOLOGY, m in MODE_OF_OPERATION, y in YEAR}:
    (RateOfActivity[r,l,t,m,y] * OutputActivityRatio[r,t,f,m,y])
    + (RateOfActivity[r,l,t,m,y] * InputActivityRatio[r,t,f,m,y])
    = RateOfProductionByTechnologyByMode[r,l,t,m,f,y];
Here, one of the two products in the left-hand-side sum would be zero. Since I don't know which symbols are variables and which are parameters, this solution could still fail (for example, it could be problematic if a technology has both input and output at the same time and the rest of the model doesn't contain the right bounds for that).
I just started reading about the pumping lemma and know how to perform a few proofs, mostly by contradiction. It is only this particular question for which I can't seem to find an answer, and I have no idea how to begin. I can assume that there has to be a pumping length P, and that every w in L with LENGTH(w) >= P can be written as w = xyz satisfying the three usual conditions of the pumping lemma.
I have to prove that the following language is not regular:

L = { x+y=z | x, y, z ∈ {0,1}* and #(x) + #(y) = #(z) }
Can someone help me with this? I really want to master the process of proving these kinds of claims.
Edit:
Sorry, forgot to say that the alphabet is {0,1,+,=} and # means the binary value of the string. Like #(00101) = 5 and #(110) = 6.
Since you want to master the process, I'll point out a few things before showing a proof.
The first thing to notice is that + and = may each appear only once. So when you write your string w as w = abc, the pumped portion b cannot contain + or =, otherwise you'd reach a trivial contradiction (I'm not using the more standard w = xyz notation, to avoid confusion with L's definition).
Another thing to notice is that normally you'd pick one specific string w to pump. In this case, it can be easier to pick a whole class of strings that share a certain property. The pumping lemma only requires you to reach a contradiction using one string, but there's no reason you can't reach a contradiction with several at once.
Proof (in a spoiler):
So let w be any string in L such that |w| ≥ P and x, y, z do not contain leading 0's. By the pumping lemma we can write w as w = abc, where b is not empty. Since b cannot contain + or =, it is fully contained in x, y, or z. Pumping w with any i ≠ 1 therefore changes the length, and hence the value, of exactly one of x, y, z, so the binary equation no longer holds (this is why we needed the no-leading-0's condition).
Choose as the string 10^(n+1) + 10^n = 110^n, with n at least the pumping length P.
In other words, your string reads "two to the power n+1 plus two to the power n equals 11 followed by n zeroes", and the equation holds because 2^(n+1) + 2^n = 3·2^n, which is exactly 11 followed by n zeroes in binary.
Since the substring to be pumped then consists entirely of symbols from the first addend, pumping must change the number that addend represents (adding or removing digits changes a number's value, which holds because our string contains no leading zeroes), and if x + y = z holds, then x' + y = z does not hold for x' != x (over the integers, at least).
Since the pumping lemma requires pumped strings to be in the language, and pumping this string fails, we have that the language is not regular.
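For reference, here is that argument written out as a small LaTeX sketch (my formalization; I additionally pick n ≥ P so that the pumped block has to land inside the first addend):

% choose w with n >= P; x, y, z are the three binary numerals
w = \underbrace{1\,0^{n+1}}_{x} + \underbrace{1\,0^{n}}_{y} = \underbrace{1\,1\,0^{n}}_{z},
\qquad \#(x) + \#(y) = 2^{n+1} + 2^{n} = 3 \cdot 2^{n} = \#(z).
% write w = abc with |ab| \le P and |b| \ge 1; since x alone has
% n + 2 \ge P + 2 symbols, b lies entirely inside x. Pumping to a b^2 c
% lengthens x without creating a leading zero, so #(x) strictly grows
% while #(y) and #(z) stay fixed:
\#(x') + \#(y) > 3 \cdot 2^{n} = \#(z)
\quad\Longrightarrow\quad a b^{2} c \notin L.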
Obviously, atomic operations make sure that different threads don't clobber a value. But is this still true across processes, when using shared memory? Even if the processes happen to be scheduled by the OS to run on different cores? Or across different distinct CPUs?
Edit: Also, if it's not safe, is it not safe even on an operating system like Linux, where processes and threads are the same from the scheduler's point of view?
tl;dr: Read the fine print in the documentation of the atomic operations. Some will be atomic by design but may trip over certain variable types. In general, though, an atomic operation will maintain its contract between different processes just as it does between threads.
An atomic operation really only ensures that you won't have an inconsistent state if called by two entities simultaneously. For example, an atomic increment that is called by two different threads or processes on the same integer will always behave like so:
x = initial value (zero for the sake of this discussion)
Entity A increments x and returns the result to itself: result = x = 1.
Entity B increments x and returns the result to itself: result = x = 2.
where A and B indicate the first and second thread or process that makes the call.
A non-atomic operation can result in inconsistent or generally crazy results due to race conditions, incomplete writes to the address space, etc. For example, you can easily see this:
x = initial value = zero again.
Entity A calls x = x + 1. To evaluate x + 1, A reads the value of x (zero) and adds 1.
Entity B calls x = x + 1. To evaluate x + 1, B reads the value of x (still zero) and adds 1.
Entity B (by luck) finishes first and assigns its result of x + 1 = 1 to x. x is now 1.
Entity A finishes second and assigns its own result of x + 1 = 1 (computed from the stale read of zero) to x. x is still 1, and one of the two increments has been lost.
Note the race condition as entity B races past A and completes the expression first.
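To make the lost update concrete across real processes, here's a small Python sketch (my own example, not from the question) using the standard library's multiprocessing.Value, which puts a C int in shared memory. The unlocked variant typically loses a large fraction of the increments:

from multiprocessing import Process, Value

ITERATIONS = 100_000

def bump(counter, use_lock):
    for _ in range(ITERATIONS):
        if use_lock:
            # hold the lock across the whole read-modify-write,
            # making the increment effectively atomic
            with counter.get_lock():
                counter.value += 1
        else:
            # read, add, and store happen as separate steps here,
            # so the two processes can interleave and drop increments
            counter.value += 1

if __name__ == "__main__":
    for use_lock in (False, True):
        counter = Value("i", 0)  # a C int living in shared memory
        workers = [Process(target=bump, args=(counter, use_lock))
                   for _ in range(2)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()
        # expect 200000 with the lock; usually far less without it
        print("locked" if use_lock else "unlocked", counter.value)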
Now imagine if x were a 64-bit double whose assignment is not guaranteed to be atomic. In that case you could easily see something like this:
A 64-bit double x = 0.
Entity A tries to assign 0x1122334455667788 to x. The first 32 bits are assigned first, leaving x with 0x1122334400000000.
Entity B races in and assigns 0xffeeddccbbaa9988 to x. By chance, both 32 bit halves are updated and x is now = 0xffeeddccbbaa9988.
Entity A completes its assignment with the second half and x is now = 0xffeeddcc55667788.
These non-atomic assignments are some of the most hideous concurrent bugs you'll ever have to diagnose.