How to write a loop with only if-else?

Someone once said to me that you can perform any programming function with just if-else statements.
I want to know:
Is this statement true? (From my understanding, Turing-completeness depends on the ability to loop, and having if/else doesn't make something Turing-complete.)
Is it possible to write a for loop with only if-else statements?
How would you loop through this with only if-else? ["fruit", "cars", "water"]

You can use recursion in place of for/while statements.
Recursion is the process of repeating items in a self-similar way.
The computation of Fibonacci numbers is a great example for understanding recursion (the simplest and most common example).
public int fibonacci(int n) {
    if (n <= 1) return n;
    else return fibonacci(n-1) + fibonacci(n-2);
}
But recursion in most popular programming languages has some disadvantages: (a) it can be slow over many iterations, and (b) it uses a lot of memory for the execution stack.
So, yes it's possible to write a for loop with only if-else statements.
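To answer the third question concretely, here is a minimal sketch (in Java, to match the Fibonacci example above; printItems is a hypothetical helper name) of walking the asker's list using only an if statement plus recursion:

public void printItems(String[] items, int i) {
    // the "if" plays the role of the loop condition;
    // the recursive call plays the role of the next iteration
    if (i < items.length) {
        System.out.println(items[i]);
        printItems(items, i + 1);
    }
}

// usage: printItems(new String[] {"fruit", "cars", "water"}, 0);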

Related

Optimally iterating in Haskell with termination conditions and differing iteration steps

I am trying to write a simple iterating algorithm in Haskell, but I'm struggling to find the optimal solution in terms of elegance and speed.
I have an algorithm that needs to apply an operation to a state over a number of iterations until some stopping condition is reached, recording the state using some arbitrary function. I already know how to implement a scheme like this by defining a function like iterateM.
But in this case the operation to perform at each step depends on the state, and boils down to checking a 'step type' condition to decide on the next iterations: either performing operation A for the next 10 iterations, or performing operation B for the next iteration before checking the condition again.
I could write it in an imperative style as:
c = 0
while True:
    if c > 0:
        x = iterateByA(x)
        c = c - 1
    else:
        if stepCondition(x) == 0:
            x = iterateByA(x)
            c = 9
        else:
            x = iterateByB(x)
    observeState(x)
    if stopCondition(x):
        break
and of course this could just be copied in Haskell, but I would rather do something more elegant.
My idea is to have the iteration use a list of functions to pop and apply to the state, and update that list with a new one (based on the 'step type' condition) once it is empty. I'm slightly concerned that this will be inefficient though. Would doing this and using something like
take 10 (repeat iterateByA)
compile away all of the list allocation etc to a tight loop that only uses a counter, like the imperative one above?
Is there another neat and efficient way of doing this?
If it helps, this is for an adaptive stochastic simulation algorithm: the iteration steps update the state, and the step condition (which decides the best simulation scheme) is a function of the current state. There are in fact 3 different iteration schemes, but I figured an example with 2 is easier to explain.
(I'm not sure if it matters, but I should probably also point out that in Haskell the iterateByX functions are monadic, since they use random numbers.)
A direct translation doesn't look too bad.
loop c x
  | stopCondition x = observe x
  | c > 0           = observe x >> iterateByA x >>= loop (c-1)
  | stepCondition x = observe x >> iterateByA x >>= loop 9
  | otherwise       = observe x >> iterateByB x >>= loop c
The repetition of observe can be removed via various tricks if you don't like it.
You should probably rethink things, though. This is a very imperative approach; probably something much better can be done (but it's hard to say how from the few details you've given here).

What is the difference between bottom-up and top-down?

The bottom-up approach (to dynamic programming) consists of first looking at the "smaller" subproblems, and then solving the larger subproblems using the solutions to the smaller ones.
The top-down approach consists of solving the problem in a "natural manner" and checking whether you have calculated the solution to each subproblem before.
I'm a little confused. What is the difference between these two?
rev4: A very eloquent comment by user Sammaron has noted that, perhaps, this answer previously confused top-down and bottom-up. While originally this answer (rev3) and other answers said that "bottom-up is memoization" ("assume the subproblems"), it may be the inverse (that is, "top-down" may be "assume the subproblems" and "bottom-up" may be "compose the subproblems"). Previously, I have read on memoization being a different kind of dynamic programming as opposed to a subtype of dynamic programming. I was quoting that viewpoint despite not subscribing to it. I have rewritten this answer to be agnostic of the terminology until proper references can be found in the literature. I have also converted this answer to a community wiki. Please prefer academic sources. List of references: {Web: 1,2} {Literature: 5}
Recap
Dynamic programming is all about ordering your computations in a way that avoids recalculating duplicate work. You have a main problem (the root of your tree of subproblems), and subproblems (subtrees). The subproblems typically repeat and overlap.
For example, consider your favorite example of Fibonacci. This is the full tree of subproblems, if we did a naive recursive call:
TOP of the tree
                        fib(4)
             fib(3)........... + ...........fib(2)
      fib(2)..... + fib(1)         fib(1)..... + fib(0)
fib(1) + fib(0)
BOTTOM of the tree
(In some other rare problems, this tree could be infinite in some branches, representing non-termination, and thus the bottom of the tree may be infinitely large. Furthermore, in some problems you might not know what the full tree looks like ahead of time. Thus, you might need a strategy/algorithm to decide which subproblems to reveal.)
Memoization, Tabulation
There are at least two main techniques of dynamic programming which are not mutually exclusive:
Memoization - This is a laissez-faire approach: You assume that you have already computed all subproblems and that you have no idea what the optimal evaluation order is. Typically, you would perform a recursive call (or some iterative equivalent) from the root, and either hope you will get close to the optimal evaluation order, or obtain a proof that will help you arrive at the optimal evaluation order. You would ensure that the recursive call never recomputes a subproblem because you cache the results, and thus duplicate sub-trees are not recomputed.
example: If you are calculating the Fibonacci sequence fib(100), you would just call this, and it would call fib(100)=fib(99)+fib(98), which would call fib(99)=fib(98)+fib(97), ...etc..., which would call fib(2)=fib(1)+fib(0)=1+0=1. Then it would finally resolve fib(3)=fib(2)+fib(1), but it doesn't need to recalculate fib(2), because we cached it.
This starts at the top of the tree and evaluates the subproblems from the leaves/subtrees back up towards the root.
Tabulation - You can also think of dynamic programming as a "table-filling" algorithm (though usually multidimensional, this 'table' may have non-Euclidean geometry in very rare cases*). This is like memoization but more active, and involves one additional step: You must pick, ahead of time, the exact order in which you will do your computations. This should not imply that the order must be static, but that you have much more flexibility than memoization.
example: If you are performing fibonacci, you might choose to calculate the numbers in this order: fib(2),fib(3),fib(4)... caching every value so you can compute the next ones more easily. You can also think of it as filling up a table (another form of caching).
I personally do not hear the word 'tabulation' a lot, but it's a very decent term. Some people consider this "dynamic programming".
Before running the algorithm, the programmer considers the whole tree, then writes an algorithm to evaluate the subproblems in a particular order towards the root, generally filling in a table.
*footnote: Sometimes the 'table' is not a rectangular table with grid-like connectivity, per se. Rather, it may have a more complicated structure, such as a tree, or a structure specific to the problem domain (e.g. cities within flying distance on a map), or even a trellis diagram, which, while grid-like, does not have an up-down-left-right connectivity structure, etc. For example, user3290797 linked a dynamic programming example of finding the maximum independent set in a tree, which corresponds to filling in the blanks in a tree.
(At its most general, in a "dynamic programming" paradigm, I would say the programmer considers the whole tree, then writes an algorithm that implements a strategy for evaluating subproblems which can optimize whatever properties you want (usually a combination of time-complexity and space-complexity). Your strategy must start somewhere, with some particular subproblem, and perhaps may adapt itself based on the results of those evaluations. In the general sense of "dynamic programming", you might try to cache these subproblems, and more generally, try to avoid revisiting subproblems, with a subtle distinction perhaps being the case of graphs in various data structures. Very often, these data structures are at their core like arrays or tables. Solutions to subproblems can be thrown away if we don't need them anymore.)
[Previously, this answer made a statement about the top-down vs bottom-up terminology; there are clearly two main approaches called Memoization and Tabulation that may be in bijection with those terms (though not entirely). The general term most people use is still "Dynamic Programming" and some people say "Memoization" to refer to that particular subtype of "Dynamic Programming." This answer declines to say which is top-down and bottom-up until the community can find proper references in academic papers. Ultimately, it is important to understand the distinction rather than the terminology.]
Pros and cons
Ease of coding
Memoization is very easy to code (you can generally* write a "memoizer" annotation or wrapper function that automatically does it for you), and should be your first line of approach. The downside of tabulation is that you have to come up with an ordering.
*(this is actually only easy if you are writing the function yourself, and/or coding in an impure/non-functional programming language... for example if someone already wrote a precompiled fib function, it necessarily makes recursive calls to itself, and you can't magically memoize the function without ensuring those recursive calls call your new memoized function (and not the original unmemoized function))
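As a concrete illustration of that wrapper idea, here is a minimal sketch in Java (Memo and memoize are hypothetical names of mine, not a standard API); note that for a recursive function the recursive calls must go through the memoized version, as the footnote warns:

import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public final class Memo {
    // wraps any one-argument function with a cache of its results
    public static <A, R> Function<A, R> memoize(Function<A, R> f) {
        Map<A, R> cache = new HashMap<>();
        return a -> {
            R hit = cache.get(a);
            if (hit != null) return hit;   // reuse a previously computed answer
            R result = f.apply(a);
            cache.put(a, result);          // cache it for future calls
            return result;
        };
    }
}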
Recursiveness
Note that both top-down and bottom-up can be implemented with recursion or iterative table-filling, though it may not be natural.
Practical concerns
With memoization, if the tree is very deep (e.g. fib(10^6)), you will run out of stack space, because each delayed computation must be put on the stack, and you will have 10^6 of them.
Optimality
Either approach may not be time-optimal if the order in which you happen to (or try to) visit subproblems is not optimal, specifically if there is more than one way to calculate a subproblem (normally caching would resolve this, but it's theoretically possible that caching might not in some exotic cases). Memoization will usually add your time-complexity onto your space-complexity (e.g. with tabulation you have more liberty to throw away calculations: using tabulation with Fib lets you use O(1) space, but memoization with Fib uses O(N) stack space).
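A minimal sketch (in Java, my example rather than the answer's) of that last point: the bottom-up version only ever needs the two most recent values, so everything else can be thrown away:

static long fib(int n) {
    long a = 0, b = 1;          // fib(0) and fib(1)
    for (int i = 0; i < n; i++) {
        long next = a + b;      // fib(i+2)
        a = b;                  // slide the two-value window forward
        b = next;
    }
    return a;                   // fib(n), using O(1) space
}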
Advanced optimizations
If you are doing an extremely complicated problem, you might have no choice but to do tabulation (or at least take a more active role in steering the memoization where you want it to go). Also if you are in a situation where optimization is absolutely critical and you must optimize, tabulation will allow you to do optimizations which memoization would not otherwise let you do in a sane way. In my humble opinion, in normal software engineering, neither of these two cases ever comes up, so I would just use memoization ("a function which caches its answers") unless something (such as stack space) makes tabulation necessary... though technically to avoid a stack blowout you can 1) increase the stack size limit in languages which allow it, or 2) eat a constant factor of extra work to virtualize your stack (ick), or 3) program in continuation-passing style, which in effect also virtualizes your stack (not sure the complexity of this, but basically you will effectively take the deferred call chain from the stack of size N and de-facto stick it in N successively nested thunk functions... though in some languages without tail-call optimization you may have to trampoline things to avoid a stack blowout).
More complicated examples
Here we list examples of particular interest, that are not just general DP problems, but interestingly distinguish memoization and tabulation. For example, one formulation might be much easier than the other, or there may be an optimization which basically requires tabulation:
the algorithm to calculate edit-distance[4], interesting as a non-trivial example of a two-dimensional table-filling algorithm
Top down and bottom up DP are two different ways of solving the same problems. Consider a memoized (top down) vs dynamic (bottom up) programming solution to computing fibonacci numbers.
fib_cache = {}

def memo_fib(n):
    global fib_cache
    if n == 0 or n == 1:
        return 1
    if n in fib_cache:
        return fib_cache[n]
    ret = memo_fib(n - 1) + memo_fib(n - 2)
    fib_cache[n] = ret
    return ret

def dp_fib(n):
    partial_answers = [1, 1]
    while len(partial_answers) <= n:
        partial_answers.append(partial_answers[-1] + partial_answers[-2])
    return partial_answers[n]

print(memo_fib(5), dp_fib(5))
I personally find memoization much more natural. You can take a recursive function and memoize it by a mechanical process (first lookup answer in cache and return it if possible, otherwise compute it recursively and then before returning, you save the calculation in the cache for future use), whereas doing bottom up dynamic programming requires you to encode an order in which solutions are calculated, such that no "big problem" is computed before the smaller problem that it depends on.
A key feature of dynamic programming is the presence of overlapping subproblems. That is, the problem that you are trying to solve can be broken into subproblems, and many of those subproblems share subsubproblems. It is like "Divide and conquer", but you end up doing the same thing many, many times. An example that I have used since 2003 when teaching or explaining these matters: you can compute Fibonacci numbers recursively.
def fib(n):
    if n < 2:
        return n
    return fib(n-1) + fib(n-2)
Use your favorite language and try running it for fib(50). It will take a very, very long time. Roughly as much time as fib(50) itself! However, a lot of unnecessary work is being done. fib(50) will call fib(49) and fib(48), but then both of those will end up calling fib(47), even though the value is the same. In fact, fib(47) will be computed three times: by a direct call from fib(49), by a direct call from fib(48), and also by a direct call from another fib(48), the one that was spawned by the computation of fib(49)... So you see, we have overlapping subproblems.
Great news: there is no need to compute the same value many times. Once you compute it once, cache the result, and the next time use the cached value! This is the essence of dynamic programming. You can call it "top-down", "memoization", or whatever else you want. This approach is very intuitive and very easy to implement. Just write a recursive solution first, test it on small tests, add memoization (caching of already computed values), and --- bingo! --- you are done.
Usually you can also write an equivalent iterative program that works from the bottom up, without recursion. In this case this would be the more natural approach: loop from 1 to 50 computing all the Fibonacci numbers as you go.
fib = [0] * 50
fib[1] = 1
for i in range(48):
    fib[i+2] = fib[i] + fib[i+1]
In any interesting scenario the bottom-up solution is usually more difficult to understand. However, once you do understand it, you usually get a much clearer big picture of how the algorithm works. In practice, when solving nontrivial problems, I recommend first writing the top-down approach and testing it on small examples. Then write the bottom-up solution and compare the two to make sure you are getting the same thing. Ideally, compare the two solutions automatically: write a small routine that generates lots of tests (ideally, all small tests up to a certain size) and validate that both solutions give the same result. After that, use the bottom-up solution in production, but keep the top-down code, commented out. This will make it easier for other developers to understand what it is that you are doing: bottom-up code can be quite incomprehensible, even if you wrote it yourself and know exactly what you are doing.
In many applications the bottom-up approach is slightly faster because of the overhead of recursive calls. Stack overflow can also be an issue in certain problems, and note that this can very much depend on the input data. In some cases you may not be able to write a test causing a stack overflow if you don't understand dynamic programming well enough, but some day this may still happen.
Now, there are problems where the top-down approach is the only feasible solution because the problem space is so big that it is not possible to solve all subproblems. However, the "caching" still works in reasonable time because your input only needs a fraction of the subproblems to be solved --- but it is too tricky to explicitly define which subproblems you need to solve, and hence too tricky to write a bottom-up solution. On the other hand, there are situations when you know you will need to solve all subproblems. In this case, go ahead and use bottom-up.
I would personally use top-down for paragraph optimization, a.k.a. the word wrap optimization problem (look up the Knuth-Plass line-breaking algorithm; at least TeX uses it, and some software by Adobe Systems uses a similar approach). I would use bottom-up for the Fast Fourier Transform.
Let's take the Fibonacci series as an example:
1, 1, 2, 3, 5, 8, 13, 21...
First number: 1
Second number: 1
Third number: 2
Another way to put it:
Bottom (first) number: 1
Top (eighth) number in the given sequence: 21
In the case of the first five Fibonacci numbers:
Bottom (first) number: 1
Top (fifth) number: 5
Now let's take a look at the recursive Fibonacci algorithm as an example:
public int recursive(int n) {
    if ((n == 1) || (n == 2)) {
        return 1;
    } else {
        return recursive(n - 1) + recursive(n - 2);
    }
}
Now if we execute this program with the following command:
recursive(5);
If we look closely at the algorithm, in order to generate the fifth number it requires the 3rd and 4th numbers. So my recursion actually starts from the top (5) and then goes all the way down to the lower numbers. This approach is actually a top-down approach.
To avoid doing the same calculation multiple times we use dynamic programming techniques. We store a previously computed value and reuse it. This technique is called memoization. There is more to dynamic programming than memoization, but we don't need it for the current problem.
Top-Down
Let's rewrite our original algorithm using the memoization technique.
public int memoized(int n, int[] memo) {
    if (n <= 2) {
        return 1;
    } else if (memo[n] != -1) {
        return memo[n];
    } else {
        memo[n] = memoized(n - 1, memo) + memoized(n - 2, memo);
    }
    return memo[n];
}
And we execute this method as follows:
int n = 5;
int[] memo = new int[n + 1];
Arrays.fill(memo, -1);
memoized(n, memo);
This solution is still top-down, as the algorithm starts from the top value and works its way down to the base cases at each step to build up our top value.
Bottom-Up
But the question is: can we start from the bottom, i.e. from the first Fibonacci number, and walk our way up? Let's rewrite it using this technique:
public int dp(int n) {
    if (n <= 2) return 1; // guard small n before indexing output[2]
    int[] output = new int[n + 1];
    output[1] = 1;
    output[2] = 1;
    for (int i = 3; i <= n; i++) {
        output[i] = output[i - 1] + output[i - 2];
    }
    return output[n];
}
Now if we look at this algorithm, it actually starts from the lower values and goes up to the top. If I need the 5th Fibonacci number, I am actually calculating the 1st, then the 2nd, then the 3rd, all the way up to the 5th number. This technique is called the bottom-up technique.
These last two algorithms fulfill the dynamic programming requirements. But one is top-down and the other is bottom-up. Both algorithms have similar space and time complexity.
Dynamic programming is often called memoization!
1. Memoization is the top-down technique (start solving the given problem by breaking it down), and dynamic programming is a bottom-up technique (start solving from the trivial sub-problem, up towards the given problem).
2. DP finds the solution by starting from the base case(s) and working its way upwards. DP solves all the sub-problems, because it works bottom-up.
Unlike memoization, which solves only the needed sub-problems.
DP has the potential to transform exponential-time brute-force solutions into polynomial-time algorithms.
DP may be much more efficient because it's iterative.
On the contrary, memoization must pay for the (often significant) overhead due to recursion.
To put it more simply, memoization uses the top-down approach to solve the problem: it begins with the core (main) problem, then breaks it into sub-problems, and solves these sub-problems similarly. In this approach the same sub-problem can occur multiple times and consume more CPU cycles, hence increasing the time complexity. Whereas in dynamic programming the same sub-problem will not be solved multiple times; the prior result will be used to optimize the solution.
Dynamic programming problems can be solved using either bottom-up or top-down approaches.
Generally, the bottom-up approach uses the tabulation technique, while the top-down approach uses the recursion (with memoization) technique.
But you can also have bottom-up and top-down approaches using recursion as shown below.
Bottom-Up: Start with the base condition and pass the value calculated until now recursively. Generally, these are tail recursions.
int n = 5;
fibBottomUp(1, 1, 2, n);

private int fibBottomUp(int i, int j, int count, int n) {
    if (count > n) return 1;
    if (count == n) return i + j;
    return fibBottomUp(j, i + j, count + 1, n);
}
Top-Down: Start with the final condition and recursively get the result of its sub-problems.
int n = 5;
fibTopDown(n);

private int fibTopDown(int n) {
    if (n <= 1) return 1;
    return fibTopDown(n - 1) + fibTopDown(n - 2);
}
Simply put, the top-down approach uses recursion to call sub-problems again and again, whereas the bottom-up approach uses a single loop without any recursive calls, and hence it is more efficient.
Following is a DP-based solution for the Edit Distance problem, which is bottom-up (it fills a table from the base cases upward). I hope it will also help in understanding the world of dynamic programming:
public int minDistance(String word1, String word2) { // Standard dynamic programming puzzle.
    int m = word2.length();
    int n = word1.length();
    if (m == 0) // Cannot miss the corner cases!
        return n;
    if (n == 0)
        return m;
    int[][] DP = new int[n + 1][m + 1];
    for (int j = 1; j <= m; j++) {
        DP[0][j] = j;
    }
    for (int i = 1; i <= n; i++) {
        DP[i][0] = i;
    }
    for (int i = 1; i <= n; i++) {
        for (int j = 1; j <= m; j++) {
            if (word1.charAt(i - 1) == word2.charAt(j - 1))
                DP[i][j] = DP[i-1][j-1];
            else
                DP[i][j] = Math.min(Math.min(DP[i-1][j], DP[i][j-1]), DP[i-1][j-1]) + 1; // Main idea is this.
        }
    }
    return DP[n][m];
}
You can work out its recursive implementation at home; it's quite good and challenging if you haven't solved something like this before. (A sketch follows below.)
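For reference, here is a minimal sketch (mine, not the answer author's; minDistMemo is a hypothetical helper name) of what that recursive, top-down memoized variant could look like:

public int minDistanceTopDown(String word1, String word2) {
    Integer[][] memo = new Integer[word1.length() + 1][word2.length() + 1];
    return minDistMemo(word1, word2, word1.length(), word2.length(), memo);
}

private int minDistMemo(String w1, String w2, int i, int j, Integer[][] memo) {
    if (i == 0) return j;                      // only insertions remain
    if (j == 0) return i;                      // only deletions remain
    if (memo[i][j] != null) return memo[i][j]; // reuse a cached subproblem
    if (w1.charAt(i - 1) == w2.charAt(j - 1)) {
        memo[i][j] = minDistMemo(w1, w2, i - 1, j - 1, memo);
    } else {
        memo[i][j] = 1 + Math.min(minDistMemo(w1, w2, i - 1, j, memo),
                         Math.min(minDistMemo(w1, w2, i, j - 1, memo),
                                  minDistMemo(w1, w2, i - 1, j - 1, memo)));
    }
    return memo[i][j];
}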
Nothing to be confused about... you usually learn a language in a bottom-up manner (from basics to more complicated things), and often make your project in a top-down manner (from the overall goal and structure of the code to certain pieces of implementation).

whether a language needs preIncrement (++x) and postIncrement (x++)

I have never seen a use case for pre-increment and post-increment in actual code. The place I see them most often is in puzzles.
My opinion is that they introduce more confusion than usefulness.
Is there any real use case scenario for this?
Can't this be done by using +=?
y = x++
is the same as:
y = x
x += 1
It's just a shorter way of writing the same thing and it's only confusing to those who don't deeply understand C (a). The same argument could be made for replacing:
for (i = 0; i < 10; i++)
    printf("%d\n", i);
with:
i = 0;
while (i < 10) {
    printf("%d\n", i);
    i = i + 1;
}
since any for can also be done with while, or:
i = 0;
loop:
if (i < 10) {
    printf("%d\n", i);
    i = i + 1;
    goto loop;
}
since any loop construct can be built out of conditions and goto. But (I'm hoping) you wouldn't do that, would you?
(a) I sometimes like to explain this to my students as simple statements and side effects, something that allows C code to be more succinct with usually no or minimal loss in readability.
For the statement:
y = x++;
the statement is assigning x to y with the side effect that x is incremented afterwards. ++x is the same, it's just that the side effect happens beforehand.
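To make the before/after distinction concrete, a small illustration (written here in Java, where these operators behave the same way for integers):

int x = 5;
int y = x++;  // post-increment: y gets the old value (5); x becomes 6
int z = ++x;  // pre-increment: x becomes 7 first, then z gets 7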
Similarly, the side effect of an assignment is that it evaluates as the value assigned, meaning you can do things like:
while ((c = getchar()) != -1) count++;
and which makes things like:
42;
perfectly valid, but useless, C statements.
The pre- and post-increment operators make much more sense if you consider them in the light of history and when they were conceived.
Back in the days when C was basically a high-level assembler for PDP-11 machines</flamebait>, and long before we had the nice optimizing compilers we have now, there were common idioms used that the post-increment operators were perfect for. Things like this:
char* strcpy(char* dest, char* src)
{
    /* highly simplified version of the classic idiom */
    char* start = dest;      /* remember the start so we can return it */
    while (*dest++ = *src++)
        ;                    /* copy each char, stopping after the NUL */
    return start;
}
The code in question generated PDP-11 (or other) machine language code that made heavy use of the underlying addressing modes (like auto-increment and auto-decrement) that incorporated exactly these kinds of pre- and post-increment and decrement operations.
So to answer your question: do languages "need" these nowadays? No, of course not. It's provable that you need very little in terms of instructions to compute things. The question is more interesting if you ask "are these features desirable?" To that I'd answer a qualified "yes".
Using your examples:
y = x;
x += 1;
vs.
y = x++;
I can see two advantages right off the top of my head.
The code is more succinct. Everything I need to know to understand what you're doing is in one place (as long as I know the language, naturally!) instead of spread out. "Spreading out" across two lines seems like a picky thing but if you're doing thousands of them it can make a big difference in the end.
It is far more likely that the code generated even by a crappy compiler will be atomic in the second case. In the first case it very likely will not be unless you have a nice compiler. (Not all platforms have good, strong optimizing compilers.)
Also, I find it very telling that you're talking about += when that itself is an "unneeded" way of saying x = x + 1... After all, there is no use case scenario I can think of for += that couldn't be served fine by _ = _ + _ instead.
You're accidentally raising a much larger issue here, and it's one that will make itself more and more known to you as the years (decades) go by.
Languages often make the mistake of supplying "abilities" when they shouldn't. IMO, ++ should be a stand-alone statement only, and absolutely not an expression operator.
Try to keep the following close to heart: The goal is not to create code for the competent engineer to read. The goal is to create code for the competent engineer to read when he is exhausted at 3am and hopped up on caffeine.
If an engineer says to you "All code constructs can get you into trouble. You just have to know what you're doing.", then walk away laughing, because he's just exposed himself as part of the problem.
In other words, please don't ever code anything like this:
a[aIndex++] = b[++bIndex];
You can find an interesting conversation about this kind of thing here:
Why avoid increment ("++") and decrement ("--") operators in JavaScript?

What are some examples of where using parentheses in a program lowers readability?

I always thought that parentheses improved readability, but in my textbook there is a statement that the use of parentheses dramatically reduces the readability of a program. Does anyone have any examples?
I can find plenty of counterexamples where the lack of parentheses lowered the readability, but the only example I can think of for what the author may have meant is something like this:
if (((a == null) || (!(a.isSomething()))) && ((b == null) || (!(b.isSomething()))))
{
    // do some stuff
}
In the above case, the ( ) around the method calls is unnecessary, and this kind of code may benefit from factoring out of terms into variables. With all of those close parens in the middle of the condition, it's hard to see exactly what is grouped with what.
boolean aIsNotSomething = (a == null) || !a.isSomething(); // parens for readability
boolean bIsNotSomething = (b == null) || !b.isSomething(); // ditto
if (aIsNotSomething && bIsNotSomething)
{
    // do some stuff
}
I think the above is more readable, but that's a personal opinion. That may be what the author was talking about.
Some good uses of parens:
to distinguish between order of operation when behavior changes without the parens (see the sketch after these lists)
to distinguish between order of operation when behavior is unaffected, but someone who doesn't know the binding rules well enough is going to read your code. The good citizen rule.
to indicate that an expression within the parens should be evaluated before used in a larger expression: System.out.println("The answer is " + (a + b));
Possibly confusing use of parens:
in places where it can't possibly have another meaning, like in front of a.isSomething() above. In Java, if a is an Object, !a by itself is an error, so clearly !a.isSomething() must negate the return value of the method call.
to link together a large number of conditions or expressions that would be clearer if broken up. As in the code example up above, breaking up the large parenthetical statement into smaller chunks can allow the code to be stepped through in a debugger more straightforwardly, and if the conditions/values are needed later in the code, you don't end up repeating expressions and doing the work twice. This is subjective, though, and obviously meaningless if you only use the expressions in 1 place and your debugger shows you intermediate evaluated expressions anyway.
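A small illustration of that first "good use" bullet (my example, in Java): shift binds more loosely than addition, so the parentheses genuinely change the result.

int a = 1 << 2 + 1;    // parsed as 1 << (2 + 1), so a == 8
int b = (1 << 2) + 1;  // explicit grouping,      so b == 5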
Apparently, your textbook was written by someone who hates Lisp.
Anyway, it's a matter of taste; there is no single truth for everyone.
I don't think parentheses are the best way to improve the readability of your code. You can use a new line to highlight, for example, the conditions in an if statement. I don't use parentheses when they are not required.
Well, consider something like this:
Result = (x * y + p * q - 1) % t
and
Result = (((x * y) + (p * q)) - 1) % t
Personally I prefer the former (but that's just me), because the latter makes me think the parentheses are there to change the actual order of operations, when in fact they aren't doing that. Your textbook might also be referring to cases where you can split your calculation into multiple variables. For example, you'll probably have something like this when solving a quadratic ax^2 + bx + c = 0:
x1 = (-b + sqrt(b*b - 4*a*c)) / (2*a)
Which does look kind of ugly. This looks better in my opinion:
SqrtDelta = sqrt(b*b - 4*a*c);
x1 = (-b + SqrtDelta) / (2*a);
And this is just one simple example; when you work with algorithms that involve a lot of computations, things can get really ugly, so splitting the computations into multiple parts will help readability more than parentheses will.
Parentheses reduce readability when they are obviously redundant. The reader expects them to be there for a reason, but there is no reason. Hence, a cognitive hiccough.
What do I mean by "obviously" redundant?
Parentheses are redundant when they can be removed without changing the meaning of the program.
Parentheses that are used to disambiguate infix operators are not "obviously redundant", even when they are redundant, except perhaps in the very special case of multiplication and addition operators. Reason: many languages have between 10 and 15 levels of precedence, many people work in multiple languages, and nobody can be expected to remember all the rules. It is often better to disambiguate, even if parentheses are redundant.
All other redundant parentheses are obviously redundant.
Redundant parentheses are often found in code written by someone who is learning a new language; perhaps uncertainty about the new syntax leads to defensive parenthesizing.
Expunge them!
You asked for examples. Here are three examples I see repeatedly in ML code and Haskell code written by beginners:
Parentheses around the condition in if (...) then are always redundant and distracting. They make the author look like a C programmer. Just write if ... then.
Parentheses around a variable are silly, as in print(x). Parentheses are never necessary around a variable; the function application should be written print x.
Parentheses around a function application are redundant if that application is an operand in an infix expression. For example,
(length xs) + 1
should always be written
length xs + 1
Anything taken to an extreme and/or overused can make code unreadable. It wouldn't be too hard to make the same claim about comments: anyone who has ever looked at code with a comment on virtually every line will tell you how difficult it is to read. Or you could put whitespace around every line of code, which would make each line easy to read, but normally people want similar, related lines (that don't warrant a breakout method) grouped together.
You have to go way over the top with them to really damage readability, but as a matter of personal taste, I have always found:
return (x + 1);
and similar in C and C++ code to be very irritating.
If a method doesn't take parameters, why require an empty () to call method()? I believe in Groovy you don't need to do this.

Why is the 'if' statement considered evil?

I just came from the Simple Design and Testing Conference. In one of the sessions we were talking about evil keywords in programming languages. Corey Haines, who proposed the subject, was convinced that the if statement is absolutely evil. His alternative was to create functions with predicates. Can you please explain to me why if is evil?
I understand that you can write very ugly code by abusing if. But I don't believe that it's that bad.
The if statement is rarely considered as "evil" as goto or mutable global variables -- and even the latter are actually not universally and absolutely evil. I would suggest taking the claim as a bit hyperbolic.
It also largely depends on your programming language and environment. In languages which support pattern matching, you will have great tools for replacing if at your disposal. But if you're programming a low-level microcontroller in C, replacing ifs with function pointers will be a step in the wrong direction. So, I will mostly consider replacing ifs in OOP programming, because in functional languages, if is not idiomatic anyway, while in purely procedural languages you don't have many other options to begin with.
Nevertheless, conditional clauses sometimes result in code which is harder to manage. This does not only include the if statement, but even more commonly the switch statement, which usually includes more branches than a corresponding if would.
There are cases where it's perfectly reasonable to use an if
When you are writing utility methods, extensions or specific library functions, it's likely that you won't be able to avoid ifs (and you shouldn't). There isn't a better way to code this little function, nor make it more self-documented than it is:
// this is a good "if" use-case
int Min(int a, int b)
{
    if (a < b)
        return a;
    else
        return b;
}

// or, if you prefer the ternary operator
int Min(int a, int b)
{
    return (a < b) ? a : b;
}
Branching over a "type code" is a code smell
On the other hand, if you encounter code which tests for some sort of a type code, or tests if a variable is of a certain type, then this is most likely a good candidate for refactoring, namely replacing the conditional with polymorphism.
The reason for this is that by allowing your callers to branch on a certain type code, you are creating a possibility to end up with numerous checks scattered all over your code, making extensions and maintenance much more complex. Polymorphism, on the other hand, allows you to bring this branching decision as close to the root of your program as possible.
Consider:
// this is called branching on a "type code",
// and screams for refactoring
void RunVehicle(Vehicle vehicle)
{
    // how the hell do I even test this?
    if (vehicle.Type == CAR)
        Drive(vehicle);
    else if (vehicle.Type == PLANE)
        Fly(vehicle);
    else
        Sail(vehicle);
}
By placing common but type-specific (i.e. class-specific) functionality into separate classes and exposing it through a virtual method (or an interface), you allow the internal parts of your program to delegate this decision to someone higher in the call hierarchy (potentially at a single place in code), allowing much easier testing (mocking), extensibility and maintenance:
// adding a new vehicle is gonna be a piece of cake
interface IVehicle
{
    void Run();
}

// your method now doesn't care about which vehicle
// it got as a parameter
void RunVehicle(IVehicle vehicle)
{
    vehicle.Run();
}
And you can now easily test if your RunVehicle method works as it should:
// you can now create test (mock) implementations
// since you're passing it as an interface
var mock = new Mock<IVehicle>();
// run the client method
something.RunVehicle(mock.Object);
// check if Run() was invoked
mock.Verify(m => m.Run(), Times.Once());
Patterns which only differ in their if conditions can be reused
Regarding the argument about replacing if with a "predicate" in your question, Haines probably meant that sometimes similar patterns exist in your code which differ only in their conditional expressions. Conditional expressions do emerge in conjunction with ifs, but the whole idea is to extract a repeating pattern into a separate method, leaving the expression as a parameter. This is what LINQ already does, usually resulting in cleaner code compared to an alternative foreach:
Consider these two very similar methods:
// average male age
public double AverageMaleAge(List<Person> people)
{
    double sum = 0.0;
    int count = 0;
    foreach (var person in people)
    {
        if (person.Gender == Gender.Male)
        {
            sum += person.Age;
            count++;
        }
    }
    return sum / count; // not checking for zero div. for simplicity
}

// average female age
public double AverageFemaleAge(List<Person> people)
{
    double sum = 0.0;
    int count = 0;
    foreach (var person in people)
    {
        if (person.Gender == Gender.Female) // <-- only the expression is different
        {
            sum += person.Age;
            count++;
        }
    }
    return sum / count;
}
This indicates that you can extract the condition into a predicate, leaving you with a single method for these two cases (and many other future cases):
// average age for all people matched by the predicate
public double AverageAge(List<Person> people, Predicate<Person> match)
{
    double sum = 0.0;
    int count = 0;
    foreach (var person in people)
    {
        if (match(person)) // <-- the decision to match is now delegated to callers
        {
            sum += person.Age;
            count++;
        }
    }
    return sum / count;
}

var males = AverageAge(people, p => p.Gender == Gender.Male);
var females = AverageAge(people, p => p.Gender == Gender.Female);
And since LINQ already has a bunch of handy extension methods like this, you actually don't even need to write your own methods:
// replace everything we've written above with these two lines
var males = list.Where(p => p.Gender == Gender.Male).Average(p => p.Age);
var females = list.Where(p => p.Gender == Gender.Female).Average(p => p.Age);
In this last LINQ version the if statement has "disappeared" completely, although:
to be honest the problem wasn't in the if by itself, but in the entire code pattern (simply because it was duplicated), and
the if still actually exists, but it's written inside the LINQ Where extension method, which has been tested and closed for modification. Having less of your own code is always a good thing: less things to test, less things to go wrong, and the code is simpler to follow, analyze and maintain.
Huge runs of nested if/else statements
When you see a function spanning 1000 lines and having dozens of nested if blocks, there is an enormous chance it can be rewritten to
use a better data structure and organize the input data in a more appropriate manner (e.g. a hashtable, which will map one input value to another in a single call; see the sketch after this list),
use a formula, a loop, or sometimes just an existing function which performs the same logic in 10 lines or less (e.g. this notorious example comes to my mind, but the general idea applies to other cases),
use guard clauses to prevent nesting (guard clauses give more confidence into the state of variables throughout the function, because they get rid of exceptional cases as soon as possible),
at least replace with a switch statement where appropriate.
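As a quick illustration of the first bullet (my sketch, written in Java; the names and status codes are made up), a lookup table can replace a whole run of branches:

import java.util.Map;

class StatusText {
    private static final Map<Integer, String> STATUS = Map.of(
        200, "OK",
        404, "Not Found",
        500, "Internal Server Error");

    static String describe(int code) {
        // one lookup instead of an if/else-if chain per status code
        return STATUS.getOrDefault(code, "Unknown");
    }
}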
Refactor when you feel it's a code smell, but don't over-engineer
Having said all this, you should not spend sleepless nights over having a couple of conditionals here and there. While these answers can provide some general rules of thumb, the best way to learn to detect constructs which need refactoring is through experience. Over time, some patterns emerge that result in modifying the same clauses over and over again.
There is another sense in which if can be evil: when it comes instead of polymorphism.
E.g.
if (animal.isFrog()) croak(animal)
else if (animal.isDog()) bark(animal)
else if (animal.isLion()) roar(animal)
instead of
animal.emitSound()
But basically if is a perfectly acceptable tool for what it does. It can be abused and misused of course, but it is nowhere near the status of goto.
A good quote from Code Complete:
Code as if whoever maintains your program is a violent psychopath who
knows where you live.
— Anonymous
IOW, keep it simple. If the readability of your application will be enhanced by using a predicate in a particular area, use it. Otherwise, use the 'if' and move on.
I think it depends on what you're doing to be honest.
If you have a simple if..else statement, why use a predicate?
If you can, use a switch to replace larger if chains, and then, if you have the option to use a predicate for large operations (where it makes sense; otherwise your code will be a nightmare to maintain), use it.
This guy seems to have been a bit too pedantic for my liking. Replacing all ifs with predicates is just crazy talk.
There is the Anti-If campaign, which started earlier in the year. The main premise is that many nested if statements can often be replaced with polymorphism.
I would be interested to see an example of using the Predicate instead. Is this more along the lines of functional programming?
Just like in the bible verse about money, if statements are not evil -- the LOVE of if statements is evil. A program without if statements is a ridiculous idea, and using them as necessary is essential. But a program that has 100 if-else if blocks in a row (which, sadly, I have seen) is definitely evil.
I have to say that I recently have begun to view if statements as a code smell: especially when you find yourself repeating the same condition several times. But there's something you need to understand about code smells: they don't necessarily mean that the code is bad. They just mean that there's a good chance the code is bad.
For instance, comments are listed as a code smell by Martin Fowler, but I wouldn't take anyone seriously who says "comments are evil; don't use them".
Generally though, I prefer to use polymorphism instead of if statements where possible. That just makes for so much less room for error. I tend to find that a lot of the time, using conditionals leads to a lot of tramp arguments as well (because you have to pass the data needed to form the conditional on to the appropriate method).
if is not evil (I also hold that assigning morality to code-writing practices is asinine...).
Mr. Haines is being silly and should be laughed at.
I'll agree with you; he was wrong. You can go too far with things like that, too clever for your own good.
Code created with predicates instead of ifs would be horrendous to maintain and test.
Predicates come from logical/declarative programming languages, like PROLOG. For certain classes of problems, like constraint solving, they are arguably superior to a lot of drawn out step-by-step if-this-do-that-then-do-this crap. Problems that would be long and complex to solve in imperative languages can be done in just a few lines in PROLOG.
There's also the issue of scalable programming (due to the move towards multicore, the web, etc.). If statements and imperative programming in general tend to be in step-by-step order, and not scaleable. Logical declarations and lambda calculus though, describe how a problem can be solved, and what pieces it can be broken down into. As a result, the interpreter/processor executing that code can efficiently break the code into pieces, and distribute it across multiple CPUs/cores/threads/servers.
Definitely not useful everywhere; I'd hate to try writing a device driver with predicates instead of if statements. But yes, I think the main point is probably sound, and worth at least getting familiar with, if not using all the time.
The only problem with predicates (in terms of replacing if statements) is that you still need to test them:

void Test(Predicate<int> pr, int num)
{
    if (pr(num))
    { /* do something */ }
    else
    { /* do something else */ }
}

You could of course use the ternary operator (?:), but that's just an if statement in disguise...
Perhaps with quantum computing it will be a sensible strategy to not use IF statements but to let each leg of the computation proceed and only have the function 'collapse' at termination to a useful result.
Sometimes it's necessary to take an extreme position to make your point. I'm sure this person uses if -- but every time you use an if, it's worth having a little think about whether a different pattern would make the code clearer.
Preferring polymorphism to if is at the core of this. Rather than:
if (animaltype == bird) {
    squawk();
} else if (animaltype == dog) {
    bark();
}
... use:
animal.makeSound();
But that supposes that you've got an Animal class/interface -- so really what the if is telling you, is that you need to create that interface.
So in the real world, what sort of ifs do we see that lead us to a polymorphism solution?
if (logging) {
    log.write("Did something");
}
That's really irritating to see throughout your code. How about, instead, having two (or more) implementations of Logger?
this.logger = new NullLogger(); // logger.log() does nothing
this.logger = new StdOutLogger(); // logger.log() writes to stdout
That leads us to the Strategy Pattern.
Instead of:
if (user.getCreditRisk() > 50) {
    decision = thoroughCreditCheck();
} else if (user.getCreditRisk() > 20) {
    decision = mediumCreditCheck();
} else {
    decision = cursoryCreditCheck();
}
... you could have ...
decision = getCreditCheckStrategy(user.getCreditRisk()).decide();
Of course getCreditCheckStrategy() might contain an if -- and that might well be appropriate. You've pushed it into a neat place where it belongs.
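A hypothetical sketch of that neat place (my guess at the shape, in Java, assuming a CreditCheckStrategy interface with a decide() method and the three implementations named below):

CreditCheckStrategy getCreditCheckStrategy(int risk) {
    // the branching now lives in exactly one factory method
    if (risk > 50) return new ThoroughCreditCheck();
    if (risk > 20) return new MediumCreditCheck();
    return new CursoryCreditCheck();
}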
It probably comes down to a desire to keep code cyclomatic complexity down, and to reduce the number of branch points in a function. If a function is simple to decompose into a number of smaller functions, each of which can be tested, you can reduce the complexity and make code more easily testable.
IMO:
I suspect he was trying to provoke a debate and make people think about the misuse of if. No one would seriously suggest that such a fundamental construct of programming syntax should be completely avoided, would they?
It's good that in Ruby we have unless ;)
But seriously, if is probably the next goto: even though most people think it is evil, in some cases it simplifies/speeds things up (and in some cases, like low-level highly optimized code, it's a must).
I think If statements are evil, but If expressions are not. What I mean by an if expression in this case can be something like the C# ternary operator (condition ? trueExpression : falseExpression). This is not evil because it is a pure function (in a mathematical sense). It evaluates to a new value, but it has no effects on anything else. Because of this, it works in a substitution model.
Imperative If statements are evil because they force you to create side-effects when you don't need to. For an If statement to be meaningful, you have to produce different "effects" depending on the condition expression. These effects can be things like IO, graphic rendering or database transactions, which change things outside of the program. Or, it could be assignment statements that mutate the state of the existing variables. It is usually better to minimize these effects and separate them from the actual logic. But, because of the If statements, we can freely add these "conditionally executed effects" everywhere in the code. I think that's bad.
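To illustrate (my example, in Java, whose ?: operator behaves like the C# one described above):

static int max(int a, int b) {
    // an "if expression": it evaluates to a value and mutates nothing,
    // so it can be substituted like a mathematical function
    return (a > b) ? a : b;
}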
If is not evil! Consider ...
int sum(int a, int b) {
    return a + b;
}
Boring, eh? Now with an added if ...
int sum(int a, int b) {
    if (a == 0 && b == 0) {
        return 0;
    }
    return a + b;
}
... your code creation productivity (measured in LOC) is doubled.
Also, code readability has improved a lot, for now you can see in the blink of an eye what the result is when both arguments are zero. You couldn't do that in the code above, could you?
Moreover, you've supported the test team, who can now push their code-coverage tools closer to their limits.
Furthermore, the code is now better prepared for future enhancements. Let's suppose, for example, that the sum should be zero if one of the arguments is zero (don't laugh and don't blame me, silly customer requirements, you know, and the customer is always right).
Because the if was there in the first place, only a slight code change is needed.
int sum(int a, int b) {
    if (a == 0 || b == 0) {
        return 0;
    }
    return a + b;
}
How much more code change would have been needed if you hadn't invented the if right from the start?
Thankfulness will be yours on all sides.
Conclusion: There's never enough ifs.
There you go.
