With pylint, I know that R1705 warning gets triggered when you put a 'return' inside an 'else'.
This is the warning:
R1705: Unnecessary "else" after "return" (no-else-return)
This is what the docs say about it:
Unnecessary “else” after “return” Used in order to highlight an unnecessary block of code following an if containing a return statement. As such, it will warn when it encounters an else following a chain of ifs, all of them containing a return statement.
A snippet of code that will trigger R1705:
if CONDITION1:
    return something1
else:
    return something2
The desired fix to shut down the warning:
if CONDITION1:
    return something1
return something2
Is it really necessary to obey this? What's the benefit? I understand that after returning something from a function there is no way to come back and execute further code.
But I find it way more organized to use 'else'.
If you're trying to conform to the Mozilla Coding Style or similar, then R1705 makes sense.
Quoting:
Don't put an else right after a return (or a break). Delete the else, it's unnecessary and increases indentation level.
Otherwise, you might prefer to disable that warning.
Better still, consider switching to flake8, which tends to stay pretty silent if you've been writing sensible code.
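For reference, if you do decide to keep pylint and silence just this check, one common way is an inline pragma on the line pylint flags (typically the if), or a project-wide entry in .pylintrc; the function below is just a made-up example:

def classify(x):
    if x > 0:  # pylint: disable=no-else-return
        return "positive"
    else:
        return "non-positive"

# or, project-wide, in .pylintrc:
# [MESSAGES CONTROL]
# disable = no-else-return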
Outside of the Mozilla community, most folks would rather see simple parallel functional clauses handled with an else, like this:
def max(a, b):
    if a > b:
        return a
    else:
        return b
This post gives two different cases for this design decision:
Guard clauses.
def try_something():
    if precondition():
        result = compute_something()
        return result
    else:
        display_error()
        return None
The author argues that when there are several such preconditions, inverting them and using an implicit else is better:
# Implicit else, inverted condition
def try_something():
    if not precondition_one():
        display_error_one()
        return
    if not precondition_two():
        display_error_two()
        return
    result = compute_something()
    return result
Symmetrical clause.
# Explicit else
def check_link(link) -> bool:
    if is_internal_link(link):
        return check_internal_link(link)
    else:
        return check_external_link(link)
I agree with the author that here explicit is better.
I would also cite a comment from that post, which says that this choice is a choice of paradigm:
"Explicit else": "if-then-else" is treated as lazy computation and more suited in "functional-first" environments. If this "if-then-else" is applied to large datasets and code in F#, Scala, Haskel, Closure or even SQL - explicitness is preferred. Most probably language/platform itself will encourage to write "pure" code and discourage/make near impossible to make imperative stunts.
"Implicit else/(explicit return)": computation depends on 100% on side-effects and result is combination of side-effects too. It's impossible to guarantee correctness in strict sense anyway, so explicit return becomes clear declaration: "Because of laws of physics in our Universe, this computation could work incorrect. In majority of such cases this default value will be returned".
Suppose I have a mapping (with known types) such as
1: false,
4: false,
8: true,
16: true
And I want to generate a function that takes an input and gives the correct output. I don't care what happens for any input that is not in the above mapping; for example, 3 will never be expected.
A naive solution would be to generate the function as a switch statement or chain of conditionals, for instance:
f(int x) {
    if (x == 1) return false;
    else if (x == 4) return false;
    else if (x == 8) return true;
    else if (x == 16) return true;
}
I want to be able to generate code whose size doesn't scale with the size of the input set, for instance:
f(int x) {
    return x >= 8;
}
Does this problem have a name? What area should I research into?
You want to "guess" what code to generate for the inputs not provided.
You can't do it.
[EDIT: On further discussion, it is now clear to me that he doesn't care about such inputs. I'm leaving the answer as is, because people will assume, as I did, that he must care. Surprising advice offered at end of this answer, anyway].
Imagine you have an adversary that is going to specify a function f on all inputs, but s/he only provides you a sample, as in your example: some fixed set of inputs. You now apply an oracle that guesses f(9) is true. Your adversary promptly shows that her function actually has f(9) false. Likewise if you guess f(9) is false.
The adversary can always manufacture an input/output pair that does not match what your code generator guesses. So you simply cannot get it right.
What you can do is to accept that your guesses may be wrong, and try to choose a function that has the least complexity that explains the input/output pairs you have seen so far. Your example is essentially one of these.
If you believe that "simple" functions are a better approximation of the world than complex ones, you can generate code and hope you don't encounter an adversary in nature.
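To make that concrete, here is a toy Python sketch of "pick the simplest hypothesis consistent with the examples"; the hypothesis space is deliberately restricted to thresholds of the form x >= t, which is only one of many choices a real synthesizer or superoptimizer would consider:

# try to explain the observed pairs with the simplest candidate: a threshold
examples = {1: False, 4: False, 8: True, 16: True}

def fit_threshold(examples):
    # candidate thresholds are drawn from the observed inputs themselves
    for t in sorted(examples):
        if all((x >= t) == expected for x, expected in examples.items()):
            return lambda x, t=t: x >= t   # found a tiny model consistent with the data
    return None                            # no threshold explains the data

f = fit_threshold(examples)
print(f(8), f(1))   # True False -- matches the table; f(9) is merely a guess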
Don't count on it to be reliable.
With those caveats, OP might be interested in the GNU SuperOptimizer. This finds short sequences of machine instructions that produce a provided set of input/output pairs [actually, I think you give it a function that computes the answer, like OP's original function] by the "obviously" crazy idea of literally trying every instruction sequence.
The genius behind the superoptimizer is that this stunt actually works in practice for short instruction sequences.
I think it would be easy to modify it to produce generic "C" instructions (e.g. valid C actions) since I believe it uses C actions to model machine instructions anyway. You would probably have to modify your function to produce "don't care" results for inputs that don't matter, and teach GNU superoptimizer that "don't care" is a valid result. That would in fact be a useful addition to the GNU superoptimizer for its original purpose, too.
http://en.wikipedia.org/wiki/Evaluation_strategy#Call_by_need says:
"Call-by-need is a memoized version of call-by-name where, if the function argument is evaluated, that value is stored for subsequent uses. [...] Haskell is the most well-known language that uses call-by-need evaluation."
However, the value of a computation is not always stored for faster access (for example consider a recursive definition of fibonacci numbers). I asked someone on #haskell and the answer was that this memoization is done automatically "only in one instance, e.g. if you have `let foo = bar baz', foo will be evaluated once".
My question is: what exactly does "instance" mean here? Are there other cases than let in which memoization is done automatically?
Describing this behavior as "memoization" is misleading. "Call by need" just means that a given input to a function will be evaluated somewhere between 0 and 1 times, never more than once. (It could be partially evaluated as well, which means the function only needed part of that input.) In contrast, "call by name" is simply expression substitution, which means if you give the expression 2 + 3 as an input to a function, it may be evaluated multiple times if the input is used more than once. Both call by need and call by name are non-strict: if the input is not used, then it is never evaluated. Most programming languages are strict, and use a "call by value" approach, which means that all inputs are evaluated before you begin evaluating the function, whether or not the inputs are used. This all has nothing to do with let expressions.
Haskell does not perform any automatic memoization. Let expressions are not an example of memoization. However, most compilers will evaluate let bindings in a call-by-need-esque fashion. If you model a let expression as a function, then the "call by need" mentality does apply:
let foo = expression one in expression two that uses foo
==>
(\foo -> expression two that uses foo) (expression one)
This doesn't correctly model recursive bindings, but you get the idea.
The Haskell language definition does not define when, or how often, code is invoked. Infinite loops are defined in terms of 'the bottom' (written ⊥), which is a value (which exists within all types) that represents an error condition. The compiler is free to make its own decisions regarding when and how often to evaluate things as long as the program (and presence/absence of error conditions, including infinite loops!) behaves according to spec.
That said, the usual way of doing this is that most expressions generate 'thunks' - basically a pointer to some code and some context data. The first time you attempt to examine the result of the expression (ie, pattern match it), the thunk is 'forced'; the pointed-to code is executed, and the thunk overwritten with real data. This in turn can recursively evaluate other thunks.
Of course, doing this all the time is slow, so the compiler usually tries to analyze when you'd end up forcing a thunk right away anyway (ie, when something is 'strict' on the value in question), and if it finds this, it'll skip the whole thunk thing and just call the code right away. If it can't prove this, it can still make this optimization as long as it makes sure that executing the thunk right away can't crash or cause an infinite loop (or it handles these conditions somehow).
If you don't want to have to get very technical about this, the essential point is that when you have an expression like some_expensive_computation of all these arguments, you can do whatever you want with it; store it in a data structure, create a list of 53 copies of it, pass it to 6 other functions, etc, and then even return it to your caller for the caller to do whatever it wants with it.
What Haskell will (mostly) do is evaluate it at most once; if the program ever needs to know something about what that expression returned in order to make a decision, then it will be evaluated (at least enough to know which way the decision should go). That evaluation will affect all the other references to the same expression, even if they are now scattered around in data structures and other not-yet-evaluated expressions all throughout your program.
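To make the thunk mechanism described above concrete, here is a minimal Python sketch of a suspension that is forced at most once and then overwritten with its value; it illustrates the idea only and is not how GHC actually represents thunks:

class Thunk:
    def __init__(self, compute):
        self._compute = compute      # pointer to code plus captured context
        self._forced = False
        self._value = None

    def force(self):
        if not self._forced:         # first demand: run the code...
            self._value = self._compute()
            self._forced = True
            self._compute = None     # ...then drop the code and keep the value
        return self._value           # later demands just reuse the stored value

expensive = Thunk(lambda: sum(range(10_000_000)))
# 'expensive' can be stored, copied, and passed around; nothing runs until:
print(expensive.force())   # evaluated here, once
print(expensive.force())   # reuses the stored result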
The question I asked might be closed, but I just want to know: is it necessary to write an else part for every if condition? One of my senior programmers told me that "you should write an else part for every if condition". Suppose we have nothing to put in the else part, then what should we do? I assume a healthy discussion will be going on here...
That's a horrible idea. You end up with code of the form:
if (something) {
    doSomething();
} else {
}
How anyone could think that's more readable or maintainable than not having an else at all is beyond me. It sounds like one of those rules made up by people who have too much free time on their hands. Get them fired as quickly as you can, or at least move away calmly and quietly :-)
No, you certainly don't have to - at least in most languages. (You didn't specify; it's quite possible that there's a language which does enforce this.) Here's an example where I certainly wouldn't:
public void DoSomething(string text)
{
    if (text == null)
    {
        throw new ArgumentNullException("text");
    }
    // Do stuff
}
Now you could put the main work of the method into an "else" clause here - but it would increase nesting unnecessarily. Add a few more conditions and the whole thing becomes an unreadable mess.
This pattern of "early out" is reasonably common, in my experience - and goes for return values as well as exceptions. I know there are some who favour a single return point from a method, but in the languages I work with (Java, C#) that can often lead to significantly less readable code and deeper nesting.
Now, there's one situation where there's more scope for debate, and that's where both branches are terminal, but neither of them is effectively a shortcut:
public int DoSomething()
{
    // Do some work
    if (conditionBasedOnPreviousWork)
    {
        log.Info("Condition met; returning discount");
        return discount;
    }
    else
    {
        log.Info("Condition not met; returning original price");
        return originalPrice;
    }
}
(Note that I've deliberately given both branches more work to do than just returning - otherwise a conditional statement would be appropriate.)
Would this be more readable without the "else"? That's really a matter of personal choice, and I'm not going to claim I'm always consistent. Having both branches equally indented gives them equal weight somehow - and perhaps encourages the possibility of refactoring later by reversing the condition... whereas if we had just dropped through to the "return original price", the refactoring of putting that into an if block and moving the discount case out of an if block would be less obviously correct at first glance.
In imperative languages like Java and C, if - else is a statement and does not return a value. So you can happily write only the if part and go on. And I think that it is the better practice rather than adding empty elses after every if.
However, in functional languages like Haskell and Clojure, if is an expression and must return a value, so it must be followed by an else. There are still cases where you may not need an else branch; for such cases Clojure has a when macro, which wraps if to return nil in the else branch so you don't have to write it.
(when (met? somecondition)
  (dosomething))
Danger! Danger, Will Robinson!
http://en.wikipedia.org/wiki/Cargo_cult_programming
Is the inclusion of empty else { } blocks going to somehow improve the quality, readability, or robustness of your code? I think not.
Looking at this purely from a semantic point of view, I cannot think of a single case where there is not an implicit else for every if.
If the car is not stopped before I reach the wall I will crash, else I will not crash.
Semantics aside:
The answer to that question depends on the environment, and what the result of a mistake is.
Business code? Do what your coding standards say.
IMHO you will find that spelling it out, although initially it seems like too much work, will become invaluable 10 years from now when you revisit that code. But, it certainly would not be the end of the world if you missed an important 'anti-condition'.
However: Security, Safety or Life Critical code? That's a different story.
In this case you want to do two things.
First: rather than testing for a fault, you want to prove there is not a fault. This requires a pessimistic view on entry to any module: you assume everything is wrong until you prove it is correct.
Second: in life-critical code, you NEVER want to hurt a patient:
bool everyThingIsSafe = true;
if(darnThereIsAProblem())
{
    reportToUserEndOfWorld();
}
return everyThingIsSafe;
Oops. I forgot to set everyThingIsSafe to false.
The routine that called this snippet has now been lied to. Had I initialized everyThingIsSafe to false, I'm always safe, but now I need the else clause to indicate that there wasn't an error.
And yes, I could have changed this to a positive test, but then I need the else to handle the fault.
And yes, I could have assigned everyThingIsSafe the immediate return of the check, and then tested the flag to report a problem. That's an implicit else; why not be explicit?
Strictly speaking, the implicit else this represents is reasonable. To an FDA/safety auditor, maybe not.
If it's explicit, I can point to the test, to its else, and show that I handled both conditions clearly.
I've been coding for medical devices for 25 years. In this case, you want the else, you want the default in the case, and they are never empty. You want to know exactly what is going to happen, or as near as you can. Because overlooking a condition could kill someone.
Look up Therac-25. 8 severely injured. 3 dead.
I know I am late but I did a lot of thinking over this and wanted to share my results.
In critical code, it is imperative that every branch be accounted for. Writing an else is not always necessary, but leave a mark showing that the else is not needed and why. This will help the reviewer. Observe:
//negatives should be fixed
if(a < 0) {
    a += m;
}
//else value is positive
No, it's not required to write an else part for every if statement.
In fact, many developers prefer and recommend avoiding the else block where it isn't needed.
That is, instead of writing:
let allow_user;
if (number >= 18) {
    allow_user = true;
} else {
    allow_user = false;
}
most developers prefer:
let allow_user = false;
if (number >= 18) {
    allow_user = true;
}
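For what it's worth, in a case this small the branch can disappear entirely, because the comparison already yields the boolean you want; in Python, for example:

number = 21
allow_user = number >= 18   # the condition itself is the value; no if/else needed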
Sometimes there is no else part... and including an empty one just makes the code less readable, IMHO.
No, you certainly don't have to.
Also, I don't think it's a good idea for readability, since you'd end up with lots of empty else blocks, which are not pretty to see.
This is purely a matter of style and clarity. It's easy to imagine if statements, particularly simple ones, for which an else would be quite superfluous. But when you have a more complex conditional, perhaps handling a number of various cases, it can often be clarifying to explicitly declare that otherwise, nothing ought to be done. In these cases, I'd leave a // do nothing comment in the otherwise empty else to make it clear that the space is intentionally left blank.
No, but I personally choose to always include encapsulating braces to avoid
if (someCondition)
    bar();
    notbar(); //won't be run conditionally, though it looks like it might
foo();
I'd write
if (someCondition){
    bar();
    notbar(); //will be run
}
foo();
def in_num(num):
    if num % 3 == 0:
        print("fizz")
    if num % 5 == 0:
        print("buzz")
    if (num % 3 != 0) and (num % 5 != 0):
        print(num)
As you can see, in this code an else statement is not necessary.
I just came from the Simple Design and Testing Conference. In one of the sessions we were talking about evil keywords in programming languages. Corey Haines, who proposed the subject, was convinced that the if statement is absolutely evil. His alternative was to create functions with predicates. Can you please explain to me why if is evil?
I understand that you can write very ugly code abusing if. But I don't believe that it's that bad.
The if statement is rarely considered as "evil" as goto or mutable global variables -- and even the latter are actually not universally and absolutely evil. I would suggest taking the claim as a bit hyperbolic.
It also largely depends on your programming language and environment. In languages which support pattern matching, you will have great tools for replacing if at your disposal. But if you're programming a low-level microcontroller in C, replacing ifs with function pointers will be a step in the wrong direction. So, I will mostly consider replacing ifs in OOP programming, because in functional languages, if is not idiomatic anyway, while in purely procedural languages you don't have many other options to begin with.
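As a small aside on the pattern-matching point: Python 3.10+ structural pattern matching can replace an if/elif chain over the shape of data; the Shape classes below are made-up illustrations, not from the original discussion:

from dataclasses import dataclass

@dataclass
class Circle:
    radius: float

@dataclass
class Square:
    side: float

def area(shape):
    # one match replaces an isinstance-based if/elif chain
    match shape:
        case Circle(radius=r):
            return 3.14159 * r * r
        case Square(side=s):
            return s * s
        case _:
            raise TypeError(f"unknown shape: {shape!r}")

print(area(Circle(2.0)), area(Square(3.0)))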
Nevertheless, conditional clauses sometimes result in code which is harder to manage. This does not only include the if statement, but even more commonly the switch statement, which usually includes more branches than a corresponding if would.
There are cases where it's perfectly reasonable to use an if
When you are writing utility methods, extensions or specific library functions, it's likely that you won't be able to avoid ifs (and you shouldn't). There isn't a better way to code this little function, nor to make it more self-documenting than it is:
// this is a good "if" use-case
int Min(int a, int b)
{
    if (a < b)
        return a;
    else
        return b;
}

// or, if you prefer the ternary operator
int Min(int a, int b)
{
    return (a < b) ? a : b;
}
Branching over a "type code" is a code smell
On the other hand, if you encounter code which tests for some sort of a type code, or tests if a variable is of a certain type, then this is most likely a good candidate for refactoring, namely replacing the conditional with polymorphism.
The reason for this is that by allowing your callers to branch on a certain type code, you are creating a possibility to end up with numerous checks scattered all over your code, making extensions and maintenance much more complex. Polymorphism, on the other hand, allows you to bring this branching decision as close to the root of your program as possible.
Consider:
// this is called branching on a "type code",
// and screams for refactoring
void RunVehicle(Vehicle vehicle)
{
    // how the hell do I even test this?
    if (vehicle.Type == CAR)
        Drive(vehicle);
    else if (vehicle.Type == PLANE)
        Fly(vehicle);
    else
        Sail(vehicle);
}
By placing common but type-specific (i.e. class-specific) functionality into separate classes and exposing it through a virtual method (or an interface), you allow the internal parts of your program to delegate this decision to someone higher in the call hierarchy (potentially at a single place in code), allowing much easier testing (mocking), extensibility and maintenance:
// adding a new vehicle is gonna be a piece of cake
interface IVehicle
{
    void Run();
}

// your method now doesn't care about which vehicle
// it got as a parameter
void RunVehicle(IVehicle vehicle)
{
    vehicle.Run();
}
And you can now easily test if your RunVehicle method works as it should:
// you can now create test (mock) implementations
// since you're passing it as an interface
var mock = new Mock<IVehicle>();
// run the client method
something.RunVehicle(mock.Object);
// check if Run() was invoked
mock.Verify(m => m.Run(), Times.Once());
Patterns which only differ in their if conditions can be reused
Regarding the argument about replacing if with a "predicate" in your question, Haines probably wanted to mention that sometimes similar patterns exist across your code, differing only in their conditional expressions. Conditional expressions do emerge in conjunction with ifs, but the whole idea is to extract a repeating pattern into a separate method, leaving the expression as a parameter. This is what LINQ already does, usually resulting in cleaner code compared to an alternative foreach:
Consider these two very similar methods:
// average male age
public double AverageMaleAge(List<Person> people)
{
    double sum = 0.0;
    int count = 0;
    foreach (var person in people)
    {
        if (person.Gender == Gender.Male)
        {
            sum += person.Age;
            count++;
        }
    }
    return sum / count; // not checking for zero div. for simplicity
}

// average female age
public double AverageFemaleAge(List<Person> people)
{
    double sum = 0.0;
    int count = 0;
    foreach (var person in people)
    {
        if (person.Gender == Gender.Female) // <-- only the expression
        {                                   // is different
            sum += person.Age;
            count++;
        }
    }
    return sum / count;
}
This indicates that you can extract the condition into a predicate, leaving you with a single method for these two cases (and many other future cases):
// average age for all people matched by the predicate
public double AverageAge(List<Person> people, Predicate<Person> match)
{
    double sum = 0.0;
    int count = 0;
    foreach (var person in people)
    {
        if (match(person)) // <-- the decision to match
        {                  // is now delegated to callers
            sum += person.Age;
            count++;
        }
    }
    return sum / count;
}
var males = AverageAge(people, p => p.Gender == Gender.Male);
var females = AverageAge(people, p => p.Gender == Gender.Female);
And since LINQ already has a bunch of handy extension methods like this, you actually don't even need to write your own methods:
// replace everything we've written above with these two lines
var males = list.Where(p => p.Gender == Gender.Male).Average(p => p.Age);
var females = list.Where(p => p.Gender == Gender.Female).Average(p => p.Age);
In this last LINQ version the if statement has "disappeared" completely, although:
to be honest the problem wasn't in the if by itself, but in the entire code pattern (simply because it was duplicated), and
the if still actually exists, but it's written inside the LINQ Where extension method, which has been tested and closed for modification. Having less of your own code is always a good thing: fewer things to test, fewer things to go wrong, and the code is simpler to follow, analyze and maintain.
Huge runs of nested if/else statements
When you see a function spanning 1000 lines and having dozens of nested if blocks, there is an enormous chance it can be rewritten to
use a better data structure and organize the input data in a more appropriate manner (e.g. a hashtable, which will map one input value to another in a single call; see the sketch after this list),
use a formula, a loop, or sometimes just an existing function which performs the same logic in 10 lines or less (e.g. this notorious example comes to my mind, but the general idea applies to other cases),
use guard clauses to prevent nesting (guard clauses give more confidence into the state of variables throughout the function, because they get rid of exceptional cases as soon as possible),
at least replace with a switch statement where appropriate.
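For the first bullet above, here is a minimal Python sketch of the "better data structure" idea; the country codes and dialing prefixes are invented purely for illustration:

# a nested if/elif ladder mapping codes to prefixes collapses into one lookup
DIALING_PREFIX = {
    "US": "+1",
    "GB": "+44",
    "DE": "+49",
    "FR": "+33",
}

def dialing_prefix(country_code):
    # one call replaces the whole if/elif chain; .get supplies the fallback
    return DIALING_PREFIX.get(country_code, "+??")

print(dialing_prefix("DE"), dialing_prefix("XX"))   # +49 +??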
Refactor when you feel it's a code smell, but don't over-engineer
Having said all this, you should not spend sleepless nights over having a couple of conditionals here and there. While these answers can provide some general rules of thumb, the best way to be able to detect constructs which need refactoring is through experience. Over time, some patterns emerge that result in modifying the same clauses over and over again.
There is another sense in which if can be evil: when it comes instead of polymorphism.
E.g.
if (animal.isFrog()) croak(animal)
else if (animal.isDog()) bark(animal)
else if (animal.isLion()) roar(animal)
instead of
animal.emitSound()
But basically if is a perfectly acceptable tool for what it does. It can be abused and misused of course, but it is nowhere near the status of goto.
A good quote from Code Complete:
Code as if whoever maintains your program is a violent psychopath who
knows where you live.
— Anonymous
IOW, keep it simple. If the readability of your application will be enhanced by using a predicate in a particular area, use it. Otherwise, use the 'if' and move on.
I think it depends on what you're doing to be honest.
If you have a simple if..else statement, why use a predicate?
If you can, use a switch for larger if replacements, and if the option exists to use a predicate for larger operations (where it makes sense; otherwise your code will be a nightmare to maintain), use it.
This guy seems to have been a bit pedantic for my liking. Replacing all if's with Predicates is just crazy talk.
There is the Anti-If campaign, which started earlier in the year. The main premise is that many nested if statements can often be replaced with polymorphism.
I would be interested to see an example of using the Predicate instead. Is this more along the lines of functional programming?
Just like in the bible verse about money, if statements are not evil -- the LOVE of if statements is evil. A program without if statements is a ridiculous idea, and using them as necessary is essential. But a program that has 100 if-else if blocks in a row (which, sadly, I have seen) is definitely evil.
I have to say that I recently have begun to view if statements as a code smell: especially when you find yourself repeating the same condition several times. But there's something you need to understand about code smells: they don't necessarily mean that the code is bad. They just mean that there's a good chance the code is bad.
For instance, comments are listed as a code smell by Martin Fowler, but I wouldn't take anyone seriously who says "comments are evil; don't use them".
Generally though, I prefer to use polymorphism instead of if statements where possible. That just makes for so much less room for error. I tend to find that a lot of the time, using conditionals leads to a lot of tramp arguments as well (because you have to pass the data needed to form the conditional on to the appropriate method).
if is not evil (I also hold that assigning morality to code-writing practices is asinine...).
Mr. Haines is being silly and should be laughed at.
I'll agree with you; he was wrong. You can go too far with things like that, too clever for your own good.
Code created with predicates instead of ifs would be horrendous to maintain and test.
Predicates come from logical/declarative programming languages, like PROLOG. For certain classes of problems, like constraint solving, they are arguably superior to a lot of drawn out step-by-step if-this-do-that-then-do-this crap. Problems that would be long and complex to solve in imperative languages can be done in just a few lines in PROLOG.
There's also the issue of scalable programming (due to the move towards multicore, the web, etc.). If statements and imperative programming in general tend to be step-by-step, and not scalable. Logical declarations and lambda calculus, though, describe how a problem can be solved, and what pieces it can be broken down into. As a result, the interpreter/processor executing that code can efficiently break the code into pieces, and distribute it across multiple CPUs/cores/threads/servers.
Definitely not useful everywhere; I'd hate to try writing a device driver with predicates instead of if statements. But yes, I think the main point is probably sound, and worth at least getting familiar with, if not using all the time.
The only problem with a predicates (in terms of replacing if statements) is that you still need to test them:
void Test(Predicate<int> pr, int num)
{
    if (pr(num))
    { /* do something */ }
    else
    { /* do something else */ }
}
You could of course use the ternary operator (?:), but that's just an if statement in disguise...
Perhaps with quantum computing it will be a sensible strategy to not use IF statements but to let each leg of the computation proceed and only have the function 'collapse' at termination to a useful result.
Sometimes it's necessary to take an extreme position to make your point. I'm sure this person uses if -- but every time you use an if, it's worth having a little think about whether a different pattern would make the code clearer.
Preferring polymorphism to if is at the core of this. Rather than:
if(animaltype == bird) {
    squawk();
} else if(animaltype == dog) {
    bark();
}
... use:
animal.makeSound();
But that supposes that you've got an Animal class/interface -- so really what the if is telling you, is that you need to create that interface.
So in the real world, what sort of ifs do we see that lead us to a polymorphism solution?
if(logging) {
    log.write("Did something");
}
That's really irritating to see throughout your code. How about, instead, having two (or more) implementations of Logger?
this.logger = new NullLogger(); // logger.log() does nothing
this.logger = new StdOutLogger(); // logger.log() writes to stdout
That leads us to the Strategy Pattern.
Instead of:
if(user.getCreditRisk() > 50) {
    decision = thoroughCreditCheck();
} else if(user.getCreditRisk() > 20) {
    decision = mediumCreditCheck();
} else {
    decision = cursoryCreditCheck();
}
... you could have ...
decision = getCreditCheckStrategy(user.getCreditRisk()).decide();
Of course getCreditCheckStrategy() might contain an if -- and that might well be appropriate. You've pushed it into a neat place where it belongs.
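For illustration, here is a hedged Python sketch of what such a getCreditCheckStrategy factory might look like; the class names and thresholds simply mirror the snippet above and are not from the original post:

class ThoroughCreditCheck:
    def decide(self):
        return "full review"

class MediumCreditCheck:
    def decide(self):
        return "standard review"

class CursoryCreditCheck:
    def decide(self):
        return "quick approval"

def get_credit_check_strategy(credit_risk):
    # the single remaining 'if' lives here, in one well-named place
    if credit_risk > 50:
        return ThoroughCreditCheck()
    if credit_risk > 20:
        return MediumCreditCheck()
    return CursoryCreditCheck()

print(get_credit_check_strategy(65).decide())   # full review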
It probably comes down to a desire to keep cyclomatic complexity down, and to reduce the number of branch points in a function. If a function is simple to decompose into a number of smaller functions, each of which can be tested, you can reduce the complexity and make the code more easily testable.
IMO:
I suspect he was trying to provoke a debate and make people think about the misuse of 'if'. No one would seriously suggest such a fundamental construction of programming syntax was to be completely avoided would they?
Good that in ruby we have unless ;)
But seriously, if is probably the next goto: even though most people think it is evil, in some cases it simplifies and speeds things up (and in some cases, like low-level, highly optimized code, it's a must).
I think If statements are evil, but If expressions are not. What I mean by an if expression in this case can be something like the C# ternary operator (condition ? trueExpression : falseExpression). This is not evil because it is a pure function (in a mathematical sense). It evaluates to a new value, but it has no effects on anything else. Because of this, it works in a substitution model.
Imperative If statements are evil because they force you to create side-effects when you don't need to. For an If statement to be meaningful, you have to produce different "effects" depending on the condition expression. These effects can be things like IO, graphic rendering or database transactions, which change things outside of the program. Or, it could be assignment statements that mutate the state of the existing variables. It is usually better to minimize these effects and separate them from the actual logic. But, because of the If statements, we can freely add these "conditionally executed effects" everywhere in the code. I think that's bad.
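To put the same distinction in Python terms, here is a small illustrative sketch (not from the original post): the expression form just yields a value, while the statement form works through mutation:

def shipping_cost(total):
    # conditional expression: evaluates to a value, changes nothing else
    return 0 if total >= 50 else 5

def shipping_cost_imperative(total, order):
    # conditional statement: each branch does its work by mutating state
    if total >= 50:
        order["shipping"] = 0
    else:
        order["shipping"] = 5

print(shipping_cost(60))   # 0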
If is not evil! Consider ...
int sum(int a, int b) {
    return a + b;
}
Boring, eh? Now with an added if ...
int sum(int a, int b) {
    if (a == 0 && b == 0) {
        return 0;
    }
    return a + b;
}
... your code creation productivity (measured in LOC) is doubled.
Also, code readability has improved much, for now you can see in the blink of an eye what the result is when both arguments are zero. You couldn't tell that from the code above, could you?
Moreover, you've supported the test team, since they can now push their code-coverage tools closer to the limits.
Furthermore, the code is now better prepared for future enhancements. Let's say, for example, the sum should be zero if one of the arguments is zero (don't laugh and don't blame me; silly customer requirements, you know, and the customer is always right).
Because the if was there in the first place, only a slight code change is needed.
int sum(int a, int b) {
    if (a == 0 || b == 0) {
        return 0;
    }
    return a + b;
}
How much more code change would have been needed if you hadn't invented the if right from the start?
Thankfulness will be yours on all sides.
Conclusion: There's never enough if's.
There you go.