Examples of languages that hide variable multiplicity - programming-languages

What are some examples of programming languages, extensions to programming languages, or other solutions that hide the multiplicity of variables when operating on them, calling methods, etc.?
Specifically, I imagine a system where I have a single typed collection of objects that transparently forwards any method call on the collection so that the method is applied to all of the objects individually, including using the return values in a meaningful way. Preferably I would like to see examples of languages that do this well, but it could also be interesting to see solutions where this does not work well.
I imagine something like this:
struct Foo
{
int bar();
};
void myFunction()
{
// 4 Foo objects are created in a vector
vector<Foo> vals(4);
// The bar() method is applied to each of the Foo objects; each call
// returns an int that is automatically inserted into a new vector
vector<int> results = vals.bar();
}

Take a look at Java 8 streams. Basically, you'd "stream" the container's contents, and indicate to the stream that each thing that goes through should have the method Foo::bar applied to it.
vals.stream().forEach(Foo::bar);
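forEach discards bar()'s return values, though. To gather them into a new collection, as the question asks, the stream can be mapped and collected instead. A minimal sketch (this Foo and its bar() implementation are illustrative):
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

class Foo {
    int bar() { return 42; }
}

class StreamDemo {
    public static void main(String[] args) {
        List<Foo> vals = Arrays.asList(new Foo(), new Foo(), new Foo(), new Foo());
        // bar() is applied to each Foo; the int results land in a new list
        List<Integer> results = vals.stream()
                                    .map(Foo::bar)
                                    .collect(Collectors.toList());
        System.out.println(results); // [42, 42, 42, 42]
    }
}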
A lot of these concepts come from earlier languages, including Lisp (list processing).

Related

What are typical functions in object oriented programming?

I could not find an answer anywhere on the internet.
Can someone please explain with an example?
Functions are normally associated with procedural programming. In OOP you have methods, which are functions in nature and work the same way, but they always operate in relation to some object. You cannot declare a method without a class to hold it, and you always call it through an object of that class. So the procedural approach of writing free-standing functions and simply calling them does not carry over to OOP: you have to associate each function with a class, and usually with a constructor for that class as well.
Let me show you that with an example.
Suppose we are writing code in C, which is a procedural language; a function looks like this:
int add(int a, int b){
return a+b;
}
Now in Java, the equivalent OOP method looks like this:
class NumberAdder{
int num1;
int num2;
NumberAdder(int num1, int num2){
this.num1=num1;
this.num2=num2;
}
public int getSum(){
return num1+num2;
}
}
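To make the "you always call it through an object" point concrete, a small usage sketch:
// The method can only be invoked on an instance of its class
NumberAdder adder = new NumberAdder(2, 3);
int sum = adder.getSum(); // 5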
It depends on the application and how everything is organized. In OOP, the word "methods" is preferred to distinguish them from "functions", which are not part of an object. Look at the documentation for any library and you should find some examples.
Unity is a game engine whose API illustrates this intuitively. Collider has methods for finding nearest points and detecting collisions. ParticleSystem has a method for emitting particles. Camera has a method for rendering. Etc.
https://docs.unity3d.com/ScriptReference/ParticleSystem.html

Is there a language where types can take content of fields into account?

I had this crazy idea and was wondering if such a thing exists:
Usually, in a strongly typed language, types are mainly concerned with memory layout, or membership to an abstract 'class'. So class Foo {int a;} and class Bar {int a; int b;} are distinct, but so is class Baz {int a; int b;} (although it has the same layout, it's a different type). So far, so good.
I was wondering if there is a language that allows one to specify more fine-grained constraints as to what makes a type. For example, I'd like to have:
class Person {
//...
int height;
}
class RollercoasterSafe: Person (where .height>140) {}
void ride(RollercoasterSafe p) { //... }
and the compiler would make sure that it's impossible to have p.height < 140 in ride. This is just a stupid example, but I'm sure there are use cases where this could really help. Is there such a thing?
It depends on whether the predicate is checked statically or dynamically. In either case the answer is yes, but the resulting systems look different.
On the static end: PL researchers have proposed the notion of a refinement type, which consists of a base type together with a predicate: http://en.wikipedia.org/wiki/Program_refinement. I believe the idea of refinement types is that the predicates are checked at compile time, which means that you have to restrict the language of predicates to something tractable.
It's also possible to express constraints using dependent types, which are types parameterized by run-time values (as opposed to polymorphic types, which are parameterized by other types).
There are other tricks that you can play with powerful type systems like Haskell's, but IIUC you would have to change height from int to something whose structure the type checker could reason about.
On the dynamic end: SQL has something called domains, as in CREATE DOMAIN: http://developer.postgresql.org/pgdocs/postgres/sql-createdomain.html (see the bottom of the page for a simple example), which again consist of a base type and a constraint. The domain's constraint is checked dynamically whenever a value of that domain is created. In general, you can solve the problem by creating a new abstract data type and performing the check whenever you create a new value of the abstract type. If your language allows you to define automatic coercions from and to your new type, then you can use them to essentially implement SQL-like domains; if not, you just live with plain old abstract data types instead.
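As a rough Java sketch of that "abstract data type with a construction-time check" idea (all names here are illustrative, not from any standard library):
// A dynamically checked "domain": the constraint is enforced whenever
// a value of the type is created, similar to SQL's CREATE DOMAIN.
final class SafeHeight {
    private final int centimetres;

    private SafeHeight(int centimetres) {
        this.centimetres = centimetres;
    }

    static SafeHeight of(int centimetres) {
        if (centimetres <= 140) {
            throw new IllegalArgumentException("height must exceed 140 cm");
        }
        return new SafeHeight(centimetres);
    }

    int value() { return centimetres; }
}
// ride() can now take a SafeHeight: the only way to obtain one is through
// the checked factory method, so the constraint is known to hold.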
Then there are contracts, which are not thought of as types per se but can be used in some of the same ways, such as constraining the arguments and results of functions/methods. Simple contracts include predicates (eg, "accepts a Person object with height > 140"), but contracts can also be higher-order (eg, "accepts a Person object whose makeSmallTalk() method never returns null"). Higher-order contracts cannot be checked immediately, so they generally involve creating some kind of proxy. Contract checking does not create a new type of value or tag existing values, so the dynamic check will be repeated every time the contract is performed. Consequently, programmers often put contracts along module boundaries to minimize redundant checks.
An example of a language with such capabilities is Spec#. From the tutorial documentation available on the project site:
Consider the method ISqrt in Fig. 1, which computes the integer square root of
a given integer x. It is possible to implement the method only if x is non-negative, so
int ISqrt(int x)
requires 0 <= x;
ensures result*result <= x && x < (result+1)*(result+1);
{
int r = 0;
while ((r+1)*(r+1) <= x)
invariant r*r <= x;
{
r++;
}
return r;
}
In your case, you could probably do something like (note that I haven't tried this, I'm just reading docs):
void ride(Person p)
requires p.height > 140;
{
//...
}
There may be a way to roll up that requires clause into a type declaration such as RollercoasterSafe that you have suggested.
Your idea sounds somewhat like C++0x's concepts, though not identical. However, concepts have been removed from the C++0x standard.
I don't know any language that supports that kind of thing, but I don't find it really necessary.
I'm pretty convinced that simply applying validation in the setters of the properties may give you all the necessary restrictions.
In your RollercoasterSafe class example, you may throw an exception when the value of the height property is set to a value less than 140. It's runtime checking, but polymorphism can make compile-time checking impossible.
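A minimal Java sketch of that setter-based check (the classes follow the question; choosing IllegalArgumentException is an assumption):
class Person {
    private int height;

    public void setHeight(int height) {
        this.height = height;
    }

    public int getHeight() { return height; }
}

class RollercoasterSafe extends Person {
    @Override
    public void setHeight(int height) {
        // Runtime validation of the invariant the type is supposed to carry
        if (height <= 140) {
            throw new IllegalArgumentException("RollercoasterSafe requires height > 140");
        }
        super.setHeight(height);
    }
}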

Is it a bad idea to use the new Dynamic Keyword as a replacement switch statement?

I like the new dynamic keyword and have read that it can be used as a replacement for the visitor pattern.
It makes the code more declarative which I prefer.
Is it a good idea, though, to replace all instances of switching on 'Type' with a class that implements dynamic dispatch?
class VisitorTest
{
public string DynamicVisit(dynamic obj)
{
return Visit(obj);
}
private string Visit(string str)
{
return "a string was called with value " + str;
}
private string Visit(int value)
{
return "an int was called with value " + value;
}
}
It really depends on what you consider a "good idea".
This works, and it works in a fairly elegant manner. It has some advantages and some disadvantages to other approaches.
On the advantage side:
It's concise, and easy to extend
The code is fairly simple
For the disadvantages:
Error checking is potentially more difficult than a classic visitor implementation, since all error checking must be done at runtime. For example, if you pass visitorTest.DynamicVisit(4.2);, you'll get an exception at runtime, but no compile time complaints.
The code may be less obvious, and have a higher maintenance cost.
Personally, I think this is a reasonable approach. The visitor pattern, in a classic implementation, has a fairly high maintenance cost and is often difficult to test cleanly. This potentially makes the cost slightly higher, but makes the implementation much simpler.
With good error checking, I don't have a problem with using dynamic as an approach here. Personally, I'd probably use an approach like this, since the alternatives that perform in a reasonable manner get pretty nasty otherwise.
However, there are a couple of changes I would make here. First, as I mentioned, you really need to include error checking.
Second, I would actually make DynamicVisit take a dynamic directly, which might make it (slightly) more obvious what's happening:
class VisitorTest
{
public string DynamicVisit(dynamic obj)
{
try
{
return Visit(obj);
}
catch (RuntimeBinderException e) // requires: using Microsoft.CSharp.RuntimeBinder;
{
// Handle the exception here!
Console.WriteLine("Invalid type specified");
}
return string.Empty;
}
// ...Rest of code
The visitor pattern exists primarily to work around the fact that some languages do not allow double dispatch and multiple dispatch.
Multiple dispatch or multimethods is the feature of some object-oriented programming languages in which a function or method can be dynamically dispatched based on the run time (dynamic) type of more than one of its arguments. This is an extension of single dispatch polymorphism where a method call is dynamically dispatched based on the actual derived type of the object. Multiple dispatch generalizes the dynamic dispatching to work with a combination of two or more objects.
Until version 4, C# was one of those languages. With the introduction of the dynamic keyword, however, C# allows developers to opt-in to this dispatch mechanism just as you've shown. I don't see anything wrong with using it in this manner.
You haven't changed the type safety at all, because even a switch (or more likely dispatch dictionary, given that C# does not allow switching on type) would have to have a default case that throws when it can't match a function to call, and this will do exactly the same if it can't find a suitable function to bind to.

Thread-safe data structure design

I have to design a data structure that is to be used in a multi-threaded environment. The basic API is simple: insert element, remove element, retrieve element, check that element exists. The structure's implementation uses implicit locking to guarantee the atomicity of a single API call. After I implemented this, it became apparent that what I really need is atomicity across several API calls. For example, if a caller needs to check the existence of an element before trying to insert it, he can't do that atomically even if each single API call is atomic:
if(!data_structure.exists(element)) {
data_structure.insert(element);
}
The example is somewhat awkward, but the basic point is that we can't trust the result of the "exists" call anymore after we return from the atomic context (the generated assembly clearly shows a chance of a context switch between the two calls).
What I currently have in mind to solve this is exposing the lock through the data structure's public API. This way clients will have to explicitly lock things, but at least they won't have to create their own locks. Is there a better, commonly known solution to these kinds of problems? And as long as we're at it, can you recommend some good literature on thread-safe design?
EDIT: I have a better example. Suppose that element retrieval returns either a reference or a pointer to the stored element and not its copy. How can a caller be protected so that it can safely use this pointer/reference after the call returns? If you think that not returning copies is a problem, then think about deep copies, i.e. objects that should also copy the other objects they point to internally.
Thank you.
You either provide a mechanism for external locking (bad), or redesign the API, like putIfAbsent. The latter approach is for instance used for Java's concurrent data-structures.
And when it comes to such basic collection types, you should check out whether your language of choice already offers them in its standard library.
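For instance, a minimal sketch of the putIfAbsent approach using Java's ConcurrentHashMap (assuming the structure is map-like):
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class PutIfAbsentDemo {
    public static void main(String[] args) {
        ConcurrentMap<String, Integer> map = new ConcurrentHashMap<>();
        // Atomically inserts only if the key is absent: there is no
        // check-then-act race between an exists() and an insert() call.
        Integer previous = map.putIfAbsent("answer", 42);
        if (previous != null) {
            System.out.println("Key already present with value " + previous);
        }
    }
}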
[edit]To clarify: external locking is bad for the user of the class, as it introduces another source of potential bugs. Yes, there are times when performance considerations indeed make matters worse for concurrent data structures than for externally synchronized ones, but those cases are rare, and they can usually only be solved/optimized by people with far more knowledge/experience than me.
One, maybe important, performance hint is found in Will's answer below.
[/edit]
[edit2]Given your new example: basically, you should try to keep the synchronization of the collection and that of the elements as separate as possible. If the lifetime of the elements is bound to their presence in one collection, you will run into problems; when using a GC, this kind of problem actually becomes simpler. Otherwise you will have to keep a kind of proxy instead of raw elements in the collection; in the simplest case for C++ you would go and use boost::shared_ptr, which uses an atomic ref-count. Insert the usual performance disclaimer here. When you are using C++ (as I suspect, since you talk about pointers and references), the combination of boost::shared_ptr and boost::make_shared should suffice for a while.
[/edit2]
Sometimes it's expensive to create an element to be inserted. In these scenarios you can't really afford to routinely create objects that might already exist just in case they do.
One approach is for the insertIfAbsent() method to return a 'cursor' that is locked - it inserts a place-holder into the internal structure so that no other thread can believe it is absent, but does not insert the new object. The placeholder can contain a lock so that other threads that want to access that particular element must wait for it to be inserted.
In an RAII language like C++ you can use a smart stack class to encapsulate the returned cursor so that it automatically rolls back if the calling code does not commit. In Java it's a bit more deferred, with the finalize() method, but it can still work.
Another approach is for the caller to create the object that isn't present, but that to occasionally fail in the actual insertion if another thread has 'won the race'. This is how, for example, memcache updates are done. It can work very well.
What about moving the existence check into the .insert() method? A client calls it, and if it returns false you know that something went wrong. Much like what malloc() does in plain old C -- return NULL on failure and set errno.
Obviously you can also throw an exception, or return an instance of an object, and complicate your life up from there..
But please, don't rely on the user setting their own locks.
In an RAII-style fashion you could create accessor/handle objects (I don't know what it's called; there probably exists a pattern for this), e.g. for a List:
template <typename T> class ListHandle; // forward declaration, needed for the friend declaration below
template <typename T>
class List {
friend class ListHandle<T>;
// private methods use NO locking
bool _exists( const T& e ) { ... }
void _insert( const T& e ) { ... }
void _lock();
void _unlock();
public:
// public methods use internal locking and just wrap the private methods
bool exists( const T& e ) {
raii_lock l;
return _exists( e );
}
void insert( const T& e ) {
raii_lock l;
_insert( e );
}
...
};
template <typename T>
class ListHandle {
List<T>& list;
public:
ListHandle( List<T>& l ) : list(l) {
list._lock();
}
~ListHandle() {
list._unlock();
}
bool exists( const T& e ) { return list._exists(e); }
void insert( const T& e ) { list._insert(e); }
};
List<int> list;
void foo() {
ListHandle<int> hndl( list ); // locks the list as long as it exists
if( hndl.exists(element) ) {
hndl.insert(element);
}
// list is unlocked here (ListHandle destructor)
}
You duplicate (or even triplicate) the public interface, but you give users the choice between internal locking and safe, comfortable external locking wherever it is required.
First of all, you should really separate your concerns. You have two things to worry about:
The datastructure and its methods.
The thread synchronization.
I highly suggest you use an interface or virtual base class that represents the type of datastructure you are implementing. Create a simple implementation that does not do any locking, at all. Then, create a second implementation that wraps the first implementation and adds locking on top of it. This will allow a more performant implementation where locking isn't needed and will greatly simplify your code.
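A minimal Java sketch of that wrapping idea (the Store interface and class names are illustrative):
import java.util.HashSet;
import java.util.Set;

interface Store<T> {
    void insert(T element);
    boolean exists(T element);
}

// Plain implementation: no locking at all.
class HashStore<T> implements Store<T> {
    private final Set<T> set = new HashSet<>();
    public void insert(T element) { set.add(element); }
    public boolean exists(T element) { return set.contains(element); }
}

// Decorator that adds coarse-grained locking on top of any Store.
class SynchronizedStore<T> implements Store<T> {
    private final Store<T> delegate;
    private final Object lock = new Object();
    SynchronizedStore(Store<T> delegate) { this.delegate = delegate; }
    public void insert(T element) { synchronized (lock) { delegate.insert(element); } }
    public boolean exists(T element) { synchronized (lock) { return delegate.exists(element); } }
}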
It looks like you are implementing some sort of dictionary. One thing you can do is provide methods that have semantics that are equivalent to the combined statement. For example setdefault is a reasonable function to provide that will set a value only if the corresponding key does not already exist in the dictionary.
In other words, my recommendation would be to figure out what combinations of methods are frequently used together, and simply create API methods that perform that combination of operations atomically.
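Java's ConcurrentMap already ships such combined operations; a minimal sketch using computeIfAbsent (the key and value types here are just for illustration):
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.CopyOnWriteArrayList;

public class CombinedOpsDemo {
    public static void main(String[] args) {
        ConcurrentMap<String, List<String>> index = new ConcurrentHashMap<>();
        // "Get the list for this key, creating it atomically if absent" in one call,
        // instead of a racy exists()/insert()/retrieve() sequence.
        index.computeIfAbsent("fruits", k -> new CopyOnWriteArrayList<>()).add("apple");
        System.out.println(index.get("fruits")); // [apple]
    }
}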

Why is the 'if' statement considered evil?

I just came from the Simple Design and Testing Conference. In one of the sessions we were talking about evil keywords in programming languages. Corey Haines, who proposed the subject, was convinced that the if statement is absolutely evil. His alternative was to create functions with predicates. Can you please explain to me why if is evil?
I understand that you can write very ugly code abusing if. But I don't believe that it's that bad.
The if statement is rarely considered as "evil" as goto or mutable global variables -- and even the latter are actually not universally and absolutely evil. I would suggest taking the claim as a bit hyperbolic.
It also largely depends on your programming language and environment. In languages which support pattern matching, you will have great tools for replacing if at your disposal. But if you're programming a low-level microcontroller in C, replacing ifs with function pointers will be a step in the wrong direction. So, I will mostly consider replacing ifs in OOP programming, because in functional languages, if is not idiomatic anyway, while in purely procedural languages you don't have many other options to begin with.
Nevertheless, conditional clauses sometimes result in code which is harder to manage. This does not only include the if statement, but even more commonly the switch statement, which usually includes more branches than a corresponding if would.
There are cases where it's perfectly reasonable to use an if
When you are writing utility methods, extensions or specific library functions, it's likely that you won't be able to avoid ifs (and you shouldn't). There isn't a better way to code this little function, nor to make it more self-documenting than it already is:
// this is a good "if" use-case
int Min(int a, int b)
{
if (a < b)
return a;
else
return b;
}
// or, if you prefer the ternary operator
int Min(int a, int b)
{
return (a < b) ? a : b;
}
Branching over a "type code" is a code smell
On the other hand, if you encounter code which tests for some sort of a type code, or tests if a variable is of a certain type, then this is most likely a good candidate for refactoring, namely replacing the conditional with polymorphism.
The reason for this is that by allowing your callers to branch on a certain type code, you are creating the possibility of ending up with numerous checks scattered all over your code, making extensions and maintenance much more complex. Polymorphism, on the other hand, allows you to bring this branching decision as close to the root of your program as possible.
Consider:
// this is called branching on a "type code",
// and screams for refactoring
void RunVehicle(Vehicle vehicle)
{
// how the hell do I even test this?
if (vehicle.Type == CAR)
Drive(vehicle);
else if (vehicle.Type == PLANE)
Fly(vehicle);
else
Sail(vehicle);
}
By placing common but type-specific (i.e. class-specific) functionality into separate classes and exposing it through a virtual method (or an interface), you allow the internal parts of your program to delegate this decision to someone higher in the call hierarchy (potentially at a single place in code), allowing much easier testing (mocking), extensibility and maintenance:
// adding a new vehicle is gonna be a piece of cake
interface IVehicle
{
void Run();
}
// your method now doesn't care about which vehicle
// it got as a parameter
void RunVehicle(IVehicle vehicle)
{
vehicle.Run();
}
And you can now easily test if your RunVehicle method works as it should:
// you can now create test (mock) implementations
// since you're passing it as an interface
var mock = new Mock<IVehicle>();
// run the client method
something.RunVehicle(mock.Object);
// check if Run() was invoked
mock.Verify(m => m.Run(), Times.Once());
Patterns which only differ in their if conditions can be reused
Regarding the argument about replacing if with a "predicate" in your question, Haines probably wanted to mention that sometimes similar patterns exist throughout your code, differing only in their conditional expressions. Conditional expressions do emerge in conjunction with ifs, but the whole idea is to extract a repeating pattern into a separate method, leaving the expression as a parameter. This is what LINQ already does, usually resulting in cleaner code compared to an alternative foreach:
Consider these two very similar methods:
// average male age
public double AverageMaleAge(List<Person> people)
{
double sum = 0.0;
int count = 0;
foreach (var person in people)
{
if (person.Gender == Gender.Male)
{
sum += person.Age;
count++;
}
}
return sum / count; // not checking for zero div. for simplicity
}
// average female age
public double AverageFemaleAge(List<Person> people)
{
double sum = 0.0;
int count = 0;
foreach (var person in people)
{
if (person.Gender == Gender.Female) // <-- only the expression
{ // is different
sum += person.Age;
count++;
}
}
return sum / count;
}
This indicates that you can extract the condition into a predicate, leaving you with a single method for these two cases (and many other future cases):
// average age for all people matched by the predicate
public double AverageAge(List<Person> people, Predicate<Person> match)
{
double sum = 0.0;
int count = 0;
foreach (var person in people)
{
if (match(person)) // <-- the decision to match
{ // is now delegated to callers
sum += person.Age;
count++;
}
}
return sum / count;
}
var males = AverageAge(people, p => p.Gender == Gender.Male);
var females = AverageAge(people, p => p.Gender == Gender.Female);
And since LINQ already has a bunch of handy extension methods like this, you actually don't even need to write your own methods:
// replace everything we've written above with these two lines
var males = list.Where(p => p.Gender == Gender.Male).Average(p => p.Age);
var females = list.Where(p => p.Gender == Gender.Female).Average(p => p.Age);
In this last LINQ version the if statement has "disappeared" completely, although:
to be honest the problem wasn't in the if by itself, but in the entire code pattern (simply because it was duplicated), and
the if still actually exists, but it's written inside the LINQ Where extension method, which has been tested and closed for modification. Having less of your own code is always a good thing: less things to test, less things to go wrong, and the code is simpler to follow, analyze and maintain.
Huge runs of nested if/else statements
When you see a function spanning 1000 lines and having dozens of nested if blocks, there is an enormous chance it can be rewritten to
use a better data structure and organize the input data in a more appropriate manner (e.g. a hashtable, which will map one input value to another in a single call),
use a formula, a loop, or sometimes just an existing function which performs the same logic in 10 lines or less (e.g. this notorious example comes to my mind, but the general idea applies to other cases),
use guard clauses to prevent nesting (guard clauses give more confidence in the state of variables throughout the function, because they get rid of exceptional cases as soon as possible; see the sketch after this list),
at least replace with a switch statement where appropriate.
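As a small illustration of the guard-clause point above (the User type and doRegister method are hypothetical):
class RegistrationService {
    // Hypothetical collaborator, for illustration only.
    interface User { boolean isActive(); boolean isRegistered(); }

    void doRegister(User user) { /* ... */ }

    // Nested version: the "happy path" is buried three levels deep.
    void registerNested(User user) {
        if (user != null) {
            if (user.isActive()) {
                if (!user.isRegistered()) {
                    doRegister(user);
                }
            }
        }
    }

    // Guard-clause version: exceptional cases are dismissed up front.
    void registerWithGuards(User user) {
        if (user == null) return;
        if (!user.isActive()) return;
        if (user.isRegistered()) return;
        doRegister(user);
    }
}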
Refactor when you feel it's a code smell, but don't over-engineer
Having said all this, you should not spend sleepless nights over having a couple of conditionals here and there. While these answers can provide some general rules of thumb, the best way to become able to detect constructs which need refactoring is through experience. Over time, some patterns emerge that result in modifying the same clauses over and over again.
There is another sense in which if can be evil: when it comes instead of polymorphism.
E.g.
if (animal.isFrog()) croak(animal)
else if (animal.isDog()) bark(animal)
else if (animal.isLion()) roar(animal)
instead of
animal.emitSound()
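A minimal Java sketch of that polymorphic version (the class names are just illustrative):
interface Animal {
    void emitSound();
}

class Frog implements Animal {
    public void emitSound() { System.out.println("croak"); }
}

class Dog implements Animal {
    public void emitSound() { System.out.println("bark"); }
}

class Lion implements Animal {
    public void emitSound() { System.out.println("roar"); }
}

// The caller no longer branches on the animal's type:
// animal.emitSound() dispatches to the right implementation.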
But basically if is a perfectly acceptable tool for what it does. It can be abused and misused of course, but it is nowhere near the status of goto.
A good quote from Code Complete:
Code as if whoever maintains your program is a violent psychopath who
knows where you live.
— Anonymous
IOW, keep it simple. If the readability of your application will be enhanced by using a predicate in a particular area, use it. Otherwise, use the 'if' and move on.
I think it depends on what you're doing to be honest.
If you have a simple if..else statement, why use a predicate?
If you can, use a switch to replace larger if chains, and then, if there is the option of using a predicate for large operations (where it makes sense; otherwise your code will be a nightmare to maintain), use it.
This guy seems to have been a bit pedantic for my liking. Replacing all if's with Predicates is just crazy talk.
There is the Anti-If campaign, which started earlier in the year. The main premise is that many nested if statements can often be replaced with polymorphism.
I would be interested to see an example of using the Predicate instead. Is this more along the lines of functional programming?
Just like in the bible verse about money, if statements are not evil -- the LOVE of if statements is evil. A program without if statements is a ridiculous idea, and using them as necessary is essential. But a program that has 100 if-else if blocks in a row (which, sadly, I have seen) is definitely evil.
I have to say that I recently have begun to view if statements as a code smell: especially when you find yourself repeating the same condition several times. But there's something you need to understand about code smells: they don't necessarily mean that the code is bad. They just mean that there's a good chance the code is bad.
For instance, comments are listed as a code smell by Martin Fowler, but I wouldn't take anyone seriously who says "comments are evil; don't use them".
Generally though, I prefer to use polymorphism instead of if statements where possible. That just makes for so much less room for error. I tend to find that a lot of the time, using conditionals leads to a lot of tramp arguments as well (because you have to pass the data needed to form the conditional on to the appropriate method).
if is not evil (I also hold that assigning morality to code-writing practices is asinine...).
Mr. Haines is being silly and should be laughed at.
I'll agree with you; he was wrong. You can go too far with things like that, too clever for your own good.
Code created with predicates instead of ifs would be horrendous to maintain and test.
Predicates come from logical/declarative programming languages, like PROLOG. For certain classes of problems, like constraint solving, they are arguably superior to a lot of drawn out step-by-step if-this-do-that-then-do-this crap. Problems that would be long and complex to solve in imperative languages can be done in just a few lines in PROLOG.
There's also the issue of scalable programming (due to the move towards multicore, the web, etc.). If statements and imperative programming in general tend to execute in step-by-step order and are not scalable. Logical declarations and lambda calculus, though, describe how a problem can be solved and what pieces it can be broken down into. As a result, the interpreter/processor executing that code can efficiently break the code into pieces and distribute it across multiple CPUs/cores/threads/servers.
Definitely not useful everywhere; I'd hate to try writing a device driver with predicates instead of if statements. But yes, I think the main point is probably sound, and worth at least getting familiar with, if not using all the time.
The only problem with predicates (in terms of replacing if statements) is that you still need to test them:
void Test(Predicate<int> pr, int num)
{
if (pr(num))
{ /* do something */ }
else
{ /* do something else */ }
}
You could of course use the ternary operator (?:), but that's just an if statement in disguise...
Perhaps with quantum computing it will be a sensible strategy to not use IF statements but to let each leg of the computation proceed and only have the function 'collapse' at termination to a useful result.
Sometimes it's necessary to take an extreme position to make your point. I'm sure this person uses if -- but every time you use an if, it's worth having a little think about whether a different pattern would make the code clearer.
Preferring polymorphism to if is at the core of this. Rather than:
if(animaltype == bird) {
squawk();
} else if(animaltype == dog) {
bark();
}
... use:
animal.makeSound();
But that supposes that you've got an Animal class/interface -- so really what the if is telling you, is that you need to create that interface.
So in the real world, what sort of ifs do we see that lead us to a polymorphism solution?
if(logging) {
log.write("Did something");
}
That's really irritating to see throughout your code. How about, instead, having two (or more) implementations of Logger?
this.logger = new NullLogger(); // logger.log() does nothing
this.logger = new StdOutLogger(); // logger.log() writes to stdout
That leads us to the Strategy Pattern.
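A minimal Java sketch of those two Logger implementations (the interface is illustrative):
interface Logger {
    void log(String message);
}

class NullLogger implements Logger {
    public void log(String message) { /* intentionally does nothing */ }
}

class StdOutLogger implements Logger {
    public void log(String message) { System.out.println(message); }
}

// The caller simply writes this.logger.log("Did something");
// which logger is in play was decided once, where the logger was constructed.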
Instead of:
if(user.getCreditRisk() > 50) {
decision = thoroughCreditCheck();
} else if(user.getCreditRisk() > 20) {
decision = mediumCreditCheck();
} else {
decision = cursoryCreditCheck();
}
... you could have ...
decision = getCreditCheckStrategy(user.getCreditRisk()).decide();
Of course getCreditCheckStrategy() might contain an if -- and that might well be appropriate. You've pushed it into a neat place where it belongs.
It probably comes down to a desire to keep code cyclomatic complexity down, and to reduce the number of branch points in a function. If a function is simple to decompose into a number of smaller functions, each of which can be tested, you can reduce the complexity and make code more easily testable.
IMO:
I suspect he was trying to provoke a debate and make people think about the misuse of 'if'. No one would seriously suggest that such a fundamental construct of programming syntax should be completely avoided, would they?
Good that in ruby we have unless ;)
But seriously, if is probably the next goto: even though many people think it is evil, in some cases it simplifies and speeds things up (and in some cases, such as low-level, highly optimized code, it's a must).
I think If statements are evil, but If expressions are not. What I mean by an if expression in this case can be something like the C# ternary operator (condition ? trueExpression : falseExpression). This is not evil because it is a pure function (in a mathematical sense). It evaluates to a new value, but it has no effects on anything else. Because of this, it works in a substitution model.
Imperative If statements are evil because they force you to create side-effects when you don't need to. For an If statement to be meaningful, you have to produce different "effects" depending on the condition expression. These effects can be things like IO, graphic rendering or database transactions, which change things outside of the program. Or, it could be assignment statements that mutate the state of the existing variables. It is usually better to minimize these effects and separate them from the actual logic. But, because of the If statements, we can freely add these "conditionally executed effects" everywhere in the code. I think that's bad.
If is not evil! Consider ...
int sum(int a, int b) {
return a + b;
}
Boring, eh? Now with an added if ...
int sum(int a, int b) {
if (a == 0 && b == 0) {
return 0;
}
return a + b;
}
... your code creation productivity (measured in LOC) is doubled.
Also code readability has improved much, for now you can see in the blink of an eye what the result is when both arguments are zero. You couldn't do that in the code above, could you?
Moreover, you have supported the test team, for they can now push their code coverage tools further to the limits.
Furthermore, the code now is better prepared for future enhancements. Let's say, for example, that the sum should be zero if one of the arguments is zero (don't laugh and don't blame me -- silly customer requirements, you know, and the customer is always right).
Because of the if in the first place only a slight code change is needed.
int sum(int a, int b) {
if (a == 0 || b == 0) {
return 0;
}
return a + b;
}
How much more code change would have been needed if you hadn't introduced the if right from the start?
Thankfulness will be yours on all sides.
Conclusion: There's never enough if's.
There you go.
