Which is preferred: (var==null) or (null==var) [duplicate] - programming-languages

This question already has answers here:
Possible Duplicate: Conditional styles: if (0 == resultIndex) vs if (resultIndex == 0)
Closed 13 years ago.
I've seen both in code, and I don't know why one would be better than the other. Which do you prefer to use, and why?

The if(null == var) practice comes from C, where you could accidentally write if(var = null) (note the assignment operator), which would compile (since in C an assignment is just an expression and any non-zero value counts as true) but is a semantic error. So if(null == var) is a defensive practice: null is not an lvalue (again, C/C++ terminology), so it cannot be assigned to, and the typo becomes a compile error instead.
In modern languages (C# in particular) the compiler will warn you if you do an assignment inside an if statement.
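For illustration, a minimal sketch in Java (Java chosen here only for concreteness; the classic accident is a C idiom, and in Java the typo only compiles when the condition still ends up boolean-typed):
class YodaDemo {
    public static void main(String[] args) {
        boolean done = false;
        // Typo: '=' instead of '=='. This compiles, because the assignment
        // expression itself has type boolean, so the condition is always true.
        if (done = true) {
            System.out.println("always reached");
        }
        // The Yoda form refuses to compile, since a literal is not an lvalue:
        // if (true = done) { ... }   // compile-time error
    }
}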

I would say var==null because it's more like what you would actually say. "If my variable is null..." makes more sense than any variant of "If null is my variable..." or "If nothing is my variable..." etc.

I prefer to use (var==null) myself. It makes more sense to me, since I'm checking "if var is null". The other order reads to me as "if null is var", which never makes sense. I also like how VB.NET makes it quite readable, and makes a check for null different from other comparisons, with "If var Is Nothing".

They both have the same functionality and produce the same result. The choice is purely a personal one to make.
I personally prefer the first because "var is null" or "var equals null" reads better than "null is var" or "null equals var".
Some people prefer the second and claim it reduces typo-induced bugs because null = var causes an error while var = null still compiles. These days, however, compilers issue warnings for this, so I believe it is a moot point.

Typically I tend to go unknown value then known value when I am comparing two values. That way I see first what I am looking at, which gives context to the test. Most languages don't care, but I think some may.

Language-agnostic?
In Scheme, no ambiguity:
(null? var)

Related

Advantage to a certain string comparison order

Looking at some pieces of code around the internet, I've noticed some authors tend to write string comparisons like
if("String"==$variable)
in PHP, or
if("String".equals(variable))
Whereas my preference is:
if(variable.equals("String"))
I realize these are effectively equal: they compare two strings for equality. But I was curious if there was an advantage to one over the other in terms of performance or something else.
Thank you for the help!
One argument for the approach of using an equality function, or of writing if( constant == variable ) rather than if( variable == constant ), is that it prevents you from accidentally making a typo and writing an assignment instead of a comparison, for instance:
if( s = "test" )
Will assign "test" to s, resulting in undesired behaviour and potentially a hard-to-find bug. However:
if( "test" = s )
Will in most languages (that I'm aware of) result in some form of warning or compiler error, helping to avoid a bug later on.
With a simple int example, this prevents accidental writes of
if (a=5)
which would be a compile error if written as
if (5=a)
I certainly don't know about all languages, but decent C compilers warn you about if (a=b). Perhaps whatever language your question is written in doesn't have such a feature, so to be able to generate an error in such a case, people have reversed the order of the comparison arguments.
Yoda conditions, some call these.
The kind of syntax a language uses has nothing to do with efficiency. It is all about how the comparison algorithm works.
In the examples you mentioned, this:
if("String".equals(variable))
and this:
if(variable.equals("String"))
would be exactly the same, because the literal "String" is treated as a String object.
Languages that provide a comparison method for Strings will use the fastest method available, so you shouldn't worry about it unless you want to implement the method yourself ;)

Null != Object vs Object != Null

This question is merely a comparison between doing:
if(Null != Object){
/// code here.
}
if(Object != Null ){
/// code here.
}
Can anyone chime in on whether one conditional statement is superior to the other? Is it case dependent?
I've always used Object != Null, but I heard about Null != Object today and wondered what the difference is.
Language agnostic.
--
Thank you to all for chiming in!
They are logically identical.
I've known people who prefer Null != Object and the most common reasons they give are:
It's less likely to have a bug by using = instead of == because Null = <anything> will result in a compiler error.
It's unique and cool.
The first reason sort of flies (though I don't think it's worth the hit to code readability), but I'm very much opposed to the second reason. I personally prefer Object != Null for readability. It's considerably closer to the actual spoken form of the logic being expressed. (Keep in mind that code is written primarily to be read by humans, and only has a minor secondary purpose of being executed by machines.)
Some people like to use Null != Object because it's safer against typos. With the other form, you have a slightly higher risk of writing Object = Null, which might be a serious and hard-to-notice error.
Others prefer Object != Null because it reads more naturally in English.
But it's completely stylistic -- the two forms are functionally equivalent.
The reason is really because of C, and usually deals with the equality operator rather than the inequality operator. It's because in C, you have = and ==, which both do completely different things. The problem is it's very easy to mistake one for the other, so you might end up with the following code:
int *i = malloc(sizeof(int));
if (i=NULL) {
... do nothing?
} else {
free(i);
}
It's a very easy bug to miss: this code should use the == operator, but the compiler sees the assignment as valid, and you've got a bug in your code.
I don't know a reason to do this with the inequality operator though other than style.
Two things still have to be compared.

Disadvantages of using ternary operator

I have the following statements in my source code
int? tableField1;
int? tableField2;
int? propertyField1;
int? propertyField2;
if (tableField1 != null)
{
propertyField1 = tableField1;
}
if (tableField2 != null)
{
propertyField2 = tableField2;
}
// the above pattern is repeated for 10 tablefields ie tableField3, tableField4... tableField10
I reduced the above statements using the ternary operator as follows:
propertyField1 = tableField1 != null ? tableField1 : propertyField1;
propertyField2 = tableField2 != null ? tableField2 : propertyField2;
Following are my questions:
1) Is the ternary operator less efficient to use than the if statements?
2) What are the disadvantages (if any) of using the ternary operator?
Why not use the null coalescing operator instead?
propertyField1 = tableField1 ?? propertyField1;
Admittedly it looks slightly odd assigning the original value back to the same variable. It's possible that it'll be very slightly less efficient than the if statements, as in theory you're reading the value and assigning it again... but I wouldn't be surprised if the JIT elided that. Anyway, that's definitely at the level of micro-optimization.
Some people consider the conditional operator to be bad for readability - I generally think it's fine for simple statements like this, although it does somewhat obscure the meaning of "only change the value if we've got a new one".
1) No, the compiler will (should) treat the ternary operator in exactly the same manner as an explicit if statement, so there is no difference in efficiency.
2) The only disadvantage is that it can make your code harder to read. On the other hand, it's more concise. Generally avoid ternary statements for readability reasons, except for very simple cases. This probably counts as a very simple case.
I claim this answer is more language-agnostic than Jon Skeet's :)
As far as I understand it, the ternary statement is mostly just syntactic sugar, though the compiler can optimize for it.
In this case, I believe it would probably be more efficient, though I'd guess that the compiler would optimize it to be about the same as an if statement.
Disadvantages? Well, it might be harder to read in some cases. It depends on what you're going after.

Implications of not including NULL in a language?

I know that NULL isn't necessary in a programming language, and I recently made the decision not to include NULL in my programming language. Declaration is done by initialization, so it is impossible to have an uninitialized variable. My hope is that this will eliminate the NullPointerException in favor of more meaningful exceptions or simply not having certain kinds of bugs.
Of course, since the language is implemented in C, there will be NULLs used under the covers.
My question is: besides using NULL as an error flag (this is handled with exceptions) or as an endpoint for data structures such as linked lists and binary trees (this is handled with discriminated unions), are there any other use cases for NULL for which I should have a solution? Are there any really important implications of not having NULL which could cause me problems?
There's a recent article referenced on LtU by Tony Hoare titled Null References: The Billion Dollar Mistake which describes a method to allow the presence of NULLs in a programming language, but also eliminates the risk of referencing such a NULL reference. It seems so simple yet it's such a powerful idea.
Update: here's a link to the actual paper that I read, which talks about the implementation in Eiffel: http://docs.eiffel.com/book/papers/void-safety-how-eiffel-removes-null-pointer-dereferencing
Borrowing a page from Haskell's Maybe monad, how will you handle the case of a return value that may or may not exist? For instance, if you tried to allocate memory but none was available. Or maybe you've created an array to hold 50 foos, but none of the foos have been instantiated yet -- you need some way to be able to check for these kinds of things.
I guess you can use exceptions to cover all these cases, but does that mean that a programmer will have to wrap all of those in a try-catch block? That would be annoying at best. Or everything would have to return its own value plus a boolean indicating whether the value was valid, which is certainly not better.
FWIW, I'm not aware of any language that doesn't have some sort of notion of NULL -- you've got null in all the C-style languages and Java; Python has None; Scheme, Lisp, Smalltalk, Lua, and Ruby all have nil; VB uses Nothing; and Haskell has a different kind of nothing.
That doesn't mean a language absolutely has to have some kind of null, but if all of the other big languages out there use it, surely there was some sound reasoning behind it.
On the other hand, if you're only making a lightweight DSL or some other non-general language, you could probably get by without null if none of your native data types require it.
The one that immediately comes to mind is pass-by-reference parameters. I'm primarily an Objective-C coder, so I'm used to seeing things kind of like this:
NSError *error;
[anObject doSomething:anArgumentObject error:&error];
// Error-handling code follows...
After this code executes, the error object has details about the error that was encountered, if any. But say I don't care if an error happens:
[anObject doSomething:anArgumentObject error:nil];
Since I don't pass in any actual value for the error object, I get no results back, and I don't really worry about parsing an error (since I don't care in the first place if it occurs).
You've already mentioned you're handling errors a different way, so this specific example doesn't really apply, but the point stands: what do you do when you pass something back by reference? Or does your language just not do that?
I think it's useful for a method to return NULL - for example, a search method supposed to return some object can return the found object, or NULL if it wasn't found.
I'm starting to learn Ruby, and Ruby has a very interesting concept for NULL; maybe you could consider implementing something similar. In Ruby, NULL is called nil, and it's an actual object just like any other object. It happens to be implemented as a global singleton. Also in Ruby there is an object false, and both nil and false evaluate to false in boolean expressions, while everything else evaluates to true (even 0, for example, evaluates to true).
In my mind there are two use cases for which NULL is generally used:
The variable in question doesn't have a value (Nothing)
We don't know the value of the variable in question (Unknown)
Both are common occurrences and, honestly, using NULL for both can cause confusion.
Worth noting is that some languages that don't support NULL do support the notion of Nothing/Unknown. Haskell, for instance, supports "Maybe a", which can contain either a value of type a or Nothing. Thus, functions can return (and accept) a type that they know will always have a value, or they can return/accept "Maybe a" to indicate that there may not be a value.
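As a rough analogue of that idea in Java (a sketch only; the original answer doesn't mention Java, and Optional here merely stands in for Haskell's Maybe):
import java.util.Optional;

class Users {
    // The possible absence of a result is part of the return type, so callers
    // must handle the "no value" case explicitly instead of risking a null.
    static Optional<String> findUser(int id) {
        return id == 42 ? Optional.of("alice") : Optional.empty();
    }

    public static void main(String[] args) {
        String name = findUser(7).orElse("unknown"); // fall back when absent
        System.out.println(name);
    }
}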
I prefer the concept of having non-nullable pointers be the default, with nullable pointers a possibility. You can almost do this with C++ through references (&) rather than pointers, but it can get quite gnarly and irksome in some cases.
A language can do without null in the Java/C sense; for instance, Haskell (and most other functional languages) have a "Maybe" type, which is effectively a construct that makes the possibility of a missing value explicit.
It's not clear to me why you would want to eliminate the concept of 'null' from a language. What would you do if your app requires you to do some initialization 'lazily' - that is, you don't perform the operation until the data is needed? Ex:
public class ImLazy {
    private ResourceObject lazyObject;

    public ImLazy() {
        // I can't initialize resources in my constructor, because I'm lazy.
        // Maybe I don't have a network connection available yet, or maybe I'm
        // just not motivated enough.
    }

    public ResourceObject getLazyObject() { // initialize on first use, then return
        if (lazyObject == null) {
            lazyObject = new DatabaseNetworkResourceThatTakesForeverToLoad();
        }
        return lazyObject;
    }

    public boolean isObjectLoaded() { // just report whether the object is loaded
        return (lazyObject != null);
    }
}
In a case like this, how could we return a value for getLazyObject()? We could come up with one of two things:
- require the user to initialize lazyObject in the declaration. The user would then have to fill in some dummy object (UselessResourceObject), which requires them to write all of the same error-checking code (if (lazyObject.equals(UselessResourceObject)) ...); or
- come up with some other value which works the same as null, but has a different name.
For any complex/OO language you need this functionality, or something like it, as far as I can see. It may be valuable to have a non-null reference type (for example, in a method signature, so that you don't have to do a null check in the method code), but the null functionality should be available for cases where you do use it.
Interesting discussion happening here.
If I was building a language, I really don't know if I would have the concept of null. I guess it depends on how I want the language to look. Case in point: I wrote a simple templating language whose main strength is nested tokens and ease of making a token a list of values. It doesn't have the concept of null, but then it doesn't really have the concept of any types other than string.
By comparison, the language it is built in, Icon, uses null extensively. Probably the best thing the Icon language designers did with null is make it synonymous with an uninitialized variable (i.e. you can't tell the difference between a variable that doesn't exist and one that currently holds the value null). They then created two prefix operators to check for null and not-null.
In PHP, I sometimes use null as a 'third' boolean value. This is good in "black-box" type classes (e.g. ORM core) where a state can be True, False or I Don't Know. Null is used for the third value.
Of course, neither of these languages has pointers in the way C does, so null pointers do not exist.
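A rough Java analogue of that three-state idea (a sketch only; the class and names here are hypothetical, not from the original answer):
// A boxed Boolean can hold true, false, or null, where null stands for
// "I don't know yet" -- the third state described above.
class DirtyFlag {
    private Boolean isDirty = null;   // unknown until something tells us

    void markClean() { isDirty = Boolean.FALSE; }
    void markDirty() { isDirty = Boolean.TRUE; }

    String describe() {
        if (isDirty == null) return "unknown";
        return isDirty ? "dirty" : "clean";
    }
}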
We use nulls all the time in our application to represent the "nothing" case. For example, if you are asked to look up some data in the database given an id, and no record matches that id: return null. This is very handy because we can store nulls in our cache, which means we don't have to go back to the database if someone asks for that id again in a few seconds.
The cache itself has two different kinds of responses: null, meaning there was no such entry in the cache, or an entry object. The entry object might have a null value, which is the case when we cached a null db lookup.
Our app is written in Java, but even with unchecked exceptions doing this with exceptions would be incredibly annoying.
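A minimal sketch of that two-level cache idea in Java (hypothetical names throughout; the original answer doesn't show code):
import java.util.HashMap;
import java.util.Map;

class RecordCache {
    // Wrapper so that a cached "null lookup" is distinguishable from
    // "never looked up" (which shows up as no entry in the map at all).
    static class Entry {
        final Record value;          // may be null: the database had no such row
        Entry(Record value) { this.value = value; }
    }

    private final Map<Integer, Entry> entries = new HashMap<>();

    Record lookup(int id) {
        Entry e = entries.get(id);
        if (e == null) {                         // not cached yet: hit the database
            Record fromDb = queryDatabase(id);   // may itself be null
            entries.put(id, new Entry(fromDb));  // cache the null result too
            return fromDb;
        }
        return e.value;                          // cached, possibly a cached null
    }

    // Stand-ins so the sketch is self-contained.
    static class Record { }
    private Record queryDatabase(int id) { return null; }
}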
Suppose one accepts the following propositions: a powerful language should have some sort of pointer or reference type (i.e. something which can hold a reference to data which does not exist at compile time); it should have some form of array type (or other means of having a collection of storage slots which are addressable sequentially via integer index); and slots of the latter should be able to hold the former. Suppose one also accepts the possibility that one may have to read some slots of an array of pointers/references before sensible values exist for all of them. Then there will be programs which, from the compiler's perspective, read an array slot before a sensible value has been written to it (trying to ascertain in the general case whether an array slot could be read before it is written would be equivalent to the Halting Problem).
While it would be possible for a language to require that all array slots be initialized with some non-null reference before any of them could be read, in many situations there isn't really anything that could be stored which would be better than null: if an attempt is made to read an as-yet-unwritten array slot and dereference the (non)item contained there, that represents an error, and it is better to have the system trap that condition than to access some arbitrary object whose sole purpose for existence is to give the array slots some non-null thing to reference.
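A tiny Java illustration of the situation being described (assumption here: reference arrays start out filled with null, and reading a slot before writing it is only caught at run time):
class SlotDemo {
    public static void main(String[] args) {
        String[] names = new String[3];  // all three slots start out as null
        names[0] = "alice";              // only slot 0 ever gets a real value
        // The compiler cannot tell that slot 2 was never written; dereferencing
        // it is trapped at run time as a NullPointerException rather than
        // silently reading some arbitrary placeholder object.
        System.out.println(names[2].length());
    }
}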

When to use If-else if-else over switch statements and vice versa [duplicate]

This question already has answers here:
Advantage of switch over if-else statement (23 answers)
Eliminating `switch` statements [closed] (23 answers)
Is there any significant difference between using if/else and switch-case in C#? (21 answers)
Closed 2 years ago.
Why would you want to use a switch block over a series of if statements?
switch statements seem to do the same thing but take longer to type.
As with most things, you should pick which to use based on the context and what is conceptually the correct way to go. A switch is really saying "pick one of these based on this variable's value", whereas an if statement is just a series of boolean checks.
As an example, if you were doing:
int value = // some value
if (value == 1) {
doThis();
} else if (value == 2) {
doThat();
} else {
doTheOther();
}
This would be much better represented as a switch, as it then makes it immediately obvious that the choice of action is occurring based on the value of "value" and not some arbitrary test.
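For comparison, here is a minimal sketch of the same logic as a switch (Java-style syntax assumed, reusing the hypothetical doThis/doThat/doTheOther calls from above):
switch (value) {
    case 1:
        doThis();       // choice is visibly driven by the value itself
        break;
    case 2:
        doThat();
        break;
    default:
        doTheOther();   // everything else
        break;
}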
Also, if you find yourself writing switches and if-elses and using an OO language you should be considering getting rid of them and using polymorphism to achieve the same result if possible.
Finally, regarding switch taking longer to type, I can't remember who said it but I did once read someone ask "is your typing speed really the thing that affects how quickly you code?" (paraphrased)
If you are switching on the value of a single variable then I'd use a switch every time, it's what the construct was made for.
Otherwise, stick with multiple if-else statements.
concerning Readability:
I typically prefer if/else constructs over switch statements, especially in languages that allow fall-through cases. What I've found, often, is that as projects age and multiple developers get involved, you'll start having trouble with the construction of a switch statement.
If they (the statements) become anything more than simple, many programmers become lazy and, instead of reading the entire statement to understand it, they'll simply pop in a case to cover whatever case they're adding into the statement.
I've seen many cases where code repeats in a switch statement because a person's test was already covered and a simple fall-through case would have sufficed, but laziness pushed them to add redundant code at the end instead of trying to understand the switch. I've also seen some nightmarish switch statements with many poorly constructed cases, where simply trying to follow all the logic, with many fall-through cases dispersed throughout and many cases which weren't, becomes difficult ... which kind of leads back to the first problem: redundancy.
Theoretically, the same problem could exist with if/else constructs, but in practice this just doesn't seem to happen as often. Maybe (just a guess) programmers are forced to read a bit more carefully because you need to understand the, often, more complex conditions being tested within the if/else construct? If you're writing something simple that you know others are likely to never touch, and you can construct it well, then I guess it's a toss-up. In that case, whatever is more readable and feels best to you is probably the right answer because you're likely to be sustaining that code.
concerning Speed:
Switch statements often perform faster than if-else constructs (but not always). Since the possible values of a switch statement are laid out beforehand, compilers are able to optimize performance by constructing jump tables. Each condition doesn't have to be tested as in an if/else construct (well, until you find the right one, anyway).
This isn't always the case, though. If you have a simple switch, say with possible values of 1 to 10, it will hold. The more values you add, the larger the jump table must be and the less efficient the switch becomes (not less efficient than an if/else, but less efficient than the comparatively simple switch statement). Also, if the values are highly variant (i.e. instead of 1 to 10, you have 10 possible values of, say, 1, 1000, 10000, 100000, and so on up to 100000000000), the switch is less efficient than in the simpler case.
Hope this helps.
Switch statements are far easier to read and maintain, hands down. And are usually faster and less error prone.
Use switch every time you have more than 2 conditions on a single variable, take weekdays for example, if you have a different action for every weekday you should use a switch.
In other situations (multiple variables or complex if clauses) you should use ifs, but there isn't a hard rule on where to use each.
I personally prefer to see switch statements over too many nested if-elses because they can be much easier to read. Switches are also better in readability terms for showing a state.
See also the comment in this post regarding pacman ifs.
This depends very much on the specific case. Preferably, I think one should use the switch over the if-else if there are many nested if-elses.
The question is how much is many?
Yesterday I was asking myself the same question:
public enum ProgramType {
NEW, OLD
}
if (progType == OLD) {
// ...
} else if (progType == NEW) {
// ...
}
if (progType == OLD) {
// ...
} else {
// ...
}
switch (progType) {
case OLD:
// ...
break;
case NEW:
// ...
break;
default:
break;
}
In this case, the first if has an unnecessary second test. The second feels a little wrong because it hides the NEW case.
I ended up choosing the switch because it just reads better.
I have often thought that using elseif and dropping through case instances (where the language permits) are code odours, if not smells.
For myself, I have normally found that nested (if/then/else)s usually reflect things better than elseifs, and that for mutually exclusive cases (often where one combination of attributes takes precedence over another), case or something similar is clearer to read two years later.
I think the select statement used by Rexx is a particularly good example of how to do "Case" well (no drop-throughs) (silly example):
Select
When (Vehicle ¬= "Car") Then
Name = "Red Bus"
When (Colour == "Red") Then
Name = "Ferrari"
Otherwise
Name = "Plain old other car"
End
Oh, and if the optimisation isn't up to it, get a new compiler or language...
The tendency to avoid stuff because it takes longer to type is a bad thing; try to root it out. That said, overly verbose things are also difficult to read, so small and simple is important - but it's readability, not writability, that matters. Concise one-liners can often be more difficult to read than a simple, well laid out 3 or 4 lines.
Use whichever construct best describes the logic of the operation.
Let's say you have decided to use switch as you are only working on a single variable which can have different values. If this would result in a small switch statement (2-3 cases), I'd say that is fine. If it seems you will end up with more I would recommend using polymorphism instead. An AbstractFactory pattern could be used here to create an object that would perform whatever action you were trying to do in the switches. The ugly switch statement will be abstracted away and you end up with cleaner code.
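As a rough sketch of that idea (hypothetical names throughout; the original answer doesn't show code), each case becomes its own handler object and the switch disappears behind a lookup:
import java.util.Map;

interface Action {
    void perform();
}

class SwitchFreeDispatcher {
    // One handler per value; adding a new case means adding an entry here,
    // not editing a growing switch statement.
    private static final Map<String, Action> ACTIONS = Map.of(
        "create", () -> System.out.println("creating..."),
        "delete", () -> System.out.println("deleting...")
    );

    static void dispatch(String command) {
        ACTIONS.getOrDefault(command, () -> System.out.println("unknown command"))
               .perform();
    }

    public static void main(String[] args) {
        dispatch("create");
        dispatch("frobnicate");   // falls through to the default handler
    }
}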
