In a D compiler, what additional optimization possibilities, if any, does the “final switch” construct give over and above ordinary switch in D? (DLang)
Short answer: None.
Long answer:
The primary advantage of final switch over switch is that when you use it with a value of an enum type, it gives you an error when you're missing one of the enum members, so you can be sure that you have them all covered. And if the enum changes such that it has more members or fewer, you'll know that you need to update your switch statement.
Aside from that, semantically, a final switch is pretty much the same thing as a switch statement with a default case that has assert(0) - it's just that it throws a SwitchError rather than asserting 0. The effect is essentially the same though - your program dies when the switch statement is given a value that's not covered by any of the cases.
The only reason I can really think of to use final switch with anything other than an enum is so that you don't have to write the default case when the case statements are supposed to cover all of the possible cases. And considering that at least some of the time, you could provide a more meaningful message with an assertion than a SwitchError gives you (No appropriate switch clause found), I'd be inclined to argue that it would be better to have an explicit default case with an assert(0) and a message than to use a final switch when the switch statement is not operating on an enum.
Now, as to your question about optimizations, I don't think that there's any chance that final switch provides any optimizations over a normal switch. In order to throw a SwitchError when the final switch is given a value that's not covered by any of the case statements, the final switch would have to be lowered to a normal switch statement with a default case that throws a SwitchError.
So, in terms of the resulting code, there's really no difference between a final switch and a normal switch that has a default case that throws a SwitchError. Optimization-wise, final switch is in a similar boat to a switch statement with a default case that asserts 0 (though assert(0) is probably more likely to enable an optimization than throwing a SwitchError, because the compiler can assume that the program will die when an assertion is explicitly false, whereas a program could catch an Error and continue, even if it shouldn't).
I don't know whether the compiler can do any optimizations based on the guarantee that the default case will kill the program if it's ever hit. If it can, then there may be some optimization that can be garnered by using final switch, but a regular switch with a default case that asserted 0 or threw an Error would be in the same boat as final switch. So, there's nothing magical about final switch in that respect. The magic of final switch is in catching bugs with enums.
Really, I'd suggest that you only use final switch when you're dealing with an enum type with a fixed set of values, so that you can catch when the cases don't match the enum members, and that otherwise you just don't use it. Honestly, I was surprised to find out that final switch accepted anything other than enums.
Looking at the final switch docs (http://dlang.org/spec/statement.html#FinalSwitchStatement), it just allows the same optimisations as switch in C: the compiler can lay out labels in the code and jump straight to one of them (something like a computed goto or jump table) based on the variable's value. switch in D is more general than switch in C and does not always allow such an optimisation, because a D switch can use case values that are initialised at runtime.
I have a UML book and I am reading about nested fragments. It shows an example of a nested fragment, but what I don't get is why it says: "If the condition 'cancellation not successful' is true when entering this fragment (i.e. cancellation was unsuccessful), the interaction within this fragment is no longer executed."
What I have learned before is that a condition should be true for the interaction to be executed. But in this case the book says the opposite (it says the condition should be false for the interaction to be executed).
See the image here: https://ibb.co/CmstLcX
I think this is simply a typo in the book. The diagram makes sense, but the text describes nonsense. While the condition is true, the messages in the loop will happen up to three times.
Maybe the author got confused, because the message immediately before the loop is Cancellation. I assume the loop guard is referring to the success of the Order cancellation message, not to this Cancellation.
By the way, reply messages need to have the name of the original message. Most people get this wrong (and so do some textbooks). I'll grant that the text on the reply message is often meant to be the return value. As such, it should be separated from the name of the message with a colon. In your case it is probably not necessary to repeat the message name, even though it is technically required. However, the colon is mandatory.
If it is the return value, all reply messages in the loops return the value Acceptance. I wonder how the guard can then evaluate to true after the first time. Maybe the diagram is only showing the scenario for this case. That is perfectly OK; a sequence diagram almost never shows all possible scenarios. However, then the loop doesn't make sense. I guess the author didn't mean to return a specific value.
Or maybe it is the assignment target. In this case, it should look like this: Acceptance=Order Cancellation. Then the Acceptance attribute of the Dispatcher Workstation would be filled with whatever gets returned by the Order Cancellation message. Of course, then I would expect this attribute to be used in the guard, like this [not Acceptance].
A third possibility is that the author didn't mean synchronous communication and just wanted to send signals. The Acceptance signal could well contain an attribute Cancellation not successful. In that case, of course, there would be no filled arrowheads and no dashed lines.
I can even think of a fourth possibility. Maybe the author wanted to show the name of an out parameter of the called operation. But this would officially look like this: Order Cancellation(Acceptance). Again the name of the message could be omitted, but the round brackets are needed to make the intention clear.
I think the diagram leaves a lot more questions open than you asked.
It only says "If the cancellation was not successful, then" (do exception handling). This is pretty straightforward: the guard [cancellation not successful] being true simply means the cancellation failed. Since it is within a break fragment, the fragment's contents will be performed and, as a result, will break out of the fragment where it is contained (usually some loop). Having a break just stand alone seems a bit odd (not to say wrong).
UML 2.5 p. 581:
17.6.3.9 Break
The interactionOperator break designates that the CombinedFragment represents a breaking scenario in the sense that the operand is a scenario that is performed instead of the remainder of the enclosing InteractionFragment. A break operator with a guard is chosen when the guard is true and the rest of the enclosing Interaction Fragment is ignored. When the guard of the break operand is false, the break operand is ignored and the rest of the enclosing InteractionFragment is chosen. The choice between a break operand without a guard and the rest of the enclosing InteractionFragment is done non-deterministically.
A CombinedFragment with interactionOperator break should cover all Lifelines of the enclosing InteractionFragment.
Except for that: you should not take fragments too seriously. Graphical programming is nonsense. Make only sparing use of any such constructs, and only if they help understanding certain behavior. Do not get tempted to re-document existing code this way. Code is much more dense and better to read. YMMV
In some languages, optimization is allowed to change the program execution result. For example,
C++11 has the concept of "copy-elision" which allows the optimizer to ignore the copy constructor (and its side-effects) in some circumstances.
Swift has the concept of "imprecise lifetimes" which allows the optimizer to release objects at any time after last usage before the end of lexical scope.
In both cases, optimizations are not guaranteed to happen, therefore the program execution result can be significantly different based on the optimizer implementations (e.g. debug vs. release build)
Copying can be skipped, object can die while a reference is alive. The only way to deal with these behaviors is by being defensive and making your program work correctly regardless if the optimizations happen or not. If you don't know about the existence of this behavior, it's impossible to write correct programs with the tools.
This is different from "random operations" which are written by the programmer to produce random results intentionally. These behaviors are (1) performed by the optimizer and (2) can change the execution result regardless of the programmer's intention. This is intentional on the language designers' part, for better performance: a sort of trade-off between performance and predictability.
Does Rust have (or consider) any of this kind of behavior, i.e. any optimization that is allowed to change the program's execution result for better performance? If it has any, what is the behavior and why is it allowed?
I know the term "execution result" could be vague, but I don't know a proper term for this. I'm sorry for that.
I'd like to collect every potential case here, so everyone can be aware of them and be prepared for them. Please post any case as an answer (or comment) if you think your case produces different results.
I think all arguable cases are worth mentioning, because someone could be helped a lot by reading the case details.
If you restrict yourself to safe Rust code, the optimizer shouldn't change the program result. Of course there are some optimizations that can be observable due to their very nature. For example removing unused variables can mean your code overflows the stack without optimizations, while everything will fit on the stack when compiled with optimizations. Or your code may just be too slow to ever finish when compiled without optimizations, which is also an observable difference. And with unsafe code triggering undefined behaviour anything can happen, including the optimizer changing the outcome of your code.
There are, however, a few cases where program execution can change depending on whether you are compiling in debug mode or in release mode:
Integer overflow will result in a panic in debug build, while integers wrap around according to the two's complement representation in release mode – see RFC 650 for details. This behaviour can be controlled with the -C overflow-checks codegen option, so you can disable overflow checks in debug mode or enable them in release mode if you want to.
The debug_assert!() macro defines assertions that are only executed in debug mode. There's again a manual override using the -C debug-assertions codegen option.
Your code can check whether debug assertions are enabled using the debug_assertions configuration option (e.g. with cfg!(debug_assertions)).
These are all related to debug assertions in some way, but this list is not exhaustive. You can probably also inspect the environment to determine whether the code is compiled in debug or release mode, and change the behaviour based on this.
None of these examples really fall into the same category as your examples in the original question. Safe Rust code should generally behave the same regardless of whether you compile in debug mode or release mode.
There are far fewer foot-guns in Rust when compared to C++. In general, they revolve around unsafe, raw pointers and lifetimes derived from them or any form of undefined behavior, which is really undefined in Rust as well. However, if your code compiles (and, if in doubt, passes cargo miri test), you most likely won't see surprising behavior.
Two examples that come to mind which can be surprising:
The lifetime of a MutexGuard; the example comes from the book:
while let Ok(job) = receiver.lock().unwrap().recv() {
    job();
}
One might think/hope that the Mutex on the receiver is released once a job has been acquired and job() executes while other threads can receive jobs. However, due to the way value-expressions in place-expressions contexts work in conjunction with temporary lifetimes (the MutexGuard needs an anonymous lifetime referencing receiver), the MutexGuard is held for the entirety of the while-block. This means only one thread will ever execute jobs.
If you do
loop {
    let job = receiver.lock().unwrap().recv().unwrap();
    job();
}
this will allow multiple threads to run in parallel. It's not obvious why: the difference is that in a let statement the temporary MutexGuard is dropped at the end of the statement, before job() runs, whereas in the while let the temporary lives for the whole body of the loop.
Multiple times there have been questions regarding const. The compiler gives no guarantee whether a const actually exists only once (as an optimization) or is instantiated wherever it is used. The second case is the way one should think about const; there is no guarantee that this is what the compiler does, though. So this can happen:
const EXAMPLE: Option<i32> = Some(42);

fn main() {
    assert_eq!(EXAMPLE.take(), Some(42));
    assert_eq!(EXAMPLE, Some(42)); // Where did this come from?
}
I'm new to programming and I have a conceptual question.
That is, can "exception" be perfectly replaced by "if..else"?
I know "exception" is for handling exceptional conditions that might cause an error or crash.
But we also use "if..else" to ensure the correctness of the values of variables, don't we?
Or can "exception" really be replaced by "if..else", with "exception" just having other benefits (like convenience)?
Thank you, and sorry for my poor English.
The biggest difference between exceptions and "if..else" is that exceptions pass up the call stack: an exception raised in one function can be caught in a caller any number of frames up the stack. Using "if" statements doesn't let you transfer control in this way; everything has to be handled in the same function that detected the condition.
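For example, here is a minimal Java sketch (the method names are invented for illustration): the exception is thrown inside parse(), passes straight through readConfigValue(), and is only caught in main(), two frames up.

public class Propagation {
    static int parse(String s) {
        // Integer.parseInt throws NumberFormatException for bad input;
        // this method doesn't handle it at all.
        return Integer.parseInt(s);
    }

    static int readConfigValue(String raw) {
        // No error handling here either; the exception just passes through.
        return parse(raw) * 2;
    }

    public static void main(String[] args) {
        try {
            System.out.println(readConfigValue("not a number"));
        } catch (NumberFormatException e) {
            // Caught two frames above where it was thrown.
            System.out.println("Bad input: " + e.getMessage());
        }
    }
}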
Most of your questions relate to Python, so here is an answer based on that fact.
In Python, it is idiomatic (or "pythonic") to use try-except blocks. We call this "EAFP": Easier to ask for forgiveness than permission.
In C, where there were no exceptions, it was usual to "LBYL": Look before you leap, resulting in lots of if (...) statements.
So, while you can LBYL, you should follow the idioms of the language in which you are programming: using exceptions for handling exceptional cases and if-statements for conditionals.
Technically, the answer is yes, exceptions can be perfectly replaced by if-else. Many languages, C for example, have no native notion of exceptions that can be thrown and caught.
The primary advantage of exceptions is code readability and maintainability. They serve a different purpose than if-else. Exceptions are for exceptional conditions, while if-else is for program flow.
See this excellent article explaining the difference.
That's a lot of branch conditions to manage. In theory, exceptions aren't necessary for perfect code, but perfect code does not exist in real life. Exceptions are a well-established mechanism for dealing with problems in a controlled manner.
The old way for handling an error from a function looks something like this:
int result = function_returns_error_code();
if (result != GOOD)
{
    /* handle problem */
}
else
{
    /* keep going */
}
The problem with this solution (and others like it - using if-else) is that if there is a real problem, and the programmer does not properly handle it with an if...else (if the function returns an error code indicating major problems, but the programmer forgets about it), it is left ignored. With an exception, it goes further and further up the call stack until it is either handled or the program quits.
Further, it is tedious to check for error codes in functions, or pass a parameter into which to put an error code. It is simpler, cleaner, and better to use exceptions, for maintainability and abstraction.
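For comparison, here is a rough Java sketch of the same pattern with an exception instead of an error code (the names are invented for illustration). If the caller forgets the catch, the failure propagates instead of being silently ignored.

public class ExceptionVersion {
    // Hypothetical operation that signals failure by throwing
    // instead of returning an error code.
    static void doWork(boolean fail) {
        if (fail) {
            throw new IllegalStateException("something went wrong");
        }
        System.out.println("keep going");
    }

    public static void main(String[] args) {
        try {
            doWork(true);
        } catch (IllegalStateException e) {
            System.out.println("handle problem: " + e.getMessage());
        }
    }
}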
In most high-level languages working with exceptions is often more efficient than if-else because you avoid multiple validation, e.g.:
if value is not 0 then print 10 / value
In most interpreters 10 / value will internally test whether value is a valid divisor before using it, so you've actually tested for the same problem twice. In some cases the exception may come all the way up from hardware, so no software validation is happening at all.
On the other hand:
try print 10 / value ... catch exception
Will only test whether value is valid once. Furthermore there's a good chance the test will be better optimised than your own code and more capable of handling truly unexpected conditions (like out of memory errors).
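To make the two styles concrete, here is a small Java sketch of the divisor example (in Java, integer division by zero throws ArithmeticException, so the runtime performs its own check either way):

public class DivideStyles {
    // Look before you leap: we test value ourselves, and the runtime
    // effectively re-checks it when the division executes.
    static void lbyl(int value) {
        if (value != 0) {
            System.out.println(10 / value);
        }
    }

    // Ask forgiveness: just attempt the division and let the runtime's
    // own zero check raise the error.
    static void eafp(int value) {
        try {
            System.out.println(10 / value);
        } catch (ArithmeticException e) {
            System.out.println("cannot divide by " + value);
        }
    }

    public static void main(String[] args) {
        lbyl(0); // prints nothing
        eafp(0); // prints "cannot divide by 0"
        eafp(5); // prints 2
    }
}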
I found the following code in my team's project:
Public Shared Function isRemoteDisconnectMessage(ByRef m As Message)
    isRemoteDisconnectMessage = False
    Select Case (m.Msg)
        Case WM_WTSSESSION_CHANGE
            Select Case (m.WParam.ToInt32)
                Case WTS_REMOTE_DISCONNECT
                    isRemoteDisconnectMessage = True
            End Select
    End Select
End Function
Never mind that the function doesn't have a return type (I can easily add 'As Boolean'); what I'm wondering is, could there be any reason to prefer the above over the following (to me, much more readable) code?
Public Shared Function isRemoteDisconnectMessage(ByRef m As Message) As Boolean
    Return m.Msg = WM_WTSSESSION_CHANGE AndAlso _
        m.WParam.ToInt32() = WTS_REMOTE_DISCONNECT
End Function
To put the question in general terms: Does it make sense to use a switch (or, in this case, Select Case) block--and/or nested blocks--to test a single condition? Is this possibly faster than a straightforward if?
If you're worried about performance...profile. Otherwise you can't go wrong erring on the side of readability...
I don't believe it actually matters in terms of speed, the compiler should be able to optimize it.
I think it would just be a matter of preference.
My rule of thumb is to use a switch statement when the number of if/else conditions is greater than three. I don't have any data behind why this makes sense other than readability/maintainability seems to decrease as the number of if/else conditions increases.
I think the answer in the specific case you've given is no - it doesn't make sense, as suggested in other answers one would hope that the compilers would optimise away any practical differences.
I'd put money on this being a bit of cut, paste and delete coding - taking a generalised set of nested case statements and extracting that one bit that gives you the yes/no result you need.
If this were something similar inline, and/or there were a function call where the return flag is set, then one might possibly start to justify it, but not as it is.
Why would you want to use a switch block over a series of if statements?
switch statements seem to do the same thing but take longer to type.
As with most things, you should pick which to use based on the context and what is conceptually the correct way to go. A switch is really saying "pick one of these based on this variable's value", but an if statement is just a series of boolean checks.
As an example, if you were doing:
int value = // some value
if (value == 1) {
    doThis();
} else if (value == 2) {
    doThat();
} else {
    doTheOther();
}
This would be much better represented as a switch, as it makes it immediately obvious that the choice of action is occurring based on the value of "value" and not some arbitrary test.
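For instance, a self-contained Java-style sketch of the same choice written as a switch (the doThis()/doThat()/doTheOther() bodies are just placeholders):

public class SwitchExample {
    static void doThis()     { System.out.println("this"); }
    static void doThat()     { System.out.println("that"); }
    static void doTheOther() { System.out.println("the other"); }

    static void handle(int value) {
        // The choice of action is visibly driven by the value of "value".
        switch (value) {
            case 1:
                doThis();
                break;
            case 2:
                doThat();
                break;
            default:
                doTheOther();
                break;
        }
    }

    public static void main(String[] args) {
        handle(1);
        handle(2);
        handle(42);
    }
}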
Also, if you find yourself writing switches and if-elses and using an OO language you should be considering getting rid of them and using polymorphism to achieve the same result if possible.
Finally, regarding switch taking longer to type, I can't remember who said it but I did once read someone ask "is your typing speed really the thing that affects how quickly you code?" (paraphrased)
If you are switching on the value of a single variable then I'd use a switch every time, it's what the construct was made for.
Otherwise, stick with multiple if-else statements.
concerning Readability:
I typically prefer if/else constructs over switch statements, especially in languages that allow fall-through cases. What I've found, often, is that as projects age and multiple developers get involved, you'll start having trouble with the construction of a switch statement.
If they (the statements) become anything more than simple, many programmers become lazy and instead of reading the entire statement to understand it, they'll simply pop in a case to cover whatever case they're adding into the statement.
I've seen many cases where code repeats in a switch statement because a person's test was already covered and a simple fall-through case would have sufficed, but laziness led them to add the redundant code at the end instead of trying to understand the switch. I've also seen some nightmarish switch statements with many poorly constructed cases, where simply trying to follow all the logic, with many fall-through cases dispersed throughout and many cases which weren't, becomes difficult... which kind of leads back to the first, redundancy problem I talked about.
Theoretically, the same problem could exist with if/else constructs, but in practice this just doesn't seem to happen as often. Maybe (just a guess) programmers are forced to read a bit more carefully because you need to understand the, often, more complex conditions being tested within the if/else construct? If you're writing something simple that you know others are likely to never touch, and you can construct it well, then I guess it's a toss-up. In that case, whatever is more readable and feels best to you is probably the right answer because you're likely to be sustaining that code.
concerning Speed:
Switch statements often perform faster than if-else constructs (but not always). Since the possible values of a switch statement are laid out beforehand, compilers are able to optimize performance by constructing jump tables. Each condition doesn't have to be tested as in an if/else construct (well, until you find the right one, anyway).
This isn't always the case, though. If you have a simple switch, say with possible values of 1 to 10, this will be the case. The more values you add, the larger the jump table has to be and the less efficient the switch becomes (not less efficient than an if/else, but less efficient than the comparatively simple switch statement). Also, if the values are highly variant (i.e. instead of 1 to 10 you have 10 possible values of, say, 1, 1000, 10000, 100000, and so on up to 100000000000), the switch is less efficient than in the simpler case.
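As a rough Java illustration of the dense vs. sparse point: if you compile the sketch below and inspect it with javap -c, the dense switch is typically compiled to a tableswitch (an indexed jump table), while the sparse one falls back to a lookupswitch (a search over key/offset pairs). The exact choice is up to the compiler, so treat this as an observation rather than a guarantee.

public class SwitchShapes {
    // Dense case values (1..5): javac normally emits a tableswitch,
    // i.e. an indexed jump table.
    static String dense(int value) {
        switch (value) {
            case 1:  return "one";
            case 2:  return "two";
            case 3:  return "three";
            case 4:  return "four";
            case 5:  return "five";
            default: return "other";
        }
    }

    // Sparse, widely spread case values: javac normally emits a
    // lookupswitch, which searches key/offset pairs instead.
    static String sparse(int value) {
        switch (value) {
            case 1:         return "one";
            case 1_000:     return "a thousand";
            case 1_000_000: return "a million";
            default:        return "other";
        }
    }

    public static void main(String[] args) {
        System.out.println(dense(3));     // three
        System.out.println(sparse(1000)); // a thousand
    }
}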
Hope this helps.
Switch statements are far easier to read and maintain, hands down. And are usually faster and less error prone.
Use a switch every time you have more than two conditions on a single variable. Take weekdays, for example: if you have a different action for every weekday, you should use a switch.
For other situations (multiple variables or complex if clauses) you should use ifs, but there isn't a rule on where to use each.
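For example, a small Java sketch of the weekday case (the actions are just placeholders):

import java.time.DayOfWeek;

public class WeekdayActions {
    // One action per weekday: a single variable with a small, fixed set
    // of values is exactly what switch was made for.
    static void planDay(DayOfWeek day) {
        switch (day) {
            case MONDAY:
                System.out.println("weekly planning");
                break;
            case FRIDAY:
                System.out.println("deploy and review");
                break;
            case SATURDAY:
            case SUNDAY:
                System.out.println("no work scheduled");
                break;
            default:
                System.out.println("regular work day");
                break;
        }
    }

    public static void main(String[] args) {
        planDay(DayOfWeek.MONDAY);
        planDay(DayOfWeek.SUNDAY);
    }
}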
I personally prefer to see switch statements over too many nested if-elses because they can be much easier to read. Switches are also better in readability terms for showing a state.
See also the comment in this post regarding pacman ifs.
This depends very much on the specific case. Preferably, I think one should use the switch over the if-else if there are many nested if-elses.
The question is how much is many?
Yesterday I was asking myself the same question:
public enum ProgramType {
    NEW, OLD
}

// Option 1: if / else if
if (progType == OLD) {
    // ...
} else if (progType == NEW) {
    // ...
}

// Option 2: if / else
if (progType == OLD) {
    // ...
} else {
    // ...
}

// Option 3: switch
switch (progType) {
    case OLD:
        // ...
        break;
    case NEW:
        // ...
        break;
    default:
        break;
}
In this case, the 1st if has an unnecessary second test. The 2nd feels a little bad because it hides the NEW case.
I ended up choosing the switch because it just reads better.
I have often thought that using elseif and dropping through case instances (where the language permits) are code odours, if not smells.
For myself, I have normally found that nested (if/then/else)s usually reflect things better than elseifs, and that for mutually exclusive cases (often where one combination of attributes takes precedence over another), case or something similar is clearer to read two years later.
I think the select statement used by Rexx is a particularly good example of how to do "Case" well (no drop-throughs) (silly example):
Select
  When (Vehicle ¬= "Car") Then
    Name = "Red Bus"
  When (Colour == "Red") Then
    Name = "Ferrari"
  Otherwise
    Name = "Plain old other car"
End
Oh, and if the optimisation isn't up to it, get a new compiler or language...
The tendency to avoid stuff because it takes longer to type is a bad thing; try to root it out. That said, overly verbose things are also difficult to read, so small and simple is important, but it's readability, not writability, that's important. Concise one-liners can often be more difficult to read than a simple, well laid out 3 or 4 lines.
Use whichever construct best describes the logic of the operation.
Let's say you have decided to use switch as you are only working on a single variable which can have different values. If this would result in a small switch statement (2-3 cases), I'd say that is fine. If it seems you will end up with more I would recommend using polymorphism instead. An AbstractFactory pattern could be used here to create an object that would perform whatever action you were trying to do in the switches. The ugly switch statement will be abstracted away and you end up with cleaner code.
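A rough Java sketch of that refactoring, with invented names: the switch over a kind string becomes a map of handler objects built in one place (the "factory" part), and each former case gets its own class.

import java.util.Map;

public class ActionFactory {
    // Each former case becomes its own implementation.
    interface Action {
        void perform();
    }

    static class NewOrderAction implements Action {
        public void perform() { System.out.println("handling new order"); }
    }

    static class CancelOrderAction implements Action {
        public void perform() { System.out.println("handling cancellation"); }
    }

    // The lookup that used to be a switch statement.
    private static final Map<String, Action> ACTIONS = Map.of(
            "new",    new NewOrderAction(),
            "cancel", new CancelOrderAction());

    static void handle(String kind) {
        // Unknown kinds are still possible, so keep one explicit fallback.
        Action action = ACTIONS.getOrDefault(
                kind, () -> System.out.println("unknown kind: " + kind));
        action.perform();
    }

    public static void main(String[] args) {
        handle("new");
        handle("cancel");
        handle("whatever");
    }
}

Adding a new kind then means adding a class and a map entry rather than editing an ever-growing switch statement.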