Why does the default value of a boolean variable tend to be false? [closed] - programming-languages

As far as I'm aware, the default value of a boolean variable in C#, VB, Java and JavaScript is false (or perhaps "behaves like false" is more accurate in the case of JavaScript) and I'm sure there are many other languages in which this is the case.
I'm wondering why this is. Why do language designers pick false as the default? For numerical values I can see that zero is a logical choice, but I don't see that false is any more natural a state than true.
And as an aside, are there any languages in which the default is true?

From the semantic point of view, boolean values represent a condition or a state. Many languages assume that, if not initialized, the condition is not met (or the state is empty, or whatever). It serves as a flag. Think about it the other way around: if the default value for a boolean were true, the semantics of that language would tell you that any condition is initially satisfied, which is illogical.
From the practical point of view, programming languages often store boolean values internally as a bit (0 for false, 1 for true), so the same zero-default rule that applies to numeric types carries over to booleans.
Java's default value for boolean instance variables is always false, but that doesn't apply to local variables: you're required to initialize them before use, as shown below.
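A minimal sketch of that difference (assuming a plain Java class; the field and variable names are just for illustration):

public class Defaults {
    boolean flag;   // instance field: implicitly initialized to false
    int count;      // numeric fields likewise default to 0

    void demo() {
        boolean local;
        // System.out.println(local);  // compile error: local might not have been initialized
        System.out.println(flag);      // prints "false" with no explicit assignment anywhere
    }
}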

Related

Why doesn't rust allow float/int division by default? [closed]

The following snippet:
let a: f32 = 2.0;
let b: i32 = 12;
println!("{}",a/b);
fails to compile, with the error message indicating that there is "no implementation for `f32 / i32`". Now, I understand what this error means, and that I could easily fix it by casting b before dividing. More to the point, the compiler also tells me that I could fix this without modifying the snippet above by implementing the trait Div<i32> for f32.
I don't actually need to divide ints by floats in this manner, but the compiler's message I got made me curious enough to ask the following question: why isn't Div<i32> already implemented for f32?
Of course it would be pretty easy for anyone to implement this themselves, but I assume it means something that it's not a default feature. Is there some complication with the implementation I'm not thinking of? Or does the possibility of f32/i32 division somehow lead to language gotchas? Or maybe it's just that Rust is more "barebones" in this regard than I assumed?
There are multiple reasons. For one, it's not clear what the return type should be. Should 12 / 2.0 return a float or an integer? What about 12.0 / 2? Many languages opt to just return floats, but this results in hidden conversion costs. Rust as a language tries to be very explicit, especially where an abstraction is not zero-cost.
There is also type safety to consider. Sometimes doing arithmetic between ints and floats indicates a logic error.
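To make the "hidden conversion" point concrete, here is a hedged illustration in Java, one of the languages that silently widens the integer operand (the class name is invented):

public class MixedDivision {
    public static void main(String[] args) {
        double a = 2.0;
        int b = 12;
        // Java implicitly converts b to double before dividing, so this compiles
        // and prints 0.16666...; the conversion is real, it is just invisible here.
        System.out.println(a / b);
    }
}

Rust instead makes you spell the conversion out, for example by casting b to f32 before dividing, which is exactly the fix the question already mentions.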

Differentiating between string declaration vs char declaration [closed]

In some languages, single quotes are used to define characters and double quotes are used to define strings. In other languages, both single and double quotes are used to define strings.
Do languages that use single and double quotes to define strings often offer an explicit way to define a single character?
Are there any implications to not being able to specifically define a character? Is it acceptable - or desirable - to automatically optimize single character strings into characters?
If the language has a character data type, then there is usually a way to define a character literal.
In VB.NET for example, a character literal looks like a single character string, but with the C suffix:
Dim space As Char = " "C
(The reason that apostrophes were not used for character literals in VB.NET, as they are in, for example, C#, is that the apostrophe is used as shorthand for the REM comment command.)
In JavaScript, for example, there is no character data type, so there is no way to specify a character literal. You would represent a character either as a single-character string or as the numerical character code.
Automatically optimising a single-character string into a character would likely not be a good solution unless you also made the automatic conversion back to a string when needed. In practice, however, that would be the same as automatically converting a single-character string to a character when needed.
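As another hedged example of the point above that a language with a character type usually has a character literal: in Java the two kinds of quotes give you two distinct types, so the single-character-string question never arises (identifier names are just for illustration):

public class CharVsString {
    public static void main(String[] args) {
        char space = ' ';      // a primitive 16-bit character value
        String s = " ";        // a String object of length 1
        // Converting between the two is always an explicit step:
        String fromChar = String.valueOf(space);   // char   -> String
        char fromString = s.charAt(0);             // String -> char
        System.out.println(fromChar.equals(s));    // true
        System.out.println(fromString == space);   // true
    }
}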

Stuff that programming languages do not allow in their syntax [closed]

There are some things that I never see in any programming language, and I would like to know why. I believe these things could be useful. Well, maybe the explanation will be obvious once someone points it out. But let's go.
Why isn't 10² valid syntax?
Sometimes we want to express a value using such notation (just as in a paper) instead of the pre-computed value (which can be a big number that is hard to read at first glance; I believe that readability is the purpose of the _ digit separator in the D and Java programming languages), or else we have to call math functions for it. Of course, I mean that the compiler should replace the expression with the computed value; don't leave it to run time.
The - in an identifier. Why isn't - acceptable the way _ is? (Only Lisp dialects allow it.) To me, int name-size = 14; does not seem unreadable. Or is this "limitation" an artifact of the computer's character set?
I would be very happy if someone could answer my questions. Also, if you have another point to raise, just edit my post and leave a note about the edit, or post it as a comment.
Okay, so the two specific questions you've given:
10² - how would you expect to type this? Programming languages tend to stick to ASCII for everything but identifiers. Note that you can use double x = 10e2; in Java and C#... but the e form is only valid for floating-point literals, not integers.
As noted in comments, exponentiation is supported in some languages - but I suspect it just wasn't deemed sufficiently useful to be worth the extra complexity in most.
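A hedged sketch of those workarounds in Java (the constant names are invented for illustration):

public class PowerNotation {
    // Scientific notation is a double literal: 10e2 means 10 * 10^2 = 1000.0,
    // so it is not the same number as 10^2.
    static final double THOUSAND = 10e2;

    // 10^2 itself has to be computed, e.g. with Math.pow (which returns a double) ...
    static final double HUNDRED = Math.pow(10, 2);

    // ... while the _ separator mentioned in the question only helps readability.
    static final long TEN_BILLION = 10_000_000_000L;

    public static void main(String[] args) {
        System.out.println(THOUSAND);    // 1000.0
        System.out.println(HUNDRED);     // 100.0
        System.out.println(TEN_BILLION); // 10000000000
    }
}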
An identifier with a - in it leads to obvious ambiguity in languages with infix operators:
int x = 10;
int y = 4;
int x-y = 3;
int z = x-y;
Is z equal to 3 (the value of the x-y variable) or is it equal to 6 (the value of subtracting y from x)? Obviously you could come up with rules about what would happen, but by removing - from the list of valid characters in an identifier, this ambiguity is removed. Using _ or just casing (nameSize) is simpler than providing extra rules in the language. Where would you stop, anyway? What about . as part of an identifier, or +?
In general, you should be aware that languages can easily suffer from too many features. The C# team in particular have been quite open about how high the bar is for a new feature to make it into the language. Every new feature must be designed, specified, implemented, tested, documented, and then developers have to learn about it if they're going to understand code using it. This is not cheap, so good language designers are naturally conservative.
Can it be done?
2.⁷
1.617 * 10.ⁿ(13)
Apparently yes. You can modify languages such as Ruby (define UTF-8-named functions and monkey-patch the numeric classes) or create user-defined literals in C++ to achieve additional expressiveness.
Should it be done?
How would you type those characters?
Which Unicode code point would you use for, say, Euler's constant? U+2107?
I'd say we stick to code we can type and agree on.

Why does Forth use IF statement THEN ... instead of ENDIF? [closed]

Why does Forth use IF statement THEN ... instead of ENDIF?
I'm implementing a (non-conforming) Forth compiler thing. Basically, Forth's syntax appears very counter-intuitive to me regarding IF statements.
IF ." Statement is true"
ELSE ." Statement is not true"
THEN ." Printed no matter what"
Why is the ending statement a THEN? This makes the language read extremely weird to me. For my compiler, I'm considering changing it to something like ENDIF which reads more natural. But, what was the rationale behind having backwards IF-THEN statements in the first place?
Just think of it as, "IF that's the case, do this, ELSE do that ... and THEN continue with ..."
Or better yet, use quotations (as in Factor, RetroForth, ...) in which case it's completely postfix without special compile-time words; just regular words taking addresses from the stack: [ do this ] [ do that ] if or [ do this ] when or [ do that ] unless. I personally much prefer this.
Aside RE: quotations
Here is how quotations are compiled in RetroForth. In my own Forth (which compiles to my own VM), I simply added a QUOTE instruction that pushes the next address to the stack and jumps over n bytes. Those n bytes are expected to end with a RETURN instruction, and the if, when, unless words consume a predicate along with the address(es) left by the preceding quotations, calling them as appropriate. Very simple indeed, and quotations generally open the door to all kinds of beautiful abstractions away from thinking about the stack.

How is Lexical Scoping implemented? [closed]

A couple of years ago I started writing an interpreter for a little Domain Specific Language which included programmer-defined functions.
At first I implemented variable scope using a simple stack of symbol-tables. But now I want to move to proper lexical scoping (with the option of closures). Can anyone explain the data-structure and algorithm behind lexical scope?
To get correct lexical scoping and closures in an interpreter, all you need to do is follow these rules:
In your interpreter, variables are always looked up in an environment table passed in by the caller or kept as a variable, not some global env-stack. The signature of your eval operation is like eval(expression, env) => value.
When interpreted code calls a function, the environment is NOT passed to that function. The signature of your function application operation is like apply(function, arguments) => value.
When an interpreted function is called, the environment its body is evaluated in is the environment in which the function definition was made, and has nothing whatsoever to do with the caller. So if you have a local function, then it is a closure, that is, a data structure containing fields {function definition, env-at-definition-time}.
To expand on that last bit in Python-ish syntax:
x = 1
return lambda y: x + y
should be executed as if it were
x = 1
return make_closure(<AST for "lambda y: x + y">, {"x": x})
where the second dict argument may be just the current env rather than a data structure constructed at that time. (On the other hand, retaining the entire env rather than just the closed-over variables can mean programs have surprising memory leaks, because closures are holding onto things they don't need. This is worth fixing in any 'practical' language implementation, but not when you are just experimenting with language semantics.)
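A hedged Java sketch of the same machinery: an environment that chains to its enclosing lexical scope, and a closure object that pins the definition-time environment. All class and method names here are invented for illustration, and eval is left as a stub.

import java.util.HashMap;
import java.util.Map;

class Env {
    private final Map<String, Object> vars = new HashMap<>();
    private final Env parent;                       // enclosing lexical scope, or null

    Env(Env parent) { this.parent = parent; }

    void define(String name, Object value) { vars.put(name, value); }

    Object lookup(String name) {
        if (vars.containsKey(name)) return vars.get(name);
        if (parent != null) return parent.lookup(name);
        throw new RuntimeException("unbound variable: " + name);
    }
}

// A closure is just {function definition, env-at-definition-time}.
class Closure {
    final String param;       // one parameter, to keep the sketch small
    final Object bodyAst;     // whatever your AST node type is
    final Env definitionEnv;  // captured when the lambda expression is evaluated

    Closure(String param, Object bodyAst, Env definitionEnv) {
        this.param = param;
        this.bodyAst = bodyAst;
        this.definitionEnv = definitionEnv;
    }
}

class Interpreter {
    // apply never sees the caller's environment: the body runs in a new scope
    // chained to the environment captured at definition time.
    Object apply(Closure f, Object argument) {
        Env callEnv = new Env(f.definitionEnv);
        callEnv.define(f.param, argument);
        return eval(f.bodyAst, callEnv);
    }

    Object eval(Object expr, Env env) {
        // dispatch on the AST node type; stubbed out in this sketch
        throw new UnsupportedOperationException("eval not shown");
    }
}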
There are many different ways to implement lexical scoping. Here are some of my favorites:
If you don't need super-fast performance, use a purely functional data structure to implement your symbol tables, and represent a nested function by a pair containing a pointer to the code and a pointer to the symbol table.
If you need native-code speeds, my favorite technique is described in Making a Fast Curry by Simon Marlow and Simon Peyton Jones.
If you need native-code speeds, but curried functions are not that important, consider closure-passing style.
Read The implementation of Lua 5.0 for instance.
There is no single right way to do this. The important thing is to clearly state the semantics that you are looking to provide, and then the data structures and algorithms will follow.
Stroustrup implemented this in the first C++ compiler simply with one symbol table per scope, and a chaining rule that followed scopes outwards until a definition is found. How this works exactly depends on your precise semantics. Make sure you nail those down first.
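A hedged sketch of that scheme, with one table per scope and a resolve step that walks outwards (names invented; a real compiler would store full declaration records rather than a type string):

import java.util.HashMap;
import java.util.Map;

class SymbolTable {
    private final Map<String, String> declarations = new HashMap<>(); // name -> declared type
    private final SymbolTable enclosingScope;                         // null for the outermost scope

    SymbolTable(SymbolTable enclosingScope) { this.enclosingScope = enclosingScope; }

    void declare(String name, String type) { declarations.put(name, type); }

    // Chaining rule: look in this scope, then keep walking outwards until a definition is found.
    String resolve(String name) {
        if (declarations.containsKey(name)) return declarations.get(name);
        if (enclosingScope != null) return enclosingScope.resolve(name);
        return null;   // undeclared identifier
    }
}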
Knuth in The Art of Computer Programming, Vol 1, gives an algorithm for a Cobol symbol table whereby scoping is done via links.

Resources