In some languages, single quotes are used to define characters and double quotes are used to define strings. In other languages, both single and double quotes are used to define strings.
Do languages that use single and double quotes to define strings often offer an explicit way to define a single character?
Are there any implications to not being able to specifically define a character? Is it acceptable - or desirable - to automatically optimize single character strings into characters?
If the language has a character data type, then there is usually a way to define a character literal.
In VB.NET for example, a character literal looks like a single character string, but with the C suffix:
Dim space As Char = " "C
(The reason apostrophes were not used for character literals in VB.NET, as they are in C# for example, is that the apostrophe is already used as shorthand for the REM command, i.e. it starts a comment.)
In JavaScript, for example, there is no character data type, so there is no way to specify a character literal. You would represent a character either as a single-character string or as the numerical character code.
Automatically optimising a single-character string to a character would likely not be a good solution unless you also make the automatic conversion back to a string when needed. In practice, that would be the same as automatically converting a single-character string to a character when needed.
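For comparison, here is a short sketch in Java, a language that does have a character type; the variable names are purely illustrative:

public class CharVsString {
    public static void main(String[] args) {
        char space = ' ';            // a true character literal
        String spaceStr = " ";       // a single-character string: a distinct type
        int code = space;            // a char also converts to its numeric code (32)

        // Converting between the two is explicit rather than automatic:
        String fromChar = String.valueOf(space);
        char fromString = spaceStr.charAt(0);

        System.out.println(code + " [" + fromChar + "] [" + fromString + "]");
    }
}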
There are some things that I never see in any programming language, and I would like to know why. I believe these things could be useful. Maybe the explanation will be obvious once it is pointed out, but let's go.
Why isn't 10² valid syntax?
Sometimes we want to express a value using such notation (just as in a paper) instead of the pre-computed value (which is sometimes a big number that is hard to read at first sight; I believe that is the purpose of _ in the D and Java programming languages), or instead of calling math functions for it. Of course, I mean that the compiler should replace the expression with the computed value, not leave it until run time.
Why is - not acceptable in an identifier the way _ is? (Only Lisp dialects allow it.) To me, int name-size = 14; is not unreadable. Or is this "limitation" due to the character set of the computer?
I will be happy if someone answers my questions. Also, if you have another point to raise, just edit my question and post a note about the edit, or post it as a comment.
Okay, so the two specific questions you've given:
10² - how would you expect to type this? Programming languages tend to stick to ASCII for all but identifiers. Note that you can use double x = 10e2; in Java and C#... but the e form is only valid for floating point literals, not integers.
As noted in comments, exponentiation is supported in some languages - but I suspect it just wasn't deemed sufficiently useful to be worth the extra complexity in most.
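For concreteness, here is a hedged Java sketch of the closest existing alternatives; the class and variable names are made up for illustration:

public class PowerLiterals {
    public static void main(String[] args) {
        double x = 10e2;                       // exponent notation, but always a floating-point value (1000.0)
        int precomputed = 100;                 // what you would normally write for 10²
        int viaMath = (int) Math.pow(10, 2);   // computed at run time, not by the compiler
        long readable = 1_000_000_000L;        // underscore separators (Java 7+) help with big literals
        System.out.println(x + " " + precomputed + " " + viaMath + " " + readable);
    }
}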
An identifier with a - in leads to obvious ambiguity in languages with infix operators:
int x = 10;
int y = 4;
int x-y = 3;
int z = x-y;
Is z equal to 3 (the value of the x-y variable) or is it equal to 6 (the value of subtracting y from x)? Obviously you could come up with rules about what would happen, but by removing - from the list of valid characters in an identifier, this ambiguity is removed. Using _ or just casing (nameSize) is simpler than providing extra rules in the language. Where would you stop, anyway? What about . as part of an identifier, or +?
In general, you should be aware that languages can easily suffer from too many features. The C# team in particular have been quite open about how high the bar is for a new feature to make it into the language. Every new feature must be designed, specified, implemented, tested, documented, and then developers have to learn about it if they're going to understand code using it. This is not cheap, so good language designers are naturally conservative.
Can it be done?
2.⁷
1.617 * 10.ⁿ(13)
Apparently yes. You can modify languages such as Ruby (define UTF-8-named functions and monkey-patch the numeric classes) or create user-defined literals in C++ to achieve additional expressiveness.
Should it be done?
How would you type those characters?
Which Unicode character would you use for, say, Euler's constant? U+2107?
I'd say we stick to code we can type and agree on.
As far as I'm aware, the default value of a boolean variable in C#, VB, Java and JavaScript is false (or perhaps "behaves like false" is more accurate in the case of JavaScript) and I'm sure there are many other languages in which this is the case.
I'm wondering why this is? Why do language designers pick false for the default? For numerical values, I can see that zero is a logical choice, but I don't see that false is any more a natural state than true.
And as an aside, are there any languages in which the default is true?
From the semantic point of view, boolean values represent a condition or a state. Many languages assume that, if not initialized, the condition is not met (or the state is empty, or whatever). It serves as a flag. Think about it the other way around: if the default value for a boolean were true, the semantics of that language would tell you that any condition is initially satisfied, which is illogical.
From the practical point of view, programming languages often internally store boolean values as a bit (0 for false, 1 for true), so the same rules for numeric types apply to booleans in this case.
Java's default value for boolean instance variables is always false, but that doesn't apply to local variables; you're required to initialize those before use.
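A minimal Java sketch of that difference (the class and field names are just illustrative):

public class Defaults {
    boolean flag;            // instance field: defaults to false
    static boolean enabled;  // static field: also defaults to false

    public static void main(String[] args) {
        System.out.println(new Defaults().flag);  // prints "false"
        System.out.println(enabled);              // prints "false"

        boolean local;
        // System.out.println(local);  // compile error: local might not have been initialized
        local = true;                  // local variables must be assigned before use
        System.out.println(local);     // prints "true"
    }
}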
As part of a code standards document I wrote a while back, I enforce "you must always use braces for loops and/or conditional code blocks, even (especially) if they're only one line."
Example:
// this is wrong
if (foo)
//bar
else
//baz
while (things)
//stuff
// This is right.
if (foo) {
// bar
} else {
// baz
}
while (things) {
// stuff
}
When you don't brace a single-line, and then someone comments it out, you're in trouble. If you don't brace a single-line, and the indentation doesn't display the same on someone else's machine... you're in trouble.
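For instance, here is a minimal Java sketch of the comment-out hazard; the method names are made up for illustration:

public class CommentOutHazard {
    public static void main(String[] args) {
        boolean isAdmin = false;

        // Original intent: only admins get access, every attempt is logged.
        if (isAdmin)
            grantAccess();
        logAttempt();

        // After someone "temporarily" comments out the body, the next
        // statement silently becomes the body of the if:
        if (isAdmin)
            // grantAccess();
        logAttempt();   // now only runs when isAdmin is true
    }

    static void grantAccess() { System.out.println("access granted"); }
    static void logAttempt()  { System.out.println("attempt logged"); }
}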
So, question: are there good reasons why this would be a mistaken or otherwise unreasonable standard? There's been some discussion on it, but no one can offer me a better counterargument than "it feels ugly".
The best counter argument I can offer is that the extra line(s) taken up by the space reduce the amount of code you can see at one time, and the amount of code you can see at one time is a big factor in how easy it is to spot errors. I agree with the reasons you've given for including braces, but in many years of C++ I can only think of one occasion when I made a mistake as a result and it was in a place where I had no good reason for skipping the braces anyway. Unfortunately I couldn't tell you if seeing those extra lines of code ever helped in practice or not.
I'm perhaps more biased because I like the symmetry of matching braces at the same indentation level (and the implied grouping of the contained statements as one block of execution) - which means that adding braces all the time adds a lot of lines to the project.
I enforce this up to a point, with minor exceptions for if statements whose body simply returns or continues a loop.
So, this is correct by my standard:
if(true) continue;
As is this
if(true) return;
But the rule is that it is either a return or continue, and it is all on the same line. Otherwise, braces for everything.
The reasoning is both for the sake of having a standard way of doing it, and to avoid the commenting problem you mentioned.
I see this rule as overkill. Draconian standards don't make good programmers, they just decrease the odds that a slob is going to make a mess.
The examples you give are valid, but they have better solutions than forcing braces:
When you don't brace a single-line, and then someone comments it out, you're in trouble.
Two practices solve this better, pick one:
1) Comment out the if, while, etc. before the one-liner with the one-liner. I.e. treat
if(foo)
bar();
like any other multi-line statement (e.g. an assignment with multiple lines, or a multiple-line function call):
//if(foo)
// bar();
2) Prefix the // with a ;:
if(foo)
;// bar();
If you don't brace a single-line, and the indentation doesn't display the same on someone else's machine... you're in trouble.
No, you're not; the code works the same but it's harder to read. Fix your indentation. Pick tabs or spaces and stick with them. Do not mix tabs and spaces for indentation. Many text editors will automatically fix this for you.
Write some Python code. That will fix at least some bad indentation habits.
Also, structures like } else { look like a nethack version of a TIE fighter to me.
are there good reasons why this would be a mistaken or otherwise unreasonable standard? There's been some discussion on it, but no one can offer me a better counterargument than "it feels ugly".
Redundant braces (and parentheses) are visual clutter. Visual clutter makes code harder to read. The harder code is to read, the easier it is to hide bugs.
int x = 0;
while(x < 10);
{
printf("Count: %d\n", ++x);
}
Forcing braces doesn't help find the bug in the above code.
P.S. I'm a subscriber to the "every rule should say why" school, or as the Dalai Lama put it, "Know the rules so that you may properly break them."
I have yet to have anyone come up with a good reason not to always use curly braces.
The benefits far exceed any "it feels ugly" reason I've heard.
Coding standard exist to make code easier to read and reduce errors.
This is one standard that truly pays off.
I find it hard to argue with coding standards that reduce errors and make the code more readable. It may feel ugly to some people at first, but I think it's a perfectly valid rule to implement.
I stand by the position that opening and closing braces should be aligned according to indentation.
// This is right.
if (foo)
{
// bar
}
else
{
// baz
}
while (things)
{
// stuff
}
As far as your two examples go, I'd consider yours slightly less readable, since finding the matching closing brace can be hard, but more readable in cases where indentation is incorrect, while also making it easier to insert logic. It's not a huge difference.
Even if indentation is incorrect, the if statement will execute the next command, regardless of whether it's on the next line or not. The only reason for not putting both commands on the same line is for debugger support.
The one big advantage I see is that it's easier to add more statements to conditionals and loops that are braced, and it doesn't take many additional keystrokes to create the braces at the start.
My personal rule is if it's a very short 'if', then put it all on one line:
if(!something) doSomethingElse();
Generally I use this only when there are a bunch of ifs like this in succession.
if(something == a) doSomething(a);
if(something == b) doSomething(b);
if(something == b) doSomething(c);
That situation doesn't arise very often though, so otherwise, I always use braces.
At present, I work with a team that lives by this standard, and, while I'm resistant to it, I comply for uniformity's sake.
I object for the same reason I object to teams that forbid use of exceptions or templates or macros: If you choose to use a language, use the whole language. If the braces are optional in C and C++ and Java, mandating them by convention shows some fear of the language itself.
I understand the hazards described in other answers here, and I understand the yearning for uniformity, but I'm not sympathetic to language subsetting barring some strict technical reason, such as the only compiler for some environment not accommodating templates, or interaction with C code precluding broad use of exceptions.
Much of my day consists of reviewing changes submitted by junior programmers, and the common problems that arise have nothing to do with brace placement or statements winding up in the wrong place. The hazard is overstated. I'd rather spend my time focusing on more material problems than looking for violations of what the compiler would happily accept.
The only way coding standards can be followed well by a group of programmers is to keep the number of rules to a minimum.
Balance the benefit against the cost (every extra rule confounds and confuses programmers, and after a certain threshold, actually reduces the chance that programmers will follow any of the rules)
So, to make a coding standard:
Make sure you can justify every rule with clear evidence that it is better than the alternatives.
Look at alternatives to the rule - is it really needed? If all your programmers use whitespace (blank lines and indentation) well, an if statement is very easy to read, and there is no way that even a novice programmer can mistake a statement inside an "if" for a statement that is standalone. If you are getting lots of bugs relating to if-scoping, the root cause is probably that you have a poor whitespace/indentation style that makes code unnecessarily difficult to read.
Prioritise your rules by their measurable effect on code quality. How many bugs can you avoid by enforcing a rule (e.g. "always check for null", "always validate parameters with an assert", "always write a unit test" versus "always add some braces even if they aren't needed"). The former rules will save you thousands of bugs a year. The brace rule might save you one. Maybe.
Keep the most effective rules, and discard the chaff. Chaff is, at a minimum, any rule that will cost you more to implement than any bugs that might occur by ignoring the rule. But probably if you have more than about 30 key rules, your programmers will ignore many of them, and your good intentions will be as dust.
Fire any programmer who comments out random bits of code without reading it :-)
P.S. My stance on bracing is: if the "if" statement and its contents are each a single line, then you may omit the braces. That means that if you have an if containing a one-line comment and a single line of code, the contents take two lines, and therefore braces are required. If the if condition spans two lines (even if the contents are a single line), then you need braces. This means you only omit braces in trivial, simple, easily read cases where mistakes are never made in practice. (When a statement is empty, I don't use braces, but I always put a comment clearly stating that it is empty, and intentionally so. But that's bordering on a different topic: being explicit in code so that readers know you meant the scope to be empty, rather than that the phone rang and you forgot to finish the code.)
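A quick sketch of how that rule plays out (Java; the names and the queue are made up for illustration):

import java.util.ArrayDeque;
import java.util.Queue;

public class BraceRule {
    private final Queue<String> queue = new ArrayDeque<>();
    private boolean verbose = false;

    void drain() {
        // Condition and body are each a single line: braces may be omitted.
        if (queue.isEmpty())
            return;

        // Body takes two lines (a comment plus a statement): braces required.
        if (verbose) {
            // let the operator see how much work is queued
            System.out.println("draining " + queue.size() + " items");
        }

        // Condition spans two lines: braces required even for a one-line body.
        while (!queue.isEmpty()
                && !"STOP".equals(queue.peek())) {
            System.out.println(queue.poll());
        }
    }

    public static void main(String[] args) {
        BraceRule rule = new BraceRule();
        rule.queue.add("one");
        rule.queue.add("two");
        rule.drain();
    }
}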
Many languages have syntax for one-liners like this (I'm thinking of Perl in particular) to deal with such "ugliness". So something like:
if (foo)
//bar
else
//baz
can be written using the ternary operator:
foo ? bar : baz
and
while (something is true)
{
blah
}
can be written as:
blah while(something is true)
However in languages that don't have this "sugar" I would definitely include the braces. Like you said it prevents needless bugs from creeping in and makes the intention of the programmer clearer.
I am not saying it is unreasonable, but in 15+ years of coding with C-like languages, I have not had a single problem with omitting the braces. Commenting out a branch sounds like a real problem in theory - I've just never seen it happening in practice.
Another advantage of always using braces is that it makes search-and-replace and similar automated operations easier.
For example: Suppose I notice that functionB is usually called immediately after functionA, with a similar pattern of arguments, and so I want to refactor that duplicated code into a new combined_function. A regex could easily handle this refactoring if you don't have a powerful enough refactoring tool (^\s+functionA.*?;\n\s+functionB.*?;) but, without braces, a simple regex approach could fail:
if (x)
functionA(x);
else
functionA(y);
functionB();
would become
if (x)
functionA(x);
else
combined_function(y);
More complicated regexes would work in this particular case, but I've found it very handy to be able to use regex-based search-and-replace, one-off Perl scripts, and similar automated code maintenance, so I prefer a coding style that doesn't make that needlessly complicated.
I don't buy into your argument. Personally, I don't know anyone who's ever "accidentally" added a second line under an if. I would understand saying that nested if statements should have braces to avoid a dangling else, but as I see it you're enforcing a style due to a fear that, IMO, is misplaced.
Here are the unwritten (until now I suppose) rules I go by. I believe it provides readability without sacrificing correctness. It's based on a belief that the short form is in quite a few cases more readable than the long form.
Always use braces if any block of the if/else if/else statement has more than one line. Comments count, which means a comment anywhere in the conditional means all sections of the conditional get braces.
Optionally use braces when all blocks of the statement are exactly one line.
Never place the conditionally executed statement on the same line as the condition. The line after the if statement is always the one that is conditionally executed.
If the statement header itself performs the necessary action (as in a for loop with an empty body), the form will be:
for (init; term; totalCount++)
{
// Intentionally left blank
}
No need to standardize this in a verbose manner, when you can just say the following:
Never leave braces out at the expense of readability. When in doubt, choose to use braces.
I think the important thing about braces is that they very definitely express the intent of the programmer. You should not have to infer intent from indentation.
That said, I like the single-line returns and continues suggested by Gus. The intent is obvious, and it is cleaner and easier to read.
If you have the time to read through all of this, then you have the time to add extra braces.
I prefer adding braces to single-line conditionals for maintainability, but I can see how doing it without braces looks cleaner. It doesn't bother me, but some people could be turned off by the extra visual noise.
I can't offer a better counterargument either. Sorry! ;)
For things like this, I would recommend just coming up with a configuration template for your IDE's autoformatter. Then, whenever your users hit alt-shift-F (or whatever the keystroke is in your IDE of choice), the braces will be automatically added. Then just say to everyone: "go ahead and change your font coloring, PMD settings or whatever. Please don't change the indenting or auto-brace rules, though."
This takes advantage of the tools available to us to avoid arguing about something that really isn't worth the oxygen that's normally spent on it.
Depending on the language, having braces for a single lined conditional statement or loop statement is not mandatory. In fact, I would remove them to have fewer lines of code.
C++:
Version 1:
class InvalidOperation{};
//...
Divide(10, 0);
//...
int Divide(int a, int b)
{
if(b == 0 ) throw InvalidOperation();
return a/b;
}
Version 2:
class InvalidOperation{};
//...
Divide(10, 0);
//...
int Divide(int a, int b)
{
if(b == 0 )
{
throw InvalidOperation();
}
return a/b;
}
C#:
Version 1:
foreach(string s in myList)
Console.WriteLine(s);
Version2:
foreach(string s in myList)
{
Console.WriteLine(s);
}
Depending on your perspective, version 1 or version 2 will be more readable. The answer is rather subjective.
Wow. NO ONE is aware of the dangling else problem? This is essentially THE reason for always using braces.
In a nutshell, you can get nasty, ambiguous-looking logic with else statements, especially when they're nested: most C-family languages resolve the ambiguity by binding the else to the nearest unbraced if, which is not always what the author intended. It can be a huge problem to leave off braces if you don't know what you're doing.
Aesthetics and readability have nothing to do with it.
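A minimal Java sketch of the dangling else; the variables are illustrative:

public class DanglingElse {
    public static void main(String[] args) {
        int a = 1, b = 0;

        // The indentation suggests the else belongs to the outer if,
        // but it actually binds to the nearest if (the b != 0 one).
        if (a != 0)
            if (b != 0)
                System.out.println("both nonzero");
        else
            System.out.println("a is zero");   // actually runs when a != 0 and b == 0

        // Braces make the intended pairing explicit.
        if (a != 0) {
            if (b != 0) {
                System.out.println("both nonzero");
            }
        } else {
            System.out.println("a is zero");
        }
    }
}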
I personally do not like programming languages being case sensitive.
(I know that the disadvantages of case sensitivity are nowadays compensated for by good IDEs.)
Still I would like to know whether there are any advantages for a programming language if it is case sensitive. Is there any reason why designers of many popular languages chose to make them case sensitive?
EDIT: duplicate of Why are many languages case sensitive?
EDIT: (I cannot believe I asked this question a few years ago)
This is a preference. I prefer case sensitivity; I find it easier to read code this way. For instance, the variable name "myVariable" has a different word shape than "MyVariable", "MYVARIABLE", and "myvariable", which makes it more straightforward to tell identifiers apart at a glance. Of course, you should rarely, if ever, create identifiers that differ only in case. This is more about consistency than the obvious "benefit" of increasing the number of possible identifiers. Some people think this is a disadvantage. I can't think of any time when case sensitivity gave me any problems. But again, this is a preference.
Case-sensitivity is inherently faster to parse (albeit only slightly) since it can compare character sequences directly without having to figure out which characters are equivalent to each other.
It allows the implementer of a class/library to control how casing is used in the code. Case may also be used to convey meaning.
The code looks more consistent. In the days of BASIC these were all equivalent:
PRINT MYVAR
Print MyVar
print myvar
With type checking, case sensitivity helps prevent a misspelling from silently becoming a new, unrecognized variable. I have fixed bugs in code written in a case-insensitive, implicitly typed language (FORTRAN 77), where the zero (0) and the capital letter O looked the same in the editor; the language silently created a new variable, and the output was wrong. With a case-sensitive, typed language, this would not have happened.
In the compiler or interpreter, a case-insensitive language is going to have to make everything upper or lowercase to test for matches, or otherwise use a case insensitive matching tool, but that's only a small amount of extra work for the compiler.
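A rough sketch of that normalization, as a hypothetical case-insensitive symbol table in Java (the class is made up for illustration):

import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

public class CaseInsensitiveSymbols {
    private final Map<String, Object> symbols = new HashMap<>();

    // Every define and lookup must normalize the identifier first.
    public void define(String name, Object value) {
        symbols.put(name.toLowerCase(Locale.ROOT), value);
    }

    public Object lookup(String name) {
        return symbols.get(name.toLowerCase(Locale.ROOT));
    }

    public static void main(String[] args) {
        CaseInsensitiveSymbols table = new CaseInsensitiveSymbols();
        table.define("MyVar", 42);
        System.out.println(table.lookup("MYVAR"));  // prints 42: MyVar, MYVAR and myvar all match
    }
}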
Plus case-sensitive code allows certain patterns of declarations such as
MyClassName myClassName = new MyClassName()
and other situations where case sensitivity is nice.