Why are special characters (except the underscore) not allowed in variable names in programming languages?
Is there any reason related to computer architecture or organisation?
Most languages have long histories, using ASCII (or EBCDIC) character sets. Those languages tend to have simple identifier descriptions (e.g., starts with A-Z, followed by A-Z,0-9, maybe underscore; COBOL allows "-" as part of a name). When all you had was an 029 keypunch or a teletype, you didn't have many other characters, and most of them got used as operator syntax or punctuation.
On older machines, this did have the advantage that you could encode an identifier as a radix-37 (A-Z, 0-9, null) number [6 characters in 32 bits] or a radix-64 (A-Z, a-z, 0-9, underscore, null) number [6 characters in 36 bits, a common word size in earlier generations of machines] for small symbol tables. A consequence: many older languages had 6-character limits on identifier sizes (e.g., FORTRAN).
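For illustration, here is a rough Python sketch (not taken from any actual compiler) of the radix-37 packing described above; `pack_radix37` is a hypothetical helper name:

```python
# A sketch, not from any actual compiler: pack an identifier of up to
# 6 characters drawn from A-Z, 0-9 (plus a null padding value) into a
# single number. 37**6 - 1 = 2565726408, which fits in 32 bits.
def pack_radix37(name):
    alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
    assert len(name) <= 6
    value = 0
    for ch in name.upper().ljust(6, "\0"):
        value = value * 37 + (0 if ch == "\0" else alphabet.index(ch) + 1)
    return value

print(pack_radix37("COUNT"))
```

This is why the identifier limit and the word size were linked: the whole name fit in one machine word and could be compared in a single instruction.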
LISP languages have long been much more permissive; names can be anything but characters with special meaning to LISP, e.g., ( ) [ ] ' ` #, and usually there are ways to insert these characters into names using some kind of escape convention. Our PARLANSE language is like LISP; it uses "~" as an escape, so you can write ~(begin+~)end as a single identifier whose actual spelling is "(begin+end)".
More modern languages (Java, C#, Scala, ...., uh, even PARLANSE) grew up in the era of Unicode, and tend to allow most of Unicode in identifiers (more precisely, they tend to allow named Unicode subsets as parts of identifiers). An identifier made of Chinese characters is perfectly legal in such languages.
It's kind of a matter of taste in the Western hemisphere: most identifier names still tend to use just letters and digits (sometimes, Western European letters). I don't know what the Japanese and Chinese really use for identifier names when they have Unicode-capable character sets; what little Asian code I have seen tends to follow Western identifier conventions, but the comments tend to use much more of the local native and/or Unicode character set.
Fundamentally it is because they're mostly used as operators or separators, so it would introduce ambiguity.
Is there any reason related to computer architecture or organization?
No. The computer can't see the variable names. Only the compiler can. But it has to be able to distinguish a variable name from two variable names separated by an operator, and most language designers have adopted the principle that the meaning of a computer program should not be affected by white space.
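A toy illustration of that ambiguity (the regex below is an illustrative identifier pattern, not any particular language's grammar):

```python
import re

# The usual identifier rule: a letter or underscore, then letters/digits/underscores.
IDENT = re.compile(r"[A-Za-z_][A-Za-z0-9_]*")

# With that rule, "a-b" can only be read one way: two names and an operator.
print(IDENT.findall("a-b"))  # ['a', 'b']

# If '-' were allowed inside names, "a-b" could equally well be a single
# identifier, and the meaning would then depend on spacing, which is
# exactly what most language designers want to avoid.
```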
In the swift documentation for comparing strings, I found the following:
Two String values (or two Character values) are considered equal if
their extended grapheme clusters are canonically equivalent. Extended
grapheme clusters are canonically equivalent if they have the same
linguistic meaning and appearance, even if they are composed from
different Unicode scalars behind the scenes.
Then the documentation proceeds with the following example, which shows two strings that are "canonically equivalent":
For example, LATIN SMALL LETTER E WITH ACUTE (U+00E9) is canonically
equivalent to LATIN SMALL LETTER E (U+0065) followed by COMBINING
ACUTE ACCENT (U+0301). Both of these extended grapheme clusters are
valid ways to represent the character é, and so they are considered to
be canonically equivalent:
Ok. Somehow e and é look the same and also have the same linguistic meaning. Sure I'll give them that. I have taken a Spanish class sometime and the prof wasn't too strict on whether we used either forms of e, so I'm guessing this is what they are referring to. Fair enough
The documentation goes further to show two strings that are not canonically equivalent:
Conversely, LATIN CAPITAL LETTER A (U+0041, or "A"), as used in
English, is not equivalent to CYRILLIC CAPITAL LETTER A (U+0410, or
"А"), as used in Russian. The characters are visually similar, but do
not have the same linguistic meaning:
Now here is where the alarm bells go off and I decide to ask this question. It seems that appearance has nothing to do with it because the two strings look exactly the same, and they also admit this in the documentation. So it seems that what the string class is really looking for is linguistic meaning?
This is why I ask what it means for the strings to have the same/different linguistic meaning. The unaccented e is the only form I know of that is mainly used in English, and I have only seen é used in languages like French or Spanish. So why is the fact that А is used in Russian and A is used in English what causes the string class to say they are not equivalent?
I hope I was able to walk you through my thought process, now my question is what does it mean for two strings to have the same linguistic meaning (in code if possible)?
You said:
Somehow e and é look the same and also have the same linguistic meaning.
No. You have misread the document. Here's the document again:
LATIN SMALL LETTER E WITH ACUTE (U+00E9) is canonically equivalent to LATIN SMALL LETTER E (U+0065) followed by COMBINING ACUTE ACCENT (U+0301).
Here's U+00E9: é
Here's U+0065: e
Here's U+0301: ´
Here's U+0065 followed by U+0301: é
So U+00E9 (é) looks and means the same as U+0065 U+0301 (é). Therefore they must be treated as equal.
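The same check can be reproduced outside Swift; a small sketch using Python's standard `unicodedata` module, with the exact code points from the documentation's example:

```python
import unicodedata

precomposed = "\u00E9"   # LATIN SMALL LETTER E WITH ACUTE
decomposed = "e\u0301"   # LATIN SMALL LETTER E + COMBINING ACUTE ACCENT

# As raw code point sequences they differ...
print(precomposed == decomposed)  # False

# ...but after canonical composition (NFC) they are the same string,
# which is what "canonically equivalent" means in practice.
print(unicodedata.normalize("NFC", decomposed) == precomposed)  # True
print(unicodedata.normalize("NFD", precomposed) == decomposed)  # True
```

Swift's `==` effectively builds this normalization step into string comparison, which is why both spellings compare equal there.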
So why is Cyrillic А different from Latin A? UTN #26 gives several reasons. Here are some:
“Traditional graphology has always treated them as distinct scripts, …”
“Literate users of Latin, Greek, and Cyrillic alphabets do not have cultural conventions of treating each other's alphabets and letters as part of their own writing systems.”
“Even more significantly, from the point of view of the problem of character encoding for digital textual representation in information technology, the preexisting identification of Latin, Greek, and Cyrillic as distinct scripts was carried over into character encoding, from the very earliest instances of such encodings.”
“[A] unified encoding of Latin, Greek, and Cyrillic would make casing operations an unholy mess, …”
Read the tech note for full details.
Trying to understand the subtleties of modern Unicode is making my head hurt. In particular, the distinction between code points, characters, glyphs and graphemes - concepts which in the simplest case, when dealing with English text using ASCII characters, all have a one-to-one relationship with each other - is causing me trouble.
Seeing how these terms get used in documents like Matthias Bynens' JavaScript has a unicode problem or Wikipedia's piece on Han unification, I've gathered that these concepts are not the same thing and that it's dangerous to conflate them, but I'm kind of struggling to grasp what each term means.
The Unicode Consortium offers a glossary to explain this stuff, but it's full of "definitions" like this:
Abstract Character. A unit of information used for the organization, control, or representation of textual data. ...
...
Character. ... (2) Synonym for abstract character. (3) The basic unit of encoding for the Unicode character encoding. ...
...
Glyph. (1) An abstract form that represents one or more glyph images. (2) A synonym for glyph image. In displaying Unicode character data, one or more glyphs may be selected to depict a particular character.
...
Grapheme. (1) A minimally distinctive unit of writing in the context of a particular writing system. ...
Most of these definitions possess the quality of sounding very academic and formal, but lack the quality of meaning anything, or else defer the problem of definition to yet another glossary entry or section of the standard.
So I seek the arcane wisdom of those more learned than I. How exactly do each of these concepts differ from each other, and in what circumstances would they not have a one-to-one relationship with each other?
Character is an overloaded term that can mean many things.
A code point is the atomic unit of information. Text is a sequence of code points. Each code point is a number which is given meaning by the Unicode standard.
A code unit is the unit of storage of a part of an encoded code point. In UTF-8 this means 8 bits; in UTF-16 this means 16 bits. A single code unit may represent a full code point, or part of a code point. For example, the snowman (☃, U+2603) is a single code point, but 3 UTF-8 code units and 1 UTF-16 code unit.
A grapheme is a sequence of one or more code points that are displayed as a single, graphical unit that a reader recognizes as a single element of the writing system. For example, both a and ä are graphemes, but they may consist of multiple code points (e.g. ä may be two code points, one for the base character a followed by one for the diaeresis; but there's also an alternative, legacy, single code point representing this grapheme). Some code points are never part of any grapheme (e.g. the zero-width non-joiner, or directional overrides).
A glyph is an image, usually stored in a font (which is a collection of glyphs), used to represent graphemes or parts thereof. Fonts may compose multiple glyphs into a single representation, for example, if the above ä is a single code point, a font may choose to render that as two separate, spatially overlaid glyphs. For OTF, the font's GSUB and GPOS tables contain substitution and positioning information to make this work. A font may contain multiple alternative glyphs for the same grapheme, too.
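The distinctions above can be demonstrated in a few lines of Python (the counts match the snowman and ä examples used in this answer):

```python
import unicodedata

snowman = "\u2603"                            # ☃ U+2603 SNOWMAN
print(len(snowman))                           # 1 code point
print(len(snowman.encode("utf-8")))           # 3 UTF-8 code units (bytes)
print(len(snowman.encode("utf-16-be")) // 2)  # 1 UTF-16 code unit

a_umlaut = "a\u0308"   # 'ä' as base letter + COMBINING DIAERESIS
print(len(a_umlaut))   # 2 code points, but displayed as one grapheme
print(len(unicodedata.normalize("NFC", a_umlaut)))  # 1: the legacy precomposed form
```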
Outside the Unicode standard a character is an individual unit of text composed of one or more graphemes. What the Unicode standard defines as "characters" is actually a mix of graphemes and characters. Unicode provides rules for the interpretation of juxtaposed graphemes as individual characters.
A Unicode code point is a unique number assigned to each Unicode character (which is either a character or a grapheme).
Unfortunately, the Unicode rules allow some juxtaposed graphemes to be interpreted as other graphemes that already have their own code points (precomposed forms). This means that there is more than one way in Unicode to represent a character. Unicode normalization addresses this issue.
A glyph is the visual representation of a character. A font provides a set of glyphs for a certain set of characters (not Unicode characters). For every character, there is an infinite number of possible glyphs.
A Reply to Mark Amery
First, as I stated, there is an infinite number of possible glyphs for each character so no, a character is not "always represented by a single glyph". Unicode doesn't concern itself much with glyphs, and the things it defines in its code charts are certainly not glyphs. The problem is that neither are they all characters. So what are they?
Which is the greater entity, the grapheme or the character? What does one call those graphic elements in text that are not letters or punctuation? One term that springs quickly to mind is "grapheme". It's a word that precisely conjures up the idea of "a graphical unit in a text". I offer this definition: a grapheme is the smallest distinct component in a written text.
One could go the other way and say that graphemes are composed of characters, but then they would be called "Chinese graphemes", and all the bits and pieces Chinese graphemes are composed of would have to be called "characters" instead. However, that's all backwards. Graphemes are the distinct little bits and pieces; characters are more developed. The phrase "glyphs are composable" would be better stated, in the context of Unicode, as "characters are composable".
Unicode defines characters but it also defines graphemes that are to be composed with other graphemes or characters. Those monstrosities you composed are a fine example of this. If they catch on maybe they'll get their own code points in a later version of Unicode ;)
There's a recursive element to all this. At higher levels, graphemes become characters become graphemes, but it's graphemes all the way down.
A Reply to T S
Chapter 1 of the
standard states: "The Unicode character encoding treats alphabetic characters,
ideographic characters, and symbols equivalently, which means they can be used
in any mixture and with equal facility". Given this statement, we should be
prepared for some conflation of terms in the standard. Sometimes the proper
terminology only becomes clear in retrospect as a standard develops.
It often happens in formal definitions of a language that two fundamental
things are defined in terms of each other. For example, in
XML an element is defined as a starting tag
possibly followed by content, followed by an ending tag. Content is defined in
turn as either an element, character data, or a few other possible things. A
pattern of self-referential definitions is also implicit in the Unicode
standard:
A grapheme is a code point or a character.
A character is composed from a sequence of one or more graphemes.
When first confronted with these two definitions the reader might object to the
first definition on the grounds that a code point is a character, but
that's not always true. A sequence of two code points sometimes encodes a
single code point under
normalization, and that
encoded code point represents the character, as illustrated in
figure 2.7 ("Sequences of code points that encode other code points").
This is getting a little tricky, and
we haven't even reached the layer where character encoding schemes such
as UTF-8 are used to
encode code points into byte sequences.
In some contexts, for example a scholarly article on
diacritics, an individual
part of a character might show up in the text by itself. In that context, the
individual character part could be considered a character, so it makes sense
that the Unicode standard remain flexible as well.
As Mark Amery pointed out, a character can be composed into a more complex
thing. That is, each character can serve as a grapheme if desired. The
final result of all composition is a thing that "the user thinks of as a
character". There doesn't seem to be any real resistance, either in the
standard or in this discussion, to the idea that at the highest level there are
these things in the text that the user thinks of as individual characters. To
avoid overloading that term, we can use "grapheme" in all cases where we want
to refer to parts used to compose a character.
At times the Unicode standard is all over the place with its terminology. For
example, Chapter 3
defines UTF-8 as an "encoding form" whereas the glossary defines "encoding
form" as something else, and UTF-8 as a "Character Encoding Scheme". Another
example is "Grapheme_Base" and "Grapheme_Extend", which are
acknowledged to be
mistakes but that persist because purging them is a bit of a task. There is
still work to be done to tighten up the terminology employed by the standard.
The Proposal for addition of COMBINING GRAPHEME
JOINER got it
wrong when it stated that "Graphemes are sequences of one or more encoded
characters that correspond to what users think of as characters." It should
instead read, "A sequence of one or more graphemes composes what the user
thinks of as a character." Then it could use the term "grapheme sequence"
distinctly from the term "character sequence". Both terms are useful.
"grapheme sequence" neatly implies the process of building up a character from
smaller pieces. "character sequence" means what we all typically intuit it to
mean: "A sequence of things the user thinks of as characters."
Sometimes a programmer really does want to operate at the level of grapheme
sequences, so mechanisms to inspect and manipulate those sequences should be
available, but generally, when processing text, it is sufficient to operate on
"character sequences" (what the user thinks of as a character) and let the
system manage the lower-level details.
In every case covered so far in this discussion, it's cleaner to use "grapheme"
to refer to the indivisible components and "character" to refer to the composed
entity. This usage also better reflects the long-established meanings of both
terms.
I hope this question is not too pedantic, but is there a technical term for the different "categories" that are part of a password when it is being validated? For example default AD password complexity requirements that must be met (Microsoft calls them "categories"):
Passwords must contain characters from three of the following five **categories**:
Uppercase characters of European languages (A through Z, with diacritic marks, Greek and Cyrillic characters)
Lowercase characters of European languages (a through z, sharp-s, with diacritic marks, Greek and Cyrillic characters)
Base 10 digits (0 through 9)
Nonalphanumeric characters: ~!@#$%^&*_-+=`|\(){}[]:;"'<>,.?/
Any Unicode character that is categorized as an alphabetic character but is not uppercase or lowercase. This includes Unicode characters from Asian languages.
Is there a term used by security engineers or cryptographers to refer to these "categories"?
There's not any official term for these. I would tend to call it a "character type".
For example, this term is used in Novell's document Creating Password Policies:
The password must contain at least one character from three of the four types of character, uppercase, lowercase, numeric, and special
and in this NIST document regarding Enterprise Password Management.
AFAIK, after 10 years working in security, no final, shared nomenclature has emerged for this. Microsoft's "categories" is a good term and probably the most used, but it is not formally shared across contexts (i.e., Java, PHP, OWASP, Oracle, ..., could each call it something different).
Academically speaking, they are only factors that enlarge the basic character set, increasing the cost of an offline brute-force attack or of rainbow table creation, or defeating trivial dictionary attacks. The brute-force search space is roughly |C|^n, where n is the expected length of the password, C is the chosen character set, and |C| is the number of elements in it (an attacker needs about half that many guesses on average).
Having more categories increases the value of |C|, so they should be called something like "password character set subsets" instead of "categories", but you can see why nobody bothers with the theoretical bit here; the nomenclature is unfriendly.
If you look for it and you find the way academics call them, please post it, it is always useful.
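To make the theoretical bit concrete, a quick Python sketch of how |C| affects the search space (the length 8 is an arbitrary example, not a recommendation):

```python
# Search space |C|**n for an 8-character password under two character sets.
lowercase_only = 26            # |C| = 26
lower_upper_digits = 26 + 26 + 10  # |C| = 62, i.e. three "categories"
n = 8

print(lowercase_only ** n)      # 208827064576
print(lower_upper_digits ** n)  # 218340105584896, about 1000x larger
```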
I wonder, if this
alph = ['a'..'z']
returns me
"abcdefghijklmnopqrstuvwxyz"
How can I return the French alphabet, then? Can I somehow pass a locale?
Update:
Well, I know that English and French have the same letters. But my point is: what if they were not the same, yet still started with A and ended with Z? It would be nice to have human-language range support.
At least some languages come with localization support.
(just trying Haskell, reading a book)
Haskell Char values are not real characters, they are Unicode code points. In some other languages their native character type may represent other things like ASCII characters or "code page whatsitsnumber" characters, or even something selectable at runtime, but not in Haskell.
The range 'a'..'z' coincides with the English alphabet for historical reasons, both in Unicode and in ASCII, and also in character sets derived from ASCII such as ISO8859-X. There is no commonly supported coded character set where some contiguous range of codes coincides with the French alphabet. That is, if you count letters with diacritics as separate letters. The accepted practice seems to exclude letters with diacritics, so the French alphabet coincides with English, but this is not so for other Latin-derived alphabets.
In order to get most alphabets other than English, one needs to enumerate the characters explicitly by hand rather than with a range expression. For some languages one cannot even use Char to represent all letters, as some of them need more than one code point, such as Hungarian "ly", Spanish "ll" (before 2010), or Dutch "ij" (according to some authorities; there is no one commonly accepted definition).
No language that I know supports arbitrary human alphabets as range expressions out of the box.
While programming languages usually support sorting by the current locale (just search for collate on Hackage), there is no library I know that provides a list of alphabetic characters by locale.
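To illustrate the point about ranges versus alphabets (shown in Python for convenience; the same holds for Haskell's `['a'..'z']`):

```python
# A Python analogue of ['a'..'z']: just a range of code points, which
# happens to coincide with the English alphabet.
english = [chr(c) for c in range(ord("a"), ord("z") + 1)]
print("".join(english))  # abcdefghijklmnopqrstuvwxyz

# Other alphabets have to be written out by hand, and some letters do
# not even fit a single-character type:
ly = "ly"  # one letter of the Hungarian alphabet, but two code points
print(len(ly))  # 2
```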
Modern (Unicode) systems that allow for localized characters also try to support many non-Latin alphabets, and thus very many alphabetic characters.
Enumerating all alphabetic characters within Unicode gives over 40k characters:
GHCi> length $ filter Data.Char.isAlpha $ map Data.Char.chr [0..256*256]
48408
While I am aware of libraries that allow one to construct alphabetic indices, I don't know of any Haskell binding for this feature.
I would like to be able to put quotation marks and other characters into a text without their being interpreted by the computer. So I was wondering: is there a range that is defined as mapping to the same glyphs as the range 0-0x7F (the ASCII range)?
Please note I state that the range 0-0x7F is the same as ASCII, so the question is not which range maps to ASCII.
I am asking whether there is another range that also maps to the same glyphs, i.e. that will look the same when rendered, but can be seen as different codes when interpreted.
so I can write
print "hello "world""
the characters in bold (the inner quotes) avoid the 0-0x7F (ASCII) range
Additional:
I meant homographic and behaviourally identical: everything the same except a different code point. I was hoping for the whole ASCII 128-character set, directly mapped (a fixed offset added to them all).
The reason: to avoid interpretation by any language that uses some of the ASCII characters as part of its syntax but allows any Unicode character in literal strings, e.g. (when UTF-8 encoded) C, HTML, CSS, …
I was trying to retrofit the idea of "no reserved words" / "word colours" (string literals one colour, keywords another, variables another, numbers another, etc.) so that a string literal or a variable name (though not in this case) can contain any character.
I interpret the question to mean "is there a set of code points which are homographic with the low 7-bit ASCII set". The answer is no.
There are some code points which are conventionally rendered homographically (e.g. Cyrillic uppercase А U+0410 looks identical to ASCII 65 in many fonts, and quite similar in most fonts which support this code point), but they are different code points with different semantics. Similarly, there are some code points which basically render identically but have a specific set of semantics, like the non-breaking space U+00A0, which renders identically to ASCII 32 but is specified as having a particular line-breaking property, or RIGHT SINGLE QUOTATION MARK U+2019, which is an unambiguous quotation mark, as opposed to its twin ASCII 39, the "apostrophe".
But in summary, there are many symbols in the basic ASCII block which do not coincide with a homograph in another code block. You might be able to find homographs or near-homographs for your sample sentence, though; I would investigate the IPA phonetic symbols and the Greek and Cyrillic blocks.
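A short Python sketch of the code-point distinction (the characters are the same ones used as examples above):

```python
latin_a = "A"          # U+0041 LATIN CAPITAL LETTER A
cyrillic_a = "\u0410"  # U+0410 CYRILLIC CAPITAL LETTER A, usually drawn the same
print(latin_a == cyrillic_a)  # False: homographs, but different code points

nbsp = "\u00A0"  # NO-BREAK SPACE: renders like ASCII 32 but never breaks a line
print(nbsp == " ")  # False
```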
The answer to the question asked is "No", as tripleee described, but the following note might be relevant if the purpose is trickery or fun of some kind:
The printable ASCII characters excluding the space have been duplicated at U+FF01 to U+FF5E, but these are fullwidth characters intended for use in CJK texts. Their shape is (and is meant to be) different: ｈｅｌｌｏ　ｗｏｒｌｄ. (Your browser may be unable to render them.) So they are not really homographic with ASCII characters, but they could be used for some special purposes. (I have no idea what the purpose might be here.)
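The fullwidth block sits at a fixed offset of 0xFEE0 from printable ASCII (U+0021-U+007E map to U+FF01-U+FF5E), and the space maps to IDEOGRAPHIC SPACE, so the conversion can be sketched mechanically (`to_fullwidth` is a hypothetical helper for illustration):

```python
# Map printable ASCII to the Unicode fullwidth forms by adding 0xFEE0;
# the space has no fullwidth twin, so use U+3000 IDEOGRAPHIC SPACE.
def to_fullwidth(text):
    out = []
    for ch in text:
        if ch == " ":
            out.append("\u3000")
        elif 0x21 <= ord(ch) <= 0x7E:
            out.append(chr(ord(ch) + 0xFEE0))
        else:
            out.append(ch)
    return "".join(out)

print(to_fullwidth("hello world"))  # ｈｅｌｌｏ　ｗｏｒｌｄ
```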
Depends on the encoding you use.
In UTF-8, the first 128 code points are encoded as single bytes with exactly the ASCII values. In UTF-16, the first 128 ASCII characters are encoded as the code units 0x0000 through 0x007F (2 bytes each).
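A quick Python check (illustrative only) of how an ASCII character comes out under both encodings:

```python
print("A".encode("utf-8"))      # b'A': one byte, the same value as in ASCII
print("A".encode("utf-16-be"))  # b'\x00A': one 16-bit code unit
```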