Testing for Japanese/Chinese Characters in a string

I have a program that reads a bunch of text and analyzes it. The text may be in any language, but I need to test for Japanese and Chinese specifically so I can analyze them in a different way.
I have read that I can test each character's Unicode code point to find out whether it is in the range of CJK characters. This is helpful; however, I would like to separate the two if possible so I can process the text against different dictionaries. Is there a way to test whether a character is Japanese or Chinese?

You won't be able to test a single character and tell with certainty that it is Japanese or Chinese, because of the way the Unihan code points are implemented in the Unicode standard. Basically, every Chinese character is a potential Japanese character. However, the reverse is not true. Also, there are a number of conventions that can be used to test whether a block of text is in one language or the other.
Simplifications - if the character you are testing is a PRC simplification such as 门, it is only used in mainland Chinese.
Kana - if the character is one of the many Japanese kana characters such as あいうえお, then the block of text you are working with is definitely Japanese.
The problem arises with the sheer number of characters and words that the two languages have in common. However, if I needed a quick and dirty solution to this problem, I would check the entire block of text for kana - if the text contains kana, then I know it is Japanese. If you need to distinguish Korean as well, test for Hangul. And if you need to distinguish which type of Chinese you have, testing for the types of simplifications is the best approach.
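A minimal sketch of that quick-and-dirty block-level test, in Python with the relevant Unicode ranges hard-coded by hand (treat the ranges as illustrative rather than exhaustive):

```python
def guess_cjk_language(text):
    """Rough block-level guess: 'ja', 'ko', 'zh' or None.

    Kana means Japanese, Hangul means Korean; if only unified Han
    ideographs are present we fall back to 'zh', which is exactly the
    ambiguity described above."""
    has_han = False
    for ch in text:
        cp = ord(ch)
        if 0x3040 <= cp <= 0x30FF:                              # hiragana + katakana
            return "ja"
        if 0xAC00 <= cp <= 0xD7A3 or 0x1100 <= cp <= 0x11FF:    # Hangul syllables / jamo
            return "ko"
        if 0x3400 <= cp <= 0x4DBF or 0x4E00 <= cp <= 0x9FFF:    # CJK unified ideographs
            has_han = True
    return "zh" if has_han else None

print(guess_cjk_language("日本語のテキストです"))   # ja
print(guess_cjk_language("简体中文文本"))            # zh
```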

The process of developing Unicode included the Han Unification. This is because a lot of the Japanese characters are derived from, or the same as, Chinese characters; similarly with Korean. There are some characters (katakana and hiragana - see chapter 12 of the Unicode standard v5.1.0) commonly used in Japanese that would indicate that the text was Japanese rather than Chinese, but I believe it would be a statistical test rather than definitive.
Check out the O'Reilly book on CJKV Information Processing (CJKV is short for Chinese, Japanese, Korean, Vietnamese; I have the CJK predecessor lurking somewhere). There's also the O'Reilly book Unicode Explained, which may be of some help, though probably not for this question (I don't recall a discussion of how to identify Japanese and Chinese text).

You probably can't do that reliably. Japanese uses a lot of the same characters as Chinese. I think the best you could do is to look at a block of text. If you see any uniquely Japanese characters, then you can assume the whole block is Japanese. If not, then it's probably Chinese.
However, I'm just learning Chinese, so I'm not an expert.

Testing for characters in the katakana or hiragana ranges should be a very reliable means of determining whether or not the text is Japanese, especially if you are dealing with 'regular' user-generated text. If you are looking at legal documents or other more official fare, it might be slightly more difficult, as there will be a much greater preponderance of complex Chinese characters - but it should still be pretty reliable.

A workaround is to check the encoding before it is converted to Unicode.

There are many characters which are only (commonly) used in Japanese or only used in Chinese.
Japan and China both simplified many characters but often in different ways. You can check for Japanese Shinjitai and Simplified Chinese characters. There are many more of the latter than the former. If there are none of either then you probably have Traditional Chinese.
Of course, if you're dealing with Unicode text you may find occasional rare characters or mixed languages which could throw off a heuristic, so you're better off counting the types of characters to make a judgement.
A good way to find out which characters are common in one language and not in the others is to compare the legacy encodings against each other. You can find mappings of each to Unicode easily on the internet.
I used to have some code I wrote which did a binary search by codepoint and it was extremely fast even in JavaScript - I may have lost it in my travels though (-:
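As a sketch of that counting idea in Python - note the marker sets below are tiny illustrative placeholders I picked by hand, not real lists; real ones would come from diffing the legacy-encoding tables against each other as suggested above:

```python
# Tiny illustrative marker sets (assumptions, not exhaustive lists).
SHINJITAI_ONLY = set("広売図転駅円")     # Japanese simplifications not used in Chinese
SIMPLIFIED_ONLY = set("门们这说发书边")   # PRC simplifications not used in Japanese

def guess_han_variant(text):
    """Count marker characters and guess Japanese / Simplified / Traditional.

    If neither marker set is hit, Traditional Chinese is the fallback, as
    described above - but short or mixed texts can easily fool this."""
    jp = sum(1 for ch in text if ch in SHINJITAI_ONLY)
    sc = sum(1 for ch in text if ch in SIMPLIFIED_ONLY)
    if jp == 0 and sc == 0:
        return "traditional-chinese (probably)"
    return "japanese" if jp >= sc else "simplified-chinese"
```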

Related

Can tesseract recognize sequences of letters in an image that are not necessarily real words nor bound to any human language?

I am trying to do that in Python with tesseract, but it seems to depend on the language to be able to deduce the characters (and that makes sense).
It is a sequence of 14 letters drawn from any of the first 800 printable 2-byte UTF-8 characters, but even if the recognition (OCR) is limited to Latin-1 (or fewer) characters, that would be something.
As per this question, it seems it does not need proper words, but the installer asks for a training set in a specific language.
PS: To clarify: OCR (at least in an academic setting) takes advantage of the context and of a dictionary to help recognize difficult letters.
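Not an answer, but a hedged sketch of how one might constrain Tesseract from Python via pytesseract (assuming pytesseract and Pillow are installed; whitelist support varies between Tesseract versions and engines, and the file name is made up):

```python
# --psm 7 treats the image as a single text line; tessedit_char_whitelist
# restricts the output alphabet (support depends on the Tesseract engine).
from PIL import Image
import pytesseract

img = Image.open("sequence.png")   # hypothetical input image
config = "--psm 7 -c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
print(pytesseract.image_to_string(img, lang="eng", config=config))
```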

How to determine codepage of a file (that had some codepage transformation applied to it)

For example, if I know that Ä‡ should be ć, how can I find out the codepage transformation that occurred there?
It would be nice if there were an online site for this, but any tool will do the job. The final goal is to reverse the codepage transformation (with iconv or recode, but the tools are not important; I'll take anything that works, including Python scripts).
EDIT:
Could you please be a little more verbose? Do you know for certain what some substring should be exactly? Or do you just know the language? Or are you just guessing? And the transformation that was applied - was it correct (i.e. is the result valid in the other charset)? Or was it a single transformation from charset X to Y while the text was actually in Z, so it's now wrong? Or was it a series of such transformations?
Actually, ideally I am looking for a tool that will tell me what happened (or what possibly happened) so I can try to transform it back to proper encoding.
What (I presume) happened in the problem I am trying to fix now is what is described in this answer - a UTF-8 text file got opened as an ASCII text file and then exported as CSV.
It's extremely hard to do this in general. The main problem is that all the ASCII-based encodings (ISO-8859-*, DOS and Windows codepages) use the same range of byte values, so no particular byte value or set of byte values will tell you which codepage the text is in.
There is one encoding that is easy to tell apart. If the text is valid UTF-8, then it is almost certainly not ISO-8859-* or any Windows codepage, because while all byte values are valid in those encodings, the chance of a valid UTF-8 multi-byte sequence appearing in such text is almost zero.
Then it depends on which further encodings may be involved. A valid sequence in Shift-JIS or Big5 is also unlikely to be valid in any other encoding, while telling apart similar encodings like cp1250 and ISO-8859-2 requires spell-checking the words that contain the three or so characters that differ and seeing which way you get fewer errors.
If you can limit the number of transformations that may have happened, it shouldn't be too hard to put together a Python script that tries them out, eliminates the obviously wrong results and uses a spell-checker to pick the most likely one. I don't know of any tool that would do it.
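For the specific kind of damage described in the question (UTF-8 bytes decoded with the wrong 8-bit codepage), a rough Python sketch of "try the plausible transformations and keep the ones that round-trip" could look like this - the candidate list is just an assumption:

```python
CANDIDATE_CODEPAGES = ["cp1252", "latin-1", "cp1250", "iso8859_2"]

def undo_single_misdecode(garbled):
    """Yield (codepage, repaired) for every reversal that round-trips cleanly."""
    for cp in CANDIDATE_CODEPAGES:
        try:
            repaired = garbled.encode(cp).decode("utf-8")
        except (UnicodeEncodeError, UnicodeDecodeError):
            continue          # this codepage can't explain the damage
        yield cp, repaired

for cp, text in undo_single_misdecode("Ä‡"):
    print(cp, "->", text)     # a spell-checker would pick among the survivors
```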
Tools like that were quite popular a decade ago, but now it's quite rare to see damaged text.
As far as I know, this can be done effectively, at least for a particular language. So, if you assume the text's language is Russian, you could collect statistical information about characters or small groups of characters from a lot of sample texts. E.g. in English the "th" combination appears more often than "ht".
You could then try different encoding combinations and choose the one that produces the more probable text statistics.
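A compact sketch of that statistical-scoring idea, using a toy English bigram set as a stand-in for statistics collected from real sample texts:

```python
COMMON_BIGRAMS = {"th", "he", "in", "er", "an", "re", "on", "at"}  # toy model

def score(text):
    """Count how many adjacent character pairs look like the target language."""
    text = text.lower()
    return sum(1 for i in range(len(text) - 1) if text[i:i + 2] in COMMON_BIGRAMS)

def best_decoding(raw, codepages=("cp1252", "cp1250", "iso8859_2", "koi8_r")):
    """Decode the raw bytes with each candidate and keep the most language-like result."""
    candidates = []
    for cp in codepages:
        try:
            text = raw.decode(cp)
        except UnicodeDecodeError:
            continue
        candidates.append((score(text), cp, text))
    return max(candidates) if candidates else None
```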

Overlaid text?

Quick text-processing question. It's not necessarily related to programming, but this is the best place I figured I should go.
Rate down to tell me this kind of question is not welcome here. (Though, I really like my one little reputation point.)
Anyway, how can I encode text so that two characters get rendered in the same character space?
NOTE: this is for plain-text -- nothing particularly complex.
The best you can do is put a backspace character between the two. However, the outcome isn't likely to be useful to you; it will depend on what software is being used to display the text. Most likely the backspace will be ignored or shown as some generic "unavailable" glyph. The second most likely outcome is that the second character will completely erase the first. You'd have to be very lucky for the two characters to be displayed one over the other in the same space.
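As a small illustration of that backspace trick (a sketch; whether a viewer does anything sensible with it is entirely up to the viewer - `less` and `col` still understand this old overstrike convention):

```python
def overstrike(a, b):
    """Emit 'a', a backspace, then 'b' so both target the same character cell."""
    return a + "\b" + b

# underline-style overstrike, as old nroff output did it
line = "".join(overstrike("_", ch) for ch in "overlaid")
print(repr(line))   # '_\x08o_\x08v...' - pipe the raw output through `less` to see the effect
```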
If it's plain text to be processed by any editor, as far as I know you can't. Even if your text is encoded in Unicode, I don't think it provides combining characters for normal letters, but just for accents and similar symbols which are intended to be combined with other glyphs.
BTW, I'm not sure that stackoverflow is the right place for this kind of stuff, I'd see it better in superuser.com.

How would one store German text in an embedded system?

I've created a memory-mapped 1-bit interface to an LCD in an embedded system, along with 4 or 5 bit-mapped fonts for the 90+ printable ASCII characters. Writing to the screen is as simple as using an echo-like statement (it's embedded Linux).
Other than something strictly proprietary, what recommendations can people make for storing German (or Spanish, or French for that matter)? Unicode seems to be a pretty heavy hitter.
If I understand you right, you are looking for a lightweight encoding for German characters? In Europe, you normally use Latin-1 or, better, ISO 8859-15. This is an 8-bit ASCII extension containing most of the characters used by Western languages.
Well, UTF-8 isn't that big. I recommend it if you want to be able to use one or more languages where you don't find a matching char in Latin-1.
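A quick way to compare the two suggestions on the host side (a sketch in Python; the embedded code would just store whichever bytes you settle on):

```python
text = "Größe, Fußgänger, Übermut"
for enc in ("iso8859_15", "utf-8"):
    data = text.encode(enc)
    print(enc, len(data), "bytes")
# ISO 8859-15 stores every German letter in one byte; UTF-8 needs two bytes
# for each umlaut and ß, which is still modest for short LCD strings.
```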

Brazilian Portuguese website to support Russian, Mandarin and Japanese

We have a website in Brazilian Portuguese developed using ColdFusion (for the user interface), Hibernate (for the business logic) and an Oracle database.
If we consider supporting Russian, Mandarin and Japanese, what concerns must we keep in mind?
Thanks in advance.
The main consideration is to make sure everything (and I mean everything: OS, shell, web server, app server, database, editors) is configured to use UTF-8 or Unicode by default.
If you expect a lot of Asian users, it's slightly better to use full Unicode (UTF-16), as most Chinese characters fit into 16 bits in UTF-16 but can take up 24 or 32 bits in UTF-8.
With ColdFusion and Oracle this should not present any major problems.
The other main consideration is how you plan to handle the internationalisation issues.
The standard way is to keep language/culture-specific items in a "bundle". There are several tools out there to support this; basically, you write your app in Portuguese making sure all text the user will see is in quoted literals, then run the app through a utility which replaces each literal with a library call and extracts all the strings into a "bundle" file. You can then edit the bundle to add other-language versions of the strings. The great advantage is that these formats are standard and translation agencies will have the tools to edit these files - so you can easily outsource the translation to specialists. (A minimal sketch of the lookup side of this idea follows below.)
The other option, which requires much more work but IMHO produces a nicer result, is to branch off a version of the front end for each language/culture supported. This gets around a lot of problems with text height and string size. It also handles cultural norms better - different cultures have different ordering and conventions for things like addresses and titles.
A classic example of small differences causing big problems is the Irish Republic and post codes: they just don't have them. So if your form validation insists on a ZIP code, it will annoy your Irish users. The Brits do have post codes, but these are two 1-to-4-character alphanumeric strings separated by a space, not the more usual 5- or 7-digit numeric codes.
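A minimal sketch of that bundle-lookup idea (in Python for brevity, with made-up file names and keys; a real setup would use gettext/.po or Java-style .properties files so translation agencies can use their standard tooling):

```python
import json

_bundles = {}

def load_bundle(locale, path="bundles"):
    # e.g. bundles/pt_BR.json, bundles/ja.json - hypothetical layout
    with open(f"{path}/{locale}.json", encoding="utf-8") as f:
        _bundles[locale] = json.load(f)

def t(key, locale):
    """Look up a UI string, falling back to the Brazilian Portuguese original."""
    return _bundles.get(locale, {}).get(key) or _bundles["pt_BR"].get(key, key)

# usage: load_bundle("pt_BR"); load_bundle("ja"); t("checkout.title", "ja")
```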
