I use a Bluetooth printer, but it doesn't support Arabic.
I have tried many approaches, including UTF-8, but none of them work.
Is there a way to print Arabic text?
String msg = "مرحبا";
// Attempts that did not work:
// msg = URLEncoder.encode(msg, "UTF-8");
// msg.getBytes(StandardCharsets.UTF_8);
outputStream.write(msg.getBytes("UTF-8"));
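Most Bluetooth receipt printers do not interpret UTF-8 at all: they expect a single-byte code page, usually selected with an ESC/POS command. The sketch below assumes the printer speaks ESC/POS and has an Arabic code page such as Windows-1256 available; the `ESC t` slot number (here 22) and the supported code pages are printer-specific, so check the printer's manual.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class ArabicPrint {
    // Hypothetical helper: many ESC/POS printers select a code page with
    // "ESC t n". The slot number n for Arabic (and whether Arabic is
    // supported at all) varies by printer -- 22 here is only an example.
    static void printArabic(OutputStream out, String msg) throws IOException {
        out.write(new byte[] {0x1B, 0x74, 0x16});   // ESC t 22 (example slot)
        out.write(msg.getBytes("windows-1256"));    // one byte per Arabic letter
        out.write('\n');
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        printArabic(buf, "مرحبا");
        // 3 command bytes + 5 single-byte characters + newline
        System.out.println(buf.size());
    }
}
```

Note that `getBytes("UTF-8")` produces two bytes per Arabic letter, which the printer then renders as two wrong glyphs each; a single-byte code page matched to the printer's selected character table avoids that mismatch.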
When I type, for example, a Spanish character with the keyboard, what encoding is used to send that character to the application (e.g. Notepad or Word)?
For instance, does the keyboard send it as a Unicode character, which the application then converts to its desired encoding (e.g. ANSI)?
Does the default encoding of the OS play a role?
I'm interested in both Windows and Linux.
Using Delphi 2007, I'm trying to write a txt file containing Greek characters and open it in Excel, but instead of Greek characters I get symbols. Any help?
The text file is created with this code
CSVSTRList.SaveToFile('c:\test\xxx2.txt');
where CSVSTRList is a TStringList.
Looking at the code in your previous question, it seems you are taking a stock TStringList and calling SaveToFile. That encodes the text as ANSI, and your Greek characters cannot be represented in a Western ANSI code page.
You want to export text using a Unicode encoding. For instance, in a modern Delphi you would use:
StringList.SaveToFile(FileName, Encoding.Unicode);
for UTF-16, or
StringList.SaveToFile(FileName, Encoding.UTF8);
for UTF-8.
I would expect that Excel will understand either of these encodings.
Since you are using a non-Unicode Delphi, things are somewhat more difficult. You'll need to change all your code, every single piece of string handling, to be Unicode-aware. You cannot use the stock string list any more, for example, because it contains 8-bit ANSI strings. The simplest way to do this with a legacy Delphi is the TNT Unicode library.
Or you could take the step of moving to a Unicode Delphi. If you care about international text it is the most sensible option.
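The underlying issue is language-agnostic: Greek letters simply have no slot in a Western 8-bit code page, so any ANSI save silently destroys them. As an illustration outside Delphi, a small Java sketch makes the same point (windows-1252 standing in for the Western ANSI code page):

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class GreekEncoding {
    public static void main(String[] args) {
        String greek = "αβγ";

        // A Western ANSI code page has no Greek letters, so an encoder
        // for it cannot represent the text -- this is the same data loss
        // an ANSI SaveToFile performs silently.
        Charset ansi = Charset.forName("windows-1252");
        System.out.println(ansi.newEncoder().canEncode(greek)); // false

        // A Unicode encoding round-trips the text losslessly.
        byte[] utf8 = greek.getBytes(StandardCharsets.UTF_8);
        System.out.println(new String(utf8, StandardCharsets.UTF_8)); // αβγ
    }
}
```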
I want to use a printer (Windows driver) to print Japanese from a VB6 project.
My project runs in a Japanese Windows environment (the OS is originally English, with the region and language set to Japanese).
I use the Printer object to print a simple Japanese string such as "レジースター"; the code looks like:
Dim s As String
s="レジースター"
Printer.Print s
Printer.EndDoc
but the output is mojibake like "OEvƒOEƒ|[ƒg".
Has anybody succeeded in printing Japanese with the VB6 Printer object in a Japanese-language Windows environment? Please help.
Finally found the key. It is simple and a little tricky, but I still don't know why it works: just set the font charset of the Printer object, e.g. Printer.Font.Charset = 128 (128 is the Shift-JIS charset, for Japanese).
Note: in my case the OS is English, with the language and region set to Japanese.
What confused me is Windows' default ANSI handling. The default value of Printer.Font.Charset is 0, which means ANSI (if the language environment is Japanese it should use code page 932; if English, Windows-1252).
My OS is set to Japanese (not natively; it is originally an English OS). When I write a file in Japanese it displays correctly, but when I print with the Printer object, Font.Charset is 0 (ANSI) yet it still uses the original OS code page, which is weird. When I set the system to Chinese or Korean, both languages print normally; only Japanese has this problem.
The trick I have used for something like this is two nested StrConv() calls, one with the vbFromUnicode constant and the other with vbToUnicode.
It takes a little experimenting to get right, but it should look something like this; swap the constants and/or code-page values until you get the right conversion for your system:
Dim s As String
s = "レジースター"
Dim newS As String
newS = StrConv(StrConv(s, vbFromUnicode, CodePage1), vbToUnicode, CodePage2)
Printer.Print newS
CodePage1 and CodePage2 are Windows code page values: 1252 for English, 932 for Japanese.
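The same double-conversion idea — encode with one code page, decode with another to undo a mis-decode — can be sketched in Java. This sketch uses ISO-8859-1 as the "wrong" code page because it maps all 256 byte values, so the damage is fully reversible; the real-world culprit would be whatever ANSI code page the system applied:

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class MojibakeRepair {
    public static void main(String[] args) {
        Charset shiftJis = Charset.forName("Shift_JIS");
        String original = "レジスター";

        // Mojibake: Shift_JIS bytes decoded with the wrong code page.
        // ISO-8859-1 maps every byte to some character, so no information
        // is lost -- only the interpretation is wrong.
        String garbled = new String(original.getBytes(shiftJis),
                                    StandardCharsets.ISO_8859_1);

        // The StrConv trick in reverse: encode with the wrong code page
        // to recover the raw bytes, then decode with the right one.
        String repaired = new String(garbled.getBytes(StandardCharsets.ISO_8859_1),
                                     shiftJis);
        System.out.println(repaired.equals(original)); // true
    }
}
```

This only works when the intermediate code page preserves every byte; if the mis-decode already replaced unmappable bytes with '?', the original data is gone and no double conversion can bring it back.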
Although all strings in VB6 are Unicode (UTF-16) internally, when it comes to interfacing with the outside world VB6 is completely non-Unicode.
You cannot store レジースター in your project file because the file is in ANSI.
You cannot simply pass a string to a declared API function because it will undergo an automatic conversion to ANSI first. To avoid this, you have to declare string parameters as pointers.
Apparently the same happens to the Print call: the string gets converted to the current ANSI code page before it reaches the printer driver.
You can try printing manually by creating a device context and printing on it.
Or you can search for another solution, like this one (I did not try it).
I have the string " Single 63”x14” rear window". I am parsing this string into HTML and creating a Word document, applying styles using System.IO.File.ReadAllText(styleSheet).
In the document, the string comes out as "Single 63â€x14†rear window" in C#.
How can I get the correct characters to show up in Word?
You would have to find out the incoming encoding of the string
" Single 63”x14” rear window"
and also which encoding the Word document allows.
It appears that the encoded values for those curly quotes are not supported by Word's target encoding. You could write a small string parser that searches for characters outside the supported range and replaces them with String.Empty, or with similar-looking supported characters, e.g. text = text.Replace("”", "\"");
(This probably wouldn't work without directly manipulating the encoding values, but since you haven't provided those, I can't give an exact example.)
The encoding you are looking at appears to be UTF-8. It's actually probably exactly what you want, you just need to view it using a tool which supports UTF-8, and if you process it and put it on a web page, add the required HTML meta tag so that browsers will display it using the correct encoding.
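The "â€" pattern is the classic signature of UTF-8 bytes decoded with a single-byte Latin code page: the curly quote ” is the three UTF-8 bytes E2 80 9D, which windows-1252 renders as "â€" plus an invisible control character. When the mis-decode hasn't discarded any bytes, it can be reversed. A Java sketch (using ISO-8859-1, which maps all 256 byte values and therefore round-trips safely):

```java
import java.nio.charset.StandardCharsets;

public class SmartQuoteRepair {
    public static void main(String[] args) {
        String original = "Single 63\u201Dx14\u201D rear window";

        // The damage: UTF-8 bytes mis-decoded with a one-byte-per-char
        // Latin charset, turning each curly quote into three junk chars.
        String garbled = new String(original.getBytes(StandardCharsets.UTF_8),
                                    StandardCharsets.ISO_8859_1);

        // The repair: recover the raw bytes via the same wrong charset,
        // then decode them as the UTF-8 they always were.
        String repaired = new String(garbled.getBytes(StandardCharsets.ISO_8859_1),
                                     StandardCharsets.UTF_8);
        System.out.println(repaired.equals(original)); // true
    }
}
```

The better fix, of course, is to read the file with the correct encoding in the first place — in C#, System.IO.File.ReadAllText accepts an Encoding argument — rather than repairing after the fact.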
I'm trying to create a webpage in Chinese and I realized that while the text looks fine when I run it on browsers, once I change the Character Encoding, the text becomes gibberish. Here's what's happening:
I create my html file in Emacs, encoded in UTF-8.
I upload it to the server, and view it on my browsers (FF, IE, Chrome, Opera) - no problem.
I try to view the page in other encodings via FF > View > Character Encoding > All those different Chinese encoding systems, e.g. Chinese Simplified (HZ)
Apart from UTF-8, on every other encoding the text becomes gibberish.
I'm assuming this isn't a problem - i.e. browsers are smart enough to know which encoding the page is in, and parse the content accurately. What I'm wondering is why I can't read the Chinese text anymore once I change encoding - is it because I don't have Chinese fonts installed on my OS? Should I stick to UTF-8 if my audience are Chinese or should I choose among one of their many encoding systems?
Thanks in advance for your help/opinions.
UTF-8 isn't a "catch-all" encoding. It is designed to cover international character repertoires, but it is still an encoding, just like the other encodings you selected. The text would have to be re-saved in each encoding for it to appear correctly when viewed as that encoding.
The viewer's encoding MUST match the file being read. Viewing UTF-8 as something else makes about as much sense as renaming .txt to .exe and trying to run it.
You should specify the correct encoding in the HTML. The option you're using in the web browser exists only for those rare occasions when a web developer screwed up and declared a different encoding than the one actually used, or mixed two different encodings on one page.
Of course changing the encoding in your browser "breaks" the text! The browser takes the stream of UTF-8 bytes and forces another encoding onto the raw data; needless to say, the result isn't pretty. Changing the encoding in the browser is NOT the equivalent of converting.
As you surmised, modern browsers usually guess correctly — but not always. As Agent_L said, make sure to declare the encoding in the headers.
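Declaring the encoding means something like the following in the document's head (ideally backed by a matching charset in the Content-Type HTTP header, which takes precedence):

```html
<!-- In <head>, before any text content: -->
<meta charset="utf-8">
<!-- Older but equivalent form: -->
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
```

With the encoding declared, browsers stop guessing, and your Chinese readers will see the page correctly regardless of their system locale — which is also the practical argument for sticking with UTF-8 rather than one of the legacy Chinese encodings.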