I have a converter executable that converts data from one format to another. It is now running on a French-language OS, and when the file is read, the decimal separator and other formatting conventions vary with the OS locale.
For example, in French the decimal separator is "," instead of ".". How can I change the culture to English while reading the data in VC++ 6.0 (not .NET)?
It's a long time since I needed to do this, but I think you just set the locale using setlocale.
So something like:
setlocale( LC_ALL, "English" );
I'm using CsvHelper to import CSV files. I am passing the current culture to CsvHelper as follows:
CsvReader csvReader = new CsvReader(stream, System.Threading.Thread.CurrentCulture);
On Windows this recognises that (for example) a British CSV file uses , as a separator and a German file uses ; as a separator.
However, on Linux this behaviour is different: there, a "." is used as the separator for the German culture.
I suspect (but haven't checked) that OS X may behave the same as Linux.
How can I normalize this so that the same separators are used for the same cultures across all platforms?
I'm using SPSS 21 (on Windows 7) to create some descriptive reports. I want to export my graphics in the best format for word processing. I found that the .emf format works well, i.e. the graphs are quite good when I insert them into a word document.
The only problem is that the graph titles contain some umlauts (German characters like ä, ö, ü) and some accents (French characters like é, à, è, etc.), and when I export the graphs these umlauts and accents come out garbled (e.g. "Ã¼" instead of "ü").
I already changed (manually) the encoding of the Data and Syntax (in the SPSS Options) to "Unicode", but even with that setting the titles of my graphs are not correctly encoded on export.
Do you have any idea why?
Many thanks in advance!
This is a bug that was reported recently (introduced in the course of fixing another bug :-)) and has been fixed for a future release.
I'm using Delphi 2007, writing a text file with Greek characters and opening it in Excel, but instead of Greek characters I get symbols... Any help?
The text file is created with this code
CSVSTRList.SaveToFile('c:\test\xxx2.txt');
where CSVSTRList is a TStringList.
Looking at the code in your previous question, it seems you are taking a stock TStringList and calling SaveToFile. That encodes the text as ANSI, and your Greek characters cannot be encoded in ANSI.
You want to export text using a Unicode encoding. For instance, in a modern Delphi you would use:
StringList.SaveToFile(FileName, TEncoding.Unicode);
for UTF-16, or
StringList.SaveToFile(FileName, TEncoding.UTF8);
for UTF-8.
I would expect that Excel will understand either of these encodings.
Since you are using a non-Unicode Delphi, things are somewhat more difficult. You'll need to change all your code, every single piece of string handling, to be Unicode-aware. You cannot use the stock string list any more, for example, because it contains 8-bit ANSI strings. The simplest way to do this in a legacy Delphi is with the TNT Unicode library.
Or you could take the step of moving to a Unicode Delphi. If you care about international text it is the most sensible option.
I want to use a printer (Windows driver) to print Japanese in a VB6 project.
My project runs in a Japanese Windows environment (the OS is originally English, with the region and related language set to Japan).
I use the Printer object to print a simple Japanese string such as "レジースター", with code like:
Dim s As String
s = "レジースター"
Printer.Print s
Printer.EndDoc
but the output is garbled, something like "OEvƒOEƒ|[ƒg".
Has anybody succeeded in printing Japanese with the VB6 Printer object in a Japanese-language Windows environment? Please help.
Finally I found the key. It is simple, and a little bit tricky, but I still don't know why it works: just set the charset of the Printer object's font, like Printer.Font.Charset = 128 (128 is SHIFTJIS_CHARSET, for Japanese).
Note: please pay attention to my case; my OS is English, with the language and region set to Japanese.
What confused me is the default ANSI handling of Windows. The default value of Printer.Font.Charset is 0, which means ANSI (if the language environment is Japanese it should use code page 932; if it is English, Windows-1252).
My OS is set to Japanese (not a native Japanese OS; it is originally English). When I write a file in Japanese it displays correctly, but when I print with the Printer object, even though .Font.Charset is 0 (ANSI), it still uses the original OS code page, which is weird. When I set the system to Chinese or Korean, both languages print normally; only Japanese has this problem.
The trick that I have used for something like this is a pair of nested StrConv() calls, one with the vbFromUnicode constant and the other with vbToUnicode.
It takes a little experimenting to get right, but it should look something like this; swap the constants and/or code page values until you get the right conversion for your system:
Dim s As String
s = "レジースター"
Dim newS As String
newS = StrConv(StrConv(s, vbFromUnicode, CodePage1), vbToUnicode, CodePage2)
Printer.Print newS
CodePage1/CodePage2 stand for the Windows code page values: 1252 for English, 932 for Japanese.
Although all strings in VB6 are Unicode (UTF-16), when it comes to interfacing with the outside world VB6 is completely non-Unicode.
You cannot store レジースター in your project file because the file is in ANSI.
You cannot simply pass a string to a declared API function, because it will undergo an automatic conversion to ANSI first. To avoid this, you have to declare string parameters as pointers.
Apparently the same happens with the Print call: the string gets converted to the current ANSI code page before it reaches the printer driver.
You can try printing manually by creating a device context and printing on it.
Or you can search for another solution, like this one (I did not try it).
Are there any direct consequences of toggling between Unicode, MBCS, and Not Set for the VS C++ compiler settings Configuration Properties->General->Character Set, apart from the setting of the _UNICODE, _MBCS and _T macros (which then, of course, indirectly has consequences through the generic text mappings for string functions)?
I am not expecting it to, but since the documentation says "Tells the compiler to use the specified character set", I'd like to be certain that, specifically, it doesn't have any influence on how literal non-ASCII text in strings or wstrings is encoded. (I am aware that non-ASCII literals in source code are not portable, but I am maintaining a solution where they are used heavily.)
Thanks in advance.
No, it only affects the macro definitions, which in turn can have wide-ranging effects on anything from <tchar.h>, such as the Windows T string pointer types (LPTSTR etc.).
If you use any non-ASCII characters in your string literals, then you depend heavily on the way the compiler decodes the text in your source code file. The default encoding it assumes is your system code page, as configured in Control Panel + Regional and Language Options. That will not work well once your source file strays too far from your machine. Saving the file as UTF-8 with a BOM avoids the problem: in the IDE use Save As, click the arrow on the Save button, choose "Save with Encoding", and pick 65001 (UTF-8). Note that support for UTF-8 encoded source files is spotty in older C++ compilers.
For unadorned string literals, C++ follows C: the contents are assumed to be in the (ASCII-based) execution character set. If you prefix them, the game changes.
C++0x standardises Unicode string literals, the UTF encodings in particular. This is a new feature.