Show regional characters in MessageBox without setting required locale - visual-c++

I need to show a MessageBox with regional characters from the ISO/IEC 8859-13 code page without setting the Windows locale to that region. I was naive and tried to show the ASCII table with one-byte characters:
void showCodePage()
{
    char *a = new char[10000];
    char *aa = new char[10000];
    int f = 0;
    int lines = 0;
    for (int i = 1; i < 255; i++)
    {
        sprintf(a, "%c %0.3d ", i, i);
        sprintf(aa, "%c", i);
        f++;
        a += 6;
        aa++;
        if (f == 8)
        {
            f = 0;
            sprintf(a, "%c", 0x0d);
            a++;
            lines++;
        }
    }
    *aa = 0;
    *a = 0;
    a -= 254*6 + lines;
    aa -= 254;
    MessageBox(NULL, aa, "Hello!", MB_ICONEXCLAMATION | MB_OK);
    MessageBox(NULL, a, "Hello!", MB_ICONEXCLAMATION | MB_OK);
    delete [] a;
    delete [] aa;
}
OK, this doesn't show ISO/IEC 8859-13 correctly, and it isn't possible without changing the locale:
So I decided to build a Unicode wstring instead. Here is my function for converting a single-byte char string to wide chars:
wstring convert( const std::string& as )
{
    // deal with trivial case of empty string
    if( as.empty() ) return std::wstring();
    // determine required length of new string
    size_t reqLength = ::MultiByteToWideChar( CP_UTF8, 0, as.c_str(), (int)as.length(), 0, 0 );
    // construct new string of required length
    std::wstring ret( reqLength, L'\0' );
    // convert old string to new string
    ::MultiByteToWideChar( CP_UTF8, 0, as.c_str(), (int)as.length(), &ret[0], (int)ret.length() );
    // return new string ( compiler should optimize this away )
    return ret;
}
And I changed the MessageBox calls:
MessageBoxW(NULL, convert(aa).c_str(), L"Hello!", MB_ICONEXCLAMATION | MB_OK);
MessageBoxW(NULL, convert(a).c_str() , L"Hello!", MB_ICONEXCLAMATION | MB_OK);
The result is still wrong. Then again, what was I expecting? I need to somehow tell the system which code page it should use to display my characters. How can I do that?

The problem with your solution is that MultiByteToWideChar with the CodePage parameter CP_UTF8 doesn't translate your specific single-byte code page to UTF-16; it translates UTF-8 to UTF-16, which is not what you need.
What you're looking for is a translation table from chars in ISO/IEC 8859-13 to wide characters. You can build one manually from the table at https://en.wikipedia.org/wiki/ISO/IEC_8859-13, e.g. 160 becomes 00A0, 161 becomes 201D, and so on.
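A minimal, portable sketch of such a table-based converter (only the two mappings quoted above are filled in; a complete converter needs all 96 high entries, 0xA0..0xFF, transcribed from the chart):

```cpp
#include <string>

// Partial translation table: ISO/IEC 8859-13 byte -> UTF-16 code unit.
// Bytes below 0xA0 coincide with the same Unicode code points; the
// high-byte cases below are only the two quoted above -- a complete
// converter needs all 96 entries (0xA0..0xFF) from the chart.
wchar_t iso8859_13_to_wide(unsigned char c)
{
    switch (c) {
    case 0xA0: return 0x00A0;  // NO-BREAK SPACE
    case 0xA1: return 0x201D;  // RIGHT DOUBLE QUOTATION MARK
    // ... remaining 0xA2..0xFF entries go here ...
    default:
        return (c < 0xA0) ? (wchar_t)c : L'?';  // unmapped in this sketch
    }
}

std::wstring convert8859_13(const std::string& s)
{
    std::wstring out;
    out.reserve(s.size());
    for (unsigned char c : s)
        out.push_back(iso8859_13_to_wide(c));
    return out;
}
```

The resulting wstring can be passed straight to MessageBoxW. As an aside, Windows also registers ISO/IEC 8859-13 as code page 28603, so MultiByteToWideChar(28603, ...) may do the whole conversion without a hand-written table; verify that identifier is available on your system.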

Related

How to set maximum length in String value DART

I am trying to set a maximum length on a string value and put '..' in place of the removed characters, like the following:
String myValue = 'Welcome'
Now I need the maximum length to be 4, so the output would be:
'welc..'
How can I handle this? Thanks.
The short and incorrect version is:
String abbrevBad(String input, int maxLength) {
  if (input.length <= maxLength) return input;
  return input.substring(0, maxLength - 2) + "..";
}
(Using .. is not the typographical way to mark an elision. That takes ..., the "ellipsis" symbol.)
A more internationally aware version would count grapheme clusters instead of code units, so it handles complex characters and emojis as a single character, and doesn't break in the middle of one. Might also use the proper ellipsis character.
import 'package:characters/characters.dart'; // bundled with Flutter

String abbreviate(String input, int maxLength) {
  var it = input.characters.iterator;
  for (var i = 0; i <= maxLength; i++) {
    if (!it.expandNext()) return input;
  }
  it.dropLast(2);
  return "${it.current}\u2026";
}
That also works for characters which are not single code units:
void main() {
  print(abbreviate("argelbargle", 7)); // argelb…
  print(abbreviate("πŸ‡©πŸ‡°πŸ‡©πŸ‡°πŸ‡©πŸ‡°πŸ‡©πŸ‡°πŸ‡©πŸ‡°", 4)); // πŸ‡©πŸ‡°πŸ‡©πŸ‡°πŸ‡©πŸ‡°β€¦
}
(If you want to use ... instead of …, just change .dropLast(2) to .dropLast(4) and "…" to "...".)
You need to use RichText, and you need to specify the overflow type, just like this:
Flexible(
  child: RichText(
    text: TextSpan(text: "Very, very, very looong text"),
    overflow: TextOverflow.ellipsis,
  ),
);
If the text overflows, an ellipsis (...) will appear.

CCombobox string insertion gives gibberish

I created a custom CCustomCombo by extending CComboBox to implement a DrawItem() function. Here's the code for it.
void CCustomCombo::DrawItem( LPDRAWITEMSTRUCT lpDrawItemStruct )
{
    ASSERT( lpDrawItemStruct->CtlType == ODT_COMBOBOX );
    LPCTSTR lpszText = ( LPCTSTR ) lpDrawItemStruct->itemData;
    ASSERT( lpszText != NULL );
    if ( lpDrawItemStruct->itemID == -1 || lpszText == NULL )
        return;
    CDC dc;
    dc.Attach( lpDrawItemStruct->hDC );
    // Save these values to restore them when done drawing.
    COLORREF crOldTextColor = dc.GetTextColor();
    COLORREF crOldBkColor = dc.GetBkColor();
    // If this item is selected, set the background color
    // and the text color to appropriate values. Erase
    // the rect by filling it with the background color.
    if ( ( lpDrawItemStruct->itemAction & ODA_SELECT ) &&
         ( lpDrawItemStruct->itemState & ODS_SELECTED ) )
    {
        dc.SetTextColor( ::GetSysColor( COLOR_HIGHLIGHTTEXT ) );
        dc.SetBkColor( ::GetSysColor( COLOR_HIGHLIGHT ) );
        dc.FillSolidRect( &lpDrawItemStruct->rcItem, ::GetSysColor( COLOR_HIGHLIGHT ) );
    }
    else
    {
        dc.FillSolidRect( &lpDrawItemStruct->rcItem, crOldBkColor );
    }
    // Draw the text.
    dc.DrawText(
        lpszText,
        ( int ) _tcslen( lpszText ),
        &lpDrawItemStruct->rcItem,
        DT_CENTER | DT_SINGLELINE | DT_VCENTER );
    // Reset the background color and the text color back to their
    // original values.
    dc.SetTextColor( crOldTextColor );
    dc.SetBkColor( crOldBkColor );
    dc.Detach();
}
The creation part:
m_selectionCombo.Create( WS_VSCROLL | CBS_DROPDOWNLIST | WS_VISIBLE | WS_TABSTOP | CBS_OWNERDRAWFIXED,
    rect, &m_wndSelectionBar, ID_TEMP_BTN );
Now the problem is with adding string items to the combo box. When I use string objects, it always shows some Unicode gibberish.
m_selectionCombo.InsertString(0, "One"); //works
char * one = "one";
m_selectionCombo.InsertString(0, one ); //works
CString one = "one";
m_selectionCombo.InsertString(0, one ); //shows gibberish
std::string one = "one";
char *cstr = &one[0];
m_wndSelectionBar.m_selectionCombo.InsertString(0, cstr ); //shows gibberish
The same results appear for AddString. The problem is that I have a set of doubles that I have to insert into the combo box, and I have no way of converting them to strings without displaying gibberish. I tried half a dozen conversion methods and none worked. I'm literally at my wit's end!
The funny thing is that it worked perfectly before, when I used a plain CComboBox rather than my CCustomCombo class with CBS_OWNERDRAWFIXED. I tried using CBS_HASSTRINGS, but then it displayed nothing, not even the gibberish, so somehow the strings don't even get added with CBS_HASSTRINGS.
I need the custom draw method since I plan to highlight some of the dropdown items. I'm using 32-bit Windows, VS 2017.
Any help would be highly appreciated.
Thanks.
LPCTSTR lpszText = (LPCTSTR)lpDrawItemStruct->itemData;
Your owner-draw function reads itemData. itemData is assigned with CComboBox::SetItemData; it is not assigned by InsertString or other text functions.
char * one = "one";
m_selectionCombo.InsertString(0, one ); //works
The string and the item data are stored at the same memory address when CBS_HASSTRINGS is not set.
See also documentation for CB_SETITEMDATA
If the specified item is in an owner-drawn combo box created without
the CBS_HASSTRINGS style, this message replaces the value in the
lParam parameter of the CB_ADDSTRING or CB_INSERTSTRING message that
added the item to the combo box.
So basically itemData holds the pointer one, and it happens to work in this case.
CString one = "one";
m_selectionCombo.InsertString(0, one ); //shows gibberish
This time the string is created on the stack and is destroyed after the function exits, so itemData points to an invalid address.
Solution:
If you are setting the text using InsertString/AddString then make sure CBS_HASSTRINGS is set. And read the strings using GetLBText. Example:
//LPCTSTR lpszText = (LPCTSTR)lpDrawItemStruct->itemData; // <- remove this
if (lpDrawItemStruct->itemID >= GetCount())
    return;
CString str;
GetLBText(lpDrawItemStruct->itemID, str);
LPCTSTR lpszText = str;
Otherwise use SetItemData to setup data, and use itemData to read.

VC++ Converting Unicode Traditional Chinese characters to multi byte not always work

My application (MFC) is a Unicode app, and I have a third-party DLL which only takes multi-byte characters, so I have to convert the Unicode string to a multi-byte string before passing it to the third-party app. Korean, Japanese and even Simplified Chinese strings were converted correctly, except Traditional Chinese. Below is my attempt. This CPP file is encoded in Unicode.
CString strFilePath(_T("δΈ­ζ–‡ε­— ζ·±ζ°΄εŸ—.docx"));
wchar_t tcharPath[260];
wcscpy(tcharPath, (LPCTSTR)strFilePath);
CString strAll = strFilePath;
int strAllLength = strAll.GetLength() + 1;
int nSize = 0;
char * pszBuf;
CPINFO pCPInfo;
BOOL bUsedDefaultChar = FALSE;
int nRC = GetCPInfo( CP_ACP, &pCPInfo );
if ((nSize = WideCharToMultiByte( CP_ACP, 0, strAll, strAllLength, NULL, 0, NULL, NULL )) > 0)
{   // get the required buffer length
    pszBuf = (char*)calloc(nSize + 1, sizeof(char)); // allocate the buffer
    if (pszBuf == NULL)
        return; // no more memory
    nRC = WideCharToMultiByte( CP_ACP, 0, strAll, strAll.GetLength(), pszBuf, nSize + 1, NULL, &bUsedDefaultChar ); // store the converted chars in pszBuf
    DWORD dwErr = GetLastError();
    ::MessageBoxA( NULL, pszBuf, "", MB_OK );
    free(pszBuf); // free it.
}
On Simplified Chinese Windows, the above 6 Chinese characters were displayed correctly. Unfortunately, on Traditional Chinese Windows, the 6th character "εŸ—" couldn't be converted, so it came out as "?".
Can anyone explain why and tell me if it is possible to convert correctly?

Unicode <-> Multibyte conversion (native vs. managed)

I'm trying to convert Unicode strings coming from .NET to native C++ so that I can write them to a text file. The process is then reversed, so that the text from the file is read and converted back to a managed Unicode string.
I use the following code:
String^ FromNativeToDotNet(std::string value)
{
    // Convert an ASCII string to a Unicode String
    std::wstring wstrTo;
    wchar_t *wszTo = new wchar_t[value.length() + 1];
    wszTo[value.size()] = L'\0';
    MultiByteToWideChar(CP_UTF8, 0, value.c_str(), -1, wszTo, (int)value.length());
    wstrTo = wszTo;
    delete[] wszTo;
    return gcnew String(wstrTo.c_str());
}
std::string FromDotNetToNative(String^ value)
{
    // Pass on changes to native part
    pin_ptr<const wchar_t> wcValue = SafePtrToStringChars(value);
    std::wstring wsValue( wcValue );
    // Convert a Unicode string to an ASCII string
    std::string strTo;
    char *szTo = new char[wsValue.length() + 1];
    szTo[wsValue.size()] = '\0';
    WideCharToMultiByte(CP_UTF8, 0, wsValue.c_str(), -1, szTo, (int)wsValue.length(), NULL, NULL);
    strTo = szTo;
    delete[] szTo;
    return strTo;
}
What happens is that e.g. a Japanese character gets converted to two ASCII chars (ζΌ’ -> "w). I assume that's correct?
But the other direction does not work: when I call FromNativeToDotNet with "w I only get "w back as a managed Unicode string...
How can I get the Japanese character correctly restored?
Best to use UTF8Encoding:
static String^ FromNativeToDotNet(std::string value)
{
    array<Byte>^ bytes = gcnew array<Byte>(value.length());
    System::Runtime::InteropServices::Marshal::Copy(IntPtr((void*)value.c_str()), bytes, 0, value.length());
    return (gcnew System::Text::UTF8Encoding)->GetString(bytes);
}
static std::string FromDotNetToNative(String^ value)
{
    if (value->Length == 0) return std::string("");
    array<Byte>^ bytes = (gcnew System::Text::UTF8Encoding)->GetBytes(value);
    pin_ptr<Byte> chars = &bytes[0];
    return std::string((char*)chars, bytes->Length);
}
a Japanese character gets converted to two ASCII chars (ζΌ’ -> "w). I assume that's correct?
No, that character, U+6F22, should be converted to three bytes: 0xE6 0xBC 0xA2.
In UTF-16 (little endian), U+6F22 is stored in memory as 0x22 0x6F, which would look like "o in ASCII (rather than "w), so it looks like something is wrong with your conversion from String^ to std::string.
I'm not familiar enough with String^ to know the right way to convert from String^ to std::wstring, but I'm pretty sure that's where your problem is.
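You can sanity-check those expected bytes by encoding the code point by hand. This is a minimal standalone sketch (plain C++, separate from the managed code above) that UTF-8-encodes a BMP code point:

```cpp
#include <cstdint>
#include <string>

// Hand-encode one Unicode code point (BMP range, surrogates excluded)
// as UTF-8, following the standard bit layout:
//   U+0000..U+007F -> 0xxxxxxx
//   U+0080..U+07FF -> 110xxxxx 10xxxxxx
//   U+0800..U+FFFF -> 1110xxxx 10xxxxxx 10xxxxxx
std::string encodeUtf8(uint32_t cp)
{
    std::string out;
    if (cp < 0x80) {
        out += (char)cp;
    } else if (cp < 0x800) {
        out += (char)(0xC0 | (cp >> 6));
        out += (char)(0x80 | (cp & 0x3F));
    } else {
        out += (char)(0xE0 | (cp >> 12));
        out += (char)(0x80 | ((cp >> 6) & 0x3F));
        out += (char)(0x80 | (cp & 0x3F));
    }
    return out;
}
```

encodeUtf8(0x6F22) yields exactly the bytes 0xE6 0xBC 0xA2 quoted above, so a correct CP_UTF8 conversion of ζΌ’ must produce a three-char std::string.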
I don't think the following has anything to do with your problem, but it is obviously wrong:
std::string strTo;
char *szTo = new char[wsValue.length() + 1];
You already know a single wide character can produce multiple narrow characters, so the number of wide characters is obviously not necessarily equal to or greater than the number of corresponding narrow characters.
You need to use WideCharToMultiByte to calculate the buffer size, and then call it again with a buffer of that size. Or you can just allocate a buffer to hold 3 times the number of chars as wide chars.
Try this instead:
String^ FromNativeToDotNet(std::string value)
{
    // Convert a UTF-8 string to a UTF-16 String
    int len = MultiByteToWideChar(CP_UTF8, 0, value.c_str(), value.length(), NULL, 0);
    if (len > 0)
    {
        std::vector<wchar_t> wszTo(len);
        MultiByteToWideChar(CP_UTF8, 0, value.c_str(), value.length(), &wszTo[0], len);
        return gcnew String(&wszTo[0], 0, len);
    }
    return gcnew String((wchar_t*)NULL);
}
std::string FromDotNetToNative(String^ value)
{
    // Pass on changes to native part
    pin_ptr<const wchar_t> wcValue = SafePtrToStringChars(value);
    // Convert a UTF-16 string to a UTF-8 string
    int len = WideCharToMultiByte(CP_UTF8, 0, wcValue, value->Length, NULL, 0, NULL, NULL);
    if (len > 0)
    {
        std::vector<char> szTo(len);
        WideCharToMultiByte(CP_UTF8, 0, wcValue, value->Length, &szTo[0], len, NULL, NULL);
        return std::string(&szTo[0], len);
    }
    return std::string();
}

Converting Byte Array to String (NXC)

Is there a way to show a byte array on the NXT screen (using NXC)?
I've tried like this:
unsigned char Data[];
string Result = ByteArrayToStr(Data[0]);
TextOut(0, 0, Result);
But it gives me a File Error! -1.
If this isn't possible, how can I watch the value of Data[0] during the program?
If you want to show the byte array in hexadecimal format, you can do this:
byte buf[];
unsigned int buf_len = ArrayLen(buf);
string szOut = "";
string szTmp = "00";

// Convert to a hexadecimal string.
for (unsigned int i = 0; i < buf_len; ++i)
{
    sprintf(szTmp, "%02X", buf[i]);
    szOut += szTmp;
}

// Display on screen.
WordWrapOut(szOut,
            0, 63,
            NULL, WORD_WRAP_WRAP_BY_CHAR,
            DRAW_OPT_CLEAR_WHOLE_SCREEN);
You can find WordWrapOut() here.
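For what it's worth, the same hex-dump logic can be checked off-device with standard C++ (this is ordinary C++, not NXC, and assumes nothing about the NXT firmware):

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Render a byte buffer as two uppercase hex digits per byte,
// mirroring the sprintf("%02X", buf[i]) loop above.
std::string toHex(const std::vector<unsigned char>& buf)
{
    std::string out;
    char tmp[3];  // two digits + terminator
    for (unsigned char b : buf) {
        std::snprintf(tmp, sizeof tmp, "%02X", b);
        out += tmp;
    }
    return out;
}
```

For example, toHex({0x00, 0x0F, 0xAB}) returns "000FAB".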
If you simply want to convert it to ASCII:
unsigned char Data[];
string Result = ByteArrayToStr(Data);
TextOut(0, 0, Result);
If you only wish to display one character:
unsigned char Data[];
string Result = FlattenVar(Data[0]);
TextOut(0, 0, Result);
Try byte. byte is an unsigned char in NXC.
P.S. There is a heavily-under-development debugger in BricxCC (I assume you're on Windows). Look here.
EDIT: The code compiles and runs, but does not do anything.
