Why am I getting gibberish output, along with valid output, when reading a String^ ? - visual-c++

I'm trying to write a few integers to a file (as a string). Every time I try to run this bit of code I get the integers into the text file as planned, but before the integers I get some gibberish. I did some experimenting and found that if I put nothing into System::String ^ b, it would give the same gibberish output in the file or in a message box, but I couldn't figure out why it would do this when I was concatenating those integers to it (as strings). What could be going wrong here?
using namespace msclr::interop;
using namespace System;
using namespace System::IO;
using namespace System::Text;
...
System::IO::StreamWriter ^ x;
char buffer[21], buffer2[3];
int a;
for(a = 0; a < 10; a++){
    itoa(weight[a], buffer, 10);
    strcat(buffer, buffer2);
}
System::String ^ b = marshal_as<String^>(buffer);
x->WriteLine(b);

What format is the file in? You may be reading a UTF-8 file with a byte-order mark that is silently applied by a text editing program.
http://en.wikipedia.org/wiki/Byte_order_mark

Typo in question or bug in code: pass buffer2 to itoa instead of buffer.
Also, initialize buffer to "";
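Putting those two fixes together, the loop might look something like this (just a sketch: it assumes weight is an array of ten ints, that x has actually been constructed with gcnew StreamWriter, and that the buffers are sized generously so ten numbers fit):
char buffer[128] = "", buffer2[16];
int a;
for(a = 0; a < 10; a++){
    itoa(weight[a], buffer2, 10);   // convert one integer into buffer2
    strcat(buffer, buffer2);        // append it to the accumulated string in buffer
}
System::String ^ b = marshal_as<String^>(buffer);
x->WriteLine(b);
Without the = "" initializer, buffer starts with whatever bytes happen to be on the stack, which is exactly the gibberish that ends up in front of the numbers.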

Related

C++ override quotes

OK, so I'm using C++ to make a library that helps me print lines to a console.
I want to override "" (the quote operator) so that it creates an std::string instead of a string literal, to make it easier for me to append other data types to the string I want to output.
I've seen this done before in the wxWidgets with their wxString, but I have no idea how I can do that myself.
Is that possible and how would I go about doing it?
I've already tried using this code, but with no luck:
class PString{
    std::string operator""(const char* text, std::size_t len) {
        return std::string(text, len);
    }
};
I get this error:
error: expected suffix identifier
std::string operator""(const char* text, std::size_t len) {
^~
which, I'd assume, wants me to add a suffix after the "", but I don't want that. I want to use only "" (plain quotes).
Thanks!
You can't use "" without defining a suffix. A plain "" is just a string literal (a const char*); literals can carry a prefix (L"", u"", U"", u8"", R"()") or a suffix (""s, ""sv, ...), and only the suffixed forms can be overloaded.
The way wxString works is that it defines an implicit constructor wxString::wxString(const char*); so when you pass "some string" to a function taking a wxString, it is essentially the same as passing wxString("some string").
Overloading operator""X gives you user-defined string literals, as the other answer shows.
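A rough sketch of both routes (PString here is just the class from the question, not a real library type, and the _ps suffix is made up for the example; user-defined suffixes have to start with an underscore):
#include <cstddef>
#include <string>
#include <iostream>

class PString {
public:
    std::string text;
    PString(const char* s) : text(s) {}                        // implicit: "abc" converts to PString
    PString(const char* s, std::size_t len) : text(s, len) {}
};

void print(const PString& s) { std::cout << s.text << '\n'; }

// user-defined literal: a named suffix is mandatory
PString operator"" _ps(const char* s, std::size_t len) {
    return PString(s, len);
}

int main() {
    print("hello");            // implicit conversion, wxString-style
    PString p = "world"_ps;    // user-defined literal with a suffix
    print(p);
}
The implicit-constructor route is what lets you write plain "..." at call sites; the literal route always needs the suffix.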

junk values in a LPTSTR string

I wrote a function to return the extension from a path. It looks like this:
LPTSTR GetExtension(LPCTSTR path1)
{
    CString str(path1);
    int length = str.ReverseFind(L'.');
    str = str.Right(str.GetLength() - length);
    LPTSTR extension = str.GetBuffer(0);
    str.ReleaseBuffer();
    return extension;
}
I checked in the debugger and found that extension has a valid value (.txt) at the point of return, but when I use the following statement in the main method:
LPTSTR extension = GetExtension(L"C:\\Windows\\text.txt");
the variable extension contains the following junk values:
ﻮﻮﻮﻮﻮﻮﻮﻮﻮﻮﻮﻮﻮﻮﻮﻮ… (several hundred more of the same character, ending in random glyphs such as 䞐瀘嗯᠀骰PꬰP⚜叕u)
Can anyone tell me what is the reason behind it?
You are returning a pointer to a released buffer. And the buffer is a local variable of the function. Both big no-nos. Change the signature to
size_t GetExtension(LPCTSTR path, LPTSTR buffer, size_t bufferSize)
so that you can copy the result into buffer.
Or return a CString or std::wstring, you're using C++, not C. Using TCHAR is also a heavily outmoded way to handle strings, the last non-Unicode version of Windows died a timely death 12 years ago.
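A sketch of the CString-returning version, which avoids handing out a pointer into a buffer that no longer exists (the ReverseFind-returns-minus-one case, which the original didn't cover, is handled here too):
CString GetExtension(LPCTSTR path)
{
    CString str(path);
    int pos = str.ReverseFind(L'.');
    if (pos < 0)
        return CString();                       // no '.' in the path: return an empty string
    return str.Right(str.GetLength() - pos);    // returned by value, so the CString owns its data
}

// usage
CString extension = GetExtension(L"C:\\Windows\\text.txt");   // ".txt"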
I don't have my compiler with me, but maybe you are getting the buffer and storing a pointer to it, and then releasing it while the LPTSTR still points at that location.
Or maybe the buffer lives on the stack while you look at it inside the function, so on exiting the function you lose it.

Converting wchar_t* to char* on iOS

I'm attempting to convert a wchar_t* to a char*. Here's my code:
size_t result = wcstombs(returned, str, length + 1);
if (result == (size_t)-1) {
int error = errno;
}
It indeed fails, and error is filled with 92 (ENOPROTOOPT) - Protocol not available.
I've even tried setting the locale:
setlocale(LC_ALL, "C");
And this one too:
setlocale(LC_ALL, "");
I'm tempted to just throw the characters with static casts!
It seems the issue was that the source string used a non-standard encoding (two ASCII characters for each wide character), which looked fine in the debugger but was actually corrupt internally. The error code produced isn't clearly documented, but it amounts to simply not being able to decode that piece of text.
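For reference, a minimal conversion that does work once the source really is valid wide text might look like this (a sketch with deliberately simple error handling; the narrow() wrapper name is just for the example, and the measuring call with a NULL destination is standard wcstombs behaviour):
#include <stdlib.h>
#include <locale.h>
#include <wchar.h>

char *narrow(const wchar_t *str)
{
    setlocale(LC_CTYPE, "");                  /* use the platform's default character set */
    size_t length = wcstombs(NULL, str, 0);   /* first call only measures the output size */
    if (length == (size_t)-1)
        return NULL;                          /* str contains an unconvertible character */
    char *returned = (char *)malloc(length + 1);
    if (returned)
        wcstombs(returned, str, length + 1);
    return returned;                          /* caller frees */
}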

Convert hex to int

I've seen lots of answers to this, but I cannot seem to get any of them to work. I think I'm getting confused between variable types. I have an input from a NetworkStream that puts a hex code into a String^. I need to take part of this string and convert it to a number (presumably an int) so I can do some arithmetic, then output the result on the form. The code I have so far:
String^ msg; // gets filled later, e.g. with "A55A6B0550000000FFFBDE0030C8"
String^ test;
//I have selected the relevant part of the string, e.g. 5A
test = msg->Substring(2, 2);
//I have tried many different routes to extract the numerical value of the
//substring. Below are some of them:
std::stringstream ss;
int hexInt = 0;
//Works if test is string, not String^ but then I can't output it later.
ss << sscanf(test.c_str(), "%x", &hexInt);
//--------
sprintf(&hexInt, "%d", test);
//--------
//And a few others that I've deleted after they don't work at all.
//Output:
this->textBox1->AppendText("Display numerical value after a bit of math");
Any help with this would be greatly appreciated.
Chris
Does this help?
String^ hex = L"5A";
int converted = System::Convert::ToInt32(hex, 16);
The documentation for the Convert static method used is on the MSDN.
You need to stop thinking about using the standard C++ library with managed types. The .Net BCL is really very good...
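Tying that back to the original snippet, the whole round trip might look like this (a sketch; the arithmetic is just a placeholder and msg/textBox1 are the names from the question):
String^ test = msg->Substring(2, 2);              // e.g. "5A"
int value = System::Convert::ToInt32(test, 16);   // 0x5A == 90
value = value * 2;                                // whatever arithmetic you need
this->textBox1->AppendText(value.ToString());     // back to a String^ for display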
Hope this helps:
/*
the method demonstrates converting hexadecimal values,
which are broken into low and high bytes.
*/
#include <stdio.h>
#include <conio.h>

int main(){
    //character buffer holding the low and high bytes
    unsigned char buf[2];
    buf[0] = 0x06; //low byte, initialized to some hex value
    buf[1] = 0xAE; //high byte, initialized to some hex value
    int number = 0;
    //number generated by a binary shift of the high byte OR'd with the low byte
    number = 0xFFFF & ((buf[1] << 8) | buf[0]);
    printf("%x", number); //this prints ae06
    printf("%d", number); //this prints the integer equivalent
    getch();
}

C++'s char * by swig got problem in Python 3.0

Our C++ lib works fine with Python 2.4 using SWIG, returning a C++ char* back to a Python str. But this solution hits a problem in Python 3.0; the error is:
Exception=(, UnicodeDecodeError('utf8', b"\xb6\x9d\xa.....",0, 1, 'unexpected code byte')
Our definition looks like this (it works fine in Python 2.4):
void cGetPubModulus(
void* pSslRsa,
char* cMod,
int* nLen );
%include "cstring.i"
%cstring_output_withsize( char* cMod, int* nLen );
I suspect SWIG is doing a bytes->str conversion automatically. In Python 2.4 it can be implicit, but in Python 3.0 it's no longer allowed. Anyone got a good idea? Thanks.
It's rather Python 3 that does that conversion. In Python 2, bytes and str are the same thing; in Python 3, str is Unicode, so something somewhere tries to decode your data as UTF-8, but it isn't UTF-8.
Your Python 3 code needs to return not a Python str, but a Python bytes. This will not work with Python 2, though, so you need preprocessor statements to handle the differences.
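In C terms, whatever the typemap builds for cMod has to become a bytes object under Python 3 rather than a str. The relevant branch might look roughly like this (a sketch, not the exact SWIG-generated code; resultobj is just a placeholder name for the object the wrapper returns):
#if PY_MAJOR_VERSION >= 3
    resultobj = PyBytes_FromStringAndSize(cMod, *nLen);    /* raw bytes, no UTF-8 decoding */
#else
    resultobj = PyString_FromStringAndSize(cMod, *nLen);   /* Python 2 str, which is bytes */
#endif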
I came across a similar problem. I wrote a SWIG typemap for a custom char array (an unsigned char, in fact) and it segfaulted when using Python 3. So I debugged the code within the typemap and realized the problem Lennart describes.
My solution to that problem was doing the following in that typemap:
%typemap(in) byte_t[MAX_FONTFACE_LEN] {
    if (PyString_Check($input))
    {
        $1 = (byte_t *)PyString_AsString($input);
    }
    else if (PyUnicode_Check($input))
    {
        $1 = (byte_t *)PyUnicode_AsEncodedString($input, "utf-8", "Error ~");
        $1 = (byte_t *)PyBytes_AS_STRING($1);
    }
    else
    {
        PyErr_SetString(PyExc_TypeError, "Expected a string.");
        return NULL;
    }
}
That is, I check what kind of string object the PyObject is. The checks PyString_Check() and PyUnicode_Check() return nonzero if the input is a byte (UTF-8) string or a Unicode string, respectively. If it's a Unicode string, we encode it to bytes with the call to PyUnicode_AsEncodedString() and later turn those bytes into a char * with the call to PyBytes_AS_STRING().
Note that I reuse the same variable for storing the Unicode string and then for converting it to bytes. Although that is questionable and could start another coding-style discussion, the fact is that it solved my problem. I have tested it with Python 3 and Python 2.7 binaries without any problems so far.
And lastly, the final branch raises an exception back into the Python call, to report that the input wasn't a string, either bytes or Unicode.
