C++/CLI: Converting System::String to const char*

I'm using Microsoft Visual C++ 2008
I want to concatenate some strings and then pass the result to the system command.
I tried to do it like this:
System::String^ link;
link = "wget.exe --output-document=log http://ADDRESS";
link = link + System::String::Copy(textBox_login->Text);
link = link + "&passwd=";
link = link + System::String::Copy(textBox_passwd->Text);
system(link); //LINE WITH ERROR
But I get error C2664: 'system' : cannot convert parameter 1 from 'System::String ^' to 'const char *'
I appreciate any help ;)

Take a look at the existing questions on this conversion.
In essence, the problem is that the system function expects an argument of type const char* rather than a System::String^.
So you need to convert the string to a const char* and use that as the argument to system:
using namespace System::Runtime::InteropServices;

// Copy the managed string into unmanaged memory as an ANSI string
IntPtr p = Marshal::StringToHGlobalAnsi(link);
const char* linkStr = static_cast<const char*>(p.ToPointer());
system(linkStr);
Marshal::FreeHGlobal(p); // release the unmanaged copy
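As an alternative sketch, Visual C++ 2008 also ships the msclr marshalling helpers, which release the converted string automatically; link is the variable from the question:
#include <msclr/marshal.h>
using namespace msclr::interop;

marshal_context ctx;
// The context owns the converted string and frees it in its destructor
const char* linkStr = ctx.marshal_as<const char*>(link);
system(linkStr);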

To use system as you do, you will need marshalling. This requires extra precautions and can lead to unforeseen pain.
I recommend that you launch wget via the System::Diagnostics::Process class instead.
It integrates with .NET much better, and you can use System::String^ directly.
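A minimal sketch of that approach, assuming the text boxes from the question; the argument layout is illustrative:
using namespace System;
using namespace System::Diagnostics;

// No shell parses the arguments here, so the '&' is passed through intact
String^ args = String::Concat("--output-document=log http://ADDRESS",
                              textBox_login->Text, "&passwd=",
                              textBox_passwd->Text);
Process^ wget = Process::Start("wget.exe", args);
wget->WaitForExit(); // block like system() would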

After doing as Yacoby said, almost everything works fine, but when it gets to
link = link + "&passwd=";
it cuts off everything that comes after that point in the string.
When I remove the '&' it works just fine... but I need the '&' sign.

You got the technical solution to your problem but here are a couple other things you might want to consider:
Instead of opening a process to do the HTTP request for you, use an API to do it; in .NET this is much easier than in standard C++ (look at WebRequest). This matters especially if you plan to do something with the response.
In general, if you're appending to a String multiple times, prefer a StringBuilder. Since String is immutable in .NET, every append requires a new String to be constructed.
In this case, don't use a String to build the URL in the first place; use System::Uri instead (see the sketch below).
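A minimal sketch combining these suggestions, assuming the text boxes from the original question; the URL layout mirrors the question and is illustrative:
using namespace System;
using namespace System::IO;
using namespace System::Net;
using namespace System::Text;

// Escape the user input so characters like '&' cannot break the query string
StringBuilder^ url = gcnew StringBuilder("http://ADDRESS");
url->Append(Uri::EscapeDataString(textBox_login->Text));
url->Append("&passwd=");
url->Append(Uri::EscapeDataString(textBox_passwd->Text));

WebRequest^ request = WebRequest::Create(url->ToString());
StreamReader^ reader = gcnew StreamReader(request->GetResponse()->GetResponseStream());
String^ body = reader->ReadToEnd(); // the content wget would have written to "log"
reader->Close();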

Related

Node.js URL-encoding for pre-RFC3986 urls (using + vs %20)

Within Node.js, I am using querystring.stringify() to encode an object into a query string for usage in a URL. Values that have spaces are encoded as %20.
I'm working with a particularly finicky web service that will only accept spaces encoded as +, as used to be commonly done prior to RFC3986.
Is there a way to set an option for querystring so that it encodes spaces as +?
Currently I am simply doing a .replace() to replace all instances of %20 with +, but this is a bit tedious if there is an option I can set ahead of time.
If anyone is still facing this issue: the "qs" npm package has a feature to encode spaces as +
const qs = require('qs');
qs.stringify({ a: 'b c' }, { format: 'RFC1738' }); // => 'a=b+c'
I can't think of any library doing that by default, and unfortunately, I'd say your implementation may be the more efficient way to do this, since any other option would probably either do what you're already doing or use slower non-compiled pure JavaScript code.
What about asking the web service provider to follow the RFC?
https://github.com/kvz/phpjs is a node.js package that ports PHP's functions to JavaScript. At the time of writing, its http_build_query implementation only supports urlencode (so the query string includes + instead of spaces), but hopefully it will soon include the enc_type parameter / rawurlencode (%20 for spaces).
See http://php.net/http_build_query.
RFC1738 (+'s) will be the default enc_type either way, so you can use it immediately for your purposes.

Use STL string with unicode

I am coding a plugin for Autodesk 3ds Max, and they recommend using the _T(x) macro for every string literal to make the code work with Unicode as well. I am using the STL string class a lot in this code. So do I have to rewrite string("foo") as string(_T("foo"))? The STL string class doesn't have a constructor for wchar_t, so that doesn't make sense, does it?
Thanks
Look at the definition of the _T macro - it prefixes the literal with L in Unicode builds and expands to nothing in non-Unicode builds. If you want to keep using the string class and follow the recommendation for your plugin, your best bet is to use something like tstring, which follows the same rules.
But the truth is, all this _T business made a lot of sense 10 years ago - all modern Windows versions are Unicode-only, and you can just use wstring.
You could create your own string class, say xstring, use _T for constants, and then internally switch between string and wstring depending on whether Unicode is enabled - either that, or instantiate xstring<yourchartype>.
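For reference, a minimal sketch of the tstring idea mentioned above; this is a common idiom, not anything 3ds Max specific:
#include <string>
#include <tchar.h>

// std::string in multi-byte builds, std::wstring in Unicode builds
typedef std::basic_string<TCHAR> tstring;

tstring s(_T("foo")); // compiles in both configurations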

Use of CreateProcess on WinCE6

I'm trying to launch a process from my program, namely cmd.exe.
The documentation says I have to use CreateProcess, and below is how I use it:
CreateProcess((LPCWSTR) "\Windows\cmd.exe", (LPCWSTR) "", 0,0,0,0,0,0,0,0);
dw = GetLastError();
printf("%u \n", dw);
The path is the one displayed by the target (on the target, I found a shortcut to cmd.exe which states it resides in \Windows).
The error is always the same (2), regardless of how I write the path. Error code 2 is ERROR_FILE_NOT_FOUND ("The system cannot find the file specified").
Thanks for having read,
GQ
You are passing an incorrect string to CreateProcess. Just casting a byte-oriented (narrow) string to LPCWSTR doesn't fix the problem that it is incorrect data - you really have to use a Unicode string, which you can spell as
CreateProcess(L"\\Windows\\cmd.exe", NULL, 0,0,0,0,0,0,0,0);
Alternatively, you can use the TEXT() macro.
The path is also incorrect as written: backslashes in a C string literal must be escaped, so use double backslashes.
CreateProcess(TEXT("\\Windows\\cmd.exe"), TEXT(""), 0,0,0,0,0,0,0,0);
Additionally, the last parameter cannot be NULL. It must be a pointer to a PROCESS_INFORMATION structure (see the sketch below). For details, see the following link:
MSDN link for Creating Process in Windows CE 6.0
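Combining both answers, a sketch of what the call might look like on Windows CE; it assumes the CE signature, where the security attribute, inheritance, environment, current directory, and startup info parameters are unused and passed as NULL/FALSE/0:
#include <windows.h>

PROCESS_INFORMATION pi = { 0 };
if (CreateProcess(TEXT("\\Windows\\cmd.exe"), NULL,
                  NULL, NULL, FALSE, 0, NULL, NULL, NULL, &pi))
{
    // Close the returned handles once they are no longer needed
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
}
else
{
    printf("CreateProcess failed: %u \n", GetLastError());
}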

Convert char array to UNICODE in MFC C++

I'm using the following code to read files from a folder in Windows. However, since this is an MFC application, I have to convert the char array to UNICODE. For example, if I hard-code the path as "C:\images3\test\" as shown below, the code works.
WIN32_FIND_DATA FindFileData;
HANDLE hFind = INVALID_HANDLE_VALUE;
hFind = FindFirstFile(_T("C:\\images3\\test\\"), &FindFileData);
What I want is to get this working as follows:
char* pathOfFileType;
hFind = FindFirstFile(_T(pathOfFileType), &FindFileData);
Can anyone tell me how to fix this problem ?
Thanks
Thanks a lot for all your responses. I learnt a lot from those answers because I also didn't have much of an idea about what was happening underneath. Meanwhile, I managed to get rid of the issue by simply converting to Unicode using the following code, with minimal changes to my existing code.
#include <atlconv.h>
USES_CONVERSION;
//An ANSI string
LPSTR lpsz_ANSI_String = pathOfFileType;
//ANSI string being converted to a UNICODE string
LPWSTR lpUnicodeStr = A2W( lpsz_ANSI_String );
hFind = FindFirstFile(lpUnicodeStr, &FindFileData);
You can use the MultiByteToWideChar function to convert a string from char to UTF-16, but you'd be better off getting pathOfFileType directly in Unicode from the user or from wherever you take it; otherwise you may still experience problems with paths that contain characters not included in the current code page.
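For illustration, a sketch of that conversion, reusing the variables from the question and assuming the path fits in MAX_PATH:
#include <windows.h>

wchar_t widePath[MAX_PATH];
// CP_ACP: interpret the input bytes using the current ANSI code page
if (MultiByteToWideChar(CP_ACP, 0, pathOfFileType, -1, widePath, MAX_PATH) > 0)
{
    hFind = FindFirstFileW(widePath, &FindFileData);
}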
Your question demonstrates a confusion of several issues. First, using MFC doesn't mean you have to convert the character array to Unicode; one has nothing to do with the other. Furthermore, FindFirstFile is a Win32 API, not an MFC function. Finally, _T("abc") is not necessarily Unicode; rather, _T(x) is a macro that expands to x in multi-byte builds and to L x in Unicode builds, creating a wide character literal. This is designed so that your code can compile in either a Unicode or a multi-byte configuration. To achieve the same flexibility when declaring a variable, you use the TCHAR type instead of char or wchar_t. So your second snippet should look like
TCHAR* pathOfFileType;
hFind = FindFirstFile(pathOfFileType, &FindFileData);
Note there is no _T macro here; _T is only applied to string literals, not identifiers.
"since this a MFC application I have to convert the char array to UNICODE"
Not so. If you wish, you can change the project to use the Multi-Byte Character Set instead.
In Project Properties > General, change Character Set to 'Use Multi-Byte Character Set'.
Now this will work
char* pathOfFileType;
hFind = FindFirstFile(pathOfFileType, &FindFileData);
Supposing you want to use UNICODE (Visual Studio's name for the UTF-16 encoding of Unicode characters native to Windows), then you have to explicitly call the ANSI version of the API:
char* pathOfFileType;
hFind = FindFirstFileA(pathOfFileType, &FindFileData);

How do I PInvoke a multi-byte ANSI string?

I'm working on a PInvoke wrapper for a library that does not support Unicode strings, but does support multi-byte ANSI strings. While investigating FxCop reports on the library, I noticed that the string marshaling being used had some interesting side effects. The PInvoke method was using "best fit" mapping to create a single-byte ANSI string. For illustration, this is what one method looked like:
[DllImport("thedll.dll", CharSet=CharSet.Ansi)]
public static extern int CreateNewResource(string resourceName);
The result of calling this function with a string that contains non-ASCII characters is that Windows finds a "close" character, which generally ends up being "?". If we pretend that 'a' were a non-ASCII character, then passing "cat" as a parameter would create a resource named "c?t".
If I follow the guidelines in the FxCop rule, I end up with something like this:
[DllImport("thedll.dll", CharSet=CharSet.Ansi, BestFitMapping = false, ThrowOnUnmappableChar = true)]
public static extern int CreateNewResource([MarshalAs(UnmanagedType.LPStr)] string resourceName);
This introduces a change in behavior; now when a character cannot be mapped an exception is thrown. This concerns me because this is a breaking change, so I'd like to try and marshal the strings as multi-byte ANSI but I cannot see a way to do so. UnmanagedType.LPStr is specified to be a single-byte ANSI string, LPTStr will be Unicode or ANSI depending on the system, and LPWStr is not what the library expects.
How would I tell PInvoke to marshal the string as a multibyte string? I see there's a WideCharToMultiByte() API function, could I change the signature to expect an IntPtr to a string I create in unmanaged memory? It seems like this still has many of the problems that the current implementation has (it still might have to drop or substitute characters), so I'm not sure if this is an improvement. Is there another method of marshaling that I'm missing?
ANSI is multi-byte, and ANSI strings are encoded according to the codepage currently enabled on the system. WideCharToMultiByte works the same way as P/Invoke.
Maybe what you're after is conversion to UTF-8. Although WideCharToMultiByte supports this, I don't think P/Invoke does, since it's not possible to adopt UTF-8 as the system-wide ANSI code page. At this point you'd be looking at passing the string as an IntPtr instead, although if you're doing that, you may as well use the managed Encoding class to do the conversion, rather than WideCharToMultiByte.
Here is the best way I've found to accomplish this: instead of marshalling as a string, marshal as a byte[]. Put the responsibility on the caller of the P/Invoke function to convert to a byte array in the most appropriate fashion, most likely by using one of the System.Text.Encoding classes.
If you end up having to call WideCharToMultiByte manually, I would get rid of the p/invoke and manually marshal this using WideCharToMultiByte in a C++/CLI wrapper function. Managed C++ is much better at these interop scenarios than C# is.
Though, if this is the only p/invoke you have, it's probably not worth it.
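For illustration, a sketch of that C++/CLI wrapper idea; the native export CreateNewResourceA and its signature are hypothetical stand-ins for whatever thedll.dll actually exposes:
#include <windows.h>
#include <vcclr.h>  // PtrToStringChars
#include <vector>

extern "C" int CreateNewResourceA(const char* resourceName); // hypothetical native export

int CreateNewResourceWrapper(System::String^ resourceName)
{
    // Pin the managed string to access its UTF-16 characters directly
    pin_ptr<const wchar_t> wide = PtrToStringChars(resourceName);
    // Convert UTF-16 to the current ANSI code page (multi-byte aware)
    int len = WideCharToMultiByte(CP_ACP, 0, wide, -1, NULL, 0, NULL, NULL);
    std::vector<char> buf(len);
    WideCharToMultiByte(CP_ACP, 0, wide, -1, &buf[0], len, NULL, NULL);
    return CreateNewResourceA(&buf[0]);
}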
