C++ to C# char[]

C# code:
class Hello
{
    public void helloWorld(char[] chars)
    {
        //do something
    }
}
C++ code to call C#:
MyCSDLL::Hello* hello;
//init hello, some calls are ok.
char* myCharPtr;
//init with message
HRESULT result = hello->helloWorld(safeArray, (MyCSDLL::_MyRetVal) _retValPtr);
Adapting from "How to create and initialize SAFEARRAY of doubles in C++ to pass to C#":
void createSafeArray(SAFEARRAY** saData, char* charPtr)
{
    char* iterator = charPtr;
    SAFEARRAYBOUND Bound;
    Bound.lLbound = 0;
    Bound.cElements = 10;
    *saData = SafeArrayCreate(VT_R8, 1, &Bound);

    char HUGEP *pdFreq;
    HRESULT hr = SafeArrayAccessData(*saData, (void HUGEP* FAR*)&pdFreq);
    if (SUCCEEDED(hr))
    {
        do {
            *pdFreq++ = *iterator;
        } while (*iterator++);
    }
}
How do I call hello->helloWorld()? It expects a SAFEARRAY*. The current code fails with HRESULT 0x80131538. How can this be fixed?
The C++ project is not CLR (not compiled with /clr).

Let's suppose the C# code is this:
namespace ClassLibrary1
{
    [ComVisible(true)]
    [ClassInterface(ClassInterfaceType.AutoDual)]
    public class Hello
    {
        public void helloWorld(char[] chars)
        {
            ...
        }
    }
}
Then, you can call it with this C/C++ code, for example:
#include <atlbase.h>  // CComPtr

#import "C:\mycode\ClassLibrary1\bin\Debug\classlibrary1.tlb" raw_interfaces_only
using namespace ClassLibrary1;

HRESULT CallHello(wchar_t* charPtr, int count)
{
    CComPtr<_Hello> p;
    HRESULT hr = p.CoCreateInstance(__uuidof(Hello));
    if (FAILED(hr))
        return hr;

    SAFEARRAY* psa = SafeArrayCreateVector(VT_UI2, 0, count);
    if (!psa)
        return E_OUTOFMEMORY;

    LPVOID pdata;
    hr = SafeArrayAccessData(psa, &pdata);
    if (SUCCEEDED(hr))
    {
        CopyMemory(pdata, charPtr, count * 2); // count is the number of chars
        SafeArrayUnaccessData(psa);
        hr = p->helloWorld(psa);
    }
    SafeArrayDestroy(psa);
    return hr;
}
.NET's char type is Unicode (UTF-16), so its binary size is two bytes; the C equivalent is wchar_t (or unsigned short, etc.). The safe array element type must match that, which is why I used VT_UI2. The VT_R8 you used is an 8-byte real, equivalent to .NET's double type.
If you really want to start from C's char, then you must first convert it to a 2-byte character.
Also note the SafeArrayCreateVector function, which directly allocates a one-dimensional safe array. Don't forget to call the cleanup functions.
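If you start from a narrow char* string, here is a minimal sketch of that conversion using the Win32 MultiByteToWideChar function (the helper name and buffer handling are illustrative, not part of the original answer):

#include <windows.h>
#include <vector>

// Convert a NUL-terminated char string to UTF-16.
std::vector<wchar_t> toWide(const char* narrow)
{
    // First call computes the required length, terminator included.
    int len = MultiByteToWideChar(CP_ACP, 0, narrow, -1, NULL, 0);
    std::vector<wchar_t> wide(len ? len : 1, L'\0');
    if (len)
        MultiByteToWideChar(CP_ACP, 0, narrow, -1, &wide[0], len);
    return wide;
}

The result can then be handed to CallHello above, e.g. CallHello(&wide[0], (int)wide.size() - 1).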

Related

C++ no acceptable conversion for operator+ with class

I have some 15-year-old C++ code that I am trying to bring up to more modern times. At this stage, I'm trying to get code that compiled with Visual C++ 6.0 to now compile with VS 2003 (Microsoft Visual C++ .NET 69462-335-0000007-18915). If we can get this to compile cleanly & run properly, then we can take another step to get it into a more recent version of VS. But I'm having a number of problems...
Here is a snippet of the (simplified) code:
class toS
{
public:
    toS() { buff[0] = '\0'; }
    operator LPCTSTR() { return buff; }
protected:
    void Append(TCHAR c)
    {
        LPTSTR p = buff + _tcslen(buff);
        *p++ = c;
        *p = '\0';
    }
    TCHAR buff[40];
};

class LtoS : public toS
{
public:
    LtoS(LONG n, TCHAR c = '\0')
    {
        _ltot(n, buff, 10);
        Append(c);
    }
};
void WriteBool(const CString& Section, const CString& Key, bool Value);

CString Section;
int nLine = 0;
std::vector<bool> *BoolVect;
std::vector<bool>::iterator vi;
...
for (vi = BoolVect->begin(); vi != BoolVect->end(); vi++)
    WriteBool(Section, "LineVis " + LtoS(nLine++), *vi);
...
From this I get the following error message:
error C2677: binary '+' : no global operator found which takes type 'LtoS' (or there is no acceptable conversion)
Any idea how this code ever worked? If I can find out what it did in the past, I can begin to define the overloaded operator+ to match the functionality.
The compiler error goes away when I make class toS inherit from CString:
class toS : public CString { ... }
Hopefully this will not only compile, but will also execute correctly...
Deriving from several of the comments, try adding a public conversion operator to class toS as follows:
operator LPCTSTR() const { return &buff[0]; }
You may need to explicitly construct the string in the for loop as well, e.g.:
WriteBool(Section, CString("LineVis ") + static_cast<LPCTSTR>(LtoS(nLine++)), *vi);
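Put together, a minimal sketch of the fixed pieces (assuming ATL/MFC's CString and the TCHAR headers):

#include <atlstr.h>   // CString
#include <tchar.h>

class toS
{
public:
    toS() { buff[0] = _T('\0'); }
    // const, so the conversion also applies to temporaries bound to const references
    operator LPCTSTR() const { return &buff[0]; }
protected:
    void Append(TCHAR c)
    {
        LPTSTR p = buff + _tcslen(buff);
        *p++ = c;
        *p = _T('\0');
    }
    TCHAR buff[40];
};

class LtoS : public toS
{
public:
    LtoS(LONG n, TCHAR c = _T('\0'))
    {
        _ltot(n, buff, 10);
        Append(c);
    }
};

// Call site, with the CString built explicitly:
// WriteBool(Section, CString(_T("LineVis ")) + static_cast<LPCTSTR>(LtoS(nLine++)), *vi);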
(Side note: As you probably know since you just extracted code for an example, there's a problem here:
std::vector<bool> BoolVect;
...
for (vi = BoolVect->begin(); vi != BoolVect->end(); vi++)
The notation you're using to access the BoolVect implies that it is a pointer, but it's not being declared as such in your example.)

Returning wchar_t* from C++/CLI to native

Let's say we have a pure virtual C++ class:
class INativeInterface {
public:
virtual ~INativeInterface () {};
virtual LPCWSTR GetString () = 0;
};
and then we need to provide an implementation of this interface in C++/CLI:
class HalfManagedImplementation : public INativeInterface {
public:
    virtual LPCWSTR GetString () override {
        // need to return wchar_t const * pointer which points to the
        // data of our managedData string
        // pin_ptr is not suitable as it will go out of scope
        // what other options do we have here?
        // perhaps copying managed string contents to unmanaged heap?
        wchar_t * unmanagedString = new wchar_t [managedData->Length + 1];
        pin_ptr<const wchar_t> pinnedString = PtrToStringChars (managedData);
        wcscpy_s (unmanagedString, managedData->Length + 1, pinnedString);
        return unmanagedString;
    }

private:
    String^ managedData;

    void SetString (String^ param) {
        // do something in .net
        managedData = param;
    }
};
My main questions are:
Can I allocate a native string on the CRT heap and return a pointer to it to the native C++ code, as I did above, given that memory allocated by the managed code will be de-allocated by the native code?
An example of usage:
LPCWSTR data = cppCliObject->GetString ();
// do stuff with the returned data or persist it by copying it somewhere else
delete[] data;
Is the first point still valid when the native C++ code is in a different DLL than the C++/CLI one?
Are there any other alternatives or best practices for returning wchar_t data to native C++?
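One possible alternative (a sketch, not from the original post): copy the string with the COM task allocator instead of new[], so allocation and deallocation do not have to share a CRT heap even across DLLs:

using namespace System::Runtime::InteropServices;

virtual LPCWSTR GetString () override {
    // Marshal::StringToCoTaskMemUni copies managedData into memory
    // allocated with CoTaskMemAlloc; any module may free it.
    return static_cast<LPCWSTR>(
        Marshal::StringToCoTaskMemUni(managedData).ToPointer());
}

The native caller then releases the string with CoTaskMemFree((LPVOID)data) instead of delete[].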

How to convert GUID to char*

I would like to convert a CLSID to a char* in C++ so I can display it in a text box. I am new to C++, so please make this as simple as possible.
Thanks
C'ish solution:
/* 128 bit GUID to human-readable string */
char * guid_to_str(const GUID * id, char * out) {
    int i;
    char * ret = out;
    out += sprintf(out, "%.8lX-%.4hX-%.4hX-", id->Data1, id->Data2, id->Data3);
    for (i = 0; i < sizeof(id->Data4); ++i) {
        out += sprintf(out, "%.2hhX", id->Data4[i]);
        if (i == 1) *(out++) = '-';
    }
    return ret;
}
This assumes the output buffer has already been allocated and is at least 37 bytes (including the terminating null character).
The output is of the form "75B22630-668E-11CF-A6D9-00AA0062CE6C"
Usage example:
GUID g;
char buffer[37];
std::cout << guid_to_str(&g, buffer);
Note: this code exists because I had to implement GUID parsing under Linux; otherwise I would have used the Windows API function StringFromCLSID mentioned by @krowe.
Here is a great example for converting GUID to string and vice versa that I am using in my projects:
#include <array>
#include <cstdio>
#include <stdexcept>
#include <string>

std::string guidToString(GUID guid) {
    std::array<char, 40> output;
    // Data1 is an unsigned long (%08lX); Data4 elements are unsigned char (%02hhX)
    snprintf(output.data(), output.size(),
             "{%08lX-%04hX-%04hX-%02hhX%02hhX-%02hhX%02hhX%02hhX%02hhX%02hhX%02hhX}",
             guid.Data1, guid.Data2, guid.Data3,
             guid.Data4[0], guid.Data4[1], guid.Data4[2], guid.Data4[3],
             guid.Data4[4], guid.Data4[5], guid.Data4[6], guid.Data4[7]);
    return std::string(output.data());
}

GUID stringToGUID(const std::string& guid) {
    GUID output;
    const auto ret = sscanf(guid.c_str(),
                            "{%8lX-%4hX-%4hX-%2hhX%2hhX-%2hhX%2hhX%2hhX%2hhX%2hhX%2hhX}",
                            &output.Data1, &output.Data2, &output.Data3,
                            &output.Data4[0], &output.Data4[1], &output.Data4[2], &output.Data4[3],
                            &output.Data4[4], &output.Data4[5], &output.Data4[6], &output.Data4[7]);
    if (ret != 11)
        throw std::logic_error("Invalid GUID, format should be {00000000-0000-0000-0000-000000000000}");
    return output;
}
In the example, the conversion goes through a char* buffer before producing the std::string, so this is exactly what you are looking for, done efficiently.
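A quick usage sketch (the GUID value is the one from the example output earlier):

GUID g = { 0x75B22630, 0x668E, 0x11CF,
           { 0xA6, 0xD9, 0x00, 0xAA, 0x00, 0x62, 0xCE, 0x6C } };
std::string s = guidToString(g);   // "{75B22630-668E-11CF-A6D9-00AA0062CE6C}"
GUID g2 = stringToGUID(s);         // round-trips back to the same value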
The Windows API has a function for this:
CLSID clsid;
HRESULT hr = CLSIDFromProgID(OLESTR("Adobe.SVGCtl.3"), &clsid);

// Get the class id as a string
LPOLESTR className;
hr = StringFromCLSID(clsid, &className);

// convert to CString
CString c = (char *) (_bstr_t) className;

// then release the memory used by the class name
CoTaskMemFree(className);

// Now c is ready to use
A CLSID is the same as a UUID, so you can use the UuidToString() function
http://msdn.microsoft.com/en-us/library/windows/desktop/aa379352(v=vs.85).aspx
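A sketch of that route using the ANSI variant (link against Rpcrt4.lib; UuidToStringA allocates the string, which must be freed with RpcStringFreeA):

#include <rpc.h>
#include <stdio.h>
#pragma comment(lib, "rpcrt4.lib")

void printClsid(const CLSID& clsid)
{
    RPC_CSTR szUuid = NULL;
    // CLSID and UUID are both typedefs of GUID, so clsid can be passed directly
    if (UuidToStringA(&clsid, &szUuid) == RPC_S_OK)
    {
        printf("%s\n", (char*)szUuid);  // note: this format has no braces
        RpcStringFreeA(&szUuid);
    }
}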

Why doesn't this boost thread creation compile?

I wrote some multithreading code using the Boost thread library. I initialized two threads in the constructor, using the placeholder _1 as the argument required by the member function fillSample(int num). But this doesn't compile in my Visual Studio 2010. Here is the code:
#include <boost/thread.hpp>
#include <boost/thread/condition.hpp>
#include <boost/bind/placeholders.hpp>

#define SAMPLING_FREQ 250
#define MAX_NUM_SAMPLES 5*60*SAMPLING_FREQ
#define BUFFER_SIZE 8
class ECG
{
private:
    int sample[BUFFER_SIZE];
    int sampleIdx;
    int readIdx, writeIdx;
    bool readyFlag;
    boost::thread m_ThreadWrite;
    boost::thread m_ThreadRead;
    boost::mutex m_Mutex;
    boost::condition bufferNotFull, bufferNotEmpty;

public:
    ECG();
    void fillSample(int num); //get sample from the data stream
    void processSample();     //process ECG sample, return the last processed
};

ECG::ECG() : readyFlag(false), sampleIdx(0), readIdx(0), writeIdx(0)
{
    m_ThreadWrite = boost::thread((boost::bind(&ECG::fillSample, this, _1)));
    m_ThreadRead = boost::thread((boost::bind(&ECG::processSample, this)));
}

void ECG::fillSample(int num)
{
    boost::mutex::scoped_lock lock(m_Mutex);
    while( (writeIdx-readIdx)%BUFFER_SIZE == BUFFER_SIZE-1 )
    {
        bufferNotFull.wait(lock);
    }
    sample[writeIdx] = num;
    writeIdx = (writeIdx+1) % BUFFER_SIZE;
    bufferNotEmpty.notify_one();
}

void ECG::processSample()
{
    boost::mutex::scoped_lock lock(m_Mutex);
    while( readIdx == writeIdx )
    {
        bufferNotEmpty.wait(lock);
    }
    sample[readIdx] *= 2;
    readIdx = (readIdx+1) % BUFFER_SIZE;
    ++sampleIdx;
    bufferNotFull.notify_one();
}
I already included the placeholders.hpp header, but it still doesn't compile. If I replace _1 with 0 it works, but that initializes the thread function with 0, which is not what I want. Any ideas on how to make this work?
Move the creation to the initialization list:
m_ThreadWrite(boost::bind(&ECG::fillSample, this, _1)), ...
The thread object is not copyable, and your compiler doesn't support its move constructor.
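In context, the constructor would look something like this (a sketch; note also that boost::thread invokes the bound functor with no arguments, so a value for num must be supplied at bind time rather than through _1 — 0 is used below purely as a stand-in):

ECG::ECG()
    : sampleIdx(0), readIdx(0), writeIdx(0), readyFlag(false),
      m_ThreadWrite(boost::bind(&ECG::fillSample, this, 0)),  // stand-in value for num
      m_ThreadRead(boost::bind(&ECG::processSample, this))
{
}

One caveat: members are constructed in declaration order, so with the class as written the threads start before m_Mutex and the conditions exist; declaring the thread members last in the class avoids that race.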

Convert .Net ref (%) to native (&)

How can I convert a C++/CLI int %tmp to native C++ int &tmp?
void test(int %tmp)
{
    // here I need int &tmp2 for another pure C++ function call
}
Neither of the existing answers properly handles in/out parameters, let alone more advanced use cases.
This should work for all cases where other_func does not keep the reference after it returns:
void test(int %tmp)
{
    pin_ptr<int> pinned_tmp = &tmp;
    other_func(*pinned_tmp);
}
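For illustration, here is the same pattern with a hypothetical native function standing in for other_func (an assumption, not part of the original answer):

// hypothetical pure C++ function taking an in/out parameter
void other_func(int& n) { n *= 2; }

void test(int %tmp)
{
    // pinning keeps the GC from moving the referenced int
    // while native code holds a raw reference to it
    pin_ptr<int> pinned_tmp = &tmp;
    other_func(*pinned_tmp);  // the update is visible to the caller through tmp
}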
Just tried this, works fine:
//in the C++ dll
void testFunc( int& n )
{
    n = 5;
}

//in the CLI app
[DllImport( "my.dll", EntryPoint = "?exported_name_here",
    CallingConvention = CallingConvention::StdCall )]
void TestFunc( int& );

void test( int% tmp )
{
    int n;
    TestFunc( n );
    tmp = n;
}
void your_function(int *);
void your_function2(int &);

void test(int %tmp)
{
    int tmp2;
    your_function(&tmp2);
    your_function2(tmp2);
    tmp = tmp2;
}
