C# - convert any unsafe unmanaged type to a managed type without any new memory allocation - keeping the same size in memory

I want to convert a ushort array to a short array without allocating any new memory, in .NET Core.
More generally, I want to convert between any such types, e.g. byte[] to short[] or vice versa - the conversion needs to keep the same memory size for both source and destination.
In C there is the union syntax, which lets the same memory be referred to through several types - I didn't find an equivalent for C# (what I found works only for non-primitive types).
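The closest thing I found is an explicit-layout struct, which only overlays individual fields rather than whole arrays - a minimal sketch (the type and field names are just illustrative):
using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Explicit)]
struct ShortUShortUnion
{
    [FieldOffset(0)] public short AsShort;   // both fields overlay the same 16 bits
    [FieldOffset(0)] public ushort AsUShort;
}

class UnionDemo
{
    static void Main()
    {
        var u = new ShortUShortUnion { AsUShort = 0xFFFF };
        Console.WriteLine(u.AsShort); // -1: the same bits read as signed
    }
}
That works for single values, but it does not give me a short[] view over an existing ushort[].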
One way is to write code like this:
ushort[] res = new ushort[1024 * 1024 * 1024];
short[] s = new short[1024 * 1024 * 1024];
Buffer.BlockCopy(s, 0, res, 0, s.Length * 2);
... but I don't want to allocate 's' at all - it consumes too much memory and may lead to a memory leak.
Another alternative I tried uses unsafe code:
unsafe
{
    ushort[] res = new ushort[1024 * 1024 * 1024]; // large allocation
    fixed (ushort* ss = &res[0])
    {
        IntPtr ps = (IntPtr)ss;
        short[] s0 = (short[])Marshal.PtrToStructure(ps, typeof(short[]));
    }
}
This runs into an exception:
No parameterless constructor defined for this object.
...
at System.RuntimeTypeHandle.CreateInstance(RuntimeType type, Boolean publicOnly, Boolean wrapExceptions, Boolean& canBeCached, RuntimeMethodHandleInternal& ctor)
at System.RuntimeType.CreateInstanceSlow(Boolean publicOnly, Boolean wrapExceptions, Boolean skipCheckThis, Boolean fillCache)
at System.Runtime.InteropServices.Marshal.PtrToStructure(IntPtr ptr, Type structureType)
How can I eventually convert that big 1G-element array without allocating new memory?
Can I convert an array from an unmanaged type to a managed type?
I prefer using unsafe code, if at all possible.
After a deep search I found a solution, which I'm sharing here (and editing this post), but it still has a problem, described as follows:
The code works very well if both arrays have the same element size - which raises a new question: how can I solve it when the element sizes differ (e.g. short[] vs byte[])?
The only conversion I cannot do is short[] to byte[].
(Thanks to this forum thread:
https://social.msdn.microsoft.com/Forums/vstudio/en-US/06ac44b0-30d8-44a1-86a4-1716dc431c62/how-to-convert-an-intptr-to-an-object-in-c?forum=clr)
It works on .NET Core as well as on .NET Framework.
First, create a helper class:
public static class MyConverter
{
    public static unsafe T GetInstance<T>(IntPtr pointer)
    {
        try
        {
            // Build a TypedReference for a dummy local, then overwrite its
            // internal pointer so that __refvalue reads a T from the given address.
            var fakeInstance = default(T);
            TypedReference typedReference = __makeref(fakeInstance);
            *(IntPtr*)(&typedReference) = pointer;
            T instance = __refvalue(typedReference, T);
            return instance;
        }
        catch
        {
            return default(T);
        }
    }
}
and use it in code like this:
ushort[] x = new ushort[1024];
GCHandle handle1 = GCHandle.Alloc(x);
IntPtr px = (IntPtr)handle1;
x[0] = 1;
x[1] = 2;
x[2] = 3;
short[] s = MyConverter.GetInstance<short[]>(px);
handle1.Free(); // release the handle once the reinterpreted reference has been obtained
That solution converts ushort[] to short[] without wasting any memory.
Thanks.

You can easily convert like this: (ushort[])(object)myShortArray. The CLR allows this.
It's only allowed for primitive array element types that are the same size.
In other cases you can use Span<T> to view the same memory as a different element type:
Span<int> asInts = MemoryMarshal.Cast<byte, int>(asBytes);
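A minimal sketch putting both techniques together (assuming .NET Core 2.1 or later, where System.Runtime.InteropServices.MemoryMarshal is available; the array names are purely illustrative):
using System;
using System.Runtime.InteropServices;

class ReinterpretDemo
{
    static void Main()
    {
        // Same element size: the CLR accepts the cast at runtime; no copy is made.
        ushort[] u = { 1, 2, 65535 };
        short[] s = (short[])(object)u;    // s and u refer to the same array instance
        Console.WriteLine(s[2]);           // -1

        // Different element size: view the same memory through a Span instead.
        byte[] bytes = { 1, 0, 2, 0 };     // little-endian 1 and 2
        Span<short> shorts = MemoryMarshal.Cast<byte, short>(bytes);
        Console.WriteLine(shorts[1]);      // 2

        // The reverse direction (short[] viewed as bytes) works the same way.
        short[] src = { 256, 512 };
        Span<byte> asBytes = MemoryMarshal.Cast<short, byte>(src);
        Console.WriteLine(asBytes.Length); // 4
    }
}
Note that the Span views alias the original arrays, so writes through shorts or asBytes are visible in the source arrays - no memory is copied.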

Related

Returning string from a remote server using rpcgen

I am going through an RPC tutorial and learning a few techniques in rpcgen. I understand how to add and multiply different data types using rpcgen.
But I have not found any clue about how to declare a function in the .x file that returns a string. I am actually trying to build a procedure that returns a random string (the string array lives on the server).
Can anyone advise me how to proceed with this issue? Pointing me to any tutorial covering this returning-string/pointer issue would also be helpful.
Thank you in advance.
OK, answering the original question (more than 2 years old): the first answer is correct but a little tricky.
In your .x file, you define your structure with the string inside, having previously defined the size of the string:
typedef string str_t<255>;

struct my_result {
    str_t data;
};
...
Then you invoke rpcgen on your .x file to generate client and server stubs and .xdr file:
$ rpcgen -N file.x
Now you can compile the client and the server, in addition to any program where you intend to use the remote functions. To do so, I followed the "rpcgen Tutorial" on Oracle's web page:
https://docs.oracle.com/cd/E19683-01/816-1435/rpcgenpguide-21470/index.html
The tricky part is that, although you defined a string of size m (an array of m characters), what rpcgen and the .xdr file create is just a char pointer. Something like this:
.h file:
typedef char *str_t;

struct my_result {
    int res;
    str_t data;
};
typedef struct my_result my_result;
.xdr file:
bool_t xdr_str_t (XDR *xdrs, str_t *objp)
{
    register int32_t *buf;

    if (!xdr_string (xdrs, objp, 255))
        return FALSE;
    return TRUE;
}
So just take into account, when using this structure on the server side, that it is not a string of size m but a char pointer for which you'll have to reserve memory before using it, or you'll hit the same error I got at execution time:
Segmentation fault!
To use it on the server you can write:
static my_result response;
static char text[255];
memset(&response, '\0', sizeof(my_result));
memset(text, '\0', sizeof(text));
response.data = text;
And from there you are ready to use it wisely! :)
According to the XDR protocol specification you can define a string type where m is the length of the string in bytes:
The standard defines a string of n (numbered 0 to n -1) bytes to be the number n encoded as an unsigned integer (as described above), and followed by the n bytes of the string. Each byte must be regarded by the implementation as being 8-bit transparent data. This allows use of arbitrary character set encodings. Byte m of the string always precedes byte m +1 of the string, and byte 0 of the string always follows the string's length. If n is not a multiple of four, then the n bytes are followed by enough (0 to 3) residual zero bytes, r, to make the total byte count a multiple of four.
string object<m>;
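To illustrate the quoted wire format, here is a small standalone sketch (written in C# purely as an illustration - it is not rpcgen output) that encodes a string exactly as described: a 4-byte length, the bytes themselves, then zero padding up to a multiple of four:
using System;
using System.Text;

static class XdrStringDemo
{
    // XDR string encoding: 4-byte big-endian length n, the n bytes,
    // then 0-3 residual zero bytes so the total length is a multiple of four.
    static byte[] EncodeXdrString(string s)
    {
        byte[] data = Encoding.ASCII.GetBytes(s);
        int padded = (data.Length + 3) & ~3;          // round up to a multiple of 4
        byte[] result = new byte[4 + padded];
        result[0] = (byte)(data.Length >> 24);        // length, big-endian
        result[1] = (byte)(data.Length >> 16);
        result[2] = (byte)(data.Length >> 8);
        result[3] = (byte)(data.Length);
        Array.Copy(data, 0, result, 4, data.Length);  // residual bytes stay zero
        return result;
    }

    static void Main()
    {
        // "hello": 4-byte length (5) + 5 bytes + 3 zero bytes of padding = 12 bytes
        Console.WriteLine(EncodeXdrString("hello").Length); // 12
    }
}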
You can then define a struct with the string type str_t as one of the variables:
typedef string str_t<255>;

struct my_result {
    str_t data;
};
Then in your .x file you can define an RPC in your program that returns a struct of type my_result. Since rpcgen will give you a pointer to this struct (which I have called res), you can print the message with printf("%s\n", res->data);.
program HELLO_PROG {
    version HELLO_VERSION {
        my_result abc() = 1;
    } = 1;
} = 1000;

C++/CX: Why doesn't returning a StringReference work like passing one as an argument?

Platform::StringReference exists so that you can pass a const wchar_t* across the ABI boundary to a function accepting a String^ without making a copy. The StringReference implicitly converts to a String^ whose internal pointer matches the original const wchar_t*. This is verified by the following code; if you step through it you find that pz == z:
void param(String^ s)
{
    const wchar_t* z = s->Data();
}

App::App()
{
    std::wstring p = L"abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz";
    const wchar_t* pz = p.c_str();
    param(StringReference(pz));
}
However, trying to return a StringReference doesn't seem to work the same way and I'm curious why. If I have a function that returns String^ and I return a StringReference from it then the same implicit conversion operator is called, but when the caller gets their String^ it has a different internal data pointer that contains a copy. Here's some code that tries it:
String^ ret()
{
    std::wstring s = L"12345678901234567890123456789012345678901234567890";
    const wchar_t* z = s.c_str();
    return StringReference(z);
}

App::App()
{
    String^ r = ret();
    const wchar_t* rz = r->Data();
}
That code verifies in two ways: first, if you step through you'll find that z != rz and second, r ends up pointing to a valid string rather than garbage, so a copy must have been made because the original string is freed at the end of ret.
I also tried returning via out parameter, but I get the same results as a straight return (z != oz and o ends up with a valid string):
void out(String^* r)
{
    std::wstring s = L"12345678901234567890123456789012345678901234567890";
    const wchar_t* z = s.c_str();
    *r = StringReference(z);
}

App::App()
{
    String^ o;
    out(&o);
    const wchar_t* oz = o->Data();
}
Is there a way to return a StringReference across the ABI boundary in the same way that you can pass one? I imagine the behavior would depend on the language of the caller and how that language marshals strings from WinRT, but it seems like at least a C++/CX caller ought to be able to do it.
No you can't return a StringReference across the ABI boundary. Returning a StringReference across the ABI boundary is similar (but not identical) to returning the address of a local variable. That's because the whole point of a StringReference is that the StringReference doesn't allocate any new memory.
Consider what would happen if you could return a StringReference across the ABI boundary. What would happen if you had:
String^ ReturnAString()
{
    const wchar_t buffer[500] = L"MyString";
    return StringReference(buffer);
}
The StringReference is just a wrapper around the stack allocated buffer. And clearly you can't return that across the ABI boundary (the stack storage is reclaimed as soon as the routine exits).
Instead you need to return a real Platform::String - a Platform::String contains a copy of the string data and thus it can safely be returned to the caller.

JNA - Use structure array as byref argument

I know parts of this issue are covered by some posts here, and I have looked at them and tested a few, but with no luck.
I have this native method signature which should populate the provided CBadgeData structure array with results:
int elc_GetBadges(int nHandle, char* cErr, int* nRecCount, CBadgeData** arr)
The CBadgeData structure is implemented as follows:
package test.elcprog;

import java.util.Arrays;
import java.util.List;

import com.sun.jna.Pointer;
import com.sun.jna.Structure;

public class CBadgeData extends Structure {
    public static class ByReference extends CBadgeData implements Structure.ByReference { }

    public int nBadgeID, nTrigger, nExtraData;
    public String cName;

    public CBadgeData(Pointer pointer) {
        super(pointer);
    }

    public CBadgeData() { }

    public String ToString() {
        return nBadgeID + "," + nTrigger + "," + nExtraData + "," + cName;
    }

    @Override
    protected List getFieldOrder() {
        String[] s = new String[]{"nBadgeID", "nTrigger", "nExtraData", "cName"};
        return Arrays.asList(s);
    }
}
My last attempt to craft this argument and call the method looked like this:
CBadgeData.ByReference[] badges = new CBadgeData.ByReference[max_items];
new CBadgeData.ByReference().toArray(badges);
int ret = inst.elc_GetBadges(handle, err, recCount, badges);
It fails with a segmentation error.
My question is: what Java type should be provided here as the argument for the native CBadgeData** in the call to elc_GetBadges?
EDIT -1-
Populating the array myself (with or without a terminating null pointer) didn't work and caused further segfaults. I then used a Pointer[] argument, as technomage suggested:
Pointer[] pointers = new Pointer[max_items];
for (int i = 0; i < max_items; i++) {
    pointers[i] = new CBadgeData.ByReference().getPointer();
}
int ret = inst.elc_GetBadges(handle, err, recCount, pointers);
This caused no error, but it seems not to make any changes to the returned structs, which should have contained 4 items in this case:
int bid = new CBadgeData(pointers[i]).nBadgeID; // this returns null for all items
Using explicit read() / write() on the struct led to segfaults again (on the read).
Any idea what I am still missing here?
EDIT -2-
Interestingly enough, using the Memory getters directly after calling the native method returns the correct results:
Memory m= (Memory)pointers[0];
System.out.println("1st int: "+m.getInt(0)); // this gets 24289 which is 5ee1
System.out.println("2nd int: "+m.getInt(4)); // this gets 3
System.out.println("3rd int: "+m.getInt(8)); // this gets 255
System.out.println("String: "+m.getString(12)); // this gets "Badge[5EE1]" as supposed
But the read() still crashes. Any thoughts?
I'm inferring that the CBadgeData** input is intended to be an array of pointers to CBadgeData.
As such, the Structure.ByReference tagging is correct.
Structure.toArray() is probably not appropriate here, or at least not necessary (it allocates a contiguous block of structs in memory). You can just populate your array with CBadgeData.ByReference instances.
Perhaps your callee is expecting a NULL pointer at the end of the array? I don't see another indicator of the array length to the callee.
CBadgeData.ByReference[] badges = new CBadgeData.ByReference[max_items + 1];
for (int i = 0; i < badges.length - 1; i++) {
    badges[i] = new CBadgeData.ByReference();
}
badges[badges.length - 1] = null;
Pretty sure that works. If for whatever reason there's a bug handling Structure.ByReference[], I know that Pointer[] is reliable and will do the same thing.
EDIT
If you use Pointer[] instead of Structure.ByReference[] (please post a bug to the project site if Structure.ByReference[] does not work), you will have to manually call Structure.write/read before/after your native function call, since JNA will not know that the pointers reference structures that need to be synched with native memory. I'd bet, however, that the cause of your crashes when using Structure.ByReference[] was simply that JNA was automatically calling Structure.read() after the call and triggered the same error that you see when calling it explicitly.
If you get a segfault on read, it likely means that your structure fields aren't properly aligned or defined, or (less likely) that you have corrupt data that can't be read properly. To diagnose this, set jna.dump_memory=true and print out your struct after calling Structure.write() to see if the contents of the structure appear as you'd expect. It'd also help to post the native and JNA forms of your structure here, if possible.

Memory release while reassign char * to null

I'm a little bit confused about string memory usage in C++.
Is it good to reassign *PChar to NULL a second time? Will the string memory assigned to *PChar the first time be released?
char * fnc(int g)
{
    ...
}

char *PChar = NULL;

PChar = fnc(1);
if (PChar) { sprintf(s, "%s", PChar); }
*PChar = NULL;

PChar = fnc(2);
if (PChar) { sprintf(s, "%s", PChar); }
First things first. The following statement is not what you intend:
*PChar = NULL;
PChar=fnc(2);
You are NOT assigning null to the pointer; you are writing the value zero (0) into the first character of the buffer. You probably meant to write:
PChar = NULL;
PChar=fnc(2);
As good programming practice, yes, you should set a pointer to null after it is used (AND, possibly, its memory deallocated). But assigning null to a pointer will not free the memory - the pointer will simply no longer point to the allocated memory. You need to call delete if the memory was allocated using new, or free if it was allocated by malloc.
As for the given code, the compiler would likely remove the null assignment during optimization anyway, since it is immediately overwritten:
// PChar = NULL;
PChar=fnc(2);
You need to be very careful when using pointers and assigning them to statically or dynamically allocated buffers!
I would suggest declaring a buffer of the PChar type and passing a pointer to this buffer in the function call.
Good programming practice also calls for passing the allowed length of the buffer, which should be checked in the function.
#define MAX_PCHAR_LEN 1024 // or constant const DWORD . . .
PChar PCharbuf[MAX_PCHAR_LEN] = {0}; // initialize array with 0s
//make a call
fnc (&PCharbuf, MAX_PCHAR_LEN, 2); // whatever 2 means
This way you do not have to worry about who allocates and who releases memory, since the release is automatic when PCharbuf goes out of scope.

MFC multithreading with delete[] , dbgheap.c

I've got a strange problem and really don't understand what's going on.
I made my application multi-threaded using the MFC multithreading classes.
Everything works well so far, but now:
Somewhere in the beginning of the code I create the threads:
m_bucketCreator = new BucketCreator(128, 128, 32);
CEvent* updateEvent = new CEvent(FALSE, FALSE);
CWinThread** threads = new CWinThread*[numThreads];
for (int i = 0; i < 8; i++) {
    threads[i] = AfxBeginThread(&MyClass::threadfunction, updateEvent);
    m_activeRenderThreads++;
}
this creates 8 threads working on this function:
UINT MyClass::threadfunction(LPVOID params) // executed in new thread
{
    Bucket* bucket = m_bucketCreator.getNextBucket();
    // ...do something with bucket...
    delete bucket;
    return 0;
}
m_bucketCreator is a static member. Now I get an error in the destructor of Bucket on the attempt to delete a buffer (however, the way I understand it, this buffer belongs to this thread's memory, so I don't get why there is an error). On the call to delete[] buffer, the error happens in _CrtIsValidHeapPointer() in dbgheap.c.
Visual Studio reports that it has triggered a breakpoint, and that this can be due either to heap corruption or to the user pressing F12 (I didn't ;) ).
class BucketCreator {
public:
    BucketCreator();
    ~BucketCreator(void);

    void init(int resX, int resY, int bucketSize);

    Bucket* getNextBucket() {
        Bucket* bucket = NULL;
        // enter critical section
        CSingleLock singleLock(&m_criticalSection);
        singleLock.Lock();
        int height = min(m_resolutionY - m_nextY, m_bucketSize);
        int width = min(m_resolutionX - m_nextX, m_bucketSize);
        bucket = new Bucket(width, height);
        // leave critical section
        singleLock.Unlock();
        return bucket;
    }

private:
    int m_resolutionX;
    int m_resolutionY;
    int m_bucketSize;
    int m_nextX;
    int m_nextY;
    // multithreading:
    CCriticalSection m_criticalSection;
};
and class Bucket:
class Bucket : public CObject {
    DECLARE_DYNAMIC(RenderBucket)
public:
    Bucket(int a_resX, int a_resY) {
        resX = a_resX;
        resY = a_resY;
        buffer = new float[3 * resX * resY];
        int buffersize = 3 * resX * resY;
        for (int i = 0; i < buffersize; i++) {
            buffer[i] = 0;
        }
    }

    ~Bucket(void) {
        delete[] buffer;
        buffer = NULL;
    }

    int getResX() { return resX; }
    int getResY() { return resY; }
    float* getBuffer() { return buffer; }

private:
    int resX;
    int resY;
    float* buffer;

    Bucket& operator=(const Bucket& other) { /*..*/ }
    Bucket(const Bucket& other) { /*..*/ }
};
Can anyone tell me what could be the problem here?
edit: this is the other static function I'm calling from the threads. Is this safe to do?
static std::vector<Vector3> generate_poisson(double width, double height, double min_dist, int k, std::vector<std::vector<Vector3> > existingPoints)
{
    CSingleLock singleLock(&m_criticalSection);
    singleLock.Lock();
    std::vector<Vector3> samplePoints = std::vector<Vector3>();
    // ...fill the vector...
    singleLock.Unlock();
    return samplePoints;
}
All the previous replies are sound. For the copy constructor, make sure it doesn't just copy the buffer pointer; it needs to allocate a new buffer, because copying only the pointer value would cause an error in 'delete'. But I don't get the impression that the copy constructor is called in your code.
I've looked at the code and I'm not seeing any error in it as is. Note that the thread synchronization isn't even necessary in this getNextBucket code, since it returns a local variable and those are per-thread.
Errors in _CrtIsValidHeapPointer occur because something has corrupted the heap, which happens when code writes past the end of an allocated block. Often it's a for() loop that goes one step too far, a buffer that wasn't allocated large enough, etc.
The error is reported during a call to 'delete' because that's when the heap is validated for bugs in debug mode. However, the error has occurred before that time, it just happens that the heap is checked only in 'new' and 'delete'. Also, it isn't necessarily related to the 'Bucket' class.
What you need to do to find this bug, short of using tools like BoundsChecker or HeapValidator, is comment out sections of your code until the error goes away; then you'll have found the offending code.
There is another method to narrow down the problem. In debug mode, include <crtdbg.h> in your code and sprinkle calls to _CrtCheckMemory() at various points of interest. That will raise the error as soon as the heap is corrupted. Simply move the calls around in your code to narrow down the point at which the corruption begins to occur.
I don't know which version of Visual C++ you are using. If you're using an earlier one, like VC++ 6.0, make sure that you are using the Multithreaded DLL version of the C run-time library in the compiler options.
You're constructing a RenderBucket. Are you sure you're calling the 'Bucket' class's constructor from there? It should look like this:
class RenderBucket : public Bucket {
    RenderBucket(int a_resX, int a_resY)
        : Bucket(a_resX, a_resY)
    {
    }
};
Initializers in the Bucket class to set the buffer to NULL are a good idea... Also, making the default constructor and copy constructor private will help make doubly sure those aren't being used. Remember: the compiler will create these automatically if you don't:
Bucket(); <-- default constructor
Bucket( int a_resx = 0, int a_resy = 0 ) <-- Another way to make your default constructor
Bucket(const class Bucket &B) <-- copy constructor
You haven't made a private copy constructor, or any default constructor. If class Bucket is constructed via one of these implicitly-defined methods, buffer will either be uninitialized, or it will be a copied pointer made by a copy constructor.
The copy constructor for class Bucket is Bucket(const Bucket &B) -- if you do not explicitly declare a copy constructor, the compiler will generate a "naive" copy constructor for you.
In particular, if this object is assigned, returned, or otherwise copied, the copy constructor will copy the pointer to a new object. Eventually, both objects' destructors will attempt to delete[] the same pointer and the second attempt will be a double deletion, a type of heap corruption.
I recommend you make class Bucket's copy constructor private, which will cause attempted copy construction to generate a compile error. As an alternative, you could implement a copy constructor which allocates new space for the copied buffer.
Exactly the same applies to the assignment operator, operator=.
The need for a correct copy constructor is one of the 55 tips in Scott Meyers's excellent book, Effective C++: 55 Specific Ways to Improve Your Programs and Designs. This book should be required reading for all C++ programmers.
If you add:
class Bucket {
    /* Existing code as-is ... */
private:
    Bucket() { buffer = NULL; }               // No default construction
    Bucket(const Bucket &B) { ; }             // No copy construction
    Bucket& operator= (const Bucket &B) { ; } // No assignment
};
and re-compile, you are likely to find your problem.
There is also another possibility: If your code contains other uses of new and delete, then it is possible these other uses of allocated memory are corrupting the linked-list structure which defines the heap memory. It is common to detect this corruption during a call to delete, because delete must utilize these data structures.
