Dynamic array allocation of a record in Ada - free

I am trying to dynamically allocate a large array in Ada (well, an array of an array).
For instance, I'm able to dynamically allocate an object like so:
type Object;
type ObjPtr is access Object;
OP : ObjPtr;
-- sometime later
OP := new Object;
OP.Index := I;--OP.Ptr.all;
Free(OP);
I'm trying to emulate this benchmark code:
Object **objList = new Object*[500000];
int32_t *iList = new int32_t[500000];
for (int32_t i = 0; i < 500000; ++i)
{
    objList[i] = new Object;
    iList[i] = Object::getIndex(objList[i]);
    delete objList[i];
}
delete[] iList;
delete[] objList;
Sadly, I'm unable to even do something like this c++ equivalent:
Object **objList = new Object*[500000];
I came up with this much so far:
type objs is array (Positive range <>) of Object;
type objList is access objs;
But I'm probably way off.

In Ada your C++ code would translate roughly to the following:
Alloc_Count : constant := 500_000;
type ObjPtr is access Object;
type ObjArray is array (1 .. Alloc_Count) of ObjPtr;
OA : ObjArray;
begin
   for I in OA'Range loop
      OA(I) := new Object;
      -- ... do the other things
   end loop;
If you want to use dispatching operations with your objects (i.e. Object is declared as a tagged type), use Object'Class instead of Object in the ObjPtr declaration.
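Note that objects allocated with new are not freed automatically; to mirror the delete / delete[] calls in the benchmark you instantiate Ada.Unchecked_Deallocation once per access type. Below is a rough, self-contained sketch. It assumes Object is a plain record with an Index component (as in your first snippet); the heap-allocated arrays correspond to the C++ new[] expressions and use access-to-unconstrained-array types, much like your objs/objList attempt. All other names are illustrative.
with Ada.Unchecked_Deallocation;

procedure Bench is
   Alloc_Count : constant := 500_000;

   type Object is record
      Index : Integer := 0;
   end record;

   type ObjPtr is access Object;

   -- Heap-allocated arrays, like the C++ "new Object*[...]" and "new int32_t[...]"
   type ObjArray is array (Positive range <>) of ObjPtr;
   type ObjArrayPtr is access ObjArray;
   type IntArray is array (Positive range <>) of Integer;
   type IntArrayPtr is access IntArray;

   procedure Free is new Ada.Unchecked_Deallocation (Object, ObjPtr);
   procedure Free is new Ada.Unchecked_Deallocation (ObjArray, ObjArrayPtr);
   procedure Free is new Ada.Unchecked_Deallocation (IntArray, IntArrayPtr);

   OA : ObjArrayPtr := new ObjArray (1 .. Alloc_Count);
   IL : IntArrayPtr := new IntArray (1 .. Alloc_Count);
begin
   for I in OA'Range loop
      OA(I) := new Object;
      IL(I) := OA(I).Index;
      Free (OA(I));  -- frees the record and sets OA(I) back to null
   end loop;
   Free (IL);
   Free (OA);
end Bench;
Whether the array of pointers itself lives on the stack (as in the answer above) or on the heap is up to you; for 500_000 elements the heap avoids any default stack-size limits.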

Related

Object or void* equivalent in Ada

I'm trying to write a version of my C program in Ada. My C function call looks like this:
void convert(const void* in, void* out){
    MyType* convertedIn = (MyType*)in;
    MyType* convertedOut = (MyType*)out;
    // Assignments and operations to translate values across
    // Example:
    convertedOut->meters = convertedIn->feet * 0.3048;
}
After searching, I was unable to find anything out there about type casting or any form of Object class or void pointer object for Ada. How would I implement a function like this in Ada?
If I can't implement the function in Ada, how would I wrap the C function with Ada?
I'm using Ada 95.
With Ada 95 you can use a tagged type and class-wide parameters where the C code would use void*:
type Example is tagged null record;

procedure Convert (From : in  Example'Class;
                   To   : out Example'Class) is
begin
   null; -- Implement conversion here
end Convert;
I managed to get what I needed using System.Address and Ada.Unchecked_Conversion. Below is my code:
with System;
with Ada.Unchecked_Conversion;
with MyPackage; use MyPackage;

procedure Convert (From : in System.Address;
                   To   : in System.Address) is
   -- Both addresses are only read here; we write through them, not to them,
   -- so both parameters are mode "in".
   type MyTypePtr is access MyType;
   function ToMyTypePtr is new Ada.Unchecked_Conversion
     (Source => System.Address, Target => MyTypePtr);
   In_Ptr  : constant MyTypePtr := ToMyTypePtr (From);
   Out_Ptr : constant MyTypePtr := ToMyTypePtr (To);
begin
   -- Mirrors the C example: convertedOut->meters = convertedIn->feet * 0.3048;
   Out_Ptr.Meters := In_Ptr.Feet * 0.3048;
end Convert;
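A possible call site, just to show how the 'Address attributes stand in for the C void* arguments (the variable names and the value are made up, and MyType is assumed to have Feet and Meters components as in the C version):
declare
   Imperial : MyType;
   Metric   : MyType;
begin
   Imperial.Feet := 100.0;
   Convert (Imperial'Address, Metric'Address);
   -- Metric.Meters is now 30.48
end;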

Can a zero-length and zero-cap slice still point to an underlying array and prevent garbage collection?

Let's take the following scenario:
a := make([]int, 10000)
a = a[len(a):]
As we know from "Go Slices: Usage and Internals" there's a "possible gotcha" in downslicing. For any slice a if you do a[start:end] it still points to the original memory, so if you don't copy, a small downslice could potentially keep a very large array in memory for a long time.
However, this case is chosen to result in a slice that should not only have zero length, but zero capacity. A similar question could be asked for the construct a = a[0:0:0].
Does the current implementation still maintain a pointer to the underlying memory, preventing it from being garbage collected, or does it recognize that a slice with no len or cap could not possibly reference anything, and thus garbage collect the original backing array during the next GC pause (assuming no other references exist)?
Edit: Playing with reflect and unsafe on the Playground reveals that the pointer is non-zero:
func main() {
    a := make([]int, 10000)
    a = a[len(a):]
    aHeader := *(*reflect.SliceHeader)((unsafe.Pointer(&a)))
    fmt.Println(aHeader.Data)
    a = make([]int, 0, 0)
    aHeader = *(*reflect.SliceHeader)((unsafe.Pointer(&a)))
    fmt.Println(aHeader.Data)
}
http://play.golang.org/p/L0tuzN4ULn
However, this doesn't necessarily answer the question because the second slice that NEVER had anything in it also has a non-zero pointer as the data field. Even so, the pointer could simply be uintptr(&a[len(a)-1]) + sizeof(int) which would be outside the block of backing memory and thus not trigger actual garbage collection, though this seems unlikely since that would prevent garbage collection of other things. The non-zero value could also conceivably just be Playground weirdness.
As seen in your example, re-slicing copies the slice header, including the data pointer, to the new slice, so I put together a small test to try to force the runtime to reuse the memory if possible.
I'd like this to be more deterministic, but at least with go1.3 on x86_64, it shows that the memory used by the original array is eventually reused (it does not work in the playground in this form).
package main

import (
    "fmt"
    "unsafe"
)

func check(i uintptr) {
    fmt.Printf("Value at %d: %d\n", i, *(*int64)(unsafe.Pointer(i)))
}

func garbage() string {
    s := ""
    for i := 0; i < 100000; i++ {
        s += "x"
    }
    return s
}

func main() {
    s := make([]int64, 100000)
    s[0] = 42
    p := uintptr(unsafe.Pointer(&s[0]))
    check(p)
    z := s[0:0:0]
    s = nil
    fmt.Println(z)
    garbage()
    check(p)
}

C++/CX: Why doesn't returning a StringReference work like passing one as an argument?

Platform::StringReference exists so that you can pass a const wchar_t* across the ABI boundary to a function accepting a String^ without making a copy. The StringReference implicitly converts to a String^ whose internal pointer matches the original const wchar_t*. This is verified by the following code; if you step through it you find that pz == z:
void param(String^ s)
{
    const wchar_t* z = s->Data();
}

App::App()
{
    std::wstring p = L"abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz";
    const wchar_t* pz = p.c_str();
    param(StringReference(pz));
}
However, trying to return a StringReference doesn't seem to work the same way, and I'm curious why. If I have a function that returns String^ and I return a StringReference from it, the same implicit conversion operator is called, but when the caller gets their String^ it has a different internal data pointer that points to a copy. Here's some code that tries it:
String^ ret()
{
    std::wstring s = L"12345678901234567890123456789012345678901234567890";
    const wchar_t* z = s.c_str();
    return StringReference(z);
}

App::App()
{
    String^ r = ret();
    const wchar_t* rz = r->Data();
}
That code demonstrates this in two ways: first, if you step through, you'll find that z != rz; second, r ends up pointing to a valid string rather than garbage, so a copy must have been made, because the original string is freed at the end of ret.
I also tried returning via out parameter, but I get the same results as a straight return (z != oz and o ends up with a valid string):
void out(String^* r)
{
    std::wstring s = L"12345678901234567890123456789012345678901234567890";
    const wchar_t* z = s.c_str();
    *r = StringReference(z);
}

App::App()
{
    String^ o;
    out(&o);
    const wchar_t* oz = o->Data();
}
Is there a way to return a StringReference across the ABI boundary in the same way that you can pass one? I imagine the behavior would depend on the language of the caller and how that language marshals strings from WinRT, but it seems like at least a C++/CX caller ought to be able to do it.
No, you can't return a StringReference across the ABI boundary. Returning a StringReference across the ABI boundary is similar (but not identical) to returning the address of a local variable. That's because the whole point of a StringReference is that it doesn't allocate any new memory.
Consider what would happen if you could return a StringReference across the ABI boundary. For example, what if you had:
String^ ReturnAString()
{
    const wchar_t buffer[500] = L"MyString";
    return StringReference(buffer);  // wraps stack memory that is about to go away
}
The StringReference is just a wrapper around the stack-allocated buffer, and clearly you can't return that across the ABI boundary (the stack storage is reclaimed as soon as the routine exits).
Instead you need to return a real Platform::String: a Platform::String contains a copy of the string data, so it can safely be returned to the caller.
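For illustration, a sketch of ret rewritten to return a real String^; the copy happens up front inside ref new String, so nothing dangles when the local wstring goes away:
String^ ret()
{
    std::wstring s = L"12345678901234567890123456789012345678901234567890";
    // ref new String copies the characters into a reference-counted
    // Platform::String, so the returned handle outlives the local wstring.
    return ref new String(s.c_str());
}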

How to handle the generic type Object with protocol buffers, in the .proto file?

I've spent some time looking for some alternative to handle generic objects; I've seen questions similar to mine, but not as specific, I suppose.
Protocol Buffers has multiple scalar types that I can use; however, they are mostly primitive.
I want my message to be flexible and be able to have a field that is a List of some sort.
Let's say my .proto file looked like this:
message SomeMessage
{
    string datetime = 1;

    message inputData // This would be a list
    {
        repeated Object object = 1;
    }

    message Object
    {
        ? // this needs to be of a generic type - this is my question
        // My workaround - using extensions with some Object:
        // list all primitive scalar types as optional and create an extension 100 to max;
    }

    message someObject // some random entity - for example, employee/company etc.
    {
        optional string name = 1;
        optional int32 id = 2;
    }

    extend Object
    {
        optional someObject obj = 101;
    }
}
And this would be fine and would work, and I'd have a List where Objects could be of any primitive type or could be a List<someObject>.
However, the problem here is that any time I needed to handle a new type of object, I'd need to edit my .proto file and recompile for C# and Java (the languages I need it for)...
If protocol buffers is not able to handle generic object types, is there another alternative that can?
Any help on this matter is greatly appreciated.
As Marc Gravell stated above, Protocol Buffers does not handle generics or inheritance.
Though I am late, just for the sake of a new audience: you can use bytes in place of the object, and that can be any object which you can serialize/deserialize.
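A minimal sketch of that bytes approach; the Wrapped message and its field names are made up for illustration, and the Python calls are the same SerializeToString/ParseFromString ones used in the example further down:
// proto2 - hypothetical wrapper carrying an arbitrary serialized message
message Wrapped {
    required string type_name = 1;  // tells the receiver which parser to use
    required bytes payload = 2;     // serialized bytes of the inner message
}

# Python - sending side
inner = someObject(name='ACME', id=42)
wrapper = Wrapped(type_name='someObject', payload=inner.SerializeToString())
data = wrapper.SerializeToString()

# Python - receiving side
wrapper = Wrapped()
wrapper.ParseFromString(data)
if wrapper.type_name == 'someObject':
    obj = someObject()
    obj.ParseFromString(wrapper.payload)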
It IS possible to achieve generic message functionality, but adding new types will still require rebuilding the proto classes.
You use a wrapper class:
message Wrapper {
    extensions 1000 to max;
    required uint32 type = 1;
}
Then add some types:
message Foo {
    extend Wrapper {
        optional Foo item = 1000;
    }
    optional int32 attr1_of_foo = 1;
    optional int32 attr2_of_foo = 2;
    optional int32 attr3_of_foo = 3;
}

message Bar {
    extend Wrapper {
        optional Bar item = 1001;
    }
    optional int32 attr1_of_bar = 1;
    optional int32 attr2_of_bar = 2;
    optional int32 attr3_of_bar = 3;
}
Note how we extend the Wrapper class inside each class that we want the Wrapper to store, using an extension.
Now, an example of creating a wrapped Foo object. I'm using Python, since it's the most condensed form; other languages can do the same.
wrapper = Wrapper()
wrapper.type = Foo.ITEM_FIELD_NUMBER
foo = wrapper.Extensions[Foo.item]
foo.attr1_of_foo = 1
foo.attr2_of_foo = 2
foo.attr3_of_foo = 3
data = wrapper.SerializeToString()
And an example of deserializing:
wrapper = Wrapper()
wrapper.ParseFromString(data)
if wrapper.type == Foo.ITEM_FIELD_NUMBER:
    foo = wrapper.Extensions[Foo.item]
elif wrapper.type == Bar.ITEM_FIELD_NUMBER:
    bar = wrapper.Extensions[Bar.item]
else:
    raise Exception('Unrecognized wrapped type: %s' % wrapper.type)
Now, because you want a generic collection, make Wrapper a repeated field of another message and voilà.
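For instance (WrapperList is an arbitrary name):
message WrapperList {
    repeated Wrapper items = 1;  // each item wraps one Foo, Bar, ...
}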
Of course it's not a complete solution; this architecture will need some more packaging to make it easy to use. For more information, read about Protobuf extensions, especially nested ones (https://developers.google.com/protocol-buffers/docs/proto#nested), or google about item marshalling.
Here is the protobuf 3 definition of Struct, which basically uses oneof to define such a "generic" message type.
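For reference, the relevant part of google/protobuf/struct.proto looks roughly like this (abridged; NullValue is an enum with the single value NULL_VALUE):
message Struct {
    map<string, Value> fields = 1;
}

message Value {
    oneof kind {
        NullValue null_value = 1;
        double number_value = 2;
        string string_value = 3;
        bool bool_value = 4;
        Struct struct_value = 5;
        ListValue list_value = 6;
    }
}

message ListValue {
    repeated Value values = 1;
}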

Why does C++ allow constant references to be initialized by a numeric value?

Why does this work in C++?
const int& a = 5;
A reference is an alias. Ideally, a reference declaration should not result in allocation of memory to any variable. However, try this:
cout<<&a<<endl;
You will get a memory address!
Instead, the following will do the same thing:
const int a = 5;
while being more elegant.
Again, what is the use of such a statement as
const int& a = 5;
and why is it allowed in C++?
