Should packets generated from a SystemVerilog generator module be destroyed to save simulation time?

Every time I call generate_directed_packet, a new object is created. Should I worry about deleting the packet object before creating the next one? If so, how should I go about deleting it?
function void generate_directed_packet();
  packet = new();
  void'(packet.randomize());
endfunction : generate_directed_packet

SystemVerilog has automatic memory management. That means an object is kept around only as long as there is a class variable containing a handle to it. The simulator "deletes" the object once there are no more class variables holding a handle to it. The "deletes" is in quotes because you have no knowledge of when the object actually gets deleted; more likely, the simulator keeps the space around and reclaims it on a later new() of the same class.
If you are using the UVM, the typical case is you are generating packets in a sequence, and sending them to a driver. What you are really doing is creating a handle to a new object in the sequence, and then copying the handle from variables in the sequence to variables in the driver.
As you copy the handle from one variable to another, you are erasing the reference to the older object. So as long as you are not putting handles to your objects into a data structure that grows as you add more object handles to it, the space from the old objects gets reclaimed.
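As a minimal sketch of that handle-overwriting pattern (the packet class here is a stand-in for the question's packet, not the original code):

class packet;
  rand bit [7:0] payload;
endclass

module tb;
  packet p;   // a single class variable, reused for every packet
  initial begin
    repeat (3) begin
      p = new();              // the previous object loses its last handle here...
      void'(p.randomize());   // ...and its space becomes eligible for reclamation
      $display("payload = %0d", p.payload);
    end
  end
endmodule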

When is variable cleanup necessary in Excel VBA?

I've been teaching myself Excel VBA over the last two years, and I have the idea that it is sometimes appropriate to dispose of variables at the end of a code segment. For example, I've seen it done in this bit adapted from Ron de Bruin's code for transferring Excel to HTML:
Function SaveContentToHTML(Rng As Range)
    Dim FileForHTMLStorage As Object
    Dim TextStreamOfHTML As Object
    Dim TemporaryFileLocation As String
    Dim TemporaryWorkbook As Workbook
    ...
    TemporaryWorkbook.Close savechanges:=False
    Kill TemporaryFileLocation
    Set TextStreamOfHTML = Nothing
    Set FileForHTMLStorage = Nothing
    Set TemporaryWorkbook = Nothing
End Function
I've done some searching on this and found very little beyond how to do it, and in one forum post a statement that no local variables need to be cleared, since they cease to exist at End Sub. I'm guessing, based on the code above, that may not be true at End Function, or in other circumstances I haven't encountered.
So my question boils down to this:
Is there somewhere on the web that explains the when and why for variable cleanup, and I just have not found it?
And if not, can someone here please explain...
When is variable cleanup necessary for Excel VBA, and when is it not?
And more specifically: are there specific variable uses (public variables? function-defined variables?) that remain loaded in memory for longer than subs do, and therefore could cause trouble if I don't clean up after myself?
VB6/VBA uses a deterministic approach to destroying objects. Each object stores the number of references to itself; when that number reaches zero, the object is destroyed.
Object variables are guaranteed to be cleaned (set to Nothing) when they go out of scope, which decrements the reference counters in their respective objects. No manual action is required.
There are only two cases when you want an explicit cleanup:
When you want an object to be destroyed before its variable goes out of scope (e.g., your procedure is going to take long time to execute, and the object holds a resource, so you want to destroy the object as soon as possible to release the resource).
When you have a circular reference between two or more objects.
If objectA stores a reference to objectB, and objectB stores a reference to objectA, the two objects will never get destroyed unless you break the chain by explicitly setting objectA.ReferenceToB = Nothing or objectB.ReferenceToA = Nothing.
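A minimal sketch of case 2, assuming two class modules named ObjectA and ObjectB (the names mirror the description above; they are not from any original code):

' Class module ObjectA:
Public ReferenceToB As ObjectB

' Class module ObjectB:
Public ReferenceToA As ObjectA

' In a standard module:
Sub CircularReferenceDemo()
    Dim a As New ObjectA
    Dim b As New ObjectB
    Set a.ReferenceToB = b
    Set b.ReferenceToA = a
    ' Without one of the following lines, neither object is ever
    ' destroyed: each keeps the other's reference count above zero.
    Set a.ReferenceToB = Nothing
End Sub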
The cleanup in the code snippet you show is wrong: no manual cleanup is required there. Doing it anyway is even mildly harmful, because it gives you a false sense that the code is more correct than it is.
If you have a variable at a class level, it will be cleaned/destroyed when the class instance is destructed. You can destroy it earlier if you want (see item 1.).
If you have a variable at a module level, it will be cleaned/destroyed when your program exits (or, in case of VBA, when the VBA project is reset). You can destroy it earlier if you want (see item 1.).
Access level of a variable (public vs. private) does not affect its life time.
VBA manages object lifetimes by reference counting, a simple, deterministic form of garbage collection.
There can be multiple references to a given object (for example, Set aw = ActiveWorkbook creates a new reference to the active workbook), so an object is only cleaned up when it is clear that no references remain. Setting a variable to Nothing is an explicit way of decrementing the reference count; the count is decremented implicitly when the variable goes out of scope.
Strictly speaking, in modern Excel versions (2010+) setting to Nothing isn't necessary, but there were issues with older versions of Excel (for which the workaround was to explicitly set objects to Nothing).
I have at least one situation where the data is not automatically cleaned up, which would eventually lead to "Out of Memory" errors.
In a UserForm I had:
Public mainPicture As StdPicture
...
Set mainPicture = LoadPicture(PAGE_FILE)
When the UserForm was destroyed (after Unload Me), the memory allocated for the picture data was not being de-allocated. I had to add an explicit
Set mainPicture = Nothing
in the Terminate event.

Placing a small object at the beginning of a memory block

I need to store an object describing the memory details of a memory block allocated by sbrk(), at the beginning of the memory block itself.
For example:
metaData det();
void* alloc = sbrk(sizeof(det) + 50000);
// a piece of code to locate det at the beginning of alloc
I am not allowed to use placement new, and not allowed to allocate memory using new/malloc, etc.
I know that simply assigning it to the memory block would cause undefined behaviour.
I was thinking about using memcpy (I think that can cause problems, as det is not dynamically allocated).
Could assigning a pointer to the object at the beginning work (only if there's no other choice), or memcpy?
Thanks.
I am not allowed to use placement new
Placement new is the only way to place an object in an existing block of memory.
[intro.object] An object is created by a definition, by a new-expression, when implicitly changing the active member of a union, or when a temporary object is created
You cannot make a definition to refer to an existing memory region, so that's out.
There's no union, so that's also out.
A temporary object cannot be created in an existing block of memory, so that's also out.
The only remaining way is with a new expression, and of those, only placement new can refer to an existing block of memory.
So you are out of luck as far as the C++ standard goes.
However, the same problem exists with malloc. There is a ton of code out there that uses malloc without bothering to use placement new; such code just casts the result of malloc to the target type and proceeds from there. This method works in practice, and there is no sign of it ever being broken.
metaData *det = static_cast<metaData *>(alloc);
On an unrelated note, metaData det(); declares a function, and sizeof is not applicable to functions.
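For completeness, here is a hedged sketch of that cast-and-proceed approach applied to the question's sbrk case. The metaData fields are invented for the example, and, as noted above, this is not blessed by the standard; it just relies on the same works-in-practice reasoning as malloc-based code:

#include <unistd.h>   // sbrk
#include <cstddef>

struct metaData {         // hypothetical block header; keep it trivially copyable
    std::size_t size;
    bool        used;
};

void* alloc_block(std::size_t payload) {
    void* block = sbrk(sizeof(metaData) + payload);
    if (block == (void*)-1) return nullptr;          // sbrk reports failure this way
    metaData* det = static_cast<metaData*>(block);   // header lives at the block start
    det->size = payload;                             // technically no object was created
    det->used = true;                                // here, but this works in practice
    return det + 1;                                  // the payload follows the header
}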

Assign string to zmq::message_t without copying

I need to do some high-performance C++ work, and that is why I need to avoid copying data whenever possible.
Therefore I want to assign a string buffer directly to a zmq::message_t object without copying it. But there seems to be some deallocation of the string which prevents successful sending.
Here is the piece of code:
for (pair<int, string> msg : l) {
    comm_out.send_int(msg.first);
    comm_out.send_int(t_id);
    int size = msg.second.size();
    zmq::message_t m((void *) std::move(msg.second).data(), size, NULL, NULL);
    comm_out.send_frame_msg(m, false); // some zmq-wrapper class
}
How can I avoid the string being deallocated before the message is sent out? And when exactly is the string deallocated?
I think that zmq::message_t m((void *) std::move(msg.second).data()... is probably undefined behaviour, but it is certainly the cause of your problem. In this instance, std::move isn't doing what I suspect you think it does.
std::move is just a cast to an rvalue reference; by itself it moves nothing. So .data() returns a pointer into msg.second's internal buffer, and msg.second, being the by-value copy made by the loop, is destroyed at the end of each iteration. The 0MQ code assumes the pointer stays valid, but because 0MQ sends asynchronously, the buffer may already be gone when the message actually goes out, i.e. after send_frame_msg has returned.
Zero-copy is a complicated matter in 0MQ (see the 0MQ Guide for more details), but you have to ensure that any data that hasn't been copied remains valid until 0MQ tells you explicitly that it's finished with it.
Using C++ strings in this situation is hard, and requires a lot of thought. Your question about how to "avoid that the string is deallocated..." goes right to the heart of the issue. The only answer to that is "with great care".
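One hedged way out, sketched against cppzmq's message_t(void*, size_t, free_fn*, void*) constructor: move each string onto the heap and hand 0MQ a free function, so the buffer is released only when 0MQ has finished with it. Here comm_out, send_int, t_id and send_frame_msg are the question's wrapper code, assumed unchanged:

// Called by 0MQ once it has finished with the buffer.
static void free_string(void* /*data*/, void* hint) {
    delete static_cast<std::string*>(hint);   // destroys the heap-owned string
}

// In the sending code, iterate by reference so the move really empties the element:
for (std::pair<int, std::string>& msg : l) {
    comm_out.send_int(msg.first);
    comm_out.send_int(t_id);
    // Move the string to the heap so it outlives this loop iteration.
    auto* owned = new std::string(std::move(msg.second));
    zmq::message_t m((void*) owned->data(), owned->size(), free_string, owned);
    comm_out.send_frame_msg(m, false);
}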
In short, are you sure you need zero-copy at all?

VB6 Memory Leaks Without GDI Leaks

All the advice I've found on debugging memory leaks in VB6 code has focused on GDI leaks. In my situation, however, evidence suggests that I don't have a GDI leak but probably do have a memory leak. What might be some likely candidates for the cause of such a leak, and/or good tools to help me determine the portion of my program which is causing this?
Unfortunately VB6 does not have enough debugging instrumentation to detect object leaks. I usually implement instance bookkeeping on my own in every class, form, or user control with a couple of helper functions. The basic pattern is like this:
Private Const MODULE_NAME As String = "<<class_name_here>>"

#If DebugMode Then
    Private m_sDebugID As String
#End If

#If DebugMode Then
    Private Sub Class_Initialize()
        DebugInstanceInit MODULE_NAME, m_sDebugID, Me
    End Sub
#End If

#If DebugMode Then
    Private Sub Class_Terminate()
        DebugInstanceTerm MODULE_NAME, m_sDebugID
    End Sub
#End If
The helper function DebugInstanceInit generates the next sequential ID and assigns it to m_sDebugID, then stores the module name and ObjPtr(Me) in a collection, keyed on the ID.
The helper function DebugInstanceTerm removes the entry from the collection, using m_sDebugID as the key.
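A minimal sketch of what those two helpers could look like, assuming a standard module holding a global Collection (the implementation details are illustrative, not from the original answer):

' Standard module, e.g. DebugInstances.bas
Private m_lNextID As Long
Private m_oInstances As Collection    ' key = debug ID, value = "ModuleName:ObjPtr"

Public Sub DebugInstanceInit(sModule As String, sDebugID As String, oObj As Object)
    If m_oInstances Is Nothing Then Set m_oInstances = New Collection
    m_lNextID = m_lNextID + 1
    sDebugID = CStr(m_lNextID)                        ' hand the ID back to the instance
    m_oInstances.Add sModule & ":" & CStr(ObjPtr(oObj)), sDebugID
End Sub

Public Sub DebugInstanceTerm(sModule As String, sDebugID As String)
    m_oInstances.Remove sDebugID                      ' instance terminated cleanly
End Sub

Note that the collection stores only ObjPtr(oObj) as text, never oObj itself: keeping the object would add a reference and turn the bookkeeping into a leak of its own.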
This way, at every moment of application execution, you can dump this instance collection and identify the count and type of objects that are allocated but not yet terminated. If continuously opening and closing forms increments the object count, you are leaking objects, probably due to circular references (e.g. collection->entries and entry->collection).
Most frameworks and other languages have this kind of book-keeping built-in (at least in debug builds) but unfortunately VB6 has poor support for source code metadata (MODULE_NAME, FUNC_NAME, LINE_NUMBER) and object lifetime management. Lack of implementation inheritance hurts here too.
Generally, you have to watch for circular references. In VB6, an object remains in memory as long as it is referenced somewhere. So if you have an object-type variable as a member of a form (for example), it will stay in memory until that form is terminated (unless you explicitly set it to Nothing earlier, of course). And the reference chain can be very deep:
Here, FormA is keeping ObjectB and ObjectC in memory:
FormA -> ObjectB -> ObjectC
What you need to watch for is something like this:
FormA -> ObjectB <-> ObjectC
ObjectB has a reference to ObjectC, and ObjectC has a back (or parent) reference to ObjectB. Even if FormA is terminated or otherwise clears its reference to ObjectB, ObjectB and ObjectC will live forever and represent a memory leak.
A good tool for finding memory leaks is Deleaker; you can try it. And check one more thing: whether there is still a GDI leak or not. You must be sure!

Examples of garbage collection bottlenecks

I remember someone telling me a good one, but I cannot recall it. I spent the last 20 minutes with Google trying to learn more.
What are examples of bad/not great code that causes a performance hit due to garbage collection ?
From an old Sun tech tip: sometimes it helps to explicitly null out references in order to make objects eligible for garbage collection earlier:
public class Stack {
    private static final int MAXLEN = 10;
    private Object stk[] = new Object[MAXLEN];
    private int stkp = -1;

    public void push(Object p) { stk[++stkp] = p; }
    public Object pop() { return stk[stkp--]; }
}
Rewriting the pop method in this way helps ensure that garbage collection gets done in a timely fashion:
public Object pop() {
    Object p = stk[stkp];
    stk[stkp--] = null;   // drop the stale reference so the object can be collected
    return p;
}
What are examples of bad/not great code that causes a performance hit due to garbage collection ?
The following will be inefficient when using a generational garbage collector:
Mutating references in the heap, because write barriers are significantly more expensive than pointer writes. Consider replacing heap allocation and references with an array of value types and an integer index into the array, respectively (see the sketch after this list).
Creating long-lived temporaries. When they survive the nursery generation they must be marked, copied, and all pointers to them updated. If it is possible to coalesce updates in order to reuse an old version of a collection, do so.
Complicated heap topologies. Again, consider replacing many references with indices.
Deep thread stacks. Try to keep stacks shallow to make it easier for the GC to collate the global roots.
However, I would not call these "bad" because there is nothing objectively wrong with them. They are only inefficient when used with this kind of garbage collector. With manual memory management, none of the issues arise (although many are replaced with equivalent issues, e.g. performance of malloc vs pool allocators). With other kinds of GC some of these issues disappear, e.g. some GCs don't have a write barrier, mark-region GCs should handle long-lived temporaries better, not all VMs need thread stacks.
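As a hedged illustration of the first point (all names are invented for the sketch): replace a pointer-linked structure, whose reference mutations can incur write barriers, with primitive arrays linked by integer indices:

// Pointer-heavy version: every node is a separate heap object.
class Node {
    int value;
    Node next;   // mutating this reference can incur a write barrier
}

// Index-based version: no per-node objects, nothing for the GC to trace.
class NodePool {
    final int[] value;
    final int[] next;   // index of the next node, -1 meaning "null"

    NodePool(int capacity) {
        value = new int[capacity];
        next  = new int[capacity];
    }

    void link(int from, int to) {
        next[from] = to;   // plain int write, no write barrier
    }
}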
When you have a loop that creates new object instances: if the number of iterations is very high, you produce a lot of garbage, causing the garbage collector to run more frequently and so decreasing performance.
One example would be object references that are kept in member variables or static variables. Here is an example:
class Something {
    static HugeInstance instance = new HugeInstance();
}
The problem is that the garbage collector has no way of knowing when this instance is no longer needed. So it's usually better to keep things in local variables and have small functions.
String foo = new String("a" + "b" + "c");
I understand Java is better about this now, but in the early days that would involve the creation and destruction of 3 or 4 string objects.
I can give you an example that will work with the .Net CLR GC:
If you override the Finalize method of a class and do not call the superclass's Finalize method, such as:
protected override void Finalize() {
    Console.WriteLine("I'm done");
    // base.Finalize(); // <- you should call it!
}
(In C# itself you would write a destructor, ~MyClass(), and the compiler inserts the base.Finalize() call automatically; the point here is what happens when that call is missing.)
When you resurrect an object by accident:
protected override void Finalize() {
    Application.ObjHolder = this;   // the object becomes reachable again
}

class Application {
    static public object ObjHolder;
}
When you use an object that has a finalizer, it takes two GC collections to get rid of its data, and with code like the above you will never delete it at all.
Frequent memory allocations.
Lack of memory reuse (when dealing with large memory chunks).
Keeping objects longer than needed (holding references to obsolete objects).
In most modern collectors, any use of finalization will slow the collector down. And not just for the objects that have finalizers.
Your custom service does not have a load limiter on it, so:
A lot of requests come in at the same time for some reason (say, everyone logs on in the morning).
The service takes longer to process each request, as it now has hundreds of threads (one per request).
Yet more partly-processed requests build up due to the longer processing time.
Each partly-processed request has created lots of objects that live until the end of processing that request.
The garbage collector spends lots of time trying to free memory, but it can't, due to the above.
Yet more partly-processed requests build up due to the longer processing time... (including time spent in GC).
I encountered a nice example while doing some parallel cell-based simulation in Python. Cells are initialized and sent to worker processes after pickling for running. If you have too many cells alive at any one time, the master node runs out of RAM. The trick is to make a limited number of cells, pack them, and send them off to cluster nodes before making more, remembering to set the objects already sent off to None. This allows you to perform large simulations using the total RAM of the cluster in addition to its computing power.
The application here was a cell-based fire simulation; only the cells actively burning were kept as objects at any one time.
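A rough sketch of that batching pattern (make_cell and send_to_worker stand in for the simulation and cluster plumbing, which the answer does not show):

import pickle

BATCH_SIZE = 1000   # assumed limit that fits comfortably in the master node's RAM

def run_batches(total_cells, make_cell, send_to_worker):
    batch = []
    for i in range(total_cells):
        batch.append(make_cell(i))
        if len(batch) == BATCH_SIZE:
            send_to_worker(pickle.dumps(batch))
            batch = []   # drop references to the sent cells so they can be collected
    if batch:            # flush the final partial batch
        send_to_worker(pickle.dumps(batch))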
