Bindings and memory leaks

Problem
Example use case:
I have a control which displays a status gauge. The visual status of the gauge is bound to a property of the control.
The control is part of a topology graph, so depending on the topology, e.g. 100 of these controls may be displayed at once.
There are several topologies. Every time you switch to another topology view, the whole graph is regenerated.
Question
Could this cause a memory leak and do you have to perform a manual unbind in the old topology view before you create the new one? Similar to the bindings, do you have to remove event handlers manually?
The bindings and the event handlers are inside the control, so once the control isn't accessible anymore it should be eligible for garbage collection. So I think you don't have to do anything, but I don't know.
Thank you very much for the expertise!

If you look into the JavaDocs:
[...] All bindings in our implementation use instances of WeakInvalidationListener, which means usually a binding does not need to be disposed. But if you plan to use your application in environments that do not support WeakReferences you have to dispose unused Bindings to avoid memory leaks.
So if you use or extend the default Bindings, the garbage collector should be able to do its work.
If you do not, be sure to implement and call Binding.dispose().
As always: if an object is no longer referenced by any other object, it gets garbage collected (at some point in the future). So usually you do not need to do anything special here, as it tends to clutter the code.
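A minimal JavaFX sketch of the points above, assuming the standard Bindings factory methods (the property names are purely illustrative):

import javafx.beans.binding.Bindings;
import javafx.beans.binding.NumberBinding;
import javafx.beans.property.SimpleIntegerProperty;

public class BindingDisposeSketch {
    public static void main(String[] args) {
        SimpleIntegerProperty warnings = new SimpleIntegerProperty(1);
        SimpleIntegerProperty errors = new SimpleIntegerProperty(2);

        // The Bindings factory methods register weak listeners internally,
        // so simply dropping all references to 'total' is normally enough.
        NumberBinding total = Bindings.add(warnings, errors);
        System.out.println(total.getValue()); // prints 3

        // Only needed in environments without working WeakReferences,
        // or for custom Binding implementations that add strong listeners.
        total.dispose();
    }
}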

Related

Use cocoa bindings and threads

I have a few labels bound to a few variables that are modified in other threads via GCD.
Now I've read that Cocoa bindings are not thread-safe, but my app is running fine (the UI updates when the values of the variables are updated in a background thread).
Would the correct way be to do the calculations in the background thread and, when I need to change a variable's value, to do it via
DispatchQueue.main.sync() {
self.variable = newValue
}
?
If Cocoa bindings are not thread-safe, why have I never encountered a crash from a "read" of the bound UI element while the value was being written by a background thread?
What is the preferred way to have a value bound to a UI element (via Cocoa bindings) and also modify it from asynchronous threads?
Thanks!
Yes, if you modify an object that is observed by Cocoa bindings, you should do so only on the main thread, and GCD dispatching the modification to the main thread is a good enough way to do that.
Yes, your app probably works fine most of the time, but that is likely luck based and not actually correct. The problem is that Cocoa bindings are based on Key Value Observation, and KVO notifications are posted on the thread that causes the mutation.
It's also a complexity problem. As long as your app is relatively simple and fast, there's much less chance of two threads running afoul of one another. Imagine when your app gets more complex and computationally intensive... and a problem crops up... but by this point you might have hundreds of places where you're modifying bound properties from multiple threads. It'll save you grief in the long run to just follow the rules: use the main thread for updating bound objects, and try to keep bound properties to immutable, value-semantic types.
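The same discipline carries over to the JavaFX bindings from the main question: mutate bound properties only on the FX Application Thread, for example via Platform.runLater. A minimal sketch, assuming a gauge bound to a status property (all names are illustrative):

import javafx.application.Platform;
import javafx.beans.property.SimpleDoubleProperty;

public class MainThreadUpdateSketch {
    // Hypothetical property that a gauge control is bound to.
    private final SimpleDoubleProperty status = new SimpleDoubleProperty(0.0);

    public void startWorker() {
        Thread worker = new Thread(() -> {
            double newValue = 42.0; // stand-in for a long-running calculation
            // Touch the bound property only on the FX Application Thread.
            Platform.runLater(() -> status.set(newValue));
        });
        worker.setDaemon(true);
        worker.start();
    }
}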

How to extend GHC's Thread State Object

I'd like to add two extra fields of type StgWord32 to the thread state object (TSO). Based on the information I found on the GHC-Wiki and from looking at the source code, I have extended the struct in /includes/rts/storage/TSO.h and changed the program that creates different offsets (creating DerivedConstants.h). The compiler, the rts, and a simple application re-compile, but at the end of the execution (in hs_exit_) the garbage collector complains:
internal error: scavenge_stack: weird activation record found on stack: 45
I guess it has to do with Cmm and/or the STG implementation details (the offsets are generated since the structs are not visible at the Cmm level; correct me if I'm wrong). Is the order of fields significant? Have I missed a file that should be changed?
I use a debug build of the compiler and RTS, and a rather dated GHC 6.12.3 on a 64-bit architecture. Any pointers to relevant documentation, and comments on the differences between GHC 6 and 7 regarding TSO handling, are welcome too.
The error that you are getting comes from: ghc/rts/sm/Scav.c. Specifically at line 1917:
default:
barf("scavenge_stack: weird activation record found on stack: %d", (int)(info->i.type));
It looks like you need to also modify ClosureTypes.h, which you can find in ghc/includes/rts/storage. This file seems to contain the different kinds of headers that can appear in a heap object. I've also run into some strange bootstrapping errors, where if I try to rebuild using the stage-1 compiler, I get the error you mentioned, but if I do a clean build, then it compiles just fine.
A workaround that turned out to be good enough for me was to introduce a separate data structure for each Capability to hold the additional information for each lightweight thread. I used a hash table (see rts/Hash.h and .c) mapping from thread id to the custom info struct. The entries were added when the threads were created from sparks (in scheduleActivateSpark).
Timing the creation, insertion, lookup and destruction of the entries and the table showed negligible overhead for small programs. The main overhead results from the actual usage of the information and should ideally be kept outside of the innermost scheduler loop. For the THREADED_RTS build one needs to ensure that other Capabilities don't access tables that are not their own (or use a mutex if such access is required, which is a potential source of additional overhead).

How can I delete and deallocate OVM objects in SystemVerilog?

I would like to delete an ovm object (and its children) so that I can recreate it with different configs. Is there a way to do this in OVM?
Currently, when I try to create the object a second time with new, I get the following VCS runtime error:
[CLDEXT] Cannot set 'ap' as a child of 'instance', which already has a child by that name.
I realize that I can simply use a different name to "re-create" the instance, but then I'll still have the old instance sitting around and soaking up memory.
OVM is just a SystemVerilog library. That means that all the rules of SystemVerilog apply to OVM. So, yes, you can use new() with OVM. Sometimes it's preferable to use the factory, and sometimes it's preferable to use new() (that's a topic for a different discussion).
SystemVerilog does not have a delete operator or a destructor like C++. Instead, when you are done with an object you just remove all references to it and the garbage collector will clean up the memory. Here's a quote from the SystemVerilog reference manual (IEEE 1800-2009) section 8.7:
SystemVerilog does not require the complex memory allocation and deallocation of C++. Construction of an object is straightforward; and garbage collection, as in Java, is implicit and automatic. There can be no memory leaks or other subtle behaviors, which are so often the bane of C++ programmers.
It's not entirely true that you cannot have a memory leak. You can forget to remove all references to an object and the garbage collector will not know to pick it up. However, you do not have to worry about memory with the same detail as you do in C++.
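Since the standard itself points to Java as the model, here is a short Java sketch of that forgotten-reference situation (the registry is a made-up example):

import java.util.ArrayList;
import java.util.List;

public class ForgottenReferenceSketch {
    // A long-lived registry; anything added here stays reachable.
    private static final List<Object> registry = new ArrayList<>();

    public static void main(String[] args) {
        Object item = new Object();
        registry.add(item); // e.g. registered somewhere and never removed

        item = null; // the local handle is gone...
        // ...but the object is still reachable through 'registry',
        // so the garbage collector will never reclaim it: a de facto leak.
    }
}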
The particular error you received with id CLDEXT is from ovm_component class. From the message it appears that you attempted to create two components with the same name and the same parent. Components in OVM are typically static. That is, you create and elaborate them once, usually at time 0, and don't delete or add components after that. Because of this model there are no methods in ovm_component to remove child components. So there really isn't a good way to replace a component once it has been instantiated. By the way, this only applies to components. Other types of objects can be re-allocated.
If you feel that you need to replace a component with a different one after time 0, you should rethink the architecture of your testbench. There are probably better ways to accomplish what you are trying to do without replacing components.
I have only UVM experience, but I think OVM is similar. I would have liked to reply to @Victor Lyuboslavsky's comment, but I can't add comments.
The issue is with the name 'ap' which evidently has already been used for a child of 'instance'. Use this code instead.
static int instNum = 0;
instance_ap = my_ovm_extended_class::type_id::create
    ($sformatf("ap%0d", instNum++), this); // increment so each call gets a unique name
The first time an object is created and the handle assigned to 'instance_ap', the object will have the name 'instance.ap0'. The next time the code executes, it creates an object called 'instance.ap1', and so on.
As mentioned by other posters, this ought to be done only for non-component objects; components should be static and must be created during/before the build phase and connected to each other during/before the connect phase.
Try assigning null to the object before calling new again.
Unless I see someone else answer this question, I'd say there is no easy way to deallocate objects in the OVM framework.
OVM testbenches are static and created when the testbench is created.
When the environment class is instantiated, it will call new(create), build, connect, end_of_elaboration, start_of_simulation, run and check on all components.
By the end of the environment build phase all components must be created.
By the end of the environment connect phase all components must have their TLM ports connected.
Because of these requirements, you cannot change components (or port connections) except during the corresponding phases.
As part of the static nature of the testbench environment, every component must have a unique get_full_name() response. This is because string lookups are used to identify components in the hierarchy.
Assigning an object to null should deallocate memory. If there is no other handle pointing to that memory location, then it should get reclaimed.

Draw on top of suspended full-screen Direct3D app

Currently, I am able to hook onto Direct3D application and draw custom stuff onto its surface. However, I would like to suspend this application and then draw something else.
Is this even remotely possible? Something like creating my own Direct3D window on top of that application?
I'm targeting only Windows 7, but the application I want to draw on is using only DirectX 9.
The problem is that I have very little experience with DirectX in general.
Sort of.
You're working with two different elements here, one quite large but not particularly complex: hooking D3D. The other ("suspending" the app) is simple within that, but you don't quite want what you think you want.
To hook D3D, by the simplest method, you need to intercept the call to Direct3DCreate9 and return your own IDirect3D9, which later creates and returns your own IDirect3DDevice9. This will give you full control over the app's render process.
In order to "suspend" it, you need to wait for the desired trigger, then in your IDirect3DDevice9::Present, call your own event loop. This will, for all intents and purposes, suspend execution of the original app's code, but not the process itself (allowing your code and event loop to process). There will be some limitations of this, and you may not be able to consume window/Windows events (simply), but it will give you full control and effectively pause the original app.
Note, however, that you must intercept and reroute execution in every thread you want to "suspend": the hook only affects a single thread, and you don't want physics or AI crunching on while rendering and the UI are paused.
You need to perform your overlay drawing, whatever that may be, during your loop or your IDirect3DDevice9::Present hook, then call the real device's Present method as needed. If you want to run multiple frames of your overlay, then call the real Present repeatedly before returning from your Present. Tweak as necessary. Rendering here is done pretty much normally (check out general D3D tutorials for that), but there is one major catch: the device's state is unknown and may be incompatible, but must be "untouched" on return. This is handled simply by caching an IDirect3DStateBlock9 created from the device immediately after creating it. In your Present hook, create another state block with the state on entrance, restore the clean state block, run your code, then restore the entrance state block. You can work with any states, off a fresh slate, without damaging the device's state (I use this in practice; it works great).
If you want some rather extensive examples of how this works, I'd suggest checking out the Voodoo Shader project, which has full D3D8 and 9 hooks, including everything needed for overlays [/shameless own-project promotion]. Feel free to reuse any of the concepts, or comment with further questions; this certainly isn't all the details that may be useful to you.
This is a very complex thing to accomplish, as it is very much a hack to do so. The only people you see doing such things are Steam, TeamSpeak, Xfire, Fraps, and a few hard-core devs.
There are kits out on the internet that show you how to inject a DLL into the memory space of the target application to achieve such a feat, as well as methods such as proxy DLLs.
Proxy DLL:
http://www.codeguru.com/cpp/g-m/directx/directx8/article.php/c11453
Injection:
http://www.progamercity.net/d3d/372-c-directx9-0-hooking-via-detours.html
Good luck, this will take you a while.

How does the Garbage Collector decide when to kill objects held by WeakReferences?

I have an object, which I believe is held only by a WeakReference. I've traced its reference holders using SOS and SOSEX, and both confirm that this is the case (I'm not an SOS expert, so I could be wrong on this point).
The standard explanation of WeakReferences is that the GC ignores them when doing its sweeps. Nonetheless, my object survives an invocation of GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced).
Is it possible for an object that is only referenced with a WeakReference to survive that collection? Is there an even more thorough collection that I can force? Or, should I re-visit my belief that the only references to the object are weak?
Update and Conclusion
The root cause was that there was a reference on the stack that was keeping the object alive. It is unclear why neither SOS nor SOSEX was showing that reference. User error is always a possibility.
In the course of diagnosing the root cause, I did several experiments that demonstrated that WeakReferences to 2nd-generation objects can stick around for a surprisingly long time. However, a weakly referenced 2nd-generation object will not survive GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced).
As per Wikipedia: "An object referenced only by weak references is considered unreachable (or "weakly reachable") and so may be collected at any time. Weak references are used to avoid keeping memory referenced by unneeded objects."
I am not sure if your case is about weak references...
Try calling GC.WaitForPendingFinalizers() right after GC.Collect().
Another possible option: don't ever use a WeakReference for any purpose. In the wild, I've only ever seen them used as a mechanism for lowering an application's memory footprint (i.e. a form of caching). As the mighty MSDN says:
Avoid using weak references as an automatic solution to memory management problems. Instead, develop an effective caching policy for handling your application's objects.
I recommend checking for the "other" references to the weakly referenced objects, because if another reference is still alive, the objects won't be GCed.
Weakly referenced objects do get removed by garbage collection.
I've had the pleasure of debugging event systems where events were not getting fired... It turned out to be because the subscriber was only weakly referenced, so after some random delay the GC would eventually collect it. At which point the UI stopped updating. :)
Yes it is possible. If the WeakReference is located in another generation than the one being collected, for example, if it is in the 2nd Generation, and the GC only does a Gen 0 collection; it will survive. It should not survive a full 2nd Gen collection that completes and where all finalizers run, however.
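For comparison, the core weak-reachability behaviour can be sketched in Java as well; note that System.gc() is only a hint, so the printed result is not strictly guaranteed:

import java.lang.ref.WeakReference;

public class WeakReferenceSketch {
    public static void main(String[] args) {
        Object target = new Object();
        WeakReference<Object> weak = new WeakReference<>(target);

        target = null; // drop the only strong reference
        System.gc();   // a hint; a weakly reachable object may now be reclaimed

        // Typically prints 'null' once the collector has actually run.
        System.out.println(weak.get());
    }
}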

Resources