What is a good way to think about this? Are handles lost?
glCreateProgram()?
glCreateShader()?
glGenTextures()?
glGenBuffers()?
I'm wondering if I'm doing what's necessary (or doing too much and leaking memory).
How do you lose a context? The OpenGL context will exist until you destroy it (or the window/HDC).
However, all OpenGL objects are bound to the context(s) they were created in. If you destroy the context, all of its objects are destroyed as well (unless they are shared with another context, in which case only the sharable objects remain). So you must recreate them.
For example, do I need to call all three of glCreateProgram(), glAttachShader(), and glLinkProgram(), or just the last two?
If the OpenGL context is destroyed, you must call whatever OpenGL functions you need to recreate your objects. Any OpenGL objects that you got from the old context are gone. They are invalid. They are deleted pointers, and using deleted pointers is always wrong.
A new OpenGL context is new. So you must create your objects as though it were a new context. Because it is.
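To make that concrete, here is a rough sketch (plain C) of what recreating everything against the new context looks like; vertexSrc, fragmentSrc, w, h, pixels, dataSize and vertexData are placeholders for your own data:

/* All of this must be re-run against the NEW context; the handles you got
   from the old context (programs, shaders, textures, buffers) are invalid. */
GLuint vs = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vs, 1, &vertexSrc, NULL);       /* vertexSrc: your shader text */
glCompileShader(vs);
GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fs, 1, &fragmentSrc, NULL);
glCompileShader(fs);

GLuint prog = glCreateProgram();               /* yes, all three calls again */
glAttachShader(prog, vs);
glAttachShader(prog, fs);
glLinkProgram(prog);

GLuint tex;
glGenTextures(1, &tex);                        /* new texture name */
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

GLuint vbo;
glGenBuffers(1, &vbo);                         /* new buffer name, re-upload the data */
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, dataSize, vertexData, GL_STATIC_DRAW);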
Problem
Example use case:
I have a control which displays a status gauge. The visual status of the gauge is bound to a property of the control.
The control is part of a topology graph, so depending on the topology, e.g. 100 of these controls may be displayed at once.
There are several topologies. Every time you switch to another topology view, the whole graph is regenerated.
Question
Could this cause a memory leak, and do you have to perform a manual unbind in the old topology view before you create the new one? Likewise, do you have to remove the event handlers manually?
The bindings and the event handlers live inside the control, so once the control is no longer accessible it should be eligible for garbage collection. So I think you don't have to do anything, but I don't know.
Thank you very much for the expertise!
If you look into the JavaDocs:
[...] All bindings in our implementation use instances of WeakInvalidationListener, which means usually a binding does not need to be disposed. But if you plan to use your application in environments that do not support WeakReferences you have to dispose unused Bindings to avoid memory leaks.
So if you use or extend the default Bindings, the garbage collector should be able to do its work.
If you do not, be sure to implement and call Binding.dispose().
As always: if an object is no longer referenced by any other object, it gets garbage collected (at some point in the future). So you usually don't need to write code specifically for this, as it tends to clutter the code.
I'm using performBlock on my NSManagedObjectContexts so that my changes happen on the right queue for the given context. My question is: if I'm making a lot of changes and calling methods from within performBlock, is there an easy way to ensure that I use objects from the proper context?
Example:
I have an activeAccount ivar (created on the main queue) that is an NSManagedObject for the current account in the application. I have some instance methods that use the activeAccount object to perform certain tasks: getting data, setting data. So my question is, if I am doing something on a background NSManagedObjectContext and I call one of these shared methods, how do I know whether to use the current activeAccount ivar or to get a new one? Also, if I need to do something that requires an NSManagedObjectContext, how do I know which one to get/use?
One approach I have for choosing an NSManagedObjectContext is a method that checks which thread it is running on and returns either the main thread's context or the background thread's context. Also, if I'm on the background thread, am I allowed to read the object ID of the activeAccount that lives on the main thread so that I can get a copy of it on the background thread? Thanks in advance.
Brian,
Thread confinement can be tricky to maintain. The key is to use objects only in their proper MOC. Since every managed object keeps a link to both its host MOC and its object ID, this is easy to ensure. For example:
// Build a fresh context on the same store and re-reference the account by ID.
NSManagedObjectContext *newMOC = NSManagedObjectContext.new;
newMOC.persistentStoreCoordinator = oldActiveAccount.managedObjectContext.persistentStoreCoordinator;
// objectWithID: returns that object (as a fault) confined to newMOC.
ActiveAccount *newActiveAccount = [newMOC objectWithID:oldActiveAccount.objectID];
Now every object you access through newActiveAccount is realized in newMOC and is therefore thread-confined to that MOC. Object IDs are persistent, and the -persistentStoreCoordinator is rarely, if ever, changed on the main MOC, so the above code is properly confined. There are issues with this technique if the source MOC is transient, so I cannot guarantee it works between two background MOCs.
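Since the question is specifically about performBlock, here is a minimal sketch of the same idea with a queue-based context; backgroundMOC, Account and the balance attribute are assumed names, not taken from the question or answer:

NSManagedObjectID *accountID = self.activeAccount.objectID;   // object IDs are safe to pass between queues

[backgroundMOC performBlock:^{
    NSError *error = nil;
    // Re-materialize the account inside the background context before touching it.
    Account *bgAccount = (Account *)[backgroundMOC existingObjectWithID:accountID error:&error];
    if (bgAccount == nil) {
        NSLog(@"Could not load account in background context: %@", error);
        return;
    }
    bgAccount.balance = @(42);          // purely illustrative attribute
    [backgroundMOC save:&error];        // saves on the context's own queue
}];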
Andrew
I have to ask first: why do you have so many contexts in use at the same time?
I use one for background operations and one for the main thread. If I need another one for discardable changes, I just create it and pass it on, so that my self.managedObjectContext then points to the draft context. I never let my managed objects live in a scope where they could access a multitude of contexts.
It is not entirely clear if you are writing for iOS or OSX, but with iOS for example:
If I need to push a new view controller onto the navigation stack, I initialize my destination view controller's managedObjectContext ivar as well as any NSManagedObject subclass instances. Since in -prepareForSegue: I know whether I'll create a draft context or just pass on my current one, I also know whether I need to re-reference those managed object instances by their IDs from the newly created context or whether I can just pass them on.
Now inside my view controller I can take it for granted that my managed objects are always tied to self.managedObjectContext.
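A minimal sketch of that hand-off; the class and property names (DetailViewController, account) are hypothetical:

- (void)prepareForSegue:(UIStoryboardSegue *)segue sender:(id)sender
{
    DetailViewController *destination = (DetailViewController *)segue.destinationViewController;

    // Either pass the current context and objects straight through...
    destination.managedObjectContext = self.managedObjectContext;
    destination.account = self.account;

    // ...or, for discardable edits, create a draft child context and
    // re-reference the object by ID inside it:
    // NSManagedObjectContext *draft =
    //     [[NSManagedObjectContext alloc] initWithConcurrencyType:NSMainQueueConcurrencyType];
    // draft.parentContext = self.managedObjectContext;
    // destination.managedObjectContext = draft;
    // destination.account = (Account *)[draft objectWithID:self.account.objectID];
}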
I would like to delete an OVM object (and its children) so that I can recreate it with different configs. Is there a way to do this in OVM?
Currently, when I try to create the object a second time with new, I get the following VCS runtime error:
[CLDEXT] Cannot set 'ap' as a child of 'instance', which already has a child by that name.
I realize that I can simply use a different name to "re-create" the instance, but then I'll still have the old instance sitting around and soaking up memory.
OVM is just a SystemVerilog library. That means that all the rules of SystemVerilog apply to OVM. So, yes, you can use new() with OVM. Sometimes it's preferable to use the factory, and sometimes it's preferable to use new() (that's a topic for a different discussion).
SystemVerilog does not have a delete operator or a destructor like C++. Instead, when you are done with an object you just remove all references to it and the garbage collector will clean up the memory. Here's a quote from the SystemVerilog reference manual (IEEE 1800-2009) section 8.7:
SystemVerilog does not require the complex memory allocation and deallocation of C++. Construction of an object is straightforward; and garbage collection, as in Java, is implicit and automatic. There can be no memory leaks or other subtle behaviors, which are so often the bane of C++ programmers.
It's not entirely true that you cannot have a memory leak. You can forget to remove all references to an object and the garbage collector will not know to pick it up. However, you do not have to worry about memory with the same detail as you do in C++.
The particular error you received with id CLDEXT is from ovm_component class. From the message it appears that you attempted to create two components with the same name and the same parent. Components in OVM are typically static. That is, you create and elaborate them once, usually at time 0, and don't delete or add components after that. Because of this model there are no methods in ovm_component to remove child components. So there really isn't a good way to replace a component once it has been instantiated. By the way, this only applies to components. Other types of objects can be re-allocated.
If you feel that you need to replace a component with a different one after time 0, you should rethink the architecture of your testbench. There are probably better ways to accomplish what you are trying to do without replacing components.
I have only UVM experience, but I think OVM is similar. I would have liked to reply to @Victor Lyuboslavsky's comment, but I can't add comments.
The issue is with the name 'ap', which evidently has already been used for a child of 'instance'. Use this code instead.
static int instNum = 0;
// Give each instance a unique leaf name: ap0, ap1, ap2, ...
instance_ap = my_ovm_extended_class::type_id::create($sformatf("ap%0d", instNum++), this);
The first time an object is created and the handle assigned to 'instance_ap', the object will have the name 'instance.ap0'. The next time the code executes, an object named 'instance.ap1' is created, and so on.
As mentioned by other posters, this ought to be done only for non-component objects; components should be static, created during/before the build phase, and connected to each other during/before the connect phase.
Try assigning null to the object before calling new again.
Unless I see someone else answer this question, I'd say there is no easy way to deallocate objects in the OVM framework.
OVM testbenches are static and created when the testbench is created.
When the environment class is instantiated, it will call new(create), build, connect, end_of_elaboration, start_of_simulation, run and check on all components.
By the end of the environment build phase all components must be created.
By the end of the environment connect phase all components must have their TLM ports connected.
Because of these requirements, you cannot change components (or port connections) except during the corresponding phase.
As part of the static nature of the testbench environment, every component must have a unique get_full_name() response. This is because string lookups are used to identify components in the hierarchy.
Assigning the handle to null removes that reference. If no other handle points to the object, the memory should then get reclaimed.
I'm using [self retain] to hold an object itself, and [self release] to free it elsewhere. This is very convenient sometimes. But this is effectively a reference loop (a retain cycle), which most garbage-collection systems aim to solve. I wonder whether Objective-C's autorelease pool may detect the loop and surprise me by releasing the object before [self release] is reached. Is my approach encouraged or not? How can I ensure that the garbage collection, if there is any, won't be too smart?
This way of working is very discouraged. It looks like you need some pointers on memory management.
Theoretically, an object should live as long as it is useful. Useful objects are easy to spot: they are directly referenced somewhere on a thread stack, or reachable through some chain of references starting from an object that is. Objects that live "by themselves", without being referenced, cannot be useful, since no thread can reach them to make them perform anything.
This is how a garbage collector works: it traverses your object graph and collects every unreferenced object. Mind you, Objective-C is not always garbage-collected, so some rules had to be established. These are the memory management guidelines for Cocoa.
In short, it is based on the concept of 'ownership'. When you look at the reference count of an object, you immediately know how many other objects depend on it. If an object has a reference count of 3, it means that three other objects need it to work properly (and thus own it). Every time you keep a reference to an object (except in rare conditions), you should call its retain method, and before you drop the reference, you should call its release method.
There are some other important rules regarding the creation of objects. When you call alloc, copy or mutableCopy, the object you get already has a refcount of 1. In this case, it means the calling code is responsible for releasing the object once it's no longer required. This can be problematic when you return references to objects: once you return one, in theory, you don't need it anymore, but if you call release on it, it'll be destroyed right away! This is where NSAutoreleasePool objects come in. By calling autorelease on an object, you give up ownership of it (as if you had called release), except that the reference is not immediately revoked: instead, it is transferred to the NSAutoreleasePool, which will release it once it receives the release message itself. (Whenever your code is called back by the Cocoa framework, you can be assured that an autorelease pool already exists.)
It also means that you do not own objects if you did not call alloc, copy or mutableCopy on them; in other words, if you obtain a reference to such an object otherwise, you don't need to call release on it. If you need to keep around such an object, as usual, call retain on it, and then release when you're done.
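A tiny illustration of those ownership rules under manual retain/release; the names below (including the _greeting ivar) are made up for the example:

- (NSString *)makeGreeting
{
    // alloc/init: this code owns the new object (refcount 1).
    NSString *greeting = [[NSString alloc] initWithFormat:@"Hello, %@", @"world"];
    // We want to return it without keeping ownership, so we hand it
    // to the autorelease pool instead of releasing it immediately.
    return [greeting autorelease];
}

- (void)rememberGreeting
{
    NSString *g = [self makeGreeting];   // not obtained via alloc/copy: we don't own it
    [_greeting release];                 // _greeting: assumed ivar; drop the old value we owned
    _greeting = [g retain];              // keeping it around, so take ownership explicitly...
    // ...and balance this retain with [_greeting release] in -dealloc.
}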
Now, if we try to apply this logic to your use case, it stands out as odd. An object cannot logically own itself, as that would mean it can exist, standalone in memory, without being referenced by a thread. Obviously, if you have the occasion to call release on yourself, one of your methods is being executed; therefore, there has to be a reference to you somewhere, so you shouldn't need to retain yourself in the first place. I can't really say with the few details you've given, but you probably need to look into NSAutoreleasePool objects.
If you're using the retain/release memory model, it shouldn't be a problem. Nothing will go looking for your [self retain] and subvert it. That may not be the case, however, if you ever switch over to using garbage collection, where -retain and -release are no-ops.
Here's another thread on SO on the same topic.
I'd reiterate the answer that includes the phrase "overwhelming sense of ickyness." It's not illegal, but it feels like a poor plan unless there's a pretty strong reason. If nothing else, it seems sneaky, and that's never good in code. Do heed the warning in that thread to use -autorelease instead of -release.
I'm having a conflict when saving a bunch of NSManagedObjects via an outside thread. For starters, I can tell you the following:
I'm using a separate MOC for each thread.
The MOCs share the same persistent store coordinator.
It's likely that an outside thread is modifying one or many of the records that I'm saving.
OK, so with that out of the way, here's what I'm doing.
In my outside thread, I'm doing some computation and updating a single value in a bunch of managed objects. I do this by looking up the object in the persistent store by my primary key, modifying the single decimal property, and then calling save on the bunch all at once.
In the meantime, I believe the main thread is doing some updating of its own.
When my outside thread does its big save on its managed object context, an exception is thrown reporting a large number of conflicts. All of the conflicts seem to be centered around a single relationship on each record. Though the managed object in the persistent store and the one in my outside thread share the same objectID for this relationship, they don't share the same pointer. Based on what I see, that's the only thing that differs between the objects in my NSMergeConflict debug output.
It makes sense to me why the two objects have relationships with different pointers: they're in different threads. However, as I understand it from Apple's documentation, the only things cached when an object is first retrieved from the persistent store are the global IDs. So one would think that when I run save on the outside thread's MOC, it compares the objectIDs, sees they're the same, and lets it all through.
So, can anyone tell me why I'm getting a conflict?
Per the documentation in the Concurrency with Core Data chapter of The Core Data Programming Guide, the recommended configuration is for the contexts to share the same persistent store coordinator, not just the same persistent store.
Also, the section Track Changes in Other Threads Using Notifications of the same chapter states that if you're tracking updates with NSManagedObjectContextDidSaveNotification, you send -mergeChangesFromContextDidSaveNotification: to the main thread's context so it can merge the changes. But if you're tracking with NSManagedObjectContextDidChangeNotification, the external thread should send the object IDs of the modified objects to the main thread, which will then send -refreshObject:mergeChanges: to its context for each modified object.
And really, you should know if the main thread is also performing updates through its controller, and propagate its changes in like manner but in the opposite direction.
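For the did-save case, a minimal sketch might look like this; mainContext and backgroundContext are assumed names for your two contexts:

// Register once, e.g. where the background context is created
// (keep the returned observer token if you need to unregister later).
[[NSNotificationCenter defaultCenter]
    addObserverForName:NSManagedObjectContextDidSaveNotification
                object:backgroundContext
                 queue:[NSOperationQueue mainQueue]
            usingBlock:^(NSNotification *note) {
                // Fold the background thread's saved changes into the main context.
                [mainContext mergeChangesFromContextDidSaveNotification:note];
            }];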
You need to have all your contexts listening for NSManagedObjectContextDidSaveNotification from any context that makes changes. Otherwise, only the front context will be aware of changes made on the background threads but the background context won't be aware of changes on the front thread.
So, if you have three threads and three contexts, each of which makes changes, all three contexts must register for notifications from the other two.
Unfortunately, it seems this bug was actually being caused by something else: I was invoking the operation that causes the error more than once at the same time, when I shouldn't have been. Although this doesn't answer the initial question as to why pointers matter in conflicts, updating my code to prevent this situation has resolved my issue.