SetParent(null) on a GameObject does not deallocate it?
Is this common behaviour? I am porting a game from another engine, and in most environments the usual expectation for this kind of action is that once nothing owns the object, it will soon be removed from memory entirely.
No, SetParent(null) does not "deallocate" it.
If you want to remove it, you can call:
Destroy(gameObject);
"Is this common behaviour?"
It varies from engine to engine. In Unity, setting the parent to null simply makes the GameObject a root object in the scene; it stays alive (and keeps being updated and rendered) until you destroy it explicitly, so this is not something you have to worry about.
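For example (a minimal Unity C# sketch; the child field is purely illustrative), detaching and destroying are two separate operations:

using UnityEngine;

public class DetachExample : MonoBehaviour
{
    public Transform child; // assumed to be assigned in the Inspector

    void RemoveChild()
    {
        child.SetParent(null);      // child becomes a root object in the scene; it is still alive
        Destroy(child.gameObject);  // schedules the GameObject for removal at the end of the frame
    }
}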
Suppose I acquire a contextual reference to a bean programmatically, using BeanManager#getBeans(Type, Annotation...), BeanManager#resolve(Set), and BeanManager#getReference(Bean, Type, CreationalContext). (This is standard stuff.)
Suppose further that the bean in question is in @Singleton scope (it could be any scope other than @Dependent; I've picked @Singleton just (a) to make this concrete and (b) to get client proxies out of the picture to keep it simple). (Note that strictly speaking I should be unaware of what scope applies to the bean I'm working with, but my question may be directly related to this.)
Suppose further that the bean in question has a @Dependent-scoped contextual reference "inside" it. (This helps to highlight the necessary pairing of bean destruction with CreationalContext#release(); see below.)
Suppose further that the reference I get back is the first such reference. That is, my acquiring this reference is what causes the underlying contextual instance singleton to be created. All fine and simple and good so far. Yay; I have my object.
I do whatever I'm going to do with this contextual reference. Now I have just created something and don't want to leak memory, so…what should I do?
Should I simply try to destroy the existing contextual instance underlying the reference, perhaps using AlterableContext#destroy(Contextual)? But don't I now have to know intimate details about the scope and whether it is suitable to destroy any existing instance? (And of course if I do this I'd better call CreationalContext#release() too.)
Should I simply release the CreationalContext used for construction (without explicitly destroying the singleton instance)? But this will destroy the singleton's dependent references. That seems like something that would result in inconsistent state.
Should I do nothing? In the case of @Singleton this does no harm, since a singleton lasts for as long as the application, but, again, now I have to know intimate details about the scope. Is this by design?
Is the proper answer really that I need to know lots of things about the particular underlying scope (which, if user-supplied, it may be impossible to do), and need to tailor my destruction/releasing actions to the semantics of the scope?
I want to understand some best practices regarding using MVVM and multithreading. Let us assume I have a ViewModel with an ObservableCollection, and that I pass this collection to another service class which does some calculation and then updates the collection.
At some point I realize that I want to make this a multithreaded call. When I make the call to the service class using threads or tasks, the result is a cross-thread operation exception. The reason is obvious: the service class updates the collection, which in turn tries to update the UI from the background thread.
In such scenarios, what is the best practice? Should we always write our service classes so that they first clone the input and then update the cloned copy? Or should the view model always assume that the service calls might be multithreaded and send a cloned copy?
What would be the recommended way to solve this?
Thanks
Jithu
One solution to the cross-thread exception is to implement OnPropertyChanged in the base class of all ViewModels so that it switches to the correct thread/synchronization context; that way, every handler in the View bound to the changing property is called on the correct thread. See: Avoid calling BeginInvoke() from ViewModel objects in multi-threaded c# MVVM application
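As a rough sketch of that idea (the class name ViewModelBase and the overall shape are illustrative, not taken from any particular framework), the base class can capture the UI thread's SynchronizationContext and raise PropertyChanged through it:

using System.ComponentModel;
using System.Threading;

// Sketch: a base view model that raises PropertyChanged on the thread that created it.
public abstract class ViewModelBase : INotifyPropertyChanged
{
    // Captured on construction; normally this is the UI thread's context.
    private readonly SynchronizationContext _uiContext = SynchronizationContext.Current;

    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler == null)
            return;

        if (_uiContext == null || SynchronizationContext.Current == _uiContext)
            handler(this, new PropertyChangedEventArgs(propertyName));   // already on the right thread
        else
            _uiContext.Post(_ => handler(this, new PropertyChangedEventArgs(propertyName)), null);
    }
}

Note that this only covers simple property changes; an ObservableCollection raises its own CollectionChanged notifications, and those generally still have to be raised on (or marshalled to) the UI thread for bound controls.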
If/when you create copies, you are only postponing the synchronization and, in many cases, making it harder than it needs to be.
A web service will always return new objects; how you, or a framework, updates the model using those objects is up to you. A lot depends on the volume of checks and updates coming in. There is no single recommended way; use whatever fits the application's requirements.
I am developing an XNA project, where there are two DrawableGameComponents A and B, with the following constraints:
Either A is visible, or B is visible. So only one of their "Draw" methods has to be called.
Both A and B need to be enabled - always. So the "Update" method of each has to be called under all circumstances.
Currently both A and B are executed in the same thread. However, the "Update" methods of them are very CPU-Intensive. Since both GameComponents do not need to talk to each other, and both GameComponents do not need to share any data, it is easily possible to parallelize them.
What I would like to know is how to do that in XNA. The "Update" and "Draw" methods are called by the XNA Framework, so I do not know where to put the Threads. Is there a standard way of doing this?
Usually this is done through game state management, where the Game1 class (the default class) is used to call the Update() and/or Draw() of other classes (game components).
Take a look at the xnadevelopment game state management tutorial; there they describe how to call the different updates and draws of different classes, and hopefully you'll see that multi-threading can be implemented in the Game1 class (the default auto-created XNA class).
P.S. If you don't mind doing a lot of reading, take a look at this article on XNA multi-threading; it is accompanied by some diagrams that explain how it works very well.
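As a very rough sketch of what that could look like inside Game1 (the names componentA and componentB are made up, and this assumes the two components really share no data and are not also registered in the Components collection, so they are not updated twice):

protected override void Update(GameTime gameTime)
{
    // Run both heavy updates in parallel; safe only because A and B share no mutable state.
    System.Threading.Tasks.Parallel.Invoke(
        () => componentA.Update(gameTime),
        () => componentB.Update(gameTime));

    base.Update(gameTime); // updates anything that is still in the Components collection
}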
You haven't specified whether you need both Updates to run simultaneously, so I'm going on the assumption that only one component needs to be drawn at a time.
DrawableGameComponents are automatically managed by your Game object, but if you store a reference to each component instead of instantiating them without keeping one, such as:
componentOne = new FirstComponent(this);
Components.Add(componentOne);
componentTwo = new SecondComponent(this);
Components.Add(componentTwo);
// Immediately disable componentTwo
componentTwo.Enabled = false; // Prevents Update from firing
componentTwo.Visible = false; // Prevents Draw from firing (for Drawable components only)
Then you can let XNA manage the Update/Draw loops as normal. Since componentOne and componentTwo are class-level fields, you can control when each one is active.
Again, this is based on the assumption that you don't need one to update at the same time the other does.
I've been starting to get into game development; I've done some tutorials and read lots of articles, but one thing I'm not sure about is the best way to manage large numbers of temporary objects, e.g. bullets.
Should each entity manage its own bullets, should I have a global bullet manager or should I create each bullet as a new full-blown individual object (that seems pretty inefficient though)?
Also, when using a component pattern, what should I do about properties that seem generic, e.g. position, velocity, etc.?
Some things I've read suggest that everything should be in some kind of component, while others suggest that generic properties commonly accessed by a variety of components should be members of the entity class itself.
Forgive me, these are probably simple but I want to make sure I'm thinking in the right direction.
Thank you very much!
Creating each bullet as a full-blown object shouldn't be too inefficient; have a look at the Object pool pattern, which outlines a way to speed up these object creations.
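A minimal pool might look something like this (the Bullet fields and method names are illustrative only):

using System.Collections.Generic;

public class Bullet
{
    public bool Active;
    public float X, Y, VelX, VelY;
}

public class BulletPool
{
    private readonly List<Bullet> bullets = new List<Bullet>();

    // Reuse an inactive bullet if one exists; otherwise allocate a new one.
    public Bullet Spawn(float x, float y, float velX, float velY)
    {
        Bullet bullet = bullets.Find(b => !b.Active);
        if (bullet == null)
        {
            bullet = new Bullet();
            bullets.Add(bullet);
        }

        bullet.Active = true;
        bullet.X = x; bullet.Y = y;
        bullet.VelX = velX; bullet.VelY = velY;
        return bullet;
    }

    // "Destroying" a bullet just marks it reusable, so steady-state firing allocates nothing.
    public void Despawn(Bullet bullet)
    {
        bullet.Active = false;
    }
}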
As for your question re: components and generic properties, it depends on how strictly you wish to follow the component architecture. If you want to be really strict with the component architecture, every property should be in a component and different components should talk to each other. Otherwise, for efficiency reasons, share some properties in the main object. For more information, have a look at this page on the component behavioural pattern.
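To make the "shared properties on the entity" compromise concrete, a sketch (all names here are illustrative) might keep position and velocity on the entity while behaviour stays in components:

using System.Collections.Generic;

public interface IComponent
{
    void Update(Entity owner, float deltaTime);
}

public class Entity
{
    // Generic data that many components need lives on the entity itself.
    public float X, Y;
    public float VelX, VelY;

    private readonly List<IComponent> components = new List<IComponent>();

    public void Add(IComponent component) { components.Add(component); }

    public void Update(float deltaTime)
    {
        foreach (var component in components)
            component.Update(this, deltaTime);
    }
}

// A behaviour component that reads and writes the shared properties.
public class MovementComponent : IComponent
{
    public void Update(Entity owner, float deltaTime)
    {
        owner.X += owner.VelX * deltaTime;
        owner.Y += owner.VelY * deltaTime;
    }
}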
The original Quake uses a fixed-size pool for entities (which are also sometimes called edicts). Anything whose existence persists between frames is an entity. This includes "shaped physical" things like the world and doors, rectangular physical things like monsters, players, and nails, movetype-transparent things like weapons, invisible but touchable rectangular things like trigger fields, and entirely-nonphysical things like delay events.
I think the limit in Quake is something like 700 edicts; the game will crash if the limit is exceeded. I think edicts are simply stored in an array, since every property which exists for any edict exists for all of them.
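The fixed-size pool idea, reduced to a sketch (this is only an illustration of the approach, not Quake's actual code):

public struct Edict
{
    public bool InUse;
    public float X, Y, Z;
    // ...every field that any entity might need exists on all of them.
}

public class EdictPool
{
    private readonly Edict[] edicts;

    public EdictPool(int capacity)
    {
        edicts = new Edict[capacity]; // a few hundred slots, fixed for the whole game
    }

    // Find a free slot, or fail hard when the limit is exceeded.
    public int Allocate()
    {
        for (int i = 0; i < edicts.Length; i++)
        {
            if (!edicts[i].InUse)
            {
                edicts[i].InUse = true;
                return i;
            }
        }
        throw new System.InvalidOperationException("Edict limit exceeded.");
    }

    public void Free(int index)
    {
        edicts[index].InUse = false;
    }
}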
I have an object that gets instantiated in a LINQ to SQL method. When the object's fields are being assigned, I want to check a date field and, if it is an old date, retrieve data from another table and perform calculations before continuing to assign this object.
Is there anything wrong with triggering such an event through the property setter? Or should I independently check the date through some service and make the changes, if necessary, at some point afterwards?
There's nothing wrong with doing some logic from within your setters, but you should be careful about just how much logic you put within your setters. One of the fundamental problems of setters is that since they act like attributes, but have backing code, it's easy to forget that there are potentially some non-trivial actions going on behind the scenes.
This sort of thing can cause problems if you have accessors which use accessors which use accessors; you can rapidly end up causing unexpected performance problems. Generally, it's a good idea to keep the actions of setters (or getters, for that matter) to a relatively small set of operations. For example, validation can work perfectly fine in a setter, but I'd generally advise against doing validation against external resources, for two reasons: first, resource delays can cause problems with expected access speed, and second, the number of external resource accesses can destroy your performance.
Generally, the rule is this: keep it simple. It's not unreasonable to do complicated things in a setter, but if you do, it's really important to understand the consequences of all of the actions you'll be causing, and it's EXTREMELY important to document what it does extremely well, so the next guy (or girl) to use the code doesn't just try to naively use the accessor and end up causing massive resource contention issues unexpectedly.
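For instance (all the names here are hypothetical), one way to keep the setter cheap is to let it do only the local check and flag the object, and leave the expensive lookup against the other table as an explicit call:

using System;

public class Order
{
    private DateTime lastUpdated;

    // Cheap, local work in the setter: assign and flag.
    public DateTime LastUpdated
    {
        get { return lastUpdated; }
        set
        {
            lastUpdated = value;
            NeedsRecalculation = value < DateTime.UtcNow.AddDays(-30); // the "old" threshold is illustrative
        }
    }

    public bool NeedsRecalculation { get; private set; }

    // Expensive work (hitting another table) happens here, where the caller expects it.
    public void RecalculateIfStale(IPricingService pricingService)
    {
        if (!NeedsRecalculation)
            return;

        pricingService.Recalculate(this);
        NeedsRecalculation = false;
    }
}

public interface IPricingService
{
    void Recalculate(Order order);
}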
Half the point of using setters instead of, say, public fields, is to be able to trigger events associated with setting certain data.
Keyword: associated. If you're just using the setter as a "convenient" time to do some other stuff because it happens to work, you're doing it wrong. If setting this value requires other work to be done, then by all means, use the setter to do it.