Let's say you are making a game engine: you have several GameObjects, and every GameObject has a list of components that you can add or remove.
Let's say there is a MeshComponent that holds vertices, normals, etc. If several GameObjects each carry their own copy of the same MeshComponent, a lot of memory is wasted. Of course there are many ways to implement this, but I'd like some good advice on how to solve it: how do components share data that is not going to be modified?
C++ does not have static classes.
If several GameObjects have the same MeshComponent, there will be NO memory waste, because it is, of course, the same MeshComponent.
You only waste memory if you have many copies of a MeshComponent which conceptually should be identical.
If many GameObjects need to refer to the same MeshComponent, then they should each hold a (smart) pointer to that one MeshComponent.
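For illustration, here is a minimal C++ sketch of the shared-pointer approach; the MeshComponent and GameObject layouts are invented for the example:

```cpp
#include <memory>
#include <vector>

// Immutable, shareable mesh data (hypothetical layout).
struct MeshComponent {
    std::vector<float> vertices;
    std::vector<float> normals;
};

struct GameObject {
    // Many GameObjects can point at the same mesh; the mesh is freed
    // automatically when the last owner goes away.
    std::shared_ptr<const MeshComponent> mesh;
};

int main() {
    auto cubeMesh = std::make_shared<const MeshComponent>();

    GameObject a{cubeMesh};
    GameObject b{cubeMesh};  // no copy of the vertex data, just another reference

    // a.mesh.get() == b.mesh.get(): both refer to the same storage.
}
```

Marking the shared data const also makes the "not going to be modified" contract explicit.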
I am currently making a behavioral assessment of different software modules regarding DB access, network access, the number of memory allocations, etc.
The main goal is to pick a main use case (let's say system initialization) and identify the modules that are:
Accessing the DB unnecessarily.
Creating too many caches for the same data.
Making too many (or too large) allocations at once.
Spawning many threads.
Accessing the network.
By assessing those, I could have an overview of the modules that need to be reworked in order to improve performance, delete redundant DB accesses, avoid CPU usage peaks, etc.
I found the sequence diagram a good candidate to represent the use cases' behavior, but I am not sure how to depict their interaction with the activities mentioned above.
I could do something like what is shown in this picture, but that is my own "invention" of tagging functions with colors. I am not sure if it is too simplistic or childish (too many colors?).
I wonder if there is any specific UML diagram to represent this kind of interaction.
Using SDs is probably the most appropriate approach here. You might consider timing diagrams in certain cases if you need to present timing constraints. However, SDs already have a way to show timing constraints, which is quite powerful.
You should adorn your diagram with a comment stating that the length of the colored self-calls represents percentage of use, or something like that (or just add a title saying so). Using colors is perfectly fine, by the way.
As a side note: (the colored) self-calls are normally shown with a self-pointing arrow, like this,
but I'd guess your picture can be understood by anyone, so you can regard that as nitpicking. Most likely they are not real self-calls anyway but just indicators, so that's fine too.
tl;dr Whatever transports the message is appropriate.
I'm currently working on an exercise for which I want to create technical design documentation.
Therefore, I need to evaluate possible solutions to a bunch of problems that come with my fictional project.
Here's a quick glance at the exercise:
The game's art & core game design are split up very harshly - basically, the core system, game mechanics and design are created to be very abstract, in order to allow them to work with a very wide variety of art settings. Also, one of the restrictions is to re-use as many assets, levels & designs as possible.
Now to my question:
I want the level designers to create levels using "template" objects (objects which have all the required technical features, i.e. slots for attachments, correct scale, textures, etc.) and later replace these objects with sets of assets I receive from my outsourcer.
Since I don't want to manually replace all objects whenever I get a new set of assets, this is what I wanted to do:
Each template object gets a descriptive label, and each asset delivered by the outsourcer needs to have the exact same label name as its corresponding template counterpart stored in it as well (for example as a custom attribute, a channel, or simply in its name).
I now want to replace all templates with the related asset using a script.
This would be done for each set of assets. I would also keep several deployments of my engine, one per set, but initially they'd all start out with the templates that need to be replaced (since there will need to be some modifications for each setting, both visually and from a game design perspective, keeping all assets in one trunk/project didn't make sense to me).
To make this easier, I'd use a "database" of some sort (probably a simple dictionary which the engine script could query, and which would be filled out beforehand by another script upon delivery of new assets?).
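(To make this concrete, here is a rough C++ sketch of what that lookup could look like; the labels, file paths, and map contents are all made up:)

```cpp
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

int main() {
    // Hypothetical "database": template label -> delivered asset file,
    // filled out by an import script when a new asset set arrives.
    std::unordered_map<std::string, std::string> assetForLabel = {
        {"wall_small", "assets/scifi/wall_small.fbx"},
        {"door_main",  "assets/scifi/door_main.fbx"},
    };

    // Hypothetical scene: every placed template carries its label.
    std::vector<std::string> placedTemplates = {"wall_small", "door_main", "wall_small"};

    for (const auto& label : placedTemplates) {
        auto it = assetForLabel.find(label);
        if (it != assetForLabel.end())
            std::cout << "replace '" << label << "' with " << it->second << '\n';
        else
            std::cout << "no asset delivered yet for '" << label << "', keep template\n";
    }
}
```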
My question is: is this possible? If yes, how difficult would this be from a programmer's perspective? I have only limited knowledge in this field, so I'd love to hear what you lads & ladies think about this.
Also (very important): do you know of a better way to achieve this "replaceability" of assets, or simply an easier way to achieve what I want to do? I appreciate any feedback! Thank you!
Quick edit: this would not only be applied to 3D objects; textures would also need to be replaced, obviously.
I think you are looking for Prefabs.
Basically, prefabs implement a sort of prototype pattern.
Instead of putting a GameObject directly into the scene's hierarchy, you can make it a prefab and put into the scene a GameObject that is an instance of that prefab.
When a GameObject in the scene is linked to a prefab and the prefab is modified, the linked object is modified too.
If you have several instances of the same prefab, all instances are updated as well.
The only strong limitation of this feature is that, as of now, nested prefabs aren't supported.
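(Unity's prefab machinery is built in, but conceptually it amounts to something like this C++ sketch, with invented names: every instance reads shared state from its prefab, so editing the prefab "updates" all instances at once:)

```cpp
#include <iostream>
#include <memory>
#include <string>

// Hypothetical prefab: a single, editable definition.
struct Prefab {
    std::string modelFile = "placeholder_cube.obj";
};

// Scene objects link to the prefab instead of owning their own copy.
struct Instance {
    std::shared_ptr<Prefab> prefab;
    const std::string& model() const { return prefab->modelFile; }
};

int main() {
    auto modelPrefab = std::make_shared<Prefab>();
    Instance a{modelPrefab}, b{modelPrefab};

    modelPrefab->modelFile = "spaceship.fbx";  // the outsourcer delivers the real asset

    std::cout << a.model() << ' ' << b.model() << '\n';  // both print spaceship.fbx
}
```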
I want the level designers to create levels using "template" objects (objects which have all the required technical features, i.e. slots for attachments, correct scale, textures, etc.) and later replace these objects with sets of assets I receive from my outsourcer.
This is the typical use case. You have a placeholder in the scene (e.g. a cube) that will be substituted by a model once the artists provide it.
If you instantiate 100 cubes in the scene, then when you need to substitute them you would have to do it manually for every object.
If instead you have created a prefab (let's call it ModelPrefab) and the cubes in the scene are instances of that prefab, then when you get the new 3D model you can simply update the prefab, and all linked instances will be updated too.
My question is: is this possible? If yes, how difficult would this be from a programmer's perspective?
If you can work without nested prefabs, you don't have to do anything: it's already implemented. If you need nested prefabs, implementing them might not be so straightforward.
Quick edit: this would not only be applied to 3D objects; textures would also need to be replaced, obviously.
I made the example above using models, but you can make a prefab out of any GameObject, which is really just a collection of Components (have a look at Component Based Object Management if you are interested).
EDIT
Yes, it is possible to update prefabs through script. The required functions are in the UnityEditor namespace, so they must be used through an editor extension.
You can find everything you need in the PrefabUtility class.
I've been starting to get into game development; I've done some tutorials and read lots of articles, but one thing I'm not sure about is the best way to manage large numbers of temporary objects, e.g. bullets.
Should each entity manage its own bullets, should I have a global bullet manager, or should I create each bullet as a new full-blown individual object (that seems pretty inefficient, though)?
Also, when using a component pattern, what should I do about properties that seem generic, e.g. position, velocity, etc.?
Some things I've read suggest that everything should be in some kind of component, while others suggest that generic properties which will be commonly accessed by a variety of components should be members of the entity class itself.
Forgive me, these are probably simple but I want to make sure I'm thinking in the right direction.
Thank you very much!
Creating each bullet as a full-blown object shouldn't be too inefficient - have a look at the Object Pool pattern, which outlines a way to speed up these object creations.
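As a rough illustration, here is a minimal fixed-capacity pool in C++; the Bullet fields and the capacity are made up for the example:

```cpp
#include <array>

struct Bullet {
    float x = 0, y = 0, vx = 0, vy = 0;
    bool alive = false;
};

// Recycles slots instead of allocating a new Bullet per shot.
class BulletPool {
    std::array<Bullet, 256> bullets_{};  // capacity chosen arbitrarily
public:
    Bullet* spawn(float x, float y, float vx, float vy) {
        for (auto& b : bullets_) {
            if (!b.alive) {               // find a dead slot and reuse it
                b = {x, y, vx, vy, true};
                return &b;
            }
        }
        return nullptr;                   // pool exhausted: drop the shot or grow the pool
    }

    void update(float dt) {
        for (auto& b : bullets_) {
            if (!b.alive) continue;
            b.x += b.vx * dt;
            b.y += b.vy * dt;
            // set b.alive = false when the bullet leaves the screen or hits something
        }
    }
};
```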
As for your question re: components and generic properties, it depends on how strictly you wish to follow the component architecture. If you want to be really strict with the component architecture, every property should be in a component and different components should talk to each other. Otherwise, for efficiency reasons, share some properties in the main object. For more information, have a look at this page on the component behavioural pattern.
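For the generic-properties question, the trade-off looks roughly like this in C++ (both forms are sketches with invented names):

```cpp
#include <memory>
#include <typeindex>
#include <typeinfo>
#include <unordered_map>

// Option A: hot, universal data lives directly on the entity,
// so every system can reach it without a lookup.
struct EntityA {
    float x = 0, y = 0;
    float vx = 0, vy = 0;
};

// Option B: strictly everything is a component, found at runtime.
struct Component { virtual ~Component() = default; };
struct PositionComponent : Component { float x = 0, y = 0; };

struct EntityB {
    std::unordered_map<std::type_index, std::unique_ptr<Component>> components;

    template <typename T>
    T* get() {  // pay a hash lookup (and a cast) per access
        auto it = components.find(typeid(T));
        return it == components.end() ? nullptr : static_cast<T*>(it->second.get());
    }
};
```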
The original Quake uses a fixed-size pool for entities (which are also sometimes called edicts). Anything whose existence persists between frames is an entity. This includes "shaped physical" things like the world and doors, rectangular physical things like monsters, players, and nails, movetype-transparent things like weapons, invisible but touchable rectangular things like trigger fields, and entirely-nonphysical things like delay events.
I think the limit in Quake is something like 700 edicts; the game will crash if the limit is exceeded. I think edicts are simply stored in an array, since every property which exists for any edict exists for all of them.
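In sketch form (with invented field names; the real edict struct is of course much larger), a fixed pool like that might look as follows:

```cpp
#include <cstddef>

constexpr std::size_t kMaxEdicts = 700;  // hard cap, as described above

struct Edict {
    bool inUse = false;
    float origin[3] = {0, 0, 0};
    // ...every field any entity could need exists on every edict
};

Edict g_edicts[kMaxEdicts];  // one flat, statically sized array

// Allocation is just claiming a free slot; exceeding the cap is fatal.
Edict* allocEdict() {
    for (auto& e : g_edicts) {
        if (!e.inUse) {
            e.inUse = true;
            return &e;
        }
    }
    return nullptr;  // the original engine would abort with an error here
}
```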
How would you organize an entity that has hundreds of properties? Strictly speaking it is hundreds of properties plus a few Value Objects (a few of the properties have two or three properties of their own), but the point is: how do you handle the large number of properties?
I am re-creating our model from the ground up using DDD, and the current issue is how to organize one of the main entities, which is broken up into many, many subsets. Currently it is written to have about a dozen subsets of properties, like CarInfo() with 50+ properties, CarRankings() with 80+, CarStats(), CarColor(), etc.
Think of it as mass data stored on a single entity root.
Is it appropriate to have a service for the simple purpose of grouping a large collection of properties? Like a CarInfoService that would return a Car() object along with a large collection of some sort.
Another idea would be to look at how the data is displayed. There is no single view that shows all of this data; instead, the views are split up by subject matter. CarInfo shows all general information about the car; another, CarStats, shows all the stats of the car. In this sense, the Application layer can build the underlying details needed for the UI. But I still need a way to store it all in the domain.
I have a mind to just put a number of XML property bags on it and call it a day. lol
This is a tough problem. You've got an aggregate root with lots of branches. But it sounds as if your users only work with certain collections at certain times. You could try pruning the tree a bit.
For instance, in your Car example, if your users are comparing the rankings of different cars, you could treat that as its own module or subsystem. No need to load up the detailed car data associated with each ranking, if the specific task is to figure out which ones rate better than others.
Remember, even though the data may be stored in a parent-child hierarchy in the database, it doesn't necessarily mean your domain will be structured in the same way.
By looking at the tasks or functions your users will perform on your data, you might discover concepts that help to break up that giant aggregate into more manageable chunks.
If you do need to assemble the full root with all of its branches, I think you'll definitely want some sort of service to bring everything together.
I think you should consider splitting such an entity into different bounded contexts related via shared identifiers. You would then have a different Car in each BC (and thus in a different namespace, too), and each one would handle only the information related to that particular aspect.
This way, when deeper insight arrives, chances are you will only have to refactor one BC without affecting the others.
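(A sketch of the idea using C++ namespaces as stand-ins for the bounded contexts; the contexts and fields are made up:)

```cpp
#include <string>
#include <vector>

using CarId = std::string;  // the shared identifier linking the contexts

namespace ranking {
// This context only knows what it needs to compare cars.
struct Car {
    CarId id;
    int rank = 0;
    std::vector<int> scores;
};
}  // namespace ranking

namespace catalog {
// A different Car, in a different context, holding descriptive data.
struct Car {
    CarId id;
    std::string make;
    std::string color;
};
}  // namespace catalog

// Each context can be loaded, changed, and refactored independently;
// only the CarId ties a ranking::Car to the corresponding catalog::Car.
```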
Our Domain has a need to deal with large amounts (possibly more than 1000 records' worth) of objects as domain concepts. This is largely historical data that Domain business logic needs to use. Normally this kind of processing depends on a Stored Procedure or some other service to do this kind of work, but since it is all intimately Domain related, and we want to maintain the validity of the Model, we'd like to find a solution that allows the Aggregate to manage all of the business logic and rules required to work with the data.
Essentially, we're talking about past transaction data. Our idea was to build a lightweight class and create an instance for each transaction we need to work with from the database. We're uncomfortable with this because of the volume of objects we'd be instantiating and the potential performance hit, but we're equally uncomfortable with offloading this Domain logic to a stored procedure since that would break the consistency of our Model.
Any ideas on how we can approach this?
"1000" isn't really that big a number when it comes to simple objects. I know that a given thread in the system I work on may be holding on to tens of thousands of domain objects at a given time, all while other threads are doing the same at the same time. By the time you consider all of the different things going on in a reasonably complicated application, 1000 objects is kind of a drop in the bucket.
YMMV depending on what sort of resources those objects are holding on to, system load, hard performance requirements, or any number of other factors, but if, as you say, they're just "lightweight" objects, I'd make sure you actually have a performance problem on your hands before you try getting too fancy.
Lazy loading is one technique for mitigating this problem and most of the popular object-relational management solutions implement it. It has detractors (for example, see this answer to Lazy loading - what’s the best approach?), but others consider lazy loading indispensable.
Pros
Can reduce the memory footprint of your aggregates to a manageable level.
Lets your ORM infrastructure manage your units of work for you.
In cases where you don't need a lot of child data, it can be faster than fully materializing ("hydrating") your aggregate root.
Cons
Chattier than materializing your aggregates all at once: you make a lot of small trips to the database.
Usually requires architectural changes to your domain entity classes, which can compromise your own design. (For example, NHibernate just requires you to expose a default constructor and make your entities' members virtual to take advantage of lazy loading - but I've seen other solutions that are much more intrusive.)
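To make the mechanism concrete, here is a hand-rolled sketch of lazy loading in C++; real ORMs generate such proxies for you, and loadTransactionsFromDb is a made-up stand-in for the data access call:

```cpp
#include <optional>
#include <vector>

struct Transaction { /* amount, date, ... */ };

// Stand-in for the real database query.
std::vector<Transaction> loadTransactionsFromDb(int accountId) {
    // imagine a SELECT ... WHERE account_id = :accountId here
    return {};
}

class Account {
    int id_;
    // Empty until first use; 'mutable' lets a const getter fill it in.
    mutable std::optional<std::vector<Transaction>> transactions_;
public:
    explicit Account(int id) : id_(id) {}

    // The child collection is only fetched on first access.
    const std::vector<Transaction>& transactions() const {
        if (!transactions_)
            transactions_ = loadTransactionsFromDb(id_);
        return *transactions_;
    }
};
```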
By contrast, another approach would be to create multiple classes to represent each entity. These classes would essentially be partial aggregates tailored to specific use cases. The main drawback to this is that you risk inflating the number of classes and the amount of logic that your domain clients need to deal with.
When you say 1000 records worth, do you mean 1000 tables or 1000 rows? How much data would be loaded into memory?
It all depends on the memory footprint of your objects. Lazy loading can indeed help, if the objects in question reference other objects which are not of interest in your process.
If you end up with a performance hog, you must ask yourself (or perhaps your client) whether the process must run synchronously, or whether it can be offloaded to a batch process somewhere else. See, for example, the related question "Using DDD, How Does One Implement Batch Processing?".