Something that holds FlxBasic or FlxObject members along with a position and total size, so I could group them and set their position as a group with ease.
I don't need an alternative class from another library; I can write a simple class like that myself. I'm only asking so I don't waste time writing one if it already exists.
There's nothing like that for FlxBasic or FlxObject, just for FlxSprite (which is further down the inheritance chain). There are two options:
FlxSpriteGroup: part of core flixel; basically a FlxSprite with an internal FlxGroup that holds additional sprites and manipulates their properties as well. This means things like rotation and scaling work on each individual sprite, as opposed to the group acting as a single rotated / scaled sprite.
FlxNestedSprite: part of flixel-addons (and perhaps not as well maintained as a result), pretty much an imitation of a DisplayObjectContainer.
Related
In the Godot Engine, what happens when objects/scenes leave the viewport? For example: I am trying to make a large map with lots of scenes/entities (such as multiple moving enemies, as well as resource nodes), and I am trying to figure out the best way to handle the entities that no longer need to be loaded in memory.
My initial thought was that, every time the player moves to a new tile, I would check the "map" array that holds all the tiles, load the ones just off the edge of the screen, and unload the ones that are about to disappear. I assume this is horrible practice. I also thought of having "regions" that, once entered, load the upcoming sections - but that also gets super complicated.
I noticed that Godot already handles part of this problem. For example, when an object emitting particles leaves the viewport, it stops emitting particles.
Overall, performance-wise, having multiple instances shouldn't be a problem, but if you have a lot of entities you may want to execute their code only when they are in the viewport.
For instance:
if is_in_viewport:
    # do everything (the entity's full per-frame update)
    update_entity()  # placeholder name, not a Godot function
else:
    # do nothing but exist
    pass
For that purpose, the VisibilityNotifier2D class may be useful.
Is there an easy way to show a 3D entity at all times, even when that entity is hidden behind another entity? For example, I want lines to always be shown, even when they are behind a mesh surface.
I use the Qt3D framework.
Assuming that you are talking about the Qt3D framework, I want to extend the answer of Rabbid76.
To disable depth testing in the Qt3D framework, add a QRenderStateSet to the framegraph branch that renders things (the one that has a QViewport, for example) and add a QDepthTest to it. Then set the depth function of the QDepthTest to Always. This way the depth test always passes, and entities in the back will also be drawn, depending on the drawing order. You can use QSortPolicy to adjust the drawing order to back-to-front.
But this won't work when the camera position changes and the entity you always want drawn ends up in front. I'd suggest you add another framegraph branch and use a QLayerFilter to deactivate depth testing for this one entity only.
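A minimal sketch of such a branch in C++ (the names viewport and alwaysVisibleEntity, and the exact wiring, are assumptions for illustration, not taken from your setup):
#include <Qt3DCore/QEntity>
#include <Qt3DRender/QDepthTest>
#include <Qt3DRender/QFrameGraphNode>
#include <Qt3DRender/QLayer>
#include <Qt3DRender/QLayerFilter>
#include <Qt3DRender/QRenderStateSet>

// "viewport" stands for the existing framegraph node the scene is rendered under,
// "alwaysVisibleEntity" for the entity that must never be hidden (both assumed).
void addOverlayBranch(Qt3DRender::QFrameGraphNode *viewport,
                      Qt3DCore::QEntity *alwaysVisibleEntity)
{
    // Tag the entity with a layer so it can be picked out by a QLayerFilter.
    auto *overlayLayer = new Qt3DRender::QLayer;
    alwaysVisibleEntity->addComponent(overlayLayer);

    // Extra framegraph branch that only renders entities carrying overlayLayer...
    auto *overlayFilter = new Qt3DRender::QLayerFilter(viewport);
    overlayFilter->addLayer(overlayLayer);

    // ...with a depth test that always passes, so that branch is drawn on top.
    auto *stateSet = new Qt3DRender::QRenderStateSet(overlayFilter);
    auto *depthTest = new Qt3DRender::QDepthTest;
    depthTest->setDepthFunction(Qt3DRender::QDepthTest::Always);
    stateSet->addRenderState(depthTest);
}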
If your entity looks weird with depth testing deactivated (likely for complex objects), you could replace the QDepthTest with a QClearBuffers and simply clear the depth buffer.
Have a look at my answer here, where I showed an example of a custom framegraph with depth test.
If the depth test is disabled, then the geometry (like a line) is always drawn on top of the previously drawn geometry. The depth test can be disabled by:
glDisable(GL_DEPTH_TEST)
See glEnable
Alternatively, the depth test function can be set so that a fragment always passes the depth test. In Qt this can be done with the class QDepthTest, using the enumerator constant Qt3DRender::QDepthTest::Always.
In this case, you have to take care of the order in which the geometry is drawn.
You have to render the polygons (the opaque geometry) first, using the depth test function Qt3DRender::QDepthTest::Less.
After that, you render the lines on top, using the depth test function Qt3DRender::QDepthTest::Always.
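The draw order then looks roughly like this (a plain OpenGL sketch; the two draw functions are placeholders for your own rendering code):
glEnable(GL_DEPTH_TEST);

glDepthFunc(GL_LESS);     // normal depth testing for the opaque geometry
drawOpaqueGeometry();     // placeholder: render the meshes first

glDepthFunc(GL_ALWAYS);   // every fragment passes, so the lines end up on top
drawOverlayLines();       // placeholder: render the lines afterwards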
I'm taking an introductory graphics course. I intuitively understand that converting a click or touch into object coordinates makes the math cleaner, reduces the chance of human error, and potentially makes debugging easier, but none of these is actually a good conceptual explanation of why object coordinate spaces are used in selection tests, as opposed to simply using world coordinates for the test; they're just observations of what tends to happen when object coordinates are used. So I ask: why?
A selection test involves comparing the click coordinates, which you get in window coordinates, against lots and lots of object features, which are represented in object coordinates.
You need to transform them into the same coordinate system in order to do the checks, so you can either transform the one simple click point or transform all the various object features.
Transforming one point or line is just a lot easier than transforming a whole bunch of object features of various types.
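In symbols, writing M for the combined transform from an object's coordinates to window coordinates (notation introduced here, not from the original answer), the choice is between one inverse mapping of the pick point and a mapping of every one of the object's N features:
\[
p_{\text{object}} = M^{-1}\, p_{\text{click}}
\qquad \text{versus} \qquad
f'_i = M\, f_i \quad (i = 1, \dots, N).
\]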
There are cases where the location of a specific object or point may not be known within a world coordinate system, but is known relative to some other coordinate system.
To summarize an example from my course text, consider two different towns, one using a grid system for its layout and the other using what I can only describe as the New England we-made-cow-trails-into-roads method. A government employee is tasked with creating a layout of the area that includes both towns, and in doing so has to convert the two coordinate systems into a third that encompasses them.
Sometimes, using a world atlas just isn't practical to get across the street, and so something much more local (and relevant) is used instead, as it provides much more detail over a much smaller area.
The text also explains that it may be more than simply impractical to use a given coordinate system - it may yield results that are improbable or just plain wrong. This is evidenced in the evolution of the geocentric and heliocentric models of the universe - the distance of the stars from us was calculated with very different results using the two models.
Thinking of my own example, the best that comes to mind would be something like your own internal organs - from the outside, you don't know for sure exactly the shape, size, and structure of each of them, but your own body does. In order to be able to access that information, you need to look inside the body (ideally in a way that doesn't kill you). It's not something that is plainly observable from outside.
I want to create a tree of elements, for example as in this figure.
Can I use a tree view, expandable view, or NSOutlineView in MonoTouch?
Is there a tree-of-objects control in MonoTouch?
There is no built-in or default control to represent a tree on iOS, and frankly you shouldn't really need one; in most cases it should probably be avoided.
It's hard to fit the tree-like control we have on our desktops into the touch world, where you have huge fingers (so huge nodes), and with the nodes offset to show depth there isn't much space left over. Adding it to the iOS environment would create a weird UX flow, so you should re-think your design.
The common solution is to use tables with a detail accessory indicator and push a new controller to show the data (either another table or something else).
If you absolutely need one, you will need to roll your own. Check this for reference: http://dotnet.kapenilattex.com/?p=566
I have a set of objects that can be scaled and translated.
Suppose the user selects an object and drags it to some position.
I was thinking about implementing this in two different ways: either changing the coordinates of the objects given the mouse position, or changing the transformation matrix.
Is one of these implementations better than the other?
My main issues are:
Performance
Code organization
Scalability
Objects have certain coordinates, and the way you look at them has a certain frame of reference. I think it is better not to mess with your coordinates, and instead to change just the matrix that takes you from "the object is here" to "I draw the object here". It is much cleaner. Performance-wise, you have to apply a transformation to each object being rendered anyway, so you may as well do it just once. From a code-organization perspective, it is better to keep the stored coordinates "relating to something physical". From a scalability perspective, not touching every object each time the user changes the view is clearly preferable: you only apply the transformation to the objects you actually render, so if you can't keep up you can skip a step, whereas if you failed to rescale some of your objects during a step you would quickly get into trouble. Finally, repeatedly applying transformations to the same object's coordinates would tend to accumulate errors.
Stream of consciousness, but a clear preference, I think!
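As a rough sketch of the "leave the coordinates alone, change only the transform" idea (all names are invented for illustration, and a real renderer would use a proper matrix type rather than this scale-plus-offset struct):
#include <cstddef>
#include <vector>

struct Vec2 { float x, y; };

// Poor man's transform: a uniform scale followed by a translation.
struct Transform2D {
    float scale = 1.0f;
    Vec2 offset{0.0f, 0.0f};

    Vec2 apply(const Vec2 &p) const {
        return { p.x * scale + offset.x, p.y * scale + offset.y };
    }
};

struct Object {
    std::vector<Vec2> localVertices;  // never touched by dragging or zooming
    Transform2D transform;            // the only thing user interaction edits
};

// Dragging updates only the transform, not the stored coordinates.
void onDrag(Object &obj, const Vec2 &delta) {
    obj.transform.offset.x += delta.x;
    obj.transform.offset.y += delta.y;
}

// The transform is applied once per vertex, and only at render time.
Vec2 renderedPosition(const Object &obj, std::size_t i) {
    return obj.transform.apply(obj.localVertices[i]);
}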