What is the PixiJS Geometry used for (as opposed to DisplayObjects)? - graphics

I am reading through the Pixi.js source code and came across Geometry, which sounds like it would be the core of everything. But it seems that Graphics.ts and Renderer.ts are the core. What is the purpose of Geometry as opposed to DisplayObject? DisplayObject, Container, and Sprite are basically a tree of rendered objects with their own matrix transforms. But what is Geometry used for?

Currently reading through this myself.
I found this Medium article that discusses the new mid-level API introduced in PixiJS 5, the mid-level API being the bridge between WebGL code and the higher abstractions in DisplayObject.
Geometry acts as part of this bridge along with the Shader and State.
A geometry refers to a set of attributes and the buffers in which they are stored. PixiJS likes to divide that into two pieces:
Geometry Style: This is the set of attributes and their properties.
Geometry Data: This is the collection of buffers that provide the data for each attribute.
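To make the style/data split concrete, here is a minimal sketch of the idea — note this is a made-up model for illustration, not the actual PixiJS Geometry API (the class and method names here are hypothetical):

```typescript
// Hypothetical sketch of the "geometry style vs. geometry data" split
// described above — not the real PixiJS Geometry API.

// Geometry style: the attributes and their layout properties.
interface AttributeStyle {
  name: string;   // e.g. "aVertexPosition"
  size: number;   // components per vertex, e.g. 2 for (x, y)
}

// Geometry data: the buffers that back each attribute.
type GeometryData = Map<string, Float32Array>;

class SketchGeometry {
  styles: AttributeStyle[] = [];
  data: GeometryData = new Map();

  addAttribute(name: string, size: number, buffer: number[]): this {
    this.styles.push({ name, size });            // style half
    this.data.set(name, new Float32Array(buffer)); // data half
    return this;
  }

  vertexCount(name: string): number {
    const style = this.styles.find(s => s.name === name);
    const buf = this.data.get(name);
    if (!style || !buf) throw new Error("unknown attribute " + name);
    return buf.length / style.size;
  }
}

// A triangle: three 2D positions.
const geom = new SketchGeometry()
  .addAttribute("aVertexPosition", 2, [0, 0, 1, 0, 0, 1]);
console.log(geom.vertexCount("aVertexPosition")); // 3
```

The point of the split is that the same style (attribute layout) can be paired with different buffers, and the renderer only needs the style half to configure the vertex attributes.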

Related

Ifc object with multiple colors

Is it possible, within IFC 4.0, to define an Ifc object (an IfcWall, for example) with multiple colors (i.e. multiple IfcStyledItems)? How can we activate each color inside a BIM viewer program (like layers in AutoCAD)?
Thanks,
Formally, you can have only one IfcStyledItem per IfcRepresentationItem, but the styled item can refer to multiple presentation styles (IfcPresentationStyle). Similarly, you could have one IfcPresentationLayerWithStyle per IfcRepresentationItem referring to multiple presentation styles. A viewer application could then provide user interactions to switch styles on or off per representation item or layer. However, I believe that is not the right interpretation of those presentation style sets. The standard is not clear about it, but I would rather assume these styles are supposed to aggregate: you have, for example, both a curve style and a fill style, which complement each other rather than being alternatives.
A more high-level solution, and one idiomatic in CAD applications, is assignment on the level of the whole IfcRepresentation (as opposed to the items of a representation). If you are concerned about styles on the level of product geometry, like a whole wall, that would be sufficient detail. You would do this through layer assignment. Layers then have styles and can be switched on and off. Since one representation can be assigned to multiple layers, this would likely resemble the functionality you have in mind.

Is texture name unique in OpenGL?

When I create a texture using glGenTextures, I get a texture name which is actually an integer such as 0,1,2,3...
What does the texture name actually mean? Is it a unique index on the GPU?
If I create a texture in different threads or processes, I may get the same name.
However, I don't think the same name means the same texture on the GPU.
So I guess the texture name is just a local index per thread/context.
Does that mean it is impossible to share textures between threads?
Texture object names are numbers that represent a specific texture. If you generate a texture object name, the system guarantees that it will uniquely identify that specific texture within that OpenGL context, until you delete the texture (and likely sometime thereafter).
And technically, it is unique within the set of contexts that share objects with the current OpenGL context.
But outside of the context sharing group, that texture name has no meaning. It means nothing to the GPU itself; it merely refers to a specific texture.

Does MapKit on iOS support dequeuing of overlays? (like it does annotations)

Does MapKit on iOS support dequeuing of overlays, like it does for annotations? If so, what is the code for achieving this?
Background: with annotations I believe you can add many, leaving MapKit to instantiate views for them when required via the dequeuing approach. What about the same thing for overlays, however, if I have many across the country? Do I need to write the code for checking which of my overlays are visible, and then instantiate/remove them myself in real time?
MapKit does not support reuse of overlays in the same way it supports doing this for annotation views. One of the reasons for this must certainly be that the two objects are not analogous. Overlays are model objects that represent an area on the map, whereas annotation views are view objects used from time to time to display the location of an annotation on the map. The technique of reusing view objects (as opposed to creating individual ones for every use) is an optimization that is used in a couple of other places in UIKit, notably for Table View Cells and various bits of Collection Views.
The view reuse pattern is to establish some data index (table index paths, map coordinates) and then have a delegate provide an appropriate view object to use when a particular index/location comes into view. When the data object passes out of sight, the view object is recycled in a queue.
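Stripped of any UIKit specifics, the reuse pattern described above boils down to a small pool: dequeue a recycled view if one exists, otherwise create one, and return views to the pool when their data scrolls out of sight. A hypothetical sketch (all names made up):

```typescript
// Hypothetical sketch of the view-reuse pattern described above —
// not an actual UIKit or MapKit API.
class ReusePool<V> {
  private queue: V[] = [];
  constructor(private make: () => V) {}

  // Dequeue a recycled view if one exists, otherwise create a new one.
  dequeue(): V {
    return this.queue.pop() ?? this.make();
  }

  // When the data object passes out of sight, recycle its view.
  recycle(view: V): void {
    this.queue.push(view);
  }
}

let created = 0;
const pool = new ReusePool(() => ({ id: ++created }));

const v1 = pool.dequeue();  // no recycled views yet: created fresh
pool.recycle(v1);           // its data scrolled away: back into the queue
const v2 = pool.dequeue();  // reused: the same object as v1
console.log(v1 === v2, created); // true 1
```

The payoff is that the number of live view objects is bounded by what is visible at once, no matter how many data objects exist.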
An annotation is analogous to an overlay, and MapKit does not provide reuse for them either and for good reason: they are the data that is being displayed!
The analogous object to the annotation view is an overlay renderer, which (of course!) provides rendering for an overlay. I assume that the reason these are not reused is because they are not view system objects and presumably much more lightweight, so there is little benefit from reuse. We find evidence for this in the fact that until iOS 7.0 the MapView delegate did provide a view object for overlays and this was replaced by the renderer concept.
I hope that helps.
What problem is this causing for you?

libgdx difference between sprite and actor

I'm just going through the javadoc and various tutorials on libgdx and I'm at the stage of trying to figure out differences between various concepts that seem similar to me or provide similar capabilities in libgdx.
At first I thought scene2d was about creating interactive items such as menus, etc but various tutorials I'm reading use scene2d/actors for the main game items (i.e. the player, etc) and others just use sprites.
What exactly is the difference between using Sprite and Actor (i.e. scene2D) in a game and when should you choose?
Thanks.
A Sprite is basically an image with a position, size, and rotation. You draw it using a SpriteBatch, and once you have your Sprites and your SpriteBatch, you have a simple, low-level way to get 2D images on the screen anywhere you want. The rest is up to you.
Actor, on the other hand, is part of a scene graph. It's higher-level, and there's a lot more that goes into a scene graph than just positioning images. The root of the scene graph is the Stage, which is not itself displayed. The Stage is a container for the Actors that you add to it, and is used for organizing the scene. In particular, input events are passed down through the Stage to the appropriate Actor, and the Stage knows when to tell the Actor to draw itself. A touch event, for example, only gets sent to the Actor that got touched.
But note that Actor does not contain a texture like Sprite does. Instead you probably want to use Image, a subclass of Actor that's probably closer to Sprite than just a plain Actor. Other subclasses of Actor contain text, and so on.
Another big advantage of Actors is that they can have Actions. These are a big topic, but they essentially allow you to plan a sequence of events for that Actor (like fading in, moving, etc) that will then happen on their own once you set them.
So basically Actor does a lot more than Sprite because it's part of a graphical framework.
It is more or less a matter of taste. If you want to use actions and a stage, use Actors. Actors cannot be drawn directly; you need to override the draw method, and inside draw you can use sprites.

Value object or not for 3d points?

I need to develop a geometry library in python, describing points, lines and planes in 3d space, and various geometry operations. Related to my previous question.
The main issue in the design is if these entities should have identity or not. I was wondering if there's a similar library out there (developed in another language) to take inspiration from, what is the chosen design, and in particular the reason for one choice vs. the other.
I am not familiar with other libraries, but it seems that 3d points should be (immutable) value objects:
- It allows sharing of a point between several containers (lines, planes, etc.)
- It avoids defensive getters and setters.
- In real life, a 3d point has no identity.
Also, Josh Bloch says (see http://www.infoq.com/presentations/effective-api-design) that one of the mistakes made in the design of Java's standard library was that the Dimension class was not defined as immutable.
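A minimal sketch of the value-object approach in Python, using only the standard library (`dataclasses` with `frozen=True` gives immutability and value equality for free):

```python
from dataclasses import dataclass

# An immutable 3d point: frozen=True makes it a value object with no
# identity of its own — equality is by coordinates, not by object id.
@dataclass(frozen=True)
class Point3D:
    x: float
    y: float
    z: float

    def translated(self, dx: float, dy: float, dz: float) -> "Point3D":
        # Operations return new points instead of mutating in place.
        return Point3D(self.x + dx, self.y + dy, self.z + dz)

a = Point3D(1.0, 2.0, 3.0)
b = Point3D(1.0, 2.0, 3.0)
print(a == b)   # True: value equality
print(a is b)   # False: distinct objects, same value

# Safe to share between containers (lines, planes, ...):
# attempting a.x = 5.0 raises dataclasses.FrozenInstanceError,
# so no container can mutate the point underneath another.
```

Because the point is immutable, there is no need for defensive copies or getters: a line and a plane can both hold a reference to the same `Point3D` without risk.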