How to make a node look different in different views in OpenSceneGraph?

What technique should be used in a multi-view app (using CompositeViewer) if some nodes should look different in different views? For example, if some label positions should be recalculated depending on a view's camera parameters? Or if some other kind of annotation (a rectangular area with a border and some text) should be visible or hidden depending on the view scale?

OSG has Billboard and Text classes that handle orientation for each camera out of the box (see, for instance, how the CullVisitor applies to the Billboard class here).
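For instance, a minimal sketch of screen-aligned text, which OSG reorients for each camera automatically:

#include <osgText/Text>

// A label that always faces whichever camera is rendering it.
osg::ref_ptr<osgText::Text> label = new osgText::Text;
label->setText("my annotation");
label->setAxisAlignment(osgText::Text::SCREEN); // screen-aligned in every view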
To implement other behaviors that depend on the camera, the right place to make things happen is a cull callback added to your nodes: your callback will be invoked multiple times (once for each different camera) on every frame, and you can react according to your needs.
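A minimal sketch of such a callback (the class name and the per-view logic are placeholders for your own):

#include <osg/Node>
#include <osgUtil/CullVisitor>

// The cull traversal runs once per camera (i.e. once per view) per frame,
// so a cull callback is the per-view hook.
class ViewDependentCallback : public osg::NodeCallback
{
public:
    virtual void operator()(osg::Node* node, osg::NodeVisitor* nv)
    {
        osgUtil::CullVisitor* cv = dynamic_cast<osgUtil::CullVisitor*>(nv);
        if (cv)
        {
            osg::Camera* camera = cv->getCurrentCamera();
            // Inspect camera->getViewMatrix(), the projection matrix, or the
            // viewport here, and reposition labels, toggle annotations, etc.
            // for this particular view.
        }
        traverse(node, nv); // continue culling the subgraph
    }
};

// Attach it to the node whose appearance is view-dependent:
// node->setCullCallback(new ViewDependentCallback);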

Related

What is the relationship between render pass, graphics pipeline and draw call in Vulkan?

I have basic knowledge of Vulkan and computer graphics. I have also read the Vulkan tutorial at https://vulkan-tutorial.com/. However, I am confused about the relationship between render passes, graphics pipelines, and draw calls.
Judging from the Vulkan API, a graphics pipeline can only hold one render pass. Does that mean using multiple render passes requires creating multiple graphics pipelines?
A draw call is recorded inside a render pass, and it does not specify any render target, even though a render pass may contain multiple render targets. Does that mean a render pass only needs one draw call? But I often hear about draw call limits, so it seems multiple draw calls commonly happen within a render pass. Why are multiple draw calls needed?
A graphics pipeline does not "hold" a render pass at all. It is created with respect to a render pass:
renderPass is a handle to a render pass object describing the environment in which the pipeline will be used; the pipeline must only be used with an instance of any render pass compatible with the one provided.
Specifically, it is created with respect to a particular subpass of a render pass (subpass is also a field of VkGraphicsPipelineCreateInfo). You can only use a graphics pipeline when a render pass instance compatible with renderPass is active and when the given subpass of that render pass is active.
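A minimal sketch of where these two fields live (assuming device, pipelineLayout, and renderPass already exist; the many other required fields of the structure are elided):

// Most required state (shader stages, vertex input, rasterization, etc.)
// is omitted here for brevity.
VkGraphicsPipelineCreateInfo pipelineInfo{};
pipelineInfo.sType = VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO;
pipelineInfo.layout = pipelineLayout;
pipelineInfo.renderPass = renderPass; // the pass this pipeline is created against
pipelineInfo.subpass = 0;             // the specific subpass within that pass

VkPipeline pipeline;
vkCreateGraphicsPipelines(device, VK_NULL_HANDLE, 1, &pipelineInfo, nullptr, &pipeline);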
Subpasses determine which render pass attachments will be rendered to by any rendering operation. So the fragment shader outputs are routed to the active attachments, as specified in the render pass's description of that subpass.
Draws happen with respect to whatever graphics pipeline is currently bound, and their fragment shader outputs are routed to the attachments of the current subpass of the current render pass instance.
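As for why multiple draw calls happen inside one render pass: each draw typically covers a single mesh or pipeline configuration, so a scene with many objects records many draws per pass. A sketch (cmd, renderPassBeginInfo, and the two pipelines are assumed to exist):

vkCmdBeginRenderPass(cmd, &renderPassBeginInfo, VK_SUBPASS_CONTENTS_INLINE);

// Several draws in the same subpass, all writing to the same attachments.
vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipelineA);
vkCmdDraw(cmd, meshAVertexCount, 1, 0, 0); // first object

vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipelineB);
vkCmdDraw(cmd, meshBVertexCount, 1, 0, 0); // second object, different pipeline

vkCmdEndRenderPass(cmd);

Both pipelines must have been created against this render pass (or one compatible with it).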

Ifc object with multiple colors

Is it possible, within IFC 4.0, to define an IFC object (an IfcWall, for example) with multiple colors (which are IfcStyledItems)? How can we activate each color inside a BIM viewer program (like layers in AutoCAD)?
Thanks,
Formally, you can have only one IfcStyledItem per IfcRepresentationItem, but the styled item can refer to multiple presentation styles (IfcPresentationStyle). Similarly, you could have one IfcPresentationLayerWithStyle per IfcRepresentationItem referring to multiple presentation styles. A viewer application could then provide user interactions to switch styles on or off per representation item or layer. However, I believe that is not the right interpretation of those presentation style sets. The standard is not clear about it, but I would rather assume these styles are supposed to aggregate; that is, you have both a curve style and a fill style, for example, which complement each other rather than being alternatives.
A more high-level solution, and one idiomatic in CAD applications, is assignment at the level of the whole IfcRepresentation (as opposed to the individual items of a representation). If you are concerned about styles at the level of product geometry, like a whole wall, that would be sufficient detail. You would do this through layer assignment. Layers then have styles and can be switched on and off. Since one representation can be assigned to multiple layers, this would likely resemble the functionality you have in mind.

Does MapKit on iOS support dequeuing of overlays (like it does for annotations)?

Does MapKit on iOS support dequeuing of overlays, like it does for annotations? If so, what is the code for achieving this?
Background: with annotations, I believe you can add many and leave MapKit to instantiate views for them when required via the dequeuing approach. What about the same thing for overlays, if I have many across the country? Do I need to write the code that checks which of my overlays are visible, and then instantiate or remove them myself in real time?
MapKit does not support reuse of overlays in the way it supports it for annotation views. One reason for this must certainly be that the two objects are not analogous: overlays are model objects that represent an area on the map, whereas annotation views are view objects used from time to time to display the location of an annotation on the map. The technique of reusing view objects (as opposed to creating individual ones for every use) is an optimization that is used in a couple of other places in UIKit, notably for table view cells and various bits of collection views.
The view reuse pattern is to establish some data index (table index paths, map coordinates) and then have a delegate provide an appropriate view object to use when a particular index/location comes into view. When the data object passes out of sight, the view object is recycled in a queue.
An annotation is analogous to an overlay, and MapKit does not provide reuse for annotations either, for good reason: they are the data being displayed!
The object analogous to the annotation view is the overlay renderer, which (of course!) provides rendering for an overlay. I assume the reason these are not reused is that they are not view-system objects and are presumably much more lightweight, so there is little benefit to reuse. We find evidence for this in the fact that, until iOS 7.0, the map view delegate did provide a view object for overlays, and this was replaced by the renderer concept.
I hope that helps.
What problem is this causing for you?

libgdx: difference between Sprite and Actor

I'm just going through the javadoc and various tutorials on libgdx, and I'm at the stage of trying to figure out the differences between various concepts that seem similar to me or provide similar capabilities in libgdx.
At first I thought scene2d was about creating interactive items such as menus, etc., but various tutorials I'm reading use scene2d/actors for the main game items (i.e. the player, etc.) and others just use sprites.
What exactly is the difference between using Sprite and Actor (i.e. scene2d) in a game, and when should you choose one over the other?
Thanks.
A Sprite is basically an image with a position, size, and rotation. You draw it using a SpriteBatch, and once you have your Sprites and your SpriteBatch, you have a simple, low-level way to get 2D images on the screen anywhere you want. The rest is up to you.
Actor, on the other hand, is part of a scene graph. It's higher-level, and there's a lot more that goes into a scene graph than just positioning images. The root of the scene graph is the Stage, which is not itself displayed. The Stage is a container for the Actors that you add to it, and is used for organizing the scene. In particular, input events are passed down through the Stage to the appropriate Actor, and the Stage knows when to tell the Actor to draw itself. A touch event, for example, only gets sent to the Actor that got touched.
But note that Actor does not contain a texture like Sprite does. Instead, you probably want to use Image, a subclass of Actor that's closer to Sprite than a plain Actor is. Other subclasses of Actor contain text, and so on.
Another big advantage of Actors is that they can have Actions. These are a big topic, but they essentially allow you to plan a sequence of events for that Actor (like fading in, moving, etc.) that will then happen on their own once you set them.
So basically Actor does a lot more than Sprite because it's part of a graphical framework.
It is more or less a matter of taste. If you want to use actions and a stage, use actors. Actors cannot be drawn directly; you need to override the draw method. Inside draw you can use sprites.

With Haskell and Gtk2hs, how would I create a new widget and associated events?

I have an application that I am working on, and I'm basically self-teaching GUI programming. I asked a fairly involved question over on programmers.stackexchange. This question is about the mechanics of an idea I had not tried.
I have three widgets: a TreeView, a TextField, and a DrawingArea. The three widgets interact very intimately, with events on one necessarily triggering actions on the others. They largely do not interact with the rest of the application except (so far) by reading an MVar containing the global application state.
Currently I can think of no case in which the larger application should ever interact directly with any of those three widgets. Further, that identical pattern would be replicated for reviewing other data of the same form. So it seems to me that it would make sense to bind these three widgets together into a larger composite widget that can interact with GTK's normal event queue. For instance:
type MyDataViewWidget = (TreeView, Entry, DrawingArea) -- gtk2hs calls the text-entry widget Entry
data DataUpdatedSignal a = DataUpdatedSignal a
data RedrawEvent a = RedrawEvent a
So, the widget would use DataUpdatedSignal to indicate to the rest of the application that something inside MyDataViewWidget changed, and RedrawEvent would tell the widget that it needs to redraw or re-read the source data.
(Technically, I have not thought through semantically what the various actions in the composite widget would do... whether the widgets would just have a read-only copy of the application data and need to receive new read-only copies with the RedrawEvent, or whether the widgets would hold the MVar itself and be allowed to change the data in it, etc. I'm just interested at the moment in how to actually do this.)
Are there any examples of doing something like this? Basically, what instances do I need to implement to create the new widget and the two signals? I'd prefer to stick to Haskell, but I could drop to C in order to build up the new widget.
Unfortunately, there is currently no pure-Haskell way to (correctly) implement the Widget type class. You'll need to implement your widget in C, then import it via the FFI. There are numerous examples of this -- basically all of gtk+/gtk2hs is a collection of hundreds of examples of doing this.
