Alternative for JComponent in JavaFX - javafx-2

I am new to JavaFX and have recently been googling around for a while to understand this.
I am planning to rewrite an existing screen-manager framework in my project, which is currently in Swing.
I would like to understand what the alternative to JComponent is in JavaFX.
Is it Stage, Window, Control, or Parent? I am not able to conclude, nor am I sure such an alternative even exists.
Why do I need an alternative for JComponent?
Well, in the screen-manager framework that I have in place, we always return any individual Swing component (say a panel) as a JComponent. So I am eager to know the alternative in JavaFX, if there is one.
Thanks in advance for any help.

Somewhat loosely, the equivalent of JComponent is Node.
I say loosely because they are different things and it isn't a 100% equivalence, but I guess you can think of them as roughly representing similar things. Nodes cover more ground than JComponents because they can represent shapes, media views, etc.
JComponent is:
The base class for all Swing components except top-level containers. To use a component that inherits from JComponent, you must place the component in a containment hierarchy whose root is a top-level Swing container. Top-level Swing containers -- such as JFrame, JDialog, and JApplet -- are specialized components that provide a place for other Swing components to paint themselves.
And Node is:
Base class for scene graph nodes. A scene graph is a set of tree data structures where every item has zero or one parent, and each item is either a "leaf" with zero sub-items or a "branch" with zero or more sub-items.
Note that a Scene is contained within a Window (or Stage), so, much as a top-level Swing container is not a JComponent, a Window is not a Node (but pretty much everything else that JavaFX displays is).
See the Working with the Scene Graph tutorial from Oracle for more information on what Nodes are and how they are used.
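To make the parallel concrete, here is a minimal, hypothetical sketch (class and variable names are mine, not from any framework): a Stage holds a Scene, the Scene holds a tree of Nodes, and where your Swing framework returns JComponent, a JavaFX version would typically return Node (or Parent for containers).
import javafx.application.Application;
import javafx.scene.Scene;
import javafx.scene.control.Button;
import javafx.scene.layout.VBox;
import javafx.scene.shape.Circle;
import javafx.stage.Stage;
public class NodeSketch extends Application {
    @Override
    public void start(Stage stage) {
        Button button = new Button("Click me");    // a Control is a Node
        Circle circle = new Circle(40);            // a Shape is also a Node
        VBox root = new VBox(10);                  // a layout Parent is a Node too
        root.getChildren().addAll(button, circle);
        stage.setScene(new Scene(root, 300, 200)); // the Scene wraps the Node tree
        stage.show();                              // the Stage itself is not a Node
    }
    public static void main(String[] args) {
        launch(args);
    }
}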

Related

Entity/Component concepts of GameplayKit

I am designing my game with the Entity/Component concepts of GameplayKit in iOS 9. For a ShootComponent, should I define the bullet/missile as an Entity?
Reasons for yes:
it separates the logic from its parent, e.g. playerTank or enemyTank;
if not, TankEntity needs to distinguish whether its bullet collides with other entities or with itself.
Reasons for no:
it is not an actual entity in the logical game world; it is just something fired by my tank or an enemy turret;
bullets are constantly being shot and disappearing, so the game needs to add/remove them all the time.
I would appreciate your comments.
In the end I decided to define the bullet/missile as an entity, so it acts as an entity in contact tests, rendering, and the other components.
I would have added it as a component of the entity using it.
That way you would be able to make any entity fire a bullet or missile.
Keep in mind that your entity should only act as a simple reference with no logic in it.
First let's read Adam Martin's original description of his terms. It appears Apple got the idea of entities and components from Martin:
Entity: The entity is a general-purpose object. Usually, it only consists of a unique id.
Component: the raw data for one aspect of the object, and how it interacts with the world.
System: "Each System runs continuously (as though each System had its own private thread) and performs global actions on every Entity that possesses a Component or Components that match that System's query."
Martin is just defining terms for doing compositional design, which is an alternative to inheritance that is more recombinable and flexible.
So entities are what you might recognize as instances of a class, but the classes have been stripped of all their data and methods, which have been moved out into components - and the entities just delegate to the components.
So your missile... it would be an instance of a class in normal OO terms - an object, right? And a missile can behave in a variety of ways... it can seek out an enemy, it can fly straight ahead, it can speed up, etc. It also has properties that indicate if it's hit an enemy, properties for its total damage, health, and so on.
So the missile is an entity while these various methods / data would be components of the missile entity.
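GameplayKit itself is an Objective-C/Swift framework, but as a language-neutral sketch of Martin's terms (made-up names, not GameplayKit API), the split looks roughly like this:
import java.util.ArrayList;
import java.util.List;
class Component { }                        // raw data for one aspect of an object
class PositionComponent extends Component { double x, y; }
class DamageComponent extends Component { int damage = 10; }
class Entity {                             // essentially just an id plus its components
    final int id;
    final List<Component> components = new ArrayList<>();
    Entity(int id) { this.id = id; }
    <T extends Component> T get(Class<T> type) {
        for (Component c : components) {
            if (type.isInstance(c)) return type.cast(c);
        }
        return null;
    }
}
class MovementSystem {                     // a System acts on every entity with matching components
    void update(List<Entity> entities, double dt) {
        for (Entity e : entities) {
            PositionComponent p = e.get(PositionComponent.class);
            if (p != null) p.x += 100 * dt; // e.g. a missile flying straight ahead
        }
    }
}
In these terms the missile would be an Entity holding, say, a PositionComponent and a DamageComponent, while behaviours like seeking or flying straight live in systems (or behaviour components) that operate on whatever entities carry the matching data.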
Martin's approach is interesting, and there hasn't been as much focus on compositional design as there has been on OO (for what reason I don't really know), so I can see why Apple would adopt it for a game framework like this.
But his ideas don't seem very well fleshed out. For example, usually in compositional design there is a delegation hierarchy, where objects keep delegating up a chain until some data or method is found. At the top there's one meta-object that everything delegates to. In this way objects are both components and entities - they act as both the delegating and the delegated to. But Martin's terms don't support this... his model is flat - there are only entities, and then components that can be added to them, but no delegation between entities and no meta-object.
Maybe he felt this flat design was appropriate for game development. I have my doubts... you seem to want some kind of hierarchical structure of objects. I would look for a way to mix in inheritance, or set up some kind of custom delegation hierarchy where objects could act as both entities and components. You might look to see if this is possible within that framework, or, if it isn't, just write your own.

libgdx difference between sprite and actor

I'm just going through the javadoc and various tutorials on libgdx and I'm at the stage of trying to figure out differences between various concepts that seem similar to me or provide similar capabilities in libgdx.
At first I thought scene2d was about creating interactive items such as menus, etc., but various tutorials I'm reading use scene2d/actors for the main game items (i.e. the player, etc.) and others just use sprites.
What exactly is the difference between using Sprite and Actor (i.e. scene2d) in a game, and when should you choose one over the other?
Thanks.
A Sprite is basically an image with a position, size, and rotation. You draw it using SpriteBatch, and once you have your Sprites and your SpriteBatch, you have a simple, low-level way to get 2D images on the screen anywhere you want. The rest is up to you.
Actor, on the other hand, is part of a scene graph. It's higher-level, and there's a lot more that goes into a scene graph than just positioning images. The root of the scene graph is the Stage, which is not itself displayed. The Stage is a container for the Actors that you add to it, and is used for organizing the scene. In particular, input events are passed down through the Stage to the appropriate Actor, and the Stage knows when to tell the Actor to draw itself. A touch event, for example, only gets sent to the Actor that got touched.
But note that Actor does not contain a texture like Sprite does. Instead you probably want to use Image, a subclass of Actor that's probably closer to Sprite than just a plain Actor. Other subclasses of Actor contain text, and so on.
Another big advantage of Actors is that they can have Actions. These are a big topic, but they essentially allow you to plan a sequence of events for that Actor (like fading in, moving, etc) that will then happen on their own once you set them.
So basically Actor does a lot more than Sprite because it's part of a graphical framework.
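As a rough, hypothetical sketch of the Actor route (standard scene2d classes; the asset name and class names here are placeholders, and the Stage should be disposed properly in real code):
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.scenes.scene2d.Stage;
import com.badlogic.gdx.scenes.scene2d.actions.Actions;
import com.badlogic.gdx.scenes.scene2d.ui.Image;
public class ActorSketch {
    private final Stage stage = new Stage();
    public void create() {
        Image player = new Image(new Texture("player.png")); // placeholder asset
        player.setPosition(50, 50);
        player.addAction(Actions.sequence(
                Actions.alpha(0f),              // start fully transparent
                Actions.fadeIn(1f),             // fade in over one second
                Actions.moveTo(200, 100, 2f))); // then glide to (200, 100) over two seconds
        stage.addActor(player);                 // the Stage now handles input/draw for this Actor
    }
    public void render(float delta) {
        stage.act(delta); // advances all Actions and routes input
        stage.draw();     // the Stage tells each Actor to draw itself
    }
}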
It is more or less a matter of taste. If you want to use actions and a stage, use actors. A plain Actor is not drawn for you; you need to override its draw method, and inside draw you can use sprites.
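For that "sprites inside draw" approach, a minimal sketch (assuming a libgdx version where Actor.draw takes a Batch) might look like this:
import com.badlogic.gdx.graphics.g2d.Batch;
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.scenes.scene2d.Actor;
public class SpriteActor extends Actor {
    private final Sprite sprite;
    public SpriteActor(Sprite sprite) {
        this.sprite = sprite;
        setBounds(sprite.getX(), sprite.getY(), sprite.getWidth(), sprite.getHeight());
    }
    @Override
    public void draw(Batch batch, float parentAlpha) {
        sprite.setPosition(getX(), getY()); // keep the sprite in sync with the actor
        sprite.draw(batch);
    }
}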

With Haskell and Gtk2hs, how would I create a new widget and associated events?

I have an application that I am working on, and I'm basically self-teaching GUI programming. I asked a fairly involved question over on programmers.stackexchange. This question is about the mechanics of an idea I had not tried.
I have three widgets: a TreeView, a TextField, and a DrawingArea. The three widgets interact very intimately: events on one necessarily trigger actions on the others. They largely do not interact with the rest of the application except (so far) by reading an MVar containing the global application state.
Currently I can think of no case in which the larger application should ever interact directly with any of those three widgets. Further, that identical pattern would be replicated to review other data of the same form. So it seems to me that it would make sense to bind these three widgets together into a larger composite widget that can interact with GTK's normal event queue, for instance:
type MyDataViewWidget = (TreeView, TextField, DrawingArea)
data DataUpdatedSignal a = DataUpdatedSignal a
data RedrawEvent a = RedrawEvent a
So, the widget would use DataUpdatedSignal to indicate to the rest of the application that something inside MyDataViewWidget changed, and RedrawEvent would tell the widget that it needs to redraw or re-read the source data.
(technically, I have not thought through semantically what the various actions in the composite widget would do... whether the widgets would just have a read-only copy of the application data and need to receive new read-only copies with the RedrawEvent or perhaps the widgets would have the MVar itself and be allowed to change the data in the MVar, etc... I'm just interested at the moment in how to actually do this)
Are there any examples of doing something like this? Basically, what instances do I need to implement to create the new widget and the two signals? I'd prefer to stick to Haskell, but I could drop to C in order to build up the new widget.
Unfortunately, there is currently no pure-Haskell way to (correctly) implement the Widget type class. You'll need to implement your widget in C, then import it via the FFI. There are numerous examples of this -- basically all of gtk+/gtk2hs is a collection of hundreds of examples of doing this.

Multi-Threading XNA GameComponents

I am developing an XNA project, where there are two DrawableGameComponents A and B, with the following constraints:
Either A is visible, or B is visible. So only one of their "Draw" methods has to be called.
Both A and B need to be enabled - always. So the "Update" method of each has to be called under all circumstances.
Currently both A and B are executed on the same thread. However, their "Update" methods are very CPU-intensive. Since the two GameComponents do not need to talk to each other or share any data, it should be easy to parallelize them.
What I would like to know is how to do that in XNA. The "Update" and "Draw" methods are called by the XNA Framework, so I do not know where to put the Threads. Is there a standard way of doing this?
Usually this is done through game state management, where the Game1 class (the default class) is used to call the Update() and/or Draw() of other classes (game components).
Take a look at the xnadevelopment game state management tutorial; there they describe how to call the different updates and draws of different classes, and hopefully you'll see that multi-threading can be implemented in the Game1 class (the default auto-created XNA class).
P.S. If you don't mind doing a lot of reading, take a look at this article on XNA multi-threading; it's accompanied by some diagrams that explain how it works very well.
You haven't specified whether you need both Updates to run simultaneously, so I'm going on the assumption that only one component needs to be drawn at a time.
DrawableGameComponents are automatically synced with your Game object, but if you store a reference to each component instead of instantiating them without one, such as:
componentOne = new FirstComponenet(this);
Components.Add(componentOne);
componentTwo = new SecondComponent(this);
Components.Add(componentTwo);
// Immediately disable componentTwo
componentTwo.Enabled = false; // Prevents Update from firing
componentTwo.Visible = false; // Prevents Draw from firing (for Drawable components only)
Then you can let XNA manage the Update/Draw loops as normal. With componentOne and componentTwo being class-level variables, you can control when each is active.
Again, this is based on the assumption that you don't need one to update at the same time the other does.

Best way to manage things like bullets in a game?

I've been starting to get into game development; I've done some tutorials and read lots of articles, but one thing I'm not sure about is the best way to manage large numbers of temporary objects, e.g. bullets.
Should each entity manage its own bullets, should I have a global bullet manager, or should I create each bullet as a new full-blown individual object (though that seems pretty inefficient)?
Also, when using a component pattern, what should I do about properties that seem generic, e.g. position, velocity, etc.?
Some things I've read suggest that everything should be in some kind of component, while others suggest that generic properties commonly accessed by a variety of components should be members of the entity class itself.
Forgive me, these are probably simple questions, but I want to make sure I'm thinking in the right direction.
Thank you very much!
Creating each bullet as a "fully blown object" shouldn't be too inefficient - have a look at the Object pool pattern, which outlines a way to speed up these object creations.
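A minimal sketch of that object-pool idea, using made-up names rather than any particular engine's API:
import java.util.ArrayDeque;
import java.util.Deque;
public class BulletPool {
    public static class Bullet {
        float x, y, vx, vy;
        boolean active;
    }
    private final Deque<Bullet> free = new ArrayDeque<>();
    public Bullet obtain() {
        Bullet b = free.isEmpty() ? new Bullet() : free.pop(); // reuse an old bullet if possible
        b.active = true;
        return b;
    }
    public void release(Bullet b) {
        b.active = false;
        free.push(b); // return the bullet to the pool for later reuse
    }
}
Each shot calls obtain() instead of new Bullet(), and release() puts the bullet back when it leaves the screen or hits something, so per-frame allocation stays near zero.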
As for your question re: components and generic properties, it depends on how strictly you wish to follow the component architecture. If you want to be really strict with the component architecture, every property should be in a component and different components should talk to each other. Otherwise, for efficiency reasons, share some properties in the main object. For more information, have a look at this page on the component behavioural pattern.
The original Quake uses a fixed-size pool for entities (which are also sometimes called edicts). Anything whose existence persists between frames is an entity. This includes "shaped physical" things like the world and doors, rectangular physical things like monsters, players, and nails, movetype-transparent things like weapons, invisible but touchable rectangular things like trigger fields, and entirely-nonphysical things like delay events.
I think the limit in Quake is something like 700 edicts; the game will crash if the limit is exceeded. I think edicts are simply stored in an array, since every property which exists for any edict exists for all of them.
