How to bind a non-UI entity to a UI entity in Bevy - rust

Description
I'm trying to implement trigger logic: when the player hits the trigger, I should remove a UI element from the screen.
Spawning the trigger point
// Create a trigger point. When the user touches it I'll mark the
// tutorial as `in-progress`, and remove it when the collision between
// tutorial and player stops.
let tutorial_entity = commands
    .spawn()
    .insert(Sensor(true))
    .insert(Collider::cuboid(8.0, 8.0))
    .insert(ActiveEvents::COLLISION_EVENTS)
    .insert_bundle(SpriteBundle {
        sprite: Sprite {
            color: Color::rgb(0.1, 0.1, 0.1),
            custom_size: Some(Vec2::new(16.0, 16.0)),
            ..Default::default()
        },
        transform: *tutorial_transform,
        ..Default::default()
    })
    // Tutorial is a marker component which I'll query for as `tutorial_entity`
    .insert(Tutorial)
    .id();
Create a UI
commands
    .spawn_bundle(NodeBundle {
        // ...
        ..Default::default()
    })
    // Trying to bind the UI element to the `Tutorial` entity so it is
    // removed from the screen when the user hits the collider
    .insert(Parent(tutorial_entity));
When the user hits the collider
// I want to despawn all child UI elements that are linked to this non-UI entity
commands.entity(tutorial_entity).despawn_recursive();
Error
I get this warning and no UI on the screen at all:
Styled child in a non-UI entity hierarchy. You are using an entity with UI components as a child of an entity without UI components, results may be unexpected
Question
Do you know how to link a non-UI entity with a UI entity, so that removing the non-UI entity also removes all UI elements linked to it?

I don't know if it's still relevant, but you could always just create a separate UI entity and add your own reference component. For structure's sake I would create, for example, a `UILinkComponent(pub Entity)` and attach it to the world entity, as sketched below.
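
A minimal sketch of that idea, assuming the Bevy 0.7-era API used in the question and that `Tutorial` is the question's marker component; the `UiLink` component and `link_tutorial_ui` system names are made up for illustration:

use bevy::prelude::*;

// Hypothetical link component: stores the entity id of the UI root
// that belongs to a world-space entity.
#[derive(Component)]
pub struct UiLink(pub Entity);

// Spawn the UI root on its own (not as a hierarchy child of the trigger),
// then record the link on the freshly spawned tutorial entity.
fn link_tutorial_ui(mut commands: Commands, tutorials: Query<Entity, Added<Tutorial>>) {
    for tutorial_entity in tutorials.iter() {
        let ui_entity = commands.spawn_bundle(NodeBundle::default()).id();
        commands.entity(tutorial_entity).insert(UiLink(ui_entity));
    }
}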

Pretty sure you shouldn't use Parent for that kind of relationship. The hierarchy is typically used for spatial relationships.
You should instead create a new component that stores the other Entity. You can still destroy it in the same way, but this way it isn't part of the hierarchy so the other UI elements don't get confused.
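
Building on the `UiLink` sketch above, the cleanup side might look like this, assuming bevy_rapier2d's `CollisionEvent` for the sensor (again a sketch, not the poster's exact code):

use bevy::prelude::*;
use bevy_rapier2d::prelude::*;

// When the player stops overlapping the trigger sensor, despawn the
// linked UI tree and the trigger itself in one go.
fn despawn_tutorial(
    mut commands: Commands,
    mut collisions: EventReader<CollisionEvent>,
    tutorials: Query<(Entity, &UiLink), With<Tutorial>>,
) {
    for event in collisions.iter() {
        if let CollisionEvent::Stopped(e1, e2, _) = event {
            for (tutorial_entity, ui_link) in tutorials.iter() {
                if tutorial_entity == *e1 || tutorial_entity == *e2 {
                    // The UI root is linked by a plain component, not Parent,
                    // so Bevy's UI systems never see a mixed hierarchy.
                    commands.entity(ui_link.0).despawn_recursive();
                    commands.entity(tutorial_entity).despawn_recursive();
                }
            }
        }
    }
}

Because the link is an ordinary component rather than a `Parent`/`Children` relationship, the transform and UI systems are never asked to reconcile a mixed hierarchy, which is what triggered the warning.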

Related

Clean way to get mousePressed event notified to QGraphicsView

I inherited from QGraphicsItemGroup and made a class that keeps a pointer to its contained items so that I can later refer to them and change properties. It has an ellipse item and a line item, and I want only the ellipse to be clickable. I need the press event of the ellipse to propagate to the QGraphicsView so that I can send a signal to some surrounding widgets.
So far I tried also inheriting from QGraphicsObject to have signals available, but got stuck with an ambiguous base error when trying to use scene->addItem. I tried casting to QGraphicsItemGroup but I still get the error. I also tried inheriting from QObject with no success.
I'm new to QGraphics and I know the QGraphics framework has a lot of tools for user interaction and even interaction between GraphicsItems but this is really kicking my butt.
What would be the proper way to get this behavior?
Create a separate "emitter" class
To allow your subclass of QGraphicsItemGroup to emit signals, you can create a separate "emitter" class that inherits from QObject. Then, you can add an instance of this emitter class within your subclass of QGraphicsItemGroup. The emitter object can then emit signals for your subclass as needed.
QGraphicsItemGroup is treated as a single item
Unfortunately, an instance of QGraphicsItemGroup is treated as a single item, so each mousePressEvent will belong to the entire group rather than one of the members of that group (i.e., the ellipse item or the line item). If you want the mousePressEvent to behave differently depending on which item is clicked, they will need to be separate items, or you could try using line->setParentItem(ellipse) to link up the 2 items without using QGraphicsItemGroup.

Relationships are (null) when -awakeFromInsert message received by new object in NSOrderedSet

The following -awakeFromInsert implementation sets properties of new objects instantiated by array controllers in a UI when the user presses an add button:
- (void)awakeFromInsert
{
    [super awakeFromInsert];
    NSLog(@"Adding perceptron %ld to layer %ld", self.indexInLayer, self.layer.indexInNetwork);
    NSLog(@"New perceptron added to layer %@", self.layer);
    // more code here to do the configuration
}
The problem is that the self.layer parent relationship of the new object is not set when -awakeFromInsert is called (it is nil), so I can't use it to access the parent object (for example, to see how many children there are, or what index the new object has).
Output of the above code:
2012-12-17 21:36:39.309 MLPManager[98112:403] Adding perceptron 0 to layer 0
2012-12-17 21:36:39.309 MLPManager[98112:403] New perceptron added to layer (null)
I'm pretty sure that the new objects are being connected up correctly because the indexInLayer method works perfectly when the UITableView calls it to add indexes of the objects to the view:
- (NSUInteger)indexInLayer
{
    NSUInteger index = [self.layer.perceptrons indexOfObject:self];
    //NSLog(@"indexInLayer returning %ld", index);
    return index;
}
My data model has three entities: Network, Layer, and Perceptron, arranged as ordered sets and connected to the next by to-many relationships (i.e., Parent - Child - Grandchild). My UI has three array controllers and three UITableViews. I've set it up so that the Child array controller only contains the children of the selected Parent, and the Grandchild array controller only contains the grandchildren of the selected Child. When I add children or grandchildren to these array controllers, they are automatically set as children of the currently selected parent. That all works fine.
At what point is self.layer set by the UI? Can someone confirm that this is occurring after -awakeFromInsert? And if so, how am I supposed to configure a new object if I can't do it from within -awakeFromInsert? I note that the Apple documentation for -awakeFromInsert says it is "invoked automatically by the Core Data framework when the receiver is first inserted into a managed object context."
The reason I need information on the layer object and other parts of the data structure is that I need to automatically instantiate various other objects (weights, which are children of perceptrons) at the same time as the new perceptron object. Should I be using -awakeFromInsert for this kind of task?
First, there appears to be coupling in your code between model (NSManagedObject) and view (UITableView) objects. That is not recommended according to the model-view-controller design pattern.
layer and indexInLayer are not standard attributes of NSManagedObject so I assume these are attributes in your entity. As an alternative to doing the setup in awakeFromInsert, I wonder if you can instead implement your own setter methods for your attributes so that you can do the necessary work at the time the required data is available.
If you choose to implement your own setter methods, you need to follow Apple's guidance in the Managed Object Accessor Methods documentation, specifically:
You must ensure that you invoke the relevant access and change notification methods (willAccessValueForKey:, didAccessValueForKey:, willChangeValueForKey:, didChangeValueForKey:, willChangeValueForKey:withSetMutation:usingObjects:, and didChangeValueForKey:withSetMutation:usingObjects:).
I do not know why layer and indexInLayer are not usable in your awakeFromInsert method; it's possible you need to solve that problem instead.

What's the best practice for creating different views when sharing one child frame in an MFC MDI app?

I'm not necessarily looking for code help, but rather a high-level answer so I can research the solution myself. Basically, I have an MDI app with multiple docs and their views, and I'd like all the views to open up as tabs in the one child frame that I have. The thing is, my child frame is statically configured with a splitter window with two views, a form and a list view, in the OnCreateClient method. I'd like to keep this as the default tab that appears when the app is launched.
I have a third view (an edit view) with its own document template, which I'd like to be able to open as a separate tab. I will have other views that will behave this way. What's the best way to approach this?
Will I need to create separate child frames for each view? Will I lose the 'tab' feature if I create separate child frames?
Or will I have to modify the child frame's OnCreateClient method to test which document template is the current one and create the view for that doc template? I'd like to know how some of you seasoned programmers have done it or would do it.
Thanks.
In case this helps others: from what I've gathered, it is perfectly acceptable to create a new child frame class derived from CChildFrame, or just use CChildFrame as your frame with your new view. The doc, frame, and view are added to the doc template in the InitInstance method. For example, let's say you have a pair of trios (2 docs, 2 views, 2 frames):
pDocTemplate = new CMultiDocTemplate(IDR_testappTYPE,
    RUNTIME_CLASS(CMydoc1),
    RUNTIME_CLASS(CMyframe1),
    RUNTIME_CLASS(CMyview1));
if (!pDocTemplate)
    return FALSE;
AddDocTemplate(pDocTemplate);
pDocTemplate2 = new CMultiDocTemplate(IDR_testappTYPE,
    RUNTIME_CLASS(CMydoc2),
    RUNTIME_CLASS(CMyframe2),
    RUNTIME_CLASS(CMyview2));
if (!pDocTemplate2)
    return FALSE;
AddDocTemplate(pDocTemplate2);
If you add another trio with a different child frame, because the new frame doesn't use splitters like the ones above, you would do it this way:
pDocTemplate3 = new CMultiDocTemplate(IDR_mditest3TYPE,
    RUNTIME_CLASS(CMydoc),                   // same doc
    RUNTIME_CLASS(CMyframeWithoutSplitters), // new frame
    RUNTIME_CLASS(CMyview3));                // new view
if (!pDocTemplate3)
    return FALSE;
AddDocTemplate(pDocTemplate3);

Monotouch - programming for Swipe Gesture

I am developing a control for an iPad application (my first time doing Apple development). It's a simple control that mimics a grid, consisting of a collection of UIViews (each of which represents a cell) all added to a parent UIView in a grid-like fashion.
One of the requirements is to implement a swipe gesture: users swipe across the grid to activate/deactivate cells, which corresponds to a 1/0 in the database.
I created a UISwipeGestureRecognizer and added it to each of the UIViews that represents a cell. That appears to be an incorrect approach, as it fires the event for the UIView in which the swipe originated, but not across all the UIViews.
My understanding is that I need to attach the swipe gesture to the parent UIView which contains all these child UIViews. However, if I do that, how will I know which child UIView has been swiped over? Or is there any other approach which would make sense?
I know this thread is fairly old, but I created a Swipe extension method that might help:
View.Swipe(UISwipeGestureRecognizerDirection.Right).Event += Swipe_Event;

void Swipe_Event(ViewExtensions.SwipeClass sender, UISwipeGestureRecognizer recognizer)
{
    View view = sender.View; // do something with the view that was swiped
}
This may not answer your question, but I can speak to the approach I've taken here with a similar use case:
1) I would abandon UIScrollView and use UITableView. You'll notice that UITableView inherits from UIScrollView and has all the performance benefits of virtualization and cell / view re-use. Which you'll find terribly useful as you work towards optimizing your app for performance on device.
2) Utilize the UITableViewCell's ContentView to create custom "Grid" cells. Or better yet, utilize MonoTouch.Dialog if you're not required to create Grid rows ad-hoc.
3) Use this awesome class (props to @praeclarum) to set up gestures in MonoTouch. You essentially provide a UIGestureRecognizer as a generic argument. You can then utilize the LocationInView method to grab the point in the UITableView where the gesture occurred:
public void HandleSwipe(UISwipeGestureRecognizer recognizer)
{
    if (recognizer.State == UIGestureRecognizerState.Ended) {
        var point = recognizer.LocationInView(myTableView);
        var indexPath = myTableView.IndexPathForRowAtPoint(point);
        // do associated calculations here
    }
}
I think you're correct that the gesture recognizer has to be attached to the parent view. In the action method associated with the gesture recognizer, I think you can use the MonoTouch equivalent of CGRectContainsPoint() to determine whether the swipe occurred in a particular subview. I imagine you would have to iterate through the subviews until you found the one in which the swipe occurred; I'm not aware of a method that would immediately identify the swiped subview.

How can I suspend the working of an NSFetchedResultsController?

I have a UITableViewController fed by an NSFetchedResultsController. From it, the user can call up a modal ViewController in which he or she can enter new data. As this begins, I create a temporary object as follows:
newPtr = [[Entry alloc] initWithEntity:[NSEntityDescription
    entityForName:@"Entry" inManagedObjectContext:self.nmocontext]
    insertIntoManagedObjectContext:self.nmocontext];
As the user makes choices, attributes of this 'provisional' object, newPtr, are set.
The problem is that the base UITableViewController remains active while the modal ViewController is visible. It seems to be freaking out (causing crashes) in some cases when it realizes a mandatory attribute of newPtr has not been set yet.
What can I do to stop the NSFetchedResultsController from looking at my managed object context until the modal ViewController is dismissed?
Core Data supports "nested" managed object contexts which allow for a flexible architecture that make it easy to support independent, cancellable, change sets. With a child context, you can allow the user to make a set of changes to managed objects that can then either be committed wholesale to the parent (and ultimately saved to the store) as a single transaction, or discarded. If all parts of the application simply retrieve the same context from, say, an application delegate, it makes this behavior difficult or impossible to support.
I haven't tested this myself, but a possible approach would be to implement viewWillAppear and viewWillDisappear, and set the fetchedResultsController delegate to self in viewWillAppear and to nil in viewWillDisappear.
OR
You could create an NSObject that mirrors the attributes of your NSManagedObject in your editing window. Once the user has finished editing the attributes (and you have run the appropriate validation rules) you can pass them back to your NSManagedObject instance and let the fetchedResultsController do its job.