I have been running the SpatialUnderstandingExample scene from HoloToolkit. I couldn't figure out how to place my own objects into the scene. I want to replace the small boxes that come by default with my own objects. How can I do that?
Thanks
Edit: found the DrawBox call, but how do I place my object there?
Edit 2: I finally placed an object at that position, but the code is still very complicated and it's messing up the size and shape of my object. I will try to make it clean and neat.
It's been a while since I've looked at that example, so hopefully I remember its method names correctly. It contains a "DrawBox" method that is called after a successful call to get a location from spatial understanding. The call that creates the box looks something like this:
DrawBox(toPlace, Color.red);
Replace this call with the following (assuming "toPlace" contains the results from the spatial understanding call and "model" contains the model you are trying to place there):
var rotation = Quaternion.LookRotation(toPlace.Normal, Vector3.up);
// Stay centered in the square but move down to the ground
var position = toPlace.Position - new Vector3(0, RequestedSize.y * .5f, 0);
// Instantiate the hologram from the model
GameObject newObject = Instantiate(model, position, rotation) as GameObject;
if (newObject != null)
{
    // Set the parent of the new object to the GameObject it was placed on
    newObject.transform.parent = gameObject.transform;
}
Related
When instantiating an object from a prefab this way (in an empty project, Unity 2020.3.2f1):
myObject = Instantiate(preObject, Parent.transform);
This changes myObject's shape dramatically, and I don't know why.
I found a solution:
myObject = Instantiate(preObject);
myObject.transform.parent = Parent.transform;
Is this a bug, or am I just too lazy to read the documentation?
I can't make a comment because I don't have enough rep.
You could also set the transform as a parent and then just set the child's scale to 1/parentScale.
obj = Instantiate(newObj, parent);
obj.transform.localScale = new Vector3(1 / parent.localScale.x, 1 / parent.localScale.y, 1 / parent.localScale.z);
If you have more than one parent with a stretched scale, you can try the same with parent.lossyScale.
I have a document-based Core Data app with an NSTreeController supplying the content to a view-based NSOutlineView. I am "styling" (setting text colour, background colour, etc.) the rows based on persistent "transformable" NSColor and NSFont attributes in my data model, which the end user can modify. When a new row is popped up, it displays things with the colours/fonts set in the data model. Here is the delegate/datasource code that sets the row background colour:
- (void)outlineView:(NSOutlineView *)outlineView
      didAddRowView:(NSTableRowView *)rowView
             forRow:(NSInteger)row
{
    // Get the relevant node, which contains the style attributes
    QVItem *aNode = [[outlineView itemAtRow:row] representedObject];
    if (aNode.backColor)
    {
        rowView.backgroundColor = aNode.backColor;
    }
}
However, when the style attributes change, I want the associated visible rows to be redrawn with the new style values. Each time a "style" attribute is changed, I use NSNotificationCenter to send a notification to the outline view delegate, passing the model object whose row needs to be redrawn with the changed style. This is the code in the delegate that receives the notification.
- (void)styleHasChanged:(NSNotification *)aNotification
{
    NSTreeNode *aTreeNode = [myTreeController treeNodeForModelObject:aNotification.object];
    [myOutlineView reloadItem:aTreeNode];
}
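(The posting side isn't shown above; it would presumably look something like the sketch below, where the notification name "QVStyleDidChangeNotification" and the variable changedItem are placeholders I've introduced, not names from the original code:)

// Wherever a style attribute is edited (sketch only):
[[NSNotificationCenter defaultCenter] postNotificationName:@"QVStyleDidChangeNotification"
                                                    object:changedItem];

// And, during the delegate's setup, register for it:
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(styleHasChanged:)
                                             name:@"QVStyleDidChangeNotification"
                                           object:nil];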
My assumption here is that I can navigate the tree controller to find the tree node which is representing my model object and then ask the outline view to redraw the row for that tree node. This is the "additions" code in the tree controller which walks the tree to find the object - not super efficient, but I don't think there is another way.
@implementation NSTreeController (QVAdditions)

- (NSTreeNode *)treeNodeForModelObject:(id)aModelObject
{
    return [self treeNodeForModelObject:aModelObject inNodes:[[self arrangedObjects] childNodes]];
}

- (NSTreeNode *)treeNodeForModelObject:(id)aModelObject inNodes:(NSArray *)nodes
{
    for (NSTreeNode *node in nodes)
    {
        if ([node representedObject] == aModelObject)
            return node;
        if ([[node childNodes] count])
        {
            // Only return if the recursive search actually found the object;
            // otherwise keep scanning the remaining sibling nodes
            NSTreeNode *treeNode = [self treeNodeForModelObject:aModelObject inNodes:[node childNodes]];
            if (treeNode)
                return treeNode;
        }
    }
    return nil;
}

@end
So sometimes this works and the row redraws, and sometimes it doesn't. The delegate method "styleHasChanged:" is always called, and the tree controller always returns a corresponding tree node (actually an instance of a subclass of NSTreeNode). But more often than not the outline view does not recognise the tree node, and the row is not redrawn. It's like the tree controller has given back a different tree node object from the one it gave the outline view in the past. But weirdly, sometimes it does work and the right row is redrawn with the new background colour. If I collapse the row out of view and pop it open again, it is redrawn correctly.
Does anyone have any idea why it works sometimes and not other times?
It would be nice to be able to bind the colour/font attributes to the row and columns in some way, so that the outline view did this styling automatically with KVO, but I don't think that is possible - is it?
You spend hours or days trying to work out what you've done wrong; you write the question out; post it; sleep on it; and then realise how stupid you've been.
So I asked the NSTableRowView to redraw itself, but I had not set the new background colour on it. Here is the new, improved (and working) version of styleHasChanged:
- (void)styleHasChanged:(NSNotification *)aNotification
{
    QVItem *modelItem = aNotification.object;
    NSTreeNode *aTreeNode = [myTreeController treeNodeForModelObject:modelItem];
    NSInteger rowIndex = [myOutlineView rowForItem:aTreeNode];
    if (rowIndex != -1)
    {
        NSTableRowView *rowViewToBeUpdated = [myOutlineView rowViewAtRow:rowIndex makeIfNecessary:YES];
        rowViewToBeUpdated.backgroundColor = modelItem.backColor;
    }
}
Duh!
I am trying to do a very basic thing but I am new to this.
Basically I have a screen with 3 objects that can move around.
I have implemented a method that I call when touchesMoved happens:
if object X moves over the main object, object X is hidden.
What I want to do is, when object Y is released over the main object,
have it return to the position it was moved from.
Should this be implemented in touchesEnded?
What would the method look like?
Any help would be greatly appreciated.
All you'd have to do here is remember the object's position in touchesBegan: and then restore the object in touchesEnded:
If you're only accepting single touches, then you can use something like this in the touchesBegan / touchesEnded methods to grab the touch...
CGPoint location = [[touches anyObject] locationInView:self];
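Putting those pieces together, a minimal sketch of the two methods might look like this (assuming the draggable object and the main object are UIViews exposed as objectY and mainObject, and originalCenter is a CGPoint property; these names are placeholders, not from the original code):

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    // Remember where the object started so it can be restored later
    self.originalCenter = self.objectY.center;
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint location = [[touches anyObject] locationInView:self];
    // If the object was released over the main object, snap it back
    // to the position the drag started from
    if (CGRectContainsPoint(self.mainObject.frame, location))
    {
        self.objectY.center = self.originalCenter;
    }
}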
I've been converting my own personal OGLES 2.0 framework to take advantage of the functionality added by the new iOS 5 framework GLKit.
After pleasing results, I now wish to implement the colour-based picking mechanism described here. For this, you must access the back buffer to retrieve a touched pixel RGBA value, which is then used as a unique identifier for a vertex/primitive/display object. Of course, this requires temporary unique coloring of all vertices/primitives/display objects.
I have two questions, and I'd be very grateful for assistance with either:
1. I have access to a GLKViewController, GLKView, CAEAGLLayer (of the GLKView) and an EAGLContext. I also have access to all OGLES 2.0 buffer-related commands. How do I combine these to identify the colour of a pixel in the EAGLContext I'm tapping on-screen?
2. Given that I'm using Vertex Buffer Objects to do my rendering, is there a neat way to override the colour provided to my vertex shader which firstly doesn't involve modifying the buffered vertex (colour) attributes, and secondly doesn't involve adding an IF statement to the vertex shader?
I assume the answer to (2) is "no", but for reasons of performance and non-arduous code revamping I thought it wise to check with someone more experienced.
Any suggestions would be gratefully received. Thank you for your time.
UPDATE
Well I now know how to read pixel data from the active frame buffer using glReadPixels. So I guess I just have to do the special "unique colours" render to the back buffer, briefly switch to it and read pixels, then switch back. This will inevitably create a visual flicker, but I guess it's the easiest way; certainly quicker (and more sensible) than creating a CGImageContextRef from a screen snapshot and analyzing that way.
Still, any tips as regards the back buffer would be much appreciated.
Well, I've worked out exactly how to do this as concisely as possible. Below I explain how to achieve this and list all the code required :)
In order to allow touch interaction to select a pixel, first add a UITapGestureRecognizer to your GLKViewController subclass (assuming you want tap-to-select-pixel); its target method is shown further below. You must also make your GLKViewController subclass conform to UIGestureRecognizerDelegate:
@interface GLViewController : GLKViewController <GLKViewDelegate, UIGestureRecognizerDelegate>
After instantiating your gesture recognizer, add it to the view property (which in GLKViewController is actually a GLKView):
// Inside GLKViewController subclass init/awakeFromNib:
[[self view] addGestureRecognizer:[self tapRecognizer]];
[[self tapRecognizer] setDelegate:self];
Set the target action for your gesture recognizer; you can do this when creating it using a particular init... however I created mine using Storyboard (aka "the new Interface Builder in Xcode 4.2") and wired it up that way.
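If you'd rather wire it up in code, a minimal sketch (using the tapRecognizer property from above and the onTapGesture: action shown next) would be:

// Create the recognizer programmatically (alternative to the Storyboard approach),
// pointing it at the onTapGesture: action defined below
[self setTapRecognizer:[[UITapGestureRecognizer alloc] initWithTarget:self
                                                               action:@selector(onTapGesture:)]];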
Anyway, here's my target action for the tap gesture recognizer:
- (IBAction)onTapGesture:(UIGestureRecognizer *)recognizer {
    const CGPoint loc = [recognizer locationInView:[self view]];
    [self pickAtX:loc.x Y:loc.y];
}
The pick method called in there is one I've defined inside my GLKViewController subclass:
- (void)pickAtX:(GLuint)x Y:(GLuint)y {
    GLKView *glkView = (GLKView *)[self view];
    UIImage *snapshot = [glkView snapshot];
    [snapshot pickPixelAtX:x Y:y];
}
This takes advantage of a handy new method snapshot that Apple kindly included in GLKView to produce a UIImage from the underlying EAGLContext.
What's important to note is a comment in the snapshot API documentation, which states:
This method should be called whenever your application explicitly needs the contents of the view; never attempt to directly read the contents of the underlying framebuffer using OpenGL ES functions.
This gave me a clue as to why my earlier attempts to access pixel data by invoking glReadPixels generated an EXC_BAD_ACCESS, and it was the indicator that sent me down the right path instead.
You'll notice in my pickAtX:Y: method defined a moment ago I call a pickPixelAtX:Y: on the UIImage. This is a method I added to UIImage in a custom category:
@interface UIImage (NDBExtensions)
- (void)pickPixelAtX:(NSUInteger)x Y:(NSUInteger)y;
@end
Here is the implementation; it's the final code listing required. The code came from this question and has been amended according to the answer received there:
@implementation UIImage (NDBExtensions)

- (void)pickPixelAtX:(NSUInteger)x Y:(NSUInteger)y {
    CGImageRef cgImage = [self CGImage];
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    if ((x < width) && (y < height))
    {
        CGDataProviderRef provider = CGImageGetDataProvider(cgImage);
        CFDataRef bitmapData = CGDataProviderCopyData(provider);
        const UInt8 *data = CFDataGetBytePtr(bitmapData);
        size_t offset = ((width * y) + x) * 4;
        UInt8 b = data[offset + 0];
        UInt8 g = data[offset + 1];
        UInt8 r = data[offset + 2];
        UInt8 a = data[offset + 3];
        CFRelease(bitmapData);
        NSLog(@"R:%i G:%i B:%i A:%i", r, g, b, a);
    }
}

@end
I originally tried some related code found in an Apple API doc entitled "Getting the pixel data from a CGImage context", which required two method definitions instead of this one, needed much more code, and returned data of type void * that I was unable to interpret correctly.
That's it! Add this code to your project, and then when you tap a pixel it will output its value in the form:
R:24 G:46 B:244 A:255
Of course, you should write some means of extracting those RGBA int values (which will be in the range 0 - 255) and using them however you want. One approach is to return a UIColor from the above method, instantiated like so:
UIColor *color = [UIColor colorWithRed:red/255.0f green:green/255.0f blue:blue/255.0f alpha:alpha/255.0f];
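For example, here is a sketch of how the category method above might be reworked to return that colour instead of just logging the components (the nil return for out-of-range coordinates is my assumption, not part of the original answer):

- (UIColor *)pickPixelAtX:(NSUInteger)x Y:(NSUInteger)y {
    CGImageRef cgImage = [self CGImage];
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    if ((x >= width) || (y >= height))
        return nil; // out-of-range tap (assumed behaviour)

    CGDataProviderRef provider = CGImageGetDataProvider(cgImage);
    CFDataRef bitmapData = CGDataProviderCopyData(provider);
    const UInt8 *data = CFDataGetBytePtr(bitmapData);
    size_t offset = ((width * y) + x) * 4;
    UInt8 b = data[offset + 0];
    UInt8 g = data[offset + 1];
    UInt8 r = data[offset + 2];
    UInt8 a = data[offset + 3];
    CFRelease(bitmapData);

    // Convert the 0-255 components into a UIColor
    return [UIColor colorWithRed:r / 255.0f
                           green:g / 255.0f
                            blue:b / 255.0f
                           alpha:a / 255.0f];
}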
I have an MKPolyline made from an array of CLLocationCoordinate2D (Points). That's all fine.
I added this line to the Map as an overlay, like so: Map.AddOverlay(line);
I even set this: Map.SetVisibleMapRect(line.BoundingMapRect, true);
But the line does not show up although the Map bounds are correct.
I'm looking into MKPolylineView, but can't get it to work.
Does anyone know how to set the colour and line width?
Thanks
After much head scratching, here is how to display an MKPolyline on an MKMapView:
Step 1: Create a delegate method for Map GetViewForOverlay
Map.GetViewForOverlay = Map_GetViewForOverlay;
Where Map is the MKMapView.
MKOverlayView Map_GetViewForOverlay(MKMapView mapView, NSObject overlay)
{
    if (overlay.GetType() == typeof(MKPolyline))
    {
        MKPolylineView p = new MKPolylineView((MKPolyline)overlay);
        p.LineWidth = 2.0f;
        p.StrokeColor = UIColor.Green;
        return p;
    }
    else
        return null;
}
Step 2: Create a new instance of MKPolyline
MKPolyline line = MKPolyline.FromCoordinates(polyPoints);
Where polyPoints is an Array of CLLocationCoordinate2D.
Step 3: Add the overlay to the map
Map.AddOverlay(line);
Step 4: Use the code below to zoom and change the Map bounds to fit the route
Map.SetVisibleMapRect(line.BoundingMapRect, true);
I'm pretty sure that if your intent is to dynamically draw a path over the MapView, given a backing model object that indicates two coordinates, you want to take a look at my project here:
https://github.com/anujb/MapWithRoutes
This will allow you to overlay a path and it will update as the map changes. It's a modified version of an obj-C port that makes use of background threads so it's not blocking.
Thanks,
Anuj