I have used a gesture recognizer and the boundingRect for a given overlay to determine when a user taps on it. However I would now like to make a callout appear over the overlay region that the user taps, just like how it is done for annotations. Must I create the overlays as annotations in order to accomplish this? Thanks.
Overlays can also be treated as annotation objects if you wish.
From the Location Awareness Programming Guide:
The MKOverlay protocol conforms to the MKAnnotation protocol. As a result, all overlay objects are also annotation objects and can be treated as one or both in your code. If you opt to treat an overlay object as both, you are responsible for managing that object in two places. If you want to display both an overlay view and annotation view for it, you must implement both the mapView:viewForOverlay: and mapView:viewForAnnotation: methods in your application delegate. It also means that you must add and remove the object from both the overlays and annotations arrays of your map.
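Concretely, a minimal sketch in MonoTouch C# might look like this (the binding signatures are approximate and vary between MonoTouch versions, and MKCircle is just an example overlay type):

using MonoTouch.Foundation;
using MonoTouch.MapKit;
using MonoTouch.UIKit;

// Sketch only: one MKCircle registered as BOTH overlay and annotation.
// The overlay view draws the shape; the annotation view's CanShowCallout
// is what produces the callout bubble over it.
public class MapDelegate : MKMapViewDelegate
{
    public override MKOverlayView GetViewForOverlay (MKMapView mapView, NSObject overlay)
    {
        return new MKCircleView ((MKCircle) overlay) {
            FillColor = UIColor.Blue.ColorWithAlpha (0.3f)
        };
    }

    public override MKAnnotationView GetViewForAnnotation (MKMapView mapView, NSObject annotation)
    {
        return new MKPinAnnotationView (annotation, "overlayCallout") {
            CanShowCallout = true
        };
    }
}

// Register the same object in BOTH collections (and later remove it from both):
//   mapView.AddOverlay (circle);
//   mapView.AddAnnotation (circle);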
I'm using a prefab instead of scene objects. I'm able to apply Bounds Control's handle style (HoloLens 2 style) to objects in the scene, but not to a prefab I've created.
Is there any solution? How can I apply the HoloLens 2 handle style to a prefab?
You need to save the Configuration in BoundsControl/Visuals as an .asset file for reuse in the prefab. You can also drag the .asset file from Packages/Mixed Reality Toolkit Foundation/SDK/StandardAssets/Profile/BoundsControl/HoloLens2Style_Slate to the Assets folder, modify it to the style you want, and then add it to the BoundsControl component of the prefab.
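If you'd rather wire the style up in code, something along these lines should work. This is only a sketch assuming MRTK 2.x, and the ApplyHandleStyle component is my own illustration, so verify the configuration types and property names against your MRTK version:

using Microsoft.MixedReality.Toolkit.UI.BoundsControl;
using UnityEngine;

// Sketch only: assigns saved handle-style configuration assets to the
// BoundsControl on a prefab at runtime. Property names are from MRTK 2.x.
public class ApplyHandleStyle : MonoBehaviour
{
    // Drag the HoloLens2Style .asset files onto these fields in the Inspector.
    [SerializeField] private ScaleHandlesConfiguration scaleHandlesConfig;
    [SerializeField] private RotationHandlesConfiguration rotationHandlesConfig;

    private void Awake()
    {
        var boundsControl = GetComponent<BoundsControl>();
        boundsControl.ScaleHandlesConfig = scaleHandlesConfig;
        boundsControl.RotationHandlesConfig = rotationHandlesConfig;
    }
}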
GameViews in OpenTK-1.0 initialize the context in CreateFramebuffer() and destroy said context in DestroyFramebuffer(). What if I want to hold onto my VBOs and just create a bunch of new FBOs? For example, on rotation, I need to create newly sized FBOs, but I don't want to have to completely reload all my VBOs and I just don't understand how this would work without completely reimplementing all/most of GameView. I can't just override these two methods, because the base class does not expose a setter on Renderbuffer or Framebuffer. What am I missing here?
In sum: I want to rotate the device and get a new OpenTK-1.0 FBO, but not destroy the context. How do I go about this?
CreateFramebuffer() creates an OpenGL context, not a framebuffer object. To create a FBO, call GL.GenFramebuffer(). To destroy it, use GL.DeleteFramebuffer().
Refer to the OpenGL wiki for more information. It is written for desktop OpenGL, but the article applies to OpenGL ES as well.
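As a sketch of what the rotation path could look like with the OpenTK.Graphics.ES20 bindings (the wrapper overloads differ between OpenTK versions, so treat the exact signatures as approximate):

using OpenTK.Graphics.ES20;

// Sketch only: recreate just the FBO and its renderbuffer at the new size.
// The GL context, and therefore every VBO, survives untouched.
class FboHelper
{
    int fbo, colorRbo;

    public void RecreateFramebuffer (int width, int height)
    {
        // Deleting an FBO destroys only that object, never the context.
        if (fbo != 0) GL.DeleteFramebuffers (1, ref fbo);
        if (colorRbo != 0) GL.DeleteRenderbuffers (1, ref colorRbo);

        GL.GenRenderbuffers (1, out colorRbo);
        GL.BindRenderbuffer (RenderbufferTarget.Renderbuffer, colorRbo);
        GL.RenderbufferStorage (RenderbufferTarget.Renderbuffer,
                                RenderbufferInternalFormat.Rgba4, width, height);

        GL.GenFramebuffers (1, out fbo);
        GL.BindFramebuffer (FramebufferTarget.Framebuffer, fbo);
        GL.FramebufferRenderbuffer (FramebufferTarget.Framebuffer,
                                    FramebufferAttachment.ColorAttachment0,
                                    RenderbufferTarget.Renderbuffer, colorRbo);
    }
}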
I have been trying to find an example or some hints on how to create an app where I could drag, resize, and rotate images on a UIView, and then save the individual pieces (including their size, rotation, and placement) and the entire UIView into Core Data. Kind of like the OmniGraffle app.
Any tutorials, examples or anything on any piece of the app would be greatly appreciated.
For dragging a view:
http://www.cocoacontrols.com/platforms/ios/controls/tkdragview
For rotating a view:
http://www.cocoacontrols.com/platforms/ios/controls/ktonefingerrotationgesturerecognizer
For resizing a view:
http://www.cocoacontrols.com/platforms/ios/controls/spuserresizableview
As for Core Data, it's actually pretty straightforward: gather the classes in one view, identify the properties you need to save (and the new ones you will need for your app), and that's it.
Like:
A Canvas object containing a to-many relationship to MorphlingView objects, which contain all the properties such as center, color, width, height, angle, UIPath (if you plan to create custom shapes), and layer position (so each one gets drawn correctly). If you plan to connect the views as OmniGraffle does, add a to-many relationship to self (in MorphlingView) so you can take the centers of different MorphlingViews and draw a simple line between them. (Add a string property too if you plan to add a drawInRect method that lets users write in the objects; in that case it's a good idea to add the text properties as well.)
You can also add Quartz drawing-style properties to the object, such as shadow, shadowColor, and shadowOffset, or add a pattern color to give it a resizable background.
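To make the shape of that model concrete, here is the same schema sketched as plain C# classes (illustration only; in Core Data each class becomes an entity and each field an attribute or relationship):

using System.Collections.Generic;
using System.Drawing;

// Illustration only: the entity layout described above, not actual Core Data code.
public class MorphlingView
{
    public PointF Center;                       // placement on the canvas
    public float Width, Height;                 // size
    public float Angle;                         // rotation
    public int LayerPosition;                   // z-order, so it gets drawn correctly
    public string Color;                        // archived color value
    public string Text;                         // optional, for drawInRect labels
    public List<MorphlingView> ConnectedViews;  // to-many self-relationship for connecting lines
}

public class Canvas
{
    public List<MorphlingView> MorphlingViews;  // to-many relationship
}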
I have a layer with multiple markers with rather big icons, so they overlap. Via the list on the side of the map, users can select a marker and the map will pan (and zoom) to it. But it will still be behind some other markers.
How do I get and set an individual marker's z-index? It would be useful to get the highest z-index in use and just add one. (Another solution is to add the total number of markers to the z-index.)
The markers (or features) are in a myLib.features array. The console doesn't show any z-index-type functions.
I can't find an appropriate example or API function for this.
EDIT:
I found this example: http://dev.openlayers.org/examples/ordering.html
I don't really understand it. Somehow the created feature takes the next z-index given by the layer via some kind of symbolizer. I have no idea how to turn this static ordering into a dynamic one.
Try this:
First of all, make sure you are using an OpenLayers.Layer.Vector layer, not an OpenLayers.Layer.Markers layer. Apparently the Markers layer is old news and all new development is done in the Vector layer, which has more features. (I wasted a pile of time with the Markers layer myself.)
Then, each of your markers needs to be an OpenLayers.Feature.Vector object. The constructor takes three arguments, the third of which is called the style. The style is where you set your image attributes, the background shadow, the mouse-over text, and the z-index, which has the property name "graphicZIndex". I think that's what you're looking for.
http://dev.openlayers.org/releases/OpenLayers-2.12/doc/apidocs/files/OpenLayers/Feature/Vector-js.html#OpenLayers.Feature.Vector.OpenLayers.Feature.Vector.style
Add your "markers" (which are Vectors) to your Vector layer with the addFeatures function, and just ignore the "options" argument.
http://dev.openlayers.org/releases/OpenLayers-2.12/doc/apidocs/files/OpenLayers/Layer/Vector-js.html#OpenLayers.Layer.Vector.addFeatures
I found that example page too, and I found it confusing too. It was setting all the markers' styles in the Vector layer's constructor (as default values to be used when the marker style is omitted) instead of in each marker's constructor. I think it makes more sense to set the marker style in the marker constructor.
To change the style in real time, take one of your OpenLayers.Feature.Vector markers (call it "marker") and your Vector layer (call it "layer"), and do this:
marker.style.graphicZIndex = 13;  // raise this feature above its neighbours
layer.redraw();                   // redraw the layer so the new ordering takes effect
My question is about view controllers, delegates and all that in general. I feel perfectly comfortable with UIView, UIViewController, delegates and sources, as UITableView uses them, for instance. It all makes sense.
Now I have implemented my first real custom view. No XIBs involved. It is an autocomplete address picker very much like in the Mail application. It creates those blue buttons whenever a recipient is added and has all the keyboard support like the original.
It subclasses UIView. There is no controller, no delegate, no source. I wonder if I should have any of those? Or all of them, to make it a clean implementation?
I just cannot see what sense a view controller would make in my case. My custom view acts much like a control, and a UIButton doesn't have a controller either.
What would it control in my view's case?
Some of my thoughts:
For the source: currently the view has a property "PossibleAutocompleteRecipients" which contains the addresses it autocompletes. I guess this would be a candidate for a "source" implementation. But is that really worth it? I would rather pass the controller to the view and put the property into the controller.
The selected recipients can be retrieved using a "SelectedRecipients" property. But views should not store values, I learned. Where would that go? Into the controller?
What about all the properties like "AllowSelectionFromAddressBook"? Again, if I compare with UIButton, these properties are similar to the button's "Secure" property. So they are allowed to be in the view.
The delegate could have methods like "WillAddRecipient" and "WillRemoveRecipient", and the user could return true/false to prevent the action from happening (see the sketch after these thoughts). Correct?
Should I maybe inherit from UIControl in the first place and not from UIView?
And last but not least: my custom view rotates perfectly when the device is rotated. Why don't all views? Why do some need a controller that implements ShouldAutorotateToInterfaceOrientation()?
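To make the delegate idea concrete, here is roughly the shape I have in mind (all of these names are just my proposal, nothing final):

// Proposed shape of the delegate; every name here is hypothetical.
public interface IAddressPickerDelegate
{
    // Return false to veto the action before the view performs it.
    bool WillAddRecipient (string address);
    bool WillRemoveRecipient (string address);
}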
Does what I wrote above make sense? In the end I will provide the source on my website, because it took me some time to implement and I would like to share it, as I have not found a similar implementation of the Mail-app-like autocomplete control in MonoTouch.
I just want to learn and understand as much as possible and include it in the source.
René
I can answer part of your question.
I just cannot see what sense a view controller would make in my case
The ViewController is responsible for handling the View's state transitions (load, appear, rotate, etc.). These transitions matter mainly when you use a navigation component (UINavigationController, UITabBarController). These components need to receive a ViewController that will handle the view's transitions.
For example, when you push a ViewController onto a UINavigationController, it will trigger ViewDidLoad, ViewWillAppear and ViewDidAppear on it. It will also trigger ViewWillDisappear and ViewDidDisappear on the current ViewController.
So, if your application has only one portrait view, you don't need a ViewController. You can add your custom view as a subview of the main window.
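For completeness, a minimal MonoTouch sketch of such a controller (AddressPickerView stands in for the custom view from the question; the rest is standard UIViewController plumbing):

using MonoTouch.UIKit;

public class AddressPickerController : UIViewController
{
    public override void LoadView ()
    {
        // The controller owns the custom view and hands it to the navigation machinery.
        View = new AddressPickerView ();
    }

    public override void ViewWillAppear (bool animated)
    {
        base.ViewWillAppear (animated);
        // Called by UINavigationController / UITabBarController on each transition.
    }

    public override bool ShouldAutorotateToInterfaceOrientation (UIInterfaceOrientation toInterfaceOrientation)
    {
        return true; // opt the hosted view into every orientation
    }
}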