This seems to be possible because the MKUserLocation annotation is placed at the user's current altitude. However, the protocol for MKAnnotation only includes a coordinate. Is there a way to adjust its altitude as well? Thanks!
Here is a method:
[mapView showAnnotations:yourAnnotationArray animated:YES];
You can pull the annotation array straight from the map view:
yourAnnotationArray = mapView.annotations;
and then quickly adjust the camera altitude:
mapView.camera.altitude *= 1.4;
Try multiplying the camera's altitude by a fraction less than one, e.g. mapView.camera.altitude *= 0.85; for a closer viewport.
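For reference, here is a minimal Swift sketch of the same two steps, assuming a mapView outlet whose annotations have already been added:
// Zoom to fit every annotation currently on the map.
mapView.showAnnotations(mapView.annotations, animated: true)
// A multiplier below 1.0 pulls the camera closer; above 1.0 adds padding.
mapView.camera.altitude *= 0.85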
Is there a function that returns the level of the zoom? Or how could you get the level of the zoom with onZoom?
I would like to show or hide certain elements of the SVG when the zoom reaches a certain level. When zoomed in enough, one element would be replaced with another.
getZoom() gives you the relative zoom (relative to the initial zoom), while getSizes() returns the real zoom.
All the public API methods are listed here: https://github.com/ariutta/svg-pan-zoom#public-api
Does anyone have an example of how to zoom an MKMapView to the area of all visible annotations using the annotationVisibleRect property on MKMapView? I have seen this post, which offers a decent solution, but it seems that annotationVisibleRect would be a simpler approach.
Short answer: there is no solution to this problem using annotationVisibleRect.
There is no example because this property cannot be used in this way. The limited documentation provided for it is certainly misleading for someone who is looking for something convenient from MapKit to do a somewhat common task.
annotationVisibleRect is the rect with regard to the MKAnnotationContainerView coordinate system. MKAnnotationContainerView is the superview for your annotations. If you look in MKMapView.h, you'll find this:
// annotationVisibleRect is the visible rect where the annotations views are currently displayed.
// The delegate can use annotationVisibleRect when animating the adding of the annotations views in mapView:didAddAnnotationViews:
@property (nonatomic, readonly) CGRect annotationVisibleRect;
Its specific purpose is for manipulation (animation) of the annotation views by providing a rectangle in their superview's coordinate system that matches the map view's viewport.
You might think (as I did) that this or similar will do the trick:
CGRect visibleRect = self.mapView.annotationVisibleRect;
MKCoordinateRegion visibleRegion = [self.mapView convertRect:visibleRect toRegionFromView:self.mapView];
[self.mapView setRegion:visibleRegion animated:YES];
It won't. Calling setRegion:animated: may crash the application because the "fromView" is in the wrong coordinate system, which can push the latitude or longitude past their min/max values. You'd actually have to do something like this:
- (void)mapView:(MKMapView *)mapView didAddAnnotationViews:(NSArray *)views
{
    if (views.count > 0) {
        MKAnnotationView *view = [views objectAtIndex:0];
        CGRect visibleRect = self.mapView.annotationVisibleRect;
        MKCoordinateRegion visibleRegion = [self.mapView convertRect:visibleRect toRegionFromView:view.superview];
        [self.mapView setRegion:visibleRegion animated:YES];
    }
}
This won't crash the application, but it won't change the region either. If you compare visibleRegion to self.mapView.region, you will find that they are identical. That is because the annotationVisibleRect represents the same area that is visible in the map view -- just in a different coordinate system to make it convenient for you to do things like make the map pins come flying in from the edge of the view. See this answer for details on how it is used.
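For illustration, here is a rough Swift sketch of the kind of animation annotationVisibleRect is actually meant for: dropping newly added pins in from just above the visible area. The class name, animation duration, and offset are assumptions for the sketch, not anything prescribed by MapKit.
import MapKit
import UIKit

class MapAnimationDelegate: NSObject, MKMapViewDelegate {
    func mapView(_ mapView: MKMapView, didAdd views: [MKAnnotationView]) {
        // annotationVisibleRect is in the annotation container's coordinate
        // system, the same one the annotation views' frames use.
        let visibleRect = mapView.annotationVisibleRect
        for view in views {
            let endFrame = view.frame
            // Start each pin just above the top edge of the visible rect...
            var startFrame = endFrame
            startFrame.origin.y = visibleRect.minY - startFrame.height
            view.frame = startFrame
            // ...then animate it down into its final position.
            UIView.animate(withDuration: 0.3) {
                view.frame = endFrame
            }
        }
    }
}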
Also, for reference, here's where the MKAnnotationView sits in relation to MKMapView:
MKMapView
  +- UIView
      +- MKScrollContainerView
          +- MKAnnotationContainerView   <-- coordinate system of annotationVisibleRect
              +- MKAnnotationView
Hope that helps clear some things up -- if not, ask away.
SWIFT 4
I think what you're looking for is
func showAnnotations(_ annotations: [MKAnnotation], animated: Bool)
You simply pass in an array of all the annotations you're trying to show, which here is mapView.annotations:
mapView.showAnnotations(mapView.annotations, animated: true)
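One caveat worth noting: when showsUserLocation is enabled, mapView.annotations also contains the MKUserLocation annotation, so you may want to filter it out if you only want to fit your own pins. A small sketch of that:
// Exclude the user-location blue dot before fitting the region.
let ownAnnotations = mapView.annotations.filter { !($0 is MKUserLocation) }
mapView.showAnnotations(ownAnnotations, animated: true)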
Any suggestions for a good way to do this?
I want to be able to draw lots of 2D things in XNA, often at an offset position. E.g. if something is at position (X, Y), then ideally I'd like to be able to pass it a modified SpriteBatch which, when Draw(X, Y) is called, would take the offset into account and draw the thing at (X+OffsetX, Y+OffsetY).
I don't want to pass this offset to the children and have to deal with it separately in each child -- that could go wrong and would also mess up my interfaces!
Firstly I thought of having a Decorator around a SpriteBatch: if I call Decorator.Draw for something at position (X, Y), it would route the call to the original SpriteBatch as (X+OffsetX, Y+OffsetY). But then I can't override the Draw methods in the SpriteBatch class, and even if I created my own "Decorator.DrawOffset", the Decorator still seems to need SpriteBatch.Begin() called and so on, which seems to break... :(
I then thought of extension methods, but I think they'd need the offset passed to them as a variable each time Draw() is called, which still requires me to pass the offset down through the children...
Another option would be to draw the children to a RenderTarget (or whatever they're called in XNA 4) and then render that to the screen at an offset position... but that seems hideously inefficient?
Thanks for any comments!
You should use a transformation matrix:
Matrix transform = Matrix.CreateTranslation(offsetX, offsetY, 0);
// In XNA 4 the transform matrix is the last parameter of SpriteBatch.Begin;
// every Draw call in this batch is then offset by (offsetX, offsetY).
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                  null, null, null, null, transform);
Using an iOS device, I am interested in tapping a map view and getting back the lat/long coordinates of that spot. Is this possible?
You can get the CLLocationCoordinate2D using the convertPoint:toCoordinateFromView: method of MKMapView.
CLLocationCoordinate2D coordinate = [self.mapView convertPoint:[gesture locationInView:self.mapView] toCoordinateFromView:self.mapView];
I put this in a tap gesture handler to get it working, but if you are using some other mechanism to get the touch point, you can pass that point as the first argument to convertPoint:toCoordinateFromView:.
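For comparison, here is a rough Swift sketch of the same approach wired up with a tap gesture recognizer; the view controller and outlet names are just placeholders:
import MapKit
import UIKit

class MapTapViewController: UIViewController {
    @IBOutlet weak var mapView: MKMapView!   // assumed outlet

    override func viewDidLoad() {
        super.viewDidLoad()
        let tap = UITapGestureRecognizer(target: self, action: #selector(handleTap(_:)))
        mapView.addGestureRecognizer(tap)
    }

    @objc private func handleTap(_ gesture: UITapGestureRecognizer) {
        // Convert the touch point from the map view's coordinate space
        // into a latitude/longitude pair.
        let point = gesture.location(in: mapView)
        let coordinate = mapView.convert(point, toCoordinateFrom: mapView)
        print("Tapped at \(coordinate.latitude), \(coordinate.longitude)")
    }
}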
This is very simple, so I'm not sure why I can't do this. All I want to do is position some UIImageViews when my app becomes active. I had been using CGAffineTransformMakeTranslation, but I can now see that this is not correct because it moves the view by a set amount rather than moving it to a set position. What should I be using?
You could just change the frame of your UIImageView:
- set the x and y origin of the frame to the position where you want your UIImageView to be placed with respect to the containing view
- keep the width and height of your UIImageView the same
Something like this:
imageView.frame = CGRectMake([desired x coordinate], [desired y coordinate], imageView.frame.size.width, imageView.frame.size.height);
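In Swift, the equivalent would be a sketch along these lines (100 and 200 are placeholder coordinates):
// Replace the frame, keeping the existing size.
imageView.frame = CGRect(x: 100, y: 200,
                         width: imageView.frame.size.width,
                         height: imageView.frame.size.height)

// Or, since frame is a mutable property in Swift, just move the origin:
imageView.frame.origin = CGPoint(x: 100, y: 200)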