I've been experimenting with Apple's Metal API recently, and I've now come to the question in the subject: how can you use different fragment shaders in one Metal scene? Is it possible?
The background: a whole geometric primitive is rendered by a simple vertex-fragment chain with the colors defined/calculated inside (say we have a cube, and all its faces are rendered with the described method). Next, part of the primitive needs to be rendered additionally with a texture (adding some picture to only one of the faces).
Do we need to use different fragment shaders to accomplish that? I suppose it's possible to use some default texture in the first step, and that would give one solution.
What would you recommend?
//============= Edited part follows ==========//
I tried using two different MTLRenderPipelineState objects with two different pairs of rendering functions, as Warren suggested. With the following code I don't get the desired result: each of the states renders as expected when used separately, but using them together only renders the first one.
creation:
id <MTLFunction> fragmentProgram = [_defaultLibrary newFunctionWithName:@"color_fragment"];
// Load the vertex program into the library
id <MTLFunction> vertexProgram = [_defaultLibrary newFunctionWithName:@"lighting_vertex"];
// Create a vertex descriptor from the MTKMesh
MTLVertexDescriptor *vertexDescriptor = MTKMetalVertexDescriptorFromModelIO(_boxMesh.vertexDescriptor);
vertexDescriptor.layouts[0].stepRate = 1;
vertexDescriptor.layouts[0].stepFunction = MTLVertexStepFunctionPerVertex;
// Create a reusable pipeline state
MTLRenderPipelineDescriptor *pipelineStateDescriptor = [[MTLRenderPipelineDescriptor alloc] init];
pipelineStateDescriptor.label = @"MyPipeline";
pipelineStateDescriptor.sampleCount = _view.sampleCount;
pipelineStateDescriptor.vertexFunction = vertexProgram;
pipelineStateDescriptor.fragmentFunction = fragmentProgram;
pipelineStateDescriptor.vertexDescriptor = vertexDescriptor;
pipelineStateDescriptor.colorAttachments[0].pixelFormat = _view.colorPixelFormat;
pipelineStateDescriptor.depthAttachmentPixelFormat = _view.depthStencilPixelFormat;
pipelineStateDescriptor.stencilAttachmentPixelFormat = _view.depthStencilPixelFormat;

NSError *error = nil;
_pipelineStateColor = [_device newRenderPipelineStateWithDescriptor:pipelineStateDescriptor error:&error];
if (!_pipelineStateColor) {
    NSLog(@"Failed to create pipeline state, error %@", error);
}
pipelineStateDescriptor.fragmentFunction = [_defaultLibrary newFunctionWithName:@"texture_fragment"];
_pipelineStateTexture = [_device newRenderPipelineStateWithDescriptor:pipelineStateDescriptor error:&error];
if (!_pipelineStateTexture) {
    NSLog(@"Failed to create pipeline state, error %@", error);
}
rendering:
- (void)renderInto:(id <MTLRenderCommandEncoder>)renderEncoder
 withPipelineState:(id <MTLRenderPipelineState>)pipelineState
{
    [renderEncoder setRenderPipelineState:pipelineState];
    [renderEncoder setVertexBuffer:_boxMesh.vertexBuffers[0].buffer offset:_boxMesh.vertexBuffers[0].offset atIndex:0];
    [renderEncoder setVertexBuffer:_dynamicConstantBuffer offset:(sizeof(uniforms_t) * _constantDataBufferIndex) atIndex:1];
    [renderEncoder setVertexBuffer:_textureBuffer offset:0 atIndex:2];
    [renderEncoder setFragmentTexture:_textureData atIndex:0];

    MTKSubmesh *submesh = _boxMesh.submeshes[0];
    [renderEncoder drawIndexedPrimitives:submesh.primitiveType
                              indexCount:submesh.indexCount
                               indexType:submesh.indexType
                             indexBuffer:submesh.indexBuffer.buffer
                       indexBufferOffset:submesh.indexBuffer.offset];
}
- (void)_render
{
    dispatch_semaphore_wait(_inflight_semaphore, DISPATCH_TIME_FOREVER);
    [self _update];

    id <MTLCommandBuffer> commandBuffer = [_commandQueue commandBuffer];
    __block dispatch_semaphore_t block_sema = _inflight_semaphore;
    [commandBuffer addCompletedHandler:^(id <MTLCommandBuffer> buffer) {
        dispatch_semaphore_signal(block_sema);
    }];

    MTLRenderPassDescriptor *renderPassDescriptor = _view.currentRenderPassDescriptor;
    if (renderPassDescriptor != nil)
    {
        id <MTLRenderCommandEncoder> renderEncoder = [commandBuffer renderCommandEncoderWithDescriptor:renderPassDescriptor];
        renderEncoder.label = @"MyRenderEncoder";
        [renderEncoder setDepthStencilState:_depthState];
        [self renderInto:renderEncoder withPipelineState:_pipelineStateColor];
        [self renderInto:renderEncoder withPipelineState:_pipelineStateTexture];
        [renderEncoder endEncoding];
        [commandBuffer presentDrawable:_view.currentDrawable];
    }

    _constantDataBufferIndex = (_constantDataBufferIndex + 1) % kMaxInflightBuffers;
    [commandBuffer commit];
}
and finally the fragment shaders:
fragment float4 color_fragment(ColorInOut in [[stage_in]])
{
    return float4(0.8f, 0.f, 0.1f, 0.5f);
}

fragment float4 texture_fragment(ColorInOut in [[stage_in]],
                                 texture2d<float, access::sample> texture [[texture(0)]])
{
    constexpr sampler s(coord::normalized,
                        address::clamp_to_zero,
                        filter::linear);
    return texture.sample(s, in.texture_coordinate);
}
You can use multiple fragment shaders in a single frame/pass by creating multiple render pipeline states. Simply create a pipeline state for each vertex/fragment function pair, and call setRenderPipelineState: on your render command encoder to set the appropriate pipeline state before issuing the draw call. You will need to write separate fragment shader functions for doing the color passthrough and the texture sampling.
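To make that concrete, here is a minimal sketch of the pattern. This is my illustration, not the poster's code: colorSubmesh, texturedSubmesh, and drawSubmesh:into: are hypothetical names for geometry split per material.

[renderEncoder setRenderPipelineState:_pipelineStateColor];
[self drawSubmesh:colorSubmesh into:renderEncoder];        // faces shaded by color_fragment

[renderEncoder setRenderPipelineState:_pipelineStateTexture];
[renderEncoder setFragmentTexture:_textureData atIndex:0]; // only this pipeline samples the texture
[self drawSubmesh:texturedSubmesh into:renderEncoder];     // the face shaded by texture_fragment

Note that the edited code above draws the same submesh with both pipelines, so the second draw lands at exactly the same depth as the first; with a less-than depth compare function its fragments will be rejected, which would match "only the first one being rendered".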
Related
I have developed a project where a user draws an image on a canvas, and I store it in a file using Core Data; I have a one-to-many relationship called folder-to-files, and all the files here are images. I retrieve the images from the files, resize them according to my table cell height, and show them in a table. Once an image is shown, I want to cache it.
I also have some labels on the folder cell, which give me some info regarding my files, and which I update on the fly.
I can also swipe a cell to mark it complete and move it to the bottom of the table.
I also show the same file images in different views, depending on how the user queries them.
I want to know the best method for this. I've read around the web, and there are many approaches: GCD, NSOperationQueue, and many more.
Which method would be best suited for me?
Let me show some code:
- (UITableViewCell *)tableView:(FMMoveTableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
    static NSString *CellIdentifier = @"Cell";
    FolderCell *tableCell = (FolderCell *)[tableView dequeueReusableCellWithIdentifier:CellIdentifier];
    if (tableCell == nil)
    {
        NSArray *nib = [[NSBundle mainBundle] loadNibNamed:@"FolderCell" owner:self options:nil];
        tableCell = [nib objectAtIndex:0];
    }
    NSMutableArray *categoryArray = [[self.controller fetchedObjects] mutableCopy];
    Folder *category = [categoryArray objectAtIndex:[indexPath row]];
    [tableCell configureCellWithNote:category]; // This method is implemented in my FolderCell.m
    return tableCell;
}
- (void)configureCellWithNote:(Folder *)category
{
    self.category = category;
    UIImage *image1 = [UIImage imageWithData:category.noteImage];
    CGSize newSize = image1.size; // fall back to the original size
    if (image1.size.width == 620 && image1.size.height == 200)
    {
        newSize = CGSizeMake(300, 97);
    }
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image1 drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    self.notesImage.image = newImage;
}
What is happening here is that configureCellWithNote: is taking a lot of time, because it is resizing images. Please help me decide how this performance issue can be solved.
Regards,
Rajit
If you simply want to shuffle the resize operation to a background thread, you could do something like this:
- (void)configureCellWithNote:(Folder *)category
{
    self.category = category;
    UIImage *image1 = [UIImage imageWithData:category.noteImage];
    CGSize newSize = image1.size; // fall back to the original size
    if (image1.size.width == 620 && image1.size.height == 200)
    {
        newSize = CGSizeMake(300, 97);
    }
    // Resize on a background queue, then hop back to the main queue to touch the UI
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
        [image1 drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
        UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        dispatch_async(dispatch_get_main_queue(), ^{
            self.notesImage.image = newImage;
        });
    });
}
If you want to cache the results, then the trick will be to come up with a good cache key. Unfortunately, it's hard to tell from what you've posted what would make a good cache key. Certainly it will need to include the size, but it will also need to include something that ties it back to the category. I suppose if nothing else you could use the NSManagedObjectID for the category, though be aware that an object's ID is temporary until the object has first been saved. Assuming there were a property on Folder called uniqueName, a caching implementation might look like this:
- (UIImage *)imageForCategory:(Folder *)category atSize:(CGSize)size
{
    // A shared (i.e. global, but scoped to this function) cache
    static NSCache *imageCache = nil;

    // The following initializes the cache once, and only once
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        imageCache = [[NSCache alloc] init];
    });

    // Generate a cache key sufficient to uniquely identify the image we're looking for
    NSString *cacheKey = [NSString stringWithFormat:@"%@|%@", category.uniqueName, NSStringFromCGSize(size)];

    // Try fetching any existing image for that key from the cache.
    UIImage *img = [imageCache objectForKey:cacheKey];

    // If we don't find a pre-existing one, create one
    if (!img)
    {
        // Your original code for creating a resized image...
        UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
        UIImage *image1 = [UIImage imageWithData:category.noteImage];
        [image1 drawInRect:CGRectMake(0, 0, size.width, size.height)];
        img = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        // Now add the newly-created image to the cache
        [imageCache setObject:img forKey:cacheKey];
    }

    // Return the image
    return img;
}
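With that helper in place, the cell configuration might reduce to something like this sketch (my illustration; the 300×97 size is the one hard-coded above, and notesImage is the image view from the question):

- (void)configureCellWithNote:(Folder *)category
{
    self.category = category;
    // Resized at most once per category/size pair; afterwards served from the NSCache
    self.notesImage.image = [self imageForCategory:category atSize:CGSizeMake(300, 97)];
}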
I'm using Apple's KMLViewer to load a KML file and display it in an MKMapView. There are over 50,000 lines of coordinates in the KML file, which, of course, causes it to load slowly. In an attempt to speed things up, I'm trying to perform the parsing on another thread using GCD.
I have it working reasonably well, in that it displays properly and the speed is acceptable. However, I'm getting intermittent runtime errors when loading the map. I suspect it's because, the way I have things laid out, the UI is being updated within the GCD block. Everything I'm reading says the UI should be updated on the main thread, or else runtime errors can occur that are intermittent and hard to track down. Well, that's what I'm seeing.
The problem is, I can't figure out how to update the UI on the main thread. I'm still new to iOS programming, so I'm just throwing things against the wall to see what works. Here is my code, which is basically Apple's KMLViewerViewController.m with some modifications:
#import "KMLViewerViewController.h"
#implementation KMLViewerViewController
- (void)viewDidLoad
{
[super viewDidLoad];
activityIndicator.hidden = TRUE;
dispatch_queue_t myQueue = dispatch_queue_create("My Queue",NULL);
dispatch_async(myQueue, ^{
// Locate the path to the route.kml file in the application's bundle
// and parse it with the KMLParser.
NSString *path = [[NSBundle mainBundle] pathForResource:#"BigMap" ofType:#"kml"];
NSURL *url = [NSURL fileURLWithPath:path];
kmlParser = [[KMLParser alloc] initWithURL:url];
[kmlParser parseKML];
dispatch_async(dispatch_get_main_queue(), ^{
// Update the UI
// Add all of the MKOverlay objects parsed from the KML file to the map.
NSArray *overlays = [kmlParser overlays];
[map addOverlays:overlays];
// Add all of the MKAnnotation objects parsed from the KML file to the map.
NSArray *annotations = [kmlParser points];
[map addAnnotations:annotations];
// Walk the list of overlays and annotations and create a MKMapRect that
// bounds all of them and store it into flyTo.
MKMapRect flyTo = MKMapRectNull;
for (id <MKOverlay> overlay in overlays) {
if (MKMapRectIsNull(flyTo)) {
flyTo = [overlay boundingMapRect];
} else {
flyTo = MKMapRectUnion(flyTo, [overlay boundingMapRect]);
}
}
for (id <MKAnnotation> annotation in annotations) {
MKMapPoint annotationPoint = MKMapPointForCoordinate(annotation.coordinate);
MKMapRect pointRect = MKMapRectMake(annotationPoint.x, annotationPoint.y, 0, 0);
if (MKMapRectIsNull(flyTo)) {
flyTo = pointRect;
} else {
flyTo = MKMapRectUnion(flyTo, pointRect);
}
}
// Position the map so that all overlays and annotations are visible on screen.
map.visibleMapRect = flyTo;
});
});
}
-(void)viewWillAppear:(BOOL)animated
{
[super viewWillAppear:animated];
activityIndicator.hidden = FALSE;
[activityIndicator startAnimating];
}
#pragma mark MKMapViewDelegate
- (MKOverlayView *)mapView:(MKMapView *)mapView viewForOverlay:(id <MKOverlay>)overlay
{
return [kmlParser viewForOverlay:overlay];
}
- (MKAnnotationView *)mapView:(MKMapView *)mapView viewForAnnotation:(id <MKAnnotation>)annotation
{
return [kmlParser viewForAnnotation:annotation];
}
- (void)mapViewDidFinishLoadingMap:(MKMapView *)mapView
{
[activityIndicator stopAnimating];
activityIndicator.hidden = TRUE;
}
#end
Suggestions?
I have used Apple's CrumbPath and CrumbPathView classes, but they only work for me on iOS 5.0; when I try the same code on iOS 4.0, the route is not updated.
Code:
if (newLocation)
{
    if (oldLocation.coordinate.latitude == 0.0) {
        initialLocation = [[[CLLocation alloc] initWithLatitude:newLocation.coordinate.latitude longitude:newLocation.coordinate.longitude] retain];
    }

    // make sure the old and new coordinates are different
    if ((oldLocation.coordinate.latitude != newLocation.coordinate.latitude) &&
        (oldLocation.coordinate.longitude != newLocation.coordinate.longitude))
    {
        if (!crumbs)
        {
            // This is the first time we're getting a location update, so create
            // the CrumbPath and add it to the map.
            //
            crumbs = [[CrumbPath alloc] initWithCenterCoordinate:newLocation.coordinate];
            [mapView addOverlay:crumbs];

            // On the first location update only, zoom map to user location
            MKCoordinateRegion region =
                MKCoordinateRegionMakeWithDistance(newLocation.coordinate, 2000, 2000);
            [mapView setRegion:region animated:YES];
        }
        else
        {
            // This is a subsequent location update.
            // If the crumbs MKOverlay model object determines that the current location has moved
            // far enough from the previous location, use the returned updateRect to redraw just
            // the changed area.
            //
            // note: iPhone 3G will locate you using the triangulation of the cell towers,
            // so you may experience spikes in location data (in small time intervals)
            // due to 3G tower triangulation.
            //
            Count++;
            double latitude = 0.000500 * Count;
            double longitude = 0.000020 * Count;
            _bean = [[TempBean alloc] init];
            _bean.lat = newLocation.coordinate.latitude + latitude;
            _bean.lon = newLocation.coordinate.longitude - longitude;
            CLLocation *locationToDraw = [[CLLocation alloc] initWithLatitude:_bean.lat longitude:_bean.lon];

            // UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Update_Loc" message:[NSString stringWithFormat:@"Lat:%f , Lon:%f", locationToDraw.coordinate.latitude, locationToDraw.coordinate.longitude] delegate:self cancelButtonTitle:@"ok" otherButtonTitles:@"Cancel", nil];
            // [alert show];
            // [alert release];

            MKMapRect updateRect = [crumbs addCoordinate:locationToDraw.coordinate];
            if (!MKMapRectIsNull(updateRect))
            {
                // There is a non-null update rect.
                // Compute the currently visible map zoom scale
                MKZoomScale currentZoomScale = (CGFloat)(mapView.bounds.size.width / mapView.visibleMapRect.size.width);
                // Find out the line width at this zoom scale and outset the updateRect by that amount
                CGFloat lineWidth = MKRoadWidthAtZoomScale(currentZoomScale);
                updateRect = MKMapRectInset(updateRect, -lineWidth, -lineWidth);
                // Ask the overlay view to update just the changed area.
                [crumbView setNeedsDisplayInMapRect:updateRect];
            }

            [self calDistance:initialLocation SecondCor:locationToDraw];
            [locationToDraw release];
            locationToDraw = nil;
            [_bean release];
            _bean = nil;
        }
    }
}
If that's the case, then you should look into your CrumbPathView.m and see whether the drawing code works correctly. From what I see here, you copied and pasted the code from the Apple docs, right? In that file you should find this in the method drawMapRect:zoomScale:inContext::
if (path != nil)
{
    CGContextAddPath(context, path);
    CGContextSetRGBStrokeColor(context, 0.0f, 0.0f, 1.0f, 0.6f);
    CGContextSetLineJoin(context, kCGLineJoinRound);
    CGContextSetLineCap(context, kCGLineCapRound);
    CGContextSetLineWidth(context, lineWidth);
    CGContextStrokePath(context);
    CGPathRelease(path);
}
See whether the path is nil or not; if it's nil, nothing gets rendered. Have you debugged into this view? The rendering happens in CrumbPathView, so you should watch what happens in it.
If the path is nil, then you should check the method that assigns its value, located above the if (path != nil).
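As a quick, hypothetical debugging aid (path is a CGPathRef here, so NULL and nil are interchangeable), you could drop something like this just above that block:

if (path == NULL) {
    // Nothing was built for this tile: the coordinate-adding code never ran,
    // or it produced no points inside this map rect.
    NSLog(@"CrumbPathView: path is NULL, nothing will be stroked for this map rect");
}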
In the app I'm currently designing, I have an MKMapView with overlays on it (customized MKPolylines, by the way), and I would like to be able to detect touch events on these overlays and assign a specific action to each overlay. Could anyone help me with this one?
Thanks!
Benja
This can be solved by combining How to intercept touches events on a MKMapView or UIWebView objects? and How to determine if an annotation is inside of MKPolygonView (iOS). Add this in viewWillAppear:
WildcardGestureRecognizer *tapInterceptor = [[WildcardGestureRecognizer alloc] init];
tapInterceptor.touchesBeganCallback = ^(NSSet *touches, UIEvent *event) {
    UITouch *touch = [touches anyObject];
    CGPoint point = [touch locationInView:self.mapView];

    CLLocationCoordinate2D coord = [self.mapView convertPoint:point toCoordinateFromView:self.mapView];
    MKMapPoint mapPoint = MKMapPointForCoordinate(coord);

    for (id overlay in self.mapView.overlays)
    {
        if ([overlay isKindOfClass:[MKPolygon class]])
        {
            MKPolygon *poly = (MKPolygon *)overlay;
            id view = [self.mapView viewForOverlay:poly];
            if ([view isKindOfClass:[MKPolygonView class]])
            {
                MKPolygonView *polyView = (MKPolygonView *)view;
                CGPoint polygonViewPoint = [polyView pointForMapPoint:mapPoint];
                BOOL mapCoordinateIsInPolygon = CGPathContainsPoint(polyView.path, NULL, polygonViewPoint, NO);
                if (mapCoordinateIsInPolygon) {
                    debug(@"hit!");
                } else {
                    debug(@"miss!");
                }
            }
        }
    }
};
[self.mapView addGestureRecognizer:tapInterceptor];
WildcardGestureRecognizer is in the first linked answer. Calling mapView:viewForOverlay: won't be cheap; adding a local cache of those views would help.
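A minimal sketch of such a cache (the names are my own, not from either linked answer); an NSMapTable with weak keys lets entries disappear along with their overlays:

// Created once, e.g. in viewDidLoad:
// self.overlayViewCache = [NSMapTable weakToStrongObjectsMapTable];
- (MKOverlayView *)cachedViewForOverlay:(id <MKOverlay>)overlay
{
    MKOverlayView *view = [self.overlayViewCache objectForKey:overlay];
    if (!view) {
        view = [self.mapView viewForOverlay:overlay];
        if (view) {
            [self.overlayViewCache setObject:view forKey:overlay];
        }
    }
    return view;
}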
Just in case it might help some of you...
I couldn't find a way to do exactly that, but I added an annotation on my overlays (I needed one anyway to display some information), and then I could get the touch event on that annotation. I know it's not the best way to do it, but in my situation, and maybe yours, it works ;)!
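For what it's worth, a rough sketch of that workaround (my own illustration, not the poster's code); since MKOverlay conforms to MKAnnotation, each overlay already exposes a representative coordinate:

for (id <MKOverlay> overlay in self.mapView.overlays) {
    MKPointAnnotation *marker = [[MKPointAnnotation alloc] init];
    marker.coordinate = overlay.coordinate; // center-ish point of the overlay
    marker.title = @"More info";            // placeholder title for the callout
    [self.mapView addAnnotation:marker];
}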
I am trying to capture the tap event on my MKMapView so I can drop a MKPinAnnotation at the point where the user tapped. Basically, I have a map overlaid with MKOverlayViews (an overlay showing a building), and I would like to give the user more information about that overlay when they tap on it, by dropping a MKPinAnnotation and showing more information in the callout.
Thank you.
You can use a UIGestureRecognizer to detect touches on the map view.
Instead of a single tap, however, I would suggest looking for a double tap (UITapGestureRecognizer) or a long press (UILongPressGestureRecognizer). A single tap might interfere with the user trying to single-tap on the pin or callout itself.
In the place where you set up the map view (in viewDidLoad, for example), attach the gesture recognizer to the map view:
UITapGestureRecognizer *tgr = [[UITapGestureRecognizer alloc]
                               initWithTarget:self action:@selector(handleGesture:)];
tgr.numberOfTapsRequired = 2;
tgr.numberOfTouchesRequired = 1;
[mapView addGestureRecognizer:tgr];
[tgr release];
or to use a long press:
UILongPressGestureRecognizer *lpgr = [[UILongPressGestureRecognizer alloc]
                                      initWithTarget:self action:@selector(handleGesture:)];
lpgr.minimumPressDuration = 2.0; // user must press for 2 seconds
[mapView addGestureRecognizer:lpgr];
[lpgr release];
In the handleGesture: method:
- (void)handleGesture:(UIGestureRecognizer *)gestureRecognizer
{
    if (gestureRecognizer.state != UIGestureRecognizerStateEnded)
        return;

    CGPoint touchPoint = [gestureRecognizer locationInView:mapView];
    CLLocationCoordinate2D touchMapCoordinate =
        [mapView convertPoint:touchPoint toCoordinateFromView:mapView];

    MKPointAnnotation *pa = [[MKPointAnnotation alloc] init];
    pa.coordinate = touchMapCoordinate;
    pa.title = @"Hello";
    [mapView addAnnotation:pa];
    [pa release];
}
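One extra detail, in case the recognizer seems never to fire: MKMapView has gesture recognizers of its own, and you may need to allow simultaneous recognition. This is my addition, not part of the answer above; it assumes you set tgr.delegate = self and adopt UIGestureRecognizerDelegate:

- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer
    shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer
{
    // Let our tap/long press coexist with the map's built-in pan/zoom recognizers
    return YES;
}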
I set up a long press (UILongPressGestureRecognizer) in viewDidLoad:, but it only detects the very first touch. How can I set up the long press so that it detects every touch? (That is, I want the map to be ready at all times, waiting for the user to touch the screen to drop a pin.)
Here is the viewDidLoad: method:
- (void)viewDidLoad {
    [super viewDidLoad];
    mapView.mapType = MKMapTypeStandard;

    UILongPressGestureRecognizer *longPressGesture = [[UILongPressGestureRecognizer alloc] initWithTarget:self action:@selector(handleLongPressGesture:)];
    [self.mapView addGestureRecognizer:longPressGesture];
    [longPressGesture release];

    mapAnnotations = [[NSMutableArray alloc] init];
    MyLocation *location = [[MyLocation alloc] init];
    [mapAnnotations addObject:location];
    [self gotoLocation];
    [self.mapView addAnnotations:self.mapAnnotations];
}
and the handleLongPressGesture method:
- (void)handleLongPressGesture:(UIGestureRecognizer *)sender {
    // This is important if you only want to receive one tap-and-hold event
    if (sender.state == UIGestureRecognizerStateEnded)
    {
        NSLog(@"Released!");
        [self.mapView removeGestureRecognizer:sender];
    }
    else
    {
        // Here we get the CGPoint for the touch and convert it to latitude and
        // longitude coordinates to display on the map
        CGPoint point = [sender locationInView:self.mapView];
        CLLocationCoordinate2D locCoord = [self.mapView convertPoint:point toCoordinateFromView:self.mapView];
        // Then all you have to do is create the annotation and add it to the map
        MyLocation *dropPin = [[MyLocation alloc] init];
        dropPin.latitude = [NSNumber numberWithDouble:locCoord.latitude];
        dropPin.longitude = [NSNumber numberWithDouble:locCoord.longitude];
        // [self.mapView addAnnotation:dropPin];
        [mapAnnotations addObject:dropPin];
        [dropPin release];
        NSLog(@"Hold!!");
        NSLog(@"Count: %lu", (unsigned long)[mapAnnotations count]);
    }
}
If you want to use a single click/tap in the map view, here's a snippet of code I'm using (Cocoa and Swift):
let gr = NSClickGestureRecognizer(target: self, action: "createPoint:")
gr.numberOfClicksRequired = 1
gr.delaysPrimaryMouseButtonEvents = false // allows +/- button press
gr.delegate = self
map.addGestureRecognizer(gr)
In the gesture delegate method, a simple test to defer to the double-click gesture:
func gestureRecognizer(gestureRecognizer: NSGestureRecognizer, shouldRequireFailureOfGestureRecognizer otherGestureRecognizer: NSGestureRecognizer) -> Bool {
    let other = otherGestureRecognizer as? NSClickGestureRecognizer
    if (other?.numberOfClicksRequired > 1) {
        return true // allows double click
    }
    return false
}
You could also filter the gesture in other delegate methods if you wanted the map to be in various "states", one of which allowed the single tap/click.
For some reason, UIGestureRecognizer just didn't work for me in Swift. When I used the touchesEnded method, the touch's view came back as an MKNewAnnotationContainerView, and it seemed that this MKNewAnnotationContainerView blocked my MKMapView. Fortunately, it's a subview of MKMapView, so I looped through MKNewAnnotationContainerView's superviews up to self.view to get the MKMapView, and I managed to pin the map view by tapping.
Swift 4.1
override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
    let t = touches.first
    print(t?.location(in: self.view) as Any)
    print(t?.view?.superview?.superview.self as Any)
    print(mapView.self as Any)

    // Walk up the superview chain until we reach the map view (or the root view)
    var tempView = t?.view
    while tempView != self.view {
        if tempView != mapView {
            tempView = tempView?.superview!
        } else if tempView == mapView {
            break
        }
    }

    let convertedCoor = mapView.convert((t?.location(in: mapView))!, toCoordinateFrom: mapView)
    let pin = MKPointAnnotation()
    pin.coordinate = convertedCoor
    mapView.addAnnotation(pin)
}