Issue with map view delegate method 'mapView:regionDidChangeAnimated:' not being called - iOS 4

While using a map view in my application, the MKMapView delegate method 'mapView:regionDidChangeAnimated:' is sometimes not called.
It happens only when I drag the map; when I zoom in or zoom out it works perfectly. So it creates a problem with placing new annotations on the map while dragging it.
Here is my code in mapView:regionDidChangeAnimated::
int j = 0;

- (void)mapView:(MKMapView *)mapsView regionDidChangeAnimated:(BOOL)animated {
    zoomLevel = self.mapView.region.span.latitudeDelta;
    if (![appDelegate internetConnected]) {
        return;
    }
    if (appDelegate.isMapViewRegionChanged) {
        if (j == 0) {
            j++;
            return;
        } else {
            j = 0;
            appDelegate.isMapViewRegionChanged = FALSE;
            return;
        }
    }
    [self callGetMapViewWithObject:nil];
}
/*
The first boolean checks the Internet connection:
[appDelegate internetConnected]
The second condition returns early when we navigate from another view controller to the map view controller:
appDelegate.isMapViewRegionChanged
The third is a method that places new annotations:
[self callGetMapViewWithObject:nil];
*/
I checked all the conditions and booleans, and my code is not the reason for this bug, so it may be related to the regionDidChangeAnimated method itself.
While using the map in my app, about 20% of the time it behaves as if idle (the method is not called).
Can someone help me out with this?
Thank you in advance.

EDIT: It broke randomly, so I now call the function again (undoing what I said below), with no other changes... and it works. I feel like I'm flipping a coin.
I just had this happen because I have a subclassed MKMapView. I don't know whether you're subclassing it or not, but for some reason Apple's superclass methods, e.g. -(void)scrollViewDidScroll:, called super but were not intercepted properly and that call was skipped.
When I removed the "overridden" call, which was just a call to [super scrollView], it started working properly.
I don't know why Apple's code is broken that way (calling super doesn't have the same effect as not overriding it at all), but make sure you're not subclassing these:
UIScrollView methods
MKMapView methods...
or perhaps using the wildcard gesture recognizer kindly provided in the answer to why map views don't respond to touchesBegan/Moved etc. here: How to intercept touches events on a MKMapView or UIWebView objects?
If this doesn't help, ensure you don't have a view on top of the other views, improper delegates, or xibs that aren't arranged and hooked up - the usual stuff.
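For reference, a minimal sketch of the plain, unsubclassed setup being recommended - written in Swift rather than the thread's Objective-C, with reloadAnnotations() as a hypothetical stand-in for the poster's callGetMapViewWithObject: - just to show the baseline wiring in which the callback fires for both drags and zooms:

import UIKit
import MapKit

class MapViewController: UIViewController, MKMapViewDelegate {
    // A stock MKMapView, not a subclass, created in code.
    let mapView = MKMapView()

    override func viewDidLoad() {
        super.viewDidLoad()
        mapView.frame = view.bounds
        mapView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        mapView.delegate = self   // without this the region callbacks never fire
        view.addSubview(mapView)
    }

    // Called after both drags and zooms when nothing overrides the map's
    // internal scroll/gesture handling.
    func mapView(_ mapView: MKMapView, regionDidChangeAnimated animated: Bool) {
        reloadAnnotations()
    }

    // Hypothetical placeholder for the poster's callGetMapViewWithObject: logic.
    func reloadAnnotations() {
    }
}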

Related

Is there a way to dynamically switch between FetchRequest and SectionedFetchRequest?

I've been playing around with Core Data for the last couple of days, trying to build an app to review money spent on shopping. Right now it's still pretty simple, with just a single data model for the individual shops.
I have a list view displaying all of them, and I've integrated sorting into the list - first through older workarounds involving predicates, but then I found this video from this year's WWDC and basically just copied it. I've been really fascinated by the grouping feature of SectionedFetchRequest and wanted to integrate it while keeping the original non-sectioned list. So I thought I'd skip the FetchRequest in my list and just pass the results to the list view instead of the SortDescriptor:
MainView {
    ListView(descriptor: SortDescriptor)
}
ListView {
    FetchRequest(sortDescriptors: descriptor)
}
changed to:
MainView {
    ListView(FetchRequest(sortDescriptors: descriptor))
}
ListView {
    FetchedResults
}
But that still leaves me unable to simply press a button to turn sectioning on or off.
I'm kind of stuck on how to go on from here. The first idea that comes to mind is creating a wrapper around the ListView that decides which FetchRequest to send to the ListView based on a button's toggle state, like:
MainView {
    Wrapper(sortDescriptors, toggleState)
}
Wrapper {
    ListView(FetchRequest(sortDescriptors: descriptor))
}
ListView {
    FetchedResults
}
but I would still have the problem that I'd need two variables in my ListView: one for the normal FetchedResults and one for the SectionedFetchResults.
Does anyone have an idea how to handle this?
TL;DR: I want to dynamically switch between FetchRequest and SectionedFetchRequest.
According to the documentation you can't leave a SectionedFetchRequest unsectioned, so you have to support both. I would therefore make two separate subviews and show them in a parent view that has logic to control which one is shown. You would need to do this anyway if you are supporting pre-iOS 15 OSes.
MainView {
    if sectioned {
        ListViewSectioned(sortDescriptors: descriptor, sectionID: sectionID)
    } else {
        ListView(sortDescriptors: descriptor)
    }
}
ListView {
    FetchRequest(sortDescriptors: descriptor)
}
ListViewSectioned {
    SectionedFetchRequest(sectionIdentifier: sectionID, sortDescriptors: descriptor)
}
The main view doesn't have to know anything more to choose and set up the different list views. I didn't put an OS check in, but you will need that as well.
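As a rough illustration of that structure (a sketch only, assuming iOS 15 and a hypothetical Shop entity with non-optional name and category String attributes - adjust to your actual model):

import SwiftUI
import CoreData

struct MainView: View {
    @State private var sectioned = false

    var body: some View {
        VStack {
            Toggle("Group by category", isOn: $sectioned)
            if sectioned {
                SectionedShopList()
            } else {
                PlainShopList()
            }
        }
    }
}

struct PlainShopList: View {
    @FetchRequest(sortDescriptors: [SortDescriptor(\Shop.name)])
    private var shops: FetchedResults<Shop>

    var body: some View {
        List(shops) { shop in
            Text(shop.name)
        }
    }
}

struct SectionedShopList: View {
    // The section identifier must also lead the sort order.
    @SectionedFetchRequest(
        sectionIdentifier: \Shop.category,
        sortDescriptors: [SortDescriptor(\Shop.category), SortDescriptor(\Shop.name)]
    )
    private var sections: SectionedFetchResults<String, Shop>

    var body: some View {
        List(sections) { section in
            Section(section.id) {
                ForEach(section) { shop in
                    Text(shop.name)
                }
            }
        }
    }
}

The toggle simply swaps which child view is in the hierarchy, so each child owns its own fetch request and there is no need for two result variables in a single ListView.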

MonoTouch tracking movement over controls

I have a simple iPhone app with a simple view and a custom view as a child. The child is just a square painted on the main view.
I need to track a touch event that enters this child view but from a touch that started outside the view.
What I've tried so far is to add the TouchesBegan/TouchesMoved events to the parent view. I also tried to add them to the child controls directly, but that doesn't track any touches that were not initiated within that control.
The questions are:
a) Can I get the control from a position somehow?
b) Is this the best way of doing this, or is there another way?
Again, I'm including this video of gameplay for the game I'm trying to port (in my spare time). It's not a promotion attempt, but it illustrates what I'm trying to accomplish. I wrote the Win8 and WP7 versions of the game, so I'm not trying to copy another person's work here. :) Don't watch it if you don't want to know which game it is. The question is still valid without watching it.
http://www.youtube.com/watch?v=13IczvA7Ipo
Thanks
// Johan
I stumbled on the answer myself. In the parent I override TouchesMoved (and TouchesBegan and TouchesEnded, not shown here).
I then iterate over the subviews, check whether each one is the view I'm looking for by type, and then check whether its Frame contains the point. The code below is just the concept, of course.
public override void TouchesMoved (NSSet touches, UIEvent evt)
{
    base.TouchesMoved (touches, evt);
    UITouch touch = touches.AnyObject as UITouch;
    if (touch != null) {
        // Point expressed in the parent view's coordinate space,
        // the same space each subview's Frame is defined in.
        var rp = touch.LocationInView (this.View);
        foreach (var sv in this.View.Subviews)
        {
            // Only interested in the letter cells.
            if (!(sv is LetterControl))
                continue;
            if (sv.Frame.Contains (rp))
            {
                Console.WriteLine ("LetterControl found");
            }
        }
    }
}

MonoTouch - programming for Swipe Gesture

I am developing a control for an iPad application (my first time doing Apple development). It's a simple control that mimics a grid - it consists of a collection of UIViews (each of which represents a cell) all added to a parent UIView (in a grid-like fashion).
One of the requirements is to implement a swipe gesture - the user swipes across the grid to activate/deactivate cells, which corresponds to a 1/0 in the database.
I created a UISwipeGestureRecognizer and added it to each of the UIViews that represents a cell. That appears to be an incorrect approach, as it fires the event for the UIView in which the swipe originated but not across all the UIViews.
My understanding is that I need to attach the swipe gesture to the parent UIView which contains all these child UIViews. However, if I do that, how will I know which child UIView has been swiped over? Or is there any other approach that would make sense?
I know this thread is fairly old, but I created a Swipe extension method that might have helped.
View.Swipe(UISwipeGestureRecognizerDirection.Right).Event += Swipe_Event;

void Swipe_Event(ViewExtensions.SwipeClass sender, UISwipeGestureRecognizer recognizer)
{
    UIView view = sender.View; // do something with the view that was swiped
}
This may not answer your question, but I can speak to the approach I've taken here with a similar use case:
1) I would abandon UIScrollView and use UITableView. You'll notice that UITableView inherits from UIScrollView and has all the performance benefits of virtualization and cell/view re-use, which you'll find terribly useful as you work towards optimizing your app for performance on device.
2) Utilize the UITableViewCell's ContentView to create custom "grid" cells. Or better yet, utilize MonoTouch.Dialog if you're not required to create grid rows ad hoc.
3) Use this awesome class (props to @praeclarum) to set up gestures in MonoTouch. You essentially provide a UIGestureRecognizer as a generic argument. You can then use the LocationInView method to grab the point in the UITableView where the gesture occurred:
public void HandleSwipe(UISwipeGestureRecognizer recognizer)
{
    if (recognizer.State == UIGestureRecognizerState.Ended) {
        var point = recognizer.LocationInView(myTableView);
        var indexPath = myTableView.IndexPathForRowAtPoint(point);
        // do associated calculations here
    }
}
I think you're correct that the gesture recognizer has to be attached to the parent view. In the action method associated with the gesture recognizer, I think you can use the MonoTouch equivalent of CGRectContainsPoint() to determine whether the swipe occurred in a particular subview. I imagine you would have to iterate through the subviews until you find the one in which the swipe occurred. I'm not aware of a method that would immediately identify the swiped subview.
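As a sketch of that hit-testing idea - written here in Swift rather than MonoTouch, with a plain frame check against the recognizer's location, and with names that are illustrative rather than taken from the thread:

import UIKit

class GridViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // One recognizer on the parent grid view instead of one per cell.
        let swipe = UISwipeGestureRecognizer(target: self, action: #selector(handleSwipe(_:)))
        swipe.direction = .right
        view.addGestureRecognizer(swipe)
    }

    @objc func handleSwipe(_ recognizer: UISwipeGestureRecognizer) {
        guard recognizer.state == .ended else { return }
        // Location in the parent's coordinate space, the same space as each subview's frame.
        let point = recognizer.location(in: view)
        for cell in view.subviews where cell.frame.contains(point) {
            // "cell" is the subview the swipe landed in; toggle its active/inactive state here.
            print("swiped cell: \(cell)")
        }
    }
}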

Core Data: delay calling endUpdates till viewWillAppear

I have a Core Data app with a tab bar controller that displays 2 view controllers. If I add something in the first tab's view controller, it should display in the 2nd tab's VC. Both VCs are based on an NSFetchedResultsController backed by the same entity; the only difference is that one has a predicate and the 2nd VC doesn't.
This works fine with the standard template, and when data is added from the 1st VC it gets updated instantly in the 2nd tab via controllerWillChangeContent: and controllerDidChangeContent:. The problem is that if the user adds or deletes any rows in the 1st VC, when they come to the 2nd tab they don't see the rows animatedly inserted or deleted... everything is already there.
What I would like to do, in the 2nd tab's VC, is delay calling [self.tableView endUpdates] (which causes the animated insertion/deletion of rows in the table) until the user actually goes to that tab, in that VC's viewWillAppear:. I've tried this, but it doesn't seem to work:
- (void)controllerDidChangeContent:(NSFetchedResultsController *)controller
{
    tableviewUpdates = TRUE;
}

- (void)viewWillAppear:(BOOL)animated
{
    [super viewWillAppear:animated];
    if (tableviewUpdates) {
        tableviewUpdates = FALSE;
        [self.tableView endUpdates];
    }
}
This works if I add one row at a time and then switch to the 2nd tab, but not if I add multiple rows in the 1st tab and then switch.
Any help would be appreciated.
You're working against the purpose of the NSFetchedResultsController, which is to make updating the table view automatic and effortless.
I'm pretty sure, however, that if you override all the FRC delegate methods you can block all the automatic updates.
You might want to rethink this design. Are users really going to expect to see changes made in one view reenacted in a second? Will they understand they are watching a rewind of the previous changes, or will they intuitively think the app is doing something to their data on its own?
The standard UI grammar teaches users to expect that a change animates once and then subsequently just shows up in a standard display. I would suggest you test this design carefully with naive users before deploying such a non-standard interface.
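A minimal sketch of the "block the automatic updates" approach the answer mentions - in Swift, with a hypothetical Item entity, and with a plain reloadData() in viewWillAppear as a common substitute for the animated replay the question asks about:

import UIKit
import CoreData

class SecondTabViewController: UITableViewController, NSFetchedResultsControllerDelegate {
    // Hypothetical fetched results controller for an Item entity,
    // configured elsewhere with this object as its delegate.
    var fetchedResultsController: NSFetchedResultsController<Item>!

    // Leave the change-tracking callbacks empty so the FRC never drives
    // beginUpdates/endUpdates while this tab is offscreen.
    func controllerWillChangeContent(_ controller: NSFetchedResultsController<NSFetchRequestResult>) {}
    func controllerDidChangeContent(_ controller: NSFetchedResultsController<NSFetchRequestResult>) {}

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // The controller's fetchedObjects are already current; just redisplay them.
        tableView.reloadData()
    }
}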

Getting UITabBarController to work with Core Data

I've been reading this thread on Stack Overflow and have been trying to replicate the solution in my own project, with no success.
My project has 4 tabs. In my app delegate I do this:
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    Page1 *page1 = (Page1 *)[navController topViewController];
    Page2 *page2 = (Page2 *)[navController topViewController];
    Page3 *page3 = (Page3 *)[navController topViewController];
    Page4 *page4 = (Page4 *)[navController topViewController];
    page1.managedObjectContext = self.managedObjectContext;
    page2.managedObjectContext = self.managedObjectContext;
    page3.managedObjectContext = self.managedObjectContext;
    page4.managedObjectContext = self.managedObjectContext;
    [self.window makeKeyAndVisible];
    return YES;
}
In the originating thread it says I need to create an IBOutlet to each navController for each tab I want to use Core Data on.
While you can assign multiple delegates for the UINavigationController, the same is not true for the outlets; you can only ever supply ONE outlet for the navController.
I can get Page1 to work, but the other pages simply crash because of the missing IBOutlets.
Do I really need X IBOutlets for Y tabs, or can I do it another way?
Another issue is that in the originating thread the accepted answer is:
    Ideally you want to pass either the NSManagedObjectContext, NSFetchedResultsController or the relevant NSManagedObject "down" into the UIViewController.
But there is no code or example of how to do this.
Ideally, I do not want to use a singleton or use the app delegate all over the place.
Any confirmation and clarification would be great.
Thanks.
Your immediate problem has nothing to do with Core Data. You are assigning the same navigation controller to each tab, when you need a separate navigation controller for each tab; otherwise the navigation controller's hierarchy of views will get scrambled every time you change tabs.
The pattern recommended in the question you linked to is called "dependency injection", and it is the one that Apple recommends in most cases. However, in the case of tab bars or any other complex view/view-controller hierarchy, dependency injection can get too complicated. It's a particular issue with tab bars because you don't usually load all the tab views/view controllers when the app starts, but wait until each tab is selected before loading its elements.
Instead, you can use an alternative pattern that exploits the UIApplication object's singleton status. Since there is only one application object, there is only one application delegate object. That means that anywhere in the app you can make a call like this:
MyApplicationDelegate *appDelegate = (MyApplicationDelegate *)[[UIApplication sharedApplication] delegate];
... and always get the same application delegate object. Then, if you have the managed object context defined as a property of the app delegate, you can get the context just by:
theManagedObjectContext = appDelegate.managedObjectContext;
Add these two lines to every view controller and you can always be sure of getting the app delegate's managed object context.
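For completeness, a rough sketch of the dependency-injection route the question asks for an example of - shown in Swift rather than the thread's Objective-C, with Page1/Page2 as hypothetical view controllers that each declare a managedObjectContext property, and with a separate navigation controller per tab as described above:

import UIKit
import CoreData

class AppDelegate: UIResponder, UIApplicationDelegate {
    var window: UIWindow?
    var managedObjectContext: NSManagedObjectContext!   // set up by the usual Core Data stack code

    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        // Hypothetical view controllers, each with its own managedObjectContext property.
        let page1 = Page1()
        let page2 = Page2()
        page1.managedObjectContext = managedObjectContext
        page2.managedObjectContext = managedObjectContext

        // One navigation controller per tab so each tab keeps its own view hierarchy.
        let tabs = UITabBarController()
        tabs.viewControllers = [
            UINavigationController(rootViewController: page1),
            UINavigationController(rootViewController: page2)
        ]

        window = UIWindow(frame: UIScreen.main.bounds)
        window?.rootViewController = tabs
        window?.makeKeyAndVisible()
        return true
    }
}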
