I'm building an iPhone app for iOS 5.
In it, I expand and reload the UITableView while recognizing a pinch gesture.
It works great in the simulator, but on the device it is very slow. For example, on the device the rows only expand after UIGestureRecognizerStateEnded, whereas in the simulator the rows expand and reload during UIGestureRecognizerStateChanged itself.
Any suggestions for memory issues?
My code is here:
if (pinchRecognizer.state == UIGestureRecognizerStateBegan) {
    self.initialPinchHeight = rowHeight;
    [self updateForPinchScale:pinchRecognizer.scale];
}
else if (pinchRecognizer.state == UIGestureRecognizerStateChanged) {
    [self updateForPinchScale:pinchRecognizer.scale];
}
else if ((pinchRecognizer.state == UIGestureRecognizerStateCancelled) ||
         (pinchRecognizer.state == UIGestureRecognizerStateEnded)) {
}
- (void)updateForPinchScale:(CGFloat)scale
{
    CGFloat newHeight = round(MAX(self.initialPinchHeight * scale, DEFAULT_ROW_HEIGHT));
    rowHeight = round(MIN(30.0, newHeight));
    /*
     Switch off animations during the row height resize, otherwise there is a lag before the user's action is seen.
     */
    BOOL animationsEnabled = [UIView areAnimationsEnabled];
    [UIView setAnimationsEnabled:NO];
    [self.tableView beginUpdates];
    NSArray *visibleRows = [self.tableView indexPathsForVisibleRows];
    [self.tableView reloadRowsAtIndexPaths:visibleRows withRowAnimation:UITableViewRowAnimationNone];
    [self.tableView endUpdates];
    [UIView setAnimationsEnabled:animationsEnabled];
}
Before trying to figure out what to optimize, you should measure where the problem is. You can do this using the Time Profiler and Core Animation instruments. Use Xcode's Product menu and select Profile. Make sure you profile while attached to the device, which, as you have noticed, has different performance characteristics than the simulator.
The Time Profiler instrument will help you identify work done on the CPU. The Core Animation instrument will help you identify work being done by Core Animation both on the CPU and the GPU. Pay attention to the Core Animation instrument's Debug Options checkboxes. They're a little cryptic, but they will help you visually identify the parts of your UI that are making Core Animation do a lot of work.
Documentation on the different instruments available for iOS is here.
I also recommend the WWDC video covering Core Animation.
I am trying to use the code Apple provided in the demo ARKit app for plane detection, but it isn't working consistently: in some cases it detects the surface perfectly, but in others it does not detect the plane at all. I also noticed the same thing happening with plane detection in the demo ARKit app itself.
When it detects a plane surface, the yellow square closes, but that is not the case every time. Has anyone faced the same issue? How can I make this plane detection behavior consistent?
Plane detection depends a lot on real world conditions. You need good lighting, a surface that has a decent amount of visible detail, and a decent amount of clear flat space. For example, a plain white table or a black tablecloth makes plane detection much slower / less reliable. A wood desk with visible grain works much better, but not if it's cluttered with keyboards and mice and cables and devices (not that any of us would have a desk like that, of course...).
Plane detection involves motion and parallax triangulation, too. If you point your device at a good surface (as described above), but only change your perspective on that surface by rotating the device (say, by spinning in your swivel chair), you're not feeding ARKit much more useful information than if you just held still. On the other hand, if you move the device side to side or up and down by at least several inches, its perspective on the surface will gain some parallax, which will speed/improve plane detection.
Update: If you're developing an app that depends on plane detection, it helps to cue the user to perform these motions. The third-party demos shown in the labs at WWDC17 had some great app-specific ways to do this: Lego had a little minigame of guiding a toy helicopter in for a landing; The Walking Dead told the player to search the floor for zombie footprints; etc.
Add more light to the room. ARKit works better in a well-lit room.
You can't directly influence plane detection. Wait for the official iOS 11 release.
import UIKit
import SceneKit
import ARKit

class ViewController: UIViewController {

    @IBOutlet var sceneView: ARSCNView!
    let configuration = ARWorldTrackingConfiguration()

    override func viewDidLoad() {
        super.viewDidLoad()
        self.sceneView.debugOptions = [SCNDebugOptions.showFeaturePoints, SCNDebugOptions.showWorldOrigin]
        self.sceneView.delegate = self
        self.configuration.planeDetection = .horizontal
        self.sceneView.session.run(configuration)
    }
}
// MARK: - ARSCNViewDelegate -
extension ViewController: ARSCNViewDelegate {

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard anchor is ARPlaneAnchor else {
            return
        }
        print("Plane anchor detected")
    }

    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard anchor is ARPlaneAnchor else {
            return
        }
    }

    func renderer(_ renderer: SCNSceneRenderer, didRemove node: SCNNode, for anchor: ARAnchor) {
        guard anchor is ARPlaneAnchor else {
            return
        }
    }
}
When I write my animation code within beginAnimations-commitAnimations blocks I get a bouncing effect; however, I don't get the same effect when I do the same animation with the method named in the subject. Here are the two ways to do what I want:
[UIView beginAnimations:nil context:nil];
[UIView setAnimationDelay:0.5];
[UIView setAnimationDuration:1];
[UIView setAnimationCurve:UIViewAnimationCurveEaseIn];
[UIView setAnimationRepeatAutoreverses:YES];
[UIView setAnimationRepeatCount:2];
[UIView setAnimationDelegate:self];
[UIView setAnimationDidStopSelector:@selector(resetTheChickenProperties)];
theChicken.frame = CGRectMake(15, 330, 62, 90);
[UIView commitAnimations];
Done the way shown above, the image (it's an egg) goes down in the y direction until it hits the ground and bounces back; the bouncing effect is clearly observed. But if I do the same thing with the animateWithDuration:delay:options:animations:completion: method, the egg does not bounce. It rather seems to hang on a spring.
OK, I've found the subtle detail everyone needs to take note of in order to get animations and transitions working with the block-based method available in iOS 4 and later. When specifying the animation/transition options for the method, we must use the constants with the word "Option" in them. So instead of writing
UIViewAnimationCurveEaseIn|UIViewAnimationTransitionCurlUp
we should write
UIViewAnimationOptionCurveEaseIn|UIViewAnimationOptionTransitionCurlUp
After fixing that, the animation worked just fine and I was able to get the real bouncing effect.
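For comparison, here is a sketch of how the same animation might look with the block-based API, reusing theChicken and resetTheChickenProperties from the question:
[UIView animateWithDuration:1
                      delay:0.5
                    options:(UIViewAnimationOptionCurveEaseIn |
                             UIViewAnimationOptionAutoreverse |
                             UIViewAnimationOptionRepeat)
                 animations:^{
                     // The repeat count can still be set inside the animations block.
                     [UIView setAnimationRepeatCount:2];
                     theChicken.frame = CGRectMake(15, 330, 62, 90);
                 }
                 completion:^(BOOL finished) {
                     [self resetTheChickenProperties];
                 }];
Note that UIViewAnimationOptionAutoreverse only takes effect when combined with UIViewAnimationOptionRepeat.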
I'm analyzing an image, which takes some time, and meanwhile I want to display a progress indicator. For this I'm using MBProgressHUD.
It almost works... but I get this error: "Modifying layer that is being finalized". I guess it's because I call pushViewController from outside the main thread. Am I right? Any ideas on how to correct this issue?
My code:
- (IBAction)buttonReadSudoku:(id)sender
{
    mbProgress = [[MBProgressHUD alloc] initWithView:self.view];
    mbProgress.labelText = @"Läser Sudoku";
    [self.view addSubview:mbProgress];
    [mbProgress setDelegate:self];
    [mbProgress showWhileExecuting:@selector(readSudoku) onTarget:self withObject:nil animated:YES];
}
- (void)readSudoku
{
    UIImage *image = imageView.image;
    image = [ImageHelpers scaleAndRotateImage:image];
    NSMutableArray *numbers = [SudokuHelpers ReadSudokuFromImage:image];
    sudokuDetailViewController = [[SudokuDetailViewController alloc] init];
    [sudokuDetailViewController setNumbers:numbers];
    [[self navigationController] pushViewController:sudokuDetailViewController animated:YES];
}
Define a new method to push your detail view controller and use -performSelectorOnMainThread:withObject:waitUntilDone: to perform it on the main thread. Don't try to make any UI changes from other threads.
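For example, a minimal sketch of that approach, using the question's own classes (the helper name pushDetailViewController: is made up here):
- (void)pushDetailViewController:(SudokuDetailViewController *)controller
{
    // UIKit calls such as this push are only safe on the main thread.
    [[self navigationController] pushViewController:controller animated:YES];
}
Then, at the end of readSudoku, replace the direct push with:
[self performSelectorOnMainThread:@selector(pushDetailViewController:)
                       withObject:sudokuDetailViewController
                    waitUntilDone:NO];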
All UI changes must be made on the main thread, as you note. Rather than having your off-main-thread method make any changes to the UI, send an NSNotification to the current view controller telling it to do the UI work.
This is an especially good route if you're crossing an MVC border, or if you already have a view controller that knows what to do, so that writing a separate method would result in duplicate code.
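A rough sketch of that idea (the notification name and handler are invented for illustration; note that NSNotifications are delivered on the thread they are posted from, so hop to the main thread before posting):
// In the view controller, register once, e.g. in viewDidLoad:
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(sudokuReadDidFinish:)
                                             name:@"SudokuReadDidFinishNotification"
                                           object:nil];

// Handler; does the UI work on the main thread:
- (void)sudokuReadDidFinish:(NSNotification *)notification
{
    [[self navigationController] pushViewController:sudokuDetailViewController animated:YES];
}

// At the end of the worker method:
dispatch_async(dispatch_get_main_queue(), ^{
    [[NSNotificationCenter defaultCenter] postNotificationName:@"SudokuReadDidFinishNotification"
                                                        object:self];
});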
Up to iOS 3.2, I used this kind of code to load UIImageView image in background, and it worked fine...
Code:
- (void)decodeImageName:(NSString *)name
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    UIImage *newImage = [UIImage imageNamed:name];
    [myImageView setImage:newImage];
    [pool release];
}
...
[self performSelectorInBackground:@selector(decodeImageName:) withObject:@"ID"]
... even though [UIImageView setImage:] is not thread-safe!
But since iOS 4 it doesn't work any more... Images appear on screen two seconds after the setImage call. And if I do a [myImageView performSelectorOnMainThread:@selector(setImage:) withObject:newImage waitUntilDone:YES] instead of [myImageView setImage:newImage], images appear immediately but seem to be re-decoded on the fly (ignoring the previous [UIImage imageNamed:], which should already have decoded the image data), causing a pause on my main thread... even though the documentation says "The underlying image cache is shared among all threads."
Any thoughts?
Don't do it in the background! It's not thread-safe. Since a UIImageView is also an NSObject, using -performSelectorOnMainThread:withObject:waitUntilDone: on it might work, like:
[myImageView performSelectorOnMainThread:@selector(setImage:) withObject:newImage waitUntilDone:NO];
And it's UIImage that is newly thread-safe; UIImageView is still not thread-safe.
performSelectorInBackground: runs a selector on a background thread, yet setImage: is a UI method, and UI methods should only be run on the main thread. I don't have insight into this particular problem, but that is my first gut feeling about this code, and it may be that iOS 4 handles the (unsupported) practice of running UI methods on background threads somehow differently.
If you're using iOS 4.0, you should really consider reading up on blocks and GCD. Using those technologies, you can simply replace your method with:
- (void)decodeImageName:(NSString *)name
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    UIImage *newImage = [UIImage imageNamed:name];
    dispatch_async(dispatch_get_main_queue(), ^{
        [myImageView setImage:newImage];
    });
    [pool release];
}
Let's quote:
@property(nonatomic, readonly) CGImageRef CGImage
Discussion
If the image data has been purged because of memory constraints, invoking this method forces that data to be loaded back into memory. Reloading the image data may incur a performance penalty.
So you might be able to just call image.CGImage. I don't think CGImages are lazy.
If that doesn't work, you can force a render with something like
// Possibly only safe in the main thread...
UIGraphicsBeginImageContext((CGSize){1,1});
[image drawInRect:(CGRect){1,1}];
UIGraphicsEndImageContext();
Some people warn about thread-safety. The docs say UIGraphics{Push,Pop,GetCurrent}Context() are main-thread-only but don't mention anything about UIGraphicsBeginImageContext(). If you're worried, use CGBitmapContextCreate and CGContextDrawImage.
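A minimal sketch of that Core Graphics route, which forces the decode without touching UIKit (the bitmap parameters here are one reasonable choice, not the only one):
CGImageRef cgImage = image.CGImage;
size_t width = CGImageGetWidth(cgImage);
size_t height = CGImageGetHeight(cgImage);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// A throwaway bitmap context; drawing the image into it forces the data to be decoded.
CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, 0, colorSpace,
                                             kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Host);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);
CGContextRelease(context);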
I'm writing a mobile phone game using j2me. In this game, I am using multiple Canvas objects.
For example, the game menu is a Canvas object, and the actual game is a Canvas object too.
I've noticed that, on some devices, when I switch from one Canvas to another, e.g from the main menu to the game, the screen momentarily "flickers". I'm using my own double buffered Canvas.
Is there any way to avoid this?
I would say that using multiple canvases is generally bad design; on some phones it will even crash. The best way really is to use one canvas and track the state of the application. Then, in the paint method, you would have:
protected void paint(final Graphics g) {
    if (menu) {
        paintMenu(g);
    } else if (game) {
        paintGame(g);
    }
}
There are better ways to handle application state with screen objects that would make the design cleaner, but I think you get the idea :)
/JaanusSiim
Do you use double buffering? If the device itself does not support double buffering, you should define an off-screen buffer (Image), paint to it first, and then paint the end result to the real screen. Do this for each of your canvases. Here is an example:
public class MyScreen extends Canvas {

    private Image osb;
    private Graphics osg;
    //...

    public MyScreen()
    {
        // if device is not double buffered
        // use image as an off-screen buffer
        if (!isDoubleBuffered())
        {
            osb = Image.createImage(screenWidth, screenHeight);
            osg = osb.getGraphics();
            osg.setFont(defaultFont);
        }
    }

    protected void paint(Graphics graphics)
    {
        if (!isDoubleBuffered())
        {
            // do your painting on the off-screen buffer first
            renderWorld(osg);
            // once done, paint it as an image on the real screen
            graphics.drawImage(osb, 0, 0, Tools.GRAPHICS_TOP_LEFT);
        }
        else
        {
            osg = graphics;
            renderWorld(graphics);
        }
    }
}
A possible fix is to synchronise the switch using Display.callSerially(). The flicker is probably caused by the app attempting to draw to the screen while the switch of the Canvas is still ongoing. callSerially() is supposed to wait for the repaint to finish before attempting to call run() again.
But all this is entirely dependent on the phone since many devices do not implement callSerially(), never mind follow the implementation listed in the official documentation. The only devices I've known to work correctly with callSerially() were Siemens phones.
Another possible attempt would be to put a Thread.sleep() of something huge like 1000 ms, making sure that you've called your setCurrent() method beforehand. This way, the device might manage to make the change before the displayable attempts to draw.
The most likely problem is that it is a device issue and the guaranteed fix to the flicker is simple - use one Canvas. Probably not what you wanted to hear though. :)
It might be a good idea to use the GameCanvas class if you are writing a game. It is much better suited for that purpose and, when used properly, should solve your problem.
Hypothetically, using one canvas with state-machine code for your application is a good idea. However, the only device I have to test applications on (a MOTO V3) crashes at resource-loading time simply because there's too much code to be loaded in one GameCanvas (I haven't tried with Canvas). It's as painful as it is real, and at the moment I haven't found a solution to the problem.
If you're lucky enough to have a good number of devices to test on, it is worth implementing both approaches and pretty much making versions of your game for each device.