Using Retina display vs. binary size - iOS 4

I'm currently updating my app's images so they display nicely on the iPhone 4's Retina display. Everything works well and I'm quite happy with it.
However, what concerns me is the increase in binary size... (it went from 2 MB to 4 MB). Have you found a way to keep nice images while keeping your binaries at a decent size?
Tips or advice would be welcome!

An extra 2 MB doesn't matter at all on a 32 GB device. 4 MB is about 1/250th of a GB. :) (Okay, it would let the user sync half an MP3 less.)
I always delete non-Retina apps. They suck and are ugly. If you don't care about ugliness, I'd suggest you quit iPhone development and start Windows Phone development.

It's sometimes worth running something like pngcrush on your image files. It should result in zero visible difference but often results in far smaller files.

Heap Generation 2 and Large Object Heap climbs

I am not sure whether I am posting to the right Stack Overflow forum, but here goes.
I have a C# desktop app. It receives images from four analogue cameras, tries to detect motion, and saves the frame when motion is found.
When I leave the app running over, say, a 24-hour cycle, I notice the private working set has climbed by almost 500% in Task Manager.
Now, I know using Task Manager is not a reliable measure, but it does give me an indication that something is wrong.
To that end I purchased the dotMemory profiler from JetBrains.
I have used its tools to determine that generation 2 of the heap grows a lot in size, and then, to a lesser degree, the Large Object Heap as well.
The latter is a surprise, as the image size is 360x238 and the byte array size is always less than 20 KB (objects normally only land on the Large Object Heap once they reach roughly 85,000 bytes).
So, my issues are:
Should I explicitly call GC.Collect(2) for instance?
Should I be worried that my app is somehow responsible for this?
Andrew, my recommendation is to take a memory snapshot in dotMemory, then explore it to find what retains most of the memory. This video will help you. If you're not sure about GC.Collect, you can just press the "Force GC" button; it will collect all available garbage in your app.

Capture OpenGL window in X11 with fast framerate - possible?

I have an 800x600 OpenGL application running on my Linux machine (X11). The content of this application (the rendered image) should be exported over the network to another PC.
First of all, I want to know whether it is possible to take snapshots of the application's window at about 30 Hz, save them to JPEG and export them to the other machine via HTTP or whatever (like IP cameras do). Is it possible to read the graphics card's memory (Radeon HD 5800) in a fast way so that I can get a framerate of about 30 pictures per second?
If you're willing to tolerate some latency, pixel buffer objects (PBOs) should get you some decent read-back throughput (see the sketch below).
libjpeg-turbo looks like a good solution for high-speed JPEG encoding.
If you don't have the source to the app you're trying to monitor then LD_PRELOAD hacks combined with the above should work.
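To make the PBO suggestion concrete, here is a minimal sketch of the double-buffered read-back pattern. The question is about a native C/X11 app, but this sketch uses Java with LWJGL purely for illustration (an LWJGL setup with a current GL context is assumed); the equivalent C calls map one-to-one.

```java
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL12.GL_BGRA;
import static org.lwjgl.opengl.GL15.*;
import static org.lwjgl.opengl.GL21.GL_PIXEL_PACK_BUFFER;

import java.nio.ByteBuffer;

/** Double-buffered PBO read-back: start reading frame N while frame N-1 is mapped. */
final class PboReader {
    private final int width, height;
    private final int[] pbo = new int[2];
    private int index = 0;
    private boolean primed = false;

    PboReader(int width, int height) {
        this.width = width;
        this.height = height;
        for (int i = 0; i < 2; i++) {
            pbo[i] = glGenBuffers();
            glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[i]);
            glBufferData(GL_PIXEL_PACK_BUFFER, (long) width * height * 4, GL_STREAM_READ);
        }
        glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    }

    /** Call once per frame after rendering; hands the previous frame to the encoder. */
    void captureFrame() {
        int previous = (index + 1) % 2;

        // Kick off an asynchronous read of the current frame into pbo[index].
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[index]);
        glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, 0L);

        if (primed) {
            // Map the PBO filled during the previous frame; by now that copy should
            // be done, so this avoids the stall of a direct glReadPixels to memory.
            glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[previous]);
            ByteBuffer pixels = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
            if (pixels != null) {
                // ... hand `pixels` to the JPEG encoder (e.g. libjpeg-turbo) here ...
                glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
            }
        }
        glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

        primed = true;
        index = previous;
    }
}
```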
You may want to take a look at VirtualGL which does exactly what you aim for.

Reloading Flash 17 times causes error #2046 and requires a browser restart

I am encountering some very strange behaviour with a Flex 4.1 app I am writing, and it is getting in the way of testing. It seems that I can reload the app 16 times, and then on the 17th the loading process fails with
Error #2046: The loaded file did not have a valid signature
It seems to be consistently happening on the 17th reload on both Firefox 5.0 and Chrome 12. I am not sure if it's relevant, but I am running Flash Player v10.2.159.1 (also happens with 10.3.181.34) on Ubuntu 10.04. Happens with both regular and debugger versions of the player. When I run the app on Windows FF5, it doesn't seem to happen. Closing the current browser window does not seem to fix it. The only way around it is to completely close all browser windows and restart the browser. And then again after 16 successful loads, the 17th fails.
At this point I'm thinking of chalking it up as a Linux Flash bug, but I'd like to make sure and check whether anyone knows of something I should be doing to prevent this.
The user from this post seems to have had the same problem but I guess he didn't notice the pattern I have.
Any help will be greatly appreciated.
Ruy
== UPDATE ==
I just realized that after my app starts throwing the 2046 error, trying to load any other Flash that uses signed RSLs also shows the 2046 error (e.g. this app), which means the problem is not specific to my app and most likely related to the Flash cache or something of the sort.
Disclosure: I am a Flash Player Developer at Adobe.
This is unlikely to get much attention as it is Linux only and kind of an edge case: Probably annoying during dev work but very few users will reload the same page more than 16 times. It might also be a browser issue. But it is probably us :) I will look at the jira tomorrow and see if I can bump it up a bit, but I'll be honest in that it is really an edge case and unlikely to get much love. If you want to increase your chances make sure to add the most simple .swf test case you can make to the bug. Also please double check if it still happens with the latest beta.
I also just took a look at the earlier bug reports and forum posts; you should probably post this as a Flash Player bug, not a Flex one.
A long-shot guess, but it sounds similar to a problem we had: in the project properties, under Flex Build Path > Framework Linkage, change the setting to "Merged into code". This fixed a problem very similar to what you are describing, though I wish I knew exactly what the cause was. Good luck!
tl;dr: No idea on the cause, posting random possibility in hope it might give someone else an idea or two for testing.
Considering that it seems to be an unresolved bug in Adobe's issue tracker, it's unlikely that you will get any definitive answer here. Since it occurs on both Firefox and Chrome, let's rule out browser bugs and assume it is in either some common library (Flash) or an OS API (the Linux kernel implementation). A comment in one of the Jira issues specifically mentions that killing the Flash process fixes it, so it's a Flash issue and not an OS bug.
The most interesting thing I can see here is your observation that it succeeds exactly 16 times before failing to load. Time for some speculation, from someone who has never worked on kernel or crypto dev:
With 2048-bit RSA keys and a 32k cache for storing them (if that means 32 kilobits, exactly 16 keys fit: 32,768 / 2,048 = 16), adding a 17th would fail. So one conjecture is that each time this file is loaded, Flash caches the signed value (possibly a hashed version) for some reason, maybe to keep track of allowed and used security permissions. If these entries are never removed, then once the cache is full all further loads will fail, assuming caching the signature is part of checking it.
Things you can experiment with:
Reduce the size of the app to see whether the page can be reloaded more often (as suggested by stackfish)
Count the number of signed RSLs used and check whether it is a power or multiple of 2 (maybe others get the error after 32 page loads if they use half the number of signed libs?)
Check whether the Linux Flash plugin has an option to increase the credentials cache or something similar (or decrease it, just to see whether it changes the number of loads; if so, it could be related to the problem)
I expect that to actually find a solution, you'd have to dive into the library-loading code and look at all constants related to loading signed libs that are 4, 16, or a multiple of 16 to see whether they might be responsible. In short, this is unlikely to be solvable by anyone outside the Flash dev team, IMHO :/
This behavior could be related to a memory leak caused either by the Flex implementation, or the browser plugin. Firefox is notorious for not cleaning up memory anyway and the footprint will continue to grow the longer you have the same browser window open.
If you reduce the size of your flex app to produce something very tiny, does the number of times you can reload the page go up?
Error #2046 on Windows Vista, 64-bit machine with a 1000 MB ATI Radeon video card.
The problem occurs only in MSN Video so far.
I met the same problem when viewing PPT slides on icourse163.org: when I opened the course site I could not see the PPT, but Chrome could display it, even though both had the same Flash version (32.0.0.344). Then I copied the contents of the tar.gz file downloaded from Adobe (usr/* to /usr) and that solved it. Hope this can help you.

Pocket PC 2003 C# Performance Issues...Should I Thread It?

Environment
Windows XP SP3 x32
Visual Studio 2005 Standard Edition
Honeywell Dolphin 9500 Pocket PC/Windows Mobile 2003 Platform
Using the provided Honeywell Dolphin 9500 VS2005 SDK
.NET Framework 1.1 and .NET Compact Framework 1.0 SP3
Using VC#
Problem
When I save an image from the built-in camera and the Honeywell SDK ImageControl to the device's storage card or internal memory, it takes 6-7 seconds.
I am currently saving the image as a PNG but have the option of a BMP or JPG as well.
Relevant lines in the code: 144-184 and 222, specifically 162,163 and 222.
Goal
I would like to reduce that time down to something like 2 or 3 seconds, and even less if possible.
As a secondary goal, I am looking for a profiling suite for Pocket PC 2003 devices specifically supporting the .NET Compact Framework Version 1.0. Ideally free but an unfettered short tutorial would work as well.
Things I Have Tried
I looked into asynchronous I/O via System.Threading a little bit but I do not have the experience to know whether this is a good idea, nor exactly how to implement threading for a single operation.
With threading implemented as it is in the code below, there seems to be a trivial speed increase of maybe a second or less. However, something on the next Form requires the image, which may still be in the middle of being saved, and I do not really know how to mitigate the wait or handle that scenario at all.
EDIT: Changing the save format from PNG to BMP or JPG, with the threading, seems to reduce the save time considerably.
Code
http://friendpaste.com/3J1d5acHO3lTlDNTz7LQzB
Let me know if the code should just be posted here in code tags. It is a little long (~226 lines) so I went ahead and friendpasted it as that seemed to be acceptable in my last post.
By changing the save format from PNG to BMP and including the Threading code shown in the Code link, I was able to reduce the save time to ~1 second.
You're at the mercy of the Honeywell SDK for this one, since their control is doing the actual saving of the image. Calling this on a separate thread (i.e. not the UI thread) isn't going to help at all (as you've found out), and it will actually make things more difficult for you since you need to wait until the save task is completed before moving on to the next form.
The only suggestion I can make is to make sure you're saving the image to internal memory (and not to the SD card), since writing to an SD card usually takes significantly longer than writing to memory. Or see if you can get technical support from Honeywell - 6-7 seconds seems way too long for a task like this.
Or see if the Honeywell SDK lets you get the image as a byte array (instead of saving to disk). If this call returns in less than 6-7 seconds, you can handle persisting it yourself.
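The question is C# on the Compact Framework, but the pattern the answers point at (write the bytes on a worker thread, then make the next form wait until the write has finished) is language-agnostic. Here is a rough sketch in Java; the class and method names are made up for illustration.

```java
import java.io.FileOutputStream;
import java.io.IOException;

// Hypothetical helper: persist an already-captured frame off the UI thread,
// then let the next form block (briefly) until the file is really on disk.
final class BackgroundSaver {
    private Thread worker;

    /** Kick off the save without blocking the UI thread. */
    void savePhoto(final byte[] imageBytes, final String path) {
        worker = new Thread(new Runnable() {
            public void run() {
                try (FileOutputStream out = new FileOutputStream(path)) {
                    out.write(imageBytes);   // the slow part: storage-card I/O
                } catch (IOException e) {
                    e.printStackTrace();     // real code would surface this to the UI
                }
            }
        });
        worker.start();
    }

    /** Call from the next form before it touches the saved file. */
    void waitUntilSaved() throws InterruptedException {
        if (worker != null) {
            worker.join();                   // returns immediately if already done
        }
    }
}
```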

JavaME - LWUIT images eat up all the memory

I'm writing a MIDlet using LWUIT and images seem to eat up incredible amounts of memory. All the images I use are PNGs packed inside the JAR file. I load them using the standard Image.createImage(URL) method. The application has a number of forms, each with a couple of labels and buttons, and I am fairly certain that only the active form is kept in memory (I know it isn't very trustworthy, but Runtime.freeMemory() seems to confirm this).
The application worked well at 240x320 resolution, but moving it to 480x640 and using appropriately larger images for the UI started causing out-of-memory errors. What the application does, among other things, is download remote images. It seems to work fine until it gets to this point: after downloading a couple of PNGs and returning to the main menu, the out-of-memory error appears. Naturally, I looked into the amount of memory the main menu uses, and it was pretty shocking. It's just two labels with images and four buttons. Each button has three images used for style.setIcon, setPressedIcon and setRolloverIcon. The images range in size from 15 to 25 KB, but after removing two of the three images used by every button (so 8 images in total), Runtime.freeMemory() showed a stunning 1 MB decrease in memory usage.
The way I see it, either I have a whole lot of memory leaks (which I don't think I do, but memory leaks aren't exactly known for being easy to track down), I am doing something terribly wrong with image handling, or there's really no problem involved and I just need to scale down.
If anyone has any insight to offer, I would greatly appreciate it.
Mobile devices are usually very low on memory, so you have to use some tricks to conserve it.
We had the same problem in a project of ours and we solved it like this.
for downloaded images:
Make a cache where you put your images. When you need an image, check whether it is in the cache map; if it isn't, download it and put it there; if it is, use it. If memory is full, remove the oldest image in the cache map and try again (see the sketch after this answer).
for other resource images:
Keep them in memory only for as long as they are visible; once they aren't, break the reference and the GC will do the cleanup for you.
Hope this helps.
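A minimal sketch of the download cache described above, assuming LWUIT's com.sun.lwuit.Image and plain Java ME collections. The download(url) helper is a placeholder for whatever HTTP-plus-Image.createImage code the app already uses, and eviction is simplified to a fixed entry count rather than a real memory check.

```java
import java.io.IOException;
import java.util.Hashtable;
import java.util.Vector;
import com.sun.lwuit.Image;

// Rough sketch of the cache described above. download(url) stands in for the
// real fetch code; replace it with the app's existing networking.
final class ImageCache {
    private final int maxEntries;
    private final Hashtable images = new Hashtable(); // url -> Image
    private final Vector order = new Vector();        // oldest url first

    ImageCache(int maxEntries) {
        this.maxEntries = maxEntries;
    }

    synchronized Image get(String url) throws IOException {
        Image cached = (Image) images.get(url);
        if (cached != null) {
            return cached;                        // hit: reuse the existing instance
        }
        while (order.size() >= maxEntries) {
            // Cache full: drop the oldest entry so the GC can reclaim its pixels.
            String oldest = (String) order.elementAt(0);
            order.removeElementAt(0);
            images.remove(oldest);
        }
        Image fresh = download(url);              // placeholder for the real fetch
        images.put(url, fresh);
        order.addElement(url);
        return fresh;
    }

    private Image download(String url) throws IOException {
        // e.g. open an HttpConnection, read the bytes, then Image.createImage(...)
        throw new IOException("download not implemented in this sketch");
    }
}
```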
There are a few things that might be happening here:
You might have seen the memory used before garbage collection, which doesn't correspond to the actual memory used by your app.
Some third-party code you are running might be pooling internal data structures to minimize allocation. While pooling is a viable strategy, it can sometimes look like a leak. In that case, look for an API to 'close' or 'dispose' of the objects you don't need.
Finally, you might really have a leak. In this case you need to get more details on what's going on in the emulator VM (though keep in mind that it is not necessarily the same as the phone VM).
Make sure that your emulator uses JRE 1.6 as the backing JVM. If you need it to use the runtime libraries from an earlier JDK, use -Xbootclasspath:<path-to-rt.jar>.
Then, after your application gets in the state you want to see, do %JAVA_HOME%\bin\jmap -dump:format=b,file=heap.bin <pid> (if you don't know the id of your process, use jps)
Now you've got a dump of the JVM heap. You can analyze it with jhat (comes with the JDK, a bit difficult to use) or some third party profilers (my preference is YourKit - it's commercial, but they have time-limited eval licenses)
I had a similar problem with LWUIT at Java DTV. Did you try flushing the images when you don't need them anymore (getAWTImage().flush())?
Use EncodedImage and resource files when possible (resource files use EncodedImage by default; read the Javadoc for it). The other comments are also correct that you need to actually observe the amount of memory; even high-RAM Android/iOS devices run out of memory pretty fast with multiple images.
Avoid scaling, which effectively eliminates the EncodedImage benefit.
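For illustration, a hedged sketch of loading a packaged PNG as an EncodedImage so that the compressed bytes, rather than a decoded ARGB buffer, are what stays resident. EncodedImage.create(byte[]) is assumed here; factory methods differ slightly between LWUIT versions, so check the Javadoc for yours.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import com.sun.lwuit.EncodedImage;

final class Images {
    /** Load a PNG from the JAR, keeping only the compressed bytes resident. */
    static EncodedImage loadEncoded(String resourcePath) throws IOException {
        InputStream in = Images.class.getResourceAsStream(resourcePath);
        if (in == null) {
            throw new IOException("missing resource: " + resourcePath);
        }
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        byte[] chunk = new byte[4096];
        int read;
        while ((read = in.read(chunk)) != -1) {
            buffer.write(chunk, 0, read);
        }
        in.close();
        // The EncodedImage holds the PNG bytes and decodes on demand,
        // instead of keeping a full decoded pixel buffer alive.
        return EncodedImage.create(buffer.toByteArray());
    }
}
```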
Did you consider that loading the same image from the JAR many times might be creating many separate Image objects (with identical contents) instead of reusing one instance per individual image? That is my first guess; a sketch of the reuse idea is below.
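A small sketch of that reuse idea, using the Image.createImage(String) loader the question already mentions: one shared Image per resource path instead of a fresh decode for every form.

```java
import java.io.IOException;
import java.util.Hashtable;
import com.sun.lwuit.Image;

// One shared Image per resource path, so every form reuses the same decoded pixels.
final class SharedImages {
    private static final Hashtable byPath = new Hashtable(); // path -> Image

    static synchronized Image get(String path) throws IOException {
        Image img = (Image) byPath.get(path);
        if (img == null) {
            img = Image.createImage(path); // decode once, then reuse everywhere
            byPath.put(path, img);
        }
        return img;
    }
}
```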
