Reloading Flash 17 times causes Error #2046 and requires a browser restart (Linux)

I am encountering some very strange behaviour with a Flex 4.1 app I am writing, and it gets in the way of testing. I can reload the app 16 times, and then on the 17th the loading process fails with
Error #2046: The loaded file did not have a valid signature
It happens consistently on the 17th reload in both Firefox 5.0 and Chrome 12. I am not sure if it's relevant, but I am running Flash Player v10.2.159.1 (it also happens with 10.3.181.34) on Ubuntu 10.04, with both the regular and debugger versions of the player. When I run the app in Firefox 5 on Windows, it doesn't happen. Closing the current browser window does not fix it; the only way around it is to completely close all browser windows and restart the browser. And then, again, after 16 successful loads the 17th fails.
At this point I'm inclined to chalk it up as a Linux Flash bug, but I'd like to make sure, and to check whether anyone knows of something I should be doing to prevent this.
The user from this post seems to have had the same problem, but I guess he didn't notice the pattern I have.
Any help will be greatly appreciated.
Ruy
== UPDATE ==
I just realized that after my app starts throwing the 2046 error, trying to load any other Flash content that uses signed RSLs also shows the 2046 error (e.g. this app), which means the problem is not specific to my app and is most likely related to the Flash cache or something of the sort.

Disclosure: I am a Flash Player Developer at Adobe.
This is unlikely to get much attention, as it is Linux-only and kind of an edge case: probably annoying during dev work, but very few users will reload the same page more than 16 times. It might also be a browser issue, but it is probably us :) I will look at the Jira issue tomorrow and see if I can bump it up a bit, but I'll be honest: it is really an edge case and unlikely to get much love. If you want to increase your chances, make sure to attach the simplest .swf test case you can make to the bug. Also, please double-check whether it still happens with the latest beta.
I also just took a look at the earlier bug reports and forum posts; you should probably file this as a Flash Player bug, not a Flex one.

Long-shot guess, but it sounds similar to a problem we had: in the project properties, under Flex Build Path > Framework Linkage, change the setting to "Merged into code". This fixes a problem very similar to what you are describing, though I wish I knew exactly what the cause is. Good luck!

tl;dr: No idea on the cause; posting a random possibility in the hope it might give someone else an idea or two for testing.
Considering that it seems to be an unresolved bug in Adobe's issue tracker, it's unlikely that you will get a definitive answer here. Since it occurs in both Firefox and Chrome, let's rule out browser bugs and assume the fault is in some common library (Flash) or OS API (Linux kernel implementation). A comment in one of the Jira issues specifically mentions that killing the Flash process fixes it, so it's a Flash issue and not an OS bug.
The most interesting thing I can see here is your observation that it succeeds exactly 16 times before failing to load. Time for some speculation, from someone who has never worked on kernel or crypto dev:
A 2048-bit RSA key is 256 bytes, so sixteen of them fill exactly 4 KB (32 kbit). One conjecture, then, is that each time the file is loaded, Flash caches the signature (possibly a hashed version) in a fixed buffer of that order, perhaps to keep track of allowed and used security permissions. If entries are never removed, then once the buffer is full every file load will fail, assuming caching the signature is part of checking it.
Things you can experiment with:
Reduce the size of the app to see if the page can be reloaded more often (as suggested by stackfish).
Count the number of signed RSLs used and check whether it is a power of two (maybe apps using half the number of signed libs only get the error after 32 page loads?).
Check whether the Linux Flash plugin has some option to increase the credentials cache or similar (or to decrease it, just to see if it affects the number of loads; if so, it could be related to the problem).
I expect that to actually find a solution you'd have to dive into the library-loading code and look at all constants related to loading signed libs that are 4, 16, or a multiple of 16 to see if they might be responsible. In short: unlikely to be solvable by anyone outside the Flash dev team, imho :/

This behavior could be related to a memory leak caused either by the Flex implementation or by the browser plugin. Firefox is notorious for not cleaning up memory anyway, and the footprint will continue to grow the longer you keep the same browser window open.
If you reduce the size of your Flex app to something very tiny, does the number of times you can reload the page go up?

Error #2046 on Windows Vista, on a 64-bit machine with a 1000 MB ATI Radeon video card.
The problem occurs only in MSN Video so far.

I ran into the same problem viewing PPT slides on icourse163.org: when I opened the course site I couldn't see the PPT, although it worked in Chrome with the same Flash version (32.0.0.344). I solved it by extracting the tar.gz downloaded from Adobe and copying usr/* to /usr. I hope this helps.

Related

ncurses: disable kernel messages on the console screen?

I'm looking for a way to get rid of the (kernel?) messages that appear over my ncurses app. I wrote the app myself, so I would prefer an API that redirects these messages to /dev/null. I mean messages like the one shown when a USB stick is inserted.
I tried to add this, but unfortunately it doesn't work
freopen("/dev/null", "w", stderr);
I'm not running X, just ncurses directly from the console.
Thanks!
UPDATE 1:
Someone voted to close this question as not related to programming. But it is: I wrote the ncurses app myself, and I'm looking for a way to disable the kernel messages. I have updated the question.
UPDATE 2:
Let me explain what I'm doing and what the problem is in more detail:
I'm using Tiny Core Linux, which after boot starts a (self-written) ncurses program. Now when you, for example, connect a USB drive, a message (from the kernel, I suspect) is shown over my program. I guess the message is written straight to the framebuffer. I'm using TC 5.x since I need 32-bit; I'm running as root and have full access to the OS.
You should be able to use openvt to have your program run on a new virtual terminal (VT).
I'll also note that it is possible to implement the VT handling yourself if you prefer to avoid the external dependency, but the structures used may not be stable between kernel versions and may require recompilation.
See the kbd project's sources, specifically openvt.c, to see how it works.
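For example (the program path here is an assumption):
openvt -s -w -- /path/to/your-ncurses-app
The -s flag switches to the newly allocated VT immediately, -w makes openvt wait until the program exits, and -- stops option parsing so any arguments after it are passed to your program.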
Try configuring the kernel through boot parameters with the option below (a runtime alternative is sketched at the end of this answer):
loglevel=3 (or a lower value)
0 (KERN_EMERG) system is unusable
1 (KERN_ALERT) action must be taken immediately
2 (KERN_CRIT) critical conditions
3 (KERN_ERR) error conditions
4 (KERN_WARNING) warning conditions
5 (KERN_NOTICE) normal but significant condition
6 (KERN_INFO) informational
7 (KERN_DEBUG) debug-level messages
source: https://www.kernel.org/doc/Documentation/kernel-parameters.txt
See also: Change default console loglevel during boot up
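The same console log level can also be set at runtime from inside the program itself, which may be more convenient than editing the boot command line. A minimal sketch using glibc's klogctl() wrapper around syslog(2), assuming the app runs as root (which you said it does):

#include <stdio.h>
#include <sys/klog.h>

int main(void)
{
    /* 8 == SYSLOG_ACTION_CONSOLE_LEVEL: set the console log level to the
       value passed as the third argument. 3 lets only KERN_EMERG..KERN_ERR
       through to the console, silencing USB-insertion chatter and the like. */
    if (klogctl(8, NULL, 3) < 0) {
        perror("klogctl");
        return 1;
    }
    /* ... initialize ncurses and run the app here ... */
    return 0;
}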
It might be impossible to keep some other process with sufficient access from writing to /dev/console, but you may be able to point the console at some other device at boot time, e.g. by setting console=ttyS0 (the first serial port); see:
https://unix.stackexchange.com/questions/60641/linux-difference-between-dev-console-dev-tty-and-dev-tty0
Also, if we knew exactly which piece of software is sending the message, it might be possible to reconfigure it (possibly dynamically), but it would help to know the version and edition of Tiny Core Linux you are using.
E.g. this website has "Core", "TinyCore" and "CorePlus" editions in versions 1.x up to 7:
http://tinycorelinux.net/downloads.html
This would help reproducing the exact same behavior and testing potential solutions.

Performance drop due to NotesDocument.closeMIMEEntities()

After moving my XPages application from one Domino server to another (both version 9.0.1 FP4 and with similar hardware), the application's performance dropped sharply. Benchmarks revealed that the execution of
doc.closeMIMEEntities(false,"body")
which takes ~0.1 ms on the old server, now takes >10 ms on average on the new one. This difference wouldn't matter if it were only a few documents, but when initializing the application I read more than 1000 documents, so the initialization time goes from less than 1 second to more than 10 seconds.
In the code, I use the line above to close the MIME entity without saving any changes after reading from it (no writing). The function always returns true on both servers. Still, it now takes more than 100x longer, even though nothing in the entity has changed.
The facts that both server machines have more or less the same hardware, and that the replicas of my application contain the same design and data on both servers, lead me to believe that the problem has something to do with the settings of the Domino server.
Can anybody help me with this?
PS: I always call session.setConvertMime(false) before opening the NotesDocument, i.e. conversion from MIME to RichText should not be what causes the problem.
PPS: The HTTPJVMMaxHeapSize is the same on both servers (1024M) and there are several hundred MB of free memory. I only mention this in case someone suspects the problem is related to running out of memory.
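For reference, the benchmark loop is essentially the following (a minimal sketch: session and database are the XPages implicit objects, the item name follows the code above, and exception handling is omitted):

import lotus.domino.*;

// Times closeMIMEEntities(false, "body") across all documents, as described above.
double avgCloseMillis(Session session, Database database) throws NotesException {
    session.setConvertMime(false);            // read MIME as-is, no RichText conversion
    DocumentCollection col = database.getAllDocuments();
    long totalNanos = 0;
    int n = 0;
    Document doc = col.getFirstDocument();
    while (doc != null) {
        doc.getMIMEEntity("body");            // open the "body" MIME entity for reading
        long t0 = System.nanoTime();
        doc.closeMIMEEntities(false, "body"); // close without saving changes
        totalNanos += System.nanoTime() - t0;
        n++;
        Document next = col.getNextDocument(doc);
        doc.recycle();                        // free the backend object
        doc = next;
    }
    return n == 0 ? 0 : totalNanos / 1e6 / n; // average milliseconds per call
}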
The problem is related to the "ImportConvertHeaders bug" in Domino 9.0.1 FP4. It has already been solved with Interim Fix 1 (as pointed out by Knut Herrmann here).
It turned out that the old Domino server had Interim Fix 1 installed, while the "new" one had not. After applying the fix to the new Domino server the performance is back to normal and everything works as expected.

NetLogo 5.1 (and 5.0.5) BehaviorSpace memory leak

I have posted about this before, and thought I had tracked it down to the nw extension; however, the memory leakage still occurs in the latest version. I found this thread, which discusses a similar issue but attributes it to BehaviorSpace:
http://netlogo-users.18673.x6.nabble.com/Behaviorspace-Memory-Leak-td5003468.html
I have found the same symptoms. My model starts out at around 650 MB, but over each run the private working set rises, to the point where it hits the 1024 MB limit. I have sufficient memory to raise this, but in reality that will only delay the onset. I am using the table output, which based on previous discussions should help, and it does, but it only slows the rate of increase. Eventually the memory usage rises to a point where the PC starts to struggle. I am clearing all data between runs, so there should be no hangover. I noticed in the highlighted thread that they were going to run headless; I will try this, but I wondered if anyone else had noticed the issue? My other option is to break the BehaviorSpace simulation into a few batches so the issue never arises, but it would be nice to let the model run and walk away, as it takes around 2 hours to go through.
Some possible next steps:
1) Isolate the exact conditions under which the problem does or does not occur. Can you make it happen without involving the nw extension, or not? Does it still happen if you remove some of the code from your model? What if you keep removing code: when does the problem go away? What is the smallest model that still causes the problem? Almost any bug can be demonstrated with only a small amount of code, and finding that smallest demonstration is exactly what is needed in order to track down the cause and fix it.
2) Use standard memory profiling tools for the JVM to see what kind of objects are using the memory. This might provide some clues to possible causes.
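For example, the stock JDK tools can print a live-object histogram of the NetLogo process (pid assumed known, e.g. via jps):
jmap -histo:live <pid>
Comparing the top entries between successive BehaviorSpace runs usually shows which classes are accumulating.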
In general, we are not receiving other bug reports from users along these lines. It has been routine for many years now for people to use BehaviorSpace (both headless and not) for experiments that last hours or even days. So whatever you're experiencing almost certainly has a more specific cause, most likely in the nw extension, that could be isolated.

Android Libspotify Image Loading SIGSEGV

So this particular libspotify error has me a little stumped. My app basically loads a user's playlists and lets the user go into those playlists, at which time it populates the track data.
So the problem lies in this call sequence:
const byte* image_id = sp_album_cover(album, SP_IMAGE_SIZE_SMALL);
sp_image* image = sp_image_create(session, image_id);
This works fine some of the time, but quite often it crashes with a 'Corrupt memory passed to dlfree()' SIGSEGV. So the first thing I looked for was a memory error, but there is plenty of free memory and there are no null pointers when this occurs. The call goes from within the library into libc.so, so it is much deeper in the library than I can access.
It obviously has something to do with memory, but it is odd that it can happen after loading 10 tracks or after 400. Stranger yet, of my test devices it only happens on the Nexus 4 and Nexus 7, not on the Galaxy S3 or HTC Sensation. The first thing that springs to mind is that the N4 and N7 are Qualcomm devices, but that's all I have to go on, and it probably has nothing to do with anything!
Any help is much appreciated!
It's probably libspotify's fault, not yours. This library has never been particularly stable on Android (in fact, Spotify still calls it "beta"); however, they will be replacing it soon with a new library similar to the Spotify iOS SDK.
My advice to you would be to not use libspotify unless you are under pressure to ship something right away. The new SDK will likely solve many of the headaches familiar to developers working with libspotify on Android.
Edit: The new Spotify Android SDK is out! You should use it instead of libspotify, it will save you much headache.
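In the meantime, if you are stuck on libspotify, it is at least worth guarding the sequence from your question against the documented case of sp_album_cover() returning NULL (no cover art, or metadata not loaded yet). A minimal sketch; on_image_loaded is a placeholder for your own callback:

/* Only ask for the cover once the album's metadata has arrived. */
if (!sp_album_is_loaded(album))
    return; /* retry later, e.g. from the metadata_updated session callback */

const byte *image_id = sp_album_cover(album, SP_IMAGE_SIZE_SMALL);
if (image_id == NULL)
    return; /* album simply has no cover art */

sp_image *image = sp_image_create(session, image_id);
if (image != NULL) {
    sp_image_add_load_callback(image, on_image_loaded, NULL);
    /* when finished with it: sp_image_release(image); */
}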

JavaME - LWUIT images eat up all the memory

I'm writing a MIDlet using LWUIT, and images seem to eat up incredible amounts of memory. All the images I use are PNGs packed inside the JAR file. I load them using the standard Image.createImage(URL) method. The application has a number of forms, each with a couple of labels and buttons, yet I am fairly certain that only the active form is kept in memory (I know it isn't very trustworthy, but Runtime.freeMemory() seems to confirm this).
The application worked well at 240x320 resolution, but moving it to 480x640 and using appropriately larger images for the UI started causing out-of-memory errors. What the application does, among other things, is download remote images; it seems to work fine until it gets to this point. After downloading a couple of PNGs and returning to the main menu, the out-of-memory error is encountered. Naturally, I looked into the amount of memory the main menu uses, and it was pretty shocking. It's just two labels with images and four buttons, but each button has three images used for style.setIcon, setPressedIcon and setRolloverIcon. The images range in size from 15 to 25 KB, yet after removing two of the three images from every button (8 images in total), Runtime.freeMemory() showed a stunning 1 MB decrease in memory usage.
The way I see it, either I have a whole lot of memory leaks (which I don't think I do, but memory leaks aren't exactly known for being easy to track down), or I am doing something terribly wrong with image handling, or there's really no problem at all and I just need to scale down.
If anyone has any insight to offer, I would greatly appreciate it.
Mobile devices are usually very low on memory, so you have to use some tricks to conserve it.
We had the same problem in a project of ours, and we solved it like this.
for downloaded images:
Make a cache where you put your images. If you need an image, check whether it is in the cache map: if it isn't, download it and put it there; if it is, use it. If memory is full, remove the oldest image from the cache map and try again. (A sketch follows at the end of this answer.)
for other resource images:
Keep them in memory only for as long as they are visible; once they are off-screen, break the reference and the GC will do the cleanup for you.
Hope this helps.
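A minimal sketch of the downloaded-image cache described above, for CLDC/MIDP (which has no java.util.LinkedHashMap). Eviction here is triggered by a fixed entry cap rather than an actual memory-fullness check, and downloadImage() is a placeholder for your existing download code:

import java.util.Hashtable;
import java.util.Vector;
import com.sun.lwuit.Image;

class ImageCache {
    private final Hashtable images = new Hashtable(); // url -> Image
    private final Vector order = new Vector();        // urls, oldest first
    private final int maxEntries;

    ImageCache(int maxEntries) {
        this.maxEntries = maxEntries;
    }

    synchronized Image get(String url) throws java.io.IOException {
        Image img = (Image) images.get(url);
        if (img != null) {
            return img;                               // cache hit
        }
        while (order.size() >= maxEntries) {
            evictOldest();                            // make room before downloading
        }
        img = downloadImage(url);
        images.put(url, img);
        order.addElement(url);
        return img;
    }

    private void evictOldest() {
        String oldest = (String) order.elementAt(0);
        order.removeElementAt(0);
        images.remove(oldest);                        // drop the reference; GC reclaims the image
    }

    private Image downloadImage(String url) throws java.io.IOException {
        // placeholder for your existing HTTP download + Image.createImage(...) code
        throw new java.io.IOException("not implemented in this sketch");
    }
}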
There are a few things that might be happening here:
You might have seen the memory used before garbage collection, which doesn't correspond to the actual memory used by your app.
Some third-party code you are running might be pooling internal data structures to minimize allocation. While pooling is a viable strategy, it can look like a leak. In that case, look for an API to 'close' or 'dispose' of the objects you don't need.
Finally, you might really have a leak. In this case you need to get more details on what's going on in the emulator VM (though keep in mind that it is not necessarily the same as the phone VM).
Make sure that your emulator uses JRE 1.6 as the backing JVM. If you need it to use the runtime libraries from an earlier JDK, use -Xbootclasspath:<path-to-rt.jar>.
Then, once your application gets into the state you want to inspect, run %JAVA_HOME%\bin\jmap -dump:format=b,file=heap.bin <pid> (if you don't know the id of your process, use jps).
Now you have a dump of the JVM heap. You can analyze it with jhat (comes with the JDK; a bit difficult to use) or with a third-party profiler (my preference is YourKit; it's commercial, but they offer time-limited eval licenses).
I had a similar problem with LWUIT on Java DTV. Did you try flushing the images when you don't need them anymore (getAWTImage().flush())?
Use EncodedImage and resource files when possible (resource files use EncodedImage by default; read the javadoc for it). The other comments are also correct that you need to actually observe the amount of memory: even high-RAM Android/iOS devices run out of memory pretty fast with multiple images.
Avoid scaling, which effectively discards the EncodedImage and keeps a decoded bitmap instead.
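For instance, loading a packaged PNG as an EncodedImage keeps only the compressed bytes resident until the image is drawn (readFully() is a placeholder for your own stream-to-byte-array helper, and button is one of the LWUIT buttons from your form):

byte[] png = readFully(getClass().getResourceAsStream("/icon.png"));
button.setIcon(EncodedImage.create(png));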
Did you consider that loading the same image from the JAR many times may be creating many separate image objects (with identical contents) instead of reusing one instance per image? This is my first guess.
