My ebook does animations driven by setTimeout. (I am using requestAnimationFrame when available, which it's not on older iPads running iOS 5). The book is composed of about 100 separate XHTML files to ensure page breaks occur exactly where they should, which is otherwise an iffy proposition in iBooks.
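The fallback is roughly the standard shim (a minimal sketch; the frame interval and function names here are illustrative, not my exact code):

    // Minimal sketch of the fallback: use requestAnimationFrame where it
    // exists (including the old webkit prefix), otherwise approximate
    // ~60fps with setTimeout. It is these setTimeout callbacks that
    // iBooks' pre-loading appears to starve.
    function raf(callback) {
        var native = window.requestAnimationFrame ||
                     window.webkitRequestAnimationFrame;
        if (native) return native.call(window, callback);
        return window.setTimeout(function () {
            callback(Date.now());
        }, 1000 / 60);
    }

    function step() {
        // ... advance the animation one frame ...
        raf(step);
    }
    raf(step);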
Immediately after opening the book, animations run very slowly (e.g. one second per step rather than 50ms), but after the book has been open for a while (a minute or so?), they run at the expected speed.
The reason, I found, is that iBooks is apparently pre-loading all the pages in the book (I suppose in order to get page numbers, or to speed up page turning). The pre-loading seems to be interfering with my animations, stealing setTimeout slots, as it were.
I had thought the problem might be the time required at load time to set up the animations on each document, but timed those and found it was just a few milliseconds per page. The problem may be a semi-large script (100K) on each of the 100+ pages, which I imagine iBooks is parsing over and over again as it preloads each page.
I have considered the option of including the large script dynamically when each page is viewed, but got stuck on figuring out when that is. There is no Page Visibility API in this version of Safari, and the focus event does not fire on initial page load, so how do I tell when the page is actually being viewed, as opposed to being stealthily pre-loaded in the background by iBooks?
My next attempt is going to be to shrink the number of individual XHTML pages down to 1 or a few, and take my chances with page-break-* and its ilk to handle page breaking.
What I need is a way to (1) tell iBooks not to pre-load other pages in the book, (2) give my setTimeout requests priority over those queued up by iBooks for preloading pages, or (3) know when a page is being displayed so I can inject the script at that point in time.
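For option (3), one heuristic I am considering, purely a sketch, assuming that pre-loaded pages get throttled timers (which matches the slow-animation symptom): schedule a short setTimeout and check how late it actually fires; keep probing until the delay looks normal, then inject the script. The tolerance value and script name below are made up.

    // Hypothetical heuristic: if setTimeout fires roughly on time, assume
    // the page is in the foreground; if it is badly late, assume we are
    // still being pre-loaded and probe again.
    function whenForeground(callback) {
        var EXPECTED = 50;   // ms we ask for
        var TOLERANCE = 200; // ms of lateness we tolerate (arbitrary guess)
        var start = Date.now();
        setTimeout(function () {
            var lateness = Date.now() - start - EXPECTED;
            if (lateness < TOLERANCE) {
                callback();               // timers look healthy
            } else {
                whenForeground(callback); // still throttled: keep probing
            }
        }, EXPECTED);
    }

    whenForeground(function () {
        var s = document.createElement('script');
        s.src = 'big-animation-library.js'; // hypothetical file name
        document.getElementsByTagName('head')[0].appendChild(s);
    });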
See also epub 3, how to prevent pages from running in background ? (iBooks / Readium) and Finding out when page is being viewed in EPUB FXL via Javascript.
Related
I need to process AJAX in my crawler, and would prefer to use the system browser, though I may have to change my mind. My crawler program will generally be working in the background while the user works on other things in other applications.
Anyhow, since the WebControl leaks memory when processing JS libraries that themselves leak, this can cause a crawler to quickly run out of memory. (There are many SO posts about this.)
So I have created a solution that uses a separate small "dummy" executable hosting the webcontrol, which takes input/output. This is launched as a separate process by the crawler. This part seems to work great; the child process is created and destroyed as many times as needed.
However, this child process with the embedded IE grabs focus on every page load (at least if, e.g., JS code calls focus), which means that if the user is doing work in, say, Word, keyboard focus is lost.
I have already moved the embedded IE window off-screen, but I cannot make it invisible in the traditional sense, since then the embedded IE stops working.
I have tried disabling all parent controls before calling navigate, but that does not work for me.
Any ideas I have not tried? Maybe somehow catch the Windows message that focuses the webcontrol and ignore it? Or something that lets me immediately refocus the control that previously had focus?
I currently use Delphi, but from my earlier investigations this question is equally applicable to VB, C#/.NET, etc. I will take solutions and ideas in any language.
While browsing I found many websites say:
wait for 5 seconds and download will begin; or click this link to download now
or
Wait for 5 seconds, we will redirect to specific website; if you are on fire click this link
Why do websites make us wait for 5 seconds? Are they doing something in that time?
Sometimes developers do not execute all the code in the same request; they put the work into a queue (e.g. RabbitMQ) so that other servers can handle it, which improves system performance. Handling takes some time when the queue holds many messages, but it is fast enough that 5 seconds is usually more than enough. Does that make sense?
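For illustration only, handing work off to a queue might look like this in Node.js with the amqplib client (the queue name and payload are made up):

    // Sketch: instead of doing the work inside the request, publish it to
    // a queue so a worker on another server can pick it up.
    var amqp = require('amqplib');

    amqp.connect('amqp://localhost').then(function (conn) {
        return conn.createChannel().then(function (ch) {
            return ch.assertQueue('tasks').then(function () {
                ch.sendToQueue('tasks',
                    Buffer.from(JSON.stringify({ job: 'prepare-download' })));
                return ch.close();
            });
        });
    }).then(function () {
        // respond to the user immediately; the worker finishes later
    });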
Generally there are two reasons (from my experience):
You got to the page via a link, and that page either doesn't exist anymore or has been moved. If it doesn't exist anymore, you will sometimes get redirected higher in the navigation stack (Apple does this with their documentation, sending users to a pre-filled search of related/similar pages, if you're lucky). If it has been moved due to a change in the IA of the site, it may be in a "sunsetting" period, during which the user is moved from the old URL to the new one, to slow down and stop further propagation of the old link. After the sunsetting period, that redirect page will be dropped in favor of either a 404 page or the higher-level search concept.
Depending on the type of form you are filling out, there may be a process which must be run without user interaction; however, these rarely have the option to click the link.
Of course, with the latter part of the first reason, there must also be a process in place to stop this sort of thing and take the page down altogether: either a date, or a rule like "when fewer than X users land on this page in a month, we can take it down." So sometimes a well-intentioned change-management measure never gets fully resolved to the new way of things.
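For what it's worth, the mechanics of such a redirect page are trivial; the 5 seconds is a policy choice, not a technical necessity. A minimal sketch (URL and element ids are made up):

    // Countdown redirect plus an immediate escape link.
    var target = 'https://example.com/new-location';
    var remaining = 5;

    var timer = setInterval(function () {
        remaining -= 1;
        document.getElementById('countdown').textContent = remaining;
        if (remaining <= 0) {
            clearInterval(timer);
            window.location.href = target;
        }
    }, 1000);

    // the "click this link to go now" escape hatch
    document.getElementById('skip').onclick = function () {
        clearInterval(timer);
        window.location.href = target;
    };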
Hope that helps.
https://ux.stackexchange.com/questions/37731/why-do-websites-use-you-are-being-redirected-pages
I have many examples where text renders more slowly than an image, which feels almost instant. I am doing this with reactjs and server-side rendering with nodejs. For example, this gif: http://recordit.co/waMa5ocwdd
shows that the header image loads instantly and the CSS is already loaded, as the colors are there and present. But, for some reason, the text takes almost half a second to appear. How can I fix or optimize this?
Thanks!
OK, so when debugging this kind of stuff it's useful to hit YSlow for the latest tips, etc.
In general, though, it's good to remember that browsers make separate requests for each item in your page (i.e. everything with a URL: images, CSS, etc.) and that most of them cap concurrent downloads (4 seems common, but it varies and changes a lot). So while 12 requests isn't a lot, it's still time, as is the time to parse and load your JS. And in most browsers, parsing and loading JS pauses further downloads until it's done.
Without spending a ton of time, I'm guessing that your HTML loads, that calls in the header image, and then the browser starts hitting all the JS and react framework code and it takes a second or two to figure out what to render next.
Again, YSlow has a lot of advice on how to optimize those things, but that's my 2c.
EDIT: adding detail in response to your first question.
As I mentioned above, the JS is only part of the problem. The total render time includes the time it takes to download and parse everything (including CSS, etc.). As an example, looking at it in the Chrome dev tools, it takes around 300ms for the HTML to download and be parsed enough for the next resources to be called in. In my browser the next two are your main CSS and logo.png. At around 800ms your logo is done downloading, and it's rendered almost immediately. At around the time the HTML is done downloading, the first JS script starts downloading (I don't think turning JS off stops that from happening, though it probably stops the parsing; I've never tested it). Somewhere around 700ms you start pulling down the font sets you're using, and they finish downloading around 1 second. Your first text shows up about 200ms after that, so I'm guessing that pulling and parsing the font files is the holdup (compounded by their queuing behind other resources).
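If the fonts really are the holdup, one way to confirm it (assuming a browser that supports the CSS Font Loading API; the family name below is a placeholder for your real one) is to trigger the font load explicitly and time it:

    // Sketch: start loading the web font as early as possible and log how
    // long it takes, to verify the fonts are the bottleneck.
    if (document.fonts && document.fonts.load) {
        var t0 = performance.now();
        document.fonts.load('1em "Site Font"').then(function () {
            console.log('font ready after', performance.now() - t0, 'ms');
        });
    }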
Since there's no clear explanation in Chrome Extensions documentation, I came here for help.
I learned that background pages were basically invented to extend the extension's lifetime, and are designed to hold values or keep the "engine" running in the background so no one notices it. Because once you click the extension's icon you get what they call a "popup", and once you click outside the popup it disappears immediately and, most importantly, the extension "dies" (its lifetime ends).
So far so good and everything is nice, but: event pages were invented after that, and they are basically background pages that only run when they are called (to save memory).
If that's the case, then wouldn't that be contradictory? What's the use of event pages if they only work when they're called?
Sometimes a background page only needs to respond to events outside it (messages, web requests, button clicks, etc.).
In that case, an event page makes sense. It's not completely unloaded as if the extension is stopped - it defines its event handlers (what it wants to listen to) and then it's shut down until needed. Consider this to be "I'm going to sleep; don't wake me up unless A happens."
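A rough sketch of that pattern (the events shown are standard chrome.* APIs, the "alarms" permission would be needed for the second one, the message type is made up, and manifest.json would declare the page as non-persistent):

    // Event page sketch: listeners are registered at the top level so
    // Chrome knows what to wake the page up for; between events the page
    // is unloaded. manifest.json would contain:
    //   "background": { "scripts": ["background.js"], "persistent": false }
    chrome.runtime.onMessage.addListener(function (msg, sender, sendResponse) {
        // woken up when e.g. the popup sends a message
        if (msg.type === 'ping') {           // hypothetical message type
            sendResponse({ type: 'pong' });
        }
    });

    chrome.alarms.onAlarm.addListener(function (alarm) {
        // woken up when a previously scheduled alarm fires
        console.log('alarm fired:', alarm.name);
    });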
The difference from your example: a closed popup ceases to exist completely, while Chrome remembers that it needs to call a particular extension on particular events. If such an event happens, the background page is started again and the event is fired in it.
This saves resources, but is not always appropriate. Shutting down the background page's context wipes its local state; state must be saved in the various storage APIs instead of variables. If the local state is complex, it may not be worth the effort. Also, if your extension needs to react really fast or really often, suspend/resume may prove to be a performance hit.
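For example, a counter that must survive suspension would have to go through chrome.storage rather than a variable; a minimal sketch:

    // Sketch: persist state across event-page suspensions. A plain
    // "var counter = 0" would be reset every time the page is unloaded.
    chrome.runtime.onMessage.addListener(function (msg, sender, sendResponse) {
        chrome.storage.local.get({ counter: 0 }, function (items) {
            var counter = items.counter + 1;
            chrome.storage.local.set({ counter: counter }, function () {
                sendResponse({ counter: counter });
            });
        });
        return true; // keep sendResponse alive for the async callback
    });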
All in all, event pages are not a complete replacement for background pages; that's why they are optional and not the default. There are many things to consider when making an event page.
P.S. Regarding your "popup as most important part of the extension": this is exactly why it can't be the most important part in most cases. Usually, a background page is also used alongside a popup to keep event listeners and local state.
I am designing an application that allows users to use animated emoticons (defined using external SWF files) and display them inside another SWF file. This works as long as there are only a very small number of emoticons at a time, but if the number increases significantly, performance slows to a crawl. The bottleneck isn't the network, as there are only a few emoticons to choose from; we are just having issues displaying them simultaneously.
How does the Flash threading model handle playing external SWFs? Can we attempt to play them on a separate thread, or will that cause issues (like it does in Swing, Cocoa, and the like)?
It's hard to say. When you say loading multiple small SWFs, are they copies, or are you creating new instances with new properties, etc.?
I have managed a similar thing, making a 20-line poker machine that uses SWFs for all the images and animations. The files are each about 600 KB, and there are 12 of them on screen animated at one time (well, depending on the win).
But the way I did this was to load them into an array whenever I needed them and access them from there. I didn't seem to have any issues.
It's not much of an answer, but can you explain your loading method, or put up a sample on a server so I could see exactly what your problem is?