Since there's no clear explanation in the Chrome Extensions documentation, I came here for help.
I learned that background pages were basically invented to extend an extension's lifetime, and are designed to hold values or keep the "engine" running in the background so that no one notices it. Once you click the extension's icon you get what they call a "popup", and once you click outside the popup it disappears immediately and, most importantly, the extension "dies" (its lifetime ends).
So far so good, but: event pages were invented after that, and they are basically background pages that only run when they are called (to free up memory).
If that's the case, then wouldn't that be contradictory? What's the use of event pages if they only work when they're called?
Sometimes background pages only need to respond to events outside them (messages, web requests, button clicks, etc.)
In that case, an event page makes sense. It's not as if the extension were completely stopped: the page defines its event handlers (what it wants to listen to) and is then shut down until needed. Consider it "I'm going to sleep; don't wake me up unless A happens."
The difference from your popup example: a closed popup ceases to exist completely, while Chrome remembers that it needs to call a particular extension on particular events. If such an event happens, the background page is started again and the event is fired in it.
This saves resources, but it is not always appropriate. Shutting down the background page's context wipes its local state; state must be saved with the various storage APIs instead of in variables. If the local state is complex, that may not be worth the effort. Also, if your extension needs to react really fast or really often, the suspend/resume cycle may prove to be a performance hit.
All in all, event pages are not a complete replacement for background pages; that's why they are optional and not the default. There are many things to consider when making an event page.
P.S. Regarding your "popup as the most important part of the extension": this is exactly why, in most cases, it can't be the most important part. Usually a background page is used alongside a popup to keep event listeners and local state.
I need to process AJAX in my crawler, and I would prefer to use the system browser, although I may have to change my mind. My crawler program will generally be working in the background while the user works on other stuff in other applications.
Anyhow, since the WebControl leaks memory when processing JS libraries that themselves leak memory, this can cause the crawler to quickly run out of memory. (There are many SO posts about this.)
So I have created a solution that uses a separate small "dummy" executable hosting the webcontrol, which takes input/output. It is launched as a separate process by the crawler, and this part seems to work great. The child process is created/destroyed as many times as needed.
However, this child process with the embedded IE grabs focus on every page load (at least if, e.g., JS code calls focus), which means that if the user is working in e.g. Word, keyboard focus is lost.
I have already moved the embedded IE window off-screen, but I cannot make it invisible in the traditional sense, since then the embedded IE stops working.
I have also tried disabling all parent controls before calling Navigate, but that does not work for me.
Any ideas I have not tried? Maybe somehow catch the Windows message that focuses the webcontrol and ignore it? Or something that lets me immediately refocus the control that previously had focus?
I currently use Delphi, but from my earlier investigations this question applies equally to VB, C#/.NET, etc. I will take a solution and ideas in any language.
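For the second idea, what I roughly have in mind is something like the following plain Win32 sketch (hPrevious and the two callback names are purely illustrative, and I realize the foreground-lock rules may still get in the way):

```cpp
// Hypothetical sketch: remember which window had focus before the hidden
// browser navigates, then hand focus back once the page has loaded.
#include <windows.h>

static HWND hPrevious = nullptr;

void BeforeNavigate()                 // called just before Navigate
{
    // Remember the window the user is actually working in (e.g. Word).
    hPrevious = GetForegroundWindow();
}

void OnDocumentComplete()             // called when the page has loaded
{
    // Give focus back if the embedded IE stole it.
    if (hPrevious != nullptr && GetForegroundWindow() != hPrevious)
    {
        // This may be refused because of the foreground-lock rules;
        // AllowSetForegroundWindow or AttachThreadInput might be needed on top.
        SetForegroundWindow(hPrevious);
    }
}
```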
We have a user-definable "dashboard". It is possible to add several components to this window. Some of the windows are browser controls (ActiveX controls created with CLSID_WebBrowser). These browser controls can show content from different sources on the internal web.
To avoid blocking the application, each browser control is hosted in its own thread. The reason is that the ActiveX WebBrowser control is hosted in an STA and only loads and shows data while a message loop is running, and this could block other parts of the UI.
So we have one parent window containing child windows of type list control, tree control, statics and group boxes, mixed with some browser controls. Except for the browser controls, all controls belong to the same UI thread. But the threads share one input queue: AttachThreadInput was executed for each thread to attach it to the main thread.
Now we face the following problem:
When the user designs his screen with a web control inside a group box, the application locks up when the user moves the mouse over the browser control and clicks or uses the mouse wheel. The application locks and doesn't accept any further input. If you minimize the application and activate it again, you can continue working and input is accepted again.
Reasons
With the debugger and Spy++ we found out that any mouse event causes WM_NCHITTEST to be sent to the group box. The group box returns HTTRANSPARENT, but the underlying window belongs to a different thread. We can see that an infinite loop occurs and WM_NCHITTEST messages keep being fired at the group box, and input is blocked until this loop gets interrupted (e.g. by minimizing the application or showing the desktop).
The documentation says that WM_NCHITTEST and HTTRANSPARENT are limited to windows of the same thread. I can also find two other articles on the net that describe the same or a similar problem.
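To illustrate, this is essentially what the group box does (a simplified sketch, not our actual code):

```cpp
// Simplified sketch of the behaviour described above: a group box answers
// WM_NCHITTEST with HTTRANSPARENT, so hit-testing is supposed to fall through
// to the window underneath - which only works if that window belongs to the
// same thread, and our browser controls do not.
#include <windows.h>

LRESULT CALLBACK GroupBoxLikeProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == WM_NCHITTEST)
        return HTTRANSPARENT;   // "not me - ask the window below me"

    return DefWindowProc(hwnd, msg, wParam, lParam);
}
```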
The simple solution
The simple solution is just to take care that the browser controls are never covered by a group box or static control. So changing the Z-order is simple and works (the group box must follow the other thread's windows in the Z-order).
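In code, the fix boils down to a single SetWindowPos call when the dashboard is laid out (hGroupBox and hBrowser are illustrative handles to two sibling child windows):

```cpp
// Place the group box directly after (i.e. below) the browser control in the
// Z-order, so the cross-thread browser window is hit-tested first and the
// HTTRANSPARENT forwarding never has to cross a thread boundary.
SetWindowPos(hGroupBox, hBrowser, 0, 0, 0, 0,
             SWP_NOMOVE | SWP_NOSIZE | SWP_NOACTIVATE);
```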
Question
I would be interested to know whether there is another way around this problem, or a way to prevent such input-queue deadlocks, or what exactly happens internally when WM_NCHITTEST handling involves windows from different threads.
My ebook does animations driven by setTimeout. (I am using requestAnimationFrame when available, which it's not on older iPads running iOS 5). The book is composed of about 100 separate XHTML files to ensure page breaks occur exactly where they should, which is otherwise an iffy proposition in iBooks.
Immediately after opening the book, the animations are very slow (e.g. one second per step rather than 50 ms), but after keeping the book open for a while (a minute or so?), they run at the expected speed.
The reason I found: iBooks is apparently pre-loading all the pages in the book (I suppose in order to get page numbers, or speed up page turning). The pre-loading seems to be interfering with my animations--stealing setTimeout slots, as it were.
I had thought the problem might be the time required at load time to set up the animations on each document, but I timed that and found it was just a few milliseconds per page. The problem may instead be a semi-large script (100K) on each of the 100+ pages, which I imagine iBooks is parsing over and over again as it preloads each page.
I have considered including the large script dynamically when each page is actually viewed, but got stuck on figuring out how to tell when that is. There is no Page Visibility API in this version of Safari, and the focus event does not fire on initial page load, so how do I tell when the page is actually being viewed, as opposed to being stealthily pre-loaded in the background by iBooks?
My next attempt is going to be to shrink the number of individual XHTML pages down to 1 or a few, and take my chances with page-break-* and its ilk to handle page breaking.
What I need is a way to (1) tell iBooks not to pre-load the other pages in the book, or (2) give my setTimeout requests priority over those queued up by iBooks for preloading pages, or (3) know when a page is actually being displayed so I can inject the script at that point in time.
See also epub 3, how to prevent pages from running in background ? (iBooks / Readium) and FInding out when page is being viewed in EPUB FXL via Javascript.
Currently, I am able to hook into a Direct3D application and draw custom stuff onto its surface. However, I would now like to suspend this application and then draw something else.
Is this even remotely possible? Something like creating my own Direct3D window on top of that application?
I'm targeting only Windows 7, but the application I want to draw on uses only DirectX 9.
The problem is that I have very little experience with DirectX in general.
Sort of.
You're working with two different elements here. One is quite large but not particularly complex: hooking D3D. The other ("suspending" the app) is simple within that, but you don't quite want what you think you want.
To hook D3D by the simplest method, you need to intercept the call to Direct3DCreate9 and return your own IDirect3D9, which later creates and returns your own IDirect3DDevice9. This will give you full control over the app's render process.
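A bare-bones sketch of that interception, done as a proxy d3d9.dll (only one way to do it; Detours-style patching works too, and WrapperD3D9 in the comment stands for your own, not-shown IDirect3D9 implementation):

```cpp
// Proxy d3d9.dll sketch: this DLL exports Direct3DCreate9 (via a .def file),
// forwards the call to the real system d3d9.dll, and is the place where you
// would substitute your own IDirect3D9 wrapper.
#include <windows.h>
#include <d3d9.h>

typedef IDirect3D9* (WINAPI* PFN_Direct3DCreate9)(UINT);

extern "C" IDirect3D9* WINAPI Direct3DCreate9(UINT sdkVersion)
{
    // Load the real runtime from the system directory so we don't pick up
    // this proxy again by accident.
    char path[MAX_PATH];
    GetSystemDirectoryA(path, MAX_PATH);
    lstrcatA(path, "\\d3d9.dll");

    HMODULE real = LoadLibraryA(path);
    auto realCreate = reinterpret_cast<PFN_Direct3DCreate9>(
        GetProcAddress(real, "Direct3DCreate9"));
    if (realCreate == nullptr)
        return nullptr;

    IDirect3D9* realD3D = realCreate(sdkVersion);

    // In the real hook you would return your own IDirect3D9 implementation
    // wrapping realD3D, so that its CreateDevice can hand back a wrapped
    // IDirect3DDevice9, e.g.:  return new WrapperD3D9(realD3D);
    return realD3D;   // pass-through shown here to keep the sketch self-contained
}
```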
In order to "suspend" it, you need to wait for the desired trigger, then, in your IDirect3DDevice9::Present, call your own event loop. This will, for all intents and purposes, suspend execution of the original app's code, but not the process itself (allowing your code and event loop to run). There are some limitations to this, and you may not be able to consume window/Windows events (simply), but it will give you full control and effectively pause the original app.
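Roughly, the "suspend" inside the hooked Present could look like this (HookedPresent and g_suspended are names I'm making up for the sketch; your wrapper device would call this from its own Present):

```cpp
// While g_suspended is set, the hooked app's render thread is parked in our own
// message loop instead of returning to the app's code; flipping the flag back
// lets the original frame complete as usual.
#include <windows.h>
#include <d3d9.h>

static volatile bool g_suspended = false;   // toggled by your trigger

HRESULT HookedPresent(IDirect3DDevice9* realDevice,
                      const RECT* src, const RECT* dst,
                      HWND wndOverride, const RGNDATA* dirty)
{
    while (g_suspended)
    {
        // Pump our own messages so the overlay stays responsive; the original
        // app's code on this thread does not advance until the flag clears.
        MSG msg;
        while (PeekMessage(&msg, nullptr, 0, 0, PM_REMOVE))
        {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
        Sleep(10);
    }
    return realDevice->Present(src, dst, wndOverride, dirty);
}
```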
Note, however, that you must intercept and reroute execution in every thread you want to "suspend": the trick only affects a single thread, and you don't want physics or AI crunching on while rendering and the UI are paused.
You need to perform your overlay drawing, whatever that may be, during your loop or your IDirect3DDevice9::Present hook, then call the real device's Present method as needed. If you want to run multiple frames of your overlay, call the real Present repeatedly before returning from your Present. Tweak as necessary. Rendering here is done pretty much normally (check out general D3D tutorials for that), but there is one major catch: the device's state is unknown and may be incompatible, yet it must be "untouched" on return. This is handled simply by caching an IDirect3DStateBlock9 created from the device immediately after creating the device. In your Present hook, create another state block capturing the state on entrance, restore the clean state block, run your code, then restore the entrance state block. You can work with any states, off a fresh slate, without damaging the device's state (I use this in practice; it works great).
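Spelled out, that state-block dance is only a few lines (cleanState here is the block cached right after device creation, and DrawOverlay is a stand-in for whatever overlay rendering you do):

```cpp
#include <d3d9.h>

// Placeholder for your own overlay rendering; not shown here.
static void DrawOverlay(IDirect3DDevice9* /*device*/) { }

// Run overlay rendering from a clean slate, then hand the device back to the
// app with exactly the state it had on entry.
void RenderOverlayWithCleanState(IDirect3DDevice9* device,
                                 IDirect3DStateBlock9* cleanState)
{
    IDirect3DStateBlock9* entranceState = nullptr;

    // Capture whatever state the app left the device in.
    device->CreateStateBlock(D3DSBT_ALL, &entranceState);

    // Restore the pristine state captured just after device creation, draw.
    cleanState->Apply();
    DrawOverlay(device);

    // Put the app's state back and clean up.
    entranceState->Apply();
    entranceState->Release();
}
```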
If you want some rather extensive examples of how this works, I'd suggest checking out the Voodoo Shader project, which has full D3D8 and 9 hooks, including everything needed for overlays [/shameless own-project promotion]. Feel free to reuse any of the concepts, or comment with further questions; this certainly isn't all the details that may be useful to you.
This is a very complex thing to accomplish, as it is very much a hack. The only people you see doing such things are Steam, TeamSpeak, Xfire, Fraps, and a few hard-core devs.
There are kits out on the internet that show you how to inject a DLL into the memory space of the target application to achieve such a feat, as well as methods such as proxy DLLs.
Proxy DLL:
http://www.codeguru.com/cpp/g-m/directx/directx8/article.php/c11453
Injection:
http://www.progamercity.net/d3d/372-c-directx9-0-hooking-via-detours.html
Good luck, this will take you a while.
I have always read about, and worked with, a single UI thread, since having more than one will screw up message pumping, etc.
I am partly answering my own question here, but I want to validate my understanding of the Chrome browser, which is known to use multiple processes (one per tab): does it also speed up rendering a bit by employing multiple UI threads?
My guess is that it does NOT, but if it does, it would be very interesting to know, or to look at some sample C# code to demo the same (it does not have to be a web browser demo).
Any pointers in the multiple-UI-thread direction would help! Thanks.
I can't state definitively how Chrome handles the rendering threads, but I would assume that each tab has its own rendering thread. I wouldn't see the point of going through all the effort of process-isolating the tabs, only to tie them all together on a common rendering thread. They would all have the opportunity to interfere with each other.
I implemented a 'chrome-style' browser using WPF - the application shell was a single process, then each 'tab' was a MAF AddIn running in a separate process. The rendering was all in child processes - there was nothing shared. Each AddIn returned an INativeHandleContract (a WPF control) which was passed across the process boundary.
The upshot of this was that an exception ANYWHERE in a child tab would only take down that tab, and could be detected by the parent process, giving it a chance to provide some feedback, reload the tab, etc.
This document wasn't around when I did it, but after a quick browse I think it has some pointers:
http://msdn.microsoft.com/en-us/library/bb909794.aspx
Kent Boogaart also lent a helpful hand:
http://kentb.blogspot.com/2008/06/maf-gymnastics-service-provider.html
You may also need this QFE from Microsoft to fix a bug in serialization you may experience when passing a WPF control across a process boundary:
http://archive.msdn.microsoft.com/KB982638
It is in regard to this MS Connect bug: https://connect.microsoft.com/VisualStudio/feedback/details/467381/wpf-controls-cannot-be-passed-across-process-boundaries
Don't confuse threads and processes. Each process will have its own UI thread, but likely also its own message pump.
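To make the thread/pump relationship concrete, here is a minimal sketch. It is plain Win32/C++ rather than the C# you asked about, but the mechanics are the same: every additional UI thread creates its own windows and runs its own message pump, independent of the main thread's.

```cpp
// A second UI thread: it registers a window class, creates a window, and runs
// its own message pump. Windows are serviced by the message loop of the thread
// that created them, so this pump is completely independent of the main one.
#include <windows.h>

static LRESULT CALLBACK SketchWndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
{
    if (msg == WM_DESTROY)
    {
        PostQuitMessage(0);           // ends this thread's GetMessage loop
        return 0;
    }
    return DefWindowProc(hwnd, msg, wp, lp);
}

static DWORD WINAPI SecondUiThread(LPVOID)
{
    WNDCLASSA wc = {};
    wc.lpfnWndProc   = SketchWndProc;
    wc.hInstance     = GetModuleHandle(nullptr);
    wc.hCursor       = LoadCursor(nullptr, IDC_ARROW);
    wc.lpszClassName = "SecondUiThreadWindow";
    RegisterClassA(&wc);

    CreateWindowExA(0, wc.lpszClassName, "Second UI thread",
                    WS_OVERLAPPEDWINDOW | WS_VISIBLE,
                    CW_USEDEFAULT, CW_USEDEFAULT, 300, 200,
                    nullptr, nullptr, wc.hInstance, nullptr);

    MSG msg;                          // this thread's own message pump
    while (GetMessage(&msg, nullptr, 0, 0) > 0)
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    return 0;
}

int main()
{
    // A real app's main thread would be pumping its own windows here; the
    // sketch just waits for the second UI thread to exit.
    HANDLE thread = CreateThread(nullptr, 0, SecondUiThread, nullptr, 0, nullptr);
    WaitForSingleObject(thread, INFINITE);
    CloseHandle(thread);
    return 0;
}
```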