We have a user-definable "dashboard". Several components can be added to this window. Some of them are browser controls (ActiveX controls created with CLSID_WebBrowser). These browser controls can show content from different sources on our internal web.
To avoid blocking the application, each browser control is hosted in its own thread. The reason is that the ActiveX WebBrowser control lives in an STA and only loads and shows data while a message loop is running, which may block other parts of the UI.
So we have one parent window containing child windows of type list control, tree control, static and group box, mixed with some browser controls. Except for the browser controls, all controls belong to the same UI thread. The threads share one input queue, however: AttachThreadInput was called for each browser thread to attach it to the main thread.
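For reference, the attachment is done roughly like this (a minimal sketch only; hMainWnd and hBrowserWnd are illustrative handles):

    // Minimal sketch (Win32 C++): attach a browser control's thread to the
    // main UI thread's input queue. hMainWnd and hBrowserWnd are illustrative.
    DWORD mainThreadId    = GetWindowThreadProcessId(hMainWnd, NULL);
    DWORD browserThreadId = GetWindowThreadProcessId(hBrowserWnd, NULL);
    AttachThreadInput(browserThreadId, mainThreadId, TRUE);   // share one input queue
    // ... and on teardown:
    AttachThreadInput(browserThreadId, mainThreadId, FALSE);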
Now we face the following problem:
When the user designs his screen with a group box and a web control inside it, the application locks up as soon as the mouse moves over the browser control and the user clicks or uses the mouse wheel. The application then no longer accepts any input. If you minimize the application and activate it again, you can continue working and input is accepted again.
Reasons
With the debugger and Spy++ we found out that any mouse event causes WM_NCHITTEST to be sent to the group box. The group box returns HTTRANSPARENT, but the underlying window belongs to a different thread. We can see that an infinite loop occurs in which WM_NCHITTEST messages keep being sent to the group box, and input is blocked until this loop gets interrupted (e.g. by minimizing the application or showing the desktop).
The documentation states that the HTTRANSPARENT return value of WM_NCHITTEST only works between windows of the same thread. I can find two other articles on the net that describe the same or a similar problem.
The simple solution
The simple solution is just to make sure that the browser controls are never covered by a group box or static control. Changing the Z-order is simple and works (the group box must follow the windows of the other thread in the Z-order), as in the sketch below.
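A minimal sketch of that Z-order fix (hGroupBox and hBrowser are placeholder handles):

    // Minimal sketch (Win32 C++): place the group box directly below the
    // cross-thread browser window in the Z-order, so mouse hit testing reaches
    // the browser first and the group box's HTTRANSPARENT reply never has to
    // fall through to a window owned by another thread.
    SetWindowPos(hGroupBox, hBrowser, 0, 0, 0, 0,
                 SWP_NOMOVE | SWP_NOSIZE | SWP_NOACTIVATE);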
Question
I would be interested to know whether there is another way around this problem, whether there is a way to prevent such input-queue deadlocks, or whether somebody knows what happens internally when WM_NCHITTEST has to deal with windows from different threads.
Related
I have a set of old MFC DLLs that act as a frame buffer emulation. One of them processes drawing commands and blits to an in-memory bitmap; the other is the "main" DLL that controls windowing and events by running a CWnd in its own Afx thread and displays the in-memory bitmap.
The application that links against these basically has no idea they are there; it simply calls "init" and "update" while running its own code, expecting to see pixel data output in a Windows window instead of on actual hardware.
Now I need to port this to Linux and am looking at something like GTK. During investigation it looks like GTK expects to be in control of the main loop and is event-driven, which is expected of a GUI toolkit, but many other toolkits also allow the user to manually pump the main loop instead of handing off control and communicating only through messages.
Ideally, I'd want to just kick off GTK in its own thread and let it handle windowing and messaging on its own, blitting when "update" is called, while the user's main app runs as the critical main thread.
If we can't just plop gtk_main() into a separate thread, I see that gtk_main_iteration may be used instead, but in a lot of questions close to this one users say that GTK shouldn't be used this way; then again, the same could be said of our CWnd/MFC implementation. At this point there is no changing how these DLLs work: the user's app must be the "main", and the processing/windowing must be transparent. A rough sketch of the pattern I have in mind follows below.
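This is only a sketch of the idea, not working code; fb_init and fb_update are placeholder names for our exported entry points, and the plain GTK C API is used directly:

    /* Sketch: let the host application pump GTK itself by draining pending
       events from its own "update" call instead of handing over the main loop.
       fb_init/fb_update are placeholder names for the library entry points. */
    #include <gtk/gtk.h>

    static GtkWidget *g_window = NULL;

    void fb_init(int *argc, char ***argv)
    {
        gtk_init(argc, argv);
        g_window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
        gtk_widget_show_all(g_window);
    }

    void fb_update(void)
    {
        /* Blit the in-memory bitmap here (e.g. into a GtkDrawingArea),
           then drain whatever events GTK has queued without blocking. */
        while (gtk_events_pending())
            gtk_main_iteration_do(FALSE);
    }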
The other option is to just use raw X11, but I'd really like to have other widgets easily usable (toolbars, menus), the XShm extension used transparently, resource management, etc.
Can this be done in GTK, or are there better options?
Since there's no clear explanation in the Chrome Extensions documentation, I came here for help.
I learned that background pages were basically invented to extend an extension's lifetime, and are designed to hold values or keep the "engine" running in the background so that no one notices it. Once you click the extension's icon, you get what they call a "popup", and once you click outside the popup it disappears immediately and, most importantly, the extension "dies" (its lifetime ends).
So far so good, but: event pages were invented after that, and they are basically background pages that only run when they are called (to free up memory).
If that's the case, then wouldn't that be contradictory? What's the use of event pages if they only work when they're called?
Sometimes a background page only needs to respond to events outside it (messages, web requests, button clicks, etc.).
In that case, an event page makes sense. It's not unloaded completely as if the extension were stopped: it declares its event handlers (what it wants to listen to) and is then shut down until needed. Think of it as "I'm going to sleep; don't wake me up unless A happens."
The difference from your popup example: a closed popup ceases to exist completely, while Chrome remembers that it needs to call a particular extension on particular events. If such an event happens, the background page is started again and the event is fired in it.
This saves resources, but it is not always appropriate. Shutting down the background page's context wipes its local state; state must be persisted through the various storage APIs instead of kept in variables. If the local state is complex, that may not be worth the effort. Also, if your extension needs to react really fast or really often, the suspend/resume cycle may prove to be a performance hit.
All in all, event pages are not a complete replacement for background pages; that's why they are optional and not the default. There are many things to consider when making an event page.
P.S. Regarding your "popup as most important part of the extension": this is exactly why it can't be the most important part in most cases. Usually, a background page is also used alongside a popup to keep event listeners and local state.
We have a really strange bug in our software. A call to glXSwapBuffers will block every now and then until some X events arrive (mouse hovering over the window / keyboard events). The bug seems to be identical to "Qt QGLWidget OpenGL rendering from thread blocks on swapBuffers()", which was never properly solved. We have the same kind of situation.
In our application we create multiple windows because the application needs to work with multiple screens. Each window is basically a QWidget that has a class derived from QGLWidget as its only child. Each window has its own rendering thread attached, which executes the OpenGL commands; the setup is sketched below.
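A simplified sketch of the per-window setup (not our actual code; class and member names are illustrative, and the Qt 4 QGLWidget API is assumed):

    // Simplified sketch (Qt 4, C++): one rendering thread per QGLWidget.
    // The GUI thread is assumed to call doneCurrent() on the widget before
    // starting this thread, so the GL context can be made current here.
    #include <QThread>
    #include <QGLWidget>

    class RenderThread : public QThread
    {
    public:
        explicit RenderThread(QGLWidget *glWidget)
            : m_gl(glWidget), m_stop(false) {}

        void stop() { m_stop = true; }

    protected:
        void run()
        {
            m_gl->makeCurrent();
            while (!m_stop) {
                drawFrame();          // issue the OpenGL commands
                m_gl->swapBuffers();  // the call that blocks sporadically
            }
            m_gl->doneCurrent();
        }

    private:
        void drawFrame() { /* glClear(), draw calls, ... */ }

        QGLWidget    *m_gl;
        volatile bool m_stop;
    };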
In this setup, the application just halts every now and then. It continues normally if we feed X events to it (moving the mouse over the windows / pressing keyboard buttons). Based on the debugger info, glXSwapBuffers() blocks somewhere inside the closed driver code.
We haven't confirmed this behaviour on NVIDIA cards, only on AMD cards, and it is more likely to appear when using multiple AMD cards. This suggests that the bug may come from the GPU drivers.
I would like to know whether anyone else has bumped into this, and whether somebody has even managed to solve it.
I have always read about and worked with a single UI thread, since having more than one will screw up message pumping, etc.
I am partly answering my own question here, but I want to validate my understanding of the Chrome browser, which is known to use multiple processes (one per tab): does it also speed up rendering by employing multiple UI threads?
My guess is that it does NOT, but if it does, it would be very interesting to know about it or to look at some sample C# code demonstrating the same (it does not have to be a web browser demo).
Any pointers in the multiple-UI-thread direction would help! Thanks.
I can't state definitively how Chrome handles its rendering threads, but I would assume that each tab has its own rendering thread. I don't see the point of going through all the effort of process-isolating the tabs only to tie them all together on a common rendering thread; they would all have the opportunity to interfere with each other.
I implemented a "Chrome-style" browser using WPF: the application shell was a single process, and each "tab" was a MAF AddIn running in a separate process. The rendering was all done in the child processes; nothing was shared. Each AddIn returned an INativeHandleContract (a WPF control) which was passed across the process boundary.
The upshot of this was that an exception ANYWHERE in a child tab would only take down that tab, and it could be detected by the parent process, giving it a chance to provide some feedback, reload the tab, etc.
This document wasn't around when I built it, but after a quick browse I think it has some pointers:
http://msdn.microsoft.com/en-us/library/bb909794.aspx
Kent Boogaart also lent a helpful hand:
http://kentb.blogspot.com/2008/06/maf-gymnastics-service-provider.html
You may also need this QFE from Microsoft to fix a bug in serialization you may experience when passing a WPF control across a process boundary:
http://archive.msdn.microsoft.com/KB982638
This relates to the following MS Connect bug: https://connect.microsoft.com/VisualStudio/feedback/details/467381/wpf-controls-cannot-be-passed-across-process-boundaries
Don't confuse threads and processes. Each process will have its own UI thread, and most likely also its own message pump.
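To illustrate what "its own message pump" means at the lowest level, here is the classic Win32 per-thread pump (a sketch only; frameworks such as WPF run an equivalent loop internally on their UI thread):

    // Minimal sketch (Win32 C++): the classic message pump that every UI
    // thread runs for itself, independently of other threads and processes.
    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0) > 0)
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }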
I'm new to development on Windows Phone 7 and Silverlight, but I do have experience with Win32 and threading in general.
Here's my question:
I am trying to "synchronize" the UI thread with another thread that seems to be used by the APIs of the object I am working with. In other words, I would like to make sure that before the user dismisses the current XAML page by pressing the Back button, the object I am working with, which is part of the C# class behind the XAML page, is deallocated.
The reason for that is that if I put the deallocation code in the NavigatedFrom handler, the UI thread may attempt to release the object WHILE it is in fact being used by the other thread. Therefore, I have to somehow synchronize the deallocation of this object.
Ideally, when the user presses the Back button on the phone, all I do is set a "quit" flag to true to indicate that the user intends to exit. The object's methods running on the other thread would "see" that this flag is set and would then BeginInvoke the deallocation code for the object (only because the object was allocated on the UI thread, so I figured it makes sense to deallocate it on the same thread, not knowing its internal workings). Finally, it would call NavigationService.GoBack() to ensure an "orderly" exit.
Unfortunately, I don't see a way to prevent the XAML page from being dismissed when the user presses the Back button, although I did override the NavigatedFrom and OnBackKeyPress methods. Even when they contain no code at all, the XAML page is dismissed anyway.
Another thing that is interesting, and I would appreciate your comments on this, is that I have a timer (System.Windows.Threading.DispatcherTimer). Would this timer be associated only with the C# class behind a XAML page that defines it? In other words, is there a concept of a "message pump" associated with each XAML page, or is there just one message pump for the UI thread that is used by ALL XAML pages? I am asking because although I dismiss the XAML page whose C# class defines the timer, the timer still seems to be running.
Thank you.
The reason for that is that if I put the deallocation code in the NavigatedFrom handler, the UI thread may attempt to release the object WHILE it is in fact being used by the other thread. Therefore, I have to somehow synchronize the deallocation of this object.
Not really a problem. If you queue the navigation on the Dispatcher as well, you don't get any NullReferenceExceptions.
Simply use Dispatcher.BeginInvoke(() => NavigationService.Navigate(...)) for safe navigation.
Would this timer be associated only with the C# class behind a XAML page that defines it?
If by "class" you mean "ViewModel", then yes, it most definitely should be in the ViewModel.