I've been reading a lot about XCB's API and compiling a lot of examples from various sites. However, none of them directly address the issue of actually creating real windows for GUI applications; they all sort of linger on how to draw primitive 2D graphics with XCB. E.g. create a window, draw a few squares, close the window.
How do you actually make processes spawn their windows within your XCB window manager?
I can only assume that when a process is created, the X server is notified and forwards the request to your application; you then map a window for the process to the screen, and whatever underlying graphics subsystems do the drawing on behalf of the X server (and your window manager).
I've also read about how people fork() third party processes from their window manager. This seems like a silly idea. Is it? Shouldn't applications run independently of the window manager?
I've read through a lot of window manager source code and, call me a novice, but I've seen nothing that's directly linked to how applications draw their own windows.
I'd appreciate it greatly if someone could shed light on how you're supposed to handle applications' window creation via XCB. Thanks.
However, none of them directly address the issue of actually creating real windows for GUI applications;
That's not something you want to do with bare protocol-level APIs anyway. If you want a rich GUI, use a toolkit that does all the heavy lifting for you. Xlib / XCB will give you only rudimentary tools which work well for rather simple windows; even there I wouldn't start a project without at least using something like cairo.
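To give a flavour of what that looks like, here is a minimal sketch pairing cairo with a bare XCB window via the cairo-xcb backend (this assumes cairo was built with XCB support; the geometry and colours are placeholder values):

```c
/* Minimal sketch: a bare XCB window with cairo doing the drawing.
 * Build (assuming pkg-config files are installed):
 *   cc cairo_demo.c $(pkg-config --cflags --libs cairo xcb) */
#include <stdlib.h>
#include <cairo/cairo-xcb.h>
#include <xcb/xcb.h>

/* Walk the screen's depths to find the visualtype of the root visual. */
static xcb_visualtype_t *root_visualtype(xcb_screen_t *s)
{
    for (xcb_depth_iterator_t d = xcb_screen_allowed_depths_iterator(s);
         d.rem; xcb_depth_next(&d))
        for (xcb_visualtype_iterator_t v = xcb_depth_visuals_iterator(d.data);
             v.rem; xcb_visualtype_next(&v))
            if (v.data->visual_id == s->root_visual)
                return v.data;
    return NULL;
}

int main(void)
{
    xcb_connection_t *c = xcb_connect(NULL, NULL);
    xcb_screen_t *s = xcb_setup_roots_iterator(xcb_get_setup(c)).data;

    uint32_t mask = XCB_CW_BACK_PIXEL | XCB_CW_EVENT_MASK;
    uint32_t values[] = { s->white_pixel, XCB_EVENT_MASK_EXPOSURE };
    xcb_window_t w = xcb_generate_id(c);
    xcb_create_window(c, XCB_COPY_FROM_PARENT, w, s->root, 0, 0, 300, 200, 0,
                      XCB_WINDOW_CLASS_INPUT_OUTPUT, s->root_visual,
                      mask, values);
    xcb_map_window(c, w);
    xcb_flush(c);

    cairo_surface_t *surf =
        cairo_xcb_surface_create(c, w, root_visualtype(s), 300, 200);
    cairo_t *cr = cairo_create(surf);

    /* Redraw on every Expose event instead of drawing just once. */
    xcb_generic_event_t *ev;
    while ((ev = xcb_wait_for_event(c))) {
        if ((ev->response_type & ~0x80) == XCB_EXPOSE) {
            cairo_set_source_rgb(cr, 0.2, 0.4, 0.8);
            cairo_rectangle(cr, 40, 40, 120, 80);
            cairo_fill(cr);
            cairo_surface_flush(surf);
            xcb_flush(c);
        }
        free(ev);
    }
    return 0;
}
```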
How do you actually make processes spawn their windows within your XCB window manager?
By calling xcb_create_window and then xcb_map_window. That's it. That's all. You create a window and then you make it visible.
Of course there's a lot of other stuff you should do with your windows, but in terms of creating and displaying it, that's it.
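In code, that minimal create-and-map client might look like the following sketch (error handling omitted; the geometry values are placeholders):

```c
/* Minimal sketch: create a window and map it. That's all there is to it. */
#include <unistd.h>
#include <xcb/xcb.h>

int main(void)
{
    xcb_connection_t *c = xcb_connect(NULL, NULL);   /* connect to $DISPLAY */
    xcb_screen_t *s = xcb_setup_roots_iterator(xcb_get_setup(c)).data;

    xcb_window_t w = xcb_generate_id(c);             /* allocate an XID */
    xcb_create_window(c,
                      XCB_COPY_FROM_PARENT,          /* depth */
                      w, s->root,                    /* window and parent */
                      0, 0, 640, 480,                /* x, y, width, height */
                      0,                             /* border width */
                      XCB_WINDOW_CLASS_INPUT_OUTPUT,
                      s->root_visual,
                      0, NULL);                      /* no extra attributes */
    xcb_map_window(c, w);                            /* make it visible */
    xcb_flush(c);                                    /* push the requests */

    pause();                                         /* keep the client alive */
    return 0;
}
```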
I can only assume that when a process is made, X server is notified
The X server doesn't care about processes.
forwards the request to your application
What request?
Shouldn't applications run independently of the window manager?
Well, yes… but realistically, the lifetime of the window manager equals the lifetime of the X server (people don't usually switch window managers in between). And without the X server all X clients die anyway.
So theoretically it's completely independent, but the reality is that there's no real reason to make such a distinction. That said, all window managers I know of which offer some way to launch applications fork them such that the forked process survives even if the window manager exits.
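For illustration, the usual trick is a double fork(), so the launched program gets reparented to init and keeps running without leaving a zombie behind. A sketch (the command name is just an example):

```c
/* Sketch: launch an application from a WM so it outlives the WM.
 * The double fork() reparents the grandchild to init, so no zombie is
 * left behind and the program keeps running after the WM exits. */
#include <sys/wait.h>
#include <unistd.h>

static void spawn(const char *cmd)
{
    pid_t pid = fork();
    if (pid == 0) {                 /* first child */
        if (fork() == 0) {          /* grandchild: becomes the app */
            setsid();               /* detach from the WM's session */
            execlp(cmd, cmd, (char *)NULL);
            _exit(127);             /* exec failed */
        }
        _exit(0);                   /* first child exits immediately */
    }
    if (pid > 0)
        waitpid(pid, NULL, 0);      /* reap the first child */
}

int main(void)
{
    spawn("xterm");                 /* example: launch a terminal */
    return 0;
}
```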
I've read through a lot of window manager source code and, call me a novice, but I've seen nothing that's directly linked to how applications draw their own windows.
That's because the window manager isn't involved in rendering windows – at all. The window manager only knows that windows exist and manages their abstract properties, but for all the window manager knows, a window is a rectangle with a few properties.
Actually rendering to the window is something the X client will do itself directly with the X server.
The one type of rendering a window manager is usually involved in is decoration rendering (window borders and titles and the like). But the window manager is also just an X client, so in this regard it's just another application rendering something (and usually a window manager renders such decorations into frame windows it created itself – so-called reparenting window managers).
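To make the reparenting idea concrete, here's a rough sketch (not taken from any particular WM) of how a frame window could be wrapped around a client; TITLEBAR_H and the event mask are made-up choices, and the connection, screen, and client window are assumed to exist (e.g. taken from a map request):

```c
/* Sketch: how a reparenting WM might wrap a client window in a frame. */
#include <xcb/xcb.h>

#define TITLEBAR_H 20   /* hypothetical decoration height */

static xcb_window_t frame_client(xcb_connection_t *c, xcb_screen_t *s,
                                 xcb_window_t client,
                                 uint16_t width, uint16_t height)
{
    xcb_window_t frame = xcb_generate_id(c);
    uint32_t mask = XCB_CW_BACK_PIXEL | XCB_CW_EVENT_MASK;
    uint32_t values[] = { s->black_pixel,
                          XCB_EVENT_MASK_EXPOSURE |           /* redraw deco */
                          XCB_EVENT_MASK_SUBSTRUCTURE_NOTIFY };

    /* The frame is a child of the root; it holds the decorations. */
    xcb_create_window(c, XCB_COPY_FROM_PARENT, frame, s->root,
                      0, 0, width, height + TITLEBAR_H, 1,
                      XCB_WINDOW_CLASS_INPUT_OUTPUT, s->root_visual,
                      mask, values);

    /* Move the client inside the frame, below the titlebar. */
    xcb_reparent_window(c, client, frame, 0, TITLEBAR_H);

    xcb_map_window(c, frame);
    xcb_map_window(c, client);
    xcb_flush(c);
    return frame;
}
```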
Update after your first comment: The X client (which wants to create a window) sends those requests (create / map window) to the X server. There are multiple transports over which this can happen, the most common case on Linux systems nowadays being UNIX domain sockets.
In X there are different kinds of events clients can select. One of these event types is the substructure redirect, which means a client can ask the X server to be notified whenever some window, e.g., creates child windows.
The root window is also just a window, but it has some unique properties like the fact that it always exists and cannot be closed. Only one X client can select substructure redirect on the root window – doing this is what makes the window manager a window manager.
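A sketch of that step: ask for substructure redirect on the root window and bail out if some other client already holds it (checking for the resulting error is the conventional way a WM detects that another WM is running):

```c
/* Sketch: claim substructure redirect on the root window. If another
 * window manager is already running, the server replies with an error
 * and we give up. */
#include <stdio.h>
#include <stdlib.h>
#include <xcb/xcb.h>

int main(void)
{
    xcb_connection_t *c = xcb_connect(NULL, NULL);
    xcb_screen_t *s = xcb_setup_roots_iterator(xcb_get_setup(c)).data;

    uint32_t events = XCB_EVENT_MASK_SUBSTRUCTURE_REDIRECT |
                      XCB_EVENT_MASK_SUBSTRUCTURE_NOTIFY;
    xcb_generic_error_t *err = xcb_request_check(c,
        xcb_change_window_attributes_checked(c, s->root,
                                             XCB_CW_EVENT_MASK, &events));
    if (err) {
        fprintf(stderr, "another window manager is already running\n");
        free(err);
        return 1;
    }
    /* From here on we are "the" window manager: map/configure requests
       for children of the root are redirected to us as events. */
    return 0;
}
```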
So now we have an X client with substructure redirect on the root window (our WM). Now any time a client requests to map a window, the X server will instead redirect this request to the window manager client (via a MapRequest event) and stop there. The only exception to this is map requests coming from the window manager itself: these the X server will process (in order to not just play ping pong with the window manager for all eternity).
This basically sets up an intervention loop: the client asks the X server to map the window, the X server forwards the request to the window manager, the window manager may choose to send the map request for the window back to the server, and the server processes it because it came from the window manager.
And that's it; that's how mapping a window works.
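In XCB terms, the window manager's side of that loop boils down to something like this sketch (assuming the connection c already holds substructure redirect on the root, as in the previous snippet):

```c
/* Sketch of the intervention loop: the WM receives MapRequest events
 * and decides whether to forward the map to the server. */
#include <stdlib.h>
#include <xcb/xcb.h>

static void wm_event_loop(xcb_connection_t *c)
{
    xcb_generic_event_t *ev;
    while ((ev = xcb_wait_for_event(c))) {
        switch (ev->response_type & ~0x80) {
        case XCB_MAP_REQUEST: {
            xcb_map_request_event_t *e = (xcb_map_request_event_t *)ev;
            /* This map came from some client and was redirected to us.
               Because *we* now issue the map, the server executes it. */
            xcb_map_window(c, e->window);
            xcb_flush(c);
            break;
        }
        /* ... XCB_CONFIGURE_REQUEST, XCB_UNMAP_NOTIFY, etc. ... */
        }
        free(ev);
    }
}
```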
How is my window manager meant to tell when to make and map a window?
The window manager doesn't tell the client what to do. How would it know what a client even wants to do? It's the opposite: the client does stuff and the window manager intervenes and reacts as it sees fit (in some regards – the window manager by no means has full control over the X server).
Is there some kind of event loop I need to create a case for (where there would be a request to create a window from X server)?
As stated above, deciding when to create windows is up to the client. But yes, a core concept of X clients is that they need to set up an event loop.
For example, in the case of mapping the window: the client sends the map request and MUST NOT assume the window is mapped (because the window manager can choose to reject the request!). The client knows its window was mapped because, when that happens, the X server will generate a MapNotify event and send it to the client.
Note that the window manager can not only reject map requests from the client; it can even map windows for which it never received a map request from the client. So a client must always wait for these events and react accordingly when one of its windows has been mapped.
There's a whole bunch of other events that are important for clients, in particular Expose events, which tell the client that it needs to redraw (parts of) its window.
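Sketched as code, a client-side event loop covering those cases might look like this; note the window must have been created with XCB_EVENT_MASK_STRUCTURE_NOTIFY | XCB_EVENT_MASK_EXPOSURE in its event mask, otherwise these events are never delivered:

```c
/* Client-side sketch: after requesting a map, wait for events instead
 * of assuming the map happened. Assumes a connection `c` and a window
 * `w` created as in the earlier snippet, with STRUCTURE_NOTIFY and
 * EXPOSURE selected in the window's event mask. */
#include <stdlib.h>
#include <xcb/xcb.h>

static void client_loop(xcb_connection_t *c, xcb_window_t w)
{
    xcb_map_window(c, w);       /* a request, not a guarantee */
    xcb_flush(c);

    xcb_generic_event_t *ev;
    while ((ev = xcb_wait_for_event(c))) {
        switch (ev->response_type & ~0x80) {
        case XCB_MAP_NOTIFY:
            /* Only now do we know the window is actually mapped. */
            break;
        case XCB_EXPOSE:
            /* (Part of) the window needs to be redrawn, e.g.:
               redraw((xcb_expose_event_t *)ev); */
            break;
        }
        free(ev);
    }
}
```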
Related
When I look at the task manager of Google Chrome, I can see that some tabs each run under an individual process while a group of tabs runs under a single process. Out of curiosity, I searched to find out why it runs as multiple processes instead of multiple threads. One thing that came to my attention is that when it runs as a single process and spawns multiple threads, there could be a few limitations/drawbacks, like:
1) A limitation on the number of threads that could be created
2) When a single tab becomes unresponsive, the entire application would become useless and we would have to quit Chrome and restart it because of one misbehaving site.
A few people mentioned that Chrome uses a single process per domain, but here it doesn't seem to be true.
I'm still not clear on:
1) When does Chrome decide to spawn a new process?
2) What are the other advantages of running individual tabs under separate processes?
3) How are cookies shared between tabs when each of them runs under a different process? Is this happening via interprocess communication? If yes, will it be too costly? And will it impact the other tabs' (i.e. the web pages') performance?
After asking this question, I came across this article (Multi Process Architecture of Chromium) and it answered my question (1).
When does Chrome decide to spawn a new process?
Once Google Chrome has created its browser process, it will generally create one renderer process for each instance of a web site you visit. This approach aims to keep pages from different web sites isolated from each other.
You can think of this as using a different process for each tab in the browser, but allowing two tabs to share a process if they are related to each other and are showing the same site. For example, if one tab opens another tab using JavaScript, or if you open a link to the same site in a new tab, the tabs will share a renderer process. This lets the pages in these tabs communicate via JavaScript and share cached objects. Conversely, if you type the URL of a different site into the location bar of a tab, they will swap in a new renderer process for the tab.
They place a limit on the number of renderer processes that they create (20 in most cases). Once they hit this limit, they'll start re-using the existing renderer processes for new tabs.
What are the advantages of running tabs under different processes?
Google Chrome takes advantage of these properties and puts web apps and plug-ins in separate processes from the browser itself. This means that a rendering engine crash in one web app won't affect the browser or other web apps. It means the OS can run web apps in parallel to increase their responsiveness, and it means the browser itself won't lock up if a particular web app or plug-in stops responding. It also means they can run the rendering engine processes in a restrictive sandbox that helps limit the damage if an exploit does occur.
Interestingly, using multiple processes means Google Chrome can have its own Task Manager, which you can get to by right clicking on the browser's title bar. This Task Manager lets you track resource usage for each web app and plug-in, rather than for the entire browser. It also lets you kill any web apps or plug-ins that have stopped responding, without having to restart the entire browser.
I'm writing a UAP C#/XAML application; for the time being I'm interested in the case when the user runs my app in a desktop environment (keyboard and mouse are available, and the machine is running some version of Windows 10, not Windows 10 Mobile).
I want to intercept ALT+F4 in order to ask the user a few important questions before they quit, like, for example, in Notepad: when you have an unsaved file, Notepad notifies you about this fact and asks whether you want to save your work, quit without saving, or go back to working on your file.
Is such behaviour possible in a Windows 10 UAP? I tried to play with the Application.Suspending event and ExtendedExecutionSession, but it seems that before this event is fired the GUI thread is dead, and all I can do in this event's handler are operations not requiring user interaction.
There is no way to intercept and stop events like this.
By the time your app is told it is suspending following a close event (Alt+F4, the cross clicked), you have 10 seconds (on desktop) to clean up and save state before you are completely terminated.
With universal apps, you shouldn't need a dialog asking them whether to save or not; just save state so that the next time they reopen the app you refresh the view to how it was before, or, think of a mail client, save their typing as a draft. Microsoft's guidance is, however, that if the user closes your app, you should assume they want it gone, so don't restore state.
The only thing you can do for some extra processing is ask the OS for extended execution, though this isn't guaranteed and, even if granted, can be revoked with one second's notice before termination. It's important to note that, even with extended execution granted, your app is not allowed any UI.
For more information on Windows 10 universal application lifecycle, I'd recommend watching the Application Lifecycle session on Microsoft Virtual Academy.
I am using SailsJS for a web application which basically lets the user download and process any video from the internet (say, YouTube). The user enters a link to the video, and my Sails app downloads the video if available and then starts processing the downloaded video using a shell script (some OpenCV processing to find different frames).
This process takes a very long time to complete, and the user can navigate away from the page and do whatever they want. Now, to check on the progress by visiting this page later, I need to be able to connect with the child process that was created earlier for this video file.
I have come up with two possible solutions:
1) Using gearman to implement a job server, connecting to it every time the user navigates to the page, and showing progress based on the callback events. This is the first time I'll be using gearman.
2) Somehow storing the processID of the child process in the session/db and then using it to find the process using ps-node.
Which of these is the better approach (if you think they'll work fine)? Or is there any other solution I don't know about? Any pointers in the right direction will be appreciated.
Let's start with the second option. Don't use it, simply because this way your site's users will have more control over the number of processes running on your server than you will.
Number one is way better, but using a separate job server seems like a bit of overkill to me (though I have to admit I'm not fully informed about the scale of your plans).
Bottom line, I would use a message/job queue (kue seems like a perfect fit to me) and store the progress in DB or (preferably) Redis (or whatever cache you are using).
I am new to mobile website development, and I'm facing an issue where I want to refresh data on the website every 30 seconds; the refresh is invoked from the client side and the server provides the data in response. The problem is that when I close the browser or the browser goes into the background, it stops working. Is there anything we can do to make this possible?
Have a look at the Android Developers - Processes and Threads guide. You'll get a deeper introduction to how process life cycles work and what the difference is between background and foreground process states.
You could embed your web app in a WebView. This way you could deal with the closing-browser case: you could provide a means to "exit" the app that involves closing only your container activity. That way the timers you have registered in JavaScript will still be running in the 'WebViewCoreThread'. This is undesirable behavior and a source of problems, but you can take advantage of it if you want (just make sure you don't run UI-related code there). I've never tested this on KitKat (which uses a different WebView based on Chrome) but it works for previous versions, as I described here.
Now, the user can always close any app, and even without user interaction the OS can kill your app on low memory. So just give up on long-running apps that never end; the OS is designed in such a way that this is simply not possible.
You could go native and schedule Alarms using the AlarmManager.
Just checked this out on the Android KitKat WebView, and as per Mister Smith's comments the JavaScript will continue executing in the background until the Activity is killed off:
Just tested with this running in a WebView:
http://jsbin.com/EwEjIyaY/3/edit
My gut instinct is that if the user has moved your application into the background, there is little value in performing updates every 30 seconds; it makes more sense to cache what information you currently have available and start updating again once the user opens the device back up.
As far as Chrome for Android goes, the same thing is happening: as Chrome falls into the background, the JavaScript is still running.
If you are experiencing different behaviour then what exactly are you seeing and can you give us an example?
How do you disable closing an application when it is not responding and just wait till it recovers back?
What you're asking is not just impossible (any user with sufficient privileges can terminate a process... no matter what OS), it's a horrible User Experience (UX) decision.
Think about it from the User's point of view. You're sitting there looking at an application. The application doesn't appear to be doing anything and isn't providing you any visual feedback that it is doing work. You'd think the application was hung and you'd restart it.
You could do anything from showing a scrolling progress bar to having the long running process update some piece of information on the UI thread (think of an installer in mid-install...it's constantly telling you which files it's putting where rather than just making you wait). In any case, you should be providing some visual feedback to the user so they know your application is still running.
Have the GUI work in a separate thread so that it is (hopefully) never "not responding".
If this is a question about programming, your program should never be in that state since you've tied up the GUI thread somehow. And you can't (and shouldn't) stop Windows or the user from closing your program. They've detected your code is rubbish and have every right to forcefully toss it out of their valuable address space.
In any case, your program's too busy doing other stuff - if it can't respond to the user, it probably can't waste time protecting itself either.
At some point, developers need to get it through their thick skulls that the computer belongs to the user, not them.
Of course, if you're talking about how to configure Windows to prevent this (such as on your PC), then this question belongs on serverfault.
Don't. No matter how important you think your application is, your users' ability to control their own systems is more important.
You can always terminate applications from Task Manager if you have the privileges. You can disable or hide the system menu entries and the close icon in the application window, but that is not going to prevent the user from terminating it from Task Manager, as mentioned before. Instead, I would just show some busy-processing indicator in the application so the user understands what is going on.
The only thing you can do is disable the close button; users can still kill it from Task Manager or a similar tool, there's no way around that. You could make killing it harder by launching it as a privileged process, but that comes with many more problems of its own.
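For completeness, disabling the close button on Windows amounts to graying out the Close item of the window's system menu. A small Win32 sketch (hwnd being your top-level window):

```c
/* Win32 sketch: gray out the system menu's Close item, which also
 * disables the title-bar X button. This does NOT stop Task Manager
 * from killing the process. */
#include <windows.h>

void disable_close_button(HWND hwnd)
{
    HMENU menu = GetSystemMenu(hwnd, FALSE);  /* the window's system menu */
    if (menu != NULL)
        EnableMenuItem(menu, SC_CLOSE, MF_BYCOMMAND | MF_GRAYED);
}
```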