Mirror windows on X (Linux)

I need to write a very efficient application that mirrors the contents of an arbitrary external application many times onto an area of my own window, on Linux. On Windows, I did this with DwmRegisterThumbnail, which tells the compositor (the Desktop Window Manager) to draw the thumbnail of that foreign window, which it generates anyway, onto a rectangle in my own window when it composes the desktop image for the monitor. I believe this is one of the lowest-overhead ways to achieve my goal on Windows. The goal is to have minimal impact on the CPU, as the app will run on a fairly constrained machine. I never benchmarked this against GDI or DirectX methods of copying the screen data, but I doubt they are faster. Maybe I am wrong, so please correct me if so. Is there any faster method on Windows? One limitation of this approach is that you cannot touch the actual image data, so no drawing on top of it, for example, which is fine for my goal.
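For reference, the thumbnail registration I use looks roughly like this (a minimal sketch; destHwnd and sourceHwnd are placeholders for my window and the foreign one):

    // Ask the DWM compositor to draw a live thumbnail of sourceHwnd
    // into a rectangle of destHwnd.
    #include <windows.h>
    #include <dwmapi.h>
    #pragma comment(lib, "dwmapi.lib")

    HTHUMBNAIL RegisterMirror(HWND destHwnd, HWND sourceHwnd, RECT dest)
    {
        HTHUMBNAIL thumb = nullptr;
        if (FAILED(DwmRegisterThumbnail(destHwnd, sourceHwnd, &thumb)))
            return nullptr;
        DWM_THUMBNAIL_PROPERTIES props = {};
        props.dwFlags = DWM_TNP_RECTDESTINATION | DWM_TNP_VISIBLE;
        props.rcDestination = dest;   // where the compositor should draw it
        props.fVisible = TRUE;
        DwmUpdateThumbnailProperties(thumb, &props);
        return thumb;                 // DwmUnregisterThumbnail() when done
    }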
Now, my question is: what would be the best approach to achieve this on Linux? I have full liberty to choose an appropriate X server and display manager if needed, and I can write whatever software is necessary to make it as low-overhead as it is on Windows. Is there a similar API in some Linux compositor, like Mutter or KWin, that works well? Or should I hook into X and copy image data from it myself? Would that eat a lot of CPU?
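For concreteness, the closest X-side analogue I've found is the Composite extension; here is an untested sketch of the kind of API I mean (error handling omitted):

    // Redirect the source window off-screen and get a Pixmap that tracks
    // its contents, which can then be drawn into our own window (e.g. via
    // XRender or GL) without round-tripping pixels through the client.
    #include <X11/Xlib.h>
    #include <X11/extensions/Xcomposite.h>

    Pixmap MirrorSource(Display* dpy, Window source)
    {
        int eventBase = 0, errorBase = 0;
        if (!XCompositeQueryExtension(dpy, &eventBase, &errorBase))
            return None;
        // Automatic redirection keeps the window visible on screen while
        // its contents are also kept in off-screen storage.
        XCompositeRedirectWindow(dpy, source, CompositeRedirectAutomatic);
        // The pixmap must be re-fetched whenever the window is resized.
        return XCompositeNameWindowPixmap(dpy, source);
    }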
What's your experience and opinion? How should I take this on?
Thank you very much.

Related

In Vulkan, how can you associate each individual video card with the monitors it's directly connected to?

I have two monitors, each connected to a different GPU. Both GPUs are in a single machine, and I want to run a single application. I have two independent views, and I would like to render each one using a GPU/Monitor set. I can create multiple surfaces and devices, but I want to ensure I associate each surface with the GPU its monitor is plugged into, otherwise I suspect I'll suffer performance issues as the frame buffers need to be copied back and forth between cards.
I'm using fullscreen surfaces, and I was thinking this was something vkGetPhysicalDeviceSurfaceSupportKHR would tell me. However, both VkSurfaceKHR objects appear to be valid targets for each VkPhysicalDevice, so I guess this is something the OS and GPU driver can handle; but is there any hint about which surface is optimal to associate with a device?
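For reference, the query I'm making looks roughly like this (a minimal sketch; on my machine it reports support from every adapter, so it doesn't single out the GPU that actually drives the monitor):

    #include <vulkan/vulkan.h>

    bool CanPresent(VkPhysicalDevice device, uint32_t queueFamily,
                    VkSurfaceKHR surface)
    {
        VkBool32 supported = VK_FALSE;
        vkGetPhysicalDeviceSurfaceSupportKHR(device, queueFamily, surface,
                                             &supported);
        return supported == VK_TRUE;
    }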
From what I can tell, the extension VK_KHR_display is one way of doing this, but it's not available on my Windows 10 machine with an Nvidia GPU; it seems to be intended for embedded platforms only. However, it lets you list the attached displays for each device, which is pretty much what I'm looking for: https://vulkan.lunarg.com/doc/view/1.0.30.0/linux/vkspec.chunked/ch29s03.html
This quote from the docs makes me believe it may not be supported on Windows:
Issues
1) Does Win32 need a way to query for compatibility between a particular physical device and a specific screen? Compatibility between a physical device and a window generally only depends on what screen the window is on. However, there is not an obvious way to identify a screen without already having a window on the screen.
RESOLVED: No. While it may be useful, there is not a clear way to do this on Win32. However, a method was added to query support for presenting to the windows desktop as a whole.
However, I'm still interested in hearing if there's a work around to achieve a similar effect.
Finally figured out a workaround for this:
DirectX actually supports this through the IDXGIAdapter::EnumOutputs function, which lets you list the monitors connected to each GPU. Then, using these two extensions, you can map that information back to Vulkan:
VK_KHR_external_memory_capabilities
VK_KHR_get_physical_device_properties2
You can use these to get the deviceLUID from VkPhysicalDeviceIDPropertiesKHR.
This can then be compared with the AdapterLuid field of the DirectX structure DXGI_ADAPTER_DESC.
You can also use glfwGetWin32Window to get the HWND of each of your windows, which lets you associate a Vulkan surface with a DirectX output.
You now have all the information you need to associate Vulkan surfaces with the devices they're actually connected to.
At least in my application, setting this up correctly results in a significant difference in performance.
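For anyone following along, here is a hedged sketch of that LUID comparison. I'm using the Vulkan 1.1 core forms of the structures (which the two extensions above were promoted into); 'adapter' is an IDXGIAdapter* obtained from IDXGIFactory::EnumAdapters:

    #include <vulkan/vulkan.h>
    #include <dxgi.h>
    #include <cstring>

    // Returns true when 'device' and 'adapter' are the same physical GPU.
    bool SameAdapter(VkPhysicalDevice device, IDXGIAdapter* adapter)
    {
        VkPhysicalDeviceIDProperties idProps = {};
        idProps.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_ID_PROPERTIES;
        VkPhysicalDeviceProperties2 props = {};
        props.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_PROPERTIES_2;
        props.pNext = &idProps;
        vkGetPhysicalDeviceProperties2(device, &props);

        DXGI_ADAPTER_DESC desc = {};
        adapter->GetDesc(&desc);
        // Both LUIDs are VK_LUID_SIZE (8) bytes; compare them byte-for-byte.
        return idProps.deviceLUIDValid == VK_TRUE &&
               std::memcmp(idProps.deviceLUID, &desc.AdapterLuid,
                           VK_LUID_SIZE) == 0;
    }

Once you have the matching adapter, IDXGIAdapter::EnumOutputs lists the monitors that GPU drives.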
This would all be way simpler (and cross platform) if Windows would just support the VK_KHR_display and VK_KHR_display_swapchain extensions as Linux does.
There are two extensions that are useful for such things: the one you mentioned, VK_KHR_display, and a second called VK_KHR_display_swapchain, which allows you to create a swapchain directly on a device's display without any underlying window system.
But these extensions are rarely supported on Windows. In the core Vulkan API there is no way to achieve what you want, so I'm afraid you need to rely on OS-specific functions (the WinAPI, in this situation).
[EDIT]
Have you seen this question: How can you get the display adapter used for a particular monitor in Windows? If not, maybe it will help you start your research.
As you already discovered, on Win32 you need to use the OS windowing system, via the Windows API, to pick the display you want to use. It can be straightforward.
BUT if you intend to write simple, OS-agnostic code, check out the GLFW project. It has high-level functions for handling windows on all major OSs.
Check:
GLFW monitor Guide
GLFW Vulkan integration
GLFW in its own words:
GLFW is a free, Open Source, multi-platform library for OpenGL, OpenGL ES and Vulkan application development. It provides a simple, platform-independent API for creating windows, contexts and surfaces, reading input, handling events, etc.
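For instance, here is a minimal, untested sketch of picking a monitor with GLFW and creating a Vulkan surface on it; glfwInit() is assumed to have been called already, and glfwGetWin32Window (from GLFW's native-access header) then yields the HWND used in the workaround above:

    #define GLFW_INCLUDE_VULKAN
    #include <GLFW/glfw3.h>

    GLFWwindow* OpenFullscreenOn(int monitorIndex, VkInstance instance,
                                 VkSurfaceKHR* outSurface)
    {
        int count = 0;
        GLFWmonitor** monitors = glfwGetMonitors(&count);
        if (monitorIndex >= count)
            return nullptr;
        const GLFWvidmode* mode = glfwGetVideoMode(monitors[monitorIndex]);
        glfwWindowHint(GLFW_CLIENT_API, GLFW_NO_API);  // Vulkan, not OpenGL
        GLFWwindow* window = glfwCreateWindow(mode->width, mode->height,
                                              "view",
                                              monitors[monitorIndex], nullptr);
        if (!window ||
            glfwCreateWindowSurface(instance, window, nullptr,
                                    outSurface) != VK_SUCCESS)
            return nullptr;
        return window;
    }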

How to present to a different window using IDXGISwapChain and ID3D11Device/ID3D11DeviceContext?

Previously, when I've built tools, I've used D3D version 9, where the call to Present() can take a target window and rectangle, and you can thus draw from a single device into many different windows. This is great when using D3D to accelerate desktop applications, and/or building tools rather than games!
I've also built a game renderer with D3D11 before, which is also great, because the state management and threading interfaces are well designed, and you can even target D3D 9 level hardware that's still pretty common in the wild (as opposed to D3D 10, which can only target 10-and-better).
However, now I want to build a tool with D3D11. Unfortunately, the IDXGISwapChain that comes back from D3D11CreateDeviceAndSwapChain() seems to "remember" its HWND, and only wants to present to that window. This is highly inconvenient, because I may have a large number of windows that each need fairly simple graphics drawn to them, and only in response to a WM_PAINT (again, this is for a tool, not a game).
What I want to do is to save back-buffer RAM. Specifically, I used to be able to create a single back buffer, the size of the desktop, that I knew could cover all rendering needs, and that would be the single copy allocated. Even if there are 10 overlapping windows, they all render through the same back buffer, so there's no waste of memory beyond the initial allocation. I can create textures that are not swap chains and use them as render targets, but I can't find a good way of presenting to an arbitrary rectangle of an arbitrary client window without reading back the bitmap and copying it into a DIBSection, which would be really inefficient. Also, there is no way to create many swap chains and have them share the same back buffer.
The best I can do is to create one swap chain per window, and resize the back buffer of each swap chain to be really small, except when I render to the swap chain, at which point I resize it to match the window. However, this seems inefficient, because resizing the targets is not a "free" operation AFAICT. So, is there a better way?
The answer I ended up with was to create one back buffer per separate display area, rather than one desktop-sized buffer shared between them. I imagine that, in a world where desktop composition and transparency can happen to "anything" behind my back, that's probably helpful to the system.
Learn to love the VVM system, I guess :-) (VVM for Virtual Video Memory)
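In case it helps anyone else, here is a sketch of that setup: one shared ID3D11Device, and one IDXGISwapChain per window, created through the device's parent DXGI factory (hwnd is a placeholder; error checks omitted):

    #include <d3d11.h>
    #include <dxgi.h>

    IDXGISwapChain* CreateSwapChainFor(ID3D11Device* device, HWND hwnd)
    {
        // Walk up from the device to the DXGI factory that created it.
        IDXGIDevice* dxgiDevice = nullptr;
        device->QueryInterface(__uuidof(IDXGIDevice), (void**)&dxgiDevice);
        IDXGIAdapter* adapter = nullptr;
        dxgiDevice->GetAdapter(&adapter);
        IDXGIFactory* factory = nullptr;
        adapter->GetParent(__uuidof(IDXGIFactory), (void**)&factory);

        DXGI_SWAP_CHAIN_DESC scd = {};
        scd.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM; // size taken from hwnd
        scd.SampleDesc.Count = 1;
        scd.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
        scd.BufferCount = 1;
        scd.OutputWindow = hwnd;   // a different HWND per swap chain
        scd.Windowed = TRUE;

        IDXGISwapChain* swapChain = nullptr;
        factory->CreateSwapChain(device, &scd, &swapChain);
        factory->Release();
        adapter->Release();
        dxgiDevice->Release();
        return swapChain;
    }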

Google Chrome over Linux FrameBuffer

I am working on a project where I need to run Google Chromium over the Linux framebuffer, without any windowing-system dependency (it should draw into a buffer we provide, which will make porting it to any embedded system very easy). I do not need its multi-tab GUI, just its renderer window in the buffer. Has anybody ever tried this? Any help on what approach I should use?
If you need to have some direct control of the window functions, or want to poke around in the DOM data, then the right way to solve this problem is probably to look at embedding WebKit directly. That will be much faster and cleaner than what I am about to suggest.
Now, let's suppose you don't need all that fancy control and that you are really lazy. An ancient, low-tech solution to your problem could be to create a virtual frame buffer and then read its contents directly. To do this, you can set up Xvfb on your server:
http://www.x.org/releases/X11R7.6/doc/man/man1/Xvfb.1.xhtml
Xvfb is an old Unix tool that lets you create a virtual X server with whatever configuration you want. More importantly, it can be configured to write the contents of its X server's screen directly to a memory-mapped file! You can also set it up to use shared memory, which is a bit faster, though also more complicated.
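A hedged sketch, assuming Xvfb was started as, say, "Xvfb :99 -screen 0 1024x768x24 -fbdir /tmp/xvfb" (the display number and path are placeholders), of mapping the resulting screen file:

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>
    #include <cstdio>

    int main()
    {
        // Xvfb names the file after the screen it backs.
        const char* path = "/tmp/xvfb/Xvfb_screen0";
        int fd = open(path, O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }
        struct stat st;
        fstat(fd, &st);
        void* fb = mmap(nullptr, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (fb == MAP_FAILED) { perror("mmap"); return 1; }
        // Note: the file is in xwd format, i.e. a header (recording the
        // geometry and the header's own size) followed by raw pixel data.
        std::printf("mapped %lld bytes of virtual screen\n",
                    (long long)st.st_size);
        munmap(fb, st.st_size);
        close(fd);
        return 0;
    }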
I guess you will have better luck with uzbl and GTK/DirectFB. Same engine, and it works with JavaScript. For the Facebook chat issue, I think you just have to change the user-agent string.
There is the Origyn Web Browser, which is supposed to be an embedded WebKit-based browser that looks portable and does not depend on "heavy" libraries (like GTK). Their web page is http://www.sand-labs.org/owb, but it looks like their database crashed, which is a little worrying.
Try porting the WebKit engine to NetSurf's framebuffer-based code.
HTH
You could buy one of the remaining ten (or so) OGD1 boards:
http://en.wikipedia.org/wiki/Open_Graphics_Project
Then you can talk directly to the hardware using libpci. However, you will still need code that draws a picture into a memory buffer.
I realize this answer is more of a shameless plug, but people who are interested in your question might want such a board. I already have one, and it would help a lot if the project got more exposure.
This project:
http://code.google.com/p/wkhtmltopdf/
achieves that. It runs WebKit on a virtual display and captures the rendered output as a PDF. You can customize it to do something else.
Or you can create a display with TightVNC and set the DISPLAY variable so that Chrome renders into that display.
I suggest using the webkit2pdf package (available for many different Linux distributions). Then use fbgs, a wrapper for the fbi frame buffer program, which displays PDF files right on the frame buffer.

Coding a GTK+ application without a window manager?

I want to code something that basically works like a TiVo: switch it on and you only see the menu or an output, so no underlying OS or anything else is directly visible to the user.
So I want to use Linux as the base. Can you suggest a good base distribution?
Can I code a frontend without having a window manager up and running?
If yes, is that possible with java-gnome, or what language/GUI-framework combination would you suggest?
If no, what's the minimal window manager that can handle fancy menus, etc.?
What does it take to create video overlays over an HD stream? Are there some libraries I should take a look at?
Thanks
Yes. If you only have one window, you don't need a window manager. Using X, you can start an application and set its position and size from the command line (making it fullscreen); take a look at xinit if this is what you want, as it is likely the easiest way to get something working. Another option is to skip X and use DirectFB. If you want to display several windows, on the other hand, you need some sort of window manager to manage them.
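To give an idea of how little is involved, here is a hedged sketch of that single-fullscreen-window case with plain Xlib: no window manager is running, so you size the window to the screen yourself (error handling and drawing omitted):

    #include <X11/Xlib.h>

    int main()
    {
        Display* dpy = XOpenDisplay(nullptr);
        if (!dpy) return 1;
        int screen = DefaultScreen(dpy);
        // With no window manager, the window stays exactly where we put it.
        Window win = XCreateSimpleWindow(
            dpy, RootWindow(dpy, screen), 0, 0,
            DisplayWidth(dpy, screen), DisplayHeight(dpy, screen),
            0, 0, BlackPixel(dpy, screen));
        XMapWindow(dpy, win);
        XFlush(dpy);
        // ... render the frontend into 'win' ...
        for (;;) { XEvent e; XNextEvent(dpy, &e); }  // event loop placeholder
    }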
As long as you run X, there is no problem using java-gnome as the framework if that's what you are comfortable with. I assume you didn't mean to run the stock GNOME applications, but to code everything visible to the user yourself.
This very much depends on what you mean by fancy menus. If you mean transparency and such, you need a composite manager (unless you render everything yourself inside your application window). I'm not sure about this, but I think you can run a composite manager independently of a window manager if you find that suitable. Again, this is if you run X; with DirectFB, transparency and the like are done more simply.
If you intend to write your own media player, you should take a look at GStreamer. It can stream, decode and display video, and also add video overlays (among other things), and it is extremely easy to use.
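For instance, a minimal sketch against GStreamer 1.x (the URI is a placeholder); the playbin element handles demuxing, decoding and display on its own:

    #include <gst/gst.h>

    int main(int argc, char** argv)
    {
        gst_init(&argc, &argv);
        GstElement* pipeline = gst_parse_launch(
            "playbin uri=file:///path/to/video.mp4", nullptr);
        gst_element_set_state(pipeline, GST_STATE_PLAYING);
        // Block until an error occurs or the stream ends.
        GstBus* bus = gst_element_get_bus(pipeline);
        GstMessage* msg = gst_bus_timed_pop_filtered(
            bus, GST_CLOCK_TIME_NONE,
            (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));
        if (msg) gst_message_unref(msg);
        gst_object_unref(bus);
        gst_element_set_state(pipeline, GST_STATE_NULL);
        gst_object_unref(pipeline);
        return 0;
    }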
Minimalistic tiling window managers like Awesome, Ratpoison or XMonad may be useful as a base; otherwise you'll have to manage focus and window sizing yourself. It is normally fairly easy to make these invisible to the user.
Absolutely.
I wouldn't count on GNOME itself working without a window manager. Other than that... the language doesn't matter.
Window managers only do window management; menus and the like are the job of the widget toolkit. Anyway: Metacity.
... This one I have no clue about.

Designing an MFC App That Will Work on All Resolutions?

I'm currently designing my first ever GUI for Windows. I'm using MFC and Visual Studio 2008. The monitor I have been designing my program on has 1680x1050 native resolution. If I compile and send my program to one of my coworkers to run on their computer (generally a laptop running at 1024x768), my program will not fit on their screen.
I have been trying to read up on how to design an MFC application so that it will run on all resolutions, but I keep finding misleading information. Everywhere I look it seems that DLUs are supposed to resize your application for you, and that the only time you should run into problems is when you have an actual bitmap whose resolution you need to worry about. But if this is the case, why will my program no longer fit on my screen when I set my monitor to a lower resolution? Instead of my program "shrinking" to take up the same amount of screen real estate that it uses at 1680x1050, it gets huge and grainy.
The "obvious" solution here is to set my resolution to 1024x768 and redesign my program to fit on the screen. Except that I've already squished everything on my dialogs as much as possible to try and get my program to fit on screen running at 1024x768. My dialog fonts are set to Microsoft Sans Serif 8 but still appear huge (much larger than 8 points) when running at 1024x768.
I know there HAS to be a way to make my program keep the same scaling... right? Or is this the wrong way to approach the problem? What is the correct/standard way to go about designing an MFC program so that it can run on many resolutions, say 800x600 and up?
I assume your application GUI is dialog based (the main window is a dialog)?
In that case you have a problem because, as you discovered, MFC has no support for resizing a dialog correctly. Your options are:
Redesign your GUI as an SDI or MDI application.
Use a dialog-resize extension. There are many available; for some very good suggestions, see this question. Other options are this one and this one.
Don't use MFC. wxWidgets has much better support for dialog resizing.
MFC is only a thin wrapper over the Windows API. They both make an assumption that is hardly ever true: that if you have a higher-resolution screen, you'll adjust the DPI or font size in Windows to get larger characters. Most of the time, a higher resolution comes with a larger physical monitor, or a laptop where you want to squeeze as much information onto a small screen as possible; people value more information over greater detail. Thus the assumption fails.
If you can't squeeze your entire UI into the smallest size screen you need to support, you'll have to find another way to make it smaller. Without knowing anything about your UI, I might suggest using tabs to group the controls into pages.
I've had good luck making my windows resizable, so that people with larger screens can see more information at once. You need to do this the hard way: respond to the WM_SIZE message and decide which controls should grow and which should just move.
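A hedged sketch of that pattern (IDC_LIST and IDC_OK are placeholder control IDs, and ON_WM_SIZE() must be added to the dialog's message map):

    // In the dialog's message map: ON_WM_SIZE()
    void CMyDialog::OnSize(UINT nType, int cx, int cy)
    {
        CDialog::OnSize(nType, cx, cy);
        if (CWnd* list = GetDlgItem(IDC_LIST)) {
            // Stretch the main control to fill the client area,
            // leaving room for the buttons below it.
            list->MoveWindow(10, 10, cx - 20, cy - 60);
        }
        if (CWnd* ok = GetDlgItem(IDC_OK)) {
            CRect rc;
            ok->GetWindowRect(&rc);
            // Keep the button pinned to the bottom-right corner.
            ok->SetWindowPos(NULL, cx - rc.Width() - 10,
                             cy - rc.Height() - 10, 0, 0,
                             SWP_NOSIZE | SWP_NOZORDER);
        }
    }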
There is no automatic way to resize the content of your dialogs when the resolution changes, so you need to set some boundaries.
Option 1.
If you are developing your app for customers, pick one minimum resolution (like 1024x768) and redesign your dialogs so that everything fits. Maybe break some up into several, or use a tab strip control.
Option 2.
Create separate dialog forms for each resolution you'd like to support, but use the same class to handle them. At runtime, detect the resolution and use the appropriate form.
Option 3.
Write your own resizing functionality, so that the user can adjust the size of your dialogs to his liking.
