Linux: output console to multiple monitors

I want to output my console application to multiple outputs, or be able to select an output. Is this possible from the command line? The idea is to run a fullscreen SDL app using the VESA driver.
I've read that SDL has a VESA output mode (vgl), so if I'm correct it can be started from a terminal or from the console (without X11).
http://arstechnica.com/civis/viewtopic.php?f=16&t=702038
I could not find a non-X11 solution for this.

I just found out VESA does not support multi-screen output at all.
Edit: Although I've found some sources stating that some graphics cards support TV-out with VESA. Is there a compatibility list for this? One would think VESA output to external monitors is pretty critical for laptops (broken screens, etc.).
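For the "select an output" part, SDL 1.2 can at least be pointed at a specific console backend and framebuffer device through environment variables; a minimal sketch, assuming an SDL 1.2 build with the fbcon backend and that the second output is exposed as /dev/fb1 (this still drives one output at a time):

    /* Sketch: pick SDL 1.2's console (non-X11) video backend and a specific
     * framebuffer device before initializing SDL. Build: gcc app.c -lSDL */
    #include <SDL/SDL.h>
    #include <stdlib.h>

    int main(void) {
        /* SDL_VIDEODRIVER selects the backend ("fbcon", "directfb", "svgalib",
         * "vgl" on FreeBSD...); SDL_FBDEV points the fbcon backend at a
         * particular output. /dev/fb1 is an assumption for a second head. */
        setenv("SDL_VIDEODRIVER", "fbcon", 1);
        setenv("SDL_FBDEV", "/dev/fb1", 1);

        if (SDL_Init(SDL_INIT_VIDEO) != 0)
            return 1;
        SDL_Surface *screen = SDL_SetVideoMode(1024, 768, 32, SDL_FULLSCREEN);
        if (!screen)
            return 1;
        /* ... draw on `screen` ... */
        SDL_Quit();
        return 0;
    }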

Related

Objects or parts of the screen not visible/refreshed

Do you know the situation where a view in an application is not fully refreshed on screen? Where you need to hover the mouse over an object to make it visible/refreshed, or open and close a page/screen to refresh all objects?
I have a PC box with Windows 10. The PC has only one DVI port, so I extended the screens with USB-to-DVI adapters (3x VGA2725), and the desktop is extended across 4 monitors. Each monitor shows a different 'window' of the same software (SCADA). On the screens driven by the adapters there is a problem with view/screen refresh: 'artefacts', or parts of the screen not being updated. In that situation the page needs to be closed and opened again. Neither the CPU nor the disk is overloaded.
Could you help me determine whether the above is more of a software issue or a hardware issue? Should I look for bad drivers? What should I check?
EDIT: sorry, I missed the fact that the adaptor is multiplying the signal to all monitors.
Nevertheless, my suggestion is to check with only one monitor at a time, and in different combinations, to isolate the problem.
PLEASE DISREGARD THE FIRST ANSWER BELOW.
Has the adaptor been tested on one of the other monitors?
If it has been tested on all monitors and the error appears only on one of them, checking that monitor's specs and the adaptor's specs for any incompatibilities is probably the best approach.
If no testing on a different monitor has been done, I would definitely try to isolate the issue by connecting the adaptor to each of the external monitors.
If you encounter the same error on a different monitor, we know it is most likely the adaptor or its drivers.

How do I enumerate and use OpenGL on a headless GPU?

Despite days of research, I can't seem to find a reliable way to programmatically create GL[ES] contexts on secondary GPUs. On my laptop, for example, I have the Intel GPU driving the panel, and a secondary NVIDIA card not connected to anything. optirun and primusrun let me run on the NVIDIA card, but then I can't detect the Intel GPU. I also don't want to require changing xorg.conf to add a dummy display.
I have tried a number of extensions, but none seem to work correctly:
glXEnumerateVideoDevicesNV returns 0 devices.
eglQueryDevicesEXT returns 0 devices.
eglGetPlatformDisplay only works with the main panel and gives me an Intel context.
I am fine getting my hands dirty, e.g. rolling my own loader, but I can't seem to find any documentation for where to start. I've looked at the source for optirun but it just seems to do redirection. Obviously something equivalent to Windows' IDXGIFactory::EnumAdapters would be ideal, but I'm fine with anything that works without additional system configuration.
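For reference, the EGL_EXT_device_enumeration route normally looks like the sketch below; whether it reports any devices depends entirely on driver support (the proprietary NVIDIA driver exposes it, other drivers vary), which may explain the zero results above:

    /* Sketch: enumerate GPUs with EGL_EXT_device_enumeration and create a
     * display per device. Build: gcc enum.c -lEGL */
    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <stdio.h>

    int main(void) {
        PFNEGLQUERYDEVICESEXTPROC queryDevices =
            (PFNEGLQUERYDEVICESEXTPROC)eglGetProcAddress("eglQueryDevicesEXT");
        PFNEGLGETPLATFORMDISPLAYEXTPROC getPlatformDisplay =
            (PFNEGLGETPLATFORMDISPLAYEXTPROC)eglGetProcAddress("eglGetPlatformDisplayEXT");
        if (!queryDevices || !getPlatformDisplay) {
            fprintf(stderr, "EGL device extensions not available\n");
            return 1;
        }

        EGLDeviceEXT devices[16];
        EGLint n = 0;
        queryDevices(16, devices, &n);
        printf("found %d EGL device(s)\n", n);

        for (EGLint i = 0; i < n; ++i) {
            /* Each device can back a display without any X11/Wayland server. */
            EGLDisplay dpy = getPlatformDisplay(EGL_PLATFORM_DEVICE_EXT,
                                                devices[i], NULL);
            EGLint major, minor;
            if (dpy != EGL_NO_DISPLAY && eglInitialize(dpy, &major, &minor)) {
                printf("device %d: EGL %d.%d, vendor %s\n",
                       i, major, minor, eglQueryString(dpy, EGL_VENDOR));
                eglTerminate(dpy);
            }
        }
        return 0;
    }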

Get screenshot of EGL DRM/KMS application

How can I get a screenshot of a graphical application programmatically? The application draws its window using the EGL API via DRM/KMS.
I use Ubuntu Server 16.04.3, and the graphical application is written with Qt 5.9.2 using the EGLFS QPA backend. It is started from the first virtual terminal (if that matters), and then it switches the display to full-HD graphics mode.
When I use utilities (e.g. fb2png) which operate on /dev/fb?, only the text-mode contents of the first virtual terminal (Ctrl+Alt+F1) are saved in the screenshot.
It is unlikely that there is an EGL API to get the contents of a buffer from another process's context (it would be insecure), but maybe there is some mechanism (and library) to get access to the final output of the GPU?
One way would be to take the screenshot from within your application, reading the contents of the back buffer with glReadPixels(). Or use QQuickWindow::grabWindow(), which internally uses glReadPixels() in the correct way. This seems not to be an option for you, as you need to take a screenshot while the Qt app is frozen.
The other way would be to use the DRM API to map the framebuffer and then memcpy the mapped pixels. This is implemented in Chromium OS in Python and can be translated to C easily; see https://chromium-review.googlesource.com/c/chromiumos/platform/factory/+/367611. The DRM API can also be used from a process other than the Qt UI process that does the rendering.
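A rough C translation of that approach might look like the sketch below, assuming /dev/dri/card0 is the KMS device, the driver exposes the scanout buffer as a mappable dumb buffer, and the process has the needed privileges (drmModeGetFB() usually returns a usable handle only to the DRM master or root):

    /* Sketch: map the buffer currently scanned out by the first active CRTC.
     * Build: gcc grab.c $(pkg-config --cflags --libs libdrm) */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    int main(void) {
        int fd = open("/dev/dri/card0", O_RDWR);  /* assumed KMS node */
        drmModeRes *res = drmModeGetResources(fd);

        for (int i = 0; i < res->count_crtcs; ++i) {
            drmModeCrtc *crtc = drmModeGetCrtc(fd, res->crtcs[i]);
            if (!crtc || !crtc->buffer_id) { drmModeFreeCrtc(crtc); continue; }

            /* Returns a GEM handle only for the DRM master or root. */
            drmModeFB *fb = drmModeGetFB(fd, crtc->buffer_id);

            /* Works when the driver exposes the buffer as a mappable dumb
             * buffer; otherwise a PRIME export/import dance is needed. */
            struct drm_mode_map_dumb mreq = { .handle = fb->handle };
            drmIoctl(fd, DRM_IOCTL_MODE_MAP_DUMB, &mreq);
            unsigned char *pixels = mmap(NULL, (size_t)fb->pitch * fb->height,
                                         PROT_READ, MAP_SHARED, fd, mreq.offset);

            /* Raw scanout data, fb->pitch bytes per row (e.g. XRGB8888). */
            fwrite(pixels, fb->pitch, fb->height, stdout);

            munmap(pixels, (size_t)fb->pitch * fb->height);
            drmModeFreeFB(fb);
            drmModeFreeCrtc(crtc);
            break;  /* first active CRTC is enough for this sketch */
        }
        drmModeFreeResources(res);
        close(fd);
        return 0;
    }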
This is a very interesting question, and I have fought this problem from several angles.
The problem is quite complex and platform-dependent. You seem to be running on EGL, which usually means embedded, and there you have few options unless your platform offers them.
The options you have are:
glGetTexImage
glGetTexImage can copy several kinds of buffers from OpenGL textures to CPU memory. Unfortunately it is not supported in GLES 2/3, but your embedded provider might support it via an extension. This is nice because you can either render to an FBO or fetch the pixels from the specific texture you need. It also requires minimal code intervention.
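A minimal desktop-GL sketch of that download path (tex, width and height are assumed to describe an RGBA texture you rendered to):

    /* Sketch (desktop GL): download level 0 of a texture as RGBA8. */
    #include <GL/gl.h>
    #include <stdlib.h>

    void *download_texture(GLuint tex, int width, int height) {
        void *pixels = malloc((size_t)width * height * 4);
        glBindTexture(GL_TEXTURE_2D, tex);
        glPixelStorei(GL_PACK_ALIGNMENT, 1);   /* tightly packed rows */
        /* Converts the texture's internal format to RGBA8 on the way out. */
        glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        return pixels;
    }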
glReadPixels
glReadPixels is the most common way to download all or part of the pixels the GPU has already rendered. Albeit slow, it works on GLES and desktop. On desktop with a decent GPU it is bearable up to interactive framerates, but beware: on embedded it might be really slow, as it stalls your render thread while fetching the data (horrible frame drops ensured). It saves code, as it can be made to work with minimal modifications.
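A minimal GLES 2 sketch, assuming a current context and that the call is made after rendering but before eglSwapBuffers():

    /* Sketch (GLES 2): grab the current back buffer right after rendering.
     * GL_RGBA + GL_UNSIGNED_BYTE is the combination GLES 2 guarantees to be
     * readable. */
    #include <GLES2/gl2.h>
    #include <stdlib.h>

    unsigned char *grab_frame(int width, int height) {
        unsigned char *pixels = malloc((size_t)width * height * 4);
        glPixelStorei(GL_PACK_ALIGNMENT, 1);
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        /* Rows arrive bottom-up; flip vertically before saving as an image. */
        return pixels;
    }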
Pixel Buffer Objects (PBOs)
Once you start doing real research, PBOs appear here and there because they can be made to work asynchronously. They are generally not supported on embedded, but they can work really well on desktop, even on mediocre GPUs. They are also a bit tricky to set up and require specific render modifications.
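A hedged desktop-GL sketch of the double-buffered PBO pattern, assuming an extension loader such as GLEW and an RGBA8 default framebuffer; the readback of each frame completes one frame later, which is what hides the stall:

    /* Sketch (desktop GL, loader such as GLEW assumed): double-buffered PBO
     * readback. Each call starts an async transfer of the current frame and
     * returns the pixels of the previous one. */
    #include <GL/glew.h>
    #include <string.h>

    static GLuint pbo[2];

    void pbo_init(int width, int height) {
        glGenBuffers(2, pbo);
        for (int i = 0; i < 2; ++i) {
            glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[i]);
            glBufferData(GL_PIXEL_PACK_BUFFER, (GLsizeiptr)width * height * 4,
                         NULL, GL_STREAM_READ);
        }
        glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    }

    /* Call once per rendered frame; `out` receives the frame from one call
     * ago (the very first call yields undefined content). */
    void pbo_grab(unsigned frame, int width, int height, void *out) {
        unsigned cur = frame % 2, prev = (frame + 1) % 2;

        /* With a bound PIXEL_PACK buffer, glReadPixels returns immediately. */
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[cur]);
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, 0);

        /* The other PBO's transfer was started last frame, so mapping it
         * usually no longer blocks. */
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[prev]);
        void *mapped = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
        if (mapped) {
            memcpy(out, mapped, (size_t)width * height * 4);
            glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
        }
        glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    }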
Framebuffer
On embedded you sometimes already render to the framebuffer, so go there and fetch the pixels. This also works on desktop. You can even mmap() the buffer and fetch partial contents easily. But beware: on many embedded systems EGL does not render to the framebuffer but to a separate 'overlay', so you might be snapshotting the background behind it. Also note that some multimedia applications run the UI on EGL and the media player on the framebuffer, so if you only need to capture the video player this might work for you. In other cases EGL targets a texture which is copied to the framebuffer, and then it also works just fine.
As far as I know, render-to-texture streamed to the framebuffer is how they made the sweet Qt UI you see on the Ableton Push 2.
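A minimal sketch of the fbdev route, assuming /dev/fb0 is the device actually showing your content (which, per the overlay caveat above, it may not be under EGL):

    /* Sketch: mmap the legacy fbdev device and dump the raw pixels. */
    #include <fcntl.h>
    #include <linux/fb.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("/dev/fb0", O_RDONLY);   /* assumed framebuffer node */
        struct fb_var_screeninfo vinfo;
        struct fb_fix_screeninfo finfo;
        ioctl(fd, FBIOGET_VSCREENINFO, &vinfo);
        ioctl(fd, FBIOGET_FSCREENINFO, &finfo);

        size_t size = (size_t)finfo.line_length * vinfo.yres;
        unsigned char *fb = mmap(NULL, size, PROT_READ, MAP_SHARED, fd, 0);

        /* line_length bytes per row, vinfo.bits_per_pixel bits per pixel. */
        fwrite(fb, 1, size, stdout);

        munmap(fb, size);
        close(fd);
        return 0;
    }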
More exotic: DispmanX/OpenWF
On some embedded systems (notably the Raspberry Pi and most Broadcom VideoCores) you have DispmanX, which is really interesting.
This is fun:
The lowest level of accessing the GPU seems to be by an API called Dispmanx[...]
It continues...
Just to give you a total lack of encouragement about using DispmanX: there are hardly any examples and no serious documentation.
Basically DispmanX is very near to bare metal, so it is even deeper down than the framebuffer or EGL. Really interesting stuff, because you can use vc_dispmanx_snapshot() and really get a snapshot of everything, fast. And by fast I mean I got 30 FPS RGBA32 screen capture with no noticeable stutter on screen and about 4-6% of extra CPU overhead on a Raspberry Pi. Night and day compared to glReadPixels, which was producing very noticeable frame drops even for a 1x1 pixel capture.
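A sketch of the vc_dispmanx_snapshot() route on the Raspberry Pi (link against bcm_host from /opt/vc; production code also rounds the width up to a multiple of 16 before creating the resource):

    /* Sketch (Raspberry Pi): snapshot everything the GPU composites.
     * Build: gcc snap.c -I/opt/vc/include -L/opt/vc/lib -lbcm_host */
    #include <bcm_host.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        bcm_host_init();

        DISPMANX_DISPLAY_HANDLE_T display = vc_dispmanx_display_open(0);
        DISPMANX_MODEINFO_T info;
        vc_dispmanx_display_get_info(display, &info);

        int pitch = info.width * 4;  /* RGBA32 */
        void *pixels = malloc((size_t)pitch * info.height);

        uint32_t image_handle;
        DISPMANX_RESOURCE_HANDLE_T res = vc_dispmanx_resource_create(
            VC_IMAGE_RGBA32, info.width, info.height, &image_handle);

        /* Grabs the composited output: EGL layers, overlays, video, all. */
        vc_dispmanx_snapshot(display, res, DISPMANX_NO_ROTATE);

        VC_RECT_T rect;
        vc_dispmanx_rect_set(&rect, 0, 0, info.width, info.height);
        vc_dispmanx_resource_read_data(res, &rect, pixels, pitch);

        fwrite(pixels, pitch, info.height, stdout);

        vc_dispmanx_resource_delete(res);
        vc_dispmanx_display_close(display);
        free(pixels);
        return 0;
    }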
That's pretty much what I've found.

Linux virtual keyboard and evdev

I'm writing some software for Linux which uses libevdev for input processing.
To my surprise, all the virtual on-screen keyboards that I found simulate high-level X Window System events. So they are not recognized by udev, don't appear in the /dev/input folder, and aren't visible to evtest.
Is there any software keyboard that is low-level enough for that? Or maybe some trick to achieve it?
There is a good reason why this is done this way. The /dev/input devices are devices that have some kind of physical (electrical, optical and/or mechanical) input. These are converted by a Linux kernel driver into something that generates EV_EVENTS. These events are processed by the xf86-input-evdev driver into X11 input events, which are understood by the server. Since you can already generate X11 input from an X11 program, it is quite a lot of work to create a device driver that accepts input from an X11 app on one side and generates kernel input events on the other. So while not impossible, it is a lot of work for no gain to create a driver or two for this purpose.
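That said, the kernel's uinput interface is the standard way to create a virtual device that does appear under /dev/input and is visible to udev, libevdev and evtest. A minimal sketch, assuming a 4.5+ kernel and permission to open /dev/uinput (the device name and key set are placeholders):

    /* Sketch: a virtual keyboard visible in /dev/input, typing one 'a'. */
    #include <fcntl.h>
    #include <linux/uinput.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    static void emit(int fd, int type, int code, int value) {
        struct input_event ev = { .type = type, .code = code, .value = value };
        write(fd, &ev, sizeof ev);
    }

    int main(void) {
        int fd = open("/dev/uinput", O_WRONLY | O_NONBLOCK);

        ioctl(fd, UI_SET_EVBIT, EV_KEY);
        ioctl(fd, UI_SET_KEYBIT, KEY_A);  /* register every key you'll send */

        struct uinput_setup setup = { .id = { .bustype = BUS_VIRTUAL } };
        strcpy(setup.name, "virtual-keyboard");  /* placeholder name */
        ioctl(fd, UI_DEV_SETUP, &setup);
        ioctl(fd, UI_DEV_CREATE);

        sleep(1);  /* let udev create the /dev/input/eventN node */

        emit(fd, EV_KEY, KEY_A, 1);       /* press   */
        emit(fd, EV_SYN, SYN_REPORT, 0);
        emit(fd, EV_KEY, KEY_A, 0);       /* release */
        emit(fd, EV_SYN, SYN_REPORT, 0);

        ioctl(fd, UI_DEV_DESTROY);
        close(fd);
        return 0;
    }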

Create a Wacom-like Linux uinput device for work with touchscreen and pen

This is a fairly broad question, so I will try to keep it as focused as I can.
I currently own a Lenovo laptop with Ubuntu installed and touchscreen functionality, plus a pressure-sensitive Bluetooth pen, and I have been trying to make the two work together as a cheap Cintiq-like tablet.
The pen, unfortunately, only supports specific apps on iOS phones and tablets.
So after lots of research, I've managed to interface with the pen and create a uinput device for it, so I can register button clicks and pressure changes on the pen, and even see them routed to GIMP when configuring the device through the Input Controllers menu.
The code I have so far for that interface is available here.
The trouble starts when trying to test it out with GIMP.
From what I gather, this is because GIMP assumes Wacom devices report their own position, treats touchscreen touches as mouse movements and only allows input from a single device at a time.
My question is, how can I work around this?
More specifically, how can I create a uinput device that would behave as a Wacom tablet and supersede/block the behavior I described?
Or if there's a different solution, such as patching GIMP or writing a plugin for it.
Update (2014-06-07)
The code mentioned above now works.
I have written a blog post on the process of getting this to work: http://gerev.github.io/laptop-cintiq
As you said, GIMP expects you to provide ABS_X and ABS_Y along with ABS_PRESSURE in your driver, which is not strange: you are using your virtual device as the input, so it wouldn't make much sense to pick the ABS_X and ABS_Y coordinates from one device and ABS_PRESSURE from another (even though they would always be the same in this case). Maybe you can just read the current coordinates of the mouse and report them as your own device's coordinates.
As an example, the GfxTablet project does something similar to what you are trying: it has an Android application for tablets with a pen, and it uses uinput to create a virtual device that works like a pressure-sensitive pen on Linux. I have used it, and it worked like a charm in GIMP and MyPaint on my laptop, and I had no problem having a mouse (or the touchpad) active at the same time as the uinput device (I think Krita added support for generic pressure-sensitive devices recently). You can take a look at the source code of the driver here (surprisingly simple, to be fair).
Note that this is not faulty behavior in GIMP, because this is what is expected from a tablet-like device. Take a look at the event codes kernel documentation page; in the last section (Guidelines), it says that tablets must report ABS_X and ABS_Y. Moreover, they should use BTN_STYLUS and BTN_STYLUS2 to report the tool buttons, and some BTN_TOOL_* code (e.g. BTN_TOOL_PEN) to report activity (you can find all the available codes in input.h); however, these last ones do not seem that important, as GfxTablet does not implement them and works without problems.
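Putting those guidelines together, a uinput device for a pen tablet might be set up as in the sketch below; the device name, the 1920x1080 coordinate range and the 2047 pressure maximum are illustrative assumptions, and it uses the older uinput_user_dev setup path for compatibility with pre-4.5 kernels:

    /* Sketch: a uinput pen tablet advertising ABS_X/ABS_Y, ABS_PRESSURE,
     * BTN_TOOL_PEN and BTN_STYLUS, per the kernel guidelines above. */
    #include <fcntl.h>
    #include <linux/uinput.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("/dev/uinput", O_WRONLY | O_NONBLOCK);

        ioctl(fd, UI_SET_EVBIT, EV_KEY);
        ioctl(fd, UI_SET_KEYBIT, BTN_TOOL_PEN);  /* pen in proximity */
        ioctl(fd, UI_SET_KEYBIT, BTN_STYLUS);    /* barrel button */

        ioctl(fd, UI_SET_EVBIT, EV_ABS);
        ioctl(fd, UI_SET_ABSBIT, ABS_X);
        ioctl(fd, UI_SET_ABSBIT, ABS_Y);
        ioctl(fd, UI_SET_ABSBIT, ABS_PRESSURE);

        struct uinput_user_dev dev = {0};
        strcpy(dev.name, "virtual-pen-tablet");  /* placeholder name */
        dev.id.bustype = BUS_VIRTUAL;
        dev.absmax[ABS_X] = 1920;                /* assumed screen width  */
        dev.absmax[ABS_Y] = 1080;                /* assumed screen height */
        dev.absmax[ABS_PRESSURE] = 2047;         /* assumed pen range */

        write(fd, &dev, sizeof dev);             /* legacy setup path */
        ioctl(fd, UI_DEV_CREATE);

        /* ... emit EV_KEY/EV_ABS events, each batch ended by EV_SYN ... */

        ioctl(fd, UI_DEV_DESTROY);
        close(fd);
        return 0;
    }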
