I am trying to draw things on the screen without an X11 server. (I want my program to be in initrd, so I don't want to bloat it with X11.)
It's easy when I have /dev/fb0, but when I use Xen, I don't have it (also I am not sure how that works - the passing of vesafb to the kernel by grub and such).
I've tried SDL, but it won't work without /dev/fb0 or X11. How does X11 work without /dev/fb0? It seems like no matter what I do, X11 will always work... yet all the libraries (DirectFB, SDL, etc.) will fail.
/dev/fb0 is handled by the kernel. You must enable:
CONFIG_FRAMEBUFFER_CONSOLE if you want a framebuffer console
CONFIG_DRM and the driver for your VGA card under it, or
CONFIG_FB and the framebuffer driver for your VGA card under it.
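Once CONFIG_FB and a suitable driver are enabled, drawing without X11 comes down to mapping the device. A minimal sketch, assuming a 32 bpp mode (real code should check vinfo.bits_per_pixel and the color field offsets):

    /* Draw one white pixel via /dev/fb0. Assumes 32 bpp. */
    #include <fcntl.h>
    #include <linux/fb.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/fb0", O_RDWR);
        if (fd < 0) { perror("open /dev/fb0"); return 1; }

        struct fb_var_screeninfo vinfo;
        struct fb_fix_screeninfo finfo;
        if (ioctl(fd, FBIOGET_VSCREENINFO, &vinfo) < 0 ||
            ioctl(fd, FBIOGET_FSCREENINFO, &finfo) < 0) {
            perror("ioctl"); return 1;
        }

        size_t len = finfo.line_length * vinfo.yres;
        uint8_t *fb = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (fb == MAP_FAILED) { perror("mmap"); return 1; }

        /* put a white pixel at (100, 50), assuming 4 bytes per pixel */
        uint32_t x = 100, y = 50;
        *(uint32_t *)(fb + y * finfo.line_length + x * 4) = 0xFFFFFFFF;

        munmap(fb, len);
        close(fd);
        return 0;
    }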
I was searching for a syscall that would draw a pixel at a given coordinate on the screen, or something similar, but I couldn't find any such syscall on this site.
I came to know that the OS interacts with monitors using graphics drivers, but these drivers may differ between machines. So is there a common native API provided by Linux for handling this?
Much like how there are syscalls for opening, closing, reading, and writing files: even though the underlying file systems may be different, these syscalls provide an abstract API that simplifies things for user programs. I was searching for something similar for drawing onto the screen.
Typically a user is running a display server and window system, which organizes the screen into windows that applications draw into individually using the API provided by that system. The details depend on the architecture of this system.
The traditional window system on Linux is the X window system; the more modern Wayland display server/protocol is also in common use. For example, X has commands to instruct the X server to draw primitives to the screen.
If no such system is in use, you can draw directly to a display either via a framebuffer device or using the DRM API. Neither is accessed through special syscalls; instead you use normal file syscalls like open, read, and write, and in particular ioctl, on special device files in /dev, e.g. /dev/dri/card0 for DRM to the first graphics card or /dev/fb0 for the first framebuffer device. DRM is also what applications use to render directly to the screen or to a buffer when running under a display server or window system as above.
In any case, DRM is usually not used directly to draw e.g. pixels to the screen; it is still specific to the graphics card. Typically a library like Mesa3D translates the hardware-specific details into a common API like OpenGL or Vulkan for applications to use.
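For illustration, a minimal sketch of talking to the DRM device file through libdrm, the userspace wrapper around those ioctls (compile with -ldrm; /dev/dri/card0 is assumed). It only enumerates connectors; actually drawing additionally requires creating and mapping a framebuffer:

    /* List the connectors of the first DRM device. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    int main(void)
    {
        int fd = open("/dev/dri/card0", O_RDWR);
        if (fd < 0) { perror("open /dev/dri/card0"); return 1; }

        drmModeRes *res = drmModeGetResources(fd);
        if (!res) { fprintf(stderr, "drmModeGetResources failed\n"); return 1; }

        for (int i = 0; i < res->count_connectors; i++) {
            drmModeConnector *conn = drmModeGetConnector(fd, res->connectors[i]);
            if (!conn)
                continue;
            printf("connector %u: %s\n", conn->connector_id,
                   conn->connection == DRM_MODE_CONNECTED ? "connected"
                                                          : "disconnected");
            drmModeFreeConnector(conn);
        }

        drmModeFreeResources(res);
        close(fd);
        return 0;
    }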
How can I get a screenshot of a graphical application programmatically? The application draws its window using the EGL API via DRM/KMS.
I use Ubuntu Server 16.04.3 and a graphical application written with Qt 5.9.2 using the EGLFS QPA backend. It is started from the first virtual terminal (if that matters), then it switches the display to full HD graphical mode.
When I use utilities (e.g. fb2png) which operate on /dev/fb?, only the text-mode contents of the first virtual terminal (Ctrl+Alt+F1) are saved in the screenshot.
It is unlikely that there is an EGL API to get the contents of a buffer from another process's context (it would be insecure), but maybe there is some mechanism (and library) to get access to the final output of the GPU?
One way would be to take the screenshot from within your application, reading the contents of the back buffer with glReadPixels(). Or use QQuickWindow::grabWindow(), which internally uses glReadPixels() in the correct way. This does not seem to be an option for you, as you need to take a screenshot while the Qt app is frozen.
The other way would be to use the DRM API to map the framebuffer and then memcpy the mapped pixels. This is implemented in Chromium OS with Python and can be translated to C easily, see https://chromium-review.googlesource.com/c/chromiumos/platform/factory/+/367611. The DRM API can also be used by another process than the Qt UI process that does the rendering.
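For reference, a rough C translation of that approach (a sketch only: it assumes the scanout buffer is a dumb buffer the kernel will let you map, it may require root or DRM master, and error handling is trimmed; link with -ldrm):

    /* Find the active CRTC's framebuffer and mmap it. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    int main(void)
    {
        int fd = open("/dev/dri/card0", O_RDWR);
        drmModeRes *res = drmModeGetResources(fd);

        for (int i = 0; i < res->count_crtcs; i++) {
            drmModeCrtc *crtc = drmModeGetCrtc(fd, res->crtcs[i]);
            if (!crtc || !crtc->buffer_id) { drmModeFreeCrtc(crtc); continue; }

            drmModeFB *fb = drmModeGetFB(fd, crtc->buffer_id);

            /* ask the kernel for an mmap offset for the buffer handle */
            struct drm_mode_map_dumb map = { .handle = fb->handle };
            drmIoctl(fd, DRM_IOCTL_MODE_MAP_DUMB, &map);

            size_t len = (size_t)fb->pitch * fb->height;
            void *pixels = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, map.offset);
            if (pixels != MAP_FAILED) {
                /* pixels now points at the scanout buffer: memcpy it out
                   row by row using fb->pitch, then write it to a file */
                munmap(pixels, len);
            }

            drmModeFreeFB(fb);
            drmModeFreeCrtc(crtc);
        }
        drmModeFreeResources(res);
        close(fd);
        return 0;
    }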
This is a very interesting question, and I have fought this problem from several angles.
The problem is quite complex and platform-dependent. You seem to be running on EGL, which means embedded, and there you have few options unless your platform offers them.
The options you have are:
glGetTexImage
glGetTexImage can copy several kinds of buffers from OpenGL textures to CPU memory. Unfortunately it is not supported in GLES 2/3, but your embedded provider might support it via an extension. This is nice because you can either render to an FBO or get the pixels from the specific texture you need. It also needs minimal code intervention.
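A minimal sketch, assuming desktop GL with a current context and an RGBA8 texture:

    #include <GL/gl.h>
    #include <stdlib.h>

    /* Download the pixels of a texture with glGetTexImage (desktop GL
     * only; missing on GLES). Caller frees the returned buffer. */
    unsigned char *download_texture(GLuint tex, int width, int height)
    {
        unsigned char *pixels = malloc((size_t)width * height * 4);
        glBindTexture(GL_TEXTURE_2D, tex);
        glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        return pixels;
    }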
glReadPixels
glReadPixels is the most common way to download all or part of the pixels the GPU has already rendered. Albeit slow, it works on GLES and desktop. On desktop with a decent GPU it is bearable up to interactive framerates, but beware: on embedded it might be really slow, as it stalls your render thread to fetch the data (horrible framedrops ensured). You save effort, as it can be made to work with minimal code modifications.
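A minimal sketch, assuming a current context and an RGBA8 framebuffer:

    #include <GL/gl.h>
    #include <stdlib.h>

    /* Read back the currently bound framebuffer with glReadPixels.
     * Works on GLES 2/3 and desktop, but stalls the pipeline.
     * Caller frees the buffer. */
    unsigned char *grab_frame(int width, int height)
    {
        unsigned char *pixels = malloc((size_t)width * height * 4);
        /* tightly packed rows; the default alignment of 4 can pad odd widths */
        glPixelStorei(GL_PACK_ALIGNMENT, 1);
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        /* note: rows come back bottom-up in GL's coordinate convention */
        return pixels;
    }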
Pixel Buffer Objects (PBO's)
Once you start doing real research, PBOs appear here and there because they can be made to work asynchronously. They are generally not supported on embedded but can work really well on desktop, even on mediocre GPUs. They are also a bit tricky to set up and require specific render-loop modifications.
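A sketch of the asynchronous pattern, assuming desktop GL 2.1+ (entry points loaded via GLEW here) and RGBA8:

    #include <GL/glew.h> /* or your preferred GL loader */
    #include <string.h>

    /* With a GL_PIXEL_PACK_BUFFER bound, glReadPixels returns without a
     * CPU-side copy; mapping the buffer a frame or two later fetches the
     * data after the GPU has (hopefully) finished. */

    GLuint make_pbo(int width, int height)
    {
        GLuint pbo;
        glGenBuffers(1, &pbo);
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
        glBufferData(GL_PIXEL_PACK_BUFFER, (GLsizeiptr)width * height * 4,
                     NULL, GL_STREAM_READ);
        glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
        return pbo;
    }

    void start_readback(GLuint pbo, int width, int height)
    {
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
        /* offset 0 into the PBO instead of a client memory pointer */
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, 0);
        glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    }

    /* call this a frame or two after start_readback() */
    void finish_readback(GLuint pbo, unsigned char *out, int width, int height)
    {
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
        void *ptr = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
        if (ptr) {
            memcpy(out, ptr, (size_t)width * height * 4);
            glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
        }
        glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    }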
Framebuffer
On embedded, sometimes you already render to the framebuffer, so go there and fetch the pixels. This also works on desktop. You can even mmap() the buffer to a file and get partial contents easily. But beware: on many embedded systems EGL does not render to the framebuffer but to a different 'overlay', so you might be snapshotting the background behind it. Also note that some multimedia applications run with the UI on EGL and the media player on the framebuffer; if you only need to capture the video player, this might work for you. In other cases EGL targets a texture which is copied to the framebuffer, and that will also work just fine.
As far as I know, render-to-texture streamed to the framebuffer is how they made the sweet Qt UI you see on the Ableton Push 2.
More exotic Dispmanx/OpenWF
On some embedded systems (notably the Raspberry Pi and most Broadcom VideoCores) you have DispmanX, which is really interesting:
This is fun:
The lowest level of accessing the GPU seems to be by an API called Dispmanx[...]
It continues...
Just to give you total lack of encouragement from using Dispmanx there are hardly any examples and no serious documentation.
Basically DispmanX is very near to bare metal, so it sits even deeper down than the framebuffer or EGL. Really interesting stuff, because you can use vc_dispmanx_snapshot() and really get a snapshot of everything really fast. And by fast I mean I got 30 FPS RGBA32 screen capture with no noticeable stutter on screen and about 4~6% of extra CPU overhead on a Raspberry Pi. Night and day, because glReadPixels was producing very noticeable framedrops even for a 1x1 pixel capture.
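A sketch of such a capture, assuming the Raspberry Pi userland headers and libraries are installed (link with -lbcm_host) and that the display width meets the pitch alignment requirements:

    /* Grab the composited screen with vc_dispmanx_snapshot. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include "bcm_host.h"

    int main(void)
    {
        bcm_host_init();

        DISPMANX_DISPLAY_HANDLE_T display = vc_dispmanx_display_open(0);
        DISPMANX_MODEINFO_T info;
        vc_dispmanx_display_get_info(display, &info);

        uint32_t image_handle;
        DISPMANX_RESOURCE_HANDLE_T res = vc_dispmanx_resource_create(
            VC_IMAGE_RGBA32, info.width, info.height, &image_handle);

        /* GPU-side copy of the final composited output */
        vc_dispmanx_snapshot(display, res, DISPMANX_NO_ROTATE);

        int pitch = info.width * 4;
        void *pixels = malloc((size_t)pitch * info.height);
        VC_RECT_T rect;
        vc_dispmanx_rect_set(&rect, 0, 0, info.width, info.height);
        vc_dispmanx_resource_read_data(res, &rect, pixels, pitch);

        printf("captured %dx%d RGBA frame\n", info.width, info.height);

        free(pixels);
        vc_dispmanx_resource_delete(res);
        vc_dispmanx_display_close(display);
        return 0;
    }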
That's pretty much what I've found.
I have been working on making my own keyboard driver for Linux, and I came across these two files: usbkbd.c and atkbd.c.
Now I am confused about which of these is the actual code driving my keyboard at present. As I see it, atkbd.c is quite gory and contains a conversion of scancodes to keycodes, so it should be the one, though I am not sure.
If atkbd.c is the code, then what is the other code for?
This is easy to check. Let's take usbkbd.c.
The corresponding Kconfig (http://lxr.free-electrons.com/source/drivers/hid/usbhid/Kconfig#L50) says:
Say Y here only if you are absolutely sure that you don't want to use
the generic HID driver for your USB keyboard and prefer to use the
keyboard in its limited Boot Protocol mode instead.
This is almost certainly not what you want. This is mostly useful for
embedded applications or simple keyboards.
So it looks unlikely to be the keyboard driver we are looking for. Also check the current kernel config for USB_KBD. The config can be found under the /boot directory or by running zcat /proc/config.gz. If USB_KBD is not there, you're not using it. If usbkbd.c is built as a module, it is worth checking whether it is actually loaded. The Makefile (http://lxr.free-electrons.com/source/drivers/hid/usbhid/Makefile#L10) gives the target as usbkbd, so we can check whether it is loaded by grepping for it in the output of lsmod.
In contrast, the Kconfig (http://lxr.free-electrons.com/source/drivers/input/keyboard/Kconfig#L69) for atkbd.c seems much more likely:
Say Y here if you want to use a standard AT or PS/2 keyboard. Usually
you'll need this, unless you have a different type keyboard (USB, ADB
or other). This also works for AT and PS/2 keyboards connected over a
PS/2 to serial converter. If unsure, say Y.
Also check the kernel config for KEYBOARD_ATKBD. If it is Y, you know it is being used. If it is M, check the output of lsmod for atkbd.
I write some software for Linux, which uses libevdev for input processing.
To my surprise, all the virtual onscreen keyboards I found simulate high-level X Window Server events. So they're not recognized by udev, don't appear in the /dev/input folder, and aren't visible with evtest.
Is there any software keyboard that is low-level enough for that? Or maybe some trick for that?
There is a good reason why this is done this way. The /dev/input devices are devices that have some kind of physical (electrical, optical and/or mechanical) input. These are converted by the Linux kernel driver into something that generates EV_EVENTS. These events are processed by the xf86_input_evdev driver into X11 input events, which are understood by the server. As you can already generate X11 inputs from an X11 program, it is quite a lot of work to create a device driver that accepts input on one side from an X11 app and generates kernel input events on the other. So while not impossible, it is a lot of work for no gain to create a driver or two for this purpose.
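To see the layer being talked about here, a minimal libevdev reader (compile with `pkg-config --cflags --libs libevdev`; /dev/input/event0 is just an example node). Only devices backed by a real kernel input driver show up at this level; X11-level synthetic events do not:

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <libevdev/libevdev.h>

    int main(void)
    {
        int fd = open("/dev/input/event0", O_RDONLY | O_NONBLOCK);
        struct libevdev *dev = NULL;
        if (fd < 0 || libevdev_new_from_fd(fd, &dev) < 0) {
            perror("open/init event0");
            return 1;
        }
        printf("device: %s\n", libevdev_get_name(dev));

        for (;;) {
            struct input_event ev;
            int rc = libevdev_next_event(dev, LIBEVDEV_READ_FLAG_NORMAL, &ev);
            if (rc == 0 && ev.type == EV_KEY)
                printf("key %d %s\n", ev.code, ev.value ? "down" : "up");
            else if (rc == -EAGAIN)
                usleep(1000); /* nothing pending */
            else if (rc < 0)
                break;
        }

        libevdev_free(dev);
        close(fd);
        return 0;
    }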
I'm working on an embedded device whose screen is rotated 90 degrees clockwise: the screen controller reports an 800x600 screen, while the device's screen is 600x800 portrait.
What do you think, whose responsibility is it to compensate for this: should the kernel rotate the framebuffer to provide the 800x600 screen expected by upper-level software, or should applications (X server, bootsplash) adapt and draw to the rotated screen?
Every part of the stack is free software, so there are no non-technical obstacles to modification; the question is more about logical soundness.
It makes most sense for the screen driver to do it - the kernel, after all, is supposed to provide an abstraction of the device for userspace applications to work with. If the screen is a 600x800 portrait-oriented device, then that's what applications should see from the kernel.
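Wherever the compensation ends up living, it is just a 90-degree blit. A minimal sketch of the coordinate transform, assuming 32 bpp buffers (names are illustrative):

    #include <stdint.h>

    /* Rotate a w x h source (e.g. 600x800 portrait) 90 degrees clockwise
     * into an h x w destination (e.g. 800x600 landscape), 32 bpp assumed. */
    static void blit_rot90_cw(const uint32_t *src, uint32_t *dst, int w, int h)
    {
        /* source pixel (x, y) lands at destination (h - 1 - y, x) */
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                dst[x * h + (h - 1 - y)] = src[y * w + x];
    }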
Yes, I agree. The display driver should update the display accordingly and keep control of this.
Not sure exactly how standard your embedded device is, but if it is running a regular Linux kernel, you might check the kernel configurator (make xconfig, when compiling a new kernel): one of the options for kernel 2.6.37.6, in the video card section, is to enable rotation of the kernel message display so it scrolls 90 degrees left or right while booting up (this is the framebuffer console rotation support, CONFIG_FRAMEBUFFER_CONSOLE_ROTATION, which can also be selected at boot time with the fbcon=rotate: parameter).
I think it also makes your consoles rotate correctly after login.
This was not available in kernels even 6-8 months ago, at least not in the kernel that Slackware64 13.37 came with around that time.
Note that the BIOS messages are still rotated on a PC motherboard, but that is hard-coded in the BIOS, which may not apply to the embedded system you are working with.
If this kernel feature is not useful to you for whatever reason, how they did it in the Linux kernel might be a good example of where and how to go about it. Once you get the exact name of the option from "make xconfig", it should be pretty easy to search wherever the kernel changes are logged for that name and dig up some info about it.
Hmmm. I just recompiled my kernel today, and I may have been wrong about how new this option is. It looks like it was available in some kernel versions before the Slackware64-included version that I referenced. Sorry!