I am trying to build an embedded system that is basically a graphical app running on a Raspberry Pi with a touchscreen. It seems that Wayland is the best choice, but the documentation for it is lacking at best, and incomprehensible, outdated and awful at worst.
Therefore I'm asking here: how does one make the most basic implementation of Wayland that will let me launch a single fullscreen app and basically draw and rotate a model?
I'm already familiar with OpenGL from using WebGL, so what I need the most is the Wayland part of things.
I understand that in Linux a windowing system (X11, Wayland, etc.) is responsible for rendering applications on the screen. I experimented with X11 but never got past obtaining single windows. I also read about Wayland. My question is: if I want to write an application that grabs whatever is shown on the screen, is there a way to get it at such a low level (DRM, DRI, KMS) that I am not dependent on the windowing system? What choices do these low-level APIs give me compared to the windowing system?
EDIT: Reading this, I realized that "One of the features of Wayland is its security design, which helps to guard the user against malicious apps. Apps can no longer see everything on the screen and spy on you. But that also means you cannot run a common application (like shutter or gtk-recordmydesktop) and use it to make a screenshot or a screencast of your desktop".
So I would like to create a compositor for Wayland which supports 3D effects for windows (something resembling Compiz, but on Wayland). I already saw this question: Where do I start if I want to write a wayland compositor? but the only answer points to SWC (https://github.com/michaelforney/swc), which is not applicable in my case because I want to use OpenGL and because SWC doesn't easily support 3D. So is there some project/library/book/tutorial/etc. where I can learn what's necessary for writing my own WM on Wayland? Thanks in advance.
The only purpose of the wayland protocol is the communication between client and server. The server provides the client with input events and the client provides the server with a buffer (that can be mapped to an OpenGL texture). Where the server/compositor gets its input events from and what it does with the buffer is completely up to the compositor.
So the compositor itself needs a source for input events and a way to draw its result. That's why many wayland compositors have multiple backends: they can run on top of X11, directly on top of the Linux kernel or even on top of another wayland server.
The answer to your question really depends on where you want to run your compositor. Writing a compositor that runs on top of X11 might be the easiest way to get started if you're already familiar with how to get an OpenGL app up and running there. If you want to run your compositor directly on top of the Linux kernel you'd probably want to look into evdev and libinput for input and DRM/KMS together with EGL on top of GBM in order to create an OpenGL context and show the result on your monitor. There are also rendering libraries (e.g. evas) that can run directly on top of the Linux kernel but I don't know how far they let you inject your own OpenGL code.
Once you have decided where you want to run your compositor you can start by just writing a regular OpenGL app and then go on and integrate a wayland server in order to display and interact with actual client windows.
There exists a class of applications that use OpenGL for hardware acceleration but are not GUI-based. However, it seems that by default an X server with GLX must be running (on the same virtual terminal) for those applications to function.
My specific case is attempting to use gstreamer's gl plugins on a headless machine, but I'm asking a more general question.
Is there some way around this (esp without modifying the original code)?
I've been trying to research using the framebuffer kernel module, but not getting very far.
Mesa supports software rasterization on offscreen surfaces.
You can use EGL and render to a PbufferSurface instead of a WindowSurface.
See my answer here: https://stackoverflow.com/a/74226995/1884837
Have you tried Xvfb?
I'm getting started developing with OpenGL ES on ARM/Linux, and I would like to draw something full-screen but don't know where to start.
I'm not developing on iPhone, nor Android. This is a Linux/OpenGL ES question.
I know it's possible to draw on the framebuffer with OpenGL ES without any library but I don't find any resources about that topic, could you help me?
I don't have any code to show how to do it, but basically you use the framebuffer device as the target of OpenGL ES operations.
Are you developing with an embedded platform as a target? If so, you could use software implementations on your host system and then the actual driver on the embedded device.
There is a small project for supporting OpenGL ES 1.1 on Linux called dlges. You could also try Mesa.
I imagine that the driver itself might have a header for OpenGL that you could look at to see whether it supports OpenGL ES calls. Alternatively, you could set up function pointers to make your OpenGL code look more like OpenGL ES.
Good luck!
Don't forget that desktop Linux comes with OpenGL, not OpenGL ES! They're similar but not quite compatible. If you want to do OpenGL ES work on a desktop Linux platform, ARM or otherwise, you'll need an OpenGL ES emulator library. Sorry, I can't recommend any; I'm looking for one myself.
OpenGL ES just handles the process of drawing stuff into the window. You also need a windowing library, which handles the process of creating a window to draw stuff into, and an event library, which deals with input events coming back from the window.
SDL will provide both of the last two, as will a bunch of other libraries. Khronos themselves have standardised on EGL as the windowing library and OpenKODE as the event library... but I don't actually know where to get open source implementations of these for Linux. (I work for a company that does EGL and OpenKODE for embedded platforms, so I've never needed to find an open source version!)
ARM offers a few GPUs that support OpenGL ES 2.0. You can find some examples and an emulator that runs on Linux on the Mali Developer site.
Of course that's mostly for targeting ARM GPUs, but I'm pretty sure it could be used to explore what's possible in OpenGL ES programming.
Here is a tutorial showing how to use SDL in combination with OpenGL ES. It's for the OpenPandora platform, but since that runs Linux, it should be applicable on the desktop if you can get the proper library versions.
Use of SDL is more or less standard with this kind of programming on Linux. You can of course go the longer route and open the window yourself, attach a GL rendering context and so on, but usually it's easier to learn the relevant parts of SDL. That also gives you easy-to-use APIs for input reading, which is almost always necessary.
You can use PowerVR SDK for Linux http://community.imgtec.com/developers/powervr/graphics-sdk/
There are a lot of samples.
On Windows, what does Flash use under the hood?
It's a relatively simple question which I can never find the answer to. Is it GDI (for windows VM implementations) or something else?
You don't need to go into any of the new GPU acceleration features of Flash. I just really want to know the inner workings because it's NEVER discussed.
On 64-bit Linux, the Flash plugin does not link against SDL (according to ldd). It does, however, link against GTK, GDK, and Cairo. It appears, therefore, that it is using either Cairo or raw Xlib calls to do its drawing on Linux.
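The `ldd` check described above is easy to reproduce. The Flash plugin path below is a typical NPAPI install location and is only an assumption; the same technique works on any shared object or executable.

```shell
# Inspect which graphics libraries the plugin links against
# (path is an assumed NPAPI location; substitute your own target):
ldd /usr/lib/flashplugin-installer/libflashplayer.so | grep -Ei 'sdl|gtk|cairo|X11'

# The same technique on any binary, e.g. a core utility:
ldd /bin/ls
```

Note that `ldd` only shows link-time dependencies; libraries loaded at runtime with `dlopen` won't appear, which is why the answer below suggests a process examiner for the running browser instead.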
I don't know about Windows. Flash tends to have minimal dependencies, but DirectX may be standard enough that they use it. With some kind of process examiner to tell you what libraries a process has loaded, you could examine a simple web browser embedding Flash and see what system facilities are actually in use.
DirectX mostly. It's hard to achieve good graphics performance with GDI.
I agree with george; GDI is very bad for speed. DirectX for Windows, and SDL or similar for Linux (note this is an assumption!). In that sense it probably uses a layer that communicates with the native graphics subsystem on whatever platform it's running on.