Tegra 3 OpenGL ES directly to framebuffer - Linux

I am developing an embedded Linux system using the Apalis T30 Tegra 3 SOM from Toradex. I only need a very simple multi-touch user interface for it. I am trying to push the performance and efficiency of the UI as far as I can, because the device will have to render complex 3D models whilst allowing live interaction, and I know for a fact that my users will have models that make it bog down no matter what. I am therefore trying to push that point as far away as I can. Memory will also be a constraint, and some models might exhaust the available RAM.
What I would like to do to solve this is to have an OpenGL ES 2 GUI with SVG UI elements, combined with GLES 3D views, rendered directly to the framebuffer. In other words, I want to completely ditch any form of window/desktop manager because I won't need one. I only need a single full-screen GLES drawing surface. I don't even need pointer events etc., as I will be talking to the touch panel directly from my application.
I have looked around quite a bit, but I could not really find any conclusive information. I keep reading reports of HW acceleration not working when the framebuffer is used directly, but I guess one could render the GLES output into an image and then just push it to the FB? I am also reading that the graphics driver might be locked to X11, yet I am struggling to find details about the Tegra graphics driver. I have also seen reports of Nvidia open-sourcing their driver; is this true?
Any assistance or explanations will be greatly appreciated.
PS. Please don't preach to me about how bad an idea this is and how I should rather use Qt or something like that; I want to find out how to do what I am planning here.
PPS. What I basically want to be able to do is what I understand embedded Qt 5 does in its "EGLFS" rendering mode.
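To make it concrete, here is a rough sketch of the EGL-level setup I am after (untested; whether EGL_DEFAULT_DISPLAY and a null native window handle work without X is entirely up to the vendor driver, which is exactly what I am unsure about on Tegra):

    // Sketch of an X-less OpenGL ES 2 setup via EGL, roughly what Qt's eglfs
    // plugin does internally. ASSUMPTION: the driver accepts EGL_DEFAULT_DISPLAY
    // plus a null/driver-defined native window and maps the surface to the full
    // screen; some vendors require a vendor-specific "create native window" call.
    #include <EGL/egl.h>
    #include <GLES2/gl2.h>

    int main() {
        EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
        eglInitialize(dpy, nullptr, nullptr);

        const EGLint cfgAttrs[] = {
            EGL_SURFACE_TYPE,    EGL_WINDOW_BIT,
            EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
            EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
            EGL_NONE
        };
        EGLConfig cfg;
        EGLint numCfg = 0;
        eglChooseConfig(dpy, cfgAttrs, &cfg, 1, &numCfg);

        // The native window handle is the vendor-specific part.
        EGLSurface surf = eglCreateWindowSurface(dpy, cfg, (EGLNativeWindowType)0, nullptr);

        const EGLint ctxAttrs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
        EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, ctxAttrs);
        eglMakeCurrent(dpy, surf, surf, ctx);

        glClearColor(0.f, 0.f, 0.f, 1.f);
        for (;;) {                        // full-screen render loop, no WM anywhere
            glClear(GL_COLOR_BUFFER_BIT);
            // ... draw SVG-based UI and 3D views here ...
            eglSwapBuffers(dpy, surf);
        }
    }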

Related

In Vulkan how can you associate each individual video card with monitors they're directly connected to

I have two monitors, each connected to a different GPU. Both GPUs are in a single machine, and I want to run a single application. I have two independent views, and I would like to render each one using a GPU/Monitor set. I can create multiple surfaces and devices, but I want to ensure I associate each surface with the GPU its monitor is plugged into, otherwise I suspect I'll suffer performance issues as the frame buffers need to be copied back and forth between cards.
I'm using fullscreen surfaces, and I was thinking this was something vkGetPhysicalDeviceSurfaceSupportKHR would tell me. However, both VkSurfaceKHR objects appear to be valid targets for each VkPhysicalDevice, so I guess this is something the OS and GPU driver handle; but is there any hint about which surface is optimal to associate with a device?
From what I can tell, the extension VK_KHR_display is one way of doing this, but it's not available on my Windows 10 machine or Nvidia GPU; it seems to be intended for embedded platforms only. However, it lets you list the attached displays for each device, which is pretty much what I'm looking for: https://vulkan.lunarg.com/doc/view/1.0.30.0/linux/vkspec.chunked/ch29s03.html
This quote from the docs makes me believe this may not be supported on Windows:
Issues
1) Does Win32 need a way to query for compatibility between a particular physical device and a specific screen? Compatibility between a physical device and a window generally only depends on what screen the window is on. However, there is not an obvious way to identify a screen without already having a window on the screen.
RESOLVED: No. While it may be useful, there is not a clear way to do this on Win32. However, a method was added to query support for presenting to the windows desktop as a whole.
However, I'm still interested in hearing if there's a workaround to achieve a similar effect.
Finally figured out a workaround for this:
DirectX actually supports this through the IDXGIAdapter::EnumOutputs function, which lets you list the monitors connected to each GPU. Then, using these two extensions, you can map this information back to Vulkan:
VK_KHR_external_memory_capabilities
VK_KHR_get_physical_device_properties2
You can use these to get the deviceLUID from VkPhysicalDeviceIDPropertiesKHR.
This can then be compared with the AdapterLuid member of the DXGI_ADAPTER_DESC structure in DirectX.
You can also use glfwGetWin32Window to get the HWND of your window, which lets you determine the monitor it is on and thus associate a Vulkan surface with a DirectX output.
You now have all the information you need to associate Vulkan surfaces with the devices they're actually connected to.
At least in my application, setting this up correctly results in a significant difference in performance.
This would all be way simpler (and cross platform) if Windows would just support the VK_KHR_display and VK_KHR_display_swapchain extensions as Linux does.
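For reference, the LUID-matching step might look roughly like this (a sketch assuming Vulkan 1.1, where VkPhysicalDeviceIDProperties is core; on 1.0 you would use the *KHR equivalents from the two extensions above; the IDXGIFactory1 comes from CreateDXGIFactory1; error handling omitted):

    // Find the DXGI adapter that corresponds to a VkPhysicalDevice by LUID.
    #include <vulkan/vulkan.h>
    #include <dxgi.h>
    #include <cstring>

    IDXGIAdapter1* adapterForDevice(VkPhysicalDevice pd, IDXGIFactory1* factory) {
        VkPhysicalDeviceIDProperties idProps = {};
        idProps.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_ID_PROPERTIES;
        VkPhysicalDeviceProperties2 props2 = {};
        props2.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_PROPERTIES_2;
        props2.pNext = &idProps;
        vkGetPhysicalDeviceProperties2(pd, &props2);
        if (!idProps.deviceLUIDValid)
            return nullptr;

        IDXGIAdapter1* adapter = nullptr;
        for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
            DXGI_ADAPTER_DESC1 desc;
            adapter->GetDesc1(&desc);
            // VK_LUID_SIZE is 8 bytes, the same layout as the Win32 LUID.
            if (std::memcmp(&desc.AdapterLuid, idProps.deviceLUID, VK_LUID_SIZE) == 0)
                return adapter;          // call EnumOutputs() on this to list monitors
            adapter->Release();
        }
        return nullptr;
    }

From the returned adapter, IDXGIAdapter::EnumOutputs gives you the attached monitors (DXGI_OUTPUT_DESC::Monitor is an HMONITOR), which you can compare against MonitorFromWindow on the HWND backing each surface.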
There are two extensions that are useful for such things: the one mentioned by you, VK_KHR_display, and a second called VK_KHR_display_swapchain, which allows you to create a swapchain directly on a device's display without any underlying window system.
But these extensions are rarely supported on Windows. In the core Vulkan API there is no way to achieve what you want, and I'm afraid you need to use OS-specific functions (you need to rely on the WinAPI functions in this situation).
[EDIT]
Have you seen this question: How can you get the display adapter used for a particular monitor in Windows? If not, maybe it will help you start your research.
As you already discovered, on Win32 you need to use the OS windowing system to pick the display you want, using the Windows API. It can be straightforward.
But if you intend to write simple, OS-agnostic code, check out the GLFW project. It has high-level functions for handling windows on all major OSes.
Check:
GLFW monitor Guide
GLFW Vulkan integration
GLFW in its own words:
GLFW is a free, Open Source, multi-platform library for OpenGL, OpenGL ES and Vulkan application development. It provides a simple, platform-independent API for creating windows, contexts and surfaces, reading input, handling events, etc.

Command-line Linux OpenGL processing

I need to build a command-line tool that will take a 3D model as an argument and output images of it, which may or may not be processed by the application. The tool will be deployed on Linux, but I want to make it as cross-platform as possible.
The program is not supposed to present a window of any kind, or accept any other input apart from the command line arguments.
I was wondering how someone would approach this. I am currently able to display the 3D model on-screen with the help of GLFW, which also drives my event handlers for peripheral input as well as my main loop. However, I don't know whether GLFW will help me if I want to make a command-line program with files as input and output.
Does anyone have any indications as to how to approach this?
create an invisible/hidden window,
use its GL context to render to an FBO, and
use glReadPixels to save the result to a file (see the sketch below).
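A minimal sketch of those three steps with GLFW (it only uses GL 1.1-era calls, so no extension loader is needed; for simplicity it reads back the hidden window's default framebuffer, though a real tool should render into an explicit FBO so the output size doesn't depend on the window):

    // Headless render-to-file: hidden GLFW window, draw, glReadPixels, write PPM.
    #include <GLFW/glfw3.h>
    #include <cstdio>
    #include <vector>

    int main() {
        const int W = 640, H = 480;
        glfwInit();
        glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);   // step 1: invisible window
        GLFWwindow* win = glfwCreateWindow(W, H, "offscreen", nullptr, nullptr);
        glfwMakeContextCurrent(win);

        glClearColor(0.2f, 0.3f, 0.4f, 1.0f);       // step 2: render the scene
        glClear(GL_COLOR_BUFFER_BIT);
        // ... draw the model here ...
        glFinish();

        // Step 3: read back and save as a binary PPM (GL rows are bottom-up).
        std::vector<unsigned char> px(W * H * 3);
        glReadPixels(0, 0, W, H, GL_RGB, GL_UNSIGNED_BYTE, px.data());
        FILE* f = std::fopen("out.ppm", "wb");
        std::fprintf(f, "P6\n%d %d\n255\n", W, H);
        for (int y = H - 1; y >= 0; --y)
            std::fwrite(&px[size_t(y) * W * 3], 1, W * 3, f);
        std::fclose(f);

        glfwDestroyWindow(win);
        glfwTerminate();
    }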
For OpenGL to work you need an OpenGL context, which used to require some kind of active windowing system that could produce a drawable for which the context could be created.
Some OpenGL implementations, like Mesa, actually allow you to create an OpenGL context for drawables that are created without a windowing system; Mesa calls this "off-screen Mesa" (OSMesa). With Gallium3D drivers on Linux this may even give you GPU acceleration, but usually you end up in the "softpipe" software rasterizer.
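That off-screen mode is exposed as the OSMesa API; a minimal sketch, assuming a Mesa build with OSMesa enabled (link with -lOSMesa):

    // "Off-screen Mesa": an OpenGL context with no windowing system at all.
    #include <GL/osmesa.h>
    #include <GL/gl.h>
    #include <vector>

    int main() {
        const int W = 640, H = 480;
        std::vector<unsigned char> buf(W * H * 4);     // RGBA output buffer

        OSMesaContext ctx = OSMesaCreateContext(OSMESA_RGBA, nullptr);
        OSMesaMakeCurrent(ctx, buf.data(), GL_UNSIGNED_BYTE, W, H);

        glClearColor(1.0f, 0.0f, 0.0f, 1.0f);          // render as usual
        glClear(GL_COLOR_BUFFER_BIT);
        glFinish();                                    // buf now holds the pixels

        OSMesaDestroyContext(ctx);
    }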
Does anyone have any indications as to how to approach this?
Don't use OpenGL for it: OpenGL is mostly meant for creating interactive graphics. But of course, if your goal is the visualization of complex data, then a GPU is well suited to the job.
With NVidia hardware you'll need to use an X server for that; the X server must be running and active on the console for this to work. AMD hardware with the open source drivers and Mesa may give you off-screen capabilities without X (but I never tried that).
On Windows Server you don't have proper OpenGL support anyway (just v1.4 and very slow), so don't bother with it.

Hardware acceleration without X

I was wondering whether it is possible to get graphical hardware acceleration without Xorg and its DDX driver, with only the kernel module and the rest of the userspace driver. I'm asking because I'm starting to develop on an embedded platform (something like a BeagleBoard, or more generally a Texas Instruments ARM chip with an integrated GPU), and I would like hardware acceleration without the overhead of a graphical server (which is not needed).
If yes, how? I was thinking about OpenGL or OpenGL ES implementations, or Qt Embedded: http://harmattan-dev.nokia.com/docs/library/html/qt4/qt-embeddedlinux-accel.html
TI provides extensive documentation, but it is still not clear to me:
http://processors.wiki.ti.com/index.php/Sitara_Linux_Software_Developer%E2%80%99s_Guide
Thank you.
The answer will depend on your user application. If everything is bare metal and your application team is writing everything, the DirectFB API can be used, as Fredrik suggests. This might be especially interesting if you use the framebuffer version of GTK.
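A minimal DirectFB sketch of that route: a fullscreen, double-buffered primary surface drawn to and flipped without any X server (compile with the flags from pkg-config --cflags --libs directfb; error handling omitted):

    // DirectFB: draw to the framebuffer-backed primary surface, no X involved.
    #include <directfb.h>

    int main(int argc, char* argv[]) {
        IDirectFB* dfb;
        IDirectFBSurface* primary;
        DFBSurfaceDescription dsc;
        int w, h;

        DirectFBInit(&argc, &argv);
        DirectFBCreate(&dfb);
        dfb->SetCooperativeLevel(dfb, DFSCL_FULLSCREEN);

        dsc.flags = DSDESC_CAPS;
        dsc.caps  = (DFBSurfaceCapabilities)(DSCAPS_PRIMARY | DSCAPS_FLIPPING);
        dfb->CreateSurface(dfb, &dsc, &primary);
        primary->GetSize(primary, &w, &h);

        primary->SetColor(primary, 0x20, 0x40, 0x80, 0xff);
        primary->FillRectangle(primary, 0, 0, w, h);   // possibly HW-accelerated
        primary->Flip(primary, NULL, DSFLIP_WAITFORSYNC);

        primary->Release(primary);
        dfb->Release(dfb);
        return 0;
    }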
However, if you are using Qt, then this is not the best way forward. Qt5.0 does away with QWS (Qt embedded acceleration). Qt is migrating to LightHouse, now known as QPA. If you write a QPA plug-in that uses your graphics acceleration by whatever kernel mechanism you expose, then you have accelerated Qt graphics. Also of interest might be the Wayland architecture; there are QPA plug-ins for Wayland. Support exists for QPA in Qt4.8+ and Qt5.0+. Skia is also an interesting graphics API with support for an OpenGL backend; Skia is used by Android devices.
Getting graphics acceleration is easy. Do you want compositing? What is your memory footprint? Who is your developer audience that will program against the API? Do you need object functionality or just drawing primitives? There is a big difference between Skia, PegUI, WindML and full-blown graphics frameworks (Gtk, Qt) with all the widgets and dynamic effects that people expect today. Programming against the OpenGL ES API might seem fine at first glance, but if your application has any complexity you will need a richer graphics framework; this mostly re-iterates Mats Petersson's comment.
Edit: From the Qt embedded acceleration link,
CPU blitter - slowest
Hardware blitter - e.g., DirectFB. Fast memory movement, usually with bit ops as opposed to machine words, like DMA.
2D vector - OpenVG: stick-figure drawing, with bit manipulation.
3D drawing - OpenGL(ES) has polygon fills, etc.
This is the type of drawing you wish to perform. Frameworks like Qt and Gtk give an API to put a radio button, checkbox, editbox, etc. on the screen. They also provide styling of text and interaction with a keyboard, mouse and/or touch screen and other elements. A framework uses the drawing engine to put the objects on the screen.
Graphics acceleration is just putting algorithms like Bresenham's in a separate CPU or dedicated hardware. If the framework you chose doesn't support 3D objects, it is unlikely to need OpenGL support and may not perform any better.
The final piece of the puzzle is a window manager. Many embedded devices do not need one. However, many handsets use compositing and alpha values to create transparent windows and allow multiple apps to be seen at the same time. This may also influence your graphics API.
Additionally: DRI without X gives some compelling reasons why this might not be a good thing to do; for the case of a single user task, the DRI is not even needed.
(A diagram of the Wayland graphics stack can be found in a blog on Wayland.)
This depends on the SoC GPU driver implementation.
On i.MX6 you can use a Wayland compositor on the framebuffer.
I built a sample project as a reference:
Qt with wayland on imx6D/Q
On OMAP3 there is a project:
omap3 sgx wayland

Learning about low-level graphics programming

I'm interested in learning about the different layers of abstraction available for making graphical applications.
I see a lot of terms thrown around: At the highest level of abstraction, I hear about things like C#, .NET, pyglet and pygame. Further down, I hear about DirectX and OpenGL. Then there's DirectDraw, SDL, the Win32 API, and still other multi-platform libraries like WxWidgets.
How can I get a good sense of where one of these layers ends and where the next one begins? What is the "lowest possible level" way of creating a window in Windows, in C? What about C++? (A code sample would be divine.) What about in X11? Are the Windows implementations of OpenGL and DirectX built on top of the Win32 API? Where can I begin to learn about these things?
There's another question on SO where Programming Windows is suggested. What about for Linux? Is there an equivalent such book?
I'm aware that this is very low-level, and that there are many friendlier tools available, but I would like to at least learn the basics of what's going on beneath the surface. As much as I'd like to begin slinging windows and vectors right off the bat, starting with something like pygame is too high-level for me; I really need to make the full conceptual circuit of how you draw stuff on a computer.
I will certainly appreciate suggestions for books and resources, but I think it would be stupendously cool if the answers to this question filled up with lots of different ways to get to "Hello world" with different approaches to graphics programming. C? C++? Using OpenGL? Using DirectX? On Windows XP? On Ubuntu? Maybe I ask for too much.
The lowest level would be the graphics card's video RAM. When the computer first starts, the graphics card is typically set to the 80x25 character legacy mode.
You can write text with a BIOS-provided interrupt at this point. You can also change the foreground and background color, from a palette of 16 distinctive colors. You can access ports/registers to change the display mode. At this point you could, say, load a different font into display memory and still use the 80x25 mode (OS installations usually do this), or you can go ahead and enable VGA/SVGA. It's quite complicated; that's what drivers are for.
Once the card is in the 'higher' mode, you change what's on screen by accessing the memory mapped to the video card. It's stored horizontally, pixel by pixel, with some padding pixels that aren't mapped to the screen at the end of each line, which you have to compensate for. But yes, you can copy the pixels of an image in memory directly to the screen.
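The modern Linux analogue of this is mmap()ing /dev/fb0 and writing the pixels yourself; a sketch assuming a 32bpp mode (real code should check bits_per_pixel and the channel offsets), where line_length is exactly that per-line padding:

    // Poking pixels straight into the Linux framebuffer device.
    #include <fcntl.h>
    #include <linux/fb.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main() {
        int fd = open("/dev/fb0", O_RDWR);
        fb_var_screeninfo v;                   // resolution, bits per pixel
        fb_fix_screeninfo f;                   // line_length = bytes per row
        ioctl(fd, FBIOGET_VSCREENINFO, &v);
        ioctl(fd, FBIOGET_FSCREENINFO, &f);

        uint8_t* fb = (uint8_t*)mmap(nullptr, f.smem_len,
                                     PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        for (uint32_t y = 0; y < v.yres; ++y)  // fill the screen with orange
            for (uint32_t x = 0; x < v.xres; ++x)
                *(uint32_t*)(fb + y * f.line_length + x * 4) = 0x00ff8000u;

        munmap(fb, f.smem_len);
        close(fd);
    }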
For things like DirectX and OpenGL, rather than writing directly to the screen, commands are sent to the graphics card and it updates the screen itself. Commands like "Hey you, draw this image I've loaded into the VRAM here, here and here" or "Draw these triangles with this transformation matrix..." take a fraction of the time compared to going pixel by pixel. The CPU will thank you.
DirectX/OpenGL are programmer-friendly libraries for sending those commands to the card, with all the supporting functions to help you get it done smoothly. A more direct approach would only be unproductive.
SDL is an abstraction layer, so without bothering to read up on it I'd guess it has different ways of working on each system. On one it might use semi-direct screen writing, on another Direct3D, etc. Whatever's fastest, as long as the code stays cross-platform..able.
Then there are GDI/GDI+ and the X Window System. They're designed specifically to draw windows. Originally they drew using the pixel-by-pixel method (which was good enough, because they only had to redraw when a button was pressed or a window moved, etc.), but now they use Direct3D/OpenGL for accelerated drawing (and special effects). Optimizations depend on the versions and implementations of these libraries.
So if you want the most power and speed, DirectX/OpenGL is the way to go. SDL is certainly useful for getting the most from a cross-platform environment, and it integrates with OpenGL anyway. The windowing system comes last, but don't underestimate it, especially with the stuff Microsoft has been coming up with lately.
Michael Abrash's Graphics Programming 'Black Book' is a great place to start. Plus you can download it for free!
If you really want to start at the bottom then drawing a line is the most basic operation. Computer graphics is simply about filling in pixels on a grid (screen), so you need to work out which pixels to fill in to get a line that goes from (x0,y0) to (x1,y1).
Check out Bresenham's algorithm to get a feel for what is involved (a minimal version follows).
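For reference, the all-octant integer form fits in a dozen lines (plot() here stands for whatever hypothetical per-pixel routine your canvas provides):

    // Bresenham's line algorithm: integer error tracking, one pixel per step.
    #include <cstdlib>

    void drawLine(int x0, int y0, int x1, int y1, void (*plot)(int, int)) {
        int dx = std::abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
        int dy = -std::abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
        int err = dx + dy;                          // accumulated error term
        for (;;) {
            plot(x0, y0);
            if (x0 == x1 && y0 == y1) break;
            int e2 = 2 * err;
            if (e2 >= dy) { err += dy; x0 += sx; }  // error says: step in x
            if (e2 <= dx) { err += dx; y0 += sy; }  // error says: step in y
        }
    }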
Being a good graphics and image-processing programmer doesn't require this low-level knowledge, but I do hate to be clueless about the insides of what I'm using. I see two ways to chase this: high-level down, or bottom-level up.
Top-down is a matter of following how the action traces from a high-level graphics operation, such as drawing a circle, down to the hardware. Get to know OpenGL well. Then the source of Mesa (free!) provides a peek at how OpenGL can be implemented in software. The source of Xorg would be next, first to see how the action goes from API calls through the client side to the X server. Finally you dive into a device driver that interfaces with hardware.
Bottom up: build your own graphics hardware. Think of ways it could connect to a computer - how to handle massive numbers of pixels through a few byte-size registers, how DMA would work. Write a device driver, and try designing a graphics library that might be useful for app programmers.
The bottom-up way is how I learned, years ago when it was a possibility with the slow 8-bit microprocessors. The direct experience with circuitry and hardware-software interfacing gave me a good appreciation of the difficult design decisions - e.g. whether to paint rectangles using clever hardware, in the device driver, or at a higher level. None of this is of practical everyday value, but it provided a foundation of knowledge for understanding newer technology.
See the Open GPU Documentation section:
http://developer.amd.com/documentation/guides/Pages/default.aspx
HTH
On MS Windows it is easy: you use what the API provides, whether that is the standard Windows programming API or the DirectX family of APIs; they are well documented.
In an X Windows environment you use whatever X11 libraries are provided. If you want to understand the principles behind windowing on X, I suggest that you do this, never mind that many others tell you not to; it will really help you understand graphics and windowing under X. You can read the documentation on X programming (google for it). (After this exercise you will appreciate the higher-level libraries!)
Apart from the above, the absolute lowest level you can go to (excluding chip level) is to call the interrupts that switch to the various graphics modes available - there are several - and then write to the screen buffers; but for this you would have to use assembler, as anything else would be too slow. Going this way will not be portable at all.
Another post mentions Abrash's Black Book - an excellent resource.
Edit: As for books on programming Linux: it is a community thing; there are many howtos around. Also find a forum, join it, and as long as you act civilized you will get all the help you could ever need.
Right off the bat, I'd say "you're asking too much." From what little experience I've had, I would recommend reading some tutorials or getting a book on either DirectX or OpenGL to start out. Going any lower than that would be pretty complex. Most of the books I've seen on OGL or DX have pretty good introductions that explain what the functions/classes do.
Once you get the hang of one of these, you could always dig in to the libraries to see what exactly they're doing to go lower.
Or, if you really, absolutely MUST learn the LOWEST level... read the book in the above post.
libX11 is the lowest-level library for X11. I believe OpenGL/DirectX talk to the driver/hardware directly (or emulate unsupported operations), so they would be the lowest-level libraries.
If you want to start with very low-level programming, look for x86 assembly code for VGA and fire up a copy of DOSBox or similar.
The Vulkan API gives you very low-level access to most if not all features of the GPU, computational and graphical alike. It works on AMD and Nvidia GPUs (though not all of them).
You can also use CUDA, but it only works on Nvidia GPUs and gives access to computational features only, with no video output.

Fast, Pixel Precision 2D Drawing API for Graphics App?

I would like to create a cross-platform drawing program. The one requirement for my app is that I have pixel-level precision over the canvas. For instance, I want to write my own line-drawing algorithm rather than rely on someone else's. I do not want any form of anti-aliasing (again, pixel-level control is required). I would like the user's interactions on the screen to be quick and responsive (pending my ability to write fast algorithms).
Ideally, I would like to write this in Python, or perhaps Java as a second choice. The ability to easily make the final app cross-platform is a must. I will submit to different APIs on different OSes if necessary, as long as I can write an abstraction layer around them. Any ideas?
addendum: I need the ability to draw on-screen. Drawing out to a file I've got figured out.
I just this week put together some slides and demo code for doing 2D graphics using OpenGL from Python, using the pyglet library. Here's a representative post: Pyglet week 2, better vertex throughput (or 3D stuff using the same basic ideas).
It is very fast (relatively speaking, for Python): I have managed to get around 1,000 independently positioned and oriented objects moving around the screen, each with about 50 vertices.
It is very portable: all the code I have written in this environment works on Windows, Linux and Mac (and even obscure environments like PyPy) without me ever having to think about it.
Some of these posts are very old, with broken links between them. You should be able to find all the relevant posts using the 'graphics' tag.
The Pyglet library for Python might suit your needs. It lets you use OpenGL, a cross-platform graphics API. You can disable anti-aliasing and capture regions of the screen to a buffer or a file. In addition, you can use its event-handling, resource-loading, and image-manipulation systems. You can probably also tie it into PIL (Python Imaging Library), and definitely Cairo, a popular cross-platform vector graphics library.
I mention Pyglet instead of pure PyOpenGL because Pyglet handles a lot of ugly OpenGL stuff transparently with no effort on your part.
A friend and I are currently working on a drawing program using Pyglet. There are a few quirks - for example, OpenGL is always double-buffered on OS X, so we have to draw everything twice, once for the current frame and again for the other frame, since they are flipped whenever the display refreshes. You can look at our current progress in this Subversion repository. (Splatterboard.py in trunk is the file you'll want to run.) If you're not up on using svn, I would be happy to email you a .zip of the latest source. Feel free to steal code if you look into it.
If language choice is open, a Flash file created with Haxe might have a place. Haxe is free and a full, dynamic programming language. Then there's the related Neko, a virtual machine (like Java's, Ruby's, or Parrot's) that runs on Mac, Windows and Linux. Being in some ways a new, improved form of Flash, naturally it can draw stuff. http://haxe.org/
Qt's Canvas and QPainter are very good for this job if you'd like to use C++, and they are cross-platform.
There is a Python binding for Qt, but I've never used it.
As for Java: with SWT, pixel-level manipulation of a canvas is somewhat difficult and slow, so I would not recommend it. On the other hand, Swing's Canvas is pretty good and responsive. I've never used the AWT option, but you probably don't want to go there.
I would recommend wxPython
It's beautifully cross-platform, and you can get per-pixel control; if you change your mind about that, you can use it with libraries such as pyglet or AGG.
You can find some useful examples for just what you are trying to do in the docs and demos download.
