How to enable vtk "direct" hardware rendering, X11 / Ubuntu 11.04 x86-64? - linux

I notice that the IsDirect() method, which should say whether hardware rendering is on, returns false on my vtkRenderWindow*. Link to the class doc:
http://www.vtk.org/doc/nightly/html/classvtkRenderWindow.html
How do I enable it? It seems like a build option in CMake should do it, but which one? My OpenGL libs are found OK. I notice there's a thread from a while back here: http://www.vtk.org/pipermail/vtkusers/2005-July/081034.html but I don't see any more recent info so far. Is it just not supported on Linux?

First of all, I doubt direct HW rendering is something you enable with a CMake option.
Applications fall back to software rendering when hardware rendering is not possible (e.g. the GPU doesn't support some of the requested options).
Without seeing some code and getting details (which GPU?), it is not possible to give a better answer.
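For what it's worth, a minimal check along these lines shows what you actually get (a sketch; Render() is called first so a GL context exists before querying it):

#include <vtkSmartPointer.h>
#include <vtkRenderWindow.h>
#include <iostream>

int main()
{
    // Create a render window and force context creation with Render().
    vtkSmartPointer<vtkRenderWindow> renWin = vtkSmartPointer<vtkRenderWindow>::New();
    renWin->Render();

    // IsDirect() returns non-zero for a direct (hardware accelerated) GLX context,
    // 0 when the context is indirect or a software fallback.
    std::cout << "Direct rendering: " << (renWin->IsDirect() ? "yes" : "no") << std::endl;
    return 0;
}

If this prints "no", the usual suspects are the GL driver (e.g. Mesa software rendering) or remote/indirect X, not the VTK build itself.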

Related

AMD GPU won't work with Blender's "Cycles Render" on Linux

I've been working a lot with Blender and its "Cycles Render" on Fedora lately, but Blender keeps getting a lot slower while rendering. I discovered that my Blender is only capable of rendering with my CPU. I tried running Blender from the terminal so I could see any errors, and if I set "Device" to "GPU Compute" in the rendering settings, I get this output:
DRM_IOCTL_I915_GEM_APERTURE failed: Invalid argument
Assuming 131072kB available aperture size.
May lead to reduced performance or incorrect rendering.
get chip id failed: -1 [2]
param: 4, val: 0
My machine's specifications are:
Operating system: Fedora GNU/Linux 27
Blender version: 2.79
Graphics card: AMD Radeon RX 480 using "amdgpu" driver (default open-source driver)
So it seems like Blender's Cycles Render won't work with my AMD GPU...
Any ideas?
As far as I've seen in the release docs, the Blender Cycles engine is not yet fully optimized for all AMD graphics cards; currently they only support AMD cards with GCN architecture 2.0 and above. The dev team focuses mostly on NVIDIA cards (Blender is also most optimized for Windows).
However, you might as well try to change the settings. First, make sure you are using OpenCL and not CUDA in your User Preferences, under the System tab, Compute Device(s). Then, if your card is not supported, enable the experimental feature set in the render properties of your workspace (it warns you that this will make everything unstable); this usually makes most AMD GPUs selectable as a render device. In the render properties you also select the compute device you want to use for each scene.
Also, using an official AMD driver would make rendering faster (this is also a requirement by Blender to use AMD cards), but it's not available for Fedora as far as I know. I suggest changing your distro to Ubuntu.
EDIT: You MUST use an official AMD driver for the desired card. I have checked that your card is on the list of supported cards; it just IS a requirement to have the AMD driver and not the open-source one. This is the list of supported cards: https://en.wikipedia.org/wiki/List_of_AMD_graphics_processing_units
And it must be a driver from this list: https://support.amd.com/en-us/download/linux, according to the Blender documentation.
If that doesn't solve the issue, then it must be a hardware issue or a Blender bug, although you could try running it on Windows to rule out a hardware issue, if you are willing to do a dual-boot or USB-boot test.

Get screenshot of EGL DRM/KMS application

How do I get a screenshot of a graphical application programmatically? The application draws its window using the EGL API via DRM/KMS.
I use Ubuntu Server 16.04.3 and a graphical application written with Qt 5.9.2 using the EGLFS QPA backend. It is started from the first virtual terminal (if that matters), then it switches the display to full HD graphical mode.
When I use utilities (e.g. fb2png) which operate on /dev/fb?, only the text-mode contents of the first virtual terminal (Ctrl+Alt+F1) are saved in the screenshot.
It is unlikely that there is an EGL API to get the contents of a buffer from another process's context (it would be insecure), but maybe there is some mechanism (and library) to get access to the final output of the GPU?
One way would be to get a screenshot from within your application, reading the contents of the back buffer with glReadPixels(). Or use QQuickWindow::grabWindow(), which internally uses glReadPixels() in the correct way. This seems not to be an option for you, as you need to take a screenshot when the Qt app is frozen.
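For reference, the in-process Qt variant is only a few lines (a minimal sketch; saveScreenshot is just an illustrative helper, not Qt API):

#include <QQuickWindow>
#include <QImage>
#include <QString>

// Grab the QML scene of 'window' and save it to 'path'.
// QQuickWindow::grabWindow() renders the scene and reads the pixels back
// (glReadPixels under the hood), so it must run in the Qt GUI process.
bool saveScreenshot(QQuickWindow *window, const QString &path)
{
    QImage shot = window->grabWindow();
    return shot.save(path);
}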
The other way would be to use the DRM API to map the framebuffer and then memcpy the mapped pixels. This is implemented in Chromium OS with Python and can be translated to C easily, see https://chromium-review.googlesource.com/c/chromiumos/platform/factory/+/367611. The DRM API can also be used by another process than the Qt UI process that does the rendering.
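A rough sketch of that approach with libdrm (assumptions: a single active CRTC, root or DRM-master access, and a driver that exposes the scanout buffer through the dumb-buffer mmap path; mapCurrentFramebuffer is just an illustrative name and error handling is mostly omitted):

#include <fcntl.h>
#include <sys/mman.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

// Map the framebuffer currently scanned out by the first CRTC.
// On success returns a read-only mapping of fb->pitch * fb->height bytes.
void *mapCurrentFramebuffer(int fd, drmModeFB **fbOut)
{
    drmModeRes *res = drmModeGetResources(fd);
    if (!res || res->count_crtcs < 1) return nullptr;

    drmModeCrtc *crtc = drmModeGetCrtc(fd, res->crtcs[0]);
    drmModeFB *fb = drmModeGetFB(fd, crtc->buffer_id);      // current scanout buffer

    // Ask the driver for an mmap offset for the buffer handle (dumb-buffer path).
    struct drm_mode_map_dumb map = {};
    map.handle = fb->handle;
    if (drmIoctl(fd, DRM_IOCTL_MODE_MAP_DUMB, &map) != 0) return nullptr;

    *fbOut = fb;
    return mmap(nullptr, fb->pitch * fb->height, PROT_READ, MAP_SHARED, fd, map.offset);
}

// Usage: int fd = open("/dev/dri/card0", O_RDWR); then memcpy rows out of the mapping.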
This is a very interesting question, and I have fought this problem from several angles.
The problem is quite complex and platform-dependent. You seem to be running on EGL, which means embedded, and there you have few options unless your platform offers them.
The options you have are:
glGetTexImage
glGetTexImage can copy several kinds of buffers from OpenGL textures to CPU memory. Unfortunately it is not supported in GLES 2/3, but your embedded provider might support it via an extension. This is nice because you can either render to an FBO or get the pixels from the specific texture you need. It also requires minimal code changes.
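On desktop GL the download itself is a one-liner; a minimal sketch, assuming tex is a complete RGBA texture of known size:

#include <GL/glew.h>
#include <vector>

// Download the level-0 image of an RGBA texture into CPU memory (desktop GL only;
// on GLES you would need a vendor extension or an FBO + glReadPixels fallback).
std::vector<unsigned char> downloadTexture(GLuint tex, int width, int height)
{
    std::vector<unsigned char> pixels(width * height * 4);
    glBindTexture(GL_TEXTURE_2D, tex);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
    return pixels;
}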
glReadPixels
glReadPixels is the most common way to download all or part of the pixels the GPU has already rendered. Albeit slow, it works on GLES and desktop. On desktop with a decent GPU it is bearable up to interactive framerates, but beware: on embedded it can be really slow, as it stops your render thread to get the data (horrible framedrops ensue). On the plus side, it can be made to work with minimal code modifications.
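A typical synchronous grab of the currently bound framebuffer (a minimal sketch; note the rows come out bottom-up):

#include <GL/glew.h>
#include <vector>

// Read a width x height RGBA block starting at the lower-left corner.
// Works on desktop GL and GLES 2/3, but stalls the pipeline until the GPU is done.
std::vector<unsigned char> grabFramebuffer(int width, int height)
{
    std::vector<unsigned char> pixels(width * height * 4);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);   // no row padding in the destination buffer
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
    return pixels;
}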
Pixel Buffer Objects (PBOs)
Once you start doing real research, PBOs appear here and there because they can be made to work asynchronously. They are generally not supported on embedded, but can work really well on desktop, even on mediocre GPUs. They are also a bit tricky to set up and require specific render modifications.
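The asynchronous pattern, sketched for desktop GL (assumes GL 3.x or the pixel-buffer-object and map-buffer-range extensions): start the transfer into a pixel-pack buffer this frame and map it a frame or two later, so the copy overlaps with rendering instead of stalling it.

#include <GL/glew.h>
#include <cstring>

GLuint pbo = 0;

void initReadbackPBO(int width, int height)
{
    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, nullptr, GL_STREAM_READ);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}

// Frame N: kick off the transfer. glReadPixels returns immediately because the
// destination is a buffer object (the last argument is an offset, not a pointer).
void startReadback(int width, int height)
{
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}

// Frame N+1 (or later): map the buffer and copy the finished pixels out.
void finishReadback(int width, int height, unsigned char *dst)
{
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
    void *src = glMapBufferRange(GL_PIXEL_PACK_BUFFER, 0, width * height * 4, GL_MAP_READ_BIT);
    if (src) {
        std::memcpy(dst, src, width * height * 4);
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}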
Framebuffer
On embedded systems you sometimes already render to the framebuffer, so go there and fetch the pixels. This also works on desktop. You can even mmap() the buffer to a file and get partial contents easily. But beware: on many embedded systems EGL does not render to the framebuffer but to a different 'overlay', so you might be snapshotting only the background behind it. Also note that some multimedia applications run the UI on EGL and the media player on the framebuffer, so if you only need to capture the video player this might work for you. In other cases EGL targets a texture that is copied to the framebuffer, and then it will also work just fine.
As far as I know, render-to-texture and streaming to a framebuffer is how they made the sweet Qt UI you see on the Ableton Push 2.
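A classic fbdev grab looks roughly like this (a sketch; it assumes the output you want really ends up in /dev/fb0 and omits error handling):

#include <fcntl.h>
#include <linux/fb.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

// Map /dev/fb0 read-only and return a pointer to the raw pixel data.
void *mapFramebuffer(fb_var_screeninfo *vinfo, fb_fix_screeninfo *finfo)
{
    int fd = open("/dev/fb0", O_RDONLY);
    if (fd < 0) return nullptr;

    ioctl(fd, FBIOGET_VSCREENINFO, vinfo);   // resolution and bits per pixel
    ioctl(fd, FBIOGET_FSCREENINFO, finfo);   // buffer size and bytes per line

    // finfo->smem_len bytes of scanout memory; one row is finfo->line_length bytes.
    return mmap(nullptr, finfo->smem_len, PROT_READ, MAP_SHARED, fd, 0);
}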
More exotic: DispmanX/OpenWF
On some embedded systems (notably the Raspberry Pi and most Broadcom VideoCores) you have DispmanX, which is really interesting:
This is fun:
The lowest level of accessing the GPU seems to be by an API called Dispmanx[...]
It continues...
Just to give you total lack of encouragement from using Dispmanx there are hardly any examples and no serious documentation.
Basically DispmanX sits very near to bare metal, even deeper down than the framebuffer or EGL. Really interesting stuff, because you can use vc_dispmanx_snapshot() and get a snapshot of everything really fast. And by fast I mean I got 30 FPS RGBA32 screen capture with no noticeable stutter on screen and about 4~6% of extra CPU overhead on a Raspberry Pi. Night and day, because glReadPixels was producing very noticeable framedrops even for a 1x1 pixel capture.
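For completeness, a vc_dispmanx_snapshot() capture on the Raspberry Pi boils down to roughly this (a sketch against the Broadcom userland headers in /opt/vc; real code should check every return value, and width/pitch usually need to be aligned):

#include <bcm_host.h>

// Grab the whole composited display into a caller-provided RGBA buffer
// of width * height * 4 bytes.
void dispmanxSnapshot(int width, int height, void *dst)
{
    bcm_host_init();

    DISPMANX_DISPLAY_HANDLE_T display = vc_dispmanx_display_open(0);   // display 0 = main output

    uint32_t imageHandle = 0;
    DISPMANX_RESOURCE_HANDLE_T resource =
        vc_dispmanx_resource_create(VC_IMAGE_RGBA32, width, height, &imageHandle);

    // Snapshot the final GPU output into the resource ...
    vc_dispmanx_snapshot(display, resource, DISPMANX_NO_ROTATE);

    // ... then read the pixels back to CPU memory.
    VC_RECT_T rect;
    vc_dispmanx_rect_set(&rect, 0, 0, width, height);
    vc_dispmanx_resource_read_data(resource, &rect, dst, width * 4);

    vc_dispmanx_resource_delete(resource);
    vc_dispmanx_display_close(display);
}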
That's pretty much what I've found.

ncurses disable kernel messages on console screen?

I'm looking for a way to get rid of (kernel?) messages that appear in my ncurses app. I wrote the app myself, so I would prefer an API that redirects these messages to /dev/null. I mean messages like the one shown when a USB stick is inserted.
I tried to add this, but unfortunately it doesn't work:
freopen("/dev/null", "w", stderr);
I'm not running X, just ncurses direct from the console.
Thanks!
UPDATE 1:
Someone voted to close this question because it would supposedly not be related to programming. But it is: I wrote the ncurses app myself, and I'm looking for a way to disable the kernel messages. I updated the question.
UPDATE 2:
Let me explain what I'm doing and what the problem is in more detail:
I'm using Tiny Core Linux, which after boot starts a (self-written) ncurses program. Now when you, for example, connect a USB drive, a message (I suspect from the kernel) is shown over my program. I guess the message is written straight into the framebuffer. I'm using TC 5.x since I need 32-bit; I'm running as root and have full access to the OS.
You should be able to use openvt to have your program run on a new Virtual Terminal.
I'll also note that it should be possible to embed control of the VTs in your own program if you prefer to avoid the external dependency, but note that the structures used may not be stable between kernel versions and may require recompilation.
See the KBD project's sources, specifically openvt.c to see how it works.
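The invocation is typically something like the following (check openvt(1) on your Tiny Core image for the exact flags; as far as I recall, -s switches to the new VT and -w waits for the program to exit):

openvt -s -w -- /path/to/your-ncurses-app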
Try configuring the kernel through boot parameters with the option:
loglevel=3 (or a lower value)
0 (KERN_EMERG) system is unusable
1 (KERN_ALERT) action must be taken immediately
2 (KERN_CRIT) critical conditions
3 (KERN_ERR) error conditions
4 (KERN_WARNING) warning conditions
5 (KERN_NOTICE) normal but significant condition
6 (KERN_INFO) informational
7 (KERN_DEBUG) debug-level messages
source: https://www.kernel.org/doc/Documentation/kernel-parameters.txt
See also: Change default console loglevel during boot up
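Since you wrote the app yourself, you can also lower the console log level at runtime instead of (or as well as) the boot parameter; a minimal sketch using klogctl(3), which needs root (you have it):

#include <sys/klog.h>

// Set the console log level to 3 (same effect as loglevel=3 above).
// 8 is SYSLOG_ACTION_CONSOLE_LEVEL; afterwards only messages more severe
// than KERN_ERR (i.e. KERN_CRIT and up) still reach the console.
void quietConsole()
{
    klogctl(8, nullptr, 3);   // roughly: echo 3 > /proc/sys/kernel/printk (first field)
}

Restore a higher level, or call klogctl(7, nullptr, 0) (SYSLOG_ACTION_CONSOLE_ON), when your program exits.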
It might be impossible to block another process with sufficient access from writing to /dev/console, but you may be able to redefine the console as some other device at boot time by setting console=ttyS0 (first serial port); see:
https://unix.stackexchange.com/questions/60641/linux-difference-between-dev-console-dev-tty-and-dev-tty0
Also, if we know exactly which software is sending the message, it may be possible to reconfigure it (possibly dynamically), but it would help to know the version and edition of Tiny Core Linux you are using.
E.g. this website has "Core", "TinyCore" and "CorePlus" editions in versions 1.x up to 7:
http://tinycorelinux.net/downloads.html
That would help reproduce the exact same behavior and test potential solutions.

Some tearing on QML animations

I am noticing some tearing in some QML 2 animations with Qt 5.4.2 on my Tegra 3 based embedded Linux board. I doubt this is purely a vsync issue, because most of the animations are smooth, but there are some animations involving a lot of parallel motion and clipping that tear consistently. These animations come out torn as opposed to simply stuttering, so I don't think it is entirely a performance issue either, though it might be caused by the system not being able to put out the FPS necessary to sync properly? The exact same application has no such trouble on my Haswell i7 PC.
I have enabled QT_QPA_EGLFS_FORCEVSYNC to no effect and have not yet managed to find anything else to try. I should mention, though, that I am running EGLFS with an X11 backend (http://code.qt.io/cgit/qt/qtbase.git/tree/src/plugins/platforms/eglfs/qeglfshooks_x11.cpp?h=5.4) because the Nvidia drivers dictate the use of X11. I assume this means I can't really use the FB-related settings normally available with EGLFS. Is there anything else I can try to fix this?
PS. By setting QT_QPA_EGLFS_SWAPINTERVAL to 0 I can get the tearing to become a whole lot worse. This again suggests that I most likely do not have a whole system vsync issue.
PPS. I am getting a "QSGContext::initialize: stencil buffer support missing, expect rendering errors" warning at the start of my application.
On a Freescale/NXP imx6 with a Vivante GC2000 I see a similar problem, even when not using X11.
Setting "export QT_QPA_EGLFS_SWAPINTERVAL=2" seems to reduce the tearing on the 3.14.38 kernel.
On the 3.14.52 kernel that did not work, but "export FB_MULTI_BUFFER=3" does help on both Qt 5.5.1 and 5.6 with imx6.

Tips to reduce opengl 3 & 4 frame-rate stuttering under Linux

In recent years I've developed several small games and applications for OpenGL 2 and ES.
I'm now trying to build a scene graph based on OpenGL 3+ for casual "3D" graphics on desktop systems. (Nothing complex like the Unreal or CryEngine in mind.)
I started my development on OS X 10.7 and was impressed by Apple's recent OpenGL 3.2 release, which achieves results equivalent to Windows systems.
The results on Linux, on the other hand, are a disappointment. Even the most basic animation stutters and destroys the impression of reality. The results did not differ between the windowing toolkits freeglut and GLFW. (The extensions are loaded with GLEW 1.7.)
I would like to mention that I'm talking about the new OpenGL core profile, not the old OpenGL 2 render path, which works fine under Linux but uses the CPU instead of the GPU for complex operations.
After watching professional demos like the "Unigine Heaven" demo, I think there is a general problem with using modern real-time 3D graphics on Linux.
Any suggestions to overcome this problem are very welcome.
UPDATE:
I'm using:
AMD Phenom II X6, Radeon HD57XX with the latest proprietary drivers (11.8) and Unity (64-bit).
My render loop is taken from the toolkit documentation:
do {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    ...
} while (!glfwGetKey(GLFW_KEY_ESC) && glfwGetWindowParam(GLFW_OPENED));
I'm using VBOs, and all transformation stages are done with shaders. Animation timing is done with glfwGetTime(). The problem occurs in both windowed and full-screen mode. I don't know if a composition manager interferes with full-screen applications, but it is also unreasonable to demand that the user disable it.
Update 2:
Typo: I'm using a HD57XX card.
GLX_Extension: http://snipt.org/xnKg
PCI Dump: 01:00.0 VGA compatible controller: ATI Technologies Inc Juniper [Radeon HD 5700 Series]
X Info: http://pastie.org/2507935
Update 3:
Disabling the composition manager reduced, but did not completely remove, the stuttering.
(I replaced the standard window manager with "Ubuntu Classic without extensions".)
Once a second the animation freezes and ugly distortions appear:
(Image removed - Not allowed to post Images.)
This happens although vertical synchronisation is enabled in the driver and checked in the application.
Since you're running Linux we require a bit of detailed information:
Which hardware do you use?
Only NVidia, AMD/ATI and Intel offer 3D acceleration so far.
Which drivers?
For NVidia and AMD/ATI there are proprietary drivers (nvidia-glx, fglrx) and open source drivers (nouveau, radeon). For Intel there are only the open source drivers.
Of all open source 3D drivers, the Intel drivers offer the best quality.
The open source AMD/ATI drivers, "radeon", have reached an acceptable state, but still are not on par performance-wise.
For NVidia GPUs, the only drivers that make sense to use productively are the proprietary ones. The open source "nouveau" drivers simply don't cut it yet.
Do you run a compositing window manager?
Compositing creates a whole bunch of synchronization and timing issues. Also, (some of) the OpenGL code you can find in the compositing WMs in some places drives tears into the eyes of a seasoned OpenGL coder, especially one with experience writing realtime 3D (game) engines.
KDE4 and GNOME3 use compositing by default, if available. The same holds for the Ubuntu Unity desktop shell. Also, for some non-compositing WMs the default scripts start xcompmgr for transparency and shadow effects.
And last but not least: How did you implement your rendering loop?
A mistake often found is that a timer is used to issue redisplay events at "regular" intervals. This is not how it's done properly. Timer events can be delayed arbitrarily, and the standard timers are not very accurate by themselves either.
The proper way is to call the display function in a tight loop, measure the time between rendering iterations, and use this timing to advance the animation accordingly. A truly elegant method is to use one of the VSync extensions that delivers the display refresh frequency and the refresh counter. That way, instead of using a timer, you are told exactly how much time passed between frames, in display refresh cycle periods.
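With the GLFW 2.x API already used in the question, that skeleton looks roughly like this (a sketch; updateAnimation() and drawScene() stand in for your own code, and the vsynced glfwSwapBuffers() is what paces the loop):

double previous = glfwGetTime();

do {
    double now = glfwGetTime();
    double dt = now - previous;          // seconds elapsed since the last frame
    previous = now;

    updateAnimation(dt);                 // advance the scene by the measured time

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawScene();
    glfwSwapBuffers();                   // blocks until vsync when the swap interval is 1
} while (!glfwGetKey(GLFW_KEY_ESC) && glfwGetWindowParam(GLFW_OPENED));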
