AMD GPU won't work with Blender's "Cycles Render" on Linux

I've been working a lot with Blender and its "Cycles Render" engine on Fedora lately, but rendering has become very slow, and I discovered that Blender is only rendering with my CPU. I ran Blender from the terminal so I could see any errors, and when I set "Device" to "GPU Compute" in the render settings, I get this output:
DRM_IOCTL_I915_GEM_APERTURE failed: Invalid argument
Assuming 131072kB available aperture size.
May lead to reduced performance or incorrect rendering.
get chip id failed: -1 [2]
param: 4, val: 0
My machine's specifications are:
Operating system: Fedora GNU/Linux 27
Blender version: 2.79
Graphics card: AMD Radeon RX 480 using "amdgpu" driver (default open-source driver)
So it seems like Blender's Cycles Render won't work with my AMD GPU...
Any ideas?

As far as I've seen in the release notes, Blender's Cycles engine is not yet fully optimized for all AMD graphics cards; currently only AMD cards with GCN architecture 2.0 and above are supported. The dev team focuses mostly on NVIDIA cards (and Blender is most optimized for Windows).
However, you might as well try changing the settings. First, make sure you are using OpenCL and not CUDA in your User Preferences, under the System tab, Compute Device(s). Then, if your card is not supported, enable the experimental feature set in the render properties of your scene; it warns you that everything may become unstable, but it usually allows most AMD GPUs to be selected as a render device. In the render properties you also select the compute device you want to use for each scene.
Also, using an official AMD driver would make rendering faster (and it is a requirement by Blender for using AMD cards), but it's not available for Fedora as far as I know. I suggest changing your distro to Ubuntu.
EDIT: You MUST use an official AMD driver for the card. I have checked, and your card is on the list of supported cards (https://en.wikipedia.org/wiki/List_of_AMD_graphics_processing_units); it just IS a requirement to use the official AMD driver rather than the open-source one, according to the Blender documentation.
The driver must come from this list: https://support.amd.com/en-us/download/linux.
Now if that doesn't solve the issue, then it must be a hardware issue or a Blender bug, although you could try running it on Windows to rule out a hardware issue, if you are willing to do a dual-boot or USB-boot test.
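Independent of the Blender settings, you can check whether any OpenCL runtime on the system exposes the GPU at all, which is what Cycles ultimately needs. Here is a minimal clinfo-style probe, sketched in C (link against -lOpenCL; if it prints no GPU device, Cycles won't see one either, which on Fedora's open-source stack is the usual situation):

#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    clGetPlatformIDs(8, platforms, &num_platforms);
    printf("OpenCL platforms: %u\n", num_platforms);

    for (cl_uint p = 0; p < num_platforms; ++p) {
        cl_device_id devices[8];
        cl_uint num_devices = 0;
        /* Ask each platform specifically for GPU devices. */
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, 8, devices, &num_devices) != CL_SUCCESS)
            continue;
        for (cl_uint d = 0; d < num_devices; ++d) {
            char name[256] = {0};
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(name), name, NULL);
            printf("  GPU device: %s\n", name);
        }
    }
    return 0;
}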

Related

Executable gives different FPS values in Yocto and Raspbian (everything looks the same in terms of configuration)

In a Yocto project, I built my application, which normally runs on Raspbian OS. When I run the executable under Yocto, I get half the FPS compared to the executable running on Raspbian OS.
The libraries I use:
OpenCV
Tensorflow-Lite, Flatbuffer, Libedgetpu
I use Libedgetpu1-std, Tensorflow-lite 2.4.0 on Raspbian and Libedgetpu 2.5.0, Tensorflow-lite 2.5.0 on Yocto.
Thinking that the problem was that the versions or configurations of the libraries were not the same, I followed these steps:
I ran the executable that I had built on Raspbian directly in the runtime of the Yocto project. (I set the required library versions to the same versions available in Raspbian so it would work at runtime.)
But I still got low FPS. Here is how I determine that I get half the FPS:
I am using TFLite's interpreter Invoke() function. I start a timer when entering the function and stop it when exiting, and I calculate FPS from that. I can exemplify it like this:
Timer_Begin();
m_tf_interpreter->Invoke();
Timer_End();
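A concrete version of that measurement, as a minimal sketch using std::chrono (Timer_Begin()/Timer_End() above are placeholders; m_tf_interpreter is the interpreter member from the question, and its setup is omitted):

#include <chrono>

auto t0 = std::chrono::steady_clock::now();
m_tf_interpreter->Invoke();                 // run one inference
auto t1 = std::chrono::steady_clock::now();
double seconds = std::chrono::duration<double>(t1 - t0).count();
double fps = 1.0 / seconds;                 // inference-only FPS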
Somehow I think the interpreter's Invoke() function is running slower on the Yocto side. I checked the kernel versions, CPU speeds, /boot/config.txt contents, and USB power consumption of Raspbian and Yocto. However, I couldn't catch anything anywhere.
Note: Using an RPi4 and a Coral TPU (plugged into USB 2.0).
We spoke with Paulo Neves. He recommended perf profiling, and I did it. In the perf profile I noticed that the CPU was running slowly, although the configured frequencies were the same.
When I checked the "scaling_governor", I saw that it was in "powersave" mode. The problem was solved when I switched from "powersave" to "performance" mode via the kernel's virtual sysfs files.
In addition, if you want to make the governor change permanent, you need to create a kernel config fragment.
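For reference, a sketch of both steps (the sysfs path is the standard cpufreq location; the fragment file name is arbitrary and gets added to the kernel recipe via SRC_URI):

# Switch the governor at runtime, for every core:
echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

# Yocto kernel config fragment, e.g. performance-governor.cfg:
CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE=y
# CONFIG_CPU_FREQ_DEFAULT_GOV_POWERSAVE is not set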

How do I enumerate and use OpenGL on a headless GPU?

Despite days of research, I can't seem to find a reliable way to programmatically create GL[ES] contexts on secondary GPUs. On my laptop, for example, I have the Intel GPU driving the panel, and a secondary NVIDIA card not connected to anything. optirun and primusrun let me run on the NVIDIA card, but then I can't detect the Intel GPU. I also don't want to require changing xorg.conf to add a dummy display.
I have tried a number of extensions, but none seem to work correctly:
glXEnumerateVideoDevicesNV returns 0 devices.
eglQueryDevicesEXT returns 0 devices.
eglGetPlatformDisplay only works with the main panel and gives me an Intel context.
I am fine getting my hands dirty, e.g. rolling my own loader, but I can't seem to find any documentation for where to start. I've looked at the source for optirun but it just seems to do redirection. Obviously something equivalent to Windows' IDXGIFactory::EnumAdapters would be ideal, but I'm fine with anything that works without additional system configuration.
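For reference, the EGL device-enumeration path being attempted looks roughly like this; a minimal sketch, assuming the installed driver actually exposes EGL_EXT_device_enumeration and EGL_EXT_platform_device (the zero-device result above suggests it does not, or that the wrong libEGL is being loaded):

#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <stdio.h>

int main(void) {
    /* Both entry points are extension functions and must be loaded manually. */
    PFNEGLQUERYDEVICESEXTPROC queryDevices =
        (PFNEGLQUERYDEVICESEXTPROC)eglGetProcAddress("eglQueryDevicesEXT");
    PFNEGLGETPLATFORMDISPLAYEXTPROC getPlatformDisplay =
        (PFNEGLGETPLATFORMDISPLAYEXTPROC)eglGetProcAddress("eglGetPlatformDisplayEXT");
    if (!queryDevices || !getPlatformDisplay)
        return 1;

    EGLDeviceEXT devices[8];
    EGLint count = 0;
    queryDevices(8, devices, &count);
    printf("EGL devices found: %d\n", count);

    for (EGLint i = 0; i < count; ++i) {
        /* A per-device display needs no X server or connected panel at all. */
        EGLDisplay dpy = getPlatformDisplay(EGL_PLATFORM_DEVICE_EXT, devices[i], NULL);
        EGLint major = 0, minor = 0;
        if (dpy != EGL_NO_DISPLAY && eglInitialize(dpy, &major, &minor))
            printf("device %d: EGL %d.%d\n", i, major, minor);
    }
    return 0;
}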

Tips to reduce OpenGL 3 & 4 frame-rate stuttering under Linux

In recent years I've developed several small games and applications for OpenGL 2 and ES.
I'm now trying to build a scene graph based on OpenGL 3+ for casual "3D" graphics on desktop systems. (Nothing as complex as the Unreal or Crytek engine in mind.)
I started my development on OS X 10.7 and was impressed by Apple's recent OpenGL 3.2 release, which achieves results equivalent to Windows systems.
The results on Linux, however, are a disappointment. Even the most basic animation stutters and destroys the impression of reality. The results did not differ between the windowing toolkits freeglut and GLFW. (Extensions are loaded with GLEW 1.7.)
I would like to mention that I'm talking about the new OpenGL core profile, not the old OpenGL 2 render path, which works fine under Linux but uses the CPU instead of the GPU for complex operations.
After watching professional demos like the "Unigine Heaven" demo, I suspect there is a general problem with using modern real-time 3D graphics on Linux.
Any suggestions to overcome this problem are very welcome.
UPDATE:
I'm using:
AMD Phenom II X6, Radeon HD 57xx with the latest proprietary drivers (11.8), and Unity (64-bit).
My render loop is the one from the toolkit documentation:
do {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    ...
} while (!glfwGetKey(GLFW_KEY_ESC) && glfwGetWindowParam(GLFW_OPENED));
I'm using VBOs, and all transformation stages are done with shaders. Animation timing is done with glfwGetTime(). The problem occurs in windowed and full-screen mode. I don't know whether a compositing manager interferes with full-screen applications, but it is also impossible to demand that the user disable it.
Update 2:
Typo: I'm using a HD57XX card.
GLX_Extension: http://snipt.org/xnKg
PCI Dump: 01:00.0 VGA compatible controller: ATI Technologies Inc Juniper [Radeon HD 5700 Series]
X Info: http://pastie.org/2507935
Update 3:
Disabling the compositing manager reduced, but did not completely remove, the stuttering.
(I replaced the standard window manager with "Ubuntu Classic without extensions".)
Once a second the animation freezes and ugly distortions appear:
(Image removed - Not allowed to post Images.)
This happens although vertical synchronisation is enabled in the driver and checked in the application.
Since you're running Linux, we need a bit of detailed information:
Which hardware do you use?
Only NVidia, AMD/ATI and Intel offer 3D acceleration so far.
Which drivers?
For NVidia and AMD/ATI there are proprietary drivers (nvidia-glx, fglrx) and open-source drivers (nouveau, radeon). For Intel there are only the open-source drivers.
Of all the open-source 3D drivers, the Intel drivers offer the best quality.
The open-source AMD/ATI drivers, "radeon", have reached an acceptable state, but still are not on par performance-wise.
For NVidia GPUs, the only drivers that make sense to use productively are the proprietary ones. The open-source "nouveau" drivers simply don't cut it yet.
Do you run a compositing window manager?
Compositing creates a whole bunch of synchronization and timing issues. Also, some of the OpenGL code you can find in compositing WMs brings tears to the eyes of a seasoned OpenGL coder, especially one with experience writing real-time 3D (game) engines.
KDE4 and GNOME3 use compositing by default, if available. The same holds for the Ubuntu Unity desktop shell. Even for some non-compositing WMs, the default scripts start xcompmgr for transparency and shadow effects.
And last but not least: How did you implement your rendering loop?
A mistake often found is using a timer to issue redisplay events at "regular" intervals. This is not the proper way: timer events can be delayed arbitrarily, and the standard timers are not very accurate by themselves either.
The proper way is to call the display function in a tight loop, measure the time between rendering iterations, and use that timing to advance the animation accordingly. A truly elegant method is to use one of the VSync extensions that reports the display refresh frequency and the refresh counter. That way, instead of using a timer, you are told exactly how much time passed between frames in display refresh periods.
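As a minimal sketch of that idea, using the GLFW 2 API from the question (advanceAnimation and drawScene are hypothetical placeholders for the scene update and draw code):

double tPrev = glfwGetTime();
while (!glfwGetKey(GLFW_KEY_ESC) && glfwGetWindowParam(GLFW_OPENED)) {
    double tNow = glfwGetTime();
    double dt = tNow - tPrev;   /* seconds elapsed since the last frame */
    tPrev = tNow;

    advanceAnimation(dt);       /* hypothetical: scale all motion by dt */
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawScene();                /* hypothetical draw call */
    glfwSwapBuffers();          /* blocks on the vertical retrace when the swap interval is 1 */
}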

How to enable VTK "direct" hardware rendering, X11 / Ubuntu 11.04 x86-64?

I notice that the IsDirect() method, which should say whether hardware rendering is on, returns false on my vtkRenderWindow*; link to the class doc:
http://www.vtk.org/doc/nightly/html/classvtkRenderWindow.html
How do I enable it? It seems like a build option in CMake should do it, but which one? My OpenGL libs are found OK. I notice there's a thread from a while back here: http://www.vtk.org/pipermail/vtkusers/2005-July/081034.html but I don't see any more recent info so far. Is it just not supported on Linux?
First of all, I doubt there is a CMake option to enable direct HW rendering.
Applications fall back to software rendering when hardware rendering is not possible (e.g. the GPU doesn't support some of the requested options).
Without seeing some code and getting details (which GPU), it is not possible to give a better answer.
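As a quick check, here is a minimal sketch of querying IsDirect() (the window must have rendered once so a GL context exists before the query is meaningful; comparing against glxinfo | grep "direct rendering" on the same machine is also useful):

#include <vtkRenderer.h>
#include <vtkRenderWindow.h>
#include <vtkSmartPointer.h>

int main() {
    vtkSmartPointer<vtkRenderer> renderer = vtkSmartPointer<vtkRenderer>::New();
    vtkSmartPointer<vtkRenderWindow> window = vtkSmartPointer<vtkRenderWindow>::New();
    window->AddRenderer(renderer);
    window->Render();             // create the GL context first
    // Nonzero means the context is a direct (hardware) rendering context.
    return window->IsDirect() ? 0 : 1;
}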

No touches after adding touchscreen driver to CE 6 in platform builder

I have added a TSHARC touchscreen driver to my Windows CE project, but the touch does not work. The dll is there, as is the touchscreen calibration executable. I have no visibility into which drivers are loaded and when. Any guidance would be appreciated.
You're going to have to do some debugging, and touchscreen drivers tend to be challenging because they get loaded into GWES and because the electrical characteristics of touch panels change dramatically based on size and manufacturer. It's very rare for a driver to just work right out of the box; you almost always have to adjust sample timings and the like based on panel characteristics, and that's best done with an oscilloscope.
Things to check:
Is the driver getting loaded at all? A RETAILMSG/DEBUGMSG would tell you that (see the sketch after this list).
Are you getting touch interrupts?
After a down interrupt, is your code getting back to a state where it can receive an up?
If you look at the timings from panel signals themselves, are you sampling when the signals are stable (i.e. you're not sampling too soon after the interrupt)?
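A minimal sketch of that first check (RETAILMSG/DEBUGMSG are the standard CE debug-output macros; TouchDriverInit is a hypothetical stand-in for whatever entry point your TSHARC driver actually exports):

// Hypothetical init entry point; the real export name depends on the driver.
BOOL TouchDriverInit(void)
{
    RETAILMSG(1, (TEXT("TSHARC: driver DLL loaded into GWES\r\n")));
    DEBUGMSG(1,  (TEXT("TSHARC: debug build initializing\r\n")));
    // ... real initialization goes here ...
    return TRUE;
}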
Turns out it was a conflict between the OHCI driver and another USB driver already installed.
