I used an NVIDIA card with the proprietary drivers installed on Debian Stretch.
But because I carry my hard drive between different machines (Intel and AMD, but always amd64), I decided to drop the NVIDIA card and roll OpenGL back to Mesa in order to get 3D acceleration on every machine. After a lot of struggling, I identified and recovered some files that the NVIDIA installer had badly overwritten (libGL.so, libdrm2.so).
I have now recovered the 64-bit libraries, so glxgears, the browser's WebGL support, gnuplot, etc. work well.
But the 32-bit applications (Wine, Steam) still don't work well; they always fall back to the "Mesa X11" renderer.
I used
$ LIBGL_DEBUG=verbose glxinfo | grep "OpenGL renderer string"
to identify which .so and DRI driver get selected. It prints the lookup process and the renderer:
libGL: OpenDriver: trying /usr/lib/x86_64-linux-gnu/dri/tls/r600_dri.so
libGL: OpenDriver: trying /usr/lib/x86_64-linux-gnu/dri/r600_dri.so
libGL: Using DRI2 for screen 0
OpenGL renderer string: Gallium 0.4 on AMD SUMO (DRM 2.50.0 / 4.12.0-0.bpo.1-amd64, LLVM 3.9.1)
To investigate the 32-bit libraries (we can't have both the 64-bit and 32-bit mesa-utils installed at the same time), I downloaded the 32-bit version:
$ apt-get download mesa-utils:i386
I unpacked it and tried to figure out why it fails to select the proper DRI driver:
LIBGL_DEBUG=verbose ./glxinfo | grep "OpenGL renderer string"
OpenGL renderer string: Mesa X11
The 64-bit glxinfo above prints its debugging information to stderr, so we can see how the selection happens.
With the 32-bit version I can't get any useful information, even if I specify the
LIBGL_DRIVERS_PATH=/usr/lib/i386-linux-gnu/dri/
environment variable, which points at the directory where Mesa should find the proper 32-bit .so:
$ file /usr/lib/i386-linux-gnu/dri/r600_dri.so
/usr/lib/i386-linux-gnu/dri/r600_dri.so: ELF 32-bit LSB shared object, Intel 80386, version 1 (GNU/Linux), dynamically linked, BuildID[sha1]=d5177f823f11ac8ea7412e517aa6684154de506e, stripped
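For reference, this is roughly how I combine the debug variables; the ldd line is just a sanity check of which 32-bit libGL.so.1 the unpacked binary resolves (paths as above):
$ LIBGL_DEBUG=verbose LIBGL_DRIVERS_PATH=/usr/lib/i386-linux-gnu/dri/ ./glxinfo 2>&1 | head -n 20
$ ldd ./glxinfo | grep -i libgl    # which 32-bit libGL.so.1 is picked up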
How can I get more information about the Mesa DRI selection?
I'm trying to get Vulkan to work but I get the following error:
vulkaninfo
ERROR: [Loader Message] Code 0 : /usr/lib/i386-linux-gnu/libvulkan_radeon.so: wrong ELF class: ELFCLASS32
ERROR: [Loader Message] Code 0 : /usr/lib/i386-linux-gnu/libvulkan_intel.so: wrong ELF class: ELFCLASS32
/build/vulkan-tools-KEbD_A/vulkan-tools-1.2.131.1+dfsg1/vulkaninfo/vulkaninfo.h:477: failed with ERROR_INITIALIZATION_FAILED
The following command shows my GPU:
lspci -nnk | grep -iA2 vga
00:02.0 VGA compatible controller [0300]: Intel Corporation Core Processor Integrated Graphics Controller [8086:0046] (rev 02)
Subsystem: Dell Core Processor Integrated Graphics Controller [1028:0410]
Kernel driver in use: i915
I added the following to my GRUB config and applied it,
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash amdgpu.si_support=1 radeon.si_support=0 amdgpu.cik_support=1 radeon.cik_support=0"
followed by a reboot. The result is the same error :(
What am I doing wrong? Can anyone help me?
Before I forget: I installed the Vulkan and Mesa Vulkan driver packages and am running Ubuntu 20.04 LTS on a Dell Latitude E4310. Please help, I just want to play some Windows (DirectX 11) games with Wine.
This kind of cryptic error message can happen because vulkaninfo doesn't find any supported GPU.
It is likely that your GPU is not supported by Vulkan (too old), so you won't be able to use DXVK (DirectX to Vulkan). You may still be able to run games without Vulkan by forcing Wine to use WineD3D (DirectX to OpenGL) instead. See Xaero_Vincent's answer in this reddit thread:
In Lutris you can easily disable DXVK as an option, and on Steam you can
force OpenGL-based WineD3D:
PROTON_USE_WINED3D=1 %command%
You'll notice though that DirectX 10/11 games will generally run
slower under OpenGL and some games will likely have graphics
artifacts, since DXVK is more mature and further developed.
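Outside of Lutris and Steam, a similar effect can usually be achieved in plain Wine by forcing the built-in d3d11/dxgi DLLs (i.e. WineD3D) instead of any DXVK-provided native ones; a rough sketch, where game.exe is a placeholder for the actual game binary:
WINEDLLOVERRIDES="d3d11,dxgi=b" wine game.exe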
I have an old x86 machine. I built a generic intel-core2-i32 machine configuration and installed Yocto on it. The display is very slow.
Ubuntu 12.04 runs smoothly on the same machine. The graphics driver is the same in both cases: gma500_gfx.
Then I looked into glxinfo and found the difference: Ubuntu 12.04 is using "Gallium on llvmpipe",
whereas Yocto is using:
OpenGL vendor string: VMware, Inc.
OpenGL renderer string: softpipe
OpenGL core profile version string: 3.3 (Core Profile) Mesa 19.0.3
OpenGL core profile shading language version string: 3.30
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
OpenGL version string: 3.1 Mesa 19.0.3
OpenGL shading language version string: 1.40
OpenGL context flags: (none)
OpenGL extensions:
OpenGL ES profile version string: OpenGL ES 3.1 Mesa 19.0.3
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.10
OpenGL ES profile extensions:
How can I change the OpenGL renderer to llvmpipe on Yocto, and will this help?
You should start by checking that the llvmpipe driver is actually enabled. The mesa recipe configuration has this suspicious bit (I believe swrast here really means llvmpipe):
GALLIUMDRIVERS = "swrast"
# gallium swrast was found to crash Xorg on startup in x32 qemu
GALLIUMDRIVERS_x86-x32 = ""
Check the configuration logs or the resulting packages: if llvmpipe is not being built, try removing the last line above and see if that improves things.
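A quick way to see what your build actually selected (a sketch, assuming a sourced OE build environment; the mesa-megadriver package name may differ between releases):
$ bitbake -e mesa | grep -E '^(GALLIUMDRIVERS|PACKAGECONFIG)="'
$ oe-pkgdata-util list-pkg-files mesa-megadriver | grep -i swrast   # the software-rasterizer DRI driver, if it was built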
Can someone explain the difference between the three architectures (i386, x86-64, and x32)?
Actually, when I built a 64-bit application on Linux, I got a linker error saying:
skipping incompatible library.a when searching for library.a
Then I used objdump -f on that library and I got the below output:
a.o: file format elf32-x86-64
architecture: i386:x64-32, flags 0x00000011:
HAS_RELOC, HAS_SYMS
start address 0x00000000
Does it mean the library is 32-bit? Is that the reason I am getting the linker error?
There are 3 common ABIs usable on standard Intel-compatible machines (not Itanium).
The classic 32-bit architecture, often called "x86" for short, which has triples like i[3-6]86-linux-gnu. Registers and pointers are both 32 bits.
The 64-bit extension originally from AMD, often called "amd64" for short, which has GNU triple of x86_64-linux-gnu. Registers and pointers are both 64 bits.
The new "x32" ABI, with a triple of x86_64-linux-gnux32. Registers are 64 bits, but pointers are only 32 bits, saving a lot of memory in pointer-heavy workflows. It also ensures all the other 64-bit only processor features are available.
Each of the above has its own system call interface, its own ld.so, its own complete set of libraries, etc. But it is possible to run all three on the same kernel.
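As a concrete illustration, the same trivial program can be built for all three and inspected with file (assuming gcc with multilib and x32 support is installed):
% echo 'int main(void){return 0;}' > t.c
% gcc -m32 -o t32 t.c; gcc -m64 -o t64 t.c; gcc -mx32 -o tx32 t.c
% file t32 t64 tx32   # reports Intel 80386, x86-64, and 32-bit x86-64 (x32) respectively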
On Linux, their loaders are:
% objdump -f /lib/ld-linux.so.2 /lib64/ld-linux-x86-64.so.2 /libx32/ld-linux-x32.so.2
/lib/ld-linux.so.2: file format elf32-i386
architecture: i386, flags 0x00000150:
HAS_SYMS, DYNAMIC, D_PAGED
start address 0x00000a90
/lib64/ld-linux-x86-64.so.2: file format elf64-x86-64
architecture: i386:x86-64, flags 0x00000150:
HAS_SYMS, DYNAMIC, D_PAGED
start address 0x0000000000000c90
/libx32/ld-linux-x32.so.2: file format elf32-x86-64
architecture: i386:x64-32, flags 0x00000150:
HAS_SYMS, DYNAMIC, D_PAGED
start address 0x00000960
Now, if you're getting the message about "skipping incompatible library", that means something is messed up with your configuration. Make sure you don't have bad variables in the environment or passed on the command line, or files installed outside of your package manager's control.
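A few quick checks when diagnosing that message (library.a is the archive from the question; the compiler's default target and stray path variables are the usual suspects):
% objdump -f library.a | grep -m1 architecture   # which ABI the archive members target
% gcc -dumpmachine                               # the default target of the compiler doing the link
% echo "$LDFLAGS" "$LIBRARY_PATH"                # stray flags or library paths pulling in the wrong arch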
Beyond the usual full 64-bit ABI and the good old 32-bit ABI, there is a special ABI (inspired by the SGI n32 environment) where pointers are 32-bit (thus they are 32-bit apps), but which is designed to run on a 64-bit host and has full access to all the x64 goodies:
native x64 registers and math
more registers
SSE2/3/4, AVX1/2/...
Full 4 GB address space on a 64-bit host
It is called the x32 ABI: https://en.wikipedia.org/wiki/X32_ABI
UPDATE
On an Ubuntu system I had to install two packages (with their dependencies) to get x32 working:
> sudo apt install gcc-multilib
> sudo apt install libx32stdc++-5-dev
Then compiling simple C++ code with g++ -mx32 hellow.cpp works, producing an x32 executable:
> file a.out
./a.out: ELF 32-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /libx32/ld-linux-x32.so.2, for GNU/Linux 3.4.0
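Note that running the resulting binary also needs x32 system call support in the kernel (CONFIG_X86_X32); on distributions that ship the kernel config in /boot, a quick check is:
> grep CONFIG_X86_X32 /boot/config-$(uname -r)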
I am a beginner learning Linux kernel module development. I am following a tutorial that says to recompile my kernel to enable various debugging features, like forced module unloading, etc. Is it okay if I do that? Does it affect my pre-built kernel? In what cases would I try to insert a module into a running kernel and the kernel won't allow me to do so?
It is perfectly okay to compile and install a kernel for kernel module development. If you are on Ubuntu, you can follow these steps to make sure you are using the same kernel sources as the kernel your machine boots.
Step 1. Find out which kernel is being booted from the /boot/grub/grub.cfg file. Look for the 'linux ' entry in the boot option that you select while booting up.
Example excerpt : linux /boot/vmlinuz-3.13.0-24-generic root=UUID=e377a464-92db-4c07-86a9-b151800630c0 ro quiet splash $vt_handoff
Step 2. Look for the name of the package with the same version using the following command.
dpkg -l | grep linux | grep 3.13.0-24-generic
Example output:
$ dpkg -l | grep linux | grep 3.13.0-24-generic
ii linux-headers-3.13.0-24-generic 3.13.0-24.46 amd64 Linux kernel headers for version 3.13.0 on 64 bit x86 SMP
ii linux-image-3.13.0-24-generic 3.13.0-24.46 amd64 Linux kernel image for version 3.13.0 on 64 bit x86 SMP
ii linux-image-extra-3.13.0-24-generic 3.13.0-24.46 amd64 Linux kernel extra modules for version 3.13.0 on 64 bit x86 SMP
Step 3. Download the sources of the package "linux-headers-3.13.0-24-generic" to get the same kernel that is used on your PC.
$ apt-get source linux-headers-3.13.0-24-generic
Step 4. Use the config file available in the /boot/ folder as the config for compiling this kernel source.
Example :
$ ls /boot/config-3.13.0-24-generic (Notice the same version used in this file)
Step 5. Turn on the debugging options you need in this config to do your testing.
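As a rough sketch of what steps 4 and 5 can look like in practice (run inside the unpacked source tree; the menuconfig path is approximate):
$ cp /boot/config-3.13.0-24-generic .config
$ make olddefconfig      # start from the running kernel's config, accept defaults for new options
$ make menuconfig        # e.g. "Enable loadable module support" -> "Forced module unloading"
$ make -j"$(nproc)" && sudo make modules_install install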
Recompiling the kernel helps us learn how the kernel works.
The latest kernel patches can be applied by compiling and installing the kernel.
We can enable debug flags through compilation.
We can remove code that is not needed.
It helps you add your own kernel code and test it.
It is easy to recompile and install the Linux kernel, but it takes more time if you compile on a slow computer or VM.
For my current project, I need to use CUDA and the Intel C/C++ compilers in the same project. I rely on the SSYEV implementation of Intel's MKL, which takes roughly 10 times as long when using GCC+MKL instead of ICC+MKL (~3 ms with GCC, ~300 µs with ICC).
icc -v
icc version 12.1.5
NVIDIA states that Intel ICC 12.1 is supported (http://docs.nvidia.com/cuda/cuda-samples/index.html#linux-platforms-supported), but even after downgrading to Intel ICC 12.1.5 (installed as part of Intel Composer XE 2011 SP1 Update 3), I am still running into this issue:
nvcc -ccbin=icc src/test.cu -o test
/usr/local/cuda-5.5/bin//..//include/host_config.h(72): catastrophic error: #error directive: -- unsupported ICC configuration! Only ICC 12.1 on Linux x86_64 is supported!
#error -- unsupported ICC configuration! Only ICC 12.1 on Linux x86_64 is supported!
Unfortunately, it seems as if NVIDIA is merely tolerating the use of ICC; I would hardly call it "support", given the lack of information NVIDIA provides on using ICC together with CUDA.
I am running Ubuntu 12.10 x86_64 and CUDA 5.5. Telling icc to mimic the behavior of the stock GCC 4.7.2 using the -Xcompiler -gcc-version=470 option did not help either. Searching the web, I was only able to find threads from the NVIDIA forums dealing with CUDA 3.x and Intel ICC 11.1, and I was unable to transfer that information to current CUDA releases.
I would be very grateful for any suggestion on how to solve this issue :-)
Referring to the file referenced in the error you received, it's specifically looking for an ICC compiler with a particular build date:
#if defined(__ICC)
#if !(__INTEL_COMPILER == 9999 && __INTEL_COMPILER_BUILD_DATE == 20110811) || !defined(__GNUC__) || !defined(__LP64__)
#error -- unsupported ICC configuration! Only ICC 12.1 on Linux x86_64 is supported!
#endif /* build date / GNUC / LP64 check */
#endif /* __ICC */
The solution would be to use the Intel compiler that actually matches that specified build date. As indicated, ICC 12.1, i.e. version 12.1.0.233, instead of ICC 12.1.5, should do the trick.
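One quick way to check what your installed icc actually defines (assuming it accepts the GCC-style -E/-dM preprocessor flags, which Linux versions of icc do):
echo | icc -E -dM - | grep -E '__INTEL_COMPILER(_BUILD_DATE)?'
If __INTEL_COMPILER_BUILD_DATE does not come out as 20110811, the host_config.h check above will reject the compiler.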
The narrow focus is at least partly due to a test limitation. In this case, a particular ICC variant was tested with the CUDA toolkit before it was released, and so that host config check has this test in it.
I ran into this problem when compiling madagascar-1.5 with icc 2013 and ifort 2013. I then resolved it by downloading ICC version 2011 Update 7. Based on the __INTEL_COMPILER_BUILD_DATE of 20110811, I could download the correct one: the icc whose build date matches 20110811 is the right one.