Covid has me working remotely, and I'm pretty stuck here.
I've had no problem using X11 forwarding for UIs and even for 3D OpenSceneGraph visualizations, but a new application I need to run gives the following error on the remote computer:
X Error of failed request: GLXUnsupportedPrivateRequest
Major opcode of failed request: 149 (GLX)
Minor opcode of failed request: 16 (X_GLXVendorPrivate)
Serial number of failed request: 35
Current serial number in output stream: 38
The remote computer is a Linux machine running Ubuntu 16.04; the local computer is a Mac using XQuartz as the X server.
From my research so far, it seems the local computer does not support a certain GLX request. I'm wondering if this is simply a GLX feature missing from XQuartz that can't be worked around.
I tried to investigate whether GLX itself was the problem. If I run 'glxdemo' I see the yellow square with no problem. Possibly related: when I run 'glxgears', I see gears that are NOT moving, despite no errors and the console reporting non-zero fps.
Some other interesting information: here is the result of running 'glxinfo' on the remote computer. Is this related to direct rendering not being on? I read this has something to do with using libGL.so from NVIDIA, which doesn't support remote GLX. Is that the right path to go down? Something else that is weird: the 'OpenGL renderer string' references the Intel chip, not the NVIDIA graphics card. Why would that be the case, and is it related?
name of display: localhost:10.0
display: localhost:10 screen: 0
direct rendering: No (If you want to find out why, try setting LIBGL_DEBUG=verbose)
server glx vendor string: SGI
server glx version string: 1.4
server glx extensions:
GLX_ARB_multisample, GLX_EXT_import_context, GLX_EXT_visual_info,
GLX_EXT_visual_rating, GLX_OML_swap_method, GLX_SGIS_multisample,
GLX_SGIX_fbconfig
client glx vendor string: NVIDIA Corporation
client glx version string: 1.4
client glx extensions:
GLX_ARB_context_flush_control, GLX_ARB_create_context,
GLX_ARB_create_context_no_error, GLX_ARB_create_context_profile,
GLX_ARB_create_context_robustness, GLX_ARB_fbconfig_float,
GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_buffer_age,
GLX_EXT_create_context_es2_profile, GLX_EXT_create_context_es_profile,
GLX_EXT_fbconfig_packed_float, GLX_EXT_framebuffer_sRGB,
GLX_EXT_import_context, GLX_EXT_stereo_tree, GLX_EXT_swap_control,
GLX_EXT_swap_control_tear, GLX_EXT_texture_from_pixmap,
GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_NV_copy_buffer,
GLX_NV_copy_image, GLX_NV_delay_before_swap, GLX_NV_float_buffer,
GLX_NV_multisample_coverage, GLX_NV_present_video,
GLX_NV_robustness_video_memory_purge, GLX_NV_swap_group,
GLX_NV_video_capture, GLX_NV_video_out, GLX_SGIX_fbconfig,
GLX_SGIX_pbuffer, GLX_SGI_swap_control, GLX_SGI_video_sync
GLX version: 1.4
GLX extensions:
GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_import_context,
GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_SGIX_fbconfig
OpenGL vendor string: Intel Inc.
OpenGL renderer string: Intel(R) Iris(TM) Plus Graphics 640
OpenGL version string: 1.4 (2.1 INTEL-10.25.24)
OpenGL extensions:
GL_ARB_depth_texture, GL_ARB_draw_buffers, GL_ARB_fragment_program,
GL_ARB_fragment_program_shadow, GL_ARB_multisample, GL_ARB_multitexture,
GL_ARB_occlusion_query, GL_ARB_point_parameters, GL_ARB_point_sprite,
GL_ARB_shadow, GL_ARB_texture_border_clamp, GL_ARB_texture_compression,
GL_ARB_texture_cube_map, GL_ARB_texture_env_add,
GL_ARB_texture_env_combine, GL_ARB_texture_env_crossbar,
GL_ARB_texture_env_dot3, GL_ARB_texture_mirrored_repeat,
GL_ARB_texture_non_power_of_two, GL_ARB_transpose_matrix,
GL_ARB_vertex_program, GL_ARB_window_pos, GL_ATI_texture_mirror_once,
GL_EXT_abgr, GL_EXT_bgra, GL_EXT_blend_color,
GL_EXT_blend_equation_separate, GL_EXT_blend_func_separate,
GL_EXT_blend_minmax, GL_EXT_blend_subtract, GL_EXT_draw_range_elements,
GL_EXT_fog_coord, GL_EXT_framebuffer_object, GL_EXT_multi_draw_arrays,
GL_EXT_rescale_normal, GL_EXT_secondary_color,
GL_EXT_separate_specular_color, GL_EXT_shadow_funcs,
GL_EXT_stencil_two_side, GL_EXT_stencil_wrap,
GL_EXT_texture_compression_dxt1, GL_EXT_texture_compression_s3tc,
GL_EXT_texture_env_add, GL_EXT_texture_filter_anisotropic,
GL_EXT_texture_lod_bias, GL_NV_blend_square, GL_NV_depth_clamp,
GL_NV_fog_distance, GL_NV_light_max_exponent, GL_NV_texgen_reflection,
GL_SGIS_generate_mipmap, GL_SGIS_texture_lod
Any help would be super appreciated. Working remotely is kind of tough but I'm forced to do it.
I ran into the same issue with a different program, after installing an Nvidia GPU in my server in order to run CUDA applications. It seems that after doing so, some GL applications (but not glxinfo or glxgears) load Nvidia's libGLX.so instead of the Mesa indirect one that should be used for GLX forwarding over ssh.
This was solved by entering the following on the remote machine:
sudo update-glx --config glx
And then selecting the Mesa driver.
I do think this disables the Nvidia GLX driver system-wide, which may not be desirable if the graphics card is used for local video output.
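To see which client-side GL library a given program actually resolves before switching drivers system-wide, a quick check like this can help (a sketch; `ldd` only shows direct linkage, so a library pulled in via dlopen won't appear):

```shell
# Print the libGL a binary is directly linked against, if any.
check_libgl() {
    ldd "$1" 2>/dev/null | grep -i 'libGL' || echo "no libGL linked (may be dlopen'ed)"
}

check_libgl /usr/bin/glxgears
```

Running the failing application once with `LIBGL_DEBUG=verbose` set, as glxinfo itself suggests, will also report which driver the client side picks.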
I'm running Linux on an Atlas-SoC Kit/DE0-Nano-SoC Kit.
Through u-boot I've placed the kernel at a different location.
mcboot=setenv bootargs console=ttyS0,115200 root=${mmcroot} rw rootwait;bootz ${loadaddr} - ${fdtaddr}
mmcload=mmc rescan;${mmcloadcmd} mmc 0:${mmcloadpart} ${loadaddr} ${bootimage};${mmcloadcmd} mmc 0:${mmcloadpart} ${fdtaddr} ${fdtimage}
mmcloadcmd=fatload
mmcloadpart=1
mmcroot=/dev/mmcblk0p2 mem=744M memmap=744M$256M
The last line requests 744M starting at a 256M offset.
Now my question is: how can I verify that this actually happened? I'm reading mixed solutions online, some using device-tree and some using memmap configurations, and I want to make sure before I continue on to writing the device driver section.
My /proc/iomem:
root@cyclone5:~# cat /proc/iomem
00000000-2e7fffff : System RAM
00008000-0077656f : Kernel code
007d6000-00859433 : Kernel data
ff702000-ff703fff : /soc/ethernet#ff702000
ff704000-ff704fff : /soc/dwmmc0#ff704000
ffb40000-ffb4fffe : /soc/usb#ffb40000
ffc00000-ffc00fff : c_can_platform
ffc02000-ffc0201f : serial
ffc04000-ffc04fff : /soc/i2c#ffc04000
ffc05000-ffc05fff : /soc/i2c#ffc05000
ffd02000-ffd02fff : /soc/wd#ffd02000
ffe01000-ffe01fff : /soc/amba/pdma#ffe01000
fff01000-fff01fff : fff01000.spi
ffff0000-ffffffff : /soc/sram#ffff0000
Any detailed explanation would be highly appreciated.
Regards, Auke
Both the device tree and the memmap kernel parameter help in reserving memory so that it will not be used by the Linux kernel. Take a look at the Linux device tree memory documentation and the kernel parameters documentation.
You can use an emulator or a high-end debugger, for example Trace32, to see the memory contents.
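One cheap sanity check is already in the /proc/iomem listing above: System RAM ends at 0x2e7fffff, and 0x2e800000 bytes is exactly 744 MiB, so the mem=744M limit did take effect. What the listing does not show is a hole at the 256M offset, so it cannot by itself confirm the memmap reservation (it may be worth ruling out that the bare `$` in `memmap=744M$256M` was consumed by U-Boot variable expansion when the environment was set). The arithmetic, as a sketch:

```shell
# End of "System RAM" from the /proc/iomem listing in the question.
end_hex=0x2e7fffff

# Region size in MiB: (end + 1) / 1 MiB.
mib=$(( (end_hex + 1) / (1024 * 1024) ))
echo "System RAM ends at ${mib} MiB"   # 744, matching mem=744M
```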
I use a board running Linux 3.0.8.
I follow the instructions from Documentation/input/rotary-encoder.txt and call the following in my driver module:
platform_device_register(&rotary_encoder_device);
The kernel prints the following when boots up:
[5.629682] input: rotary-encoder as /devices/platform/rotary-encoder.0/input/input1
and I can get this:
# ls /sys/devices/platform/rotary-encoder.0/input/input1/
capabilities id name properties uevent
device modalias phys subsystem uniq
However there is no device node /dev/input/eventN created.
How can I fix this? Thanks for any help.
I dual-booted Arch Linux with Windows 8.1. After installation, while booting for the first time, this error occurred.
Here is the error I found.
[0.063411] Ignoring BGRT: Invalid status 0 (expected 1)
starting version 218
Arch_Linux: clean, 70622/18677760 files, 1643393/74703025 blocks
[ 15.759913] nouveau E[ PIBUST][0000:01:00.0] HUBO: 0xffffffff (0x1a
[ 15.759952] nouveau E[ PIBUST][0000:01:00.0] GPCO: 0x00000001 (0x1a
[ 15.823494] nouveau E[ PBUS][0000:01:00.0] MMIO: 0x00000000 FAULT a
[ 15.824050] nouveau E[ DRM]Pointer to TMDS table invalid
[ 15.824072] nouveau E[ DRM]Pointer to flat panel table invalid
The following steps helped me resolve a similar issue.
Add the boot parameter 'nomodeset' in GRUB at boot. To do this, press e on the Linux entry in the GRUB menu, then add 'nomodeset' before 'quiet splash'.
Press Ctrl+X or F10 to continue booting.
This keeps the kernel's graphics driver from taking over the display (kernel mode setting) during boot.
If this doesn't work, remove the 'quiet splash' so you can inspect the messages at boot.
In my case, one of my drives had errors that needed fsck to fix. Fixing the drive errors let me boot without issues.
Hope this helps someone.
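If 'nomodeset' turns out to be the fix, it can be made permanent instead of retyping it at every boot. A sketch, assuming the standard GRUB config location on Arch:

```shell
# /etc/default/grub -- add nomodeset to the default kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="nomodeset"
```

After editing, regenerate the config with `grub-mkconfig -o /boot/grub/grub.cfg`.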
I am writing a USB driver (for a gamepad) on Linux, and when I plug the device in, it loads usbhid. How can I make it load my driver (gp_driver) instead?
I did the unbind usbhid and bind to my driver trick, but I don't want to do it every single time.
Should I have my driver already loaded?
Should I code something in my driver?
I have the vendor and product IDs in my driver.
Thanks
You will want to create a udev rule for your device, which can take care of creating your device file, setting permissions on the device file, and loading relevant drivers.
Resources
http://reactivated.net/writing_udev_rules.html
http://www.redhat.com/magazine/002dec04/features/udev/
Example
Taken from: http://plugcomputer.org/plugwiki/index.php/Load_Serial_Drivers_Automatically_Using_udev
# if no driver has claimed the interface yet, load ftdi_sio
ACTION=="add", SUBSYSTEM=="usb", ENV{DEVTYPE}=="usb_interface", \
ATTRS{idVendor}=="9e88", ATTRS{idProduct}=="9e8f", \
DRIVER=="", \
RUN+="/sbin/modprobe -b ftdi_sio"
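Adapted to the gamepad in the question, the same pattern would look roughly like this (the 1234/5678 IDs are placeholders; substitute the real ones from lsusb, and gp_driver is the module name from the question):

```
# /etc/udev/rules.d/99-gp_driver.rules
# if no driver has claimed the interface yet, load gp_driver
ACTION=="add", SUBSYSTEM=="usb", ENV{DEVTYPE}=="usb_interface", \
ATTRS{idVendor}=="1234", ATTRS{idProduct}=="5678", \
DRIVER=="", \
RUN+="/sbin/modprobe -b gp_driver"
```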
Binding a device (in your case an HID device) to a specific driver is not a trivial task, and the approach depends on the kernel version you are using:
Kernel < 4.16
Before kernel 4.16 you had to edit and recompile drivers/hid/hid-core.c, since this file contained a list of devices that should not be handled by hid-generic (the hid_have_special_driver struct). You can see an example of how this was done here:
https://github.com/atar-axis/xpadneo/blob/master/misc/kernel_patches/0002-hid_generic_claims_devices.patch
Kernel >= 4.16
Starting with kernel 4.16 the list was removed; hid-generic now checks whether any of the other drivers wants the device, and if so, hid-generic steps back and does not claim it.
The related patch is: https://github.com/torvalds/linux/commit/e04a0442d33b8cf183bba38646447b891bb02123#diff-88d50bd989bbdf3bbd2f3c5dcd4edcb9
Workaround (works always)
You can always use a udev rule (e.g. /etc/udev/rules.d/99-xpadneo.rules), either before 4.16 or whenever there is more than one specialized driver on your system:
# unbind the device from hid-generic on kernel < 4.16
# and bind it to the specialized driver (xpadneo in this case)
ACTION=="add", KERNEL=="0005:045E:02FD.*|0005:045E:02E0.*", SUBSYSTEM=="hid",\
RUN+="/bin/bash -c 'echo $kernel > /sys/bus/hid/drivers/hid-generic/unbind'", \
RUN+="/bin/bash -c 'echo $kernel > /sys/bus/hid/drivers/xpadneo/bind'"
# unbind the device from another specialized driver which came first
# and bind it to xpadneo
ACTION=="add", KERNEL=="0005:045E:02FD.*|0005:045E:02E0.*", SUBSYSTEM=="hid",\
RUN+="/bin/bash -c 'echo $kernel > /sys/bus/hid/drivers/microsoft/unbind'", \
RUN+="/bin/bash -c 'echo $kernel > /sys/bus/hid/drivers/xpadneo/bind'"
Notes
Instead of bash you may want to use sh.
I don't remember when bind and unbind were added, but it was quite a while ago.
You can read a bit more about loading, binding and registration of (HID-) drivers here:
http://0x0001.de/linux-driver-loading-registration-and-binding/
https://github.com/atar-axis/xpadneo/issues/33
According to this Linux Journal article, you need to have:
A pointer to the module owner of your driver
The name of the USB driver
A list of the USB IDs this driver should provide
A probe() function
A disconnect() function
Now, I suspect, because it is loading the standard driver, you may be missing item 3 or 4, or maybe you haven't registered the driver with the USB subsystem at all.
I've never written a USB driver before (only hacked char/mem.c), but this info might come in handy.
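Those pieces fit together roughly like this (a sketch, not a tested module; gp_driver, the 0x1234/0x5678 IDs, and the function bodies are placeholders showing the shape of the registration; numbers in the comments refer to the list above):

```c
#include <linux/module.h>
#include <linux/usb.h>

/* 3: IDs are placeholders -- use your gamepad's vendor/product IDs */
static const struct usb_device_id gp_id_table[] = {
    { USB_DEVICE(0x1234, 0x5678) },
    { }                               /* terminating entry */
};
MODULE_DEVICE_TABLE(usb, gp_id_table); /* lets udev/modprobe autoload the module */

/* 4: called when a matching device is plugged in */
static int gp_probe(struct usb_interface *intf, const struct usb_device_id *id)
{
    dev_info(&intf->dev, "gp_driver: device attached\n");
    return 0;                         /* 0 = claim the interface */
}

/* 5: called when the device goes away */
static void gp_disconnect(struct usb_interface *intf)
{
    dev_info(&intf->dev, "gp_driver: device detached\n");
}

static struct usb_driver gp_driver = {
    .name       = "gp_driver",        /* 2: driver name */
    .id_table   = gp_id_table,
    .probe      = gp_probe,
    .disconnect = gp_disconnect,
};

/* registers with the USB subsystem; the owner (1) is filled in for you */
module_usb_driver(gp_driver);

MODULE_LICENSE("GPL");
```

Note that for an HID-class gamepad, usbhid may still claim the interface first; that is where the udev unbind/bind rules from the other answer come in.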