I have a basic Vulkan application, "Hello triangle", derived from this tutorial. In debug mode I use two validation layers:
VK_LAYER_KHRONOS_validation and VK_LAYER_LUNARG_monitor
On Windows I have no problem: both layers load and I can see the FPS counter in the title bar. But I am seeing different behaviour on Linux (Ubuntu 18.04).
Only VK_LAYER_LUNARG_monitor loads, and no FPS counter appears in the title bar.
VK_LAYER_KHRONOS_validation fails to load with the error:
ERROR: libVkLayer_khronos_validation.so: cannot open shared object file: No such file or directory
I performed the Linux setup according to the guidelines in this LunarG doc.
Put the whole LunarG SDK in an arbitrary location on my Linux filesystem.
Sourced setup-env.sh from my .profile.
Inside the setup-env.sh script I also added VK_LOADER_DEBUG=all in order to see detailed output from the Vulkan loader.
Verified that all the variables set by the script point to the correct directories in the SDK. They do.
setup-env.sh:
VULKAN_SDK="$(dirname "$(readlink -f "${BASH_SOURCE:-$_}" )" )/x86_64"
export VULKAN_SDK
PATH="$VULKAN_SDK/bin:$PATH"
export PATH
LD_LIBRARY_PATH="$VULKAN_SDK/lib:${LD_LIBRARY_PATH:-}"
export LD_LIBRARY_PATH
VK_LAYER_PATH="$VULKAN_SDK/etc/vulkan/explicit_layer.d"
export VK_LAYER_PATH
export VK_LOADER_DEBUG=all
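For reference, this is how I sanity-check the environment after sourcing the script (a sketch; the SDK path is the one that appears in the debug output below, and vulkaninfo only proves the loader found the layer manifests, not that the .so files actually load):
source /home/platform-thirdparty/vulkan-sdk/1.1.121.1/setup-env.sh
echo "$VK_LAYER_PATH"       # should be .../x86_64/etc/vulkan/explicit_layer.d
echo "$LD_LIBRARY_PATH"     # should start with .../x86_64/lib
vulkaninfo | grep VK_LAYER  # lists the layers whose manifests the loader found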
Here is the chunk of debug output that shows the failure to load the layers:
DEBUG: Loading layer library /home/platform-thirdparty/vulkan-sdk/1.1.121.1/x86_64/lib/libVkLayer_monitor.so
INFO: Inserted device layer VK_LAYER_LUNARG_monitor (/home/platform-thirdparty/vulkan-sdk/1.1.121.1/x86_64/lib/libVkLayer_monitor.so)
Message: libVkLayer_khronos_validation.so: cannot open shared object file: No such file or directory
ERROR: libVkLayer_khronos_validation.so: cannot open shared object file: No such file or directory
Even though it says
Inserted device layer VK_LAYER_LUNARG_monitor
the monitor doesn't work.
If I replace VK_LAYER_KHRONOS_validation with VK_LAYER_LUNARG_standard_validation, I get the same load failure.
The loader debug output also shows that the loader found the manifest files for all the layers in the SDK.
I have no clue what else could be the problem.
My system setup:
Ubuntu 18.04 64bit, Nvidia RTX 3000
Nvidia Drivers: 430.0
VulkanSDK(LunarG): 1.1.121.1
The answer comes from user1824407 in this SO question.
It appears that in recent Ubuntu releases (at least in the 18.04 I use), setting LD_LIBRARY_PATH in config files like
.profile or /etc/environment or .bashrc
doesn't take any effect. And indeed, I tried adding the lib path to all of those files; the libraries still didn't load, even though echo $LD_LIBRARY_PATH printed the path I had set.
Only when I added the path containing the Vulkan validation layer libraries to /etc/ld.so.conf.d/devlibs.conf (where
devlibs.conf
is an arbitrary name I gave the file) did both VK_LAYER_LUNARG_monitor and VK_LAYER_KHRONOS_validation get loaded by Vulkan.
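In concrete terms, the fix amounts to roughly this (a sketch; the lib path is the one from the debug output above, and devlibs.conf is the name I chose):
# put the SDK's layer library directory on the dynamic linker's search path
echo "/home/platform-thirdparty/vulkan-sdk/1.1.121.1/x86_64/lib" | sudo tee /etc/ld.so.conf.d/devlibs.conf
# rebuild the linker cache so libVkLayer_khronos_validation.so can be found
sudo ldconfig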
I am using a Linux HPC environment to run C++ models. The models require certain modules to be loaded, which I have specified in a .profile file, so the modules are always loaded when I log into the HPC through the terminal.
However, I would like to run the models when I access the HPC through a visual RStudio session via the OnDemand.HPC software, which runs R as a batch job. I have heard that in this case R operates in a non-interactive session, which means I need a .bashrc file to specify the modules that require loading. So I have created both files and left them in my home directory.
Here is my .profile:
module load nano
module load use.own
module load intel intelmpi dependencies cmake
and my .bashrc:
module load nano
module load use.own
module load intel intelmpi dependencies cmake
if [ -f ~/.profile ]; then
. ~/.profile
fi
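To narrow down which file the batch session actually reads, one thing I can try (just a debugging sketch; the marker file names are made up) is to drop a marker from each startup file:
# at the top of ~/.profile
touch ~/profile_was_read
# at the top of ~/.bashrc
touch ~/bashrc_was_read
Whichever marker appears after launching the RStudio session tells me which file, if any, the non-interactive shell sourced.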
Unfortunately, the modules are still not loaded when opening a batch-based visual RStudio session through OnDemand.HPC. I am a Linux rookie, so any help or tips would be much appreciated.
Thank you for considering this issue.
I am seeing strange behavior on an embedded Linux system. It is created from an image file that has been used on many systems for about a year without any issues, but a new system that I installed just now fails to work.
I have an app that is started by one of the boot scripts, and is dependent on a library foo.so in a special path.
So the script sets LD_LIBRARY_PATH to foopath.
LD_LIBRARY_PATH=foopath \
foo.app
But after boot I can see that the app was never started, and if I try to start it manually it complains that it can't find the library.
The strange thing is that I can get it to work by setting the same path again, simply copying the path from the clipboard, like this:
$ echo $LD_LIBRARY_PATH
foopath
$ export LD_LIBRARY_PATH=foopath
After this my app finds the library and runs correctly.
But if I reboot the system the problem is of course back again.
What could be wrong, and what can I do to solve the issue?
As seen in the comments, I got it to work by changing my script like this, but I still can't understand why it works on some systems and not others.
export LD_LIBRARY_PATH=foopath
foo.app
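For anyone comparing the two forms, my understanding of the difference (a sketch, with foopath standing in for the real directory as above):
# per-command assignment: LD_LIBRARY_PATH is set only for the foo.app process
LD_LIBRARY_PATH=foopath foo.app

# exported variable: foo.app and every process started later by this
# script (and their children) inherit LD_LIBRARY_PATH
export LD_LIBRARY_PATH=foopath
foo.app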
I use a library (cuBLAS) in my CUDA code, which I'm writing on Linux. I can compile the code successfully using nvcc, but when I run it I get this error:
error while loading shared libraries: libcublas.so.7.5: cannot open shared object file: No such file or directory
I found this link as a solution, which suggests running some commands to fix the issue. One solution based on that link is running the following command:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib
The system that the GPU is in, which I use to run my code, is shared between me and other people, and I am not a sudo user on it. I am wondering whether running this command is safe and does not affect other settings of the system.
Can anyone please help me with that?
Thank you so much.
I am wondering if running this command is safe and does not affect other settings of the system
Yes, you'll be fine.
This command:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib
only affects the process you execute it in.
It will not affect any other user's process, or any other process of yours.
Furthermore, if you log out and log in again, you will have to execute that command again. It is not "sticky".
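If you want it set automatically for your own logins (still without sudo), one option is to append the line to your own ~/.bashrc; this only affects your user (the path is the same one from the command above):
echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib' >> ~/.bashrc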
Hi, I am using an industrial machine-vision camera (Basler) in a Qt-based project. I work under Linux Ubuntu 12.04 64-bit with Qt 4.8.
I have set $LD_LIBRARY_PATH and all the required environment variables to find the camera libraries, but Qt Creator still gives an error when it tries to run my application. It gives the following error:
No absolute path for '/opt/GenICam_v2_3/bin/Linux64_x64/GenApi/Generic/libXMLLoader_gcc40_v2_3.so'
The amazing thing is that when I run my Qt project from the Ubuntu terminal it runs without any problem, but when I run it from Qt Creator it gives the above error message. Why does Qt Creator fail to find the library path?
This path is not valid on my machine, I don't have a folder "/opt/GenICam_v2_3" at all, and I don't know why Qt insists on such a folder.
I ran the ldd command on my application and all the libraries were found, so I'm sure nothing is missing.
Is there anything else I have forgotten to set before I can run it in Qt Creator? Why is such a phantom location necessary for Qt Creator?
I tried to put my libXMLLoader_gcc40_v2_3.so, which lives somewhere else, into the above location, but then Qt Creator gives another error message:
Environment variable 'GENICAM_ROOT_V2_3' not found
I have set this environment variable in /etc/bash.bashrc, so I'm sure I've declared it, but Qt Creator cannot find it either.
Content of bash.bashrc:
export PYLON_ROOT=/opt/pylon4
export GENICAM_ROOT_V2_3=${PYLON_ROOT}/genicam
export GENICAM_CACHE_V2_3=$HOME/genicam_xml_cache
export LD_LIBRARY_PATH=${PYLON_ROOT}/lib64:${GENICAM_ROOT_V2_3}/bin/Linux64_x64:${GENICAM_ROOT_V2_3}/bin/Linux64_x64/GenApi/Generic:$LD_LIBRARY_PATH
I copied all of the above information to .bashrc in my home directory too.
What should I do to run my application in Qt Creator? I need to debug my application, and I cannot do that easily from the Ubuntu terminal.
Thanks
I am also trying to make a Basler camera work, and I had exactly the same problem as you, except that the same error occurred when I built from the terminal. I got the error when I attempted to execute one of the examples after building it.
For me the problem was that I had only set paths for "GENICAM_ROOT" in .bashrc and not for "GENICAM_ROOT_V2_3" (since only "GENICAM_ROOT" was used by the accompanying makefile). Adding "GENICAM_ROOT_V2_3" solved the problem.
My .bashrc:
export PYLON_ROOT=/opt/pylon3
export GENICAM_ROOT=/opt/pylon3/genicam
export GENICAM_ROOT_V2_3=/opt/pylon3/genicam
export GENICAM_CACHE=~/tmp
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${PYLON_ROOT}/lib64
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${GENICAM_ROOT}/bin/Linux64_x64
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${GENICAM_ROOT}/bin/Linux64_x64/GenApi/Generic
Hope this can help someone.
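As a side note on the Qt Creator part of the question (just a guess, not something I have verified with this camera setup): an IDE launched from the desktop session never sources bash.bashrc or .bashrc, so starting Qt Creator from a terminal that has those exports is one way to hand them to the IDE and to the application it runs:
# assumes the exports above are in ~/.bashrc; run from an interactive terminal
source ~/.bashrc
qtcreator &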
I have a BeagleBone with no LCD/display. In the console, when I try to use startx, it says /dev/fb0 doesn't exist. The xorg.conf file is using the fbdev driver. Apparently, if an LCD is detected, everything works.
How can I set up a virtual display so I can VNC to it?
Thought I'd better answer this for reference. Oh, I also got the 'Tumbleweed' badge... Great...
If no LCD/DVI cape is attached, the boot doesn't load a frame buffer (/dev/fb0), so no X11 server starts up. x11vnc requires a real X11 server to be running for it to work. There is also the program Xvnc, which can create a virtual X11 server/frame buffer on your behalf, but I couldn't see it in the Angstrom packages.
So I installed Xvfb and created a virtual frame buffer. Install the package
xserver-xorg-xvfb
When starting it, keep in mind (for newbies like me coming from Windows) that it is case-sensitive. To create a virtual X11 server:
Xvfb :1 -screen 0 1024x768x16 &
When you do this, you will probably get these errors:
(EE) AIGLX error: dlopen of /usr/X11/lib/dri/swrast_dri.so failed (dlopen(/usr/X11/lib/dri/swrast_dri.so, 5): image not found)
(EE) GLX: could not load software renderer
So, install the package:
mesa-dri-driver-swrast
OK, error gone. Now we can export our display (an environment variable so Firefox, or whatever X11 client you run, can attach to the display).
export DISPLAY=:1
Load up Firefox (something to see):
firefox &
And now we try to start x11vnc:
x11vnc -display :1 -bg -nopw -xkb
At this point, with this distro, you'll see an error about XTEST not being found / not available when x11vnc was built. The issue is described here.
I made sure that I had all the proper libraries installed, so I figured it must have been a bad build on Angstrom. So, now to build it myself. I ensured all required libraries were available; these are the ones ending with '-dev', and by default they all appeared to be available. I followed the instructions here.
Except the copy line didn't work too well for me, so do whatever you need to do to copy the binary to the /usr/bin folder.
Now it starts, there are no errors about XTEST, and the input works!
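Pulling the steps above together, here is roughly the whole sequence as one script (package names as above; I am assuming Angstrom's opkg package manager and that x11vnc already sits in /usr/bin):
# install the virtual framebuffer X server and the software GL renderer
opkg install xserver-xorg-xvfb mesa-dri-driver-swrast

# start a virtual X server on display :1 (1024x768, 16-bit colour)
Xvfb :1 -screen 0 1024x768x16 &

# point X clients at the virtual display and start something to look at
export DISPLAY=:1
firefox &

# share the virtual display over VNC (background, no password, XKEYBOARD)
x11vnc -display :1 -bg -nopw -xkb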