I'm new to Theano.
Trying to set up a config file.
First of all, I notice that I have no .theanorc file:
locate .theanorc - returns nothing
echo $THEANORC - returns nothing
theano.test() - passes ok
I'm guessing some default configuration was created when I installed Theano. Where is it?
Theano does not create any configuration file by itself, but has default values for all its configuration flags. You only need such a file if you want to modify the default values.
This can be done by creating a .theanorc file in your home directory. For example, if you want floatX to be always float32, you can do this:
echo -e "\n[global]\nfloatX=float32\n" >> ~/.theanorc
on Linux and macOS. On Windows this can also be done; see this page for more details:
http://deeplearning.net/software/theano/library/config.html
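If you prefer to create the file from Python rather than the shell, here is a minimal sketch using the standard-library configparser (the target path and flag values are just examples):

```python
import configparser
import os
import tempfile

def write_theanorc(path, global_flags):
    """Write a minimal Theano config file with the given [global] flags."""
    cfg = configparser.ConfigParser()
    cfg.optionxform = str  # preserve case: Theano flag names are case-sensitive
    cfg["global"] = global_flags
    with open(path, "w") as f:
        cfg.write(f)

# Demo against a temporary file; for real use the path would be
# os.path.expanduser("~/.theanorc").
path = os.path.join(tempfile.mkdtemp(), ".theanorc")
write_theanorc(path, {"floatX": "float32"})
print(open(path).read())
```

Note the `optionxform = str` line: without it, configparser lowercases keys and `floatX` would be written as `floatx`.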
On Linux, open a terminal in your home directory and run:
nano .theanorc
Then copy the following lines into the file:
[global]
floatX = float32
device = gpu0
[lib]
cnmem = 1
Save it.
When I import theano in Python I was having cnmem memory problems. It seems that is because the monitor is connected to the GPU. To resolve it, change cnmem to 0.8. A value below 1 is the fraction of the GPU memory reserved for Theano.
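With that change, the whole file would read (0.8 meaning Theano pre-allocates 80% of GPU memory, leaving the rest for the display):

```
[global]
floatX = float32
device = gpu0

[lib]
cnmem = 0.8
```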
I had a similar question and this is what helped me:
import theano
# ...
theano.config.floatX = 'float32'  # or 'float64', whatever you want
This worked for me:
nano ~/.theanorc
Then I entered:
[global]
floatX = float32
device = cuda
Code to check if Theano is using the GPU is on the Theano doc page.
(I am using Ubuntu 14.04, Theano 0.9.0 (conda), NVIDIA 1080 Ti GPU).
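As an alternative to editing ~/.theanorc, the same settings can be passed per process through the THEANO_FLAGS environment variable, which Theano reads at import time. A minimal sketch (flag values as in the answer above; the variable must be set before importing theano):

```python
import os

# Comma-separated flag=value pairs; these override ~/.theanorc.
# Must be set before `import theano`, or it has no effect.
os.environ["THEANO_FLAGS"] = "device=cuda,floatX=float32"

# import theano  # would now pick up the flags above
print(os.environ["THEANO_FLAGS"])
```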
I have been having similar problems. I have NVIDIA 1070 GPU on a desktop machine with Asus Z270E motherboard and was able to import theano after setting up the .theanorc file as below. (And rebooting afterwards)
[global]
floatX = float32
device = gpu
[cuda]
root = /usr/local/cuda
[lib]
cnmem = 1
SYSTEM: Ryzen 5800x, rx 6700xt, 32 gigs of RAM, Ubuntu 22.04.1
I'm attempting to install Stable-Diffusion by following https://youtu.be/d_CgaHyA_n4
When attempting to run the SD script, I get the "hipErrorNoBinaryForGpu: Unable to find code object for all current devices!" error.
I believe this is caused by PyTorch not working as expected. When validating PyTorch's installation with "The Master Test", I get the same error:
"hipErrorNoBinaryForGpu: Unable to find code object for all current devices!"
Aborted (core dumped)
I believe that it is installed correctly, as the conda list command tells me that torch 1.12.0a0+git2a932eb and torchvision 0.13.0a0+f5afae5 are installed. Interestingly, when I change the command ever so slightly to torch.cuda.is_available (without the parentheses), I get the following output: <function is_available at 0x7f42278788b0>. Granted, I'm not sure what this is telling me. Following the "Verification" step resulted in the expected array of random numbers; however, it failed the GPU driver check.
Thank you in advance.
Try running the following command:
export HSA_OVERRIDE_GFX_VERSION=10.3.0
This made it work on my machine using an RX 6600 XT, with which I got the same error running it, before exporting the variable.
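If exporting the variable in the shell is inconvenient (e.g. when the script is launched by another tool), it can also be set from inside the Python process, as long as this happens before torch is imported and the ROCm runtime loads:

```python
import os

# Must be set before `import torch`, which loads the ROCm runtime.
os.environ["HSA_OVERRIDE_GFX_VERSION"] = "10.3.0"

# import torch
# print(torch.cuda.is_available())  # note the (): without them you get
#                                   # the function object, not its result
print(os.environ["HSA_OVERRIDE_GFX_VERSION"])
```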
I was struggling with ROCm-enabled TensorFlow, the tensorflow-rocm package. Without setting the environment variable
export HSA_OVERRIDE_GFX_VERSION=10.3.0
tensorflow-rocm crashes.
After setting it, it works with the 6700 XT as well (at least it does not crash), but I still have problems with finding libraries.
It seems ROCm has a lot to catch up.
I am trying to get PyVISA working on my Mac. I have installed it and the NI libraries, but when I try to run the following simple Python file from the terminal in VS Code:
import sys
import pyvisa as visa
rm = visa.ResourceManager()
rm.list_resources()
inst = rm.open_resource('GPIB::10::INSTR')  # Instrument GPIB Addr = 10
print(inst.query("*IDN?"))
By running 'python3 temp.py' I get the following error message:
Error Domain=KMErrorDomain Code=71 "Insufficient permissions for action: Unable to load 'com.highpoint-tech.kext.HighPointRR' in unprivileged request." UserInfo={NSLocalizedDescription=Insufficient permissions for action: Unable to load 'com.highpoint-tech.kext.HighPointRR' in unprivileged request.}
zsh: abort python3 temp.py
Make sure com.ni.driver.NiViPciK extension is installed and loaded. For this, go to About This Mac -> System Report -> Extensions (in Software section) and find NiViPciK. It will likely say "Loaded: No". You need to manually allow National Instruments extensions.
Boot into recovery mode by holding Cmd-R while powering up.
Open Terminal from Tools menu.
Execute: spctl kext-consent add SKTKK2QZ74
Reboot
This did the trick for me:
I have been using NI IVI backend, but lately gave a spin to open source PyVISA-py backend instead and it has worked great for our automation purposes both on OSX and Linux.
I changed
rm = visa.ResourceManager() to rm = visa.ResourceManager('@py').
Some minor modifications may be needed (I had to remove instrument.clear() calls for some reason).
I have a basic Vulkan application, "Hello triangle", derived from this tutorial. In debug mode I use two validation layers:
VK_LAYER_KHRONOS_validation and VK_LAYER_LUNARG_monitor
On Windows I have no problems: both layers load, and I can see the FPS counter in the title bar. But I am experiencing different behaviour on Linux (Ubuntu 18.04).
Only VK_LAYER_LUNARG_monitor loads. No FPS counter appears in the title bar.
VK_LAYER_KHRONOS_validation fails to load with the error:
ERROR: libVkLayer_khronos_validation.so: cannot open shared object
file: No such file or directory
I performed Linux setup according to the guidelines appearing in this LunarG doc.
Put the whole LunarG SDK in an arbitrary location on my Linux
filesystem.
Sourced setup-env.sh into .profile file
Inside the setup-env.sh script I also added VK_LOADER_DEBUG=all in order to see detailed info from Vulkan Loader.
Verified that all the variables set by the script point to correct directories in the SDK. They do.
setup-env.sh:
VULKAN_SDK="$(dirname "$(readlink -f "${BASH_SOURCE:-$_}" )" )/x86_64"
export VULKAN_SDK
PATH="$VULKAN_SDK/bin:$PATH"
export PATH
LD_LIBRARY_PATH="$VULKAN_SDK/lib:${LD_LIBRARY_PATH:-}"
export LD_LIBRARY_PATH
VK_LAYER_PATH="$VULKAN_SDK/etc/vulkan/explicit_layer.d"
export VK_LAYER_PATH
export VK_LOADER_DEBUG=all
Here is the chunk I am getting from the debug output that shows the failure to load the layers:
DEBUG: Loading layer library
/home/platform-thirdparty/vulkan-sdk/1.1.121.1/x86_64/lib/libVkLayer_monitor.so
INFO: Inserted device layer VK_LAYER_LUNARG_monitor
(/home/platform-thirdparty/vulkan-sdk/1.1.121.1/x86_64/lib/libVkLayer_monitor.so)
Message:libVkLayer_khronos_validation.so: cannot open shared object
file: No such file or directory ERROR:
libVkLayer_khronos_validation.so: cannot open shared object file: No
such file or directory
Even though it says that
Inserted device layer VK_LAYER_LUNARG_monitor
The monitor doesn't work.
If I replace VK_LAYER_KHRONOS_validation with VK_LAYER_LUNARG_standard_validation, I get the same load-failure error.
The debug output also shows that the loader did find all the manifest files for all the layers in the SDK.
I have no clue what else could be the problem.
My system setup:
Ubuntu 18.04 64bit, Nvidia RTX 3000
Nvidia Drivers: 430.0
VulkanSDK(LunarG): 1.1.121.1
In this SO question lies the answer, by user1824407.
It appears that in recent Ubuntu distributions (at least in 18.04, which I use), setting LD_LIBRARY_PATH in config files like .profile, /etc/environment or .bashrc doesn't take any effect. And indeed, I tried adding the lib path to all of those files; the libraries didn't load, even though echo $LD_LIBRARY_PATH printed the path I had set.
Only when I added the path with the Vulkan validation layers' location to /etc/ld.so.conf.d/devlibs.conf (where devlibs.conf is an arbitrary name I gave the file) and refreshed the linker cache with sudo ldconfig did both VK_LAYER_LUNARG_monitor and VK_LAYER_KHRONOS_validation get loaded by Vulkan.
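Based on the SDK path visible in the loader output above, devlibs.conf needs just one line pointing at the layer library directory:

```
/home/platform-thirdparty/vulkan-sdk/1.1.121.1/x86_64/lib
```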
I don't want to install cuDNN on a computer where I don't have sudo privileges. For my TensorFlow model I don't need cuDNN, therefore I use the environment variable TF_USE_CUDNN=0. However, I get:
ImportError: libcudnn.so.5: cannot open shared object file: No such file or directory
How I can use TensorFlow without cuDNN?
As far as I can tell from the code that reads the environment variable and its callers, the TF_USE_CUDNN environment variable only disables the use of cuDNN at runtime. Due to the way the standard TensorFlow-on-GPU distribution is linked, it still requires you to install the cuDNN library, because it will be loaded with the rest of the module. The easiest workaround would be to install cuDNN locally in your $HOME/lib directory, and set up LD_LIBRARY_PATH to look for the library there.
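Once libcudnn is placed under $HOME/lib and LD_LIBRARY_PATH is set, you can check that the dynamic loader resolves it before importing TensorFlow; a stdlib sketch using ctypes (which goes through dlopen, the same mechanism TensorFlow's import uses):

```python
import ctypes

def loadable(soname):
    """True if dlopen can resolve the shared library (honours LD_LIBRARY_PATH)."""
    try:
        ctypes.CDLL(soname)
        return True
    except OSError:
        return False

print("libcudnn.so.5 loadable:", loadable("libcudnn.so.5"))
```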
After flashing the Tegra board, the CUDA drivers (including the CUDA compiler nvcc) must be installed. To do so I simply used the given command format:
sudo ./cuda-l4t.sh .cuda-repo-l4t-8-0-local_8.0-76_arm64.deb 8.0 8-0
and
source ~/.bashrc
but when I enter nvcc -V it does not show the expected information about nvcc; it just tells me that the command was not recognized.
First of all, you should check whether CUDA was really installed.
To do so, go to the path:
/usr/local
There must be a cuda folder, or one named cuda-8.0 or whatever version you installed. Remember the name and the path.
Now check your .bashrc using gedit/vi/nano/vim, whatever you prefer:
vim ~/.bashrc
Go to the bottom of the file. There should be some exports regarding the PATH variable and LD_LIBRARY_PATH. Check whether something was written to these variables and then overwritten again.
You must add the path to the bin folder of your CUDA installation to PATH, and the path to its lib64 folder to LD_LIBRARY_PATH.
To do so, the bottom of the .bashrc should contain something like this:
export PATH=/usr/local/cuda-8.0/bin:...
export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64:...
After the colon, other paths may follow. Make sure that the path to the CUDA installation is the correct one and that it is not overwritten again.
After you have made the correct changes, do not forget to source the .bashrc again.
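After re-sourcing the .bashrc, here is a quick way to confirm that the exports took effect; a stdlib sketch assuming the default /usr/local/cuda-8.0 install location:

```python
import os
import shutil

def check_cuda_setup(root="/usr/local/cuda-8.0"):
    """Report whether nvcc is on PATH and the exported directories exist."""
    return {
        "nvcc_on_path": shutil.which("nvcc") is not None,
        "bin_dir_exists": os.path.isdir(os.path.join(root, "bin")),
        "lib64_dir_exists": os.path.isdir(os.path.join(root, "lib64")),
    }

print(check_cuda_setup())
```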