How to run Caffe2 on MacBook Pro M1 GPU - pytorch

I was able to run PyTorch on the MacBook Pro M1 Max GPU. However, Caffe2 does not use the GPU.
import torch
torch.device("mps")
from caffe2.python import core
WARNING:root:This caffe2 python run failed to load cuda module:No module named 'caffe2.python.caffe2_pybind11_state_gpu',and AMD hip module:No module named 'caffe2.python.caffe2_pybind11_state_hip'.Will run in CPU only mode.
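For reference, a hedged check that the MPS backend itself works in this build (the torch.backends.mps API is present in recent PyTorch releases):
import torch
# Was this wheel compiled with MPS support, and is the Metal device usable right now?
print(torch.backends.mps.is_built())
print(torch.backends.mps.is_available())
x = torch.ones(3, device="mps")  # allocates on the M1 GPU if MPS works
print(x * 2)
Note this only exercises PyTorch itself; per the warning above, Caffe2's caffe2_pybind11_state_gpu module is a separate build artifact.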
I built PyTorch and Caffe2 from the nightly source using
export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
BUILD_CAFFE2=1 MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py install
Any suggestions on how to solve this?

Related

pytorch unable to run inference with GPU

I'm developing a project based on yolov7, but I started facing this error where torch recognizes my GPU but torchvision throws a NotImplementedError.
This is the error
NotImplementedError: Could not run 'torchvision::nms' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'torchvision::nms' is only available for these backends: [CPU, QuantizedCPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradMPS, AutogradXPU, AutogradHPU, AutogradLazy, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher].
I tried installing torchvision with CUDA built in, but that gave me the same error; I also tried reinstalling PyTorch, and that didn't work either.
The version of torchvision installed in my env was not equipped with CUDA, since it was a plain pip install (pip install torchvision). For torchvision to work with an NVIDIA GPU, it has to be built with CUDA support. To get matching CUDA builds, install torch with the following command: conda install pytorch torchvision torchaudio pytorch-cuda={CUDA version} -c pytorch -c nvidia
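A hedged way to verify the environment afterwards: confirm torch reports a CUDA build and that torchvision's CUDA nms kernel is registered (assumes an NVIDIA GPU is present):
import torch
import torchvision
print(torch.__version__, torchvision.__version__)
print(torch.version.cuda)         # None on a CPU-only build
print(torch.cuda.is_available())
# If torchvision shipped with CUDA ops, this no longer raises NotImplementedError.
boxes = torch.tensor([[0., 0., 10., 10.], [1., 1., 11., 11.]], device="cuda")
scores = torch.tensor([0.9, 0.8], device="cuda")
print(torchvision.ops.nms(boxes, scores, iou_threshold=0.5))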

Unable to train model using GPU on M1 Mac

How to reproduce the behaviour
Install spaCy for Apple, select the model training option, and follow the on-screen instructions.
Generate config files for model training.
Declare your training and testing corpus in the base_config file, then auto-fill it to generate the final config file.
python -m spacy init fill-config base_config.cfg config.cfg
python -m spacy train config/config.cfg -g 0 --output trf_model-2
Your Environment
Operating System: MacOS 12.4
Python Version Used: 3.10
spaCy Version Used: 3.4.0
Environment Information: -
spaCy version: 3.4.0
Platform: macOS-12.4-arm64-arm-64bit
Python version: 3.10.5
Pipelines: en_core_web_trf (3.4.0)
While trying to train the model using the GPU on an M1 Mac, I got this error:
RuntimeError: invalid gradient at index 0 - expected device cpu but got mps:0
GPU support for model training on Apple silicon M1 processors is not there in spaCy yet;
we have to wait till the spaCy team provides proper support for Apple silicon based hardware.
As you noted, there's no GPU support for training on Apple Silicon at this time. However, spaCy can leverage some accelerations on Apple Silicon if the thinc-apple-ops module is installed. You can specify it as an option when installing spaCy.
pip install spacy[apple]
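A hedged check that the Accelerate-backed ops are actually picked up after that install (assuming thinc-apple-ops registers itself as the default backend):
from thinc.api import get_current_ops
# With thinc-apple-ops installed, thinc should select AppleOps instead of NumpyOps.
ops = get_current_ops()
print(type(ops).__name__)  # expect "AppleOps" on Apple silicon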

Deeplab v2 in Google Colab: Implementation of Caffe lib fails

Here is the Google Colab notebook, hosted on GitHub.
As explained in the notebook, my purpose is to experiment with Deeplab v2 using Caffe, not TensorFlow (with the TensorFlow lib, I've managed to run an implementation of Deeplab in Google Colab).
Python does not find the library: as you'll notice in the notebook, the instruction import caffe throws an error when executed.
I have also tried a straightforward install of the package using !apt install caffe-cuda in a different notebook, but this does not work either, and everywhere I read (like here), the suggestion is to build Caffe from source.
OS: Ubuntu 18.04.5 LTS
Python: Python 3.7.12
CUDA: Cuda compilation tools, release 11.1, V11.1.105
OpenCV: 4.1.2
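If Caffe does get built from source in the notebook, the step that is often missing is adding its python/ directory to the import path; a minimal sketch, assuming (hypothetically) the build landed in /content/caffe:
import sys
# Assumption: Caffe was cloned and compiled under /content/caffe in the Colab VM.
sys.path.insert(0, "/content/caffe/python")
import caffe
caffe.set_mode_gpu()  # or caffe.set_mode_cpu() if no CUDA device is visible
print(caffe.__file__)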

Problems getting GPU to work with older version of Tensorflow and Keras

I have a project I'm trying to work on, but it's based on code that's a few years old, and this code fails if TensorFlow or NumPy aren't the correct versions (which means everything I'm using has to be old). This has meant dual-installing an older version of Python so I could install the correct versions of the dependencies.
I'm running:
Python 3.7.5
NumPy 1.17.4
Pandas 0.25.3
pyyaml 5.1.2
more_itertools 7.2.0
keras 2.3.1
tensorflow 2.0.1
CUDA 10.0
CuDNN 7.4.1
I'm particularly interested in the Keras and TensorFlow versions. From my research, it seems they should work with the GPU as-is, according to this:
https://www.tensorflow.org/install/source (towards the bottom under tested build configurations for GPU).
However, when I try to detect GPU devices on my build with
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
I get
2022-03-22 19:34:53.410102: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 1722158755601489749
]
This suggests my GPU isn't being recognised.
Is there something I'm missing in the setup process that I need to enable GPU support? As far as I can tell my versions of Tensorflow and Keras are compatible with GPU processing and have compatible versions of CUDA and CuDNN installed.
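One hedged diagnostic worth running first: for TensorFlow releases before 2.1, the plain tensorflow pip package was CPU-only and GPU support shipped separately as tensorflow-gpu, so it's worth confirming the installed wheel was built with CUDA at all:
import tensorflow as tf
# False means this wheel is the CPU-only build, regardless of the
# CUDA/cuDNN versions installed on the system.
print(tf.test.is_built_with_cuda())
# TF 2.0-era API; newer versions use tf.config.list_physical_devices("GPU").
print(tf.test.is_gpu_available())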

Running python code on CUDA

While trying to run this code https://wltrimbl.github.io/2014-06-10-spelman/intermediate/python/04-multiprocessing.html on my GPU system, which has 300 cores, I added the statement with tf.device('/GPU:0') at the beginning of the code, but found that it does not run on the GPU. Then I tried
import tensorflow as tf

with tf.device('/GPU:0'):  # pin the following ops to the GPU
    init = tf.initialize_all_variables()  # initialize all variables (TF 1.x API)

sess = tf.Session(
    config=tf.ConfigProto(
        intra_op_parallelism_threads=1))
Does this code run on the GPU? Or is there any method to run a Python script on the GPU?
No, it won't run on the GPU without a GPU-enabled build of TensorFlow.
Python multiprocessing is CPU-only.
A GPU build of TensorFlow is available (see https://www.nvidia.com/en-us/data-center/gpu-accelerated-applications/tensorflow/); it uses CUDA with cuDNN under the hood.
To run your own Python code on the GPU, you need a library like PyCUDA or CuPy, which also use the CUDA API under the hood.
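As a hedged illustration, a minimal CuPy example (assumes a CUDA-capable GPU and a cupy wheel matching the installed toolkit, e.g. pip install cupy-cuda11x):
import cupy as cp
x = cp.arange(10_000_000, dtype=cp.float32)  # array lives in GPU memory
y = cp.sqrt(x) * 2.0                         # elementwise kernels execute on the GPU
print(float(y.sum()))                        # copies the scalar result back to the host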
