python3 install of Tensorflow on Apple silicon M1 - python-3.x

I have macOS Big Sur on an Apple Silicon M1 and I'm unable to install TensorFlow in python3.
I removed the Xcode python3 and installed the Homebrew arm64 python3 (the x86 python3 doesn't work either).
I verified that it is a 64-bit build:
python3 -c "import sys; print(sys.version)" or python -c "import struct; print(struct.calcsize('P')*8)"
3.8.7 (default, Dec 30 2020, 02:09:32)
[Clang 12.0.0 (clang-1200.0.32.28)]
Can this work at all? And if so, does anyone know how to make it work?

Apple M1 is a processor with an ARM64 architecture, while all pip packages of TensorFlow are compiled for the x86_64 architecture (excluding the Raspberry Pi packages, but those would not be compatible with macOS anyway).
If we look at the names of the pip packages on the TensorFlow install page, most of them contain either x86_64 or amd64, which indicates that they are built for the x86_64 architecture.
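That naming convention can be checked mechanically. The sketch below parses a wheel filename to pull out the platform tag; the filename itself is just an illustrative example, not a specific release:

```python
# A wheel filename encodes {dist}-{version}-{python tag}-{abi tag}-{platform tag};
# the last tag tells you which architecture the wheel was built for.
name = "tensorflow-2.4.0-cp38-cp38-macosx_10_14_x86_64.whl"  # illustrative name
dist, version, py_tag, abi_tag, plat_tag = name[: -len(".whl")].split("-")
print(plat_tag)  # macosx_10_14_x86_64 -> an x86_64 build, not arm64
```

A wheel whose platform tag ends in x86_64 or amd64 will never install on a native arm64 Python.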
If you want to run TensorFlow on the M1, you either need to:
Compile TensorFlow from source targeting x86_64 and run it through Rosetta 2. Unfortunately, according to this issue on GitHub, Rosetta 2 does not support the AVX instruction set, which is enabled in the pip builds of TensorFlow, so rebuilding from source without AVX is required.
Use the experimental support for TensorFlow on M1 developed by Apple, which you can find in this GitHub repository. Note that if you go that way, some Python packages might not be compatible with, or available for, the M1.
Note that as of 2021/01/04, the Apple M1 is not an architecture supported by the TensorFlow team:
We currently cannot support Mac ARM. There is less than one developer that can focus on build issues on all 3 operating systems.
Hence, support has to come from the community. There is SIG Build that focuses on build-related issues on several other platforms, leaving us to only focus on the pip packages we officially bless.

Things should work better now with the TensorFlow Metal plugin.
As of Oct. 25, 2021, macOS 12 Monterey is generally available.
Steps to set up an M1 for TensorFlow:
Upgrade your machine to Monterey.
If you have conda installed, uninstall it.
Then follow the instructions from Apple here.
Cleaned up below:
Download and install Conda from Miniforge:
chmod +x ~/Downloads/Miniforge3-MacOSX-arm64.sh
sh ~/Downloads/Miniforge3-MacOSX-arm64.sh
source ~/miniforge3/bin/activate
In an active conda environment, install the TensorFlow dependencies, base TensorFlow, and the Metal plugin:
conda install -c apple tensorflow-deps
pip install tensorflow-macos
pip install tensorflow-metal
You should be good to go.
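Once the three packages are installed, a quick sanity check (a sketch; the TensorFlow import is guarded in case you run it outside the conda env) confirms the interpreter is a native arm64 build and that TensorFlow can see the Metal GPU:

```python
import platform
import struct

print(platform.machine())        # 'arm64' on a native Apple Silicon Python
print(struct.calcsize("P") * 8)  # 64 on a 64-bit build

def metal_gpus():
    """Return TensorFlow's GPU device list, or None if TF isn't installed."""
    try:
        import tensorflow as tf
    except ImportError:
        return None
    return tf.config.list_physical_devices("GPU")

print(metal_gpus())  # expected to be a non-empty list once tensorflow-metal is active
```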

Related

is there any way to install cupy without Cuda

pip install cupy won't install CuPy without CUDA.
Linux: lubuntu v21.10
Python v3.10
conda v22.11.1
I have Linux installed on an older laptop that doesn't have any dedicated GPU.
I have a Python framework I'm trying to test out, and it requires CuPy to run. I tried pip install cupy, but it wouldn't run without a CUDA installation.
I tried conda install cupy, but the framework won't run when conda installs CUDA.
https://pypi.org/project/cupy/ is "NumPy & SciPy for GPU" (emphasis mine - phd). You cannot install it without CUDA.
If you want to use pyVHR without a GPU, you need to switch to the branch pyVHR_CPU. There is a pyVHR_CPU_env.yml to create a CPU-only conda environment. See the installation instructions.
Another approach would be to install an emulator (QEMU, VirtualBox, etc.) and configure it to emulate a GPU. Not sure it is worth the trouble in terms of speed.
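Separately from pyVHR's CPU branch: because CuPy deliberately mirrors the NumPy API, a common pattern for CPU-only machines (a general sketch, not specific to pyVHR) is to alias whichever module imports cleanly:

```python
# CuPy mirrors NumPy's API, so CPU-only machines can fall back to NumPy.
try:
    import cupy as xp   # GPU path, requires CUDA
except ImportError:
    import numpy as xp  # CPU fallback, no CUDA needed

a = xp.arange(6).reshape(2, 3)
print(int(a.sum()))  # 15 on either backend
```

Code written against `xp` then runs unchanged on both backends, though only when it sticks to the API surface the two libraries share.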

whl file is not a supported wheel on this platform - how can I fix it?

I am currently using Raspberry Pi Desktop, running Debian on my otherwise empty laptop.
I want to use the TensorFlow Lite interpreter from the official TensorFlow website.
I typed $ python3 --version and got Python 3.7.3,
then typed $ uname -a and got Linux raspberry 4.19.0-13-amd64 #1 SMP Debian 4.19.160-2 (2020-11-28) x86_64 GNU/Linux. This is my platform info.
I then visited the TensorFlow Lite interpreter website and downloaded the compatible-looking wheel file:
https://dl.google.com/coral/python/tflite_runtime-2.1.0.post1-cp37-cp37m-linux_x86_64.whl
i finally typed $ pip3 install https://dl.google.com/coral/python/tflite_runtime-2.1.0.post1-cp37-cp37m-linux_x86_64.whl
I cannot find anything wrong in what I did, and I would appreciate your help finishing this job.
$ pip install https://dl.google.com/coral/python/tflite_runtime-2.1.0.post1-cp37-cp37m-linux_x86_64.whl does not work either.
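The "not a supported wheel on this platform" error means the tags in the filename (cp37-cp37m-linux_x86_64) don't match any tag your pip accepts. A first step is to compare them against what the interpreter itself reports (stdlib only):

```python
# Compare the wheel's filename tags against what this interpreter reports.
import platform
import sysconfig

print(platform.python_version())  # must match the cp37 tag (i.e. 3.7.x)
print(platform.machine())         # must match the x86_64 in the platform tag
print(sysconfig.get_platform())   # e.g. 'linux-x86_64'
```

Recent versions of pip can also print the full list of compatible tags with `pip debug --verbose`, which makes the mismatch explicit.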

How to tell PyTorch which CUDA version to take?

I have two version of CUDA installed on my Ubuntu 16.04 machine: 9.0 and 10.1.
They are located in /usr/local/cuda-9.0 and /usr/local/cuda-10.1 respectively.
If I install PyTorch 1.6.0 (which needs CUDA 10.1) via pip (pip install torch==1.6.0), it uses version 9.0 and thus detects no GPUs. I already changed my LD_LIBRARY_PATH to "/usr/local/cuda-10.1/lib64:/usr/local/cuda-10.1/cuda/extras/CUPTI/lib64" but PyTorch is still using CUDA 9.0.
How do I tell PyTorch to use CUDA 10.1?
Prebuilt wheels of torch built with different versions of CUDA are available on the torch stable releases page. For example, you can install torch v1.9.0 built with CUDA v11.1 like this:
pip install --upgrade torch==1.9.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html
But not all the combinations are available.
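After installing, you can confirm which CUDA toolkit the wheel was actually built against (a sketch; the torch import is guarded in case it isn't installed):

```python
def torch_build_cuda():
    """Return the CUDA version the installed torch wheel was compiled with."""
    try:
        import torch
    except ImportError:
        return None
    return torch.version.cuda  # e.g. '11.1' for a +cu111 wheel; None for CPU-only

print(torch_build_cuda())
```

If this prints the version you expect but the GPU is still not found, the problem is with the driver or toolkit on the machine, not with the wheel you chose.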

PyTorch having trouble detecting CUDA

I am running a CNN on PyTorch. The torch.cuda.is_available() function returns False and no GPU is detected. However, I can run Keras models with the GPU. Here is my system information:
OS: Ubuntu 18.04.3
Python 3.7.3 (Conda)
GPU: GTX1080Ti
Nvidia driver: 430.50
When I check nvidia-smi, the output says that the CUDA version is 10.1. However, the nvcc -V command tells me it is CUDA 9.1.
I downloaded NVIDIA-Linux-x86_64-430.50.run from the official site and installed it from the command line. I installed CUDA 10.1 using the following commands recommended by the official site:
wget http://developer.download.nvidia.com/compute/cuda/10.1/Prod/local_installers/cuda_10.1.243_418.87.00_linux.run
sudo sh cuda_10.1.243_418.87.00_linux.run
I installed PyTorch through pip install. What is wrong? Thanks in advance!
The default PyTorch 1.2 package depends on CUDA 10.0, but you have CUDA 9.1. The output of nvidia-smi just tells you the maximum CUDA version your driver supports; nvcc reports the CUDA toolkit actually installed on your system. It seems that your installation of CUDA 10.1 was unsuccessful.
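The two commands answer different questions, which the snippet below (a hedged diagnostic, stdlib only, skipping any tool that isn't on PATH) makes explicit by capturing both outputs side by side:

```python
# The driver's CUDA ceiling (nvidia-smi) and the installed toolkit's compiler
# (nvcc) can legitimately disagree; printing both side by side shows which is which.
import shutil
import subprocess

def tool_output(cmd):
    """Return the tool's stdout, or None if the tool isn't on PATH."""
    if shutil.which(cmd[0]) is None:
        return None
    return subprocess.run(cmd, capture_output=True, text=True).stdout

print(tool_output(["nvcc", "-V"]))  # toolkit actually installed (what PyTorch links against)
print(tool_output(["nvidia-smi"]))  # driver version and its maximum supported CUDA
```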
In addition to CUDA 10.0, PyTorch also supports CUDA 9.2, and I've found that the PyTorch package compiled for CUDA 10.0 also works with CUDA 10.1. So you can either upgrade your CUDA installation to 9.2 and install the PyTorch CUDA 9.2 package with
pip3 install torch==1.2.0+cu92 torchvision==0.4.0+cu92 -f https://download.pytorch.org/whl/torch_stable.html
Or get a working installation of CUDA 10.1. There are detailed Linux instructions here. (Note that you may have to remove previous installations of CUDA before installing a new one.)
FYI, this answer is a hack that could mess up your conda env, but it may work more easily than setting up a fresh env. A consistency-checking tool would be really helpful, given how many people hit exactly this problem. Matching Anaconda's CUDA version with the system driver, the actual hardware, and the other environment settings is challenging to say the least, and almost an art.
I found that Anaconda frequently guesses the wrong CUDA version to use. So I have found the best way to fix this is to surgically uninstall and reinstall just PyTorch with pip:
pip uninstall torch
pip install torch
Note that pip calls pytorch torch while conda calls it pytorch.
However, I also found that pip sometimes refuses to reinstall torch because it didn't get rid of the Anaconda site-packages files. If that is the case, you can very carefully remove them manually:
rm -fr $HOME/miniconda3/envs/<ENV>/lib/python3.9/site-packages/torch/
rm -fr $HOME/miniconda3/envs/<ENV>/lib/python3.9/site-packages/torch-*.dist-info/
where <ENV> should be replaced with your environment name, and miniconda3 might be anaconda3 or something else depending on your installation.
Be very careful not to delete anything other than the torch files or you may mess something else up. Then you would be best served by installing yet another fresh environment.
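Before removing anything by hand, it can help to print where the torch package actually lives, so the rm commands target the right tree:

```python
# Print where the installed torch package lives before deleting files manually.
import importlib.util

spec = importlib.util.find_spec("torch")
print(spec.origin if spec else "torch is not installed in this environment")
```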
After this, pip install torch should work and torch.cuda.is_available() should return True. Unless there is another problem... YMMV.
Note that I recommend miniconda because the full Anaconda comes overloaded with packages, and I find it quickly gets clogged and broken.

pip compiling vs binaries

Sometimes pip install launches a lengthy compilation process, and sometimes it does not. This is most noticeable with numpy, which takes significant time to compile but installs from binaries in negligible time. I have an Ubuntu 14 machine where it always compiles numpy, and an Ubuntu 16 machine where it never compiles.
I assumed the Ubuntu 14 packages were no longer available or something. But then I launched a brand-new VM with the same older OS, and pip install numpy went super fast (no compiling). So clearly it is not simply the OS version. What is going on here?
It's probably a difference in the version of pip you're using. Both binary wheels and source distributions exist on PyPI, so the question is which pip will choose. I found that on the same machine, pip 1.5.4 would choose the source distribution but pip 9.0.1 would choose the binary wheel.
Newer versions of pip (e.g. 9.0.1) have options for controlling this behaviour: https://pip.pypa.io/en/stable/reference/pip_install/#cmdoption-no-binary
Older versions of pip don't support the manylinux1 binary wheel tag (support was added in pip 8.1), so they fall back to building from source.
So try upgrading pip, and numpy should then install without compiling.
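Since the cutoff for manylinux1 wheels is pip 8.1, a quick stdlib check of the installed pip version (importlib.metadata requires Python 3.8+) tells you which side of it you are on:

```python
# Report the installed pip version; manylinux1 wheel support needs pip >= 8.1.
from importlib.metadata import PackageNotFoundError, version

def pip_version():
    try:
        return version("pip")
    except PackageNotFoundError:
        return None

print(pip_version())
```

You can also force the choice per install with pip's `--only-binary :all:` (fail rather than compile) or `--no-binary :all:` (always compile) options.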
