How to revert back to PyTorch 1.4? - pytorch

I have had PyTorch 1.4 installed on my machine for a little while now, and just tried to install 1.5 using the directions found directly on PyTorch.org.
Unfortunately, it failed, as do most things I try to do exactly as told. No shocker there.
Now, I have some half-installation of PyTorch 1.5 that will not import, but insists that it's on my machine. It fails with a FileNotFoundError, referring specifically to caffe2_nvrtc.dll.
I found an issue on GitHub with this problem that was closed last month, but of course it still is an issue, and many other people have reported it since it was closed. There has been no real resolution.
I just want to go back. What is the best way of getting rid of 1.5 without screwing up my machine and its environments even worse? And then, where do I find v1.4? It is not listed on the "Previous PyTorch Versions" page.

The old versions are still available as before and you can install them directly with pip by specifying the version you want to install:
pip install torch==1.4.0 -f https://download.pytorch.org/whl/torch_stable.html
As you seem to be on Windows, the -f https://download.pytorch.org/whl/torch_stable.html part is required, because the Windows versions are not distributed over PyPI; you have to get them from PyTorch's own package registry. There you can also find all versions (including past versions) of PyTorch.
Installing an older version of the same package should remove the current one, but in case you run into problems with that, you can first uninstall PyTorch before you install the older version:
pip uninstall torch
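After reinstalling, you can confirm which version actually ended up on the machine without even importing torch. A minimal stdlib-only check (the helper name is my own, not part of pip or PyTorch; assumes Python 3.8+ for importlib.metadata):

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(package):
    """Return the installed version of `package`, or None if it is absent."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

# After the downgrade you would expect installed_version("torch") == "1.4.0";
# None means the half-installed package was fully removed.
print(installed_version("torch"))
```

This avoids triggering the FileNotFoundError on import while you are still cleaning up.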

Related

several dependency errors causing in Azure AutoML while running model

I am trying out different models in a small ML project that needs to run on the Azure platform and produce score.py with all the values. It is not a single library issue: I am getting multiple Module errors and Attribute errors. I am using the latest SDK version, but I am not sure where I am going wrong.
Any previous observations on this error?
The compatibility break comes from newer package versions not matching the current SDK version. With SDK 1.13.0 and above, some previous package versions no longer work; which package versions the SDK supports differs from SDK version to SDK version.
Because of this we get ModuleNotFoundError, ImportError, and AttributeError.
This solution depends on the AutoML SDK training version.
If you are using SDK version 1.13.0 or above, pin pandas to 0.25.1 and scikit-learn to 0.22.1.
Use the following commands in bash to change the versions:
pip install --upgrade pandas==0.25.1
pip install --upgrade scikit-learn==0.22.1
The generic syntax for upgrading is:
pip install --upgrade package_name==version
If the error occurs in the AutoML configuration file, that package needs upgrading as well, but it is advisable to uninstall and reinstall it:
pip uninstall azureml-train-automl
Then reinstall with:
pip install azureml-train-automl
If you are using the Windows operating system, install Miniconda. If you are a Linux user, use sudo or conda commands for the same operations.
Some supporting libraries, such as TensorFlow, are not installed by default; they need to be installed through the environment's dependencies.
from azureml.core.runconfig import RunConfiguration
from azureml.core.conda_dependencies import CondaDependencies

run_config = RunConfiguration()
run_config.environment.python.conda_dependencies = CondaDependencies.create(conda_packages=['tensorflow==1.12.0'])
Documentation credit to Larry Franks.

PyTorch code suddenly fails on Colab with "NVIDIA driver on your system is too old"

I had some code which worked on colab (gpu runtime) just a short while ago. Suddenly I am getting
The NVIDIA driver on your system is too old (found version 10010).
nvcc shows
Cuda compilation tools, release 10.1, V10.1.243
I tried torch versions 1.5.1, then 1.13.0. Both keep getting this error.
There is a discussion showing other people having the same doubts, with no clear resolution:
https://github.com/pytorch/pytorch/issues/27738
Anyone having the same problem?
The light-the-torch package is designed to solve exactly this type of issue. Try this:
!pip install light-the-torch
!ltt install torch torchvision
I think this might be to do with the fact that Google Colab randomly connects you to a GPU when you start a runtime. Some might have different drivers installed, which could cause that error to appear only part of the time, as you've experienced.
You can see the current version of CUDA by running !nvidia-smi in Colab. You can then simply install a version of PyTorch that is compatible with this version of CUDA. The PyTorch website can generate a pip command for your language/environment/CUDA version, and there is also a list of previous versions and their corresponding commands if you have a CUDA version that the current version doesn't support.
This is what I got working with a CUDA version of 10.1:
!pip install torch==1.7.1+cu101 torchvision==0.8.2+cu101 -f https://download.pytorch.org/whl/torch_stable.html
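The +cu101 suffix is the wheel's local version tag and encodes the CUDA toolkit the binary was built against, which is what has to match the runtime's driver. A small sketch (my own helper, not a PyTorch API) that decodes such tags:

```python
import re

def cuda_toolkit(version_string):
    """Extract the CUDA toolkit version from a wheel tag like '1.7.1+cu101'.

    Returns e.g. '10.1', or None for a CPU-only build without a +cu tag.
    """
    match = re.search(r"\+cu(\d+)", version_string)
    if not match:
        return None
    digits = match.group(1)
    # The last digit is the minor version: 'cu101' -> '10.1', 'cu92' -> '9.2'
    return f"{digits[:-1]}.{digits[-1]}"

print(cuda_toolkit("1.7.1+cu101"))  # 10.1
```

Comparing this against what !nvidia-smi reports tells you immediately whether a given wheel can work on the runtime you were assigned.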
Late reply, but maybe it will help others to ease their problems.
PyTorch has a "previous versions" page with the recommended installation commands for both torch and torchvision for older releases. For your case, I used !pip install torch==1.6.0+cu101 torchvision==0.7.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html and it worked just fine in Google Colab, taking the NVIDIA driver version into consideration as well.
I was also getting the same error.
This one fixed the issue for me:
pip install torch==1.4.0+cu100 torchvision==0.5.0+cu100 -f https://download.pytorch.org/whl/torch_stable.html
I just had the same issue on google colab. Following this github issue I downgraded to torch==1.4.0 using !pip.
Since I had several other requirements, I am reading them from a local file like this:
with open('attribute_hallucination/editing_tool/requirements.txt', 'w') as f:
f.write("cupy-cuda101==7.4.0\ncycler==0.10.0\nfastrlock==0.4\nfuture==0.18.2\nimageio==2.8.0\njoblib==0.14.1\nkiwisolver==1.2.0\nmatplotlib==3.2.1\nnumpy==1.18.4\nopencv-python==4.2.0.34\nPillow==7.1.2\npynvrtc==9.2\npyparsing==2.4.7\npython-dateutil==2.8.1\nscikit-learn==0.22.2.post1\nscipy==1.2.0\nsix==1.14.0\nsklearn==0.0\ntorch==1.4.0\ntorchvision==0.6.0\ntqdm==4.46.0")
!pip install -r attribute_hallucination/editing_tool/requirements.txt
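The single long f.write string works, but it is hard to read or edit. The same file can be built from a plain list of pins (same idea, just restructured; only a few of the pins from above are shown for brevity):

```python
# A subset of the pins from the requirements above -- add the rest the same way.
pins = [
    "torch==1.4.0",
    "torchvision==0.6.0",
    "numpy==1.18.4",
    "scipy==1.2.0",
]

with open("requirements.txt", "w") as f:
    f.write("\n".join(pins) + "\n")
```

Then install with !pip install -r requirements.txt as before.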
This one worked for me:
pip install torch==1.8.1+cu101 torchvision==0.9.1+cu101 torchaudio==0.8.1 -f https://download.pytorch.org/whl/torch_stable.html
Source: https://discuss.pytorch.org/t/userwarning-cuda-initialization-the-nvidia-driver-on-your-system-is-too-old-found-version-10010/141547/5
I'm not sure if it is important, but I ran this after I installed all my other dependencies.

Using installed pip packages in a newer release of Python3

Because of the size of the installed pip packages, I moved the ~/.local/lib/python3.6 directory to another partition and then made a symlink (a.k.a. a shortcut, for Windows users) to it, and that worked fine.
But recently I installed a newer Linux distribution that has Python 3.8 pre-installed, and I want to re-use those python3.6 packages (since the major version is the same) to avoid re-installing almost 3.5 GB of them.
I made a symlink (~/.local/lib/python3.8/, which is the exact name that pip creates itself) to /path/to/external/lib/python3.6, but now the Python shell is unable to recognize those packages.
Any ideas?
Do I need to re-install them?
Thanks.
These projects have to be reinstalled. There is no compatibility guarantee between minor versions (or major, obviously), only between maintenance versions.
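One concrete reason, beyond the general guarantee: compiled extension modules are tagged with the exact interpreter version in their filenames, so a 3.8 interpreter will not even see a 3.6 build. You can inspect the tags your own interpreter accepts:

```python
import sys
from importlib.machinery import EXTENSION_SUFFIXES

# site-packages is laid out per minor version, e.g. lib/python3.8/site-packages,
# which is why pip creates a fresh directory for each interpreter.
print(f"lib/python{sys.version_info.major}.{sys.version_info.minor}/site-packages")

# Compiled modules carry the interpreter tag in their filename, e.g.
# 'foo.cpython-38-x86_64-linux-gnu.so'. A 3.6 build is tagged 'cpython-36'
# and is simply invisible to a 3.8 interpreter.
print(EXTENSION_SUFFIXES)
```

So relinking the directory cannot work for any package with C extensions (numpy, scipy, torch, ...); pure-Python packages might happen to import, but they are still unsupported across minor versions.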

How do I "build with the CLI option" with conda-build?

I am new to Python, the Anaconda environment, conda, pip, all of it, so please bear with me if these are simple questions. I've asked a couple previous questions about this install which so far have been resolved. Here was my previous question. All of my issues have to do with conda-build meta files which don't work and need some hand-editing in order to succeed.
Background:
I am trying to install the package called ibm-watson in my Python, in a separate conda environment cloned from my base environment. This is in support of a Coursera course. The courseware builds this package in its own Jupyter window with a pip install. I wanted to build the examples in my own environment, and I'm working in Anaconda at the recommendation of many people.
When I first ran into issues with conda-build which I couldn't figure out, I decided to try pip. That worked, but led to other problems (which online articles warned about). Conda (I read) doesn't know about things installed with pip, and that screws up its ability to manage packages and environments. So I decided to back out the pip install and try to make it work with conda.
First question: Why does pip install work correctly and recursively build all dependencies but conda-build does not? Am I just not using the right options for conda?
So here are the meta.yaml issues I've uncovered so far and resolved with the help of people here.
Version string that said '>=2.0,' with an extraneous comma.
package name that was shown with underscores but actually needed hyphens
Dependencies which I fixed by downloading the required packages one by one and building, frequently dealing with the same issues above in the meta.yaml
Most recently, this string, which was throwing an error till I guessed that the quote marks were the issue: typing; # [ py <'3.5' ]
Also the install command from the conda documentation conda install --use-local my-package doesn't work, and per a discussion on Github, I am instead using conda install -c ${CONDA_PREFIX}/conda-bld/my-package
Second question: Why is the conda process so buggy? Are the IBM developers just careless in their testing or is it conda that's at fault or am I using all of this wrong?
And finally, the real question
The last dependency I had to build was python-dotenv installed from PyPi. I built that with conda like the others:
conda skeleton pypi python-dotenv
conda-build python-dotenv (after making the above change to meta.yaml)
That gets all the way through building but then throws this error:
Run pip install "python-dotenv[cli]" to fix this. Tests failed for python-dotenv-0.11.0-py37_0.tar.bz2 - moving package to /Users/(myname)/opt/anaconda3/envs/coursera/conda-bld/broken
Since I'm not using pip, how do I do what it's asking me to do? I tried just doing conda-build "python-dotenv[cli]" but got "no valid recipes for python-dotenv[cli]".
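For context: the [cli] part is pip's "extras" syntax. It names an optional dependency set declared by the package itself (for python-dotenv, the cli extra pulls in extra packages such as click), so it is not something conda-build understands; on the conda side, the equivalent fix is to add the extra's dependencies to the recipe's run requirements in meta.yaml. A tiny illustrative parser for the bracket syntax (my own sketch, not pip's API; real parsing lives in the packaging library):

```python
import re

def split_extras(requirement):
    """Split a pip requirement like 'python-dotenv[cli]' into (name, extras)."""
    match = re.fullmatch(r"([A-Za-z0-9._-]+)(?:\[([^\]]+)\])?", requirement.strip())
    if not match:
        raise ValueError(f"cannot parse requirement: {requirement!r}")
    name, extras = match.group(1), match.group(2)
    if extras:
        return name, [e.strip() for e in extras.split(",")]
    return name, []

print(split_extras("python-dotenv[cli]"))  # ('python-dotenv', ['cli'])
```

In other words, passing "python-dotenv[cli]" to conda-build was never going to work; the brackets are pip-only syntax.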

Tensor flow package installation in PyCharm

I have been successfully using PyCharm for my Python work. All the packages can be easily installed by going to Settings and then Project Interpreter, but the TensorFlow installation shows an error. In the suggestions it asked me to upgrade the pip module, but even after that it fails with the following message:
" Could not find a version that satisfies the requirement tensorflow (from versions: ) No matching distribution found for tensorflow"
I am able to install all other packages; the error is with TensorFlow only. I am using Windows.
Please run pip install tensorflow in command line and post the output here. Tensorflow can be installed on Windows but the process is often annoying.
You could also try Anaconda. It has a very nice UI and you can switch between different versions.
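A common cause of "No matching distribution found for tensorflow" on Windows at the time was the interpreter itself: TensorFlow 1.x wheels were only published for 64-bit Python in a narrow version range (roughly 3.5-3.7 on Windows), so a 32-bit or out-of-range Python yields exactly that pip error. A quick stdlib check of what you are actually running:

```python
import struct
import sys

# Pointer size in bits: 32 on a 32-bit interpreter, 64 on a 64-bit one.
bits = struct.calcsize("P") * 8
print(f"Python {sys.version_info.major}.{sys.version_info.minor}, {bits}-bit")
```

If this reports 32-bit, or a version outside TensorFlow's supported range, pip will find no compatible wheel regardless of what PyCharm does.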
