conda info showing the same old version after conda update in Linux

I followed the conda docs to update conda on a Google Cloud server with this:
conda update -n base -c defaults conda
After this, it shows:
Solving environment: done
==> WARNING: A newer version of conda exists. <==
current version: 4.5.11
latest version: 4.12.0
Please update conda by running
$ conda update -n base -c defaults conda
# All requested packages already installed.
Then I tried this:
conda update --all
It gave the same output, plus some packages and progress:
Solving environment: done
==> WARNING: A newer version of conda exists. <==
current version: 4.5.11
latest version: 4.12.0
Please update conda by running
$ conda update -n base -c defaults conda
## Package Plan ##
environment location: <path>
The following packages will be downloaded:
<packages and progress>
Then I ran conda info:
active environment : None
shell level : 0
user config file : <path>/.condarc
populated config files :
conda version : 4.5.11
conda-build version : 2.0.2
python version : 3.5.6.final.0
base environment : <path>/anaconda3 (writable)
channel URLs : https://repo.anaconda.com/pkgs/main/linux-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/free/linux-64
https://repo.anaconda.com/pkgs/free/noarch
https://repo.anaconda.com/pkgs/r/linux-64
https://repo.anaconda.com/pkgs/r/noarch
https://repo.anaconda.com/pkgs/pro/linux-64
https://repo.anaconda.com/pkgs/pro/noarch
package cache : <path>/anaconda3/pkgs
<path>/.conda/pkgs
envs directories : <path>//anaconda3/envs
<path>//.conda/envs
platform : linux-64
user-agent : conda/4.5.11 requests/2.25.1 CPython/3.5.6 Linux/5.13.0-1019-gcp ubuntu/20.04 glibc/2.31
UID:GID : <the ID>
netrc file : None
offline mode : False
There was no error during the last installation, but conda still seems not to be updated. I was not inside any environment when I ran the commands. I originally downloaded and installed conda 4.2.0 on this Google Cloud server. Is my conda updated to 4.12 or not, and if not, how can I update it properly?
Thanks.

While it is normally not recommended to update Python in place, that Python (3.5) is quite outdated and is likely what is preventing the conda package from being updated.
Try:
conda install -n base --dry-run python=3.9 conda=4.12
to see if updating is possible. If so, try again without the --dry-run flag.
⚠️ Note this is a risky update: if the conda package does not upgrade correctly along with Python, the installation could break. I'd recommend a backup first.
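For reference, a minimal sketch of a backup-then-update sequence, assuming the base install lives at ~/anaconda3 (use the path that conda info actually reports):
# back up the existing install first (the ~/anaconda3 path is an assumption)
tar -czf ~/anaconda3-backup.tar.gz -C ~ anaconda3
# check whether the solver can reach the newer python/conda
conda install -n base --dry-run python=3.9 conda=4.12
# if the dry run succeeds, run it for real
conda install -n base python=3.9 conda=4.12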

Related

ansible-inventory --list command failing with gce plugin error: gce inventory plugin cannot start

I have verified my Python 3, module path, and collections path on a new Ansible control server (Debian 11). However, when pointing the ansible-inventory --list command at a Google Cloud gcp.yml inventory file, I get this error:
[WARNING]: * Failed to parse /home/user/CODE/ops/ansible/inventory/gcp-dynamic/demo3/gcp.yml with
ansible_collections.google.cloud.plugins.inventory.gcp_compute plugin: gce inventory plugin cannot start: Failed to import the required Python library
(google-auth) on server's Python /usr/bin/python3. Please read the module documentation and install it in the appropriate location
This is the full inventory command:
ansible-inventory --list -i /path/to/gcp.yml
This is the gcp.yml file:
---
plugin: gcp_compute
projects:
- project9
auth_kind: serviceaccount
service_account_file: /home/user/.config/credentials.json
# set up group parsing, prefix=inventory label
keyed_groups:
- key: labels
prefix: demo3
This is my ansible --version output:
ansible [core 2.13.2]
config file = /home/user/.ansible.cfg
configured module search path = ['/home/user/.local/lib/python3.9/site-packages/ansible/modules']
ansible python module location = /home/user/.local/lib/python3.9/site-packages/ansible
ansible collection location = /home/user/.local/lib/python3.9/site-packages/ansible_collections
executable location = /home/user/.local/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
jinja version = 3.1.2
libyaml = True
this is the output from pip3 show google-auth:
Name: google-auth
Version: 2.10.0
Summary: Google Authentication Library
Home-page: https://github.com/googleapis/google-auth-library-python
Author: Google Cloud Platform
Author-email: googleapis-packages@google.com
License: Apache 2.0
Location: /home/user/.local/lib/python3.9/site-packages
Requires: cachetools, pyasn1-modules, rsa, six
Required-by:
In my ~/.ansible.cfg I have this:
# (pathspec) Colon separated paths in which Ansible will search for Modules.
library=/home/user/.local/lib/python3.9/site-packages/ansible/modules:/home/user/.local/lib/python3.9/site-packages
Any suggestions on what to try next?
I went back and forth between installing Ansible via pip, as the Ansible docs prescribe, and via apt.
I finally got this dynamic inventory working with this setup:
Debian 11 base VM from GCP, then do everything as root:
python - comes pre-installed (/usr/bin/python3)
pip
ansible
google-auth
I made sure to do all the additional package installs as root, and not have the installation geared toward a single user:
pip - installed with apt: sudo apt install python3-pip
ansible - installed with apt: sudo apt install ansible
google-auth - installed as root with pip: sudo pip3 install google-auth
It seems the major issue was mismatched installation methods between pip and apt. Everything sorted itself out when I did everything via apt and did not install Ansible via pip.
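As a sanity check (not part of the original answer), it can help to confirm that the interpreter the error complains about, /usr/bin/python3, can actually import google.auth before re-running the inventory command:
/usr/bin/python3 -c "import google.auth; print(google.auth.__version__)"
ansible-inventory --list -i /path/to/gcp.yml
If that import fails, installing google-auth system-wide as in the answer (sudo pip3 install google-auth) is what makes it visible to that interpreter.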

install cudf on databricks

I am trying to use cudf on Databricks.
I started following https://medium.com/rapids-ai/rapids-can-now-be-accessed-on-databricks-unified-analytics-platform-666e42284bd1, but the init script link is broken.
Then I followed this guide (https://github.com/rapidsai/spark-examples/blob/master/getting-started-guides/csp/databricks/databricks.md#start-a-databricks-cluster), which installs the cudf jars on the cluster. Still, I could not import cudf.
I also tried:
%sh conda install -c rapidsai -c nvidia -c numba -c conda-forge cudf=0.13 python=3.7 cudatoolkit=10.1
which also failed with a long error ending with:
active environment : /databricks/python
active env location : /databricks/python
shell level : 2
user config file : /root/.condarc
populated config files : /databricks/conda/.condarc
conda version : 4.8.2
conda-build version : not installed
python version : 3.7.6.final.0
virtual packages : __cuda=10.2
__glibc=2.27
base environment : /databricks/conda (writable)
channel URLs : https://conda.anaconda.org/nvidia/linux-64
https://conda.anaconda.org/nvidia/noarch
https://conda.anaconda.org/rapidsai/linux-64
https://conda.anaconda.org/rapidsai/noarch
https://conda.anaconda.org/numba/linux-64
https://conda.anaconda.org/numba/noarch
https://conda.anaconda.org/conda-forge/linux-64
https://conda.anaconda.org/conda-forge/noarch
https://conda.anaconda.org/pytorch/linux-64
https://conda.anaconda.org/pytorch/noarch
https://repo.anaconda.com/pkgs/main/linux-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/r/linux-64
https://repo.anaconda.com/pkgs/r/noarch
package cache : /databricks/python/pkgs
/local_disk0/conda/pkgs
envs directories : /databricks/conda/envs
/root/.conda/envs
platform : linux-64
user-agent : conda/4.8.2 requests/2.22.0 CPython/3.7.6 Linux/4.4.0-1114-aws ubuntu/18.04.5 glibc/2.27
UID:GID : 0:0
netrc file : None
offline mode : False
An unexpected error has occurred. Conda has prepared the above report.
Upload successful.
Any idea how to use cudf on a Databricks cluster?
I remember helping write that blog a while ago :). It's out of date now.
Karthik and team have made some great updates since then with spark-rapids. Here is the newest implementation of RAPIDS with Databricks in Spark: https://nvidia.github.io/spark-rapids/docs/get-started/getting-started-databricks.html. That should get you running with the newest version of cudf.
I'll ask that someone add a disclaimer to that specific blog so that others don't get confused either. Thanks for alerting us through this question!
Perhaps you need cudatoolkit=10.2? You have virtual packages : __cuda=10.2 in that report.
I am investigating install issues on a databricks GPU cluster (different issue though) and noted that the version of CUDA was 10.2 and not the 10.1 that I expected.
I think the OP wants to use Python with cudf.
If so, that is not covered in the documentation.
But I added the lines below to the generate-init-script.ipynb to make it work:
#Use mamba to install packages to speed up conda resolve time
base=$(conda info --base)
conda create -y -n mamba -c conda-forge mamba
pip uninstall -y pyarrow
${base}/envs/mamba/bin/mamba remove -y c-ares zstd libprotobuf pandas
${base}/envs/mamba/bin/mamba install -y "pyarrow=1.0.1" -c "conda-forge"
${base}/envs/mamba/bin/mamba install -y -c "rapidsai" -c "nvidia" -c "conda-forge" -c "defaults" "cudf=0.18" "cudatoolkit=10.1"
conda env remove -n mamba
Note: change the cudf and cudatoolkit versions according to your environment.
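To verify the install after the cluster restarts, a quick check from a %sh notebook cell should work; the interpreter path is an assumption based on the /databricks/python environment shown in the error report above:
%sh /databricks/python/bin/python -c "import cudf; print(cudf.__version__)"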

Change conda default pkgs_dirs and envs dirs

I am using conda on a cluster. There is a space limit on the /home directory.
This limit has been reached and I would like to move the ~/.conda directory to another location, keeping the already installed packages. What is the best way to proceed?
conda version : 4.7.5
Linux: centos
I tried:
conda config --set pkgs_dirs /mnt/lustre/users/bdiallo/.conda/pkgs/
and got the following error message:
CondaValueError: Key 'pkgs_dirs' is not a known primitive parameter.
pkgs_dirs is a list-type (sequence) setting, so it has to be added with conda config --add (or --append) rather than set with --set.
Windows:
conda config --add pkgs_dirs \\server\share\conda\pkgs
Linux:
conda config --add pkgs_dirs /mnt/lustre/users/bdiallo/.conda/pkgs/
Or manually edit .condarc in your home directory (ref):
pkgs_dirs:
- /opt/anaconda/pkgs
Watch for typos, both pkg and dir need to be plural: pkgs_dirs.
Thanks to @darthbith and @naeem-khoshnevis in the comments.
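If the environments themselves should also live outside /home (the question mentions moving all of ~/.conda), envs_dirs can be redirected the same way; the path below just mirrors the question's example and is an assumption:
conda config --add envs_dirs /mnt/lustre/users/bdiallo/.conda/envs/
or in .condarc:
envs_dirs:
- /mnt/lustre/users/bdiallo/.conda/envs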

pip failed to install dbus-python

I planned to install dbus-python:
$ pip --version; python --version
pip 19.0.3 from /home/me/anaconda3/lib/python3.7/site-packages/pip (python 3.7)
Python 3.7.3
The platform:
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.10
Release: 18.10
Codename: cosmic
When I run pip install dbus-python, it reports these errors:
checking for DBUS... no
configure: error: in `/tmp/pip-install-hr9djbwg/dbus-python/build/temp.linux-x86_64-3.7':
configure: error: The pkg-config script could not be found or is too old. Make sure it
is in your PATH or set the PKG_CONFIG environment variable to the full
path to pkg-config.
Alternatively, you may set the environment variables DBUS_CFLAGS
and DBUS_LIBS to avoid the need to call pkg-config.
See the pkg-config man page for more details.
Check:
$ echo $PKG_CONFIG
# returns nothing
I installed pkg-config just now:
$ pkg-config --version
0.29.1
It still reports these errors:
checking for DBUS... no
configure: error: Package requirements (dbus-1 >= 1.8) were not met:
No package 'dbus-1' found
Consider adjusting the PKG_CONFIG_PATH environment variable if you
installed software in a non-standard prefix.
Alternatively, you may set the environment variables DBUS_CFLAGS
and DBUS_LIBS to avoid the need to call pkg-config.
See the pkg-config man page for more details.
However, the dbus-1 package cannot be installed with pip:
$ pip install dbus-1
Collecting dbus-1
Could not find a version that satisfies the requirement dbus-1 (from versions: )
No matching distribution found for dbus-1
First I looked up the package:
sudo apt-cache search dbus-1
Then I installed the required packages with:
sudo apt install libdbus-1-3 libdbus-1-dev
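Putting the pieces together, the sequence that resolves the build errors on an apt-based system (Ubuntu 18.10 in the question) is roughly:
# install pkg-config plus the dbus development headers, then retry
sudo apt install pkg-config libdbus-1-3 libdbus-1-dev
pip install dbus-python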
I encountered the same problem. dbus-python seems to be a deprecated module; dbus-next seems to be its replacement.
pip install dbus-next

How to transfer a conda environment to an off-line computer

I have a conda environment at home that I'm using for my Ph.D., but now that I need more computational power I have to transfer (or install a perfect copy of) my environment to one of the university's computers. The computers have no internet connection; all I have is SSH.
My attempt to copy the entire /anaconda3 directory and .bashrc to a path similar to the one I use at home (/home/henrique/bin) has not worked.
What is the correct way to transfer my anaconda install?
conda-pack is a command-line tool that archives a conda environment, including all the binaries of the packages installed in it. This is useful when you want to reproduce an environment with limited or no internet access. All the other methods download packages from their respective repositories to create an environment. Keep in mind that conda-pack is both platform and operating system specific, and that the target computer must have the same platform and OS as the source computer.
To install conda-pack, make sure you are in the root or base environment so that it is available in sub-environments. conda-pack is available on conda-forge and from PyPI.
conda-forge:
conda install -c conda-forge conda-pack
PyPI:
pip install conda-pack
To package an environment:
# Pack environment my_env into my_env.tar.gz
$ conda pack -n my_env
# Pack environment my_env into out_name.tar.gz
$ conda pack -n my_env -o out_name.tar.gz
# Pack environment located at an explicit path into my_env.tar.gz
$ conda pack -p /explicit/path/to/my_env
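Since the question says the only access to the university machine is SSH, the archive can then be copied over before unpacking; the hostname and destination path below are placeholders:
# Copy the packed environment to the offline machine
$ scp my_env.tar.gz user@university-host:/home/user/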
To install the environment:
# Unpack environment into directory `my_env`
$ mkdir -p my_env
$ tar -xzf my_env.tar.gz -C my_env
# Use Python without activating or fixing the prefixes. Most Python
# libraries will work fine, but things that require prefix cleanups
# will fail.
$ ./my_env/bin/python
# Activate the environment. This adds `my_env/bin` to your path
$ source my_env/bin/activate
# Run Python from in the environment
(my_env) $ python
# Cleanup prefixes from in the active environment.
# Note that this command can also be run without activating the environment
# as long as some version of Python is already installed on the machine.
(my_env) $ conda-unpack
Source
