I am using Conda on a shared compute cluster where the numerical and I/O libraries have been tuned for the system.
How can I tell Conda to use these and only worry about the libraries and packages that are not already on the path?
For example:
There is an openmpi library installed, and the package that I would like to install and manage with Conda also has it as a dependency.
How can I tell Conda to just worry about what is not there?
One trick is to use a shell package - an empty package whose only purpose is to satisfy constraints for the solver. This is something that Conda Forge does with mpich, as mentioned in this section of the documentation. Namely, for every version, they include an external build variant that one can install like
conda install mpich=3.4.2=external_*
signaling that it will be supplied by the host. One can consult the recipe's meta.yaml for a concrete example.
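For the openmpi case, one would have to create a similar stub locally. Roughly, a shell package recipe is little more than a meta.yaml with no sources and no files; the sketch below is purely illustrative (the name, version, and build string are assumptions, not the actual conda-forge recipe):
# meta.yaml for a hypothetical "external" openmpi stub (illustrative only)
package:
  name: openmpi
  version: 4.1.1        # match whatever the cluster actually provides

build:
  number: 0
  string: external_0    # mirrors the external_* convention used by conda-forge's mpich

# intentionally no source section and no install steps: the package ships nothing
# and exists only to satisfy the solver, while the real OpenMPI comes from
# the cluster's own modules
Built with conda build and served from a local channel, such a stub would let the solver treat the dependency as satisfied while everything at runtime still links against the system library.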
I don't think this is great (seems like a lot of work), but I also don't know of a better alternative.
According to PEP 632, distutils will be formally marked as deprecated, and in Python 3.12, it will be removed. My product is soon going to support Python 3.10 and I don't want to put up with deprecation warnings, so I would like to remove references to distutils now. The problem is that I can't find good, comprehensive documentation that systematically lets me know that A in distutils can be replaced by B in modules C, D, and E. The Migration Advice in the PEP is surprisingly sketchy, and I haven't found standard documentation for distutils, or for whatever modules (such as setuptools?) that are required to replace distutils, that would let me fill in the gaps. Nor am I sure how to look at the content of the installed standard distribution (that is, the physical directories and files) in order to answer these questions for myself.
The "Migration Advice" section says:
For these modules or types, setuptools is the best substitute:
distutils.ccompiler
distutils.cmd.Command
distutils.command
distutils.config
distutils.core.Distribution
distutils.errors
...
For these modules or functions, use the standard library module shown:
...
distutils.util.get_platform — use the platform module
Presumably, that means that setuptools has either a drop-in replacement or something close to it for these modules or types (though I'm not sure how to verify that). So, for instance, perhaps setuptools.command.build_py can replace distutils.command.build_py. Is that correct? In any case, what about these?
distutils.core.setup
distutils.core.Extension
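If these are as close to drop-in as the PEP implies, I would expect a setup.py to need little more than an import swap, something like the following (my own untested guess, with made-up project metadata):
# instead of: from distutils.core import setup, Extension
from setuptools import setup, Extension

setup(
    name="example",        # placeholder metadata, just for illustration
    version="0.1",
    ext_modules=[Extension("example._ext", sources=["src/ext.c"])],
)
Is that the intended migration path?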
Furthermore, what am I supposed to make of the fact that setuptools does not appear under the modules or index list in the standard documentation? It is part of the standard distribution, right? I do see it under Lib/site-packages.
UPDATE 1: If setuptools is not currently part of the standard distribution, is it expected to become one, say, in Python 3.11 or 3.12? Are customers expected to install it (via pip?) before they can run a setup.py script that imports setuptools? Or is the thought that people shouldn't be running setup.py anymore at all?
Knowing how to replace distutils.core.setup and distutils.core.Extension is probably enough for my current needs, but answers to the other questions I've asked would be quite helpful.
UPDATE 2:
setuptools is indeed part of the python.org standard distribution, as can be determined by importing it from a freshly installed Python interpreter from python.org. The thing that was confusing me was that it is documented on a separate site, with a separate style, from python.org. However, as SuperStormer pointed out in the comments, some Linux distributions, such as Debian, don't install it by default when you install Python.
UPDATE 3:
This command:
Python-3.9.1/python -m ensurepip --default-pip
installs both pip and setuptools on Debian 10 on a Python installation freshly downloaded and built from python.org. Before the command is executed, pip is absent and setuptools cannot be imported.
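A quick way to confirm the before/after state (the relative path simply matches my local build):
Python-3.9.1/python -c "import setuptools; print(setuptools.__version__)"
Python-3.9.1/python -m pip --version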
I need libv8-3.14 to run some R packages on Linux, but I don't have root/sudo access on the Linux computer I'm using, so I'd like to install libv8-3.14 into an external folder. I've seen R packages reference it externally as CDFLAG="folder/v8-3.14", so I know it is possible.
I'm new(ish) to Linux, but I've installed external libraries before from tar.gz files that contain a configure script, setting the external folder with ./configure --prefix=/folder/loc. However, the only downloads I can find of libv8 are .git repositories (which I can't get to work either).
How can I install libv8-3.14 into a folder so I can set:
export PATH=$PATH:/path/to/install/
and
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/path/to/install/
I had the exact same problem. In case somebody in the future comes across this post, I will leave my suggestions and how it worked out in the end. Also, all credits go to an experienced colleague of mine.
The surest thing to do is to consult IT, or someone who has already had the same problem; there is usually a workaround for these issues.
A way you can do it yourself:
Create an Anaconda environment; you can name it 'V8' or something (make sure the environment is based on the latest Python version, or one recent enough for r-v8).
Activate it.
Install the conda version of the V8 R interface with conda install -c conda-forge r-v8.
That's it. Whenever you need V8, fire up your environment beforehand, and it should be A-OK.
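Put together, the commands look roughly like this (the environment name and Python version are just placeholders):
conda create -n V8 python=3.9        # any reasonably recent Python should do
conda activate V8
conda install -c conda-forge r-v8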
Further advice: If you run into errors when installing r-v8, it may be a good idea to update your conda and all the packages. However, depending on your conda version, conda update conda and conda upgrade --all MAY BREAK your conda installation, so be careful. (For further information on this problem, see the endless complaints of people in this issue: https://github.com/conda/conda/issues/8920.)
V8 doesn't use autotools, so it has no ./configure. In fact, it provides no installation facilities at all, because it is meant for embedding, not installing.
What I would try is to download the Ubuntu package (guessing from your other question, you are on Ubuntu, right?) for the right architecture from https://packages.ubuntu.com/trusty/libv8-3.14.5, and extract it manually. .deb files are just ar archives (containing tarballs), so they can be unpacked without root privileges.
As a side note, there's no point in setting PATH, because libv8, being a library, provides no executables. LD_LIBRARY_PATH is all you need.
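A rough sketch of what that could look like (the exact .deb filename and the destination directory are placeholders and will differ on your system):
# after downloading the libv8-3.14 .deb for your architecture from packages.ubuntu.com:
dpkg -x libv8-3.14.5_<version>_amd64.deb $HOME/libv8
# the shared objects end up somewhere under usr/lib inside the extraction folder; adjust accordingly
export LD_LIBRARY_PATH=$HOME/libv8/usr/lib:$LD_LIBRARY_PATH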
I am using conda version 4.5.11, python 3.6.6, and Windows 10.
I create a virtual environment using conda
conda create --name venv
When I check for installed packages
conda list
it is (as expected) empty.
But
pip list
is quite long.
Question #1: Why? - when I create a virtual environment using
python -m venv venv
the pip list is empty.
When I am not in an activated virtual environment, then
conda list
is also quite long, but it isn't the same as the pip list (* see follow up below)
In general, the pip list is a subset of the conda list. There is at least one exception ('tables' is in the pip list but not in the conda list), but I haven't analysed it too closely. The conda list changes/displays some (all?) hyphens as underscores (or pip does the reverse). There are also a few instances of versions being different.
Question #2: Why? (and follow up questions - can they be? and should I care?)
I was hoping to have a baseline conda 'environment' (that may not be the right word), i.e., the packages I have installed/updated into Anaconda/conda, and then all virtual environments would be pulled from that. If I needed to install something new, it would first be installed into the baseline. Only when I need to create an application using different versions of packages from the baseline (which I don't envision in the foreseeable future) would I need to update the virtual environments differently.
Question #3: Am I overthinking this? I am looking for consistency and hoping for understanding.
-- Thanks.
Craig
Follow Up #1: After installing some packages into my empty conda venv, the results of conda list and pip list are still different. The pip list is much shorter than it was, and it is now a subset of the conda list (the only differences are two packages I don't use, so I don't care).
Follow Up #2: In the empty environment, I ran some code
python my-app.py
and was only mildly surprised that it ran without errors. As expected, when I installed a package (pytest), it failed to run due to the missing dependencies. So ... empty is not empty.
1. conda list vs pip list
If all you did was create the environment (conda create -n venv), then nothing is installed in there, including pip. Nevertheless, the shell is still going to try to resolve pip using the PATH environment variable, and is possibly finding the pip in the Anaconda/Miniconda base environment.
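You can check which pip is actually being picked up; on Windows, for example:
where pip
pip --version    # the reported path typically points back into the base Anaconda/Miniconda installation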
2. pip list is subset of conda list outside env
This could simply be a matter of conda installing things other than Python packages, which pip doesn't have the option to install. Conda is a more generic package manager and brings in all the dependencies (e.g., shared libraries) necessary to run each package - by definition this is a broader range than what is available from PyPI.
3. Overthinking
I think this is more of a workflow style question, and generally outside the scope of StackOverflow because it's going to get opinionated answers. Try searching around for best practice recommendations and pick a style suited to your goals.
Personally, I would never try to install everything into my base/root Conda environment simply because the more one installs, the more one has dependency requirements pulling in different directions. In the end, Conda will centralize all packages anyway (anaconda/pkgs or miniconda3/pkgs), so I focus on making modular environments that serve specific purposes.
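For instance, a couple of purpose-specific environments might look like this (the names and package sets are only examples):
conda create -n data-analysis python=3.6 numpy pandas matplotlib
conda create -n webapp python=3.6 flask sqlalchemy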
I'm trying to write a Python utility that is a thin wrapper around an existing command line program (wmctrl - https://sites.google.com/site/tstyblo/wmctrl/).
I'd like to make my program available on PyPI to share my work later on. But since wmctrl provides the core functionality, my code depends heavily on it being installed.
Is there a way to configure setuptools to depend on a non-Python/non-setuptools dependency like wmctrl? I'd like setuptools to fail the install if the binary isn't there. (Ideally, I'd like setuptools to install it, but that seems less likely...).
Python Packaging/setuptools seems to be really geared towards working with other Python-packaged dependencies (PyPI packages, setuptools-based VCS repos, etc.). I haven't been able to find anything online about configuring it to depend on other third-party executables.
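The closest thing I've come up with myself is a custom install command that checks for the binary and aborts if it is missing, but I don't know whether this is considered acceptable practice (untested sketch with a placeholder project name):
# setup.py (sketch): fail the install if wmctrl is not on PATH
import shutil
from setuptools import setup
from setuptools.command.install import install

class InstallWithWmctrlCheck(install):
    def run(self):
        if shutil.which("wmctrl") is None:
            raise SystemExit("wmctrl executable not found on PATH; please install wmctrl first")
        super().run()

setup(
    name="my-wmctrl-wrapper",    # placeholder project name
    version="0.1",
    cmdclass={"install": InstallWithWmctrlCheck},
)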
Thanks to anyone who can offer some guidance/help here.
I'm experiencing a problem with the interaction between the ghc-mod plugin in Emacs and NixOS 14.04. Basically, once packages are installed via nix-env -i, they are visible from ghc and ghci and recognised by haskell-mode, but not found by ghc-mod.
To avoid information duplication, you can find all details, and the exact replication of the problem in a VM, in the bug ticket https://github.com/kazu-yamamoto/ghc-mod/issues/269
The current, default package management setup for Haskell on NixOS does not work well with packages that use the ghc-api or similar runtime resources (ghc-mod, hint, plugins, hell, ...). It takes a little more work to create a Nix expression that integrates them well into the rest of the environment. This is called making a wrapper expression for the package; for an example, look at how GHC is installed and operates on NixOS.
It is reasonable that this is difficult, since you are trying to make an install procedure that is atomic but interacts with an unknown number of other system packages, each with their own atomic installs and updates. It is doable, but there is a quicker workaround.
Look at this example on the install page of the wiki. Instead of trying to create a ghc-mod package that works atomically, you weld it onto ghc, so that ghc+ghc-mod is an atomic update.
I installed ghc+ghc-mod with the below install script added to my ~/.nixpkgs/nixpkgs.nix file.
hsEnv = haskellPackages.ghcWithPackages (self : [
  self.ghc
  self.ghcMod
  # add more packages here
]);
Install the package with something like:
nix-env -i hsEnv
or better most of the time:
nix-env -iA nixpkgs.haskellPackages.hsEnv
I have an alias for the above so I do not have to type it out every time. It is just:
nixh hsEnv
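The alias is essentially a small shell function (this is a sketch of what such a shortcut could look like; the exact definition may differ):
# in ~/.bashrc or similar: nixh <attr> runs nix-env -iA nixpkgs.haskellPackages.<attr>
nixh() { nix-env -iA "nixpkgs.haskellPackages.$1"; }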
The downside of this method is that other Haskell packages installed with nix-env -i[A] will not work with the above installation. If I wanted to get everything working with the lens package, then I would have to alter the install script to include lens, like:
hsEnv = haskellPackages.ghcWithPackages (self : [
  self.ghc
  self.ghcMod
  self.lens
  # add more packages here
]);
and re-install. Nix does not seem to use a different installation for lens or ghc-mod in hsEnv than it does for the ghc from nix-env -i ghc, so apparently only a little more needs to happen behind the scenes, most of the time, to combine existing packages in the above fashion.
ghc-mod installed fine with the above script but I have not tested out its integration with Emacs as of yet.
Additional notes added to the GitHub thread
DanielG:
I'm having a bit of trouble working with this environment, I can't even get cabal install to behave properly :/ I'm just getting lots of errors like:
With Nix and NixOS you pretty much never use Cabal to install at the global level.
Make sure to use sandboxes if you are going to use cabal-install (see the sketch after this list). You probably do not need it, but it is there and it works.
Use ghcWithPackages when installing packages like ghc-mod, hint, or anything that needs heavy runtime awareness of existing packages (they are hard to make atomic, and ghcWithPackages gets around this for GHC).
If you are developing, install the standard suite of POSIX tools with nix-env -i stdenv. NixOS does not force you to have your command line and PATH cluttered with tools you do not necessarily need.
cabal assumes the existence of a few standard tools such as ar, patch (I think), and a few others as well, if memory serves me right.
If you use the standard install method and/or ghcWithPackages when needed, then NixOS will dedup many packages automatically, unlike cabal sandboxes. (The dedup is at the package level: if you plot a dependency tree, entries point to the same package in /nix/store; nix-store --optimise can always dedup the store at the file level.)
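For reference, a typical cabal sandbox workflow of that era (cabal-install 1.18+) looks like this; the commands are standard, the project layout is assumed:
# inside the project directory, keep cabal-install away from the global package database
cabal sandbox init
cabal install --only-dependencies
cabal build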
Response to comment
[carlo#nixos:~]$ nix-env -iA nixos.pkgs.hsEnv
installing `haskell-env-ghc-7.6.3'
these derivations will be built:
/nix/store/39dn9h2gnp1pyv2zwwcq3bvck2ydyg28-haskell-env-ghc-7.6.3.drv
building path(s) `/nix/store/minf4s4libap8i02yhci83b54fvi1l2r-haskell-env-ghc-7.6.3'
building /nix/store/minf4s4libap8i02yhci83b54fvi1l2r-haskell-env-ghc-7.6.3
collision between `/nix/store/1jp3vsjcl8ydiy92lzyjclwr943vh5lx-ghc-7.6.3/bin/haddock' and `/nix/store/2dfv2pd0i5kcbbc3hb0ywdbik925c8p9-haskell-haddock-ghc7.6.3-2.13.2/bin/haddock' at /nix/store/9z6d76pz8rr7gci2n3igh5dqi7ac5xqj-builder.pl line 72.
builder for `/nix/store/39dn9h2gnp1pyv2zwwcq3bvck2ydyg28-haskell-env-ghc-7.6.3.drv' failed with exit code 2
error: build of `/nix/store/39dn9h2gnp1pyv2zwwcq3bvck2ydyg28-haskell-env-ghc-7.6.3.drv' failed
It is the line that starts with collision that tells you what is going wrong:
collision between `/nix/store/1jp3vsjcl8ydiy92lzyjclwr943vh5lx-ghc-7.6.3/bin/haddock' and `/nix/store/2dfv2pd0i5kcbbc3hb0ywdbik925c8p9-haskell-haddock-ghc7.6.3-2.13.2/bin/haddock' at /nix/store/9z6d76pz8rr7gci2n3igh5dqi7ac5xqj-builder.pl line 72.
It is a conflict between two different haddocks. Switch to a new profile and try again. Since this welds together ghc+packages, it should not be installed in a profile alongside other Haskell packages. That does not stop you from running binaries and interpreters from both packages at once; they just need to be in their own namespaces, so that when you call haddock, cabal, or ghc, there is only one choice per profile.
If you are not familiar with profiles yet, you can use:
nix-env -S /nix/var/nix/profiles/per-user/<user>/<New profile name>
The default profile is named either default or channels; I do not know which one it will be for your setup. But check for it so you can switch back to it later. There are some tricks so that you do not have to use the /nix/var/nix/profiles/ directory to store your profiles, to cut down on typing, but that is the default location.
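Putting that together, a session might look roughly like this (the new profile name and the name of the default profile are assumptions about a typical setup):
# switch to a fresh profile just for the ghc+packages environment
nix-env -S /nix/var/nix/profiles/per-user/$USER/hsEnv-profile
nix-env -iA nixpkgs.haskellPackages.hsEnv
# later, switch back to the original profile (check its actual name first)
nix-env -S /nix/var/nix/profiles/per-user/$USER/default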