Why do cabal configure?

In the Cabal User Guide it says that Cabal is often compared with autoconf and automake, since the command-line interface for actually configuring and building packages follows the same steps:
./configure --prefix=...
make
make install
compared to
cabal configure --prefix=...
cabal build
cabal install
My understanding is that ./configure uses a config file (produced by autoconf) to adapt the make process to the environment in which it will run and also to check dependencies. ./configure therefore always has an "input" to conform to. But if cabal configure is not given any arguments, what does it do, and why is it necessary before running cabal build?

The cabal configure step does at least two things I know of:
Check that the package description parses OK.
Check that all required dependencies are already installed (and report an error if not).
Basically it's running the constraint solver to decide exactly which packages you're going to build against. (E.g., if you have several versions of ByteString installed, which version are you going to use? Well it might depend on which version the packages you depend on are expecting...)
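As an aside, you can nudge that decision yourself at configure time with --constraint (a sketch; the package and version range here are just examples):
cabal configure --constraint="bytestring ==0.10.*"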
Also I believe it's possible to supply options at configure time which change exactly which features of the package get built (but I don't have experience with this).
I think originally you had to call configure before you could call build, but I believe now the cabal command-line tool does that step for you automatically in many cases. (E.g., cabal run now seems to automatically reconfigure if the package description file is newer than the configuration DB.)
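To illustrate the configure-time options mentioned above: if a package declares a flag in its .cabal file (the flag name here is hypothetical),
flag use-network
  description: Build the networked frontend
  default: False
then you can switch that feature on when configuring:
cabal configure --flags="use-network"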

Related

Haskell Platform vs homebrew

I recently downloaded the Haskell Platform from the Haskell website. Following the suggestion of the newer answers in this thread, I blindly ran brew install ghc cabal-install and cabal install cabal cabal-install. Did I install two versions of Haskell on my machine? What should I do to fix any problems?
It doesn't necessarily lead to problems to have multiple versions installed (I think I have three different versions). If you need the disk space, uninstall one of the two (instructions for the brew one; for the packaged platform it seems you should be able to use the command sudo uninstall-hs, but check it yourself first). If you don't mind the lost disk space, you only have to make sure your PATH is set up correctly: the directory containing the ghc binary you want to use must come before the directory of the other one.
Also, cabal install cabal-install (which you might need to run to update cabal) tends to install cabal in a different place than the platform/brew do, so there, again, you need to make sure your PATH is appropriately set. Normally cabal installs executables in ~/.cabal/bin (local installs) or /usr/local/bin (global installs). The directory containing cabal should go before the others, because an old version of cabal might stick around and you want the new one to be found first.
You probably know this but you can use which ghc and which cabal to check the location of the executable actually being used.
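A quick sanity check might look like this (the paths shown are just examples; yours will differ):
which ghc    # e.g. /usr/local/bin/ghc
which cabal  # e.g. /Users/you/.cabal/bin/cabal
export PATH="$HOME/.cabal/bin:$PATH"  # make the cabal-installed binaries win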
To make things even more complicated, lately it's popular to use Stack, which can also install ghc for you (I find this very convenient, everything is kept in a very controlled environment). So depending on your experience/use case this might be worth looking at as well (but if you just want to try Haskell I recommend you stick with the platform or the brew installation).

What is the intended/planned way of configuring/installing software that uses Rust Cargo as build system?

Existing build systems usually have some kind of install targets, that is used either manually (for installing in /usr/local or other location that user can access) or automatically (by package build systems of binary based distros or by package managers of source based ones).
What is the intended way of installing software that uses Cargo? What should an analog of make install look like?
Cargo itself uses additional configure/make stuff that handles configuration, detection of system dependencies, running cargo build and installation.
Is this the right way for any other software built with Cargo? That is, are there plans for Cargo itself to cover these tasks, or is Cargo intended only as a tool for dependency fetching and compilation, without any configuration, detection of installed dependencies, or installation? Or are there plans to add this functionality?
cargo install
As of Rust 1.5 you can use cargo install to install binary crates onto your system. You can install crates from:
crates.io (the default), using cargo install crate_name
any git repository, using cargo install --git repository_url
any directory, using cargo install --path /path/to/crate
The first two have additional options you can specify:
With crates.io, you can use --vers to specify the crate version.
With git repositories, you can use --branch to set the branch to install from, --tag to specify the tagged release to use, and --rev to build from a specific commit.
Installation location:
cargo install can be configured to install in a custom directory through the following methods, in order of precedence (highest first):
By passing --root /path/to/directory (this path can be relative)
By setting the $CARGO_INSTALL_ROOT environment variable
By setting the install.root configuration key
By setting the $CARGO_HOME environment variable (which will affect more than the installation directory of cargo install)
If none of the above are present, cargo will install the crates in ~/.cargo/bin.
In all of the above cases, the output files will actually be placed in the bin subdirectory (e.g. --root /path/to/directory will actually place output in /path/to/directory/bin).
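For instance, the install.root key mentioned above lives in Cargo's configuration file (~/.cargo/config, named config.toml by newer toolchains); the path is just an example:
[install]
root = "/opt/rust_crates"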
Uninstallation
cargo uninstall can be used to remove previously-installed crates. If you have multiple crates with the same name installed, you can specify --root to only remove the version in that directory.
Example: I want to use rustfmt:
I can use the version on crates.io:
cargo install rustfmt
I like using original releases:
cargo install rustfmt --vers 0.0.1
I want it installed in /opt/rust_crates:
cargo install rustfmt --root /opt/rust_crates
I really need to use the bleeding-edge version:
cargo install --git https://github.com/rust-lang-nursery/rustfmt.git
The latest commit has a bug in it!
cargo install --git https://github.com/rust-lang-nursery/rustfmt.git --rev f5bd7b76e0185e8dd37ae6b1b5fb5e11187f0b8c
I truly desire the version that uses git submodules for its dependencies:
cargo install --git https://github.com/rust-lang-nursery/rustfmt.git --branch submods
I've cloned it and made some edits:
cargo install --path ~/my_rustfmt
Actually, I insist on doing my formatting entirely manually:
cargo uninstall rustfmt
(This answer is intended for developers who want to distribute their own programs, not users who have received a Cargo project and need to install it on their own system; that's the domain of Toby's answer.)
In addition to Toby's Answer:
Cargo's install feature is not meant to be the primary way to distribute Rust programs. It's designed to be used only for distribution to other Rust developers. There are several drawbacks to using this feature:
Cargo requires end-users to install the entire Rust toolchain first.
Users will have to build the program locally, which can be slow, especially in Rust.
There's no support for upgrading programs once they're installed (without an additional tool).
There's (currently) no way to include assets, such as documentation.
The packages will be installed only for the current user unless flags are passed to cargo install.
In other words, cargo install is the make && sudo make install of Cargo programs; it's not the ideal way to distribute a Rust program unless it's intended primarily for Rust programmers.
So what is the correct way?
Let's look at the alternatives.
Manually distribute a tarball/zip
You can replicate the effects of cargo install by simply using cargo build --release. This will place a (mostly, see the drawbacks below) statically linked crate binary in target/release/crate_name, and this can be repackaged into a .tar.gz or .zip and given out to other users.
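A minimal sketch of this approach (the crate name and archive layout are illustrative):
cargo build --release
tar -czf mycrate-1.0.0-x86_64-linux.tar.gz -C target/release mycrate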
Pros:
Doesn't require users to install Rust or build the program themselves.
Allows developers to copy assets into the tarball/zip and distribute them along with the program itself.
Cons:
Installing a .tar.gz/.zip is nonstandard and generally not considered ideal for most users.
If the crate needs any system dependencies beyond libc, it will fail to load them with a difficult-to-understand error.
This requires a developer to manually build a package to release for each version and platform combination.
Use a CI service to build releases
It's possible to recreate any of these methods using a cloud-based CI service. For example, using Travis CI, you can use the Trust project to deploy in much the same way that you would with a tarball, but automatically, with only a tag being required.
Pros:
(All of the advantages of a tarball, plus)
The developers don't have to manually release the program; they just need to tag a release.
As a side effect, it's possible to build for every platform the program supports at once.
Cons:
The process can be frustrating to debug if it doesn't work correctly, because there's limited control over the server.
The build process is tied to a service, which means releases can be missed if the service is down when they are released.
With Trust or similar tools, you're still ultimately distributing a .tar.gz/.zip, which means there's still inconvenience for users and a lack of system dependency management.
In addition to Travis, see Appveyor and GitHub Actions as possible build platforms.
Provide a package
This is considered the ideal method for many end users, and is the standard way to distribute any program, not just Cargo programs. This alleviates almost every issue with the tarball approach, though not without some problems of its own.
Pros:
Included in the system like any other program.
Can be submitted to Linux distribution repositories to allow programs to be installed in only one command.
Allows updating, removal, and asset inclusion.
Tracks system dependencies, which is especially helpful for GUI apps.
Cons:
By far the most complex of these options.
Requires building a package separately for every supported platform (this can be alleviated with CI, but it will be even more complex to set up this way.)
This approach is best handled with additional tools:
cargo-deb: Build a package for Debian and Ubuntu.
cargo-rpm: Build a package for Fedora, Red Hat, and CentOS.
cargo-aur: Build a package for Arch Linux.
cargo-wix: Make a Windows Installer package.
These usually are tools that are meant to be run by developers, to create the files that are used to generate packages. See their own documentation for more information.
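For example, a cargo-deb run is just this (a sketch, executed from the crate root):
cargo install cargo-deb
cargo deb  # writes a .deb under target/debian/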
Source: https://rust-cli.github.io/book/tutorial/packaging.html

Building rpm, overriding _topdir, but getting BuildRequires deps?

I have a libfoo-devel rpm that I can create, using the trick to override _topdir. Now I want to build a package "bar" which has a BuildRequires 'libfoo-devel". I can't seem to find the Right Way to get access to the contents of libfoo-devel without having to install it on the build host. How should I be doing it?
EDIT:
My build and target distros are both SuSE.
I prefer solutions that don't require mock, since I believe SuSE does not include it in its stock repo.
Subsequent EDIT:
I believe that the answer I seek is in the build package. Perhaps it's SuSE's answer to mock? Or is it the distributed version of the OBS (openSUSE Build Service)?
DESCRIPTION
build is a tool to build SuSE Linux RPMs in a safe and clean way. build will install a minimal SuSE Linux as build system into some directory and will chroot to this system to compile the package. This way you don't risk corrupting your working system (due to a broken spec file, for example), even if the package does not use BuildRoot.
build searches the spec file for a BuildRequires: line; if such a line is found, all the specified rpms are installed. Otherwise a selection of default packages is used. Note that build doesn't automatically resolve missing dependencies, so the specified rpms have to be sufficient for the build.
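As far as I can tell, a typical invocation is something like the following (the spec file name is illustrative; build needs root for the chroot):
sudo build bar.spec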
Note that if you really don't need libfoo-devel installed to build package bar the most sensible alternative would be to remove libfoo-devel from the BuildRequires directive (and maybe put the requirement where it belongs).
However, if you cannot do that for some reason, create a "development" rpm database. Basically it involves using rpm --initdb --root /path/to/fake/root. Then populate it with all of the "target packages" of your standard distro installation.
That's a lot of rpm --install --root /path/to/fake/root --justdb package-name.rpm commands, but maybe you can figure out a way to copy over your /var/lib/rpm/* database files and use those as a starting point. Once you have the alternative rpm database, you can fake the installation of the libfoo-devel package with a --justdb option. Then you'll be home free on the actual rpm build.
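A sketch of the sequence described above (paths and package names are placeholders; repeat the install step for each target package):
rpm --initdb --root /path/to/fake/root    # create the empty database
rpm --install --root /path/to/fake/root --justdb libfoo-devel-1.0-1.x86_64.rpm    # register in the db only, without unpacking files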
If neither mock nor the openSUSE Build Service is a viable choice, then you will have to buckle down and install the package, either directly or in a chroot; the package provides files that the SRPM packager has decided are required for the build, which is why it is listed in the BuildRequires tag.

Haskell cabal: I just installed packages, but now the packages are not found

Over here is the only reason I can find that packages I'm installing using cabal are not being found by GHC:
This happens when you install a package globally, and the previous packages were installed locally. Note that cabal-install install locally by default [...]
Presumably, "local installation" means putting packages in ~/.cabal/. First question: where are global installs?
I've been running cabal using sudo, so I guess that's a global install? The reason I've been doing this is that it complains about permissions when run without sudo, so this contradicts the statement "cabal-install install locally by default". Second question: how do I install locally and how do I install globally?
Trying to fix this mess, I've been randomly using sudo ghc-pkg unregister and randomly removing stuff from ~/.cabal/. Consequently my package tree is broken, probably locally and globally. Third question: How do I start again?
Edit: I'm running Ubuntu 10.10. I installed the Haskell Platform 2011.
Are you using Windows, OS X or some version of Linux? Are you using the Haskell Platform? Have you had a version of ghc or cabal before? For a Linux distribution, subtleties about your package manager may come in, of course. (Traces of an old ghc in particular, and an old ~/.ghc/ directory can be a source of trouble.)
Here are a few elementary thoughts of the type one goes through on #haskell with such problems. (My comprehension is not completely adequate, of course):
The chief question seems to be: why were you being invited to do what should be local installs with sudo? A global install (cabal install pony --global) would of course require privileges if ghc and its libraries are in /usr/... or some other protected place, but otherwise sudo vs non-sudo is independent of the place of installation. What you do with cabal install pony --user (--user is the default, in theory) should not require superuser authority. (I have sometimes found on OS X that privileges are requested where gcc needs to be called, but this has usually been due to curiosities about my setup.) But in any case sudo doesn't affect where cabal is putting them: the implicit --user and explicit --global, and more specific incantations for development, do that.
If you do ghc-pkg list, for example, it will divide the packages into the different places they are registered in according to two or more package.conf.d directories it is summarizing. On my laptop at the moment these are
/Users/applicative/.ghc/x86_64-darwin-7.0.3/package.conf.d/...
for the local things in ~/.cabal/lib/... and the protected
/Library/Frameworks/GHC.framework/Versions/7.0.3-x86_64/usr/lib/ghc-7.0.3/package.conf.d
for things that were installed globally with the Haskell Platform installer (this location involves some OS X peculiarities, ghc, ghci and so on are in the woods somewhere, but symlinked to /usr/bin). The conf files for different packages tell you exactly where the libraries were installed. So, for example about the sacred base library,
$ cat base-4.3.1.0-f5c465200a37a65ca26c5c6c600f6c76.conf
tells me:
import-dirs:
/Library/Frameworks/GHC.framework/Versions/7.0.3-x86_64/usr/lib/ghc-7.0.3/base-4.3.1.0
library-dirs:
/Library/Frameworks/GHC.framework/Versions/7.0.3-x86_64/usr/lib/ghc-7.0.3/base-4.3.1.0
In any case, where does ghc-pkg list say your cabal install-ed packages are going? In the ~/.cabal folder, look at the file config. If you haven't edited it, I think the commented and uncommented lines, if they state a preference, are stating the defaults for installation with --global and --user. In the ~/.ghc/ directory check out the subdirectory myghcversion/package.conf.d and see if anything is there, which should be the same as what ghc-pkg tells you. (You might study the options for ghc-pkg in general, e.g. ghc-pkg check and ghc-pkg recache, if you haven't. You may have installed something in some odd way.)
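For the "how do I start again?" question, a minimal repair sequence might look like this (the package name is a placeholder; see what ghc-pkg check reports first):
ghc-pkg check                        # list packages with broken or missing dependencies
ghc-pkg unregister broken-pkg-1.0    # unregister each offender (add --user or --global as needed)
ghc-pkg recache                      # rebuild the package cache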
If you installed ghc, cabal and co. by installing the Haskell Platform with a binary installer or your package manager (which seems like a good idea), it is also wise, I think, to treat the Platform libraries as something sacred and make sure you never install anything globally from Hackage; among other things, doing so is likely to have you overwriting Platform libraries -- though that doesn't seem to be the difficulty here: it would be more obvious if it were.

Best way to Manage Packages Compiled from Source

I'm trying to find an easy way to manage packages compiled from source, so that when it comes time to upgrade I'm not in a huge mess trying to uninstall/install the new package.
I found a utility called CheckInstall, but it seems to be quite old, and I was wondering whether it is a reliable solution before I begin using it:
http://www.asic-linux.com.mx/~izto/checkinstall/
Also, I would simply like to know about any other methods/utilities you use to handle these installations from source.
Whatever you do, make sure that you eventually go through your distribution's package management system (e.g. rpm for Fedora/Mandriva/RH/SuSE, dpkg for Debian/Ubuntu etc). Otherwise your package manager will not know anything about the packages you installed by hand and you will have unsatisfied dependencies at best, or the mother of all messes at worst.
If you don't have a package manager, then get one and stick with it!
I would suggest that you learn to make your own packages. You can start by having a look at the source packages of your distribution. In fact, if all you want to do is upgrade to version 1.2.3 of MyPackage, your distribution's source package for 1.2.2 can usually be adapted with a simple version change (unless there are patches, but that's another story...).
Unless you want distribution-quality packages (e.g. split library/application/debugging packages, multiple-architecture support etc) it is usually easy to convert your typical configure & make & make install scenario into a proper source package. If you can convince your package to install into a directory rather than /, you are usually done.
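The usual trick for that is the DESTDIR convention (a sketch; not every build system supports it):
./configure --prefix=/usr
make
make DESTDIR=/tmp/pkgroot install  # files land under /tmp/pkgroot/usr/... ready for packaging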
As for checkinstall, I have used it in the past, and it worked for a couple of simple packages, but I did not like the fact that it actually let the package install itself onto my system before creating the rpm/deb package. It just tracked which files got installed so that it would package them, which did not protect against unwelcome changes. Oh, and it needed root privileges to work, which is another main sticking point for me. And let's not go into what happens with statically linked core utilities...
Most tools of the kind seem to work that way, so I simply learnt to build my own packages The Right Way (TM) and let checkinstall and friends mess around elsewhere. If you are still interested, however, there is a list of similar programs here:
http://www.dwheeler.com/essays/automating-destdir.html
PS: BTW checkinstall was updated at the end of 2009, which probably means that it's still adequately current.
EDIT:
In my opinion, the easiest way to perform an upgrade to the latest version of a package if it is not readily available in a repository is to alter the source package of the latest version in your distribution. E.g. for Centos the source packages for the latest version are here:
http://mirror.centos.org/centos/5.5/os/SRPMS/
http://mirror.centos.org/centos/5.5/updates/SRPMS/
...
If you want to upgrade e.g. php, you get the latest SRPM for your distribution, e.g. php-5.1.6-27.el5.src.rpm. Then you do:
rpm -hiv php-5.1.6-27.el5.src.rpm
which installs the source package (just the sources - it does not compile anything). Then you go to the rpm build directory (on my Mandriva system it's /usr/src/rpm), copy the latest php source tarball into the SOURCES subdirectory, and make sure it's compressed in the same way as the tarball that just got installed there. Afterwards you edit the php.spec file in the SPECS directory to change the package version, and build the binary package with something like:
rpmbuild -ba php.spec
In many cases that's all it will take for a new package. In others things might get a bit more complicated - if there are patches or if there are some major changes in the package you might have to do more.
I suggest you read up on the rpm and rpmbuild commands (their manpages are quite good, if a bit extensive) and check the documentation on writing spec files. Even if you decide to rely on official backport repositories, it is useful to know how to build your own packages. See also:
http://www.rpm.org/wiki/Docs
EDIT 2:
If you are already installing packages from source, using rpm will actually simplify the building process in the long term, apart from maintaining the integrity of your system. The reason for this is that you won't have to remember the quirks of each package on your own ("oooh, right, now I remember, foo needs me to add -lbar to its CFLAGS"), as the build process will be in the .spec file, which you could imagine as a somewhat structured build script.
As far as upgrading goes, if you already have a .spec file for a previous version of the package, there are two main issues that you may encounter, but both exist whether you use rpm to build your package or not:
A patch that was applied to the previous version by the distribution does not apply any more. In many cases the patch has already been applied to the upstream package, so you can simply drop it. In others you may have to edit it - or I suppose if you deem it unimportant you can drop it too.
The package changed in some major way which affected e.g. the layout of the files it installs. You do read the release notes for each new version, don't you?
Other than these two issues, upgrading often boils down to just changing a version number in the spec file and running rpmbuild - even easier than installing from a tarball.
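As a sketch of that minimal upgrade path (the version number and paths are illustrative):
cd /usr/src/rpm/SPECS
sed -i 's/^Version:.*/Version: 5.2.0/' php.spec  # bump to the new upstream version
rpmbuild -ba php.spec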
I would suggest that you have a look at the tutorials or at the source package for some simple piece of software such as:
http://mirror.centos.org/centos/5.5/os/SRPMS/ipv6calc-0.61-1.src.rpm
http://mirror.centos.org/centos/5.5/os/SRPMS/libevent-1.4.13-1.src.rpm
If you have experience building packages from a tarball, using rpm to build software is not much of a leap, really. It will never be as simple as installing a premade binary package, however.
I use checkinstall on Debian. It should not be so different on CentOS. I use it like this:
./configure
make
sudo checkinstall make install # fakeroot in place of sudo usually works, for more security
# then install the generated package
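On Debian the generated package is a .deb, so that last step might be (the package name is a placeholder):
sudo dpkg -i mypackage_1.0-1_amd64.deb  # install the package checkinstall produced
sudo dpkg -r mypackage                  # later, remove it cleanly via the package manager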
