When I have a sandbox, it seems cabal install ignores packages in $HOME/.ghc/x86_64-linux-7.8.4/package.conf.d.
How can I configure the sandbox such that these packages become visible?
I am seeing a vague reference to --package-db=db in https://www.haskell.org/cabal/users-guide/installing-packages.html#sandboxes-advanced-usage
but I understand neither how nor when to use it. (With sandbox init? configure? install? None seems to work, and none gives any error message either.)
I know about add-source but my question refers to installed packages.
The whole point of the sandbox is that it ignores your local package database.
If you want to share installations across many sandboxes, you may install to the global database; but then you should be very careful, as fixing the badness of a broken package is much more difficult. Keep it to really core packages that you expect to be widely shared across many, many projects -- not just the half dozen you're stressing out about right now for your job.
Alternately, you may share one sandbox between the builds of many packages; simply set the CABAL_SANDBOX_CONFIG variable to an absolute path pointing to the appropriate cabal.sandbox.config file. This is significantly safer, and much more flexible, as you can choose how widely your installed packages are shared (and in bad cases, simply nuke the sandbox and start over).
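For instance, a minimal sketch of sharing one sandbox between several project checkouts (all directory names here are made up, adjust to taste):
# create the shared sandbox once, in a directory of your choosing
mkdir -p ~/haskell/shared-sandbox
cd ~/haskell/shared-sandbox && cabal sandbox init
# in each project that should use it, point cabal at the shared config file
export CABAL_SANDBOX_CONFIG=$HOME/haskell/shared-sandbox/cabal.sandbox.config
cd ~/src/some-project
cabal install --only-dependencies
cabal build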
Here is something you can try - copy (or symlink) the files from ~/.ghc/{arch-os-ghc-version}/package.conf.d to the sandbox's {arch-os-ghc-version}-packages.conf.d directory.
There is a question about the package.cache file. The following procedure seems to be a safe way to proceed:
Start with an empty sandbox
Copy the package.conf.d files from ~/.ghc to the sandbox (including package.cache)
Add packages to the sandbox via cabal install --only-dependencies
I don't know if the package.cache file is required or if there is a way to rebuild it.
One disadvantage is that cabal install --only-deps seems to reinstall broken packages in the sandbox even if they are not required by your application. Maybe there is a work-around for this.
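A sketch of that procedure, assuming x86_64-linux and GHC 7.8.4 as in the question (the exact directory names depend on your platform and GHC version):
cabal sandbox init
cp ~/.ghc/x86_64-linux-7.8.4/package.conf.d/*.conf \
   .cabal-sandbox/x86_64-linux-ghc-7.8.4-packages.conf.d/
# ghc-pkg recache should, I believe, rebuild package.cache for that db,
# so copying the old package.cache over may not be necessary
ghc-pkg recache --package-db=.cabal-sandbox/x86_64-linux-ghc-7.8.4-packages.conf.d
cabal install --only-dependencies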
I recently downloaded the Haskell Platform from the Haskell website. Under the suggestion of the newer answers in this, I blindly ran brew install ghc cabal-install and cabal install cabal cabal-install. Did I install two versions of Haskell on my machine? What should I do to fix any problems?
It doesn't necessarily lead to problems to have multiple versions (I think I have three different versions installed). If you need the disk space, uninstall one of the two (instructions for the brew one; for the packaged platform it seems you should be able to use the command sudo uninstall-hs, but check it yourself first). If you don't mind the lost disk space, you only have to make sure your PATH is set up correctly, with the directory containing the ghc binary you want to use coming before the directory of the other one.
Also, cabal install cabal-install (which you might need to run to update cabal) tends to install cabal in a different place than the platform/brew do, so there, again, you need to make sure your PATH is appropriately set. Normally cabal installs executables in ~/.cabal/bin (local installs) or /usr/local/bin (global installs). The directory containing cabal should go before the others, because an old version of cabal might stick around and you want the new one to be found first.
You probably know this but you can use which ghc and which cabal to check the location of the executable actually being used.
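For example (the directories below are just the usual defaults; adjust them to your installation):
which ghc
which cabal
# in ~/.bashrc or equivalent: put the directory of the ghc/cabal you want to win first
export PATH=$HOME/.cabal/bin:/usr/local/bin:$PATH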
To make things even more complicated, lately it's popular to use Stack, which can also install ghc for you (I find this very convenient, everything is kept in a very controlled environment). So depending on your experience/use case this might be worth looking at as well (but if you just want to try Haskell I recommend you stick with the platform or the brew installation).
The installation instructions at the Stackage web site describe how to use it for one project.
Is there a way how to configure Stackage to be the default for all users and install packages globally available to them?
AFAIK cabal does not support a global config file. But even that wouldn't help by itself, because AFAICT you can't disable configured remote-repos anyway.
So I see two approaches with obvious drawbacks.
Clean way for new users
Install a /etc/skel/.cabal/config file that will be copied to new user accounts. That won't help with older users though.
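A sketch of what that file could contain, reusing the remote-repo from the hacky variant below (these are the field names of a stock ~/.cabal/config; I have not actually tried this via /etc/skel):
-- /etc/skel/.cabal/config (cabal config syntax)
remote-repo: hackage.haskell.org:http://www.stackage.org/lts
remote-repo-cache: ~/.cabal/packages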
Hacky way for all users
Install a global alias (or shell script wrapper) with name cabal that calls cabal --remote-repo=hackage.haskell.org:http://www.stackage.org/lts.
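As a shell-script wrapper it could look roughly like this, reusing the exact invocation from the alias above (the location of the real cabal binary is an assumption, adjust it):
#!/bin/sh
# /usr/local/bin/cabal -- forwards to the real cabal, pointing it at stackage
exec /usr/bin/cabal --remote-repo=hackage.haskell.org:http://www.stackage.org/lts "$@"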
Users can opt out by unaliasing cabal or using the real cabal executable when using a shell script.
Users will be utterly confused though, because cabal will tell them it uses Hackage, when in fact it is using Stackage.
It's rather nice that ghc-pkg check will list broken packages, and why they are broken. But as far as I know, there is no automated way to take care of those broken packages. What is the recommended way to deal with broken packages? (Preferably not reinstall GHC)
Hopefully, you have been wise enough to not break any in your global package database. Breakage there can easily mean a reinstallation of GHC is necessary. So, let us assume that the breakage is restricted to the user package db (except possibly a package or two in the global shadowed by user packages). If only few packages are broken, you can fix your setup by unregistering the offending packages,
$ ghc-pkg unregister --user borken
that will often complain that unregistering borken will break other packages. Whether you try to unregister those first or unregister borken immediately with --force and deal with the newly broken afterwards is mostly a matter of choice. Make sure that you only unregister packages from the user db. If things aren't too grim, after unregistering a handful of packages, ghc-pkg check will report no more broken packages.
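A typical sequence looks like this (borken stands for whatever ghc-pkg check complains about):
$ ghc-pkg check                            # see what is broken, and why
$ ghc-pkg unregister --user borken         # may refuse because other packages depend on it
$ ghc-pkg unregister --user --force borken # then deal with the newly broken ones
$ ghc-pkg check                            # repeat until nothing is reported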
If, on the other hand, a large proportion of packages is broken, it will probably be easier to completely wipe the user db, $ rm -rf ~/.ghc/ghc-version/package.conf.d or the equivalent on other OSs.
Either way, you will have lost packages you still want to use, so you will try to reinstall them without breaking anything anew. Run
$ cabal install world --dry-run
that will try to produce a consistent install plan for all the packages you installed with cabal-install. If it fails to do so, it will print out the reasons, you may then be able to fix the issues by adding constraints to the packages listed in the world file (~/.cabal/world) - for example, although I have no broken packages (according to ghc/ghc-pkg), cabal install world --dry-run told me it could not configure vector-algorithms-0.5.2, which depends on vector >= 0.6 && < 0.8 (I have vector-0.7.1 installed). The reason is that hmatrix-0.12.0.1 requires vector >= 0.8. Replacing the -any "constraint" on hmatrix by a "< 0.12" in the world file produced a clean install-plan.
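As a sketch, the relevant line of ~/.cabal/world changes roughly like this (package name taken from the example above; the rest of the file stays as cabal-install wrote it):
before the edit:
hmatrix -any
after the edit:
hmatrix < 0.12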
So, after a bit of fiddling with constraints in the world file, you will get an install plan from cabal. Check whether that would reinstall any packages you already have (installing a newer version is probably okay, reinstalling the same version means trouble). If you're happy with cabal's install-plan, cabal install world and brew a nice pot of tea while GHC is busy. Run ghc-pkg check once more, to verify all is in order.
A piece of generally good advice: If you don't know what installing a package entails, always use --dry-run first.
If you broke your global package database by doing global installs with cabal, the strategy of unregistering offenders may work, but it may also irrevocably break your ghc, that depends on what is broken in which way. If you broke your global db by installing packages from your OS distro, install a fresh GHC, curse the distro-packagers, and try to help them prevent further such events.
A cabal repair command would be very nice, but for the time being, repairing a broken setup is unfortunately much more work.
For some time I've relied on this ghc-pkg-clean script. It removes all broken packages and I reinstall them as needed. For more serious breakage, I use the ghc-pkg-reset script.
Today, though, I found ghc-pkg-autofix, which automates this further - broken packages become unbroken. I don't know what it does, YMMV.
Over here is the only reason I can find that packages I'm installing using cabal are not being found by GHC:
This happens when you install a package globally, and the previous packages were installed locally. Note that cabal-install install locally by default [...]
Presumably, "local installation" means putting packages in ~/.cabal/. First question: where are global installs?
I've been running cabal using sudo, so I guess that's a global install? The reason I've been doing this is that it complains about permissions when run without sudo, so this contradicts the statement "cabal-install install locally by default". Second question: how do I install locally and how do I install globally?
Trying to fix this mess, I've been randomly using sudo ghc-pkg unregister and randomly removing stuff from ~/.cabal/. Consequently my package tree is broken, probably locally and globally. Third question: How do I start again?
Edit: I'm running Ubuntu 10.10. I installed the Haskell Platform 2011.
Are you using Windows, OS X or some version of Linux? Are you using the Haskell Platform? Have you had a version of ghc or cabal before? For a Linux distribution, subtleties about your package manager may come in, of course. (Traces of an old ghc in particular, and an old ~/.ghc/ directory can be a source of trouble.)
Here are a few elementary thoughts of the type one goes through on #haskell with such problems (my comprehension is not completely adequate, of course):
The chief question seems to be: why were you being invited to do what should be local installs with sudo? A global install (cabal install pony --global) would of course require privileges if ghc and its libraries are in /usr/... or some other protected place, but otherwise sudo vs non-sudo is independent of the place of installation. What you do with cabal install pony --user (--user is the default, in theory) should not require superuser authority. (I have sometimes found on OS X that privileges are requested where gcc needs to be called, but this has usually been due to curiosities about my setup.) But in any case sudo doesn't affect where cabal puts things: the implicit --user and explicit --global, and more specific incantations for development, do that.
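To make the second question concrete (pony is of course just a placeholder):
cabal install --user pony          # local install, under ~/.cabal, registered in ~/.ghc
sudo cabal install --global pony   # global install, into ghc's global package db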
If you do ghc-pkg list, for example, it will divide the packages into the different places they are registered in according to two or more package.conf.d directories it is summarizing. On my laptop at the moment these are
/Users/applicative/.ghc/x86_64-darwin-7.0.3/package.conf.d/...
for the local things in ~/.cabal/lib/... and the protected
/Library/Frameworks/GHC.framework/Versions/7.0.3-x86_64/usr/lib/ghc-7.0.3/package.conf.d
for things that were installed globally with the Haskell Platform installer (this location involves some OS X peculiarities; ghc, ghci and so on are in the woods somewhere, but symlinked to /usr/bin). The conf files for different packages tell you exactly where the libraries were installed. So, for example, about the sacred base library,
$ cat base-4.3.1.0-f5c465200a37a65ca26c5c6c600f6c76.conf
tells me:
import-dirs:
/Library/Frameworks/GHC.framework/Versions/7.0.3-x86_64/usr/lib/ghc-7.0.3/base-4.3.1.0
library-dirs:
/Library/Frameworks/GHC.framework/Versions/7.0.3-x86_64/usr/lib/ghc-7.0.3/base-4.3.1.0
In any case, where does ghc-pkg list say your cabal install-ed packages are going? In the ~/.cabal folder, look at the file config. If you haven't edited it, I think the commented and uncommented lines, if they state a preference, are stating the defaults for installation with --global and --user. In the ~/.ghc/ directory, check out the subdirectory myghcversion/package.conf.d and see if anything is there, which should be the same as what ghc-pkg tells you. (You might study the options for ghc-pkg in general, e.g. ghc-pkg check and ghc-pkg recache, if you haven't. You may have installed something in some odd way.)
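A few incantations for that kind of inspection (the paths are the usual defaults):
ghc-pkg list           # which package.conf.d directories exist and what is registered in each
ghc-pkg check          # complain about broken or inconsistent packages
ghc-pkg recache        # rebuild the package cache if it has got out of sync
less ~/.cabal/config   # the (possibly commented-out) install-dirs sections show where --user and --global installs go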
If you installed ghc, cabal and co. by installing the Haskell Platform with a binary installer or your package manager, which seems like a good idea, it is also a good idea, I think, to treat the Platform libraries as something sacred and to make sure you never install anything globally from Hackage; among other things, doing so is likely to have you overwriting Platform libraries -- though that doesn't seem to be the difficulty here: it would be more obvious if it were.
I'm looking into trying to find an easy way to manage packages compiled from source so that when it comes time to upgrade, I'm not in a huge mess trying to uninstall/install the new package.
I found a utility called CheckInstall, but it seems to be quite old, and I was wondering if this a reliable solution before I begin using it?
http://www.asic-linux.com.mx/~izto/checkinstall/
Also would simply likely to know any other methods/utilities that you use to handle these installations from source?
Whatever you do, make sure that you eventually go through your distribution's package management system (e.g. rpm for Fedora/Mandriva/RH/SuSE, dpkg for Debian/Ubuntu etc). Otherwise your package manager will not know anything about the packages you installed by hand and you will have unsatisfied dependencies at best, or the mother of all messes at worst.
If you don't have a package manager, then get one and stick with it!
I would suggest that you learn to make your own packages. You can start by having a look at the source packages of your distribution. In fact, if all you want to do is upgrade to version 1.2.3 of MyPackage, your distribution's source package for 1.2.2 can usually be adapted with a simple version change (unless there are patches, but that's another story...).
Unless you want distribution-quality packages (e.g. split library/application/debugging packages, multiple-architecture support etc) it is usually easy to convert your typical configure & make & make install scenario into a proper source package. If you can convince your package to install into a directory rather than /, you are usually done.
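The usual mechanism for that is DESTDIR, as in the sketch below (this assumes the package's makefile honours DESTDIR, which most autotools-based packages do):
./configure --prefix=/usr
make
make install DESTDIR=/tmp/mypackage-staging
# /tmp/mypackage-staging now mirrors the final layout and can be turned into a package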
As for checkinstall, I have used it in the past, and it worked for a couple of simple packages, but I did not like the fact that it actually let the package install itself onto my system before creating the rpm/deb package. It just tracked which files got installed so that it would package them, which did not protect against unwelcome changes. Oh, and it needed root privileges to work, which is another major sticking point for me. And let's not go into what happens with statically linked core utilities...
Most tools of this kind seem to work that way, so I simply learnt to build my own packages The Right Way (TM) and let checkinstall and friends mess around elsewhere. If you are still interested, however, there is a list of similar programs here:
http://www.dwheeler.com/essays/automating-destdir.html
PS: BTW checkinstall was updated at the end of 2009, which probably means that it's still adequately current.
EDIT:
In my opinion, the easiest way to perform an upgrade to the latest version of a package if it is not readily available in a repository is to alter the source package of the latest version in your distribution. E.g. for Centos the source packages for the latest version are here:
http://mirror.centos.org/centos/5.5/os/SRPMS/
http://mirror.centos.org/centos/5.5/updates/SRPMS/
...
If you want to upgrade e.g. php, you get the latest SRPM for your distribution, e.g. php-5.1.6-27.el5.src.rpm. Then you do:
rpm -hiv php-5.1.6-27.el5.src.rpm
which installs the source package (just the sources - it does not compile anything). Then you go to the rpm build directory (on my Mandriva system it's /usr/src/rpm), copy the latest php source tarball to the SOURCES subdirectory, and make sure it's compressed in the same way as the tarball that just got installed there. Afterwards you edit the php.spec file in the SPECS directory to change the package version and build the binary package with something like:
rpmbuild -ba php.spec
In many cases that's all it will take for a new package. In others things might get a bit more complicated - if there are patches or if there are some major changes in the package you might have to do more.
I suggest you read up on the rpm and rpmbuild commands (their manpages are quite good, if a bit extensive) and check out the documentation on writing spec files. Even if you decide to rely on official backport repositories, it is useful to know how to build your own packages. See also:
http://www.rpm.org/wiki/Docs
EDIT 2:
If you are already installing packages from source, using rpm will actually simplify the building process in the long term, apart from maintaining the integrity of your system. The reason for this is that you won't have to remember the quirks of each package on your own ("oooh, right, now I remember, foo needs me to add -lbar to its CFLAGS"), as the build process will be in the .spec file, which you could imagine as a somewhat structured build script.
As far as upgrading goes, if you already have a .spec file for a previous version of the package, there are two main issues that you may encounter, but both exist whether you use rpm to build your package or not:
A patch that was applied to the previous version by the distribution does not apply any more. In many cases the patch has already been applied to the upstream package, so you can simply drop it. In others you may have to edit it - or I suppose if you deem it unimportant you can drop it too.
The package changed in some major way which affected e.g. the layout of the files it installs. You do read the release notes for each new version, don't you?
Other than these two issues, upgrading often boils down to just changing a version number in the spec file and running rpmbuild - even easier than installing from a tarball.
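As a sketch, the edit usually amounts to bumping a couple of header fields in the spec file (the values below are made up; Source0 typically references %{version}, so it picks up the new tarball automatically), followed by the rpmbuild -ba invocation shown earlier:
Version: 5.2.0
Release: 1%{?dist}
Source0: php-%{version}.tar.gz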
I would suggest that you have a look at the tutorials or at the source package for some simple piece of software such as:
http://mirror.centos.org/centos/5.5/os/SRPMS/ipv6calc-0.61-1.src.rpm
http://mirror.centos.org/centos/5.5/os/SRPMS/libevent-1.4.13-1.src.rpm
If you have experience in buildling packages from a tarball, using rpm to build software is not much of a leap really. It will never be as simple as installing a premade binary package, however.
I use checkinstall on Debian. It should not be so different on CentOS. I use it like that:
./configure
make
sudo checkinstall make install # fakeroot in place of sudo usually works, for more security
# then install the generated package