So I'd like to set up a Linux machine for Haskell development, with one huge caveat -- no root privileges on this machine. We could of course get the admins to install GHC for us, eventually. However, in the long term we'd then need to hassle them whenever we want to upgrade, etc. So it's much better to do everything in userland, which also means we'll want to install the C libraries we link against in userland as well, to keep everything as hassle-free as possible.
So, the question is, how, soup-to-nuts, would I go about doing a purely userland install of GHC? The machine will have gcc, and the usual toolchain. If necessary, we can start with a typical ghc install to get the ball rolling, but it would be nice not to.
Additionally, any tips on managing an environment like this would be appreciated, especially involving how such a setup can be manageable with multiple devs/accounts.
I did this too. I created a directory ~/usr and passed --prefix=$HOME/usr to all configure scripts. Using the Haskell Platform makes this process even smoother.
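For concreteness, the per-package pattern is roughly this (a sketch, assuming a source tarball already unpacked into the current directory):

mkdir -p ~/usr
./configure --prefix=$HOME/usr
make
make install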
You obviously need a directory that all pertinent users have at least read permission on -- say /home/foo, with subdirectories bin, lib, share and .cabal. Then run ./configure --prefix=/home/foo and make && make install, and make sure that /home/foo/* comes before /usr/* in everybody's PATH, LIBRARY_PATH, etc. You should probably start by installing gcc and the C libraries there, and once everything C-related is installed, install GHC.
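As a sketch of the shared environment (assuming the /home/foo prefix above, plus an include/ directory for C headers), each developer's shell startup file would contain something like:

# put the shared prefix ahead of the system directories
export PATH="/home/foo/bin:$PATH"
export LIBRARY_PATH="/home/foo/lib:$LIBRARY_PATH"
export LD_LIBRARY_PATH="/home/foo/lib:$LD_LIBRARY_PATH"
export C_INCLUDE_PATH="/home/foo/include:$C_INCLUDE_PATH"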
I managed to install ghc through stack by following these instructions. It worked like a charm; the only additional things I had to do were install the GMP library and add it to LD_LIBRARY_PATH.
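For reference, the GMP part looked roughly like this (a sketch; the version number and install prefix are placeholders):

tar xf gmp-6.1.2.tar.xz
cd gmp-6.1.2
./configure --prefix=$HOME/usr
make && make install
# make sure the dynamic linker can find the userland libgmp
export LD_LIBRARY_PATH="$HOME/usr/lib:$LD_LIBRARY_PATH"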
If you want to use stack to install GHC or GHCi, follow this official manual:
Download the tar.gz file from the release link (curl or wget will do; you can even scp a local copy up to the remote server).
Extract the file with tar xvzf, enter the folder, and test whether ./stack runs properly.
Add
export PATH="<stack_path>:$PATH"
to ~/.bashrc
Every time you start a terminal, run source ~/.bashrc
Install GHCi locally:
stack ghci
It will install ghci automatically and launch it.
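Put together, the whole sequence looks roughly like this (a sketch, assuming the tarball is extracted in your home directory; the release URL and X.Y.Z version are placeholders taken from the official release page):

wget https://github.com/commercialhaskell/stack/releases/download/vX.Y.Z/stack-X.Y.Z-linux-x86_64.tar.gz
tar xvzf stack-X.Y.Z-linux-x86_64.tar.gz
cd stack-X.Y.Z-linux-x86_64
./stack --version                      # check that the binary runs
echo 'export PATH="$HOME/stack-X.Y.Z-linux-x86_64:$PATH"' >> ~/.bashrc
source ~/.bashrc
stack ghci                             # fetches GHC into ~/.stack, then launches GHCi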
I'm writing a program that requires LLVM, and thinking of using autotools to ship it on Linux, so from the user's viewpoint the process would look like the well-known ./configure && make && sudo make install.
With autotools, one normally relies on the system package manager to install dependencies. The problem is that, for whatever reason, this doesn't work with LLVM; on Ubuntu 14.04, apt-get thinks the latest version is 3.4, whereas a more recent version would actually be needed. Thus, I need to supply a script to download and build LLVM first (a local copy thereof, not interfering with any older version that might be on the system), a process which takes a few hours.
The most obvious place to put this process is at the start of configure. Is this considered normal and reasonable? Or is there a convention that configure should only contain the things autotools normally puts in it, and installing dependencies should be another script that the user runs first and separately? In the latter case, is there a convention regarding what that separate script should be called?
Don't install anything during configure. The script's name is "configure", not "install-dependencies".
Write a configure check, and if LLVM is missing, give the user an explanation of how to install it. If necessary, provide a separate script to download LLVM.
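A minimal sketch of what that check can amount to (this assumes llvm-config is how you detect LLVM, and the fetch-llvm.sh helper script is hypothetical):

# inside configure, or a check it generates
if ! command -v llvm-config >/dev/null 2>&1; then
    echo "configure: error: LLVM was not found." >&2
    echo "Install a newer LLVM than your distribution's 3.4, or run ./fetch-llvm.sh to build a local copy." >&2
    exit 1
fi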
It is good practice to run configure (and make) as a normal unprivileged user, not as root. So you may not even have permission to install anything, and you would have to check whether sudo is installed, etc.
It may also happen that the system the user is installing on has no network connectivity (firewall, etc.), so your download will fail.
I have version 3.0.1 of Alex installed in /usr/bin. I think the Haskell Platform originally put it there (although I'm not 100% sure...).
Unfortunately, version 3.0.1 is bugged, so I need to upgrade to 3.0.5. I tried using cabal to install the latest version of Alex, but cabal install alex-3.0.5 put the executable in .cabal/bin in my home folder instead of in /usr/bin.
Do I just manually copy the executable to /usr/bin? (That sounds like a lot of trouble to do every time.)
Do I change my PATH environment variable so that .cabal/bin comes before /usr/bin? (I'm afraid that an "ls" executable or similar in the .cabal folder might end up messing up my system.)
Or is there a simpler way to go at it in general?
I want to first point out the layout that works well for me, and then suggest how you might proceed in your particular situation.
What works well for me
In general, I think that a better layout is to have the following search path:
1. directories with important non-Haskell-related binaries
2. the directory that cabal install installs to
3. the directory that binaries from the Haskell Platform are in
This way, you can use cabal install to update binaries from the Haskell Platform, but they cannot accidentally shadow a non-Haskell-related binary.
(On my Windows machine, this layout is easy to achieve, because the binaries from the Haskell platform are installed in a separate directory by default. So I just manually adapt the search path and that's it. I don't know how to achieve it on other platforms).
Suggestion for your particular situation
In your specific situation with the Haskell platform binaries already installed together with the non-Haskell related binaries, maybe you can use the following layout for the search path:
1. a directory containing links to some of the binaries in 3 below
2. the directory with important non-Haskell-related binaries and Haskell Platform binaries
3. the directory that cabal install installs to
This way, binaries from cabal install cannot accidentally shadow the important stuff in 2. But if you decide you want to shadow something from the Haskell Platform, you can manually add a link in 1. If it's a soft link, I think you only have to do that once per program name, and then you can call cabal install for that program to update it. You could even look up which executables are bundled with the Haskell Platform and do that once and for all.
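For example (a sketch, assuming ~/bin is the directory playing the role of 1, and that you want the cabal-installed alex to win):

mkdir -p ~/bin
ln -s ~/.cabal/bin/alex ~/bin/alex   # soft link; a later cabal install alex updates the target in place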
On second thought, putting ~/.cabal/bin in front of /usr/bin in the PATH is simpler and is what most people already do.
It's also not a big deal, since only cabal will put files in .cabal/bin, so it should be predictable, with little risk of overwriting anything.
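That approach amounts to one line in your shell startup file, for example:

export PATH="$HOME/.cabal/bin:$PATH"   # ~/.cabal/bin now shadows /usr/bin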
I have a question about the structure of the source code from a cygport package.
Here is the contents of a Cygports source file:
the actual source bundle for the project (tar.gz, tar.bz2, etc.)
any number of *.patch files
a .cygport file
I am trying to build gedit-3.4.2 from cygports repository.
How does the .cygport file help me run the proper options in ./configure?
For instance, in gedit, if I don't specify --disable-spell it won't proceed due to an error. How do I get the list of ./configure options that were used to build the project when the cygport was built?
Is there some way we can use the cygport executable to build the cygport and change the prefix too?
Here is the contents of gedit-3.4.2-1.cygport:
inherit python gnome2
DESCRIPTION="GNOME text editor"
PATCH_URI="3.4.2-cygwin.patch"
DEPEND="gnome-common gtk-doc
girepository(Gtk-3.0)
pkgconfig(enchant)
pkgconfig(gtksourceview-3.0)
pkgconfig(libpeas-gtk-1.0)"
PKG_NAMES="${PN} ${PN}-devel"
PKG_HINTS="setup devel"
gedit_CONTENTS="--exclude=gtk-doc --exclude=libgedit* etc/ usr/bin/ usr/lib/gedit/ ${PYTHON_SITELIB#/} usr/share/"
gedit_devel_CONTENTS="usr/include/ usr/lib/gedit/libgedit* usr/lib/pkgconfig/ usr/share/gtk-doc/"
DIFF_EXCLUDES="*.desktop.in *.schemas.in *-marshal.h"
CYGCONF_ARGS="--libexecdir=/usr/lib --enable-python"
KEEP_LA_FILES="none"
EDIT: Someone from the Cygwin Ports mailing list said:
"The configure options are
--libexecdir=/usr/lib --enable-python
Which is from CYGCONF_ARGS."
Here is the contents of a Cygports source file:
You'd do better to think of it as a Cygwin package source file.
cygport is simply a tool for automating the creation of Cygwin binary and source packages. It is the primary tool available, but unlike with some other packaging systems, there's really nothing forcing you to use it. It is quite possible to build a Cygwin package entirely by hand, since it is really nothing more than a tarball that Cygwin's setup.exe can blindly unpack into the Cygwin root directory (typically c:\cygwin) with the expectation that this will put the package's files in sensible locations.
Before cygport existed, people did build their own ad hoc packaging systems. Many Cygwin package maintainers still use these tools they created. (Yours truly included; two of my three packages use cygport, but the third still uses a custom build system.)
Ultimately, you want to read the cygport manual, in /usr/share/doc/cygport/manual.html.
(Yes, I know, "RTFM" answers are frowned on here. But, as one who currently maintains two cygport based packages in the official Cygwin package repository, please believe me when I tell you that the manual is still the single best resource available on this topic.)
How does the .cygport file help me run the proper options in ./configure?
As you found out through other resources, you'd first need to edit the CYGCONF_ARGS value in the .cygport file.
The simplest possible step after that is cygport gedit-3.4.2-1.cygport all. That attempts to rebuild all the binary packages in a single step. It also builds a new source package containing updated .cygport and patch files.
If something breaks in the all build process, it is usually faster to switch to using the sub-commands contained by all instead of completely restarting the process. The all step just runs prep, compile, install, package, and finish for you, in that order. For instance, if all fails during the compilation step, there's probably no need to repeat the prep step.
(It is exceptionally uncommon for cygport or a sane build system to wreck the build tree, forcing you to re-run prep. Far more commonly, you end up needing to re-do prep when you manually wreck the build tree while trying to get a new package to build for the first time and need to start over.)
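A manual walk through those same steps would look something like this, using the sub-commands named above:

cygport gedit-3.4.2-1.cygport prep
cygport gedit-3.4.2-1.cygport compile
cygport gedit-3.4.2-1.cygport install
cygport gedit-3.4.2-1.cygport package
cygport gedit-3.4.2-1.cygport finish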
For instance, in gedit, if I don't specify --disable-spell it won't proceed due to an error.
You can probably fix that by installing the libaspell-devel package from the official Cygwin package repository with setup.exe.
Personally, I wouldn't disable any feature unless it meant installing unofficial packages, such as those from the Cygwin Ports project.[*] It is nice to have Cygwin Ports repository, but because it contains so many packages, installing one can end up creating an "install the world" situation: package A depends on packages B, C and D, and C depends on E, F, G, H, and G depends on I, J, K, and... Dependency hierarchies within the Cygwin package repo tend to be flatter and narrower than those in the Cygports repo.
Is there some way we can use the cygport executable to build the cygport and change the prefix too?
You have guessed that you just add --prefix=/my/private/program/tree to CYGCONF_ARGS, I trust.
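In other words, the edited line in the .cygport file would look something like this (the prefix path is just an example):

CYGCONF_ARGS="--libexecdir=/usr/lib --enable-python --prefix=/my/private/program/tree"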
[*] If you are feeling confused about "Cygwin Ports" and cygport, the naming similarity is no coincidence. cygport is a tool created by Yaakov Selkowitz for himself when creating the Cygwin Ports package repository. Later, it became popular enough among other Cygwin package maintainers that it pushed out most of the competing build systems.
I am running 64-bit Linux and I am attempting to build the LLVM trunk. I follow the instructions to the letter, and invoke configure with the arguments I want, followed by make. Running make install leaves each directory with no action, and running locate on a name of an llvm executable (such as clang) comes up with no results.
I do not understand what could be wrong here, but I am quite sure that no executables are produced. This exact process works for software in general. Is there some absurdly obvious thing that I am missing?
I'm using gcc 4.5 and make 3.81.
Depending on whether you asked for a debug or a release build, check the bin subdirectory of the Debug or Release (alternatively, Debug+Asserts or Release+Asserts) directory inside your build directory for the binaries.
If there is still nothing there, go into tools/ and invoke make directly to check what's going on. Running make VERBOSE=1 might provide some additional information.
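Concretely, something like (the exact directory name depends on your configure flags):

ls Release+Asserts/bin          # or Debug+Asserts/bin, Release/bin, Debug/bin
cd tools && make VERBOSE=1      # prints the full compile/link command lines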
You should probably describe what actually happens, and show exactly how you invoked configure and make.
Here is what has been working for me on the last 4 or so Ubuntu 64 bit distros.
svn co http://llvm.org/svn/llvm-project/llvm/trunk llvm
cd llvm
cd tools
svn co http://llvm.org/svn/llvm-project/cfe/trunk clang
cd ..
./configure --enable-optimized --disable-doxygen --prefix=/llvm
make
make install
I've written a C++ program (command line, portable code) and I'm trying to release a Linux version at the same time as the Windows version. I've written a makefile as follows:
ayane: *.cpp *.h
	g++ -Wno-write-strings -o ayane *.cpp
Straightforward enough so far; but I'm given to understand it's customary to have a second step, make install. So when I put the install: target in the makefile... what command should be associated with it? (If possible I'd prefer it to work on all Unix systems as well as Linux.)
Installation
A less trivial installer will copy several things into place, first ensuring that the appropriate paths exist (using mkdir -p or similar). Typically something like this:
the executable goes in $INSTALL_PATH/bin
any libraries built for external consumption go in $INSTALL_PATH/lib or $INSTALL_PATH/lib/yourappname
man pages go in $INSTALL_PATH/share/man/man1 and possibly other sections if appropriate
other docs go in $INSTALL_PATH/share/yourappname
default configuration files go in $INSTALL_PATH/etc/yourappname
headers for others to link against go in $INSTALL_PATH/include/yourappname
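As a sketch, an install target following that layout (using the INSTALL_PATH variable discussed below; the man page ayane.1 is hypothetical) might look like:

install: ayane
	mkdir -p $(INSTALL_PATH)/bin $(INSTALL_PATH)/share/man/man1
	cp ayane $(INSTALL_PATH)/bin/
	cp ayane.1 $(INSTALL_PATH)/share/man/man1/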
Installation path
The INSTALL_PATH is an input to the build system, and usually defaults to /usr/local. This gives your user the flexibility to install under their $HOME without needing elevated permission.
In the simplest case just use
INSTALL_PATH?=/usr/local
at the top of the makefile. Then the user can override it by setting an environment variable in their shell.
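For example, a user without root could then install entirely under their home directory (the $HOME/.local prefix is just one common choice):

INSTALL_PATH=$HOME/.local make install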
Deinstallation
You also occasionally see make install implementations that build a manifest to help with deinstallation. The manifest can even be written as a script that does the work.
Another approach is just to have a make uninstall that looks for the things make install places, and removes them if they exist.
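A minimal uninstall target in that style might look like this (mirroring the install sketch above, including the hypothetical man page):

uninstall:
	rm -f $(INSTALL_PATH)/bin/ayane
	rm -f $(INSTALL_PATH)/share/man/man1/ayane.1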
In the simplest case, make install just copies the newly created executable into /usr/local/bin. Of course, it's usually more complicated than that.
Notice that most of these operations require special rights, which is why make install is usually invoked using sudo.
make install is usually the step that "installs" the binary into the correct place.
For example, when compiling Vim, make install may place it in /usr/local/bin
Not all makefiles provide a make install target.