Install local library - haskell

How can I "install" a local (ie it is on my hard-drive, not on the internet) .hs file to use it across multiple programs? Specifically, if I edit the library, those edits should be available to all programs, so no copy-pasting the library into every program’s directory.
To compile my programs, I still want to type ghc main.hs, not a page of file-paths.
This may be obvious from the above, but I don’t have any knowledge of cabal.

Make sure you have the proper Haskell platform installed, including cabal. Alternatively you can use stack, which is more modern and in many ways better, but IMO cabal is still more practical for a simple project like yours. The following assumes you use cabal on a typical Linux machine.
If not already done, give your file a meaningful hierarchical module name, according to what it does. module Video.Demuxing.FFMPEG or Data.List.Shuffle.Deterministic, for example. Let's assume you call it Foo.Bar.Baz. I.e. the file should begin with
module Foo.Bar.Baz where
... -- start code
Put the file in a corresponding folder structure, i.e.
if not already done, make a new project directory, for example
mkdir /home/Uꜱᴇʀɴᴀᴍᴇ/haskell/foobar
cd /home/Uꜱᴇʀɴᴀᴍᴇ/haskell/foobar
In that project directory make a subdirectory Foo, therein a directory Bar, and put your file in it as Baz.hs.
mkdir -p Foo/Bar
cp WʜᴇʀᴇEᴠᴇʀ/Yᴏᴜʀ/Fɪʟᴇ/Wᴀꜱ/Bᴇꜰᴏʀᴇ.hs Foo/Bar/Baz.hs
Make the file part of a new cabal library.
cabal init
This will ask you a couple of questions; hopefully it'll be clear what to choose. For the most part the defaults will be fine, in which case just press Enter.
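The part of the generated .cabal file that matters here is the library section; it should end up looking roughly like this sketch (using the placeholder names from above; the exact layout varies between cabal versions):
library
  exposed-modules:    Foo.Bar.Baz
  hs-source-dirs:     .
  build-depends:      base >=4 && <5
  default-language:   Haskell2010
The important field is exposed-modules; make sure hs-source-dirs points at the directory containing the Foo/ folder (here, the project root).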
Put everything under version control, if you haven't already. (If you don't know what this is, I suggest you read some Github tutorials. You can skip this step, but the sooner you accustom yourself to some VCS, the better.)
Install your project locally.
cabal install
If everything has worked without errors, you can then, in a Haskell file stored somewhere else on the computer, simply
import Foo.Bar.Baz
and have everything available that you've defined in that project module. There is no need to tell GHC where Foo.Bar.Baz is stored when compiling; it has already been registered at this point. You can also launch ghci anywhere and :m +Foo.Bar.Baz.
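For example, a minimal client program anywhere on the disk, sketched here just to show that the import now resolves:
-- main.hs, anywhere on the machine; compile with a plain  ghc main.hs
import Foo.Bar.Baz   -- found via the installed package, no file paths needed

main :: IO ()
main = putStrLn "Foo.Bar.Baz is in scope"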

Related

How to manage development and installed versions of a shared library?

In short: This question is basically about telling Linux to load the development version of the .so file for executables in the dev directory and the installed .so file for others.
In long: Imagine a shared library, let's call it libasdf.so. And imagine the following directories:
/home/user/asdf/lib: libasdf.so
/home/user/asdf/test: ... perform_test
/opt/asdf/lib: libasdf.so
/home/user/jkl: ... use_asdf
In other words, you have a development directory for your library (/home/user/asdf) and you have an installed copy of its previous stable version (/opt/asdf) and some other programs using it (/home/user/jkl).
My question is, how can I tell Linux to load /home/user/asdf/lib/libasdf.so when executing /home/user/asdf/test/perform_test and to load /opt/asdf/lib/libasdf.so when executing /home/user/jkl/use_asdf? Note that, even though I specify the directory with -L during linking, Linux uses other methods (for example /etc/ld.so.conf and $LD_LIBRARY_PATH) to find the .so file.
The reason I need such a thing is that, of course, the executables in the development directory need to link with the latest version of the library, while the other programs would want to use the stable version.
Putting ../lib in the library path doesn't seem like a secure idea, not to mention not completely correct since you can't run the test from a different directory.
One solution I thought about is to have perform_test link with libasdf-dev.so and upon install, copy libasdf-dev.so as libasdf.so and have others link with that. This solution has one problem though. Imagine the following additional directory:
/home/user/asdf/tool: ... use_asdf_too
Which gets installed to:
/opt/asdf/bin: use_asdf_too
In my solution, it is unknown what use_asdf_too should be linked against. If linked against libasdf.so, it wouldn't work properly if invoked from the dev directory and if linked against libasdf-dev.so, it wouldn't work properly if invoked from the installed location.
What can I do? How is this managed by other people?
Installed shared objects usually don't just end with ".so"; they usually also include their soname, such as libasdf.so.42.1. The .so file used for development is typically a symlink to the fully-versioned filename. The linker will look for the .so file and resolve it to the full filename, and the loader will then load the fully-versioned library instead.
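Schematically, the installed copy might look like this (reusing the version number from the example above):
/opt/asdf/lib/libasdf.so.42.1                  # the installed, fully-versioned library that actually gets loaded
/opt/asdf/lib/libasdf.so -> libasdf.so.42.1    # development symlink that ld resolves when you pass -lasdf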

How to build src from a CygPort?

I have a question about the structure of the source code from a cygport package.
Here is the contents of a Cygports source file:
the actual source bundle for the project (tar.gz, tar.bz2, etc.)
any number of *.patch files
a .cygport file
I am trying to build gedit-3.4.2 from cygports repository.
How does the .cygport file help me run the proper options in ./configure?
For instance, in gedit, if I don't specify --disable-spell, it won't proceed due to an error. How do I get the list of ./configure options that were used to build the project when the cygport was built?
Is there some way we can use the cygport executable to build the cygport and change the prefix too?
Here is the contents of gedit-3.4.2-1.cygport:
inherit python gnome2
DESCRIPTION="GNOME text editor"
PATCH_URI="3.4.2-cygwin.patch"
DEPEND="gnome-common gtk-doc
girepository(Gtk-3.0)
pkgconfig(enchant)
pkgconfig(gtksourceview-3.0)
pkgconfig(libpeas-gtk-1.0)"
PKG_NAMES="${PN} ${PN}-devel"
PKG_HINTS="setup devel"
gedit_CONTENTS="--exclude=gtk-doc --exclude=libgedit* etc/ usr/bin/ usr/lib/gedit/ ${PYTHON_SITELIB#/} usr/share/"
gedit_devel_CONTENTS="usr/include/ usr/lib/gedit/libgedit* usr/lib/pkgconfig/ usr/share/gtk-doc/"
DIFF_EXCLUDES="*.desktop.in *.schemas.in *-marshal.h"
CYGCONF_ARGS="--libexecdir=/usr/lib --enable-python"
KEEP_LA_FILES="none"
EDIT: Someone from the Cygwin Ports mailing list said:
"The configure options are
--libexecdir=/usr/lib --enable-python
Which is from CYGCONF_ARGS."
Here is the contents of a Cygports source file:
You'd do better to think of it as a Cygwin package source file.
cygport is simply a tool for automating the creation of Cygwin binary and source packages. It is the primary tool available, but unlike with some other packaging systems, there's really nothing forcing you to use it. It is quite possible to build a Cygwin package entirely by hand, since it is really nothing more than a tarball that Cygwin's setup.exe can blindly unpack into the Cygwin root directory (typically c:\cygwin) with the expectation that this will put the package's files in sensible locations.
Before cygport existed, people did build their own ad hoc packaging systems. Many Cygwin package maintainers still use these tools they created. (Yours truly included; two of my three packages use cygport, but the third still uses a custom build system.)
Ultimately, you want to read the cygport manual, in /usr/share/doc/cygport/manual.html.
(Yes, I know, "RTFM" answers are frowned on here. But, as one who currently maintains two cygport based packages in the official Cygwin package repository, please believe me when I tell you that the manual is still the single best resource available on this topic.)
How does the .cygport file help me run the proper options in ./configure?
As you found out through other resources, you'd first need to edit the CYGCONF_ARGS value in the .cygport file.
The simplest possible step after that is cygport gedit-3.4.2-1.cygport all. That attempts to rebuild all the binary packages in a single step. It also builds a new source package containing updated .cygport and patch files.
If something breaks in the all build process, it is usually faster to switch to using the sub-commands contained by all instead of completely restarting the process. The all step just runs prep, compile, install, package, and finish for you, in that order. For instance, if all fails during the compilation step, there's probably no need to repeat the prep step.
(It is exceptionally uncommon for cygport or a sane build system to wreck the build tree, forcing you to re-run prep. Far more commonly, you end up needing to re-do prep when you manually wreck the build tree while trying to get a new package to build for the first time and need to start over.)
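For example, resuming from the point of failure rather than from scratch might look like this (the sub-command names are the ones listed above):
cygport gedit-3.4.2-1.cygport compile
cygport gedit-3.4.2-1.cygport install
cygport gedit-3.4.2-1.cygport package
cygport gedit-3.4.2-1.cygport finish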
For instance, in gedit, if I don't specify --disable-spell, it won't proceed due to an error.
You can probably fix that by installing the libaspell-devel package from the official Cygwin package repository with setup.exe.
Personally, I wouldn't disable any feature unless it meant installing unofficial packages, such as those from the Cygwin Ports project.[*] It is nice to have Cygwin Ports repository, but because it contains so many packages, installing one can end up creating an "install the world" situation: package A depends on packages B, C and D, and C depends on E, F, G, H, and G depends on I, J, K, and... Dependency hierarchies within the Cygwin package repo tend to be flatter and narrower than those in the Cygports repo.
Is there some way we can use the cygport executable to build the cygport and change the prefix too?
You have guessed that you just add --prefix=/my/private/program/tree to CYGCONF_ARGS, I trust.
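Concretely, the line in gedit-3.4.2-1.cygport becomes something like (the prefix path is just an example):
CYGCONF_ARGS="--libexecdir=/usr/lib --enable-python --prefix=/my/private/program/tree"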
[*] If you are feeling confused about "Cygwin Ports" and cygport, the naming similarity is no coincidence. cygport is a tool created by Yaakov Selkowitz for himself when creating the Cygwin Ports package repository. Later, it became popular enough among other Cygwin package maintainers that it pushed out most of the competing build systems.

Run HAppStack app without cabal

I'm trying out HAppStack. I installed HAppStack and created a project: happstack new project web. A new folder 'web' was created with the project guestbook under it. So now I want to run it. The only way I could do it is to run cabal install. But I want to run my app without installing with cabal! Executing run.sh gives an error: Could not find module 'Paths_guestbook'. How can I do it?
Edit:
In general, is there a way to run HAppStack app without rebuild like in Snap?
In general, you can always build Cabal projects without installing simply by doing:
$ cabal configure
$ cabal build
The resulting executable will usually be called dist/build/<project>/<project>.
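For this project that presumably means (assuming the executable in the generated .cabal file is also named guestbook):
$ cabal configure
$ cabal build
$ dist/build/guestbook/guestbook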
The specific error you're getting is because the code must be built with Cabal to get the Paths_guestbook module, which will contain information about the location of data files used by it. (It may be the case that it's unable to find these data files if you run the executable without installing it; in that case, you'll need a more elaborate solution, such as cabal-dev.)
(I'm not a Happstack user, so I don't know if there's an official way to accomplish this, but this should work for basically any Cabal-based project in general. The repository shows that run.sh was last modified in 2009, so I suspect it has simply bit-rotten. It doesn't do anything special, though, so cabal build should work just fine.)
SHORT VERSION:
The run.sh seems to be missing an include parameter. Modify it to look like this:
#!/bin/sh
runghc -isrc -isrc-interactive-only src/Main.hs
I have updated the run.sh in darcs to include this change.
LONG VERSION:
Normally that flag is not needed for Happstack applications; you can usually just do runhaskell Main.hs. But in that particular example, Main.hs explicitly imports:
import Paths_guestbook (version)
which is used in the versionInfo function so that the server can report its own version number. The version number in src-interactive-only is hardcoded, though, and will generally be out of date, so it is only correct if you actually build with cabal.
The Paths_guestbook module is normally created automatically when cabal build is run. So, another fix would be to change the run.sh to:
#!/bin/sh
runghc -isrc -idist/build/autogen src/Main.hs
And run cabal configure && cabal build once. After that you will be able to use run.sh (until you do a cabal clean).
Another option would be to set a CPP flag in the .cabal file, and only import Paths_guestbook when the application is being built via cabal.
For example in the happstack.com source code:
http://patch-tag.com/r/stepcut/happstackDotCom/snapshot/current/content/pretty/Main.hs
In line 40 (or so) you will see a #ifdef __CABAL__. happstack.com needs to know where to find static content such as .css files. When doing runhaskell Main.hs in the local directory, it will look for the files in a sub-directory of the local directory. If you do cabal install, it will instead look wherever cabal installs the data files. Or you can override the default location with command-line arguments (which is what the Debian packaging for that app does).
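A minimal sketch of that CPP approach for the guestbook example, assuming the .cabal file passes cpp-options: -D__CABAL__ for the cabal build (similar to what the happstack.com code checks); the fallback string and the main wrapper are made up for illustration:
{-# LANGUAGE CPP #-}
module Main where

#ifdef __CABAL__
import Paths_guestbook (version)
import Data.Version (showVersion)

versionInfo :: String
versionInfo = showVersion version
#else
-- running via runghc/runhaskell, where Paths_guestbook does not exist yet
versionInfo :: String
versionInfo = "development build (not built with cabal)"
#endif

main :: IO ()
main = putStrLn versionInfo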
Unfortunately, the happstack new project command is somewhat bitrotten because the author became a parent and has not had time to work on it in a long time. It will likely be removed from the upcoming Happstack release in order to reduce confusion.
In order to be truly useful, I think the command needs to prompt for a bunch of values and then generate a new project from a set of templates. Similar to how 'cabal init' works. But currently, no one has volunteered the time to make that happen.
To see changes to your source appear automatically without restarting the server, you can use the happstack-plugins library. There is a screencast of it here:
http://happstack.blogspot.com/2010/10/recompile-your-haskell-based-templates.html

GHC Install Without Root

So I'd like to set up a Linux machine for Haskell development with one huge caveat: no root privileges on this machine. We could of course get the admins to install GHC for us, eventually. However, in the long term we would then need to hassle them whenever we want to upgrade, etc. So it's much better to do everything in userland. Which also means that we'll want to install the C libs we link to in userland as well, to keep everything as hassle-free as possible.
So, the question is, how, soup-to-nuts, would I go about doing a purely userland install of GHC? The machine will have gcc, and the usual toolchain. If necessary, we can start with a typical ghc install to get the ball rolling, but it would be nice not to.
Additionally, any tips on managing an environment like this would be appreciated, especially involving how such a setup can be manageable with multiple devs/accounts.
I did this too. I created a directory ~/usr and passed --prefix=$HOME/usr to all configure scripts. Using the Haskell Platform makes this process even smoother.
You obviously need a directory that all pertinent users have at least read permission on. Say /home/foo, with subdirectories bin, lib, share, .cabal. Then ./configure --prefix=/home/foo and make && make install, and make sure that /home/foo/* is before /usr/* in everybody's PATH, LIBRARY_PATH etc. You should probably start with installing gcc and c-libs there, and when everything C is installed, install ghc.
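In each user's shell startup file that comes down to something like this (a sketch with the /home/foo prefix from above; extend with whatever other *_PATH variables your toolchain needs):
export PATH=/home/foo/bin:$PATH
export LIBRARY_PATH=/home/foo/lib:$LIBRARY_PATH
export LD_LIBRARY_PATH=/home/foo/lib:$LD_LIBRARY_PATH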
I managed to install ghc through stack by following these instructions. It worked like a charm; the only additional thing I had to do was to install the GMP library and to add it to the LD_LIBRARY_PATH.
If you want to use stack to install ghc or ghci, follow this official manual (the steps are collected into a single sketch below):
download the tar.gz file from the release link (curl, wget, or even scp can copy the file to a remote server)
extract the file with tar xvzf, enter the folder, and test whether ./stack runs properly
add
export PATH="<stack_path>:$PATH"
to ~/.bashrc
Run source ~/.bashrc once to pick up the change (new terminals will source it automatically)
install ghci locally
stack ghci
It will install GHC automatically and launch GHCi.
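Put together, the steps above look roughly like this (the archive name and URL are placeholders for whatever the release page actually offers):
cd ~                                                  # assuming you keep the unpacked folder in your home directory
wget <release-url>/stack-X.Y.Z-linux-x86_64.tar.gz    # placeholder; use the real link from the release page
tar xvzf stack-X.Y.Z-linux-x86_64.tar.gz
cd stack-X.Y.Z-linux-x86_64
./stack --version                                     # check that it runs
echo 'export PATH="$HOME/stack-X.Y.Z-linux-x86_64:$PATH"' >> ~/.bashrc
source ~/.bashrc
stack ghci                                            # installs GHC under ~/.stack, then launches GHCi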

What should Linux/Unix 'make install' consist of?

I've written a C++ program (command line, portable code) and I'm trying to release a Linux version at the same time as the Windows version. I've written a makefile as follows:
ayane: *.cpp *.h
	g++ -Wno-write-strings -oayane *.cpp
Straightforward enough so far; but I'm given to understand it's customary to have a second step, make install. So when I put the install: target in the makefile... what command should be associated with it? (If possible I'd prefer it to work on all Unix systems as well as Linux.)
Installation
A less trivial installer will copy several things into place, first ensuring that the appropriate paths exist (using mkdir -p or similar). Typically something like this:
the executable goes in $INSTALL_PATH/bin
any libraries built for external consumption go in $INSTALL_PATH/lib or $INSTALL_PATH/lib/yourappname
man pages go in $INSTALL_PATH/share/man/man1 and possibly other sections if appropriate
other docs go in $INSTALL_PATH/share/yourappname
default configuration files go in $INSTALL_PATH/etc/yourappname
headers for others to link against go in $INSTALL_PATH/include/yourappname
Installation path
The INSTALL_PATH is an input to the build system, and usually defaults to /usr/local. This gives your user the flexibility to install under their $HOME without needing elevated permission.
In the simplest case just use
INSTALL_PATH?=/usr/local
at the top of the makefile. Then the user can override it by setting an environment variable in their shell.
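Putting that together for ayane, a minimal install target might look like the sketch below (the man page is hypothetical; recipe lines must be indented with a tab):
install: ayane
	mkdir -p $(INSTALL_PATH)/bin $(INSTALL_PATH)/share/man/man1
	cp ayane $(INSTALL_PATH)/bin/
	cp ayane.1 $(INSTALL_PATH)/share/man/man1/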
Deinstallation
You also occasionally see make installs that build a manifest to help with de-installation. The manifest can even be written as a script to do the work.
Another approach is just to have a make uninstall that looks for the things make install places, and removes them if they exist.
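A matching sketch for that approach, mirroring the install target above:
uninstall:
	rm -f $(INSTALL_PATH)/bin/ayane
	rm -f $(INSTALL_PATH)/share/man/man1/ayane.1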
In the simplest case you just copy the newly created executable into the /usr/local/bin path. Of course, it's usually more complicated than that.
Notice that most of these operations require special rights, which is why make install is usually invoked using sudo.
make install is usually the step that "installs" the binary into the correct place.
For example, when compiling Vim, make install may place it in /usr/local/bin
Not all Makefiles have a make install target.
