Specifying a custom remote-repo in a cabal sandbox - haskell

I would like to work on a project in a cabal sandbox. But instead of using the same remote-repo as my non-sandboxed code (i.e., Hackage), I'd like to point to a different remote repo. I tried creating a cabal.config file in the project directory with a remote-repo line, but it seemed to have no effect; running cabal update after that indicated that Hackage was being downloaded, but not my custom repo.
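For reference, a cabal.config with a remote-repo line looks something like this (the repository name and URL here are placeholders):
remote-repo: my-custom-repo:http://example.com/packages/archive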
Is this use case supported, and if so, how do I achieve it?

This is in fact a Cabal bug; I've opened a GitHub issue about it.

Related

how to manage cabal sandboxes

The current documentation of cabal shows a sandbox subcommand.
The respective page on GitHub no longer contains the section on sandboxes.
I'm using cabal version 3.2.0.0, but the sandbox subcommand is absent. What is the correct way to manage sandboxes with cabal?
Apparently there's an overhaul of the documentation going on; there's mention of Nix-style/new-/v2 commands, but it's unclear to a noob what the canonical way of using sandboxes with cabal is.
They're no longer needed. The Nix-style store does everything sandboxes did, but better. Just use cabal build (cabal v2-build for pre-3.0 versions of cabal) and other cabal commands with impunity in a bare, sandbox-free directory.
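For example, a typical Nix-style workflow in a plain project directory (my-project stands for any package directory):
$ cd my-project
$ cabal build    # dependencies are cached in the shared store (~/.cabal/store), not a sandbox
$ cabal run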

How to install the dorodango package manager for Scheme

I need to use Purely Functional Data Structures in Scheme (pfds), so I found a resource for it (https://github.com/ijp/pfds) and cloned it with git; that much was successful.
I have Racket installed and need pfds to work with it. It seemed that I first had to install pfds using a package manager called "dorodango" (https://gitlab.com/rotty/dorodango.git).
So the problem now is how to install the dorodango package manager from the resource I found on GitLab.
Can someone help? Please.
Whoa! If I understand you correctly, this is Much Easier than you think.
First, though, it looks to me like you need to back up many steps.
To install the pfds package for racket, you need to do one (and only one) of these two things. Either:
run raco pkg install pfds from the command line, or
use the package manager built into DrRacket (if you're not a command-line person).
No need to git clone anything, no need to use dorodango. Perhaps I'm misunderstanding something about your situation?
(For more information, check out the Getting Started with Packages guide.)
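If you take the command-line route, the whole installation is:
$ raco pkg install pfds
$ raco pkg show pfds    # verify it now appears among the installed packages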

Why do cabal configure?

In the Cabal User Guide it says that Cabal is often compared with autoconf and automake, since the command-line interface for actually configuring and building packages follows the same steps:
./configure --prefix=...
make
make install
compared to
cabal configure --prefix=...
cabal build
cabal install
My understanding is that ./configure uses a config file (produced by autoconf) to adapt the make process to the environment in which it will run and also to check dependencies. ./configure therefore always has an "input" to conform to. But if cabal configure is not given any arguments, what does it do, and why is it necessary before running cabal build?
The cabal configure step does at least two things I know of:
Check that the package description parses OK.
Check that all required dependencies are already installed (and report an error if not).
Basically it's running the constraint solver to decide exactly which packages you're going to build against. (E.g., if you have several versions of ByteString installed, which version are you going to use? Well it might depend on which version the packages you depend on are expecting...)
Also I believe it's possible to supply options at configure time which change exactly which features of the package get built (but I don't have experience with this).
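For example (the extra-feature flag here is hypothetical and would have to be declared in the package's .cabal file; --constraint pins the solver's package choice mentioned above):
$ cabal configure --constraint="bytestring == 0.10.*" --flags="extra-feature"
$ cabal build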
I think originally you had to call configure before you could call build, but I believe now the cabal command-line tool does that step for you automatically in many cases. (E.g., cabal run now seems to automatically reconfigure if the package description file is newer than the configuration DB.)

how to make cabal sandbox aware of (installed) packages in other locations?

When I have a sandbox, it seems cabal install ignores packages in $HOME/.ghc/x86_64-linux-7.8.4/package.conf.d.
How can I configure the sandbox such that these packages become visible?
I see a vague reference to --package-db=db in https://www.haskell.org/cabal/users-guide/installing-packages.html#sandboxes-advanced-usage
but I understand neither how nor when to use it. (With sandbox init? configure? install? None seems to work; none gives any error message either.)
I know about add-source but my question refers to installed packages.
The whole point of the sandbox is that it ignores your local package database.
If you want to share installations across many sandboxes, you may install to the global database; but then you should be very careful, as fixing a broken package there is much more difficult. Keep it to really core packages that you expect to be widely shared across many, many projects -- not just the half dozen you're stressing out about right now for your job.
Alternately, you may share one sandbox between the builds of many packages; simply set the CABAL_SANDBOX_CONFIG variable to an absolute path pointing to the appropriate cabal.sandbox.config file. This is significantly safer, and much more flexible, as you can choose how widely your installed packages are shared (and in bad cases, simply nuke the sandbox and start over).
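A minimal sketch, assuming the shared sandbox lives at ~/shared-sandbox:
$ cd ~/shared-sandbox && cabal sandbox init    # one-time setup of the shared sandbox
$ cd ~/src/some-project
$ export CABAL_SANDBOX_CONFIG=$HOME/shared-sandbox/cabal.sandbox.config
$ cabal install --only-dependencies            # installs into the shared sandbox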
Here is something you can try: copy (or symlink) the files from ~/.ghc/{arch-os-ghc-version}/package.conf.d to the sandbox's {arch-os-ghc-version}-packages.conf.d directory.
One open question is how to handle the package.cache file. The following procedure (sketched in commands below) seems to be a safe way to proceed:
Start with an empty sandbox
Copy the package.conf.d files from ~/.ghc to the sandbox (including package.cache)
Add packages to the sandbox via cabal install --only-dependencies
I don't know if the package.cache file is required or if there is a way to rebuild it.
One disadvantage is that cabal install --only-deps seems to reinstall broken packages in the sandbox even if they are not required by your application. Maybe there is a work-around for this.
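Putting the procedure above into commands (paths assume GHC 7.8.4 on x86_64 Linux, as in the question; your sandbox's db directory name may differ):
$ cabal sandbox init
$ cp ~/.ghc/x86_64-linux-7.8.4/package.conf.d/* \
    .cabal-sandbox/x86_64-linux-ghc-7.8.4-packages.conf.d/
$ cabal install --only-dependencies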

Run Happstack app without cabal

I'm trying out Happstack. I installed Happstack and created a project: happstack new project web. A new folder 'web' was created, with the guestbook project under it. So now I want to run it. The only way I could find is to run cabal install, but I want to run my app without installing it with cabal! Executing run.sh gives the error: Could not find module 'Paths_guestbook'. How can I do it?
Edit:
In general, is there a way to run a Happstack app without a rebuild, like in Snap?
In general, you can always build Cabal projects without installing simply by doing:
$ cabal configure
$ cabal build
The resulting executable will usually be called dist/build/<project>/<project>.
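For the guestbook project from the question, that would be something like (assuming the executable is also named guestbook):
$ ./dist/build/guestbook/guestbook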
The specific error you're getting is because the code must be built with Cabal to get the Paths_guestbook module, which will contain information about the location of data files used by it. (It may be the case that it's unable to find these data files if you run the executable without installing it; in that case, you'll need a more elaborate solution, such as cabal-dev.)
(I'm not a Happstack user, so I don't know if there's an official way to accomplish this, but this should work for basically any Cabal-based project in general. The repository shows that run.sh was last modified in 2009, so I suspect it has simply bit-rotten. It doesn't do anything special, though, so cabal build should work just fine.)
SHORT VERSION:
The run.sh seems to be missing an include parameter. Modify it to look like this:
#!/bin/sh
# -i adds directories to the module search path; src-interactive-only
# contains a stand-in Paths_guestbook for running outside of Cabal.
runghc -isrc -isrc-interactive-only src/Main.hs
I have updated the run.sh in darcs to include this change.
LONG VERSION:
Normally that flag is not needed for Happstack applications; you can usually just do runhaskell Main.hs. But in this particular example, Main.hs explicitly imports:
import Paths_guestbook (version)
which is used in the versionInfo function so that the server can report its own version number. The version number in src-interactive-only is hardcoded, though, and will generally be out of date; it is only correct if you actually build with cabal.
The Paths_guestbook module is normally created automatically when cabal build is run. So, another fix would be to change the run.sh to:
#!/bin/sh
# dist/build/autogen is where cabal build generates Paths_guestbook
runghc -isrc -idist/build/autogen src/Main.hs
And run cabal configure && cabal build once. After that you will be able to use run.sh (until you do a cabal clean).
Another option would be to set a CPP flag in the .cabal file, and only import Paths_guestbook when the application is being built via cabal.
For example in the happstack.com source code:
http://patch-tag.com/r/stepcut/happstackDotCom/snapshot/current/content/pretty/Main.hs
On line 40 (or so) you will see an #ifdef __CABAL__. happstack.com needs to know where to find static content such as .css files. When doing runhaskell Main.hs in the local directory, it will look for the files in a sub-directory of the local directory. If you do cabal install, it will instead look wherever cabal installs the data files. Or, you can override the default location with command-line arguments (which is what the Debian packaging for that app does).
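A minimal sketch of that pattern (assuming you define the macro yourself via cpp-options: -D__CABAL__ in the executable section of the .cabal file; module and function names follow the guestbook example):
{-# LANGUAGE CPP #-}
module Main where

#ifdef __CABAL__
-- Paths_guestbook is generated by Cabal at build time.
import Paths_guestbook (version)
import Data.Version (showVersion)

versionInfo :: String
versionInfo = showVersion version
#else
-- Fallback when run via runhaskell, outside a Cabal build.
versionInfo :: String
versionInfo = "development (not built with Cabal)"
#endif

main :: IO ()
main = putStrLn ("guestbook version: " ++ versionInfo)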
Unfortunately, the happstack new project command is somewhat bitrotten because the author became a parent and has not had time to work on it in a long time. It will likely be removed from the upcoming Happstack release in order to reduce confusion.
In order to be truly useful, I think the command needs to prompt for a bunch of values and then generate a new project from a set of templates. Similar to how 'cabal init' works. But currently, no one has volunteered the time to make that happen.
To see changes to your source appear automatically without restarting the server, you can use the happstack-plugins library. There is a screencast of it here:
http://happstack.blogspot.com/2010/10/recompile-your-haskell-based-templates.html
