I'd like to snapshot the global Hackage database into a frozen, smaller one for my company's deploys. How can one most easily copy out some segment of Hackage onto a private server?
Here's one script that does it in just about the simplest way possible: https://github.com/jamwt/mirror-hackage
You can also use the MirrorClient directly from the hackage2 repo: http://code.haskell.org/hackage-server/
This is not an answer to the question in the title, but an answer to my interpretation of what the OP wishes to achieve.
Depending on the level of stability you want in your production cycle, you can approach the problem in several ways.
I split the dependencies into two parts: packages that are in the Haskell Platform (keep every platform version used in production around), and a small number of packages outside it. Don't let anyone (including yourself) add more packages to your dependency tree just out of developer laziness. For the extra packages, use some kind of script to collect them from Hackage (locked to specific versions) with cabal fetch, and keep them somewhere safe. Create an install script that uses your saved packages, and run it whenever a new machine (developer) is added to your team.
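As a sketch of that kind of fetch script (the package names and versions below are placeholders; cabal fetch itself is the real command):

```shell
# fetch_pinned: download exact package versions from Hackage with 'cabal fetch',
# without building or installing them. The CABAL variable is overridable so the
# function can be dry-run or tested with a stub command.
fetch_pinned() {
    cabal_cmd=${CABAL:-cabal}
    for pkg in "$@"; do
        # 'cabal fetch' downloads the source tarball into the local package cache
        "$cabal_cmd" fetch "$pkg"
    done
}

# Example (pinned versions are placeholders -- substitute your own locked list):
# fetch_pinned text-1.2.5.0 containers-0.6.7
```

Keep the resulting tarballs under version control or on a shared server, and point your install script at them.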
yackage is great, but it all comes down to how you ship your product. If you have older versions in production, you need a yackage setup for every version, and that could get quite annoying after a couple of years.
You can download Hackage with Voker57's hackage-mirror.sh. You'll need curl for it to run. If you're using a Debian-based Linux distribution, you can install curl with apt-get install curl.
Though it's not a segment of Hackage, I've written a bash script that downloads the whole of Hackage, which can then easily be set up as a mirror behind an HTTP server. It also downloads all the required extras, such as GHC compilers, ready to be used with Stack.
Currently, a complete Hackage mirror occupies ~10 GiB (~100,000 packages, all versions), and the Stack-related extras such as GHC compilers another ~21 GiB (~200 files). Subsequent runs of the script skip anything already downloaded and fetch only what is new, so it's a pretty convenient way to "live offline" and sync up when online.
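The essence of such a mirror script is just fetching the package index and the individual tarballs (the URLs follow Hackage's actual layout; the package chosen here is arbitrary):

```
$ curl -LO https://hackage.haskell.org/01-index.tar.gz    # the package index cabal uses
$ curl -LO https://hackage.haskell.org/package/lens-5.2/lens-5.2.tar.gz
```

Serve the downloaded tree with any HTTP server and point your clients at it.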
Long story short, I'd like some guidance on the best way to get Haskell working on Arch Linux.
By "working" I mean everything: the ghci command line tool, installing packages I don't have (such as vector-space, which this answer to a question of mine refers to), and anything else an obstinate Haskell learner could need.
The Arch Linux wiki page on Haskell lists three (alternative?) packages for making Haskell work on the system, namely ghc, cabal-install, and stack. I have the first and the third installed on my system, but I think I must have installed the latter at some later point (unless it's a dependency of ghc) while tinkering around (probably in relation to Vim as a Haskell IDE). Furthermore, I have a huge number of haskell-* packages installed (why? Who knows? As a learner I must have repeatedly reached the point of saying "uh, let's try this!").
Are there any pros and cons ("cons", ahah) about each of those packages?
Can they all be used with/without conflicts?
Does any of them make any other superfluous?
Is there anything else I should be aware of that, based on what I've written, I seem to be ignorant about?
Arch Linux's choice of providing dynamically linked libraries in their packages tends to get in the way if you are looking to develop Haskell code. As an Arch user myself, my default advice would be to not use Arch's Haskell packages at all, and instead to install whatever you need through ghcup or Stack, starting from the guidance in their respective project pages.
You are basically there. Try the following:
ghci: If you get the Haskell REPL then it works.
stack ghci: Again you should get the Haskell REPL. There are a lot of versions of GHC, and stack manages these along with the libraries. Whenever you use a new version of GHC stack will download it and create a local installation for you.
stack is independent of your Linux package manager. The trouble is that your distro will only have the Haskell libraries it actually needs for any applications it has integrated, and once you step outside of those you are in dependency hell with no support. So I recommend that you avoid your distro Haskell packages. stack does everything you need.
If you installed stack from your Linux package manager then you might want to uninstall it and use a personal copy (i.e. in your ~/.local directory) instead. Then you can always say stack update to check you have the latest version.
Once you have stack going, create a project by saying stack new my-project simple. Then go into the project folder and start editing. You can work with just a .hs file and GHC if you really want, but it's painful; you will do much better with stack, even if you are just messing around.
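The whole workflow looks like this (the project name is arbitrary):

```
$ stack new my-project simple   # generate a minimal project skeleton
$ cd my-project
$ stack build                   # fetches GHC and dependencies on first run
$ stack ghci                    # REPL with the project's modules loaded
```

After that, edit the sources and re-run stack build or use :reload inside ghci.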
You'll also need an editor. Basic functionality like syntax highlighting is available in pretty much everything, but once you get past Towers of Hanoi you are going to want something better. I use Atom with ide-haskell-ghcide. This uses the Haskell Language Server under the hood, so you will need to install that too. I know a bunch of other editors have HLS support, but I don't have experience with them.
While the Nix/OS wiki and manuals provide a lot of excellent information, I am still having trouble getting an architectural overview. Apologies for the quantity and naivety of the questions; feel free to answer a subset:
1. What constitutes a Nix package?
From my reading of the manual a Nix package is:
i. A Nix expression that fetches the source and dependencies needed to build.
ii. A builder script.
iii. A listing on all-packages.nix.
The source and the binary, along with the generated derivations, are put in /nix/store, and channels automate updates, keeping them up to date efficiently by using a shared binary cache.
a. Is this correct and complete?
b. Where are the .nix expressions stored?
c. May I simply copy package folders between the /nix/store directories of different machines if they have the same architecture?
2. What constitutes a Nix environment?
a. Where and how are environments defined?
b. What about user profiles?
c. How does the nix-shell command work? Is it related to the nix-env command?
3. What is the relationship between NixOS's configuration.nix and Nix environments?
From the manual and wiki I gather that NixOS is a Nix package, and that Nix creates a basic system environment based on configuration.nix.
a. Is this true, and if so what do nixos-rebuild and nixos-install do besides this?
b. Is it possible to reverse the process, i.e. generate succinct package or configuration files from an environment?
c. What can I do with NixOS that I cannot do with Nix?
4. What are best practices when using Nix for creating portable and reproducible environments to share with colleagues?
a. What are the various approaches to sharing desktop, server and development environments?
b. What are the use-cases for these approaches?
c. What are their advantages and disadvantages vis-à-vis portability and accessibility?
5. Open bonus question: what else is critical to note about Nix/OS architecture?
1.a
Yes. You can also view Nix as a build tool that uses /nix/store as a cache; Nix being a package manager is just a side effect of this design.
1.b
Where your Nix expressions are stored depends on your setup. To figure this out, look at your $NIX_PATH variable, which points to locations where copies of the nixpkgs repo are located. Those copies were (sometimes still are) managed by the nix-channel tool, but in the future you will be able to point to nixpkgs like:
export NIX_PATH=nixpkgs=https://github.com/NixOS/nixpkgs/archive/16.03.tar.gz
You can read more about NIX_PATH in this blog post about nix search paths
1.c
Yes, packages can be copied between machines. Actually, there is already a tool for this: nix-copy-closure.
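For example (the host name here is hypothetical), you can build something locally and push its whole closure to another machine over SSH:

```
$ nix-build '<nixpkgs>' -A hello              # leaves a ./result symlink
$ nix-copy-closure --to user@other-machine ./result
```

nix-copy-closure copies the store path and all of its runtime dependencies, skipping anything the target already has.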
2.a
I believe you are talking here about Nix environments that you manage with nix-env. We usually refer to these as Nix profiles. What I said about the Nix search path (the NIX_PATH variable) in point 1 does not really apply to nix-env.
The nix-env tool uses ~/.nix-defexpr, which is part of NIX_PATH by default, but that's only a coincidence. If you empty NIX_PATH, nix-env will still be able to find derivations because of ~/.nix-defexpr.
2.b
A user profile is just a Nix environment (described in 2.a) that you can switch to anything else, e.g.:
nix-env --switch-profile ./result
where ./result is something in /nix/store or something that points into /nix/store. Then the above command will switch the ~/.nix-profile symlink with your ./result.
2.c
nix-shell is actually closer to the nix-build command. So let me first explain what nix-build does.
nix-build is used to build .nix files (and derivations, but explaining that would require explaining what a derivation is). An example of how you would use nix-build:
nix-build something.nix
The above command produces a ./result symlink, which points to something in /nix/store: Nix realises the build and stores the output in /nix/store.
nix-shell, on the other hand, does exactly what nix-build does, except that it does not run the builder script; instead it drops you into the build environment. That way you end up with an environment you can use to develop Nix expressions, including ones outside the nixpkgs repository (e.g. your private projects).
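Side by side, using the hello package from nixpkgs as an example:

```
$ nix-build '<nixpkgs>' -A hello   # builds, leaves a ./result symlink
$ ./result/bin/hello
Hello, world!
$ nix-shell '<nixpkgs>' -A hello   # same derivation, but drops you into its build environment
```

Inside the nix-shell you have the compiler and build inputs of the derivation on your PATH, which is exactly what you want for hacking on it.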
3.a
Nix installs the binary and NixOS creates configuration for that binary and hooks it up with the init system (systemd currently).
3.b
No. This is what other configuration managers do. Nix works the other way around. The difference in approach is nicely described in this blog post.
3.c
As said in 3.a, Nix only installs binaries, while NixOS also makes sure the binary is running.
4.a/b/c
Basically there is no limit; do whatever you think fits you. Once you understand the basic concepts you will find what works best for you. Look at other people's dotfiles/configurations and form your own opinions.
I use my collection of NixOS configurations to manage laptops for my family with the help of the system.autoUpgrade service.
On creating a (build-)reproducible environment, I wrote a blog post some time ago.
5.
My personal favorite tool that is coming (or is already here) is vulnix, which checks your current system/project against known vulnerabilities (CVEs). This makes Nix stand out against others, especially since it is so easy to use (no enterprise setup).
Another use case I found for Nix is building reproducible Docker images using the dockerTools helpers.
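A minimal sketch of that (the file name is hypothetical; dockerTools.buildImage is the real helper):

```nix
# hello-image.nix -- build a Docker image containing GNU hello
{ pkgs ? import <nixpkgs> {} }:
pkgs.dockerTools.buildImage {
  name = "hello";
  config.Cmd = [ "${pkgs.hello}/bin/hello" ];
}
```

Build it with nix-build hello-image.nix, then load the resulting tarball with docker load < result. Because the image contents come straight out of /nix/store, rebuilding gives you the same layers.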
Non-trivial projects are commonly split into several packages (in particular, I usually work with Visual Studio C# solutions containing 1-10 projects).
My current Haskell workflow is to run cabal clean && cabal configure && cabal install --force-reinstall every time I modify a package that is used by another one.
That works fine, but I wish I could work with several Haskell projects as if they were a single one (ideally, if projects A and B were both modified, then ghci in A would detect the changes in B).
The proposed solution (if one is possible) should also work when a certain package A (in development) is shared between several "workspaces".
I looked around, but the only related response (Haskell Cafe, "Working with multiple projects", 2009) suggests my current workflow as the solution.
Any tutorial explaining it (workspaces, shared "in development" packages, ...) will be welcome!
Thanks a lot!!! :)
(I'm working with ghc)
So, basically you can use cabal-dev to make a local sandbox of the packages you want to use for a given project. This will stop different projects with conflicting package requirements from mucking everything up.
Here is a good post on reddit explaining the basics.
http://www.reddit.com/r/haskell/comments/f3ykj/psa_use_cabaldev_to_solve_dependency_problems/
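In practice it amounts to this (run inside the project directory):

```
$ cabal install cabal-dev    # install the tool itself
$ cd my-project
$ cabal-dev install          # builds into a local ./cabal-dev sandbox
$ cabal-dev ghci             # ghci using the sandboxed package database
```

Each project gets its own ./cabal-dev directory, so their dependency sets never interfere.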
Suppose I want to use different versions of GHC, each of them with a different binary name.
Question 1. Can I use ./configure --prefix=ghc-some-version-dir for each of the installations and create symbolic links ghc-7.4.1, ghc-7.6.2, ghc-head without problems?
That is, after installation and creation of the binaries from source code. Using virtual environments would still be needed for building projects and their dependencies.
Question 2. What prevents us from uploading GHC to Hackage as a package named ghc-version, with a binary name that depends on its version? E.g. one could cabal install ghc-version-7.6.2 and get a binary ghc-7.6.2 in ~/.cabal/bin.
You don't need to do anything special. GHC already installs all of its executables with versioned names and links from the non-versioned name to the most recently installed version, e.g. a link from "ghc" to "ghc-7.6.1" or whatever you installed last. When you build from the repository, the version number is quite long and includes the date you built it.
I don't know for sure why GHC isn't on Hackage, but I presume it's because the build system is very complicated, and that cabal-izing it (and maintaining the cabalization) would be more work than it's worth.
There are several solutions:
Just use chroot
Use a package manager that handles multiple versions of the same library/software such as nix
There are scripts which have been written to handle this, such as https://github.com/spl/multi-ghc
Use GNU Stow as described in Brent Yorgey's blog post.
Ben Millwood has a solution where he just uses the -w flag; read his comment at: https://plus.google.com/u/0/100165496075034135269/posts/VU9FupRvRbU
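For the GNU Stow approach, the idea is to install each GHC into its own directory under the stow tree and let Stow manage the symlinks (paths follow Stow's usual convention):

```
$ ./configure --prefix=/usr/local/stow/ghc-7.6.2 && make install
$ cd /usr/local/stow
$ sudo stow ghc-7.6.2      # symlink its bin/, lib/, ... into /usr/local
$ sudo stow -D ghc-7.6.2   # later: remove the symlinks again
```

Switching versions is then just a matter of un-stowing one tree and stowing another.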
I wonder if we can reduce, just a little bit, the effort around packaging under Linux/Unix OS environments and software installations.
It is my stance that there is too much redundant effort around $subject.
I have been pondering ways to connect the build systems of $subject with some next-stage build tools, like easybuild (1) & openbuildservice (2); read below for more details.
To be more specific, last week I was able to take pkgsrc's repository, process the Makefiles via a tiny "pkg2eb" script to produce *.eb files for easybuild, and then feed many parallel GCC compilations with them.
That "blindly-driven" process ended up in >600 successful builds, i.e. these were packages that simply needed wget/configure/make/make install. Not bad for a first run; I just wonder if it can be done any better.
So:
In your experience, which OS has the cleanest/leanest pkgsrc/ports structure to be sourced and fed to other external tools? (This is NOT the same as asking which has the most available packages!)
Have you heard of any similar efforts trying to mass-produce packages from, e.g., a common source list in a structured manner (I mean, in a way transferable across different build systems)?
So, much relevant information is visible here:
http://www.mancoosi.org/edos/packages/ # lengthy description of various packaging formats
this one shows the higher-level picture:
http://www.mancoosi.org/edos/suggestions/ (esp. 2.1.1 Expressivity shortcomings)
Anyway, to answer the original question, the best bets as of now are:
RPM's .spec files
DEB control files
pkgsrc: possible, but some hackery is still needed
portage: quite clean; distinguishes between DEPEND and RDEPEND
macports: easy to parse; very detailed dependency information
ports: like pkgsrc; multiple dependency classes defined