What is the purpose of nix-instantiate? What is a store derivation?

In the manual it is written:
The command nix-instantiate generates store derivations from
(high-level) Nix expressions.
But what are store derivations?
The manual says the following about store derivations:
A description of a build action. The result of a derivation is a store
object. Derivations are typically specified in Nix expressions using
the derivation primitive. These are translated into low-level store
derivations (implicitly by nix-env and nix-build, or explicitly by
nix-instantiate)
This is a little bit difficult to understand for a Nix newbie, and I found nothing more enlightening about nix-instantiate and store derivations by googling. I also asked on #nixos, but have received no answer yet.
Could someone please explain, with a simple example, what a store derivation is and what it is used for?
Why would one generate store derivations using nix-instantiate? Could you give a super simple, easy-to-understand example?

What is nix-instantiate good for?
The sole purpose of the nix-instantiate command is to evaluate Nix expressions.
The primary purpose of the Nix language is to generate derivations.
What is a store derivation?
A derivation (see example) is a computer-friendly representation of the build recipe used for building (realizing) a package. Derivations are files with the extension .drv, stored in the store directory, usually /nix/store.
These build recipes are understood by the Nix daemon and are used to ensure that all dependencies are built first and stored at their pre-computed paths. Once all dependencies are successfully built, the Nix daemon can look for a substitute, or realize the derivation locally. A detailed explanation is available in Eelco Dolstra's PhD thesis.
These files are created each time the nix-instantiate command evaluates the derivation function of the Nix language, unless the --eval command line option is provided.
Why would one generate store derivations using nix-instantiate?
If you are interested in the build output, you should prefer nix-build, which is equivalent to:
$ nix-store -r $(nix-instantiate '<nixpkgs>' -A hello)
In some cases you are not interested in the build results, but in the compile-time dependencies; for example, if you want to investigate the build-time dependencies of the hello package. Using the nix-store command as follows, you can request all the dependencies of the build recipe:
$ nix-store --tree -q $(nix-instantiate '<nixpkgs>' -A hello)

EDIT: this whole thing needs to be revised, because it is very misleading; for example:
A derivation is not function application, but the result of taking a Nix expression and translating it, together with its concrete arguments, into an alternative format.
All quotes are from Eelco Dolstra's PhD thesis.
Store derivations
A store derivation is a Nix expression with all
variability removed and translated into an alternative format.
This intermediate representation "describes a
single, static, constant build action" that can be built into software components.
"Nix expressions usually translate to a graph of store derivations."
To put it differently,
*--------------------------------------------------------------*
|                                                              |
|  NIX EXPRESSION == function                                  |
|                                                              |
|  ( Describes how to build a component. That is, how          |
|    to compose its input parameters, which can be             |
|    other components as well. )                               |
|                                                              |
|  STORE DERIVATION == function application                    |
|                                                              |
|  ( Result of a Nix expression called with concrete           |
|    arguments. Corollary: a single Nix expression can         |
|    produce different derivations depending on the inputs. )  |
|                                                              |
*--------------------------------------------------------------*
For context:
Image taken from section "2.4 Store derivations".
The thesis describes a Nix expression as
a "family of build actions", in contrast to a
derivation that is "exactly one build action".
ARG_1, ..., ARG_N

                   | ---(aaa, ...)---> DERIVATION_1
NIX EXPRESSION     | ---(bbb, ...)---> DERIVATION_2
                   |        :
function(          |        :
  param_1,         |        :
  ...,             |        :
  param_N          |        :
)                  |        :
                   | ---(zzz, ...)---> DERIVATION_N
The derivations above could be producing the same
application but would build it with different configuration
options for example. (See APT packages vim-nox,
vim-gtk, vim-gtk3, vim-tiny, etc.)
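Purely as an illustration (a toy Haskell model with invented names, not Nix internals), the "expression == function, derivation == function application" analogy could be sketched like this:

```haskell
-- Toy model of the analogy above (illustrative only, not Nix internals):
-- a Nix expression behaves like a function, and a store derivation like
-- the constant record produced by one concrete application of it.
data Derivation = Derivation
  { drvName   :: String
  , drvInputs :: [String]
  } deriving (Eq, Show)

-- One "Nix expression" -- a family of build actions ...
vimExpr :: [String] -> Derivation
vimExpr guiInputs = Derivation "vim" ("libc" : guiInputs)

-- ... applied to different arguments yields different "derivations",
-- much like the vim-nox / vim-gtk variants mentioned above.
vimNox, vimGtk :: Derivation
vimNox = vimExpr []
vimGtk = vimExpr ["gtk3"]
```

One function, several distinct static build actions, depending on the inputs.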
Why is it called "derivation"?
Its name comes from "2.2 Nix expressions":
The result of the function [i.e., Nix expression]
is a derivation. This is Nix-speak for a
component build action, which derives the
component from its inputs.
Why are "store derivations" needed?
Section "2.4 Store derivations" has all the
details, but here's the gist:
Nix expressions are not built directly; rather, they are translated to
the more primitive language of store derivations, which encode single
component build actions. This is analogous to the way that compilers
generally do the bulk of their work on simpler intermediate
representations of the code being compiled, rather than on a fullblown
language with all its complexities.
Format of store derivations
From section "5.4. Translating Nix expressions to store derivations":
The abstract syntax of store derivations is shown in Figure 5.5 in a
Haskell-like [135] syntax (see Section 1.7). The store derivation
example shown in Figure 2.13 is a value of this data type.
Figure 5.5.: Abstract syntax of store derivations
data StoreDrv = StoreDrv {
output : Path,
outputHash : String,
outputHashAlgo : String,
inputDrvs : [Path],
inputSrcs : [Path],
system : String,
builder : Path,
args : [String],
envVars : [(String,String)]
}
Example
For example, the Nix expression to build the Hello
package in Figure 2.6,
Figure 2.6
{stdenv, fetchurl, perl}:

stdenv.mkDerivation {
  name = "hello-2.1.1";
  builder = ./builder.sh;
  src = fetchurl {
    url = http://ftp.gnu.org/pub/gnu/hello/hello-2.1.1.tar.gz;
    md5 = "70c9ccf9fac07f762c24f2df2290784d";
  };
  inherit perl;
}
will result in an intermediate representation similar to the one in Figure 2.13:
Figure 2.13 Store derivation
{ output = "/nix/store/bwacc7a5c5n3...-hello-2.1.1"
, inputDrvs = {
    "/nix/store/7mwh9alhscz7...-bash-3.0.drv",
    "/nix/store/fi8m2vldnrxq...-hello-2.1.1.tar.gz.drv",
    "/nix/store/khllx1q519r3...-stdenv-linux.drv",
    "/nix/store/mjdfbi6dcyz7...-perl-5.8.6.drv"
  }
, inputSrcs = {"/nix/store/d74lr8jfsvdh...-builder.sh"}
, system = "i686-linux"
, builder = "/nix/store/3nca8lmpr8gg...-bash-3.0/bin/sh"
, args = ["-e","/nix/store/d74lr8jfsvdh...-builder.sh"]
, envVars = {
    ("builder","/nix/store/3nca8lmpr8gg...-bash-3.0/bin/sh"),
    ("name","hello-2.1.1"),
    ("out","/nix/store/bwacc7a5c5n3...-hello-2.1.1"),
    ("perl","/nix/store/h87pfv8klr4p...-perl-5.8.6"),
    ("src","/nix/store/h6gq0lmj9lkg...-hello-2.1.1.tar.gz"),
    ("stdenv","/nix/store/hhxbaln5n11c...-stdenv-linux"),
    ("system","i686-linux"),
    ("gtk","/store/8yzprq56x5fa...-gtk+-2.6.6")
  }
}
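The record in Figure 5.5 is written in the thesis's Haskell-like notation (`:` instead of `::`), so it does not compile as-is. For readers who want to play with it, here is a compilable transliteration together with an abridged version of the Figure 2.13 value (the hash fields are left empty because the figure omits them; `Path` is just an alias here):

```haskell
type Path = String

-- Compilable version of the Figure 5.5 abstract syntax.
data StoreDrv = StoreDrv
  { output         :: Path
  , outputHash     :: String
  , outputHashAlgo :: String
  , inputDrvs      :: [Path]
  , inputSrcs      :: [Path]
  , system         :: String
  , builder        :: Path
  , args           :: [String]
  , envVars        :: [(String, String)]
  } deriving (Eq, Show)

-- Abridged transcription of the Figure 2.13 store derivation.
helloDrv :: StoreDrv
helloDrv = StoreDrv
  { output         = "/nix/store/bwacc7a5c5n3...-hello-2.1.1"
  , outputHash     = ""
  , outputHashAlgo = ""
  , inputDrvs      = ["/nix/store/khllx1q519r3...-stdenv-linux.drv"]
  , inputSrcs      = ["/nix/store/d74lr8jfsvdh...-builder.sh"]
  , system         = "i686-linux"
  , builder        = "/nix/store/3nca8lmpr8gg...-bash-3.0/bin/sh"
  , args           = ["-e", "/nix/store/d74lr8jfsvdh...-builder.sh"]
  , envVars        = [("name", "hello-2.1.1")]
  }
```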

Basically, in Nix as an end user you start with Nix expressions; these can be turned into derivations, which can later be "realized" (producing the final built output).
The purpose of nix-instantiate is to convert Nix expressions into derivations, which are basically intermediate primitives.
Why would one generate store derivations using nix-instantiate? Could
you give a super simple, easy-to-understand example?
This is usually done indirectly (for example via nix build).
It might be a bit confusing that /nix/store is the parent directory that stores both derivations and realized outputs.

Related

tool to convert pre-cordinated SNOMED to post-coordinated

I have typed "Superficial injury of head" into the international SNOMED browser tool, to which I get the following:
http://browser.ihtsdotools.org/?perspective=full&conceptId1=283025007&edition=en-edition&release=v20170131&server=http://browser.ihtsdotools.org/api/snomed&langRefset=900000000000509007
or rather the important details:
Pre-coordinated:
283025007 |Superficial injury of head (disorder)|
Post-coordinated:
82271004 |Injury of head (disorder)| +
283024006 |Superficial injury of head and neck (disorder)| :
{ 363698007 |Finding site (attribute)| = 69536005 |Head structure (body structure)|,
116676008 |Associated morphology (attribute)| = 3380003 |Superficial injury (morphologic abnormality)| }
I would find it hard to believe that the creators of SNOMED did not have a tool to take pre-coordinated expressions and output the post-coordinated expressions.
Do any SNOMED familiars happen to know of an automated way to achieve this, if a tool doesn't already exist?
Thanks
I am not aware of a free online tool to do this. But what is happening is normal form generation; in your example it is giving the definition of a pre-coordinated expression but it could be taking a post-coordinated expression and generating a different normalised expression.
Without the capability of normalising an expression it is impossible to use some more advanced features of SNOMED CT, for example inheritance testing between post-coordinated expressions requires that expressions are converted into a normal form.
Having said that I'm not aware of a free tool to do this, there are explicit instructions on how to do it, if you're feeling brave: https://confluence.ihtsdotools.org/display/DOCTSG/12.3.3+Building+Long+and+Short+Normal+Forms

Implementing another kind of flags in Haskell

We have the classic flags in command line tools: those that enable something (without arguments, e.g. --help or --version) and other kinds of flags that accept arguments (e.g. --output-dir=/home/ or --input-file="in.a", whatever).
But this time, I would like to implement the following kind of flag:
$ myprogram --GCC-option="--stdlib 11" --debug
In general, the flag has the form "--PROGRAM-option=ARGUMENT". From this flag I keep the PROGRAM and ARGUMENT values; they are variables. In the example above, we have PROGRAM=GCC and ARGUMENT=--stdlib 11.
How can I implement this feature in Haskell? I have some experience parsing options in the classic way.
In a recent project of mine I used an approach based on a 'Data.Tree' of option handling nodes. Of course, I haven't released this code so it's of very limited use, but I think the scheme might be helpful.
data OptHandle = Op { optSat  :: String -> Bool
                    , opBuild :: [String] -> State Env [String]
                    }
The node fields: check whether the argument satisfies the node; and incrementally build up an initial program environment based on the remaining arguments (returning unused arguments to be processed by nodes lower in the tree).
An option processing tree is then hard coded, such as below.
pgmOptTree :: [Tree OptHandle]
pgmOptTree = [mainHelpT, pgmOptT, dbgT]

mainHelpT :: Tree OptHandle
mainHelpT = Node (Op sat bld) []
  where
    sat "--help" = True
    sat _        = False
    bld _ = do
      mySetEnvShowHelp
      return []

pgmOptT :: Tree OptHandle
pgmOptT = Node (Op sat bld) [dbgT]
  where
    sat = functionOn . someParse
    bld ss = do
      let (d, ss') = parsePgmOption ss
      mySetEnvPgmOpt d
      return ss'
You will also need a function which feeds the command line arguments to the tree, checking the satisfiability of each node in a forest, executing opBuild, and recursing into subforests. After running the handler in the state monad, you should be returned an initial starting environment which can be used to tell main which functionality you want to call.
The option handler I used was actually a little more complicated than this, as my program communicated with Bash to perform tab completions, and included help for most major options. I found the benefit to the approach was that I could more easily keep in sync three command line concerns: enabling tab completions which could inform users the next available commands; providing help for incomplete commands; and actually running the program for complete commands.
Maintaining a tree like this is nice because you can reuse nodes at different points, and add options that work with the others fairly easily.
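Independent of the tree approach above, the specific "--PROGRAM-option=ARGUMENT" shape from the question can be split using nothing but base; `parseProgOption` below is a name I made up for this sketch, not part of any library:

```haskell
import Data.List (stripPrefix)

-- Split one "--PROGRAM-option=ARGUMENT" flag into (PROGRAM, ARGUMENT).
-- parseProgOption is a made-up helper for this sketch, not a library API.
parseProgOption :: String -> Maybe (String, String)
parseProgOption s = stripPrefix "--" s >>= go ""
  where
    -- Walk the string, accumulating PROGRAM until "-option=" is found.
    go acc xs@(c:cs)
      | Just val <- stripPrefix "-option=" xs = Just (reverse acc, val)
      | otherwise                             = go (c : acc) cs
    go _ [] = Nothing
```

A node in the option tree could then use `parseProgOption` in its `optSat` check and keep the extracted pair in its `opBuild` step.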

Is it possible / easy to include some mruby in a nim application?

I'm currently trying to learn Nim (it's going slowly; I can't devote much time to it). On the other hand, in the interest of getting some working code, I'd like to prototype out sections of a Nim app I'm working on in Ruby.
Since mruby allows embedding a Ruby subset in a C app, and since Nim allows compiling arbitrary C code into functions, it feels like this should be relatively straightforward. Has anybody done this?
I'm particularly looking for ways of using Nim's funky macro features to break out into inline Ruby code. I'm going to try myself, but I figure someone is bound to have tried it and/or come up with more elegant solutions than I can in my current state of learning :)
https://github.com/micklat/NimBorg
This is a project with a somewhat similar goal. It targets python and lua at the moment, but using the same techniques to interface with Ruby shouldn't be too hard.
There are several features in Nim that help in interfacing with a foreign language in a fluent way:
1) Calling Ruby from Nim using Nim's dot operators
These are a bit like method_missing in Ruby.
You can define a type like RubyValue in Nim, which will have dot operators that will translate any expression like foo.bar or foo.bar(baz) to the appropriate Ruby method call. The arguments can be passed to a generic function like toRubyValue that can be overloaded for various Nim and C types to automatically convert them to the right Ruby type.
2) Calling Nim from Ruby
In most scripting languages, there is a way to register a foreign type, often described in a particular data structure that has to be populated once per exported type. You can use a bit of generic programming and Nim's .global. vars to automatically create and cache the required data structure for each type that was passed to Ruby through the dot operators. There will be a generic proc like getRubyTypeDesc(T: typedesc) that may rely on typeinfo, typetraits or some overloaded procs supplied by user, defining what has to be exported for the type.
Now, if you really want to rely on mruby (because you have experience with it for example), you can look into using the .emit. pragma to directly output pieces of mruby code. You can then ask the Nim compiler to generate only source code, which you will compile in a second step or you can just change the compiler executable, which Nim will call when compiling the project (this is explained in the same section linked above).
Here's what I've discovered so far.
Fetching the return value from an mruby execution is not as easy as I thought. That said, after much trial and error, this is the simplest way I've found to get some mruby code to execute:
const mrb_cc_flags = "-v -I/mruby_1.2.0_path/include/ -L/mruby_1.2.0_path/build/host/lib/"
const mrb_linker_flags = "-v"
const mrb_obj = "/mruby_1.2.0_path/build/host/lib/libmruby.a"
{. passC: mrb_cc_flags, passL: mrb_linker_flags, link: mrb_obj .}
{.emit: """
#include <mruby.h>
#include <mruby/string.h>
""".}
proc ruby_raw(str: cstring): cstring =
  {.emit: """
  mrb_state *mrb = mrb_open();
  if (!mrb) { printf("ERROR: couldn't init mruby\n"); exit(0); }
  mrb_load_string(mrb, `str`);
  `result` = mrb_str_to_cstr(mrb, mrb_funcall(mrb, mrb_top_self(mrb), "test_func", 0));
  mrb_close(mrb);
  """.}

proc ruby*(str: string): string =
  echo ruby_raw("def test_func\n" & str & "\nend")
  "done"

let resp = ruby """
puts 'this was a puts from within ruby'
"this is the response"
"""
echo(resp)
I'm pretty sure that you should be able to omit some of the compiler flags at the start of the file in a well configured environment, e.g. by setting LD_LIBRARY_PATH correctly (not least because that would make the code more portable).
Some of the issues I've encountered so far:
I'm forced to use mrb_funcall because, for some reason, clang seems to think that the mrb_load_string function returns an int, despite all the c code I can find and the documentation and several people online saying otherwise:
error: initializing 'mrb_value' (aka 'struct mrb_value') with an expression of incompatible type 'int'
mrb_value mrb_out = mrb_load_string(mrb, str);
^ ~~~~~~~~~~~~~~~~~~~~~~~~~
The mruby/string.h header is needed for mrb_str_to_cstr; otherwise you get a segfault. RSTRING_PTR seems to work fine as well (and at least gives a sensible error without string.h), but if you write it as a one-liner as above, it will execute the function twice.
I'm going to keep going and write some slightly more idiomatic Nim, but this has done what I needed for now.

How are arguments set in a Nix expression?

I'm new to Nix and I'm trying to understand the hello derivation given as an example.
I can understand the syntax and what it is supposed to do; however, I don't understand
how the initial arguments (and especially the perl one) are fed.
I mean, what is setting the perl argument before calling this derivation?
Does that mean that perl is a dependency of hello?
Packages are typically written as functions from a set of dependencies to a derivation, to be assembled later. The arguments you ask about are fed from pkgs/top-level/all-packages.nix, which holds the set of all packages in Nixpkgs.
When you find hello's line in all-packages.nix, you'll notice it uses callPackage; its signature is path to Nix expression -> overrides -> derivation. callPackage loads the path, looks at the function it loaded, and for each argument provides either the value from overrides or, if not given, the one from the huge set in all-packages.nix.
For a nice description of callPackage see http://lethalman.blogspot.com/2014/09/nix-pill-13-callpackage-design-pattern.html - it's a less condensed explanation, showing how you could have invented callPackage yourself :-).
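To make the pattern concrete, here is a loose Haskell sketch of the idea (a toy model with invented names; the real callPackage inspects the formal arguments of the loaded Nix function rather than an explicit needs list):

```haskell
import qualified Data.Map as M

type PkgName = String
type Pkg     = String  -- stand-in for a real derivation

-- A package expression declares which dependencies it needs and how
-- to build a package once they are supplied.
data PkgExpr = PkgExpr
  { needs :: [PkgName]
  , build :: M.Map PkgName Pkg -> Pkg
  }

-- callPackage: fill each declared argument from `overrides` if present,
-- otherwise from the big all-packages set.
callPackage :: M.Map PkgName Pkg -> PkgExpr -> M.Map PkgName Pkg -> Pkg
callPackage allPkgs expr overrides =
  build expr (M.fromList
    [ (n, M.findWithDefault (allPkgs M.! n) n overrides) | n <- needs expr ])

-- Example: hello needs perl; normally it comes from allPackages,
-- but an override wins.
helloExpr :: PkgExpr
helloExpr = PkgExpr ["perl"] (\deps -> "hello built with " ++ deps M.! "perl")

allPackages :: M.Map PkgName Pkg
allPackages = M.fromList [("perl", "perl-5.8.6")]
```

The design point mirrors Nixpkgs: package expressions stay ignorant of where their inputs come from, and one central place wires everything together.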

Haskell Haddock latex equation in comments

I'd like to use latex notation for equations in my source code.
For example, I would write the following comment in some Haskell source file, Equations.hs:
-- | $v = \frac{dx}{dt}$
In the doc directory, this gets rendered by haddock in Equations.tex as:
{\char '44}v = frac{\char '173}dx{\char '175}{\char '173}dt{\char '175}{\char '44}
I found this function in the source for Haddock's latex backend that replaces many characters that are used in latex formatting:
latexMunge :: Char -> String -> String
...
latexMunge '$' s = "{\\char '44}" ++ s
Is there any existing functionality that allows me to bypass this and insert latex equations in comments?
No. The main reason why this (and similar features) doesn't exist is that it's unclear what to do with the markup in the other backends, be it the HTML one, the Hoogle one, or whatever else someone might be using. This is a fairly commonly requested feature, but there is no common agreement and, more importantly, no patches.
Technically we don't support the LaTeX backend; it's kept compiling so that the Haskell Report can be produced. If you or someone else wants to give it some new life (and features), then we'll happily accept patches.
tl;dr: no can do. I know people simply pre-render the LaTeX and insert the resulting images with the image syntax.
