Config file in Haskell

I would like to be able to provide my Haskell app with a config file written in Haskell. The reason is that I would like the user to be able to provide a few custom functions.
Is it possible to load a Haskell file at runtime, even though it might depend on some types provided by the app itself?
At the moment I have a "super main" function, and I build a new executable per config file. Each config file basically declares some hooks and calls the super main with them (roughly the pattern sketched below). The problem with that is that I have to define a new target for each config in my cabal file (I use a sandbox, and I don't want to have to install any library that is part of my package). I thought of using runghc instead, but how do I make it work with the sandbox? I've seen there is a 'plugin' package on Hackage, but it doesn't seem to be up to date. What is the common way to deal with this type of problem?
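For illustration, here is a minimal, self-contained sketch of that "super main plus hooks" pattern; all names (Hooks, superMain, onStart, onLine) are hypothetical and only stand in for whatever the real application library exposes:

    {-# LANGUAGE RecordWildCards #-}
    -- Hypothetical sketch of the "super main + hooks" pattern described above.
    -- In the real setup, Hooks and superMain would live in the application
    -- library; each per-config executable just fills in the record.
    module Main (main) where

    data Hooks = Hooks
      { onStart :: IO ()            -- run once at startup
      , onLine  :: String -> String -- user-supplied transformation
      }

    superMain :: Hooks -> IO ()
    superMain Hooks {..} = do
      onStart
      interact (unlines . map onLine . lines)

    -- One "config file" = one Main like this, each needing its own cabal target.
    main :: IO ()
    main = superMain Hooks
      { onStart = putStrLn "starting with custom hooks"
      , onLine  = reverse
      }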

dyre looks like it fits the bill.
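For context, here is a minimal sketch of what a dyre-based setup can look like, adapted from dyre's documented example. The Config type and its fields are illustrative, and the exact Params API (defaultParams vs. newParams) varies between dyre versions; the point is that the user's config file (typically ~/.config/myapp/myapp.hs) can override the configuration, including function-valued fields, and dyre recompiles and re-executes the program with it:

    import qualified Config.Dyre as Dyre

    -- The configuration can carry plain values as well as user-supplied functions.
    data Config = Config
      { message   :: String
      , transform :: String -> String  -- a custom function provided by the user
      , errorMsg  :: Maybe String
      }

    defaultConfig :: Config
    defaultConfig = Config "hello" id Nothing

    realMain :: Config -> IO ()
    realMain cfg = do
      mapM_ putStrLn (errorMsg cfg)          -- report recompilation errors, if any
      putStrLn (transform cfg (message cfg))

    myApp :: Config -> IO ()
    myApp = Dyre.wrapMain Dyre.defaultParams
      { Dyre.projectName = "myapp"
      , Dyre.realMain    = realMain
      , Dyre.showError   = \cfg msg -> cfg { errorMsg = Just msg }
      }

    main :: IO ()
    main = myApp defaultConfig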

Related

How to create a Haskell cabal package as output of Haskell code?

The Haskell project I am working on generates code (and tests for it) that is intended to be used as an independent Haskell library. I want to wrap it in a cabal project so it can be included as a dependency.
I searched for a library interface to Cabal, so that I could create a cabal project in a given directory by calling some functions, but found none.
I could, of course, just run bash commands from Haskell, but it looks ugly to me.
Is there any tool that will solve my problem in a nice way?
You want the Cabal package. You can parse an existing cabal file, change stuff in the data structures, and regenerate the text representation.
Edit in answer to comment:
I don't know of any tutorials. The links I gave are for the Haddock docs, and the mapping between data types and Cabal file text is pretty straightforward. So you should probably start by writing the code to produce a PackageDescription value and then call writePackageDescription on it.
Note the existence of emptyPackageDescription, which lets you just specify the fields you want.
(Removed link to pretty printer class because PackageDescription isn't a member.)
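As a rough illustration of that workflow, here is a minimal sketch. The module locations (Parse vs. PrettyPrint) and the field types of PackageDescription have shifted between Cabal releases, so check the Haddock docs for the Cabal version you build against rather than treating this as copy-paste code:

    -- Start from emptyPackageDescription, fill in the fields you care about,
    -- and pretty-print the result to a .cabal file.
    import Distribution.Package                        (PackageIdentifier (..), mkPackageName)
    import Distribution.PackageDescription             (PackageDescription (..), emptyPackageDescription)
    import Distribution.PackageDescription.PrettyPrint (writePackageDescription)
    import Distribution.Version                        (mkVersion)

    generatedPkg :: PackageDescription
    generatedPkg = emptyPackageDescription
      { package = PackageIdentifier (mkPackageName "my-generated-lib") (mkVersion [0, 1, 0])
      }

    main :: IO ()
    main = writePackageDescription "my-generated-lib.cabal" generatedPkg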

How does SCons build a content signature for a function?

Recently I used SCons to build a project, but it rebuilds every time. After reading the .sconsign.dblite file, I found that SCons creates a content signature for a function, like this:
TargetA.txt: 99bb021f789f7bb1c23935daffecfb8e 1598528466 22769
31aada685dfb3c4e5d1e474ba5cc251d [buildfunction(target, source, env)]
It creates a signature for the function buildfunction(target, source, env), which created the file TargetA.txt, but I couldn't find the relevant code. I think it's provided by SCons. Can anyone help me?
You'll have to go look at the source code.
It's fairly complicated.
Starts here:
https://github.com/SCons/scons/blob/master/SCons/Action.py#L1210
It's also subject to change.
We may revamp it in the not-too-distant future, as Python 3.5+ has features which may simplify the logic.

find_dependency(Threads) or include(FindThreads) in a package config file

In CMake, we can use find_dependency() in a package's -config.cmake file, since it "forwards the correct parameters for QUIET and REQUIRED which were passed to the original find_package() call." So, naturally, we'll want to do that instead of calling find_package() in such files.
Also, for a dependency on a threads library, CMake offers us the FindThreads module, so that we write include(FindThreads), preceded by some preference commands, and get a bunch of interesting variables set. So that's preferable to find_package(Threads).
And thus we have a dilemma: What to put in -config.cmake files, for a threads library dependency? The former, or the latter?
Following a discussion in comments with @Tsyarev, it seems that:
find_package(Threads) includes the FindThreads module internally,
... which means it "respects" the preference variables affecting FindThreads' behavior,
so it makes sense, functionally and aesthetically, to just use find_package() in your main CMakeLists.txt and find_dependency() in -config.cmake.

Passing params from the build system to buildbot

I'd like to share how I implemented a solution to a problem I had, to get some feedback and maybe learn some new feature of buildbot.
Scenario:
Create a package of a given software, and upload the package to the buildmaster into a shared folder.
The package name contains some data that is known to the build system (i.e. the Makefiles), specifically the software version. Let's assume the package name is:
myapp-1.2.3-r2435.tar.gz
Question:
How do I send to the buildslave steps the information required to build up the very same package name, so that the buildslave can upload the package?
Specifically, I need to know the version number (but I guess this could be any parameter).
Implemented (and working) solution:
Once the compilation process is completed, the makefile writes a file with the required parameter.
The slave uses the SetProperty() step to read the content of the file into a custom-named property.
Once I have the value of interest in the property (let's say APP_VERSION), I use it to build the package name with the same pattern used by the build system.
The described solution works, but I do not really like it because:
1) it's complicated and hence, I guess, fragile;
2) it is not OS-independent (I use "echo $VAR > file" to write the file, and "cat file" to read it and set the buildslave property).
Is there in your opinion a better way to solve this issue?
Do you have any suggestions for making the solution OS-independent? (It will certainly not work on Windows, while my package should be built on Windows too.)

Node.js/npm - dynamic service discovery in packages

I was wondering whether Node.js/npm includes any kind of extension mechanism comparable to Python setuptools' "entry points".
So, in short:
is there any way I can do dynamic discovery of services provided by other packages using npm?
if not, what would be the best way to implement something similar? Specifying the extension name in the main module's configuration file seems to be the logical solution, but I wonder whether something "automatic" can be done.
I'm not aware of any builtin mechanism to do this.
One viable way of doing it yourself:
I made a small tool (Jumpstart) to quickly create project scaffolding from templates with placeholders, and I used a kind of plugin mechanism for that. It basically comes down to the Jumpstart script searching for modules named jumpstart-* "adjacent" to where the module itself is installed. So it works for both local and global installations: if installed locally, it searches the other local modules (on the same level), and if global, it searches the other global modules.
Note that here, "search" comes down to a simple fs.exists check to see if there's a Jumpstart template module with a particular name installed. However, nothing would stand in the way of actually getting a full list of all installed packages matching the jumpstart-* pattern and loading them all at once. I could also search up the entire directory tree for node_modules directories and do the same. There's no point in doing this for this particular program, however.
See https://npmjs.org/package/jumpstart for docs.
The only limitation of this technique is that all modules must be named in a consistent fashion: start with some string, end with some string, something like that. Any rogue packages polluting the namespace could be detected by doing further checks on a package's contents: what files does it contain? What kind of object does its main module export? etc.
Brunch also uses a plugin mechanism. This one actually deals with file extensions, so is more relevant: https://github.com/brunch/brunch/wiki/Plugins . See for example source of the CoffeeScript plugin https://github.com/brunch/coffee-script-brunch/blob/master/src/index.coffee .
