I am trying to get my head around compiled splices. With previous help I can compile and render some useful results, but I don't fully understand the way it works.
In interpreted mode, the algorithm is simple: construct the root, call the handler function for the mapped URL, pull data from the database, construct and bind splices out of the pulled data, insert them into Heist, and call the appropriate template.
It is all upside down in compiled mode. I map the URL directly to cRender and don't call a handler, so I assume all the splice-constructing and data-processing functions are called at load time.
So my question is: when is the database called? Does this happen at load time too?
It is just the sequence of events that I don't understand.
Since splice construction is independent of any particular template rendering, does this mean the splice binding tags are unique across the whole application? Are they like global variables?
Thanks
Yes, you are pretty much correct. Although I wouldn't say they are like global variables. They are more like global constants, or a global API. I view compiled splices as an API that your web designer can use to interact with dynamic data.
Compiled splices allow you to insert holes into your markup that get filled with data at runtime. At load time the running monad is HeistT n IO, but at run time the running monad is RuntimeSplice n. So if you look at the compiled Heist API, it's very easy to see where runtime code such as database functions needs to go: in the RuntimeSplice n monad.
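For concreteness, here is a minimal sketch of that split (the Person type, the getPersons action, and the exact import locations are assumptions and may differ slightly between Heist versions). The splice itself is built once at load time in HeistT n IO; the database call lives in the RuntimeSplice n argument, so it only runs when a template is rendered:

{-# LANGUAGE OverloadedStrings #-}
import           Control.Monad.Trans (lift)
import           Data.Text (Text)
import           Heist (Splices, RuntimeSplice, (##))  -- in newer Heist, ## comes from Data.Map.Syntax
import qualified Heist.Compiled as C

data Person = Person { personName :: Text }

-- Load time: this runs once, in HeistT n IO, when templates are compiled.
personsSplice :: Monad n => n [Person] -> C.Splice n
personsSplice getPersons =
    C.manyWithSplices C.runChildren personSplices (lift getPersons)
  where
    -- Run time: lift getPersons is a RuntimeSplice n action, so the database
    -- is hit only when a template using this splice is rendered.
    personSplices :: Monad n => Splices (RuntimeSplice n Person -> C.Splice n)
    personSplices = "personName" ## C.pureSplice (C.textSplice personName)

You would bind personsSplice under some tag (say <persons>) in your splice config at initialisation; the <personName/> tag inside it is the hole that gets filled at render time.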
Related
I'm writing a numerical optimisation library in Haskell, with the aim of making functions like a gradient descent algorithm available for users of the library. In writing these relatively complex functions, I write intermediary functions, such as a function that performs just one step of gradient descent. Some of these intermediary functions perform tasks that no user of the library could ever have need for. Some are even quite cryptic, but make sense when used by a bigger function.
Is it common practice to leave these intermediary functions available to library users? I have considered moving these to an "Internal" library, but moving small functions into a whole different library from the main functions that use them seems like a bad idea for code legibility. I'd also quite like to test these smaller functions as well as the main functions for debugging purposes down the line, and ideally would like to test both in the same place, which complicates things even more.
I'm unsurprisingly using Cabal for the library, so answers in that context would also be helpful, if that's easier.
You should definitely not just throw such internal functions into the exports of your package's trunk module, together with the high-level ones. It makes the interface/Haddocks hard to understand, and it also poses problems if users come to depend on low-level details that may easily change in future releases.
So I would keep these functions in an “internal” module, which the “public” module imports, re-exporting only those that are intended to be used:
Public
module Numeric.Hegash.Optimization (optimize) where
import Numeric.Hegash.Optimization.Internal
Private
module Numeric.Hegash.Optimization.Internal where
gradientDesc :: ...
gradientDesc = ...
optimize :: ...
optimize = ... gradientDesc ...
A more debatable matter is whether you should still allow users to load the Internal module, i.e. whether you should put it in the exposed-modules or other-modules section of your .cabal file. IMO it's best to err on the “exposed” side, because there could always be valid use cases that you didn't foresee. It also makes testing easier. Just make sure you clearly document that the module is unstable. Only functions that are so deep in the implementation details that they are basically impossible to use outside of the module should not be exposed at all.
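In the .cabal file, that would look roughly like this sketch (module names as above, everything else assumed; the Internal module is listed under exposed-modules and flagged as unstable in its Haddocks):

library
  exposed-modules:
      Numeric.Hegash.Optimization
      Numeric.Hegash.Optimization.Internal
  build-depends:
      base >=4 && <5

Listing Numeric.Hegash.Optimization.Internal under other-modules instead would hide it from users, and also from a separate test-suite stanza, which is another reason exposing it makes testing easier.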
You can selectively export functions from a module by listing them in the header. For example, if you have functions gradient and gradient1 and only want to export the former, you can write:
module Gradient (gradient) where
You can also incorporate the intermediary functions into their parent functions using where to limit the scope to just the parent function. This will also prevent the inner function from being exported:
gradient ... =
...
where
gradient1 ... = ...
I'd like to use Haskell's QuickCheck library to test some C code. The easiest way seems to be doing a foreign import and writing a property on top of the resulting Haskell function. The problem with this is that if the C code causes a segfault or manages to corrupt memory, my tests either crash without output or do something totally unpredictable.
The second alternative is to make simple executable wrappers over the C-bits and execute them outside the testing process via System.Process. Needless to say, doing this requires a lot of scaffolding and serializing values, but on the other hand, it can handle segfaults.
Is there any way of making the foreign import strategy as safe as running an external process?
You could implement the wrapper in your current process, but then use System.Posix.Process.forkProcess to run it safely in a process of its own, implementing the necessary communication in Haskell.
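A minimal sketch of that idea follows (the property and the stand-in action are illustrative; on top of this you would pipe the C function's result back to the parent to check it). A segfault in the child shows up as an abnormal exit status instead of killing the test runner:

import System.Exit (ExitCode (..))
import System.Posix.Process (exitImmediately, forkProcess, getProcessStatus, ProcessStatus (..))
import Test.QuickCheck (Property)
import Test.QuickCheck.Monadic (assert, monadicIO, run)

-- Run an IO action in a child process and report whether it exited cleanly.
exitsCleanly :: IO () -> IO Bool
exitsCleanly action = do
  pid <- forkProcess (action >> exitImmediately ExitSuccess)
  status <- getProcessStatus True False pid   -- block until the child finishes
  return $ case status of
    Just (Exited ExitSuccess) -> True
    _                         -> False        -- non-zero exit, signal (segfault), ...

-- Property that only checks "doesn't crash"; replace the stand-in print with
-- the foreign call under test.
prop_noCrash :: Int -> Property
prop_noCrash x = monadicIO $ do
  ok <- run (exitsCleanly (print x))
  assert ok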
This seems like something that should be easy, but how do I get a pure value out of a query if I am using acid-state's Data.Acid.Memory.Pure module? I guess I can generalize the question to "how do I get any value out of the Update monad?". You see, I'm trying to write a test that does the following run-of-the-mill tasks:
Updates a pure AcidState with an object
Queries that object out of the state using IxSet
Compares the queried object and the one returned by the Update for equivalence.
I need a pure "Bool" from this in order to make integration with test frameworks easy. At first I thought I'd simply use runState from Control.Monad.State but I was mistaken (or just didn't do it right). What should I do?
Since you are using Data.Acid.Memory.Pure, you can use the update, query, and update_ functions from that module (instead of the ones from Data.Acid) to look at the result of an event purely. As with regular, impure acid-state, you don't simply "run" the Update and Query monads; you have to convert them to an event first. With Data.Acid.Memory.Pure, that means you simply wrap them with the constructors of Event.
I want to make a function called 'load' which imports definitions of functions from another file. I know how to import modules, but in my program I want the definitions of the functions to change depending on which module is 'loaded' with this new function. Is there a way to do this? Is there a better way to write my program so that this is not necessary?
I think its type signature would look something like:
load :: String -> IO ()
where the string is the name of the module to be loaded (and the module is in the same directory).
Edit: Thanks for all the replies. Most people agree that this is not the best way to do what I want. Instead, is there a way to declare a global variable from within an IO program? That is, I want it so that if I call (function "thing"), where function has type String -> IO (), I can still type 'thing' into GHCi to get the value assigned to it... Any suggestions?
There is almost certainly a better way to write your program so that this is not necessary. It's hard to say what without knowing more details about your situation, though. You could, for instance, represent the generic interface each module implements as a data type, and have each module export a value of that type containing its implementation.
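A minimal sketch of that idea (Backend, its fields, and the two implementations are all made-up names):

-- The interface every "loadable" implementation provides, as an ordinary value:
data Backend = Backend
  { backendName :: String
  , process     :: Int -> Int
  }

-- Each module exports one value of that type:
backendA :: Backend
backendA = Backend { backendName = "A", process = (* 2) }

backendB :: Backend
backendB = Backend { backendName = "B", process = (+ 1) }

-- Choosing an implementation is then a run-time decision over plain values,
-- with no dynamic module loading needed:
load :: String -> Maybe Backend
load "A" = Just backendA
load "B" = Just backendB
load _   = Nothing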
Basically, the set of loaded modules is a static, compile-time property, so it makes no sense to want your program's behaviour to change based on its contents. Are you trying to write a library? Your users probably won't appreciate it doing such evil magic to their import lists :) (And it probably isn't possible without Template Haskell in that case, anyway.)
The exception is if you're trying to implement a Haskell tool (e.g. REPL, IDE, etc.) or trying to do plugins; i.e. dynamically-loaded modules of Haskell source code to integrate into your Haskell program. The first thing to try for those should be hint, but you may find you need something more advanced; in that case, the GHC API is probably your best bet. plugins used to be the de-facto standard in this area, but it doesn't seem to compile with GHC 7; you might want to check out direct-plugins, a simplified implementation of a similar interface that does.
mueval might be relevant; it's designed for executing short (one-line) snippets of Haskell code in a safe sandbox, as used by lambdabot.
Unless you're building a Haskell IDE or something like that, you most likely don't need this (^1).
But in case you do, there is always the hint package, which allows you to embed a Haskell interpreter into your program. This lets you both load Haskell modules and convert strings into Haskell values at runtime. There is a nice example of how to use it here.
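A small sketch along those lines (the module name, the step function it is assumed to define, and its concrete type are all assumptions):

import Language.Haskell.Interpreter

-- Load a module by name (resolved like GHCi's :load) and interpret one of its
-- definitions at a concrete type.
loadStep :: String -> IO (Either InterpreterError (Int -> Int))
loadStep moduleName = runInterpreter $ do
  loadModules [moduleName]
  setTopLevelModules [moduleName]
  interpret "step" (as :: Int -> Int)

main :: IO ()
main = do
  r <- loadStep "Plugin"
  case r of
    Left err -> print err
    Right f  -> print (f 41)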
^1: If you're looking for a way to make things polymorphic, i.e. changing some, but not all, definitions in your code, you're probably looking for typeclasses.
With regard to your edit, perhaps you might be interested in IORef.
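For the "define a name now, read its value back later" part, a common (if inelegant) idiom is a global IORef created with unsafePerformIO; a sketch, with all names made up:

import Data.IORef
import qualified Data.Map as Map
import System.IO.Unsafe (unsafePerformIO)

-- A global, mutable name -> value table. The NOINLINE pragma is essential
-- so that exactly one IORef is ever created.
{-# NOINLINE bindings #-}
bindings :: IORef (Map.Map String Int)
bindings = unsafePerformIO (newIORef Map.empty)

define :: String -> Int -> IO ()
define name val = modifyIORef bindings (Map.insert name val)

lookupName :: String -> IO (Maybe Int)
lookupName name = fmap (Map.lookup name) (readIORef bindings)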
I have a Haskell RPCXML (HaXR) server process, run with GHC, that needs to execute any function that it's passed. These functions will all be defined at runtime so the compiled server won't know about them.
Is there a way to load a function definition at runtime? A method that avoids disk IO is preferable.
Thanks.
hint seems to be popular these days.
Although to load a function definition I think you will either have to put it into a module, or re-interpret it every time you use it.
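For the "avoid disk IO" part, hint can also interpret a function received as a string directly, without writing a module to disk; a sketch (the expression and its concrete type are assumptions):

import Language.Haskell.Interpreter

evalFunction :: String -> IO (Either InterpreterError (Int -> Int))
evalFunction src = runInterpreter $ do
  setImports ["Prelude"]
  interpret src (as :: Int -> Int)

main :: IO ()
main = do
  r <- evalFunction "\\x -> x * x + 1"
  case r of
    Left err -> print err
    Right f  -> print (f 7)   -- prints 50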