I'm using acid-state in a project and I quite like it. I like how easy it is to add persistence to plain Haskell datatypes without much boilerplate.
As far as I understand, acid-state keeps a log of events, instead of writing out the entire new state on every update. What I'm looking for is a way for me to review a log of recent changes to the state from within the application, as a (read-only) list. (Something like git log, though I don't need branching or being able to go back to an older commit.)
Of course I can write to my own separate log file with details of all state changes, or even model my data as a list of diffs, but I prefer something that is automatic and allows me to use plain datatypes as much as possible.
Is there a library similar to acid-state, or perhaps some internal functionality of acid-state that I could use for this?
Here's the approach I ended up with:
I was already using a wrapper around Data.Acid.update (because it's running in a monad with restricted IO) and I realized that the wrapper could store the event to my own log. The UpdateEvent update constraint implies SafeCopy update, and with runPut . safePut I can serialize that to a ByteString. However, this is a binary representation, not intended to be human-readable, and I wanted to be able to review it. I realized that reading the acid-state event log from disk would have the same problem.
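For reference, the binary serialization mentioned above is just the composition of the two library functions (the name serializeEvent is my own):

```haskell
import Data.ByteString (ByteString)
import Data.SafeCopy (SafeCopy, safePut)
import Data.Serialize.Put (runPut)

-- Serialize any value with a SafeCopy instance (which every
-- UpdateEvent has) to its binary, non-human-readable form.
serializeEvent :: SafeCopy event => event -> ByteString
serializeEvent = runPut . safePut
```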
So I added Show update to the constraints of my wrapper. At every place that uses the state I added:
{-# LANGUAGE StandaloneDeriving #-}
...
$(makeAcidic ''State ['update])
deriving instance Show Update
(StandaloneDeriving might be a little controversial, but it does not cause a problem with orphans here, as it's in the same file.)
In the wrapper I now call show on the update and write the result to my own log file. Of course this loses the atomicity of the update: it is possible the application crashes between the update call and my own logging call, but I'm willing to accept that risk.
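A minimal sketch of such a wrapper, assuming an already-open handle of type AcidState and a log file path of your choosing (loggedUpdate and logPath are invented names):

```haskell
import Data.Acid (AcidState, UpdateEvent, EventState, EventResult, update)

-- Run an acid-state update and append its Show rendering to a
-- plain-text log. Note: this is NOT atomic with the update; a crash
-- between the two calls loses the log line but keeps the update.
loggedUpdate
  :: (UpdateEvent event, Show event)
  => FilePath                      -- our own human-readable log
  -> AcidState (EventState event)
  -> event
  -> IO (EventResult event)
loggedUpdate logPath acid event = do
  result <- update acid event      -- durable acid-state update first
  appendFile logPath (show event ++ "\n")
  return result
```

Logging after the update succeeds means a crash can only lose a log line, never record an update that did not happen.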
Related
We are working on data logging code and I would like to make a library out of it.
The only thing that different programs will need to define is the structure type (DUT) for the sample that they want to save. The library will save a sample every x period. But I don't know exactly how to reference an external DUT in the library code. Is it possible to declare the DUT as an interface or something similar? There must be a way to do so, but I'm not sure what it is.
CODESYS does not have generics; there is no way around that. Although I am not convinced it would be an elegant or fitting design, there is one way you might be able to do this if you really want your type to be visible in your logging library, but it has serious drawbacks.
You can create another library, or more accurately a family of libraries that will all use the same placeholder. Each of these libraries can define a type of the same name. Reference this placeholder from your logging library, so that it will resolve the name. That will allow your library to compile. Then, by controlling which library the placeholder resolves to inside the project, you can select which type is used. The big downside is you cannot use your logging library with two different types inside the same project, as you cannot get a placeholder to resolve to two libraries at the same time.
Personally, I would probably implement the logging library with some sort of log entry writer as a function block that contains size + arrays of bytes and manages the details of actual logging, and then define inherited function blocks in projects/libraries that need logging. Each of these inherited function blocks would give access to the bytes in a typed way (method, exposed reference/pointer) and assign the size based on the data type used (SIZEOF in FB_Init, for instance). This requires a bit of code for each type, but really not much, and new types are easy to add with a 20-second copy+paste+modify.
I discovered Helm (http://helm-engine.org/) the other day and I've been playing a bit with it. I like Elm, so Helm has been great to use so far.
Simply put, the update function gets called every tick, gets passed the model, and has to return an updated version of that model. Another function then gets called to render that model on screen.
For small games with not much in the model it seems ideal, but I've been thinking about something a bit bigger, for which a HashMap (or a lot of them) would be ideal, and I'm wondering about the performances of that.
I'm no expert, but I believe using Data.HashTable.IO would modify the hashtable in RAM instead of creating a new copy on change, but that seems complicated to interface with Helm. It would mean using a Cmd for each lookup and each change, returning that to Helm, and then getting passed the result in a new call to update: a nightmare to use if you have more than one or two things to look up, I think.
Data.HashMap.Strict (or Lazy?) would probably work better, but I imagine each change would create a new copy, and the GC would free up the old one at some future point. Is that correct?
That would potentially mean hundreds of copies and frees per frame, unless the whole thing is smart enough to realise I'm not using the old hashtable again after the change and just doesn't copy it.
So how does this work in practice? (I'm thinking of HashMap because it seems like the easier solution, but I guess this applies to regular lists too.)
I support the comments about avoiding premature optimization and benchmarking, instead of guessing, to determine if performance is acceptable. That said, you had some specific questions too.
Data.HashMap.Strict (or Lazy?) would probably work better, but I imagine each change would create a new copy, and the GC would free up the old one at some future point. Is that correct?
Yes, the path to the modified node will consist of new nodes. Modulo balancing, the subtrees to the left and right of that path will all be shared (not copied) between the old and new tree structures.
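The persistent-update behaviour is easy to observe: after an insert, the old map is untouched and unmodified branches are shared rather than copied. Shown here with Data.Map because containers ships with GHC; Data.HashMap.Strict behaves analogously:

```haskell
import qualified Data.Map.Strict as Map

main :: IO ()
main = do
  let m0 = Map.fromList [(k, k * 2) | k <- [1 .. 100 :: Int]]
      -- insert returns a new map; only the path to key 50 is rebuilt,
      -- the rest of the structure is shared between m0 and m1
      m1 = Map.insert 50 0 m0
  print (Map.lookup 50 m0)  -- Just 100 (old map unchanged)
  print (Map.lookup 50 m1)  -- Just 0   (new map sees the update)
```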
That would potentially mean hundreds of copy then free per frame
I'm not sure where you get "hundreds" from. Are you saying there are hundreds of updates per frame? For some structures there are rewrite rules that allow many of the intermediate values to be used in a mutating manner. See, for example, this small examination of vector.
So how does this work in practice?
In practice, people implement what they want and rework the parts that are too slow. I might reach for HashMap early, instead of assuming Data.Map from containers will suffice, but I don't go beyond that without evidence.
Suppose that I have a source keypads :: Producer IO Keypad that produces a stream of sensitive data such as one-time keypads.
Now if my library exposes keypads, an end user might connect keypads to two sinks, let's call them good and bad, where bad requests a value and reads it, but then returns it back upstream via leftover. Later on, the good sink might consume the same keypad previously read by bad. The end user might be oblivious to this happening, for example if good and bad are provided by external libraries.
Is there any way to design a read-only source in conduit that discards leftover data?
(I've read here that it's not possible to disable reusing leftovers, but as I'm new to conduits, maybe there's a different way to design the architecture that I'm not seeing.)
I can think of two options:
Wrap bad with a map id conduit, which will prevent leftovers from propagating. I'm thinking your code would look something like:
keypads $$ (CL.map id =$= bad) >> good
Drop down to the Pipe layer of abstraction and call injectLeftovers on bad to ensure that all leftovers are consumed there and then discarded.
I'm guessing (1) is the approach you'll want.
The title pretty much asks what I want -- an example of using predicates that don't involve iterating over a collection, using IEnumerable, or anything like that.
I don't care what language you use, or even whether you provide a code example. I'd be grateful even if you only describe a simple example which illustrates predicates outside their usual uses.
I'm working on an article and I'm stumped, so any examples will be gratefully received and credited.
In one project I have manually implemented cache dependencies using predicates.
Whenever a new item is put in the cache that depends on another cached item, a predicate delegate is created which checks whether the dependency is still valid or whether the cached item has to be invalidated.
Whenever a cached item is accessed and has a dependency-check predicate assigned, the predicate is checked before the cached item is returned.
Of course that probably could be solved without using predicates, but this solution was pretty elegant and made it easy to implement the cache dependencies by injecting the dependency-checking method.
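Since you said any language is fine, here is a toy version of that design in Haskell (all names invented for illustration): each cache entry carries a predicate that decides, at lookup time, whether the entry is still valid against the current state of the world.

```haskell
import qualified Data.Map.Strict as Map

-- A cache entry pairs the stored value with a validity predicate,
-- checked against the current "world" state on every lookup.
data Entry world v = Entry { value :: v, stillValid :: world -> Bool }

type Cache world k v = Map.Map k (Entry world v)

-- Return the cached value only if its predicate still holds;
-- otherwise treat the entry as invalidated.
lookupValid :: Ord k => world -> k -> Cache world k v -> Maybe v
lookupValid world key cache = do
  entry <- Map.lookup key cache
  if stillValid entry world then Just (value entry) else Nothing

main :: IO ()
main = do
  -- This entry depends on a "config version": valid only while it is 1.
  let cache = Map.fromList
        [("greeting", Entry "hello" (\ver -> ver == (1 :: Int)))]
  print (lookupValid 1 "greeting" cache)  -- Just "hello"
  print (lookupValid 2 "greeting" cache)  -- Nothing (invalidated)
```

The point is that the invalidation logic travels with the entry as a first-class value, so callers inject arbitrary dependency checks without the cache knowing anything about them.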
Summary
Is there something like http://en.wikipedia.org/wiki/E_programming_language as a DSL inside of Clojure?
Background
I'm aware of:
http://bit.ly/N4jnTI and http://bit.ly/Lm3SSD
However, neither provides what I want.
Context
I'm a big fan of both capability systems and information flow. And I'm wondering if anyone has developed Clojure DSLs for these two techniques. The following would be ideal:
all objects have some tag (say, in their metadata) that lists who has read access to the object
when I want to run a query as user "foo", I set some context var saying "now, use only the capabilities of foo" -- then the function, when it tries to reach objects, either gets the object (if foo has access to it) or nil (if foo does not have access to it). Leaking information about the existence of objects is not a big deal to me at the moment.
Question
So the question is -- is this something easy to do as a Clojure DSL? Where each object has some capability tag, and we can execute pieces of function/code under certain tags, and the runtime system makes sure that no one gets access to things they're not supposed to access.
Thanks!
You can do this with metadata and preconditions, and then create macros to add a DSL/syntax on top of it, though I would recommend skipping the macros and going with just preconditions and metadata.
Each object would have a piece of metadata with a list of its capabilities.
Each function would have a precondition that checked the metadata.
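A rough sketch of that idea (names invented; note that :pre assertions are a development convenience, not a hardened security boundary, since they can be disabled via *assert*):

```clojure
;; Tag an object with the set of principals allowed to read it.
(def secret (with-meta {:launch-code 1234}
                       {:readers #{"alice"}}))

;; Dynamic var holding the current principal ("context var").
(def ^:dynamic *user* nil)

;; The precondition checks the object's metadata against *user*.
(defn read-obj [obj]
  {:pre [(contains? (:readers (meta obj)) *user*)]}
  obj)

;; (binding [*user* "alice"] (read-obj secret)) ; returns the map
;; (binding [*user* "bob"]   (read-obj secret)) ; throws AssertionError
```

Returning nil instead of throwing, as described in the question, would just mean replacing the precondition with an if on the same metadata check.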