I would like to use Racket's engines, which allow running a procedure that can be interrupted by a timer timeout. I'm not sure how to construct a procedure that engine will accept, because I can't find examples online. The engine documentation lists the following contract for proc:
(engine proc) → engine?
proc : ((any/c . -> . void?) . -> . any/c)
I'm just learning Typed Racket annotation semantics, and this is above my head at the moment. Can someone provide some concrete examples of valid procedures that can be used in a Racket engine?
I've played around with it a little. Here is what I did:
#lang racket
(require racket/engine)
(define (test-engine allow-interrupt)
  (let loop ((n 1))
    (allow-interrupt #f)
    (displayln (list 'step n))
    (allow-interrupt #t)
    (sleep 1)
    (loop (add1 n))))
(define tee (engine test-engine))
(engine-run 2000 tee)
I noticed that the engine could be suspended in the middle of a displayln, so to make displayln atomic I used the procedure passed to test-engine, which delays interruption during atomic operations. Without it, the rest of the line is printed during the next (engine-run 2000 tee) instead of before the first call returns.
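The same cooperative-interruption idea can be sketched in Python with generators (make_engine, steps, and the millisecond budget are illustrative assumptions, not Racket's API): each yield marks a point where suspension is allowed, so everything between yields is atomic, just like the window between (allow-interrupt #f) and (allow-interrupt #t) above.

```python
import time

def make_engine(gen):
    # Wrap a generator so it can be resumed in timed slices, mimicking
    # Racket's engines: each `yield` is a point where interruption is
    # allowed, so the work between yields is effectively atomic.
    def run(budget_ms):
        deadline = time.monotonic() + budget_ms / 1000.0
        for _ in gen:
            if time.monotonic() >= deadline:
                return False   # time slice used up; call run again to resume
        return True            # the procedure ran to completion
    return run

def steps():
    for n in range(1, 6):
        print("step", n)       # printed whole: no yield splits the line
        time.sleep(0.02)       # simulate some work
        yield                  # interruption allowed only here

eng = make_engine(steps())
print(eng(50))    # may be False (suspended mid-computation), depending on timing
print(eng(1000))  # True: any remaining steps fit well within the budget
```

Repeated calls with a small budget resume the computation where it left off, mirroring repeated (engine-run 2000 tee) calls.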
Many, perhaps most, language implementations that include a compiler at runtime neglect to garbage-collect discarded code (see, for example, Julia, where this leads to memory leaks in applications like genetic programming).
My preliminary tests indicate that Chez Scheme does not leak memory here, but I would like to know with greater certainty, since I don't even know whether f and g actually get compiled. (The old mantra: "Tests can only prove the presence of bugs, not their absence.")
The test I tried: f and g call each other, and their definitions get replaced at runtime.
(define f)
(define g)

(define (make-f x)
  (eval `(set! f (lambda (y)
                   (if (> y 100)
                       (+ (remainder ,x 3) (g y))
                       (+ y 1))))))

(define (make-g x)
  (eval `(set! g (lambda (y)
                   (if (< y 10)
                       (+ (remainder ,x 5) (f y))
                       (div y 2))))))
(define (make-and-run-f n)
  (make-f 1)
  (make-g 1)
  (let loop ((i 0) (acc 0))
    (if (> i n)
        acc
        (begin
          (make-f i)
          (make-g i)
          (loop (+ i 1) (+ acc (f 33)))))))

(time (make-and-run-f 1000000)) ; runs in ~10 minutes with negligible memory use
Given the importance of both procedures and garbage collection to Scheme, I would be surprised if Chez Scheme did not try to garbage collect any dynamically created objects. The R6RS Standard says [emphasis mine]:
All objects created in the course of a Scheme computation, including procedures and continuations, have unlimited extent. No Scheme object is ever destroyed. The reason that implementations of Scheme do not (usually!) run out of storage is that they are permitted to reclaim the storage occupied by an object if they can prove that the object cannot possibly matter to any future computation.
A procedure is an object, and any object may be garbage collected if the implementation can prove that the computation will not need it again. This is not a requirement, but that goes for any object, not just for procedures.
The Chez Scheme manual seems definitive, though (Chez Scheme Version 9 User's Guide, p. 82):
Since all Scheme objects, including code objects, can be relocated or even reclaimed by the garbage collector....
In the 1990s Kent Dybvig wrote a paper together with David Eby and Carl Bruggeman which may be of interest here, called Don’t Stop the BIBOP: Flexible and Efficient Storage Management for Dynamically Typed Languages, which describes the garbage collection strategy implemented in Chez Scheme. In the paper some time is spent discussing "code objects" and in particular how they are segregated and treated differently during the garbage collection process (since they may contain pointers to other objects).
Take the three functions below, the first implemented in Haskell and the other two in Clojure:
f :: [Int] -> Int
f = foldl1 (+) . map (*7) . filter even
(defn f [coll]
  ((comp
     (partial reduce +)
     (partial map #(* 7 %))
     (partial filter even?))
   coll))
(defn f [coll]
(transduce
(comp
(filter even?)
(map #(* 7 %)))
+ coll))
when they are applied to a list like [1, 2, 3, 4, 5] they all return 42. I know the machinery behind the first 2 is similar since map is lazy in Clojure, but the third one uses transducers. Could someone show the intermediate steps for the execution of these functions?
For this specific example, the intermediate steps of the second and third versions are the same. This is because map and filter are implemented as lazy transformations of a sequence into a sequence, as you're no doubt already aware.
The transducer versions of map and filter are defined using the same essential functionality as the non-transducer versions, except that the way they "conj" (or not, in the case of filter) onto the result stream is defined elsewhere. Indeed, if you look at the source for map, there are explicit data-structure construction functions in use, whereas the transducer variant uses no such functions -- they are passed in via rf. Explicitly using cons in the non-transducer versions means they're always going to be dealing with sequences.
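To make the "rf is passed in" point concrete, here is a minimal Python sketch of the transducer idea (mapping, filtering, compose, and transduce are illustrative names, not Clojure's or any library's API): the transformations never construct a sequence themselves; the reducing function decides what "adding to the result" means.

```python
def mapping(f):
    # A transducer: transforms a reducing function rf into a new one.
    def xform(rf):
        def step(acc, x):
            return rf(acc, f(x))   # transform the input, then hand off to rf
        return step
    return xform

def filtering(pred):
    def xform(rf):
        def step(acc, x):
            return rf(acc, x) if pred(x) else acc  # skip items; no cons anywhere
        return step
    return xform

def compose(*xforms):
    # Right-to-left composition on rf, matching Clojure's comp for transducers.
    def xform(rf):
        for x in reversed(xforms):
            rf = x(rf)
        return rf
    return xform

def transduce(xform, rf, init, coll):
    step = xform(rf)
    acc = init
    for x in coll:
        acc = step(acc, x)
    return acc

process = compose(filtering(lambda x: x % 2 == 0), mapping(lambda x: x * 7))

# The same process driven by two different reducing functions:
print(transduce(process, lambda a, b: a + b, 0, [1, 2, 3, 4, 5]))    # 42
print(transduce(process, lambda a, b: a + [b], [], [1, 2, 3, 4, 5])) # [14, 28]
```

Note that neither mapping nor filtering mentions lists, sums, or channels; only the rf supplied at the end does.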
IMO, the main benefit of using transducers is that you can define the process you're performing away from the thing that will consume it. A more interesting rewrite of your third example might therefore be:
(def process (comp (filter even?)
                   (map #(* 7 %))))

(defn f [coll] (transduce process + coll))
It's an exercise for the application author to decide when this sort of abstraction is warranted, but it can definitely open up opportunities for reuse.
It may occur to you that you can simply rewrite
(defn process [coll]
  ((comp
     (partial map #(* 7 %))
     (partial filter even?))
   coll))

(reduce + (process coll))
and get the same effect; this is true. When your input is always a sequence (or you always know what kind of stream it will be), there's arguably no good reason to create a transducer. But the power of reuse shows up here (assume process is the transducer from above):
(chan 1 process)           ;; an async channel which runs process on all inputs
(into [] process coll)     ;; writing to a vector
(transduce process + coll) ;; your goal
The motivation behind transducers was essentially to stop having to write new collection functions for different collection types. Rich Hickey mentions his frustration writing functions like map<, map>, mapcat<, mapcat>, and so on in the core.async library -- what map and mapcat are is already defined, but because those definitions assume they operate on sequences (the explicit cons mentioned above), they can't be applied to asynchronous channels. Channels, however, can supply their own rf in the transducer version, letting them reuse these functions.
I need a function that takes two parameters, an input string and a default string, and returns the input if it is not blank, otherwise the default.
(defn default-if-blank [input default]
(if (clojure.string/blank? input)
default
input))
I can implement the function myself, but I expect there are good utility libraries in the Clojure world, analogous to Apache Commons or Guava in the Java world.
Is it common to implement such functions yourself rather than use a library? I'm new to Clojure, so this may be a stupid question, but any advice would help. Thanks.
What you have there is very normal and would be perfectly fine in high-quality Clojure code. It's completely OK to write it this way; because Clojure is a terse language, this does not add much intellectual overhead.
In some cases you would not need it at all: for instance, when pulling values out of collections, get takes an optional default argument:
user> (get {1 "one" 2 "two"}
42
"infinity")
"infinity"
and optional defaults when destructuring things like maps:
user> (let [{a :a b :b c :c :or {a :default-a
b :default-b}}
{:a 42 :c 99}]
(println "a is" a)
(println "b is" b)
(println "c is" c))
a is 42
b is :default-b
c is 99
make these situations less common, though there are plenty of (if (test foo) foo :default) cases in the codebase I work with daily. For us they have not been common and consistent enough to be worth consolidating into a helper as you have done.
I'm reading The Joy of Clojure, and in the section about parallelization, the functions pvalues, pmap and pcalls are explained, with a brief example of how each one is used. Below are the examples given for pvalues and pcalls:
(defn sleeper [s thing] (Thread/sleep (* 1000 s)) thing)
(pvalues
(sleeper 2 :1st)
(sleeper 3 :2nd)
(keyword "3rd"))
(pcalls
#(sleeper 2 :1st)
#(sleeper 3 :2nd)
#(keyword "3rd"))
I understand the technical difference between the two -- pvalues takes a variable number of "values" to be computed, whereas pcalls takes "an arbitrary number of functions taking no arguments," to be called in parallel. In both cases, a lazy sequence of the results is returned.
My question is essentially, when would you use one vs. the other? It seems like the only semantic difference is that you stick a # before each argument to pcalls, turning it into an anonymous function. So, purely from the perspective of saving keystrokes and having simpler code, wouldn't it make more sense to just use pvalues? -- and if so, why even have a pcalls function?
I get that with pcalls, you can substitute a symbol referring to a function, but if you wanted to use such a function with pvalues, you could just put the function in parentheses so that it is called.
I'm just a little confused as to why Clojure has these 2 functions that are so similar. Is there anything that you can do with one, but not the other? Some contrived examples might be helpful.
pvalues is a convenience macro around pcalls; because macros are not first-class, it can't be composed or passed around the way a function can. Here is an example where only pcalls works:
user> (defn sleeper [s thing] (Thread/sleep (* 1000 s)) thing)
#'user/sleeper
user> (def actions [#(sleeper 2 :1st) #(sleeper 3 :2nd) #(keyword "3rd")])
#'user/actions
user> (apply pcalls actions)
(:1st :2nd :3rd)
user> (apply pvalues actions)
CompilerException java.lang.RuntimeException: Can't take value of a macro: #'clojure.core/pvalues, compiling:(NO_SOURCE_PATH:1:1)
It's important when designing APIs in Clojure that you don't restrict users to interacting with your library only through macros, because then they run into limitations such as this.
I have to do a term project in my symbolic programming class, but I'm not really sure what a good/legitimate project would be. Can anyone give me examples of symbolic programming? Just any generic ideas; right now I'm leaning towards a turn-based fight game (JRPG style, basically), but I really don't want to do a game.
The book Paradigms of Artificial Intelligence Programming, Case Studies in Common Lisp by Peter Norvig is useful in this context.
The book describes in detail symbolic AI programming with Common Lisp.
The examples are in the domain of Computer Algebra, solving mathematical tasks, game playing, compiler implementation and more.
Soon there will be another fun book: The Land of Lisp by Conrad Barski.
Generally there are a lot of possible applications of Symbolic Programming:
natural language question answering
natural language story generation
planning in logistics
computer algebra
fault diagnosis of technical systems
description of catalogs of things and matching
game playing
scene understanding
configuration of technical things
A simple arithmetic interpreter in Scheme that takes a list of symbols as input:
(define (symbolic-arith lst)
(case (car lst)
((add) (+ (cadr lst) (caddr lst)))
((sub) (- (cadr lst) (caddr lst)))
((mult) (* (cadr lst) (caddr lst)))
((div) (/ (cadr lst) (caddr lst)))
(else (error "unknown operator"))))
Test run:
> (symbolic-arith '(add 2 43))
=> 45
> (symbolic-arith '(sub 10 43))
=> -33
> (symbolic-arith '(mu 50 43))
unknown operator
> (symbolic-arith '(mult 50 43))
=> 2150
That shows, in miniature, the idea behind meta-circular interpreters: Lisp code is just Lisp data that can be interpreted. Lisp in Small Pieces is the best place to learn about such interpreters.
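As a sketch of where this leads, here is the same idea in Python extended to nested expressions (eval_arith and OPS are illustrative names, not part of any library): the interpreter recurses over a tree of symbols, which is the essential move behind fuller Lisp interpreters.

```python
# Recursive evaluator for symbolic arithmetic expressions of the form
# [operator, lhs, rhs], where lhs/rhs may themselves be expressions.
OPS = {"add":  lambda a, b: a + b,
       "sub":  lambda a, b: a - b,
       "mult": lambda a, b: a * b,
       "div":  lambda a, b: a / b}   # note: true (float) division

def eval_arith(expr):
    if isinstance(expr, (int, float)):
        return expr                   # a number evaluates to itself
    op, lhs, rhs = expr               # an expression is (operator lhs rhs)
    if op not in OPS:
        raise ValueError("unknown operator: " + str(op))
    return OPS[op](eval_arith(lhs), eval_arith(rhs))

print(eval_arith(["add", 2, 43]))              # 45
print(eval_arith(["mult", ["sub", 10, 3], 6])) # 42
```

The only change from the flat version is the recursive call on the operands, but it turns a calculator for three-element lists into an evaluator for arbitrarily nested trees.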
Another simple example where lists of symbols are used to implement a key-value data store:
> (define person (list (cons 'name 'mat) (cons 'age 20)))
> person
=> ((name . mat) (age . 20))
> (assoc 'name person)
=> (name . mat)
> (assoc 'age person)
=> (age . 20)
If you are new to Lisp, Common Lisp: A Gentle Introduction to Symbolic Computation is a good place to start.
This isn't (just) a naked attempt to steal #Ken's rep. I don't recall what McCarthy's original motivation for creating Lisp was, but it is certainly suitable for computer algebra, including differentiation and integration.
The reason I am posting, though, is that "automatic differentiation" is used to mean something other than differentiating symbolic expressions. It means writing a program that calculates the derivative of another program: for example, given a Fortran program which calculates f(x), an automatic differentiation tool would produce a Fortran function which calculates f'(x). One technique, of course, is to transform the program into a symbolic expression, differentiate it symbolically, and then transform the resulting expression back into a program.
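The non-symbolic route can be sketched with dual numbers, the standard trick behind forward-mode automatic differentiation (the Dual class below is an illustrative toy, not a library type): each value carries its derivative along with it, so f'(x) falls out of ordinary evaluation without ever building a symbolic expression.

```python
class Dual:
    # A number paired with its derivative; arithmetic applies the
    # differentiation rules alongside the ordinary operations.
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)  # (u + v)' = u' + v'
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def derivative(f, x):
    return f(Dual(x, 1.0)).der        # seed with dx/dx = 1

f = lambda x: 3 * x * x + 2 * x + 1   # f'(x) = 6x + 2
print(derivative(f, 5.0))             # 32.0
```

Writing such a tool for a small expression language, or comparing it against the symbolic approach, would itself make a reasonable project.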
The first of these is a nice exercise in symbolic computation, though it is so well trodden that it might not be a good choice for a term project. The second is an active research front, and the OP would have to be careful not to bite off more than they can chew. However, even a limited implementation would be interesting and challenging.