Clojure: block the use of an atom? - multithreading

I have Clojure code which runs a few threads in parallel. They all share an atom, (def counter (atom 0)), which is incremented by each thread. Every 10 minutes, I'd like to perform several actions using the value of the atom and then reset it back to 0 - for example:
(defn publish-val []
  (let [c @counter]
    (email c)
    (statsd c)
    (print-log c)
    (reset! counter 0)))
It is important that the value of counter does not change between the moment it is dereferenced and the moment it is reset - meaning all threads should be blocked from changing the atom's value while publish-val executes. How do I do this?

Unless you've greatly simplified the problem for your example, it looks like swap!-ing out the current counter value with zero would be sufficient here:
(defn publish-val []
  (with-local-vars [c nil]
    (swap! counter
           (fn [x] (var-set c x) 0))
    (email @c)
    (statsd @c)
    (print-log @c)))
So you just save the old counter value in a local var, atomically swap it with zero, and then do whatever bookkeeping you need with the old value - all without stalling any of the other threads for longer than it takes to swap!
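If you are on Clojure 1.9 or later, reset-vals! can replace the local var entirely: it returns a vector of the old and new values. A minimal sketch, assuming the same email / statsd / print-log helpers as in the question:
(defn publish-val []
  ;; reset-vals! swaps in 0 and returns [old-value new-value],
  ;; so the pre-reset counter is captured atomically
  (let [[old _] (reset-vals! counter 0)]
    (email old)
    (statsd old)
    (print-log old)))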

Use an agent.
See the section "Using agents to serialise access to non-threadsafe resources" for an example of using them to serialise printing to the console.
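A rough sketch of the agent approach, assuming the same helpers as in the question. Sends to an agent are applied one at a time, so the publish-and-reset step cannot interleave with the increments:
(def counter (agent 0))

(defn increment! []
  (send counter inc))

(defn publish-val []
  (send counter
        (fn [c]
          ;; runs on the agent's thread, serialised with all other sends
          (email c)
          (statsd c)
          (print-log c)
          0))) ; the agent's new state: reset to 0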

Related

swap! value in atom (nested-map) Clojure

I'm trying to update a nested counter in an atom (a map) from multiple threads, but I'm getting unpredictable results.
(def a (atom {:id {:counter 0}}))

(defn upvote [id]
  (swap! a assoc-in [(keyword id) :counter] (inc (get-in @a [(keyword id) :counter]))))

(dotimes [i 10] (.start (Thread. (fn [] (upvote "id")))))
(Thread/sleep 12000)
(prn @a)
I'm new to Clojure, so it's very possible I'm doing something wrong, but I can't figure out what. It prints a counter value that varies between 4 and 10, different each time.
I want to update the counter atomically and hoped this approach would always give me a counter value of 10 - that it would just retry on failure and eventually get to 10.
It's for an up-vote function that can get triggered concurrently.
Can you see what I'm doing wrong here?
You are updating the atom non-atomically. You first read its value with @a, and then pass the result to swap!; the value may change in between.
The atomic way to update the value is to use a pure function inside swap!, without referring to the previous value via @:
(defn upvote [id]
  (swap! a update-in [(keyword id) :counter] inc))
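With the pure update function, the original test should reliably reach 10, since swap! retries the whole function on contention. For example:
(def a (atom {:id {:counter 0}}))

(dotimes [i 10] (.start (Thread. (fn [] (upvote "id")))))
(Thread/sleep 1000)
(prn @a) ;; => {:id {:counter 10}} once all threads have finished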

How to exhaust a channel's values and then return the result (ClojureScript)?

Suppose that channel chan has the values "1" and "2" on queue.
Goal: Make a function which takes chan and returns the vector [1 2]. Note that I am totally fine if this function has to block for some time before its value is returned.
Attempt:
(defn chan->vector
  [chan]
  (let [a (atom true) v []]
    (while (not-nil? @a)
      (go
        (reset! a (<! chan))
        (into v @a)
        (reset! a (<! chan))))
    v))
Result: My REPL freezes and eventually spits out a huge error. I have come to realize that this is because the (go ...) block is asynchronous and returns immediately. Thus the atom in my (while ...) loop never gets a chance to be set to nil, and the loop never terminates.
So how do I accomplish the desired result? In case it's relevant, I'm using ClojureScript and targetting nodejs.
You should use alts! from core.async to accomplish this
(https://clojure.github.io/core.async/#clojure.core.async/alts!):
(def x (chan 10))

(go (>! x 1)
    (>! x 2)
    (>! x 3))

(defn read-all [from-chan]
  (<!! (go-loop [res []]
         (let [[v _] (alts! [from-chan] :default :complete)]
           (if (= v :complete)
             res
             (recur (conj res v)))))))

(read-all x)
;; output: [1 2 3]

(read-all x)
;; output: []

(go (>! x 10)
    (>! x 20)
    (>! x 30)
    (>! x 40))

(read-all x)
;; output: [10 20 30 40]
Inside the go-loop, (alts! [from-chan] :default :complete) tries to read a value from the channel; if no value is immediately available it returns the default value, which tells you to break the loop and return the accumulated values.
Update: since the blocking read (<!!) is absent in ClojureScript, you can rewrite it the following way:
(defn read-all [from-chan]
  (go-loop [res []]
    (let [[v _] (alts! [from-chan] :default :complete)]
      (if (= v :complete)
        res
        (recur (conj res v))))))
This version returns a channel, so you then just read one value from it:
(go (let [res (<! (read-all x))]
      (println res)
      ;; do something else
      ))
You can use clojure.core.async/reduce:
;; demo setup
(def ch (async/chan 2))
(async/>!! ch :foo)
(async/>!! ch :bar)

;; background thread to print reduction result
(async/thread
  (prn (async/<!! (async/reduce conj [] ch))))

;; closing the channel…
(async/close! ch)
;; …terminates the reduction and the result gets printed out:
;; [:foo :bar]
clojure.core.async/reduce returns a channel that will produce a value if and when the original channel closes. Internally it uses a go block and will release control in between taking elements from the original channel.
If you want to produce a value after a certain amount of time passes, whether or not the original channel closes, you can either wrap the original channel in a pass-through channel that closes itself after a timeout, or use a custom approach to the reduction step (perhaps the approach suggested by @leetwinski).
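A sketch of the pass-through idea (take-until-timeout is an illustrative name, not part of core.async): copy values from the source channel onto a fresh channel until either the source closes or a timeout fires, then close the fresh channel so async/reduce can complete:
(defn take-until-timeout [ch msecs]
  (let [out (async/chan)
        t   (async/timeout msecs)]
    (async/go-loop []
      (let [[v port] (async/alts! [ch t])]
        (if (and (some? v) (= port ch))
          (do (async/>! out v)
              (recur))
          ;; source closed or timeout elapsed: end the reduction
          (async/close! out))))
    out))

;; reduce over the wrapped channel instead of the original one
(async/go
  (prn (async/<! (async/reduce conj [] (take-until-timeout ch 1000)))))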
Use into. From its docstring:
Returns a channel containing the single (collection) result of the items taken from the channel conjoined to the supplied collection. ch must close before into produces a result.
Something like this should work (it should print the events from events-chan given events-chan closes when it is done publishing events):
(go
  (println (<! (into [] events-chan))))
The source channel needs to end (close), otherwise you can't put all events into a collection.
Edit:
I re-read your question, and it's not very clear what you want to accomplish. Whatever you want to do, chan->vector needs to return a channel so that whoever calls it can wait for the result. In fact, chan->vector is exactly into:
;; chan->vector ch:Chan<Event> -> Chan<Vector[Event]>
(defn chan->vector [ch]
  (into [] ch))

(go
  (let [events (<! (chan->vector events-chan))]
    (println events))) ; do whatever with the events vector
As I mentioned above, if the events chan never closes, then you have to do more thinking about how to consume the events. There is no magic solution. Do you want to batch the events by time intervals? By number of events? By a combination of those?
In summary, as mentioned above, chan->vector is into.
While possible in Clojure and many other languages, what you want to do is not possible in ClojureScript.
You want a function that blocks while listening to a channel. However, ClojureScript's version of core.async doesn't include the blocking operators. Why? Because ClojureScript doesn't block.
I couldn't find a reliable source to back that last sentence. There seems to be a lot of confusion around this topic on the web. However, I'm pretty sure of what I'm saying because ClojureScript ultimately becomes JavaScript, and that's how JavaScript works.
Indeed, JavaScript never blocks, neither on the browser nor in Node.js. Why? As far as I understand, it uses a single thread, so if it were to block, the user would be unable to do anything in the browser.
So it's impossible to do what you want. This is by design, because it could have disastrous UX effects. ClojureScript channels are like JavaScript events; in the same way you don't want an event listener to block the user interface while waiting for an event to happen, you also shouldn't want a channel to block while waiting for new values.
Instead, try using a callback function that gets called whenever a new value is delivered.
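For example, core.async's take! registers a callback instead of blocking. A minimal sketch, where on-next-value is just an illustrative helper and some-chan stands for the channel in question:
(require '[cljs.core.async :as async])

(defn on-next-value [some-chan f]
  ;; f is invoked asynchronously with the next value taken from some-chan
  (async/take! some-chan f))

(on-next-value some-chan #(println "got value:" %))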

Clojure how to get access to one field from two threads?

I can't understand multithreading in Clojure, and I can't find examples of REAL multithreading. Most samples with atoms, refs, and vars are single-threaded. So here is my task: two threads get access to one field, and each thread can change it. I use an atom for this purpose, so the code is:
(do
  (def field (atom "v0"))

  (defn f1 []
    (dotimes [i 100000]
      (if (= i 9999)
        (reset! field "v1"))))

  (defn f2 []
    (dotimes [i 100000]
      (if (= i 777)
        (reset! field "v2"))))

  (do
    (deref (future (Thread/sleep 10) (f1))
           0 f2)
    (prn @field)))
But nothing happens - the value of field is still "v0". How do I make a normal two-threaded example, with a loop in each thread and shared access to the variable?
Look at the docs for deref:
clojure.core/deref
([ref] [ref timeout-ms timeout-val])
returns the in-transaction-value of ref, else returns the
most-recently-committed value of ref. When applied to a var, agent
or atom, returns its current state. When applied to a delay, forces
it if not already forced. When applied to a future, will block if
computation not complete. When applied to a promise, will block
until a value is delivered. The variant taking a timeout can be
used for blocking references (futures and promises), and will return
timeout-val if the timeout (in milliseconds) is reached before a
value is available. See also - realized?.
So your timeout is 0, which means deref returns the default value right away.
That default is f2 - a function value, not a function call - so f2 is never invoked and no reset! ever happens.
if you want "v1" you should deref like:
(deref (future (Thread/sleep 10) (f1)) 100 (f2))
if you want "v2":
(deref (future (Thread/sleep 10) (f1)) 0 (f2))
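For a genuinely two-threaded version of the example, here is a sketch using two futures (each future runs on its own thread; both are dereferenced so they finish before the final read):
(def field (atom "v0"))

(let [t1 (future (dotimes [i 100000]
                   (when (= i 9999) (reset! field "v1"))))
      t2 (future (dotimes [i 100000]
                   (when (= i 777) (reset! field "v2"))))]
  @t1 ;; block until the first thread finishes
  @t2 ;; block until the second thread finishes
  (prn @field)) ;; prints "v1" or "v2", whichever reset! ran last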

Clojure core.async: CPU hangs after timeout. Any way to properly kill the threads produced by a (go...) block?

Based on the core.async walkthrough example, I created the code below to handle some CPU-intensive jobs using multiple channels, with a timeout of 10 seconds. However, after the main thread returns, CPU usage remains around 700% (on an 8-CPU machine). I have to manually run nrepl-close in Emacs to shut down the Java process.
Is there a proper way to kill the threads produced by the (go...) blocks? I tried to close! each chan, but it doesn't work. I want to make sure the Java process's CPU usage goes back to 0 after the main thread returns.
(defn RETURNED-STR-FROM-SOME-CPU-INTENSE-JOB [] (do... (str ...)))

(let [n 1000
      cs (repeatedly n chan)]
  (doseq [c cs]
    (go
      (>! c (RETURNED-STR-FROM-SOME-CPU-INTENSE-JOB))))
  (dotimes [i n]
    (let [[result source] (alts!! (conj cs (timeout 10000)))] ;; wait up to 10 seconds for each job
      (if (list-contains? cs source) ;; if the returned chan belongs to cs
        (prn "OK JOB FINISHED " result)
        (prn "JOB TIMEOUT"))))
  (doseq [i cs]
    (close! i)) ;; not useful for "killing" the go threads
  (prn "JOBS ARE DONE"))
;; Btw, list-contains? is used to check whether an element is in a list:
;; http://stackoverflow.com/questions/3249334/test-whether-a-list-contains-a-specific-value-in-clojure
(defn list-contains? [coll value]
  (let [s (seq coll)]
    (if s
      (if (= (first s) value) true (recur (rest s) value))
      false)))
In the REPL there seems to be no clean way yet.
I first tried a very dirty way, using the deprecated method Thread.stop:
(doseq [i @threadpool]
  (.stop i))
It seemed to work, as CPU usage dropped once the main thread returned to the REPL, but if I ran the program again in the REPL, it would just hang at the go block part!
Then I googled around and found this blog and it says
One final thing to note: we don't explicitly do any work to shutdown the go routines. Go routines will automatically stop operation when the main function exits. Thus, go routines are like daemon threads in the JVM (well, except for the "thread" part ...)
So I tried again by making my project into an uberjar and running it from a command console, and it turned out that CPU usage dropped immediately once the blinking cursor returned to the console!
Based on the answer to another related question, How to control number of threads in (go...), I've found a better way to properly kill all the threads started by (go...) blocks.
First, alter the executor var and supply a custom thread pool:
;; def, not defonce, so that the executor can be re-defined
;; the number of threads is fixed at 4
(def my-executor
  (java.util.concurrent.Executors/newFixedThreadPool
   4
   (conc/counted-thread-factory "my-async-dispatch-%d" true)))

(alter-var-root #'clojure.core.async.impl.dispatch/executor
                (constantly (delay (tp/thread-pool-executor my-executor))))
Then call .shutdownNow and .awaitTermination on the executor after the (go...) blocks:
(.shutdownNow my-executor)
(while (not (.awaitTermination my-executor 10 java.util.concurrent.TimeUnit/SECONDS))
  (prn "...waiting 10 secs for executor pool to finish"))
[UPDATE]
The shutdown-executor approach above still doesn't seem clean enough. The final solution for my case was to send a function that controls its own timeout into the go block, using the thunk-timeout function. Credit goes to this post. Example below:
(defn toSendToGo [args timeoutUnits]
  (let [result (atom nil)
        timeout? (atom false)]
    (try
      (thunk-timeout
       (fn [] (reset! result (myFunction args))) timeoutUnits)
      (catch java.util.concurrent.TimeoutException e
        (do (prn "!Time out after " timeoutUnits " seconds!!")
            (reset! timeout? true))))
    (if @timeout? (do sth))
    @result))

(let [c (chan)]
  (go (>! c (toSendToGo args timeoutUnits))))
(shutdown-agents)
Implementation-specific, JVM: both agents and channels use a global thread pool, and the termination function for agents iterates and closes all open threads in the VM. Empty the channels first: this action is immediate and non-reversible (especially if you are in a REPL).

:reload-all and existing references

I just discovered an interesting feature of :reload-all. Say I have:
(defn clock-update [clock]
  (swap! clock (fn [previousTime] (+ previousTime 1))))

(def threads (Executors/newScheduledThreadPool 16))

(defn start-clock [clock]
  (. threads scheduleAtFixedRate
     #(clock-update clock) 0 1 TimeUnit/SECONDS))
and I call (start-clock clock), where clock is an atom I'm watching. Well, if I then change the swap! function in clock-update (say, change + to -) and (use :reload-all 'myns), then guess what: the new function is used to update the atom for the existing threads! I didn't expect that. I thought existing threads would continue to reference whatever function they were constructed with.
As the documentation explains:
def always applies to the root binding, even if the var is thread-bound at the point where def is called.
The scheduled closure #(clock-update clock) resolves the clock-update var on every call, so once :reload-all re-defs the var's root binding, even already-running threads pick up the new function on their next tick.
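A minimal sketch of that behaviour, reusing the definitions above (clock is an illustrative atom; the redefinition stands in for what :reload-all does):
(def clock (atom 0))
(start-clock clock) ;; schedules #(clock-update clock) every second

;; later, re-define the root binding (this is effectively what :reload-all does);
;; the already-scheduled task calls the new version on its next tick
(defn clock-update [clock]
  (swap! clock (fn [previousTime] (- previousTime 1))))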
