Safe Multi-threaded Counter Management in Clojure

I have a scenario where I want to monitor the performance of different modules with simple counters. The code is written in Clojure. There is an unknown number of counters I will need to track at runtime, and once in a while I report them (to statsd).
Here is my code:
(defn counter-incrementer []
  (let [counters (atom {})
        atom-increment (fn [counters-unwrapped metric-name metric-value]
                         (assoc counters-unwrapped metric-name
                                (+ (get counters-unwrapped metric-name 0) metric-value)))
        increment (fn [metric-name metric-value]
                    (swap! counters atom-increment metric-name metric-value))]
    (fn [metric-name metric-value]
      (increment metric-name metric-value))))
Then in each place in the code where I want to update a counter, I use:
(def inc-fn (counter-incrementer))
.
.
.
(inc-fn "number of logged users" 10)
This code works, but I feel it's not the best solution to the problem. For example, each time I want to update a single counter, the update contends on the whole counters map.
Is there a best-practice solution for this kind of problem in clojure?

You are correct; a better solution is to have one atom per counter.
That said, the best solution is to use a production-ready metrics library, such as https://github.com/sjl/metrics-clojure
Here is usage for counters:
http://metrics-clojure.readthedocs.org/en/latest/metrics/counters.html
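If you do want to roll your own with one atom per counter, a minimal sketch might look like this (counter-registry, counter-for, and increment! are illustrative names, not from the question):
(defonce counter-registry (atom {}))

;; Look up the counter atom for a metric, creating it atomically on first use.
;; swap!'s update function re-checks contains?, so a racing thread never
;; overwrites an existing counter.
(defn counter-for [metric-name]
  (or (get @counter-registry metric-name)
      (get (swap! counter-registry
                  (fn [m] (if (contains? m metric-name)
                            m
                            (assoc m metric-name (atom 0)))))
           metric-name)))

;; Incrementing one counter now only contends on that counter's own atom.
(defn increment! [metric-name metric-value]
  (swap! (counter-for metric-name) + metric-value))

;; Usage:
(increment! "number of logged users" 10)
@(counter-for "number of logged users")
=> 10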

If you want to keep multiple metrics in one reference type and use only Clojure, you can do so with an agent holding a map. Updates to agents are queued and handled by the agent thread pool, so there is no locking for the calling thread.
(def metrics (agent {:counter1 0, :counter2 0}))

(send metrics update-in [:counter1] inc)

@metrics
=> {:counter1 1, :counter2 0}
This way you can create new key-value pairs dynamically when needed. update-in will create new keys when they're not in the map, but you will need to adjust your update function to account for nil values. This is most practically done by composing your original function with fnil and a default value.
(send metrics update-in [:counter3] (fnil inc 0))

@metrics
=> {:counter1 1, :counter3 1, :counter2 0}

(send metrics update-in [:counter3] (fnil inc 0))

@metrics
=> {:counter1 1, :counter3 2, :counter2 0}
Keep in mind, though, that updates aren't applied immediately. Also, actions still queued on the agent pool will prevent the JVM process from shutting down unless (shutdown-agents) is called.
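Because sends are asynchronous, you may want to flush pending updates before reporting. A small sketch, continuing with the metrics agent above:
;; Block until all actions sent so far from this thread have been applied.
(await metrics)
;; Deref for the snapshot to report (e.g. to statsd).
@metrics
;; At process shutdown, so the JVM can exit promptly:
(shutdown-agents)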

Related

Is clojure "multithread"?

My question may seem weird but I think I'm facing an issue with volatile objects.
I have written a library implemented like this (just a scheme, not real content):
(def var1 (volatile! nil))
(def var2 (volatile! nil))

(defn do-things [a]
  (vreset! var1 a)
  (vswap! var2 inc)
  {:a @var1 :b @var2})
So I have global vars that are initialized from external values, others that are computed, and I return their contents.
I used volatiles to get better speed than with atoms, and to avoid redefining a new var for every calculation.
The problem is that this seems to fail in practice, because I map do-things over a collection (in another program) with occasional inner sub-calls to this function, like (pseudo-code):
(map
 (fn [x]
   (let [analysis (do-things x)]
     (if blabla
       (do-things (f x))
       analysis)))
 coll)
Will the inner conditional call spawn another thread under the hood? It seems so, because sometimes the calls work and sometimes they don't.
Is there any other way to do this, apart from defining the volatiles inside the do-things body?
EDIT
Actually the error turned out to be something else, but the question still stands: is this an acceptable/safe way to do it, without any explicit use of multithreading capabilities?
There are very few constructs in Clojure that create threads on your behalf - generally Clojure can and will run on one or more threads depending on how you structure your program. pmap is a good example that creates and manages a pool of threads to map in parallel. Another is clojure.core.reducers/fold, which uses a fork/join pool, but really that's about it. In all other cases it's up to you to create and manage threads.
Volatiles should only be used with great care and in circumstances where you control the scope of use such that you are guaranteed not to be competing with threads to read and write the same volatile. Volatiles guarantee that writes can be read on another thread, but they do nothing to guarantee atomicity. For that, you must use either atoms (for uncoordinated) or refs and the STM (for coordinated).
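To make the difference concrete, here is a small illustrative sketch (not from the question): racing increments from several threads can lose updates with a volatile, but never with an atom.
(def v (volatile! 0))
(def a (atom 0))

;; pmap runs the increments on multiple threads; dorun forces the lazy seq.
(dorun (pmap (fn [_] (vswap! v inc)) (range 1000)))
(dorun (pmap (fn [_] (swap! a inc)) (range 1000)))

@v ;; may be less than 1000 (lost updates)
@a ;; always 1000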

Achieving multiple locks in clojure

I'm new to Clojure and am writing a web application. It includes a function fn, performed on a user user-id, which involves several steps of reading and writing to the database and file system. These steps cannot be performed simultaneously by multiple threads (that would cause database and file system inconsistencies), and I don't believe they can be performed inside a database transaction. However, they are specific to one user and thus can be performed simultaneously for different users.
Thus, if an HTTP request is made to perform fn for a specific user-id, I need to make sure that it completes before any other HTTP request can perform fn for that user-id.
I've come up with a solution that seems to work in the REPL but have not tried it in the web server yet. However, being inexperienced with Clojure and threaded programming, I'm not sure whether this is a good or safe way to solve the problem. The following code was developed by trial and error and uses the locking function, which seems to go against the "no locks" philosophy of Clojure.
(ns locking.core)

;;; Check if a var representing the lock exists in the namespace.
;;; If not, create it. Creating a new var if one already
;;; exists seems to break the locking.
(defn create-lock-var
  [var-name value]
  (let [var-sym (symbol var-name)]
    (when (nil? (ns-resolve 'locking.core var-sym))
      (intern 'locking.core var-sym value))
    ;; Return the lock var
    (ns-resolve 'locking.core var-sym)))

;;; Takes an id which represents the lock and the function
;;; which may only run in one thread at a time for a specific id.
(defn lock-function
  [lock-id transaction]
  (let [lock (create-lock-var (str "lock-id-" lock-id) lock-id)]
    (future
      (locking lock
        (transaction)))))

;;; A function to test the locking
(defn test-transaction
  [transaction-count sleep]
  (dotimes [x transaction-count]
    (Thread/sleep sleep)
    (println "performing operation" x)))
If I open three REPL windows and execute these functions, it works:
repl1 > (lock-function 1 #(test-transaction 10 1000)) ; executes immediately
repl2 > (lock-function 1 #(test-transaction 10 1000)) ; waits for repl1 to finish
repl3 > (lock-function 2 #(test-transaction 10 1000)) ; executes immediately because id=2
Is this reliable? Are there better ways to solve the problem?
UPDATE
As pointed out, the creation of the lock variable is not atomic. I've re-written the lock-function function and it seems to work (no need for create-lock-var)
(def locks (atom {}))

(defn lock-transaction
  [lock-id transaction]
  (let [lock-key (keyword (str "lock-id-" lock-id))]
    (compare-and-set! locks (dissoc @locks lock-key) (assoc @locks lock-key lock-id))
    (future
      (locking (lock-key @locks)
        (transaction)))))
Note: Renamed the function to lock-transaction, seems more appropriate.
Don't use N vars in a namespace, use an atom wrapped around 1 hash-map mapping N symbols to N locks. This fixes your current race condition, avoids creating a bunch of silly vars, and is easier to write anyway.
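A minimal sketch of that approach (lock-registry and lock-for are illustrative names):
(def lock-registry (atom {}))

;; Return the lock object for an id, creating it atomically on first use.
;; Because swap!'s update function re-checks contains?, two threads asking
;; for the same id always receive the same Object.
(defn lock-for [lock-id]
  (get (swap! lock-registry
              (fn [m] (if (contains? m lock-id)
                        m
                        (assoc m lock-id (Object.)))))
       lock-id))

(defn lock-transaction [lock-id transaction]
  (future
    (locking (lock-for lock-id)
      (transaction))))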
Since you're making a web app, I have to warn you: even if you do manage to get in-process locking right (which is not easy in itself), it will be for nothing as soon as you deploy your web server on more than one machine (which is almost mandatory if you want your app to be highly-available).
So basically, if you want to use locking, you'd better use distributed locking. From this point on, this discussion is not Clojure-specific, since Clojure's concurrency tools won't be especially helpful here.
For distributed locking, you could use something like Zookeeper. If you don't want to set up a whole Zookeeper cluster just for this, maybe you can compromise by using a Redis database (the Carmine library gives you distributed locks out of the box), although last time I heard Redis locking is not 100% reliable.
Now, it seems to me that locking is not really a requirement, and is not the best approach, especially if you're striving for idiomatic Clojure. How about using a queue instead? Some popular JVM message brokers (such as HornetQ and ActiveMQ) give you Message Grouping, which guarantees that messages with the same group id will be processed (serially) by the same consumer. All you have to do is have some threads listen to the right queue and set the user-id as the group id for your messages, as sketched below.
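For illustration, with a JMS broker the grouping boils down to setting the standard JMSXGroupID property on each message. A sketch, assuming session (a javax.jms.Session) and producer (a javax.jms.MessageProducer) are already set up:
;; All messages with the same group id are delivered, in order, to a single consumer.
(defn enqueue-user-job [session producer user-id payload]
  (let [msg (.createTextMessage session payload)]
    (.setStringProperty msg "JMSXGroupID" (str "user-" user-id))
    (.send producer msg)))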
HACK: If you don't want to set up a distributed message broker, maybe you can get around this by enabling sticky sessions on your load balancer and using such a message broker in-VM.
By the way, don't name your function fn :).

Clojure - Using agents slows down execution too much

I am writing a benchmark for a program in Clojure. I have n threads accessing a cache at the same time. Each thread will access the cache x times. Each request should be logged inside a file.
To this end I created an agent that holds the path of the file to be written to. When I want to write, I send-off a function that writes to the file and simply returns the path. This way my file writes are race-condition free.
When I execute my code without the agent, it finishes in a few milliseconds. When I use the agent, and ask each thread to send-off to the agent each time, my code runs horribly slow. I'm talking minutes.
(defn load-cache-only
  "Test requesting from the cache only."
  [usercount cache-size]
  ;; Create the file to write the benchmark results to.
  (def sink "benchmarks/results/load-cache-only.txt")
  (let [data-agent (agent sink)
        ;; Data for our backing store, generated at runtime.
        store-data (into {} (map vector
                                 (map (comp keyword str)
                                      (repeat "item")
                                      (range 1 cache-size))
                                 (range 1 cache-size)))
        cache (create-full-cache cache-size store-data)]
    (barrier/run-with-barrier
     (fn [] (load-cache-only-work cache store-data data-agent))
     usercount)))

(defn load-cache-only-work
  "For use with 'load-cache-only'. Requests each item in the cache once.
  We time how long it takes for each request to be handled."
  [cache store-data data-agent]
  (let [cache-size (count store-data)
        foreachitem (fn [cache-item]
                      (let [before (System/nanoTime)
                            result (cache/retrieve cache cache-item)
                            after (System/nanoTime)
                            diff_ms ((comp str float) (/ (- after before) 1000))]
                        ;; (send-off data-agent
                        ;;           (fn [filepath]
                        ;;             (file/insert-record filepath cache-size diff_ms)
                        ;;             filepath))
                        ))]
    (doall (map foreachitem (keys store-data)))))
The (barrier/run-with-barrier) code simply spawns usercount threads and starts them at the same time (using an atom). The function I pass is the body of each thread.
The body will simply map over a list named store-data, which is a key-value list (e.g., {:a 1 :b 2}). The length of this list in my code right now is 10. The number of users is 10 as well.
As you can see, the code for the agent send-off is commented out. This makes the code execute normally. However, when I enable the send-offs, even without writing to the file, the execution time is too slow.
Edit:
I made each thread print a dot just before it sends off to the agent.
The dots appear just as fast as without the send-off. So there must be something blocking in the end.
Am I doing something wrong?
You need to call (shutdown-agents) when you're done sending stuff to your agent if you want the JVM to exit in reasonable time.
The underlying problem is that if you don't shut down your agents, the threads backing their thread pool will never be shut down and will prevent the JVM from exiting. There's a timeout that will shut down the pool if nothing else is running, but it's fairly lengthy. Calling shutdown-agents as soon as you're done producing actions will resolve this problem.
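For example, at the end of the benchmark (run-benchmark here is a hypothetical entry point standing in for the code above):
(defn -main [& args]
  (run-benchmark)     ;; performs the send-offs
  (await data-agent)  ;; block until the queued file writes have completed
  (shutdown-agents))  ;; let the JVM exit without waiting for the pool's idle timeout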

synchronized counter in clojure

If I want to keep a global counter (e.g. to count the number of incoming requests across multiple threads), then the best way to do it in Java would be to use a volatile int. Assuming Clojure is being used, is there a better (higher-throughput) way?
I would do this with an atom in Clojure:
(def counter (atom 0N))

;; increment the counter
(swap! counter inc)

;; read the counter
@counter
=> 1
This is totally thread-safe, and surprisingly high-performance. Also, since it uses Clojure's arbitrary-precision numeric handling, it isn't vulnerable to the integer overflows that a volatile int is.
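If you want to squeeze out even more throughput and don't mind Java interop, an AtomicLong is another option (a sketch, not part of the answer above; unlike the atom holding 0N, it can overflow like any long):
(def counter2 (java.util.concurrent.atomic.AtomicLong. 0))

;; increment and read
(.incrementAndGet counter2)
(.get counter2)
=> 1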
Define a global counter as an agent
(def counter (agent 0))
To increase the value contained in the agent you send a function (in this case inc) to the agent:
(send counter inc)
To read the current value you can use deref or the @ reader macro:
@counter ;; same as (deref counter)
Agents are only one of several available reference types. You can read more about these things on the Clojure website:
High-level overview
Software transactional memory with refs
Asynchronous agents
Atoms

Idiomatic Clojure way to spawn and manage background threads

What is the idiomatic Clojure way to create a thread that loops in the background doing updates to some shared refs and to manage its lifetime? I find myself using future for this, but it feels like a little bit of a hack as I never return a meaningful value. E.g.:
(future (loop [] (do
                   (Thread/sleep 100)
                   (dosync (...))
                   (recur))))
Also, I need to be careful to future-cancel this when the background processing is no longer needed. Any tips on how to orchestrate that in a Clojure/Swing application would be nice. E.g. a dummy JComponent, added to my UI, that is responsible for killing the thread when the window is closed may be an idea.
You don't need a do in your loop; it's implied. Also, while there's nothing wrong with an unconditional loop-recur, you may as well use (while true ...).
future is a fine tool for this; don't let it bother you that you never get a value back. That should really bother you if you use an agent rather than a future, though - agents without values are madness.
However, who said you need to future-cancel? Just make one of the steps in your future be to check whether it's still needed. Then no other parts of your code need to keep track of futures and decide when to cancel them. So something like
(future (loop []
          (Thread/sleep 100)
          (when (dosync
                  (alter some-value some-function))
            (recur)))) ; quit if alter returns nil
would be a viable approach.
Using agents for background recurring tasks seems neater to me
(def my-ref (ref 0))
(def my-agent (agent nil))

(defn my-background-task [x]
  (send-off *agent* my-background-task)
  (println (str "Before " @my-ref))
  (dosync (alter my-ref inc))
  (println "After " @my-ref)
  (Thread/sleep 1000))
Now all you have to do is initiate the loop:
(send-off my-agent my-background-task)
The my-background-task function re-sends itself to the calling agent, so the next invocation is queued and runs once the current one finishes.
This is how Rich Hickey performs recurring tasks in the ant colony example application: Clojure Concurrency
