I have a main thread that creates/destroys objects. Let's name the object 'f'.
Every time one of these objects is created, it is added to the tail queue of another object, say 'mi', and it is removed from that queue when it is deleted.
Now, there is another thread that runs every second and gathers statistics for these 'f' objects. It walks through all the possible instances of 'mi' (say 2048), and for each such 'mi' it gathers all the 'f' objects attached to it, sends a command down to the lower layer, which emits some values corresponding to these objects, and then updates the corresponding 'f' objects with those values.
The concern is: what if one of these 'f' objects gets deleted by the main thread while this once-per-second walk is happening?
Intuitively one would think of having a lock at the 'mi' level that is acquired before beginning the walk and released after the walk/update of all the 'f' objects belonging to a particular instance of 'mi', correct?
The only hitch is that there could be tens of thousands, or even millions, of 'f' objects tied to a single instance of 'mi'.
The other requirement is that the main thread's performance in creating/destroying these 'f' objects should stay high, i.e., at a rate of at least 10,000 objects per second.
Given that, I'm not sure whether it's feasible to have this per-'mi' lock. Or am I overestimating the side effects of lock contention?
Any other ideas?
The concern is: what if one of these 'f' objects gets deleted by the main thread while this once-per-second walk is happening?
If an f object gets deleted while the other thread is trying to use it, undefined behavior will be invoked, and you will probably end up spending some hours debugging your program to figure out why it is occasionally crashing. :) The trick is to make sure that you never delete any f while the other thread might be using it. Typically that means your main thread needs to lock the mi's mutex before removing the f from its queue; once the f is no longer in the queue, you can release the mutex before deleting the f if you want to, since at that point the other thread will not be able to access the f anyway.
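To make the ordering concrete, here is a minimal C++ sketch of that pattern. The type names, the std::list standing in for the tail queue, and the stat field are placeholders for illustration, not the actual code in question.

#include <list>
#include <mutex>

struct F { long long stat_value = 0; };     // stand-in for the 'f' object

struct Mi {                                 // stand-in for the 'mi' object
    std::mutex    mtx;                      // guards the queue of F objects below
    std::list<F*> f_queue;                  // the "tail queue" of F objects attached to this Mi
};

// Main thread: detach an F from its Mi under the lock, then delete it outside the lock.
void destroy_f(Mi& mi, F* f) {
    {
        std::lock_guard<std::mutex> guard(mi.mtx);
        mi.f_queue.remove(f);               // once removed, the stats thread can no longer reach it
    }                                       // mutex released here
    delete f;                               // safe: no other thread can still find this F
}

// Stats thread (once per second): walk the queue under the same lock.
void update_stats(Mi& mi) {
    std::lock_guard<std::mutex> guard(mi.mtx);
    for (F* f : mi.f_queue) {
        f->stat_value += 1;                 // stand-in for applying values from the lower layer
    }
}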
I'm not sure whether it's feasible to have this per-'mi' lock?
It's feasible, as long as you don't mind your main thread occasionally getting held off (i.e., blocked waiting in a mutex::lock() method call) until your other thread finishes iterating through the mi's queue and releases the mutex. Whether that hold-off time is acceptable or not will depend on the latency requirements of your main thread (e.g., if it's generating a report, then being blocked for some number of milliseconds is no problem; on the other hand, if it is operating the control surfaces on a rocket in flight, being blocked for any length of time is unacceptable).
Any other ideas?
My first idea is to get rid of the second thread entirely: just have the main thread call the statistics-collection function directly once per second instead. Then you don't have to worry about mutexes or mutex contention at all. This does mean that your main thread won't be able to perform its primary function while it is running the statistics-collection function, but at least now its "down time" is predictable rather than being a random function of which mi objects the two threads happen to try to lock/access at any given instant.
If that's no good (i.e., you can't tolerate any significant hold-off time whatsoever), another approach is to use a message-passing paradigm rather than a shared-data paradigm. That is, instead of allowing both threads direct access to the same set of mi's, use a message queue of some sort so that the main thread can take an mi out of service and send it over to the second thread for statistics-gathering purposes. The second thread then scans/updates it as usual and, when it's done, passes it back (via a second message queue) to the primary thread, which puts it back into service. You could periodically do this with various mi's to keep statistics updated on each of them without ever requiring shared access to any of them. (This only works if your main thread can afford to go without access to certain mi's for short periods, though.)
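If you go the message-passing route, a minimal sketch of the two queues might look something like this. It is just a generic blocking queue; all the names here are invented for illustration and nothing is taken from the original code.

#include <condition_variable>
#include <deque>
#include <mutex>
#include <utility>

template <typename T>
class MessageQueue {
public:
    void push(T item) {
        {
            std::lock_guard<std::mutex> guard(mtx_);
            items_.push_back(std::move(item));
        }
        cv_.notify_one();
    }
    T pop() {                               // blocks until an item is available
        std::unique_lock<std::mutex> lock(mtx_);
        cv_.wait(lock, [this] { return !items_.empty(); });
        T item = std::move(items_.front());
        items_.pop_front();
        return item;
    }
private:
    std::mutex              mtx_;
    std::condition_variable cv_;
    std::deque<T>           items_;
};

struct Mi;                                  // the mi type, whatever it really is
MessageQueue<Mi*> to_stats_thread;          // main -> stats: "this mi is out of service, scan it"
MessageQueue<Mi*> to_main_thread;           // stats -> main: "done, put it back into service"

The main thread pushes an mi onto to_stats_thread when it wants statistics gathered for it, and drains to_main_thread to put finished mi's back into service; the statistics thread does the reverse.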
I want to see the intrinsic difference between a thread and a long-running go block in Clojure. In particular, I want to figure out which one I should use in my context.
I understand that if one creates a go-block, it is scheduled to run on a so-called thread pool, whose default size is 8, whereas thread creates a new thread.
In my case, there is an input stream that takes values from somewhere, and each value is taken as an input; some calculations are performed on it and the result is put onto a result channel. In short, we have an input channel and an output channel, and the calculation is done in a loop. To achieve concurrency, I have two choices: either use a go-block or use thread.
I wonder what the intrinsic difference is between these two. (We may assume there is no I/O during the calculations.) The sample code looks like the following:
(go-loop []
  (when-let [input (<! input-stream)]
    ... ; calculations here
    (>! result-chan result))
  (recur))
(thread
  (loop []
    (when-let [input (<!! input-stream)]
      ... ; calculations here
      (put! result-chan result))
    (recur)))
I realize that the number of threads that can run simultaneously is exactly the number of CPU cores. In that case, do go-blocks and threads show no difference if I am creating more than 8 threads or go-blocks?
I might want to simulate the difference in performance on my own laptop, but the production environment is quite different from the simulated one, so I could draw no conclusions.
By the way, the calculation is not very heavy. If the inputs are not too large, 8,000 loop iterations can run in 1 second.
Another consideration is whether go-block vs thread will have an impact on GC performance.
There are a few things to note here.
Firstly, the thread pool that threads are created on via clojure.core.async/thread is what is known as a cached thread pool: although it will re-use recently used threads inside that pool, it is essentially unbounded, which of course means it could potentially hog a lot of system resources if left unchecked.
But given that what you're doing inside each asynchronous process is very lightweight, threads seem a little overkill to me. Of course, it's also important to take into account the quantity of items you expect to hit the input stream; if this number is large, you could overwhelm core.async's thread pool for go macros, potentially to the point where you're waiting for a thread to become available.
You also didn't mention precisely where you're getting the input values from: are the inputs some fixed data set that remains constant at the start of the program, or are inputs continuously fed into the input stream from some source over time?
If it's the former, then I would suggest you lean more towards transducers, and I would argue that a CSP model isn't a good fit for your problem, since you aren't modelling communication between separate components in your program; rather, you're just processing data in parallel.
If it's the latter then I presume you have some other process that's listening to the result channel and doing something important with those results, in which case I would say your usage of go-blocks is perfectly acceptable.
I figured it would be a lot easier if I drew a picture of my problem. Here it is:
Everything that is black in the diagram is part of the old design. Everything that is blue is part of the new design. Basically, I need to add a new thread (Worker Thread C) that will handle most of the work that Worker Thread B used to do. Worker Thread A is listening for real-time updates from an external application. When it receives an update, it posts a message to Worker Thread B. Worker Thread B will set its copy of the new data (it still needs that copy in the new design) and then notify the GUI Thread as well as Worker Thread C that new data has arrived.
The user will send a request from the GUI to the new thread (Worker Thread C). Worker Thread C will process the request using the last received copy of the data that originally came from Worker Thread A. So my question is: Will Worker Thread C always be using the latest copy of the data when processing a request with this new design? What if Worker Thread B is too slow to update and then the user submits a request from the GUI? Thanks!
If I'm not mistaken, worker A is conceptually different than workers B and C, right? It rather looks like B and C handle user requests in the background in order to not block the UI. So, there could be a whole list of these background workers that perform UI operations or even none, while there will always be a worker A that pulls/receives updates.
Now, what I would do is that the worker A sends new data to the UI. The UI then uses this data in the next request. When it starts one of the workers like B or C, it just passes the data along with the other info that tells the thread what to do.
Note that you need to take care that you don't modify the data in different threads. The easiest way is to always copy the data when passing it between different parts, but that is often too expensive. Another easy way is to make the data constant. In worker A, you use a unique_ptr<Data> to accumulate the update and then send that data as a shared_ptr<Data const> to the UI thread. From that point on, this data is immutable (the compiler makes sure that you don't change it by accident) so it can be shared between threads without any further lock.
When creating a worker for a background operation, you pass in the shared_ptr<Data const>. If the worker needs to modify that data, it first has to copy it, but usually even that can be avoided.
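Here is a minimal C++ sketch of that "mutable while exclusively owned, immutable once shared" idea. Data, build_update(), and the worker entry point are illustrative placeholders, not code from the project described above.

#include <memory>
#include <string>
#include <vector>

struct Data {
    std::vector<std::string> rows;
};

// Worker thread A: accumulate the update with exclusive, mutable ownership...
std::shared_ptr<const Data> build_update() {
    std::unique_ptr<Data> data(new Data);
    data->rows.push_back("fresh value");      // freely mutable here, nobody else can see it yet
    // ...then freeze it: from now on the object is only reachable as const.
    return std::shared_ptr<const Data>(std::move(data));
}

// GUI thread / background workers: share the frozen snapshot without any lock.
void start_background_work(std::shared_ptr<const Data> snapshot) {
    // snapshot->rows.push_back("x");         // would not compile: the shared data is const
    // If a worker really does need to modify it, it copies first:
    Data local_copy = *snapshot;
    local_copy.rows.push_back("derived value");
}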
Notes:
The basic idea is that you have either shared and immutable data or exclusive-owned and mutable data.
The data received from thread A is stored in the UI here, but conceptually it is part of the model in an MVC design. There, you only keep a reference to the last update; the earlier ones can be discarded. A worker thread still using older data won't notice, because that data is refcounted using shared_ptr.
At some point, I would consider aborting the background workers. Computing anything based on old data is not necessary, so it could be worthwhile to not waste time on it but to restart based on recent data.
I'm assuming that the channels between the threads (message queues) are synchronized. If they are already synchronized, that is all that you need.
If you're using C++98, you will need auto_ptr instead of unique_ptr and Boost's shared_ptr.
We want to be sure we're using the correct synchronization (and no more than necessary) when writing threadsafe code in JRuby; specifically, in a Puma-instantiated Rails app.
UPDATE: Extensively re-edited this question to be very clear and to use the latest code we are implementing. This code uses the atomic gem written by @headius (Charles Nutter) for JRuby, but we're not sure it is totally necessary, or in which ways it's necessary, for what we're trying to do here.
Here's what we've got: is this overkill (meaning, are we over-/uber-engineering this), or perhaps incorrect?
ourgem.rb:
require 'atomic' # gem from @headius
SUPPORTED_SERVICES = %w(serviceABC anotherSvc andSoOnSvc).freeze
module Foo

  def self.included(cls)
    cls.extend(ClassMethods)
    cls.send :__setup
  end

  module ClassMethods
    def get(service_name, method_name, *args)
      __cached_client(service_name).send(method_name.to_sym, *args)
      # we also capture exceptions here, but leaving those out for brevity
    end

    private

    def __client(service_name)
      # obtain and return a client handle for the given service_name
      # we definitely want to cache the value returned from this method
      # **AND**
      # it is a requirement that this method ONLY be called *once PER service_name*.
    end

    def __cached_client(service_name)
      @@_clients.value[service_name]
    end

    def __setup
      @@_clients = Atomic.new({})
      @@_clients.update do |current_services|
        SUPPORTED_SERVICES.inject(Atomic.new({}).value) do |memo, service_name|
          if current_services[service_name]
            current_services[service_name]
          else
            memo.merge({service_name => __client(service_name)})
          end
        end
      end
    end
  end
end
client.rb:
require 'ourgem'
class GetStuffFromServiceABC
  include Foo

  def self.get_some_stuff
    result = get('serviceABC', 'method_bar', 'arg1', 'arg2', 'arg3')
    puts result
  end
end
Summary of the above: we have @@_clients (a mutable class variable holding a Hash of clients), which we only want to populate ONCE for all available services, keyed on service_name.
Since the hash is in a class variable (and hence threadsafe?), are we guaranteed that the call to __client will not get run more than once per service name (even if Puma is instantiating multiple threads with this class to service all the requests from different users)? If the class variable is threadsafe (in that way), then perhaps the Atomic.new({}) is unnecessary?
Also, should we be using an Atomic.new(ThreadSafe::Hash) instead? Or again, is that not necessary?
If not (meaning: you think we do need at least the Atomic.new calls, and perhaps also the ThreadSafe::Hash), then why couldn't a second (or third, etc.) thread interrupt between the Atomic.new(nil) and the @@_clients.update do ..., meaning the Atomic.new calls from EACH thread would EACH create separate objects?
Thanks for any thread-safety advice; we don't see any questions on SO that directly address this issue.
Just a friendly piece of advice, before I attempt to tackle the issues you raise here:
This question, and the accompanying code, strongly suggests that you don't (yet) have a solid grasp of the issues involved in writing multi-threaded code. I encourage you to think twice before deciding to write a multi-threaded app for production use. Why do you actually want to use Puma? Is it for performance? Will your app handle many long-running, I/O-bound requests (like uploading/downloading large files) at the same time? Or (like many apps) will it primarily handle short, CPU-bound requests?
If the answer is "short/CPU-bound", then you have little to gain from using Puma. Multiple single-threaded server processes would be better. Memory consumption will be higher, but you will keep your sanity. Writing correct multi-threaded code is devilishly hard, and even experts make mistakes. If your business success, job security, etc. depends on that multi-threaded code working and working right, you are going to cause yourself a lot of unnecessary pain and mental anguish.
That aside, let me try to unravel some of the issues raised in your question. There is so much to say that it's hard to know where to start. You may want to pour yourself a cold or hot beverage of your choice before sitting down to read this treatise:
When you talk about writing "thread-safe" code, you need to be clear about what you mean. In most cases, "thread-safe" code means code which doesn't concurrently modify mutable data in a way which could cause data corruption. (What a mouthful!) That could mean that the code doesn't allow concurrent modification of mutable data at all (using locks), or that it does allow concurrent modification, but makes sure that it doesn't corrupt data (probably using atomic operations and a touch of black magic).
Note that when your threads are only reading data, not modifying it, or when working with shared stateless objects, there is no question of "thread safety".
Another definition of "thread-safe", which probably applies better to your situation, has to do with operations which affect the outside world (basically I/O). You may want some operations to only happen once, or to happen in a specific order. If the code which performs those operations runs on multiple threads, they could happen more times than desired, or in a different order than desired, unless you do something to prevent that.
It appears that your __setup method is only called when ourgem.rb is first loaded. As far as I know, even if multiple threads require the same file at the same time, MRI will only ever let a single thread load the file. I don't know whether JRuby is the same. But in any case, if your source files are being loaded more than once, that is symptomatic of a deeper problem. They should only be loaded once, on a single thread. If your app handles requests on multiple threads, those threads should be started up after the application has loaded, not before. This is the only sane way to do things.
Assuming that everything is sane, ourgem.rb will be loaded using a single thread. That means __setup will only ever be called by a single thread. In that case, there is no question of thread safety at all to worry about (as far as initialization of your "client cache" goes).
Even if __setup was to be called concurrently by multiple threads, your atomic code won't do what you think it does. First of all, you use Atomic.new({}).value. This wraps a Hash in an atomic reference, then unwraps it so you just get back the Hash. It's a no-op. You could just write {} instead.
Second, your Atomic#update call will not prevent the initialization code from running more than once. To understand this, you need to know what Atomic actually does.
Let me pull out the old, tired "increment a shared counter" example. Imagine the following code is running on 2 threads:
i += 1
We all know what can go wrong here. You may end up with the following sequence of events:
Thread A reads i and increments it.
Thread B reads i and increments it.
Thread A writes its incremented value back to i.
Thread B writes its incremented value back to i.
So we lose an update, right? But what if we store the counter value in an atomic reference, and use Atomic#update? Then it would be like this:
Thread A reads i and increments it.
Thread B reads i and increments it.
Thread A tries to write its incremented value back to i, and succeeds.
Thread B tries to write its incremented value back to i, and fails, because the value has already changed.
Thread B reads i again and increments it.
Thread B tries to write its incremented value back to i again, and succeeds this time.
Do you get the idea? Atomic never stops 2 threads from running the same code at the same time. What it does do, is force some threads to retry the #update block when necessary, to avoid lost updates.
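For illustration only, here is the same retry idea expressed as a compare-and-swap loop in C++ with std::atomic. This is not the Ruby Atomic gem itself, just the pattern that an #update-style operation is built on.

#include <atomic>

std::atomic<int> counter{0};

void increment() {
    int current = counter.load();
    // Keep retrying until our write was based on the value actually stored at that moment.
    while (!counter.compare_exchange_weak(current, current + 1)) {
        // On failure, 'current' has been reloaded with the latest value,
        // so the "update block" (current + 1) effectively runs again.
    }
}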
If your goal is to ensure that your initialization code will only ever run once, using Atomic is a very inappropriate choice. If anything, it could make it run more times, rather than less (due to retries).
So, that is that. But if you're still with me here, I am actually more concerned about whether your "client" objects are themselves thread-safe. Do they have any mutable state? Since you are caching them, it seems that initializing them must be slow. Be that as it may, if you use locks to make them thread-safe, you may not be gaining anything from caching and sharing them between threads. Your "multi-threaded" server may be reduced to what is effectively an unnecessarily complicated, single-threaded server.
If the client objects have no mutable state, good for you. You can be "free and easy" and share them between threads with no problems. If they do have mutable state, but initializing them is slow, then I would recommend caching one object per thread, so they are never shared. Thread[] is your friend there.
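For what it's worth, here is the per-thread caching idea sketched in C++ (not Ruby) purely as an illustration; the Client type and service name are invented.

#include <string>

struct Client {
    explicit Client(const std::string& service) : service_name(service) {}
    std::string service_name;               // stand-in for a real, possibly stateful client handle
};

Client& client_for_current_thread() {
    // thread_local: constructed once per thread on first use, never shared across threads,
    // so a mutable client needs no locking.
    thread_local Client client("serviceABC");
    return client;
}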
In our scenario,
the consumer takes at least half a second to complete a cycle of processing (against a row in a data table).
the producer produces at least 8 items per second (no worries, we don't mind how long consuming takes).
the shared data is simply a data table.
we should never ask the producer to wait (as it is a server and we don't want it to wait on this).
How can we achieve the above without locking the data table at all (as we don't want the producer to wait in any way)?
We cannot use .NET 4.0 yet in our org.
There is a great example of a producer/consumer queue using Monitors at this page under the "Producer/Consumer Queue" section. In order to synchronize access to the underlying data table, you can have a single consumer.
That page is probably the best resource for threading in .NET on the net.
Create a buffer that holds the data while it is being processed.
It takes you half a second to process each item and you get 8 items a second, so unless you have at least 4 processors working on it, you'll have a problem.
Just to be safe, I'd use a buffer at least twice the size needed (16 rows), and make sure that's feasible with the hardware.
There is no magic bullet that is going to let you access a DataTable from multiple threads without using a blocking synchronization mechanism. What I would do is to hold the lock for as short a duration as possible. Keep in mind that modifying any object in the data table's hierarchy will require locking the whole data table. This is because modifying a column value on a DataRow can change the internal indexing structures inside the parent DataTable.
So what I would do is: in the producer, acquire the lock, add a new row, and release the lock. Then in the consumer, acquire the same lock, copy the data contained in a DataRow into a separate data structure, and release the lock immediately. Now you can operate on the copied data without any synchronization mechanism, since it is isolated. After you have completed the operation on it, you again acquire the lock, merge the changes back into the DataRow, then release the lock and start the process all over again.
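Here is an illustrative C++ sketch of that lock-briefly / copy-out / merge-back pattern, using a plain std::vector of rows in place of a DataTable. All names are invented and the .NET-specific details (DataRow indexing, constraints) are not modeled.

#include <cstddef>
#include <mutex>
#include <string>
#include <utility>
#include <vector>

struct Row { int id = 0; std::string payload; bool processed = false; };

std::mutex       table_mutex;
std::vector<Row> shared_table;               // stand-in for the shared DataTable

// Producer: acquire, append, release. The lock is held only for the insertion.
void produce(Row row) {
    std::lock_guard<std::mutex> guard(table_mutex);
    shared_table.push_back(std::move(row));
}

// Consumer: copy a row out under the lock, work on the copy without the lock,
// then merge the result back under the lock again.
void consume(std::size_t index) {
    Row local;
    {
        std::lock_guard<std::mutex> guard(table_mutex);
        local = shared_table[index];
    }
    local.processed = true;                  // the slow half-second of work happens here, lock-free
    {
        std::lock_guard<std::mutex> guard(table_mutex);
        shared_table[index] = local;
    }
}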
I have 2 questions:
Q1) Can I implement an asynchronous timer in a single-threaded application? I.e., I want functionality like this:
....
Timer mytimer(5,timeOutHandler)
.... //this thread is doing some other task
...
and after 5 seconds, the timeOutHandler function is invoked.
As far as I can tell, this cannot be done in a single-threaded application (correct me if I am wrong). I don't know whether it can be done using select as the demultiplexer, but even if select could be used, wouldn't the event loop itself require a thread?
I also want to know whether I can implement a timer (not just a timeout) using select.
select only waits on a set of file descriptors, but I want to keep a list of timers in ascending order of their expiry times and have select tell me when the first timer expires, and so on. So the question boils down to: can an asynchronous timer be implemented using select/poll or some other event demultiplexer?
Q2) Now let's come to my second question. This is my main question.
Right now I am using a dedicated thread for checking timeouts, i.e., I have a min-heap of timers (expiry times) and this thread sleeps until the first timer expires and then invokes the callback.
The code looks something like this:
1. Lock the mutex.
2. Check the expiry time of the first timer.
3. Do a condition timed wait until that time (waking up if some other thread inserts a timer with an earlier expiry time than the first timer). The condition wait unlocks the lock.
4. When the condition wait ends we hold the lock again, so unlock it, remove the timer from the heap, and invoke the callback function.
5. Go to 1.
I want to know the time complexity of such an asynchronous timer. From what I see:
Insertion is O(lg n).
Expiry is O(lg n).
Cancellation is what makes me dizzy: I have a min-heap of timers ordered by their expiry times, and when I insert a timer I get back a unique ID. So when I need to cancel a timer, I have to provide that ID, and searching for it in the heap would take O(n) in the worst case.
Am I wrong?
Can cancellation be done in O(lg n)?
Please do take multithreading issues into account. I will elaborate on what I mean by the previous sentence once I get some responses.
It's definitely possible (and usually preferable) to implement timers using a single thread, if we can assume that the thread will be spending most of its time blocking in select().
You could look into using signal() and SIGALRM to implement the functionality under POSIX, but I'd recommend against it: Unix signals are ugly hacks, and when the signal callback function runs there is very little you can do safely inside it, since it runs asynchronously with respect to your app's thread.
Your idea about using select()'s timeout to implement your timer functionality is a good one -- that is a very common technique and it works well. Basically you keep a list of pending/upcoming events that is sorted by timestamp, and just before you call select() you subtract the current time from the first timestamp in the list, and pass in that time-delta as the timeout value to select(). (note: if the time-delta is negative, pass in zero as the timeout value!) When select() returns, you compare the current time with the time of the first item in the list; if the current time is greater than or equal to the event time, handle the timer-event, pop the first item off the head of the list, and repeat.
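As a rough illustration, a single-threaded select()-driven timer loop along those lines might look like this (POSIX; the Event type and callback handling are placeholders, and real code would also add its file descriptors to the fd_set and check select()'s return value):

#include <sys/select.h>
#include <sys/time.h>
#include <ctime>
#include <map>

struct Event { void (*callback)(); };
std::multimap<time_t, Event> pending;        // events sorted by absolute expiry time

void event_loop() {
    for (;;) {
        timeval  timeout     = {0, 0};
        timeval* timeout_ptr = nullptr;       // NULL means block indefinitely (no timers pending)
        if (!pending.empty()) {
            time_t delta = pending.begin()->first - time(nullptr);
            timeout.tv_sec = delta > 0 ? delta : 0;   // never pass a negative timeout
            timeout_ptr = &timeout;
        }
        fd_set read_fds;
        FD_ZERO(&read_fds);                   // real code would FD_SET its descriptors here
        select(0, &read_fds, nullptr, nullptr, timeout_ptr);

        // Fire every timer whose expiry time has been reached.
        time_t now = time(nullptr);
        while (!pending.empty() && pending.begin()->first <= now) {
            Event ev = pending.begin()->second;
            pending.erase(pending.begin());
            ev.callback();
        }
    }
}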
As for efficiency, your big-O times will depend entirely on the data structure you use to store your ordered list of timers. If you use a priority queue (or a similar ordered tree type structure) you can have O(log N) times for all of your operations. You can even go further and store the events-list in both a hash table (keyed on the event IDs) and a linked list (sorted by time stamp), and that can give you O(1) times for all operations. O(log N) is probably sufficiently efficient though, unless you plan to have a really large number of events pending at once.
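For the cancellation question specifically, here is one sketch of the two-container idea: a multimap ordered by expiry time plus a hash map from timer ID to its position, so insert and cancel cost at most O(log N) and no linear search is ever needed. The ID scheme and names are invented for illustration.

#include <cstdint>
#include <ctime>
#include <map>
#include <unordered_map>

class TimerSet {
public:
    uint64_t add(time_t expiry) {
        uint64_t id = next_id_++;
        auto it = by_time_.emplace(expiry, id);      // O(log N)
        by_id_[id] = it;                             // O(1) on average
        return id;
    }
    void cancel(uint64_t id) {
        auto found = by_id_.find(id);                // O(1) on average
        if (found != by_id_.end()) {
            by_time_.erase(found->second);           // erases just that entry, no linear search
            by_id_.erase(found);
        }
    }
private:
    uint64_t next_id_ = 1;
    std::multimap<time_t, uint64_t> by_time_;
    std::unordered_map<uint64_t, std::multimap<time_t, uint64_t>::iterator> by_id_;
};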
man pthread_cond_timedwait
man pthread_cond_signal
If you are writing a Windows app, you can arrange for a WM_TIMER message to be sent to you at some point in the future, which will work even if your app is single-threaded. However, the accuracy of the timing will not be great.
If your app runs in a constant loop (like a game, rendering at 60Hz), you can simply check each time around the loop to see if triggered events need to be called.
If you want your app to basically be interrupted, your function to be called, then execution to return to where it was, then you may be out of luck.
If you're using C#, System.Timers.Timer will do what you want. You specify an event handler method that the timer calls when it expires, which can be in the class that you invoke the timer from. Note that when the timer calls the event handler, it will do it on a separate thread, which you need to take into account if you're updating the user interface, or use its SynchronizingObject property to run it on the UI thread.