I've been searching for information on a common kernel implementation of queues, that is, first-in-first-out data structures. I thought there might be one, since queues are likely to be in common use and there is already a standard for linked lists (in the form of the list_head structure). Is there some standard queue implementation I can't find, or is it common practice to just use linked lists as queues and hope for the best?
Are you looking for include/linux/kfifo.h?
From the header comment in that file:
A simple kernel FIFO implementation.
It's rather new, though, so it's not hard to find code that still uses linked lists directly. Also, kfifo has quite a different implementation (a circular buffer rather than a linked list), so the two have different applications.
Note that kfifos are designed with multithreaded usage in mind (think of producer/consumer queues), but you can also use them without locking via __kfifo_put/__kfifo_get.
By the way, I remember learning about them on lwn.net - bookmark lwn.net/Kernel/Index and read the entries about kfifo :-).
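For illustration, here is a minimal sketch of the kfifo macro API as it looks in more recent kernels; the exact macros have changed over time (older kernels used the __kfifo_put/__kfifo_get functions mentioned above), so treat this as an illustration rather than a drop-in snippet:

#include <linux/kernel.h>
#include <linux/kfifo.h>

/* A statically allocated fifo of 16 ints (the size must be a power of two). */
static DEFINE_KFIFO(my_fifo, int, 16);

static void fifo_demo(void)
{
    int val;

    /* Enqueue one element; kfifo_put() returns 0 if the fifo is full. */
    kfifo_put(&my_fifo, 42);

    /* Dequeue one element; kfifo_get() returns 0 if the fifo is empty. */
    if (kfifo_get(&my_fifo, &val))
        pr_info("got %d\n", val);
}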
From your ex-kernel developer,
Blaisorblade
You're right, the Linux kernel typically uses linked lists to implement queues. This makes sense, because a linked list offers exactly the behavior a queue needs: cheap insertion at one end and removal at the other. See this example from kernel/workqueue.c:
INIT_LIST_HEAD(&wq->list);
// ...
case CPU_UP_CANCELED:
    list_for_each_entry(wq, &workqueues, list) {
        if (!per_cpu_ptr(wq->cpu_wq, hotcpu)->thread)
You seem to be confusing an abstraction (a FIFO queue) with an implementation (a linked list).
They are not mutually exclusive - in fact, queues are most commonly implemented as linked lists, so there is no "hoping for the best".
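A linked list makes a perfectly good FIFO as long as you always insert at the tail and remove from the head. Here is a minimal sketch using list_head (the struct and function names are made up for illustration, not taken from any kernel subsystem):

#include <linux/list.h>

struct item {
    int value;
    struct list_head node;    /* linkage into the queue */
};

static LIST_HEAD(queue);

/* Enqueue: insert at the tail. */
static void enqueue(struct item *it)
{
    list_add_tail(&it->node, &queue);
}

/* Dequeue: remove and return the oldest element, or NULL if the queue is empty. */
static struct item *dequeue(void)
{
    struct item *it;

    if (list_empty(&queue))
        return NULL;

    it = list_first_entry(&queue, struct item, node);
    list_del(&it->node);
    return it;
}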
The topic of multi-threaded access to Lisp objects came up in another post at https://stackoverflow.com/posts/comments/97440894?noredirect=1, but only as a side issue, and I am hoping for further clarification here.
In general, Lisp functions (and special forms, macros, etc) seem to naturally divide into accessors and modifiers of objects. Modifiers of shared objects are clearly problematic in multi-threaded applications, since updates occurring at the same time can interfere with each other (requiring protective locks, atomic operations, etc).
But the question of potential accessor interference seems less clear. Of course, any accessor could be written to include latent modifying code, but I would like to think that the basic Lisp accessor operations (as specified in CLHS and implemented for the various platforms) do not. However, I suspect there could be a very few exceptions for reasons of efficiency—exceptions that would be good to be aware of if otherwise used in multi-threaded code without protection. (The kind of exceptions I’m talking about are not operations like maphash which can be used as both an accessor and modifier.)
It would be helpful if anyone with implementation experience could point to at least one built-in access-only operation (say in SBCL or other source) that includes potentially troublesome modification. I know guarantees are hard to come by, but heuristic guidance is useful too.
Any code that does that would be a bug in an implementation that supports multithreading. SBCL protects functions that are not thread-safe with the famous *world-lock*.
If you have a real reason to want an immutable structure, use defconstant with a read-only defstruct. (The struct is named boxed-number below, since defining a struct called number would clash with the standard cl:number type; note also that the slot needs an initform before the :read-only option, and that the default constructor takes keyword arguments.)
(defstruct boxed-number (value nil :read-only t))
(defconstant +five+ (make-boxed-number :value 5))
I am looking for a Rust equivalent of concurrent_queue from Intel's TBB library. I have found some crates:
multiqueue
two-lock-queue
crossbeam-deque
and even
futures-pool
thread-pool
I feel like they are all doing similar things; however, their docs suggest that they use different algorithms in the implementation.
Though I don't know a lot about programming in C++, I am pretty sure that TBB's concurrent_queue is a very fast MPMC queue implementation. You cannot get close to that performance by simply wrapping a queue container in a Mutex (a friend of mine has tested this).
Since efficiency (both latency and throughput) is the main thing I care about, what should I use in Rust? The queue could be either bounded or unbounded, and I probably need Acquire-Release ordering.
I think the crossbeam::sync::MsQueue and the crossbeam::sync::SegQueue from the crossbeam crate have the same capabilities as the concurrent_queue you linked.
They are lock-free queues that can be used in a non-blocking way with push and try_pop.
This benchmark indicates that SegQueue is faster than MsQueue, but that may still depend on your use case.
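For illustration, here is a minimal sketch of the SegQueue usage described above. It assumes the crossbeam releases where the queues lived in crossbeam::sync and the non-blocking pop was called try_pop; later releases moved them to crossbeam::queue and renamed try_pop to pop, so check the version you depend on:

extern crate crossbeam;

use crossbeam::sync::SegQueue;
use std::sync::Arc;
use std::thread;

fn main() {
    // The queue is unbounded; push never blocks and never fails.
    let queue = Arc::new(SegQueue::new());

    let producer = {
        let queue = Arc::clone(&queue);
        thread::spawn(move || {
            for i in 0..100 {
                queue.push(i);
            }
        })
    };
    producer.join().unwrap();

    // try_pop returns None when the queue is currently empty.
    while let Some(value) = queue.try_pop() {
        println!("{}", value);
    }
}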
I am new to concurrent programming. As I was going through the topics I got confused among synchronization, thread-safe collections, atomic wrapper classes, and locks.
Locks and synchronization do the same job of making a piece of code thread-safe. Why do we need thread-safe collections or atomic wrapper classes then, given that locking allows only a single thread to access the code at a time and so keeps collections and primitive types from becoming thread-unsafe?
This is a very broad question you're asking. The problem is that not all of these things have a single, strict definition. For example, thread-safe collections might use various forms of synchronization (e.g. locks or atomic operations) to achieve thread safety. And not even the term "thread-safe" itself is well defined!
However, there is one thing you definitely got wrong: synchronization is the goal, while locks, mutexes, atomics etc. are means to achieve it. Synchronization means that different threads coordinate their access to shared resources so that they don't interact badly with each other. By the way, I'm talking about threads here, but the entities involved could also be processes or even different computers; let's keep it simple for now.
Now, you ask why "thread safe collections or atomic wrapper classes" are needed at all. The answer is pretty simple: these things provide different interfaces or services at a higher level. For example, when I have a FIFO connecting two threads, it doesn't matter to them how access to the underlying queue is synchronized; if the interface for the two threads is implemented properly, you get certain guarantees. Doing this manually with just locks is possible but complicated, so providing such high-level building blocks in addition to the low-level primitives makes software development easier and the results more reliable.
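To make that concrete, here is a small sketch assuming Java (a guess, based on your mention of "atomic wrapper classes"): a BlockingQueue from java.util.concurrent gives two threads a ready-made, thread-safe FIFO, so neither thread has to manage locks itself.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class FifoDemo {
    public static void main(String[] args) throws InterruptedException {
        // Bounded, thread-safe FIFO shared by the two threads below.
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(16);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 10; i++) {
                    queue.put(i);                      // blocks if the queue is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 10; i++) {
                    System.out.println(queue.take());  // blocks if the queue is empty
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}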
Lastly, one piece of advice for further questions: as mentioned initially, not all terms have a universal meaning associated with them, so it would help if you provided additional information, in particular the programming language you intend to use.
Because you need to be careful when using synchronization yourself. If you overuse it, you may run into performance issues. Using thread-safe collections when possible is usually better for performance, and it also makes it less likely that you introduce errors or deadlocks.
I've read a lot of articles about distributed Haskell. Much work has been done, but it seems to focus on distributing computations. I saw the remote package, which seems to implement Erlang-style message passing, but it is at version 0.1 and at an early stage.
I'd like to implement a system where there are many separate processes that provide distinct services, and are tied together by several main processes. This seems to be a natural fit for Erlang, but not so for Haskell. But I like Haskell's type safety.
Has there been any recent adoption of Erlang-style process management in Haskell?
If you want to learn more about the remote package, a.k.a. Cloud Haskell, see the paper as well as Jeff Epstein's thesis. It aims to provide precisely the actor abstraction you want, but as you say it is in the early stages. There is active discussion regarding improvements on the parallel-haskell mailing list, so if you have specific needs that remote doesn't provide, we'd be happy for you to jump in and help us decide its future directions.
More mature but lower-level than remote is the haskell-mpi package. If you stick to the Simple interface, messages can be sent containing arbitrary Serialize instances, but the abstraction is still way lower than remote.
There are some experimental systems, such as the one described in Implementing a High-level Distributed-Memory Parallel Haskell in Haskell (Patrick Maier and Phil Trinder, IFL 2011; I can't find a PDF online). It blends the monad-par approach of deterministic dataflow parallelism with a limited ability to make the I-structures serializable over the network. These sorts of abstractions show promise for distributed computation, but since the focus is on computing purely functional values rather than providing Erlang-style processes, they probably wouldn't be a good fit for your application.
Also, for completeness, I should point out the Haskell wiki page on cloud and HPC Haskell, which covers what I describe here, as well as the subsection on distributed Haskell, which seems in need of a refresh.
I frequently get the feeling that IPC and actors are oversold features. There are plenty of attractive messaging systems out there that have Haskell bindings, e.g. MessagePack, 0MQ or Thrift. IMHO the only things you have to add are proper addressing of processes and a decision about who/what manages this addressing capability.
By the way, a number of coders adopt e.g. 0MQ into their Erlang environments, simply because it offers the possibility of structuring messaging via message brokers rather than relying on pure process-to-process messaging at very large scale.
In a "massively multicore world" I personally assume that shared-memory approaches will eventually outperform messaging. One can always argue for asynchrony, of course. But when you write that you want to "tie together" your processes with "several main processes", you are in fact already talking about synchronization. You can also question whether a single function, process or thread is the right level of parallelization.
In short: I would probably see whether MessagePack or 0MQ could fit my needs in Haskell and handle the rest in my own code.
Does anyone know of a good resource for implementations (meaning source code) of the usual lock-free data types? I'm thinking of lists, queues and so on.
Locking implementations are extremely easy to find, but I can't find examples of lock-free algorithms, or of how exactly CAS works and how to use it to implement those structures.
Check out Julian M Bucknall's blog. He describes (in detail) lock-free implementations of queues, lists, stacks, etc.
http://www.boyet.com/Articles/LockfreeQueue.html
http://www.boyet.com/Articles/LockfreeStack.html
http://www.liblfds.org
Written in C.
If C++ is okay with you, take a look at boost::lockfree. It has lock-free Queue, Stack, and Ringbuffer implementations.
In the boost::lockfree::detail namespace you'll find a lock-free freelist and a tagged-pointer (ABA prevention) implementation. You will also see examples of explicit memory ordering via boost::atomic (an in-development version of C++0x std::atomic).
Neither boost::lockfree nor boost::atomic is part of Boost yet, but both have received attention on the Boost development mailing list and are scheduled for review.
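To give a flavour of what such libraries do under the hood, here is a minimal sketch of the classic CAS retry loop in a Treiber stack, written against std::atomic (which boost::atomic mirrors). It deliberately ignores the ABA problem and memory reclamation; solving those is exactly what the tagged-pointer and freelist machinery mentioned above is for.

#include <atomic>
#include <cstdio>

struct Node {
    int value;
    Node* next;
};

std::atomic<Node*> head{nullptr};

void push(Node* n) {
    n->next = head.load(std::memory_order_relaxed);
    // Retry until head is swung from the value we last saw to our new node.
    // On failure, compare_exchange_weak reloads the current head into n->next.
    while (!head.compare_exchange_weak(n->next, n,
                                       std::memory_order_release,
                                       std::memory_order_relaxed)) {
    }
}

Node* pop() {
    Node* old = head.load(std::memory_order_acquire);
    // Loop until we detach the top node or observe an empty stack.
    while (old &&
           !head.compare_exchange_weak(old, old->next,
                                       std::memory_order_acquire,
                                       std::memory_order_relaxed)) {
    }
    return old;
}

int main() {
    static Node a{1, nullptr}, b{2, nullptr};
    push(&a);
    push(&b);
    while (Node* n = pop())
        std::printf("%d\n", n->value);   // prints 2 then 1 (LIFO order)
}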