I have two threads in my code. One thread is a generator which creates messages. A timestamp is generated before a message is transmitted. The other thread is a receiver which accepts replies from multiple clients. A timestamp is created for each reply. The two threads run at the same time.
I find that the timestamp generated by the receiver is earlier than the timestamp generated by the generator. The correct order should be that the receiver's timestamp is later than the generator's.
If I give a high priority to the generator thread, this problem does not occur. But this also slows down performance.
Is there another way to guarantee the correct order with less effect on performance? Thanks.
Based on the comment thread in the question, this is likely the effect of the optimizer. This is really a problem with the design more than anything else - it assumes that the clocks between the producer and consumer are shared or tightly synchronized. This assumption seems reasonable until you need to distribute the processing between more than one computer.
Clocks are rarely (if ever) tightly synchronized between different computers. The common mechanism for synchronizing clocks between computers is the Network Time Protocol (NTP). You can get very close to millisecond-level synchronization on a local area network, but even that is difficult.
There are two solutions to this problem that come to mind. The first is to have the producer's timestamp passed through the client and into the receiver. If the receiver receives a timestamp that is earlier than its notion of the current time, it simply resets the timestamp to the current time. This kind of normalization allows the assumption that time is a monotonically increasing sequence to continue to hold.
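A minimal sketch of that normalization in Java (the method name and the millisecond units are illustrative, not from the question):

// Receiver-side normalization: a reply's timestamp is never allowed to be
// earlier than the producer timestamp that travelled with the message.
long replyTimestamp(long producerTimestampMillis) {
    long now = System.currentTimeMillis();
    // Producer stamps earlier than "now" are reset to "now"; later ones
    // (clock skew) are kept, so the reply is always stamped "after" the message.
    return Math.max(producerTimestampMillis, now);
}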
The other solution is to disable optimization and hope that the problem goes away. As you might expect, your mileage may vary considerably with this solution.
Depending on the problem that you are trying to solve you may be able to provide your own synchronized clock between the different threads. Use an atomically incrementing number instead of the wall time. java.util.concurrent.atomic.AtomicInteger or one of its relatives can be used to provide a single number that is incremented every time that a message is generated. This allows the producer and receiver to have a shared value to use as a clock of sorts.
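For example (class and method names are made up; only the AtomicInteger usage matters):

import java.util.concurrent.atomic.AtomicInteger;

// A shared, atomically incremented "clock": the generator ticks it for every
// message it creates, and the receiver reads it when a reply arrives, so both
// sides agree on an ordering without touching the wall clock.
final class MessageClock {
    private final AtomicInteger counter = new AtomicInteger();

    // Called by the generator thread just before a message is transmitted.
    int nextMessageStamp() {
        return counter.incrementAndGet();
    }

    // Called by the receiver thread when a reply arrives.
    int currentStamp() {
        return counter.get();
    }
}

Because the generator increments the counter before the message is transmitted, any reply handled afterwards by the receiver will observe a value at least as large as the message's stamp.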
In any case, clocks are really hard to use correctly, especially for synchronization purposes. If you can find some way to remove assumptions about time from distributed systems, your architectures and solutions will be more resilient and more deterministic.
I read the documentation of std::time and discovered that there are 3 types of time available:
Duration
Instant
SystemTime
I need the implementation that is closest to an RTC, with almost no clock drift. I need to use the times across machines in a distributed system.
I thought of going with Instant, but then I also read this:
Note, however, that instants are not guaranteed to be steady. In other words, each tick of the underlying clock may not be the same length (e.g. some seconds may be longer than others).
Which one is the best fit for a distributed environment?
There are essentially two approaches for distributed communication of time points:
Use PTP (the Precision Time Protocol) to synchronize time across your various machines and assume that the precision is sufficient for your application; you should be able to reach microsecond precision, which is generally good enough.
Do not use a global time, instead:
Mark each different machine as a different time-source (or even, each different core),
Tag each event with its time-source and its instant,
Never attempt to compare instants from different time-sources.
In either case, Instant is better (monotonic) than SystemTime. SystemTime is weird since the time can be rewound, which is generally unexpected.
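Purely to illustrate the second approach (this sketch is in Java, since the idea is language-agnostic; in Rust you would pair a source identifier with an Instant in exactly the same way, and every name below is made up):

import java.util.Optional;

// Each machine (or core) is its own time-source; an event carries the source id
// plus a reading from that source's monotonic clock, and readings are only ever
// compared when the source ids match.
record TaggedEvent(String sourceId, long monotonicNanos) {

    static TaggedEvent now(String sourceId) {
        return new TaggedEvent(sourceId, System.nanoTime()); // monotonic, like Instant
    }

    // Empty result means "different time-sources: these events cannot be compared".
    Optional<Integer> tryCompare(TaggedEvent other) {
        if (!sourceId.equals(other.sourceId)) {
            return Optional.empty();
        }
        return Optional.of(Long.compare(monotonicNanos, other.monotonicNanos));
    }
}

The useful property is that a comparison across time-sources cannot be expressed by accident.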
Of the three, Instant seems like it would give you the closest approximation of an RTC, at least within the context of the system.
In a distributed sense, you'd probably want to have only one instance on one machine and query it from the others to get relative time; that way you are guaranteed at least consistency in event times.
After a cursory glance, I was unable to find an out-of-the-box solution if you want accurate time.
EDIT: This is a duplicate of Does Akka Tcp support full-duplex communication? (Please don't ask the same question multiple times; the same goes for duplicating it on mailing lists. This wastes the time of those who volunteer their help and reduces your chances of getting answers in the future.)
I've modified the Echo server from https://github.com/akka/akka/blob/master/akka-docs/rst/scala/code/docs/io/EchoServer.scala#L96
case Received(data) =>
  connection ! Write(data, Ack(currentOffset))
  log.debug("same {}", sender.eq(connection)) // prints true: reads and writes go through the same actor
  buffer(data)
That means incoming and outgoing messages are handled by the same actor, so a single worker thread (the one that takes messages from the mailbox) will process both read and write operations. This looks like a potential bottleneck.
In the "classical" world I can create one thread to read from a socket and another for writing, and get simultaneous communication.
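For reference, the "classical" approach I mean looks roughly like this in plain Java (a sketch only; the host, port, and payload are placeholders):

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// One thread reads from the socket while another writes to it.
public class FullDuplexClient {
    public static void main(String[] args) throws Exception {
        Socket socket = new Socket("localhost", 9000);
        InputStream in = socket.getInputStream();
        OutputStream out = socket.getOutputStream();

        Thread reader = new Thread(() -> {
            byte[] buf = new byte[4096];
            try {
                int n;
                while ((n = in.read(buf)) != -1) {
                    System.out.println("received " + n + " bytes");
                }
            } catch (IOException ignored) {
            }
        });

        Thread writer = new Thread(() -> {
            try {
                for (int i = 0; i < 10; i++) {
                    out.write(("message " + i + "\n").getBytes(StandardCharsets.UTF_8));
                    out.flush();
                }
            } catch (IOException ignored) {
            }
        });

        reader.start(); // reads and writes proceed independently: full duplex
        writer.start();
        writer.join();
        reader.join();  // returns once the server closes its side of the connection
    }
}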
Update
Discussion in the akka-dev Google group: https://groups.google.com/forum/#!topic/akka-dev/mcs5eLKiAVQ
While there is a single Actor that either reads or writes at any given point in time, each of these operations takes very few cycles since it only occurs when there are data to be read or buffer space available to be written to. The system call overhead of ~1µs means that with the default buffer sizes of 128 KiB you should be able to transfer up to 100 GiB/s in total, which is a bottleneck in principle, but probably not one you will hit in practice today (this roughly coincides with typical CPU memory bandwidth, so a higher data rate is currently impossible anyway). Once this changes we can split the reading and writing responsibilities between different selectors and wake up different Actors, but before doing that we'll need to verify that there actually is a measurable effect.
The other question that needs answering is which operating system kernels actually allow concurrent operations on a single socket from multiple threads. I have not researched this yet, but I would not be surprised to find that fully independent locking will be hard to do and there might not (yet) be a reason to expend that effort.
I'd like to learn how to deal with the possibility of using multiple CPU cores in audio rendering of a single input parameter array in OSX.
In AudioToolbox, one rendering callback normally lives on a single thread which seemingly gets processed by a single CPU core.
How can one deal with input data overflow on that core, while the other 3, 5, or 7 cores stay practically idle?
It is not possible to know in advance how many cores will be available on a particular machine, of course.
Is there a way of (statically or dynamically) allocating rendering callbacks to different threads or "threadbare blocks"?
Is there a way of precisely synchronising the moment at which various rendering callbacks on their own (highest priority) threads in parallel produce their audio buffers?
Can the GCD API perhaps be of any use?
Thanks in advance!
PS. This question is related to another question I posted a while ago: OSX AudioUnit SMP, with the difference that I now seem to better understand the scope of the problem.
No matter how you set up your audio processing on macOS – be it just writing a single render callback, or setting up a whole application suite – CoreAudio will always provide you with just one single realtime audio thread. This thread runs with the highest priority there is, and thus is the only way the system can give you at least some guarantees about processing time and such.
If you really need to distribute load over multiple CPU cores, you need to create your own threads manually and share sample and timing data across them. However, you will not be able to create a thread with the same priority as the system's audio thread, so your additional threads should be considered much "slower" than the audio thread. That means the audio thread might end up waiting on some other thread(s) for longer than the time it has available, which then results in an audible glitch.
Long story short, the most crucial part is to design the actual processing algorithm carefully, since in every scenario you really need to know how long each task can take.
EDIT: My previous answer here was quite different and uneducated. I updated the parts above so that anybody coming across this answer in the future is not guided in the wrong direction.
You can find the previous version in the history of this answer.
I am not completely sure, but I do not think this is possible. Of course, you can use the Accelerate.framework by Apple, which uses the available resources. But
A render callback lives on a real-time priority thread on which subsequent render calls arrive asynchronously. (Apple Documentation)
At the user level you are not able to create such threads.
By the way, these slides by Godfrey van der Linden may be interesting to you.
Ever since I discovered sockets, I've been using the nonblocking variants, since I didn't want to bother with learning about threading. Since then I've gathered a lot more experience with threading, and I'm starting to ask myself.. Why would you ever use it for sockets?
A big premise of threading seems to be that threads only make sense if they get to work on their own set of data. Once you have two threads working on the same set of data, you will have situations such as:
if (!hashmap.hasKey("bar"))
{
    dostuff(); // <-- meanwhile another thread inserts "bar" into hashmap
    hashmap["bar"] = "foo"; // <-- our premise that the key didn't exist
                            //     (likely to avoid overwriting something) is now invalid
}
Now imagine that the hashmap maps remote IPs to passwords. You can see where I'm going. I mean, sure, the likelihood of such thread interaction going wrong is pretty small, but it still exists, and to keep one's program secure, you have to account for every eventuality. This significantly increases the effort going into the design, compared to a simple, single-threaded workflow.
I can completely see how threading is great for working on separate sets of data, or for programs that are explicitly optimized to use threading. But for the "general" case, where the programmer is only concerned with shipping a working and secure program, I cannot find any reason to use threading over polling.
But seeing as the "separate thread" approach is extremely widespread, maybe I'm overlooking something. Enlighten me! :)
There are two common reasons for using threads with sockets, one good and one not-so-good:
The good reason: Because your computer has more than one CPU core, and you want to make use of the additional cores. A single-threaded program can only use a single core, so with a heavy workload you'd have one core pinned at 100%, and the other cores sitting unused and going to waste.
The not-so-good reason: You want to use blocking I/O to simplify your program's logic -- in particular, you want to avoid dealing with partial reads and partial writes, and keep each socket's context/state on the stack of the thread it's associated with. But you also want to be able to handle multiple clients at once, without slow client A causing an I/O call to block and hold off the handling of fast client B.
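Here is roughly what that thread-per-connection pattern looks like (a minimal Java sketch; the port and the echo behaviour are placeholders):

import java.io.*;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadPerClientServer {
    public static void main(String[] args) throws IOException {
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors()); // use the extra cores

        try (ServerSocket server = new ServerSocket(9000)) {
            while (true) {
                Socket client = server.accept();   // blocks until a client connects
                pool.submit(() -> handle(client)); // each client gets its own thread
            }
        }
    }

    private static void handle(Socket client) {
        // Blocking reads/writes keep the per-client state on this thread's stack.
        try (Socket c = client;
             BufferedReader in = new BufferedReader(new InputStreamReader(c.getInputStream()));
             PrintWriter out = new PrintWriter(c.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null) {
                out.println(line); // trivial echo; a slow client here does not block others
            }
        } catch (IOException ignored) {
        }
    }
}

Each handler's state lives on its own thread's stack, which is exactly the simplification the second reason is after.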
The reason the second one is not-so-good is that while having one thread per socket seems to simplify the program's design, in practice it usually complicates it. It introduces the possibility of race conditions and deadlocks, and makes it difficult to safely access shared data (as you mentioned). Worse, if you stick with blocking I/O, it becomes very difficult to shut the program down cleanly (or to affect a thread's behavior in any other way from anywhere other than the thread's socket), because the thread is typically blocked in an I/O call (possibly indefinitely) with no reliable way to wake it up. (Signals don't work reliably in multithreaded programs, and going back to non-blocking I/O means you lose the simplified program structure you were hoping for.)
In short, I agree with cib -- multithreaded servers can be problematic and therefore should generally be avoided unless you absolutely need to make use of multiple cores -- and even then it might be better to use multiple processes rather than multiple threads, for safety's sake.
The biggest advantage of threads is that they prevent lag from accumulating while processing requests. When polling, you use a loop to service every socket with a state change. For a handful of clients this is not very noticeable, but it can lead to significant delays when dealing with a large number of clients.
Assume that each transaction requires some pre-processing and post-processing (depending on the protocol this may be a trivial amount of processing, or it could be relatively significant, as is the case with BEEP or SOAP). The combined time to pre-process and post-process requests can lead to a backlog of pending requests.
For illustration purposes, imagine that the pre-processing, processing, and post-processing stages of a request each consume 1 millisecond, so that a request takes 3 milliseconds in total. In a single-threaded environment the system would become overwhelmed if incoming requests exceed 334 requests per second (since it would take 1.002 seconds to service all requests received within a 1 second window), leading to a time deficit of 0.002 seconds every second. However, if the system were threaded and the middle processing stage were the only part that has to be serialized (say, because it touches shared data), it would be theoretically possible to complete all of the requests received in a 1 second window in only 0.336 seconds (0.334 for the serialized shared data access + 0.001 for pre-processing + 0.001 for post-processing).
Although it is theoretically possible to process all requests in 0.336 seconds, this would require each request to have its own thread. More reasonably, take the combined pre/post-processing time for all requests (0.668 seconds), divide it by the number of configured threads, and add the 0.334 seconds of serialized work. Using the same 334 incoming requests and processing times, 2 threads would theoretically complete all requests in 0.668 seconds (0.668 / 2 + 0.334), 4 threads in 0.501 seconds, and 8 threads in 0.418 seconds.
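If you want to sanity-check that arithmetic, here is a small throwaway calculation (Java, purely illustrative) that reproduces the figures above, including the single-threaded 1.002 seconds, within rounding:

public class ThroughputEstimate {
    public static void main(String[] args) {
        int requests = 334;
        double serializedPerRequest = 0.001; // shared data access: cannot be parallelized
        double prePostPerRequest = 0.002;    // pre- + post-processing: parallelizable

        double serialized = requests * serializedPerRequest; // 0.334 s
        double prePostTotal = requests * prePostPerRequest;  // 0.668 s

        for (int threads : new int[] {1, 2, 4, 8}) {
            double total = serialized + prePostTotal / threads;
            System.out.printf("%d thread(s): %.4f s%n", threads, total);
        }
    }
}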
If the highest request volume your daemon receives is relatively low, then a single-threaded implementation with non-blocking I/O is sufficient; however, if you expect occasional bursts of a high volume of requests, then a multi-threaded model is worth considering.
I've written more than a handful of UNIX daemons that have relatively low throughput, and I've used a single-threaded model for its simplicity. However, when I wrote a custom netflow receiver for an ISP, I used a threaded model for the daemon, and it was able to handle peak times of Internet usage with minimal bumps in system load average.
This is sort of a strange question that's been bothering me lately. In our modern world of multi-core CPUs and multi-threaded operating systems, we can run many processes with true hardware concurrency. Let's say I spawn two instances of Program A in two separate processes at the same time. Disregarding OS-level interference which may alter the execution time for either or both processes, is it possible for both of these processes to complete at exactly the same moment in time? Is there any specific hardware/operating-system mechanism that may prevent this?
Now before the pedants grill me on this, I want to clarify my definition of "exactly the same moment". I'm not talking about time in the cosmic sense, only as it pertains to the operation of a computer. So if two processes complete at the same time, that means that they complete with a time difference that is so small, the computer cannot tell the difference.
EDIT: by "OS-level interference" I mean things like interrupts, the various techniques the OS may use to resolve resource contention, etc.
Actually, thinking about time in the "cosmic sense" is a good way to think about time in a distributed system (including multi-core systems). Not all systems (or cores) advance their clocks at exactly the same rate, making it hard to actually tell which events happened first (going by wall clock time). Because of this inability to agree, systems tend to measure time by logical clocks. Two events happen concurrently (i.e., "exactly at the same time") if they are not ordered by sharing data with each other or otherwise coordinating their execution.
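As a concrete illustration of a logical clock, here is a minimal Lamport-style clock sketched in Java (all names are illustrative); two events only get a defined order if clock values were exchanged between them:

import java.util.concurrent.atomic.AtomicLong;

// A Lamport logical clock: the classic way to order events without wall-clock time.
final class LamportClock {
    private final AtomicLong time = new AtomicLong();

    // Local event: advance our clock.
    long tick() {
        return time.incrementAndGet();
    }

    // Receiving a message stamped with the sender's clock: jump ahead if needed.
    long merge(long remoteTime) {
        return time.updateAndGet(local -> Math.max(local, remoteTime) + 1);
    }
}

With such a clock, "at exactly the same time" simply means that neither event's stamp was derived, directly or indirectly, from the other's.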
Also, you need to define when exactly a process has "exited." Thinking in Linux terms: is it when it prints an "exiting" message to the screen? When it returns from main()? When it executes the exit() system call? When its process state is set to "exiting" in the kernel? When the process's parent receives a SIGCHLD?
So getting back to your question (with a precise definition for "exactly at the same time"), the two processes can end (or do any other event) at exactly the same time as long as nothing coordinates their exiting (or other event). What counts as coordination depends on your architecture and its memory model, so some of the "exited" conditions listed above might always be ordered at a low level or by synchronization in the OS.
You don't even need "exactly" at the same time. Sometimes you can be close enough to seem concurrent. Even on a single core with no true concurrency, two processes could appear to exit at the same time if, for instance, two child processes exited before their parent was next scheduled. It doesn't matter which one really exited first; the parent will see that, at some instant while it wasn't running, both children died.
So if two processes complete at the same time, that means that they complete with a time difference that is so small, the computer cannot tell the difference.
Sure, why not? Except for shared memory (and other resources, see below), they're operating independently.
Is there any specific hardware/operating-system mechanism that may prevent this?
Anything that involves resource contention:
memory access
disk access
network access
explicit concurrency management via locks/semaphores/mutexes/etc.
To be more specific: these are separate CPU cores, which means they have computing circuitry implemented in separate logic circuits (see the Wikipedia page on multi-core processors).
The fact that each core can have its own memory cache means that it is quite possible for most of the computation to occur as interaction of each core with its own cache. Once you have that, it's just a matter of probability. That's not to say that algorithms take a nondeterministic amount of time, but their inputs may come from a probabilistic distribution, and the amount of time an algorithm takes to run is unlikely to be completely independent of its input data unless it has been carefully designed to take a constant amount of time.
Well, I'm going to go with "I doubt it":
Internally any sensible OS maintains a list of running processes.
It therefore seems sensible for us to define the moment that the process completes as the moment that it is removed from this list.
It also strikes me as fairly unlikely (but not impossible) that a typical OS will go to the effort to construct this list in such a way that two threads can independently remove an item from this list at exactly the same time (processes don't terminate that frequently and removing an item from a list is relatively inexpensive - I can't see any real reason why they wouldn't just lock the entire list instead).
Therefore for any two terminating processes A and B (where A terminates before B), there will always be a reasonably large time period (in a cosmic sense) where A has terminated and B has not.
That said it is of course possible to produce such a list, and so in reality it depends on the OS.
Also, I don't really understand the point of this question; in particular, what do you mean by
the computer cannot tell the difference
In order for the computer to tell the difference, it has to be able to check the running-process table at a point where A has terminated and B has not. If the OS schedules the removal of process B from the process table immediately after process A, it could very easily be that no such code gets a chance to execute, so by some definitions it isn't possible for the computer to tell the difference. This situation holds true even on a single-core / single-CPU machine.
Yes, without any OS scheduling interference they could finish at the same time, provided they don't have any resource contention (shared memory, external I/O, system calls). When either of them holds a lock on a resource, it will force the other to stall waiting for the resource to free up.