Performance when calling fsync on multiple files vs one file - linux

I have multiple threads each accepting requests, doing some processing, storing the result in a commit log, and returning the results. In order to guarantee that at most x seconds worth of data is lost, this commit log needs to be fsync'd every x seconds.
I would like to avoid synchronization between threads, which means they each need to have their own commit log rather than a shared log - is it possible to fsync all these different commit logs regularly in a performant way?
This is on Linux, ext4 (or ext3)
(Note: due to the nature of the code, even during normal processing the threads need to re-read some of their own recent data from the commit log (but never other threads' commit log data), so I believe it would be impractical to use a shared log, since many threads would need to read/write to it.)

If you only need flushing to happen every few seconds, do you need to fsync() at all? I.e. the OS should do it for you fairly regularly (unless the system is under heavy load and disk I/O is in short supply).
Otherwise, have your threads do something like:
if (high_resolution_time() % n == 0) {
    fsync(fd);
}
Where n is a value that would be e.g. 3 if high_resolution_time() returned Unix epoch time (which is expressed in seconds). This would make the thread flush the file every 3 seconds.
The problem, of course, is that you need much higher clock resolution; otherwise a thread that passes this code section several times per second would flush its file multiple times in quick succession. I don't know what programming language you use, but in C on Linux you could use
gettimeofday:
struct timeval tv;
gettimeofday(&tv, NULL);
long long x = (long long)tv.tv_sec * 1000000LL + (long long)tv.tv_usec;
if (x % 3000000 == 0) { // fsync every 3 seconds
    fsync(fd);
}
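A variant that avoids relying on the clock value landing exactly on a multiple of the interval is to have each thread remember when it last flushed its own log and call fsync() once the interval has elapsed. A minimal sketch, assuming a per-thread log structure and a 3-second interval (names are illustrative only):

#include <time.h>
#include <unistd.h>

/* Per-thread commit-log state; each thread owns one of these, so no
 * synchronization between threads is needed. */
struct commit_log {
    int    fd;          /* file descriptor of this thread's log    */
    time_t last_fsync;  /* CLOCK_MONOTONIC seconds of last flush   */
};

/* Call this after every append; it only pays the fsync cost once the
 * interval (here: 3 seconds) has passed. */
static void maybe_fsync(struct commit_log *log)
{
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    if (now.tv_sec - log->last_fsync >= 3) {
        fsync(log->fd);   /* or fdatasync(log->fd) if the metadata
                             does not need to be durable            */
        log->last_fsync = now.tv_sec;
    }
}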

Related

Maximum number of tasks supported in AUTOSAR

What is the maximum number of tasks supported in AUTOSAR compliant systems?
In Linux, I can check the maximum process IDs supported to get the maximum number of tasks supported.
However, I couldn't find any source that states the maximum number of tasks supported by AUTOSAR.
Thank you very much for your help!
Well, we are still in an embedded automotive world and not on a PC.
There is usually a tradeoff between the number of tasks you have, what it takes to schedule them, and what RAM/ROM and runtime resources your configuration uses.
As already said, if you just need a simple timed loop with some interrupts in between, one task may be ok.
It might also be enough to have, e.g., 3 tasks running at a 5ms, 10ms and 20ms cycle. But in simple cases like this you could also schedule everything with a single 5ms task:
TASK(TASK_5ms)
{
static uint8 cnt = 0;
cnt++;
// XXX and YYY main functions shall only be called every 10ms,
// but load-balance them: instead of running 3 functions in one 5ms slot
// and only 1 in the next, run exactly two functions in every 5ms slot.
if (cnt & 1)
{
XXX_Mainfunction_10ms();
}
else
{
YYY_Mainfunction_10ms();
}
ZZZ_Mainfunction_5ms();
}
So, if you need something to be run every 5, 10 or 20ms, you put these runnables into the corresponding tasks.
The old OSEK also had a notion of BASIC vs. EXTENDED tasks, where only extended tasks were able to react to OsEvents. Such tasks might not run cyclically, but only on configured OsEvents. You would have an OS wait point there, where the task is more or less stopped and only woken up by the OS on the arrival of an event. There are also OsAlarms, which can either directly trigger the activation of an OsTask or do so indirectly over an event, so you could e.g. wait at the same wait point for both a cyclic event from an OsAlarm and an OsEvent set by something else, e.g. by another task or from an ISR.
TASK(TASK_EXT)
{
EventMaskType evt;
for(;;)
{
WaitEvent(EVT_XXX_START | EVT_YYY_START | EVT_YYY_FINISHED);
GetEvent(TASK_EXT, &evt);
// Start XXX if triggered, but YYY has reported to be finished
if ((evt & (EVT_XXX_START | EVT_YYY_FINISHED)) == (EVT_XXX_START | EVT_YYY_FINISHED))
{
ClearEvent(EVT_XXX_START);
XXX_Start();
}
// Start YYY if triggered, will report later to start XXX
if (evt & EVT_YYY_START)
{
ClearEvent(EVT_YYY_START);
YYY_Start();
}
}
}
This direct handling of scheduling is now mostly done/generated within the RTE based on the events you have configured for your SWCs and the Event to Task Mapping etc.
Tasks are scheduled mainly by their priority; that's why they can be interrupted at any time by a higher-priority task. The exception is if you configure your OS and tasks to be not preemptive but cooperative; then it might be necessary to also use Schedule() points in your code to give up the CPU.
On bigger systems, and also on MultiCore systems with a MultiCore OS, there will be higher numbers of Tasks, because Tasks are bound to a Core. The Tasks on different Cores run independently, except maybe for the Inter-Core-Synchronization, which can also have a negative performance impact (spinlocks can stall the whole system).
For example, there could be some cyclic Tasks for normal BaseSW components and one specifically for Communication components (CAN Stack and Comm-Services).
We usually separate the communication part, since it needs a certain cycle time like 5..10ms; this cycle is used by the Comm-Stack for message transmission scheduling and also for reception timeout monitoring.
Then there might be a task to handle the memory stack (Ea/Fls, Eep/Fee, NvM).
There might also be some kind of event-based Tasks to trigger certain HW-control and processing chains of measured data, since these might be put on different cores and can be scheduled by each other's start or finished events.
On the other side, for all your cyclic tasks you should also make sure that the functions run within such a task do not run longer than the task's cycle; otherwise you get an OS shutdown due to multiple activation of the same task, since your task is started again before it has actually finished. And you might have constraints that require some tasks to finish within your application's expected measurement cycle.
In safety-relevant systems (ASIL-A .. ASIL-D) you'll also have at least one task per safety level to get freedom from interference. In AUTOSAR, you already specify that on the OsApplication which the tasks are assigned to, which also allows you to configure the MemoryProtection (e.g. write access to memory partitions by QM, ASIL-A, ASIL-B applications and tasks). That is then another part the OS has to do at runtime: reconfigure the MPU according to the OsApplication's MemoryAccess settings.
But again, the more tasks you create, the higher the usage of RAM, ROM and runtime.
RAM - runtime scheduling structures and different task stacks
ROM - the actual task and event configurations
Runtime - the context switches of the tasks and also the scheduling itself
It seems to vary. I found that ETAS RTA offers 1024 tasks*, whereas Vector's MICROSAR OS has 65535.
For task handling, OSEK/ASR provides the following functions:
StatusType ActivateTask (TaskType TaskID)
StatusType TerminateTask (void)
StatusType Schedule (void)
StatusType GetTaskID (TaskRefType TaskID)
StatusType GetTaskState (TaskType TaskID, TaskStateRefType State)
*Link might change in future, but it is easy to search ETAS page directly for manuals etc.: https://www.etas.com/en/products/download_center.php
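As a rough sketch of how these calls fit together (the task names here are invented for the example, not taken from any concrete configuration), a basic task could activate a follow-up task and then terminate itself:

TASK(TASK_Worker)
{
    /* ... do the cyclic work of this task ... */

    /* Activate another task once this one's work is done; the OS will
     * schedule it according to its configured priority. The identifiers
     * come from the generated OS configuration. */
    (void)ActivateTask(TASK_Postprocess);

    /* A basic task must end by terminating itself, otherwise the OS
     * reports an error. */
    (void)TerminateTask();
}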
Formally, you can have an infinite number of OsTasks. According to the spec, the configuration of the Os can have 0..* OsTask objects.
Apart from that, the (OS) software uses the data type TaskType for task-index variables. Therefore, if TaskType is a uint16, you could not have more than 65535 tasks.
Besides that, if you have a lot of tasks, you might re-think your design.

How can I improve performance with FutureTasks

The problem seems simple: I have a huge number of operations to run, and the main thread can only proceed when all of those operations have returned their results. I first tried a single thread; each operation took, say, 2 to 10 seconds at most, and the whole run took about 2.5 minutes. Then I tried FutureTasks and submitted them all to the ExecutorService. They were all processed at once, but each one then took, say, 40 to 150 seconds, and in the end the full process still took about 2.1 minutes.
If I'm right, all the threads did was execute everything at once while sharing the processor's power, whereas what I expected was the processor working heavily to get all the tasks executed at the same time, with each one taking about the same time it takes when executed in a single thread.
The question is: is there a way I can achieve this? (Maybe not with FutureTasks, maybe with something else, I don't know.)
Detail: I don't need them to run at exactly the same time; that doesn't actually matter to me. What really matters is the performance.
You might have created way too many threads. As a consequence, the CPU was constantly switching between them, thus generating a noticeable overhead.
You probably need to limit the number of running threads; then you can simply submit your tasks and they will execute concurrently.
Something like:
ExecutorService es = Executors.newFixedThreadPool(8);
List<Future<?>> futures = new ArrayList<>(runnables.size());
for (Runnable r : runnables) {
    futures.add(es.submit(r)); // keep the Future so we can wait on it
}
// wait until they all finish:
for (Future<?> f : futures) {
    f.get(); // get() throws checked exceptions; handle or declare them
}
// all done
es.shutdown();

If one thread writes to a location and another thread is reading, can the second thread see the new value then the old?

Start with x = 0. Note there are no memory barriers in any of the code below.
volatile int x = 0
Thread 1:
while (x == 0) {}
print "Saw non-zer0"
while (x != 0) {}
print "Saw zero again!"
Thread 2:
x = 1
Is it ever possible to see the second message, "Saw zero again!", on any (real) CPU? What about on x86_64?
Similarly, in this code:
volatile int x = 0
Thread 1:
while (x == 0) {}
x = 2
Thread 2:
x = 1
Is the final value of x guaranteed to be 2, or could the CPU caches update main memory in some arbitrary order, so that x = 1 reaches one CPU's cache where thread 1 can see it, thread 1 then gets moved to a different CPU where it writes x = 2 into that CPU's cache, and x = 2 gets written back to main memory before x = 1?
Yes, it's entirely possible. The compiler could, for example, have just written x to memory but still have the value in a register. One while loop could check memory while the other checks the register.
It doesn't happen due to CPU caches because cache coherency hardware logic makes the caches invisible on all CPUs you are likely to actually use.
Theoretically, the write race you talk about could happen due to posted write buffering and read prefetching. Miraculous tricks were used to make this impossible on x86 CPUs to avoid breaking legacy code. But you shouldn't expect future processors to do this.
Leaving aside for a second tricks done by the compiler (even ones allowed by language standards), I believe you're asking how the micro-architecture could behave in such scenario. Keep in mind that the code would most likely expand into a busy wait loop of cmp [x] + jz or something similar, which hides a load inside it. This means that [x] is likely to live in the cache of the core running thread 1.
At some point, thread 2 would come and perform the store. If it resides on a different core, the line would first be invalidated completely from the first core. If these are 2 threads running on the same physical core - the store would immediately affect all chronologically younger loads.
Now, the most likely thing to happen on a modern out-of-order machine is that all the loads in the pipeline at this point would be different iterations of the same first loop (since any branch predictor facing so many repetitive "taken" resolutions is likely to assume the branch will continue being taken, until proven wrong). So what would happen is that the first load to encounter the new value modified by the other thread will cause the matching branch to simply flush the entire pipe of all younger operations, without the 2nd loop ever having a chance to execute.
However, it's possible that for some reason you did get to the 2nd loop (let's say the predictor issued a not-taken prediction just at the right moment, when the loop condition check saw the new value) - in this case, the question boils down to this scenario:
Time -->
----------------------------------------------------------------
thread 1
cmp [x],0 execute
je ... execute (not taken)
...
cmp [x],0 execute
jne ... execute (not taken)
Can_We_Get_Here:
...
thread2
store [x],1 execute
In other words, given that most modern CPUs may execute instructions out of order, can a younger load be evaluated before an older one to the same address, allowing the store (from another thread) to change the value so it may be observed inconsistently by the loads?
My guess is that the above timeline is quite possible given the nature of out-of-order execution engines today, as they simply arbitrate and perform whatever operation is ready. However, on most x86 implementations there are safeguards to protect against such a scenario, since the memory ordering rules strictly say -
8.2.3.2 Neither Loads Nor Stores Are Reordered with Like Operations
Such mechanisms may detect this scenario and flush the machine to prevent the stale/wrong values becoming visible. So the answer is: no, it should not be possible, unless of course the software or the compiler change the nature of the code to prevent the hardware from noticing the relation. Then again, memory ordering rules are sometimes flaky, and I'm not sure all x86 manufacturers adhere to the exact same wording, but this is a pretty fundamental example of consistency, so I'd be very surprised if one of them missed it.
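Leaving the x86-specific guarantees aside, if you want to express the intended semantics portably (and rule out the compiler tricks mentioned above), a sketch using C11 atomics might look like this; per-variable coherence plus sequentially consistent loads already forbid reading the new value and then the old one:

#include <stdatomic.h>
#include <stdio.h>

atomic_int x = 0;

/* Thread 1: once a load has observed 1, a later load of the same variable
 * cannot observe the older 0 (unless some thread stores 0 again). */
void *reader(void *arg)
{
    (void)arg;
    while (atomic_load(&x) == 0) { /* spin */ }
    puts("Saw non-zero");
    while (atomic_load(&x) != 0) { /* spin */ }
    puts("Saw zero again!");   /* unreachable if nothing ever stores 0 again */
    return NULL;
}

/* Thread 2 */
void *writer(void *arg)
{
    (void)arg;
    atomic_store(&x, 1);
    return NULL;
}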
The answer seems to be, "this is exactly the job of CPU cache coherency." x86 processors implement the MESI protocol, which guarantees that the second thread can't see the new value and then the old.

context switch measure time

I wonder if any of you knows how to use the function get_timer() to measure the time of a context switch.
How do I find the average?
When should I display it?
Could someone help me out with this?
Is there any expert who knows this?
One fairly straightforward way would be to have two threads communicating through a pipe. One thread would do (pseudo-code):
for(n = 1000; n--;) {
now = clock_gettime(CLOCK_MONOTONIC_RAW);
write(pipe, now);
sleep(1msec); // to make sure that the other thread blocks again on pipe read
}
Another thread would do:
context_switch_times[1000];
for(n = 1000; n--;) {
time = read(pipe);
now = clock_gettime(CLOCK_MONOTONIC_RAW);
context_switch_times[n] = now - time;
}
That is, it would measure the time duration between when the data was written into the pipe by one thread and the time when the other thread woke up and read that data. A histogram of context_switch_times array would show the distribution of context switch times.
The times would include the overhead of the pipe read and write and of getting the time; however, it gives a good sense of how big the context switch times are.
In the past I did a similar test using stock Fedora 13 kernel and real-time FIFO threads. The minimum context switch times I got were around 4-5 usec.
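For reference, a fleshed-out version of that pseudo-code might look roughly like this on Linux (link with -lpthread; the iteration count, sleep interval and clock choice are arbitrary, and error handling is omitted):

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define ITERATIONS 1000

static int fds[2];               /* fds[0] = read end, fds[1] = write end */

static int64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC_RAW, &ts);
    return (int64_t)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

/* Writer: timestamp, push it through the pipe, then sleep so the reader
 * is blocked on the pipe again before the next round. */
static void *writer(void *arg)
{
    (void)arg;
    for (int n = 0; n < ITERATIONS; n++) {
        int64_t t = now_ns();
        write(fds[1], &t, sizeof t);
        usleep(1000);            /* ~1 ms */
    }
    return NULL;
}

/* Reader: wake up on the pipe, take the time again, record the delta. */
static void *reader(void *arg)
{
    int64_t *deltas = arg;
    for (int n = 0; n < ITERATIONS; n++) {
        int64_t sent;
        read(fds[0], &sent, sizeof sent);
        deltas[n] = now_ns() - sent;
    }
    return NULL;
}

int main(void)
{
    static int64_t deltas[ITERATIONS];
    pthread_t a, b;

    pipe(fds);
    pthread_create(&a, NULL, reader, deltas);
    pthread_create(&b, NULL, writer, NULL);
    pthread_join(b, NULL);
    pthread_join(a, NULL);

    int64_t sum = 0;
    for (int n = 0; n < ITERATIONS; n++)
        sum += deltas[n];
    printf("average wakeup latency: %lld ns\n", (long long)(sum / ITERATIONS));
    return 0;
}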
I don't think we can actually measure this time exactly from user space, since you never know when the kernel will pick your process up again after its time slice expires. So whatever you get in user space includes scheduling delays as well. You can get a close measurement from user space, but not always an exact one. Even a jiffy of delay matters.
I believe LTTng can be used to capture detailed traces of context switch timings, among other things.

What can make a program run slower when using more threads?

This question is about the same program I previously asked about. To recap, I have a program with a loop structure like this:
for (int i1 = 0; i1 < N; i1++)
for (int i2 = 0; i2 < N; i2++)
for (int i3 = 0; i3 < N; i3++)
for (int i4 = 0; i4 < N; i4++)
histogram[bin_index(i1, i2, i3, i4)] += 1;
bin_index is a completely deterministic function of its arguments which, for purposes of this question, does not use or change any shared state - in other words, it is manifestly reentrant.
I first wrote this program to use a single thread. Then I converted it to use multiple threads, such that thread n runs all iterations of the outer loop where i1 % nthreads == n. So the function that runs in each thread looks like
for (int i1 = n; i1 < N; i1 += nthreads)
for (int i2 = 0; i2 < N; i2++)
for (int i3 = 0; i3 < N; i3++)
for (int i4 = 0; i4 < N; i4++)
thread_local_histogram[bin_index(i1, i2, i3, i4)] += 1;
and all the thread_local_histograms are added up in the main thread at the end.
Here's the strange thing: when I run the program with just 1 thread for some particular size of the calculation, it takes about 6 seconds. When I run it with 2 or 3 threads, doing exactly the same calculation, it takes about 9 seconds. Why is that? I would expect that using 2 threads would be faster than 1 thread since I have a dual-core CPU. The program does not use any mutexes or other synchronization primitives so two threads should be able to run in parallel.
For reference: typical output from time (this is on Linux) for one thread:
real 0m5.968s
user 0m5.856s
sys 0m0.064s
and two threads:
real 0m9.128s
user 0m10.129s
sys 0m6.576s
The code is at http://static.ellipsix.net/ext-tmp/distintegral.ccs
P.S. I know there are libraries designed for exactly this kind of thing that probably could have better performance, but that's what my last question was about so I don't need to hear those suggestions again. (Plus I wanted to use pthreads as a learning experience.)
To avoid further comments on this: when I wrote my reply, the questioner hadn't posted a link to his source yet, so I could not tailor my reply to his specific issues. I was only answering the general question of what "can" cause such an issue; I never said that this will necessarily apply to his case. When he posted a link to his source, I wrote another reply that focuses exactly on his very issue (which is caused by the use of the random() function, as I explain in that reply). However, since the question of this post is still "What can make a program run slower when using more threads?" and not "What makes my very specific application run slower?", I've seen no need to change my rather general reply either (general question -> general response, specific question -> specific response).
1) Cache Poisoning
All threads access the same array, which is a block of memory. Each core has its own cache to speed up memory access. Since they don't just read from the array but also change the content, the content is changed actually in the cache only, not in real memory (at least not immediately). The problem is that the other thread on the other core may have overlapping parts of memory cached. If now core 1 changes the value in the cache, it must tell core 2 that this value has just changed. It does so by invalidating the cache content on core 2 and core 2 needs to re-read the data from memory, which slows processing down. Cache poisoning can only happen on multi-core or multi-CPU machines. If you just have one CPU with one core this is no problem. So to find out if that is your issue or not, just disable one core (most OSes will allow you to do that) and repeat the test. If it is now almost equally fast, that was your problem.
2) Preventing Memory Bursts
Memory is read fastest if read sequentially in bursts, just like when files are read from HD. Addressing a certain point in memory is actually awfully slow (just like the "seek time" on a HD), even if your PC has the best memory on the market. However, once this point has been addressed, sequential reads are fast. The first addressing goes by sending a row index and a column index and always having waiting times in between before the first data can be accessed. Once this data is there, the CPU starts bursting. While the data is still on the way it sends already the request for the next burst. As long as it is keeping up the burst (by always sending "Next line please" requests), the RAM will continue to pump out data as fast as it can (and this is actually quite fast!). Bursting only works if data is read sequentially and only if the memory addresses grow upwards (AFAIK you cannot burst from high to low addresses). If now two threads run at the same time and both keep reading/writing memory, however both from completely different memory addresses, each time thread 2 needs to read/write data, it must interrupt a possible burst of thread 1 and the other way round. This issue gets worse if you have even more threads and this issue is also an issue on a system that has only one single-core CPU.
BTW, running more threads than you have cores will never make your process any faster (as you mentioned 3 threads); it will rather slow it down (thread context switches have side effects that reduce processing throughput) - unless you run more threads because some threads are sleeping or blocking on certain events and thus cannot actively process any data. In that case it may make sense to run more threads than you have cores.
Everything I said so far in my other reply still holds true in general, as your question was what "can" cause this... However, now that I've seen your actual code, my first bet would be that your usage of the random() function slows everything down. Why?
See, random() keeps a global variable in memory that stores the last random value calculated. Each time you call random() (and you are calling it twice within a single function) it reads the value of this global variable, performs a calculation (which is not that fast; random() alone is a slow function) and writes the result back before returning it. This global variable is not per thread, it is shared among all threads. So what I wrote regarding cache poisoning applies here all the time (even though you avoided it for the array by having separate arrays per thread; this was very clever of you!). This value is constantly invalidated in the cache of either core and must be re-fetched from memory. However, if you only have a single thread, nothing like that happens; this variable never leaves the cache after it has been initially read, since it's permanently accessed again and again.
To make things even worse, glibc has a thread-safe version of random() - I just verified that by looking at the source. While this seems like a good idea, it means that each random() call will cause a mutex to be locked, memory to be accessed, and a mutex to be unlocked. Thus two threads calling random() at exactly the same moment will cause one thread to be blocked for a couple of CPU cycles. This is implementation specific, though, as AFAIK it is not required that random() be thread-safe. Most standard lib functions are not required to be thread-safe, since the C standard is not even aware of the concept of threads in the first place. When they don't call it at the same moment, the mutex has no influence on speed (since even a single-threaded app must lock/unlock the mutex), but then cache poisoning applies again.
You could pre-build an array with random numbers for every thread, containing as many random numbers as each thread needs. Create it in the main thread before spawning the threads and add a reference to it to the structure pointer you hand over to every thread. Then get the random numbers from there.
Or just implement your own random number generator that keeps its state in per-thread memory, if you don't need the "best" random numbers on the planet - it might even be faster than the system's built-in generator.
If a Linux-only solution works for you, you can use random_r. It allows you to pass the state with every call; just use a unique state object per thread. However, this function is a glibc extension and is most likely not supported by other platforms (it is part of neither the C standard nor the POSIX standard AFAIK - it does not exist on Mac OS X, for example, and may not exist on Solaris or FreeBSD either).
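A sketch of what the random_r approach could look like (glibc-specific; the 64-byte state buffer is just one of the sizes initstate_r accepts):

#define _DEFAULT_SOURCE
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Per-thread PRNG state for glibc's reentrant random_r(). */
struct thread_rng {
    struct random_data data;
    char state[64];          /* 8, 32, 64, 128 or 256 bytes are accepted */
};

static void thread_rng_init(struct thread_rng *rng, unsigned int seed)
{
    /* random_data must be zeroed before the first initstate_r call. */
    memset(&rng->data, 0, sizeof rng->data);
    initstate_r(seed, rng->state, sizeof rng->state, &rng->data);
}

static int32_t thread_rng_next(struct thread_rng *rng)
{
    int32_t value;
    random_r(&rng->data, &value);   /* no shared state, no hidden mutex */
    return value;
}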
Creating your own random number generator is actually not that hard. If you need truly random numbers, you shouldn't use random() in the first place; random() only creates pseudo-random numbers (numbers that look random, but are predictable if you know the generator's internal state). Here's the code for one that produces good uint32 random numbers:
// Multiply-with-carry generator (constants due to George Marsaglia):
// two 16-bit MWC streams combined into one 32-bit result.
static uint32_t getRandom(uint32_t *m_z, uint32_t *m_w)
{
    *m_z = 36969 * (*m_z & 65535) + (*m_z >> 16);
    *m_w = 18000 * (*m_w & 65535) + (*m_w >> 16);
    return (*m_z << 16) + *m_w;
}
It's important to "seed" m_z and m_w in a proper way somehow, otherwise the results are not random at all. The seed value itself should already be random, but here you could use the system random number generator.
uint32_t m_z = random();
uint32_t m_w = random();
uint32_t nextRandom;
for (...) {
nextRandom = getRandom(&m_z, &m_w);
// ...
}
This way every thread only needs to call random() twice and then uses your own generator. BTW, if you need double randoms (that are between 0 and 1), the function above can be easily wrapped for that:
static double getRandomDouble(uint32_t * m_z, uint32_t * m_w)
{
// The magic number below is 1/(2^32 + 2).
// The result is strictly between 0 and 1.
return (getRandom(m_z, m_w) + 1) * 2.328306435454494e-10;
}
Try to make this change in your code and let me know how the benchmark results are :-)
You are seeing cache line bouncing. I'm really surprised that you don't get wrong results, due to race conditions on the histogram buckets.
One possibility is that the time taken to create the threads exceeds the savings gained by using threads. I would think that N is not very large, if the elapsed time is only 6 seconds for an O(N^4) operation.
There's also no guarantee that multiple threads will run on different cores or CPUs. I'm not sure what the default thread affinity is with Linux - it may be that both threads run on a single core which would negate the benefits of a CPU-intensive piece of code such as this.
This article details default thread affinity and how to change your code to ensure threads run on specific cores.
Even though the threads don't access the same elements of the array at the same time, the whole array may sit in a few memory pages. When one core/processor writes to that page, it has to invalidate its cache for all other processors.
Avoid having many threads working over the same memory space. Allocate separate data for each thread to work upon, then join them together when the calculation finishes.
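For the histogram case that simply means each thread fills its own private array, and the main thread sums the arrays after joining the workers; a minimal sketch (NUM_BINS and the worker structure are placeholders, not from the questioner's code):

#include <pthread.h>

#define NUM_BINS 4096            /* placeholder bin count */

struct worker {
    pthread_t thread;
    long      *histogram;        /* this thread's private histogram */
};

/* After all workers have been joined, merge their private histograms
 * into the final result; only the main thread touches 'total'. */
static void merge(long *total, struct worker *workers, int nthreads)
{
    for (int t = 0; t < nthreads; t++)
        for (int bin = 0; bin < NUM_BINS; bin++)
            total[bin] += workers[t].histogram[bin];
}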
Off the top of my head:
Context switches
Resource contention
CPU contention (if they aren't getting split to multiple CPUs).
Cache thrashing
David,
Are you sure you run a kernel that supports multiple processors? If only one processor is utilized in your system, spawning additional CPU-intensive threads will slow down your program.
And are you sure support for threads in your system actually utilizes multiple processors? Does top, for example, show that both cores of your processor are utilized when you run your program?
