C# multithreading: a process runs faster on a machine with far fewer CPU cores than on one with many more

Currently our application is processing a large number of files, over 1,000 XML files in the same directory. The files are all being read, parsed, and updated/saved to the database.
When we tested our application on a 12-core machine, the total process was much slower than on a 4-core machine.
What we observed is that the thread count produced by our application climbs into a range of 30 to 90 threads, and the number of context switches increases massively. This is probably caused by the large amount of parallel work being spawned, but all of it is necessary.
Is the context switching the culprit, or the parallel reading/writing of files? Or should we reduce the number of parallel tasks?

The bottleneck here is the disk access. No matter how many threads you start, the file system can only read one file at a time. Starting more threads will only make them fight over this single resource, increasing both the context switching and the disk seek times.
At the other end of the process there is also a limitation, as only one thread at a time can update a table in the database, but at least the database is designed to handle multiple processes.
Make a single thread responsible for the disk reads; once a file has been read, it can hand the file to a thread that processes it. That way you read from the disk in the most efficient way, and the multithreaded part of the operation sits behind the bottleneck.
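A minimal sketch of that shape, in Java for consistency with the other snippets on this page (the same structure works in C# with a BlockingCollection and a task pool); the directory name, the worker count of 4, and the parseAndSave step are placeholders:

    import java.nio.file.*;
    import java.util.concurrent.*;

    public class SingleReaderPipeline {
        // Unique sentinel telling each worker that the reader is finished.
        private static final String POISON = new String("<done>");

        public static void main(String[] args) throws Exception {
            Path dir = Paths.get("xml-input");                  // hypothetical input directory
            int workers = 4;                                    // parsing/DB threads, not disk threads
            BlockingQueue<String> queue = new ArrayBlockingQueue<>(16);
            ExecutorService pool = Executors.newFixedThreadPool(workers);

            // Workers: parse the XML and save it to the database (placeholder call).
            for (int i = 0; i < workers; i++) {
                pool.execute(() -> {
                    try {
                        for (String xml = queue.take(); xml != POISON; xml = queue.take()) {
                            parseAndSave(xml);                  // CPU- and DB-bound work happens here
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
            }

            // Single reader: only this thread touches the disk, so reads stay sequential.
            try (DirectoryStream<Path> files = Files.newDirectoryStream(dir, "*.xml")) {
                for (Path file : files) {
                    queue.put(Files.readString(file));          // blocks if the workers fall behind
                }
            }
            for (int i = 0; i < workers; i++) {
                queue.put(POISON);                              // one sentinel per worker
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS);
        }

        private static void parseAndSave(String xml) {
            // placeholder for the real XML parsing and database update
        }
    }

Because the queue is bounded, the reader slows down automatically if the parsers or the database fall behind, so the thread count stays fixed instead of ballooning to 30-90 threads.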

Related

Multithreading does not help for an IO-intensive task?

I need to copy a set of files, with the size of each file ranging from 1 MB to 700 MB. After copying each file, I need to validate its checksum against an entry in md5sum.txt.
I wanted to optimize this task and hence evaluated the performance of splitting the load among multiple threads. The results were not as expected. I was expecting the time taken for copy and validation to decrease as the number of threads increased, but the time taken actually increased.
I have modified the ThreadPool source code shared in https://stackoverflow.com/a/22285532/1568395 to implement the thread pool.
The source code for the application can be found here
https://github.com/saai63/ThreadPool
The results for various numbers of threads are shown below.
From my reading, the probable reason is that all the tasks are now IO-bound, so all of the threads block on IO operations and cannot run in parallel, since the shared resource here is the HDD. I also understand that the HDD controller tries to optimize disk access by reducing seek time. Disks love sequential access patterns, and any concurrent access disrupts this pattern, hence the delay for large files.
Is this the only reason for the delay, or are there other factors? Why does the time increase with the number of threads?
IO is always much slower than the CPU. When multiple threads try to read from an IO device, what they usually achieve is a "bull rush" to the device, which increases the "randomness" of the IO operations and thus makes everything slower. Fewer threads have a greater chance of sequential operations, which are notoriously faster.
With multithreading you share the CPU among threads: the CPU is switched to another thread whenever the running thread goes into some sort of waiting state.
Here you have an IO-bound task, and there is no point in making your program multithreaded because all of the threads will be relying on a single IO device.
Even if you implement a multiprocess solution (multiple processes on the same node), all processes will be waiting for the same IO device and it won't give any performance improvement.
One solution would be to build some sort of multi-node setup with a shared disk that supports simultaneous multi-client access.
Using this kind of approach you can divide your task among multiple nodes that access the same disk and perform the operation.
Edit:
I think the increase in time is because of the time taken by the operating system to service multiple threads.
Switching the CPU and the IO device among threads takes longer as you increase the number of threads; a context switch is a compute-intensive task in itself, and you also lose IO/CPU cache performance as you switch between threads.
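To make the single-device point concrete, here is a rough single-threaded sketch (Java for consistency with the other snippets here, though the question's code is C++) that copies a file and validates its MD5 in one sequential pass; the paths and the expected checksum are placeholders that would really come from md5sum.txt:

    import java.io.*;
    import java.nio.file.*;
    import java.security.DigestInputStream;
    import java.security.MessageDigest;
    import java.util.HexFormat;

    public class SequentialCopyAndVerify {
        // Copies src to dst in one sequential pass and returns the MD5 of the data read.
        static String copyWithMd5(Path src, Path dst) throws Exception {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            try (InputStream in = new DigestInputStream(
                         new BufferedInputStream(Files.newInputStream(src)), md5);
                 OutputStream out = new BufferedOutputStream(Files.newOutputStream(dst))) {
                in.transferTo(out);                        // single sequential read of the source
            }
            return HexFormat.of().formatHex(md5.digest());
        }

        public static void main(String[] args) throws Exception {
            Path src = Paths.get("data/file1.bin");        // hypothetical input
            Path dst = Paths.get("copy/file1.bin");        // hypothetical output
            String expected = "d41d8cd98f00b204e9800998ecf8427e"; // would come from md5sum.txt
            String actual = copyWithMd5(src, dst);
            System.out.println(actual.equals(expected) ? "OK" : "MISMATCH");
        }
    }

Running this over the files one after another keeps the disk access sequential, which is exactly the pattern the HDD controller can optimize.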

Concurrent processes a lot slower than single process

I am modelling and solving a nonlinear program (NLP) using single-threaded CPLEX with AMPL (I am explicitly constraining CPLEX to use only one thread) on CentOS 7. I am using a processor with 6 independent cores (Intel i7 8700) to solve 6 independent test instances.
When I run these tests sequentially, it is much faster (by about 63% in elapsed time) than when I run the 6 instances concurrently. They are executed in independent processes, reading distinct data files and writing results to distinct output files. I have also tried solving these tests sequentially with multithreading, and I got times similar to the sequential single-thread runs.
I have checked the behaviour of these processes using top/htop; they are scheduled on different cores. So my question is: how can running these tests concurrently have such a large impact on elapsed time if they are being solved on different cores, each with only one thread, as separate processes?
Any thoughts would be appreciated.
It's very easy to make many threads perform worse than a single thread. The key to successful multi-threading and speedup is to understand not just that the program is multi-threaded, but exactly how your threads interact. Here are a few questions you should ask yourself as you review your code:
1) Do the individual threads share resources? If so, what are those resources, and do they block other threads while you are accessing them?
2) What's the slowest resource your multi-threaded code relies on? A common (and oft-neglected) bottleneck is disk IO. Multiple threads can process data much faster, but they won't make a disk read any faster, and in many cases multithreading can make it much worse (e.g. thrashing).
3) Is access to common resources properly synchronized?
To this end, and without knowing more about your problem, I'd recommend:
a) Not reading different files from different threads. You want to keep your disk IO as sequential as possible, and this is easier from a single thread. Maybe batch-read files from a single thread and then farm them out for processing (a sketch of this follows the list below).
b) Keep your threads as autonomous as possible; any communication back and forth will cause thread contention and slow things down.
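A rough illustration of (a), again in Java and with the file list and the process() step as placeholders: the main thread does all the reading, and only the CPU-heavy part is farmed out to a pool.

    import java.nio.file.*;
    import java.util.*;
    import java.util.concurrent.*;

    public class ReadThenFarmOut {
        public static void main(String[] args) throws Exception {
            List<Path> files = List.of(Paths.get("in/a.dat"), Paths.get("in/b.dat")); // placeholders
            ExecutorService cpuPool = Executors.newFixedThreadPool(
                    Runtime.getRuntime().availableProcessors());
            List<Future<Long>> results = new ArrayList<>();

            // Only the main thread touches the disk, so reads stay (mostly) sequential.
            for (Path file : files) {
                byte[] data = Files.readAllBytes(file);
                // The CPU-heavy part runs on the pool, behind the I/O bottleneck.
                results.add(cpuPool.submit(() -> process(data)));
            }
            for (Future<Long> r : results) {
                System.out.println("result: " + r.get());
            }
            cpuPool.shutdown();
        }

        // Placeholder for the real per-file computation.
        static long process(byte[] data) {
            long sum = 0;
            for (byte b : data) sum += b;
            return sum;
        }
    }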

Does threading a lot lead to thrashing?

Does threading a lot lead to thrashing if each new thread wants to access memory (specifically the same database in my case) and perform read/write operations throughout its lifetime?
I assume that this is true. If my assumption is true, then what is the best way to maximize CPU utilization? And how can I determine that some specific number of threads will give good CPU utilization?
If my assumption is wrong, please give a proper illustration to help me understand the scenario clearly.
Trashy code causes thrashing, not threads. All code is run by some thread, even main(). Temporary objects are garbage collected the same way on any thread.
The subtle part is when each thread preloads its own objects to perform the work, which can duplicate a lot of the same classes. It's usually a small sacrifice to make to get the power of concurrency. But it's not trash (no leak, no deterioration).
There is one exception: when some third-party code caches material in thread locals... you could end up caching the same stuff on each thread. Not really a leak, but not efficient.
Rule of thumb for the number of threads? It depends on the task (a rough sizing sketch follows this list).
If the tasks are pure computation, like math, then you should not exceed the number of non-hyperthreaded cores.
If the job is memory intensive along with pure computation (most cases), then the number of hyperthreaded cores is your target (because the CPU can use the time one thread spends waiting on memory to run computations for another).
If the job is mostly large sequential disk I/O, then the number of threads should not be much above the number of disk spindles available to read from. This is very approximate, since disk caches, DMA, SSDs, RAID and so on completely affect how the disk layer can service your threads without idling; the same goes for random access. And virtualization these days will throw all your estimates out the window: disk I/O could be much more available than you think, but also much worse.
If the jobs mostly wait on network I/O, then the limit is not really on your side; I would start with about 3x the number of cores. That multiplier simply presumes such a thread waits on the network for 2/3 of its time, which is very low in practice: it could be waiting on network I/O for 99% of its time (100x). This is why you see NIO sockets everywhere, to handle many connections with fewer, busier threads.
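A rough Java sketch of those rules of thumb; the workload classification and the assumption of two hardware threads per physical core are, of course, assumptions you have to verify for your own machine and jobs:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class PoolSizing {
        enum Workload { PURE_CPU, CPU_PLUS_MEMORY, DISK_IO, NETWORK_IO }

        // Rough translation of the rules of thumb above into a pool size.
        static int suggestThreads(Workload w, int physicalCores, int logicalCores, int spindles) {
            switch (w) {
                case PURE_CPU:        return physicalCores;         // don't exceed real cores
                case CPU_PLUS_MEMORY: return logicalCores;          // hyperthreads can hide memory stalls
                case DISK_IO:         return Math.max(1, spindles); // very approximate, see caveats above
                case NETWORK_IO:      return logicalCores * 3;      // a starting point; tune upward
                default:              return 1;
            }
        }

        public static void main(String[] args) {
            int logical = Runtime.getRuntime().availableProcessors();
            int physical = Math.max(1, logical / 2); // assumption: 2 hardware threads per physical core
            int threads = suggestThreads(Workload.NETWORK_IO, physical, logical, 1);
            ExecutorService pool = Executors.newFixedThreadPool(threads);
            System.out.println("pool size: " + threads);
            pool.shutdown();
        }
    }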
No. You could have hundreds of idle threads waiting for work and not see any thrashing. Thrashing is caused by the application's working set exceeding the available memory, so that active pages need to be reloaded from disk (or even written out to disk when temporary variable storage needs saving, to be reloaded later).
Threads share an address space, and having many active ones leads to diminishing returns due to lock contention. So in the DB case, many processes reading tables can proceed simultaneously, yet updates of dependent data need to be serialised to keep the data consistent, which may cause lock contention and limit parallel processing.
Poorly written queries which need to load and sort large tables in memory may cause thrashing when they exceed the free RAM (perhaps due to a poor choice of indexes). You can increase query throughput, and utilise the CPUs more, by having large RAM disk caches and using SSDs to reduce random data access times.
For memory-intensive computations, cache sizes may become important: fewer threads, whose data stays in cache and benefits from CPU prefetching, minimise stalls and work better than many threads competing to load their data from main memory.

single file reader/multiple consumer model: good idea for multithreaded program?

I have a simple task that is easily parallelizable. Basically, the same operation must be performed repeatedly on each line of a (large, several GB) input file. While I've made a multithreaded version of this, I noticed my I/O was the bottleneck. I decided to build a utility class that involves a single "file reader" thread that simply reads straight ahead as fast as it can into a circular buffer. Then, multiple consumers can call this class and get their 'next line'. Given n threads, each thread i's starting line is line i of the file, and each subsequent line for that thread is found by adding n. It turns out that locks are not needed for this; a couple of key atomic operations are enough to preserve the invariants.
I've tested the code and it seems faster, but on second thought, I'm not sure why. Wouldn't it be just as fast to divide the large file into n input files (you can 'seek' ahead into the same file to achieve the same thing, with minimal preprocessing), and then have each process simply call iostream::readLine on its own chunk (since iostream reads into its own buffer as well)? It doesn't seem that sharing a single buffer amongst multiple threads has any inherent advantage, since the workers are not actually operating on the same lines of data. Plus, I don't think there's a good way to parallelize so that they do work on the same lines. I just want to understand the performance gain I'm seeing, and to know whether it is a fluke or scalable/reproducible across platforms...
When you are I/O limited, you can get a good speedup by using two threads: one reading the file, the second doing the processing. That way the reading never waits for the processing (except for the very last line) and you are reading 100% of the time.
The buffer should be large enough to give the consumer thread enough work in one go, which most often means it should contain multiple lines (I would recommend at least 4000 characters, but probably even more). This prevents the thread context switching cost from becoming impractically high.
Single threaded:
read 1
process 1
read 2
process 2
read 3
process 3
Double threaded:
read 1
process 1/read 2
process 2/read 3
process 3
On some platforms you can get the same speedup without threads, using overlapped I/O, but using threads is often clearer.
Using more than one consumer thread will bring no benefit as long as you are really I/O bound.
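A minimal sketch of that two-thread pipeline (Java here for consistency with the other snippets on this page, though the question uses C++ iostreams); the input file name and handleLine() are placeholders:

    import java.io.BufferedReader;
    import java.nio.file.*;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class ReaderProcessorPipeline {
        private static final String EOF = new String("<eof>"); // unique sentinel, compared by identity

        public static void main(String[] args) throws Exception {
            Path input = Paths.get("big-input.txt");           // hypothetical multi-GB file
            // A bounded buffer lets the reader stay a chunk ahead of the processor.
            BlockingQueue<String> buffer = new ArrayBlockingQueue<>(1024);

            Thread reader = new Thread(() -> {
                try (BufferedReader br = Files.newBufferedReader(input)) {
                    for (String line; (line = br.readLine()) != null; ) {
                        buffer.put(line);                      // read line N+1 while line N is processed
                    }
                    buffer.put(EOF);
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            });

            Thread processor = new Thread(() -> {
                try {
                    for (String line = buffer.take(); line != EOF; line = buffer.take()) {
                        handleLine(line);                      // the per-line operation from the question
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            reader.start();
            processor.start();
            reader.join();
            processor.join();
        }

        static void handleLine(String line) {
            // placeholder for the real per-line operation
        }
    }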
In your case, there are at least two resources that your program competes for: the CPU and the hard disk. In a single-threaded approach, you request data and then wait with an idle CPU for the HD to deliver it. Then you handle the data while the HD is idle. This is bad, because one of the two resources is always idle. This changes a bit if you have multiple CPUs or multiple HDs. Also, in some cases the memory bandwidth (i.e. the RAM connection) is the limiting resource.
Now, your solution is right: you use one thread to keep the HD busy. If this thread blocks waiting for the HD, the OS just switches to a different thread that handles some data; if that thread doesn't have any data, it will wait for some. That way, the CPU and the HD work in parallel, at least some of the time, increasing the overall throughput. Note that you can't increase throughput with more than two threads, unless you also have multiple CPUs and the CPU, rather than the HD, is the limiting factor. If you are also writing back some data, you could improve performance with a third thread that writes to a second hard disk. Otherwise, you don't get any advantage from more threads.

Load balance threads in Java on the basis of file size

Hi, I have a requirement to process a large number of files via multithreading in Java. The files are of random size (min: 100 MB, max: 1.5 GB). The configuration is that I can create at most 8 threads, and each thread is allocated 8 files to process from the source directory. The issue is that sometimes the huge files all get allocated to a single thread, which degrades performance. I want to know whether there is any way to allocate files to threads such that all threads end up processing roughly the same total size. In other words, I want to balance the load among the threads on the basis of file size.
Thanks in advance :)
You should not even be doing parallel I/O from a single mechanical disk, as it is in fact slower than single-threaded I/O. There are a lot of answers around here explaining that. Basically, the mechanical head of the disk has to move every time to seek the next reading location, which is a costly operation. If you read in parallel, you are just bouncing the head around as each thread gets its turn to run.
The best approach would be to read the files one by one sequentially using a single producer thread and process them in parallel using a pool of worker threads.
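If, on the other hand, the files sit on storage that tolerates concurrent reads (an SSD or a striped array, for example), a simple way to even out the load without precomputing assignments is to drop the static "8 files per thread" split: sort the paths by size, put them in a shared queue, and let each worker pull the next file whenever it finishes its current one. A rough Java sketch, with the source directory and process() as placeholders:

    import java.io.IOException;
    import java.nio.file.*;
    import java.util.Comparator;
    import java.util.concurrent.*;
    import java.util.stream.Stream;

    public class SizeAwareWorkQueue {
        public static void main(String[] args) throws Exception {
            Path dir = Paths.get("source-dir");                 // hypothetical source directory
            BlockingQueue<Path> work = new LinkedBlockingQueue<>();

            // Largest files first: a thread that grabs a huge file early won't also end up
            // with several more huge ones, so the totals per thread stay roughly even.
            try (Stream<Path> files = Files.list(dir)) {
                files.filter(Files::isRegularFile)
                     .sorted(Comparator.comparingLong(SizeAwareWorkQueue::size).reversed())
                     .forEach(work::add);
            }

            int threads = 8;                                    // the limit from the question
            ExecutorService pool = Executors.newFixedThreadPool(threads);
            for (int i = 0; i < threads; i++) {
                pool.execute(() -> {
                    for (Path file; (file = work.poll()) != null; ) {
                        process(file);                          // placeholder for the real work
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS);
        }

        static long size(Path p) {
            try { return Files.size(p); } catch (IOException e) { return 0L; }
        }

        static void process(Path file) {
            // placeholder: read and handle one file
        }
    }

Handing out the largest files first means no thread gets stuck with several giant files near the end, so the per-thread totals stay roughly balanced.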

Resources