Parallel processing of strings in Delphi with full use of the available CPUs - multithreading

The goal is to achieve full usage of the available cores when converting floats to strings in a single Delphi application. I think this problem applies to string processing in general, but in my example I am specifically using the FloatToStr method.
What I am doing (I've kept this very simple so there is little ambiguity around the implementation):
Using Delphi XE6
Create thread objects which inherit from TThread, and start them.
In the thread's Execute procedure it converts a large number of doubles into strings via the FloatToStr method.
To simplify, these doubles are just the same constant, so there is no
shared or global memory resource required by the threads.
Although multiple cores are used, the CPU usage % always maxes out at the equivalent of a single core. I understand this is an established issue, so I have some specific questions.
Put simply, the same operation could be done by multiple application instances and thereby achieve fuller usage of the available CPU. Is it possible to do this effectively within the same executable?
I.e. assign threads different process IDs at the OS level, or some equivalent division recognised by the OS? Or is this simply not possible with out-of-the-box Delphi?
On scope:
I know there are different memory managers available, and other groups have tried changing some of the lower-level asm lock usage (http://synopse.info/forum/viewtopic.php?id=57).
But I am asking this question in the scope of not doing things at such a low level.
Thanks
Hi J. My code is deliberately very simple :
TTaskThread = class(TThread)
public
  procedure Execute; override;
end;

procedure TTaskThread.Execute;
var
  i: integer;
begin
  Self.FreeOnTerminate := True;
  for i := 0 to 1000000000 do
    FloatToStr(i*1.31234);
end;

procedure TfrmMain.Button1Click(Sender: TObject);
var
  t1, t2, t3: TTaskThread;
begin
  t1 := TTaskThread.Create(True);
  t2 := TTaskThread.Create(True);
  t3 := TTaskThread.Create(True);
  t1.Start;
  t2.Start;
  t3.Start;
end;
This is test code, where CPU usage (via the performance monitor) maxes out at 25% (I have 4 cores). If the FloatToStr line is swapped for a non-string operation, e.g. Power(i, 2), then the performance monitor shows the expected 75% usage.
(Yes there are better ways to measure this, but I think this is sufficient for the scope of this question)
I have explored this issue fairly thoroughly. The purpose of the question was to put forth the crux of the issue in a very simple form.
I am asking about the limitations of the FloatToStr method, and whether there is an implementation which will permit better usage of the available cores.
Thanks.

I second what everyone else has said in the comments. It is one of the dirty little secrets of Delphi that the FastMM memory manager is not scalable.
Since memory managers can be replaced you can simply replace FastMM with a scalable memory manager. This is a rapidly changing field. New scalable memory managers pop up every few months. The problem is that it is hard to write a correct scalable memory manager. What are you prepared to trust? One thing that can be said in FastMM's favour is that it is robust.
Rather than replacing the memory manager, it is better to remove the need to replace it at all. Simply avoid heap allocation. Find a way to do your work without the need for repeated calls to allocate dynamic memory. Even if you had a scalable heap manager, heap allocation would still have a cost.
Once you decide to avoid heap allocation the next decision is what to use instead of FloatToStr. In my experience the Delphi runtime library does not offer much support. For example, I recently discovered that there is no good way to convert an integer to text using a caller supplied buffer. So, you may need to roll your own conversion functions. As a simple first step to prove the point, try calling sprintf from msvcrt.dll. This will provide a proof of concept.
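To illustrate, here is a minimal sketch of that proof of concept; the exact declaration and the buffer size are my assumptions rather than a vetted implementation, and note that msvcrt's sprintf ignores Delphi's format settings:
// Proof-of-concept sketch only: a caller-supplied buffer, no heap allocation per call.
function sprintf(Buffer, Format: PAnsiChar): Integer;
  cdecl; varargs; external 'msvcrt.dll' name 'sprintf';

procedure TTaskThread.Execute;
const
  Fmt: AnsiString = '%g';
var
  i: integer;
  V: Double;
  Buf: array[0..63] of AnsiChar;  // stack buffer reused on every iteration
begin
  for i := 0 to 1000000000 do
  begin
    V := i * 1.31234;
    sprintf(@Buf[0], PAnsiChar(Fmt), V);  // formats into Buf without touching the heap
  end;
end;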

If you can't change the memory manager (MM), the only thing to do is to avoid using it where the MM could be a bottleneck.
As for float-to-string conversion (disclaimer: I tested the code below with Delphi XE), instead of
procedure Test1;
var
  i: integer;
  S: string;
begin
  for i := 0 to 10 do begin
    S := FloatToStr(i*1.31234);
    Writeln(S);
  end;
end;
you can use
procedure Test2;
var
  i: integer;
  S: string;
  Value: Extended;
begin
  SetLength(S, 64);
  for i := 0 to 10 do begin
    Value := i*1.31234;
    // clear the whole 64-character buffer (SizeOf(Char) = 2 for Unicode strings)
    FillChar(PChar(S)^, 64 * SizeOf(Char), 0);
    FloatToText(PChar(S), Value, fvExtended, ffGeneral, 15, 0);
    Writeln(S);
  end;
end;
which produces the same result but does not allocate memory inside the loop.

Also pay attention to:
function FloatToStr(Value: Extended): string; overload;
function FloatToStr(Value: Extended; const FormatSettings: TFormatSettings): string; overload;
The first form of FloatToStr is not thread-safe, because it uses localization information contained in global variables. The second form of FloatToStr, which is thread-safe, refers to localization information contained in the FormatSettings parameter. Before calling the thread-safe form of FloatToStr, you must populate FormatSettings with localization information. To populate FormatSettings with a set of default locale values, call GetLocaleFormatSettings.
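For reference, here is a minimal sketch of the question's Execute using the thread-safe overload; this only addresses locale safety, it does not remove the memory-manager contention discussed above:
procedure TTaskThread.Execute;
var
  i: integer;
  FS: TFormatSettings;
begin
  // Per-thread copy of the locale settings (TFormatSettings.Create is available in XE6;
  // older versions can call GetLocaleFormatSettings instead).
  FS := TFormatSettings.Create;
  for i := 0 to 1000000000 do
    FloatToStr(i*1.31234, FS);  // thread-safe overload
end;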

Many thanks for your knowledge and help so far. As per your suggestions, I've attempted to write an equivalent of the FloatToStr method in a way which avoids heap allocation, with some success. This is by no means a solid, foolproof implementation, just a nice and simple proof of concept which could be extended to achieve a more satisfying solution.
(I should also note I am using XE6 64-bit.)
Experiment result/observations:
the CPU usage % was proportional to the number of threads started
(i.e. each thread = 1 core maxed out via performance monitor).
as expected, with more threads started, performance degraded somewhat for each individual one (i.e. time measured to perform task - see code).
times are just rough averages
8 cores 3.3GHz - 1 thread took 4200ms. 6 threads took 5200ms each.
8 cores 2.5GHz - 1 thread took 4800ms. 2=>4800ms, 4=>5000ms, 6=>6300ms.
I did not calculate the overall time for a total multi thread run. Just observed CPU usage % and measured individual thread times.
Personally I find it a little hilarious that this actually works :) Or perhaps I have done something horribly wrong ?
Surely there are library units out there which resolve these things ?
The code:
unit Main;

interface

uses
  Winapi.Windows, Winapi.Messages, System.SysUtils, System.Variants, System.Classes, Vcl.Graphics,
  Vcl.Controls, Vcl.Forms, Vcl.Dialogs, Vcl.StdCtrls,
  Generics.Collections,
  DateUtils;

type
  TfrmParallel = class(TForm)
    Button1: TButton;
    Memo1: TMemo;
    procedure Button1Click(Sender: TObject);
  private
    { Private declarations }
  public
    { Public declarations }
  end;

  TTaskThread = class(TThread)
  private
    Fl: TList<double>;
  public
    procedure Add(l: TList<double>);
    procedure Execute; override;
  end;

var
  frmParallel: TfrmParallel;

implementation

{$R *.dfm}

{ TTaskThread }

procedure TTaskThread.Add(l: TList<double>);
begin
  Fl := l;
end;

procedure TTaskThread.Execute;
var
  i, j: integer;
  s, xs: shortstring;
  FR: TFloatRec;
  V: double;
  Precision, D: integer;
  ZeroCount: integer;
  Start, Finish: TDateTime;

  procedure AppendByteToString(var Result: shortstring; const B: Byte);
  const
    A1 = '1';
    A2 = '2';
    A3 = '3';
    A4 = '4';
    A5 = '5';
    A6 = '6';
    A7 = '7';
    A8 = '8';
    A9 = '9';
    A0 = '0';
  begin
    if B = 49 then
      Result := Result + A1
    else if B = 50 then
      Result := Result + A2
    else if B = 51 then
      Result := Result + A3
    else if B = 52 then
      Result := Result + A4
    else if B = 53 then
      Result := Result + A5
    else if B = 54 then
      Result := Result + A6
    else if B = 55 then
      Result := Result + A7
    else if B = 56 then
      Result := Result + A8
    else if B = 57 then
      Result := Result + A9
    else
      Result := Result + A0;
  end;

  procedure AppendDP(var Result: shortstring);
  begin
    Result := Result + '.';
  end;

begin
  Precision := 9;
  D := 1000;
  Self.FreeOnTerminate := True;
  //
  Start := Now;
  for i := 0 to Fl.Count - 1 do
  begin
    V := Fl[i];
    // //original way - just for testing
    // xs := shortstring(FloatToStrF(V, TFloatFormat.ffGeneral, Precision, D));
    //1. get float rec
    FloatToDecimal(FR, V, TFloatValue.fvExtended, Precision, D);
    //2. check sign
    if FR.Negative then
      s := '-'
    else
      s := '';
    //2. handle negative exponent
    if FR.Exponent < 1 then
    begin
      AppendByteToString(s, 0);
      AppendDP(s);
      for j := 1 to Abs(FR.Exponent) do
        AppendByteToString(s, 0);
    end;
    //3. count consecutive zeroes
    ZeroCount := 0;
    for j := Precision - 1 downto 0 do
    begin
      if (FR.Digits[j] > 48) and (FR.Digits[j] < 58) then
        Break;
      Inc(ZeroCount);
    end;
    //4. build string
    for j := 0 to Length(FR.Digits) - 1 do
    begin
      if j = Precision then
        Break;
      //cut off where there are only zeroes left up to precision
      if (j + ZeroCount) = Precision then
        Break;
      //insert decimal point - for positive exponent
      if (FR.Exponent > 0) and (j = FR.Exponent) then
        AppendDP(s);
      //append next digit
      AppendByteToString(s, FR.Digits[j]);
    end;
    // //use just to test agreement with FloatToStrF
    // if s <> xs then
    //   frmParallel.Memo1.Lines.Add(string(s + '|' + xs));
  end;
  Fl.Free;
  Finish := Now;
  //
  frmParallel.Memo1.Lines.Add(IntToStr(MillisecondsBetween(Start, Finish)));
  //!YES LINE IS NOT THREAD SAFE!
end;

procedure TfrmParallel.Button1Click(Sender: TObject);
var
  i: integer;
  t: TTaskThread;
  l: TList<double>;
begin
  //pre-generating the doubles is not required, it is just a more useful test for me
  l := TList<double>.Create;
  for i := 0 to 10000000 do
    l.Add(Now/(-i-1)); //some double generation
  //
  t := TTaskThread.Create(True);
  t.Add(l);
  t.Start;
end;

end.

By default, on thread contention (when one thread cannot acquire access to data locked by another thread), FastMM4 calls the Windows API function Sleep(0) and then, if the lock is still not available, enters a loop, calling Sleep(1) after each check of the lock.
Each call to Sleep(0) incurs the expensive cost of a context switch, which can be 10000+ cycles; it also suffers the cost of ring 3 to ring 0 transitions, which can be 1000+ cycles. As for Sleep(1): besides the costs associated with Sleep(0), it also delays execution by at least 1 millisecond, ceding control to other threads and, if there are no threads waiting to be executed by a physical CPU core, putting the core to sleep, effectively reducing CPU usage and power consumption.
That's why, in your case, CPU usage never reached 100%: because of the Sleep(1) calls issued by FastMM4.
This way of acquiring locks is not optimal.
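As a rough sketch (not FastMM4's actual source, which differs in detail), the pattern described above looks like this:
procedure AcquireLockWithSleep(var Locked: Integer);
begin
  if InterlockedExchange(Locked, 1) = 0 then
    Exit;                          // got the lock on the first try
  Sleep(0);                        // cede the time slice once (context switch cost)
  while InterlockedExchange(Locked, 1) <> 0 do
    Sleep(1);                      // every further retry adds at least 1 ms of latency
end;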
A better way would have been a spin-lock of about 5000 pause instructions and then, if the lock was still busy, a call to the SwitchToThread() API. If pause is not available (on very old processors with no SSE2 support) or the SwitchToThread() API is not available (on very old Windows versions, prior to Windows 2000), the best solution is to use EnterCriticalSection / LeaveCriticalSection, which does not have the latency associated with Sleep(1) and which also very effectively cedes control of the CPU core to other threads.
I have modified FastMM4 to use a new approach to waiting for a lock: critical sections instead of Sleep(). With these options, Sleep() will never be used; EnterCriticalSection / LeaveCriticalSection will be used instead. Testing has shown that using critical sections instead of Sleep (which was the default in FastMM4) provides a significant gain in situations when the number of threads working with the memory manager is the same as or higher than the number of physical cores. The gain is even more evident on computers with multiple physical CPUs and Non-Uniform Memory Access (NUMA). I have implemented compile-time options that take away the original FastMM4 approach of using Sleep(InitialSleepTime) and then Sleep(AdditionalSleepTime) (or Sleep(0) and Sleep(1)) and replace them with EnterCriticalSection / LeaveCriticalSection, to save the valuable CPU cycles wasted by Sleep(0) and to reduce the latency that was previously hurt by at least 1 millisecond on every Sleep(1); critical sections are much more CPU-friendly and have definitely lower latency than Sleep(1).
When these options are enabled, FastMM4-AVX checks:
whether the CPU supports SSE2 and thus the "pause" instruction, and
whether the operating system has the SwitchToThread() API call,
and in that case uses a "pause" spin-loop for 5000 iterations and then SwitchToThread() instead of critical sections. If the CPU doesn't have the "pause" instruction or Windows doesn't have the SwitchToThread() API function, it will use EnterCriticalSection / LeaveCriticalSection.
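The improved wait strategy, again as an illustrative sketch rather than the actual FastMM4-AVX code (it assumes SwitchToThread is declared in Winapi.Windows; a real implementation would also execute the "pause" instruction inside the spin loop):
procedure AcquireLockSpinThenSwitch(var Locked: Integer);
var
  Spin: Integer;
begin
  repeat
    if InterlockedExchange(Locked, 1) = 0 then
      Exit;                        // lock acquired
    Spin := 5000;
    while (Spin > 0) and (Locked <> 0) do
      Dec(Spin);                   // brief spin; "pause" would be issued here
    if Locked <> 0 then
      SwitchToThread;              // cede the core without the 1 ms penalty of Sleep(1)
  until False;
end;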
I have made available the fork called FastMM4-AVX at https://github.com/maximmasiutin/FastMM4
Here is a comparison of the original FastMM4 version 4.992, with default options, compiled for Win64 by Delphi 10.2 Tokyo (Release with Optimization), against the current FastMM4-AVX branch. Under some scenarios, the FastMM4-AVX branch is more than twice as fast compared to the original FastMM4. The tests were run on two different computers: one with a Xeon E6-2543v2 with 2 CPU sockets, each having 6 physical cores (12 logical threads), with only 5 physical cores per socket enabled for the test application; the other test was done on an i7-7700K CPU.
I used the "Multi-threaded allocate, use and free" and "NexusDB" test cases from the FastCode Challenge Memory Manager test suite, modified to run under 64-bit.
                      Xeon E6-2543v2 2*CPU       i7-7700K CPU
                      (allocated 20 logical      (allocated 8 logical
                      threads, 10 physical       threads, 4 physical
                      cores, NUMA)               cores)

                      Orig.   AVX-br.  Ratio     Orig.  AVX-br.  Ratio
                      ------  -------  ------    -----  -------  ------
02-threads realloc     96552    59951  62.09%    65213    49471  75.86%
04-threads realloc     97998    39494  40.30%    64402    47714  74.09%
08-threads realloc     98325    33743  34.32%    64796    58754  90.68%
16-threads realloc    116708    45855  39.29%    71457    60173  84.21%
16-threads realloc    116273    45161  38.84%    70722    60293  85.25%
31-threads realloc    122528    53616  43.76%    70939    62962  88.76%
64-threads realloc    137661    54330  39.47%    73696    64824  87.96%
NexusDB 02 threads    122846    90380  73.72%    79479    66153  83.23%
NexusDB 04 threads    122131    53103  43.77%    69183    43001  62.16%
NexusDB 08 threads    124419    40914  32.88%    64977    33609  51.72%
NexusDB 12 threads    181239    55818  30.80%    83983    44658  53.18%
NexusDB 16 threads    135211    62044  43.61%    59917    32463  54.18%
NexusDB 31 threads    134815    48132  33.46%    54686    31184  57.02%
NexusDB 64 threads    187094    57672  30.25%    63089    41955  66.50%
Your code that calls FloatToStr is fine as a test, since it allocates a result string using the memory manager, then reallocates it, and so on. An even better idea would have been to explicitly deallocate it, for example:
procedure TTaskThread.Execute;
var
  i: integer;
  s: string;
begin
  for i := 0 to 1000000000 do
  begin
    s := FloatToStr(i*1.31234);
    Finalize(s);
  end;
end;
You can find better tests of the memory manager in the FastCode challenge test suite at https://github.com/maximmasiutin/FastCodeBenchmark
Also, please note that the reference counters in Delphi strings use locking operations, which are inherently slow. For example, on an Intel 2400MHz processor with Tiger Lake microarchitecture (released in October 2020), LOCK ADD is about 18 CPU cycles (7.5ns), while a non-locked simple ADD is about 0.75 CPU cycles (0.3ns). If your code ensures that the strings are not assigned and modified from different threads, then you may not need this locking. One of the approaches to ensure that a string with multiple references is not manipulated from different threads is to call UniqueString() before such use. Therefore, to improve speed, you may modify System.pas to remove the LOCK prefix from the assembly instructions that operate on the string reference counters. For example, instead of
LOCK INC [EDX-skew].StrRec.refCnt
there will be
INC [EDX-skew].StrRec.refCnt
However, compiling and using your own, custom version of System.pas may not be an easy task. You can find more information about reference counter locking in Delphi strings in a separate answer.
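As a minimal sketch of the UniqueString() approach mentioned above (ProcessText is a hypothetical placeholder for the real work):
procedure StartWorkerWithString(const Source: string);
var
  S: string;
begin
  S := Source;
  UniqueString(S);     // S now has a reference count of 1; no other thread holds a reference to it
  TThread.CreateAnonymousThread(
    procedure
    begin
      ProcessText(S);  // only this thread manipulates S from here on
    end).Start;
end;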

Related

How does Delphi XE7 handle Multiple Threading in Parallel Tasks

I am new to parallel programming and am trying to find out why I occasionally get an EMonitorLockException: Object lock not owned when I increase the number of parallel tasks to run. Is it the case that the threads become tangled the more tasks I run, or is my code not correct?
{$APPTYPE CONSOLE}
uses
System.SysUtils, System.Threading, System.Classes, System.SyncObjs, System.StrUtils;
const
WorkerCount = 10000; // this is the number of tasks to run in parallel. Note: when this number is increased by repeated factors of 10
// it takes longer to produce the result and sometimes the program crashes
// with an EMonitorLockException: Object lock not owned. Is it that my program is not written correctly or efficiently to run
// a large number of parallel tasks and the threads become entangled?
// Eventually I would like to optimize the program to find the optimal number of tasks to run in parallel so as to find the result in the shortest time.
// This becomes important when the word sequence is increased to six or more letters.
sequencetofind='help'; // letter sequence to find randomly
sequencelengthplus1=5; // the length of the letter sequence plus 1 extra letter for a check to see if it is working
var
Ticks: Cardinal;
i,k,m: Integer;
sequencenum: Integer;
alphabetarray:array[1..26] of string;
v:char;
copyarray,letters:array[1..sequencelengthplus1] of string;
tasks: array of ITask;
LTask: ITask;
Event1:TEvent;
sequencesection:TCriticalSection;
function findsequencex(index: Integer): TProc;
begin
Result := procedure
var
counter,m:integer;
r:integer;
z:string;
lettersx:array[1..sequencelengthplus1] of string;
begin
for m:=1 to sequencelengthplus1-1 do
lettersx[m]:=letters[m];
randomize;
counter:=1;
repeat
r:=random(26)+1;
z:=alphabetarray[r]; //randomly find letters until matched with the sequence
if z=letters[counter] then
begin
copyarray[counter]:=z;
counter:=counter+1; // increase counter when successfully found a match
end
else
counter:=1; // if match fails start again and look for the first letter
if (counter=sequencelengthplus1) then
begin // if all letters found in correct order find one more letter as a check
sequencesection.Acquire; //critical section start
r:=random(26)+1;
z:=alphabetarray[r];
TInterlocked.CompareExchange(sequencenum,r,0);
copyarray[sequencelengthplus1]:=z;
Event1.SetEvent; // set in motion the process to stop all other tasks
sequencesection.release; // critical section end
end;
until (Event1.WaitFor(0)=wrSignaled); // check to see if all letters of the sequence has been found
end;
end;
procedure Parallel2;
var
i,sequencevalue,j: Integer;
begin
Event1:=TEvent.Create(nil,true,false,'noname'); // sequence checker
Event1.resetevent;
sequencenum := 0;
Ticks := TThread.GetTickCount;
SetLength(Tasks, WorkerCount); // number of parallel tasks to undertake
for i := 0 to WorkerCount-1 do
Tasks[i]:=TTask.Run(findsequencex(i));
TTask.WaitForAny(Tasks); // wait for the first one to successfully finish
TThread.Synchronize(nil,
procedure
begin
for LTask in Tasks do
LTask.Cancel; // kill the remaining tasks
TInterlocked.Add (sequencevalue, sequencenum); // note the random letter check
end);
Ticks := TThread.GetTickCount - Ticks;
writeln('Parallel time ' + Ticks.ToString + ' ms, last random alphabet sequence number: ' + sequencenum.ToString+' random letter is = '+alphabetarray[sequencevalue]);
end;
begin
sequencesection:=TCriticalSection.Create;
for m:=1 to (sequencelengthplus1-1) do
begin
letters[m]:=copy(sequencetofind,m,1);
writeln(letters[m]);
end;
i:=0;
for v:='a' to 'z' do
begin
i:=i+1;
alphabetarray[i]:=v;
end;
try
begin
Parallel2; // call the parrallel procedure
writeln('finished');
for m:=1 to sequencelengthplus1 do
writeln(copyarray[m]);
if (Event1.WaitFor(0)=wrSignaled) then
begin
writeln('event signaled');
if (sequencenum=0) then writeln('sequence is null');
end;
Event1.Free;
sequencesection.free;
end;
except
on E: Exception do
Writeln(E.ClassName, ': ', E.Message);
end;
Readln;
end.
You've got masses of shared global variables that you access without any synchronization. For instance:
if z = letters[counter] then
begin
copyarray[counter] := z;
counter := counter + 1;
// increase counter when successfully found a match
end
Here copyarray is a global that is shared between all the tasks. This data race alone could result in the error you see. There are other similar races. And I am sure that there are many many more problems. The entire code cannot be salvaged. You need to throw it away and start again.
Here are some tips:
Pick a simpler task to begin learning about parallel programming.
Find a good book or tutorial on the subject. Start small and work up to your actual problem.
Stop using global variables. Sharing data with globals can only lead to pain. Sharing is your enemy. Keep it to a minimum.
If you need to share data do so in an explicit manner rather than using globals. And make sure the access to shared data is synchronized.
Don't call Randomize from inside your thread. It's not threadsafe. Call it once at startup.
Random is not threadsafe. Find a thread safe PRNG, or synchronise calls to Random. The former option is to be preferred.
Don't call TThread.Synchronize from the main thread.
Manage lifetime the standard way. When you create an object, use try and finally to protect its lifetime. Don't create objects in one function and destroy them in some other function. (A minimal sketch of this appears at the end of this answer.)
Format your code so that it is readable. If you cannot read your code, how will you ever understand it?
With the greatest of respect, it's clear that you've not mastered serial programming yet. You should aim to be proficient at serial programming before moving to parallel programming. Parallel programming is at least an order of magnitude harder.
With that in mind, try to write a good, clean version of your program in serial form. Then think about how to transform it to a parallel version.
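A minimal sketch of the lifetime tip, using the event and critical section from the question (RunSearch is an illustrative name, not part of the original code):
procedure RunSearch;
var
  Event1: TEvent;
  sequencesection: TCriticalSection;
begin
  Event1 := TEvent.Create(nil, True, False, '');
  try
    sequencesection := TCriticalSection.Create;
    try
      // ... create and run the tasks, wait for a result, report it ...
    finally
      sequencesection.Free;   // freed in the same function that created it
    end;
  finally
    Event1.Free;
  end;
end;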

How to make multiple threads do work on a single TStringList

I want to be able to make multiple threads do work on a single TStringList. This is my current thread code for using a single thread to do the work:
type
  TFilterThread = class(TThread)
  protected
    procedure Execute; override;
  public
    lCombos: TStringList;
    //Public Vars
  end;

procedure TFilterThread.Execute;
var
  I: integer;
  HTML: String;
  frm1: TForm1;
  splitLogin: TStringList;
  validCount: Integer;
begin
  validCount := 0;
  for I := 0 to lCombos.Count - 1 do
  begin
    if Terminated then Exit();
    Unit1.Form1.ListBox1.ItemIndex := i;
    try
      HTML := Unit1.Form1.IdHTTP1.Get(EmailCheckURL + lCombos[i]);
      if AnsiPos('You indicated you', HTML) > 0 then
      begin
        //Do stuff
      end;
    except
      Continue;
    end;
  end;
  Showmessage('Finished!');
  Unit1.Form1.Button1.Caption := 'Start';
end;
To start the thread I use:
lComboLsit := TStringList.Create;
for I := 0 to listBox1.Items.Count - 1 do
  lComboLsit.Add(listBox1.Items[i]);

iTFilterThread := TFilterThread.Create(True);
iTFilterThread.FreeOnTerminate := True;
iTFilterThread.lCombos := lComboLsit;
iTFilterThread.Start;
How would I introduce another thread to also do work on the lCombos list so that the operation would complete quicker?
The answer is that it depends. If the string list is not modified, then there's nothing to do. If the string list is modified, then what is best depends critically on your specific usage.
Looking at your code, your program may not be CPU bound, so adding more threads may not help. Your program's bottleneck will be the HTTP communication. However, despite not being CPU bound, it is plausible that running multiple threads will reduce the impact of HTTP latency. You can benchmark to find out. You don't appear to be modifying the string list, so there are no race problems to be concerned with there.
However, here's a problem:
Unit1.Form1.IdHTTP1.Get(EmailCheckURL + lCombos[i]);
That will work, and is thread safe so long as TIdHTTP is thread safe. But it's pretty ugly to allow a thread to access a component on a form like that. And I don't see any real sense or need to share the TIdHTTP instance between threads. It would be far cleaner to let each thread instantiate and use their own TIdHTTP component.
Of course, you will need to decide on a policy for dividing the work between all your threads. You could have a shared index that keeps track of the next item to process. Have each thread increment it atomically each time they take an item. A parallel for loop would be a good fit here. That's available in the latest version of Delphi, or in any decent third party parallel library.
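Putting those two points together, a rough sketch might look like this (assuming uses IdHTTP and System.SyncObjs; TFilterWorker, FNextIndex and the reuse of EmailCheckURL from the question are illustrative, not a drop-in replacement):
type
  TFilterWorker = class(TThread)
  public
    FCombos: TStringList;      // read-only while the workers run
    FNextIndex: PInteger;      // points at a counter shared by all workers
  protected
    procedure Execute; override;
  end;

procedure TFilterWorker.Execute;
var
  HTTP: TIdHTTP;
  i: Integer;
  HTML: string;
begin
  HTTP := TIdHTTP.Create(nil);   // each thread owns its own HTTP client
  try
    repeat
      i := TInterlocked.Increment(FNextIndex^) - 1;   // atomically claim the next item
      if i >= FCombos.Count then
        Break;
      try
        HTML := HTTP.Get(EmailCheckURL + FCombos[i]);
        // ... process HTML ...
      except
        // ignore failures for this item and move on
      end;
    until Terminated;
  finally
    HTTP.Free;
  end;
end;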
You do have some problems with your code. In the thread procedure you do this:
Unit1.Form1.ListBox1.ItemIndex := i;
....
Unit1.Form1.Button1.Caption := 'Start';
You cannot access VCL components from a thread.
And then you do this
ShowMessage('Finished!');
Don't show UI from a thread.
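If you do need to report progress or completion, marshal that work back to the main thread; for example, a sketch using TThread.Queue:
TThread.Queue(nil,
  procedure
  begin
    Unit1.Form1.Button1.Caption := 'Start';   // now executed in the main thread
  end);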
A point of idiom. Instead of looping over the items in your list box you can simply do this:
lComboLsit.Assign(listBox1.Items);

Delphi OTL: Why does a multithreading program use only half the available CPUs? [duplicate]

This question already exists:
Delphi CPU Usage low during MultiThreading
Closed 8 years ago.
While executing my multithreading program, only 4 of the 8 available CPUs are being used. Why?
What can I change to make all CPU's working?
Parallel.ForEach(0, CalcList.Count-1)
  .NumTasks(nMax)
  .NoWait
  .Execute(
    procedure(const value: integer)
    begin
      CalcUnit.EntrySearch(value);
    end);
(nMax and the CalcList.Count are both 16, Intel I7 HyperThreaded)
Thank you
I just did a test on an i7 2600 (4 cores 8 HT) using OTL. A simple Parallel.ForEach loop makes use of all 8. With and without the .NumTasks that you have. There is no problem with the library.
begin
  Parallel.ForEach(0, 100)
    //.NumTasks(16)
    .Execute(
      procedure(const value: integer)
      var
        newValue: Single;
        I: Integer;
      begin
        newValue := value;
        for I := 1 to 100000000 do
        begin
          newValue := newValue * I;
          newValue := newValue / I;
        end;
      end);
  ShowMessage('Done!');
end;
My guess is that the problem is in your code. Disk accesses in threads are a good way to counter the benefits of using threads in the first place.
I don't know enough about your code but you should rather look at reading in the data in a single thread and then threading the actual processing of that data.
I see that you also have .NoWait specified. Are you saving the return value of your Parallel.ForEach? It's a good idea to save this value because otherwise your code will block when the OnClick exits. See gabr's answer to this question:
Why is OmniThreadLibrary's ForEach blocking main thread?
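As a rough sketch of that advice (assuming OTL's Parallel.ForEach(low, high) returns an IOmniParallelLoop<Integer>):
// FLoop is a field of the form, so the loop interface outlives Button1Click:
//   FLoop: IOmniParallelLoop<Integer>;
FLoop := Parallel.ForEach(0, CalcList.Count - 1)
           .NumTasks(nMax)
           .NoWait;
FLoop.Execute(
  procedure(const value: integer)
  begin
    CalcUnit.EntrySearch(value);
  end);
// Release the reference (FLoop := nil) once the loop reports completion,
// e.g. from an OnStop handler, rather than letting it go out of scope here.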

Delphi CPU Usage low during MultiThreading

While executing a multithreading program I see 8 threads starting in the Delphi event log.
(My CPU is an Intel i7 with 4 hyper-threaded cores, so 8 calculation cores.) But in Task Manager, on the Performance tab, I see only 12% CPU usage and only one core calculating, at about 70-80%. I compiled my multithreading program with OTL and with Parallel.ForEach, but still only 12% usage and only one core doing the work.
On my Form1 I have a ButtonClick procedure with the OTL Parallel.ForEach which iterates over the items of a StringList.
Each line of the StringList contains a Name, a Path to a different DataFile and the DataFormat of the file.
The ForEach.Execute() starts an 'EntrySearch' procedure in another unit.
The EntrySearch procedure starts with the extraction of the info from the appropriate line of the StringList.
In a 'while X < Y' loop the data is extracted from the DataFile through AssignFile and a 'while not eof' read of the data lines. Calculations are made on the data until the 'while X < Y' loop ends.
I can see that 8 (CPUCount) threads are started in the ButtonClick procedure. In Task Manager I see only one CPU core start working, for a total of about 12% processor usage.
When, after the calculations, the processor usage returns to 0%, the .exe is hanging and I have no control over the program.
From the little data I can extract out of the calculation unit I get only data from the last started thread, as if this last thread makes the other threads stop, so they cannot make their calculations and cannot terminate.
{the OTL in the ButtonClick procedure}
Parallel.ForEach(0, StrList.Count-1)
  .PreserveOrder
  .NumTasks(CPUCount)
  .NoWait
  .Execute(
    procedure(const value: integer)
    begin
      CalcUnit.EntrySearch(value);
    end);

{procedure in CalcUnit}
procedure EntrySearch(value: integer);
begin
  {extract Name, Path to DataFile and DataFormat from the StringList}
  while X < Y do begin
    AssignFile(qMSInputFile7, Path);
    {$I-} reset(qMSInputFile7); {$I+}
    while not eof(qMSInputFile7) do begin
      with qMetaRec7 do begin
        Read(qMSInputFile7, qMetaRec7);
        {Extract the Data}
      end;
    end; // while not eof
    {Make calculations}
  end; // while X < Y
end;
What goes wrong, and how can I solve this?
Thanks a lot.

Parallel write to array slower than serial write using OmniThreadLibrary

I am working on an implementation of the Differential Evolution optimization algorithm, and want to speed up the calculation time by calculating population members in parallel.
I am using the OmniThread library, and have successfully parallelized my loop, only to find that it runs SLOWER than the serial implementation.
I have reduced the code to its essence to test the parallelization, and the reduced version exhibits the same problem: the parallel version is slower than the serial one.
The key is that I pass multiple dynamic arrays to which output should be written for each member of the population. Each array has one dimension dedicated to the population member, so for each population member a different set of array indices is accessed. This also means that in the parallel implementation no two threads will write to the same array element.
Below is the code I used to test (the actual Differential Evolution code has a DoWork procedure with even more const parameters and var arrays):
unit Unit1;
interface
type
TGoalFunction = reference to function(const X, B: array of extended): extended;
TArrayExtended1D = array of extended;
TArrayExtended2D = array of TArrayExtended1D;
TClassToTest = class abstract
private
class procedure DoWork(const AGoalFunction: TGoalFunction; const AInputArray: TArrayExtended2D; var AOutputArray1: TArrayExtended1D; var AOutputArray2: TArrayExtended2D; const AIndex, AIndex2: integer);
public
class procedure RunSerial;
class procedure RunParallel;
end;
function HyperSphere(const X, B: array of extended): extended;
const
DIMENSION1 = 5000;
DIMENSION2 = 5000;
LOOPS = 10;
implementation
uses
OtlParallel;
function HyperSphere(const X, B: array of extended): extended;
var
I: Integer;
begin
Result := 0;
for I := 0 to Length(X) - 1 do
Result := Result + X[I]*X[I];
end;
{ TClassToTest }
class procedure TClassToTest.DoWork(const AGoalFunction: TGoalFunction; const AInputArray: TArrayExtended2D; var AOutputArray1: TArrayExtended1D; var AOutputArray2: TArrayExtended2D; const AIndex, AIndex2: integer);
var
I: Integer;
begin
AOutputArray1[AIndex] := AGoalFunction(AInputArray[AIndex], []);
for I := 0 to Length(AOutputArray2[AIndex]) - 1 do
AOutputArray2[AIndex, I] := Random*AIndex2;
end;
class procedure TClassToTest.RunParallel;
var
LGoalFunction: TGoalFunction;
LInputArray: TArrayExtended2D;
LOutputArray1: TArrayExtended1D;
LOutputArray2: TArrayExtended2D;
I, J, K: Integer;
begin
SetLength(LInputArray, DIMENSION1, DIMENSION2);
for I := 0 to DIMENSION1 - 1 do
begin
for J := 0 to DIMENSION2 - 1 do
LInputArray[I, J] := Random;
end;
SetLength(LOutputArray1, DIMENSION1);
SetLength(LOutputArray2, DIMENSION1, DIMENSION2);
LGoalFunction := HyperSphere;
for I := 0 to LOOPS - 1 do
begin
Parallel.ForEach(0, DIMENSION1 - 1).Execute(
procedure (const value: integer)
begin
DoWork(LGoalFunction, LInputArray, LOutputArray1, LOutputArray2, value, I);
end
);
for J := 0 to DIMENSION1 - 1 do
begin
for K := 0 to DIMENSION2 - 1 do
LInputArray[J, K] := LOutputArray2[J, K];
end;
end;
end;
class procedure TClassToTest.RunSerial;
var
LGoalFunction: TGoalFunction;
LInputArray: TArrayExtended2D;
LOutputArray1: TArrayExtended1D;
LOutputArray2: TArrayExtended2D;
I, J, K: Integer;
begin
SetLength(LInputArray, DIMENSION1, DIMENSION2);
for I := 0 to DIMENSION1 - 1 do
begin
for J := 0 to DIMENSION2 - 1 do
LInputArray[I, J] := Random;
end;
SetLength(LOutputArray1, DIMENSION1);
SetLength(LOutputArray2, DIMENSION1, DIMENSION2);
LGoalFunction := HyperSphere;
for I := 0 to LOOPS - 1 do
begin
for J := 0 to DIMENSION1 - 1 do
begin
DoWork(LGoalFunction, LInputArray, LOutputArray1, LOutputArray2, J, I);
end;
for J := 0 to DIMENSION1 - 1 do
begin
for K := 0 to DIMENSION2 - 1 do
LInputArray[J, K] := LOutputArray2[J, K];
end;
end;
end;
end.
I was expecting a speedup of around x6 on my 8-core processor, but was faced with a slight slowdown. What should I change to get the speedup from running the DoWork procedure in parallel?
Note that I'd prefer to keep the actual work in the DoWork procedure, since I have to be able to call the same algorithm with and without parallelization (boolean flag) while keeping the body of the code shared for easy maintenance
This is due to the lack of thread safety of Random, the implementation of which is:
// global var
var
  RandSeed: Longint = 0;    { Base for random number generator }

function Random: Extended;
const
  two2neg32: double = ((1.0/$10000) / $10000); // 2^-32
var
  Temp: Longint;
  F: Extended;
begin
  Temp := RandSeed * $08088405 + 1;
  RandSeed := Temp;
  F := Int64(Cardinal(Temp));
  Result := F * two2neg32;
end;
Because RandSeed is a global variable, which is modified by a call to Random, the threads end up having contended writes to RandSeed. And those contended writes cause your performance problem. They effectively serialize your parallel code. Severely enough to make it slower than the true serial code.
Add the code below to the top of the implementation section of your unit and you'll see the difference:
threadvar
  RandSeed: Longint;

function Random: Double;
const
  two2neg32: double = ((1.0/$10000) / $10000); // 2^-32
var
  Temp: Longint;
  F: Double;
begin
  Temp := RandSeed * $08088405 + 1;
  RandSeed := Temp;
  F := Int64(Cardinal(Temp));
  Result := F * two2neg32;
end;
With that change to avoid shared, contended writes, you'll find that the parallel version is faster, as expected. You don't get linear scaling with processor count; my guess is that this is because your pattern of memory access is sub-optimal in the parallel version of the code.
I'm guessing that you are only using Random as a means to generate some data. But if you do need an RNG, you'll want to arrange for each task to use its own private instance of an RNG.
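A minimal sketch of such a per-task generator, using the same LCG constants as System.Random (it assumes overflow checks are off, as in the RTL):
type
  TLcg = record
    Seed: Cardinal;
    function NextDouble: Double;   // returns a value in [0, 1)
  end;

function TLcg.NextDouble: Double;
const
  Two2Neg32: Double = ((1.0 / $10000) / $10000);   // 2^-32
begin
  Seed := Seed * $08088405 + 1;    // wraps modulo 2^32
  Result := Seed * Two2Neg32;
end;
Each ForEach task would then declare its own TLcg, seed it (for example from the task index), and call NextDouble instead of Random.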
You can also speed up your code a little by using Sqr(X) rather than X*X, and by switching to Double instead of Extended.
Some time ago I experienced exactly the same issue. It turned out that the bottleneck was that OTL's Parallel.ForEach, when called with a range, creates a hidden enumerator, which becomes the bottleneck in cases where each task is very small and the loop is executed often.
A more performant solution looked something like this:
Parallel.ForEach(0, MAXCORES)
  .NumTasks(MAXCORES)
  .Execute(
    procedure(const p: Integer)
    var
      chunkSize: Integer;
      myStart, myEnd: Integer;
      i: Integer;
    begin
      chunkSize := DIMENSION div MAXCORES;
      myStart := p * chunkSize;
      myEnd := min(myStart + chunkSize - 1, DIMENSION - 1);
      for i := myStart to myEnd do
        DoSomething(i);
    end);
This code scaled up quite linearly, regardless of the load within the DoSomething call.
I've tried running this (with the Random fix and using Doubles) on an i7 (8 hyper-threads) and get times of 1650ms for parallel and 5240ms for serial. Given the code content, I don't find this to be a particularly unexpected scale-up. The code as it stands will have close to 100% successful pipeline prediction: all branches predicted, function call returns cached, even cache prefetch working well. On a typical modern PC this means that the code is probably going to be memory-bandwidth limited, in which case scale-up is going to depend a great deal on your memory performance rather than on how many cores you have.
The only other issue is potential contention for FPU resources, which will be highly dependent on your internal processor architecture.
I suspect that if the workload were more complex, a greater scale-up would be seen between serial and parallel, as the serial version would be losing time to code-triggered pipeline breaks whilst the parallel version would remain memory limited. I've done a fair bit of high-performance computing work in Delphi, and well-optimised algorithms doing simple calculations can become totally memory bound, with multi-threaded scale-ups of as little as 2 on a good 8-core machine due to memory bandwidth limits. This sort of issue is particularly well illustrated if you have over-clocking capability, as the performance yield from over-clocking the CPU gives a very good indication of the level of memory waits, since everything else speeds up proportionally to the over-clock.
If you want to get into the details of processor architecture and how it impacts what you are doing, then http://www.agner.org/optimize/ is a good place to learn how much there is to learn.
