Freeing a BSTR using ::SysFreeString(). More Platform Dependent? - visual-c++

I am writing a COM server that has plenty of interfaces and methods. Most of the methods take BSTR parameters and use local BSTR variables for the return values. A snippet looks like this:
Update 5:
The real code. This fetches a bunch of data from the DB, based on a specific condition, to populate an array of objects.
STDMETHODIMP CApplication::GetAllAddressByName(BSTR bstrParamName, VARIANT *vAdddresses)
{
    AFX_MANAGE_STATE(AfxGetStaticModuleState())
    // check the database server connection
    COleSafeArray saAddress;
    HRESULT hr;
    // prepare the SQL strings and query the DB
    long lRecCount = table.GetRecordCount();
    if (lRecCount > 0)
    {
        // create a one-dimensional safe array for putting the details
        saAddress.CreateOneDim(VT_DISPATCH, lRecCount);
        IAddress *pIAddress = NULL;
        // retrieve details
        for (long iRet = table.MoveFirst(), iCount = 0; !iRet; iRet = table.MoveNext(), iCount++)
        {
            CComObject<CAddress> *pAddress;
            hr = CComObject<CAddress>::CreateInstance(&pAddress);
            if (SUCCEEDED(hr))
            {
                BSTR bstrStreet = ::SysAllocString(table.m_pRecordData->Street);
                pAddress->put_StreetName(bstrStreet);
                BSTR bstrCity = ::SysAllocString(table.m_pRecordData->City);
                pAddress->put_CityName(bstrCity);
            }
            hr = pAddress->QueryInterface(IID_IAddress, (void**)&pIAddress);
            if (SUCCEEDED(hr))
            {
                saAddress.PutElement(&iCount, pIAddress);
            }
        }
        *vAdddresses = saAddress.Detach();
    }
    table.Close();
    return S_OK;
}
STDMETHODIMP CAddress::put_CityName(BSTR bstrCityName)
{
    AFX_MANAGE_STATE(AfxGetStaticModuleState())
    // m_sCityName is of CComBSTR type
    m_sCityName.Empty();                          // free the old string
    m_sCityName = ::SysAllocString(bstrCityName); // allocate memory for the new string
    return S_OK;
}
The problem lies in the memory-freeing part. The code works fine on any Windows XP machine, but on Windows Server 2008 R2 and Windows 7 it crashes, pointing to ::SysFreeString() as the culprit. MSDN does not offer an adequate solution.
Can anyone please help in finding the right solution?
Thanks a lot in advance :)
Update 1:
I have tried using CComBSTR instead of raw BSTR as suggested, initializing it directly from CStrings and removing the SysFreeString() calls. But to my dismay, when the CComBSTR goes out of scope it calls SysFreeString() itself, which again causes the crash :(
Update 2:
With the same CComBSTR I tried allocating via SysAllocString(); the problem remains the same :(
Update 3:
I am tired of all the options, and in the end only one question remains in my mind:
Is it necessary to free a BSTR through SysFreeString() when it was allocated using SysAllocString()/CString::AllocSysString()?
Update 4:
I missed providing information about the crash. When I tried to debug, the COM server crashed with an error saying
"Possible Heap Corruption".
Please help me out here.. :(

// Now all things are packed into the object
obj.Name = bstrName;
obj.Name2 = bstrname2;
I don't quite understand what you mean by saying that things are "packed", since you're just copying pointers to the strings; the moment you call SysFreeString, obj.Name and obj.Name2 will point to an invalid block of memory. Although this code is not safe, it looks as if the source of your problem is class CFoo. You should show us more details of your code.
I suggest you use the CComBSTR class, which will take responsibility for releasing the memory.
UPDATE
#include <atlbase.h>
using namespace ATL;
...
{
    CComBSTR bstrname(_T("Some Name"));
    CComBSTR bstrname2(_T("Another Name"));
    // Here one may work with these variables if needed
    ...
    // Copy the local values into the obj's member variables
    bstrname.CopyTo(&obj.Name);
    bstrname2.CopyTo(&obj.Name2);
}
UPDATE2
First of all, one should free bstrCity and bstrStreet with SysFreeString, or use CComBSTR instead, within this block:
if (SUCCEEDED(hr))
{
    BSTR bstrStreet = ::SysAllocString(table.m_pRecordData->Street);
    pAddress->put_StreetName(bstrStreet);
    BSTR bstrCity = ::SysAllocString(table.m_pRecordData->City);
    pAddress->put_CityName(bstrCity);
    // ::SysFreeString(bstrStreet);
    // ::SysFreeString(bstrCity);
}
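The CComBSTR alternative would look roughly like this (a sketch: each CComBSTR frees its string automatically at the end of the block, and converts implicitly to BSTR for the put_ calls):
if (SUCCEEDED(hr))
{
    CComBSTR bstrStreet(table.m_pRecordData->Street);
    pAddress->put_StreetName(bstrStreet);
    CComBSTR bstrCity(table.m_pRecordData->City);
    pAddress->put_CityName(bstrCity);
    // both strings are released automatically when they go out of scope
}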
Consider strengthening the loop's condition !iRet with iCount < lRecCount:
for(...; !iRet /* && (iCount < lRecCount) */; ...)
Also, here:
m_sCityName = ::SysAllocString(bstrCityName);
you allocate memory but never release it, since CComBSTR's operator=(LPCOLESTR) internally allocates new storage itself. One should rewrite it as follows:
m_sCityName = bstrCityName;
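Put together, the setter then reduces to this (a sketch, assuming m_sCityName is the CComBSTR member described in the comments above):
STDMETHODIMP CAddress::put_CityName(BSTR bstrCityName)
{
    AFX_MANAGE_STATE(AfxGetStaticModuleState())
    // CComBSTR::operator= frees the old string and makes its own copy,
    // so no explicit Empty()/SysAllocString() pair is needed
    m_sCityName = bstrCityName;
    return S_OK;
}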
Everything else looks good to me.
UPDATE3
Well, heap corruption is often a consequence of writing values outside of an allocated memory block. Say you allocate an array of length 5 and then write a value at the 6th position.
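A contrived illustration (not from the project code) of the kind of write that damages the heap bookkeeping; the crash usually surfaces later, inside an unrelated allocation or free:
BYTE *p = (BYTE*)::CoTaskMemAlloc(5);
p[5] = 0;             // one byte past the 5-byte block: heap corruption
::CoTaskMemFree(p);   // may crash here, or much later somewhere else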

Finally, I have found the real reason for the heap corruption in the code.
The put_StreetName/put_CityName methods of IAddress/CAddress are designed in the following way:
STDMETHODIMP CAddress::put_CityName(BSTR bstrCityName)
{
    AFX_MANAGE_STATE(AfxGetStaticModuleState())
    m_sCityName.Empty();
    TrimBSTR(bstrCityName);
    m_sCityName = ::SysAllocString(bstrCityName);
    return S_OK;
}
BSTR CAddress::TrimBSTR(BSTR bstrString)
{
    CString sTmpStr(bstrString);
    sTmpStr.TrimLeft();
    sTmpStr.TrimRight();
    SysReAllocString(&bstrString, sTmpStr); // The Devilish Line
}
That devilish line of code is the real culprit that sent the memory to hell.
What caused the trouble?
In that line, the BSTR passed as a parameter comes from another application, so the real memory lives in another realm. The system tries to reallocate the string in place; whether that succeeds or not, the original application/realm later tries to free the same memory, thus causing a crash.
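A safer shape for this (a sketch, not the code as shipped) is to trim a copy and hand ownership of the new string to the CComBSTR member, leaving the caller's BSTR untouched:
BSTR CAddress::TrimBSTR(BSTR bstrString)
{
    CString sTmpStr(bstrString);
    sTmpStr.TrimLeft();
    sTmpStr.TrimRight();
    // allocate a fresh BSTR for the trimmed text; the caller's memory is never touched
    return sTmpStr.AllocSysString();
}
// in put_CityName:
m_sCityName.Empty();
m_sCityName.Attach(TrimBSTR(bstrCityName)); // the member takes ownership of the trimmed copy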
What is still unsolved?
Why did the same piece of code never crash, not even once, on Windows XP and older systems? :(
Thanks to all who took their time to answer and solve my problem :)

Related

Why does my Groovy AST transform insert null at the end of my method?

I have written an AST transform that creates a setter for a JPA mapped property (it both sets the local field and calls setOwner on the other end of the relationship):
private static void createSetter(FieldNode field) {
    Parameter parameter = GeneralUtils.param(field.getType(), field.getName());
    BlockStatement body = new BlockStatement();
    body.addStatement(assignS(fieldX(field), varX(parameter)));
    MethodCallExpression setterCall = callX(varX(parameter), "setOwner", varX("this", field.getDeclaringClass()));
    setterCall.setType(ClassHelper.VOID_TYPE);
    body.addStatement(stmt(setterCall));
    MethodNode method = new MethodNode(setterName(field.getName()), ACC_PUBLIC, ClassHelper.VOID_TYPE, new Parameter[] {parameter}, ClassNode.EMPTY_ARRAY, body);
    field.getDeclaringClass().addMethod(method);
}
This works, but the generated method has a strange null statement at the end as disassembled by JD-GUI (in addition to an odd local variable):
public void setMore(Simple_MoreStuff more) {
    Simple_MoreStuff localSimple_MoreStuff = more;
    this.more = localSimple_MoreStuff;
    more.setOwner(this);
    null;
}
It doesn't seem to affect the actual correctness, but it's odd, and it seems like a bug. In MethodCallExpression, I found this comment but don't know if it relates, since my method is in fact void (I explicitly set it above, and it makes no difference):
//TODO: set correct type here
// if setting type and a methodcall is the last expression in a method,
// then the method will return null if the method itself is not void too!
// (in bytecode after call: aconst_null, areturn)
Is there a way to keep the generated method from having the spurious null?
I have not looked at JD-GUI, so I cannot tell how capable this tool is at understanding bytecode that does not come from Java. But in general, decompilers can only approximate what the Java code might look like; they are by no means expected to produce correct code for a non-Java language. So don't expect correct Java code when you disassemble Groovy.
In this case I suspect that JD-GUI stumbles over a workaround we have not gotten rid of yet. In several cases we add dead code at the end of a method, the aconst_null, areturn you have noticed. We do this because of problems with the verifier if a bytecode label is used at the end of a method. And since the dead code does not influence correctness, we are currently using this solution.

Cairo.Surface is leaking... How to debug it with Monodevelop?

I have many questions related to Cairo and GTK# (which runs on .NET and Mono). I'm developing a GTK# application for MS Windows and Linux. I'm using GTK# 2.12 on .NET right now while I work on the application.
I've created a custom widget that uses Cairo.ImageSurface and Cairo.Context objects. As far as I know, I'm calling the Dispose method of every ImageSurface object and every Context object I create inside the widget code.
The widget responds to the "MouseOver" event, redrawing some parts of its DrawingArea.
The (first) problem: almost every redraw operation increases the amount of used memory a little. Once the used memory has grown by 3 or 4 KB, the MonoDevelop trace log panel shows me the following message:
Cairo.Surface is leaking, programmer is missing a call to Dispose
Set MONO_CAIRO_DEBUG_DISPOSE to track allocation traces
The code that redraws a part of the widget is something like:
// SRGB is a custom struct, not from Gdk nor Cairo
void paintSingleBlock(SRGB color, int i)
{
    using (Cairo.Context g = CairoHelper.Create (GdkWindow)) {
        paintSingleBlock (g, color, i);
        // We do this to avoid memory leaks. Cairo does not work well with the GC.
        g.GetTarget().Dispose ();
        g.Dispose ();
    }
}
void paintSingleBlock(Cairo.Context g, SRGB color, int i)
{
    var scale = Math.Pow (10.0, TimeScale);
    g.Save();
    g.Rectangle (x(i), y(i), w(i), h(i));
    g.ClosePath ();
    g.Restore ();
    // We don't directly use stb.Color because in some cases we need more flexibility
    g.SetSourceRGB (color.R, color.G, color.B);
    g.LineWidth = 0;
    g.Fill ();
}
The (second) problem: OK, MonoDevelop tells me that I should set MONO_CAIRO_DEBUG_DISPOSE to "track allocation traces" (in order to find the leak, I suppose)... but I don't know how to set this environment variable (I'm on Windows). I've tried using bash and executing something like:
MONO_CAIRO_DEBUG_DISPOSE=1 ./LightCreator.exe
But nothing appears on stderr or stdout... (not even the messages that appear in MonoDevelop's application trace panel). I also don't know how to get the debugging messages I see inside MonoDevelop without running MonoDevelop.
Is there anyone with experience debugging GTK# or Cairo# memory leaks?
Thanks in advance.
Just wanted to throw my 2c here, as I was fighting a similar leak problem in Cairo with surfaces. What I noticed is that if I create a Surface object, the ReferenceCount property becomes 1, and if I attach this surface to a Context it becomes not 2 but 3. After disposing the Context, the ReferenceCount comes back down, but only to 2.
So I used some reflection to call the native methods in Cairo to decrease the ReferenceCount when I really want to Dispose a surface. I use this code:
public static void HardDisposeSurface (this Surface surface)
{
    var handle = surface.Handle;
    long refCount = surface.ReferenceCount;
    surface.Dispose ();
    refCount--;
    if (refCount <= 0)
        return;
    var asm = typeof (Surface).Assembly;
    var nativeMethods = asm.GetType ("Cairo.NativeMethods");
    var surfaceDestroy = nativeMethods.GetMethod ("cairo_surface_destroy", BindingFlags.Static | BindingFlags.NonPublic);
    for (long i = refCount; i > 0; i--)
        surfaceDestroy.Invoke (null, new object [] { handle });
}
After using it I still have some leaks, but they seem to be related to other parts of Cairo and not with the surfaces.
I have found that a context created with CairoHelper.Create() will have a reference count of two.
A call to dispose reduces the reference count by one. Thus the context is never freed and keeps its target alive, too.
The native objects have manual reference counting, but the Gtk# wrappers want to keep a native object alive as long as there is a C# instance referencing it.
If a native object is created for a C# wrapper instance it does not need to increment the reference count because the wrapper instance 'owns' the native object and the reference count has the correct value of one. But if a wrapper instance is created for an already existing native object the reference count of the native object needs to be manually incremented to keep the object alive.
This is decided by a bool parameter when a wrapper instance is created.
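For reference, this is how the native reference counting behaves when driven directly through the cairo C API (a minimal sketch in plain C/C++, outside Gtk#, just to illustrate the ownership rule):
#include <cairo.h>

void refcount_demo(void)
{
    cairo_surface_t *surface =
        cairo_image_surface_create(CAIRO_FORMAT_ARGB32, 100, 100);  // surface refcount: 1
    cairo_t *cr = cairo_create(surface);                            // the context holds a reference: 2
    /* ... draw with cr ... */
    cairo_destroy(cr);                                              // surface refcount back to 1
    cairo_surface_destroy(surface);                                 // refcount 0: the surface is freed
}
The wrapper classes add their own count on top of this, which is where the extra reference discussed above comes from.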
Looking at the code for CairoHelper.Create() will show something like this
public static Cairo.Context Create(Gdk.Window window) {
    IntPtr raw_ret = gdk_cairo_create(window == null ? IntPtr.Zero : window.Handle);
    Cairo.Context ret = new Cairo.Context (raw_ret, false);
    return ret;
}
Even though the native context was just created, 'owned' will be false, so the C# context will increment the reference count.
There is no fixed version right now; it can only be corrected by fixing the source and building Gtk# yourself.
CairoHelper is an auto-generated file; to change the parameter to true, this attribute must be included in gdk/Gdk.metadata:
<attr path="/api/namespace/class[@cname='GdkCairo_']/method[@name='Create']/return-type" name="owned">true</attr>
Everything to build Gtk# can be found here.
https://github.com/mono/gtk-sharp

InstallShield calling advapi32.dll method type mismatch error

I am trying to call Advapi32.LsaOpenPolicy() from basic MSI InstallShield code. I've successfully called other advapi32.dll methods, but LsaOpenPolicy is throwing a mismatched-type error.
My prototype is:
prototype INT Advapi32.LsaOpenPolicy(POINTER, POINTER, INT, POINTER);
The windows definition is:
NTSTATUS LsaOpenPolicy(
    _In_    PLSA_UNICODE_STRING    SystemName,
    _In_    PLSA_OBJECT_ATTRIBUTES ObjectAttributes,
    _In_    ACCESS_MASK            DesiredAccess,
    _Inout_ PLSA_HANDLE            PolicyHandle
);
I've noted in C++ samples that the ObjectAttributes structure is zeroed out, so I do something similar here in the InstallShield code -- pArray points to the array contents.
for i = 0 to 11
    array(i) = 0;
endfor;
array(0) = 24;
// current error is 80020005 type mismatch.
try
    i = POLICY_CREATE_ACCOUNT | POLICY_LOOKUP_NAMES;
    pArray = array;
    pPolicy = NULL;
    nvOSResult = LsaOpenPolicy(NULL, pArray, i, pPolicy);
catch
    Sprintf(errString, "0x%08x", Err.Number);
    _Logger(hMSI, methodName, "LsaOpenPolicy Exception " + errString, INFORMATION, FALSE);
    nvOSResult = Err.Number;
endcatch;
There is not much other information I can find beyond the 80020005 error that is thrown; I've tried a few different argument constructions, but I can't get past this.
I've posted this in a Flexera and a Microsoft forum, but I have gotten no traction there. (References for posterity: flexera-link, microsoft-link.)
Any help or suggestions are welcome!
The answer to this question was to work around the interface between InstallShield and the system DLLs by moving all the workings into a C++ DLL. As the installation got more complex, I ended up with two separate DLL functions: one executed in dialog (non-admin) mode and one in deferred-execution (admin) mode.
In order to pass information, I used the MsiGetProperty() API, with MSI properties for both input and output variables.
Note that for deferred execution, I needed a CAD function on the InstallShield side to marshal data into the custom action data location; on the DLL side I extracted the data, again using MsiGetProperty(), but this time reading the "CustomActionData" property and then parsing the resulting string that contained the marshaled data.
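For what it's worth, the LSA call itself is straightforward from the C++ side of the DLL; a sketch of what such a function might look like (the name and error handling are illustrative, and the real DLL also read its inputs via MsiGetProperty as described above):
#include <windows.h>
#include <ntsecapi.h>   // LsaOpenPolicy/LsaClose; link with advapi32.lib

LSA_HANDLE OpenLocalLsaPolicy()
{
    LSA_OBJECT_ATTRIBUTES attrs;
    ZeroMemory(&attrs, sizeof(attrs));      // documented as ignored, but it must be zeroed
    LSA_HANDLE hPolicy = NULL;
    NTSTATUS status = LsaOpenPolicy(
        NULL,                               // NULL system name = the local machine
        &attrs,
        POLICY_CREATE_ACCOUNT | POLICY_LOOKUP_NAMES,
        &hPolicy);
    if (status != 0)                        // 0 == STATUS_SUCCESS; LsaNtStatusToWinError() maps the rest
        return NULL;
    return hPolicy;                         // caller releases it with LsaClose(hPolicy)
}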

Java Graphics synchronisation issues drawing dynamic data

I have a program where I want to draw data that is constantly being updated (it is microphone line data, incidentally). The data is an 8000-element array of doubles; I don't really care about 'losing' data that is overwritten between invocations of the paint method.
In my naive implementation it became obvious that there are synchronisation issues, where the audio data is updated while the painting routine is under way.
I'm also aware that I'm slightly out of date on Java and its concurrency package, but my first response was to just put synchronised blocks around the shared-data code. Unsurprisingly this blocks the graphics thread sometimes, so I'm thinking there's probably a much better way of doing this.
Essentially I just don't have much experience with synchronisation and am screwing things up a bit somewhere. I wonder if someone with a better understanding of these matters might be able to suggest a more elegant solution that doesn't block the graphics thread?
My naive code:
Object lock = new Object();
double[] audio = new double[8000];
// array size is always exactly 8000
public void update( double[] audio ) {
    synchronized( lock ) {
        this.audio = audio; // and some brief processing
    }
    repaint();
}
public void paint( Graphics g ) {
    synchronized( lock ) {
        // draw the contents of this.audio
    }
}
Self-answer: unless someone more knowledgeable can offer something better, I just save a reference to the audio array at the beginning of the paint routine and draw from that; any update to the audio buffer does its calculations in a separate array and then assigns this.audio to the new array in a single step.
It appears to work. Although the paint routine still gets an occasional flicker, it is nothing like before, when it was flashing very noticeably about 10% of the time due to synchronised blocking. The audio data does not update halfway through the drawing routine either. So... problem solved. Probably.
double[] audio = new double[8000];
// array size is always exactly 8000
public void update( double[] audio ) {
    // do any brief processing
    this.audio = audio; // the reference is re-assigned in one step
    repaint();
}
public void paint( Graphics g ) {
    double[] audioNow = this.audio; // save the reference
    // draw the contents of audioNow (not this.audio)
}

Multi-threading on a foreach loop?

I want to process some data. I have about 25k items in a Dictionary. In a foreach loop, I query a database to get results for each item; the results are added as the value in the Dictionary.
foreach (KeyValuePair<string, Type> pair in allPeople)
{
    MySqlCommand comd = new MySqlCommand("SELECT * FROM `logs` WHERE IP = '" + pair.Key + "' GROUP BY src", con);
    MySqlDataReader reader2 = comd.ExecuteReader();
    Dictionary<string, Dictionary<int, Log>> allViews = new Dictionary<string, Dictionary<int, Log>>();
    while (reader2.Read())
    {
        if (!allViews.ContainsKey(reader2.GetString("src")))
        {
            allViews.Add(reader2.GetString("src"), reader2.GetInt32("time"));
        }
    }
    reader2.Close();
    reader2.Dispose();
    allPeople[pair.Key].View = allViews;
}
I was hoping to be able to do this faster by multi-threading. I have 8 threads available, and CPU usage is about 13%. I just don't know if it will work because it's relying on the MySQL server. On the other hand, maybe 8 threads would open 8 DB connections, and so be faster.
Anyway, if multi-threading would help in my case, how? o.O I've never worked with (multiple) threads, so any help would be great :D
MySqlDataReader is stateful - you call Read() on it and it moves to the next row, so each thread needs its own reader, and you need to concoct a query so they get different values. That might not be too hard, as you naturally have many queries with different values of pair.Key.
You also need to either have a temp dictionary per thread, and then merge them, or use a lock to prevent concurrent modification of the dictionary.
The above assumes that MySQL will allow a single connection to perform concurrent queries; otherwise you may need multiple connections too.
First though, I'd see what happens if you only ask the database for the data you need ("SELECT src,time FROM `logs` WHERE IP = '" + pair.Key + "' GROUP BY src") and use GetString(0) and GetInt32(1) instead of using the names to look up the src and time; also only get the values once from the result.
I'm also not sure on the logic - you are not ordering the log events by time, so which one is the first returned (and so is stored in the dictionary) could be any of them.
Something like this logic - where each of N threads only operates on the Nth pair, each thread has its own reader, and nothing actually changes allPeople, only the properties of the values in allPeople:
private void RunSubQuery(Dictionary<string, Type> allPeople, MySqlConnection con, int threadNumber, int threadCount)
{
    int hoppity = 0; // used to hop over the keys not processed by this thread
    foreach (var pair in allPeople)
    {
        // each of the (threadCount) threads only processes the (threadCount)th key
        if ((hoppity % threadCount) == threadNumber)
        {
            // you may need con per thread, or it might be that you can share con; I don't know
            MySqlCommand comd = new MySqlCommand("SELECT src,time FROM `logs` WHERE IP = '" + pair.Key + "' GROUP BY src", con);
            using (MySqlDataReader reader = comd.ExecuteReader())
            {
                var allViews = new Dictionary<string, Dictionary<int, Log>>();
                while (reader.Read())
                {
                    string src = reader.GetString(0);
                    int time = reader.GetInt32(1);
                    // do whatever to allViews with src and time
                }
                // no thread will be modifying the same pair.Value, so this is safe
                pair.Value.View = allViews;
            }
        }
        ++hoppity;
    }
}
This isn't tested - I don't have MySQL on this machine, nor do I have your database and the other types you're using. It's also rather procedural (kind of how you would do it in Fortran with OpenMPI) rather than wrapping everything up in task objects.
You could launch threads for this like so:
void RunQuery(Dictionary<string, Type> allPeople, MySqlConnection connection)
{
    lock (allPeople)
    {
        const int threadCount = 8; // the number of threads
        // if it takes 18 seconds currently and you're not at .net 4 yet, then you may as well create
        // the threads here as any saving of using a pool will not matter against 18 seconds
        //
        // it could be more efficient to use a pool so that each thread takes a pair off of
        // a queue, as doing it this way means that each thread has the same number of pairs to process,
        // and some pairs might take longer than others
        Thread[] threads = new Thread[threadCount];
        for (int threadNumber = 0; threadNumber < threadCount; ++threadNumber)
        {
            int localThreadNumber = threadNumber; // copy for the closure, so each thread sees its own number
            threads[threadNumber] = new Thread(new ThreadStart(() => RunSubQuery(allPeople, connection, localThreadNumber, threadCount)));
            threads[threadNumber].Start();
        }
        // wait for all threads to finish
        for (int threadNumber = 0; threadNumber < threadCount; ++threadNumber)
        {
            threads[threadNumber].Join();
        }
    }
}
The extra lock held on allPeople is done so that there is a write barrier after all the threads return; I'm not quite sure if it's needed. Any object would do.
Nothing in this guarantees any performance gain - it might be that the MySQL libraries are single threaded, but the server certainly can handle multiple connections. Measure with various numbers of threads.
If you're using .net 4, then you don't have to mess around creating the threads or skipping the items you aren't working on:
// this time using .net 4 parallel; assumes that connection is thread safe
static void RunQuery(Dictionary<string, Type> allPeople, MySqlConnection connection)
{
    Parallel.ForEach(allPeople, pair => RunPairQuery(pair, connection));
}
private static void RunPairQuery(KeyValuePair<string, Type> pair, MySqlConnection connection)
{
    MySqlCommand comd = new MySqlCommand("SELECT src,time FROM `logs` WHERE IP = '" + pair.Key + "' GROUP BY src", connection);
    using (MySqlDataReader reader = comd.ExecuteReader())
    {
        var allViews = new Dictionary<string, Dictionary<int, Log>>();
        while (reader.Read())
        {
            string src = reader.GetString(0);
            int time = reader.GetInt32(1);
            // do whatever to allViews with src and time
        }
        // no iteration will be modifying the same pair.Value, so this is safe
        pair.Value.View = allViews;
    }
}
The biggest problem that comes to mind is that you are going to use multithreading to add values to a dictionary, which isn't thread safe.
You'll have to do something like this to make it work, and you might not get that much of a benefit from implementing it this way, as it still has to lock the dictionary object to add a value.
Assumptions:
There is a table People in your database.
There are a lot of people in your database.
Each database query adds overhead, and you are doing one DB query for each person in your database. I would suggest it would be faster to get all the data back in one query than to make repeated calls:
select l.ip,l.time,l.src
from logs l, people p
where l.ip = p.ip
group by l.ip, l.src
Try this with a loop in a single thread; I believe this will be much faster than your existing code.
Within your existing code, another thing you can do is take the creation of the MySqlCommand out of the loop, prepare it in advance, and just change the parameter. This should speed up execution of the SQL. See http://dev.mysql.com/doc/refman/5.0/es/connector-net-examples-mysqlcommand.html#connector-net-examples-mysqlcommand-prepare
MySqlCommand comd = new MySqlCommand("SELECT * FROM `logs` WHERE IP = ?key GROUP BY src", con);
comd.prepare();
comd.Parameters.Add("?key","example");
foreach (KeyValuePair<string, Type> pair in allPeople)
{
comd.Parameters[0].Value = pair.Key;
If you are using multiple threads, each thread will still need its own Command. At least in MS SQL, this would still be faster even if you recreated and prepared the statement every time, because the SQL server is able to cache the execution plan of a parameterised statement.
Before you do anything else, find out exactly where the time is being spent. Check the execution plan of the query. The first thing I'd suspect is a missing index on logs.IP.
18 minutes for something like this seems much too long to me. Even if you could cut the execution time by a factor of eight by adding more threads (which is unlikely!), you would still end up using more than 2 minutes. You could probably read the whole 25k rows into memory in less than five seconds and do the necessary processing in memory...
EDIT: Just to clarify, I'm not advocating actually doing this in memory, just saying that it looks like there's a bigger bottleneck here that can be removed.
I think if you are running this on a multi core machine you could gain benefits from multi threading.
However, the way I would approach it is to first look at unblocking the thread you are currently using by making asynchronous database calls. The callbacks will execute on background threads, so you will get some multi-core benefit there, and you won't be blocking threads waiting for the DB to come back.
For IO-intensive apps like this one, you are likely to see improved throughput, depending on what load the DB can handle. Assuming the DB scales to handle more than one concurrent request, you should be good.
Thanks, everyone, for your help. Currently I am using this:
for (int i = 0; i < 8; i++)
{
ThreadPool.QueueUserWorkItem(addDistinctScres, i);
}
ThreadPool to run all the threads. I use the method provided by Pete Kirkham, and I'm creating a new connection per thread.
Times went down to 4 minutes.
Next I'll make something wait for the callback of the thread pool before performing other functions.
I think the bottleneck now is the MySQL server, because the CPU usage has dropped.
#odd parity: I thought about that, but the real data is way more than 25k rows. I don't know if that would work.
This sounds like the perfect job for map/reduce. I am not a .NET programmer, but this seems like a reasonable guide:
http://ox.no/posts/minimalistic-mapreduce-in-net-4-0-with-the-new-task-parallel-library-tpl
