Long-running / blocking operations in boost asio handlers - multithreading

Current Situation
I implemented a TCP server using boost.asio which currently uses a single io_service object on which I call the run method from a single thread.
So far the server was able to answer the requests of the clients immediately, since it had all the necessary information in memory (no long-running operations in the receive handler were necessary).
Problem
Now requirements have changed and I need to get some information out of a database (with ODBC) - which is basically a long-running blocking operation - in order to create the response for the clients.
I see several approaches, but I don't know which one is best (and there are probably even more approaches):
First Approach
I could keep the long running operations in the handlers, and simply call io_service.run() from multiple threads. I guess I would use as many threads as I have CPU cores available?
While this approach would be easy to implement, I don't think I would get the best performance with it because of the limited number of threads (which would be idle most of the time, since database access is an I/O-bound rather than a compute-bound operation).
Second Approach
In section 6 of this document it says:
Use threads for long running tasks
A variant of the single-threaded design, this design still uses a single io_service::run() thread for implementing protocol logic. Long running or blocking tasks are passed to a background thread and, once completed, the result is posted back to the io_service::run() thread.
This sounds promising, but I don't know how to implement that. Can anyone provide some code snippet / example for this approach?
Third Approach
Boris Schäling explains in section 7.5 of his boost introduction how to extend boost.asio with custom services.
This looks like a lot of work. Does this approach have any benefits compared to the other approaches?

These approaches are not mutually exclusive; I often see a combination of the first and second:
One or more threads process network I/O in one io_service.
Long-running or blocking tasks are posted into a different io_service that functions as a thread pool, so they will not interfere with the threads handling network I/O. Alternatively, one could spawn a detached thread every time a long-running or blocking task is needed; however, the overhead of thread creation and destruction may have a noticeable impact.
This answer provides a thread pool implementation. Additionally, here is a basic example that emphasizes the interaction between the two io_services.
#include <iostream>
#include <sstream>
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/chrono.hpp>
#include <boost/optional.hpp>
#include <boost/thread.hpp>

/// @brief The background service functions as a thread pool where
/// long-standing blocking operations may occur without affecting
/// the network event loop.
boost::asio::io_service background_service;

/// @brief The main io_service handles network operations.
boost::asio::io_service io_service;

boost::optional<boost::asio::io_service::work> work;

/// @brief Blocking ODBC operation.
///
/// @param data Data to use for the query.
/// @param handler Handler to invoke upon completion of the operation.
template <typename Handler>
void query_odbc(unsigned int data, Handler handler)
{
  std::cout << "in background service, start querying odbc\n";
  std::cout.flush();
  // Mimic busy work.
  boost::this_thread::sleep_for(boost::chrono::seconds(5));
  std::cout << "in background service, posting odbc result to main service\n";
  std::cout.flush();
  io_service.post(boost::bind(handler, data * 2));
}

/// @brief Functions as a continuation of handle_read, and will be
/// invoked with the result from ODBC.
void handle_read_odbc(unsigned int result)
{
  std::stringstream stream;
  stream << "in main service, got " << result << " from odbc.\n";
  std::cout << stream.str();
  std::cout.flush();
  // Allow io_service to stop in this example.
  work = boost::none;
}

/// @brief Mocked-up read handler that posts work into the background
/// service.
void handle_read(const boost::system::error_code& error,
                 std::size_t bytes_transferred)
{
  std::cout << "in main service, need to query odbc" << std::endl;
  typedef void (*handler_type)(unsigned int);
  background_service.post(boost::bind(&query_odbc<handler_type>,
                                      21,                // data
                                      &handle_read_odbc) // handler
  );
  // Keep the io_service event loop running in this example.
  work = boost::in_place(boost::ref(io_service));
}

/// @brief Loop to show concurrency.
void print_loop(unsigned int iteration)
{
  if (!iteration) return;
  std::cout << "  in main service, doing work.\n";
  std::cout.flush();
  boost::this_thread::sleep_for(boost::chrono::seconds(1));
  io_service.post(boost::bind(&print_loop, --iteration));
}

int main()
{
  boost::optional<boost::asio::io_service::work> background_work(
      boost::in_place(boost::ref(background_service)));

  // Dedicate 3 threads to performing long-standing blocking operations.
  boost::thread_group background_threads;
  for (std::size_t i = 0; i < 3; ++i)
    background_threads.create_thread(
        boost::bind(&boost::asio::io_service::run, &background_service));

  // Post a mocked-up 'handle read' handler into the main io_service.
  io_service.post(boost::bind(&handle_read,
                              make_error_code(boost::system::errc::success), 0));

  // Post a mock-up loop into the io_service to show concurrency.
  io_service.post(boost::bind(&print_loop, 5));

  // Run the main io_service.
  io_service.run();

  // Cleanup background.
  background_work = boost::none;
  background_threads.join_all();
}
And the output:
in main service, need to query odbc
in main service, doing work.
in background service, start querying odbc
in main service, doing work.
in main service, doing work.
in main service, doing work.
in main service, doing work.
in background service, posting odbc result to main service
in main service, got 42 from odbc.
Note that the single thread processing the main io_service posts work into the background_service, and then continues to process its event loop while the background_service blocks. Once the background_service gets a result, it posts a handler into the main io_service.

We have similar long-running tasks in our server (a legacy protocol with storages). Our server runs 200 threads to avoid blocking the service (yes, all 200 threads are calling io_service::run). It's not a great design, but it works well for now.
The only problem we had was with asio::strand, which uses so-called "implementations" that get locked while a handler is being called. We solved this by increasing the number of strand implementation buckets and by "detaching" tasks via io_service::post, without the strand wrap.
Some tasks may run for seconds or even minutes, and this works without issues at the moment.

Related

Do QThreads run in parallel?

I have two threads running and they simply print a message. Here is a minimal example of it.
Here is my Header.h:
#pragma once
#include <QtCore/QThread>
#include <QtCore/QDebug>

class WorkerOne : public QObject {
  Q_OBJECT
public Q_SLOTS:
  void printFirstMessage() {
    while (1) {
      qDebug() << "<<< Message from the FIRST worker" << QThread::currentThreadId();
    }
  }
};

class WorkerTwo : public QObject {
  Q_OBJECT
public Q_SLOTS:
  void printSecondMessage() {
    while (1) {
      qDebug() << ">>> Message from the SECOND worker" << QThread::currentThreadId();
    }
  }
};
And, of course, my main:
#include <QtCore/QCoreApplication>
#include "Header.h"

int main(int argc, char *argv[])
{
  QCoreApplication a(argc, argv);

  WorkerOne kek1;
  QThread t1;
  kek1.moveToThread(&t1);
  t1.setObjectName("FIRST THREAD");

  QThread t2;
  WorkerTwo kek2;
  kek2.moveToThread(&t2);
  t2.setObjectName("SECOND THREAD");

  QObject::connect(&t1, &QThread::started, &kek1, &WorkerOne::printFirstMessage);
  QObject::connect(&t2, &QThread::started, &kek2, &WorkerTwo::printSecondMessage);

  t1.start();
  t2.start();

  return a.exec();
}
When I start the application I see the expected output of it:
As you may see, the thread IDs are different; this was added to be sure they are running on different threads.
I set a single breakpoint in printFirstMessage and run the application in debug mode, attached to the debugger. Once the debugger stops at my breakpoint, I wait for a while and press Continue, so my debugger stops at the same breakpoint again.
What do I expect to see? I expect to see only one <<< Message from the FIRST worker and a lot of messages from the second worker. But what do I see? I see only two messages: the first one from the first worker and the second one from the second worker.
I pressed Continue many times and the result is more or less the same. That's weird to me, because I expected the second thread to be running while the first one is stopped by the debugger.
I decided to test it using std::thread and wrote the following code:
#include <thread>
#include <iostream>

void foo1() {
  while (true) {
    std::cout << "Function ONE\n";
  }
}

void foo2() {
  while (true) {
    std::cout << "The second function\n";
  }
}

int main() {
  std::thread t1(&foo1);
  std::thread t2(&foo2);
  t1.join();
  t2.join();
}
I set a breakpoint in the first function, started the app, and after stopping at the breakpoint I hit Continue and saw that the console contained a lot of messages from the second function and only one from the first function (exactly what I expected with QThread as well):
Could someone explain how this works with QThread? By the way, I tested it using QtConcurrent::run instead of QThread, and the result was as expected: the second function keeps running while the first one is stopped because of a breakpoint.
Yes, multiple QThread instances are allowed to run in parallel. Whether they effectively run in parallel is up to your OS and depends on multiple factors:
The number of physical (and logical) CPU cores. This is typically not more than 4 or 8 on a consumer computer, and it is the maximum number of threads (including the threads of other programs and your OS itself) that can effectively run in parallel. The number of cores is much lower than the number of threads typically running on a computer. If your computer has only 1 core, you will still be able to use multiple QThreads, but the OS scheduler will alternate between executing those threads. QThread::idealThreadCount can be used to query the number of (logical) CPU cores.
Each thread has a QThread::Priority. The OS thread scheduler may use this value to prioritize (or de-prioritize) one thread over another. A thread with a lower priority may get less CPU time than a thread with a higher priority when the CPU cores are busy.
The (workload on the) other threads that are currently running.
Debugging your program definitely alters the normal execution of a multi thread program:
Interrupting and continuing a thread has a certain overhead. In the meantime, the other threads may still/already perform some operations.
As pointed out by G.M., most of the time all threads are interrupted when a breakpoint is hit. How fast the other threads are interrupted is not well defined.
Often a debugger has a configuration option to allow interrupting a single thread while the others continue running; see e.g. this question.
The number of loops that are executed while the other thread is interrupted/started again, depends on the number of CPU instructions that are needed to perform a single loop. Calling qDebug() and QThread::currentThreadId() is definitely slower than a single std::cout.
Conclusion: you don't have any hard guarantee about the scheduling of a thread. However, in normal operation, both threads will get almost the same amount of CPU time on average, as the OS scheduler has no reason to favor one over the other. Using a debugger completely alters this normal behavior.

QT Multithreading Data Pass from Main Thread to Worker Thread

I am using multithreading in my Qt program. I need to pass data from the main GUI thread to a worker object that lives in a worker thread. I created a setData function in a QObject subclass to pass all the necessary data from the main GUI thread, and I verified (by looking at QThread::currentThreadId() inside setData) that the function is called from the main thread. Even though the worker object's function is called from the main thread, does this ensure that the worker thread still has its own copy of the data, as is required for a reentrant class? Keep in mind this happens before the worker thread is started.
Also, if a class uses only basic data types, no dynamic memory, and no static or global variables, is it reentrant as long as all of its other data members are reentrant? (It has reentrant data members like QStrings, QLists, etc., in addition to the basic ints, bools, etc.)
Thanks for the help
Edited new content:
My main question was simply: is it appropriate to call a QObject subclass method living in another thread from the main GUI thread in order to pass my data to the worker thread to be worked on (in my case, custom classes containing backup job information for long-pending file scans and copies for data backup)? The data pass all happens before the thread is started, so there is no danger of both threads modifying the data at once (I think, but I'm no multithreading expert...). It sounds like the way to do this, from your post, is to use a signal from the main thread to a slot in the worker thread to pass the data. I have confirmed my data backup jobs are reentrant, so all I need to do is ensure that the worker thread works on its own instances of these classes. Also, the transfer of data, currently done by calling the QObject subclass method, happens before the worker thread starts - does this prevent race conditions, and is it safe?
Also here under the section "Accessing QObject Subclasses from Other Threads" it looks a little dangerous to use slots in the QObject subclass...
OK here's the code I've been busy recently...
Edited With Code:
void Replicator::advancedAllBackup()
{
  updateStatus("<font color = \"green\">Starting All Advanced Backups</font>");
  startBackup();
  worker = new Worker;
  worker->moveToThread(workerThread);
  setupWorker(normal);
  QList<BackupJob> jobList;
  for (int backupCount = 0; backupCount < advancedJobs.size(); backupCount++)
    jobList << advancedJobs[backupCount];
  worker->setData(jobList);
  workerThread->start();
}
The startBackup function sets some booleans and updates the gui.
the setupWorker function connects all signals and slots for the worker thread and worker object.
the setData function sets the worker job list data to that of the backend and is called before the thread starts so there is no concurrency.
Then we start the thread and it does its work.
And here's the worker code:
void setData(QList<BackupJob> jobs) { this->jobs = jobs; }
So my question is: is this safe?
There are some misconceptions in your question.
Reentrancy and multithreading are orthogonal concepts. Single-threaded code can easily be forced to cope with reentrancy - and it is, as soon as you reenter the event loop (thus you shouldn't).
The question you are asking, with correction, is thus: Are the class's methods thread-safe if the data members support multithreaded access? The answer is yes. But it's a mostly useless answer, because you're mistaken that the data types you use support such access. They most likely don't!
In fact, you're very unlikely to use multithread-safe data types unless you explicitly seek them out. POD types aren't, most of the C++ standard types aren't, and most Qt types aren't either. Just so that there are no misunderstandings: QString is not a multithread-safe data type! The following code has undefined behavior (it'll crash, burn and send an email to your spouse that appears to be from an illicit lover):
QString str{"Foo"};
for (int i = 0; i < 1000; ++i)
  QtConcurrent::run([&]{ str.append("bar"); });
The follow up questions could be:
Are my data members supporting multithreaded access? I thought they did.
No, they aren't unless you show code that proves otherwise.
Do I even need to support multithreaded access?
Maybe. But it's much easier to avoid the need for it entirely.
The likely source of your confusion in relation to Qt types is their implicit sharing semantics. Thankfully, their relation to multithreading is rather simple to express:
Any instance of a Qt implicitly shared class can be accessed from any one thread at a given time. Corollary: you need one instance per thread. Copy your object, and use each copy in its own thread - that's perfectly safe. These instances may share data initially, and Qt will make sure that any copy-on-writes are done thread-safely for you.
Sidebar: If you use iterators or internal pointers to data on non-const instances, you must forcibly detach() the object before constructing the iterators/pointers. The problem with iterators is that they become invalidated when an object's data is detached, and detaching can happen in any thread where the instance is non-const - so at least one thread will end up with invalid iterators. I won't talk any more of this, the takeaway is that implicitly shared data types are tricky to implement and use safely. With C++11, there's no need for implicit sharing anymore: they were a workaround for the lack of move semantics in C++98.
What does it mean, then? It means this:
// Unsafe: str1 potentially accessed from two threads at once
QString str1{"foo"};
QtConcurrent::run([&]{ str1.append("bar"); });
str1.append("baz");

// Safe: each instance is accessed from one thread only
QString str1{"foo"};
QString str2{str1};
QtConcurrent::run([&]{ str1.append("bar"); });
str2.append("baz");
The original code can be fixed thus:
QString str{"Foo"};
for (int i = 0; i < 1000; ++i)
  QtConcurrent::run([=]() mutable { str.append("bar"); });
This isn't to say that this code is very useful: the modified data is lost when the functor is destructed within the worker thread. But it serves to illustrate how to deal with Qt value types and multithreading. Here's why it works: copies of str are taken when each instance of the functor is constructed. This functor is then passed to a worker thread to execute, where its copy of the string is appended to. The copy initially shares data with the str instance in the originating thread, but QString will thread-safely duplicate the data. You could write out the functor explicitly to make it clear what happens:
QString str{"Foo"};
struct Functor {
  QString str;
  Functor(const QString & str) : str{str} {}
  void operator()() {
    str.append("bar");
  }
};
for (int i = 0; i < 1000; ++i)
  QtConcurrent::run(Functor(str));
How do we deal with passing data using Qt types in and out of a worker object? All communication with the object, when it is in the worker thread, must be done via signals/slots. Qt will automatically copy the data for us in a thread-safe manner so that each instance of a value is ever only accessed in one thread only. E.g.:
class ImageSource : public QObject {
  Q_OBJECT
  QImage render() {
    QImage image{...};
    QPainter p{&image};
    ...
    return image;
  }
public:
  Q_SIGNAL void newImage(const QImage & image);
  void makeImage() {
    QtConcurrent::run([this]{
      emit newImage(render());
    });
  }
};

int main(int argc, char ** argv) {
  QApplication app{argc, argv};
  ImageSource source;
  QLabel label;
  label.show();
  QObject::connect(&source, &ImageSource::newImage, &label,
                   [&](const QImage & img){
    label.setPixmap(QPixmap::fromImage(img));
  });
  source.makeImage();
  return app.exec();
}
The connection between the source's signal and the label's thread context is automatic. The signal happens to be emitted in a worker thread in the default thread pool. At the time of signal emission, the source and target threads are compared and, if they differ, the functor is wrapped in an event, the event is posted to the label, and the label's QObject::event runs the functor that sets the pixmap. This is all thread-safe and leverages Qt to make it almost effortless. The target thread context &label is critically important: without it, the functor would run in the worker thread, not the UI thread.
Note that we didn't even have to move the object to a worker thread: in fact, moving a QObject to a worker thread should be avoided unless the object does need to react to events and does more than merely generate a piece of data. You'd typically want to move e.g. objects that deal with communications, or complex application controllers that are abstracted from their UI. Mere generation of data can be usually done using QtConcurrent::run using a signal to abstract away the thread-safety magic of extracting the data from the worker thread to another thread.
In order to use Qt's mechanisms for passing data between threads with queues, you cannot call the object's function directly. You need to either use the signal/slot mechanism, or you can use the QMetaObject::invokeMethod call:
QMetaObject::invokeMethod(myObject, "mySlotFunction",
                          Qt::QueuedConnection,
                          Q_ARG(int, 42));
This will only work if both the sending and receiving objects have running event loops - i.e. the main thread or a QThread-based thread.
For the other part of your question, see the Qt docs section on reentrancy:
http://doc.qt.io/qt-4.8/threads-reentrancy.html#reentrant
Many Qt classes are reentrant, but they are not made thread-safe,
because making them thread-safe would incur the extra overhead of
repeatedly locking and unlocking a QMutex. For example, QString is
reentrant but not thread-safe. You can safely access different
instances of QString from multiple threads simultaneously, but you
can't safely access the same instance of QString from multiple threads
simultaneously (unless you protect the accesses yourself with a
QMutex).

Limit number of concurrent thread in a thread pool

In my code I have a loop, and inside this loop I send several requests to a remote web service. The WS provider said: "The web service can host at most n threads", so I need to cap my code since I can't send n+1 concurrent requests.
If I have to send m requests, I would like the first n threads to execute immediately, and, as soon as one of them completes, a new thread (one of the remaining m-n) to start, and so on until all m have executed.
I have thought of a thread pool with the maximum thread count explicitly set to n. Is this enough?
For this I would avoid the use of multiple threads. Instead, I would wrap the entire loop up so that it can run on a single thread. However, if you do want to launch multiple threads using a thread pool, then I would use the Semaphore class to enforce the required thread limit; here's how...
A semaphore is like a mean nightclub bouncer: it has been given a club capacity and is not allowed to exceed this limit. Once the club is full, no one else can enter... a queue builds up outside. Then, as one person leaves, another can enter (analogy thanks to J. Albahari).
A Semaphore with a value of one is equivalent to a Mutex or Lock, except that the Semaphore has no owner, so it is thread-agnostic. Any thread can call Release on a Semaphore, whereas with a Mutex/Lock only the thread that obtained it can release it.
Now, for your case, we can use Semaphores to limit concurrency and prevent too many threads from executing a particular piece of code at once. In the following example, five threads try to enter a nightclub that only allows entry to three...
class BadAssClub
{
    static SemaphoreSlim sem = new SemaphoreSlim(3);

    static void Main()
    {
        for (int i = 1; i <= 5; i++)
            new Thread(Enter).Start(i);
    }

    // Enforce only three threads running this method at once.
    static void Enter(object id)
    {
        try
        {
            Console.WriteLine(id + " wants to enter.");
            sem.Wait();
            Console.WriteLine(id + " is in!");
            Thread.Sleep(1000 * (int)id);
            Console.WriteLine(id + " is leaving...");
        }
        finally
        {
            sem.Release();
        }
    }
}
I hope this helps.
Edit: you can also use the ThreadPool.SetMaxThreads method. This restricts the number of threads allowed to run in the thread pool, but it does so 'globally' for the thread pool itself. This means that if your application runs SQL queries or other methods from libraries that use the pool, new threads will not be spun up because of this limit. This may not be relevant to you, in which case use the SetMaxThreads method. If you want to limit a particular method, however, it is safer to use semaphores.

QtConcurrent threading is slow!! What am I doing wrong?

Why is my QtConcurrent::run() call just as slow as calling the member function directly through the object?
(E.g.: QtConcurrent::run(&db, &DBConnect::loadPhoneNumbers) is just as slow as calling db.loadPhoneNumbers())
Read below for further explanation.
I've been trying to create a thread via QtConcurrent::run to help speed up data being sent to a SQL database table. I am taking a member variable which is a QMap and iterating through it to send each key+value to the database.
Member function for the QtConcurrent::run() call:
void DBConnect::loadPhoneNumbers()
{
  // m_phoneNumbers is a private QMap member variable in DBConnect
  qDebug() << "\t[!] Items to send: " << m_phoneNumbers.size();
  QSqlQuery query;
  qDebug() << "\t[!] Using loadphonenumbers thread: " << QThread::currentThread();
  qDebug() << "\t[!] Ideal Num of Threads: " << QThread::idealThreadCount();
  bool isLoaded = false;
  QMap<QString,QString>::const_iterator tmp = m_phoneNumbers.constBegin();
  while (tmp != m_phoneNumbers.constEnd())
  {
    isLoaded = query.exec(QString("INSERT INTO " + m_mtable + " VALUES('%1','%2')").arg(tmp.key()).arg(tmp.value()));
    if (isLoaded == false)
    {
      qDebug() << "\r\r[X] ERROR: Couldn't load number " << tmp.key() << " into table " << m_mtable;
      qDebug() << query.lastError().text();
    }
    tmp++;
  }
}
main.cpp section that calls the thread
DBConnect db("QODBC", myINI.getSQLServer(),C_DBASE,myINI.getMTable(), myINI.getBTable());
db.startConnect();
//...more code here
qDebug() << "\n[*] Using main thread: " << QThread::currentThread() << endl;
//....two qtconcurrent::run() threads started and finished here (not shown)
qDebug() << "\n[*] Sending numbers to Database...";
QFuture<void> dbFuture = QtConcurrent::run(&db, &DBConnect::loadPhoneNumbers);
dbFuture.waitForFinished();
My understanding of the situation
From my understanding, this thread should run in a new pool of threads, separate from the main thread. What I am seeing is not the case (note there are 2 other QtConcurrent::run() calls before this one for the database, all left to finish before continuing to the database call).
Now, I thought about using QtConcurrent::map() / mapped(), but couldn't get it to work properly with a QMap. (I couldn't find any examples to help out with that either, but that is beside the matter... just an FYI in case someone asks why I didn't use one.)
I have been doing some "debug" work to find out what's happening, and in my tests I use QThread::currentThread() to find which thread I am currently making a call from. This is what is happening for the various threads in my program. (All QtConcurrent::run() calls are made in main.cpp, FYI... not sure if that makes a difference.)
Check what is main thread: on QThread(0x5d2cd0)
Run thread 1: on QThread(0x5dd238, name = "Thread (pooled)")
Run thread 2: on QThread(0x5d2cd0)
Run thread 3 (loadPhoneNumbers function): on QThread(0x5d2cd0)
As seen above, other than the first qtconcurrent::run() call, everything else is on the main thread (o.O)
Questions:
From my understanding, all my threads (all QtConcurrent::run calls) should be on their own thread (only the first one is). Is that true, or am I missing something?
Second, is my loadPhoneNumbers() member function thread-safe? (Since I am not altering anything, from what I can see.)
Biggest question:
Why is my loadPhoneNumbers() QtConcurrent::run call just as slow as if I had just called the member function? (e.g. db.loadPhoneNumbers() is just as slow as the QtConcurrent::run() version)
Any help is much appreciated!
Threads don't magically speed things up, they just make it so you can continue doing other stuff while it's happening in the background. When you call waitForFinished(), your main thread won't continue until the load phone numbers thread is finished, essentially negating that advantage. Depending on the implementation, that may be why your currentThread() is showing the same as main, because the wait is already happening.
Probably more significant in terms of speed would be to build a single query that inserts all the values in the list, rather than a separate query for each value.
According to QtSql documentation:
A connection can only be used from within the thread that created it.
Moving connections between threads or creating queries from a
different thread is not supported.
It works anyway because ODBC itself supports multithreaded access to a single ODBC handle. But since you are only using one connection, all queries are probably serialized by ODBC as if there were only a single thread (see for example what Oracle's ODBC driver does).
waitForFinished() calls a private function, stealRunnable(), that, as its name implies, takes a not-yet-started task from the QFuture queue and runs it in the current thread.

What are working threads?

What are these worker threads? How do I implement them, and when should I use them? I ask this because many people mention them, but I can't find any examples on the net. Or is it just another way of saying "creating threads"? Thanks.
"Working threads" isn't itself a meaningful term in the thread world.
I guess you mean to ask, "What are worker threads?"
In that case, let me tell you that a worker thread is commonly used to handle background tasks that the user shouldn't have to wait for in order to continue using your application,
e.g. recalculation and background printing.
For implementing a worker thread, a controlling function should be defined, which defines the thread. When this function is entered, the thread starts, and when it exits, the thread terminates. This function should have the following prototype:
UINT MyControllingFunction( LPVOID pParam );
A short snippet implementing the controlling function of a worker thread:
UINT MyThreadProc( LPVOID pParam )
{
    CMyObject* pObject = (CMyObject*)pParam;

    if (pObject == NULL ||
        !pObject->IsKindOf(RUNTIME_CLASS(CMyObject)))
        return 1;   // if pObject is not valid

    // do something with 'pObject'

    return 0;   // thread completed successfully
}

// inside a different function in the program
.
.
.
pNewObject = new CMyObject;
AfxBeginThread(MyThreadProc, pNewObject);
.
.
.
"Worker thread" is a generic term for a thread which performs some task independent of some primary thread. Depending on usage, it may simply mean any thread other than the primary UI thread, or it may mean a thread which performs a well-scoped task (i.e. a 'job' rather than a continuous operation which lasts the lifetime of the application).
For example, you might spawn a worker thread to retrieve a file from a remote computer over a network. It might send progress updates to the application's main thread.
I use a worker, or background thread, any time that I want to perform a lengthy task without tying up my user interface. Threads often allow me to simplify my code by making a continuous series of statements, rather than a convoluted, non-blocking architecture.
