There is a database access operation executing on an async thread. After the results are obtained, this thread signals the main thread to do something with the data. But 2 seconds after signaling I have to check whether the main thread was able to process the data, and if not, I should clear it. The question is: should I make the async thread sleep after signaling and then do the check, or should the async thread set up a timer after signaling that starts another async thread to do the check?
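For what it's worth, a third option is to schedule the check with an awaitable delay rather than sleeping the worker or wiring up a timer and a second thread. A minimal sketch, where LoadFromDatabaseAsync, SignalMainThread, WasProcessed and ClearData are hypothetical placeholders for whatever the application actually does:

using System;
using System.Threading.Tasks;

class DeferredCheckSketch
{
    static async Task QueryAndSignalAsync()
    {
        var data = await LoadFromDatabaseAsync();   // hypothetical DB call
        SignalMainThread(data);                     // hypothetical signal to the main thread

        await Task.Delay(TimeSpan.FromSeconds(2));  // waits 2 s without blocking a thread
        if (!WasProcessed(data))                    // hypothetical check
            ClearData(data);                        // hypothetical cleanup
    }

    // Placeholders so the sketch compiles; replace with the real implementations.
    static Task<object> LoadFromDatabaseAsync() => Task.FromResult(new object());
    static void SignalMainThread(object data) { }
    static bool WasProcessed(object data) => false;
    static void ClearData(object data) { }
}

Unlike Thread.Sleep, Task.Delay does not tie a thread up for those 2 seconds, and it avoids managing a separate timer and checker thread.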
If I create a new thread like this:
Thread thread = new Thread(.....);
thread.Start(......);
What happens if a method inside that thread uses the await operator?
I understand that in a normal scenario this would cause .Net to save the state of the current execution and give the thread back to the thread pool so it can work on something else while we wait for the awaited method to complete, but what if this is not in a thread pool thread to begin with?
To clarify a bit, await will save the local state and then return. It doesn't give up the thread right away.
So, in this case, if the Thread's main method returns, then that thread exits. It isn't returned to the thread pool since it's not a thread pool thread; it just exits because its thread proc returned.
There are other scenarios, too:
If the thread is a UI thread, then it returns to its message loop. The thread keeps running, processing other messages.
If the thread is the main thread of a Console application, then it exits, which causes the Console app to exit.
What happens if a method inside that thread uses the await operator?
The same as any other time await is used:
await captures the "context" (SynchronizationContext.Current or TaskScheduler.Current). In this case, the context would be the thread pool context.
When the method is ready to resume, it resumes on that context.
So in this case, the await will return, causing the thread to exit. Then later, the rest of the method will run on a thread pool thread.
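A minimal sketch of that behavior (assuming nothing installs a SynchronizationContext on the new thread; this is just a demo, not code from the question):

using System;
using System.Threading;
using System.Threading.Tasks;

class AwaitOnManualThreadSketch
{
    static async void ThreadProc()
    {
        Console.WriteLine($"before await: pool thread = {Thread.CurrentThread.IsThreadPoolThread}");
        await Task.Delay(100);
        // No SynchronizationContext was captured, so the continuation runs
        // on a thread pool thread, not on the manually created thread.
        Console.WriteLine($"after await: pool thread = {Thread.CurrentThread.IsThreadPoolThread}");
    }

    static void Main()
    {
        var thread = new Thread(ThreadProc);
        thread.Start();
        // The manually created thread exits as soon as ThreadProc hits the await
        // and returns; it is not "given back" to any pool.
        thread.Join();
        Thread.Sleep(500); // give the continuation time to run before the demo exits
    }
}

Running it prints false for the first line (the manually created thread) and true for the second (a pool thread).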
If you "create" the thread, you "manage" it.
Once the code scheduled to run on that thread finishes, the thread will be destroyed.
If you're running async/await code on a thread you created, chances are that execution will leave that thread at the first await and the thread will exit, so you pay the price of creating and destroying it for no benefit.
Thread pools are used to schedule short-lived pieces of work, and with async/await that is the norm.
If you have some long-running code that blocks, then it might be better to create your own thread.
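As an illustration of that last point (not from the original answer), a dedicated thread for a long-running blocking loop can be requested with TaskCreationOptions.LongRunning, which typically gets its own thread instead of tying up a pool thread:

using System.Threading.Tasks;

class LongRunningSketch
{
    static void BlockingLoop()
    {
        // a loop that calls blocking APIs for the lifetime of the application
    }

    static void Start()
    {
        // LongRunning hints the scheduler to use a dedicated thread rather than
        // occupying a thread pool thread with blocking work.
        Task.Factory.StartNew(BlockingLoop, TaskCreationOptions.LongRunning);
    }
}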
I was studying about multi-threading and came across join().
As I understand right, using join() on the thread makes process wait until 'joined' thread terminates. For example, calling t1.join() in main will make main wait until the job in thread t1 is finished and t1 terminates.
I'm just curious how the function join() makes this possible: how does it make the current thread 'blocked' inside the function? Does join() force execution of the joined thread first, so that any other thread has to wait until that thread terminates? Or is there some way for the two threads (the thread that called join() and the thread that is joined) to communicate?
I will be waiting for the answer. Thanks a lot!
To be able to join you need to be able to wait on some event. Then join looks like this:
function join(t : Thread)
    // do this atomically
    if already done
        return
    wait on termination event of t
end
Waiting can be done in one of two ways:
Looping and periodically checking if the event has happened (busy wait)
Letting the OS suspend the waiting thread and wake it up when the termination event fires; in that case waking the thread is managed by the scheduler of the OS (a sketch of this event-based approach is shown below)
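As a rough C# illustration of the event-based variant, a join can be emulated by hand with a termination event that the thread sets just before it exits (the real Thread.Join is implemented by the runtime/OS; this is only a sketch):

using System;
using System.Threading;

class JoinSketch
{
    static void Main()
    {
        using (var finished = new ManualResetEventSlim(false))
        {
            var worker = new Thread(() =>
            {
                try
                {
                    Thread.Sleep(1000); // the thread's actual work
                }
                finally
                {
                    finished.Set(); // signal termination, even if the work throws
                }
            });
            worker.Start();

            // "join": block the calling thread until the termination event fires.
            finished.Wait();
            Console.WriteLine("worker has terminated");
        }
    }
}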
It's rather language specific.
Once you create a thread, it starts running.
A join operation is when the calling thread (often your main thread) stops and waits for the thread to exit, capturing its return code. It blocks until your thread completes; that's rather the point, as it gives you a synchronization point where everything in your program is in a known state.
Related is the detach operation, which is effectively saying 'I don't care any more'.
I have a multi-threaded program, designed as follows:
Suppose one thread is the main thread and the others are slave threads. The main thread keeps track of all the slave thread IDs. In one scenario of the application (graceful shutdown), I want to delete the slave threads from the main thread.
At that point the slave threads may still be executing, i.e. either sleeping or performing some action that I cannot stop. So I want to delete the threads from the main thread using the thread IDs I stored internally.
Additional info:
While deleting I should not wait for a thread's current action to complete, because it may take a long time (the thread is reading from a database and acting on the results), and in the case of a graceful shutdown I should not wait for that.
If I force-delete a thread, what resource leaks can there be?
Is the above design OK, or is there a flaw, or any way we can improve it?
Thanks!
It's not okay. It's bad practice to forcefully kill a thread from another thread because you will very likely end up with resource leaks. The best way is to use an event or signal to tell the worker threads to stop, and then wait until they exit gracefully.
The overall flow of the program would look like this:
The parent thread creates an event (say hEventParent). It then creates the child threads and passes hEventParent as a parameter. The parent thread keeps the hThread handle of each child thread.
Child threads do their work but periodically check hEventParent.
When the program needs to exit, the parent thread sets hEventParent. It then waits on the hThread handles (WaitForMultipleObjects also accepts thread handles).
Each child thread sees the event, executes its cleanup routine, and exits.
When all the child threads have exited, the parent can exit (a sketch of this flow follows below).
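A minimal sketch of this flow in C#, assuming a .NET context (ManualResetEvent and Thread.Join play the roles of the Win32 event and WaitForMultipleObjects; the names are placeholders):

using System;
using System.Threading;

class GracefulShutdownSketch
{
    static readonly ManualResetEvent StopRequested = new ManualResetEvent(false);

    static void Worker()
    {
        // Periodically check the stop event between units of work.
        while (!StopRequested.WaitOne(0))
        {
            Thread.Sleep(100); // placeholder for one unit of real work
        }
        // Run the cleanup routine here before returning.
    }

    static void Main()
    {
        var workers = new Thread[4];
        for (int i = 0; i < workers.Length; i++)
        {
            workers[i] = new Thread(Worker);
            workers[i].Start();
        }

        // ... application runs ...

        // Graceful shutdown: signal the event, then wait for every worker to exit.
        StopRequested.Set();
        foreach (var t in workers)
            t.Join();
    }
}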
The most common approach is for the main thread to send a termination signal to all the threads and then wait for the threads to end.
Typically the worker threads will have a loop inside of which the work is done. You can add a boolean variable that indicates whether the thread needs to end. For example:
terminate = false;
while (!terminate) {
    // work here
}
If you want your worker threads to go to sleep when they have no work, then it gets a bit more complicated. In this case you could make the threads wait on semaphores. Each semaphore will be signaled when there is work to do, and that will awaken the thread. You will also signal the semaphore when the request to terminate is issued. Example worker thread:
terminate = false;
while (!terminate) {
    // work here
    wait(semaphore); // go to sleep
}
When the main thread wants to exit, it sets terminate to true for all the threads and then signals the thread semaphores to wake the threads and give them a chance to see the termination request. After that it joins all the threads, and only after all the threads have finished does it exit.
Note that in C/C++ the terminate flag should be an atomic type (e.g. std::atomic<bool>); volatile alone only tells the compiler the value may change, it does not guarantee atomicity or ordering between threads.
For example, on Windows there is MsgWaitForMultipleObjects, which lets you wait in one call for window messages, socket events, asynchronous I/O (IOCompletionRoutine), AND mutex handles.
On Unix you have select/poll, which gives you everything except the possibility to break out when some pthread_mutex is unlocked.
The story:
I have an application whose main thread does something with multiple sockets, pipes, or files. From time to time there is a side job (a DB transaction) that might take a long time and, if done synchronously in the main thread, would disrupt the normal servicing of the sockets. So I want to do the DB operation in a separate thread. That thread would wait on some mutex when idle, until the main thread decides to give it a job and unlocks the mutex so the DB thread can grab it.
The problem is how the DB thread can notify the main thread that it has finished the job. The main thread has to process sockets, so it cannot afford to sleep in pthread_mutex_lock, and doing a periodic pthread_mutex_trylock is the last thing I would want to do. Currently I am considering using a pipe, but is this the best way?
Using a pipe is a good idea here. Make sure that no other process has the write end of the pipe open, and then in the main thread select() or poll() the read end for readability. Once your worker thread is done with the work, close() the write end. The select() in the main thread wakes up immediately.
I don't think waiting on a mutex and something else would be possible, because on Linux, mutexes are implemented with the futex(2) system call, which doesn't support file descriptors.
I don't know how well it applies to your specific problem, but POSIX has message queues.
Given a System.Timers.Timer, is there a way from the main thread to tell if the worker thread running the elapsed event code is still running?
In other words, how can one make sure the elapsed-event code is not still running before stopping the timer or the main app/service thread the timer is running in?
Is this a matter of ditching System.Timers.Timer for a System.Threading.Timer with a state object, or is it just time to use threads directly?
Look up ManualResetEvent, as it is made to do specifically what you're asking for.
Your threads create a new reset event, and add it to an accessible queue that your main thread can use to see if any threads are still running.
// main thread owns this
private List<ManualResetEvent> _resetEvents;
...
// main thread does this to wait for executing threads to finish
WaitHandle.WaitAll(_resetEvents.ToArray(), 2000, false);
...
// worker threads do this to signal the thread is done
myResetEvent.Set();
I can give you more sample code if you want, but I basically just copied it from the couple articles I read when I had to do this a year ago or so.
Forgot to mention: you can't add this functionality to the thread-pool threads that run your timer's elapsed event. So you should make your timer handler very lean and do nothing more than prepare and start a new worker item, as in the snippet and sketch below.
...
ThreadPool.QueueUserWorkItem(new WaitCallback(MyWorkerDelegate),
    myCustomObjectThatContainsAResetEvent);
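Putting the pieces together, a rough sketch might look like the following; DoWork, Shutdown and the _resetEvents list are placeholders for illustration, not code from the original answer:

using System;
using System.Collections.Generic;
using System.Threading;
using System.Timers;

class TimerWorkerSketch
{
    // The main thread owns the list of events for in-flight work items.
    private static readonly List<ManualResetEvent> _resetEvents = new List<ManualResetEvent>();
    private static readonly object _lock = new object();

    static void OnElapsed(object sender, ElapsedEventArgs e)
    {
        // Keep the handler lean: just hand the work to the thread pool.
        var done = new ManualResetEvent(false);
        lock (_lock) { _resetEvents.Add(done); }
        ThreadPool.QueueUserWorkItem(DoWork, done);
    }

    static void DoWork(object state)
    {
        var done = (ManualResetEvent)state;
        try
        {
            // ... the real work goes here ...
        }
        finally
        {
            done.Set(); // signal that this work item has finished
        }
    }

    static void Shutdown(System.Timers.Timer timer)
    {
        timer.Stop();
        ManualResetEvent[] pending;
        lock (_lock) { pending = _resetEvents.ToArray(); }
        if (pending.Length > 0)
            WaitHandle.WaitAll(pending, 2000, false); // wait up to 2 s for the workers
    }
}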
For an out-of-the-box solution, there is no way. The main reason is that the thread running the timer callback is in all likelihood still alive even after the code in the callback has completed. The callback is executed by a thread from the ThreadPool; when the work is done that thread does not die, it simply goes back to the pool to pick up the next work item.
In order to get this to work you're going to have to use some form of thread-safe signalling to detect that the operation has completed.
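One possible form of that signalling, purely as an illustration, is an Interlocked counter of in-flight callbacks that the main thread drains after stopping the timer:

using System.Threading;
using System.Timers;

class ElapsedTracker
{
    private static int _running; // number of Elapsed handlers currently executing

    static void OnElapsed(object sender, ElapsedEventArgs e)
    {
        Interlocked.Increment(ref _running);
        try
        {
            // ... callback work ...
        }
        finally
        {
            Interlocked.Decrement(ref _running);
        }
    }

    // Main thread: stop the timer, then wait until no handler is still running.
    static void StopAndDrain(System.Timers.Timer timer)
    {
        timer.Stop();
        while (Interlocked.CompareExchange(ref _running, 0, 0) != 0)
            Thread.Sleep(10);
    }
}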
Timer Documentation