I'm building a server with NIO, and I have two questions.
Do I have to use a worker thread or a thread pool to process the received messages, or can I let the main thread do all of this? (I have performance requirements.)
I have two kinds of send: a sendNow() method, which ends with selector.selectNow(), and a simple send() method, which ends with selector.wakeup(). Can I lose data with either of these methods?
Thanks.
If possible, try to do it all in one thread. It gets very complicated very quickly otherwise.
I don't know why you think a sendNow() method needs to end with either selectNow() or wakeup(), but neither of them is intrinsically going to cause data loss.
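For illustration, here is a minimal sketch of a single-threaded selector loop, with accept, read, and a trivial echo-style write all handled on the one thread. The class name, port, and echo handling are made up for the example; a real server would also have to deal with partial writes and OP_WRITE interest.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SingleThreadedServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9000));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buffer = ByteBuffer.allocate(4096);
        while (true) {
            selector.select();                       // block until something is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buffer.clear();
                    int n = client.read(buffer);
                    if (n == -1) {                   // peer closed the connection
                        key.cancel();
                        client.close();
                    } else {
                        buffer.flip();
                        client.write(buffer);        // "process" the message in the same thread
                    }
                }
            }
        }
    }
}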
Related
I have a program that uses multiple threads to brute force the decryption of some encrypted string. The main thread has a channel, and the sender is cloned and sent to each thread. When a thread finds an answer, it sends it to the receiver which is in the main thread.
In this program I am not joining the threads, instead I use the blocking call sender.recv() to suspend the main thread until a single other thread finishes.
My hope is, once this call finishes, the main thread will return and all the other worker threads will be terminated.
Is this a poor design choice? Are there drawbacks of not having some condition in the other threads which would cause them to return when the solution has been discovered? Is it okay/safe to rely on the compiler to clean up my threads before they've technically finished?
Assuming there's no cleanup to be done, what you've done is mostly harmless. I'm assuming your worker thread looks something like this right now.
fn my_thread() {
// ... lots of hard work ...
channel.send(my_result);
}
and if that's the case, then "I received the result" and "the other thread is terminated" are very similar events, and the difference of "this function returned" is probably irrelevant. But suppose someone comes along and changes the code to look like this.
fn my_thread() {
// ... lots of hard work ...
channel.send(my_result);
do_cleanup_stuff();
}
Now do_cleanup_stuff() might not get a chance to run, if your main thread terminates before my_thread does. If that cleanup function is important, that could cause problems. And it could be more subtle than that. If any local variable in my_thread holds a file handle or an open TCP stream or any other object with a nontrivial Drop implementation, that value may not get a chance to Drop properly if you don't join the thread.
So it's probably best practice to join everything, even if it's just a final step at the end of your main.
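A minimal sketch of that final step, assuming the workers are spawned from main; the worker body and names here are made up for the example:

use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    // Spawn the workers, keeping every JoinHandle.
    let handles: Vec<_> = (0..4)
        .map(|id| {
            let tx = tx.clone();
            thread::spawn(move || {
                // ... lots of hard work ...
                let my_result = format!("result from worker {}", id);
                // A worker that loses the race may find the receiver gone,
                // so ignore a send error instead of unwrapping it.
                let _ = tx.send(my_result);
            })
        })
        .collect();
    drop(tx); // keep only the workers' senders alive

    // Block until the first worker reports a result.
    let answer = rx.recv().expect("all workers hung up without sending");
    println!("got: {}", answer);

    // The final step: join everything so any cleanup/Drop code in the workers runs.
    for handle in handles {
        handle.join().expect("a worker panicked");
    }
}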
I am using NSURLSession dataTaskWithURL:completionHandler:. It looks like the completionHandler is executed on a thread different from the thread (in my case, the main thread) that calls dataTaskWithURL:. My question is: since it is asynchronous, is it possible that the main thread exits while the completionHandler thread is still running because the response has not come back yet? That is the case I am trying to avoid. If this could happen, how should I solve the problem? BTW, I am building this as a framework, not an application. Thanks.
In the first part of your question you seem unsure whether the completion handler is running on a different thread. To confirm this, let's look at the NSURLSession Class Reference. In the "Creating a Session" section, the description of the following method gives us the answer.
+ sessionWithConfiguration:delegate:delegateQueue:

Swift
init(configuration configuration: NSURLSessionConfiguration,
     delegate delegate: NSURLSessionDelegate?,
     delegateQueue queue: NSOperationQueue?)

Objective-C
+ (NSURLSession *)sessionWithConfiguration:(NSURLSessionConfiguration *)configuration
                                  delegate:(id<NSURLSessionDelegate>)delegate
                             delegateQueue:(NSOperationQueue *)queue
In the parameters table, the entry for the NSOperationQueue queue parameter contains the following quote.
An operation queue for scheduling the delegate calls and completion handlers. The queue need not be a serial queue. If nil, the session creates a serial operation queue for performing all delegate method calls and completion handler calls.
So we can see that the default behavior is always to use a queue, whether one is supplied by the developer or created by the class itself. We can see this again in the discussion for the method + sessionWithConfiguration:
Discussion
Calling this method is equivalent to calling
sessionWithConfiguration:delegate:delegateQueue: with a nil delegate
and queue.
If you would like more information, you should read Apple's Concurrency Programming Guide. It is also useful for understanding Apple's approach to threading in general.
So the completion handler from - dataTaskWithURL:completionHandler: is running on a different queue, with queues normally providing their own thread(s). This leads to the main component of your question: can the main thread exit while the completion handler is still running?
The concise answer is no, but why?
To answer this we again turn to Apple's documentation, to a document that everyone should read early in their app-developer career!
The App Programming Guide
The Main Run Loop
An app’s main run loop processes all user-related events. The
UIApplication object sets up the main run loop at launch time and uses
it to process events and handle updates to view-based interfaces. As
the name suggests, the main run loop executes on the app’s main
thread. This behavior ensures that user-related events are processed
serially in the order in which they were received.
All of the user interaction happens on the main thread - no main thread, no main run loop, no app! So the condition your question mentions should never exist!
Apple seems more concerned with you doing background work on the main thread. Check out the section "Move Work off the Main Thread"...
Be sure to limit the type of work you do on the main thread of your
app. The main thread is where your app handles touch events and other
user input. To ensure that your app is always responsive to the user,
you should never use the main thread to perform long-running or
potentially unbounded tasks, such as tasks that access the network.
Instead, you should always move those tasks onto background threads.
The preferred way to do so is to use Grand Central Dispatch (GCD) or
NSOperation objects to perform tasks asynchronously.
I know this answer is long-winded, but I felt the need to offer insight and detail in answering your question - "the why" is just as important, and it was a good review :)
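For example, here is a minimal sketch in the pre-Swift-3 syntax matching the APIs quoted above (the URL is a placeholder). It shows the completion handler arriving on the session's delegate queue and hopping back to the main queue before touching anything that must stay on the main thread:

let url = NSURL(string: "https://example.com/data")!  // placeholder URL
let task = NSURLSession.sharedSession().dataTaskWithURL(url) { data, response, error in
    // This closure runs on the session's delegate queue - a serial background queue by default.
    dispatch_async(dispatch_get_main_queue()) {
        // Back on the main queue/thread for UI updates or other main-thread-only work.
    }
}
task.resume()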
NSURLSessionTask always runs in the background by default; that's why we have a completion handler, which is called when we get a response from the web service.
If you don't get any response, check your request URL and whether the HTTPHeaderFields are set properly.
Paste your code so that we can help with it.
I just asked the same question, then figured out the answer. The thread of the completion handler is set up in the init of the NSURLSession.
From the documentation:
init(configuration configuration: NSURLSessionConfiguration,
     delegate delegate: NSURLSessionDelegate?,
     delegateQueue queue: NSOperationQueue?)
queue - A queue for scheduling the delegate calls and completion handlers. If nil, the session creates a serial operation queue for performing all delegate method calls and completion handler calls.
My code that sets up for completion on main thread:
var session = NSURLSession(configuration: configuration, delegate:nil, delegateQueue:NSOperationQueue.mainQueue())
(Shown in Swift; Objective-C is the same.) Maybe post more code if this does not solve it.
Hi, I'm using the WinAPI's QueueUserAPC to invoke an APC function call in another thread.
My question is: what is the best practice for passing a parameter to it?
I am referring to object lifetime and allocation/deallocation responsibility.
DWORD WINAPI QueueUserAPC(PAPCFUNC pfnAPC, HANDLE hThread, ULONG_PTR dwData);
I am using dwData to pass a pointer to some data, and I was wondering how I should handle it.
I need to make sure that it lives until the receiving thread has finished using it.
Should I use a smart pointer to make sure that the data is deallocated when it is no longer used?
I guess that allocating in the calling thread and deallocating in the receiving one is possible, but probably not such a good thing.
Is there anything else that can be done?
I think I would like to avoid synchronization between the two threads just to signal that the receiving thread is done with the data...
Thanks!
Alloc'ing in the sending thread and dealloc'ing in the receiving one is easy, but its main drawback is that it may leak: even if you handle the sending failure, the receiving thread may finish before it has had a chance to execute the APC.
Probably the easiest way to avoid the leak is to create a queue for sent data - maybe a queue per thread - and when a thread finishes, traverse that thread's queue and free all the pending data.
But as usual, the devil is in the details...
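As a rough sketch of the basic hand-off (the Payload type and function names are made up): allocate in the sender, transfer ownership through dwData, and free inside the APC. This handles the queueing-failure case, but, as noted above, not the case where the target thread exits before it ever runs the APC.

#include <windows.h>
#include <memory>
#include <string>

// Hypothetical payload type, just for illustration.
struct Payload {
    std::string message;
};

// The APC runs in the target thread (once it enters an alertable wait)
// and takes ownership of the pointer packed into dwData.
void CALLBACK MyApc(ULONG_PTR dwData) {
    std::unique_ptr<Payload> data(reinterpret_cast<Payload*>(dwData));
    // ... use data->message ...
}   // the unique_ptr frees the payload here, in the receiving thread

bool SendToThread(HANDLE hThread, std::string message) {
    auto data = std::make_unique<Payload>();
    data->message = std::move(message);
    if (QueueUserAPC(MyApc, hThread, reinterpret_cast<ULONG_PTR>(data.get())) == 0) {
        return false;   // queueing failed: the unique_ptr still owns the payload and frees it
    }
    data.release();     // queued successfully: ownership has passed to the APC
    return true;
    // Caveat from above: if the target thread exits before entering an
    // alertable wait, the APC never runs and this allocation leaks.
}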
We have an application that is undergoing performance testing. Today, I decided to take a dump of w3wp & load it in windbg to see what is going on underneath the covers. Imagine my surprise when I ran !threads and saw that there are 640 background threads, almost all of which seem to say the following:
OS Thread Id: 0x1c38 (651)
Child-SP RetAddr Call Site
0000000023a9d290 000007ff002320e2 Microsoft.Practices.EnterpriseLibrary.Caching.ProducerConsumerQueue.WaitUntilInterrupted()
0000000023a9d2d0 000007ff00231f7e Microsoft.Practices.EnterpriseLibrary.Caching.ProducerConsumerQueue.Dequeue()
0000000023a9d330 000007fef727c978 Microsoft.Practices.EnterpriseLibrary.Caching.BackgroundScheduler.QueueReader()
0000000023a9d380 000007fef9001552 System.Threading.ExecutionContext.runTryCode(System.Object)
0000000023a9dc30 000007fef72f95fd System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)
0000000023a9dc80 000007fef9001552 System.Threading.ThreadHelper.ThreadStart()
If I had to guess, I'd say one of these threads is getting spawned for each run of our app - we have 2 app servers, 20 concurrent users, and ran the test approximately 30 times... it's in that neighborhood.
Is this 'expected behavior', or have we perhaps implemented something improperly? The test ran hours ago, so I would have expected any timeouts to have occurred already.
Edit: Thank you all for your replies. It has been requested that more detail be shown about the call stack - here is the output of !mk from sosex.dll.
ESP RetAddr
00:U 0000000023a9cb38 00000000775f72ca ntdll!ZwWaitForMultipleObjects+0xa
01:U 0000000023a9cb40 00000000773cbc03 kernel32!WaitForMultipleObjectsEx+0x10b
02:U 0000000023a9cc50 000007fef8f5f595 mscorwks!WaitForMultipleObjectsEx_SO_TOLERANT+0xc1
03:U 0000000023a9ccf0 000007fef8f59f49 mscorwks!Thread::DoAppropriateAptStateWait+0x41
04:U 0000000023a9cd50 000007fef8e55b99 mscorwks!Thread::DoAppropriateWaitWorker+0x191
05:U 0000000023a9ce50 000007fef8e2efe8 mscorwks!Thread::DoAppropriateWait+0x5c
06:U 0000000023a9cec0 000007fef8f0dc7a mscorwks!CLREvent::WaitEx+0xbe
07:U 0000000023a9cf70 000007fef8fba72e mscorwks!Thread::Block+0x1e
08:U 0000000023a9cfa0 000007fef8e1996d mscorwks!SyncBlock::Wait+0x195
09:U 0000000023a9d0c0 000007fef9463d3f mscorwks!ObjectNative::WaitTimeout+0x12f
0a:M 0000000023a9d290 000007ff002321b3 *** ERROR: Module load completed but symbols could not be loaded for Microsoft.Practices.EnterpriseLibrary.Caching.DLL
Microsoft.Practices.EnterpriseLibrary.Caching.ProducerConsumerQueue.WaitUntilInterrupted()(+0x0 IL)(+0x11 Native)
0b:M 0000000023a9d2d0 000007ff002320e2 Microsoft.Practices.EnterpriseLibrary.Caching.ProducerConsumerQueue.Dequeue()(+0xf IL)(+0x18 Native)
0c:M 0000000023a9d330 000007ff00231f7e Microsoft.Practices.EnterpriseLibrary.Caching.BackgroundScheduler.QueueReader()(+0x9 IL)(+0x12 Native)
0d:M 0000000023a9d380 000007fef727c978 System.Threading.ExecutionContext.runTryCode(System.Object)(+0x18 IL)(+0x106 Native)
0e:U 0000000023a9d440 000007fef9001552 mscorwks!CallDescrWorker+0x82
0f:U 0000000023a9d490 000007fef8e9e5e3 mscorwks!CallDescrWorkerWithHandler+0xd3
10:U 0000000023a9d530 000007fef8eac83f mscorwks!MethodDesc::CallDescr+0x24f
11:U 0000000023a9d790 000007fef8f0cbd2 mscorwks!ExecuteCodeWithGuaranteedCleanupHelper+0x12a
12:U 0000000023a9da20 000007fef945e572 mscorwks!ReflectionInvocation::ExecuteCodeWithGuaranteedCleanup+0x172
13:M 0000000023a9dc30 000007fef7261722 System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)(+0x60 IL)(+0x51 Native)
14:M 0000000023a9dc80 000007fef72f95fd System.Threading.ThreadHelper.ThreadStart()(+0x8 IL)(+0x2a Native)
15:U 0000000023a9dcd0 000007fef9001552 mscorwks!CallDescrWorker+0x82
16:U 0000000023a9dd20 000007fef8e9e5e3 mscorwks!CallDescrWorkerWithHandler+0xd3
17:U 0000000023a9ddc0 000007fef8eac83f mscorwks!MethodDesc::CallDescr+0x24f
18:U 0000000023a9e010 000007fef8f9ae8d mscorwks!ThreadNative::KickOffThread_Worker+0x191
19:U 0000000023a9e330 000007fef8f59374 mscorwks!TypeHandle::GetParent+0x5c
1a:U 0000000023a9e380 000007fef8e52045 mscorwks!SVR::gc_heap::make_heap_segment+0x155
1b:U 0000000023a9e450 000007fef8f66139 mscorwks!ZapStubPrecode::GetType+0x39
1c:U 0000000023a9e490 000007fef8e1c985 mscorwks!ILCodeStream::GetToken+0x25
1d:U 0000000023a9e4c0 000007fef8f594e1 mscorwks!Thread::DoADCallBack+0x145
1e:U 0000000023a9e630 000007fef8f59399 mscorwks!TypeHandle::GetParent+0x81
1f:U 0000000023a9e680 000007fef8e52045 mscorwks!SVR::gc_heap::make_heap_segment+0x155
20:U 0000000023a9e750 000007fef8f66139 mscorwks!ZapStubPrecode::GetType+0x39
21:U 0000000023a9e790 000007fef8e20e15 mscorwks!ThreadNative::KickOffThread+0x401
22:U 0000000023a9e7f0 000007fef8e20ae7 mscorwks!ThreadNative::KickOffThread+0xd3
23:U 0000000023a9e8d0 000007fef8f814fc mscorwks!Thread::intermediateThreadProc+0x78
24:U 0000000023a9f7a0 00000000773cbe3d kernel32!BaseThreadInitThunk+0xd
25:U 0000000023a9f7d0 00000000775d6a51 ntdll!RtlUserThreadStart+0x1d
Yes, the caching block has some - issues - with regard to the scavenger threads in older versions of Entlib, particularly if things are coming in faster than the scavenging settings let them come out.
This was completely rewritten in Entlib 5, so that now you'll never have more than two threads sitting in the caching block, regardless of the load, and usually it'll only be one.
Unfortunately there's no easy tweak to change the behavior in earlier versions. The best you can do is change the cache settings so that each scavenge will clean out more items at a time so not as many scavenge requests need to get scheduled.
640 threads is very bad for performance. If they are all waiting for something, then I'd say it's a fair bet that you have a deadlock and they will never exit. If they are all running (not waiting)... well, with 600+ threads on a 2 or 4 core processor none of them will get enough time slices to run very far! ;>
If your app is set up with a main thread that waits on the thread handles to find out when the threads exit, and the background threads get caught up in a loop or in a wait state and never exit the thread proc, then the process and all of its threads will never exit.
Check your thread code to make sure that every threadproc has a clear path to exit the threadproc. It's bad form to write an infinite loop in a background thread on the assumption that the thread will be forcibly terminated when the process shuts down.
If the background thread code spins in a loop waiting for an event handle to signal, make sure that you have some way to signal that event so that the thread can perform a normal orderly exit. Otherwise, you need to write the background thread to wait on multiple events and unblock when any one of the events signals. One of those events can be the activity that the background thread is primarily interested in and the other can be a shutdown event.
From the names of things in the stack dump you posted, it would appear that the thread is waiting for something to appear in the ProducerConsumerQueue. Investigate how that queue object is supposed to be shut down, probably on the producer side, and whether shutting down the queue will automatically release all consumers that are waiting on that queue.
My guess is that either the queue is not being shut down correctly or shutting it down does not implicitly release the consumers that are waiting on it. If the latter case, you may need to pump a terminate message through the queue to wake up all the consumers waiting on that queue and tell them to break out of their wait loop and exit.
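As a sketch of that multi-event pattern in the Win32 terms used above (the event names and worker body are hypothetical; for a managed queue the equivalent is pumping a sentinel/terminate item through it):

#include <windows.h>

// Hypothetical worker that waits for either "work available" or "shut down".
HANDLE g_workEvent;       // auto-reset event, signaled by the producer
HANDLE g_shutdownEvent;   // manual-reset event, signaled once at shutdown

DWORD WINAPI WorkerProc(LPVOID) {
    HANDLE handles[2] = { g_shutdownEvent, g_workEvent };
    for (;;) {
        DWORD which = WaitForMultipleObjects(2, handles, FALSE, INFINITE);
        if (which == WAIT_OBJECT_0) {
            break;                    // shutdown signaled: orderly exit from the threadproc
        }
        if (which == WAIT_OBJECT_0 + 1) {
            // ... dequeue and process one item ...
        }
    }
    return 0;
}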
You have a major issue. Every thread occupies 1 MB of stack, and there is a significant cost paid for context-switching every thread in and out. It gets especially bad with managed code, because every time the GC runs it has to walk each thread's stack to look for roots, and when those stacks have been paged out to disk the cost of reading them back is expensive, which adds to the perf issue.
Creating threads is bad unless you know what you are doing. Jeffrey Richter has written about this in detail.
To solve the above issue, I would look at what these threads are blocked on and also put a breakpoint on thread creation (for example, sxe ct within WinDbg).
Then rearchitect to avoid creating threads; use the thread pool instead.
It would have been nice to see some call stacks of these threads.
In Microsoft Enterprise Library 4.1, the BackgroundScheduler class creates a new thread each time an object is instantiated. This will be fixed in version 5.0. I do not know enough about this Microsoft library to advise you how to avoid that behavior, but you may try the beta version: http://entlib.codeplex.com/wikipage?title=EntLib5%20Beta2
I'm struggling with multi-threaded programming...
I have an application that talks to an external device via a CAN to USB
module. I've got the application talking on the CAN bus just fine, but
there is a requirement for the application to transmit a "heartbeat"
message every second.
This sounds like a perfect time to use threads, so I created a thread
that wakes up every second and sends the heartbeat. The problem I'm
having is sharing the CAN bus interface. The heartbeat must only be sent
when the bus is idle. How do I share the resource?
Here is pseudo code showing what I have so far:
TMainThread
{
  Init:
    CanBusApi = new TCanBusApi;
    MutexMain = CreateMutex( "CanBusApiMutexName" );
    HeartbeatThread = new THeartbeatThread( CanBusApi );

  Execution:
    WaitForSingleObject( MutexMain );
    CanBusApi->DoSomething();
    ReleaseMutex( MutexMain );
}

THeartbeatThread( CanBusApi )
{
  Init:
    MutexHeart = CreateMutex( "CanBusApiMutexName" );

  Execution:
    Sleep( 1000 );
    WaitForSingleObject( MutexHeart );
    CanBusApi->DoHeartBeat();
    ReleaseMutex( MutexHeart );
}
The problem I'm seeing is that when DoHeartBeat is called, it causes the
main thread to block while waiting for MutexMain as expected, but
DoHeartBeat also stops. DoHeartBeat doesn't complete until after
WaitForSingleObject(MutexMain) times out in failure.
Does DoHeartBeat execute in the context of the MainThread or
HeartBeatThread? It seems to be executing in MainThread.
What am I doing wrong? Is there a better way?
Thanks,
David
I suspect that the CAN bus API is single-threaded under the covers. It may be marshaling your DoHeartBeat() request from your second thread back to the main thread. In that case, there would be no way for it to succeed since your main thread is blocked. You can fix this in basically two ways: (1) send a message to the main thread, telling it to do the heart beat, rather than doing it on the second thread; or (2) use a timer on the main thread for your heart beat instead of a second thread. (I do think that multithreading is overkill for this particular problem.)
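Here is a rough Win32-style sketch of option (1); the message ID, function names, and the TCanBusApi stub are made up, and it assumes the main thread runs a message loop:

#include <windows.h>

struct TCanBusApi { void DoHeartBeat() {} void DoSomething() {} };  // stand-in for the question's class

const UINT WM_APP_HEARTBEAT = WM_APP + 1;   // hypothetical "please send a heartbeat" message

// Heartbeat thread: just a one-second ticker that asks the main thread to do the work.
DWORD WINAPI HeartbeatTicker(LPVOID param) {
    DWORD mainThreadId = static_cast<DWORD>(reinterpret_cast<ULONG_PTR>(param));
    for (;;) {
        Sleep(1000);
        PostThreadMessage(mainThreadId, WM_APP_HEARTBEAT, 0, 0);
    }
    return 0;
}

// Main thread's message loop: every CanBusApi call stays on this one thread,
// so there is no cross-thread sharing of the bus at all.
void RunMainLoop(TCanBusApi* CanBusApi) {
    MSG msg;
    while (GetMessage(&msg, nullptr, 0, 0)) {
        if (msg.message == WM_APP_HEARTBEAT) {
            CanBusApi->DoHeartBeat();
        } else {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
    }
}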
First, re-read the specs about the heartbeat. Does it say that an actual heartbeat message must be received every second, or is it enough that some message be received every second, with a heartbeat used only when no other messages are in flight? The presence of data on the channel is de facto evidence that the communications channel is alive, so no specific heartbeat message should be required in that case.
If an actual heartbeat message is required, and it's required every second, then in the above code there should be only one mutex, and both threads need to share it. The code as written creates two separate mutexes, so neither will actually block. You'll end up with a collision on the channel, and Bad Things Will Happen in CanBusApi. Make MutexMain a global/class variable visible to both threads and have both threads reference it.
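A minimal sketch of that arrangement, using a single unnamed mutex held in one shared variable that both threads reference (names other than the ones in your pseudo code are made up):

#include <windows.h>

struct TCanBusApi { void DoHeartBeat() {} void DoSomething() {} };  // stand-in for the question's class

HANDLE g_canBusMutex = NULL;   // the one shared mutex; create it before either thread starts

// Main thread
void MainThreadWork(TCanBusApi* CanBusApi) {
    WaitForSingleObject(g_canBusMutex, INFINITE);
    CanBusApi->DoSomething();
    ReleaseMutex(g_canBusMutex);
}

// Heartbeat thread
DWORD WINAPI HeartbeatProc(LPVOID param) {
    TCanBusApi* CanBusApi = static_cast<TCanBusApi*>(param);
    for (;;) {
        Sleep(1000);
        WaitForSingleObject(g_canBusMutex, INFINITE);
        CanBusApi->DoHeartBeat();
        ReleaseMutex(g_canBusMutex);
    }
}

// At startup, before creating THeartbeatThread:
//   g_canBusMutex = CreateMutex(NULL, FALSE, NULL);   // unnamed: one object, one shared variable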