I stumbled across some code similar to below:
private void SomeCallBack(object state)
{
    lock (_lock)
    {
        try
        {
            if (_timer == null)
                return;

            _timer.Dispose();
            // do some work here
        }
        catch
        {
            // handle exception
        }
        finally
        {
            _timer = new Timer(SomeCallBack, state, 100, Timeout.Infinite);
        }
    }
}
I don't understand the purpose of recreating the timer every time the callback is executed. I think what the code is trying to achieve is that only one thread can perform the work at a time. But wouldn't the lock be sufficient?
Also, according to MSDN:
Note that callbacks can occur after the Dispose() method overload has been called
Are there any benefits to doing this?
If so, do the benefits justify the overhead of disposing and recreating the timer?
Thanks for your help.
It seems that the code wants a nearly periodic timer (not exactly periodic because of the jitter introduced by the processing between the expiration of the timer and creation of the new timer). Disposing and recreating the timer is indeed an unnecessary overhead. The Change method would be better.
The check for null is also curious; somewhere else there would have to be code that sets _timer to null for it to have any effect.
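For illustration, a minimal sketch of the same one-shot re-arm pattern using Change instead of Dispose/new (the _timer and _lock fields and the 100 ms due time are taken from the question; everything else is assumed):

private void SomeCallBack(object state)
{
    lock (_lock)
    {
        try
        {
            // do some work here
        }
        catch
        {
            // handle exception
        }
        finally
        {
            // Re-arm the existing one-shot timer; the period stays
            // Timeout.Infinite so it fires exactly once per Change call.
            _timer.Change(100, Timeout.Infinite);
        }
    }
}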
The reason for recreating the timer would be the scenario where the code in the timer callback takes longer to execute than the timer period. In that case multiple instances of the callback could run at the same time; creating the next one-shot timer only after the work finishes guarantees the callbacks never overlap.
I have an unusual issue with TaskCompletionSource that has me baffled. I have a TaskCompletionSource whose task completes once I call TrySetResult. I call TrySetResult in three places in the code: from a WCF thread, immediately, to return a value to an APM-style WCF BeginXXX/EndXXX pair; from another WCF thread, again to return immediately to the APM caller; and lastly from an NServiceBus handler thread.
I started with the ubiquitous ToAPM provided by MS-PL. http://blogs.msdn.com/b/pfxteam/archive/2011/06/27/using-tasks-to-implement-the-apm-pattern.aspx
I noticed that the two WCF-based threads work 100% of the time. In 100 hours of hard testing, plus extensive unit tests, I have never seen a single failure to return a completed task to the AsyncCallback.
The MS-provided ToAPM code uses a ContinueWith on the completed task to invoke the AsyncCallback from a scheduled continuation.
The problem I have not solved is with the NServiceBus threads calling TrySetResult on the TaskCompletionSource object. I see outages where, for undefined periods of time, the call simply fails. I set breakpoints both at the call and inside the ContinueWith code. I always hit the breakpoint on TrySetResult, but only sometimes the one inside the ContinueWith code.
The following information hopefully will shed some light on the matter.
I use a CancellationTokenSource with a timeout, and its callback also sets a result by calling TrySetResult on the TaskCompletionSource object. When the call above fails to move the task to completed, this timeout code fires. The timeout path has never failed; it succeeds 100% of the time.
What is interesting is this: in the same code that calls TrySetResult from the NServiceBus thread, when it works, calling the cancellation object's Cancel works just as readily as calling TrySetResult on the TaskCompletionSource object.
When one fails, they both fail.
Then, after an indeterminate period of time, it works again.
This is a WCF server in production and QA environments, and each displays identical results.
What is weirdest is the following: for one WCF connection the NServiceBus thread succeeds while another fails at the same time. Then at times both work, and then both fail. Again, all at the same time.
I have tried a number of things to work around the issue to no avail:
I wrapped the call to TrySetResult in a TaskCompletionSource + ContinueWith -- fail
I wrapped the call in a Task.Factory.StartNew -- fail
I call it directly -- fail
I really do not know what else to try.
I put in checks to ensure that the TaskCompletionSource obj is not completed, and during the outage it is not.
I put in checks to ensure the CancellationTokenSource object is not cancelled and has no cancellation pending during the outage; it does not.
I examined the objects in the debugger and they seem good.
They just do not work sometimes.
Could there be an inconsistency in the NServiceBus threads that sometimes prevents the calls from working?
Is there some thread marshaling I can try?
I searched everywhere and I have not seen one mention of this problem. Is it unique?
I am totally baffled and need some ideas.
Remove the call from the NServiceBus thread's execution. Isolate the call to TrySetResult on another thread, for example via QueueUserWorkItem or by spinning up your own thread. Since execution of the waiting continuations resumes on the thread that calls TrySetResult, you may need some additional threads to handle the throughput: either spin up multiple dedicated threads or use the thread pool. I tested calling TrySetResult on dedicated threads and it works.
Here is code to demonstrate a single dedicated thread:
public static void Spin()
{
    ClientThread = new Thread(new ThreadStart(() =>
    {
        while (true)
        {
            try
            {
                // Wait until the producer signals that work has been queued.
                if (!HasSomething.WaitOne(1000, false))
                    continue;

                // Drain the queue, completing each waiting task on this thread.
                while (true)
                {
                    WaitingAsyncData entry = null;
                    lock (qlocker)
                    {
                        if (!Trigger.Any())
                            break;
                        entry = Trigger.Dequeue();
                    }
                    if (entry == null)
                        break;

                    entry.TrySetResult("string");
                }
            }
            catch
            {
                // Swallow and keep the worker thread alive.
            }
        }
    }));
    ClientThread.IsBackground = true;
    ClientThread.Start();
}
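For context, a hedged sketch of the fields and producer side that the loop above assumes (the names ClientThread, HasSomething, qlocker, Trigger, and WaitingAsyncData come from the code above; the declarations themselves are illustrative):

// Assumed declarations (illustrative only):
static Thread ClientThread;
static readonly AutoResetEvent HasSomething = new AutoResetEvent(false);
static readonly object qlocker = new object();
static readonly Queue<WaitingAsyncData> Trigger = new Queue<WaitingAsyncData>();

// Producer side: queue an entry and wake the dedicated thread.
static void Enqueue(WaitingAsyncData entry)
{
    lock (qlocker)
    {
        Trigger.Enqueue(entry);
    }
    HasSomething.Set();
}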
Here is the ThreadPool example code:
ThreadPool.QueueUserWorkItem(delegate
{
    entry.TrySetResult("string");
});
Using the ThreadPool rather than a static thread provides greater flexibility and scalability.
I already have a Windows service running with a System.Timers.Timer that does a specific piece of work. But I want several pieces of work to run at the same time, in different threads.
I've been told to create a separate System.Timers.Timer instance for each. Is this correct? Will the work run in parallel this way?
For instance:
System.Timers.Timer tmr1 = new System.Timers.Timer();
tmr1.Elapsed += new ElapsedEventHandler(DoWork1);
tmr1.Interval = 5000;
System.Timers.Timer tmr2 = new System.Timers.Timer();
tmr2.Elapsed += new ElapsedEventHandler(DoWork2);
tmr2.Interval = 5000;
Will tmr1 and tmr2 run on different threads so that DoWork1 and DoWork2 can run at the same time, i.e., concurrently?
Thanks!
It is not incorrect.
Be careful. System.Timers.Timer raises the Elapsed event on a threadpool thread each time the interval elapses. You'll get in trouble when your Elapsed event handler takes too long: your handler will be called again on another thread, even though the previous call hasn't completed yet. This tends to produce hard-to-diagnose bugs, something you can avoid by setting the AutoReset property to false. Also be sure to use try/catch in your event handler; exceptions are swallowed without any diagnostic.
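A minimal sketch of that advice, assuming DoWork1 from the question is the handler body (the re-arm in finally is one common way to restart a one-shot timer):

var tmr1 = new System.Timers.Timer(5000);
tmr1.AutoReset = false;                // fire once; we re-arm manually
tmr1.Elapsed += (sender, e) =>
{
    try
    {
        DoWork1();                     // handler body from the question
    }
    catch (Exception ex)
    {
        // Log it yourself; the timer would otherwise swallow the exception.
        Console.Error.WriteLine(ex);
    }
    finally
    {
        tmr1.Start();                  // re-arm only after the work is done
    }
};
tmr1.Start();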
Multiple timers might mean multiple threads. If two timer ticks occur at the same time (i.e. one is running and another fires), those two timer callbacks will execute on separate threads, neither of which will be the main thread.
It's important to note, though, that the timers themselves don't "run" on a thread at all. The only time a thread is involved is when the timer's tick or elapsed event fires.
On another note, I strongly discourage you from using System.Timers.Timer. The timer's elapsed event squashes exceptions, meaning that if an exception escapes your event handler, you'll never know it. It's a bug hider. You should use System.Threading.Timer instead. System.Timers.Timer is just a wrapper around System.Threading.Timer, so you get the same timer functionality without the bug hiding.
See Swallowing exceptions is hiding bugs for more info.
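To illustrate that suggestion, a minimal System.Threading.Timer equivalent of the two timers in the question (a sketch; DoWork1 and DoWork2 are the question's methods, and the 5-second interval is taken from the question):

// One-shot timers that re-arm themselves, so executions of the same work
// never overlap and exceptions are not silently swallowed by the timer.
System.Threading.Timer tmr1 = null, tmr2 = null;

tmr1 = new System.Threading.Timer(_ =>
{
    try { DoWork1(); }
    finally { tmr1.Change(5000, Timeout.Infinite); }
}, null, 5000, Timeout.Infinite);

tmr2 = new System.Threading.Timer(_ =>
{
    try { DoWork2(); }
    finally { tmr2.Change(5000, Timeout.Infinite); }
}, null, 5000, Timeout.Infinite);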
Will tmr1 and tmr2 run on different threads so that DoWork1 and DoWork2 can run at the same time, i.e., concurrently?
At the start, yes. However, what guarantees that both DoWork1 and DoWork2 will finish within 5 seconds? Perhaps you know the code inside DoWorkX and assume it will finish within the 5-second interval, but it may happen that the system is under load and one of the items takes more than 5 seconds. This breaks your assumption that both DoWorkX calls start at the same time on subsequent ticks. Worse, even though your subsequent start times stay in sync, there is a danger of the current work execution overlapping with a work execution that is still running from the last tick.
If, however, you disable/enable the respective timers inside DoWorkX, your start times will drift out of sync with each other, and ultimately the two pieces of work could even end up scheduled on the same thread, one after the other. So, if you are OK with subsequent start times not being in sync, my answer ends here.
If not, this is something you can attempt:
static void Main(string[] args)
{
    var t = new System.Timers.Timer();
    t.Interval = TimeSpan.FromSeconds(5).TotalMilliseconds;
    t.Elapsed += (sender, evtArgs) =>
    {
        var timer = (System.Timers.Timer)sender;
        timer.Enabled = false; // disable till work is done

        // attempt concurrent execution
        Task work1 = Task.Factory.StartNew(() => DoWork1());
        Task work2 = Task.Factory.StartNew(() => DoWork2());

        Task.Factory.ContinueWhenAll(new[] { work1, work2 },
            _ => timer.Enabled = true); // re-enable the timer for the next iteration
    };
    t.Enabled = true;
    Console.ReadLine();
}
Kind of. First, check out the MSDN page for System.Timers.Timer: http://msdn.microsoft.com/en-us/library/system.timers.timer.aspx
The section you need to be concerned with is quoted below:
If the SynchronizingObject property is null, the Elapsed event is raised on a ThreadPool thread. If processing of the Elapsed event lasts longer than Interval, the event might be raised again on another ThreadPool thread. In this situation, the event handler should be reentrant.
Basically, this means that each Timer does not get its own thread; by default, the Elapsed handlers are run on the system ThreadPool.
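For completeness, a hedged sketch of the SynchronizingObject case the quote mentions (assuming a Windows Forms context with a hypothetical form named mainForm; with the property set, Elapsed is marshalled to the form's UI thread instead of the ThreadPool):

var timer = new System.Timers.Timer(5000);
timer.SynchronizingObject = mainForm;   // any ISynchronizeInvoke, e.g. a Form or Control
timer.Elapsed += (s, e) =>
{
    // Runs on the UI thread that owns mainForm, not on a ThreadPool thread.
    mainForm.Text = DateTime.Now.ToString();
};
timer.Start();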
If you want several things to kick off at the same time and run concurrently, you cannot just attach multiple handlers to the Elapsed event. For example, I tried this in VS2012:
static void testMethod(string[] args)
{
    System.Timers.Timer mytimer = new System.Timers.Timer();
    mytimer.AutoReset = false;
    mytimer.Interval = 3000;
    mytimer.Elapsed += (x, y) =>
    {
        Console.WriteLine("First lambda. Sleeping 3 seconds");
        System.Threading.Thread.Sleep(3000);
        Console.WriteLine("After sleep");
    };
    mytimer.Elapsed += (x, y) => { Console.WriteLine("second lambda"); };
    mytimer.Start();
    Console.WriteLine("Press any key to go to end of method");
    Console.ReadKey();
}
The output was this:
Press any key to go to end of method
First lambda. Sleeping 3 seconds
After sleep
second lambda
So it executes them consecutively, not concurrently. If you want "a bunch of things to happen" on each timer tick, you have to launch a bunch of tasks (or queue Actions to the ThreadPool) in your Elapsed handler, as sketched below. It may multi-thread them, or it may not, but in my simple example it did not.
Try my code yourself; it's simple enough to illustrate what's happening.
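A hedged sketch of the task-launching variant just described, reusing mytimer from the sample above (DoWork1 and DoWork2 stand in for whatever work you need to run):

mytimer.Elapsed += (sender, e) =>
{
    // Launch each piece of work as its own Task so they can run concurrently.
    var work1 = Task.Factory.StartNew(() => DoWork1());
    var work2 = Task.Factory.StartNew(() => DoWork2());

    // Re-arm the one-shot timer (AutoReset = false above) only when both finish.
    Task.Factory.ContinueWhenAll(new[] { work1, work2 }, _ => mytimer.Start());
};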
I have a set of 12 threads executing work (Runnable) in parallel. In essence, each thread does the following:
Runnable r;
while (true) {
    synchronized (work) {
        while (work.isEmpty()) {
            work.wait();
        }
        r = work.removeFirst();
    }
    r.run();
}
Work is added as follows:
Runnable r = ...;
synchronized (work) {
    work.add(r);
    work.notify();
}
When new work is available, it is added to the list and the lock is notified. If there is a thread waiting, it is woken up, so it can execute this work.
Here lies the problem. When a thread is woken up, it is very likely that another thread will execute this work. This happens when the latter thread is done with its previous work and re-enters the while(true)-loop. The smaller/shorter the work actions, the more likely this will happen.
This means I am waking up a thread for nothing. As I need high throughput, I believe this behavior will lower the performance.
How would you solve this? In theory, I need a mechanism which allows me to cancel a pending thread wake-up notification. Of course, this is not possible in Java.
I thought about introducing a work list for each thread. Instead of pushing the work into one single list, the work would be spread over the 12 work lists. But I believe this will introduce other problems. For example, one thread might have a lot of work pending, while another thread has none. In essence, I believe that a solution which assigns work to a particular thread in advance might become very complex and sub-optimal.
Thanks!
What you are doing is thread pooling. Take a look at the pre-Java-5 concurrency framework, the PooledExecutor class there:
http://gee.cs.oswego.edu/dl/classes/EDU/oswego/cs/dl/util/concurrent/intro.html
In addition to my previous answer - another solution. This question makes me curious.
Here, I added a check with a volatile boolean.
It does not completely avoid the situation of uselessly waking up a thread, but it helps. Actually, I do not see how this could be avoided completely without additional restrictions like "we know that after 100 ms a job will most likely be done".
volatile boolean free = false;

while (true) {
    synchronized (work) {
        free = false; // new rev.2
        while (work.isEmpty()) {
            work.wait();
        }
        r = work.removeFirst();
    }
    r.run();
    free = true; // new
}
--
synchronized (work) {
    work.add(r);
    if (!free) { // new
        work.notify();
    } // new
    free = false; // new rev.2
}
We recently adopted the TPL as the toolkit for running some heavy background tasks.
These tasks typically produce a single object that implements IDisposable, because it holds some OS handles internally.
What I want is for the object produced by the background task to be properly disposed in all cases, including when the handover coincides with application shutdown.
After some thinking, I wrote this:
private void RunOnUiThread(Object data, Action<Object> action)
{
    var t = Task.Factory.StartNew(action, data, CancellationToken.None,
                                  TaskCreationOptions.None, _uiThreadScheduler);
    t.ContinueWith(delegate(Task task)
    {
        if (!task.IsCompleted)
        {
            DisposableObject.DisposeObject(task.AsyncState);
        }
    });
}
The background Task calls RunOnUiThread to pass its result to the UI thread. The task t is scheduled on the UI thread and takes ownership of the data passed in. I was expecting that if t could not be executed because the UI thread's message pump had shut down, the continuation would run, I could see that the task had failed, and I could dispose the object myself. DisposeObject() is a helper that checks that the object is actually IDisposable and non-null prior to disposing it.
Sadly, it does not work. If I close the application after the task t is created, the continuation is not executed.
I solved this problem before. At that time I was using the ThreadPool and the WPF Dispatcher to post messages on the UI thread. It wasn't very pretty, but in the end it worked. I was hoping that the TPL was better at this scenario. It would be even better if I could somehow teach the TPL that it should Dispose all leftover AsyncState objects if they implement IDisposable.
So, the code is mainly to illustrate the problem. I want to learn about any solution that lets me safely hand over disposable objects to the UI thread from background Tasks, preferably one with as little code as possible.
When a process closes, all of its kernel handles are automatically closed. You shouldn't need to worry about this:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms686722(v=vs.85).aspx
Have a look at the RX library. This may allow you to do what you want.
From MSDN:
IsCompleted will return true when the Task is in one of the three final states: RanToCompletion, Faulted, or Canceled.
In other words, your DisposableObject.DisposeObject will never be called, because the continuation will always be scheduled after one of the above conditions has taken place. I believe what you meant to do was:
t.ContinueWith(task => DisposableObject.DisposeObject(task.AsyncState),
               TaskContinuationOptions.NotOnRanToCompletion);
(BTW you could have simply captured the data variable rather than using the AsyncState property)
However I wouldn't use a continuation for something that you want to ensure happens at all times. I believe a try-finally block will be more fitting here:
private void RunOnUiThread2(Object data, Action<Object> action)
{
    var t = Task.Factory.StartNew(() =>
    {
        try
        {
            action(data);
        }
        finally
        {
            // The data variable is captured directly, so no AsyncState is needed.
            DisposableObject.DisposeObject(data);
            // Or use a new *foreground* thread if the disposing is heavy.
        }
    }, CancellationToken.None, TaskCreationOptions.None, _uiThreadScheduler);
}
I’m trying to issue web requests asynchronously. I have my code working fine except for one thing: there doesn’t seem to be a built-in way to specify a timeout on BeginGetResponse. The MSDN examples clearly show working code, but the downside is that they all end up with a
SomeObject.WaitOne()
which, again, clearly blocks the thread. I will be in a high-load environment and can’t have blocking, but I also need to time out a request if it takes more than 2 seconds. Short of creating and managing a separate thread pool, is there something already present in the framework that can help me?
Starting examples:
http://msdn.microsoft.com/en-us/library/ms227433(VS.100).aspx
http://msdn.microsoft.com/en-us/library/system.net.httpwebrequest.begingetresponse.aspx
What I would like is a way for the async callback on BeginGetResponse() to be invoked after my timeout parameter expires, with some indication that a timeout occurred.
The seemingly obvious Timeout property is not honored on async calls.
The ReadWriteTimeout property doesn't come into play until the response returns.
A non-proprietary solution would be preferable.
EDIT:
Here's what I came up with: after calling BeginGetResponse, I create a Timer with my duration and that's the end of the "begin" phase of processing. Now either the request will complete and my "end" phase will be called OR the timeout period will expire.
To detect the race and have a single winner, I increment a "completed" counter in a thread-safe manner. If "timeout" is the first event to come back, I abort the request and stop the timer. In this situation, when "end" is called, EndGetResponse throws an error. If the "end" phase happens first, it increments the counter and the "timeout" phase forgoes aborting the request.
This seems to work like I want while also providing a configurable timeout. The downside is the extra timer object and the callbacks, which I make no effort to avoid. I see 1-3 threads processing the various portions (begin, timed out, end), so it seems like this is working. And I don't have any "wait" calls.
Have I missed too much sleep or have I found a way to service my requests without blocking?
int completed = 0;

this.Request.BeginGetResponse(GotResponse, this.Request);
this.timer = new Timer(Timedout, this, TimeOutDuration, Timeout.Infinite);

private void Timedout(object state)
{
    if (Interlocked.Increment(ref completed) == 1)
    {
        // Timeout won the race: abort the outstanding request.
        this.Request.Abort();
    }
    this.timer.Change(Timeout.Infinite, Timeout.Infinite);
    this.timer.Dispose();
}

private void GotResponse(IAsyncResult result)
{
    // The "end" phase won the race (or runs after an Abort, in which
    // case EndGetResponse would throw).
    Interlocked.Increment(ref completed);
}
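For comparison, the MSDN BeginGetResponse sample linked above handles the timeout without an explicit Timer by registering a wait on the IAsyncResult's wait handle. A hedged sketch of that pattern, reusing the 2-second timeout and the GotResponse callback from above:

IAsyncResult result = this.Request.BeginGetResponse(GotResponse, this.Request);

// Ask the thread pool to invoke the callback if the request has not
// completed within 2 seconds; nothing blocks while waiting.
ThreadPool.RegisterWaitForSingleObject(result.AsyncWaitHandle,
    (state, timedOut) =>
    {
        if (timedOut)
        {
            ((HttpWebRequest)state).Abort(); // EndGetResponse will then throw
        }
    },
    this.Request, TimeSpan.FromSeconds(2), executeOnlyOnce: true);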
You can use a BackgroundWorker to run your HttpWebRequest on a separate thread, so your main thread stays alive. That background thread will be blocked, but the main one won't be.
In this context, you can use a ManualResetEvent.WaitOne() just like in this sample: HttpWebRequest.BeginGetResponse() method.
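A hedged sketch of that suggestion (the 2-second timeout comes from the question; the request variable and everything else here is illustrative):

var worker = new BackgroundWorker();
worker.DoWork += (s, e) =>
{
    var done = new ManualResetEvent(false);
    IAsyncResult ar = request.BeginGetResponse(_ => done.Set(), null);

    // Block this background thread only; the main thread keeps running.
    if (done.WaitOne(TimeSpan.FromSeconds(2)))
    {
        e.Result = request.EndGetResponse(ar);
    }
    else
    {
        request.Abort(); // timed out
    }
};
worker.RunWorkerAsync();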
What kind of an application is this? Is it a service process / web application / console app?
How are you creating your workload (i.e. requests)? If you have a queue of work that needs to be done, you can start off 'N' async requests (with the framework for timeouts that you have built) and then, once each request completes (either with a timeout or a success), grab the next request from the queue.
This thus becomes a Producer/Consumer pattern.
So, if you configure your application to have a maximum of 'N' requests outstanding, you can maintain a pool of 'N' timers that you reuse (without disposing) between requests.
Or, alternatively, you can let the thread pool's timer-queue infrastructure manage the timers for you (System.Threading.Timer is backed by it) and reuse a timer between requests, as sketched below.
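A hedged sketch of reusing a single System.Threading.Timer between requests instead of disposing it each time (the Timedout callback and the 2-second timeout are taken from the question; the rest is assumed):

// Created once, initially disarmed.
Timer timeoutTimer = new Timer(Timedout, null, Timeout.Infinite, Timeout.Infinite);

// Arm it when a request starts:
timeoutTimer.Change(2000, Timeout.Infinite);

// Disarm (but don't dispose) it when the request completes, ready for reuse:
timeoutTimer.Change(Timeout.Infinite, Timeout.Infinite);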
Hope this helps.
Seems like my original approach is the best thing available.
If you can use async/await, then:
private async Task<WebResponse> getResponseAsync(HttpWebRequest request)
{
    var responseTask = Task.Factory.FromAsync(request.BeginGetResponse,
        ar => (HttpWebResponse)request.EndGetResponse(ar), null);

    var winner = await Task.WhenAny(responseTask, Task.Delay(new TimeSpan(0, 0, 20)));
    if (winner != responseTask)
    {
        throw new TimeoutException();
    }
    return await responseTask;
}
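A brief usage sketch of the method above (the URL, the enclosing async method, and the abort-on-timeout handling are assumptions):

// Inside some async method:
var request = (HttpWebRequest)WebRequest.Create("http://example.com/");
try
{
    using (WebResponse response = await getResponseAsync(request))
    {
        // consume the response here
    }
}
catch (TimeoutException)
{
    request.Abort(); // the underlying request is still outstanding; abort it
}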