WebRequest.BeginGetResponse and IAsyncResult.AsyncWaitHandle not working - multithreading

WebRequest.BeginGetResponse returns IAsyncResult, which has a member AsyncWaitHandle.
Initially, I thought that I could just wait on that in the initiating code. But it turns out that the event is signaled as soon as the request is made, before EndGetResponse is called rather than after. This seems unintuitive to me, but whatever.
So, I've looked for some examples out there, and there seem to be two ways of going about it.
One is simply to create a ManualResetEvent and pass that in as user state so that in the callback I can set it after EndGetResponse.
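Something like this, I imagine (a sketch assuming a WebRequest named request; error handling omitted):
var request = WebRequest.Create("http://example.com"); // hypothetical URL
var done = new ManualResetEvent(false);

request.BeginGetResponse(ar =>
{
    // EndGetResponse runs here, on the callback thread
    using (var response = request.EndGetResponse(ar))
    {
        // ... consume the response ...
    }
    // signal the event that was passed in as user state
    ((ManualResetEvent)ar.AsyncState).Set();
}, done);

done.WaitOne();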
The other is to use ThreadPool.RegisterWaitForSingleObject. Something like:
ManualResetEvent waitHandle = new ManualResetEvent(false);
ThreadPool.RegisterWaitForSingleObject(asyncResult.AsyncWaitHandle,
    new WaitOrTimerCallback((s, t) => { waitHandle.Set(); }), null, -1, true);
waitHandle.WaitOne();
That works, even if it's ugly. And looking at the MSDN documentation for BeginGetResponse, that's how its code sample does it.
My question is this: passing in a ManualResetEvent as user state seems much simpler to me. What benefit does ThreadPool.RegisterWaitForSingleObject bring?

You use that WaitHandle to wait for that Request to get a Response. When the WaitHandle gets a signal you know that a Response has arrived, and then you call EndGetResponse to actually get the Response.

Related

Kivy: adding widgets from another thread

I've been stuck on this same issue for short of a week now:
the program should add widgets based on a http request. However, that request may take some time depending on user's internet connection, so I decided to thread that request and add a spinner to indicate that something is being done.
Here lies the issue. Some piece of code:
@mainthread
def add_w(self, parent, widget):
    parent.add_widget(widget)

def add_course(self):
    # HTTP request I mentioned
    course = course_manager.get_course(textfield_text)
    courses_stack_layout = constructor_screen.ids.added_courses_stack_layout
    course_information_widget = CourseInformation(coursename_label=course.name)
    self.add_w(courses_stack_layout, course_information_widget)
    constructor_screen.ids.spinner.active = False
add_course is being called from a thread, and spinner.active is set to True before calling this function. Here's the result, sometimes: a messed-up graphical interface.
I also tried solving this with Clock.schedule_once and Clock.schedule_interval with a queue. The results were the same: sometimes it works, sometimes it doesn't. The spinner does spin while getting the request, which is great.
Quite frankly, I would've never thought that implementing a spinner would be so hard.
How to implement that spinner? Maybe another alternative to threading? Maybe another alternative to urllib to make a request?
edit: any feedback on how I should've posted this so I can get more help? Is it too long? Maybe I could've been clearer?
The problem here was simply that widgets must also be created within the main thread.
Creating another function marked with @mainthread and calling that from the threaded one solved the issue.
Thanks for those who contributed.

A request method in Java that implements long-polling

I have already written a request method in Java that sends a request to a simple server. I wrote this simple server myself, and the connection is based on sockets. When the server has the answer for the request, it will send it automatically to the client. Now I want to write a new method that behaves as follows:
if the server does not answer after a fixed period of time, then I send a new request to the server using my request method
My problem is implementing this idea. I am thinking of launching a thread whenever the request method is executed. If this thread does not hear anything for a fixed period of time, then the request method should be executed again. But how can I listen on the same socket used between that client and the server?
I am also asking if there is a simpler method that does not use threads.
Currently I am working on this idea:
1) send a request using my request method
2) launch a thread listening on the socket
3) if (no answer) { go to (1) }
   else { exit }
I have some difficulties with step 3: how can I go to (1)?
You may be able to accomplish this with a single thread using a SocketChannel and a Selector, see also these tutorials on SocketChannel and Selector. The gist of it is that you'll use long-polling on the Selector to let you know when your SocketChannel(s) are ready to read/write/etc using Selector#select(long timeout). (SocketChannel supports non-blocking, but from your problem description it sounds like things would be simpler using blocking)
SocketChannel socketChannel = SocketChannel.open();
socketChannel.connect(new InetSocketAddress("jenkov.com", 80));
// a channel must be in non-blocking mode before it can be registered with a Selector
socketChannel.configureBlocking(false);

Selector selector = Selector.open();
SelectionKey key = socketChannel.register(selector, SelectionKey.OP_READ);

// returns the number of channels ready after 5000ms; if you have
// multiple channels attached to the selector then you may prefer
// to iterate through the SelectionKeys
if (selector.select(5000) > 0) {
    SocketChannel keyedChannel = (SocketChannel) key.channel();
    // read/write the SocketChannel
} else {
    // I think your best bet here is to close and reopen the Socket
    // or to reinstantiate a new socket - depends on your Request method
}

Is there a standard pattern for verifying an async request is still needed?

In mobile apps we can't (or should not) make network requests on the main thread. We normally get the result of the request back via a callback or a closure that is executed on the main thread when the result is available. Since the user may have moved on, or the result may no longer be needed (for example, it may be an old request arriving out of order), we need to check that the action in the callback or closure should actually be executed based on the current state of the app.
In the case of iOS and swift I am planning on using closures so I am thinking of doing something like this for every request I make.
Assume I have a method that looks something like this:
func makeRequest(identifier: String, handler: (ident: String, result: ResultObject) -> Void) {
    ...
    ...
    handler(identifier, result)
}
In addition to the handler that will be called when the result is available, I will pass in the value of an identifier, which in turn will be passed to the handler when it is called. The closure will capture a reference to the identifier when the request is created, so it will be able to get the value that the reference holds at the time the handler is actually called. So it would look something like this, where ident is the value commandIdentifier had when the request was made, and commandIdentifier inside the closure is the value when the closure is actually executed.
commandIdentifier = "some unique identifier"
makeRequest(commandIdentifier) { ident, result in
    if commandIdentifier == ident {
        // do something
    } else {
        // do something else
    }
}
I don't think there is anything special here, so my question is this:
Is this a general pattern, and if so, where can I find any documentation on it?
I am particularly interested in whether there is some general way of creating the identifier and how to relate its reference in the main thread.
Also, if I am totally wrong and this is not a good approach, I would like to hear that as well.
I've used almost exactly that approach before. I use an integer identifier, and increment it when issuing a new request. That way if the pending request is superseded by a new one you can just drop the stale response on the floor.

Can the Azure Service Bus be delayed before retrying a message?

The Azure Service Bus supports a built-in retry mechanism which makes an abandoned message immediately visible for another read attempt. I'm trying to use this mechanism to handle some transient errors, but the message is made available immediately after being abandoned.
What I would like to do is make the message invisible for a period of time after it is abandoned, preferably based on an exponentially incrementing policy.
I've tried to set the ScheduledEnqueueTimeUtc property when abandoning the message, but it doesn't seem to have an effect:
var messagingFactory = MessagingFactory.CreateFromConnectionString(...);
var receiver = messagingFactory.CreateMessageReceiver("test-queue");
receiver.OnMessageAsync(async brokeredMessage =>
{
    await brokeredMessage.AbandonAsync(
        new Dictionary<string, object>
        {
            { "ScheduledEnqueueTimeUtc", DateTime.UtcNow.AddSeconds(30) }
        });
});
I've considered not abandoning the message at all and just letting the lock expire, but this would require having some way to influence how the MessageReceiver specifies the lock duration on a message, and I can't find anything in the API to let me change this value. In addition, it wouldn't be possible to read the delivery count of the message (and therefore make a decision about how long to wait for the next retry) until after the lock has already been acquired.
Can the retry policy in the Message Bus be influenced in some way, or can a delay be artificially introduced in some other way?
Careful here, because I think you are confusing the retry feature with the automatic Complete/Abandon mechanism for the OnMessage event-driven message handling. The built-in retry mechanism comes into play when a call to the Service Bus fails. For example, if you call to set a message as complete and that fails, then the retry mechanism would kick in. If you are processing a message and an exception occurs in your own code, that will NOT trigger a retry through the retry feature. Your question doesn't make it explicit whether the error is from your own code or from attempting to contact the Service Bus.
If you are indeed after modifying the retry policy that applies when an error occurs while communicating with the Service Bus, you can modify the RetryPolicy that is set on the MessageReceiver itself. There is a RetryExponential which is used by default, as well as an abstract RetryPolicy you can derive your own from.
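For illustration, a hedged sketch of swapping that policy (the back-off values are made up, not recommendations):
receiver.RetryPolicy = new RetryExponential(
    TimeSpan.FromSeconds(1),    // minimum backoff
    TimeSpan.FromSeconds(30),   // maximum backoff
    5);                         // maximum retry count
// Note: this governs retries of calls to the Service Bus itself,
// not redelivery of messages you abandon.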
What I think you are after is more control over what happens when you get an exception doing your processing, and you want to push off working on that message. There are a few options:
When you create your message handler you can set up OnMessageOptions. One of the properties is "AutoComplete". By default this is set to true, which means that as soon as processing for the message is completed the Complete method is called automatically. If an exception occurs then Abandon is automatically called, which is what you are seeing. By setting AutoComplete to false you are required to call Complete on your own from within the message handler. Failing to do so will cause the message lock to eventually run out, which is one of the behaviors you are looking for.
So, you could write your handler so that if an exception occurs during your processing you simply do not call Complete. The message would then remain on the queue until its lock runs out and then would become available again. The standard dead-lettering mechanism applies, and after x number of tries it will be put into the dead-letter queue automatically.
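A minimal sketch of that option (ProcessAsync is a hypothetical stand-in for your own processing):
var options = new OnMessageOptions
{
    AutoComplete = false,   // we decide when (or whether) to complete
    MaxConcurrentCalls = 1
};

receiver.OnMessageAsync(async brokeredMessage =>
{
    try
    {
        await ProcessAsync(brokeredMessage);   // hypothetical processing method
        await brokeredMessage.CompleteAsync();
    }
    catch (Exception)
    {
        // In real code, only swallow the exceptions you consider transient.
        // Not calling Complete or Abandon here lets the lock expire; the
        // message then reappears with DeliveryCount incremented and is
        // dead-lettered automatically after MaxDeliveryCount attempts.
    }
}, options);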
A caution with handling it this way is that any type of exception will be treated the same. You really need to think about which types of exceptions should do this and whether you really want to push off processing or not. For example, if you are calling a third-party system during your processing and it gives you an exception you know is transient, great. If, however, it gives you an error that you know will be a big problem, then you may decide to do something else in the system besides just bailing on the message.
You could also look at the "Defer" method. This method will keep the message from being processed off the queue unless it is specifically pulled by its sequence number. Your code would have to remember the sequence number value and pull it. This isn't quite what you described, though.
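Roughly, and assuming you persist the sequence number somewhere:
// Defer the message and remember where it went.
long sequenceNumber = brokeredMessage.SequenceNumber;
await brokeredMessage.DeferAsync();

// ... later, when you decide it is time to retry ...
BrokeredMessage deferred = await receiver.ReceiveAsync(sequenceNumber);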
Another option is to move away from the OnMessage, event-driven style of processing messages. While it is very helpful, you don't get a lot of control over things. Instead, hook up your own processing loop and handle the Abandon/Complete on your own. You'll also need to deal with some of the threading/concurrent-call management that the OnMessage pattern gives you. This can be more work, but you have the ultimate in flexibility.
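A rough, single-threaded sketch of such a loop (ProcessAsync and cancellationToken are placeholders; real code needs more error handling):
while (!cancellationToken.IsCancellationRequested)
{
    BrokeredMessage message = await receiver.ReceiveAsync(TimeSpan.FromSeconds(30));
    if (message == null) continue;   // no message arrived within the wait time

    try
    {
        await ProcessAsync(message);          // hypothetical processing method
        await message.CompleteAsync();
    }
    catch (Exception)
    {
        // Decide per exception type: abandon now, let the lock expire, defer, etc.
        await message.AbandonAsync();
    }
}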
Finally, I believe the reason the call you made to AbandonAsync, passing the properties you wanted to modify, didn't work is that those properties refer to metadata properties on the message, not standard properties on BrokeredMessage.
I actually asked this same question last year (implementation aside) with the three approaches I could think of from looking at the API. @ClemensVasters, who works on the SB team, responded that using Defer with some kind of re-receive is really the only way to control this precisely.
You can read my comment on his answer for a specific approach to doing it, where I suggest using a secondary queue to store messages that indicate which primary messages have been deferred and need to be re-received from the main queue. You can then control exactly how long you wait before retrying by setting the ScheduledEnqueueTimeUtc on those secondary messages.
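A hedged sketch of that flow (retryQueueClient, primaryReceiver, primaryMessage, and controlMessage are illustrative names, not anything from the API):
// 1) Defer the primary message and note its sequence number.
long deferredSequenceNumber = primaryMessage.SequenceNumber;
await primaryMessage.DeferAsync();

// 2) Put a small control message on a secondary queue, scheduled for the
//    time you actually want to retry.
var control = new BrokeredMessage(deferredSequenceNumber)
{
    ScheduledEnqueueTimeUtc = DateTime.UtcNow.AddSeconds(30)   // your backoff here
};
await retryQueueClient.SendAsync(control);

// 3) When the control message arrives, pull the deferred primary message back.
long sequenceNumber = controlMessage.GetBody<long>();
BrokeredMessage original = await primaryReceiver.ReceiveAsync(sequenceNumber);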
I ran into a similar issue where our order-picking system is legacy and goes into maintenance mode each night.
Using the ideas in this article (https://markheath.net/post/defer-processing-azure-service-bus-message), I created a custom property to track how many times a message has been resubmitted, and I manually dead-letter the message after 10 tries. If the message is under 10 retries, it clones the message, increments the custom property, and schedules the clone with the new enqueue time.
using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Newtonsoft.Json;

public class PickQueue
{
    private readonly QueueClient queueClient;

    public PickQueue()
    {
        queueClient = new QueueClient(QUEUE_CONN_STRING, QUEUE_NAME);
    }

    public async Task QueueMessageAsync(int OrderId)
    {
        string body = JsonConvert.SerializeObject(OrderId);
        var message = new Message(Encoding.UTF8.GetBytes(body));
        await queueClient.SendAsync(message);
    }

    public async Task ReQueueMessageAsync(Message message, DateTime utcEnqueueTime)
    {
        // start at 1 on the first resubmit; the indexer would throw if the key is missing
        int resubmitCount = message.UserProperties.ContainsKey("ResubmitCount")
            ? (int)message.UserProperties["ResubmitCount"] + 1
            : 1;

        if (resubmitCount > 10)
        {
            await queueClient.DeadLetterAsync(message.SystemProperties.LockToken);
        }
        else
        {
            Message clone = message.Clone();
            clone.UserProperties["ResubmitCount"] = resubmitCount;
            // schedule the clone (not the original message) for the new enqueue time
            await queueClient.ScheduleMessageAsync(clone, utcEnqueueTime);
        }
    }
}
This question asks how to implement exponential backoff in Azure Functions. If you do not want to use the built-in RetryPolicy (only available when autoComplete = false), here's the solution I've been using:
public static async Task ExceptionHandler(IMessageSession MessageSession, string LockToken, int DeliveryCount)
{
    if (DeliveryCount < Globals.MaxDeliveryCount)
    {
        var DelaySeconds = Math.Pow(Globals.ExponentialBackoff, DeliveryCount);
        await Task.Delay(TimeSpan.FromSeconds(DelaySeconds));
        await MessageSession.AbandonAsync(LockToken);
    }
    else
    {
        await MessageSession.DeadLetterAsync(LockToken);
    }
}

Where is "setTimeout" from JavaScript in Haxe?

Is there an implementation of setTimeout() and clearTimeout() in Haxe?
It's of course possible to use the Timer class, but for a one-shot execution it's not the best way, I guess.
For a one-shot execution I think that Timer.delay() is perfect. You can use the returned instance to stop the timer later:
var timer = haxe.Timer.delay(function() trace("Hello World!"), 250);
...
timer.stop();
You could also access the native setTimeout() with the js.html.Window extern:
var handle = js.Browser.window.setTimeout(function() trace("Hello World!"), 250);
...
js.Browser.window.clearTimeout(handle);
In case you're using the kha framework:
Kha modifies haxe.Timer to call kha.Scheduler, which in the end doesn't get the timestamps via setTimeout - it gets these via requestAnimationFrame().
This seems not to fire while a tab is inactive, so it doesn't behave like setTimeout() in that case.
I'm attempting a workaround, but at the moment it doesn't give the same result as a native JS setTimeout() does (I did find a workaround, which I'll present for inclusion).

Resources