I have an SSLStream ReadAsync call running on a background thread, which I need to cancel when the internet connection is lost.
int bytesRead = await this.m_sslStream.ReadAsync(this.readBuffer, 0, ReadBufferSize);
When the connection drops it takes a long time for this call to throw an exception, and the application hangs. I tried adding a check of the internet status to skip the Read call, but the thread has already started and is trying to read data when the connection is lost, so it still takes a long time to throw.
You could pass a CancellationToken to the read and cancel it when "Internet access is lost", but this is quite unusual. For one thing, it's difficult for an app to know reliably when internet access has been lost.
It's more normal to just let the call eventually fail. You can prevent application hangs by not waiting for the task to complete in that situation, e.g., by using Task.WaitAsync.
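Here is a minimal sketch of both options (Task.WaitAsync needs .NET 6 or later; the timeout value and the point at which you cancel are placeholders, and on older runtimes SslStream may not honour cancellation of a read that is already in flight):

// Option 1: pass a CancellationToken and cancel it when you decide connectivity is gone.
using var cts = new CancellationTokenSource();
// elsewhere, e.g. from whatever detects the lost connection: cts.Cancel();
try
{
    int bytesRead = await this.m_sslStream.ReadAsync(this.readBuffer, 0, ReadBufferSize, cts.Token);
}
catch (OperationCanceledException)
{
    // The read was cancelled; close the stream and clean up.
}

// Option 2: let the read fail on its own, but stop waiting for it after a while (.NET 6+).
try
{
    int bytesRead = await this.m_sslStream
        .ReadAsync(this.readBuffer, 0, ReadBufferSize)
        .WaitAsync(TimeSpan.FromSeconds(30)); // illustrative timeout
}
catch (TimeoutException)
{
    // Give up waiting; the underlying read will still fault (or complete) eventually.
}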
Related
I'm using the web sockets library from Racket, and my use case requires maintaining permanent connections, so I frequently establish new connections when the server closes one. The problem is that after some time the server spontaneously raises an internal error saying that an attempt was made to reschedule a dead thread, and the exception is raised from within another thread, not the main one. I want to catch that kind of error/exception coming from threads I have no direct control over, as I really don't have time to hack and fix whatever happens inside the library.
We are using Azure Service Bus via NServiceBus, and I am facing a problem deciding on the correct architecture for dealing with long-running tasks triggered by messages.
As is good practice, we don't want to block the message handler from returning by making it wait for long running processes (downloading a large file from a remote server), and actually doing so will cause the lock on the message to be lost with Azure SB. The plan is to respond by spawning a separate task and allow the message handler to return immediately.
However this means that the handler is now immediately available for the next message which will cause another task to be spawned and so on until the message queue is empty. What I'd like is some way to stop taking messages while we are processing (a limited number of) earlier messages. Is there an accepted pattern for this with NServiceBus and Azure Service Bus?
The following is roughly what I'd do if I were programming directly against the Azure SB:
while (true)
{
    var message = bus.Next();
    message.Complete();
    // Do long running stuff here
}
The verbs Next and Complete are probably wrong, but what happens under Azure is that Next gets a temporary lock on the message so that other consumers can no longer see it. Then you can decide whether you really want to process the message and, if so, call Complete, which removes the message from the queue entirely; failing to do so causes the message to reappear on the queue after a period of time, as Azure assumes you crashed. As dirty as this code looks, it would achieve my goals (so why not do it?), since my consumer is only going to take the next message once I'm available again (after the long-running task). Other consumers (other instances) can jump in if necessary.
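For reference, here is roughly how that loop would look against a real client (this assumes the older WindowsAzure.ServiceBus / BrokeredMessage API; the connection string and queue name are placeholders):

var client = QueueClient.CreateFromConnectionString(connectionString, "new-files");
while (true)
{
    // Receive() takes a peek-lock, so other consumers can no longer see the message.
    BrokeredMessage message = client.Receive();
    if (message == null)
        continue; // the receive timed out without a message; just wait again

    // Complete() removes the message from the queue before the long work starts,
    // so the lock can't expire and put the message back on the queue.
    message.Complete();

    // Do long running stuff here
}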
The problem is that NServiceBus adds a level of abstraction so that now handling a message is via a method on a handler class.
void Handle(NewFileMessage message)
{
    // Do work here
}
The problem is that Azure does not get the call to message.Complete() until after your work is done and the Handle method exits. This is why you need to keep the work short. However, if you exit you also signal that you are ready to handle another message. This is my catch-22.
Downloading on a background thread is a good idea. You don't want to increase the lock duration, because that's a symptom, not the problem. Your download can easily take longer than the maximum lock duration (5 minutes), and then you're back to square one.
What you can do is have an orchestrating saga for the download. The saga can monitor the download process, and when the download is completed, the background process signals completion to the saga. If the download never finishes, you can have a timeout (or multiple timeouts) to detect that and trigger a compensating action or a retry, whatever works for your business case.
Documentation on Sagas should get you going: http://docs.particular.net/nservicebus/sagas/
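As a rough sketch of the shape this could take (NServiceBus 6-style saga APIs; the FileId property on NewFileMessage and the other message types below are made up for illustration):

using System;
using System.Threading.Tasks;
using NServiceBus;

public class NewFileMessage : ICommand { public string FileId { get; set; } }
public class BeginDownload : ICommand { public string FileId { get; set; } }
public class DownloadCompleted : IMessage { public string FileId { get; set; } }
public class DownloadTimedOut { }

public class DownloadSagaData : ContainSagaData
{
    public string FileId { get; set; }
}

public class DownloadSaga : Saga<DownloadSagaData>,
    IAmStartedByMessages<NewFileMessage>,
    IHandleMessages<DownloadCompleted>,
    IHandleTimeouts<DownloadTimedOut>
{
    protected override void ConfigureHowToFindSaga(SagaPropertyMapper<DownloadSagaData> mapper)
    {
        mapper.ConfigureMapping<NewFileMessage>(m => m.FileId).ToSaga(s => s.FileId);
        mapper.ConfigureMapping<DownloadCompleted>(m => m.FileId).ToSaga(s => s.FileId);
    }

    public async Task Handle(NewFileMessage message, IMessageHandlerContext context)
    {
        Data.FileId = message.FileId;

        // Hand the actual download to a background worker/endpoint and return quickly,
        // so the incoming message's lock is released well within its duration.
        await context.Send(new BeginDownload { FileId = message.FileId });

        // Safety net in case the completion signal never arrives.
        await RequestTimeout<DownloadTimedOut>(context, TimeSpan.FromMinutes(30));
    }

    public Task Handle(DownloadCompleted message, IMessageHandlerContext context)
    {
        MarkAsComplete();
        return Task.CompletedTask;
    }

    public Task Timeout(DownloadTimedOut state, IMessageHandlerContext context)
    {
        // Download never finished: retry or run a compensating action, whatever fits the business case.
        return context.Send(new BeginDownload { FileId = Data.FileId });
    }
}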
In Azure Service Bus you can increase the lock duration of a message (the default is 30 seconds) in case handling takes a long time.
But even though you can increase the lock duration, needing to do so is generally an indication that your handler is doing too much work, which could be divided over different handlers.
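For example, a queue's lock duration can be changed with the older Microsoft.ServiceBus management API (the connection string and queue name below are placeholders; the service caps LockDuration at 5 minutes):

using System;
using Microsoft.ServiceBus;

class LockDurationConfig
{
    static void Main()
    {
        var connectionString = "<your Service Bus connection string>";
        var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

        var queueDescription = namespaceManager.GetQueue("new-files");
        queueDescription.LockDuration = TimeSpan.FromMinutes(5); // 5 minutes is the maximum allowed
        namespaceManager.UpdateQueue(queueDescription);
    }
}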
If it is critical that the file is downloaded, I would keep the download operation in the handler. That way, if the download fails, the message can be handled again and the download retried. If, however, you want to free up the handler instantly to handle more messages, I would suggest scaling out the workers that perform the download task so that the system can cope with the demand.
It's weird and I'm not sure, but some time ago I remember doing something like this:
SubscriptionClient client = SubscriptionClient.CreateFromConnectionString(this._connectionString, topicName, subscription);
BrokeredMessage message = client.Receive(TimeSpan.MaxValue);
And in those days the call returned, within a minute at most, either null or a message.
But the specific question is: I want to know the default maximum time the server (Service Bus) will wait before it returns, even if the result is null.
Also, I know TimeSpan.MaxValue is the wait time I'm specifying, but I really need to know, if I put MaxValue (far too long to just wait and find out experimentally), when Azure will actually return the message.
From an API perspective you can pass in any TimeSpan value and it will be accepted. The reasons for it returning sooner than you specified, even when there is no message, could be network glitches, service-side updates, etc.
The time you decide to put there should be based on how often you expect messages and also on when you want control back so you can cleanly shut down your client process. Say you expect messages every minute; then setting a timeout of 5 minutes and getting a null back could indicate that the system is not healthy. Also, say you are shutting down the service: you don't want to call the next receive, and you do want all the pending ones to complete, so you can limit the timeout to a couple of minutes.
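A small sketch of that idea (older BrokeredMessage API; the 2-minute wait and the shutdown flag are placeholders, not anything prescribed by the service):

var client = SubscriptionClient.CreateFromConnectionString(connectionString, topicName, subscription);

while (!shuttingDown)
{
    // Waits up to 2 minutes server-side; returns null if nothing arrived
    // (or if the call came back early for other reasons).
    BrokeredMessage message = client.Receive(TimeSpan.FromMinutes(2));
    if (message == null)
        continue; // use this as an idle/health signal, or just loop again

    // process the message, then:
    message.Complete();
}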
I get the following error:
Connection timeout. No heartbeat received.
when accessing my Meteor app (http://127.0.0.1:3000). The application has been moved over to a new PC with the same code base; the server runs fine with no errors, and I can access the MongoDB. What would cause the above error?
The problem seems to occur when the collection is larger; however, I have it running on another computer which loads the collections instantaneously. The connection to the sockjs endpoint takes over a minute and grows in size before finally failing.
Meteor's DDP implements SockJS heartbeats, which are used for long polling. This is probably due to DDP's default heartbeat timeout of 15s. The heartbeat exists to prevent connections from being closed by proxies (which can be worse), so if you access a large amount of data and the operation blocks long enough, in your case about a minute, DDP will time out and then try to reconnect. This can go on forever and the operation may never complete.
You can try, hypothetically, disconnecting and reconnecting within a short amount of time, before DDP closes the connection, and dividing the database access into shorter consecutive chunks that you pick up again on each iteration, and see if the problem persists:
// Hypothetical sketch: process the data in chunks, disconnecting and reconnecting
// between iterations so no single operation blocks long enough to time out.
while (cursorCount <= data) {
  Meteor.onConnection(dbOp);
  Meteor.setTimeout(Meteor.disconnect, 1500); // adjust timeout here
  Meteor.reconnect();
  cursorCount++;
}

function dbOp(cursorCount) {
  // database operation here
  // pick up the operation at cursorCount where the last disconnect() left off
}
However, while disconnected all live updating will stop as well, though explicitly reconnecting might make up for the smaller blocking.
See a discussion on this issue on Google Groups and Meteor Hackpad.
I have an application that uploads files to a server using HTTP requests. I'm using HttpSendRequestEx and HttpEndRequest to do the uploading, and it is working fine. The uploading is performed inside a separate thread, in which the files to be uploaded are processed one by one.
Now a new requirement is that the user should be able to cancel the upload at any time. As the uploading is performed inside a thread, I currently don't have any control over it.
So as a workaround, what I have done is this: when the user clicks the button for cancelling the upload, the following HINTERNET handles
hSession returned from InternetOpen
hConnect returned from InternetConnect
hRequest returned from HttpOpenRequest
are closed using the InternetCloseHandle function.
As these handles are used inside the thread, they are not directly accessible from outside it, so I have declared them as static members of the class in which the thread runs, so that I can access them directly when the button click occurs. By doing this, the request that is in progress gets cancelled immediately.
But I don't know whether this is a good approach, so I would like to know whether there could be any safety issues with proceeding with this logic, and whether there is a better option. Any expert advice on this would be appreciated.
Thanks in advance.
From a TCP socket point of view, this is a correct way to interrupt the request: it is perfectly legal to close a socket in order to stop and tear down a connection that is doing work on a different thread.
That said, WinInet is more than a wrapper around a socket, so we can't be sure that there's no resource or memory leak.
I would test it by writing a test program that creates a lot of such interrupted uploads and looking for resource leaks on the Performance tab of the process in Process Explorer. I would especially watch the handle count and probably the virtual memory as well.
Also, before closing the handles, you could try to finish the HTTP request by calling HttpEndRequest(), which may clean up a few more things.