I'm able to correctly set up and monitor changes to DocuSign envelopes.
However, I'm trying to get this edge case working: if my listener is not active, I want DocuSign to retry.
I've tried setting "requireAcknowledgment" to true when creating my envelope, but that doesn't seem to change anything. I can see the failures on the Connect tab of my admin panel, but they are only retried when I trigger them manually.
event_notification = DocuSign_eSign::EventNotification.new({
  :url => webhook_url,
  :includeDocuments => false,
  :envelopeEvents => [
    DocuSign_eSign::EnvelopeEvent.new({:envelopeEventStatusCode => "sent"}),
    DocuSign_eSign::EnvelopeEvent.new({:envelopeEventStatusCode => "delivered"}),
    DocuSign_eSign::EnvelopeEvent.new({:envelopeEventStatusCode => "completed"}),
    DocuSign_eSign::EnvelopeEvent.new({:envelopeEventStatusCode => "declined"}),
    DocuSign_eSign::EnvelopeEvent.new({:envelopeEventStatusCode => "voided"}),
  ],
  :loggingEnabled => true,
  :requireAcknowledgment => true # retry on failure
})

# create the envelope definition with the template_id
envelope_definition = DocuSign_eSign::EnvelopeDefinition.new({
  :status => 'sent',
  :eventNotification => event_notification,
  :templateId => template_id
})
A related thread that I've looked into: "Docusign webhook listener - is there a retry?"
The DocuSign Connect webhook system has two queuing/retry models. The standard one is the "aggregate" model; the newer one is the "send individual messages" (SIM) model.
You probably have the aggregate queuing model. Its retry procedure is:
The first retry for envelope 1 will not happen until 24 hours have passed and a subsequent message for the same configuration (for some other envelope, say envelope 2) has succeeded.
But, if an envelope 1 message fails, and then there is a different message (different event) also for envelope 1, the second message will be tried whenever its event occurs (even if less than 24 hours). If it succeeds, then the first message will never be re-sent (since it was superseded by message 2).
(Drew is partly describing the SIM retry model.)
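The aggregate retry rules above can be sketched in plain Python (this is an illustrative model of the described behavior, not the Connect API; `should_retry` and the message dicts are made-up names):

```python
from datetime import datetime, timedelta

# Sketch of the aggregate retry rules described above: a failed message
# is retried only after 24 hours have passed AND a later message for the
# same configuration has succeeded, and it is dropped entirely if a later
# message for the SAME envelope succeeds first (supersession).

RETRY_DELAY = timedelta(hours=24)

def should_retry(failed, later_messages, now):
    """failed: {'envelope_id': ..., 'failed_at': datetime};
    later_messages: [{'envelope_id': ..., 'succeeded': bool}, ...]"""
    # superseded: a later success for the same envelope means the
    # failed message is never re-sent
    if any(m["envelope_id"] == failed["envelope_id"] and m["succeeded"]
           for m in later_messages):
        return False
    # otherwise retry only once 24h have passed AND a later message
    # for this configuration succeeded
    day_passed = now - failed["failed_at"] >= RETRY_DELAY
    later_success = any(m["succeeded"] for m in later_messages)
    return day_passed and later_success
```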
To switch to the SIM model, use the eSignatures Administration tool. See the Updates section in the Account part of the navigation menu.
Connect will retry automatically once a successful publish to the same endpoint has occurred. If the first retry fails, a second will not be attempted until 24 hours have passed.
Related
I am trying to add a series of responses inside an intent handler and set a 20-minute timer that triggers a follow-up event when it ends.
So here is what I've tried:
agent.add(response_1);
//...
agent.add(response_n);
setTimeout(() => {
  console.log("Setting follow up event")
  agent.setFollowupEvent('20_MINUTES_PASSED');
}, 1200000); // 20 minutes
The log line was displayed, but my function's execution had already stopped before it: the logs show "Function execution took 26 ms, finished with status code: 200" before "Setting follow up event".
I know that each function has a 3-5 sec timeout and I understand this is why the function finished its execution, but I cannot figure out how to trigger that event after those 20 minutes...
There are two issues with this idea. First, Cloud Functions aren't meant to run for that long; you would have to use either a real server or some scheduling service. But Dialogflow doesn't let you do this anyway: webhook requests time out after a few seconds, and if you haven't sent a response by then, the agent will tell the user that your service is unavailable. You also cannot initiate a new session without the user's explicit request, presumably because developers would quickly abuse this for spam. There is thus no way to trigger an event after 20 minutes.
The closest thing to what you are looking for is probably push notifications, but they are very limited compared to follow-up events.
I'm using Azure WebJobs to poll a queue and then process the message.
Part of the message processing includes a hit to 3rd party HTTP endpoint. (e.g. a Weather api or some Stock market api).
Now, if the hit to the api fails (network error, 500 error, whatever) I try/catch this in my code, log whatever and then ... what??
If I continue .. then I assume the message will be deleted by the WebJobs SDK.
How can I:
1) Say to the SDK: please don't delete this message (so it will be retried automatically at the next queue poll, once the message is visible again).
2) Set the invisibility time value when the SDK pops a message off the queue for processing.
Thanks!
Now, if the hit to the api fails (network error, 500 error, whatever) I try/catch this in my code, log whatever and then ... what??
The WebJobs SDK behaves like this: if your method throws an uncaught exception, the message is returned to the queue with its DequeueCount property incremented by 1. Otherwise, if all is well, the message is considered successfully processed and is deleted from the queue - i.e. queue.DeleteMessage(retrievedMessage);
So don't gracefully catch the HTTP 500, throw an exception so the SDK gets the hint.
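The pattern, sketched in Python (the real SDK is .NET; `process_message`, `fetch`, and `ThirdPartyError` are illustrative names):

```python
# Sketch of the pattern: log the failure, then RE-RAISE so the framework
# sees an uncaught exception and returns the message to the queue instead
# of deleting it.

class ThirdPartyError(Exception):
    pass

def process_message(message, fetch):
    try:
        return fetch(message["api_url"])
    except ThirdPartyError as exc:
        # log whatever you need, then RE-RAISE: an uncaught exception is
        # the signal the framework uses to requeue the message
        print("third-party call failed:", exc)
        raise
```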
If I continue .. then I assume the message will be deleted by the WebJobs SDK.
From https://github.com/Azure/azure-content/blob/master/articles/app-service-web/websites-dotnet-webjobs-sdk-get-started.md#contosoadswebjob---functionscs---generatethumbnail-method:
If the method fails before completing, the queue message is not deleted; after a 10-minute lease expires, the message is released to be picked up again and processed. This sequence won't be repeated indefinitely if a message always causes an exception. After 5 unsuccessful attempts to process a message, the message is moved to a queue named {queuename}-poison. The maximum number of attempts is configurable.
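The quoted dequeue-count / poison-queue behavior can be sketched like this (plain Python lists stand in for real Azure Storage queues, and `process_one` is an illustrative helper, not an SDK API):

```python
# Sketch of the quoted behavior: each failed attempt bumps the dequeue
# count; after 5 unsuccessful attempts the message moves to the poison
# queue instead of being released for another attempt.

MAX_DEQUEUE_COUNT = 5  # the quoted default; configurable in the SDK

def process_one(queue, poison_queue, handler):
    message = queue.pop(0)
    message["dequeueCount"] += 1
    try:
        handler(message)  # success: the message is simply deleted
    except Exception:
        if message["dequeueCount"] >= MAX_DEQUEUE_COUNT:
            poison_queue.append(message)  # give up after 5 attempts
        else:
            queue.append(message)  # released to be picked up again
```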
If you really dislike the hardcoded 10-minute visibility timeout (the time the message stays hidden from consumers), you can change it. See this answer by @mathewc:
From https://stackoverflow.com/a/34093943/4148708:
In the latest v1.1.0 release, you can now control the visibility timeout by registering your own custom QueueProcessor instances via JobHostConfiguration.Queues.QueueProcessorFactory. This allows you to control advanced message processing behavior globally or per queue/function.
https://github.com/Azure/azure-webjobs-sdk-samples/blob/master/BasicSamples/MiscOperations/CustomQueueProcessorFactory.cs#L63
protected override async Task ReleaseMessageAsync(CloudQueueMessage message, FunctionResult result, TimeSpan visibilityTimeout, CancellationToken cancellationToken)
{
    // demonstrates how visibility timeout for failed messages can be customized
    // the logic here could implement exponential backoff, etc.
    visibilityTimeout = TimeSpan.FromSeconds(message.DequeueCount);
    await base.ReleaseMessageAsync(message, result, visibilityTimeout, cancellationToken);
}
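The exponential backoff that the comment in the sample alludes to could compute the timeout like this (a sketch with assumed numbers, not SDK defaults apart from the 10-minute cap mentioned earlier):

```python
# Sketch of an exponential-backoff schedule for failed messages: hide the
# message for 2^dequeueCount seconds, capped at 10 minutes.

MAX_VISIBILITY_SECONDS = 600  # cap at the 10-minute lease mentioned above

def visibility_timeout(dequeue_count):
    return min(2 ** dequeue_count, MAX_VISIBILITY_SECONDS)
```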
I am trying to store failed notifications in a DB, e.g. when the client does not have internet access. This will enable me to check from a backgroundService whether a notification is missing, and then create it from the backgroundService.
I therefore have the following, on my Azure App Service Mobile:
var notStat = await hub.SendWindowsNativeNotificationAsync(wnsToast, tag);
telemetry.TrackTrace("failure : " + notStat.Failure + " | Results : " + notStat.Results + " | State : " + notStat.State + " | Success : " + notStat.Success + " | trackingID : " + notStat.TrackingId);
The code snippet was there to test the impact from the client, but no matter what I do, the resulting log just states that the message was enqueued.
Question
So how do I detect failed notifications?
Conclusion
To sum up the discussions made to the accepted answer:
When a notification has been sent, its NotificationId and other relevant data are stored in a separate table.
The event on the client receiving the notification then sends a message to the server stating that the notification was received, and the entry is removed from the table.
The notifications that were not received by the client are then found by a background task. Every time the background task fires, e.g. every 6 hours, it retrieves all the missing notifications and creates them, so the user will not miss any notification.
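The reconciliation scheme summarized above can be sketched like this (a plain dict stands in for the "pending notifications" table; all names are illustrative, not an Azure API):

```python
# Sketch of the scheme: record every sent notification, remove entries
# the client acknowledges, and let a periodic background task collect
# whatever remains as "missed".

pending = {}  # notification_id -> payload

def on_sent(notification_id, payload):
    # server side: record every notification when it is sent
    pending[notification_id] = payload

def on_client_ack(notification_id):
    # the client reported receipt: remove the entry from the table
    pending.pop(notification_id, None)

def missed_notifications():
    # background task (e.g. every 6 hours): anything still pending was
    # never acknowledged and should be recreated on the device
    return dict(pending)
```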
The Enqueued result is expected - please refer to the troubleshooting guidance. For more insight into what happened, try setting EnableTestSend:
"result.State will simply state Enqueued at the end of the execution without any insight into what happened to your push. Now you can use the EnableTestSend boolean" (c) documentation
But be aware that when EnableTestSend is enabled, there are some limits (described on the same page; I won't copy-paste them here, to avoid future issues with outdated info).
You can use Per Message Telemetry functionality or REST API as well - Fiddler+some documentation.
And, as a follow-up questions, there were some discussions on SO i saw that you may find helpful: first and second.
And, as a last one, i would highly recommend (if you did not yet) to take a look at FAQ - it is important to know how different platforms handle the notifications, to avoid the situation when you try to debug something that was done by desing (for example, maybe, if the device is offline, and there are notifications, only the last will be delivered, etc).
I have something like this in my code
bus = Configure.With(activator)
    .Options(o => o.SimpleRetryStrategy(errorQueueAddress: configuration.GetStringSettings("ErrorQueue")))
    .Routing(r => r.TypeBased().Map<MyMessage>("endpointQueueName"))
    .Transport(a => a.UseAzureServiceBus(configuration.GetStringSettings("AzureConnectionString"), configuration.GetStringSettings("InputQueueAddress"), Rebus.AzureServiceBus.Config.AzureServiceBusMode.Standard))
    .Options(o => o.EnableMessageAuditing("auditQueueName"))
    .Start();
...
bus.Send(message);
Assuming that "endpointQueueName" and "auditQueueName" exist in my Azure Service Bus namespace: when I send a message of type MyMessage, I expect to find it in both the "endpointQueueName" queue and the "auditQueueName" queue, but this isn't happening. I find it only in the "endpointQueueName" queue.
Why?
What am I doing wrong in the configuration?
You are observing the correct behavior :)
As stated in the Message Auditing documentation messages get copied to the audit queue before the message disappears, i.e. either
when HANDLING a message
when PUBLISHING a message (because it could be published to 0 subscribers - Rebus has no way of knowing)
So if your handler (which must also have message auditing configured) properly handles the message, you should see a copy (with some extra headers) in the audit queue.
I hope that makes it clearer :)
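The auditing behavior described above can be sketched like this (plain Python, not Rebus; the extra header key is a stand-in for the headers Rebus actually adds):

```python
# Sketch of the behavior: only when the handler completes successfully is
# a copy of the message, with an extra header, appended to the audit queue.

def handle_with_auditing(message, handler, audit_queue):
    handler(message)  # an exception here means nothing is audited
    audited_copy = dict(message)
    audited_copy["audit-header"] = True  # stand-in for Rebus's extra headers
    audit_queue.append(audited_copy)
```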
I am subscribing to a channel in Pusher on my local machine using the Javascript SDK, and I don't get any error.
However, when I publish an event to that channel it is not received by the subscriber.
I've looked at Pusher's debug console and saw that the message is indeed sent, but the subscription never occurs, as the connection is somehow interrupted, apparently prior to the subscription request (i.e. I get a disconnection message, as shown in the console screenshot below).
The code is pretty boilerplate:
var pusher = new Pusher('PUSHER_KEY');
channel = pusher.subscribe('game' + game.gameId);
channel.bind('statusChange', function(game) {
  console.log("GOT PUSHER - STATUS " + game.status);
  $scope.game.status = game.status;
});
Examining the channel.subscribed property shows that the subscription failed, as it equals false. I am on the sandbox plan (max 20 connections) and am only using 2 connections.
What can disrupt the connection?
The channel object:
Console screenshot:
I don't know what the issue is exactly, but enabling logging on the client side might help you find it:
Pusher.log = function(message) {
  if (window.console && window.console.log) {
    window.console.log(message);
  }
};
There are also some resources on the website for debugging that kind of problem: http://pusher.com/docs/debugging