How does a sender app resume the session after being killed?

The User Experience guidelines state that the sender app should resume the session after it is killed and restarted. Specifically, they say: "If the sender app gets killed, it should have the Cast session context stored and be able to resume the session from that context when the sender app is restarted." A few questions:
What does "Cast session context" mean in this situation? Is this some object or objects in the Cast API that can be persisted and then restored, or something more general?
In order for this to happen without interfering with another app that may have cast to the same device while the original app was dead, the new instance of the original app must be able to query whether a given device is running the original app's receiver, some other receiver, or no receiver at all. How is this accomplished?
If the app is to resume on the same Chromecast device, then some identifier for the device must be saved as the app is being destroyed. Is the getDeviceId value in CastDevice the correct thing to store?

The preview SDK has some shortcomings that prevent a complete implementation of this feature. When the official SDK becomes available, this will be fully addressed.
FYI, in order to reconnect to your previous "state", you need to persist certain information so that next time you can identify the device/route and the session that you had initiated before. Unfortunately, as I mentioned above, the APIs that you would need for a full and clean recovery process are not all there in the preview SDK, so you can ignore this aspect of the UX Guidelines until the official release provides all the needed pieces.
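To illustrate the kind of persistence involved, here is a minimal Android sketch (Java) that stores the selected device's getDeviceId() and compares rediscovered routes against it on restart. The class name, preference keys, and callback wiring are illustrative assumptions, and, as noted above, the preview SDK does not expose everything needed for a complete recovery flow.

```java
import android.content.Context;
import android.os.Bundle;
import androidx.mediarouter.media.MediaRouter; // android.support.v7.media on older support libraries
import com.google.android.gms.cast.CastDevice;

// Hypothetical helper: persist the Cast device ID across process death.
public class CastSessionStore {
    private static final String PREFS = "cast_session";       // illustrative name
    private static final String KEY_DEVICE_ID = "device_id";  // illustrative name

    // Call from MediaRouter.Callback#onRouteSelected when the user picks a route.
    public static void save(Context ctx, MediaRouter.RouteInfo route) {
        Bundle extras = route.getExtras();
        CastDevice device = (extras == null) ? null : CastDevice.getFromBundle(extras);
        if (device != null) {
            ctx.getSharedPreferences(PREFS, Context.MODE_PRIVATE)
               .edit()
               .putString(KEY_DEVICE_ID, device.getDeviceId())
               .apply();
        }
    }

    // On restart, check each discovered route against the stored device ID.
    public static boolean matchesStoredDevice(Context ctx, MediaRouter.RouteInfo route) {
        String saved = ctx.getSharedPreferences(PREFS, Context.MODE_PRIVATE)
                          .getString(KEY_DEVICE_ID, null);
        Bundle extras = route.getExtras();
        CastDevice device = (extras == null) ? null : CastDevice.getFromBundle(extras);
        return saved != null && device != null && saved.equals(device.getDeviceId());
    }
}
```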

Related

Is it possible to trigger service bus processors when the client object is disposed?

I am calling ServiceBusClient.DisposeAsync to dispose the client object. However, the processors created from this object start throwing an exception saying the object is disposed and cannot listen anymore. Is there any way to trigger automatic closure of the processors when dispose is called? Or should I get hold of all the processors created from this client and stop them listening myself?
The short answer is no; there is intentionally no "stop processing on dispose" behavior for the processor. Your application is responsible for calling StopProcessingAsync.
More context:
The ServiceBusClient owns the connection shared by all child objects spawned from it. Closing/disposing the client will effectively dispose its children. If you attempt to invoke a service operation at that point, an error is triggered.
In the case of a processor, the application holds responsibility for calling start and stop. Because the processor is designed to be resilient, its goal is to continue to recover in the face of failures and keep trying to make forward progress until stop is called.
While it's true that the processor does understand that a disposed set of network resources is terminal, it has no way to understand your intent. Did your application close the ServiceBusClient with the intent that it would stop the associated processors? Did it close the client without realizing that there were processors still running?
Because the intent is ambiguous, we have to take the safer path. The processors will continue to run because they'll surface the exceptions to your application's error handler - which ensures that your application is made aware that there is an issue and allows you to respond in the way best for your application's needs.
On the other hand, if processing just stopped and you did not intend for that to happen, it would be much harder for your application to detect and remediate. You would just know that the processor stopped doing its job and there's a good chance that you wouldn't be able to understand why without a good deal of investigation.
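To make that shutdown responsibility concrete, here is a hedged sketch using the Java Service Bus SDK (com.azure.messaging.servicebus), which mirrors the .NET client's shape; the connection string and queue name are placeholders. The point is simply that the application stops the processor explicitly before disposing it, rather than expecting dispose to do it.

```java
import com.azure.messaging.servicebus.ServiceBusClientBuilder;
import com.azure.messaging.servicebus.ServiceBusProcessorClient;

public class ProcessorShutdown {
    public static void main(String[] args) throws InterruptedException {
        ServiceBusProcessorClient processor = new ServiceBusClientBuilder()
            .connectionString(System.getenv("SERVICEBUS_CONNECTION")) // placeholder
            .processor()
            .queueName("my-queue") // placeholder
            .processMessage(ctx -> {
                System.out.println("Received: " + ctx.getMessage().getBody());
                ctx.complete();
            })
            .processError(ctx -> {
                // If the underlying resources are disposed while the processor is
                // still running, the failure surfaces here rather than the
                // processor silently stopping.
                System.err.println("Error: " + ctx.getException());
            })
            .buildProcessorClient();

        processor.start();
        Thread.sleep(10_000); // stand-in for the application's lifetime

        // Shut down in the order the answer recommends: stop processing
        // first, then dispose the underlying resources.
        processor.stop();
        processor.close();
    }
}
```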

DocuSign - How to handle system downtime

When we perform maintenance on our server, or redeploy our external-facing REST services for DocuSign, is there a way we can lock all envelopes that are currently sitting with signers? We use Connect to process signer/document updates from DocuSign, and we don't want these requests coming through while we're under maintenance.
I've seen in the documentation that we can lock individual envelopes. Is the best route to run through each envelope that's still pending signature and temporarily lock it? This method seems very resource-intensive considering the number of consecutive API calls needed.
Connect supports exponential retries when events fail to be sent to your endpoint. How long does your system downtime take, exactly?
When your system is back up, new events should arrive in your endpoint and you can react to them accordingly. Please let us know if you see otherwise.
https://developers.docusign.com/platform/webhooks/connect/architecture
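One lightweight way to lean on that retry behavior (a sketch, not an official DocuSign pattern): have your Connect listener return a non-2xx status while you are in maintenance. Connect treats that as a failed delivery and retries with backoff, so events are redelivered once you are back up, and no envelope locking is needed. The port, path, and helper below are illustrative; Java's built-in com.sun.net.httpserver stands in for your real REST stack.

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.util.concurrent.atomic.AtomicBoolean;

public class ConnectListener {
    // Flip this on before maintenance (e.g. via an admin endpoint or config).
    static final AtomicBoolean maintenanceMode = new AtomicBoolean(false);

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/docusign/connect", exchange -> {
            if (maintenanceMode.get()) {
                // Non-2xx tells Connect the delivery failed; it will retry with
                // backoff, so events queue up instead of arriving mid-deployment.
                exchange.sendResponseHeaders(503, -1);
            } else {
                byte[] body = exchange.getRequestBody().readAllBytes();
                processEnvelopeEvent(body); // your normal handling
                exchange.sendResponseHeaders(200, -1);
            }
            exchange.close();
        });
        server.start();
    }

    static void processEnvelopeEvent(byte[] payload) { /* placeholder */ }
}
```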

Azure Service Bus ReceiveMessages with Sub processes

I thought my question was related to the post "Azure Service Bus: How to Renew Lock?", but I have already tried RenewLockAsync.
Here is the concern: I am receiving messages from the Service Bus with sessions enabled, so I get the session and then receive messages. All good. Here's the rub.
There are TWO ADDITIONAL processes to complete per message: a manual transform/harvest of the message into some other object, which is then sent out to a Kafka topic (stream). Note it's all async on top of this craziness. My team lead is insistent that the two sub-processes can just be added INTO the receive process (ReceiveAsync), finally calling session.CompleteAsync() AFTER the other two processes complete.
Well, needless to say, I'm consistently erroring with "The session lock has expired on the MessageSession. Accept a new MessageSession." with that architecture. I haven't even fleshed out the send-to-Kafka part; it's just mocked, so it's going to take longer once fleshed out.
Is it even remotely plausible to call session.CompleteAsync() AFTER the sub-processes, or should that be done when the message is successfully received, before moving on to other processing? I thought separate tasks would be more appropriate, but again he didn't dig that idea.
I appreciate all insight and opinions, thank you!
"The session lock has expired on the MessageSession. Accept a new MessageSession." indicates one of 2 things:
The lock has been open for too long, in which case calling "RenewLockAsync" before it expires would help.
The message lock has been explicitly released, through a call to CompleteAsync, AbandonAsync, DeadLetterAsync, etc. That would indicate a bug, since the lock can not be used after it has been released
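The question is written against the older .NET SDK (MessageSession, RenewLockAsync). As a hedged illustration of the first point, the Java sketch below (com.azure.messaging.servicebus) renews the session lock on a timer while the transform and Kafka sub-processes run, and settles the message only after both succeed; the queue name, renewal cadence, and the transform/sendToKafka helpers are placeholder assumptions.

```java
import com.azure.messaging.servicebus.ServiceBusClientBuilder;
import com.azure.messaging.servicebus.ServiceBusReceivedMessage;
import com.azure.messaging.servicebus.ServiceBusReceiverClient;
import com.azure.messaging.servicebus.ServiceBusSessionReceiverClient;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class SessionWithRenewal {
    public static void main(String[] args) {
        ServiceBusSessionReceiverClient sessionClient = new ServiceBusClientBuilder()
            .connectionString(System.getenv("SERVICEBUS_CONNECTION")) // placeholder
            .sessionReceiver()
            .queueName("my-session-queue") // placeholder
            .buildClient();

        ScheduledExecutorService renewer = Executors.newSingleThreadScheduledExecutor();
        try (ServiceBusReceiverClient receiver = sessionClient.acceptNextSession()) {
            // Renew the session lock on a cadence shorter than the configured
            // lock duration so slow sub-processes don't lose it.
            renewer.scheduleAtFixedRate(receiver::renewSessionLock, 20, 20, TimeUnit.SECONDS);

            for (ServiceBusReceivedMessage msg : receiver.receiveMessages(10)) {
                Object transformed = transform(msg); // sub-process 1 (placeholder)
                sendToKafka(transformed);            // sub-process 2 (placeholder)
                receiver.complete(msg);              // settle only after both succeed
            }
        } finally {
            renewer.shutdownNow();
            sessionClient.close();
        }
    }

    static Object transform(ServiceBusReceivedMessage m) { return m.getBody(); }
    static void sendToKafka(Object o) { /* mocked */ }
}
```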

What is the actual meaning, value and usage of Azure Service Bus' "at most once" delivery capability?

The Service Bus documentation states that "the At-Most-Once semantic can be supported by using session state to store the application state and by using transactions to atomically receive messages and update the session state." "Session" here appears to refer to Service Bus' messaging sessions, which include the ability to store arbitrary state. This mechanism lets you enroll state updates in transactions along with operations on messages.
I see how this can be used to reliably maintain the state of an application that is using message sessions. If you can update application state and complete a message in the same transaction, a properly-implemented app could potentially die anywhere in execution, and on resume would be guaranteed to inherit a state that results in successful, in-order continued session processing (sample code is here, though strangely it doesn't actually use transactions, although I see how it could and what that would accomplish).
What I don't see is how any of this translates to "at-most-once" delivery. Nothing about Service Bus, including updates to session state, can be enrolled in a distributed transaction. So what exactly does "at-most-once" mean, and what does it accomplish? And what distinguishing feature of Service Bus allows it to support "at-most-once" delivery when Azure Storage queues do not?
After looking at your post and reading through the doc, I realized it wasn't really explaining at-most-once.
So I reached out to the team concerned and confirmed that the doc is indeed incorrect. A PR has been raised to fix it accordingly.
Instead, sessions and transactions together provide a higher level of consistency, commonly referred to as exactly-once processing (which can't be achieved by the message broker alone, but only together with a receiver capable of deduplication).
PS: at-most-once is indeed possible by simply using the ReceiveAndDelete mode, as sketched below.
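For completeness, a minimal sketch of that PS using the Java SDK (connection string and queue name are placeholders): in ReceiveAndDelete mode the broker removes the message as it hands it over, so a crash before processing loses the message, which is exactly at-most-once.

```java
import com.azure.messaging.servicebus.ServiceBusClientBuilder;
import com.azure.messaging.servicebus.ServiceBusReceivedMessage;
import com.azure.messaging.servicebus.ServiceBusReceiverClient;
import com.azure.messaging.servicebus.models.ServiceBusReceiveMode;

public class AtMostOnceReceive {
    public static void main(String[] args) {
        // RECEIVE_AND_DELETE deletes the message from the queue as it is
        // handed to the client: if the process crashes before handling it,
        // the message is gone. That is at-most-once delivery.
        try (ServiceBusReceiverClient receiver = new ServiceBusClientBuilder()
                .connectionString(System.getenv("SERVICEBUS_CONNECTION")) // placeholder
                .receiver()
                .queueName("my-queue") // placeholder
                .receiveMode(ServiceBusReceiveMode.RECEIVE_AND_DELETE)
                .buildClient()) {
            for (ServiceBusReceivedMessage msg : receiver.receiveMessages(10)) {
                System.out.println(msg.getBody()); // no complete() needed, or possible
            }
        }
    }
}
```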

WebSphere MQ Security Authentication Exception on Unix

We have our application running on a Sun Solaris system with a local WebSphere MQ installation. The application uses bindings mode to connect to the queue manager. When trying to send a message to the local queue, the JNDI binding is successful, but we encounter a javax.jms.JMSSecurityException: MQJMS2013: invalid security authentication supplied for MQQueueManager error. On investigation we found that the userid used for authentication matches the user the application is running as, but not case-sensitively. By default, the user the application runs as is passed for authentication, and here the case-sensitive match is failing. The application server is WebLogic. Appreciate any inputs.
In order to open the local queue, the application must have first connected to the queue manager successfully. The error on the remote queue is a connection error so it is not even getting to the queue manager. This suggests that you are using different connection factories and that the second one has some differences in the connectivity parameters. First step is to reconcile those differences.
Also, an MQJMS2013 Security Error can be many things, most of which are not actually MQ issues. For example, some people store their managed objects in LDAP, and an authentication problem there will throw this error. For people who use filesystem-based JNDI, OS file permissions can cause the same thing. However, if it is an actual WMQ issue (which this appears to be), then the linked exception will contain the MQ reason code (for example, MQRC=2035). If you want to be able to better diagnose MQ (or, for that matter, any JMS transport) issues, it pays to get in the habit of printing linked exceptions.
If you are not able to resolve this issue based on this input, I would advise updating the question with details of the managed object definitions and the reason code obtained from printing the linked exceptions.
We were using createQueueConnection() on the QueueConnectionFactory to create the connection, and the issue got resolved by using createQueueConnection("", ""). The Unix userid (webA) is case-sensitive, and the application was trying to authenticate with MQ using the wrong-case userid (weba), so the MQ queue manager was rejecting the connection attempt. Can you tell us why the application was sending the wrong-case userid (weba) earlier?
Thanks,
Arun
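For reference, a hedged sketch of the two practices described above: connecting with explicit (here empty) credentials via createQueueConnection("", ""), and printing the linked exception so the MQ reason code is visible. The JNDI name is a placeholder.

```java
import javax.jms.JMSException;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class MqConnect {
    public static void main(String[] args) throws NamingException {
        Context jndi = new InitialContext(); // configured via jndi.properties
        QueueConnectionFactory factory =
            (QueueConnectionFactory) jndi.lookup("jms/MyQCF"); // placeholder name

        try {
            // Passing explicit (empty) credentials instead of the no-arg
            // overload was the fix reported above: it stops the container
            // from supplying the OS userid in the wrong case.
            QueueConnection connection = factory.createQueueConnection("", "");
            connection.start();
            connection.close();
        } catch (JMSException e) {
            // Always print the linked exception: for real MQ failures it
            // carries the reason code (e.g. MQRC 2035).
            e.printStackTrace();
            if (e.getLinkedException() != null) {
                e.getLinkedException().printStackTrace();
            }
        }
    }
}
```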
