ServiceStack SSE OnJoin and OnLeave callbacks aren't being triggered after calling SubscribeToChannelsAsync and UnsubscribeFromChannelsAsync - servicestack

I have a single ServerEventsClient object that I use to dynamically subscribe to and unsubscribe from channels as needed. Some channels are always open and are passed in the constructor; I subscribe to the other channels by calling SubscribeToChannelsAsync(). The connection is established and I am able to communicate with the other side over it (I'm using SSE as a chat), but none of our registered OnJoin handlers get called. The same is true for UnsubscribeFromChannelsAsync() and OnLeave. I tried using UpdateSubscriberAsync() and got the same results.
It's worth noting that I have NotifyChannelOfSubscriptions set to true in my ServerEventsFeature.
Could the problem be that we (un)subscribe after we initialize the ServerEventsClient object with its initial channels?

When a subscriber's channel subscription is updated after they've subscribed, it fires an onUpdate event rather than separate onJoin/onLeave events, which is why those callbacks aren't triggered here.
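So the fix is to handle onUpdate as well. As a minimal sketch using the TypeScript/JavaScript ServerEventsClient from @servicestack/client (channel names and handler bodies are illustrative; in the C# client you're using, treat the exact callback name for the onUpdate command as an assumption to verify, or watch for it in the catch-all command handler):

    // Sketch only: register onUpdate alongside onJoin/onLeave.
    import { ServerEventsClient } from "@servicestack/client";

    const client = new ServerEventsClient("/", ["home"], {
        handlers: {
            onJoin:   e => console.log("onJoin", e),    // a new subscription joins a channel
            onLeave:  e => console.log("onLeave", e),   // a subscription goes away
            onUpdate: e => console.log("onUpdate", e),  // an existing subscription's channels changed
        }
    }).start();

    // Changing channels on an already-connected subscription raises onUpdate,
    // not onJoin/onLeave (these mirror the C# SubscribeToChannelsAsync calls):
    client.subscribeToChannels("chat-room-1");
    client.unsubscribeFromChannels("chat-room-1");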

Related

TypeScript: Large memory consumption while using ZeroMQ ROUTER/DEALER

We have recently started working with the TypeScript language on an application where queued communication is expected between a server and one or more clients.
To achieve the queued communication, we are trying to use the ZeroMQ library version 4.6.0 as an npm package: npm install -g zeromq and npm install -g @types/zeromq.
The exact scenario :
The client is going to send thousands of messages to the server over ZeroMQ. The server in turn will respond with an acknowledgement message for each incoming message from the client. Based on the acknowledgement message, the client will send the next message.
ZeroMQ pattern used :
The ROUTER/DEALER pattern (we cannot use any other pattern).
Client side code :
import Zmq = require('zeromq');

let clientSocket : Zmq.Socket;
let messageQueue = [];

export class ZmqCommunicator
{
    constructor(connString : string)
    {
        clientSocket = Zmq.socket('dealer');
        clientSocket.connect(connString);
        clientSocket.on('message', this.ReceiveMessage);
    }

    // Called whenever a message arrives on the DEALER socket.
    public ReceiveMessage = (msg) => {
        var json = JSON.parse(msg.toString('utf8'));
        // Only dispatch the next queued message once an acknowledgement arrives.
        if (json.type != "error" && json.type == 'ack') {
            if (messageQueue.length > 0) {
                this.Dispatch(messageQueue.splice(0, 1)[0]);
            }
        }
    }

    public Dispatch(message) {
        clientSocket.send(JSON.stringify(message));
    }

    public SendMessage(msg: Message, isHandshakeMessage : boolean) {
        // The if branch is taken only once, for the first handshake message.
        // Every other message is queued and sent when its predecessor is acknowledged.
        if (isHandshakeMessage == true) {
            clientSocket.send(JSON.stringify(msg));
        }
        else {
            messageQueue.push(msg);
        }
    }
}
On the server side, we already have a ROUTER socket configured.
The above code is pretty straightforward. The SendMessage() function is called for thousands of messages, and the code works, but memory consumption is heavy.
Problem :
Because the behavior of ZeroMQ is asynchronous, the client has to wait for the ReceiveMessage() callback before it can send a new message to the ZeroMQ ROUTER (which is evident from the flow into the Dispatch() method).
Based on our limited knowledge of TypeScript and of using ZeroMQ with TypeScript, the problem is this: the default thread running the TypeScript code creates the 1000+ messages and hands them to SendMessage(), and it continues executing (creating and handing over more messages) after sending the first (handshake) message. SendMessage() does not actually send the data; it queues it, because we want to interpret the acknowledgement message sent by the ROUTER socket and only send the next message based on that acknowledgement. As a result, the ReceiveMessage() callback is not entered until all 1000+ messages have been created and passed to SendMessage().
In other words, the call reaches ReceiveMessage() only after the default thread has finished creating and calling SendMessage() for the 1000+ messages and has nothing further to do.
Because ZeroMQ does not provide any synchronous mechanism for sending/receiving data with ROUTER/DEALER, we had to use a queue as in the code above, via the messageQueue object.
This mechanism loads a huge messageQueue (with 1000+ messages) into memory and only starts dequeuing once the default thread finally reaches the ReceiveMessage() callback at the end. The situation will only get worse if we have 10000+ or even more messages to send.
Questions :
1. We have certainly validated this behavior, so we are sure of the understanding explained above. Is there any gap in our understanding of TypeScript and/or ZeroMQ usage?
2. Is there any concept like a blocking queue / bounded array in TypeScript that would accept a limited number of entries and block new additions until the existing ones are dequeued (which essentially means the default thread pauses its processing until the ReceiveMessage() callback dequeues entries from the queue)?
3. Is there any synchronous ZeroMQ methodology (we have used one in a similar setup for C#, where we poll on ZeroMQ and receive the data synchronously)?
4. Any leads on using multi-threading for such a scenario? We are not sure whether TypeScript supports multi-threading to a good extent.
Note : We have searched many forums and have not got any leads anywhere. The above description may contain multiple questions inside one question (against the rules of the Stack Overflow forum), but for us all of these questions are interlinked with using ZeroMQ effectively in TypeScript.
Looking forward to getting some leads from the community.
Welcome to ZeroMQ
If this is your first read about ZeroMQ, feel free to first take a five-second read about the main conceptual differences in the [ ZeroMQ hierarchy in less than a five seconds ] section.
1 ) ... Is there any gap in our understanding of TypeScript and/or ZeroMQ usage ?
While I cannot speak to the TypeScript part, let me mention a few details that may help you move forward. ZeroMQ is principally a broker-less, asynchronous signalling/messaging framework, but it has many flavours of use, and there are tools to enforce both synchronous and asynchronous cooperation between the application code and the ZeroMQ Context()-instance, which is the cornerstone of the whole service design.
The native API provides means to define whether a respective call ought to block until a message has completed its processing across the Context()-instance's boundary, or, on the very contrary, whether the call ought to obey the ZMQ_DONTWAIT flag and asynchronously return control to the caller, irrespective of the operation's (in-)completion.
As an additional trick, one may opt to configure ZMQ_SNDHWM + ZMQ_RCVHWM and other related .setsockopt() options, so as to achieve specific blocking / silent-dropping behaviours.
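For illustration, with the classic Node zeromq binding used in the question, the high-water marks could be tuned roughly like this (a sketch under the assumption that your binding version exposes setsockopt() and the ZMQ_SNDHWM / ZMQ_RCVHWM constants; verify against your binding):

    import Zmq = require('zeromq');

    const sock = Zmq.socket('dealer');
    // Cap how many messages the socket will buffer in each direction before the
    // native layer starts blocking or dropping (behaviour depends on the socket type).
    sock.setsockopt(Zmq.ZMQ_SNDHWM, 100);   // send-side high-water mark (assumed constant name)
    sock.setsockopt(Zmq.ZMQ_RCVHWM, 100);   // receive-side high-water mark (assumed constant name)
    sock.connect('tcp://127.0.0.1:5555');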
Because ZeroMQ does not provide any synchronous mechanism of sending/receiving data
Well, ZeroMQ API does provide means for a synchronous call to .send()/.recv() methods, where the caller is blocked until any feasible message could get delivered into / from a Context()-engine's domain of control.
Obviously, the TypeScript language binding/wrapper is responsible for exposing these native API services to your hands.
3 ) Is there any synchronous ZeroMQ methodology (we have used one in a similar setup for C#, where we poll on ZeroMQ and receive the data synchronously) ?
Yes, there are several such :
- the native API, if not instructed by a ZMQ_DONTWAIT flag, blocks until a message can get served
- the native API provides a Poller()-object that can .poll(); if given -1 as the duration specifier, it waits for the sought-for events, blocking the caller until such an event arrives at the Poller()-instance.
Again, the TypeScript language binding/wrapper is responsible for exposing these native API services to your hands.
... Large memory consumption ...
Well, this may signal poor care of resource management. ZeroMQ messages, once allocated, ought also to be freed where appropriate. Check your TypeScript code and the TypeScript language binding/wrapper sources to see whether the resources systematically get disposed of and freed from memory.
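Beyond the answer above, one TypeScript-side way to keep memory flat is to not pre-build the whole messageQueue at all, but to have the producing loop await each acknowledgement before creating the next message. A minimal sketch, assuming the classic zeromq Node API from the question and a single outstanding message at a time (AckedSender and its members are illustrative names, not an existing API):

    import Zmq = require('zeromq');

    // Illustrative helper (not an existing API): send one message and resolve
    // once the matching 'ack' arrives, so the producer never queues ahead.
    class AckedSender {
        private sock = Zmq.socket('dealer');
        private pendingAck: (() => void) | null = null;

        constructor(connString: string) {
            this.sock.connect(connString);
            this.sock.on('message', (msg: Buffer) => {
                const json = JSON.parse(msg.toString('utf8'));
                if (json.type === 'ack' && this.pendingAck) {
                    const resolve = this.pendingAck;
                    this.pendingAck = null;
                    resolve();                          // unblock the awaiting producer
                }
            });
        }

        send(message: object): Promise<void> {
            return new Promise(resolve => {
                this.pendingAck = resolve;
                this.sock.send(JSON.stringify(message));
            });
        }
    }

    // Producer loop: only one message exists in memory at any time.
    async function produce(sender: AckedSender, count: number) {
        for (let i = 0; i < count; i++) {
            await sender.send({ seq: i, payload: 'data' });  // waits for the ack before building the next message
        }
    }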

Azure WebJobs getting initialized randomly

We have webjobs consisting of several methods in a single Functions.cs file. They have Service Bus triggers on topics/queues, so they keep listening on the topic/queue for a BrokeredMessage. As soon as a message arrives, our processing logic does a lot of work. But we find that sometimes all the webjobs suddenly get reinitialized. I found a few articles saying that webjobs do get reinitialized and that this is usual.
But I'm not sure whether that is the whole story, and whether we can prevent the reinitialization. We call brokeredMessage.Complete() as soon as we get the BrokeredMessage, since we do not want it to keep being processed again and again.
Also, we have a few webjobs in one app service and a few webjobs in another app service, and we find that all the webjobs from both app services get reinitialized at the same time. Not sure why?
You should design your process to be able to deal with occasional disconnects and failures, since this is a "feature" of applications living in the cloud.
Use a transaction to manage the critical area of your code.
Pseudo/commented code below, and a link to the Microsoft documentation is here.
var msg = receiver.Receive();
using (var scope = new TransactionScope())
{
    // Do whatever work is required.
    // Starting with computation and business logic.
    // Finishing with any persistence or new message generation,
    // giving your application the best chance of success.
    // Keep in mind that all BrokeredMessage operations are enrolled in
    // the transaction. They will all succeed or fail.
    // If you have multiple data stores to update, you can use brokered messages
    // to send new individual messages to do the operation on each store,
    // giving eventual consistency.
    msg.Complete();   // mark the message as done
    scope.Complete(); // declare the transaction done
}

Rebus - Send delayed message to another queue (Azure ServiceBus)

I have a website and a webjob, where the website is a one-way client and the webjob is the worker.
I use the Azure ServiceBus transport for the queue.
I get the following error:
InvalidOperationException: Cannot use ourselves as timeout manager
because we're a one-way client
when I call bus.Defer from the website's bus.
Since Azure Service Bus has built-in support for a timeout manager, shouldn't this work even from a one-way client?
The documentation on Bus.Defer says: "Defers the delivery of the message by attaching a header to it and delivering it to the configured timeout manager endpoint (defaults to be ourselves). When the time is right, the deferred message is returned to the address indicated by the header."
Could I fix this by setting the ReturnAddress like this:
headers.Add(Rebus.Messages.Headers.ReturnAddress, "webjob-worker");
Could I fix this by setting the ReturnAddress like this: headers.Add(Rebus.Messages.Headers.ReturnAddress, "webjob-worker");
Yes :)
The problem is this: when you await bus.Defer(...) a message with Rebus, it defaults to returning the message to the input queue of the sender.
When you're a one-way client, you don't have an input queue, and thus there is no way for you to receive the message after the timeout has elapsed.
Setting the return address fixes this, although I admit the solution does not exactly reek of elegance. A nicer API would be if Rebus had a Defer method on its routing API, which could be called like this:
var routingApi = bus.Advanced.Routing;
await routingApi.Defer(recipient, TimeSpan.FromSeconds(10), message);
but unfortunately it does not have that method at the moment.
To sum it up: Yes, setting the return address explicitly on the deferred message makes a one-way client capable of deferring messages.

Instantiate DeviceClient with IoT Hub

I have a console app which sends commands directly to a Raspberry Pi via Azure IoT Hub. It all works fine.
Where I get confused though, is on the two different ways (possibly more?) to instantiate DeviceClient.
Ex:
deviceClient = DeviceClient.Create(IOT_HUB_HOST_NAME,
    AuthenticationMethodFactory.CreateAuthenticationWithRegistrySymmetricKey(IOT_HUB_DEVICE, IOT_DEVICE_KEY),
    TransportType.Http1);
or
deviceClient = DeviceClient.CreateFromConnectionString(IOT_HUB_CONN_STRING);
seem to do the same thing.
Why would I use one over the other? I can receive messages either way.
Yes, at the end of the day they have the same result.
https://github.com/Azure/azure-iot-sdks/blob/master/csharp/device/Microsoft.Azure.Devices.Client/DeviceClient.cs
The Create(...) method invokes IotHubConnectionStringBuilder.Create(...) and then CreateFromConnectionString(...), and its description says it is the method that creates a DeviceClient from individual parameters.
So I believe Create is a kind of wrapper that takes the parameters, builds the connection string from the individual params, and passes that to CreateFromConnectionString(...). So the main difference, I think, will be about performance.

Persist queue: serialize/deserialize queue object in node-amqp

I'm using the node-amqp module to manage RabbitMQ subscriptions. Specifically, I'm assigning an exclusive/private queue to each user/session and providing binding methods through the REST interface, i.e. "bind my queue to this exchange/routing_key pair" and "unbind my queue from this exchange/routing_key pair".
The challenge here is to avoid keeping a reference to the queue object in memory (say, in an object with module-wide scope).
Simply retrieving the queue itself from the connection each time I need it, proved difficult, since the queue object keeps tabs on bindings internally, probably to avoid violating the following from the amqp 0.9.1 reference:
The client MUST NOT attempt to unbind a queue that does not exist. Error code: not-found
I tried to simply set the queue object as a property on a session object using connect-mongo, since it uses JSON.stringify/JSON.parse on its properties. Unfortunately, the queue object fails to "stringify" due to a circular structure.
What is the best practice for persisting a queue object from the node-amqp module? Is it possible to serialize/deserialize?
I would not try to store the queue object; instead, store a unique name for the queue. After that, whenever you want to perform operations on the queue, you have two options:
If you have a previously opened "channel" to the queue, you should be able to do:
queue = connection.queues[name];
where connection is the node-amqp connection to RabbitMQ.
If you don't have a channel opened on your connection with RabbitMQ, just open the channel again:
connection.queue(queueName, options, function (queue) {
    // for example, do the unbind here
});
I am also using a REST interface to manage RabbitMQ. My connection object maintains all the queues, channels, etc., so only the first time I use a queue do I call connection.queue; subsequent requests just retrieve the queue through connection.queues.
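Putting both options together, a small helper along these lines keeps only the queue name in the session and looks the queue object up on demand (a sketch; getOrOpenQueue is a name I'm introducing, not part of node-amqp, and the queue options are illustrative):

    // Sketch: resolve a node-amqp queue object from its name on every request,
    // so only the name (a string) needs to be persisted in the session.
    function getOrOpenQueue(connection: any, queueName: string): Promise<any> {
        return new Promise(resolve => {
            const existing = connection.queues && connection.queues[queueName];
            if (existing) {
                resolve(existing);                      // channel already open on this connection
            } else {
                connection.queue(queueName, { exclusive: true }, (queue: any) => {
                    resolve(queue);                     // channel (re)opened
                });
            }
        });
    }

    // Usage inside a REST handler, e.g. "bind my queue to this exchange/routing_key pair":
    // const queue = await getOrOpenQueue(connection, req.session.queueName);
    // queue.bind(exchangeName, routingKey);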
