How do I create a channel in Pusher?

I could not find any explicit information about the creation of channels with Pusher.
Is that simply an implicit action when either subscribing on the client or pushing events on the server?
class HelloController < ApplicationController
  def hello
    # Does this create a channel named 'test-channel'?
    Pusher['test-channel'].trigger('test_event', { :hello => 'world' })
  end
end
If so is there a limit to the number of channels available?
The reason for my question is that I'd like to create a unique channel for every user, and then close that channel down once the client side has disconnected.
But probably that is not really a good idea ;-)
thanks

Channels are really just a way of routing or filtering data. They exist by simply being subscribed to or having data published to them. So, it is an implicit action.
There are no limits to the number of channels you use and a unique channel per user is a nice solution for targeted messaging.
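The implicit-creation semantics can be sketched with a tiny in-memory model (Python here purely for illustration; the channel names are made up): a channel "exists" the moment someone subscribes to it or publishes to it, with no separate create step.

```python
from collections import defaultdict

class Hub:
    def __init__(self):
        # defaultdict means any channel name is valid the first time
        # it is referenced -- no explicit creation step required.
        self.channels = defaultdict(list)   # name -> subscriber callbacks

    def subscribe(self, channel, callback):
        self.channels[channel].append(callback)

    def trigger(self, channel, event, data):
        # Publishing to a channel nobody has subscribed to is fine;
        # the message is simply dropped.
        for cb in self.channels[channel]:
            cb(event, data)

hub = Hub()
received = []
# a unique channel per user, e.g. "private-user-42"
hub.subscribe("private-user-42", lambda e, d: received.append((e, d)))
hub.trigger("private-user-42", "test_event", {"hello": "world"})
print(received)
```

This also shows why per-user channels are cheap: an unused channel is nothing more than a name nobody is currently routing through.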

Related

Is there an inbuilt or canonical way in Rust to ‘publish’ state?

Is there any canonical way in Rust to publish a frequently updating ‘state’ such that any number of consumers can read it without providing access to the object itself?
My use case is that I have a stream of information coming in via web socket and wish to have aggregated metrics available to consume by other threads. One could do this externally with something like Kafka, and I could probably roll my own internal solution but wondering if there is any other method?
An alternative which I’ve used in Go is to have consumers register themselves with the producer and each receive a channel, with the producer simply publishing to each channel separately. There will generally be a low number of consumers so this may well work, but wondering if there’s anything better.
It sounds like you want a "broadcast channel".
If you're using async, the popular tokio crate provides an implementation in their sync::broadcast module:
A multi-producer, multi-consumer broadcast queue. Each sent value is seen by all consumers.
A Sender is used to broadcast values to all connected Receiver values. Sender handles are clone-able, allowing concurrent send and receive actions. [...]
[...]
New Receiver handles are created by calling Sender::subscribe. The returned Receiver will receive values sent after the call to subscribe.
If that doesn't quite suit your fancy, there are other crates that provide similar types that can be found by searching for "broadcast" on crates.io.
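The broadcast semantics themselves are language-agnostic; here is a minimal Python sketch of what tokio's `sync::broadcast` (or the Go pattern you describe) boils down to — each receiver has its own queue, and a send copies the value to every queue, so receivers created after a send only see later values.

```python
import queue

class BroadcastSender:
    def __init__(self):
        self._receivers = []

    def subscribe(self):
        # Each receiver gets its own queue, so consumers read at their
        # own pace without stealing values from each other.
        q = queue.SimpleQueue()
        self._receivers.append(q)
        return q

    def send(self, value):
        for q in self._receivers:
            q.put(value)

tx = BroadcastSender()
rx1 = tx.subscribe()
tx.send("a")          # only rx1 exists yet, so only rx1 sees "a"
rx2 = tx.subscribe()
tx.send("b")          # both receivers see "b"
first, second, late = rx1.get(), rx1.get(), rx2.get()
print(first, second, late)
```

The real tokio type adds bounded capacity and lag handling (slow receivers get an error rather than unbounded memory growth), which this sketch omits.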

How to populate MQTT topic list dynamically on node js

I am using mqtt-node to receive subscribed messages. The problem is that the list of topics to subscribe to is appended to through an API, but the MQTT connection does not pick up the appended topics while it is subscribed to the other topics. Please advise or suggest a suitable way to solve this issue.
There is no topic list.
The only way to discover what topics are in use is to either maintain a list external to the broker or to subscribe to a wildcard and see what messages are published.
It's important to remember that topics only really exist at the moment a message is published to one. Subscribers supply a list of patterns (they can include wildcards like + or #) to match against those published topics and any matching messages are delivered.
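A simplified sketch of that matching rule (Python for illustration): `+` matches exactly one topic level, `#` matches everything from its position onward. Real brokers have a few extra rules this omits (e.g. `a/#` also matching the parent `a`, and `$`-prefixed topics being excluded from wildcards).

```python
def topic_matches(pattern, topic):
    """Simplified MQTT topic matching: '+' = one level, '#' = rest."""
    p_parts = pattern.split("/")
    t_parts = topic.split("/")
    for i, p in enumerate(p_parts):
        if p == "#":
            return True                      # matches everything from here
        if i >= len(t_parts):
            return False                     # topic ran out of levels
        if p != "+" and p != t_parts[i]:
            return False                     # literal level mismatch
    return len(p_parts) == len(t_parts)      # no trailing topic levels

print(topic_matches("test/+", "test/1"))        # True
print(topic_matches("test/#", "test/1/status")) # True
print(topic_matches("test/+", "test/1/status")) # False
```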
You maintain an array of Topics
var topics = [
  "test/1",
  "test/2",
  "test/3"
]
When a new Topic arrives via the API, you will need to first unsubscribe from the existing Topics
client.unsubscribe(topics)
then add the new Topic
topics.push(newTopic)
then re-subscribe
client.subscribe(topics)
This is what worked best for me when I have this use case.
Keep in mind that in the time between unsubscribing and re-subscribing, messages could be published that your client would not see, because it was not subscribed at the time. This is easy to overcome if you can use the RETAIN flag on your publishers... but in some use cases this isn't practical.
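The unsubscribe/append/re-subscribe sequence above can be wrapped in one helper. This is a hedged sketch against a hypothetical client object whose `subscribe()`/`unsubscribe()` take a topic list (as in the answer); the `FakeClient` just records calls so the ordering is visible.

```python
class FakeClient:
    """Stand-in for an MQTT client; records calls instead of networking."""
    def __init__(self):
        self.calls = []
    def subscribe(self, topics):
        self.calls.append(("subscribe", list(topics)))
    def unsubscribe(self, topics):
        self.calls.append(("unsubscribe", list(topics)))

def add_topic(client, topics, new_topic):
    # 1. drop the old subscriptions, 2. grow the list, 3. re-subscribe
    client.unsubscribe(topics)
    topics.append(new_topic)
    client.subscribe(topics)

client = FakeClient()
topics = ["test/1", "test/2"]
add_topic(client, topics, "test/3")
print(client.calls)
```

Note the window between the two recorded calls — that is exactly where the missed-message risk described above lives.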

Unique identifier for Spring Integration

I have a pub/sub queue in Spring Integration. Once a message is put on the queue, I can see that a new message ID is generated, and a different message ID for each of the subscribers. I want to use the initial unique message ID as a unique identifier while the message flows through the various microservice subscribers. Can I get the original message ID from each of the subscribers?
Also if I had multiple spring integration instances writing the messages into a single kafka queue, would message ID be unique?
I think Kafka deserves its own SO question. Re. the same id for all the subflows: how about applySequence = true for the PublishSubscribeChannel? Each message copy will then be sent with the sequence-details headers, where the IntegrationMessageHeaderAccessor.CORRELATION_ID is an exact copy of the original message's id.
The point of messaging is that each new message really should be a new, unique object. That way each message is a stand-alone entity that doesn't affect the others and may not even know about their existence. Statelessness is one of the consistency goals of messaging per se.
Therefore, if you would like to carry some identifier across all the messages, you should consider using some other header, not the id. For this purpose the framework already provides a conventional mechanism called correlation and sequence details: https://docs.spring.io/spring-integration/docs/5.0.4.RELEASE/reference/html/messaging-channels-section.html#channel-configuration-pubsubchannel
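The idea behind that correlation header can be sketched outside Spring (Python, illustrative names only): each subscriber's copy gets a fresh id, but every copy is stamped with a correlationId equal to the original message's id — which is roughly what applySequence=true arranges for a PublishSubscribeChannel.

```python
import uuid

def publish(subscribers, payload):
    """Deliver a copy of the message to each subscriber.

    Every copy is a brand-new message (fresh id), but all copies share
    the original id in the correlationId header.
    """
    original_id = str(uuid.uuid4())
    copies = []
    for handler in subscribers:
        copy = {
            "id": str(uuid.uuid4()),       # each copy is a new message
            "correlationId": original_id,  # shared, stable identifier
            "payload": payload,
        }
        handler(copy)
        copies.append(copy)
    return copies

seen = []
publish([seen.append, seen.append], "hello")
print(seen[0]["id"] != seen[1]["id"],
      seen[0]["correlationId"] == seen[1]["correlationId"])
```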

Service Fabric actors that receive events from other actors

I'm trying to model a news post that contains information about the user that posted it. I believe the best way is to send user summary information along with the message to create a news post, but I'm a little confused how to update that summary information if the underlying user information changes. Right now I have the following NewsPostActor and UserActor
public interface INewsPostActor : IActor
{
    Task SetInfoAndCommitAsync(NewsPostSummary summary, UserSummary postedBy);
    Task AddCommentAsync(string content, UserSummary postedBy);
}

public interface IUserActor : IActor, IActorEventPublisher<IUserActorEvents>
{
    Task UpdateAsync(UserSummary summary);
}

public interface IUserActorEvents : IActorEvents
{
    void UserInfoChanged();
}
Where I'm getting stuck is how to have the INewsPostActor implementation subscribe to events published by IUserActor. I've seen the SubscribeAsync method in the sample code at https://github.com/Azure/servicefabric-samples/blob/master/samples/Actors/VS2015/VoiceMailBoxAdvanced/VoicemailBoxAdvanced.Client/Program.cs#L45 but is it appropriate to use this inside the NewsPostActor implementation? Will that keep an actor alive for any reason?
Additionally, I have the ability to add comments to news posts, so should the NewsPostActor also keep a subscription to each IUserActor for each unique user who comments?
Events may not be what you want to be using for this. From the documentation on events (https://azure.microsoft.com/en-gb/documentation/articles/service-fabric-reliable-actors-events/)
Actor events provide a way to send best effort notifications from the Actor to the clients. Actor events are designed for Actor-Client communication and should NOT be used for Actor-to-Actor communication.
It's worth considering notifying the relevant actors directly, or having an actor/service that manages this communication.
Service Fabric Actors do not yet support a Publish/Subscribe architecture. (see Azure Feedback topic for current status.)
As already answered by charisk, Actor-Events are also not the way to go because they do not have any delivery guarantees.
This means, the UserActor has to initiate a request when a name changes. I can think of multiple options:
From within IUserAccount.ChangeNameAsync() you can send requests directly to all NewsPostActors (assuming the UserAccount holds a list of his posts). However, this would introduce additional latency since the client has to wait until all posts have been updated.
You can send the requests asynchronously. An easy way to do this would be to set a "NameChanged"-property on your Actor state to true within ChangeNameAsync() and have a Timer that regularly checks this property. If it is true, it sends requests to all NewsPostActors and sets the property to false afterwards. This would be an improvement to the previous version, however it still implies a very strong connection between UserAccounts and NewsPosts.
A more scalable solution would be to introduce the "Message Router"-pattern. You can read more about this pattern in Vaughn Vernon's excellent book "Reactive Messaging Patterns with the Actor Model". This way you can basically setup your own Pub/Sub model by sending a "NameChanged"-Message to your Router. NewsPostActors can - depending on your scalability needs - subscribe to that message either directly or through some indirection (maybe a NewsPostCoordinator). And also depending on your scalability needs, the router can forward the messages either directly or asynchronously (by storing it in a queue first).
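A bare-bones sketch of that router option (Python, all names illustrative): the UserActor sends a single "NameChanged" message to a router, and the router forwards it to every interested party. In a real Service Fabric solution the handlers would be actor proxies and the forwarding could be made durable by enqueueing instead of calling directly.

```python
from collections import defaultdict

class MessageRouter:
    def __init__(self):
        self._routes = defaultdict(list)    # message type -> handlers

    def subscribe(self, message_type, handler):
        self._routes[message_type].append(handler)

    def route(self, message_type, payload):
        # Forwarding is synchronous here; a scalable variant would
        # push into a queue and deliver asynchronously.
        for handler in self._routes[message_type]:
            handler(payload)

router = MessageRouter()
updated = []
# two hypothetical NewsPostActors interested in name changes
router.subscribe("NameChanged", lambda m: updated.append(("post-1", m["name"])))
router.subscribe("NameChanged", lambda m: updated.append(("post-2", m["name"])))
router.route("NameChanged", {"userId": 42, "name": "Alice"})
print(updated)
```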

Distributed pub/sub with single consumer per message type

I have no clue if it's better to ask this here, or over on Programmers.SE, so if I have this wrong, please migrate.
First, a bit about what I'm trying to implement. I have a node.js application that takes messages from one source (a socket.io client), and then does processing on the message, which might result in zero or more messages back out, either to the sender, or other clients within that group.
For the processing, I would like to essentially just shove the message into a queue, then it works its way through various message processors that might kick off their own items, and eventually, the bit running socket.io is informed "Hey, send this message back"
As a concrete example, say a user signs into the service. That sign-in message is placed in the queue, where the authorization processor gets it, does its thing, then places a message back in the queue saying the client has been authorized. This goes back to the socket.io socket that is connected to the client, along with other clients that might be interested. It can also go to other subsystems that might want to do more processing on authorization (looking up user info, sending more info to the client based on their data, etc).
If I wanted strong coupling, this would be easy, but I tried that before, and it just turns into a mess of fragile spaghetti code, which I would like to avoid. Another wrench in the setup is that this should be cluster-able, which is where the real problem comes in. There might be more than one, say, authorization processor running, but each authorization message should be processed only once.
So, in short, I'm looking for a pattern/technique that will allow me to, essentially, have multiple "groups" of subscribers for a message, and the message will be processed only once per group.
I thought about maybe having each instance of a processor generate a unique name that would be used as a list in Redis. This name would then be registered with some sort of dispatch handler and placed into a set for that group of subscribers. When a message arrives, the dispatcher pulls a random member out of that set and pushes the message onto that member's list. While it seems like this would work, it also seems over-complicated and fragile.
The core problem is I've never designed a system like this, so I'm not even sure the proper terms to use or look up. If anyone can point me in the right direction for this, I would be most appreciative.
I think what you're describing is similar to the https://www.getbridge.com/ service. I tried it but ended up writing my own based on ZeroMQ; it allows you to register services (req/rep) and channels, which are pub/sub workers.
As for the design, I used client -> broker -> services & channels, all plug-and-play using auto-discovery: the services register their schema with the brokers, which open a TCP connection so that brokers on other servers can communicate with that broker group's services. Internal services and clients then connect via Unix sockets or IPC channels, whichever is preferred.
I ended up wrapping around the redis publish/subscribe functions a bit to do this. Each type of message processor gets a "group name", and there can be multiple instances of the processor within that group (so multiple instances of the program can run for clustering).
When publishing a message, I generate an incremental ID, then store the message in a string key with that ID, then publish the message ID.
On the receiving end, the first thing the subscriber does is attempt to add the message ID it just got from the publisher into a set of received messages for that group with sadd. If sadd returns 0, the message has already been grabbed by another instance, and it just returns. If it returns 1, the full message is pulled out of the string key and sent to the listener.
Of course, this relies on redis being single threaded, which I imagine will continue to be the case.
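The claim step can be simulated without a real Redis (Python sketch; a plain set stands in for Redis, and the group/instance names are made up): every instance in a group sees the published message id, but only the first SADD returns 1 and gets to process the message.

```python
class FakeRedis:
    """Minimal stand-in for Redis: just enough for SADD semantics."""
    def __init__(self):
        self.sets = {}

    def sadd(self, key, member):
        s = self.sets.setdefault(key, set())
        if member in s:
            return 0        # already claimed by another instance
        s.add(member)
        return 1            # this instance claimed it

processed = []

def on_message(redis, group, instance, msg_id, store):
    # First instance in the group to SADD the id wins the message.
    if redis.sadd("received:" + group, msg_id) == 0:
        return
    processed.append((instance, store[msg_id]))

r = FakeRedis()
store = {"1": "auth-request"}      # full message stored under its id
# three clustered instances of the same processor group all receive id "1"
for instance in ("a", "b", "c"):
    on_message(r, "auth", instance, "1", store)
print(processed)
```

In real Redis the atomicity of SADD (one command on a single-threaded server) is what makes the race-free claim work; the simulation only shows the control flow.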
What you might be looking for is an AMQP protocol implementation, where you can have queues bound to custom exchanges and implement a pub/sub model.
RabbitMQ is a popular AMQP implementation with lots of client libraries, including one for node.js.
