As we know, MQTT uses a subscribe/publish model. How can a user save messages to a database when using the MQTT protocol? Do HiveMQ or Mosquitto support a database, so I can see previous data recorded from my sensor?
If MQTT can work with a database, what other methods are there besides using an Apache web server?
MQTT is a pub/sub protocol; it is purely for delivering messages. What happens to those messages once they are delivered is not the concern of the protocol.
If you want to store all messages then you are going to need to implement that yourself.
You can do this in one of two ways:
A dedicated client that subscribes to # and stores every message to a database (see the sketch after this list).
Some brokers have a plugin API that allows you to register hooks that can intercept every message and store it to a database.
You will have to research whether any broker you want to use supports plugins of this nature.
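For the dedicated-client approach, here is a minimal sketch in Node.js using the mqtt npm package; the broker URL and the saveMessage function are assumptions standing in for your own broker and database write:

// Dedicated archiving client: subscribes to everything and stores each message
const mqtt = require('mqtt');

const client = mqtt.connect('mqtt://localhost:1883');

client.on('connect', () => {
  // '#' is the MQTT wildcard matching every topic on the broker
  client.subscribe('#');
});

client.on('message', (topic, payload) => {
  // Store every message with its topic and a timestamp
  saveMessage({ topic, payload: payload.toString(), receivedAt: new Date() });
});

function saveMessage(doc) {
  // Placeholder: replace with your real database insert (e.g. a MongoDB insertOne)
  console.log('would store:', doc);
}

Because the client subscribes to #, it sees every message the broker delivers, so the history you query later comes from your database, not from the broker.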
I want to make an app that lets users comment and send messages. However, the notifications for these events have to arrive instantly, just like in any other social-media or chat application. This is what I'm thinking of:
Web frontend: Angular; mobile: Ionic with Angular
Backend: Node, Mongo
Now, this is how I was thinking I'd implement real-time notifications.
There's a constant socket connection between the frontend (web & mobile-app) and the backend.
Whenever a message arrives, targeted to a specific user, I'll use some kind of a Mongo-hook to send the notification to the frontend via the socket connection.
Now, the confusion with this approach is:
Would millions of socket connections work at scale, at all? If not, what is the way to implement this pub-sub kind of system? I need to do it from scratch, not using Firebase.
What if a user is offline when he receives the message in the backend? If the socket is not on, how would he get the message? Is there a way to do it using Kafka? Please explain if you have some ideas on this.
Is this the correct approach? If not, can you suggest what would be appropriate?
Would millions of socket connections work at scale, at all? If not, what is the way to implement this pub-sub kind of system? I need to do it from scratch, not using Firebase.
Yes, it can work at scale; you just have to design your architecture for it. You might find these useful (a sketch follows the links):
Scalable architecture for socket.io
https://socket.io/docs/v3/using-multiple-nodes/
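As a rough sketch of what the multi-node setup in those links looks like, using the socket.io-redis adapter (the port numbers and event names are assumptions):

// Each Node.js instance runs this; the Redis adapter relays events between
// instances, so an emit on one instance reaches sockets connected to another
const io = require('socket.io')(3000);
const redisAdapter = require('socket.io-redis');

io.adapter(redisAdapter({ host: 'localhost', port: 6379 }));

io.on('connection', (socket) => {
  socket.on('comment', (data) => {
    // This broadcast reaches clients on every instance, not just this one
    io.emit('notification', data);
  });
});

You then put a load balancer (with sticky sessions) in front of the instances, which is how the linked docs get to millions of connections.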
What if a user is offline when he receives the message in the backend? If the socket is not on, how would he get the message?
If the socket is not on or the user is offline, the client socket will be disconnected. At that point notifications will not be received, so whenever the user comes back online you'll have to make an API call to fetch the missed notifications and reconnect to the socket for further operations.
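A minimal client-side sketch of that reconnect flow; the /api/notifications endpoint and the showNotification helper are assumptions:

// Browser side: catch up over HTTP on (re)connect, then resume the socket
const socket = io('https://api.example.com');

socket.on('connect', async () => {
  // Fetch whatever arrived while this client was offline (hypothetical endpoint)
  const missed = await fetch('/api/notifications?unread=true').then((r) => r.json());
  missed.forEach(showNotification);
});

// Live notifications while the socket is connected
socket.on('notification', showNotification);

function showNotification(n) {
  console.log('notification:', n);
}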
Is there a way to do it using Kafka?
Yes, you can also do it with Kafka. You'll need the Consumer API (subscriber) and the Producer API (publisher); a sketch follows the links below.
https://kafka.apache.org/documentation/#api
https://www.npmjs.com/package/kafka-node
Sending Apache Kafka data on web page
What do you use Apache Kafka for?
Real time notification with Kafka and NodeJS
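Here is a minimal sketch with kafka-node (the package linked above); the broker address, topic name, and event shape are assumptions:

const kafka = require('kafka-node');

// Producer: the backend publishes a notification event to Kafka
const producerClient = new kafka.KafkaClient({ kafkaHost: 'localhost:9092' });
const producer = new kafka.Producer(producerClient);

producer.on('ready', () => {
  const event = JSON.stringify({ userId: 42, text: 'You have a new message' });
  producer.send([{ topic: 'notifications', messages: event }], (err) => {
    if (err) console.error(err);
  });
});

// Consumer: a separate process reads events and hands them to the socket layer
const consumerClient = new kafka.KafkaClient({ kafkaHost: 'localhost:9092' });
const consumer = new kafka.Consumer(consumerClient, [{ topic: 'notifications' }], { autoCommit: true });

consumer.on('message', (message) => {
  const event = JSON.parse(message.value);
  // Hand off to your socket layer here, e.g. emit to the user's socket if connected,
  // otherwise persist it for the catch-up API call described above
  console.log('deliver:', event);
});

The useful property here is that Kafka retains the events, so a consumer that was down can pick up where it left off, which pairs nicely with the offline-user flow above.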
I am working on a client/server Node + React application that displays logs. I want the user to have a constant stream of data flowing in, or at least appear to. Is the most efficient way to use a WebSocket in Node and just connect the client to it?
If you're only streaming data from the server to the client, and that data is text, there is no need for Web Sockets.
Server-Sent Events (SSE) and the EventSource API are a simpler choice. They are specifically designed to update the client as things happen, which sounds like a good fit for your use case. Clients remain connected, auto-reconnect if the connection is lost, and can resume from where they left off.
Web Sockets are more appropriate for when you want bi-directional data streaming.
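A minimal SSE sketch in Node.js; the /logs route and the one-second interval are assumptions standing in for your real log source:

// Server: a plain Node.js http server streaming log lines as SSE
const http = require('http');

http.createServer((req, res) => {
  if (req.url !== '/logs') {
    res.writeHead(404);
    return res.end();
  }

  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    'Connection': 'keep-alive',
  });

  // Stand-in for your real log source; each SSE message ends with a blank line
  const timer = setInterval(() => {
    res.write(`data: log line at ${new Date().toISOString()}\n\n`);
  }, 1000);

  req.on('close', () => clearInterval(timer));
}).listen(3000);

On the client, the browser's built-in EventSource handles the connection and the auto-reconnect for you: new EventSource('/logs').onmessage = (e) => console.log(e.data);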
I can see only one-way communication in the Pusher docs, i.e., from server to client. How can I do it from client to server with Node.js?
Pusher Channels does not support bidirectional transport. If you need to send data from your client to your server you will have to use another solution such as a POST request.
Channels does offer webhooks which can be triggered by certain events in the application and could be consumed by your server if they fit your requirements. However, webhooks are designed to keep you informed of certain events within your application rather than as a means of communication between client and server.
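A minimal sketch of that split, with the downstream leg over Channels and the upstream leg over plain HTTP; the channel, event, and /messages endpoint names are assumptions:

// Browser: receive events from Pusher Channels (one-way, server -> client)
const pusher = new Pusher('APP_KEY', { cluster: 'eu' });
const channel = pusher.subscribe('my-channel');
channel.bind('my-event', (data) => console.log('received:', data));

// Client -> server goes over a plain POST instead (hypothetical endpoint)
fetch('/messages', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ text: 'hello from the client' }),
});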
I would like to implement an MQTT server in Haskell.
I already have a HTTP REST server made in Haskell and would like to add some MQTT endpoints to that server.
For instance, there is an endpoint POST /foo, allowing users to send some information that will be stored in a Mongo DB. I would like to add an MQTT endpoint: if someone performs a PUBLISH on topic "/foo", the data will be stored in the same Mongo database, using the same internal functions as the POST.
Similarly for SUBSCRIBE, the data should come from the backend database.
I saw http://hackage.haskell.org/package/mqtt-0.1.1.0
and
https://github.com/lpeterse/haskell-hummingbird
But I'm not sure whether they are usable as libraries to create the endpoints with specific callbacks.
So this is a two-fold question:
Any feedback on implementing MQTT endpoints in Haskell?
Is merging an HTTP and an MQTT server a good idea?
After some investigation, here are my findings on MQTT in Haskell:
The first library I found was http://hackage.haskell.org/package/mqtt-hs. However, it is buggy and no longer maintained.
I'm now using http://hackage.haskell.org/package/net-mqtt, which works well.
I also understood that I didn't need to make an MQTT server: I just needed to develop a client! My MQTT client subscribes on a standard MQTT broker (Mosquitto) and sinks the data it receives into my database.
Another pain point of MQTT is authentication/authorization. My server uses Keycloak for access control, while Mosquitto uses a static ACL file. I solved this problem by developing an authorization proxy for MQTT: the proxy sits in front of Mosquitto and filters the requests based on Keycloak's decisions.
I am trying to build a generic publish/subscribe server with Node.js and node_redis that receives requests from a browser with a channel name and responds with any data that has been published to that channel. To do this, I am using long-polling requests from the browser and handling each request by sending a response when a message is received on its channel.
For each new request, an object is created for subscribing to the channel (if and only if it does not already exist):
var redis = require('redis');
var clients = {};
// when a request comes in, subscribe only if this channel has no client yet
if (!clients[channel]) {
  clients[channel] = redis.createClient();
  clients[channel].subscribe(channel);
}
Is this the best way to deal with the subscription channels, or is there some other, more intuitive way?
I don't know what your design is, but you can subscribe with one Redis client to multiple channels (once a client has subscribed, it can only subscribe to further channels or unsubscribe within that connection: http://redis.io/commands/subscribe). When you receive a message you get full information about which channel it came from, so you can distribute it to all interested clients yourself.
This helped me, because I could put the message type in the channel name and then dynamically choose an action for each message in one small function, instead of creating a separate subscription with separate logic for each channel.
Inside my node.js server I have only two Redis clients (a sketch follows this list):
a simple client for all standard actions (LPUSH, SADD, and so on), and
a subscriber client, which listens for messages on the subscribed channels; these messages are then distributed to all sessions (stored as sets per channel type) using the first client.
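A sketch of that two-client layout, using the node_redis v3-style event API (the channel names and the dispatch table are assumptions):

const redis = require('redis');

// One connection for normal commands, one dedicated to subscriptions,
// since a subscribed connection cannot issue other commands
const commands = redis.createClient();
const subscriber = redis.createClient();

// The message type is encoded in the channel name
subscriber.subscribe('chat:message');
subscriber.subscribe('chat:presence');

// Hypothetical dispatch table: one handler per message type
const handlers = {
  message: (msg) => { /* distribute to sessions via `commands`, e.g. SMEMBERS then write */ },
  presence: (msg) => { /* update presence state */ },
};

// A single handler dispatches on the channel name
subscriber.on('message', (channel, message) => {
  const type = channel.split(':')[1];
  handlers[type](message);
});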
I would like to point you to my post about pub/sub using socket.io together with Redis. Socket.io is a very good library:
How to use redis PUBLISH/SUBSCRIBE with nodejs to notify clients when data values change?
I think the design is very simple and it should also be very scalable.
That seems like a pretty reasonable solution to me. What don't you like about it?
Something to keep in mind is that you can have multiple subscriptions on each Redis connection. This might end up complicating your logic, which is the opposite of what you are asking for. However, at scale this might be necessary. Each Redis connection is relatively inexpensive, but it does require a file descriptor and some memory.
Complete Redis Pub/Sub Example (Real-time Chat using Hapi.js & Socket.io)
We were trying to understand Redis Publish/Subscribe ("Pub/Sub") and all the existing examples were either outdated, too simple or had no tests.
So we wrote a Complete Real-time Chat using Hapi.js + Socket.io + Redis Pub/Sub Example with End-to-End Tests!
https://github.com/dwyl/hapi-socketio-redis-chat-example
The Pub/Sub component is only a few lines of node.js code:
https://github.com/dwyl/hapi-socketio-redis-chat-example/blob/master/lib/chat.js#L33-L40
Rather than pasting it here (without any context), we encourage you to check out and try the example.
We built it using Hapi.js, but the chat.js file is decoupled from Hapi and can easily be used with a basic node.js HTTP server or Express, etc.