How to automate an API GET data request when using WebSockets? - node.js

As far as I know, WebSockets allow bi-directional communication, and WebSocket connections (for example, Socket.IO) are always open. So whenever new data arrives, it should automatically be pushed to the view via the socket.
But in the code below I am using setInterval to make an http.get call, and that call runs once every second.
Doing this does not give a real-time feel: the new data is pulled once every second, at a statically defined interval.
In short, I want to automate what setInterval does in the code below. I don't want a static fetch-interval value, because at times the stock price could change within 100 ms and at other times it might change only once in a few seconds.
If I set the interval to 1 second, i.e. make a call every second, the real feel of high fluctuation in market movement would not be visible.
I am not sure how developers usually fetch data in IoT applications. For example, a car is monitored in real time: say the speed of the car is fetched in real time and graphed on a web or mobile application.
How do I achieve something similar in a stock ticker? I want to simply plug the application into an API and, when new data arrives, instantly push it to all the viewers (subscribers) in real time.
Code below
////
// CONFIGURATION SETTINGS
////
var FETCH_INTERVAL = 1000;
var PRETTY_PRINT_JSON = true;

////
// START
////
var express = require('express');
var http = require('http');
var https = require('https');
var io = require('socket.io');
var cors = require('cors');

function getQuote(socket, ticker) {
  https.get({
    port: 443,
    method: 'GET',
    hostname: 'www.google.com',
    path: '/finance/info?client=ig&q=' + ticker,
    timeout: 1000
  }, function(response) {
    response.setEncoding('utf8');
    var data = '';

    response.on('data', function(chunk) {
      data += chunk;
    });

    response.on('end', function() {
      if (data.length > 0) {
        var dataObj;

        try {
          // the feed prefixes its JSON with "// ", so skip the first 3 characters
          dataObj = JSON.parse(data.substring(3));
        } catch (e) {
          return false;
        }

        socket.emit(ticker, dataObj[0].l_cur);
      }
    });
  });
}
I am calling getQuote repeatedly, at the FETCH_INTERVAL set above:
function trackTicker(socket, ticker) {
  // run the first time immediately
  getQuote(socket, ticker);

  // every N seconds
  var timer = setInterval(function() {
    getQuote(socket, ticker);
  }, FETCH_INTERVAL);

  socket.on('disconnect', function() {
    clearInterval(timer);
  });
}
var app = express();
app.use(cors());

var server = http.createServer(app);
var io = io.listen(server);
io.set('origins', '*:*');

app.get('/', function(req, res) {
  res.sendfile(__dirname + '/index.html');
});

io.sockets.on('connection', function(socket) {
  socket.on('ticker', function(ticker) {
    trackTicker(socket, ticker);
  });
});

server.listen(process.env.PORT || 4000);
Edits - Update
Okay, so I would need a real-time feed. (This bit is sorted.)
As far as I know, real-time feeds are quite expensive, and buying 10,000+ endpoints, one per online client, is quite expensive.
1) How do I use a real-time feed to serve 1000s of end users? Can I use web sockets, Redis, publish/subscribe, broadcasting, or some technology that copies the real-time feed to tonnes of users? I want an efficient solution, because I want to keep the expense of the real-time data feed as low as possible.
How do I tackle that issue?
2) Yes, I understand polling needs to be done on the server side and not on the client side (to avoid polling once per client). But then what tech do I need to use? WebSockets, Redis, pub/sub, etc.?
I have an API URL and a token to access the API.
3) I don't just need to fetch the data and push it to end users. I will also need to do some computation on the fetched data, pull data from Redis or a database as well, run calculations on it, and then push it to the view.
For example:
1) data I get from the real-time market feed: {"a":10, "b":20}
2) data I get from the DB or Redis: {"x":2, "y":4}
3) do the computation: z = a * x + b * y
4) finally, push the value of z to the view.
How do I do all of this in real time and at the same time push it to multiple clients?
Can you share a roadmap with me? I have the first piece of the puzzle: getting the real-time data feed.

1) How do I use a real-time feed to serve 1000s of end users? Can I use web sockets, Redis, publish/subscribe, broadcasting, or some technology that copies the real-time feed to tonnes of users? I want an efficient solution, because I want to keep the expense of the real-time data feed as low as possible.
How do I tackle that issue?
To "push" data to browser clients, you would want to use a webSocket or socket.io (built on top of webSockets). Then, anytime your server knows there's an update, it can immediately send that update to any currently connected client that is interested in that info. The basic idea is that the client connects to your server as soon as the web page is loaded and keeps that connection open for as long as the web page(s) are open.
2) Yes, I understand polling needs to be done on the server side and not on the client side (to avoid polling once per client). But then what tech do I need to use? WebSockets, Redis, pub/sub, etc.?
It isn't clear to me what exactly you're asking about here. You will get updated prices using whatever the most efficient technology is that is offered by your provider. If all they provide is http calls, then you have to poll regularly using http requests. If they provide a webSocket interface to get updates, then that would be preferable.
There are lots of choices for how to keep track of which clients are interested in which pieces of information and how to distribute the updates. For a single server, you could easily build your own with just a Map of stock prices where the stock symbol is the key and an array of client identifiers is the value in the Map. Then, any time you get an update for a given stock, you just fetch the list of client IDs that are interested in that stock and send the update to them (over their webSocket/socket.io connection).
This is also a natural pub/sub type of application, so any one of the backends that support pub/sub would work just fine too. You could even use an EventEmitter where you .emit(stock, price) and each separate connection adds a listener for the stock symbols it is interested in.
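For illustration, here is a minimal sketch of that EventEmitter approach, assuming an io socket.io server is already in scope; the priceFeed emitter and the 'subscribe'/'price' event names are made up for the example:
const EventEmitter = require('events');

// one shared emitter; every provider update is emitted keyed by its stock symbol
const priceFeed = new EventEmitter();
priceFeed.setMaxListeners(0); // many sockets may listen to the same symbol

io.on('connection', (socket) => {
  const listeners = new Map(); // symbol -> handler, so we can clean up later

  socket.on('subscribe', (symbol) => {
    if (listeners.has(symbol)) return; // already subscribed
    const handler = (price) => socket.emit('price', { symbol, price });
    listeners.set(symbol, handler);
    priceFeed.on(symbol, handler);
  });

  socket.on('disconnect', () => {
    // remove this socket's listeners so the emitter doesn't leak
    for (const [symbol, handler] of listeners) {
      priceFeed.removeListener(symbol, handler);
    }
  });
});

// wherever the server receives an update from the provider:
// priceFeed.emit('AAPL', 187.32);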
For multiple servers at scale, you'd probably want to use some external process that manages the pub/sub process. Redis is a candidate for that.
3) I don't just need to fetch the data and push it to end users. I will also need to do some computation on the fetched data, pull data from Redis or a database as well, run calculations on it, and then push it to the view.
I don't really see what question there is here. Pick your favorite database to store the info you need to fetch so you can get it upon demand.
How do I do all of this in real time and at the same time push it to multiple clients? Can you share a roadmap with me? I have the first piece of the puzzle: getting the real-time data feed.
Real-time data feed.
Database to store the metadata used for your calculations.
Some pub/sub system, either home-built or from a pre-built package.
Then, follow this sequence of events.
Client signs in, connects a webSocket or socket.io connection.
Server accepts client connection and assigns a clientID and keeps track of the connection in some sort of Map between clientID and webSocket/socket.io connection. FYI, socket.io does this automatically for you.
Client tells the server which items it wants to monitor (probably a message sent over the webSocket/socket.io connection).
Server registers that interest in the pub/sub system (essentially subscribing the client to each item it wants to monitor).
Other clients do the same thing.
Each time a client requests data on a specific item, the server makes sure it is getting updates for that item (however the server gets its updates).
Server gets new info for some item that one or more clients are interested in.
New data is sent to pub/sub system and pub/sub system broadcasts that information to those clients that were interested in info on that particular item. The details of how that works depend upon what pub/sub system you choose and how it notifies subscribers of a change, but eventually a message is sent over webSocket/socket.io for the item that has changed.
When a client disconnects, their pub/sub subscriptions are "unsubscribed".
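A minimal single-server sketch of that sequence, using socket.io rooms as the pub/sub layer (the 'watch' and 'update' event names and the onFeedUpdate hook are assumptions for illustration, not part of any particular API):
const app = require('express')();
const server = require('http').createServer(app);
const io = require('socket.io')(server);

io.on('connection', (socket) => {
  // steps 1-4: client connects, then declares interest;
  // a socket.io room per item acts as the subscription
  socket.on('watch', (item) => socket.join(item));
  socket.on('unwatch', (item) => socket.leave(item));
  // step 9: socket.io removes a socket from all rooms on disconnect automatically
});

// steps 7-8: when the server learns of new data (however your provider delivers it),
// broadcast it to everyone subscribed to that item
function onFeedUpdate(item, value) {
  io.to(item).emit('update', { item, value });
}

server.listen(4000);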

Related

Send data from websocket to front end - Nodejs, Expressjs

I'm working on a project that uses the Binance API to create an interface to make day trading cryptos easier.
The call to their API looks like this:
binance.websockets.candlesticks(['BNBBTC'], "1m", function(candlesticks) {
  let { e: eventType, E: eventTime, s: symbol, k: ticks } = candlesticks;
  let { o: open, h: high, l: low, c: close, v: volume, n: trades, i: interval,
        x: isFinal, q: quoteVolume, V: buyVolume, Q: quoteBuyVolume } = ticks;
  console.log(symbol + " " + interval + " candlestick update");
  console.log("open: " + open);
  console.log("high: " + high);
  console.log("low: " + low);
  console.log("close: " + close);
  console.log("volume: " + volume);
  console.log("isFinal: " + isFinal);
});
It seems to be returning data at a fixed interval, so I'm skeptical as to whether it's actually real time, but regardless, I'm wondering how to send this data to the front end as it comes in.
Currently, I'm doing this with the static data:
router.get('/interface', function(req, res) {
  binance.candlesticks("BNBBTC", "5m", function(ticks, symbol) {
    console.log("candlesticks()", ticks);
    let last_tick = ticks[ticks.length - 1];
    let [time, open, high, low, close, volume, closeTime, assetVolume,
         trades, buyBaseVolume, buyAssetVolume, ignored] = last_tick;
    console.log(symbol + " last close: " + close);
    res.render('interface', { ticks: ticks });
  });
});
I've messed with socket.io in the past, but am unsure how to utilize it. Any help would be much appreciated! And please hmu if you're interested in cryptos. We are putting together a group in discord to share our research, and trading strategies.
To initiate the data-sending process from the backend (instead of the frontend requesting data), you should use WebSockets (Socket.IO, as you mentioned).
To do that, first start a Socket.IO server in your Express app by wrapping the http/https server or the Express app.
Then, from the frontend, initiate a socket.io-client.
Next, your frontend client should establish a connection with the server using the connect method of socket.io-client. This fires an event on the server with the socket connection.
Finally, the server can use that socket connection to send any amount of data to the client. (You might need to save the connection for later use.)
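A rough, untested sketch of those steps, wired to the candlestick stream from the question (it assumes the binance client from the question and a made-up 'candle' event name):
const express = require('express');
const app = express();
const server = require('http').createServer(app); // wrap the http server
const io = require('socket.io')(server);

// forward each candlestick update to every connected client as it arrives
binance.websockets.candlesticks(['BNBBTC'], '1m', function(candlesticks) {
  const { s: symbol, k: ticks } = candlesticks;
  io.emit('candle', { symbol, ticks });
});

server.listen(3000);

// front end (browser), using the socket.io client script:
// const socket = io();
// socket.on('candle', (data) => updateChart(data)); // updateChart is hypothetical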
I'm trying to do basically the same thing. What Discord group are you talking about?

Node app that fetches, processes, and formats data for consumption by a frontend app on another server

I currently have a frontend-only app that fetches 5-6 different JSON feeds, grabs some necessary data from each of them, and then renders a page based on said data. I'd like to move the data fetching / processing part of the app to a server-side node application which outputs one simple JSON file which the frontend app can fetch and easily render.
There are two noteworthy complications for this project:
1) The new backend app will have to live on a different server than its frontend counterpart
2) Some of the feeds change fairly often, so I'll need the backend processing to constantly check for changes (every 5-10 seconds). Currently with the frontend-only app, the browser fetches the latest versions of the feeds on load. I'd like to replicate this behavior as closely as possible
My thought process for solving this took me in two directions:
The first is to set up an Express application that uses setTimeout to constantly check for new data to process. This data is then sent as the response to a simple GET request:
const express = require('express');
let app = express();

let processedData = {};
const port = process.env.PORT || 3000; // `port` was undefined in the original; value assumed

const getData = () => {...} // returns a promise that fetches and processes data

/* use an immediately invoked function with setTimeout to fetch the data
 * when the program starts and then once every 5 seconds after that */
(function refreshData() {
  getData().then((data) => { // note: getData must be invoked, not just referenced
    processedData = data;
  });
  setTimeout(refreshData, 5000);
})();

app.get('/', (req, res) => {
  res.send(processedData);
});

app.listen(port, () => {
  console.log(`Started on port ${port}`);
});
I would then run a simple get request from the client (after properly adjusting CORS headers) to get the JSON object.
My questions about this approach are pretty generic: Is this even a good solution to this problem? Will this drive up hosting costs based on processing / client GET requests? Is setTimeout a good way to have a task run repeatedly on the server?
The other solution I'm considering would deal with setting up an AWS Lambda that writes the resulting JSON to an s3 bucket. It looks like the minimum interval for scheduling an AWS Lambda function is 1 minute, however. I imagine I could set up 3 or 4 identical Lambda functions and offset them by 10-15 seconds, however that seems so hacky that it makes me physically uncomfortable.
Any suggestions / pointers / solutions would be greatly appreciated. I am not yet a super experienced backend developer, so please ELI5 wherever you deem fit.
A few pointers.
Use cron tasks for periodic processing of data. This is far preferable, especially if you are formatting a lot of data.
Don't set up multiple Lambda functions for the same task. It's going to be messy to maintain all those functions.
After processing/fetching the feed, you can store the JSON file on your own server or in S3. Note that if it's S3, you are paying for and waiting on a network operation. You can read the file from your Express app and just send the response back to your clients.
Depending on the file size and your server load, you might want to add a caching server so that you can cache the response until new JSON data is available.
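For illustration, a small sketch of those pointers using the node-cron package to refresh a cached JSON blob (the schedule, route, and fetchAndProcessFeeds helper are assumptions; node-cron accepts an optional seconds field in its expressions):
const cron = require('node-cron');
const express = require('express');

const app = express();
let cachedFeed = null; // the latest processed JSON, served to every client

async function buildFeed() {
  // fetch the upstream JSON feeds, merge, and format (hypothetical helper)
  cachedFeed = await fetchAndProcessFeeds();
}

cron.schedule('*/10 * * * * *', buildFeed); // refresh every 10 seconds
buildFeed(); // prime the cache at startup

app.get('/feed.json', (req, res) => {
  if (!cachedFeed) return res.status(503).json({ error: 'warming up' });
  res.json(cachedFeed);
});

app.listen(3000);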

why is performance of redis+socket.io better than just socket.io?

I earlier had all my code in a socket.io + node.js server. I recently converted all the code to redis + socket.io + node.js after noticing slow performance when too many users sent messages across the server.
So, why was socket.io alone slow? Because it is not multi-threaded, it handles one request or emit at a time.
What Redis does is distribute these requests or emits across channels. Clients subscribe to different channels, and when a message is published on a channel, all the clients subscribed to it receive the message. It does this via this piece of code:
sub.on("message", function (channel, message) {
client.emit("message",message);
});
The client.on("message", function() {}) handler takes it from there to publish messages to the different channels.
Here is a brief piece of code explaining what I am doing with Redis:
io.sockets.on('connection', function(client) {
  var pub = redis.createClient();
  var sub = redis.createClient();

  sub.on("message", function(channel, message) {
    client.emit('message', message);
  });

  client.on("message", function(msg) {
    if (msg.type == "chat") {
      pub.publish("channel." + msg.tousername, msg.message);
      pub.publish("channel." + msg.user, msg.message);
    }
    else if (msg.type == "setUsername") {
      sub.subscribe("channel." + msg.user);
    }
  });
});
As redis stores the channel information, we can have different servers publish to the same channel.
So, what I don't understand is: if sub.on("message") is called every time a request or emit is sent, why is Redis supposed to give better performance? I suppose even the sub.on("message") method is not multi-threaded.
As you might know, Redis allows you to scale with multiple node instances. So the performance actually comes after the fact. Utilizing the pub/sub method is not faster; it's technically slower, because you have to communicate with Redis for every pub/sub signal. The "better performance" only really materializes when you start to scale out horizontally.
For example, say you have one node instance (a simple chat room) that can handle a maximum of 200 active users. You are not using Redis yet because there is no need. Now, what if you want 400 active users? Using your example above, you can now reach that 400-user mark, which is a "performance increase" in the sense that you can now handle more users, but not really a speed increase. If that makes sense. Hope this helps!
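For reference, the usual way to do that scale-out with classic socket.io is the socket.io-redis adapter, which moves the cross-process fan-out into Redis so you don't manage per-socket subscriber clients yourself. A sketch, assuming an existing http server named server (adapter details vary by socket.io version):
const io = require('socket.io')(server);
const redisAdapter = require('socket.io-redis');

// all socket.io processes attached to the same Redis see each other's broadcasts,
// so io.to(room).emit() reaches clients connected to any node process
io.adapter(redisAdapter({ host: '127.0.0.1', port: 6379 }));

io.on('connection', (socket) => {
  socket.on('setUsername', (user) => socket.join('channel.' + user));
  socket.on('chat', (msg) => {
    io.to('channel.' + msg.tousername).emit('message', msg.message);
  });
});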

Websocket transport reliability (Socket.io data loss during reconnection)

Used
NodeJS, Socket.io
Problem
Imagine there are 2 users U1 & U2, connected to an app via Socket.io. The algorithm is the following:
U1 completely loses Internet connection (ex. switches Internet off)
U2 sends a message to U1.
U1 does not receive the message yet, because the Internet is down
Server detects U1 disconnection by heartbeat timeout
U1 reconnects to socket.io
U1 never receives the message from U2 - it is lost on Step 4 I guess.
Possible explanation
I think I understand why it happens:
at Step 4, the server kills the socket instance and the queue of messages to U1 as well
moreover, at Step 5, U1 and the server create a new connection (the old one is not reused), so even if the message were still queued, the previous connection is lost anyway.
Need help
How can I prevent this kind of data loss? I have to use heartbeats, because I don't want people to hang in the app forever. Also, I must still allow reconnects, because when I deploy a new version of the app I want zero downtime.
P.S. The thing I call a "message" is not just a text message I could store in a database, but a valuable system message whose delivery must be guaranteed, or the UI screws up.
Thanks!
Addition 1
I do already have a user account system. Moreover, my application is already complex. Adding offline/online statuses won't help, because I already have this kind of stuff. The problem is different.
Check out step 2. At this step we technically cannot say whether U1 has gone offline; he has just lost his connection, let's say for 2 seconds, probably because of bad internet. So U2 sends him a message, but U1 doesn't receive it because his internet is still down (step 3). Step 4 is needed to detect offline users; let's say the timeout is 60 seconds. Eventually, after another 10 seconds, U1's internet connection comes back up and he reconnects to socket.io. But the message from U2 is lost in space, because on the server U1 was disconnected by timeout.
That is the problem: I want 100% delivery.
Solution
Collect each emit (emit name and data) in a per-user object {}, identified by a random emitID. Send the emit.
Confirm the emit on the client side (send an emit back to the server with the emitID).
If confirmed, delete the object identified by emitID from {}.
If the user reconnected, check {} for this user and loop through it, executing Step 1 for each object in {}.
When disconnected or/and connected, flush {} for the user if necessary.
// Server
const pendingEmits = {};

socket.on('reconnection', resendAllPendingEmits); // re-send everything still unconfirmed
socket.on('confirm', (emitID) => { delete pendingEmits[emitID]; });

// Client
socket.on('something', (data, emitID) => {
  socket.emit('confirm', emitID); // acknowledge, so the server can drop it from pendingEmits
});
Solution 2 (kinda)
Added 1 Feb 2020.
While this is not really a solution for WebSockets, someone may still find it handy. We migrated from WebSockets to SSE + Ajax. SSE allows a client to keep a persistent TCP connection to the server and receive messages in real time. To send messages from the client to the server, simply use Ajax. There are disadvantages, like latency and overhead, but SSE guarantees reliability because it is a TCP connection.
Since we use Express, we use this library for SSE: https://github.com/dpskvn/express-sse, but you can choose one that fits you.
SSE is not supported in IE and most Edge versions, so you would need a polyfill: https://github.com/Yaffle/EventSource.
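A minimal sketch with express-sse, based on that library's documented init/send API (the route and event name are assumptions):
const express = require('express');
const SSE = require('express-sse');

const app = express();
const sse = new SSE();

// clients open a persistent EventSource connection here
app.get('/stream', sse.init);

// anywhere on the server, push a message to all connected clients:
sse.send({ text: 'hello' }, 'chat-message');

app.listen(3000);

// browser side:
// const es = new EventSource('/stream');
// es.addEventListener('chat-message', (e) => console.log(JSON.parse(e.data)));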
Others have hinted at this in other answers and comments, but the root problem is that Socket.IO is just a delivery mechanism, and you cannot depend on it alone for reliable delivery. The only person who knows for sure that a message has been successfully delivered to the client is the client itself. For this kind of system, I would recommend making the following assertions:
Messages aren't sent directly to clients; instead, they get sent to the server and stored in some kind of data store.
Clients are responsible for asking "what did I miss" when they reconnect, and will query the stored messages in the data store to update their state.
If a message is sent to the server while the recipient client is connected, that message will be sent in real time to the client.
Of course, depending on your application's needs, you can tune pieces of this--for example, you can use, say, a Redis list or sorted set for the messages, and clear them out if you know for a fact a client is up to date.
Here are a couple of examples:
Happy path:
U1 and U2 are both connected to the system.
U2 sends a message to the server that U1 should receive.
The server stores the message in some kind of persistent store, marking it for U1 with some kind of timestamp or sequential ID.
The server sends the message to U1 via Socket.IO.
U1's client confirms (perhaps via a Socket.IO callback) that it received the message.
The server deletes the persisted message from the data store.
Offline path:
U1 loses internet connectivity.
U2 sends a message to the server that U1 should receive.
The server stores the message in some kind of persistent store, marking it for U1 with some kind of timestamp or sequential ID.
The server sends the message to U1 via Socket.IO.
U1's client does not confirm receipt, because they are offline.
Perhaps U2 sends U1 a few more messages; they all get stored in the data store in the same fashion.
When U1 reconnects, it asks the server "The last message I saw was X / I have state X, what did I miss."
The server sends U1 all the messages it missed from the data store based on U1's request
U1's client confirms receipt and the server removes those messages from the data store.
If you absolutely want guaranteed delivery, then it's important to design your system in such a way that being connected doesn't actually matter, and that realtime delivery is simply a bonus; this almost always involves a data store of some kind. As user568109 mentioned in a comment, there are messaging systems that abstract away the storage and delivery of said messages, and it may be worth looking into such a prebuilt solution. (You will likely still have to write the Socket.IO integration yourself.)
If you're not interested in storing the messages in the database, you may be able to get away with storing them in a local array; the server tries to send U1 the message, and stores it in a list of "pending messages" until U1's client confirms that it received it. If the client is offline, then when it comes back it can tell the server "Hey I was disconnected, please send me anything I missed" and the server can iterate through those messages.
Luckily, Socket.IO provides a mechanism that allows a client to "respond" to a message, which looks like native JS callbacks. Here is some pseudocode:
// server
let pendingMessagesForSocket = [];

function sendMessage(message) {
  pendingMessagesForSocket.push(message);
  socket.emit('message', message, function() {
    // acknowledgement callback: the client confirmed receipt
    pendingMessagesForSocket = pendingMessagesForSocket.filter(m => m !== message);
  });
}

socket.on('reconnection', function(lastKnownMessage) {
  // you may want to make sure you resend them in order, or one at a time, etc.
  const start = pendingMessagesForSocket.indexOf(lastKnownMessage) + 1;
  pendingMessagesForSocket.slice(start).forEach(function(message) {
    socket.emit('message', message, function() {
      pendingMessagesForSocket = pendingMessagesForSocket.filter(m => m !== message);
    });
  });
});
// client
let previouslyConnected = false;
let lastKnownMessage = null;

socket.on('connect', function() { // socket.io clients fire 'connect'
  if (previouslyConnected) {
    socket.emit('reconnection', lastKnownMessage);
  } else {
    // first connection; any further connections mean we disconnected
    previouslyConnected = true;
  }
});

socket.on('message', function(data, callback) {
  // do something with `data`
  lastKnownMessage = data;
  callback(); // confirm we received the message
});
This is quite similar to the last suggestion, simply without a persistent data store.
You may also be interested in the concept of event sourcing.
Michelle's answer is pretty much on point, but there are a few other important things to consider. The main question to ask yourself is: "Is there a difference between a user and a socket in my app?" Another way to ask that is "Can each logged in user have more than 1 socket connection at one time?"
In the web world it is probably always a possibility that a single user has multiple socket connections, unless you have specifically put something in place to prevent it. The simplest example of this is a user with two tabs of the same page open. In these cases you don't care about sending a message/event to the human user just once... you need to send it to each socket instance for that user so that each tab can run its callbacks to update the UI state. Maybe this isn't a concern for certain applications, but my gut says it would be for most. If this is a concern for you, read on....
To solve this (assuming you are using a database as your persistent storage), you need 3 tables:
users - a 1-to-1 with real people
clients - represents a "tab" that could have a single connection to a socket server (any user may have multiple)
messages - a message that needs to be sent to a client (not a message that needs to be sent to a user or to a socket)
The users table is optional if your app doesn't require it, but the OP said they have one.
The other thing that needs to be properly defined is: "What is a socket connection?", "When is a socket connection created?", and "When is a socket connection reused?". Michelle's pseudocode makes it seem like a socket connection can be reused. With Socket.IO, it CANNOT be. I've seen this be the source of a lot of confusion. There are real-life scenarios where Michelle's example does make sense, but I have to imagine those scenarios are rare. What really happens is that when a socket connection is lost, that connection, its ID, etc. will never be reused. So any messages marked for that specific socket will never be delivered to anyone, because when the client that had originally connected reconnects, it gets a completely brand-new connection and a new ID. This means it's up to you to do something to track clients (rather than sockets or users) across multiple socket connections.
So, for a web-based example, here is the set of steps I'd recommend:
When a user loads a client (typically a single webpage) that has the potential for creating a socket connection, add a row to the clients database which is linked to their user ID.
When the user actually does connect to the socket server, pass the client ID to the server with the connection request.
The server should validate the user is allowed to connect and the client row in the clients table is available for connection and allow/deny accordingly.
Update the client row with the socket ID generated by Socket.IO.
Send any items in the messages table connected to the client ID. There wouldn't be any on initial connection, but if this was from the client trying to reconnect, there may be some.
Any time a message needs to be sent to that socket, add a row in the messages table which is linked to the client ID you generated (not the socket ID).
Attempt to emit the message and listen for the acknowledgement from the client.
When you get the acknowledgement, delete that item from the messages table.
You may wish to create some logic on the client side that discards duplicate messages sent from the server since this is technically a possibility as some have pointed out.
Then when a client disconnects from the socket server (purposefully or via error), DO NOT delete the client row, just clear out the socket ID at most. This is because that same client could try to reconnect.
When a client tries to reconnect, send the same client ID it sent with the original connection attempt. The server will view this just like an initial connection.
When the client is destroyed (user closes the tab or navigates away), this is when you delete the client row and all messages for this client. This step may be a bit tricky.
Because the last step is tricky (at least it used to be; I haven't done anything like that in a long time), and because there are cases like power loss where the client disconnects without cleaning up its client row and never tries to reconnect with that same row, you probably want something that runs periodically to clean up stale client and message rows. Or you can just permanently store all clients and messages forever and mark their state appropriately.
So, just to be clear: in cases where one user has two tabs open, you will be adding two identical messages to the messages table, each marked for a different client, because your server needs to know whether each client received them, not just each user.
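A condensed sketch of steps 6-8 from that list; the storeMessage/deleteMessage/socketForClient/clearSocketID helpers and the clientID handshake are hypothetical stand-ins for whatever datastore you pick:
io.on('connection', (socket) => {
  const clientID = socket.handshake.query.clientID; // sent by the tab when it connects (step 2)

  socket.on('disconnect', () => {
    clearSocketID(clientID); // keep the client row, just forget the socket ID (step 10)
  });
});

// steps 6-8: persist first, then emit with an acknowledgement callback
function sendToClient(clientID, message) {
  const messageID = storeMessage(clientID, message); // hypothetical DB insert
  const socket = socketForClient(clientID);          // hypothetical lookup by client row
  if (!socket) return; // client offline; the stored row waits for reconnection

  socket.emit('message', message, () => {
    deleteMessage(messageID); // the client acknowledged, so the row can go (step 8)
  });
}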
As already written in another answer, I also believe you should look at realtime as a bonus: the system should be able to work without realtime too.
I'm developing an enterprise chat for a large company (iOS, Android, and web frontends with a .NET Core + PostgreSQL backend), and after developing a way for the websocket to re-establish its connection (through a socket UUID) and fetch undelivered messages (stored in a queue), I realized there was a better solution: resync via a REST API.
Basically, I ended up using the websocket just for realtime, with an integer tag on each realtime message (user online, typers, chat message, and so on) for detecting lost messages.
When the client receives an ID that is not monotonically increasing (+1), it knows it is out of sync, so it drops all the socket messages and asks for a resync of all its observers through the REST API.
This way we can handle many changes in the application state during the offline period without having to parse tons of websocket messages in a row on reconnection, and we are sure to be in sync (because the last sync date is set only by the REST API, not by the socket).
The only tricky part is watching for realtime messages between the moment you call the REST API and the moment the server replies, because what is read from the DB takes time to get back to the client, and changes could happen in the meanwhile, so they need to be cached and taken into account.
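A client-side sketch of that gap check; the { seq, ... } message shape, the /api/resync endpoint, and the two apply* handlers are assumptions:
let lastSeq = null;

socket.on('realtime', (msg) => {
  // msg.seq is the server's monotonically increasing tag
  if (lastSeq !== null && msg.seq !== lastSeq + 1) {
    // a message was lost while we were offline: drop socket state and resync over REST
    lastSeq = null;
    fetch('/api/resync')
      .then((res) => res.json())
      .then((state) => {
        applyFullState(state);   // hypothetical full-state handler
        lastSeq = state.lastSeq; // the REST reply sets the authoritative sync point
      });
    return;
  }
  lastSeq = msg.seq;
  applyRealtimeMessage(msg);     // hypothetical incremental handler
});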
We are going into production in a couple of months; I hope to get back to sleeping by then :)
It seems that you already have a user account system. You know which accounts are online/offline, so you can handle connect/disconnect events.
So the solution is: add online/offline status and offline messages in the database for each user:
chatApp.onLogin(function(user) {
  user.readOfflineMessage(function(msgs) {
    user.sendOfflineMessage(msgs, function(err) {
      if (!err) user.clearOfflineMessage();
    });
  });
});

chatApp.onMessage(function(fromUser, toUser, msg) {
  if (toUser.isOnline()) { // was `user.isOnline()`; only fromUser/toUser are in scope
    toUser.sendMessage(msg, function(err) {
      // alert CAN NOT SEND, RETRY?
    });
  } else {
    toUser.addToOfflineQueue(msg);
  }
});
Look here: Handle browser reload socket.io.
I think you could use the solution I came up with; if you modify it properly, it should work the way you want.
What I think you want is a reusable socket for each user, something like:
Client:
socket.on("msg", function(){
socket.send("msg-conf");
});
Server:
// Add this socket property to all users, with your existing user system
user.socket = {
  messages: [],
  io: null
};

user.send = function(msg) { // Call this method to send a message
  var self = this;
  if (this.socket.io) { // io will be set to null when disconnected
    // Wait for confirmation that the message was received.
    var hasconf = false;
    this.socket.io.once("msg-conf", function(data) {
      // Expect the client to emit "msg-conf"
      hasconf = true;
    });
    // send the message (emit, so the client's "msg" handler fires)
    this.socket.io.emit("msg", msg);
    setTimeout(function() {
      if (!hasconf) {
        self.socket.io = null;          // If the client did not respond, mark them as offline.
        self.socket.messages.push(msg); // Add it to the queue
      }
    }, 60 * 1000); // Make sure this is the same as your timeout.
  } else {
    this.socket.messages.push(msg); // Otherwise, it's offline. Add it to the message queue
  }
};

user.flush = function() { // Call this when the user comes back online
  var queued = this.socket.messages;
  this.socket.messages = []; // clear first; send() re-queues anything unconfirmed
  queued.forEach(function(msg) {
    this.send(msg);
  }, this);
};

// Make sure this runs whenever the user gets logged in/comes online
user.onconnect = function(socket) {
  this.socket.io = socket; // Set the socket.io socket
  this.flush();            // Send all messages that are waiting
};

// Make sure this is called when the user disconnects/logs out
user.disconnect = function() {
  this.socket.io = null; // Set the socket to null, so messages are queued, not sent.
};
Then the socket queue is preserved between disconnects.
Make sure it saves each user's socket property to the database, and make the methods part of your user prototype. The database does not matter; just save it however you have been saving your users.
This avoids the problem mentioned in Addition 1 by requiring a confirmation from the client before marking the message as sent. If you really wanted to, you could give each message an ID and have the client send that message ID with msg-conf, then check it.
In this example, user is the template user that all users are copied from, like a user prototype.
Note: This has not been tested.
I've been looking at this stuff lately and think a different path might be better.
Try looking at Azure Service Bus; queues and topics take care of the offline states.
The messages wait for the user to come back, and then they get the message.
There is a cost to run a queue, but it's around $0.05 per million operations for a basic queue, so the cost of development would be higher, given the hours of work needed to write a queuing system.
https://azure.microsoft.com/en-us/pricing/details/service-bus/
And Azure Service Bus has libraries and examples for PHP, C#, Xamarin, Angular, JavaScript, etc.
So the server sends the message and does not need to worry about tracking it.
The client can use the queue to send back as well, which means you can handle load balancing if needed.
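For a rough idea of the shape of the code, a sketch with the current @azure/service-bus package (a newer client than the libraries available when this was written; the connection string and per-user queue naming are placeholders):
const { ServiceBusClient } = require('@azure/service-bus');

const sbClient = new ServiceBusClient('<connection-string>');

// server side: fire-and-forget; the queue holds the message until the
// user's client comes back online and receives it
async function sendToUserQueue(userId, payload) {
  const sender = sbClient.createSender('user-' + userId);
  await sender.sendMessages({ body: payload });
  await sender.close();
}

// consumer side: subscribe and receive anything queued while offline
function listenForUser(userId) {
  const receiver = sbClient.createReceiver('user-' + userId);
  receiver.subscribe({
    processMessage: async (msg) => console.log('got:', msg.body),
    processError: async (err) => console.error(err),
  });
}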
Try this Socket.IO emit cheat sheet:
io.on('connect', onConnect);

function onConnect(socket) {
  // sending to the client
  socket.emit('hello', 'can you hear me?', 1, 2, 'abc');

  // sending to all clients except sender
  socket.broadcast.emit('broadcast', 'hello friends!');

  // sending to all clients in 'game' room except sender
  socket.to('game').emit('nice game', "let's play a game");

  // sending to all clients in 'game1' and/or in 'game2' room, except sender
  socket.to('game1').to('game2').emit('nice game', "let's play a game (too)");

  // sending to all clients in 'game' room, including sender
  io.in('game').emit('big-announcement', 'the game will start soon');

  // sending to all clients in namespace 'myNamespace', including sender
  io.of('myNamespace').emit('bigger-announcement', 'the tournament will start soon');

  // sending to individual socketid (private message)
  socket.to(<socketid>).emit('hey', 'I just met you');

  // sending with acknowledgement
  socket.emit('question', 'do you think so?', function(answer) {});

  // sending without compression
  socket.compress(false).emit('uncompressed', "that's rough");

  // sending a message that might be dropped if the client is not ready to receive messages
  socket.volatile.emit('maybe', 'do you really need it?');

  // sending to all clients on this node (when using multiple nodes)
  io.local.emit('hi', 'my lovely babies');
}

How to inform a NodeJS server of something using PHP?

I'd like to add a live functionality to a PHP based forum - new posts would be automatically shown to users as soon as they are created.
What I find a bit confusing is the interaction between the PHP code and NodeJS+socket.io.
How would I go about informing the NodeJS server about new posts and have the server inform the clients that are watching the thread in which the post was posted?
Edit
I tried the following code, and it seems to work. My only question is whether this is considered a good solution, as it looks kind of messy to me.
I use socket.io to listen on port 81 for clients, and the server running on port 82 is only intended to be used by the forum: when a new post is created, a PHP script sends a POST request to localhost on port 82, along with the data.
Is this ok?
var io = require('socket.io').listen(81);

io.sockets.on('connection', function(socket) {
  socket.on('init', function(threadid) {
    socket.join(threadid);
  });
});

var forumserver = require('http').createServer(function(req, res) {
  if (res.socket.remoteAddress == '127.0.0.1' && req.method == 'POST') {
    req.on('data', function(chunk) {
      data = JSON.parse(chunk.toString());
      io.sockets.in(data.threadid).emit('new-post', data.content);
    });
  }
  res.end();
}).listen(82);
Your solution of an HTTP server running on a special port is exactly the solution I ended up with when faced with a similar problem. The PHP app simply uses curl to POST to the Node server, which then pushes a message out to socket.io.
However, your HTTP server implementation is broken. The data event is a Stream event; Streams do not emit messages, they emit chunks of data. In other words, the request entity data may be split up and emitted in two chunks.
If the data event emitted a partial chunk of data, JSON.parse would almost assuredly throw an exception, and your Node server would crash.
You either need to manually buffer data, or (my recommendation) use a more robust framework for your HTTP server like Express:
var express = require('express'),
    forumserver = express();

forumserver.use(express.bodyParser()); // handles buffering and parsing of the
                                       // request entity for you

forumserver.post('/post/:threadid', function(req, res) {
  io.sockets.in(req.params.threadid).emit('new-post', req.body.content);
  res.send(204); // HTTP 204 No Content (empty response)
});

forumserver.listen(82);
PHP simply needs to POST to http://localhost:82/post/1234 with an entity body containing content. (JSON, URL-encoded, or multipart-encoded entities are acceptable.) Make sure your firewall blocks port 82 on your public interface.
Regarding the PHP code / forum's interaction with Node.js, you probably need to create an API endpoint of sorts that can listen for changes made to the forum. Depending on your forum software, you would want to hook into the process of creating a new post and perform the API callback to Node.js at that time.
Socket.io out of the box is geared towards visitors of the site being connected on the frontend via Javascript. Upon the Node server receiving notification of a new post update, it would then notify connected clients of this new post and its details, at which point it would probably add new HTML to the DOM of the page the visitor is viewing.
You may want to arrange the Socket.io part of things so that users only subscribe to specific events being emitted by them being in a specific room such as "subforum123" so that they only receive notifications of applicable posts.
