I'm using web_socket_channel to implement a WebSocket client. The server side will constantly push messages, but I don't want the listener to choke my main thread; I want it to listen in the background.
My code is:
import 'package:web_socket_channel/io.dart';

void main() {
  final channel = IOWebSocketChannel.connect(Uri.parse("ws://address"));
  channel.stream.listen((data) {
    print("receiving websocket data: $data");
  });
  print("this code is supposed to be executed");
}
I am implementing a WebSocket chat application and want to gracefully shut down all clients when the server stops because of a Ctrl+C signal.
I am listening for incoming events using mio poll and tokens. Any new socket connection is registered with the mio Poll instance, and any event received on a socket is captured by the poll loop.
My initial idea was to use tokio::select! with the listen events and the shutdown signal as two branches, but this design needs some modification to enable a graceful shutdown.
// get_events function
async fn get_events(poll: Arc<Mutex<Poll>>, conn: Arc<Mutex<HashMap<Token, WebSocketClient>>>) -> Option<mio::Event> {
    let shutdown = tokio::signal::ctrl_c();
    let res = tokio::select! {
        res = get_poll_events(poll) => { // returns events asynchronously; it's an async method
            // ... handler when events are received
            res
        },
        _ = shutdown => {
            // ... handler when the Ctrl+C signal is received
            // send a close-connection message on each client's TcpStream
            return None;
        }
    };
    Some(res)
}
// main function
let poll = Poll::new().unwrap();
let shared_poll = Arc::new(Mutex::new(poll));
let conn: Arc<Mutex<HashMap<Token, WebSocketClient>>> = Arc::new(Mutex::new(HashMap::new()));
let (shutdown_notifier, mut shutdown_receiver) = mpsc::channel(10);
loop {
    let res = WebSocketServer::get_events(shared_poll.clone(), conn.clone()).await;
    if res.is_none() {
        drop(shutdown_notifier); // drop the original mpsc::Sender, which was cloned into every client
        let _ = shutdown_receiver.recv().await; // recv() only completes once every client (and its cloned Sender) has been dropped
        break; // stop the program
    }
    // use the event to register a new client with an mpsc::Sender clone
    // if readable: accept the incoming message
    // broadcast the message to other subscribers
    // ... other tasks
}
When the Ctrl+C signal is received, the shutdown handler runs and a close-connection message is sent on the TcpStream of every active client. Up to this point everything works as expected: all the clients receive the close message. But the real issue starts here: because the shutdown branch ran, the get_poll_events branch of tokio::select! is dropped, meaning there is no more polling for incoming events on the registered sources (the TcpStreams).
Ideally, the server should only shut down once it has received a close-message response back from every client. The clients do reply on their TcpStreams, but since nothing is actively listening for those events anymore, I cannot capture the responses and therefore cannot drop the clients. If, instead of sending close messages, I drop all the clients manually, there is no need to listen for the close responses and things work, albeit as an incorrect implementation.
tokio::select! isn't an ideal choice as far as I can tell, but I can't come up with a design where the server keeps listening for all the close-message responses from the clients and only exits once every client has acknowledged the close and gone out of scope.
What would be a way to achieve this functionality? TIA.
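One possible shape for this, sketched with plain tokio types rather than the real mio/WebSocket code (so Event, next_event and the u32 client ids are hypothetical stand-ins): on Ctrl+C the loop only flips a flag and sends the close frames, the event branch keeps running so the close responses are still observed, and the loop exits only once every client has completed the close handshake.

use std::collections::HashMap;

enum Event {
    Readable(u32), // a client sent data (possibly a close response)
    Closed(u32),   // a client finished the close handshake
}

async fn next_event() -> Event {
    todo!() // placeholder for the real "wait for the next poll event" wrapper
}

async fn run(mut clients: HashMap<u32, ()>) {
    let mut shutting_down = false;
    loop {
        tokio::select! {
            // this branch stays alive even after Ctrl+C, so close responses are still captured
            event = next_event() => match event {
                Event::Readable(_id) => { /* read the message; handle close responses here */ }
                Event::Closed(id) => { clients.remove(&id); }
            },
            // branch precondition: once shutdown has started, stop listening for Ctrl+C
            _ = tokio::signal::ctrl_c(), if !shutting_down => {
                shutting_down = true;
                // send a close frame to every client here, but do NOT return or break yet
            }
        }
        // exit only when shutdown was requested AND every client has been dropped
        if shutting_down && clients.is_empty() {
            break;
        }
    }
}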
In Node.js they expose a handy way to pass net.Sockets to child processes (cluster.Worker) via:
var cluster = require("cluster");

var socket; // some instance of net.Socket
var worker = cluster.fork();
worker.on("online", function() {
  worker.send("socket", socket);
});
Which is super cool and works handily. But how would I do this with a WebSocket connection? I'm open to try any module.
So far I've tried various modules like ws. Most of them keep the initial net.Socket from the HTTP upgrade request, but none of them seem simple enough to pass to the child process as a plain net.Socket, because they carry a lot of handshake state required by the WebSocket spec, as far as I can tell.
I know there are hackish solutions, like opening a WebSocket server on the child process on a unique port and then telling the WebSocket connection to reconnect on that port, but then I need an open port for every child process. Or I could pipe all data to the WebSocket connection through process.send so the main thread does all the I/O, but that defeats some of the performance benefits of running things on multiple processes.
So does anyone have any ideas?
Welp, I figured it out. ws may have been too much for my intended purposes. Instead I found a fairly obscure WebSocket library, lark-websocket, which exposes a function that, given a net.Socket, wraps it in its Client class so you can work with it as a WebSocket. The only issue was that both the parent and child processes would then try to ping the connection on the other end, so I had to fork it and add a way for the parent process to pause pinging.
Here's some example code for anyone interested:
var cluster = require("cluster");
var ws = require('lark-websocket');

if (cluster.isMaster) { // make a child process and pipe all ws connections to it
  var worker = cluster.fork();
  worker.once("online", function() {
    console.log("worker online with pid", worker.process.pid);
  });
  ws.createServer(function(client, request) {
    worker.send("socket", client._socket); // hand the underlying net.Socket of every websocket client to the worker
  }).listen(27015);
} else { // we are a worker, so we handle the ws connections
  process.on("message", function(message, handler) {
    if (message === "socket") { // "socket" is just our own marker for messages that carry a socket handle
      var client = ws.createClient(handler);
      client.on('message', function(msg) {
        console.log("worker " + process.pid + " got:", msg);
        client.send("I got your: " + msg);
      });
    }
  });
}
I know Node.js is asynchronous by nature and it is preferable to use it that way, but I have a use case where we need to handle incoming TCP connections in a synchronous way. Once a new connection is received, we need to connect to some other TCP server, perform some bookkeeping, and so on, and then handle the next connection. Since the number of connections is limited, it is fine to handle this synchronously.
Looking for an elegant way to handle this scenario.
var net = require('net');

net.createServer(function(conn) {  // `conn` is the incoming connection
  console.log('Received a connection - ');
  var testvar = null;
  var sock = new net.Socket();     // outgoing connection to the other server
  sock.connect(PORT, HOST, function() {
    console.log('Connected to server - ');
  });
  // Other listeners
});
In the above code, if two connections are received almost simultaneously, the output may be (because of the asynchronous nature):
Received a connection
Received a connection
Connected to server
Connected to server
But the expectation is:
Received a connection
Connected to server
Received a connection
Connected to server
What is the proper way of doing this?
One solution is to implement some kind of queue, emitting 'done' or 'complete' events to trigger handling of the next connection.
For this we may have to move the connection callback out of the createServer call. How do we handle the scoping of the connection and other variables (testvar) in that case?
In this case, what happens to data/messages received on connections that are queued but not yet processed, and for which no 'data' listener has been registered yet?
Any other better solutions will be helpful.
I think it is important to separate the concepts of synchronous code vs serial code. You want to process each request serially, but that can still be accomplished while handling each request asynchronously. For your case, the easiest way would probably be to have a queue of requests to handle instead.
var inProgress = false;
var queue = [];

net.createServer(function(sock) {
  queue.push(sock);   // don't handle the connection here, just queue it
  processQueue();
});

function processQueue() {
  if (inProgress || queue.length === 0) return;
  inProgress = true;
  handleSockSerial(queue.shift(), function() {
    inProgress = false;   // previous connection fully handled, move on to the next one
    processQueue();
  });
}

function handleSockSerial(sock, callback) {
  // Do all your stuff and then call 'callback' when you are done.
}
Note, as long as you are using node >= 0.10, the data coming in from the socket will be buffered until you read the data.
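To make that concrete, handleSockSerial could look roughly like the sketch below, matching the scenario from the question (connect to some other TCP server, do the bookkeeping, then move on). OTHER_PORT and OTHER_HOST are made-up placeholders here.

function handleSockSerial(sock, callback) {
  var upstream = new net.Socket();
  upstream.connect(OTHER_PORT, OTHER_HOST, function() { // the "other TCP server" from the question
    console.log('Connected to server - ');
    // ... do the bookkeeping for `sock` here ...
    upstream.end();
    callback(); // only now will processQueue() pick up the next waiting connection
  });
  upstream.on('error', function(err) {
    console.error('upstream connection failed:', err.message);
    callback(); // make sure the queue never stalls on an error
  });
}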
I have a server that uses socket.io and I need a way of throttling a client that is sending the server data too quickly. The server exposes both a TCP interface and a socket.io interface - with the TCP server (from the net module) I can use socket.pause() and socket.resume(), and this effectively throttles the client. But with socket.io's socket class there are no pause() and resume() methods.
What would be the easiest way of getting feedback to a client that it is overwhelming the server and needs to slow down? I liked socket.pause() and socket.resume() because they didn't require any additional code on the client side: back up the TCP socket and things naturally slow down. Is there any equivalent for socket.io?
Update: I provide an API for interacting with the server (there is currently a Python version which runs over TCP and a JavaScript version which uses socket.io), so I don't have any real control over what the client does. That is why using socket.pause() and socket.resume() is so great: backing up the TCP stream slows the Python client down no matter what it tries to do. I'm looking for an equivalent for the JavaScript client.
With enough digging I found this:
this.manager.transports[this.id].socket.pause();
and
this.manager.transports[this.id].socket.resume();
Granted, this probably won't work if the socket.io connection isn't using the WebSocket transport, and it may break in a future update, but for now I'm going to go with it. When I get some time in the future I'll probably change it to the QUOTA_EXCEEDED solution that Pascal proposed.
Here is a quick-and-dirty way to achieve throttling. Although this is an old post, some people may benefit from it:
First register a middleware:
io.on("connection", function (socket) {
socket.use(function (packet, next) {
if (throttler.canBeServed(socket, packet)) {
next();
}
});
//You other code ..
});
canBeServed is a simple throttler as seen below:
function canBeServed(socket, packet) {
  if (socket.markedForDisconnect) {
    return false;
  }
  var previous = socket.lastAccess;
  var now = Date.now();
  if (previous) {
    var diff = now - previous;
    // Check diff and disconnect if needed.
    if (diff < 50) {
      socket.markedForDisconnect = true;
      setTimeout(function () {
        socket.disconnect(true);
      }, 1000);
      return false;
    }
  }
  socket.lastAccess = now;
  return true;
}
You can use process.hrtime() instead of Date.now().
If you have a callback on your server somewhere which normally sends back the response to your client, you could try and change it like this:
before:

var respond = function (res, callback) {
  res.send(data);
};

after:

var respond = function (res, callback) {
  setTimeout(function () {
    res.send(data);
  }, 500); // or whatever delay you want.
};
It looks like you should slow down your clients. If one client can send too fast for your server to keep up, this is not going to go well with hundreds of clients.
One way to do this would be to have the client wait for the reply to each emit before emitting anything else. That way the server can control how fast the client sends, for example by only answering when it is ready, or only answering after a set time.
If this is not enough, when a client exceeds x requests per second, start replying with something like a QUOTA_EXCEEDED error and ignore the data it sends. This will force external developers to make their apps behave the way you want them to.
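A rough sketch of that wait-for-reply idea using socket.io acknowledgement callbacks (the "data" event name and the handle() / sendAll() helpers are made up for illustration):

// server side: acknowledge each message only when ready to receive the next one
io.on("connection", function (socket) {
  socket.on("data", function (payload, ack) {
    handle(payload);      // process the message
    setTimeout(ack, 100); // delay the acknowledgement to pace a fast client
  });
});

// client side: emit the next message only after the previous one was acknowledged
function sendAll(queue) {
  if (queue.length === 0) return;
  socket.emit("data", queue.shift(), function () {
    sendAll(queue); // previous message acknowledged, send the next one
  });
}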
As another suggestion, I would propose a solution like this:
It is common for a database such as MySQL to receive a large number of requests that take longer to apply than the rate at which they come in.
The server can record incoming requests in a database table, assuming the insert is fast enough to keep up with the incoming rate, and then process that queue at a rate the server can sustain. This buffering lets the server run slowly while still processing every request.
If you want something sequential instead, then the request callback should be acknowledged before the client is allowed to send another request. In that case there should be a server "ready" flag; if the client sends a request while the flag is still red, the server can reply with a message telling the client to slow down.
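A minimal sketch of that buffering idea, with an in-memory array standing in for the database table, and processRequest() plus the "request"/"response" event names as made-up placeholders:

var queue = [];

// record every incoming request immediately (in the real setup this would be an INSERT into a table)
io.on("connection", function (socket) {
  socket.on("request", function (payload) {
    queue.push({ socket: socket, payload: payload });
  });
});

// drain the queue at a rate the server can sustain (here: one request every 50 ms)
setInterval(function () {
  var item = queue.shift();
  if (item) {
    processRequest(item.payload); // placeholder for the real work
    item.socket.emit("response", "done");
  }
}, 50);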
Simply wrap your client-side emitter in a function like the one below,
let emit_live_users = throttle(function () {
  socket.emit("event", "some_data");
}, 2000);
using a throttle function like the one below:
function throttle(fn, threshold) {
  threshold = threshold || 250;
  var last, deferTimer;
  return function () {
    var now = Date.now(), args = arguments;
    if (last && now < last + threshold) {
      // called again too soon: schedule a trailing call for when the threshold has passed
      clearTimeout(deferTimer);
      deferTimer = setTimeout(function () {
        last = now;
        fn.apply(this, args);
      }, threshold);
    } else {
      last = now;
      fn.apply(this, args);
    }
  };
}
In the following thread, UDP packets are read from clients until the boolean field Run is set to false.
If Run is set to false while the Receive method is blocking, it stays blocked forever (unless a client sends data, which will make the thread loop and check for the Run condition again).
while (Run)
{
    IPEndPoint remoteEndPoint = new IPEndPoint(IPAddress.Any, 0);
    byte[] data = udpClient.Receive(ref remoteEndPoint); // blocking method
    // process received data
}
I usually get around the problem by setting a timeout on the server. It works fine, but seems to be a patchy solution to me.
udpClient.Client.ReceiveTimeout = 5000;
while (Run)
{
    try
    {
        IPEndPoint remoteEndPoint = new IPEndPoint(IPAddress.Any, 0);
        byte[] data = udpClient.Receive(ref remoteEndPoint); // blocking method
        // process received data
    }
    catch (SocketException ex) { } // timeout reached; loop around and re-check Run
}
How would you handle this problem? Is there any better way?
Use UdpClient.Close(). That will terminate the blocking Receive() call. Be prepared to catch the ObjectDisposedException; it signals your thread that the socket has been closed.
You could do something like this:
private bool run;

public bool Run
{
    get
    {
        return run;
    }
    set
    {
        run = value;
        if (!run)
        {
            udpClient.Close();
        }
    }
}
This allows you to close the client once whatever condition is met to stop listening. An exception will likely be thrown from the blocked Receive() call, but it will be an ObjectDisposedException (or possibly a SocketException) rather than a timeout, so you'll need to handle that.
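Handling that exception in the receive loop could then look roughly like this (a sketch based on the code from the question, not tested):

while (Run)
{
    try
    {
        IPEndPoint remoteEndPoint = new IPEndPoint(IPAddress.Any, 0);
        byte[] data = udpClient.Receive(ref remoteEndPoint);
        // process received data
    }
    catch (ObjectDisposedException)
    {
        break; // udpClient.Close() was called; stop listening
    }
    catch (SocketException)
    {
        if (!Run) break; // Close() can also surface as a SocketException depending on timing
        throw;           // a genuine network error while we are still supposed to be running
    }
}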