I have a socket.io app that receives signals from a frontend in order to kill an existing ffmpeg process and start a new one (based on .spawn()).
Everything works as expected, but I often get a 525 error from Cloudflare. The error message is: Cloudflare is unable to establish an SSL connection to the origin server.
It works about 9 out of 10 times. I noticed that more of these errors pop up whenever a kill + spawn is done. Could it be that something blocks the event loop, and because of this blocks all incoming requests, so Cloudflare logs these as a failed handshake?
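For reference, a minimal diagnostic sketch I could use to check whether something blocks the event loop around the kill/spawn; the 500 ms interval and 50 ms threshold are arbitrary values, not part of my actual app:
// Hypothetical diagnostic: a timer that should fire every 500 ms.
// If the callback arrives much later than scheduled, something
// synchronous blocked the event loop in the meantime.
var last = Date.now();
setInterval(function () {
    var now = Date.now();
    var lag = now - last - 500;
    if (lag > 50) {
        console.log('event loop blocked for ~' + lag + ' ms');
    }
    last = now;
}, 500);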
Contacting Cloudflare support gave me back this info (this is the request they make to my server):
Time id host message upstream
2017-08-16T09:14:24.000Z 38f34880faf04433 xxxxxx.com:2096 peer closed connection in SSL handshake while SSL handshaking to upstream https://xxx.xxx.xxx.xxx:2096/socket.io/?EIO=3&transport=polling&t=LtgKens
I've been debugging for some time now, but can't seem to find a solution myself.
This is how I initialize my Socket.IO server.
/**
 * Start the socket server
 */
var startSocketIO = function() {
    var ssl_options = {
        key: fs.readFileSync(sslConfig.keyFile, 'utf8'),
        cert: fs.readFileSync(sslConfig.certificateFile, 'utf8')
    };
    self.app = require('https').createServer(ssl_options, express);
    self.io = require('socket.io')(self.app);
    self.io.set('transports', ['websocket', 'polling']);
    self.app.listen(2096, function() {
        console.log('Socket.IO Started on port 2096');
    });
};
This is the listener code on the server side
this.io.on('connection', function (socket) {
    console.log('new connection');
    /**
     * Connection to the room
     */
    socket.on('changeVideo', function (data) {
        // Send to start.js; start.js will kill the ffmpeg process and start a new one
        socket.emit('changeVideo');
    });
});
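For context, the kill + spawn in start.js is roughly along these lines; this is only a sketch, and the ffmpeg arguments, the restartFfmpeg name, and the currentProcess variable are placeholders rather than my exact code:
var spawn = require('child_process').spawn;

var currentProcess = null; // placeholder for the currently running ffmpeg process

function restartFfmpeg(args) {
    // kill() and spawn() are both asynchronous, so neither call
    // should block the event loop by itself
    if (currentProcess) {
        currentProcess.kill('SIGKILL');
    }
    currentProcess = spawn('ffmpeg', args);
    currentProcess.on('exit', function (code, signal) {
        console.log('ffmpeg exited', code, signal);
    });
    // consume or ignore stdout/stderr so the pipes do not fill up
    currentProcess.stdout.resume();
    currentProcess.stderr.resume();
}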
Another thing that I observed while debugging (I only got this a few times): the text new connection is displayed on the server and the connected client emits the changeVideo event, but nothing happens on the server side; instead the client just keeps reconnecting.
This is a simplified version of the Node.js code. If you have more questions, just let me know.
Thanks!
Related
I have a socket running in Node.js and I use this socket in an HTML page. This works fine, but sometimes I receive the following error in the developer console:
failed: Connection closed before receiving a handshake response. When this happens, my update does not get reflected on the user screen. Whenever changes are made in the admin screen, I have written logic in Laravel to store these values in Redis; I use a Laravel event broadcast, and in Node.js socket.io reads the Redis value change and pushes the values to the user screens.
My Laravel code is as follows.
Laravel controller:
public function updatecommoditygroup(Request $request)
{
    $request_data = array();
    parse_str($request, $request_data);
    app('redis')->set("ssahaitrdcommoditydata", json_encode($request_data['commodity']));
    event(new SSAHAITRDCommodityUpdates($request_data['commodity']));
}
In the above controller, when the API call is received, I just store the values in this Redis key and broadcast the event.
In my event class,
public $updatedata;

public function __construct($updatedata)
{
    $this->updatedata = $updatedata;
}

public function broadcastOn()
{
    return ['ssahaitrdupdatecommodity'];
}
Finally, I have written my socket.io file as below.
var app = require('express')();
var http = require('http').Server(app);
var io = require('socket.io')(http);
var Redis = require('ioredis');
var redis = new Redis({ port: 6379 });

redis.subscribe('ssahaitrdupdatecommodity', function(err, count) {
});

io.on('connection', function(socket) {
    console.log('A client connected');
});

redis.on('pmessage', function(subscribed, channel, data) {
    data = JSON.parse(data);
    io.emit(channel + ':' + data.event, data.data);
});

redis.on('message', function(channel, message) {
    message = JSON.parse(message);
    io.emit(channel + ':' + message.event, message.data);
});

http.listen(3001, function(){
    console.log('Listening on Port 3001');
});
When I update the data from the admin screen, I pass it to the Laravel controller; the controller stores the received data in the Redis database and passes it to the event broadcast. The event broadcast passes the values to the socket server, and the socket server pushes the data to the client page whenever the Redis key changes.
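(Note on the server code above: in Redis, 'pmessage' events fire only for pattern subscriptions made with psubscribe; with the plain subscribe used here only the 'message' handler runs. A minimal sketch, assuming pattern matching were actually wanted, with 'ssahaitrd*' as a purely illustrative pattern:)
redis.psubscribe('ssahaitrd*', function(err, count) {
    // now subscribed to every channel matching the pattern
});

redis.on('pmessage', function(pattern, channel, data) {
    data = JSON.parse(data);
    io.emit(channel + ':' + data.event, data.data);
});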
On the client page I have written the code below.
<script src="../assets/js/socket.io.js"></script>
<script>
var socket = io('http://ip:3001/');
socket.on("novnathupdatecommodity:App\\Events\\NOVNATHCommodityUpdates", function(data) {
    // received data processing in client
});
</script>
Everything works fine most of the time, but sometimes I face an issue like:
**VM35846 socket.io.js:7 WebSocket connection to 'ws://host:3001/socket.io/?EIO=3&transport=websocket&sid=p8EsriJGGCemaon3ASuh' failed: Connection closed before receiving a handshake response**
Because of this issue, the user page does not get updated with new data. Could anyone please help me solve this issue and suggest the best solution?
I think this is because of your socket connection timeout.
new io({
    path: ...,
    serveClient: ...,
    origins: ...,
    pingTimeout: ...,
    pingInterval: ...
});
The above is the socket configuration. If you do not configure the socket, it sometimes behaves strangely. I do not know the core reason, but I too have faced similar issues, and implementing the socket configuration solved them.
Socket.io Server
Similar configuration should be done on the client side. There is also a timeout option on the client side:
Socket.io Client
For example, say this is your front-end code. You connect to the socket server using the following command:
io('http://ip:3001', { path: '/demo/socket' });
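If you also want the client-side timeout mentioned above, a possible variant of that call is sketched below; the reconnection options and the specific values are only examples of what you might set, not required settings:
// client-side options are passed as the second argument to io()
var socket = io('http://ip:3001', {
    path: '/demo/socket',
    timeout: 20000,            // ms before a connection attempt is considered failed
    reconnectionAttempts: 5,   // give up after this many retries
    reconnectionDelay: 1000    // ms to wait before the first retry
});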
On your server side, when creating the connection:
const io = require("socket.io");
const http = require("http").createServer(); // the HTTP server the socket server will be attached to

const socket = new io({
    path: "/demo/socket",
    serveClient: false, // whether to serve the client files (true/false)
    origins: "*:*",     // allowed origins, i.e. it lets clients from other origins/browsers connect
    pingTimeout: 6000,  // how many ms without a pong packet before the server closes the connection for that client
    pingInterval: 6000  // how many ms before sending a new ping packet
});

socket.listen(http);
Note:
To avoid complications, start your HTTP server first and then start your sockets.
There are other options available, but the above are the most common ones.
I am just describing what I see in the socket.io documentation available on GitHub (socket_config). Hope this helps.
I am working on Socket.IO, and the connection between the client and the server is established successfully. I am facing two problems:
1 - When the initial connection is made between the client and the server, socket.client.id on the server and socket.id on the client side are the same, but when I refresh the client page, the id of the client changes to another one while on the server it stays the same. Does it cause any issue/problem while communicating with the server or the client using sockets when the ids are not the same? Or does the id on the server get changed when the client page is refreshed?
2 - On the initial connection establishment, the socket passes a message using socket.emit() from the server, which is received with socket.on() on the client. But when I try to emit anything from the client, it is not received on the server.
Socket Connections
function Globals() {
    this.socketConnection = async function() {
        let p = new Promise(function(res, rej) {
            io.on("connection", function(socket) {
                if (socket.connected) {
                    res(socket);
                } else {
                    rej("Socket Connection Error !");
                }
            })
        })
        return await p;
    }
}

new Globals().socketConnection().then(function(soc) {
    console.log(soc.client.id);
    socket = soc;
    soc.emit("Hi");
    soc.on("Nady", function() {
        console.log("I am called");
    })
})
Client Side Connection
function Globals() {
    this.socketConnection = async function() {
        var socket = io('http://localhost:8080');
        let p = new Promise(function(res, rej) {
            socket.on('connect', function() {
                if (socket.connected) {
                    console.log(socket.id);
                    res(socket);
                }
            })
        })
        return await p;
    }
}

var socket;
new App().socketConnection().then(function(s) {
    socket = s;
});

function ScrapJobs() {
    var socket;
    new App().socketConnection().then(function(s) {
        socket = s;
    });
    var _this = this;
    this.attachListeners = function() {
        qs("#init-scrap").ev("click", _this.startScrapping);
    }
    this.startScrapping = function() {
        console.log("I am cliced");
        socket.on("Hi", function() {
            console.log("Hi Nadeem");
        })
        socket.emit("Nady");
    }
}
When the initial connection is made between the client and the server, the socket.client.id on server and socket.id on client side, both are the same, but when I refresh the client page, the id of the client changes to other one, but on the server it is still the same. Does it makes any issue
The client side socket.id value is set on the client socket object after the connect event is received and is updated (e.g. modified) upon a reconnect event.
It appears that the socket.io infrastructure will keep them the same on client and server. If the client disconnects and then reconnects, there will be a new connection with a new id on both client and server. It is possible you are attempting to hang onto the old socket object on the server after the client has disconnected it (we can't really see enough of your server code to evaluate that).
On the initial connection establishment the socket passes a messages, using socket.emit() from server and receives as socket.on() on client. But when I try to emit anything from client it doesn't get received on server.
You'd have to show us a reproducible case. This does not happen if you are coding things correctly. I would guess that you do not have the right listeners for messages on the right socket in order to see the messages you are sending. I promise you that sending a message from client to server works just fine when implemented properly.
A general comment about your code. Both code blocks you show appear to be stuffing a socket object into a higher scoped (or perhaps even global) variable. That is likely part of the cause of your problem, because that socket object can become dead if the client reconnects for any reason. Plus, putting any sort of socket object into a global or module-level variable makes your server only capable of serving one client - it's simply not how you design multi-client servers.
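To illustrate that last point, here is a minimal sketch of keeping all per-client state and listeners inside the connection handler instead of a shared variable; the event names are just examples taken from your snippets:
io.on("connection", function(socket) {
    // everything for this client lives in this closure,
    // so a second client gets its own socket and its own listeners
    console.log("client connected:", socket.id);

    socket.on("Nady", function() {
        // respond to this client only
        socket.emit("Hi");
    });

    socket.on("disconnect", function(reason) {
        console.log("client left:", socket.id, reason);
    });
});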
I have set up a simple server and client; however, whenever I close the client, it seems impossible to reconnect. Here's my client:
const net = require('net');

var client = net.connect({port: 8080, host: '127.0.0.1'});
var response = '';

// events
client.on('data', function(chunk) { response += chunk });
client.on('end', function() {
    console.log(response);
    client.end()
});

// main execution
client.write('test');
And here's my server:
const net = require('net');

var server = net.createServer();
server.listen(8080, '127.0.0.1');

server.on('connection', function(sock) {
    sock.on('data', function(chunk) {
        sock.write('test received');
        sock.end();
    });
});
This is just outline code representing my issue. When I execute my client the first time, everything works correctly. However, when I execute it again, the server outputs the error mentioned in the title and crashes. The same happens if I remove 'client.end()' and instead Ctrl+C out of the client program to cause it to end.
My understanding of sockets is that they represent endpoints in a stream between the client and the server. When that stream is no longer necessary (i.e., when the client does what it needs to do), I want that stream to be completely removed. I would think that calling end() on both the client and server endpoints of that single stream would achieve this, like sending two FIN messages, but as explained it does not. The reasons I want to do this are: (a) so the client file will actually finish execution, and (b) so the server no longer keeps its socket endpoint of that stream around, wasting resources listening on it.
Any insight into the source of my problem would be appreciated.
You should use the 'connect' event on the client side to be sure that you perform writes only when your socket is ready. So, in the callback of that event, you can invoke the write() function.
You can check if socket is not destroyed before writing.
if (!socket.destroyed) socket.write("something");
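A minimal sketch of the client applying both suggestions, using the same host/port as in the question; the 'error' handler is an extra precaution so an abrupt close does not crash the process:
const net = require('net');

var client = net.connect({port: 8080, host: '127.0.0.1'});
var response = '';

client.on('connect', function() {
    // only write once the connection is actually established
    if (!client.destroyed) client.write('test');
});

client.on('data', function(chunk) { response += chunk; });

client.on('end', function() {
    console.log(response);
    client.end();
});

client.on('error', function(err) {
    // e.g. ECONNRESET if the other side closes abruptly
    console.error('socket error:', err.message);
});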
Your server only closes the socket when data is received, and your client never sends any.
I want to serve socket connections from a Flash browser client, and therefore I need to add support for the policy-file-request protocol. I can't run the policy-file-request service on the default port 843 because of firewalls etc. The only option I have is to serve the protocol on port 80, beside my HTTP server.
My app is written in node.js and the following code works:
var httpServer = http.createServer();

net.createServer(function(socket){
    httpServer.emit('connection', socket);
}).listen(80);
I open a socket server on port 80, and for now I just emit the connection event on the httpServer, no problem so far. Now I want to check if the new socket is a policy-file-request which will just send the plain string <policy-file-request /> over a TCP connection. When I notice this string I know it isn't HTTP and I can return the crossdomain file and close the socket. So what I try now is this:
net.createServer(function(socket){
    socket.once('readable', function(){
        var chunk = socket.read(1);
        // chunk[0] === 60 corresponds to the opening bracket '<'
        if(chunk !== null && chunk[0] === 60) {
            socket.end(crossdomain);
        } else {
            socket.unshift(chunk);
            httpServer.emit('connection', socket);
        }
    });
}).listen(80);
Now I check if the first byte is the opening bracket '<' and then write the crossdomain file to the socket. Otherwise I unshift the chunk onto the stream and emit the socket as a connection on the HTTP-server. Now the problem is that the HTTP-server doesn't emit a request event anymore and my regular HTTP-requests aren't handled as a result.
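For reference, the crossdomain variable above holds the policy XML that Flash expects as a reply; a minimal, permissive sketch (the wildcard values are just an example, not my production policy):
var crossdomain = '<?xml version="1.0"?>\n' +
    '<cross-domain-policy>\n' +
    '  <allow-access-from domain="*" to-ports="*" />\n' +
    '</cross-domain-policy>\0'; // Flash expects the policy reply to be null-terminated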
I also tried the following solution, but with no success either:
httpServer.on('connection', function(socket){
    socket.once('data', function(chunk){
        if(chunk[0] === 60) {
            socket.end(crossdomain);
        }
    })
});
When the socket emits the data event, the readyState of the socket is already 'closed' and a clientError event has already been thrown by the httpServer. I searched everywhere and didn't find a solution. I also don't want to pipe the data through another socket to another port where my HTTP server is running locally, because I think that adds too much unnecessary overhead. Is there a clean way to do this in node.js? I tested all this on node.js version 0.10.26.
I'm a newbie mobile developer trying to take advantage of Cloud Foundry's service to run my server to handle some chats and character movements.
I'm using Noobhub to achieve this (a TCP connection between server and client using Node.js and Corona SDK's TCP connection API).
So basically I'm trying a non-HTTP TCP connection between Cloud Foundry (Node.js) and my machine (Lua).
Link to Noobhub (there is a GitHub repo with server AND client side implementations).
I am doing
Client
...
socket.connect("myappname.cloudfoundry.com", 45234)
...
(45234 is the server's process.env.VCAP_APP_PORT value, which I retrieved from the console output of "vmc logs myappname" after running the application.)
Server
...
server.listen(process.env.VCAP_APP_PORT)
When I try to connect, it just times out.
On my local machine, doing
Client
...
socket.connect("localhost",8989)
Server
...
server.listen(8989)
works as expected. It's just on Cloud Foundry that it doesn't work.
I tried a bunch of other ways of doing this, such as setting the client's connection port to 80, and a few others. I saw a few resources but none of them solved it.
I'm usually bad at asking questions, so if you need more information, please ask me!
P.S.
Before you throw this link at me with an angry face D:< , here's a question that shows a similar problem that another person posted.
cannot connect to TCP server on CloudFoundry (localhost node.js works fine)
From here, I can see that this person was trying to do something similar to what I was doing.
Does the selected answer mean that I MUST use a host header (i.e. use the HTTP protocol) to connect? Does that also mean Cloud Foundry will not support a "TRUE" TCP socket, much like Heroku or AppFog?
Actually, the process.env.VCAP_APP_PORT environment variable provides the port to which your HTTP traffic is redirected by the Cloud Foundry L7 router (nginx), based on your app's route (e.g. nodejsapp.vcap.me:80 is redirected to the process.env.VCAP_APP_PORT port on the virtual machine), so you definitely should not use it for a raw TCP connection. This port should be used to listen for HTTP traffic. That is why your example works locally and does not work on Cloud Foundry.
The approach that worked for me is to listen on the port provided by CF with an HTTP server and then attach a WebSocket server (websocket.io in my example below) to it. I've created a sample echo server that works both locally and in CF. The content of my Node.js file, named example.js, is:
var host = process.env.VCAP_APP_HOST || "localhost";
var port = process.env.VCAP_APP_PORT || 1245;
var webServerApp = require("http").createServer(webServerHandler);
var websocket = require("websocket.io");
var http = webServerApp.listen(port, host);
var webSocketServer = websocket.attach(http);

function webServerHandler (req, res) {
    res.writeHead(200);
    res.end("Node.js websockets.");
}

console.log("Web server running at " + host + ":" + port);

// Web Socket part
webSocketServer.on("connection", function (socket) {
    console.log("Connection established.");
    socket.send("Hi from webSocketServer on connect");

    socket.on("message", function (message) {
        console.log("Message to echo: " + message);
        // Echo back
        socket.send(message);
    });

    socket.on("error", function(error){
        console.log("Error: " + error);
    });

    socket.on("close", function () { console.log("Connection closed."); });
});
The dependency lib websocket.io can be installed by running the npm install websocket.io command in the same directory. There is also a manifest.yml file which describes the CF deploy arguments:
---
applications:
- name: websocket
  command: node example.js
  memory: 128M
  instances: 1
  host: websocket
  domain: vcap.me
  path: .
So, running cf push from this directory deployed the app to my local CFv2 instance (set up with the help of cf_nise_installer).
To test this echo WebSocket server, I used a simple index.html file, which connects to the server and sends messages (everything is logged to the console):
<!DOCTYPE html>
<html>
<head>
<script>
    var socket = null;
    var pingData = 1;
    var prefix = "ws://";

    function connect(){
        socket = new WebSocket(prefix + document.getElementById("websocket_url").value);
        socket.onopen = function() {
            console.log("Connection established");
        };
        socket.onclose = function(event) {
            if (event.wasClean) {
                console.log("Connection closed clean");
            } else {
                console.log("Connection aborted (e.g. server process killed)");
            }
            console.log("Code: " + event.code + " reason: " + event.reason);
        };
        socket.onmessage = function(event) {
            console.log("Data received: " + event.data);
        };
        socket.onerror = function(error) {
            console.log("Error: " + error.message);
        };
    }

    function ping(){
        if( !socket || (socket.readyState != WebSocket.OPEN)){
            console.log("Websocket connection not established");
            return;
        }
        socket.send(pingData++);
    }
</script>
</head>
<body>
    ws://<input id="websocket_url">
    <button onclick="connect()">connect</button>
    <button onclick="ping()">ping</button>
</body>
</html>
The only thing left to do is to enter the server address into the textbox of the index page (websocket.vcap.me in my case) and press the Connect button, and we have a working WebSocket connection over TCP, which can be tested by sending a ping and receiving the echo. That worked well in Chrome; however, there were some issues with IE 10 and Firefox.
As for a "TRUE" TCP socket, there is no exact info: according to the last paragraph here, you cannot use any port except 80 and 443 (HTTP and HTTPS) to communicate with your app from outside of Cloud Foundry, which makes me think a TCP socket cannot be implemented. However, according to this answer, you can actually use any other port... It seems some deeper investigation of this question is required...
"Cloud Foundry uses an L7 router (ngnix) between clients and apps. The router needs to parse HTTP before it can route requests to apps. This approach does not work for non-HTTP protocols like WebSockets. Folks running node.js are going to run into this issue but there are no easy fixes in the current architecture of Cloud Foundry."
- http://www.subbu.org/blog/2012/03/my-gripes-with-cloud-foundry
I decided to go with pubnub for all my messaging needs.