I'm a nooby mobile developer trying to take advantage of Cloud Foundry's service to run my server to handle some chats and character movements.
I'm using Noobhub to achieve this (a TCP connection between server and client using Node.js and Corona SDK's TCP connection API).
So basically I'm trying to make a non-HTTP TCP connection between Cloud Foundry (Node.js) and my machine (Lua).
Link to Noobhub (there is a GitHub repo with both server AND client side implementations).
I am doing
Client
...
socket.connect("myappname.cloudfoundry.com", 45234)
...
(45234 is the server's process.env.VCAP_APP_PORT value, which I retrieved from the console output of "vmc logs myappname" after running the application.)
Server
...
server.listen(process.env.VCAP_APP_PORT)
When I try to connect, it just times out.
On my local machine, doing
Client
...
socket.connect("localhost",8989)
Server
...
server.listen(8989)
works as expected. It's just on cloudfoundry that it doesn't work.
I tried a bunch of other approaches, such as pointing the client's connection at port 80. I looked through a few resources but none of them solved it.
I usually blow at asking questions so if you need more information, please ask me!
P.S.
Before you throw this link at me with an angry face D:< , here's a question another person posted that shows a similar problem.
cannot connect to TCP server on CloudFoundry (localhost node.js works fine)
From that question, I can see that this person was trying to do something similar to what I was doing.
Does the selected answer mean that I MUST use a Host header (i.e. use the HTTP protocol) to connect? Does that also mean Cloud Foundry will not support a "TRUE" TCP socket, much like Heroku or AppFog?
Actually, the process.env.VCAP_APP_PORT environment variable gives you the port to which your HTTP traffic is redirected by the Cloud Foundry L7 router (nginx), based on your app's route (e.g. nodejsapp.vcap.me:80 is redirected to the process.env.VCAP_APP_PORT port on the virtual machine), so you definitely should not use it for a raw TCP connection. This port should be used to listen for HTTP traffic. That is why your example works locally but does not work on Cloud Foundry.
The approach that worked for me is to listen on the port provided by CF with an HTTP server and then attach a WebSocket server (websocket.io in my example below) to it. I've created a sample echo server that works both locally and on CF. The content of my Node.js file named example.js is:
var host = process.env.VCAP_APP_HOST || "localhost";
var port = process.env.VCAP_APP_PORT || 1245;
var webServerApp = require("http").createServer(webServerHandler);
var websocket = require("websocket.io");
var http = webServerApp.listen(port, host);
var webSocketServer = websocket.attach(http);

function webServerHandler (req, res) {
  res.writeHead(200);
  res.end("Node.js websockets.");
}

console.log("Web server running at " + host + ":" + port);

//Web Socket part
webSocketServer.on("connection", function (socket) {
  console.log("Connection established.");
  socket.send("Hi from webSocketServer on connect");

  socket.on("message", function (message) {
    console.log("Message to echo: " + message);
    //Echo back
    socket.send(message);
  });

  socket.on("error", function (error) {
    console.log("Error: " + error);
  });

  socket.on("close", function () { console.log("Connection closed."); });
});
The dependency lib websocket.io can be installed by running the npm install websocket.io command in the same directory. There is also a manifest.yml file which describes the CF deploy arguments:
---
applications:
- name: websocket
  command: node example.js
  memory: 128M
  instances: 1
  host: websocket
  domain: vcap.me
  path: .
So, running cf push from this directory deployed the app to my local CFv2 instance (set up with the help of cf_nise_installer).
To test this echo websocket server, I used a simple index.html file, which connects to the server and sends messages (everything is logged to the console):
<!DOCTYPE html>
<html>
<head>
  <script>
    var socket = null;
    var pingData = 1;
    var prefix = "ws://";

    function connect() {
      socket = new WebSocket(prefix + document.getElementById("websocket_url").value);
      socket.onopen = function() {
        console.log("Connection established");
      };
      socket.onclose = function(event) {
        if (event.wasClean) {
          console.log("Connection closed clean");
        } else {
          console.log("Connection aborted (e.g. server process killed)");
        }
        console.log("Code: " + event.code + " reason: " + event.reason);
      };
      socket.onmessage = function(event) {
        console.log("Data received: " + event.data);
      };
      socket.onerror = function(error) {
        console.log("Error: " + error.message);
      };
    }

    function ping() {
      if (!socket || (socket.readyState != WebSocket.OPEN)) {
        console.log("Websocket connection not established");
        return;
      }
      socket.send(pingData++);
    }
  </script>
</head>
<body>
  ws://<input id="websocket_url">
  <button onclick="connect()">connect</button>
  <button onclick="ping()">ping</button>
</body>
</html>
The only thing left to do is to enter the server address into the textbox on the index page (websocket.vcap.me in my case), press the Connect button, and we have a working WebSocket connection over TCP, which can be tested by sending a ping and receiving the echo. That worked well in Chrome; however, there were some issues with IE 10 and Firefox.
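If you'd rather test from Node.js than from a browser, a minimal client sketch could look like the following. This is only an illustration: it assumes the ws npm package (which is not used anywhere else in this answer) and the websocket.vcap.me route from my deployment.

// test-client.js - quick sanity check for the echo server above.
// Assumes: npm install ws, and that the app answers at ws://websocket.vcap.me
var WebSocket = require("ws");
var socket = new WebSocket("ws://websocket.vcap.me");

socket.on("open", function () {
  console.log("Connected, sending ping");
  socket.send("ping");
});

socket.on("message", function (data) {
  console.log("Echo received: " + data);
  socket.close();
});

socket.on("error", function (error) {
  console.log("Error: " + error.message);
});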
What about "TRUE" TCP socket, there is no exact info: according to the last paragraph here you cannot use any port except 80 and 443 (HTTP and HTTPS) to communicate with your app from outside of Cloud Foundry, which makes me think TCP socket cannot be implemented. However, according to this answer, you can actually use any other port... It seems that some deep investigation on this question is required...
"Cloud Foundry uses an L7 router (ngnix) between clients and apps. The router needs to parse HTTP before it can route requests to apps. This approach does not work for non-HTTP protocols like WebSockets. Folks running node.js are going to run into this issue but there are no easy fixes in the current architecture of Cloud Foundry."
- http://www.subbu.org/blog/2012/03/my-gripes-with-cloud-foundry
I decided to go with pubnub for all my messaging needs.
Related
I have created a simple create-react-app which opens a websocket connection to an equally simple websocket echo server in python.
Everything works fine on my local network, however I'd also like to connect from outside my local network. To accomplish this I've forwarded port 3000 on my router to the computer running the create-react-app application, and tested by connecting a separate computer (through the hotspot on my smartphone) to my router's IP address at port 3000.
This fails without properly connecting to the python websocket echo server.
I can connect from an external network to the create-react-app itself (the default logo is displayed and the page renders properly); the issue is when the react app tries to connect to the python echo server.
Any ideas on where to go from here?
Here is the relevant code in App.js:
// Address and port of the python websocket server
const URL = 'ws://localhost:8765'

class EchoWebsocket extends Component {

  ws = new WebSocket(URL);

  componentDidMount() {
    // Callback function when connected to websocket server
    this.ws.onopen = () => {
      // on connecting, do nothing but log it to the console
      console.log('Opening new websocket #' + URL);
      console.log('connected')
      console.log('Websocket readyState = ', this.ws.readyState);
    }
    console.log('Websocket readyState = ', this.ws.readyState);

    // Callback function when connection to websocket server is closed
    this.ws.onclose = () => {
      console.log('disconnected')
      console.log('Websocket readyState = ', this.ws.readyState);
      // automatically try to reconnect on connection loss
      console.log('Opening new websocket #' + URL);
    }

    // Callback function to handle websocket error
    this.ws.onerror = event => {
      console.log("WebSocket Error: ", event);
    }
  }

  render() {
    return (
      <div></div>
    );
  }
}
I also reference <EchoWebsocket /> later in App.js
Here is the python websocket echo server:
#!/usr/bin/env python
import asyncio
import websockets

async def echo(websocket, path):
    async for message in websocket:
        await websocket.send(message)

asyncio.get_event_loop().run_until_complete(
    websockets.serve(echo, '', 8765))
asyncio.get_event_loop().run_forever()
Changing the IP address in App.js to the router's IP address, and forwarding port 8765 on the router to the computer running the websocket echo server, allows this code to work both on the local network and from an external network.
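Concretely, the only change needed in App.js is the websocket URL; the address below is a placeholder for the router's public IP, not a value from the question:

// Address and port of the python websocket server, reachable from outside the LAN.
// 203.0.113.10 is a placeholder for the router's public IP; port 8765 must be
// forwarded on the router to the machine running the python echo server.
const URL = 'ws://203.0.113.10:8765'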
I have a socket server running in Node.js and I use this socket in an HTML page. It works fine most of the time, but sometimes I receive an error in the developer console like
failed: Connection closed before receiving a handshake response. When this happens my update does not get reflected on the user's screen. Whenever changes are made on the admin screen, I have logic in Laravel that stores the values into Redis and fires a Laravel event broadcast; in Node.js, socket.io reads the Redis value change and pushes the values to the user screens.
I have code in laravel as like,
Laravel Controller,
public function updatecommoditygroup(Request $request)
{
    $request_data = array();
    parse_str($request, $request_data);
    app('redis')->set("ssahaitrdcommoditydata", json_encode($request_data['commodity']));
    event(new SSAHAITRDCommodityUpdates($request_data['commodity']));
}
In the above controller, when the API call is received, I just store the values into this Redis key and broadcast the event.
In my event class,
public $updatedata;

public function __construct($updatedata)
{
    $this->updatedata = $updatedata;
}

public function broadcastOn()
{
    return ['ssahaitrdupdatecommodity'];
}
Finally I have written my socket.io file as like below,
var app = require('express')();
var http = require('http').Server(app);
var io = require('socket.io')(http);
var Redis = require('ioredis');
var redis = new Redis({ port: 6379 });

redis.subscribe('ssahaitrdupdatecommodity', function(err, count) {
});

io.on('connection', function(socket) {
    console.log('A client connected');
});

redis.on('pmessage', function(subscribed, channel, data) {
    data = JSON.parse(data);
    io.emit(channel + ':' + data.event, data.data);
});

redis.on('message', function(channel, message) {
    message = JSON.parse(message);
    io.emit(channel + ':' + message.event, message.data);
});

http.listen(3001, function(){
    console.log('Listening on Port 3001');
});
When I update the data from the admin screen, I pass it to the Laravel controller; the controller stores the received data in the Redis database and fires the event broadcast. The event broadcast passes the values to the socket server, and the socket server pushes the data to the client page whenever the Redis key changes.
On the client page I have written the code below:
<script src="../assets/js/socket.io.js"></script>
var socket = io('http://ip:3001/');
socket.on("novnathupdatecommodity:App\\Events\\NOVNATHCommodityUpdates", function(data){
//received data processing in client
});
Everything works fine most of the time, but sometimes I face an issue like
**VM35846 socket.io.js:7 WebSocket connection to 'ws://host:3001/socket.io/?EIO=3&transport=websocket&sid=p8EsriJGGCemaon3ASuh' failed: Connection closed before receiving a handshake response**
Because of this issue the user page does not get updated with the new data. Could anyone please help me solve this issue and suggest the best solution for it?
I think this is because of your socket connection timeout.
new io({
    path: ,
    serveClient: ,
    origins: ,
    pingTimeout: ,
    pingInterval:
});
The above is the socket configuration. If you do not configure the socket, it sometimes behaves strangely. I do not know the core reason, but I too have faced similar issues, and implementing the socket configuration solved them.
Socket.io Server
A similar configuration should be done on the client side; there is a timeout option on the client side (see the sketch after the example below).
Socket.io Client
For example, say this is your front-end code. You connect to the socket server using the following command:
io('http://ip:3001', { path: '/demo/socket' });
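If you also want to set the client-side timeout mentioned above, the same io() call accepts additional options; a rough sketch (the values are examples, not recommendations):

var socket = io('http://ip:3001', {
    path: '/demo/socket',       // must match the server's path option
    timeout: 10000,             // ms to wait for the connection before a connect_error fires
    reconnection: true,         // try to reconnect automatically
    reconnectionAttempts: 5,    // give up after this many attempts
    reconnectionDelay: 2000     // ms between reconnection attempts
});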
In your server side when creating the connection:
const io = require("socket.io");
const socket = new io({
    path: "/demo/socket",
    serveClient: false, /* whether to serve the client files (true/false) */
    origins: "*", /* supports cross-origin requests, i.e. it lets clients connect from a different origin/browser */
    pingTimeout: 6000, /* how many ms without a reply (pong) from the client before the server considers that client's connection closed */
    pingInterval: 6000 /* how many ms before sending a new ping packet */
});
socket.listen(http); // http here is your already-created HTTP server instance
Note:
To avoid complications, start your HTTP server first and then start your sockets (see the sketch after this note).
There are other options available, but the above are the most common ones.
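To make that ordering concrete, here is a minimal sketch reusing the same configuration style as above (values are illustrative):

const http = require("http").createServer();
const io = require("socket.io");

// start the HTTP server first...
http.listen(3001, () => {
    console.log("HTTP server listening on 3001");
});

// ...then create the socket server and attach it to the running HTTP server
const socket = new io({
    path: "/demo/socket",
    serveClient: false,
    pingTimeout: 6000,
    pingInterval: 6000
});
socket.listen(http);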
I am just describing what I see in the socket.io documentation available on GitHub: socket_config. Hope this helps.
I'm having a socket.io app that basically receives signals from a frontend in order to kill and start a new ffmpeg process (based on .spawn()).
Everything works like expected, but often I get a 525 error from cloudflare. The error message is: Cloudflare is unable to establish an SSL connection to the origin server.
It works like 9 out of 10 times. I noticed that more of these errors pop up whenever a kill + spawn is done. Could it be the case that something blocks the event loop and because of this blocks all incoming requests, so Cloudflare logs these as a handshake-failed error?
Contacting cloudflare support gives me back this info (this is the request they do to my server):
Time id host message upstream
2017-08-16T09:14:24.000Z 38f34880faf04433 xxxxxx.com:2096 peer closed connection in SSL handshake while SSL handshaking to upstream https://xxx.xxx.xxx.xxx:2096/socket.io/?EIO=3&transport=polling&t=LtgKens
I'm debugging for some time now, but can't seem to find a solutions myself.
This is how I initialize my socketIO server.
/**
 * Start the socket server
 */
var startSocketIO = function() {
    var ssl_options = {
        key: fs.readFileSync(sslConfig.keyFile, 'utf8'),
        cert: fs.readFileSync(sslConfig.certificateFile, 'utf8')
    };

    self.app = require('https').createServer(ssl_options, express);
    self.io = require('socket.io')(self.app);
    self.io.set('transports', ['websocket', 'polling']);

    self.app.listen(2096, function() {
        console.log('Socket.IO Started on port 2096');
    });
};
This is the listener code on the server side
this.io.on('connection', function (socket) {
    console.log('new connection');

    /**
     * Connection to the room
     */
    socket.on('changeVideo', function (data) {
        // Send to start.js and start.js will kill the ffmpeg process and
        // start a new one
        socket.emit('changeVideo');
    });
});
Another thing that I observed while debugging (I only got this a few times):
the text new connection is displayed on the server and the connected client emits the changeVideo event, but nothing happens on the server side; instead the client just
keeps reconnecting.
This is a simplified version of the nodejs code. If you have more questions, just let me know.
Thanks!
I am trying to enable tcp, http and websocket.io communication on the same port. I started out with the tcp server (part above the //// line) and it worked. Then I ran the echo server example found on websocket.io (part below the //// line), and it also worked. But when I try to merge them together, tcp doesn't work anymore.
SO, is it possible to enable tcp, http and websockets all using the same port? Or do I have to listen on another port for tcp connections?
var net = require('net');
var http = require('http');
var wsio = require('websocket.io');
var conn = [];
var server = net.createServer(function(client) { //'connection' listener
    var info = {
        remote : client.remoteAddress + ':' + client.remotePort
    };
    var i = conn.push(info) - 1;
    console.log('[conn] ' + conn[i].remote);

    client.on('end', function() {
        console.log('[disc] ' + conn[i].remote);
    });

    client.on('data', function(msg) {
        console.log('[data] ' + conn[i].remote + ' ' + msg.toString());
    });

    client.write('hello\r\n');
});
server.listen(8080);

///////////////////////////////////////////////////////////

var hs = http.createServer(function(req, res) {
    res.writeHead(200, {
        'Content-Type' : 'text/html'
    });
    res.end(['<script>', "var ws = new WebSocket('ws://127.0.0.1:8080');", 'ws.onmessage = function (data) { ws.send(data); };', '</script>'].join(''));
});
hs.listen(server);

var ws = wsio.attach(hs);
var i = 0, last;
ws.on('connection', function(client) {
    var id = ++i, last
    console.log('Client %d connected', id);

    function ping() {
        client.send('ping!');
        if (last)
            console.log('Latency for client %d: %d ', id, Date.now() - last);
        last = Date.now();
    };
    ping();

    client.on('message', ping);
});
You can have multiple different protocols handled by the same port but there are some caveats:
There must be some way for the server to detect (or negotiate) the protocol that the client wishes to speak. You can think of separate ports as the normal way of detecting the protocol the client wishes to speak.
Only one server process can be actually listening on the port. This server might only serve the purpose of detecting the type of protocol and then forwarding to multiple other servers, but each port is owned by a single server process.
You can't support multiple protocols where the server speaks first (because there is no way to detect the protocol of the client). You can support a single server-first protocol with multiple client-first protocols (by adding a short delay after accept to see if the client will send data), but that's a bit wonky.
An explicit design goal of the WebSocket protocol was to allow WebSocket and HTTP protocols to share the same server port. The initial WebSocket handshake is an HTTP compatible upgrade request.
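For instance, in Node.js a single listening HTTP server can answer ordinary requests itself and hand the Upgrade requests to a WebSocket library. A minimal sketch, assuming the ws npm package (not the websocket.io module from the question):

var http = require('http');
var WebSocket = require('ws');   // assumed dependency: npm install ws

var server = http.createServer(function (req, res) {
    // plain HTTP requests are answered here
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('hello over HTTP');
});

// WebSocket clients arrive on the same port as HTTP Upgrade requests
var wss = new WebSocket.Server({ noServer: true });
server.on('upgrade', function (req, socket, head) {
    wss.handleUpgrade(req, socket, head, function (ws) {
        ws.on('message', function (msg) { ws.send(msg); });   // echo back
    });
});

server.listen(8080);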
The websockify server/bridge is an example of a server that can speak 5 different protocols on the same port: HTTP, HTTPS (encrypted HTTP), WS (WebSockets), WSS (encrypted WebSockets), and Flash policy response. The server peeks at the first character of the incoming request to determine if it is TLS encrypted (HTTPS, or WSS) or whether it begins with "<" (Flash policy request). If it is a Flash policy request, then it reads the request, responds and closes the connection. Otherwise, it reads the HTTP handshake (either encrypted or not) and the Connection and Upgrade headers determine whether it is a WebSocket request or a plain HTTP request.
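The first-byte peek itself can be sketched in Node.js roughly like this; it is only an illustration of the idea (not websockify's actual code), and httpServer / tlsServer are assumed to be created elsewhere:

var net = require('net');

var detector = net.createServer(function (socket) {
    socket.once('data', function (chunk) {
        socket.pause();
        socket.unshift(chunk);                       // put the peeked bytes back
        if (chunk[0] === 0x16) {
            tlsServer.emit('connection', socket);    // TLS record => HTTPS or WSS
        } else if (chunk[0] === 0x3c) {              // '<' => Flash policy request
            socket.end('<cross-domain-policy/>\0');  // simplified policy reply
        } else {
            httpServer.emit('connection', socket);   // plain HTTP or WS handshake
        }
        socket.resume();
    });
});

detector.listen(8080);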
Disclaimer: I made websockify
Short answer - NO, you can't have different TCP/HTTP/Websocket servers running on the same port.
Longish answer -
Both websockets and HTTP work on top of TCP. So you can think of a http server or websocket server as a custom TCP server (with some state mgmt and protocol specific encoding/decoding). It is not possible to have multiple sockets bind to the same port/protocol pair on a machine and so the first one will win and the following ones will get socket bind exceptions.
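You can see the "first one wins" behaviour directly with a tiny sketch:

var net = require('net');

// the first server grabs the port...
var first = net.createServer().listen(9000, function () {
    // ...so a second bind to the same port/protocol pair fails
    var second = net.createServer().listen(9000);
    second.on('error', function (err) {
        console.log(err.code);   // EADDRINUSE
        first.close();
    });
});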
nginx allows you to run http and websocket on the same port, and it forwards to the correct application:
https://medium.com/localhost-run/using-nginx-to-host-websockets-and-http-on-the-same-domain-port-d9beefbfa95d
I'm building a settings manager for my http server. I want to be able to change settings without having to kill the whole process. One of the settings I would like to be able to change is change the port number, and I've come up with a variety of solutions:
Kill the process and restart it
Call server.close() and then do the first approach
Call server.close() and initialize a new server in the same process
The problem is, I'm not sure what the repercussions of each approach is. I know that the first will work, but I'd really like to accomplish these things:
Respond to existing requests without accepting new ones
Maintain data in memory on the new server
Lose as little uptime as possible
Is there any way to get everything I want? The API for server.close() gives me hope:
server.close(): Stops the server from accepting new connections.
My server will only be accessible by clients I create and by a very limited number of clients connecting through a browser, so I will be able to notify them of a port change. I understand that changing ports is generally a bad idea, but I want to allow for the edge-case where it is convenient or possibly necessary.
P.S. I'm using connect if that changes anything.
P.P.S. Relatively unrelated, but what would change if I were to use UNIX server sockets or change the host name? This might be a more relevant use-case.
P.P.P.S. This code illustrates the problem of using server.close(). None of the previous servers are killed, but more are created with access to the same resources...
var http = require("http");
var server = false,
curPort = 8888;
function OnRequest(req,res){
res.end("You are on port " + curPort);
CreateServer(curPort + 1);
}
function CreateServer(port){
if(server){
server.close();
server = false;
}
curPort = port;
server = http.createServer(OnRequest);
server.listen(curPort);
}
CreateServer(curPort);
Resources:
http://nodejs.org/docs/v0.4.4/api/http.html#server.close
I tested the close() function. It seems to do absolutely nothing; the server still accepts connections on its port. Restarting the process was the only way for me.
I used the following code:
var http = require("http");
var server = false;
function OnRequest(req,res){
res.end("server now listens on port "+8889);
CreateServer(8889);
}
function CreateServer(port){
if(server){
server.close();
server = false;
}
server = http.createServer(OnRequest);
server.listen(port);
}
CreateServer(8888);
I was about to file an issue on the node github page when I decided to test my code thoroughly to see if it really is a bug (I hate filing bug reports when it's user error). I realized that the problem only manifests itself in the browser, because apparently browsers do some weird kind of HTTP keep-alive thing where they can still access dead ports because there's still a connection with the server.
What I've learned is this:
Browser caches keep ports alive unless the process on the server is killed
Utilities that do not keep caches by default (curl, wget, etc) work as expected
HTTP requests in node also don't keep the same type of cache that browsers do
For example, I used this code to prove that node http clients don't have access to old ports:
Client-side code:
var http = require('http'),
    client,
    request;

function createClient (port) {
    client = http.createClient(port, 'localhost');
    request = client.request('GET', '/create');
    request.end();

    request.on('response', function (response) {
        response.on('end', function () {
            console.log("Request ended on port " + port);
            setTimeout(function () {
                createClient(port);
            }, 5000);
        });
    });
}

createClient(8888);
And server-side code:
var http = require("http");
var server,
curPort = 8888;
function CreateServer(port){
if(server){
server.close();
server = undefined;
}
curPort = port;
server = http.createServer(function (req, res) {
res.end("You are on port " + curPort);
if (req.url === "/create") {
CreateServer(curPort);
}
});
server.listen(curPort);
console.log("Server listening on port " + curPort);
}
CreateServer(curPort);
Thanks everyone for the responses.
What about using cluster?
http://learnboost.github.com/cluster/docs/reload.html
It looks interesting!
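The link points to the old LearnBoost cluster module; Node's built-in cluster module supports a similar zero-downtime reload pattern. A rough sketch of the idea (not the module from the link):

var cluster = require('cluster');
var http = require('http');

if (cluster.isMaster) {
    var worker = cluster.fork();
    // on a reload signal, start a fresh worker before retiring the old one
    process.on('SIGUSR2', function () {
        var next = cluster.fork();
        next.on('listening', function () {
            worker.disconnect();   // stop accepting, let in-flight requests finish
            worker = next;
        });
    });
} else {
    http.createServer(function (req, res) {
        res.end('handled by worker ' + process.pid);
    }).listen(8888);
}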