How to configure dual SSL WebSocket Servers using Node - node.js

I have an SSL enabled Node server (v0.10.35) that provides the user with HTTPS and WSS connections. I would like to add a new SSL enabled WebSocket server to this configuration to allow me to connect a control application to the Node server, ideally using a different URL. For example:
Public Access : https://myserver (via Browser)
Control Access : wss://myserver/control (via my Software)
I am assuming this can be done, but I'll be honest: I do not have a clue how to configure it, so I would appreciate help from those who do.
I don't know if it is worthwhile pointing out, but my current implementation already supports WS to the browser, which (obviously) lets me send and receive messages.
What I would like to do is either add another WS server to handle the control messages, or configure my existing install with multiple URLs so that the browser defaults to wss://myserver while the application uses wss://myserver/control, for example.
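One way to approach this, sketched below, is to keep a single HTTPS server and attach two WebSocket servers in noServer mode, routing the HTTP upgrade by URL path. This sketch assumes the widely used ws module (and a newer Node than the 0.10 line); your existing WebSocket library may differ, and the file names and ports are placeholders.

// Sketch: one HTTPS server, two WSS endpoints routed by URL path.
var fs = require('fs');
var https = require('https');
var WebSocket = require('ws');

var httpsServer = https.createServer({
  key: fs.readFileSync('server-key.pem'),
  cert: fs.readFileSync('server-cert.pem')
}, function (req, res) {
  res.end('public HTTPS content');
});

var publicWss = new WebSocket.Server({ noServer: true });   // wss://myserver
var controlWss = new WebSocket.Server({ noServer: true });  // wss://myserver/control

httpsServer.on('upgrade', function (req, socket, head) {
  var target = req.url === '/control' ? controlWss : publicWss;
  target.handleUpgrade(req, socket, head, function (ws) {
    target.emit('connection', ws, req);
  });
});

publicWss.on('connection', function (ws) {
  ws.on('message', function (msg) { /* browser messages */ });
});
controlWss.on('connection', function (ws) {
  ws.on('message', function (msg) { /* control-application messages */ });
});

httpsServer.listen(443);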

Related

How to use a secure WebSocket-Connection for a local client

I need information about the security risks and proof-of-concept approaches for working with a local client.
As I envision it, a user will install two components:
The game client
The client launcher
The launcher runs as a background process all the time and provides a WebSocket server.
The user will open my website to start the game (with game-server lists and other settings). The website connects to the game launcher to handle all actions (change configuration, start the game executable).
Problem:
How do I realize the communication between the website and the game launcher? Okay, WebSockets, yes. But browsers forbid connecting to localhost/127.0.0.1 for security reasons.
A fake DNS or hosts-file pointer to a subdomain like local.game.tld is bad, because the SSL certificate could be revoked for misuse.
Another idea was to provide an NPAPI plugin for the browser, but it seems that NPAPI is deprecated and useless for the future.
What's the best practice for communicating between web pages and locally installed software?
But browsers forbid connecting to localhost/127.0.0.1 for security reasons
This isn't true. Browsers allow you to connect to localhost / 127.0.0.1. I do it all the time on my machine.
The issue is that TLS (wss://localhost, not ws://localhost) requires a certificate and browsers forbid mixed content (you can't have an https website load non-encrypted resources).
A fake DNS or hosts-file pointer to a subdomain like local.game.tld is bad, because the SSL certificate could be revoked for misuse.
As part of your game installer you could create a hosts-file entry with a certificate for mygame.localhost (possibly using a local script) and then ask the player to authorize the installation of the certificate using their password. This way your certificate won't be revoked... but you are right that this is suboptimal.
EDIT: also, please note that the domain name must be at the end, not at the beginning (i.e., game.localhost and not localhost.game).
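For illustration, the installer-created hosts entry would be a single line along these lines (mygame.localhost is the hypothetical name from above):

127.0.0.1   mygame.localhost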
Whats the best practice to communicate between webpages and local installed software?
Generally speaking, if your game is installed on the local machine, there's no need to encrypt the communication between the local browser and the local machine.
You can easily write your local server to accept only connections from the local machine (or, at worst, if need be, accept connections from the local area network - though this adds security risks).
Your webpage and WebSocket data can be sent "in the clear" (ws:// and http://) between the local server and the browser, since they are both on the same machine; this way you don't need a certificate. The local server would initiate (as a client) any encrypted connection it needs when communicating with an external service (wss:// / https://).
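A minimal sketch of that local-only setup in Node, assuming the ws module (the port and message handling are placeholders); binding to 127.0.0.1 means only processes on the same machine can connect:

// Sketch: the launcher's plain-ws server, reachable only from the local machine.
var WebSocket = require('ws');

var launcherServer = new WebSocket.Server({ host: '127.0.0.1', port: 8080 });

launcherServer.on('connection', function (ws) {
  ws.on('message', function (msg) {
    // e.g. parse a "change configuration" or "start game" command here
    ws.send('ack: ' + msg);
  });
});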
EDIT (from the comments):
These are the only two solutions I know of:
Installing a self-signed certificate; or
Using http instead of https and having the server handle outside traffic as if it were a client (so all traffic going outside is encrypted).

beginner webrtc/nodejs issue connecting remote clients

I'm trying to develop a web application in Node.js. I'm using an npm package called "simple-peer", but I don't think this issue is related to that. I was able to get this package working when integrating it with a Laravel application using an Apache server as the back end: I could access the host machine through its IP:PORT on the network and connect a separate client to the host successfully with a peer-to-peer connection. However, I'm now trying to develop this specifically in Node without an Apache back end. I have my Express server up and running on port 3000, and I can access the index page from a remote client on the same network through IP:3000. But when I try to connect through WebRTC, I get a "Connection failed" error. If I connect two different browser instances on the same localhost device, the connection succeeds.
For reference: I'm just using the copy/pasted code from this usage demo. I have "simplepeer.min.js" included and referenced in the correct directory.
So my main questions are: is there a setting or some WebRTC protocol that could be blocking the remote clients from connecting? What would I need to change to meet this requirement? And why would it work in a Laravel/webpack app with Apache but not with Express?
If your remote clients cannot get ICE candidates, you need a TURN server.
When a WebRTC peer is behind a NAT or firewall, or is on a cellular network (like a smartphone), the direct P2P connection will fail.
In that case, a TURN server acts as a fallback relay server.
I recommend coTURN.
Here is a simple implementation of simple-peer with a Node.js backend for multi-user video/audio chat. You can find the client code in /public/js/main.js. GitHub Project and the Demo.
And just like @JinhoJang said, you do need a TURN server to pass the information. Here is a list of public STUN/TURN servers.
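As an illustration, simple-peer lets you pass RTCPeerConnection options through its config field, so wiring in your own STUN/TURN servers looks roughly like this (a sketch assuming the browser bundle simplepeer.min.js, which exposes a global SimplePeer; the TURN host and credentials are placeholders):

// Sketch: giving simple-peer explicit STUN/TURN servers so peers behind NAT
// can fall back to a relay. turn.example.com and the credentials are placeholders.
var peer = new SimplePeer({
  initiator: true,
  config: {
    iceServers: [
      { urls: 'stun:stun.l.google.com:19302' },
      {
        urls: 'turn:turn.example.com:3478',
        username: 'demo-user',
        credential: 'demo-pass'
      }
    ]
  }
});

peer.on('signal', function (data) {
  // send `data` to the remote peer over your own signalling channel
});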

How to setup forward proxy on Windows server for outgoing HTTP and HTTPS requests?

I have a Windows Server 2012 VPS running a web app behind Cloudflare. The app needs to initiate outbound connections based on user actions (e.g. uploading an image from a URL). The problem is that this 'leaks' my server's IP address and increases the risk of DDoS attacks.
So I would like to prevent my server's IP from being discovered by setting up a forward proxy. So far my research has shown that this is no simple task and would involve setting up another VPS to act as a proxy.
Does this extra forward-proxy VPS have to be running Windows? Are there any paid services that could act as a forward proxy for my server (like Cloudflare's reverse-proxy system)?
Also, it seems that the suggested IIS forward proxy plugin, Application Request Routing, does not work for HTTPS.
Is there a solution for both types of outgoing (HTTPS + HTTP) requests?
I'm really lost here, so any help or suggestions would be appreciated.
You are correct in needing a "Forward Proxy". A good analogy for this is the proxy settings your browser has for outbound requests. In your case, the web application behaves like a desktop browser and can be configured to make the resource request through a proxy.
Often you can control this for individual requests at the application layer. An example of doing so with C#: C# Connecting Through Proxy
As far as the actual proxy server: No, it does not need to run Windows or IIS. Yes, you can use a proxy service. The vast majority of proxy services are targeted towards consumers and are used for personal privacy or to get around network restrictions. As such, I have no direct recommendations.
Cloudflare actually has recommendations regarding this: https://blog.cloudflare.com/ddos-prevention-protecting-the-origin/.
Features like "upload from URL" that allow the user to upload a photo from a given URL should be configured so that the server doing the download is not the website origin server.
This may be a more comfortable risk mitigator, as it wouldn't depend on a third party proxy service. A request for upload could be handled as a web service call to a dedicated "file downloader" server. Keep in mind that if you have a queued process for another server to do the work, and that server is hosted in the same infrastructure, both might be impacted by a DDoS, depending on the type of DDoS.
Your question implies that you may be comfortable using a non-Windows server. Much software (most web servers, in fact) can operate as a proxy, but suffers from the same problem as ARR: lack of support for the HTTP "CONNECT" verb, which is used by modern browsers to start an HTTPS connection before issuing a "GET". Squid is very popular, open source, and supports connecting to just about anything, though it's not trivial to set up. Apache also has support for this in "mod_proxy_connect", but I have no experience with it and the online documentation isn't very robust. It's Apache, though, so it may be worth the extra investigation.
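To make the CONNECT mechanism concrete (in Node, since this thread is Node-centric), here is a rough sketch of tunneling an HTTPS request through a forward proxy; proxy.example.com:3128 and example.org are placeholders:

// Sketch: HTTPS through a forward proxy via the HTTP CONNECT verb.
var http = require('http');
var tls = require('tls');

var req = http.request({
  host: 'proxy.example.com',
  port: 3128,
  method: 'CONNECT',
  path: 'example.org:443'   // the target the proxy should tunnel to
});

req.on('connect', function (res, socket) {
  // The proxy has opened a raw TCP tunnel; upgrade it to TLS ourselves,
  // then speak plain HTTP over the encrypted connection.
  var secure = tls.connect({ socket: socket, servername: 'example.org' }, function () {
    secure.write('GET / HTTP/1.1\r\nHost: example.org\r\nConnection: close\r\n\r\n');
  });
  secure.pipe(process.stdout);  // the origin server's IP stays hidden behind the proxy
});

req.end();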

How to scrape socket.io updates to a third-party site?

I basically want to know whether it's possible to use Socket.io on the server side only, with no client side. Specifically, I want to know whether my server side can instead connect to a different site that I cannot add Socket.io to.
Use PhantomJS to load the third-party site and then inject your own JavaScript into the page to catch events and send those events back to your own server.
socket.io is a two-way connection. Client <--> Server. You must have a socket.io endpoint at both ends to even establish a connection in the first place. And, then once you establish the connection, you must have agreed upon messages that can be exchanged between the two ends for it to do anything useful.
It is not useful to have a server-side socket.io that doesn't actually connect to anything and nothing connects to it. It wouldn't be doing anything, just sitting there waiting for someone to connect to it.
It is possible to have two cooperating servers connect to one another with socket.io (one server just acts like a client in that case by initiating the connection to the other server). But, again both endpoints must participate in the connection for the connection to even be established and certainly for it to do anything useful.
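If both ends are under your control, that server-as-client setup is just the socket.io-client package running inside Node; a minimal sketch (the URL and event names are placeholders):

// Sketch: one Node server connecting to another server's socket.io endpoint.
var io = require('socket.io-client');

var socket = io('https://other-server.example.com');

socket.on('connect', function () {
  socket.emit('status', { from: 'server-b' });   // agreed-upon message (hypothetical)
});

socket.on('update', function (data) {
  console.log('received update from the other server:', data);
});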
If you just want to download the contents of a site for scraping purposes, then you would not use socket.io for that. You would just use the nodejs http module (or any of several other modules built on top of it). Your server would essentially pretend to be a browser. It would request a web page from any random web server using HTTP (not socket.io). That web server would return the web page via the normal HTTP request. Your receiving server can then do whatever it wants with that web page (scrape it, whatever).
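A minimal sketch of that plain-HTTP approach using Node's built-in https module (example.com stands in for the site you want to scrape):

// Sketch: fetching a page for scraping with the built-in https module.
var https = require('https');

https.get('https://example.com/', function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    // parse/scrape `body` here with whatever HTML parser you prefer
    console.log('fetched', body.length, 'bytes');
  });
}).on('error', console.error);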

Openshift Websocket custom domain

I've recently changed my OpenShift default domain myapp.rhcloud.com to www.myapp.com with a custom SSL certificate. The config works perfectly well until the web page asks the server for a WebSocket connection. I use Node with socket.io and WebSockets enabled.
I first tried:
io.connect(www.myapp.com:8443/...)
But this returns an error.
So I set the socket connection URL back to:
io.connect(myapp.rhcloud.com:8443/...)
But I get this error:
No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'https://www.myapp.com' is therefore not allowed access.
Is there a way to allow WebSocket connections via a custom domain on OpenShift? Or do I need to set up CORS?
EDIT
I let socket.io prefix the WebSocket URL; I do not pass the protocol to socket.io.
From my understanding, WebSockets use something like this (wss instead of ws for secure)
wss://www.yourapp.com:8443
Make sure you set up OpenShift with your domain alias:
https://www.openshift.com/kb/kb-e1096-how-to-setup-an-alias-for-your-application
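Once the alias is in place, the client connection would look something like this (a sketch assuming the socket.io 0.9-era client used on OpenShift at the time; www.myapp.com is your aliased domain):

// Sketch: connecting over the aliased custom domain on OpenShift's WSS port,
// so the page origin and the socket share the same host.
var socket = io.connect('https://www.myapp.com:8443', { secure: true });

socket.on('connect', function () {
  console.log('connected over wss via the custom domain');
});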
When I go to https://www.myapp.com:8443/socket.io/1/ in Chrome, I get the error
Identity not verified
which is not the case on https://www.myapp.com... After a few tests and searches, I think this error is due to the WebSocket preview environment on OpenShift. Source (https://www.openshift.com/blogs/paas-websockets):
Update: There is one more known complication. When using our preview-deployment of WebSockets with HTTPS and WSS protocols, you will face a self-signed certificate. That is because this environment is only temporary, to give you insight into upcoming features. Once we move the new routing layer to standard ports 80 and 443, i.e. we move WebSockets support into production, the certificates used will be signed and valid as they are with the current deployment.

Resources