It must be a simple issue, but my knowledge of streams is limited.
HTTP/1 80 to HTTP/2 h2c proxy
Script (not working):
const net = require('net');
const http = require('http');
const http2 = require('http2');
const socketPath = `/tmp/socket.test.${Date.now()}`;
// front http 80 server.
http.createServer((req, res) => {
const socket = net.createConnection(socketPath)
req.pipe(socket).pipe(res);
}).listen(80);
// private http2 socket server.
http2.createServer(function(socket) {
socket.write(`Echo from http2 server\r\n`);
socket.pipe(socket);
}).listen(socketPath);
HTTP/2 h2c to HTTP/2 h2c proxy
CLI command to start the request:
curl --http2-prior-knowledge -v http://localhost:3333/ --output -
Script (not working):
const net = require('net');
const http = require('http');
const http2 = require('http2');
const socketPath = `/tmp/socket.test.${Date.now()}`;
const port = 3333;
const private = http2.createServer({allowHTTP1: true});
private.on('stream', (stream, headers) => {
console.log('private http2 request');
stream.end('HTTP/2');
});
private.listen(socketPath, () => console.log('private http2 server is listening', socketPath));
const public = http2.createServer({allowHTTP1: true});
public.on('stream', (stream, headers) => {
console.log('public http2 request');
const socket = net.connect(socketPath);
stream.pipe(socket).pipe(stream);
});
public.listen(port, () => console.log('public http2 server is listening port', port));
Finally, an http2 (h2c) to http2 (h2c) proxy (over a unix socket) works!
const net = require('net');
const http2 = require('http2');
const socketPath = `/tmp/socket.test.${Date.now()}`;
const port = 4444;
const priv = http2.createServer({});
priv.on('stream', (stream, headers) => {
console.log('private http2 request');
stream.end('HTTP/2');
});
priv.listen(socketPath, () => console.log('private http2 server is listening', socketPath));
const pub = http2.createServer({});
pub.on('stream', (stream, headers) => {
const clientSession = http2.connect('http://0.0.0.0', {
createConnection: () => net.connect({path: socketPath})
});
const req = clientSession.request({
':path': `/`,
});
req.pipe(stream).pipe(req);
});
pub.listen(port, () => console.log('public http2 server is listening port', port));
I am not a node expert, and I may have completely misunderstood what you are trying to do here, but I am really struggling to make sense of the question...
If you are trying to have Node act as an HTTP/2 proxy (so a client can connect via h2c to Node and it passes on those details to another HTTP/2-aware server), then the way you are going about it seems... weird, to say the least.
A proxy can be a Layer 4 proxy (e.g. a TCP proxy), where it creates two separate TCP connections (one from the client to the proxy, and one from the proxy to the destination server) and passes the contents of those TCP packets between them without really inspecting or interfering with them, other than the TCP headers.
Alternatively, a proxy can be a Layer 7 proxy (e.g. an HTTP proxy), where it creates two separate HTTP connections (one from the client to the proxy, and one from the proxy to the destination server) and sends HTTP messages between them, mapping the HTTP headers and details between them and sometimes changing details or even adding more headers (e.g. X-Forwarded-For).
You seem to be trying to create some kind of hybrid between these two distinct and incompatible modes of working! You are creating an HTTP or HTTP/2 server, then opening a raw TCP socket and passing the bytes between them in the hope that this works. While that might possibly work over a simple protocol like HTTP/1, it is never going to work over HTTP/2!
For your first example: HTTP/1 is an entirely different protocol to HTTP/2, so setting these two up and expecting them to work together is flawed from the start. If one of your friends only spoke German and the other only spoke Spanish, and you passed all the German messages, verbatim, unfiltered, and still in German, to the Spanish speaker, would you expect the Spanish speaker to magically understand them? Of course not! So you cannot connect HTTP/1 and HTTP/2 at the socket level - they are completely different protocols. You need your proxy to act as a translator between them.
In your second example I'm even more confused. I guess you are trying to create two HTTP/2 servers, and have the client connect to one, and then proxy requests over to the other? And presumably you would add some logic in there at some stage so only certain requests made it through otherwise this would be pointless. Regardless this will almost certainly not work. HTTP/2 is a complex protocol, in many ways more akin to TCP. So each packet needs to be given a unique stream id, and many other settings need to be negotiated between the two end points. So to assume that one HTTP/2 message will seamlessly translate to an identical HTTP/2 message on another HTTP/2 connection is extremely naive! Back to the language analogy, copying German messages verbatim to another German speaker who is perhaps hard of hearing but sitting closer to you, might work initially but as soon as one end fails to keep up, speaks a slightly different dialect, or asks you to repeat something they missed, the whole show comes tumbling down.
I would suggest you either want to make this a Layer 4 proxy (so ignore HTTP and HTTP/2 entirely and just pipe sockets) or a Layer 7 HTTP proxy (in which case ingest each HTTP message, read it, and send an equivalent HTTP message on the downstream connection). You cannot have both.
I would also question why, and whether, you need to do this. HTTP/2 is not universally supported, and gets most of its gains between client and edge server (the proxy in this case), so why do you feel the need to speak HTTP/2 all the way through? See this question for more details on that: HTTP/2 behind reverse proxy
Hope that helps and apologies if I have completely misunderstood your question or your intention!
I have been really struggling to send data from Matlab over a network to a series of 'Dashboards' written in HTML/JS that essentially just display the data.
In Matlab I use uSend = udpport("datagram","IPV4","LocalHost","127.0.0.1","LocalPort",3333) then write(uSend,D,"char","LocalHost",2560) to send an array D=jsonencode([1,2,3,4,5]) to port 2560.
My current implementation uses NodeJS 'dgram' to receive the data. The implementation below works:
const dgram = require('dgram');
const socket = dgram.createSocket('udp4');
socket.on('message', (msg,rinfo) => {
console.log(`Data Received: ${JSON.parse(msg)}`)
})
socket.bind(2560,"127.0.0.1")
BUT: This only works with Node.js, i.e. running the script above with node script.js. The 'Dashboards' need to run essentially in a Chrome browser, where dgram won't work (not even with browserify; it's not a supported module).
Hence, my hands are sort of tied, with Matlab I can realistically only send UDP (it's multicast) and I can't get UDP data on the JS side of things.
I was wondering, with webRTC, is it possible to get it to listen to a port? e.g. something like webRTC listen to port 2560 #127.0.0.1?
Any help would be much appreciated! I am relatively new to programming so I may be asking the wrong question here. Thanks
WebRTC requires a lot more than just a UDP port, unfortunately.
I would suggest instead running a Node.js server that accepts the incoming multicast traffic and caches it, like this:
const dgram = require('dgram');
const socket = dgram.createSocket('udp4');
let data = [];
socket.on('message', (msg,rinfo) => {
data.push(JSON.parse(msg))
})
socket.bind(2560,"127.0.0.1")
Then I would provide a HTTP endpoint that returns the JSON. You can have the browser poll this at an interval of your choosing.
const http = require('http');
const server = http.createServer((req, res) => {
res.end(JSON.stringify(data));
})
const port = 3000
server.listen(port, () => {
console.log(`Server listening on port ${port}`);
})
You can then test this by doing curl http://localhost:3000. You are just caching the data then making it available via HTTP request.
In the future I would look into using a WebSocket instead. That way the server can push the data to the browser as soon as it is available, whereas an HTTP endpoint requires the browser to poll on an interval.
I am in the process of writing an intercepting proxy tool like Burp Suite for security testing. An important part of that is sending malformed HTTP requests, which means giving the user full control over the request!
So, I can't have complete control while using a library! I need to be able to send raw HTTP requests to the target hosts like,
GET / HTTP/1.1
Host: google.com
My attempt :-
I tried using the Node.js net module, and I was able to connect to hosts on port 80 (HTTP); when connecting to port 443 (HTTPS), a connection is established but returns an empty response!
After some research (trying telnet, which also fails for HTTPS connections, and looking at some Stack Overflow answers), I found out that this has something to do with SSL.
Is there any option through which I can directly send raw HTTP/HTTPS requests directly from my node application?
Thanks!
There is a module, http-tag, which allows writing literal HTTP messages like this:
const net = require('net')
const HTTPTag = require('http-tag')
const socket = net.createConnection({
host: 'localhost',
port: 8000,
}, () => {
// This callback is run once, when socket connected
// Instead of manually writing like this:
// socket.write('GET / HTTP/1.1\r\n')
// socket.write('My-Custom-Header: Header1\r\n\r\n')
// You will be able to write your request(or response) like this:
const xHeader = 'Header1' // here in the expressions you can pass any characters you want
socket.write(
HTTPTag`
GET / HTTP/1.1
My-Custom-Header: ${xHeader}
`
)
socket.end()
})
socket.on('close', hasError => console.log(`Socket Closed, hasError: ${hasError}`))
// set readable stream encoding
socket.setEncoding('utf-8')
socket.on('data', data => console.log(data))
Regarding TLS: I am still researching the built-in Node modules and haven't looked at tls yet.
I see a lot of examples hooking an http server during the creation of a WS server, more or less like the following
var server = http.createServer(function(request, response) {
// process HTTP request. Since we're writing just WebSockets
// server we don't have to implement anything.
});
server.listen(1337, function() { });
// create the server
wsServer = new WebSocketServer({
httpServer: server
});
or
var httpServer = http.createServer().listen(websocketport);
/*
* Hook websockets in to http server
*/
socketServer.installHandlers(httpServer, { prefix: '/websockets' });
I don't understand the reason why. Is there any benefit to that?
What is wrong with the classic WS setup, like so
const WebSocket = require('ws')
const wss = new WebSocket.Server({ port: 8080 })
wss.on('connection', ws => {....
Why shouldn't I just use a WS server with no http server at all?
This depends entirely on the project.
Typically, most examples you will see are not pure websocket server examples and instead will assume that you are incorporating websocket functionality into a larger application stack.
If you have no need for an HTTP server, then you shouldn't attach your websocket listener to an instance of one. If you do need an HTTP server, then you are best off attaching your websocket listener in the following situations:
If your HTTP server and your WebSocket listener are on the same domain
You want to be able to manage or push to your socket connections when some HTTP requests are made.
This is most likely because usually you would provide some REST APIs, which use the HTTP server, alongside your WS server.
If you are trying to use the same host and port for both of them (HTTP & WS) together, you need to sort of combine or encapsulate one of them into another so that it can accept and handle both kinds of protocol.
However, if you are only trying to use WS without HTTP, then you probably do not need the HTTP server, just the WS server will do, as you have shown in your classic WS setup.
On the other hand, if you only need the HTTP server without WS, then you do not need to implement WS, just HTTP server will do.
Consider the following.
let bluebird = require('bluebird');
let fs = bluebird.promisifyAll(require('fs'));
let express = require('express');
let https = require('https');
let app = express();
let eventualKeys = bluebird.all(['key', 'crt'].map(x => fs.readFileAsync("server." + x)));
let eventualCredentials = eventualKeys.then(([key, cert]) => {
return {key: key, cert: cert};
});
let eventualHttpsServer = eventualCredentials.then(credentials => https.createServer(credentials, app));
eventualHttpsServer.then(httpsServer => httpsServer.listen(4443));
If I make a request to the server using https, everything works fine.
However if I make a request using http, it hangs indefinitely.
Obviously as it is an https server, it can't be expected to handle http requests. But is there a cleaner way of handling this? For instance, nginx replies to attempts to query the https port using http with a much less confusing "The plain HTTP request was sent to HTTPS port" message.
Also is this behavior likely to cause a resource leak on the server side?
This hangs because the server never responds to the client: the HTTPS server is waiting for a TLS handshake, and a plaintext HTTP request can never complete one, so no response is ever sent. As the server is not handling the request, it will not cause a resource leak on the server side, and the client is waiting as long as it can to give the server a chance to respond.
You could set up another server listening on the HTTP port (80) to respond with a failure or a redirect (301) if you want to handle this kind of request. However, if you are using nginx or Apache, it is recommended that you handle any such refusals or redirects there, as that is less resource-intensive than starting up a new Node HTTP instance just to drop a connection.
I made a basic chat app using node.js, express and socket.io. It's not too different from the tutorial chat app for socket.io, it simply emits events between connected clients. When I ran it on port 3001 on my server, it worked fine.
Then I made a proxy server app using node-http-proxy which listens on port 80 and redirects traffic based on the requested url to various independent node apps I have running on different ports. Pretty straightforward. But something is breaking. Whenever anyone disconnects, every single socket dis- and re-connects. This is bad for my chat app, which has connection-based events. The client consoles all show:
WebSocket connection to 'ws://[some socket info]' failed: Connection closed before receiving a handshake response
Here's what I think are the important parts of my code.
proxy-server.js
var http = require('http');
var httpProxy = require('http-proxy');
//create proxy template object with websockets enabled
var proxy = httpProxy.createProxyServer({ws: true});
//check the header on request and return the appropriate port to proxy to
function sites (req) {
//webapps get their own dedicated port
if (req == 'mychatwebsite.com') {return 'http://localhost:3001';}
else if (req == 'someothersite.com') {return 'http://localhost:3002';}
//static sites are handled by a vhost server on port 3000
else {return 'http://localhost:3000';}
}
//create node server on port 80 and proxy to ports accordingly
http.createServer(function (req, res) {
proxy.web(req, res, { target: sites(req.headers.host) });
}).listen(80);
chat-app.js
/*
...other modules
*/
var express = require("express");
var app = exports.app = express(); //I probably don't need "exports.app" anymore
var http = require("http").Server(app);
var io = require("socket.io")(http);
io.on("connection", function (socket) {
/*
...fun socket.on and io.emit stuff
*/
socket.on("disconnect", function () {
//say bye
});
});
http.listen(3001, function () {
console.log("listening on port 3001");
});
Now, from what I've read on socket.io's site, I might need to use something to carry the socket traffic through my proxy server. I thought that node-http-proxy did that for me with the {ws: true} option, as its docs state, but apparently it doesn't work like I thought it would. socket.io mentions three different things:
sticky session based on node's built in cluster module
socket.io-redis, which allows separate socket.io instances to talk to each other
socket.io-emitter, which allows socket.io to talk to non-socket.io processes
I have exactly no idea what any of this means or does. I am accidentally coding way above my skill level here, and I have no idea which of these tools will solve my problem (if any) or even what the cause of my problem really is.
Obligatory apology: I'm new to node.js, so please forgive me.
Also obligatory: I know other apps like nginx can solve a lot of my issues, but my goal is to learn and understand how to use this set of tools before I go picking up new ones. And the fewer apps I use, the better.
I think your intuition about needing to "carry the socket traffic through" the proxy server is right on. To establish a websocket, the client makes an HTTP request with a special Upgrade header, signalling the server to switch protocols (RFC 6455). In node, http.Server instances emit an upgrade event when this happens and if the event is not handled, the connection is immediately closed.
You need to listen for the upgrade event on your http server and handle it:
var proxy = httpProxy.createProxyServer({ws: true})
var server = http.createServer(/* snip */).listen(80)
// handle upgrade events by proxying websockets
// something like this
server.on('upgrade', function (req, socket, head) {
  proxy.ws(req, socket, head, {target: sites(req.headers.host)})
})
See the node docs on the upgrade event and the node-http-proxy docs for more.