Handling CSP Errors - node.js

I am writing a client program that connects to a website over HTTP/HTTPS. The program first tries to connect to the server using HTTPS. However, after receiving a 301 response status code, I fall back to HTTP by making a new request to the HTTP server. To consume the data, as is commonly done, I added a listener callback on the 'data' event using the on method of the http.ClientRequest returned by http.get(). However, there is no data in the console output. I suspect this is due to the following CSP header that I have been receiving with the responses:
Message Headers:
content-type: text/html; charset=UTF-8
location: http://www.whoscored.com/
server: Microsoft-IIS/8.0
strict-transport-security: max-age=16070400
content-security-policy: frame-ancestors *.whoscored.com; upgrade-insecure-requests;
x-content-security-policy: frame-ancestors *.whoscored.com; upgrade-insecure-requests;
date: Sun, 29 Oct 2017 02:44:33 GMT
connection: close
content-length: 148
Console output (the buffer is empty):
Logging the data:
The code is provided below:
Https.get(options, (res: Http.IncomingMessage): void => {
    logger.log('HTTPS Client for ScrapeX');
    logger.log('-------------------------');
    logger.logHeaders(res.headers);
    switch (res.statusCode) {
        case 200:
            (() => {
                //
                logger.log('The connection was established successfully');
            })();
            break;
        case 301:
            (() => {
                // fall back to http
                let buf = '';
                httpClient((res1) => {
                    logger.log('HTTP Client');
                    logger.log('----------------');
                    logger.logHeaders(res1.headers);
                }, n_options)
                    .on('error', (err) => {
                        logger.log('Error: ' + err.message);
                        logger.printStack(err);
                    })
                    .on('data', (chunk: string): void => {
                        buf += chunk;
                    })
                    .on('close', () => {
                        logger.log('Logging the data: ');
                        logger.log(buf);
                    });
            })();
            break;
    }
})
    .on('error', (err) => {
        logger.log(err.message);
        logger.log(err.stack);
    })
    .on('close', () => {
        logger.log('Connection closed');
    });

You can't connect to that server with HTTP; that's your problem. It's using both upgrade-insecure-requests and, more importantly, strict-transport-security. Any browser that respects strict-transport-security will simply refuse to connect over unsecured HTTP.
Not sure what to tell you beyond this: in this case you simply can't retry over HTTP, or at least it will always give either an error or just a redirect back to HTTPS.
The CSP shouldn't really be an issue; that's for loading other resources while on that page, and it doesn't block anyone from downloading the page in the first place.
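If the goal is simply to fetch the page, a more robust approach is to follow the 301 to the URL in the location header while staying on HTTPS, rather than downgrading to HTTP. A minimal sketch, assuming plain Node.js and an absolute redirect target:
const https = require('https');

// Follow redirects by re-requesting the URL from the `location` header,
// forcing the scheme back to HTTPS. maxRedirects guards against loops.
function fetchPage(url, maxRedirects, callback) {
  https.get(url, (res) => {
    if (res.statusCode >= 300 && res.statusCode < 400 && res.headers.location) {
      res.resume(); // discard the redirect body
      if (maxRedirects === 0) {
        return callback(new Error('too many redirects'));
      }
      const next = res.headers.location.replace(/^http:/, 'https:');
      return fetchPage(next, maxRedirects - 1, callback);
    }
    let buf = '';
    res.on('data', (chunk) => { buf += chunk; }); // 'data' belongs on the response
    res.on('end', () => callback(null, buf));
  }).on('error', callback);
}

fetchPage('https://www.whoscored.com/', 5, (err, body) => {
  if (err) return console.error(err.message);
  console.log(body.length + ' bytes received');
});
Separately, if httpClient returns the ClientRequest (as http.get does), the 'data' listener in the original code is attached to the request rather than the response and will never fire, which would also explain the empty buffer.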

Related

Connection:keep-alive is not keeping the socket connection for HTTP request in NodeJS

I have heard that the Connection: Keep-Alive header tells the server to keep the connection between client and server open for a while, to avoid the overhead of establishing a new connection for every request. I tried adding it to the request headers, but it didn't work as expected: the socket connection is still closed on each request.
Could you please help explain why that happened? Did I miss something about Connection: Keep-Alive, or did I implement it the wrong way?
Client:
const http = require("http");
const options = {
port: 4000,
headers: {
connection: "keep-alive",
},
};
function request() {
console.log(`making a request`);
const req = http.request(options, (res) => {
console.log(`STATUS: ${res.statusCode}`);
console.log(`HEADERS: ${JSON.stringify(res.headers)}`);
res.setEncoding("utf8");
res.on("data", (chunk) => {
console.log(`BODY: ${chunk}`);
});
res.on("end", () => {
console.log("No more data in response.");
});
});
req.on("error", (e) => {
console.error(`problem with request: ${e.message}`);
});
req.end();
}
setInterval(() => {
request();
}, 3000);
Server:
const http = require("http");
const server = http.createServer((req, res) => {
setTimeout(() => {
res.end();
}, 500);
});
server.on("connection", function (socket) {
socket.id = Date.now();
console.log(
"A new connection was made by a client." + ` SOCKET ${socket.id}`
);
socket.on("end", function () {
console.log(
`SOCKET ${socket.id} END: other end of the socket sends a FIN packet`
);
});
socket.on("timeout", function () {
console.log(`SOCKET ${socket.id} TIMEOUT`);
});
socket.on("error", function (error) {
console.log(`SOCKET ${socket.id} ERROR: ` + JSON.stringify(error));
});
socket.on("close", function (had_error) {
console.log(`SOCKET ${socket.id} CLOSED. IT WAS ERROR: ` + had_error);
});
});
server.on("clientError", (err, socket) => {
socket.end("HTTP/1.1 400 Bad Request\r\n\r\n");
});
server.listen(4000);
And this is what I've got:
Client:
making a request
STATUS: 200
HEADERS: {"date":"Thu, 09 Apr 2020 13:05:44 GMT","connection":"keep-alive","content-length":"81"}
No more data in response.
making a request
STATUS: 200
HEADERS: {"date":"Thu, 09 Apr 2020 13:05:47 GMT","connection":"keep-alive","content-length":"81"}
No more data in response.
making a request
STATUS: 200
HEADERS: {"date":"Thu, 09 Apr 2020 13:05:50 GMT","connection":"keep-alive","content-length":"81"}
No more data in response.
and Server:
A new connection was made by a client. SOCKET 1586437543111
{ connection: 'keep-alive', host: 'localhost:1234' }
SOCKET 1586437543111 END: other end of the socket sends a FIN packet
SOCKET 1586437543111 CLOSED. IT WAS ERROR: false
A new connection was made by a client. SOCKET 1586437546091
{ connection: 'keep-alive', host: 'localhost:1234' }
SOCKET 1586437546091 END: other end of the socket sends a FIN packet
SOCKET 1586437546091 CLOSED. IT WAS ERROR: false
A new connection was made by a client. SOCKET 1586437549095
{ connection: 'keep-alive', host: 'localhost:1234' }
SOCKET 1586437549095 END: other end of the socket sends a FIN packet
SOCKET 1586437549095 CLOSED. IT WAS ERROR: false
NodeJS version: 10.15.0
One more thing that makes me even more confused: when I use telnet localhost 1234 to make a request:
GET / HTTP/1.1
Connection: Keep-Alive
Host: localhost
Then the connection is not closed, and no new connection is created, as expected. Is that because telnet receives Connection: Keep-Alive and keeps the connection open by itself?
I have heard that Connection:Keep-Alive header will tell the server to keep the connection between client and server for a while to prevent the effort for each time client establish a request to server.
This is wrong. The client does not demand anything from the server; it just suggests something to the server.
Connection: keep-alive just tells the server that the client would be willing to keep the connection open in order to send more requests, and implicitly suggests that it would be nice if the server did the same, so that the existing connection could be reused for more requests. The server can then decide on its own whether it keeps the connection open after the response is sent, closes it immediately, closes it after some inactivity, or whatever.
Of course, it is not enough for the client to just send the HTTP header (which is implicit with HTTP/1.1 anyway, so there is no need to send it). It must actually keep the TCP connection open in order to send more requests on that same connection. Your current implementation does not do this, as can be seen from the server's log; the client is closing the connection first:
SOCKET 1586437549095 END: other end of the socket sends a FIN packet
In order to have real keep-alive on the client side, you would use an agent which keeps the connection open across multiple http.request calls. See HTTP keep-alive in node.js for more.
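A minimal sketch of that fix, assuming the same server on port 4000: create a single http.Agent with keepAlive enabled and pass it in the request options, so successive requests reuse one socket.
const http = require("http");

// One shared agent: keepAlive makes it hold sockets open and
// reuse them for subsequent requests to the same host/port.
const agent = new http.Agent({ keepAlive: true });

const options = {
  port: 4000,
  agent, // instead of the default global agent
};

setInterval(() => {
  http.get(options, (res) => {
    res.resume(); // drain the body so the socket is released back to the agent
    res.on("end", () => console.log(`STATUS: ${res.statusCode}`));
  });
}, 3000);
With this in place the server should log only one "A new connection was made" line for the whole run.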
I think websockets are a better solution than traditional http/https for your use case. The connection between client and server stays alive on the websocket until the client side closes it, or you force-close the connection.
Node.js websocket package
const WebSocket = require('ws');

const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', function connection(ws) {
  ws.on('message', function incoming(message) {
    console.log('received: %s', message);
  });

  ws.send('something');
});
Client side websockets: a minimal sketch using the same ws package against the server above (in a browser you would use the built-in WebSocket instead):
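const WebSocket = require('ws');

// Connect to the server above; the socket stays open until either side closes it.
const ws = new WebSocket('ws://localhost:8080');

ws.on('open', function open() {
  ws.send('hello from the client');
});

ws.on('message', function incoming(message) {
  console.log('received: %s', message);
});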

Simple node.js http request proxy giving me a gzip error

I am using the code below to start a server that accepts http requests, then forwards them to an https api and relays the response.
I have a problem in that the api gives a 'Content-Encoding: gzip' response, which seems to be causing me trouble.
If I don't relay the gzip response header, the C# code I'm testing with gets a response that is just random symbols (the compressed data, I assume), as does Postman. When I do include it, as in my example below, I get StatusCode 0: "The magic number in GZip header is not correct. Make sure you are passing in a GZip stream."
I've tried passing the response headers back as:
res.writeHead(cres.statusCode, cres.headers);
But that just seems to result in the jumbled output again. How can I fix this?
const https = require('https')
const http = require('http') // needed for http.createServer below
const port = 55555

const requestHandler = (req, res) => {
  console.log(req.url)
  var options = {
    host: 'my.api.com',
    path: '/7f308b16-d165-4062-b00f-76970783442e' + req.url,
    path: req.url, // note: this duplicate key overrides the path above
    method: 'GET',
    headers: req.headers
  };
  var json = '';
  var creq = https.request(options, function (cres) {
    // set encoding
    cres.setEncoding('utf8');
    // wait for data
    cres.on('data', function (chunk) {
      console.log('data: ' + chunk);
      json += chunk;
    });
    cres.on('close', function () {
      console.log('close: ' + cres.statusCode);
      res.writeHead(cres.statusCode);
      res.end();
    });
    cres.on('end', function () {
      console.log('end: ' + json.toString());
      console.log(res.headers) // note: `res` is the outgoing response; this logs undefined
      res.writeHead(cres.statusCode, { 'Content-Encoding': 'gzip', 'Content-Type': 'text/json; charset=utf-8', 'Cache-Control': 'private' });
      res.end(json);
    });
  }).on('error', function (e) {
    // we got an error, return 500 error to client and log error
    console.log(e.message);
    res.writeHead(500);
    res.end();
  });
  creq.end();
}

const server = http.createServer(requestHandler)
server.listen(port, (err) => {
  if (err) {
    return console.log('something bad happened', err)
  }
  console.log(`server is listening on ${port}`)
})
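The core problem here is that cres.setEncoding('utf8') decodes the gzipped bytes as text, corrupting them before they are re-sent. A minimal sketch of a transparent relay, assuming the same my.api.com upstream: forward the status, the headers, and the raw body untouched with pipe().
const http = require('http')
const https = require('https')
const port = 55555

const server = http.createServer((req, res) => {
  const creq = https.request({
    host: 'my.api.com',
    path: req.url,
    method: 'GET',
    headers: req.headers
  }, (cres) => {
    // Relay status and headers as-is (including Content-Encoding: gzip),
    // then pipe the raw, still-compressed bytes straight through.
    res.writeHead(cres.statusCode, cres.headers)
    cres.pipe(res)
  })
  creq.on('error', (e) => {
    console.log(e.message)
    res.writeHead(500)
    res.end()
  })
  creq.end()
})

server.listen(port)
Because no setEncoding call is made, the chunks stay as Buffers and the client receives a byte-for-byte copy, so the gzip magic number survives; relaying cres.headers only produced jumbled output before because the body had already been mangled.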

How to consume multiple JSONs from an HTTPS keep-alive request?

I have a node web server which listens on
this.app.use('/register-receiver', (req, res) => {
  const receiverId = req.query.id;

  res.status(200).set({
    connection: 'keep-alive',
    'Cache-Control': 'no-cache',
    'Content-Type': 'application/json',
  });

  this.receivers[receiverId] = res;
});
It periodically sends a JSON payload to clients connected to /register-receiver with
this.receivers[receiverId].write(JSON.stringify({
  // some big json
}))
I have another node program which acts as a client that makes a connection to the server on startup.
const options = {
  agent: false,
  host: <my host>,
  defaultPort: <my port>,
  path: `/register-receiver?id=${activeConfig.receiver.id}`,
  headers: {
    connection: 'keep-alive',
    'Content-Type': 'application/json',
  }
};

http.get(options, res => {
  res.setEncoding('utf8');
  res.on('data', data => {
    try {
      const response = JSON.parse(data);
      // do stuff with response
    } catch (e) {
      console.log(`error ${e} with ${data}`);
    }
  });
  res.on('end', () => console.log(`connection finished`));
})
The server needs to periodically send JSON payloads to the client. The client should receive these JSONs and do something with them. However, the problem is that large JSON writes are chunked, so the client receives the JSON in pieces. This breaks JSON.parse(data), and the client no longer knows how to process the server's payloads. I can't rely on res.on('end') to detect the completion of a chunked write, because this is a keep-alive request that should stay open forever.
I want to avoid designing my own protocol for combining JSON chunks because of its complexity. It's not as simple as concatenating the strings, since I could get interleaved JSON chunks if the server sends two large JSON payloads at the same time.
Is it possible to force the server to write the entire JSON as one chunk in the stream? How can I set up my client so that it establishes a "forever" connection with my server and listens for complete JSON payloads?
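For what it's worth, the standard lightweight framing for this situation is newline-delimited JSON: the server terminates each payload with '\n' (which JSON.stringify never emits within a default, single-line payload), and the client buffers incoming text and parses only complete lines. A minimal sketch, reusing the options object from the question and replacing its 'data' handler:
// Server side: terminate every payload with a newline so the client can re-frame it.
// this.receivers[receiverId].write(JSON.stringify(payload) + '\n');

// Client side: buffer chunks and parse only complete, newline-terminated lines.
let buffered = '';
http.get(options, res => {
  res.setEncoding('utf8');
  res.on('data', chunk => {
    buffered += chunk;
    let newlineAt;
    while ((newlineAt = buffered.indexOf('\n')) >= 0) {
      const line = buffered.slice(0, newlineAt);
      buffered = buffered.slice(newlineAt + 1);
      if (!line.trim()) continue; // ignore blank keep-alive lines, if any
      try {
        const response = JSON.parse(line);
        // do stuff with response
      } catch (e) {
        console.log(`error ${e} with ${line}`);
      }
    }
  });
});
Note that chunks of a single response cannot actually interleave on the wire: each res is its own socket, and successive write() calls on it are delivered in order, so buffer-and-split is safe here.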

express.js not streaming chunked 'text/event-stream' response

I'm trying to send an SSE text/event-stream response from an express.js endpoint. My route handler looks like:
function openSSE(req, res) {
  res.writeHead(200, {
    'Content-Type': 'text/event-stream; charset=UTF-8',
    'Cache-Control': 'no-cache',
    'Connection': 'keep-alive',
    'Transfer-Encoding': 'chunked'
  });

  // support the polyfill
  if (req.headers['x-requested-with'] == 'XMLHttpRequest') {
    res.xhr = null;
  }

  res.write(':' + Array(2049).join('\t') + '\n'); // 2kb padding for IE
  res.write('id: ' + lastID + '\n');
  res.write('retry: 2000\n');
  res.write('data: cool connection\n\n');

  console.log("connection added");
  connections.push(res);
}
Later I then call:
function sendSSE(res, message) {
  res.write(message);
  if (res.hasOwnProperty('xhr')) {
    clearTimeout(res.xhr);
    res.xhr = setTimeout(function () {
      res.end();
      removeConnection(res);
    }, 250);
  }
}
My browser makes the request and holds it open. None of the response gets pushed to the browser, and none of my events fire. If I kill the express.js server, the response is suddenly drained and every event hits the browser at once.
If I update my code to add res.end() after the res.write(message) line, it flushes the stream correctly, but it then falls back to event polling and doesn't stream the response.
I've tried adding padding to the head of the response, like
res.write(':' + Array(2049).join('\t') + '\n');
as I've seen from other SO posts that this can trigger a browser to drain the response.
I suspect this is an issue with express.js, because I had previously been using this code with node's native http server and it was working correctly. So I'm wondering if there is some way to bypass express's wrapping of the response object.
This is the code I have working in my project.
Server side:
router.get('/listen', function (req, res) {
  res.header('transfer-encoding', 'chunked');
  res.set('Content-Type', 'text/json');

  var callback = function (data) {
    console.log('data');
    res.write(JSON.stringify(data));
  };

  // Event listener which calls the callback.
  dbdriver.listener.on(name, callback);

  res.socket.on('end', function () {
    // Removes the listener on socket end
    dbdriver.listener.removeListener(name, callback);
  });
});
Client side:
xhr = new XMLHttpRequest();
xhr.open("GET", '/listen', true);

xhr.onprogress = function () {
  // responseText contains ALL the data received so far
  console.log("PROGRESS:", xhr.responseText)
};

xhr.send();
I was struggling with this one too, so after some browsing and reading I solved the issue by setting an extra header on the response object:
res.writeHead(200, {
  "Content-Type": "text/event-stream",
  "Cache-Control": "no-cache",
  "Content-Encoding": "none"
});
Long story short, when the EventSource negotiates with the server, it sends an Accept-Encoding: gzip, deflate, br header, which makes express respond with a Content-Encoding: gzip header. So there are two solutions for this issue: the first is to add a Content-Encoding: none header to the response, and the second is to actually (gzip) compress your response.
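For completeness, a minimal browser-side consumer for the stream above, using the standard EventSource API (the '/listen' URL stands in for whatever route serves those headers):
// Connects to the SSE endpoint and logs each event as it arrives.
const source = new EventSource('/listen');

source.onmessage = function (event) {
  console.log('data:', event.data);
};

source.onerror = function () {
  // EventSource reconnects automatically; this fires on connection loss.
  console.log('connection lost, retrying...');
};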

NodeJS/ExpressJS send response of large amount of data in 1 stream

I'm prototyping an app using the native mongo rest api, where Node returns about 400K of json. I use the following to make the request to mongo's native api and return the result:
http.request(options, function(req)
{
    req.on('data', function(data)
    {
        console.log(data, data.rows);
        response.send(200, data);
    });
})
.on('error', function(error)
{
    console.log('error\t', error);
    response.send(500, error);
})
.end();
When I hit http://localhost:8001/api/testdata via curl, the response is proper (both what is output to Node's console from the console.log and what is received by curl). But when I hit it via ajax in my app, the stream is… interrupted, and even the data output to Node's console (Terminal) is odd: it has multiple EOFs, and the Network > response for the call in Chrome's dev tools ends at the first EOF.
One other strange thing: data looks like:
{
"offset": 0,
"rows": [ … ]
}
but in neither Node nor client-side (angular) can I reference data.rows (it returns undefined). typeof data returns [object Object].
EDIT The request headers for both curl and angular (as reported by Node) are:
req.headers: {
  'x-action': '',
  'x-ns': 'test.headends',
  'content-type': 'text/plain;charset=utf-8',
  connection: 'close',
  'content-length': '419585'
}
EDIT I checked the response headers in both angular and curl directly (instead of from Node), and there's a disagreement (same output from both curl and angular directly instead of from node):
access-control-allow-headers: "Origin, X-Requested-With, Content-Type, Accept"
access-control-allow-methods: "OPTIONS,GET,POST,PUT,DELETE"
access-control-allow-origin: "*"
connection: "keep-alive"
content-length: "65401" // <---------------- too small!
content-type: "application/octet-stream"
// ^-- if i force "application/json"
// with response.json() instead of response.send() in Node,
// the client displays octets (and it takes 8s instead of 0s)
date: "Mon, 15 Jul 2013 18:36:50 GMT"
etag: ""-207110537""
x-powered-by: "Express"
Node's http.request() returns data in chunks for streaming (it would be nice if the docs stated this explicitly). Thus it's necessary to write each chunk to the body of Express's response, listen for the end of the http request (which is not really documented), and then call response.end() to actually finish the response.
var req = http.request(options, function(res)
{
    res.on('data', function(chunk) { response.write(chunk); });
    res.on('end', function() { response.end(); });
});

req.on('error', function(error) { … });

req.end();
Where response is Express's response to the initial client request (curl or angular's ajax call).
resp.set('content-type', 'application/json');

const stream = db.Country.findAllWithStream();
// console.log(stream);

// Pipe the stream straight into the response; calling pipe() once is enough
// (piping inside the 'data' handler would re-pipe on every chunk).
stream.pipe(resp);

stream.on('end', () => {
  console.log('\n\nEND!!!!!');
  // pipe() ends the response automatically when the stream ends.
});